I use Cloudinary with Laravel and I want to let users choose from existing files or upload new ones. How can I generate authentication signatures on the back end (PHP in my case) and pass them to the front end (JavaScript)?
I've tried using an unsigned upload preset to let users upload without requiring them to be signed in, but no luck.
mediaWidget = cloudinary.createMediaLibrary({
  cloud_name: "my-cloud-name",
  api_key: 'my-api-key',
  username: 'email id',
  uploadPreset: "unsigned-upload-preset",
  multiple: false,
}, {
  insertHandler: function (data) {
    data.assets.forEach(asset => {
      console.log("Inserted asset:", JSON.stringify(asset, null, 2))
    })
  }
});
I'm getting no error, but it always requires the user to log in to a Cloudinary account.
You can use any SHA-256 hashing function/library to create the signature using the values (cloud name, timestamp, username) mentioned in the documentation. Using PHP, the example from the Media Library documentation looks like this:
<?php
$cloud_name = 'my_company';
$timestamp = '1518601863';
$username = 'jane@mycompany.com';
$api_secret = 'abcd';
$payload_to_sign = 'cloud_name='.$cloud_name.'&timestamp='.$timestamp.'&username='.$username;
$signature = hash('sha256', $payload_to_sign . $api_secret);
print($signature);
?>
This provides the same output as the documentation example: 5cbc5a2a695cbda4fae85de692d446af68b96c6c81db4eb9dd2f63af984fb247
Then, in the JavaScript code used to initiate the Media Library widget, you pass the same timestamp and the signature from the server-side code, and it should open and log you in as the specified user:
window.ml = cloudinary.createMediaLibrary({
  cloud_name: 'my_company',
  api_key: '1234567890',
  username: 'jane@mycompany.com',
  timestamp: '1518601863',
  signature: '5cbc5a2a695cbda4fae85de692d446af68b96c6c81db4eb9dd2f63af984fb247'
}, function(error, result) {
  console.log(error, result)
});
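For completeness, here is a minimal sketch (not from the documentation) of tying the two pieces together. It assumes a hypothetical Laravel route, /media-library/signature, that runs the PHP above with the current timestamp and returns the values as JSON; the route name and response shape are my own placeholders.

// Minimal sketch: fetch the server-generated signature, then open the widget.
// '/media-library/signature' is a hypothetical backend route returning
// { cloud_name, api_key, username, timestamp, signature } as JSON.
async function openMediaLibrary() {
  const res = await fetch('/media-library/signature');
  const auth = await res.json();

  const ml = cloudinary.createMediaLibrary({
    cloud_name: auth.cloud_name,
    api_key: auth.api_key,
    username: auth.username,
    timestamp: auth.timestamp, // must be the exact timestamp that was signed
    signature: auth.signature, // SHA-256 hex digest computed server-side
  }, {
    insertHandler: function (data) {
      data.assets.forEach(asset => console.log('Inserted asset:', asset));
    }
  });

  ml.show();
}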
You can use the Cloudinary PHP SDK, or Cloudder, a Laravel wrapper for Cloudinary which has some nice helpers, although it doesn't cover every use case.
The Cloudinary documentation covers image and video upload using the PHP SDK.
I have an app which uses AWS Lambda functions to store images in an AWS PostgreSQL RDS as bytea file types.
The app is written in javascript and allows users to upload an image (typically small).
<input
  className={style.buttonInputImage}
  id="logo-file-upload"
  type="file"
  name="myLogo"
  accept="image/*"
  onChange={onLogoChange}
/>
The image is handled with the following function:
function onLogoChange(event) {
  if (event.target.files && event.target.files[0]) {
    let img = event.target.files[0];
    setFormData({
      name: "logo",
      value: URL.createObjectURL(img),
    });
  }
}
Currently I am not concerned about what format the images are in, although if it makes storage and retrieval easier I could add restrictions.
I am using Python to query my database and to post and retrieve these files.
"INSERT INTO images (logo, background_image, uuid) VALUES ('{0}','{1}','{2}') ON CONFLICT (uuid) DO UPDATE SET logo='{0}', background_image='{1}';".format(data['logo'], data['background_image'], data['id'])
and when I want to retrieve the images:
"SELECT logo, background_image FROM clients AS c JOIN images AS i ON c.id = i.uuid WHERE c.id = '{0}';".format(id);
I try to return this data to the frontend:
return {
    'statusCode': 200,
    'body': json.dumps(response_list),
    'headers': {
        "Access-Control-Allow-Origin": "*"
    },
}
I get the following error: Object of type memoryview is not JSON serializable.
So I have a two-part question. First, the images are files being uploaded by a customer (typically logos or background images). Does it make sense to store these in my database as bytea, or is there a better way to store image uploads?
Second, how do I go about retrieving these files and converting them into a format usable by my front end?
I am still having issues with this. I added a print statement to try and see what exactly the images look like.
Running:
records = cursor.fetchall()
for item in records:
    print(item)
I can see the image data looks like <memory at 0x7f762b8f7dc0>
Here is the full backend function:
cursor = connection.cursor()
print(event['pathParameters'].get('id'))
id = event['pathParameters'].get('id')
postgres_insert_query = "SELECT name, phone, contact, line1, city, state, zip, monday_start, monday_end, tuesday_start, tuesday_end, wednesday_start, wednesday_end, thursday_start, thursday_end, friday_start, friday_end, saturday_start, saturday_end, sunday_start, sunday_end, logo, background_image FROM clients AS c JOIN address AS a ON c.id = a.uuid JOIN hours AS h ON c.id = h.uuid JOIN images AS i ON c.id = i.uuid WHERE c.id = '{0}';".format(id);
query = postgres_insert_query;
cursor.execute(query)
records = cursor.fetchall()
response_list= []
for item in records:
    item_dict = {'name': item[0], 'phone': item[1], 'contact': item[2], 'address': {'line1': item[3], 'city': item[4], 'state': item[5], 'zip': item[6]}, 'hours': {'monday_start': item[7], 'monday_end': item[8], 'tuesday_start': item[9], 'tuesday_end': item[10], 'wednesday_start': item[11], 'wednesday_end': item[12], 'thursday_start': item[13], 'thursday_end': item[14], 'friday_start': item[15], 'friday_end': item[16], 'saturday_start': item[17], 'saturday_end': item[18], 'sunday_start': item[19], 'sunday_end': item[20]}, 'image': {'background_image': item[21], 'logo': item[22]}}
    response_list.append(item_dict)

# print(response_list)
# connection.commit()

return {
    'statusCode': 200,
    'body': response_list,
    'headers': {
        "Access-Control-Allow-Origin": "*"
    },
}
Bytes are not always castable to JSON; they will often contain characters that are not allowed in JSON. Return a different data type to your frontend.
For example, if you look at the Quill rich text editor, you'll see that you can send a base64-encoded image inside an .html file from the backend to the frontend.
I would also suggest that you use SQLAlchemy (https://www.sqlalchemy.org/); its parameterized queries help protect your application against SQL injection, and it also offers support for special data types.
Workflow
Load the image and encode with base64
Source: https://stackoverflow.com/a/3715530/9611924
import base64

with open("yourfile.ext", "rb") as image_file:
    # decode() so the value is a plain string that JSON can serialize
    encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
Send in your API request
return {
    'statusCode': 200,
    'body': {"image": encoded_string},
    'headers': {
        "Access-Control-Allow-Origin": "*"
    },
}
Frontend
Decode the image (with base64).
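For the decoding step, here is a minimal sketch of the React side, assuming the Lambda returns the base64 string under body.image and that the stored images are PNGs (adjust the MIME type if not):

// Sketch only: fetch the Lambda response and turn the base64 payload into a
// data URL the browser can render directly -- no manual decoding needed.
async function loadLogo(apiUrl) {
  const res = await fetch(apiUrl);
  const body = await res.json();
  return `data:image/png;base64,${body.image}`; // assumes PNG; use the real content type
}

// Usage in a component once loadLogo() resolves:
// <img src={logoSrc} alt="logo" />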
I know this is not the initial question.
But have you considered storing images in a dedicated S3 bucket instead?
That would be cleaner and not complicated at all to implement, IMHO.
So you would store the actual image file in an S3 bucket and store its path in your DB.
Your database would be lighter, and the front end would load images based on the returned path.
I know it could sound like a lot of changes, but the AWS SDK is very well done and it doesn't take that long to do.
This is what I personally use for my project and it works like a charm.
I tried https://www.backblaze.com/b2/cloud-storage.html.
Following the docs, it's not that hard to upload a file. I mainly used the command line, but the docs also offer other options.
After you upload, you can get all of the uploaded file's metadata.
So overall, you can upload a file to Backblaze (or other cloud storage) and insert all the metadata into the database.
Then, when you retrieve the images, you retrieve them through the download URL.
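As an illustration of that workflow (not the command-line route described above), Backblaze B2 also exposes an S3-compatible API, so a sketch with the AWS SDK could look like the following; the endpoint, region, bucket, and key names are placeholders, not real values.

// Hedged sketch: upload a local file to B2 via its S3-compatible API,
// then store only the key (and any metadata you need) in the database.
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const fs = require('fs');

const s3 = new S3Client({
  endpoint: 'https://s3.us-west-000.backblazeb2.com', // placeholder B2 endpoint
  region: 'us-west-000',                              // placeholder region
  credentials: {
    accessKeyId: process.env.B2_KEY_ID,
    secretAccessKey: process.env.B2_APP_KEY,
  },
});

async function uploadLogo(localPath, key) {
  await s3.send(new PutObjectCommand({
    Bucket: 'my-logo-bucket',          // placeholder bucket name
    Key: key,                          // e.g. 'logos/<uuid>.png'
    Body: fs.readFileSync(localPath),
    ContentType: 'image/png',
  }));
  return key; // save this key in the images table instead of the raw bytes
}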
How can I send an image from a React app to Adonis, "save" it in the database, and, when needed, fetch it for use in the front end?
Right now, I have only been successful in processing an image via Postman; my code looks like this:
const image = request.file('photo', {
  types: ['image'],
  size: '2mb',
});

await image.move(Helpers.tmpPath('uploads'), {
  name: `${Date.now()}-${image.clientName}`,
});

if (image.status !== 'moved') {
  return image.error;
}

/////

const data = {
  username,
  email,
  role,
  photo: image.fileName,
  password,
  access: 1,
};

const user = await User.create(data);
In the first part, I process the image and move it to tmp inside the backend; in the next part I use image.fileName and create a User.
And when I need to fetch my user list, I do it like this:
const colaboradoresList = await Database.raw(
  'select * from colaboradores where access = 1'
);
const userList = colaboradoresList[0];
userList.map((i) => (i.url = Helpers.tmpPath(`uploads/${i.photo}`)));
But as you can tell, Helpers.tmpPath(`uploads/${i.photo}`) will return the local path to the current image, and I cannot display it in React since I would need to use the public folder or download the file and import it.
Is there a way to do this locally, or would the only way be to set up something like AWS and use Drive.getUrl() to create a URL and send it back to my front end?
Yes, the UI cannot display local files; you need to expose a public URL for the image. You have two options, depending on the scope of this application.
If you plan to run this as a professional app, I would highly suggest using something like AWS S3 to store images.
Otherwise you can probably get away with setting up a route for the React UI to query. Something like /api/image/:id could return the binary or base64 encoded data of the image, which React could then display.
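A rough sketch of that second option in AdonisJS (v4-style syntax, with a hypothetical lookup of the stored file name by id; adjust the table and column names to your schema):

// start/routes.js -- sketch only, not the poster's actual code
const Route = use('Route')
const Helpers = use('Helpers')
const Database = use('Database')

Route.get('/api/image/:id', async ({ params, response }) => {
  // Hypothetical lookup: fetch the stored file name for this id
  const result = await Database.raw(
    'select photo from colaboradores where id = ?', [params.id]
  )
  const rows = result[0]
  if (!rows.length) {
    return response.status(404).send('Not found')
  }
  // Stream the file saved under tmp/uploads back to the client
  return response.download(Helpers.tmpPath(`uploads/${rows[0].photo}`))
})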
Instead of
await image.move(Helpers.tmpPath('uploads'), {
  name: `${Date.now()}-${image.clientName}`,
});
I use:
await image.move(Helpers.publicPath('uploads'), {
  name: `${Date.now()}-${image.clientName}`,
});
For that you will need to change the folders so the file is stored correctly.
Then send the front end url = `/uploads/${i.photo}`, where i.photo is the file name, so it can be concatenated in React as apiBase + url.
The result is your apiBase plus the file path, which should live in the public folder.
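On the React side this can then be used directly in an img tag; apiBase below is a placeholder for wherever the Adonis server is hosted:

// Sketch: render the photo returned by the API as apiBase + url
function Avatar({ user }) {
  const apiBase = 'http://localhost:3333'; // placeholder backend origin
  return <img src={`${apiBase}${user.url}`} alt={`${user.username} photo`} />;
}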
I am looking for some AWS JavaScript SDK help.
I am in the situation that my User Pool is defined in a separate account to my Lambda which needs to send a DescribeUserPoolClient command.
Code snippet below:
import {
  CognitoIdentityProviderClient,
  DescribeUserPoolClientCommand,
} from '@aws-sdk/client-cognito-identity-provider';

export async function describeUserPoolClient(userPoolClientId: string, userPoolId: string) {
  const cognitoClient = new CognitoIdentityProviderClient({});

  const describeUserPoolClientCommand = new DescribeUserPoolClientCommand({
    ClientId: userPoolClientId,
    UserPoolId: userPoolId,
  });

  const userPoolClient = await cognitoClient.send(describeUserPoolClientCommand);
  return userPoolClient;
}
Since I can only provide a userPoolId and not a full ARN, I can't see a way to send this request cross-account without assuming a role in the other account, which makes my local testing a nightmare in terms of getting roles and policies set up.
Can anyone see another way of doing this? Thanks for your help.
I would like to be able to create a file in a project Bucket as part of a Firestore cloud trigger.
When there is a change to a document in a specific collection, I need to be able to take data from that document and write it to a file in a bucket in Cloud Storage.
Example
exports.myFunction = functions.firestore
  .document('documents/{docId}')
  .onUpdate((change, context) => {
    const after = change.after.data() as Document;
    // CREATE AND WRITE TO file IN BUCKET HERE
  });
I have found many examples of how to upload files. I have explored:
admin.storage().bucket().file(path)
createWriteStream()
write()
But I can't seem to find documentation on how exactly to achieve the above.
Is this possible from within a trigger, and if so, where can I find documentation on how to do this?
Here is why I want to do this (just in case I am approaching this all wrong). We have an application where our users are able to generate purchase orders for work they have done. At the time they initiate a generation from the software, we need to create a timestamped document [PDF] (in a secure location, but one that is accessible to authenticated users) representing this purchase order. The data to create this will come from the document that triggers the change.
As @Doug Stevenson said, you can use Node streams.
You can see how to do this in this sample from the GCP getting started samples repo.
You need to provide a file name and the file buffer in order to stream it to GCS:
function sendUploadToGCS(req, res, next) {
  if (!req.file) {
    return next();
  }

  const gcsname = Date.now() + req.file.originalname;
  const file = bucket.file(gcsname);

  const stream = file.createWriteStream({
    metadata: {
      contentType: req.file.mimetype,
    },
    resumable: false,
  });

  stream.on('error', err => {
    req.file.cloudStorageError = err;
    next(err);
  });

  stream.on('finish', async () => {
    req.file.cloudStorageObject = gcsname;
    await file.makePublic();
    req.file.cloudStoragePublicUrl = getPublicUrl(gcsname);
    next();
  });

  stream.end(req.file.buffer);
}
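Adapting that idea to the Firestore trigger from the question, a minimal sketch (my own, not from the samples repo) could use file.save(), which wraps createWriteStream() for simple in-memory content. The object path and the JSON payload here are placeholders, since generating the actual PDF is a separate step:

// Sketch only: write the updated document's data to a file in the default bucket.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.writePurchaseOrder = functions.firestore
  .document('documents/{docId}')
  .onUpdate(async (change, context) => {
    const after = change.after.data();

    // Hypothetical object path; use whatever naming scheme fits your app
    const file = admin.storage().bucket().file(`purchase-orders/${context.params.docId}.json`);

    // file.save() is a convenience wrapper around createWriteStream()/end()
    await file.save(JSON.stringify(after), {
      contentType: 'application/json',
      resumable: false,
    });
  });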
I have to upload an image to Firebase Storage. I'm not using the web version of Storage (I shouldn't use it); I am using the Firebase Admin SDK.
No problem: I upload the file without difficulty and I get the result in the variable "file",
and if I access the Firebase Storage console, the image is there. All right.
return admin.storage().bucket().upload(filePath, {
  destination: 'demo/images/restaurantCover.jpg',
  metadata: { contentType: 'image/jpeg' },
  public: true
}).then(file => {
  console.log(`file --> ${JSON.stringify(file, null, 2)}`);
  let url = file["0"].metadata.mediaLink; // image url
  return resolve(res.status(200).send({ data: file })); // huge data
});
Now, I have some questions.
Why is there so much information, and why so many objects, in the response to the upload() method? Reviewing the immense object, I found a property called mediaLink inside metadata, and it is the download URL of the image. But...
Why is the URL different from the one shown by Firebase? Why can't I find the downloadURL property?
How can I get the Firebase URL?
firebase: https://firebasestorage.googleapis.com/v0/b/myfirebaseapp.appspot.com/o/demo%2Fimages%2Fthumb_restaurant.jpg?alt=media&token=bee96b71-2094-4492-96aa-87469363dd2e
mediaLink: https://www.googleapis.com/download/storage/v1/b/myfirebaseapp.appspot.com/o/demo%2Fimages%2Frestaurant.jpg?generation=1530193601730593&alt=media
If I use the mediaLink URL, is there any problem with the URLs being different (for read and update from the iOS and web clients)?
Looking at the Google Cloud Storage: Node.js Client documentation, they have a link to sample code which shows exactly how to do this. Also, see the File class documentation example (below):
// Imports the Google Cloud client library
const Storage = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();
/**
* TODO(developer): Uncomment the following lines before running the sample.
*/
// const bucketName = 'Name of a bucket, e.g. my-bucket';
// const filename = 'File to access, e.g. file.txt';
// Gets the metadata for the file
storage
  .bucket(bucketName)
  .file(filename)
  .getMetadata()
  .then(results => {
    const metadata = results[0];

    console.log(`File: ${metadata.name}`);
    console.log(`Bucket: ${metadata.bucket}`);
    console.log(`Storage class: ${metadata.storageClass}`);
    console.log(`Self link: ${metadata.selfLink}`);
    console.log(`ID: ${metadata.id}`);
    console.log(`Size: ${metadata.size}`);
    console.log(`Updated: ${metadata.updated}`);
    console.log(`Generation: ${metadata.generation}`);
    console.log(`Metageneration: ${metadata.metageneration}`);
    console.log(`Etag: ${metadata.etag}`);
    console.log(`Owner: ${metadata.owner}`);
    console.log(`Component count: ${metadata.component_count}`);
    console.log(`Crc32c: ${metadata.crc32c}`);
    console.log(`md5Hash: ${metadata.md5Hash}`);
    console.log(`Cache-control: ${metadata.cacheControl}`);
    console.log(`Content-type: ${metadata.contentType}`);
    console.log(`Content-disposition: ${metadata.contentDisposition}`);
    console.log(`Content-encoding: ${metadata.contentEncoding}`);
    console.log(`Content-language: ${metadata.contentLanguage}`);
    console.log(`Metadata: ${metadata.metadata}`);
    console.log(`Media link: ${metadata.mediaLink}`);
  })
  .catch(err => {
    console.error('ERROR:', err);
  });
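As an aside (this is not part of the sample above): if what you really need is a URL you can hand out without the Firebase console's token, a signed URL from the same File object is one option, assuming the Admin SDK is initialized with credentials that are allowed to sign.

// Sketch: generate a time-limited read URL for an object already in the bucket
const admin = require('firebase-admin');

async function getTemporaryUrl(path) {
  const [url] = await admin
    .storage()
    .bucket()
    .file(path) // e.g. 'demo/images/restaurantCover.jpg'
    .getSignedUrl({ action: 'read', expires: Date.now() + 60 * 60 * 1000 }); // valid for 1 hour
  return url;
}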