I am using the base API URL below, as in many examples: https://api.cloudinary.com/v1_1/name.
But if I use dotenv to hide the cloud name and the upload preset, as shown below, will that keep my API secure, or will people be able to find them from the image URLs that are returned when an image is uploaded?
formData.append('upload_preset', process.env.REACT_APP_UPLOAD_PRESET);
`https://api.cloudinary.com/v1_1/${process.env.REACT_APP_CLOUD_NAME}/image/upload`
If you're allowing uploads from client-side code and are sending them from the client to Cloudinary directly, users will always be able to see the cloud name and upload preset name you use, if not easily in your app's source, then certainly via a proxy or other debug tools.
However, that's expected, and it's the reason the unsigned upload option exists: unsigned uploads allow you to perform uploads in cases where the client can't authenticate itself with a server component, by using the upload preset you specify. What happens to the uploaded files is determined by the pre-configured options in the upload preset, so you can name them a certain way, put them in a specific folder, add tags, edit the images via resizing or other transformations before they're saved, etc.
If you don't want to expose the cloud name or upload preset name, you'll need to pass the files to a server endpoint you control and then upload them to Cloudinary from there. That puts you in the same basic situation, where the client code can upload files without authentication (or with authentication your users can see and copy), except that it would then be the endpoint on your server allowing that rather than Cloudinary's /v1_1/ endpoint.
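For reference, a minimal sketch of the client-side unsigned upload described above, using the same environment variables as in the question (the uploadImage name is just illustrative); note that the returned secure_url will contain the cloud name regardless:

// Unsigned upload straight from the browser to Cloudinary.
async function uploadImage(file) {
  const url = `https://api.cloudinary.com/v1_1/${process.env.REACT_APP_CLOUD_NAME}/image/upload`;
  const formData = new FormData();
  formData.append('file', file);
  formData.append('upload_preset', process.env.REACT_APP_UPLOAD_PRESET);
  const res = await fetch(url, { method: 'POST', body: formData });
  return res.json(); // the response includes secure_url, public_id, etc.
}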
Currently, I have a PHP app running on Heroku using a PostgreSQL database. I want my users to be able to upload an image to a folder in my Dropbox, and to store other information (in this case, product information such as price, title, weight, and the location of the image on Dropbox) in my database.
Right now, I'm using a file input inside an HTML form to submit the image by posting the whole form to my server (including the image), and then I use cURL to send the image to Dropbox and wait for the response to succeed. On success, I create my database record with the other information I mentioned earlier.
This works well for small files, but Heroku has a 30-second timeout that I can't change. For large files, the whole file uploads to the server, and then it uploads to Dropbox. These two upload operations are time-intensive and take more time than the timeout allows.
I had the idea of sending the file to Dropbox using JavaScript (jQuery AJAX calls, specifically) so that it's handled by the client, and then POSTing to my server on success, but I'm worried about how secure that is, since I would need to have my own authorization tokens in source code that the client can view.
Is there any way for PHP to send a file from the client to an external URL without it touching the server? How do I do this securely?
This sounds like a good fit for the Dropbox API /2/files/get_temporary_upload_link endpoint.
You can call that on your server to retrieve the temporary upload link, and then pass that link down to the browser. You can then have some JavaScript code perform the upload directly from the browser using that link.
Since only the /2/files/get_temporary_upload_link endpoint call requires your Dropbox access token (whereas the temporary upload link itself doesn't), you can keep your access token secret on the server only, without exposing it to the client. And since the upload happens directly from the browser to the Dropbox servers, you don't have to pass the file data through your own server, avoiding the timeout issue.
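As a rough browser-side sketch of the second half of that flow, assuming your server exposes a hypothetical /upload-link endpoint that calls /2/files/get_temporary_upload_link and returns the link as JSON:

// Ask your own server (which holds the Dropbox access token) for a temporary
// upload link, then send the file bytes straight from the browser to Dropbox.
async function uploadToDropbox(file) {
  const { link } = await (await fetch('/upload-link')).json(); // hypothetical endpoint
  const res = await fetch(link, {
    method: 'POST',
    headers: { 'Content-Type': 'application/octet-stream' },
    body: file,
  });
  if (!res.ok) throw new Error('Dropbox upload failed');
  // On success, POST the product information (price, title, etc.) to your server as before.
}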
First off, I know this seems illogical when I could just send the download URL to the server. The issue with that is that users can access these download links, and so for those who can, I need to be able to download it. I can't really explain why, as I am under NDA.
I am trying to download a file from a URL via the client (browser) and stream the data directly to the server where the file is saved so the client essentially acts as a "middleman" and does not require the file to be downloaded to the client's machine.
I have been experimenting with "socket.io-stream" and "socket.io-file", but I am having a few issues with both. "socket.io-stream" allows me to upload a specific file from the client to the server, but the uploaded file has a size of 0 KB, and the project doesn't have any examples on GitHub.
"socket.io-file" has examples, which I followed and currently have it setup so I can use an input tag to select a file to upload to the server successfully.
From what I can see the "socket.io-file" upload function takes a file object as the parameter.
So I have two questions really:
Is there a plugin for JavaScript (browser) and Node.js (server) that would allow me to do this?
or
How can I create a File Object from an external url?
I solved this in the end, using a Chrome extension to download the file as a Blob object, pass the object to the content script, and then use socket.io-stream to upload it to the server.
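For anyone taking the same route, a rough sketch of the browser side, assuming socket.io and socket.io-stream are loaded as io and ss, and that the server listens for a hypothetical 'upload' event:

// Download the file as a Blob, then stream it to the server over socket.io-stream.
// (As noted above, a Chrome extension was needed to fetch cross-origin files as a Blob.)
const socket = io('https://your-server.example'); // hypothetical server URL
async function relayFile(fileUrl, fileName) {
  const blob = await (await fetch(fileUrl)).blob();
  const stream = ss.createStream();
  ss(socket).emit('upload', stream, { name: fileName, size: blob.size });
  ss.createBlobReadStream(blob).pipe(stream); // pipe the Blob's contents to the server
}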
I have implemented JavaScript code to upload files in multiple chunks to Google Cloud Storage.
Below is the flow I use to upload a file:
1. The user selects a file to upload using the JavaScript client web app {request is from the ASIA region}
2. The JavaScript client app asks our app server, implemented in Node.js {hosted on Google Cloud Compute Engine - US region}, to allow the file upload {authorization}
3. The Node.js app server returns a signed URL to the client app
4. The client app starts uploading the file to Google Storage in multiple chunks using that signed URL
5. On successful upload, the client reports to the app server
I am able to upload files in multiple chunks, but I have observed that the upload is 2-3 times slower if I host the Node.js app server in Google Cloud's US region rather than on the same machine from which I make the client app request.
Please let me know if you have a solution for improving the upload performance.
There is a workaround mentioned in the Google Cloud signed URL documentation:
Resumable uploads are pinned in the region they start in. For example, if you create a resumable upload URL in the US and give it to a client in Asia, the upload still goes through the US. Performing a resumable upload in a region where it wasn't initiated can cause slow uploads. To avoid this, you can have the initial POST request constructed and signed by the server, but then give the signed URL to the client so that the upload is initiated from their location. Once initiated, the client can use the resulting session URI normally to make PUT requests that do not need to be signed.
But even with that reference, I couldn't find any code sample for the following: once the client receives the signed URL from the server, how should the initial JSON API call be constructed? What response should be expected from that first call, and how do I extract the session URI? And how do I use the session URI to upload further chunks?
You may be confusing two separate GCS features. GCS allows for resumable uploads to be authorized to third parties without credentials in a couple of ways.
First, and preferred, is signed URLs. Your server sends a signed URL to a client, which allows that client to begin a resumable upload.
Second, and less preferred due to the region pinning you mention above, is having the server initiate a resumable upload itself and then pass the upload ID to the client.
It sounds like you want the first thing but are using the second.
Using signed URLs requires making use of the XML API, which handles resumable uploads in a similar way to the JSON API: https://cloud.google.com/storage/docs/xml-api/resumable-upload
You'll want to sign that very first POST call to create an upload and pass that URL to the user to invoke on their own.
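To make that concrete, here is a rough browser-side sketch of the flow the question asks about, assuming the server has already signed the initial POST (including the x-goog-resumable header) and handed that URL to the client; the chunk size and function name are illustrative, and the bucket's CORS configuration may need to expose the Location response header:

async function uploadWithSignedUrl(signedUrl, file) {
  // 1. Initiate the resumable session. GCS replies with the session URI in the
  //    Location header of the response.
  const initRes = await fetch(signedUrl, {
    method: 'POST',
    headers: { 'x-goog-resumable': 'start' },
  });
  const sessionUri = initRes.headers.get('Location');

  // 2. Upload the file in chunks against the session URI; these PUTs need no
  //    signature. Every chunk except the last must be a multiple of 256 KiB.
  const chunkSize = 8 * 1024 * 1024;
  for (let start = 0; start < file.size; start += chunkSize) {
    const end = Math.min(start + chunkSize, file.size);
    await fetch(sessionUri, {
      method: 'PUT',
      headers: { 'Content-Range': `bytes ${start}-${end - 1}/${file.size}` },
      body: file.slice(start, end),
    });
    // Intermediate chunks return 308 (Resume Incomplete); the final one returns 200/201.
  }
}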
I have a web application with a Javascript Frontend and a Java Backend.
I have a Use Case where the users can upload their profile pictures. The Use Case is simple: send the image from the user's browser and store it in S3, with a maximum file size. Currently the Frontend sends the image data stream to the Backend, and the Backend then stores the image in S3 using the AWS Java SDK.
The Backend has to store this image in a file system first in order to know the image's file size (and avoid reading more bytes than a certain maximum), since S3 requires the PUT Object request to be sent along with the file's Content-Length.
Is there any other way I can do this with AWS? Using another service? Maybe Lambda? I don't like this method of having to store the file in a file system first and then open a stream again to send it to S3.
Thank you very much in advance.
You might get the file size on the client side, as mentioned here, but consider browser support.
You shouldn't share your keys with the client-side code. I believe Query String Authentication should be used in this scenario.
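For example, here is a rough browser-side sketch combining those two suggestions, assuming a hypothetical /presign endpoint on your backend that returns a pre-signed (query-string authenticated) PUT URL:

// Check the size with the File API before uploading, then PUT the file straight
// to S3 using a pre-signed URL generated by the backend (keys never reach the browser).
const MAX_BYTES = 5 * 1024 * 1024; // illustrative 5 MB limit
async function uploadAvatar(file) {
  if (file.size > MAX_BYTES) throw new Error('File too large');
  const { url } = await (await fetch(`/presign?size=${file.size}`)).json(); // hypothetical endpoint
  await fetch(url, { method: 'PUT', headers: { 'Content-Type': file.type }, body: file });
}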
Assuming your maximum file size is less than your available memory, why can't you just read it into a byte[] or something similar that you can send to S3 without writing it to disk? You should also be able to get the size that way.
It depends on the S3 Java client you are using. If it allows you to read from a ByteArrayInputStream, then you should be able to do whatever you need.
Looks like you can use an InputStream. See the javadoc:
new PutObjectRequest(String bucketName, String key, InputStream input, ObjectMetadata metadata)
I allow users to upload to S3 directly without the need to go through the server; everything works perfectly. My only worry is the security.
In my JavaScript code I check for file extensions. However, I know that users can manipulate client-side script (in my case, to allow the upload of XML files, since the upload happens on the client), and thus wouldn't they be able to replace the crossdomain.xml in my bucket and thereby gain control of my bucket?
Note: I am using the bucket owner access key and secret key.
Update:
Are there any possible approaches to overcoming this issue?
If you are not averse to running additional resources, you can accomplish this by running a Token Vending Machine.
Here's the gist:
Your token vending machine (TVM) runs as a reduced-privilege user.
Your client code still uploads directly to S3, but it needs to contact your TVM to get a temporary, user-specific token to access your bucket when the user logs in.
The TVM calls the Amazon Security Token Service to create temporary credentials for your user to access S3.
The S3 API uses the temporary credentials when it makes requests to upload/download.
You can define policies on your buckets to limit which areas of the bucket each user can access.
A simple example of creating a "dropbox"-like service using per-user access is detailed here.
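As a rough illustration of those steps in browser JavaScript, using the AWS SDK for JavaScript with the temporary credentials handed out by the TVM (the /tvm/credentials endpoint, the region, and the bucket name are assumptions):

// The TVM returns short-lived STS credentials scoped to this user; the browser
// then uploads with those credentials instead of the bucket owner's keys.
async function uploadWithTemporaryCredentials(file, userId) {
  const creds = await (await fetch('/tvm/credentials')).json(); // hypothetical TVM endpoint
  const s3 = new AWS.S3({
    accessKeyId: creds.AccessKeyId,
    secretAccessKey: creds.SecretAccessKey,
    sessionToken: creds.SessionToken,
    region: 'us-east-1', // assumed region
  });
  return s3.upload({
    Bucket: 'your-bucket-name',           // assumed bucket
    Key: `users/${userId}/${file.name}`,  // per-user prefix enforced by the bucket policy
    Body: file,
  }).promise();
}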