I want:
To send a file (I have my eye on .docx and text files, but let's use a .pdf as an example) as binary in the body of a POST request from a browser (JavaScript).
Main problem
I can do this just fine in POSTMAN. You can select "binary" as the body type, and voilà! Your file is configured to be the body. But I don't know how to mimic that behavior in JavaScript.
My question is: on the client side, in JavaScript, how can I get a file into my POST request as binary?
Specifically, how can I get the file into the same format that POSTMAN uses when you select Body -> Binary in a POST request?
For context:
I have been using this guide to get everything configured how I want in AWS. It ends with making requests in POSTMAN. But adding a file as binary in POSTMAN is one thing; doing it from a browser in JavaScript is another, and that is the main question I have.
I am sending this through API Gateway to a Lambda function. I have API Gateway configured to handle application/pdf as binary, and the Lambda function to decode it once it arrives.
So I think I want to hand it in as a binary blob, not base64, but I'm not sure exactly how.
JavaScript
postBinary() {
    var settings = {
        "url": "https://<my-aws-api>.amazonaws.com/v1/upload",
        "method": "POST",
        "timeout": 0,
        "headers": {
            "Content-Type": "application/pdf"
        },
        "data": <my pdf file here as binary>
    };
    $.ajax(settings).done(function (response) {
        console.log(response);
    });
},
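For what it's worth, here is a rough sketch of how I imagine filling in that data slot, assuming a plain file input on the page (the fileInput id is just a placeholder); I am not sure this is the right way to hand the bytes over:

// Assumed markup: <input type="file" id="fileInput">
// A File object is already a Blob, so it can be passed to $.ajax as the raw request body.
postBinary() {
    var file = document.getElementById("fileInput").files[0];
    var settings = {
        "url": "https://<my-aws-api>.amazonaws.com/v1/upload",
        "method": "POST",
        "timeout": 0,
        "contentType": "application/pdf", // sets the Content-Type request header
        "processData": false,             // stop jQuery from trying to serialize the Blob
        "data": file                      // the raw bytes, no base64 step
    };
    $.ajax(settings).done(function (response) {
        console.log(response);
    });
},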
API Gateway:
Integration has 'When there are no templates defined (recommended)' set to 'Content-Type':'application/pdf'. The API's Binary Media Types have 'application/pdf' set. I know I have CORS set correctly - I can pass strings through the POST request and get success messages back, but I would like to handle files here, not just a simple string. I also want to avoid requiring the client side to parse out data on their end.
My Lambda function will take the file and then parse information out of it, then send it back.
Lambda function:
import json
import base64
import boto3

BUCKET_NAME = 'my-bucket'

def lambda_handler(event, context):
    # API Gateway hands the binary body over base64-encoded, so decode it back to bytes
    file_content = base64.b64decode(event['content'])
    parsed_data = some_function(file_content)  # parse information from file
    return {
        'statusCode': 200,
        'body': {
            'parsed_data': parsed_data
        }
    }
In the end, we want a user experience of: choose a file, send to API, get back parsed data. Simple.
Note: I know there are lots of good reasons to put files in S3 instead of going through Lambda first. But our files are small, and we are not concerned about using considerable compute time/power in Lambda. Further, we want to avoid sending to S3 right away because we would like the user to only have to make one call to the API: send the file in a POST request, get results back. If we send to S3 first, the user has to make multiple requests: request a pre-signed URL, send the file to S3, then request the parsing results.
Mostly, I keep coming back to the fact that this is possible in POSTMAN, so it must be possible from a browser in JavaScript as well.
Thanks, everyone!
Related
I am trying to send a base64 string of a file in an axios request body. The file size is around 370KB.
I got a "request payload too large" 413 error. After doing some research on the internet, I learned that the server is limiting the request size.
Up to this point my understanding is clear.
Then I changed it to FormData and passed that FormData as the request body, and I am not getting any 413 error. The server neatly processed my request.
So what happened between FormData and the server?
The server is running on Nginx, Node, and Express.
By default axios sends data as JSON, and on the Node.js side the JSON body parser has a default limit of 100KB. So you can either continue to use FormData or increase the limit in the JSON parser options.
app.use(json({
limit: '20mb'
}));
But if you intend to send large content often, then consider using FormData or even sending the content as binary.
Update: for FormData, if you process it with multer, you will have the following limits by default:
Field size: 1048576 bytes
File size: unlimited
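If you do go the FormData route, here is a rough sketch of what the two sides could look like (the field name 'file', the '/upload' route, and the 10 MB limit are example placeholders, not defaults):

// Browser side: append the file and let axios/the browser set the multipart
// Content-Type header, including the boundary.
const formData = new FormData();
formData.append('file', fileInput.files[0]);
axios.post('/upload', formData)
  .then((response) => console.log(response.data));

// Server side: Express + multer, raising the default field size limit.
const multer = require('multer');
const upload = multer({
  limits: {
    fieldSize: 10 * 1024 * 1024 // bytes; the default is 1048576
  }
});
app.post('/upload', upload.single('file'), (req, res) => {
  res.json({ size: req.file.size });
});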
I have just finished a task that updates a JSON database from the front end.
I am still not clear on the use of headers, which is part of the AxiosConfig type. What is the difference between calling axios with it and without it?
For example:
What is the difference between
(await axios.post(`${URL}/update`, JSON.stringify({ urlSlug: apiHash }), {
    headers: {
        'content-type': 'application/json'
    }
})).data
And
(await axios.post(`${URL}/update`, { apiHash })).data
I tried both and only the first one works in my case. But, after finishing the task, I still don't really know the difference in how they work.
Headers are part of the HTTP request structure. A request consists of many parts, including the URL, headers, and body. Not everything is considered suitable to put in the body; things like API keys, tokens, etc. can be included in the headers.
There are some details that the back-end service needs in order to process a request. If the back-end expects a request carrying JSON in the body to declare that type in a header, the request will not work when the header is missing, because there may be no handler for non-JSON requests in the back-end service.
You can read more about HTTP request structure here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Messages
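To make the difference concrete, here is a rough sketch (the URL and field names are just the ones from your snippets): when you pass a plain object, axios serializes it to JSON and sets the Content-Type header for you; when you pass an already-stringified body, axios only sees a string, so the server may receive it as plain text unless you declare the type yourself.

// Plain object: axios JSON-stringifies it and adds Content-Type: application/json.
await axios.post(`${URL}/update`, { apiHash });

// Pre-stringified body: declare the type yourself, otherwise the back-end's
// JSON handler may never see it as JSON.
await axios.post(`${URL}/update`, JSON.stringify({ urlSlug: apiHash }), {
  headers: { 'content-type': 'application/json' }
});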
I am building a frontend application in which I'm going to retrieve files via the API provided from backend.
The API consumes JSON requests like most RESTful APIs, but responds with files in multipart/form-data. Therefore, when I tried to get the body of the response with axios, the data appears like this.
--77d4f4ac-bcb2-4457-ad81-810cf8c3ce47
Content-Disposition: attachment; filename=20170822.txt
Content-Type: text/plain
...data...
--77d4f4ac-bcb2-4457-ad81-810cf8c3ce47--
It confused me quite a lot, since I'm used to dealing with raw data as a Blob object. However, it seems that I have to parse the response myself here in order to get the raw data. I searched around but found that almost all of the articles and questions discuss server-side handling. So my question can be separated into two pieces.
Is it okay/possible to handle multipart/form-data on the client side?
If it is, how can I handle it? (Of course, it will be really appreciated if there's a library for it.)
I'm totally new to file uploads... I'm using angular-file-upload, which creates an XHR outside of AngularJS to upload files to Google Cloud Storage. When I try to upload, I keep getting the error below. How can I resolve this?
400 Bad content type. Please use multipart.
Here's my controller setup:
var uploader = $scope.uploader = new FileUploader({
    url: 'https://www.googleapis.com/upload/storage/v1/b/bucketsbuckets/o?uploadType=multipart',
    headers: {
        'Authorization': 'Bearer ya29.lgHmbYk-5FgxRElKafV4qdyWsdMjBFoO97S75p4vB0G0d6fryD5LASpf3JUY8Av9Yzhp9cQP8IQqhA',
        'Content-Type': 'multipart/form-data'
    },
    autoUpload: true
});
For those who are using fetch, just remove the Content-Type from your headers.
I found this issue on GitHub, and I quote:
Setting the Content-Type header manually means it's missing the boundary parameter. Remove that header and allow fetch to generate the full content type. It will look something like this:
Content-Type: multipart/form-data;boundary=----WebKitFormBoundaryyrV7KO0BoCBuDbTL
Fetch knows which content type header to create based on the FormData object passed in as the request body content.
^^ this one worked for me.
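In other words, something like this (a sketch; the 'file' field name, the URL, and the fileInput element are placeholders for your own):

// Let fetch derive the multipart Content-Type (including the boundary) from the FormData.
const formData = new FormData();
formData.append('file', fileInput.files[0]);

fetch('https://example.com/upload', {
  method: 'POST',
  body: formData // note: no Content-Type header set manually
}).then((response) => console.log(response.status));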
The problem is that the endpoint you're using is for multipart uploads, but not FORM-based multipart uploads. If you set your Content-Type to "multipart/related" instead of "multipart/form-data", you should be able to proceed.
A multipart upload to that endpoint, "www.googleapis.com/upload/storage/etc?uploadType=multipart", expects a multipart message with exactly two parts, the first part being the metadata of the object, in JSON, and the second part being the data.
More on these requirements are here: https://cloud.google.com/storage/docs/json_api/v1/how-tos/upload
If, however, you'd like to do an upload in the style of a form submission, that's also possible, but the rules are different. In this case, you'll submit a POST to "storage.googleapis.com/BUCKET/OBJECT" with a variety of appropriate form parameters: https://cloud.google.com/storage/docs/reference-methods#postobject
Now, this all assumes you're trying to do an upload as a multipart request for some reason. That might be necessary if you need to set some specific metadata properties on the object, but if not, you may be making things harder for yourself. A simple PUT to "storage.googleapis.com/BUCKET/OBJECT" will work just fine (although I'm not familiar with angular-file-upload and don't know whether it supports non-form-style uploads).
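For illustration only, the simple PUT described above could look roughly like this from the browser (the bucket, object, token, and file variable are placeholders, and I'm using fetch rather than angular-file-upload; see the linked docs for the exact requirements):

// Simple (non-multipart) upload: PUT the file bytes directly to the object path.
fetch('https://storage.googleapis.com/BUCKET/OBJECT', {
  method: 'PUT',
  headers: {
    'Authorization': 'Bearer ' + accessToken, // OAuth access token (placeholder)
    'Content-Type': file.type || 'application/octet-stream'
  },
  body: file // a File/Blob from an <input type="file">
}).then((response) => console.log(response.status));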
I'm pretty confused on this one. I'm attempting to post some data straight to S3 from the client rather than bogging down my web server. I'm using Python with the boto library to generate the signature, and JavaScript to (hopefully) schlep everything over to S3.
The issue I'm running into is that JavaScript gives me the following error whenever I send the request:
The request signature we calculated does not match the signature you
provided. Check your key and signing method.
I've verified that (a) my keys are valid, and (b) that I can actually upload/download things from S3. I did this by uploading small test files via Python, as well as using plain ol' curl.
The basic setup I have is this:
js reads the form data on the website
makes a request to my web server to get an upload url
uses the url to make an ajax post request to S3
The server side is super simple. It just spits out a url to use for uploading:
c = boto.connect_s3(settings.S3_ACCESS_KEY, settings.S3_AWS_SECRET_ACCESS_KEY)
c.generate_url(5000, 'PUT', settings.S3_BUCKET, 'mycoolkey')
Which spits out something like this:
https://mytestbucket.s3.amazonaws.com/mycoolkey?Signature=OX6vJCNjb4Pz3Fuzh0840qCnY5U%3D&Expires=1415394562&AWSAccessKeyId=MYACCESSKEY
This is the same URL that I used with curl to verify that I can actually upload to S3:
curl --request PUT --upload-file myfilename "https://mytestbucket.s3.amazonaws.com/mycoolkey?Signature=OX6vJCNjb4Pz3Fuzh0840qCnY5U%3D&Expires=1415394562&AWSAccessKeyId=MYACCESSKEY"
So, it's all well and good until I try to do the same thing via JavaScript.
$.ajax({
    url: 'https://mytestbucket.s3.amazonaws.com/mycoolkey?Signature=OX6vJCNjb4Pz3Fuzh0840qCnY5U%3D&Expires=1415394562&AWSAccessKeyId=MYACCESSKEY',
    type: 'PUT',
    data: 'sometestdata',
    success: function() {
        console.log('Uploaded data successfully.');
    }
});
I've set the CORS settings in S3 to allow everything from all domains (while testing), and as far as I can tell, the JS should be making the same request as curl does... so... what gives?
Why would one request fail while the other succeeds when they're both using the exact same URL?