I am building a frontend application that retrieves files via an API provided by the backend.
The API consumes JSON requests like most RESTful APIs, but it responds with files in multipart/form-data. So when I get the body of the response with axios, the data looks like this:
--77d4f4ac-bcb2-4457-ad81-810cf8c3ce47
Content-Disposition: attachment; filename=20170822.txt
Content-Type: text/plain
...data...
--77d4f4ac-bcb2-4457-ad81-810cf8c3ce47--
This confused me quite a lot, since I'm used to dealing with raw data as a Blob object. It seems I have to parse the response myself here in order to get the raw data. I searched around, but almost all of the articles and questions I found discuss server-side handling. So my question splits into two parts:
Is it okay/possible to handle multipart/form-data on the client side?
If it is, how can I handle it? (Of course, it would be really appreciated if there's a library for it.)
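For context, here is a minimal sketch of the kind of manual parsing I mean, assuming axios, text-only parts, and an illustrative endpoint name (binary parts would need an ArrayBuffer-based variant):

async function downloadParts() {
  const res = await axios.get('/api/files/20170822', { responseType: 'text' });

  // The boundary is declared in the Content-Type response header, e.g.
  // "multipart/form-data; boundary=77d4f4ac-bcb2-4457-ad81-810cf8c3ce47".
  const boundary = res.headers['content-type'].match(/boundary="?([^";]+)"?/)[1];

  return res.data
    .split('--' + boundary)                    // cut on the boundary markers
    .map(part => part.trim())
    .filter(part => part && part !== '--')     // drop the preamble and closing marker
    .map(part => {
      // Part headers and part body are separated by a blank line.
      const [head, ...rest] = part.split(/\r?\n\r?\n/);
      const filename = (head.match(/filename="?([^";\r\n]+)"?/) || [])[1];
      return { filename, body: rest.join('\n\n') };
    });
}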
Related
Actually, I've created a Batch HTTP API that receives a JSON array of many different requests for our backend server. The Batch API simply forwards all of these requests to a load balancer, waits for all of them to return, and returns a new JSON response to the client.
The client receives a huge JSON array response whose indices are in the same positions as the requests, so it is easy to know which response belongs to which request.
The motivation for this API was to work around the browser's limit on simultaneous connections and to improve performance, since the Batch API has much more direct access to the server (there is no reverse proxy or SSL server between them).
The service is running pretty well, but now I have some new requirements as it gains more use. First, the service can use a lot of memory, since it keeps a buffer for each request that is only flushed when all responses are ready (I am using an ordered JSON array). Moreover, since it can take some time for all requests to be served, the client has to wait for everything to be processed before receiving a single byte.
I am planning to change the service to return each response as soon as it is available (and solve both issues above), and I would like to share and validate my ideas with you:
I will change the response from a JSON response to a multipart response.
The server will include, for every part, the index of the response.
The server will flush each response as soon as it is available.
The client XHR will need to understand a multipart content-type response and be able to process each part as soon as it is available.
I will create a PoC to validate every step, but at this point I would like to validate the idea and hear some thoughts about it. Here are some doubts I have about the solution:
From what I have read, I am unsure which content type is right for the response. multipart/mixed? multipart/digest?
Can I use an Accept request header to identify whether the client is able to handle the new service implementation? If so, what is the right Accept header for this? My plan is to use the same endpoint but vary the Accept header.
How can I develop an XHR client that is able to process the parts of a single response as soon as they are available? I found some ideas on the Web, but I am not entirely confident in them.
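For reference, a rough sketch of the kind of response body I have in mind; the boundary and the X-Batch-Index header name are only placeholders:

HTTP/1.1 200 OK
Content-Type: multipart/mixed; boundary=batch-3f2a

--batch-3f2a
Content-Type: application/json
X-Batch-Index: 0

{ ...response for request 0... }
--batch-3f2a
Content-Type: application/json
X-Batch-Index: 2

{ ...response for request 2... }
--batch-3f2a--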
I will change the response from a JSON response to a multipart response.
The server will include, for every part, the index of the response.
The server will flush the response once it is available.
The client XHR will need to understand a multipart content-type response and be able to process each part as soon as it is available.
The XHR protocol will not support this workflow through a single request from the client. Since XHR relies heavily on the HTTP protocol for communication, XHR follows the HTTP connection rules. The first and most important rule: HTTP connections are always initiated by the client. Another rule: XHR returns the entire content body or fails.
The implication for your workflow is that each part of the multipart response must be requested individually by the client.
From what I have read, I am unsure which content type is right for the response. multipart/mixed? multipart/digest?
You should be in doubt, as there is no provision in the specification for this. The responseType attribute is limited to the empty string (the default), "arraybuffer", "blob", "document", "json", and "text". It is possible to override the MIME type, but that does not change the response type. Even in that case, the XHR spec is very clear about what it will send back: one of the types listed above, as documented in the specification.
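A small sketch of that point, with an illustrative endpoint name: responseType only accepts the fixed set of values, and overrideMimeType() does not add a multipart mode.

var xhr = new XMLHttpRequest();
xhr.open('GET', '/batch');
xhr.responseType = 'text';                 // '', 'arraybuffer', 'blob', 'document', 'json', 'text'
xhr.overrideMimeType('multipart/mixed');   // changes how the body is interpreted, not the responseType
xhr.onload = function () {
  // xhr.response is still delivered as one complete string.
  console.log(xhr.response.length);
};
xhr.send();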
Can I use an Accept request header to identify whether the client is able to handle the new service implementation? If so, what is the right Accept header for this? My plan is to use the same endpoint but vary the Accept header.
Custom HTTP headers are designed to help the client tell the server what its capabilities are. This is easily done. It doesn't necessarily have to be the Accept header (as that, too, is a defined list of MIME types).
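For example, a sketch of advertising the capability with a custom request header; the header name and endpoint are made up for illustration:

var requests = [{ method: 'GET', path: '/users/1' }, { method: 'GET', path: '/users/2' }];
var xhr = new XMLHttpRequest();
xhr.open('POST', '/batch');
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.setRequestHeader('X-Batch-Parts', 'indexed');   // "I understand the new per-part format"
xhr.send(JSON.stringify(requests));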
How can I develop an XHR client that is able to process the parts of a single response as soon as they are available? I found some ideas on the Web, but I am not entirely confident in them.
XHR is processed natively by the client and cannot be overridden, for all sorts of security reasons, so it is unlikely to be available as a solution here.
Note: ordinarily one might suggest the use of a custom version of Chromium, but your constraints do not allow for that.
There are basically 2 queries:
First:
My httpd server (the API) has a different origin than my front end, so I need to handle an OPTIONS (preflight) request, which makes my final response somewhat slow (300-400 ms). To avoid that, I want to use text/plain content, because it doesn't trigger the OPTIONS request. I've already tested this and it works (but I'm not sure about performance).
But I'm just wondering: is JSON faster than text/plain? Here's the data sample I'm sending to the server:
{"name": "example", "fullName": "example exam", "data": "**THIS IS MOST IMPORTANT. IT IS BASE64 ENCODED IMAGE DATA, SIZE AROUND 100-200 KB**"}
I want to send the same data as text/plain to avoid the OPTIONS request. So is there any performance issue with it? I'm asking because most people prefer application/json (I prefer it too, but I need to make my API fast for this special case; the main concern is the image data I'm sending).
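A sketch of the idea, assuming fetch and an illustrative URL: the JSON string is sent with a text/plain content type so the request stays a CORS "simple request" and no OPTIONS preflight is made.

function sendImage(imageBase64) {
  const payload = {
    name: 'example',
    fullName: 'example exam',
    data: imageBase64,                         // the ~100-200 KB base64 string
  };
  return fetch('https://api.example.com/upload', {
    method: 'POST',
    headers: { 'Content-Type': 'text/plain' }, // one of the preflight-safe content types
    body: JSON.stringify(payload),
  });
}

As far as I can tell, the bytes on the wire are the same JSON string either way; my concern is whether the content-type label itself has any performance impact.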
Second:
I'm using the PHP Slim Framework for the APIs. Their documentation explains that you can use custom methods to parse text/plain content, but I couldn't find an example anywhere. However, the following worked for me:
$text_content = $request->getBody()->getContents();
And then I used the json_decode method and it worked fine. But is this the right way, or can it break in some cases?
I can't seem to figure out a way to ignore the for(;;); in the response body of my cross-domain JSONP requests. I am doing this on my own servers; nothing else is going on here. I am trying to include that for(;;); inside the response body of my callback, as such:
_callbacks_.callback(for(;;);[jsondata....]);
but how can I remove it from the response body before the JS code gets parsed? I am using the Google Closure Library btw.
Ok I think I figured it out.
The reason the for(;;); is there is to prevent cross-domain requests for certain information. So basically, if you have information you are trying to protect, you go through a normal Ajax JSON channel, and if you are storing data on multiple servers you deal with it at the server level.
JSONP requests are actually a remote script inclusion, which means whatever the server outputs is actual JavaScript code. So if you have a for(;;); before your _callbacks_.callback();, that code will be executed on the origin domain when the request succeeds. If it's an infinite for loop, it will obviously jam the page.
So the normal implementation method is the following:
Send a normal Ajax request to a file located on the same server.
Perform the server-level work and send requests to the external servers via cURL over an encrypted connection.
Add security to the server response (a for(;;); or while(1); or throw(1);, followed by a string that prevents eval statements).
Get the response as a text string.
Remove your security implementations from the string.
Convert the string (which is now a "JSON string") to a JS object/array etc. with a standard JSON parser (see the sketch after this list).
Do whatever you want to do with the data.
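A sketch of steps 4-6, assuming fetch and an illustrative URL: read the response as text, strip the for(;;); guard, then parse the remaining JSON.

async function loadProtectedJson(url) {
  const text = await (await fetch(url)).text();           // step 4: raw text
  const cleaned = text.replace(/^for\s*\(;;\);\s*/, '');  // step 5: remove the guard
  return JSON.parse(cleaned);                             // step 6: standard JSON parse
}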
Just thought I should put this out here in case someone else Googles it in the future, as I didn't find proper information by Googling. This should help prevent cross-domain request forgery.
I have an FPGA that is hosting a website with an HTML and JavaScript front end and a C backend (ugh).
Is there any way to send a file from the C backend to the client? I'm talking to the backend via an HTML form (since the backend is hosted on an FPGA, I'm unsure how it will handle AJAX).
One tricky point: the website is hosted in read-only memory (hence the desire to send the client a file).
I'm going nuts, is this impossible?
No, this is possible. You just need to ensure the relevant HTTP headers are set in the GET response, specifically Content-Type and Content-Disposition, e.g.:
Content-type: application/pdf
Content-Disposition: attachment; filename="downloaded.pdf"
It is certainly possible via CGI, see for example: http://www.cs.tut.fi/~jkorpela/forms/cgic.html
Forged POST requests can be constructed by untrusted websites by creating a form and posting it to the target site. However, the raw contents of this POST will be encoded by the browser to be in the format:
param1=value1&param2=value2
Is it possible for untrusted websites to construct forged POSTs which contain arbitrary raw content -- such as stringified JSON?
{param1: value1, param2: value2}
Put another way: Can websites cause the browser to POST arbitrary content to third-party domains?
The POST body of an HTML form's request is always application/x-www-form-urlencoded, multipart/form-data, or text/plain, as these reflect the valid values for the enctype attribute. In particular, the text/plain one can be used to form valid JSON data. So form-based CSRF can be used here; however, it requires the server to accept the body as text/plain.
Additionally, XHR-based CSRF can be used, as the XMLHttpRequest API allows you to send arbitrary POST data. The only remaining obstacle is the Same-Origin Policy: such POST requests can only be forged if both sites have the same origin, or if your server supports Cross-Origin Resource Sharing (CORS) and allows the sharing.
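A sketch of the form-based variant described above, showing why a server must not treat text/plain bodies as JSON without CSRF protection; the target URL and field contents are illustrative:

const form = document.createElement('form');
form.method = 'POST';
form.action = 'https://target.example.com/api';
form.enctype = 'text/plain';

const input = document.createElement('input');
// text/plain posts "name=value", so the stray "=" is hidden inside a
// throwaway "padding" property:
input.name = '{"param1": "value1", "padding": "';
input.value = '"}';
form.appendChild(input);

document.body.appendChild(form);
form.submit();
// Body sent: {"param1": "value1", "padding": "="}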
Yes! A POST request is nothing more than text in a specific format sent to a web server. You can use the IE or Chrome developer tools to look at what each request looks like.
So yes, you can create a forged POST request and change whatever you want; however, if the request is not well formed, most web servers will reject it.
https://www.rfc-editor.org/rfc/rfc2616
The client-side code of a web site would have difficulty forging a request like that, but server-side code could do it very easily.
Since your web site can't tell whether the request comes from a browser or from a server that behaves just like a browser, the limitations in the browser are no protection.
You can create valid JSON via a regular form post. It's just a matter of creatively naming the form parameters. In particular, parameter names can contain quotes.
http://blog.opensecurityresearch.com/2012/02/json-csrf-with-parameter-padding.html
In the case of pure HTML forms, yes, the body will always be encoded according to the spec. But there are other encoding schemes, such as MIME multipart. There is also the question of JavaScript and XMLHttpRequest. Encoding is specifically mentioned in only one case, which strongly implies that no encoding is applied in the other cases.