I return the whole HTML as the response of an ajax request (not just an array as JSON). So, as you know, the response will be a bit larger than JSON, because it contains extra things like HTML tags, HTML attributes, etc. My goals are:
Increased scalability (reduced server load).
Less network bandwidth (lower cost).
Better user experience (faster).
I want to compress the response, something like the gzip format. Based on some tests, this is the size of an ajax response:
And this is the size of an ajax response which is zipped:
See? There is a huge difference between their sizes.
All I want to know is: is it possible to compress the response of an ajax request on the way, and convert it back to regular text on the client side? Do I need to change the web server (e.g. nginx) configuration for that, or does it happen automatically?
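For reference, this is roughly what enabling gzip looks like in the nginx configuration (the directives are real nginx options, but treat the values as a starting point to adapt, not a recommendation):

```nginx
# Compress responses for clients that send Accept-Encoding: gzip.
# The browser decompresses transparently; no client-side code is needed.
gzip on;
gzip_comp_level 5;      # CPU vs. ratio trade-off (1-9)
gzip_min_length 1024;   # skip tiny responses where gzip overhead isn't worth it
gzip_types text/html text/plain application/json application/javascript;
```

Once compression is negotiated via the Accept-Encoding request header and Content-Encoding response header, the browser hands your ajax callback the decompressed text automatically, so nothing changes in your JavaScript.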
You could use the lz-string library to compress the data before sending it in the ajax call, and decompress it again when handling the response.
GitHub link: https://github.com/pieroxy/lz-string/
Documentation & usage : https://coderwall.com/p/mekopw/jsonc-compress-your-json-data-up-to-80
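As a sketch of how that looks in browser code (assuming the lz-string script is loaded; compressToUTF16 and decompressFromUTF16 are from the library's documented API, and the element id is hypothetical):

```javascript
// Assumes <script src="lz-string.min.js"></script> has defined the LZString global.
var html = document.getElementById('content').innerHTML;
var packed = LZString.compressToUTF16(html); // compress before sending
// ...send `packed` as the ajax request/response body...
var restored = LZString.decompressFromUTF16(packed); // identical to `html`
```

Note that this only helps where you control both ends; for ordinary ajax responses, HTTP-level gzip (Content-Encoding) is usually simpler because the browser decompresses it for free.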
You may want to check out some of the other StackOverflow posts, plus Wikipedia links & articles on the web regarding Deflate & Gzip. Here are some links for you to peruse. They have some interesting charts & information in them:
StackOverflow: Why Use Deflate Instead of Gzip for Text Files Served by Apache
StackOverflow: Is There Any Performance Hit Involved in Choosing Gzip Over Deflate for HTTP
Wikipedia: Deflate
Wikipedia: Gzip
Article: How to Optimize Your Site with GZip Compression
RFC: Deflate
I was looking at this MDN tutorial https://developer.mozilla.org/en-US/docs/Web/HTTP/Messages
where it says
HTTP messages are composed of textual information encoded in ASCII.
I thought that meant HTTP can only transfer textual info, i.e. strings, assuming "HTTP message" here refers to the header plus body of requests and responses.
But later I found out that HTTP response body can have multiple MIME types outside of text, such as image, video, application/json etc. Doesn't that mean HTTP can also transfer non-textual information, which contradicts what that MDN page says about HTTP messages?
I am aware of encodings like UTF-8 and Base64. I guess you can use Base64 to turn binary data into text, which can then be sent with an application/json content type as another property of the JSON payload. But if you skip the encoding and use the correct content type instead, can you just transfer the binary data directly? I am still trying to figure this out.
Also, I have some experience consuming REST APIs from the front end. My impression is that you typically don't transfer binary data (e.g. images, files, audio) with RESTful APIs; they usually serve JSON or XML as the response. I wonder why that is. Is it because REST APIs are not suitable for transferring binary data directly? What are some common practices for transferring image or audio files to the front end?
The line you quoted is talking about the start line (the request line or status line) and the headers, which use only ASCII.
The body of a request or response is an arbitrary sequence of bytes. It's mainly interpreted by the application, not by the HTTP layer, and it doesn't need to be in any particular encoding. The header has a Content-Length field, and the client simply reads that many bytes after the header. (There's also chunked encoding, which breaks the content up into chunks, but each chunk starts with a byte length, and the client simply concatenates them.)
In addition, HTTP includes Content-Encoding and Transfer-Encoding headers that specify how the data is encoded, including a number of compression formats that produce binary data.
While it's possible to use textual encodings such as base64, this is not usually done in HTTP because it increases the size of the message and it's not necessary.
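To make the split concrete, here is a small Node.js sketch (illustrative only; real clients and servers do this for you) that builds a raw HTTP response by hand. The head is ASCII text, while the body is arbitrary bytes that aren't valid text in any encoding:

```javascript
// A response "on the wire": ASCII head, then an arbitrary byte body.
const body = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // first bytes of the PNG signature
const head = Buffer.from(
  'HTTP/1.1 200 OK\r\n' +
  'Content-Type: application/octet-stream\r\n' +
  `Content-Length: ${body.length}\r\n` +
  '\r\n',
  'ascii'
);
// The client parses the ASCII head, then reads exactly Content-Length
// bytes of body without interpreting them as text.
const message = Buffer.concat([head, body]);
console.log(message.length);
```

The 0x89 byte at the start of the body is deliberately not valid ASCII, yet the message is still a perfectly legal HTTP response.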
There are basically two questions here:
First:
My httpd server (the API) has a different origin than my front end, so I have to handle a preflight OPTIONS request, which makes my final response somewhat slow (300-400 ms). To avoid that, I want to use text/plain content, because it doesn't trigger the OPTIONS request. I've already tested this and it works (but I'm not sure about performance).
But I'm just wondering: is JSON faster than text/plain? Here's the data sample I'm sending to the server.
{"name": "example", "fullName": "example exam", "data": "**THIS IS MOST IMPORTANT. IT IS BASE64 ENCODED IMAGE DATA, SIZE AROUND 100-200 KB**")
I want to send the same data as text/plain to avoid the OPTIONS request. Is there any performance issue with that? I'm asking because most people prefer application/json (I prefer it too, but I need to make my API fast for this special case; my main concern is the image data I'm sending).
Second:
I'm using the PHP Slim Framework for the APIs. Their documentation explains that you can register custom methods to parse text/plain content, but I couldn't find an example anywhere. However, the following worked for me:
$text_content = $request->getBody()->getContents();
Then I ran the result through json_decode and it worked fine. But is this the right way, or can it break in some cases?
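On the client side this just means serializing the JSON yourself before sending. A minimal sketch (the fetch call is commented out and its URL is a placeholder; the JSON.parse line simulates what json_decode does on the server):

```javascript
// Client side: serialize the JSON yourself and send it as text/plain so the
// browser treats it as a "simple" request (no preflight OPTIONS).
const payload = JSON.stringify({ name: 'example', fullName: 'example exam' });
// fetch('https://api.example.com/users', {        // placeholder URL
//   method: 'POST',
//   headers: { 'Content-Type': 'text/plain' },    // simple header -> no preflight
//   body: payload,
// });

// Server side, the raw text/plain body is the same string, so parsing it
// (what json_decode does in PHP) recovers the object unchanged:
const parsed = JSON.parse(payload);
console.log(parsed.fullName);
```

Since the body bytes are identical either way, the content type itself has no meaningful performance cost; the saving comes from skipping the preflight round trip.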
I'm messing around with the Darksky API and under one of the query parameters it states:
extend=hourly optional
When present, return hour-by-hour data for the next 168 hours, instead
of the next 48. When using this option, we strongly recommend enabling
HTTP compression.
I'm using Express as a node proxy which hits the Darksky api (i.e. localhost:3000/api/forecast/LATITUDE, LONGITUDE).
What does "HTTP compression" mean and how would I go about enabling it?
Here, compression means gzip compression on the Express server. You can use the compression middleware to add gzip compression to your server easily.
Read more about how to install that middleware here:
https://github.com/expressjs/compression
An example implementation looks like this:
var compression = require('compression')
var express = require('express')
var app = express()
// compress all responses
app.use(compression())
// add all routes
To quote from https://darksky.net/dev/docs
The Forecast Data API supports HTTP compression. We heartily recommend using it, as it will make responses much smaller over the wire. To enable it, simply add an Accept-Encoding: gzip header to your request. (Most HTTP client libraries wrap this functionality for you, please consult your library’s documentation for details.)
I'm not familiar with the Dark Sky API, but I would imagine it returns a large amount of highly redundant data, which is ideal for compression. HTTP has a compression mechanism built in via the Accept-Encoding request header and Content-Encoding response header, as mentioned above.
In your case that data will be travelling across the wire twice, once from Dark Sky to your server and then again from your server to your end user. You could compress just one of these two transmissions or both, it's up to you but it's likely you'd want both unless the end user is on the same local network as your server.
There are various SO questions about making compressed requests, such as:
node.js - easy http requests with gzip/deflate compression
The key decision for you is whether you want to decompress and recompress the data in your proxy or just stream it through. If you don't need a decompressed copy of the data in the server then it would be more efficient to skip the extra steps. You'd need to be careful to ensure all the headers are set correctly but if you just pass on the relevant headers that you receive (in both directions) it should be relatively simple to pipe through the response from Dark Sky.
I am building a frontend application in which I retrieve files via the API provided by the backend.
The API consumes JSON requests like most RESTful APIs, but it responds with files as multipart/form-data. So when I tried to get the body of the response with axios, the data looked like this:
--77d4f4ac-bcb2-4457-ad81-810cf8c3ce47
Content-Disposition: attachment; filename=20170822.txt
Content-Type: text/plain
...data...
--77d4f4ac-bcb2-4457-ad81-810cf8c3ce47--
This confused me quite a lot, since I'm used to dealing with raw data as a Blob object. However, it seems I have to parse the response myself here to get the raw data. I searched around, but almost all of the articles and questions I found discuss server-side handling. So my question has two parts:
Is it okay/possible to handle multipart/form-data on client side?
If it is, how can I handle it? (Of course, it will be really appreciated if there's a library for it)
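It is possible to parse multipart on the client. As a sketch, for a single text part you can split on the boundary yourself. This hand-rolled parser is illustrative only: it assumes one part with text content, and real code should read the boundary from the Content-Type header, handle binary data as an ArrayBuffer, or use a library (in newer browsers, the Fetch API's Response.formData() can parse multipart bodies for you):

```javascript
// Minimal single-part multipart parser (illustrative; assumes text content
// and CRLF line endings as in the response shown above).
function parseSinglePart(body, boundary) {
  const delim = `--${boundary}`;
  // The part sits between the first delimiter and the closing one.
  const part = body.split(delim)[1];
  // Part headers and content are separated by a blank line (CRLF CRLF).
  const [rawHeaders, ...rest] = part.split('\r\n\r\n');
  const content = rest.join('\r\n\r\n').replace(/\r\n$/, '');
  const headers = {};
  for (const line of rawHeaders.split('\r\n').filter(Boolean)) {
    const [name, value] = line.split(/:\s*/);
    headers[name.toLowerCase()] = value;
  }
  return { headers, content };
}

// Rebuild the response body from the question to demonstrate:
const boundary = '77d4f4ac-bcb2-4457-ad81-810cf8c3ce47';
const body =
  `--${boundary}\r\n` +
  'Content-Disposition: attachment; filename=20170822.txt\r\n' +
  'Content-Type: text/plain\r\n' +
  '\r\n' +
  '...data...\r\n' +
  `--${boundary}--\r\n`;

const part = parseSinglePart(body, boundary);
console.log(part.content);
```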
I know it's better to use something like AWS for static files, but I'm at the development stage and prefer to serve javascript/css files from localhost.
It would be great if I could get gzip working on my javascript files for testing. I am using the default GZip middleware, but it only compresses the view response.
My template looks like:
<script src='file.js' type='application/javascript'></script>
There should be a file-type list for the Django server, similar to nginx's. How can I add application/javascript, text/javascript, etc. for gzip compression?
You should read the GZipMiddleware documentation, where it's explained that the middleware will not compress responses when the "Content-Type header contains javascript or starts with anything other than text/".
EDIT:
To clarify what the documentation says: if the Content-Type header value contains javascript, or doesn't begin with text/, the response won't be compressed. That means both text/javascript and application/javascript are excluded, since they contain javascript.
Those restrictions are intentionally imposed by the middleware itself, but you can still circumvent that by wrapping the static files view handler with the gzip_page() decorator and adding it to your URL configuration manually.
During development you are using the Django built-in web server. This server is really simple and has no options beyond what you can see with ./manage.py help runserver
Your options are either to set up a real web server or to use the staticfiles app with a custom StaticFilesStorage
But honestly, this is overkill; why would you want to test gzip compression?