I am receiving an ajax response with Content-Encoding: gzip.
I would like to convert it into a base64-encoded string.
In the Firefox/Firebug Net tab I can open the response and see it as base64-encoded text.
As you can see below, Firebug can convert the response into a base64 string.
I can save this as abc.zip and unzip it successfully.
How can I achieve this in JavaScript (converting the ajax response into a base64 string)?
You need to decompress the gzip-encoded data first to get it in uncompressed form; then you can base64-encode the uncompressed result.
See JavaScript implementation of Gzip for info on how to decompress gzip-encoded strings.
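A sketch of both steps using the Compression Streams API (available in modern browsers and recent Node versions), assuming the compressed response bytes are available as a Uint8Array:

```javascript
// Decompress gzip bytes, then base64-encode the decompressed result.
// Assumes `gzipBytes` is a Uint8Array holding the compressed body.
async function gzipToBase64(gzipBytes) {
  // Decompress with the Compression Streams API
  const stream = new Blob([gzipBytes]).stream()
    .pipeThrough(new DecompressionStream('gzip'));
  const bytes = new Uint8Array(await new Response(stream).arrayBuffer());
  // Build a binary string in chunks to avoid call-stack limits
  // with String.fromCharCode on large arrays, then base64-encode it
  let binary = '';
  for (let i = 0; i < bytes.length; i += 0x8000) {
    binary += String.fromCharCode(...bytes.subarray(i, i + 0x8000));
  }
  return btoa(binary);
}
```

If you want to base64-encode the *compressed* bytes instead (to save them as a .zip-like file, as in the question), skip the decompression step and encode `gzipBytes` directly.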
Related
I'm using fetch to retrieve a URL. This is in code that is acting as a proxy and if it gets a response with content-encoding: gzip, I want to get the raw data so I can send it back to the consumer without decoding it.
But I see no way to do this. Response.blob(), Response.arrayBuffer(), Response.body.getReader() … these all seem to decode the content.
So, for example, in my test case, I end up with a 15K array of bytes instead of the actual response body, which was only 4K.
Is there a way to get the raw contents of the response without it being decoded?
The browser automatically decodes compressed, gzip-encoded HTTP responses in its low-level networking layer before surfacing response data to the programmatic JavaScript layer. Given that the compressed bytes have already been transmitted over the network and are available in the user's browser, that data has already been "sent to the consumer".
The Compression Streams API can efficiently re-create a gzip-encoded ArrayBuffer, Blob, etc.:
const [ab, abGzip] = await Promise.all((await fetch(url)).body.tee().map(
  (stream, i) => new Response(i === 0
    ? stream
    : stream.pipeThrough(new CompressionStream('gzip'))
  ).arrayBuffer()
))
Full example
https://batman.dev/static/74634392/
A gzip-encoded SVG image is provided to users as a downloadable file link. The original SVG image is retrieved using fetch() from a gzip-encoded response.
I was looking at this MDN tutorial https://developer.mozilla.org/en-US/docs/Web/HTTP/Messages
where it says
HTTP messages are composed of textual information encoded in ASCII.
I thought this meant that HTTP can only transfer textual info, i.e. strings, assuming "HTTP message" here refers to the headers plus the body of a response.
But later I found out that HTTP response body can have multiple MIME types outside of text, such as image, video, application/json etc. Doesn't that mean HTTP can also transfer non-textual information, which contradicts what that MDN page says about HTTP messages?
I am aware of encoding methods like UTF-8 and base64. I guess you can use base64 encoding for binary data so that it is transformed into text, and then it can be sent with an application/json content type as another property of the JSON payload. But if you choose not to encode it, and instead use the correct Content-Type, can you just transfer the binary data directly? I am still trying to figure this out.
Also, I have some experience consuming REST APIs from the front end. My impression is that you typically don't transfer binary data (e.g. images, files, audio) with RESTful APIs; they often serve JSON or XML as the response. I wonder why that is. Is it because REST APIs are not suitable for transferring binary data directly? What are some common practices for transferring image or audio files to the front end?
The line you quoted is talking about the start line (the request line or status line) and the headers, which use only ASCII.
The body of a request or response is an arbitrary sequence of bytes. It's mainly interpreted by the application, not by the HTTP layer, and it doesn't need to be in any particular encoding. The header has a Content-Length field, and the client simply reads that many bytes after the header (there's also chunked encoding, which breaks the content up into chunks, but each one starts with a byte length, and the client simply concatenates them).
In addition, HTTP includes Content-Encoding and Transfer-Encoding values that specify how the data is encoded. These include a number of compression formats that produce binary data.
While it's possible to use textual encodings such as base64, this is not usually done in HTTP because it increases the size of the message and it's not necessary.
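To make the size point concrete, here is a quick sketch (Node, using Buffer for brevity) of the overhead base64 adds:

```javascript
// base64 maps every 3 raw bytes to 4 ASCII characters, so a binary
// payload grows by roughly a third when encoded as text.
const raw = new Uint8Array(3000);                 // stand-in for image bytes
const encoded = Buffer.from(raw).toString('base64');
console.log(raw.length, encoded.length);          // 3000 raw bytes -> 4000 chars
```

Sending the bytes as-is with the right Content-Type avoids that overhead entirely.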
I'm building a react application and I use Node (with Express) as a proxy-server. I send data from react app to node-express, then in Node I use that data to form URI and to make requests to another server.
My question is this: Shouldn't 'content-type': 'charset: utf-8' be enough when I send data including greek characters to Node? For example, I make a post request (using Fetch) to Node and I send code 'ΠΕ0001' using the header I already mentioned. Why do I get the error 'Path contains unescaped characters'? When I use encodeURIComponent it does work, but why 'charset: utf-8' is not enough?
Just setting the header 'content-type': 'charset: utf-8' is not enough. With this header you are only telling the server (Node in this instance) that the data you send is in UTF-8 format, which it should expect anyway.
Your string, however, lives in JavaScript as UTF-16, and a URL may only contain a limited set of ASCII characters, so a letter like Π cannot appear in it unescaped.
Hence you need encodeURIComponent first. In our case, Π is then represented as %CE%A0, the percent-encoded form of its UTF-8 bytes (0xCE 0xA0).
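A quick sketch of what encodeURIComponent actually produces for that code:

```javascript
// URLs may only contain a limited ASCII repertoire; encodeURIComponent
// percent-encodes everything else as its UTF-8 bytes.
const encoded = encodeURIComponent('ΠΕ0001');
console.log(encoded);                      // "%CE%A0%CE%950001"
console.log(decodeURIComponent(encoded));  // back to "ΠΕ0001"
```

The receiving side (Express, in this case) decodes the percent-escapes back into the original characters.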
Use JSON.stringify() to convert your data to a JSON string,
then pass that string to encodeURIComponent(),
then call fetch().
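The three steps above, sketched (the /search endpoint is hypothetical):

```javascript
// 1. serialize the data to JSON, 2. percent-encode it for the URL
function buildQuery(data) {
  return encodeURIComponent(JSON.stringify(data));
}

// 3. call fetch with the encoded value
async function search(data) {
  const res = await fetch(`/search?q=${buildQuery(data)}`); // hypothetical endpoint
  return res.json();
}
```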
I am working on a JavaScript project that uses AngularJS. When I get data with an HTTP request, all characters appear fine. But, for example, a string downloaded with ajax, "räksmörgås", appears with ugly characters when written to the console as plain text.
console.log("räksmörgås") results in this: r�ksm�rg�s
Is this a file encoding problem? Or are JavaScript strings always UTF-16, causing this problem?
I think the problem is that you are not using the correct charset. For Swedish, try changing the character encoding to iso-8859-1 or windows-1252. I suppose that you are sending the server response without the correct headers, and the browser interprets it as UTF-8, the default charset.
So maybe changing the header charset as below will resolve the issue:
Content-Type: text/plain; charset=windows-1252 // or
Content-Type: text/plain; charset=iso-8859-1
Another solution would be to declare your script tag with a charset attribute, forcing the browser to interpret the script with a specific encoding:
<script src="yourscript.js" charset="UTF-8"></script>
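If you cannot change the server headers, a client-side workaround (assuming the bytes really are ISO-8859-1) is to decode them explicitly with TextDecoder instead of letting the browser assume UTF-8:

```javascript
// Decode raw bytes with an explicit charset instead of the UTF-8 default.
function decodeLatin1(bytes) {
  return new TextDecoder('iso-8859-1').decode(bytes);
}

// Usage with fetch (URL is hypothetical):
async function fetchSwedishText(url) {
  const buf = await (await fetch(url)).arrayBuffer();
  return decodeLatin1(new Uint8Array(buf));
}
```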
I have encoded all the images in my CSS as base64 data to reduce the number of HTTP requests on the website. However, it appears that there is still an HTTP request for the data-encoded images, as you can see below.
I tried checking for a solution on the web, but everywhere it says there should be no HTTP request for images encoded as base64. What am I doing wrong?
It is not another network request; however, the data URI will still show up as an entry in your browser's asset listing.
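For reference, this is roughly how a data URI is built from raw bytes (a sketch; btoa is available in browsers and recent Node):

```javascript
// Build a data URI by base64-encoding the raw bytes and
// prefixing the MIME type.
function toDataUri(mime, bytes) {
  let binary = '';
  for (const b of bytes) binary += String.fromCharCode(b);
  return `data:${mime};base64,${btoa(binary)}`;
}
```

The resulting string goes straight into url(...) in the CSS, so the image travels inside the stylesheet rather than as a separate request.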