Node.JS binary data URL Decoding to UTF-8 issue - javascript

When I send binary data with UTF-8 encoding from a PHP client to a Node.JS server, Node.JS internally encodes this data with URL encoding (percent encoding). I guess Node.JS doesn't support UTF-8 here, as I already checked with base64 and that worked fine. I googled a lot and found that many people face the same issue. I checked the string manually, but I get a "URIError: URI malformed" error.
I also tried the deprecated functions unescape and escape, as well as encodeURI and decodeURI.
Little background of my work:
console.log(decodeURIComponent('%B1'));
As you can see from the URL-encoding list, %B1 is the "±" sign. But I am getting the above-mentioned error. This function can't decode many other special characters either. I don't know why Node.JS doesn't support a standard decoding style like UTF-8.
Please help me guys.
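For what it's worth, %B1 is the ISO-8859-1 (Latin-1) percent-encoding of "±", not UTF-8, which is why decodeURIComponent throws: it only accepts valid UTF-8 byte sequences. A minimal sketch of decoding single-byte Latin-1 percent-encoding by hand (the helper name here is made up for illustration):

```javascript
// %B1 is a single Latin-1 byte, not a valid UTF-8 sequence, so
// decodeURIComponent throws "URI malformed". Decode each %XX byte
// as a Latin-1 code point instead:
function decodeLatin1Percent(s) {
  return s.replace(/%([0-9A-Fa-f]{2})/g, (_, hex) =>
    String.fromCharCode(parseInt(hex, 16)));
}

console.log(decodeLatin1Percent('%B1'));   // "±"
console.log(decodeLatin1Percent('a%20b')); // "a b"
```

This only works for Latin-1 input; if the client actually sent UTF-8 percent-encoding (e.g. %C2%B1 for "±"), decodeURIComponent would handle it directly.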

Related

Encoded text: what is this and how to decode it?

So I'm trying to find out how a web app works, and I was looking at its requests. The raw data was encoded like this. What is this, and how do I reproduce it to use their API?
é£Ü-JM[#ÀóÚÀ5"Ò
¿ eyJ4NXQiOiJOVEF4Wm1NeE5ETXlaRGczTVRVMVpHTTBNekV6T0RKaFpXSTRORE5sWkRVMU9HRmtOakZpTVEiLCJraWQiOiJOVEF4Wm1NeE5ETXlaRGczTVRVMVpHTTBNekV6T0RKaFpXSTRORE5sWkRVMU9HRmtOakZpTVEiLCJhbGciOiJSUzI1NiJ9.eyJtaWJQcml2aWxlZ2VzIjoiRk9SQ0VEX09UUCIsInN1YiI6ImxpbGFwbkBjYXJib24uc3VwZXIiLCJreWNTdGF0dXMiOiJ7XCJXeWlnUENXUjJnRDFveGVKeTJvNXNZXCI6XCJjXCJ9IiwiaXNzIjoiaHR0cHM6XC9cL2xvY2FsaG9zdDoxNDI1N1wvb2F1dGgyXC90b2tlbiIsIm1vYmlsZSI6IjA5MTI3NzkwNTI5IiwiYXVkIjoicmpySko5V2RmUTM3X3VlMENfS3JBVkp6MUN3YSIsIlVzZXJPVFBTdGF0dXMiOiJGSU5BTCIsIlBhcnR5SWQiOiI1MzU3NjY3MSIsImF6cCI6InJqckpKOVdkZlEzN191ZTBDX0tyQVZKejFDd2EiLCJzY29wZSI6Im9wZW5pZCIsIkZvcmNlQ2hhbmdlIjoiRmFsc2UiLCJleHAiOjE2NzU3MDkyMDQsImlhdCI6MTY3NTcwODMwNCwianRpIjoiNzExZjMzMTItMmMyNS00NmI4LWFmN2YtZjQ2ZmEwZmRlZjE1In0.fPGGaVObDPuhYOVhAPAk9mw_Vtiyzq-v43iCGuULLioPpuR73eb8f1ufB6Y5ZnX3zsPp7Mq-XkLC0wwiY_6_-ykDg3huNwmI-bpKxB0szQkolpShE0QyqQ93dS8sB5yaB_qoHvzhsWM1_sRfJB3F246MiYietw9UAqJsmFrXAEq2diX2idjNeAc_SuPqNBtd39qw6cUcchW1n8m5VyLXogO9gN5TzVntvmUIh9r047E87Fdm1mXZEiuLEub2ljrrIrK9-zuPmfQx4FuEr8CQOlwvunmALLVjSRK1WsVEFwSW8pmrXvsu0LZOMgEWoSb7zfYYa4m-gbmvMrsuWYQ7CQB P²> Ü¡©DÓW×}ÿº> J
fa_IR
I'm just trying to understand the requests to the server so that I can reproduce them and use their API.
That is base64-encoded text. When decoded, this is what I got back:
{"x5t":"NTAxZmMxNDMyZDg3MTU1ZGM0MzEzODJhZWI4NDNlZDU1OGFkNjFiMQ","kid":"NTAxZmMxNDMyZDg3MTU1ZGM0MzEzODJhZWI4NDNlZDU1OGFkNjFiMQ","alg":"RS256"}
{"mibPrivileges":"FORCED_OTP","sub":"lilapn@carbon.super","kycStatus":"{\"WyigPCWR2gD1oxeJy2o5sY\":\"c\"}","iss":"https:\/\/localhost:14257\/oauth2\/token","mobile":"09127790529","aud":"rjrJJ9WdfQ37_ue0C_KrAVJz1Cwa","UserOTPStatus":"FINAL","PartyId":"53576671","azp":"rjrJJ9WdfQ37_ue0C_KrAVJz1Cwa","scope":"openid","ForceChange":"False","exp":1675709204,"iat":1675708304,"jti":"711f3312-2c25-46b8-af7f-f46fa0fdef15"}
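More specifically, the dot-separated blob in the middle of the request is a JWT (JSON Web Token): three base64url-encoded segments (header, payload, signature). A minimal sketch of decoding the first two in Node (the token below is a short made-up stand-in, not the one from the question):

```javascript
// Each JWT segment before the last dot is base64url-encoded JSON.
// Node's Buffer decodes base64url natively (Node >= 15.7).
const token = 'eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJkZW1vIn0.signature-goes-here';

const [headerB64, payloadB64] = token.split('.');
const decodeSeg = (seg) => JSON.parse(Buffer.from(seg, 'base64url').toString('utf8'));

console.log(decodeSeg(headerB64));  // { alg: 'RS256' }
console.log(decodeSeg(payloadB64)); // { sub: 'demo' }
```

Note that decoding only reveals the claims; reproducing a valid request would still require a signature issued by the server, since the third segment is an RS256 signature over the first two.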

Netsuite Suitescript Decode Base64

I'm doing an API integration with SuiteScript 2.0. Data encoded with base64 is returned from the API. I need to reach the data I want by decoding the base64, saving the returned XML data as a .zip, and unzipping it.
The relevant data can be decoded in Notepad++ with Plugins > MIME Tools > Decode Base64, saved as a zip, and opened with unzip.
The script I'm working with is a scheduled script.
I tried the two decoding methods mentioned in SuiteAnswers.
1 - From base64 to UTF_8 with the N/encode module (the returned result is completely wrong for this problem)
2 - The solution in the link:
https://netsuite.custhelp.com/app/answers/detail/a_id/41271/kw/base64%20decode
(With this solution, when you save the returned data as a zip, it gives an "Unexpected end of the archive" error when opening the zip.)
ArrayBuffer() and atob() are not available in Suitescript.
The thing I know will work is to proxy the call through a Lambda on some external system.
However, if your data is already in base64, you might try just creating a File Cabinet file and giving it the base64-encoded value as its content. NetSuite already handles base64 for files, so you might be overworking the issue. It doesn't sound like you are actually processing the XML if your end goal is to save it as a zip.
If this doesn't help see my comments regarding some clarifications you could add to your question.
require(["N/encode"], function(encode) {
    var txt = encode.convert({
        string: "your Base64 string",
        inputEncoding: encode.Encoding.BASE_64,
        outputEncoding: encode.Encoding.UTF_8
    });
});
SuiteScript example
All types of encode
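As a sketch of why the N/encode-to-UTF_8 route gives a "completely wrong" result: a zip archive contains arbitrary bytes that are not valid UTF-8, so round-tripping them through a UTF-8 string is lossy. The illustration below uses plain Node Buffers rather than SuiteScript, but the same principle applies inside NetSuite:

```javascript
// A zip starts with "PK\x03\x04" and then contains arbitrary binary
// bytes. Decoding those bytes as UTF-8 text silently replaces invalid
// sequences with U+FFFD, corrupting the archive.
const zipHeader = Buffer.from([0x50, 0x4b, 0x03, 0x04, 0xff, 0xfe]); // 0xff/0xfe: not valid UTF-8
const b64 = zipHeader.toString('base64');

const asText = Buffer.from(b64, 'base64').toString('utf8'); // lossy decode
const backToBytes = Buffer.from(asText, 'utf8');

console.log(backToBytes.equals(zipHeader)); // false — the archive is now corrupted
```

This is why the "Unexpected end of the archive" error appears: the bytes written to the zip are no longer the bytes the API sent. Keeping the data in base64 form end to end (as in the File Cabinet suggestion above) avoids the lossy text decode entirely.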

nodejs gmail api - attachment download url

So I got the attachment id from the API, and the payload is data in bytes.
from https://developers.google.com/gmail/api/v1/reference/users/messages/attachments#resource
The body data of a MIME message part as a base64url encoded string.
so i tried decoding it with https://github.com/brianloveswords/base64url
and a few other base64 plugins - all returning another unreadable string format.
Since I wasn't sure if it was an actual file or an encoded URL, I tried exporting the contents to a file with a known extension, without much success, so it should be a download link.
Does anyone have any idea how to get a readable format of this string?
payload string sample
1op669AzYx7c9caYxOd0ZWzNE7IqPyUUnmRkib4HPAUbc3FzzEs7xWl6glqXXo2Y_hjSfT9CtS0THzTkf2rZ8UbKU6S
base64 utf8 decoded sample
↑8�k垬☼A��VN��\�>�/i��U�ՁR∟F����A�☺ň<�v¶��C� I-t��]⌂�R☺
The string is much bigger; testing can be done here for the full response: https://developers.google.com/gmail/api/v1/reference/users/messages/attachments/get
UPDATE
Solved it
https://stackoverflow.com/a/16350999/7579200
Sounds like you should use the get function.
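A minimal sketch of the base64url decode in plain Node, without any plugin (the string below is a short stand-in for the API's data field, and the output filename is made up):

```javascript
// Gmail's users.messages.attachments.get returns the attachment body as a
// base64url string. Buffer decodes base64url natively (Node >= 15.7); the
// result is the raw file bytes, not text, so write them straight to disk.
const data = 'aGVsbG8gd29ybGQ'; // stand-in for the API's "data" field
const bytes = Buffer.from(data, 'base64url');

console.log(bytes.toString('utf8')); // "hello world" (only readable because this sample is text)
// For a real attachment, write the bytes as-is:
// require('fs').writeFileSync('attachment.pdf', bytes);
```

The "unreadable string" in the question is expected: most attachments are binary (PDFs, images), so the decoded bytes only look readable for plain-text attachments.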

Handle non-ASCII filenames in XHR uploading

I have pretty standard javascript/XHR drag-and-drop file upload code, and just came across an unfortunate real-world snag. I have a file on my (Win7) desktop called "TEST-é-TEST.txt". In Chrome (30.0.1599.69), it arrives at the server with filename in UTF-8, which works out fine. In Firefox (24.0), the filename seems mangled when it arrives at the server.
I didn't trust what Firebug/Chrome might be telling me about the encoding, so I examined the hex of the request packet. Everything else is the same except the non-ASCII character is indeed being encoded differently in the two browsers:
Chrome: C3 A9 (this is the expected UTF-8 for that character)
Firefox: EF BF BD (UTF-8 "replacement character"?!)
Is this a Firefox bug? I tried renaming the file, replacing the é with ó, and the Firefox hex was the same... so such a mangle really seems like a browser bug. (If Firefox were confusedly sending along ISO-8859-1, for example, without touching it, I'd see an E9 byte, and I could handle that on the server side, but it shouldn't mangle it!)
Regardless of the reason, is there something I can do on either the client or server sides to correct for this? If a replacement character is indeed being sent to the server, then it would seem unrecoverable there, so I almost certainly need to do it on the client side.
And yes, the page on which this code exists has charset=utf-8, and Firefox confirms that it perceives the page as UTF-8 under View>Character Encoding.
Furthermore, if I dump the filename to console.log, it appears fine there--I guess it's just getting mangled in/after setRequestHeader("X-File-Name",file.name).
Finally, it would seem that the value passed to setRequestHeader() should be able to have code points up to U+00FF, so U+00E9 (é) and U+00F3 (ó) shouldn't cause a problem, though higher codes could trigger a SyntaxError: http://www.w3.org/TR/XMLHttpRequest2/#the-setrequestheader-method
Thanks so much for Boris's help. Here's a summary of what I discovered through our interactions in the comments:
1) The core issue is that HTTP Request headers are supposed to be ISO-8859-1. Prior versions of Chrome and Firefox both passed along UTF-8 strings unchanged in setRequestHeader() calls. This changed in FF24.0 (and apparently will be changing in Chrome soon too), such that FF drops high bytes and passes along only the low byte for each character. In the example I gave in the question, this was recoverable, but characters with higher codes could be mangled irretrievably.
2) One workaround would be to encode on the client side, e.g.:
setRequestHeader('X-File-Name',encodeURIComponent(filename))
and then decode on the server side, e.g. in PHP:
$filename=rawurldecode($_SERVER['HTTP_X_FILE_NAME'])
3) Note that this is only problematic because my ajax file upload approach is to send the raw file data in the request body, so I need to send the filename via a custom request header (as shown in many tutorials online). If I used FormData instead, I wouldn't have to worry about this. I believe if you want solid, standards-based unicode filename support, you should use FormData and not the request header approach.
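To make the workaround in point 2 concrete, here is the client-side round trip on its own, using the filename from the question (the server side would apply rawurldecode as shown above):

```javascript
// Percent-encoding keeps the header value pure ASCII, so no browser
// mangles it; the server decodes it back to the original UTF-8 name.
const name = 'TEST-é-TEST.txt';

const headerValue = encodeURIComponent(name); // "TEST-%C3%A9-TEST.txt"
// xhr.setRequestHeader('X-File-Name', headerValue); // as in point 2

const roundTrip = decodeURIComponent(headerValue); // what the server recovers
console.log(roundTrip === name); // true
```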

ajax gzip compress a string for postdata

I can't seem to find anything related to gzip-compressing a string. I only find broken sites or suggestions for compression schemes that won't work as gzip. There's also lots of talk of server-side implementation; however, I wish to send the encoded data from the client.
To clarify: all my clients use Greasemonkey or Scriptish, and all of them are generally on a recent version of Firefox or one of its derivatives, so content encoding for everyone is not an issue.
What I do need is a pure javascript or some sort of library loadable by javascript to gzip compress a string.
Just achieved this using https://github.com/dankogai/js-deflate. However, the postdata for whatever reason strips the + signs and replaces them with spaces.
To send the data via javascript:
params.mapdata= btoa(RawDeflate.deflate(JSON.stringify(mapdata)));
To receive the data via php:
$value = gzinflate(base64_decode(preg_replace('/\s/', '+',$value)));
