create binary payload in node-red - javascript

New to Node-RED and JavaScript. I need to use the TCP input node to connect to a relay controller for status. I'm using a function node to generate a two-byte request that will flow to the TCP node and on to the controller, but I don't know how to format this in JavaScript. I can set
msg.payload = "hello";
to send a string, but I need to send 2 bytes: 0xEF 0xAA. In C# I would just create the string
msg.payload = "\xEF\xAA";
or something. How do I do this in JavaScript/Node-RED?

Binary payloads are Node.js Buffer objects, so they can be created like this:
msg.payload = Buffer.from([0xEF, 0xAA]); // new Buffer([0xEF, 0xAA]) still works but is deprecated
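For example, a minimal sketch of a complete function node body, assuming the function node is wired straight to the TCP node:
// Build the two-byte status request and pass the message on
msg.payload = Buffer.from([0xEF, 0xAA]); // raw bytes 0xEF 0xAA
return msg;                              // Node-RED sends this to the next node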

As of today (Node-RED 0.17.5), this can be achieved as follows; see the documentation:
msg.payload = Buffer.from("\xEF\xAA", "latin1")
(The explicit 'latin1' matters: with the default 'utf8' encoding, \xEF and \xAA would be re-encoded as the multibyte sequence C3 AF C2 AA rather than the raw bytes EF AA.)
or
msg.payload = Buffer.from('hello world', 'ascii');
As you can see, you can also specify an encoding parameter:
The character encodings currently supported by Node.js include:
'ascii' - for 7-bit ASCII data only. This encoding is fast and will strip the high bit if set.
'utf8' - Multibyte encoded Unicode characters. Many web pages and other document formats use UTF-8.
'utf16le' - 2 or 4 bytes, little-endian encoded Unicode characters. Surrogate pairs (U+10000 to U+10FFFF) are supported.
'ucs2' - Alias of 'utf16le'.
'base64' - Base64 encoding. When creating a Buffer from a string, this encoding will also correctly accept "URL and Filename Safe Alphabet" as specified in RFC4648, Section 5.
'latin1' - A way of encoding the Buffer into a one-byte encoded string (as defined by the IANA in RFC1345, page 63, to be the Latin-1 supplement block and C0/C1 control codes).
'binary' - Alias for 'latin1'.
'hex' - Encode each byte as two hexadecimal characters.
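If the bytes are easier to express as a hex string than as string escapes, the 'hex' encoding from the list above sidesteps the \xEF re-encoding pitfall entirely. A small sketch:
// 'efaa' decodes to the two raw bytes 0xEF 0xAA
msg.payload = Buffer.from('efaa', 'hex');
return msg;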

Related

What kind of binary data cannot be stringified to JSON?

I've read everywhere that JSON cannot encode binary data, so I wrote this simple test to check if that's actually true.
function test(elem) {
    let reader = new FileReader();
    reader.onload = () => {
        let json = JSON.stringify(reader.result);
        let isCorrect = JSON.parse(json) === reader.result;
        alert('JSON stringification correct: ' + isCorrect);
    };
    reader.readAsBinaryString(elem.files[0]);
}
Choose a binary file: <br>
<input type="file" onchange="test(this)">
Choose a binary file from your computer; the test function reads it as a binary string, JSON.stringifies that string, parses it back, and compares the result with the original binary string.
I have tried with lots and lots of binary files (.exe files mostly), and I just can't find a single file that cannot be JSON-ified.
Can you give an example of something that cannot be converted to a JSON string?
I think you're not understanding this correctly.
First of all, what do you mean by "a JSON string"? The result of JSON.stringify(), or the string data type in a JSON document? Let's look at the latter, because I think that's what the statement "cannot contain binary data" is about.
If you look at the spec, a JSON string cannot contain every possible character. In particular, control characters are not allowed. This means a JSON string cannot contain arbitrary (binary) data directly. However, you can use an escape sequence (\u) to represent these characters, which is a form of encoding. JSON.stringify() does this for you automatically.
For example:
const s = String.fromCodePoint(65, 0, 66); // a "binary" string: 'A', 0x00, 'B'
JSON.stringify(s);                         // '"A\u0000B"'
JSON.parse() knows about these escape sequences as well and will restore the binary data.
So a JSON string data type can encode binary data but it cannot contain all binary data directly, without encoding.
Some additional notes:
Handling binary data correctly in JavaScript (and many other languages) can be difficult. String data types were not designed for binary data. For example, you have to know the encoding that is used to store the String data internally.
Usually, binary data is not encoded using escape sequences but using more efficient encoding schemes such as Base64.
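For instance, here is a short sketch of the usual Base64 round trip in Node.js (browser code would use btoa/atob or a FileReader instead):
// Encode arbitrary bytes to a JSON-safe Base64 string and back
const bytes = Buffer.from([0x41, 0x00, 0x42]);           // 'A', 0x00, 'B'
const json = JSON.stringify({ data: bytes.toString('base64') });
const restored = Buffer.from(JSON.parse(json).data, 'base64');
console.log(restored.equals(bytes));                     // true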

Convert ANSI file text into UTF-8 in Node.js using File System

I'm trying to convert the text of an ANSI-encoded file to UTF-8 in Node.js.
I'm reading the file using Node's core File System (fs) module. Is there any way to 'tell' readFile that the encoding is ANSI?
const fs = require('fs');
fs.readFile('..\\LogSSH\\' + fileName + '.log', 'utf8', function (err, data) {
    if (err) {
        console.log(err);
    }
});
If not, how can I convert that text?
Of course, ANSI is not actually an encoding. But no matter what exact encoding we're talking about I can't see any Microsoft code pages included in the relatively short list documented at Buffers and Character Encodings:
ascii - for 7-bit ASCII data only. This encoding is fast and will strip the high bit if set.
utf8 - Multibyte encoded Unicode characters. Many web pages and other document formats use UTF-8.
utf16le - 2 or 4 bytes, little-endian encoded Unicode characters. Surrogate pairs (U+10000 to U+10FFFF) are supported.
ucs2 - Alias of 'utf16le'.
base64 - Base64 encoding. When creating a Buffer from a string, this encoding will also correctly accept "URL and Filename Safe Alphabet" as specified in RFC4648, Section 5.
latin1 - A way of encoding the Buffer into a one-byte encoded string (as defined by the IANA in RFC1345, page 63, to be the Latin-1 supplement block and C0/C1 control codes).
binary - Alias for 'latin1'.
hex - Encode each byte as two hexadecimal characters.
If you work in Western Europe you may be tempted to use latin1 as a synonym for Windows-1252, but it'll render incorrect results as soon as you print a € symbol.
So the answer is no, you need to install a third-party package like iconv-lite.
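As a sketch, assuming the file is actually Windows-1252 (substitute whatever code page your files really use):
const fs = require('fs');
const iconv = require('iconv-lite');
// Read the raw bytes, then decode them explicitly as Windows-1252
const buf = fs.readFileSync('..\\LogSSH\\' + fileName + '.log');
const text = iconv.decode(buf, 'win1252'); // a normal JS (UTF-16) string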
In my case the conversion was needed because the text contains special Latin characters such as 'í' or 'ó'. I solved it by changing the encoding from 'utf8' to 'binary' in the fs.readFile() call:
fs.readFile('..\\LogSSH\\' + fileName + '.log', {encoding: "binary"}, function (err, data) {
    if (err) {
        console.log(err);
    }
});

Javascript unicode to ASCII

I need to convert the Unicode SOH value '\u0001' to ASCII. Why is this not working?
var soh = String.fromCharCode(01);
It returns '\u0001'
Or when I try
var soh = '\u0001'
It returns a smiley face.
How can I get the Unicode character to become the proper SOH value (a blank, unprintable character)?
JS has no ASCII strings, they're intrinsically UTF-16.
In a browser you're out of luck. If you're coding for node.js you're lucky!
You can use a buffer to transcode strings into octets and then manipulate the binary data at will. But you won't necessarily get a valid string back out of the buffer once you've messed with it.
Either way you'll have to read more about it here:
https://mathiasbynens.be/notes/javascript-encoding
or here:
https://nodejs.org/api/buffer.html
EDIT: in the comments you say you use Node.js, so this is an excerpt from the second link above.
const buf5 = Buffer.from('test');
// Creates a Buffer containing the ASCII bytes [0x74, 0x65, 0x73, 0x74].
To create the SOH character embedded in a common ASCII string, use the common escape sequence \x01, like so:
const bufferWithSOH = Buffer.from("string with \x01 SOH", "ascii");
This should do it. You can then send the bufferWithSOH content to an output stream such as a network, console or file stream.
Node.js documentation will guide you on how to use strings in a Buffer pretty well, just look up the second link above.
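For example, a minimal usage sketch writing the buffer to stdout (any writable stream, such as a net.Socket, works the same way):
// The raw bytes, including the 0x01 SOH, go out unmodified
process.stdout.write(bufferWithSOH);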
In ASCII this would be an array of bytes: 0x00 0x01. You would need to extract the Unicode code point after the \u, call parseInt on it, and then extract the bytes from the resulting Number. JavaScript might not be the best language for this.
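A sketch of that approach, assuming the input is the six-character escape text '\u0001' (a literal backslash) rather than the character itself:
const escapeText = '\\u0001';                        // literal backslash-u text
const codePoint = parseInt(escapeText.slice(2), 16); // 1
const bytes = [(codePoint >> 8) & 0xFF, codePoint & 0xFF]; // [0x00, 0x01]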

How to encode a 256-bit AES key using base64?

I am using Node.js to interact with Amazon Web Services (specifically S3). I am attempting to use server-side encryption with customer-provided keys; only AES-256 is allowed as the encryption method. The API specifies that the keys be base64 encoded.
At the moment I am merely testing the AWS API with throwaway test files, so security (and secure key generation) is not an issue yet.
My problem is as follows: given that I am in possession of a 256-bit key as a hexadecimal string, how do I obtain a base64 encoded string of the integer that it represents?
My first instinct was to parse the hexadecimal string as an integer and convert it back to a string using toString(radix), specifying a radix of 64. However, toString() accepts a maximum radix of 36. Is there another way of doing this?
And even if I do this, is the result a valid base64 encoding of a 256-bit encryption key? The API reference just says that it expects a key that is "appropriate for use with the algorithm specified". (I am using the putObject method.)
To convert a hex string to a base64 string in Node.js, you can very easily use a Buffer:
var key_in_hex = '11223344556677881122334455667788'
var buf = Buffer.from(key_in_hex, 'hex') // new Buffer(...) is deprecated
var str = buf.toString('base64')
...which will set str to the base64 equivalent of the hex string passed in ('112233...'). Note that a full 256-bit key is 64 hex characters; a shorter value is shown here for brevity.
You could also of course combine it into a one-liner:
var key_in_hex = '11223344556677881122334455667788'
var str = Buffer.from(key_in_hex, 'hex').toString('base64')
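If it helps, here is a hedged sketch of feeding that key to S3 with SSE-C, assuming the AWS SDK for JavaScript v2; the bucket and object names are placeholders:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var keyBase64 = Buffer.from('11223344556677881122334455667788', 'hex').toString('base64');
s3.putObject({
    Bucket: 'my-test-bucket',       // placeholder
    Key: 'encrypted-object',        // placeholder
    Body: 'test data',
    SSECustomerAlgorithm: 'AES256', // the only algorithm S3 SSE-C accepts
    SSECustomerKey: keyBase64
}, function (err, data) {
    if (err) console.log(err);
});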

Adding UTF-8 BOM to string/Blob

I need to add a UTF-8 byte-order-mark to generated text data on client side. How do I do that?
Using new Blob(['\xEF\xBB\xBF' + content]) yields 'ï»¿"my data"', of course.
Neither did '\uBBEF\x22BF' work (with '\x22' == '"' being the next character in content).
Is it possible to prepend the UTF-8 BOM in JavaScript to a generated text?
Yes, I really do need the UTF-8 BOM in this case.
Prepend \ufeff to the string. See http://msdn.microsoft.com/en-us/library/ie/2yfce773(v=vs.94).aspx
See the discussion between @jeff-fischer and @casey for details on UTF-8 and UTF-16 and the BOM. What actually makes the above work is that the string \ufeff is always used to represent the BOM, regardless of whether UTF-8 or UTF-16 is being used.
See p.36 in The Unicode Standard 5.0, Chapter 2 for a detailed explanation. A quote from that page
The endian order entry for UTF-8 in Table 2-4 is marked N/A because
UTF-8 code units are 8 bits in size, and the usual machine issues of
endian order for larger code units do not apply. The serialized order
of the bytes must not depart from the order defined by the UTF-8
encoding form. Use of a BOM is neither required nor recommended for
UTF-8, but may be encountered in contexts where UTF-8 data is
converted from other encoding forms that use a BOM or where the BOM is
used as a UTF-8 signature.
I had the same issue and this is the solution I came up with:
var blob = new Blob([
    new Uint8Array([0xEF, 0xBB, 0xBF]), // UTF-8 BOM
    "Text",
    ... // Remaining data
], { type: "text/plain;charset=utf-8" });
Using a Uint8Array prevents the browser from converting those bytes into a string (tested on Chrome and Firefox).
You should replace text/plain with your desired MIME type.
I'm editing my original answer. The answer above really demands elaboration, as this is a convoluted solution on Node.js's part.
The short answer is, yes, this code works.
The long answer is, no, FEFF is not the byte order mark for UTF-8. Apparently Node took some sort of shortcut for writing encodings within files. U+FEFF is the Unicode byte order mark code point (serialized as FF FE in UTF-16 Little Endian), as can be seen in the Byte Order Mark Wikipedia article and in a binary text editor after the file has been written. I've verified this is the case.
http://en.wikipedia.org/wiki/Byte_order_mark#Representations_of_byte_order_marks_by_encoding
Apparently, Node.js uses \ufeff to signify any number of encodings. It takes the \ufeff marker and converts it into the correct byte order mark based on the third options parameter of writeFile, in which you pass the encoding string. Node.js takes that encoding string and converts the fixed \ufeff code point into the actual byte order mark for that encoding.
UTF-8 Example:
fs.writeFile(someFilename, '\ufeff' + html, { encoding: 'utf8' }, function(err) {
    /* The actual byte order mark written to the file is EF BB BF */
});
UTF-16 Little Endian Example:
fs.writeFile(someFilename, '\ufeff' + html, { encoding: 'utf16le' }, function(err) {
    /* The actual byte order mark written to the file is FF FE */
});
So, as you can see, \ufeff is simply a marker standing in for any number of resulting encodings. The actual bytes that make it into the file depend directly on the encoding option specified. The marker used within the string is really irrelevant to what gets written to the file.
I suspect the reasoning behind this is that they chose not to write byte order marks by default, and the 3-byte mark for UTF-8 isn't easily encoded into a JavaScript string to be written to disk. So they used the UTF-16LE BOM character as a placeholder within the string, which gets substituted at write time.
This is my solution:
var blob = new Blob(["\uFEFF" + csv], {
    type: 'text/csv; charset=utf-8'
});
This works for me:
let blob = new Blob(["\ufeff", csv], { type: 'text/csv;charset=utf-8' });
A BOM (byte order mark) may be necessary because some programs need it to pick the correct character encoding.
Example:
When MS Excel opens a CSV file without a BOM on a system whose default character encoding is Shift_JIS rather than UTF-8, it decodes the file using the default encoding, which results in garbage characters. Specifying the UTF-8 BOM fixes this.
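As a small sketch tying this together, assuming browser code that offers the CSV as a download (the filename is a placeholder):
// Build a BOM-prefixed UTF-8 CSV Blob and trigger a download
var blob = new Blob(['\uFEFF', csv], { type: 'text/csv;charset=utf-8' });
var a = document.createElement('a');
a.href = URL.createObjectURL(blob);
a.download = 'data.csv'; // placeholder filename
a.click();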
This fixed it for me; I was getting a BOM with the Authorize.net API and Cloudflare Workers:
const data = JSON.parse((await res.text()).trim());
