I'm trying to get the RGB value from my wifi bulb, but I'm getting this: "2442110"
How do I decode it? I've tried searching for it but found nothing.
If you convert this integer to hex, you'll see the following:
>>> hex(2442110)
'0x25437e'
I would guess that this can be broken into RGB as
(0x25, 0x43, 0x7e) # hex
(37, 67, 126) # decimal
So, starting with your original integer, first convert it to a hex string, then use string slicing to extract each two-character component, and finally convert each slice back to int (base 10, i.e. decimal):
>>> s = hex(2442110)
>>> [int(s[i:i+2], 16) for i in range(2, 7, 2)]
[37, 67, 126]
I am trying to write code in JavaScript to convert binary-coded decimal into decimal and vice-versa.
How can I achieve this?
To encode and decode values using Binary Coded Decimal (BCD):
const dec2bcd = dec => parseInt(dec.toString(10), 16);
const bcd2dec = bcd => parseInt(bcd.toString(16), 10);

console.log(dec2bcd(42)); // 66 (0x42)
console.log(bcd2dec(66)); // 42
Explanation
Suppose you have the number 42. To encode this as binary-coded decimal, you place a '4' in the high nibble and a '2' in the low nibble. The most straightforward way of doing this is to reinterpret the string representation of the decimal number as hex. So convert the value 42 to the string '42' and then parse this as hex to arrive at the number 66, which, if you examine it in binary, is 01000010b; that indeed has a 4 (0100b) in the high nibble and a 2 (0010b) in the low nibble.
To decode, just format the number as a string using hexadecimal encoding, and then reinterpret this string as decimal.
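As a quick sanity check on the nibble layout, you can print the encoded value in binary; a minimal sketch reusing the dec2bcd helper above (redeclared here so the snippet is self-contained):

const dec2bcd = dec => parseInt(dec.toString(10), 16);

const bcd = dec2bcd(42); // 66
console.log(bcd.toString(2).padStart(8, '0')); // "01000010": high nibble 0100 (4), low nibble 0010 (2)
console.log(dec2bcd(1234).toString(16));       // "1234": each decimal digit occupies one nibble

Note that this string-based round trip assumes the input is a non-negative integer.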
I am manually sending numbers to an arduino board as hexadecimal like this: sendToBoard(0xE)
I am now trying to get decimal numbers converted into hexadecimal, but I can only get strings
const number = 14
number.toString(16) //e --> string
Could I possibly get this 'e' as a hexadecimal number to send it to the board, like sendToBoard(number) // number === e (in hex)?
In JavaScript, numbers are implemented using a double-precision 64-bit binary format. So even if you write them in hexadecimal notation, under the hood they will be stored using that floating-point representation.
If you have a number and you want the sendToBoard function to receive a number as its input, then just pass in the number:
function sendToBoard(number) {
  // you can later convert it to a string:
  const str = '0x' + number.toString(16).toUpperCase();
}
Alternatively, if you have a string in a hexadecimal representation, and you want sendToBoard to receive a number type, you could do the following:
const number = parseInt('0xf', 16);
sendToBoard(number);
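Since a hex literal and its decimal equivalent are the same Number value, both calls below hand sendToBoard exactly the same argument; a minimal sketch (the function body is illustrative only):

function sendToBoard(number) {
  // once you have the Number, format it however you like
  console.log(number, '0x' + number.toString(16).toUpperCase());
}

sendToBoard(0xE); // 14 "0xE"
sendToBoard(14);  // 14 "0xE" (identical to the call above)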
Hope this helps :)
Let's say I have a hexadecimal, for example "0xdc", how do I convert this hexadecimal string to a hexadecimal Number type in JS?
I literally just want to lose the quotes. The Number() constructor and parseInt() just converted it to an integer between 0 and 255; I just want 0xdc.
EDIT:
To make my point more clear:
I want to go from "0xdc" (of type String), to 0xdc (of type Number)
You can use the Number function, which parses a string into a number according to a certain format.
console.log(Number("0xdc"));
JavaScript uses prefix notation to recognize number formats:
0x = Hexadecimal
0b = Binary
0o = Octal
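For example, these literals are all just plain numbers, written with different prefixes:

console.log(0x10); // 16 (hexadecimal)
console.log(0b10); // 2  (binary)
console.log(0o10); // 8  (octal)
console.log(typeof 0x10); // "number"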
If you want to convert the string to a number, you can parse it with 16 as the radix.
parseInt("0xdc", 16) // gives you 220, which is the same number as 0xdc
TL;DR
#dhaker's answer of parseInt(Number("0xdc"), 10) is correct.
In-Memory Number Representation
Both the numbers 0xdc and 220 are represented in the same way in JavaScript, so 0xdc == 220 will return true.
The prefix 0x is used to tell JavaScript that the number is written in hex.
So wherever you are passing 220 you can safely pass 0xdc, or vice versa.
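A quick way to see this in a console (a minimal sketch):

console.log(0xdc === 220); // true (same Number value)
console.log(typeof 0xdc);  // "number"
console.log(0xdc + 1);     // 221 (arithmetic works on the value, not the notation)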
String Format
Numbers are always shown in base 10 unless you specify otherwise.
'0x' + Number(220).toString(16) gives '0xdc' if you want to print it as a string.
In a nutshell
parseInt('0x' + Number(220).toString(16),16) => 220
In our database, the IP address is stored as Binary(16), and it is an IPv6 address.
On the client side I'm getting it as a string, which is a hybrid of octal codes and printable ASCII characters. A byte in the range of printable ASCII characters (the range [0x20, 0x7e]) is represented by the corresponding ASCII character, with the exception of the backslash ('\'), which is escaped as '\\'. All other byte values are represented by their corresponding octal values. For example, the bytes {97, 92, 98, 99}, which in ASCII are {a, \, b, c}, are translated to text as 'a\\bc'.
" \001\015\270\000\000\000\000\000\010\010\000\014Az\000"
The problem is that I would like to show it as a human-readable IPv6 address. I tried some libraries, but they require a byte array as input.
I think I can solve my problem by converting the hybrid octal string to a byte array and then using https://www.npmjs.com/package/ipaddr.js to convert it to IPv6.
The string above translates to a byte array with the decimal values:
[32, 1, 13, 184, 0, 0, 0, 0, 0, 8, 8, 0, 12, 65, 122, 0]
(the blank space is ASCII 32, 'A' = 65, and 'z' = 122)
I'm working on a function to parse the hybrid octal string to a byte array. I will share it when ready.
A parser solution could be:
const input = " \\001\\015\\270\\000\\000\\000\\000\\000\\010\\010\\000\\014Az\\000";
// Tokenize the string into "\ooo" octal escapes, escaped backslashes ("\\") and single
// printable ASCII characters, then map each token to its byte value.
const output = Uint8Array.from(input.match(/\\(\d\d\d)|\\([ -~])|\\(\\)|([ -~])/g), x =>
  x.length == 1 ? x.charCodeAt(0) : x.length == 2 ? 92 : parseInt(x.slice(1), 8)
);
console.log(output); // Uint8Array(16) [32,1,13,184,0,0,0,0,0,8,8,0,12,65,122,0]
but I would really recommend using a different (more easily parseable) format, such as a hex string, a base64-encoded string, or just an array of numbers.
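From there, the byte array can be handed to the ipaddr.js package mentioned in the question to get the human-readable form; a minimal sketch, assuming ipaddr.js's fromByteArray API and a Node-style require:

const ipaddr = require('ipaddr.js');

const bytes = [32, 1, 13, 184, 0, 0, 0, 0, 0, 8, 8, 0, 12, 65, 122, 0];
const addr = ipaddr.fromByteArray(bytes); // assumption: builds an IPv6 object from 16 bytes
console.log(addr.toString()); // something like "2001:db8::8:800:c41:7a00"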
Check whether the IPv6 value was decoded from Binary(16) first; it looks like it was stored with the inet6_pton() function but returned without decoding it.
arr = new Int8Array([-1,-1],0); // gives [-1,-1]
str = new TextDecoder('utf-8').decode(arr); // gives "��"
res = new TextEncoder('utf-8').encode(str); // gives [239, 191, 189, 239, 191, 189] instead of [-1,-1]
It only fails for negative values; it works perfectly for positive ones. Are there any other options?
Part 1: Bytes aren't negative
The TextDecoder interface operates on the underlying ArrayBuffer (a byte sequence) that the Int8Array view wraps.
This:
new TextDecoder('utf-8').decode(new Int8Array([-1, -1]))
is the same as:
new TextDecoder('utf-8').decode(new Int8Array([-1, -1]).buffer)
Which is an ArrayBuffer containing the bytes 0xFF, 0xFF. So that's the same as:
new TextDecoder('utf-8').decode(new Uint8Array([255, 255]))
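One way to convince yourself of this (a small sketch) is to look at the same buffer through both views:

const i8 = new Int8Array([-1, -1]);
const u8 = new Uint8Array(i8.buffer); // same underlying bytes, different interpretation
console.log(i8); // Int8Array [-1, -1]
console.log(u8); // Uint8Array [255, 255]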
Part 2: UTF-8 Decode
The byte 0xFF is not valid anywhere in a UTF-8 sequence, so it decodes as an error. That results in the REPLACEMENT CHARACTER (U+FFFD). Since there are two 0xFF bytes you get U+FFFD U+FFFD, or:
"��"
Part 3: UTF-8 Encode
Encoding U+FFFD as UTF-8 gives you the bytes 0xEF 0xBF 0xBD. So encoding a string with U+FFFD U+FFFD gives you the bytes 0xEF 0xBF 0xBD 0xEF 0xBF 0xBD, or in decimal: 239 191 189 239 191 189
... which is exactly what you got as the result.
So this is working exactly as specified.
So... what's the problem?
My guess is that you are assuming that you can encode any byte into a string. That's not how text encodings work. Text encodings define a mapping from elements of a string to a byte sequence.
Not all encodings can represent all elements of a string, but UTF-8 (and UTF-16) can represent all code points that can occur in a JavaScript string.
But the reverse is not true. Not all byte sequences correspond to characters. When invalid byte sequences are found, an error occurs. By default, the TextDecoder API produces replacement characters (� U+FFFD), but you can use the fatal flag to make it throw an exception instead.
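For example, a small sketch of the fatal option:

const bytes = new Uint8Array([255, 255]);

console.log(new TextDecoder('utf-8').decode(bytes)); // "��" (replacement characters)

try {
  new TextDecoder('utf-8', { fatal: true }).decode(bytes);
} catch (e) {
  console.log(e.name); // "TypeError": invalid UTF-8 now throws instead of being replaced
}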