Support [ and ] characters in PDU mode - javascript

I am writing an application in Node.js for sending and receiving SMS in PDU mode. I use a Wavecom GSM modem (7-bit encoding) to send SMS; it also supports an 8-bit encoding scheme (AT+CSMP=1,167,0,8).
I can send alphanumeric characters properly, but I can't send some characters like [, ], and |.
Here is my string:
AT+CMGS=14
0001030C911989890878800004015B
Text string: [
But I receive some junk characters instead. Any idea?
Also, how do I send a multipart SMS? I have referred to this and this, but I do not get the desired output. Can anyone suggest an 8-bit (or 7-bit) text encoding scheme?
Please help me...

According to this page (see the section Sending a Unicode SMS message), the 8-bit encoding is in fact UCS-2.
I don't know enough about Node.js to give you the full implementation, but here is a .NET sample:
string EncodeSmsText(string text)
{
    // Convert input string to a sequence of bytes in BigEndian UCS-2 encoding
    // 'Hi' -> [0, 72, 0, 105]
    var bytes = Encoding.BigEndianUnicode.GetBytes(text);
    // Encode bytes to hex representation
    // [0, 72, 0, 105] -> '00480069'
    return BitConverter.ToString(bytes).Replace("-", "");
}
Please note that according to this post my code will not work for characters encoded as surrogate pairs, because Encoding.BigEndianUnicode is UTF-16 (not UCS-2).
Edit
Here is a Node.js version that uses the built-in UCS-2 converter in the Buffer class:
function swapBytes(buffer) {
    var l = buffer.length;
    if (l & 0x01) {
        throw new Error('Buffer length must be even');
    }
    for (var i = 0; i < l; i += 2) {
        var a = buffer[i];
        buffer[i] = buffer[i + 1];
        buffer[i + 1] = a;
    }
    return buffer;
}

function encodeSmsText(input) {
    var ucs2le = new Buffer(input, 'ucs2');
    var ucs2be = swapBytes(ucs2le);
    return ucs2be.toString('hex');
}

console.log(encodeSmsText('Hi'));
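On newer Node.js (6.3+), the same conversion can be written without the manual swap loop; a minimal sketch using Buffer.from and the built-in swap16() (new Buffer() is deprecated):
// Buffer.from gives the UTF-16LE bytes; swap16() swaps each byte pair in place
function encodeSmsTextModern(input) {
    return Buffer.from(input, 'ucs2').swap16().toString('hex');
}
console.log(encodeSmsTextModern('Hi')); // '00480069'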
Inspired by these SO answers:
Node.JS Big-Endian UCS-2
How to do Base64 encoding in node.js?

Thanks,
Finally I got the answer :)
These characters ([, ], |) are encoded as two characters each, like:
[ is encoded as 1B1E (a combination of the escape character and the < sign)
] is encoded as 1B20 (a combination of the escape character and the > sign)
So whenever I found such a character, I replaced it with the corresponding value and then used the 7-bit encoding. It works well.
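For reference, a minimal sketch of that substitution step, assuming the GSM 03.38 extension table (each listed character becomes ESC 0x1B plus a second septet; the hex in the PDUs below is what the septets look like after standard 7-bit packing):
// septet pairs per the GSM 03.38 extension table (an assumption here,
// not taken from the post above); everything else passes through unchanged
var GSM_EXT = {
    '[': '\x1B\x3C', ']': '\x1B\x3E', '|': '\x1B\x40',
    '{': '\x1B\x28', '}': '\x1B\x29', '~': '\x1B\x3D'
};

function escapeGsmExtended(text) {
    return text.split('').map(function (c) {
        return GSM_EXT[c] || c;
    }).join('');
}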
So my encoding string for [ is
> AT+CMGS=15
> 0001000C911989890878800000021B1E
And for "[hello]"
> AT+CMGS=21
> 0001000C911989890878800000091B1EBACC66BF373E
Thanks once again..

How to use javascript (in Angular) to get bytes encoded by java.util.Base64? [duplicate]

I need to convert a base64-encoded string into an ArrayBuffer.
The base64 strings are user input; they will be copied and pasted from an email, so they're not there when the page is loaded.
I would like to do this in JavaScript without making an Ajax call to the server, if possible.
I found these links interesting, but they didn't help me:
ArrayBuffer to base64 encoded string
this is about the opposite conversion, from ArrayBuffer to base64, not the other way round
http://jsperf.com/json-vs-base64/2
this looks good but I can't figure out how to use the code.
Is there an easy (maybe native) way to do the conversion? Thanks.
Try this:
function _base64ToArrayBuffer(base64) {
    var binary_string = window.atob(base64);
    var len = binary_string.length;
    var bytes = new Uint8Array(len);
    for (var i = 0; i < len; i++) {
        bytes[i] = binary_string.charCodeAt(i);
    }
    return bytes.buffer;
}
Using TypedArray.from:
Uint8Array.from(atob(base64_string), c => c.charCodeAt(0))
Performance is to be compared with the for-loop version of Goran.it's answer.
For Node.js users:
const myBuffer = Buffer.from(someBase64String, 'base64');
myBuffer will be of type Buffer which is a subclass of Uint8Array. Unfortunately, Uint8Array is NOT an ArrayBuffer as the OP was asking for. But when manipulating an ArrayBuffer I almost always wrap it with Uint8Array or something similar, so it should be close to what's being asked for.
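If a standalone ArrayBuffer is really needed, the Buffer's bytes can be copied out of its backing store; a minimal sketch (the slice bounds matter because Node may pool small Buffers inside one larger shared ArrayBuffer):
const myBuffer = Buffer.from(someBase64String, 'base64');
// copy only this Buffer's bytes out of the (possibly pooled) backing store
const arrayBuffer = myBuffer.buffer.slice(
    myBuffer.byteOffset,
    myBuffer.byteOffset + myBuffer.byteLength
);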
Goran.it's answer does not work because of a Unicode problem in JavaScript - https://developer.mozilla.org/en-US/docs/Web/API/WindowBase64/Base64_encoding_and_decoding.
I ended up using the function given on Daniel Guerrero's blog: http://blog.danguer.com/2011/10/24/base64-binary-decoding-in-javascript/
Function is listed on github link: https://github.com/danguer/blog-examples/blob/master/js/base64-binary.js
Use these lines
var uintArray = Base64Binary.decode(base64_string);
var byteArray = Base64Binary.decodeArrayBuffer(base64_string);
An async solution; it's better when the data is big:
// base64 to buffer
function base64ToBufferAsync(base64) {
    var dataUrl = "data:application/octet-binary;base64," + base64;
    // return the promise so callers can actually use the result
    return fetch(dataUrl)
        .then(res => res.arrayBuffer())
        .then(buffer => {
            console.log("base64 to buffer: " + new Uint8Array(buffer));
            return buffer;
        });
}

// buffer to base64
function bufferToBase64Async(buffer) {
    var blob = new Blob([buffer], { type: 'application/octet-binary' });
    console.log("buffer to blob: " + blob);
    var fileReader = new FileReader();
    fileReader.onload = function () {
        var dataUrl = fileReader.result;
        console.log("blob to dataUrl: " + dataUrl);
        var base64 = dataUrl.substr(dataUrl.indexOf(',') + 1);
        console.log("dataUrl to base64: " + base64);
    };
    fileReader.readAsDataURL(blob);
}
JavaScript is a fine development environment, so it seems odd that it doesn't provide a solution to this small problem. The solutions offered elsewhere on this page are potentially slow. Here is my solution. It employs the inbuilt functionality that decodes base64 image and sound data URLs.
var req = new XMLHttpRequest;
req.open('GET', "data:application/octet;base64," + base64Data);
req.responseType = 'arraybuffer';
req.onload = function fileLoaded(e) {
    var byteArray = new Uint8Array(e.target.response);
    // var shortArray = new Int16Array(e.target.response);
    // var unsignedShortArray = new Uint16Array(e.target.response);
    // etc.
}
req.send();
The send request fails if the base64 string is badly formed.
The mime type (application/octet) is probably unnecessary.
Tested in chrome. Should work in other browsers.
Pure JS - no string middle-step (no atob)
I wrote the following function, which converts base64 directly (without conversion to a string at the middle step). The IDEA:
get a 4-character base64 chunk
find the index of each character in the base64 alphabet
convert each index to a 6-bit number (binary string)
join the four 6-bit numbers, which gives a 24-bit number (stored as a binary string)
split the 24-bit string into three 8-bit parts, convert each to a number, and store them in the output array
corner case: if the input base64 string ends with one/two = chars, remove one/two numbers from the output array
The solution below allows processing of large input base64 strings. A similar function to convert bytes to base64 without btoa is HERE.
function base64ToBytesArr(str) {
    const abc = [..."ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"]; // base64 alphabet
    let result = [];

    for (let i = 0; i < str.length / 4; i++) {
        let chunk = [...str.slice(4 * i, 4 * i + 4)];
        let bin = chunk.map(x => abc.indexOf(x).toString(2).padStart(6, 0)).join('');
        let bytes = bin.match(/.{1,8}/g).map(x => +('0b' + x));
        result.push(...bytes.slice(0, 3 - (str[4 * i + 2] == "=") - (str[4 * i + 3] == "=")));
    }
    return result;
}
// --------
// TEST
// --------
let test = "Alice's Adventure in Wonderland.";
console.log('test string:', test.length, test);
let b64_btoa = btoa(test);
console.log('encoded string:', b64_btoa);
let decodedBytes = base64ToBytesArr(b64_btoa); // decode base64 to array of bytes
console.log('decoded bytes:', JSON.stringify(decodedBytes));
let decodedTest = decodedBytes.map(b => String.fromCharCode(b) ).join``;
console.log('Uint8Array', JSON.stringify(new Uint8Array(decodedBytes)));
console.log('decoded string:', decodedTest.length, decodedTest);
Caution!
If you want to decode base64 to a STRING (not a bytes array) and you know that the result contains UTF-8 characters, then atob will fail in general; e.g. for the character 💩, atob("8J+SqQ==") gives a wrong result. In this case you can use the above solution and convert the resulting bytes array to a string in the proper way, e.g.:
// base64ToBytesArr(str) as defined above
// --------
// TEST
// --------
let testB64 = "8J+SqQ=="; // for string: "💩";
console.log('input base64 :', testB64);
let decodedBytes = base64ToBytesArr(testB64); // decode base64 to array of bytes
console.log('decoded bytes :', JSON.stringify(decodedBytes));
let result = new TextDecoder("utf-8").decode(new Uint8Array(decodedBytes));
console.log('properly decoded string :', result);
let result_atob = atob(testB64);
console.log('decoded by atob :', result_atob);
Snippets tested 2022-08-04 on: chrome 103.0.5060.134 (arm64), safari 15.2, firefox 103.0.1 (64 bit), edge 103.0.1264.77 (arm64), and node-js v12.16.1
I would strongly suggest using an npm package that correctly implements the base64 specification.
The best one I know is rfc4648.
The problem is that btoa and atob use binary strings instead of Uint8Array, and trying to convert to and from them is cumbersome. Also, there are a lot of bad packages on npm for this. I lost a lot of time before finding that one.
The creators of that specific package did a simple thing: they took the specification of Base64 (which is here, by the way) and implemented it correctly from beginning to end. (Including other formats in the specification that are also useful, like Base64-url, Base32, etc.) That doesn't seem like a lot, but apparently it was too much to ask of the bunch of other libraries.
So yeah, I know I'm doing a bit of proselytism, but if you want to avoid losing your time too, just use rfc4648.
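As far as I recall, the package exposes codec objects with parse/stringify; a minimal sketch (treat the exact API as an assumption and check the package README):
// assumes the rfc4648 package's base64 codec
const { base64 } = require('rfc4648');

const bytes = base64.parse('SGVsbG8=');   // Uint8Array [72, 101, 108, 108, 111]
const back  = base64.stringify(bytes);    // 'SGVsbG8='
const arrayBuffer = bytes.buffer;         // underlying ArrayBuffer, if needed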
I used the accepted answer to this question to create base64Url string <-> ArrayBuffer conversions, for base64Url data transmitted via an ASCII cookie (atob and btoa convert between base64 [with +/] and JS binary strings), so I decided to post the code.
Many of us may want both conversions, and client-server communication may use the base64Url version (a cookie may contain +/ as well as -_ characters if I understand correctly; only the ",;\ characters and some wicked characters from the 128-character ASCII set are disallowed). But a URL cannot contain the / character, hence the wider use of the base64Url version, which of course is not what atob/btoa supports...
Seeing other comments, I would like to stress that my use case here is base64Url data transmission via URL/cookie, and trying to use this crypto data with the JS crypto API (2017), hence the need for the ArrayBuffer representation and b64u <-> arrBuff conversions... if the array buffer represents something other than base64 (a subset of ASCII), this conversion won't work, since atob/btoa is limited to ASCII(128). Check out an appropriate converter like the one below:
The buff -> b64u version is from a tweet by Mathias Bynens, thanks for that one (too)! He also wrote a base64 encoder/decoder:
https://github.com/mathiasbynens/base64
Coming from Java, it may help in understanding the code to know that Java's byte[] is practically JS's Int8Array (signed int), but here we use the unsigned version, Uint8Array, since JS conversions work with those. Both hold 8-bit values, so let's call it the JS byte[] for now...
The code is from a module class; that is why the methods are static.
//utility
/**
 * Array buffer to base64Url string
 * - arrBuff->byte[]->biStr->b64->b64u
 * @param arrayBuffer
 * @returns {string}
 * @private
 */
static _arrayBufferToBase64Url(arrayBuffer) {
    console.log('base64Url from array buffer:', arrayBuffer);
    let base64Url = window.btoa(String.fromCodePoint(...new Uint8Array(arrayBuffer)));
    base64Url = base64Url.replaceAll('+', '-');
    base64Url = base64Url.replaceAll('/', '_');
    console.log('base64Url:', base64Url);
    return base64Url;
}
/**
 * Base64Url string to array buffer
 * - b64u->b64->biStr->byte[]->arrBuff
 * @param base64Url
 * @returns {ArrayBufferLike}
 * @private
 */
static _base64UrlToArrayBuffer(base64Url) {
    console.log('array buffer from base64Url:', base64Url);
    let base64 = base64Url.replaceAll('-', '+');
    base64 = base64.replaceAll('_', '/');
    const binaryString = window.atob(base64);
    const length = binaryString.length;
    const bytes = new Uint8Array(length);
    for (let i = 0; i < length; i++) {
        bytes[i] = binaryString.charCodeAt(i);
    }
    console.log('array buffer:', bytes.buffer);
    return bytes.buffer;
}
Making an ArrayBuffer from a base64 string:
function base64ToArrayBuffer(base64) {
    var binary_string = window.atob(base64);
    var len = binary_string.length;
    var bytes = new Uint8Array(len);
    for (var i = 0; i < len; i++) {
        bytes[i] = binary_string.charCodeAt(i);
    }
    return bytes.buffer;
}
I was trying to use the above code and it's working fine.
Note that the result of atob is only a comma-separated list of numbers if the base64 payload itself encoded such a list; in that specific case, a shortcut is to wrap the decoded string in brackets and parse it as a JSON array.
The code below can then be used to convert that base64 into an array of numbers:
let byteArray = JSON.parse('['+atob(base64)+']');
let buffer = new Uint8Array(byteArray);
Solution without atob
I've seen many people complaining about using atob and btoa in the replies. There are some issues to take into account when using them.
There's a solution that avoids them on the MDN page about Base64. Below you can find the code to convert a base64 string into a Uint8Array, copied from the docs.
Note that the function below returns a Uint8Array. To get the ArrayBuffer version, you just need to use uintArray.buffer.
function b64ToUint6(nChr) {
    return nChr > 64 && nChr < 91
        ? nChr - 65
        : nChr > 96 && nChr < 123
        ? nChr - 71
        : nChr > 47 && nChr < 58
        ? nChr + 4
        : nChr === 43
        ? 62
        : nChr === 47
        ? 63
        : 0;
}
function base64DecToArr(sBase64, nBlocksSize) {
    const sB64Enc = sBase64.replace(/[^A-Za-z0-9+/]/g, "");
    const nInLen = sB64Enc.length;
    const nOutLen = nBlocksSize
        ? Math.ceil(((nInLen * 3 + 1) >> 2) / nBlocksSize) * nBlocksSize
        : (nInLen * 3 + 1) >> 2;
    const taBytes = new Uint8Array(nOutLen);

    let nMod3;
    let nMod4;
    let nUint24 = 0;
    let nOutIdx = 0;
    for (let nInIdx = 0; nInIdx < nInLen; nInIdx++) {
        nMod4 = nInIdx & 3;
        nUint24 |= b64ToUint6(sB64Enc.charCodeAt(nInIdx)) << (6 * (3 - nMod4));
        if (nMod4 === 3 || nInLen - nInIdx === 1) {
            nMod3 = 0;
            while (nMod3 < 3 && nOutIdx < nOutLen) {
                taBytes[nOutIdx] = (nUint24 >>> ((16 >>> nMod3) & 24)) & 255;
                nMod3++;
                nOutIdx++;
            }
            nUint24 = 0;
        }
    }
    return taBytes;
}
If you're interested in the reverse operation, ArrayBuffer to base64, you can find how to do it in the same link.

AES encryption in JS equivalent of C#

I need to encrypt a string using AES encryption. This encryption was happening in C# earlier, but it needs to be converted into JavaScript (it will run in a browser).
The current C# code for encryption is as follows -
public static string EncryptString(string plainText, string encryptionKey)
{
    byte[] clearBytes = Encoding.Unicode.GetBytes(plainText);
    using (Aes encryptor = Aes.Create())
    {
        Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(encryptionKey, new byte[] { 0x49, 0x76, 0x61, 0x6e, 0x20, 0x4d, 0x65, 0x64, 0x76, 0x65, 0x64, 0x65, 0x76 });
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(clearBytes, 0, clearBytes.Length);
                cs.Close();
            }
            plainText = Convert.ToBase64String(ms.ToArray());
        }
    }
    return plainText;
}
I have tried to use CryptoJS to replicate the same functionality, but it's not giving me the equivalent encrypted base64 string. Here's my CryptoJS code -
function encryptString(encryptString, secretKey) {
    var iv = CryptoJS.enc.Hex.parse('Ivan Medvedev');
    var key = CryptoJS.PBKDF2(secretKey, iv, { keySize: 256 / 32, iterations: 500 });
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}
The encrypted string has to be sent to a server, which will be able to decrypt it. The server is able to decrypt the encrypted string generated from the C# code, but not the one generated from the JS code. I compared the encrypted strings generated by both and found that the C# code generates longer encrypted strings. For example, keeping 'Example String' as plainText and 'Example Key' as the key, I get the following results -
C# - eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
JS - 9ex5i2g+8iUCwdwN92SF+A==
The length of the JS encrypted string is always shorter than the C# one. Is there something I am doing wrong? I just have to replicate the C# code in the JS code.
Update:
My current code after Zergatul's answer is this -
function encryptString(encryptString, secretKey) {
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    console.log(keyBytes.toString());
    // take first 32 bytes as key (like in C# code)
    var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
    // skip first 32 bytes and take next 16 bytes as IV
    var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
    console.log(key.toString());
    console.log(iv.toString());
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}
As illustrated in that answer, if the C# code converts the plainText into bytes using ASCII instead of Unicode, both the C# and JS code produce exactly the same result. But since I am not able to modify the decryption code, I have to make the code equivalent to the original C# code, which was using Unicode.
So I tried to see what the difference is between the ASCII and Unicode byte arrays in C#. Here's what I found -
ASCII byte array: [69,120,97,109,112,108,101,32,83,116,114,105,110,103]
Unicode byte array: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0]
So there is an extra byte for each character in C#'s Unicode encoding (Unicode allocates two bytes per character, twice as many as ASCII).
Here are the bytes for the ASCII and Unicode conversions respectively -
ASCII
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eQus9GLPKULh9vhRWOJjog==
Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
So since the generated key and IV have exactly the same byte arrays in both the Unicode and ASCII approaches, they should not have produced different output; but somehow they do. I think it's because of clearBytes' length, as that length is used when writing to the CryptoStream.
I also looked at the output of the bytes generated in the JS code and found that it uses words, which need to be converted into strings using the toString() method.
keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7
Since I am not able to affect the generated encrypted string's length in the JS code (no direct access to the write stream), I was still stuck here.
Here is an example of how to reproduce the same ciphertext in C# and CryptoJS:
static void Main(string[] args)
{
    byte[] plainText = Encoding.Unicode.GetBytes("Example String"); // this is UTF-16 LE
    string cipherText;
    using (Aes encryptor = Aes.Create())
    {
        var pdb = new Rfc2898DeriveBytes("Example Key", Encoding.ASCII.GetBytes("Ivan Medvedev"));
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(plainText, 0, plainText.Length);
                cs.Close();
            }
            cipherText = Convert.ToBase64String(ms.ToArray());
        }
    }
    Console.WriteLine(cipherText);
}
And JS:
var keyBytes = CryptoJS.PBKDF2('Example Key', 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
// take first 32 bytes as key (like in C# code)
var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
// skip first 32 bytes and take next 16 bytes as IV
var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
// use the same encoding as in C# code, to convert string into bytes
var data = CryptoJS.enc.Utf16LE.parse("Example String");
var encrypted = CryptoJS.AES.encrypt(data, key, { iv: iv });
console.log(encrypted.toString());
Both codes return: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
TL;DR the final code looks like this -
function encryptString(encryptString, secretKey) {
    encryptString = addExtraByteToChars(encryptString);
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    console.log(keyBytes.toString());
    var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
    var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}

function addExtraByteToChars(str) {
    let strResult = '';
    for (var i = 0; i < str.length; ++i) {
        strResult += str.charAt(i) + String.fromCharCode(0);
    }
    return strResult;
}
Explanation:
The C# code in Zergatul's answer (thanks to them) was using ASCII to convert the plainText into bytes, while my C# code was using Unicode. Unicode assigns an extra byte to each character in the resulting byte array, which does not affect the generation of the key and IV bytes, but does affect the result, since the length of the encrypted string depends on the length of the bytes generated from the plainText.
As seen in the following bytes generated for each of them using "Example String" and "Example Key" as the plainText and secretKey respectively -
ASCII
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eQus9GLPKULh9vhRWOJjog==
Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
The JS result was similar too, which confirmed that it's using ASCII byte conversion -
keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7
Thus I just needed to increase the length of the plainText to mimic the Unicode (UTF-16LE) byte generation. Since Unicode assigns two bytes to each character in the byte array, keeping the second byte as 0, I basically created a gap between the plainText's characters and filled that gap with the character whose ASCII value is 0, using the addExtraByteToChars() function. And it made all the difference.
It's a workaround for sure, but it started working for my scenario. I suppose this may or may not prove useful to others, thus sharing the findings. If anyone can suggest a better implementation of the addExtraByteToChars() function (probably a proper term for this conversion instead of "ASCII to Unicode", or a better, more efficient, and less hacky way to do it), please suggest it.
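For what it's worth, CryptoJS also ships a UTF-16LE encoder (used in Zergatul's snippet above, via the enc-utf16 module), which expresses the same intent without the manual NUL-interleaving; a minimal sketch:
// produces the same bytes as C#'s Encoding.Unicode.GetBytes,
// making addExtraByteToChars() unnecessary
var data = CryptoJS.enc.Utf16LE.parse('Example String');
var encrypted = CryptoJS.AES.encrypt(data, key, { iv: iv });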

Size of SQS messages

I'm sending messages to SQS using the AWS SDK for JavaScript. Each message needs to be 256 KB in size, tops.
Each message is a JSON object that gets decoded on another service.
Option 1: JSON object as a string: count its length and make sure it's less than 262144?
function* getStuff(rows, someConfig) {
    let totesPayload = 0
    let payload = []
    for (const row of rows) {
        const singleItemInPayload = rowToPayload(row, someConfig)
        if (singleItemInPayload.length + totesPayload < 262144 - (enclosingObjectSize())) {
            payload.push(singleItemInPayload)
            totesPayload += singleItemInPayload.length
        } else {
            yield ({ payload })
            payload = []
        }
    }
}
Option 2: Buffer.from(JSON object as a string): count the buffer's length and make sure it's less than 262144?
Most of the data is text, so I'm not sure I'm going to get much benefit from putting it in a byte array.
Is option 2 necessary?
SQS uses UTF-8 for strings, so if any part of your message can contain non-ASCII characters, then you will need to measure the size by converting to bytes, because UTF-8 is a variable-width encoding that uses 1 to 4 bytes for a single character.
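In Node.js this measurement doesn't require building the byte array by hand; a minimal sketch (the 262144 limit and payload shape are taken from the question):
// Buffer.byteLength measures the UTF-8 size without keeping a copy around
const body = JSON.stringify({ payload });
const bytes = Buffer.byteLength(body, 'utf8'); // bytes, not JS string length
if (bytes >= 262144) {
    // split the batch: 'é' is 2 bytes and '💩' is 4, so .length alone under-counts
}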

How to make a simple encoder and decoder?

I have the following code:
<!DOCTYPE html>
<html>
<body>
<script type="text/javascript">
//Chat Encoder
//Made by Hducke aka Hunter Ducker

//VARS
var userInputA = "";
var userInputB = "";
var result = userInputB.split("");

//FUNCTIONS
var encodeMessage = function() {
    var output = "";
    userInputB = prompt("Type your message here:", "PLEASE TYPE YOUR MESSAGE IN LOWER CASE!");
    for (var i = 0; i <= result.length; i++) {
        switch (result[i]) {
            case ("a"):
                result[i] = "1";
                break;
            case ("b"):
                result[i] = "2";
                break;
            case ("c"):
                result[i] = "3";
                break;
        }
        var tempStr = "";
        result[i] + tempStr;
    }
    return tempStr;
}

var decodeMessage = function() {
}

var promptUser = function() {
    var tempBool = true;
    while (tempBool) {
        userInputA = prompt("Type '1' to encode a message and '2' to decode a message!", "Type '1' or '2' here.");
        switch (userInputA) {
            case ("1"):
                encodeMessage();
                tempBool = false;
                break;
            case ("2"):
                decodeMessage();
                tempBool = false;
                break;
            default:
                alert("Try again. Please type a '1' or a '2'.");
        }
    }
}

var printMessage = function() {
    alert(encodeMessage);
}

//LOGIC
promptUser();
printMessage();
</script>
</body>
</html>
Info: The way it is at the moment, it takes in the user's input userInputB and parses it into separate characters. Then it sets each character to a different character (scrambles the character). Then it outputs the string to the user. My goal is to have it so that you can enter a message like I love this website! and turn it into 1 2324 5654 7503947. Then another user can enter the encoded message, and the decodeMessage function will decode the message and output it to the user.
First issue: it won't currently work as is.
EDIT: now when I run the code after fixing the result[i], this is what I get (screenshot: Output I Get Now).
Second issue: how can I do this? (i.e., is there a better way of doing it?)
Any tips would help. I'm kind of a noob at JavaScript. Thanks!
The first problem is with your input; result is not going to be updated when the user enters the prompt.
function encodeMessage() {
    var output = '';
    userInputB = prompt(...);
    result = userInputB.split('');
    ...
}
The second problem is the encoding itself. Instead of using a giant switch, create an algorithm to perform the encoding. In your case, you have a simple 1:1 mapping of a character to a number, conveniently in the natural order.
Did you know that your computer stores those letters as numbers? 'a' is 97, 'b' is 98, etc. so you could simply subtract 96 from the character to get a=1, b=2, etc.
However, this poses a problem for your decoder once you reach 'j'. "java" is going to be turned into "101221", and if you simply perform the reverse of your encoder on that, you'll end up with "a`abba".
One option would be to return to your encoding scheme and the ASCII table. '1' is character 49; perhaps you could subtract 48 from your characters instead? 'a' would then become '1' (not much different from 1) and so on. 'j' becomes ':', and if you encode "java" you get ":1F1".
Once you're doing that, the reverse of your encoding scheme will become your decoder. Iterate over the encoded string, and add 48 instead of subtracting it.
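A minimal sketch of that shift-by-48 scheme, assuming printable-ASCII input as described above:
// shift every code unit down by 48 to encode, back up by 48 to decode
function encode(str) {
    return str.split('').map(function (c) {
        return String.fromCharCode(c.charCodeAt(0) - 48);
    }).join('');
}

function decode(str) {
    return str.split('').map(function (c) {
        return String.fromCharCode(c.charCodeAt(0) + 48);
    }).join('');
}

console.log(encode("java")); // ":1F1"
console.log(decode(":1F1")); // "java"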
UPDATE
I've updated the fiddle at http://jsfiddle.net/ntyt4/5/ with a sample scenario of the encoding and decoding process. It should give you enough to get started.
A simpler way would be to use a dictionary with keys and their translated values. You could store this in an object literal as:
var translation = {
    "a": 1,
    "b": 2,
    "c": 3,
    "d": "A"
};
I've included a jsfiddle to show you an example. Just change the textbox value to see the translation.
One thing to keep in mind, though, is that you will have to add every single character that needs to be translated to the object literal. For example, "a" will not translate the upper-case version "A" for you, because it is a different character.
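A sketch of how the same table can drive decoding by building a reverse map (an illustrative helper, not from the answer above; it assumes every translated value is a unique single character):
var translation = { "a": "1", "b": "2", "c": "3", "d": "A" };
var reverse = {};
Object.keys(translation).forEach(function (key) {
    reverse[translation[key]] = key; // flip key/value pairs
});

function encode(s) { return s.split('').map(function (c) { return translation[c] || c; }).join(''); }
function decode(s) { return s.split('').map(function (c) { return reverse[c] || c; }).join(''); }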
I don't know JavaScript very well, but you have three options:
Mapping the chars with two vectors
If you want to include just the simple characters, you need 26 (a-z) + 26 (A-Z) + 10 (0-9) = 62 conversions.
You can do this with an algorithm that, when it finds a letter in the first vector, for example at index 6, takes the value from the corresponding position in the second vector.
Decoding works the same way, only taking from the second vector and converting to the equivalent in the first vector.
ASCII table
The character '0' cast to an integer is 48.
From 48-57 you have the numbers, in the range 65-90 you have the upper-case chars, and in the range 97-122 you have the lower-case chars. If you, for example, subtract two from each character, your text is encoded in a simple system.
MD5 algorithm
You can use/create a function that generates the MD5 hash of a text; for example, "encoding" in MD5 is "84bea1f0fd2ce16f7e562a9f06ef03d3". Note that a hash is one-way, so this suits verification rather than messages that must be decoded.
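In Node.js, the built-in crypto module can produce that hash; a minimal sketch (keep in mind it is a fingerprint, not a decodable encoding):
const crypto = require('crypto');

const hash = crypto.createHash('md5').update('encoding').digest('hex');
console.log(hash); // "84bea1f0fd2ce16f7e562a9f06ef03d3" per the example above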

Javascript ascii string to hex byte array

I am trying to convert an ASCII string into a byte array.
The problem is that my code is converting from ASCII to a string array and not to a byte array:
var tx = '[86400:?]';
for (a = 0; a < tx.length; a = a + 1) {
    hex.push('0x' + tx.charCodeAt(a).toString(16));
}
This results in:
[ '0x5b','0x38','0x36','0x30','0x30','0x30','0x3a','0x3f','0x5d' ]
But what I am looking for is:
[0x5b,0x38 ,0x30 ,0x30 ,0x30 ,0x30 ,0x3a ,0x3f,0x5d]
How can I convert to a byte rather than a byte string?
This array is being streamed to a USB device:
device.write([0x5b,0x38 ,0x30 ,0x30 ,0x30 ,0x30 ,0x3a ,0x3f,0x5d])
And it has to be sent as one array, not by looping and calling device.write() for each value in the array.
A one-liner:
'[86400:?]'.split ('').map (function (c) { return c.charCodeAt (0); })
returns
[91, 56, 54, 52, 48, 48, 58, 63, 93]
This is, of course, an array of numbers, not strictly a "byte array". Did you really mean a "byte array"?
Split the string into individual characters then map each character to its numeric code.
Per your added information about device.write, I found this:
Writing to a device
Writing to a device is performed using the write call in a device
handle. All writing is synchronous.
device.write([0x00, 0x01, 0x01, 0x05, 0xff, 0xff]);
on https://npmjs.org/package/node-hid
Assuming this is what you are using, then my array above would work perfectly well:
device.write('[86400:?]'.split ('').map (function (c) { return c.charCodeAt (0); }));
As has been noted, the 0x notation is just that, a notation. Whether you specify 0x0a, 10, or 012 (in octal), the value is the same.
function getBytes(str) {
    let intArray = str.split('').map(function (c) { return c.charCodeAt(0); });
    let byteArray = new Uint8Array(intArray.length);
    for (let i = 0; i < intArray.length; i++)
        byteArray[i] = intArray[i];
    return byteArray;
}

device.write(getBytes('[86400:?]'));
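The copy loop can also be avoided with Uint8Array.from, which accepts a mapping function; a minimal sketch (assuming, as above, that device.write accepts an array-like of byte values):
// strings are iterable, so each character is mapped straight to its code
const bytes = Uint8Array.from('[86400:?]', c => c.charCodeAt(0));
device.write(bytes);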
