Javascript ascii string to hex byte array - javascript

I am trying to convert an ASCII string into a byte array.
Problem is my code is converting from ASCII to a string array and not a Byte array:
var tx = '[86400:?]';
var hex = [];
for (var a = 0; a < tx.length; a++) {
    hex.push('0x' + tx.charCodeAt(a).toString(16));
}
This results in:
[ '0x5b', '0x38', '0x36', '0x34', '0x30', '0x30', '0x3a', '0x3f', '0x5d' ]
But what I am looking for is:
[0x5b, 0x38, 0x36, 0x34, 0x30, 0x30, 0x3a, 0x3f, 0x5d]
How can I convert to a byte rather than a byte string?
This array is being streamed to a USB device:
device.write([0x5b, 0x38, 0x36, 0x34, 0x30, 0x30, 0x3a, 0x3f, 0x5d])
And it has to be sent as a single array, not by looping and calling device.write() for each value.

A one-liner:
'[86400:?]'.split('').map(function (c) { return c.charCodeAt(0); })
returns
[91, 56, 54, 52, 48, 48, 58, 63, 93]
This, of course, is an array of numbers, not strictly a "byte array". Did you really mean a "byte array"?
Split the string into individual characters then map each character to its numeric code.
Per your added information about device.write, I found this:
Writing to a device
Writing to a device is performed using the write call in a device
handle. All writing is synchronous.
device.write([0x00, 0x01, 0x01, 0x05, 0xff, 0xff]);
on https://npmjs.org/package/node-hid
Assuming this is what you are using, then the array above would work perfectly well:
device.write('[86400:?]'.split('').map(function (c) { return c.charCodeAt(0); }));
As has been noted, the 0x notation is just that: a notation. Whether you specify 0x0a, 10, or 012 (in octal), the value is the same.
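A quick check of that point (using the modern 0o octal prefix, since the legacy 012 form is rejected in strict mode):

```javascript
// Different literal notations, same number.
var a = 0x0a;  // hexadecimal
var b = 10;    // decimal
var c = 0o12;  // octal (modern 0o prefix)
console.log(a === b && b === c); // prints: true
```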

function getBytes(str) {
    let intArray = str.split('').map(function (c) { return c.charCodeAt(0); });
    let byteArray = new Uint8Array(intArray.length);
    for (let i = 0; i < intArray.length; i++) {
        byteArray[i] = intArray[i];
    }
    return byteArray;
}
device.write(getBytes('[86400:?]'));
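As an aside, on modern runtimes (any current browser or Node.js) the whole getBytes helper can be collapsed into a single Uint8Array.from call; a sketch:

```javascript
// Map each character of the string to its character code, directly into a Uint8Array.
var byteArray = Uint8Array.from('[86400:?]', function (c) { return c.charCodeAt(0); });
console.log(byteArray); // 91, 56, 54, 52, 48, 48, 58, 63, 93
```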


AES encryption in JS equivalent of C#

I need to encrypt a string using AES encryption. This encryption was happening in C# earlier, but it needs to be converted into JavaScript (to be run in a browser).
The current C# code for encryption is as follows -
public static string EncryptString(string plainText, string encryptionKey)
{
    byte[] clearBytes = Encoding.Unicode.GetBytes(plainText);
    using (Aes encryptor = Aes.Create())
    {
        Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(encryptionKey, new byte[] { 0x49, 0x76, 0x61, 0x6e, 0x20, 0x4d, 0x65, 0x64, 0x76, 0x65, 0x64, 0x65, 0x76 });
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(clearBytes, 0, clearBytes.Length);
                cs.Close();
            }
            plainText = Convert.ToBase64String(ms.ToArray());
        }
    }
    return plainText;
}
I have tried to use CryptoJS to replicate the same functionality, but it's not giving me an equivalent encrypted base64 string. Here's my CryptoJS code -
function encryptString(encryptString, secretKey) {
    var iv = CryptoJS.enc.Hex.parse('Ivan Medvedev');
    var key = CryptoJS.PBKDF2(secretKey, iv, { keySize: 256 / 32, iterations: 500 });
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}
The encrypted string has to be sent to a server which will decrypt it. The server is able to decrypt the encrypted string generated from the C# code, but not the one generated from the JS code. I compared the encrypted strings generated by both and found that the C# code generates longer encrypted strings. For example, keeping 'Example String' as plainText and 'Example Key' as the key, I get the following results -
C# - eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
JS - 9ex5i2g+8iUCwdwN92SF+A==
The JS encrypted string is always shorter than the C# one. Is there something I am doing wrong? I just have to replicate the C# code in JS.
Update:
My current code after Zergatul's answer is this -
function encryptString(encryptString, secretKey) {
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    console.log(keyBytes.toString());
    // take first 32 bytes as key (like in C# code)
    var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
    // skip first 32 bytes and take next 16 bytes as IV
    var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
    console.log(key.toString());
    console.log(iv.toString());
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}
As illustrated in his/her answer, if the C# code converts the plainText into bytes using ASCII instead of Unicode, both the C# and JS code produce exactly the same results. But since I am not able to modify the decryption code, I have to make my code equivalent to the original C# code, which uses Unicode.
So I tried to see what the difference is between the ASCII and Unicode byte arrays in C#. Here's what I found -
ASCII Byte Array: [69,120,97,109,112,108,101,32,83,116, 114, 105, 110, 103]
Unicode Byte Array: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0, 114,0, 105,0, 110,0, 103,0]
So an extra byte appears for each character in C# (Unicode allocates twice as many bytes per character as ASCII).
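The same doubling can be reproduced in Node.js (an assumption; the original code targets a browser), since C#'s Encoding.Unicode is UTF-16LE and Buffer exposes the same encodings:

```javascript
// Encoding.Unicode in C# is UTF-16LE.
var ascii = Buffer.from('Example String', 'ascii');
var utf16 = Buffer.from('Example String', 'utf16le');
console.log(Array.from(ascii)); // [69, 120, 97, ...]
console.log(Array.from(utf16)); // [69, 0, 120, 0, 97, 0, ...]
console.log(utf16.length === 2 * ascii.length); // true
```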
Here's the difference between both Unicode and ASCII conversion respectively -
ASCII
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eQus9GLPKULh9vhRWOJjog==
Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
Since both the key and IV have exactly the same byte arrays in the Unicode and ASCII approaches, they should not have generated different output, but somehow they do. I think it's because of the length of clearBytes, as that length is what gets written to the CryptoStream.
I also inspected the bytes generated in the JS code and found that it uses words, which need to be converted into strings using the toString() method.
keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7
Since I am not able to affect the length of the encrypted string generated by the JS code (no direct access to the write stream), I am still stuck here.
Here is an example of how to reproduce the same ciphertext between C# and CryptoJS:
static void Main(string[] args)
{
    byte[] plainText = Encoding.Unicode.GetBytes("Example String"); // this is UTF-16 LE
    string cipherText;
    using (Aes encryptor = Aes.Create())
    {
        var pdb = new Rfc2898DeriveBytes("Example Key", Encoding.ASCII.GetBytes("Ivan Medvedev"));
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(plainText, 0, plainText.Length);
                cs.Close();
            }
            cipherText = Convert.ToBase64String(ms.ToArray());
        }
    }
    Console.WriteLine(cipherText);
}
And JS:
var keyBytes = CryptoJS.PBKDF2('Example Key', 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
// take first 32 bytes as key (like in C# code)
var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
// skip first 32 bytes and take next 16 bytes as IV
var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
// use the same encoding as in C# code, to convert string into bytes
var data = CryptoJS.enc.Utf16LE.parse("Example String");
var encrypted = CryptoJS.AES.encrypt(data, key, { iv: iv });
console.log(encrypted.toString());
Both codes return: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
TL;DR the final code looks like this -
function encryptString(encryptString, secretKey) {
    encryptString = addExtraByteToChars(encryptString);
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    console.log(keyBytes.toString());
    var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
    var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}

function addExtraByteToChars(str) {
    let strResult = '';
    for (var i = 0; i < str.length; ++i) {
        strResult += str.charAt(i) + String.fromCharCode(0);
    }
    return strResult;
}
Explanation:
The C# code in Zergatul's answer (thanks to him/her) was using ASCII to convert the plainText into bytes, while my C# code was using Unicode. Unicode assigns an extra byte to each character in the resulting byte array, which does not affect the generation of the key and IV bytes, but does affect the result, since the length of the encrypted string depends on the length of the bytes generated from the plainText.
As seen in the following bytes generated for each of them using "Example String" and "Example Key" as the plainText and secretKey respectively -
ASCII
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eQus9GLPKULh9vhRWOJjog==
Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
The JS result was similar too, which confirmed that it's using ASCII byte conversion -
keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7
Thus I just needed to pad the plainText so that its byte generation matches the Unicode one. Since Unicode assigns two bytes to each character in the byte array, with the second byte being 0 for ASCII characters, I basically created a gap between the plainText's characters and filled that gap with the character whose code is 0, using the addExtraByteToChars() function. And it made all the difference.
It's a workaround for sure, but it works for my scenario. This may or may not prove useful to others, so I am sharing the findings. If anyone can suggest a better implementation of the addExtraByteToChars() function (perhaps a proper term for this conversion instead of "ASCII to Unicode", or a better, more efficient, less hacky way to do it), please do.
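For what it's worth, a self-contained check of why the workaround works: for ASCII input, addExtraByteToChars produces exactly the UTF-16LE byte layout that C#'s Encoding.Unicode emits (a sketch with a hypothetical two-character input):

```javascript
function addExtraByteToChars(str) {
    let strResult = '';
    for (var i = 0; i < str.length; ++i) {
        strResult += str.charAt(i) + String.fromCharCode(0);
    }
    return strResult;
}

// 'E' is 69 and 'x' is 120; the padded string interleaves zeros, like UTF-16LE.
var padded = addExtraByteToChars('Ex');
var codes = padded.split('').map(function (c) { return c.charCodeAt(0); });
console.log(codes); // [69, 0, 120, 0]
```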

Javascript Convert int value to octet stream Array

I want to convert a signed integer to a 32-bit big-endian octet stream and pass the octet stream as an array value to the constructor of a Buffer object.
I can create it in the console, for example for the value -2000:
buf = Buffer(4)
buf.writeInt32BE(-2000)
buf // is <Buffer ff ff f8 30>
buf1 = new Buffer([0xff, 0xff, 0xf8, 0x30])
For example, the value -3000 is 0xff, 0xff, 0xf4, 0x48.
But the framework I use does not accept the writeInt32BE function and throws an exception.
How can I convert a signed 32-bit integer value to an octet array without writeInt32BE?
I need a function that takes a value and returns an array representing the octet stream.
Using a 4 byte array buffer, converted to a data view and calling setInt32 on the view seems to work. This approach supports specification of both little endian and big endian (the default) formats independent of machine architecture.
function bigEnd32(value) {
    var buf = new ArrayBuffer(4);
    var view = new DataView(buf);
    view.setInt32(0, value); // big endian is the DataView default
    return view;
}

// quick test (in a browser)
var n = prompt("Signed 32: ");
var view = bigEnd32(+n);
for (var i = 0; i < 4; ++i)
    console.log(view.getUint8(i));
Documentation can be found by searching for "MDN ArrayBuffer", "MDN DataView", etc. Check out DataView in detail for properties that access the underlying array buffer - you may be able to tweak the code to suit your application.
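If the framework insists on a plain array of octets rather than a DataView, the bytes can also be computed with shift operations alone (a sketch, no typed arrays required):

```javascript
function int32ToOctetsBE(value) {
    // >>> treats the value as unsigned 32-bit, so negatives wrap as expected.
    return [
        (value >>> 24) & 0xff,
        (value >>> 16) & 0xff,
        (value >>> 8) & 0xff,
        value & 0xff
    ];
}
console.log(int32ToOctetsBE(-2000)); // [255, 255, 248, 48], i.e. ff ff f8 30
```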

Mimic this Erlang code behaviour in Javascript

I'm trying to obtain in JavaScript the same value returned by the following generate_hash Erlang function:
-define(b2l(V), binary_to_list(V)).
-define(l2b(V), list_to_binary(V)).

generate_hash(User, Secret, TimeStamp) ->
    SessionData = User ++ ":" ++ erlang:integer_to_list(TimeStamp, 16),
    Hash = crypto:sha_mac(Secret, SessionData),
    base64:encode(SessionData ++ ":" ++ ?b2l(Hash)).

make_time() ->
    {NowMS, NowS, _} = erlang:now(),
    NowMS * 1000000 + NowS.
This function is called in Erlang this way:
Username = "username",
Secret = ?l2b("secret"),
UserSalt = "usersalt",
CurrentTime = make_time(),
Hash = generate_hash(?b2l(Username), <<Secret/binary, UserSalt/binary>>, CurrentTime).
I managed to use the google CryptoJS library to calculate the hash, but the base64 returned value does not match the one returned in erlang.
<script src="http://crypto-js.googlecode.com/svn/tags/3.1.2/build/rollups/hmac-sha1.js"></script>
function generate_hash(User, Secret, TimeStamp) {
    var SessionData = User + ":" + parseInt(TimeStamp, 16);
    var hash = CryptoJS.HmacSHA1(Secret, SessionData);
    return atob(SessionData + ":" + hash.toString());
}

var Hash = generate_hash("username", "secret" + "usersalt", new Date().getTime());
alert(Hash);
There are three problems in your code.
Firstly: CryptoJS.HmacSHA1(Secret,SessionData); has its arguments reversed. It should be CryptoJS.HmacSHA1(SessionData, Secret);.
You can check it out in JS console:
var hash = CryptoJS.HmacSHA1("b", "a");
0: 1717011798
1: -2038285946
2: -931908057
3: 823367506
4: 21804555
Now, go to Erlang console and type this:
crypto:sha_mac(<<"a">>, <<"b">>).
<<102,87,133,86,134,130,57,134,200,116,54,39,49,19,151,82,1,76,182,11>>
binary:encode_unsigned(1717011798).
<<102,87,133,86>>
binary:encode_unsigned(21804555).
<<1,76,182,11>>
I don't know the equivalent method for signed integers, but this proves that swapping the arguments gives the same binary value.
The second problem is with hash.toString(), which following my example gives something like:
hash = CryptoJS.HmacSHA1("b", "a");
hash.toString();
"6657855686823986c874362731139752014cb60b"
while Erlang binary to list will result in:
Str = binary_to_list(Hash).
[102,87,133,86,134,130,57,134,200,116,54,39,49,19,151,82,1,76,182,11]
io:format("~s", [Str]).
fWV9Èt6'1^SR^AL¶^K
I am not sure what toString does with a word array, but it messes up the final result.
The third problem is that new Date().getTime() returns time in milliseconds, while in Erlang you have seconds. This shouldn't matter when you test with a static integer, though.
Two things:
The make_time function in the Erlang code above returns the number of seconds since the Unix epoch, while the Javascript method getTime returns the number of milliseconds.
Besides that, since you're probably not running the two functions in the same second, you'll get different timestamps and therefore different hashes anyway.
The Javascript function parseInt parses a string and returns an integer, while the Erlang function integer_to_list takes an integer and converts it to a string (in Erlang, strings are represented as lists, thus the name). You probably want to use the toString method instead.
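Both points can be sketched in a few lines (a hypothetical snippet, not the full fix):

```javascript
// Seconds since the epoch, matching Erlang's make_time/0 (Date.now() is milliseconds).
var timeStampSeconds = Math.floor(Date.now() / 1000);

// integer -> hex string, the equivalent of erlang:integer_to_list(T, 16).
console.log((255).toString(16)); // prints: ff

// parseInt goes the other way: string -> integer.
console.log(parseInt('ff', 16)); // prints: 255
```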
This algorithm generates the same sequence of bytes as its Erlang counterpart:
var ret = [];
var hash = CryptoJS.HmacSHA1("b", "a").words;
angular.forEach(hash, function (v) {
    var pos = v >= 0, last = ret.length;
    for (v = pos ? v : v >>> 0; v > 0; v = Math.floor(v / 256)) {
        ret.splice(last, 0, v % 256);
    }
});
console.info(ret);
console.info(String.fromCharCode.apply(String, ret));
The above outputs:
[102, 87, 133, 86, 134, 130, 57, 134, 200, 116, 54, 39, 49, 19, 151, 82, 1, 76, 182, 11]
fWV9Èt6'1RL¶

Why do I get a QUOTA_BYTES_PER_ITEM error for strings that are not too long?

I've got an object that represents a playlist in my extension, and I need to save it to chrome.storage.sync.
I know about the 4096-byte QUOTA_BYTES_PER_ITEM limit, which means key.length + JSON.stringify(val).length must be less than 4096. But my object is 3956 (stringified value length + key length), and I still can't write it to the storage. What am I doing wrong?
My object JSON stringification result:
{"playlist":{"state":{"tracks":[{"artist":"In Flames","title":"Delight And Angers (Instrumental)"},{"artist":"Marilyn Manson","title":"Coma Black"},{"artist":"Red Hot Chili Peppers","title":"Can't Stop"},{"artist":"Jack Johnson","title":"Better Together (Hawaiian Version)"},{"artist":"Joel Nielsen","title":"Surface Tension 2"},{"artist":"Katatonia","title":"Deliberation"},{"artist":"Rev Theory","title":"Hell Yeah"},{"artist":"Die drei Friseure ","title":"Parikmaher"},{"artist":"In Flames","title":"Drenched in Fear"},{"artist":"In Flames","title":"A New Dawn"},{"artist":"Before The Dawn","title":"The First Snow/Winter Within"},{"artist":"Corey Taylor & James Root","title":"Zzyzx Road"},{"artist":"In Flames","title":"Ropes"},{"artist":"In Flames","title":"Come Clarity"},{"artist":"Jack Johnson","title":"Better Together"},{"artist":"In Flames","title":"Crawl Through Knives"},{"artist":"Ленинград","title":"День Рождения.а я вот день рожденье не буду справлять!"},{"artist":"Ellen McLain","title":"Still Alive"},{"artist":"Richard Cheese","title":"People Equals Shit "},{"artist":"Papa Roach","title":"Last Resort"},{"artist":"Killswitch Engage","title":"The End of Heartache"},{"artist":"Sonic Syndicate","title":"Denied"},{"artist":"Trivium","title":"Pull Harder On The Strings Of Your Martyr"},{"artist":"Bon Jovi","title":"Last Man Standing"},{"artist":"Jelonek","title":"Beast"},{"artist":"Gorillaz","title":"Feel Good Inc"},{"artist":"Five Finger Death Punch","title":"Falling In Hate"},{"artist":"Metallica","title":"The Memory Remains (Live)"},{"artist":"Richard Z. 
Kruspe","title":"Wake up"},{"artist":"Nylithia","title":"Infector (Intro)"},{"artist":"Nylithia","title":"Super Mario B Castle Theme"},{"artist":"Scorpions/Скорпионс","title":"Wind Of Change/Ветер Перемен (Версия на русском языке)"},{"artist":"Michael Andrews","title":"Mad World"},{"artist":"John 5","title":"2 Die 4"},{"artist":"Escape the Fate","title":"This War Is Ours (The Guillotine Part II)"},{"artist":"John 5","title":"Damaged"},{"artist":"Marty Friedman","title":"Dragon Mistress"},{"artist":"Pelican","title":"The Creeper"},{"artist":"JELONEK","title":"BaRock"},{"artist":"Blotted Science","title":"Laser Lobotomy"},{"artist":"The String Quartet Tribute to NIRVANA","title":"Come As You Are "},{"artist":"String Tribute","title":"Tears Don't Fall (BFMV)"},{"artist":"Papa Roach","title":"Change or Die"},{"artist":"Trivium","title":"Dying in your arms"},{"artist":"Disturbed","title":"Decadance"},{"artist":"Bullet For My Valentine","title":"Turn To Despair"},{"artist":"Metallica","title":"Orion [Instrumental]"},{"artist":"Divination","title":"The Heretic Anthem"},{"artist":"Bullet for my Valentine","title":"Say Goodnight (Acoustic Version)"},{"artist":"Кувалда","title":"Бетономешалка"},{"artist":"Slipknot","title":"Confessions"},{"artist":"Bullet For My Valentine","title":"7 Days (Bonus Track)"},{"artist":"Bullet for My Valentine","title":"Forewer and Always (Acoustic Version)"},{"artist":"Bullet For My Valentine","title":"Hearts Burst Into Fire (Acoustic Version)"},{"artist":"In Flames","title":"Everlost (Part II)"},{"artist":"In Flames","title":"Acoustic Medley"},{"artist":"In Flames","title":"Cloud Connected"},{"artist":"In Flames","title":"Crawl Through Knives"},{"artist":"In Flames","title":"Free Fall"},{"artist":"Metallica","title":"Die, Die My Darling"},{"artist":"Slipknot","title":"Psychosocial (Album Version)"},{"artist":"Korn","title":"Jingle Bells"},{"artist":"Stone Sour","title":"Through Glass"},{"artist":"Slipknot","title":"Snuff"},{"artist":"Звонок в 
компанию Microsoft","title":"Как крякнуть Висту?"},{"artist":"Furious Ball","title":"Fog"},{"artist":"The Beatles","title":"Yellow Submarine"},{"artist":"Lumen","title":"Космонавт"},{"artist":"Lumen","title":"Государство"},{"artist":"Bullet For My Valentine","title":"Tears Don't Fall (Acoustic) (Bonus Track)"},{"artist":"Karunesh","title":"The Wanderer "}],"currentTrack":0}}}
The following code calculates my object's size as stored:
for (var i in obj) {
    console.log(i, JSON.stringify(obj[i]).length + i.length);
}
and it returns
> playlist 3956
I can't understand that; maybe it's some kind of magic caused by the non-Latin UTF-8 symbols in my object? Maybe Chrome, on the native side, escapes these characters (\uXXXX) and ends up with more than 4096 length? If so, how can I get JSON.stringify() to do that escaping as well?
chrome.storage.sync.QUOTA_BYTES_PER_ITEM specifies the maximum size in bytes, and Chrome measures it against the UTF-8 encoding of the stored string. A UTF-8 character has a variable length of one to four bytes.
Your string contains "Бетономешалка". Its string length is 13, but its UTF-8 byte size is 26.
To estimate the size of a character, you can use string.charCodeAt(index) to get its character code. Codes up to 127 (<= 0x7F) take one byte; higher codes take two or more bytes.
Some other ways to count the number of bytes in a string are listed at How many bytes in a JavaScript string?.
/**
 * Count the bytes in a string's UTF-8 representation.
 *
 * @param {string} str
 * @return {number}
 */
function byteLength(str) {
    str = String(str); // force string type
    return new TextEncoder('utf-8').encode(str).length;
}
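A quick usage check of the TextEncoder approach (assuming an environment where TextEncoder is global, as in modern browsers and Node.js):

```javascript
function byteLength(str) {
    // Encode to UTF-8 and count the resulting bytes.
    return new TextEncoder().encode(String(str)).length;
}
console.log(byteLength('Бетономешалка')); // 26 (13 Cyrillic characters, 2 bytes each)
console.log(byteLength('abc')); // 3
```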

Support [ and ] characters in PDU mode

I am writing an application in Node.js for sending and receiving SMS in PDU mode. I use a Wavecom GSM modem (7-bit encoding) to send SMS. It also supports an 8-bit encoding scheme (AT+CSMP=1,167,0,8).
I can send alphanumeric characters properly, but I can't send some characters like [, ], and |.
Here is the string:
AT+CMGS=14
0001030C911989890878800004015B
Text string: [
But I receive some junk characters. Any idea?
Also, how do I send a multipart SMS? I have referred to this and this, but I do not get the desired output. Can anyone suggest an 8-bit (7-bit encoding scheme) text encoding scheme?
Please help me...
According to this page (see the section "Sending a Unicode SMS message"), the 8-bit encoding is in fact UCS-2.
I don't know enough about nodejs to give you the full implementation, but here is a .NET sample:
string EncodeSmsText(string text)
{
    // Convert input string to a sequence of bytes in big-endian UCS-2 encoding
    // 'Hi' -> [0, 72, 0, 105]
    var bytes = Encoding.BigEndianUnicode.GetBytes(text);
    // Encode bytes to hex representation
    // [0, 72, 0, 105] -> '00480069'
    return BitConverter.ToString(bytes).Replace("-", "");
}
Please note that according to this post my code will not work for characters encoded as surrogate pairs, because Encoding.BigEndianUnicode is UTF-16 (not UCS-2).
Edit
Here is a NodeJS version that uses the built-in UCS-2 converter in the Buffer class:
function swapBytes(buffer) {
    var l = buffer.length;
    if (l & 0x01) {
        throw new Error('Buffer length must be even');
    }
    for (var i = 0; i < l; i += 2) {
        var a = buffer[i];
        buffer[i] = buffer[i + 1];
        buffer[i + 1] = a;
    }
    return buffer;
}

function encodeSmsText(input) {
    var ucs2le = new Buffer(input, 'ucs2');
    var ucs2be = swapBytes(ucs2le);
    return ucs2be.toString('hex');
}
console.log(encodeSmsText('Hi'));
Inspired by these SO answers:
Node.JS Big-Endian UCS-2
How to do Base64 encoding in node.js?
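On current Node.js versions, Buffer also has a built-in swap16() that replaces the manual loop (a sketch; note that swap16 mutates the buffer in place):

```javascript
function encodeSmsText(input) {
    // 'utf16le' yields UCS-2 little endian for BMP characters; swap16 flips to big endian.
    return Buffer.from(input, 'utf16le').swap16().toString('hex');
}
console.log(encodeSmsText('Hi')); // prints: 00480069
```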
Thanks!
Finally I got the answer :)
These characters ([, ], |) are each encoded as two characters:
[ is encoded as 1B1E (a combination of the escape character and the < sign)
] is encoded as 1B20 (a combination of the escape character and the > sign)
So whenever I find such a character, I replace it with the corresponding value and then apply the 7-bit encoding. It works well.
So my encoded string for [ is
> AT+CMGS=15
> 0001000C911989890878800000021B1E
And for "[hello]"
> AT+CMGS=21
> 0001000C911989890878800000091B1EBACC66BF373E
Thanks once again..
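The replacement step described above can be sketched as follows. One assumption to flag: the table below uses the GSM 03.38 extension-table septet values (ESC 0x1B followed by 0x3C, 0x3E, 0x40 for [, ], |), which are the values before 7-bit packing; verify them against your modem's documentation:

```javascript
// GSM 03.38 extension table (septet values, before 7-bit packing).
// Assumption: confirm these against the modem documentation.
var GSM_EXT = { '[': 0x3c, ']': 0x3e, '|': 0x40, '{': 0x28, '}': 0x29 };

function escapeGsmExtended(str) {
    var septets = [];
    for (var i = 0; i < str.length; i++) {
        var ch = str.charAt(i);
        if (ch in GSM_EXT) {
            septets.push(0x1b, GSM_EXT[ch]); // escape septet + mapped septet
        } else {
            septets.push(str.charCodeAt(i)); // plain septet (ASCII subset only)
        }
    }
    return septets;
}
console.log(escapeGsmExtended('[hi]')); // [27, 60, 104, 105, 27, 62]
```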
