I need to encrypt a string using AES encryption. This encryption was happening in C# earlier, but it needs to be converted into JavaScript (will be run on a browser).
The current code in C# for encryption is as following -
public static string EncryptString(string plainText, string encryptionKey)
{
    byte[] clearBytes = Encoding.Unicode.GetBytes(plainText);
    using (Aes encryptor = Aes.Create())
    {
        Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(encryptionKey, new byte[] { 0x49, 0x76, 0x61, 0x6e, 0x20, 0x4d, 0x65, 0x64, 0x76, 0x65, 0x64, 0x65, 0x76 });
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(clearBytes, 0, clearBytes.Length);
                cs.Close();
            }
            plainText = Convert.ToBase64String(ms.ToArray());
        }
    }
    return plainText;
}
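For reference, the two-argument Rfc2898DeriveBytes constructor used here defaults to PBKDF2 with 1000 iterations of HMAC-SHA1, and the two consecutive GetBytes calls read one continuous 48-byte output (32 bytes of key, then 16 bytes of IV). A minimal sketch of the matching derivation in CryptoJS (encryptionKey mirrors the C# parameter; note that keySize is counted in 32-bit words, and that salt strings are parsed as UTF-8, which equals the ASCII bytes here):
var derived = CryptoJS.PBKDF2(encryptionKey, 'Ivan Medvedev', {
    keySize: 48 / 4,            // 12 words = 48 bytes (32 for the key + 16 for the IV)
    iterations: 1000,           // Rfc2898DeriveBytes' default
    hasher: CryptoJS.algo.SHA1  // also the C# default; newer CryptoJS releases may default to SHA-256
});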
I have tried to use CryptoJS to replicate the same functionality, but it's not giving me the equivalent encrypted base64 string. Here's my CryptoJS code -
function encryptString(encryptString, secretKey) {
    var iv = CryptoJS.enc.Hex.parse('Ivan Medvedev');
    var key = CryptoJS.PBKDF2(secretKey, iv, { keySize: 256 / 32, iterations: 500 });
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}
The encrypted string has to be sent to a server which will be able to decrypt it. The server can decrypt the encrypted string generated by the C# code, but not the one generated by the JS code. Comparing the encrypted strings produced by both pieces of code, I found that the C# code always generates a longer encrypted string. For example, with 'Example String' as plainText and 'Example Key' as the key, I get the following result -
C# - eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
JS - 9ex5i2g+8iUCwdwN92SF+A==
The JS encrypted string is always shorter than the C# one. Is there something I am doing wrong? I just have to replicate the C# code in JS.
Update:
My current code after Zergatul's answer is this -
function encryptString(encryptString, secretKey) {
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    console.log(keyBytes.toString());
    // take first 32 bytes as key (like in C# code)
    var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
    // skip first 32 bytes and take next 16 bytes as IV
    var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
    console.log(key.toString());
    console.log(iv.toString());
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}
As illustrated in their answer, if the C# code converts the plainText into bytes using ASCII instead of Unicode, both the C# and JS code produce identical results. But since I am not able to modify the decryption code, I have to make the JS code equivalent to the original C# code, which uses Unicode.
So I compared the byte arrays produced by the ASCII and Unicode conversions in C#. Here's what I found -
ASCII byte array: [69,120,97,109,112,108,101,32,83,116,114,105,110,103]
Unicode byte array: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0]
So Unicode (UTF-16 LE, which is what Encoding.Unicode means in C#) allocates two bytes per character, twice as many as ASCII, with the second byte 0 for these characters.
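A quick way to see that layout from JS (purely an illustration of UTF-16 LE's low-byte-first layout, not part of the fix):
// Each UTF-16 LE code unit is two bytes, low byte first
Array.from('Ex').flatMap(c => [c.charCodeAt(0) & 0xff, c.charCodeAt(0) >> 8]);
// -> [69, 0, 120, 0]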
Here are the full values for the ASCII and Unicode conversions respectively -
ASCII:
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167]
Result: eQus9GLPKULh9vhRWOJjog==
Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
Since the generated key and IV byte arrays are exactly the same in both the Unicode and ASCII approaches, they cannot be the cause of the different outputs. I think it's because of clearBytes' length, since that length is what gets written to the CryptoStream.
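That hunch checks out with a bit of padding arithmetic: AES-CBC with PKCS#7 padding always rounds the input up to the next full 16-byte block, so 14 ASCII bytes become one 16-byte block (24 base64 characters, the JS result above), while 28 UTF-16 LE bytes become two blocks, 32 bytes (44 base64 characters, the C# result). A tiny sketch of that calculation:
function cipherTextLength(plainBytes) {
    // PKCS#7 always adds 1..16 padding bytes, so round up to the next full block
    var padded = (Math.floor(plainBytes / 16) + 1) * 16;
    // base64 encodes every 3 bytes as 4 characters
    return { cipherBytes: padded, base64Chars: Math.ceil(padded / 3) * 4 };
}
console.log(cipherTextLength(14)); // { cipherBytes: 16, base64Chars: 24 } -> the JS output length
console.log(cipherTextLength(28)); // { cipherBytes: 32, base64Chars: 44 } -> the C# output length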
I also inspected the bytes generated by the JS code and found that it uses word arrays, which need to be converted to strings using the toString() method.
keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7
Since I am not able to affect the generated encrypted string's length in the JS code (there is no direct access to a write stream), I'm still stuck here.
Here is an example of how to reproduce the same ciphertext in C# and CryptoJS:
static void Main(string[] args)
{
    byte[] plainText = Encoding.Unicode.GetBytes("Example String"); // this is UTF-16 LE
    string cipherText;
    using (Aes encryptor = Aes.Create())
    {
        var pdb = new Rfc2898DeriveBytes("Example Key", Encoding.ASCII.GetBytes("Ivan Medvedev"));
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(plainText, 0, plainText.Length);
                cs.Close();
            }
            cipherText = Convert.ToBase64String(ms.ToArray());
        }
    }
    Console.WriteLine(cipherText);
}
And JS:
var keyBytes = CryptoJS.PBKDF2('Example Key', 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
// take first 32 bytes as key (like in C# code)
var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
// skip first 32 bytes and take next 16 bytes as IV
var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
// use the same encoding as in C# code, to convert string into bytes
var data = CryptoJS.enc.Utf16LE.parse("Example String");
var encrypted = CryptoJS.AES.encrypt(data, key, { iv: iv });
console.log(encrypted.toString());
Both codes return: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
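One caveat: this relies on CryptoJS's PBKDF2 defaulting to HMAC-SHA1, which is what Rfc2898DeriveBytes uses; if your CryptoJS version defaults to another hasher, pass hasher: CryptoJS.algo.SHA1 explicitly. To check the round trip on the JS side, the ciphertext decrypts with the same key and iv derived above (a sketch):
var decrypted = CryptoJS.AES.decrypt('eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=', key, { iv: iv });
console.log(CryptoJS.enc.Utf16LE.stringify(decrypted)); // "Example String"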
TL;DR the final code looks like this -
function encryptString(encryptString, secretKey) {
    encryptString = addExtraByteToChars(encryptString);
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
    var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}

function addExtraByteToChars(str) {
    // Interleave a NUL character after every character, mimicking the UTF-16 LE byte layout
    let strResult = '';
    for (var i = 0; i < str.length; ++i) {
        strResult += str.charAt(i) + String.fromCharCode(0);
    }
    return strResult;
}
Explanation:
The C# code in Zergatul's answer (thanks to them) used ASCII to convert the plainText into bytes, while my C# code used Unicode. Unicode assigns an extra byte to each character in the resulting byte array, which does not affect the generation of the key and IV bytes, but does affect the result, since the length of the encrypted string depends on the number of bytes generated from the plainText.
This can be seen in the bytes generated for each, using "Example String" and "Example Key" as the plainText and secretKey respectively -
ASCII:
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167]
Result: eQus9GLPKULh9vhRWOJjog==
Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
The JS byte output matched the ASCII case, which confirmed that it was effectively using the ASCII byte conversion -
keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7
Thus I just needed to expand the plainText so that its bytes match the Unicode (UTF-16 LE) encoding. Since Unicode assigns two bytes per character, with the second byte 0 for these characters, I created a gap after each of the plainText's characters and filled it with the character whose code is 0, using the addExtraByteToChars() function. And it made all the difference.
It's a workaround for sure, but it works for my scenario. This may or may not prove useful to others, so I'm sharing the findings. If anyone can suggest a better implementation of the addExtraByteToChars() function (or the proper term for this conversion, or a more efficient, less hacky way to do it), please do.
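For what it's worth, CryptoJS already ships a UTF-16 LE encoder (it is what the verification snippet above uses), so the same function can be written without the manual helper; a sketch that also uses slice() instead of mutating the derived words with splice():
function encryptStringUtf16(str, secretKey) {
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    var key = CryptoJS.lib.WordArray.create(keyBytes.words.slice(0, 8), 32);  // first 32 bytes
    var iv = CryptoJS.lib.WordArray.create(keyBytes.words.slice(8, 12), 16);  // next 16 bytes
    // Utf16LE.parse produces the same bytes addExtraByteToChars() emulates
    return CryptoJS.AES.encrypt(CryptoJS.enc.Utf16LE.parse(str), key, { iv: iv });
}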
I have searched high and low for a solution that can encrypt on a Node.js server and decrypt on an Objective-C client, and vice versa, using AES (or another algorithm if more appropriate).
I am relatively new to cryptography, and understanding why my encrypted text is different in each language is beyond my knowledge.
This is what I have so far:
Node.js crypto methods, using this CryptoJS library - node-cryptojs-aes
var node_cryptojs = require("node-cryptojs-aes");
var CryptoJS = node_cryptojs.CryptoJS;
var textToEncrypt = 'Hello';
var key_clear = 'a16byteslongkey!';
// encrypt + decrypt
var encrypted = CryptoJS.AES.encrypt(textToEncrypt, key_clear, { iv: null });
var decrypted = CryptoJS.AES.decrypt(encrypted, key_clear, { iv: null });
//Outputs
console.log("encrypted: " + encrypted); //encrypted: U2FsdGVkX1/ILXOjqIw2Vvz6DzRh1LMHgEQhDm3OunY=
console.log("decrypted: " + decrypted.toString(CryptoJS.enc.Utf8)); // decrypted: Hello
Objective-C crypto methods, using the AESCrypt library
NSString* textToEncrypt = @"Hello";
// encrypt
NSString* encryptedText = [AESCrypt encrypt:textToEncrypt password:@"a16byteslongkey!"];
// decrypt
NSString* decryptedText = [AESCrypt decrypt:encryptedText password:@"a16byteslongkey!"];
// output
NSLog(@"Text to encrypt: %@", textToEncrypt); // Text to encrypt: Hello
NSLog(@"Encrypted text: %@", encryptedText); // Encrypted text: wY80MJyxRRJdE+eKw6kaIA==
NSLog(@"Decrypted text: %@", decryptedText); // Decrypted text: Hello
I've been scratching my head for ages and have tried everything I can think of. I can show the underlying crypto methods from the libraries if required. There is a SHA-256 hash applied to the key in the AESCrypt library, but I have removed it, and I think there is some mismatch in the string encoding.
I'm posting this here because there are bound to be others trying to interop Node.js and iOS. Everyone seems to get hung up on keeping everything in the correct structures, buffers, strings etc. I know I did. So here is a step-by-step process for creating a key, creating an IV, encrypting, decrypting, and placing the result in base64 for easy transfer:
JavaScript (Node.js using the CryptoJS module)
// Generate key from password and salt, using SHA-256 to hash the salt and PBKDF2 to harden the key
var password = "1234567890123456";
var salt = "gettingsaltyfoo!";
var hash = CryptoJS.SHA256(salt);
var key = CryptoJS.PBKDF2(password, hash, { keySize: 256/32, iterations: 1000 });
console.log("Hash :",hash.toString(CryptoJS.enc.Base64));
console.log("Key :",key.toString(CryptoJS.enc.Base64));
// Generate a random IV
var iv = CryptoJS.lib.WordArray.random(128/8);
console.log("IV :",iv.toString(CryptoJS.enc.Base64));
// Encrypt message into base64
var message = "Hello World!";
var encrypted = CryptoJS.AES.encrypt(message, key, { iv: iv });
var encrypted64 = encrypted.ciphertext.toString(CryptoJS.enc.Base64);
console.log("Encrypted :",encrypted64);
// Decrypt base64 into message again
var cipherParams = CryptoJS.lib.CipherParams.create({ ciphertext: CryptoJS.enc.Base64.parse(encrypted64) });
var decrypted = CryptoJS.AES.decrypt(cipherParams, key, { iv: iv }).toString(CryptoJS.enc.Utf8);
console.log("Decrypted :",decrypted);
iOS SDK using CommonCrypto
// Generate key from password and salt, using SHA-256 to hash the salt and PBKDF2 to harden the key
NSString* password = @"1234567890123456";
NSString* salt = @"gettingsaltyfoo!";
NSMutableData* hash = [NSMutableData dataWithLength:CC_SHA256_DIGEST_LENGTH];
NSMutableData* key = [NSMutableData dataWithLength:CC_SHA256_DIGEST_LENGTH];
CC_SHA256(salt.UTF8String, (CC_LONG)strlen(salt.UTF8String), hash.mutableBytes);
CCKeyDerivationPBKDF(kCCPBKDF2, password.UTF8String, strlen(password.UTF8String), hash.bytes, hash.length, kCCPRFHmacAlgSHA1, 1000, key.mutableBytes, key.length);
NSLog(@"Hash : %@",[hash base64EncodedStringWithOptions:0]);
NSLog(@"Key : %@",[key base64EncodedStringWithOptions:0]);
// Generate a random IV (or use the base64 version from node.js)
NSString* iv64 = @"ludWXFqwWeLOkmhutxiwHw=="; // Taken from node.js CryptoJS IV output
NSData* iv = [[NSData alloc] initWithBase64EncodedString:iv64 options:0];
NSLog(@"IV : %@",[iv base64EncodedStringWithOptions:0]);
// Encrypt message into base64
NSData* message = [@"Hello World!" dataUsingEncoding:NSUTF8StringEncoding];
NSMutableData* encrypted = [NSMutableData dataWithLength:message.length + kCCBlockSizeAES128];
size_t bytesEncrypted = 0;
CCCrypt(kCCEncrypt,
kCCAlgorithmAES128,
kCCOptionPKCS7Padding,
key.bytes,
key.length,
iv.bytes,
message.bytes, message.length,
encrypted.mutableBytes, encrypted.length, &bytesEncrypted);
NSString* encrypted64 = [[NSMutableData dataWithBytes:encrypted.mutableBytes length:bytesEncrypted] base64EncodedStringWithOptions:0];
NSLog(#"Encrypted : %#",encrypted64);
// Decrypt base 64 into message again
NSData* encryptedWithout64 = [[NSData alloc] initWithBase64EncodedString:encrypted64 options:0];
NSMutableData* decrypted = [NSMutableData dataWithLength:encryptedWithout64.length + kCCBlockSizeAES128];
size_t bytesDecrypted = 0;
CCCrypt(kCCDecrypt,
kCCAlgorithmAES128,
kCCOptionPKCS7Padding,
key.bytes,
key.length,
iv.bytes,
encryptedWithout64.bytes, encryptedWithout64.length,
decrypted.mutableBytes, decrypted.length, &bytesDecrypted);
NSData* outputMessage = [NSMutableData dataWithBytes:decrypted.mutableBytes length:bytesDecrypted];
NSString* outputString = [[NSString alloc] initWithData:outputMessage encoding:NSUTF8StringEncoding];
NSLog(#"Decrypted : %#",outputString);
The output should be the same:
/*
Hash : AEIHWLT/cTUfXdYJ+oai6sZ4tXlc4QQcYTbI9If/Moc=
Key : WdHhJ19dSBURBA25HZSpbCJ4KnNEEgwzqjgyTBqa3eg=
IV : ludWXFqwWeLOkmhutxiwHw==
Encrypted : D3JpubesPMgQTiXbaoxAIw==
Decrypted : Hello World!
*/
Hopefully this saves someone else the time I've wasted :)
Are you sure the same key is being used in both libraries? You say you took out the SHA-256 part in AESCrypt; how is the library using the password parameter now? The AES algorithm can only use keys of 16, 24, or 32 bytes in length. Your password is 16 bytes long, but did you change the corresponding parameter to 128 (instead of 256) in the encrypt function?
Do you know how CryptoJS is using the key parameter? Are you sure it's being used directly, or might there be some processing (for example, hashing) before it's passed to the underlying primitive AES encryption function?
What mode of encryption is the CryptoJS library using? Its documentation doesn't say. Given that it asks for an IV, it's probably CBC, but you would have to look at the source to know for sure.
AESCrypt's documentation claims to use CBC mode, but you don't give it an IV anywhere. That must mean that it generates its own somewhere, or always uses a fixed one (which half defeats the purpose of CBC mode, but that's another story). So you need to figure out what the IV actually is.
TL;DR: unless you make sure that the same key (and key length), the same mode, and the same IV are used across both libraries, you will get different ciphertext.
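One concrete detail on how CryptoJS uses the key parameter: when the key argument is a string, CryptoJS treats it as a passphrase and derives the actual key and IV itself via an OpenSSL-compatible KDF with a random salt, which is why the Node.js output above starts with U2FsdGVkX1 (base64 for "Salted__") and changes on every run. Passing a WordArray makes CryptoJS use it as the raw key instead. A sketch of the difference:
// Passphrase string: CryptoJS derives key + IV itself (random salt, output differs each run)
var e1 = CryptoJS.AES.encrypt('Hello', 'a16byteslongkey!');
console.log(e1.toString()); // "U2FsdGVkX1..." OpenSSL salted format

// Raw key: supply a WordArray plus an explicit IV
var rawKey = CryptoJS.enc.Utf8.parse('a16byteslongkey!'); // 16 bytes -> AES-128
var rawIv = CryptoJS.lib.WordArray.random(16);
var e2 = CryptoJS.AES.encrypt('Hello', rawKey, { iv: rawIv });
console.log(e2.ciphertext.toString(CryptoJS.enc.Base64)); // bare ciphertext, no salt header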