Crypto - Is SJCL (javascript) encryption compatible with OpenSSL? - javascript

I am trying to decrypt some information that has been encrypted with the SJCL (Stanford Javascript Crypto Library). An example page is at http://bitwiseshiftleft.github.io/sjcl/demo/.
If I encrypt some data, I have been unable to decrypt it using OpenSSL (version 1.0.1f). I have seen another question on stackoverflow asking about this - but that question and its answers didn't really help.
For instance, encrypting with a password of 'password', and a random salt of '6515636B 82C5AC56' with 10000 iterations of a 256 bit key size gives a Key of 'D8CCAA75 3E2983F0 3657AB3C 8A68A85A 9E9F1CAC 43DAB645 489CDE58 0A9EBDAE', which is exactly what I get with OpenSSL. So far, so good.
When I use SJCL with this key and an IV of '9F62544C 9D3FCAB2 DD0833DF 21CA80CF' to encrypt, say, the message 'mymessage', then I get the ciphertext:
{"iv":"n2JUTJ0/yrLdCDPfIcqAzw==",
"v":1,
"iter":10000,
"ks":256,
"ts":64,
"mode":"ccm",
"adata":"",
"cipher":"aes",
"salt":"ZRVja4LFrFY=",
"ct":"FCuQWGYz3onE/lRt/7vCl5A="}
However, no matter how I modify or rewrite my OpenSSL C++ code, I cannot decrypt this data.
I've googled and found a few code samples, but nothing that has actually worked.
I'm aware that I need to use the CCM cipher mode in OpenSSL - but this mode is poorly documented.
Can anyone post some OpenSSL code to successfully decrypt this data?

You can copy-paste the example at http://wiki.openssl.org/index.php/EVP_Authenticated_Encryption_and_Decryption with a few changes.
First, you need to Base64 decode SJCL's data. But you knew that.
Second, you need to split the message into ct and tag. In this case, ct is the first 9 bytes, and tag is the 8 bytes starting at ct[9].
Third, you need to set the tag length to ts/8 = 8, and you need to set the IV length correctly. If you set the IV too long in SJCL, it will truncate it to 15 - LOL (length of length), where LOL is between 2 and 4 (because SJCL enforces <2^32 bytes length), and is the number of bytes required to describe the length of the message. This is 2 unless the message is at least 65536 bytes long, in which case it is 3, unless the message is at least 2^24 bytes long, in which case it is 4. Keep in mind that if you're decrypting, the ciphertext you are passed includes the tag but LOL must be computed from the message length, which does not include the tag.
With those changes, it should work:
#include <openssl/evp.h>
void handleErrors() {
abort();
}
int decryptccm(unsigned char *ciphertext, int ciphertext_len, unsigned char *aad,
int aad_len, unsigned char *tag, unsigned char *key, unsigned char *iv,
unsigned char *plaintext)
{
EVP_CIPHER_CTX *ctx;
int len;
int plaintext_len;
int ret;
/* Create and initialise the context */
if(!(ctx = EVP_CIPHER_CTX_new())) handleErrors();
/* Initialise the decryption operation. */
if(1 != EVP_DecryptInit_ex(ctx, EVP_aes_256_ccm(), NULL, NULL, NULL))
handleErrors();
int lol = 2;
if (ciphertext_len >= 1<<16) lol++;
if (ciphertext_len >= 1<<24) lol++;
if(1 != EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_IVLEN, 15-lol, NULL))
handleErrors();
/* Set expected tag value. */
if(1 != EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_CCM_SET_TAG, 8, tag))
handleErrors();
/* Initialise key and IV */
if(1 != EVP_DecryptInit_ex(ctx, NULL, NULL, key, iv)) handleErrors();
/* Provide the total ciphertext length
*/
if(1 != EVP_DecryptUpdate(ctx, NULL, &len, NULL, ciphertext_len))
handleErrors();
/* Provide any AAD data. This can be called zero or more times as
* required
*/
if(1 != EVP_DecryptUpdate(ctx, NULL, &len, aad, aad_len))
handleErrors();
/* Provide the message to be decrypted, and obtain the plaintext output.
* EVP_DecryptUpdate can be called multiple times if necessary
*/
ret = EVP_DecryptUpdate(ctx, plaintext, &len, ciphertext, ciphertext_len);
plaintext_len = len;
/* Clean up */
EVP_CIPHER_CTX_free(ctx);
if(ret > 0)
{
/* Success */
return plaintext_len;
}
else
{
/* Verify failed */
return -1;
}
}
int main(int argc, char **argv) {
// base64-decoded from your example
unsigned char iv[] = { 0x9F,0x62,0x54,0x4C,0x9D,0x3F,0xCA,0xB2,0xDD,0x08,0x33,0xDF,0x21,0xCA,0x80,0xCF };
unsigned char ct[] = { 0x14,0x2B,0x90,0x58,0x66,0x33,0xDE,0x89,0xC4,0xFE,0x54,0x6D,0xFF,0xBB,0xC2,0x97,0x90 };
unsigned char ky[] = { 0xD8,0xCC,0xAA,0x75 ,0x3E,0x29,0x83,0xF0 ,0x36,0x57,0xAB,0x3C ,0x8A,0x68,0xA8,0x5A ,0x9E,0x9F,0x1C,0xAC ,0x43,0xDA,0xB6,0x45 ,0x48,0x9C,0xDE,0x58 ,0x0A,0x9E,0xBD,0xAE };
const unsigned char *message = (const unsigned char *)"mymessage";
unsigned char plaintext[1000];
int ret = decryptccm(ct, 9, "", 0, &ct[9], ky, iv, plaintext);
plaintext[9] = 0;
printf("%d,%s\n",ret,plaintext);
return 0;
}
This program returns "9,mymessage" on my machine.

Related

How to translate Go des code in javascript

I want to translate Go DES code in javascript
This is my Go code, use des zeropadding to encrypt. Now I want use javascript to get the same function, but I try it many times, the result is always wrong.
package zeropadding
import (
"crypto/cipher"
"crypto/des"
"encoding/hex"
"fmt"
"strings"
)
func main() {
text := DesEncrypt([]byte("12345678"), []byte("12345678"))
fmt.Println(strings.ToUpper(hex.EncodeToString(text)))
}
func DesEncrypt(origData, key []byte) []byte {
iv := []byte{0, 0, 0, 0, 0, 0, 0, 0}
block, err := des.NewCipher(key)
if err != nil {
return nil
}
origData = ZeroPadding(origData)
blockMode := cipher.NewCBCEncrypter(block, iv)
crypted := make([]byte, len(origData))
blockMode.CryptBlocks(crypted, origData)
return crypted
}
func ZeroPadding(in []byte) []byte {
length := len(in)
blockCount := length / 8
out := make([]byte, (blockCount+1)*8)
var i int
for i = 0; i < length; i++ {
out[i] = in[i]
}
return out
}
how to use javascript to do it
const CryptoJS = require('./crypto-js.min')
function encryptByDES(message, key, iv) {
var keyHex = CryptoJS.enc.Utf8.parse(key);
var ivHex = CryptoJS.enc.Utf8.parse(iv);
encrypted = CryptoJS.DES.encrypt(message, keyHex, {
iv: ivHex,
mode: CryptoJS.mode.CBC,
padding: CryptoJS.pad.ZeroPadding
}
);
return encrypted.ciphertext.toString();
}
console.log(encryptByDES("12345678", "12345678", 0))
I use the lib crypto-js, use mode CryptoJS.mode.CBC and padding CryptoJS.pad.ZeroPadding, but the result is not equals with the Go code output.
It looks like there are issues with both your go code and your javascript which is probably confusing things. (caveat - I'm far from being an encryption expert!).
Go
Zero pad should be filling the []byte out to a multiple of the blocksize (8 in your case) but as it stands if the input is 8 bytes long you will pad out to 16 bytes. Try:
blockCount := (length - 1) / blockSize
Playground
Javascript
You are providing the IV as a number but then converting it as if it was utf8. Lets mirror what you are doing in go:
var ivHex = CryptoJS.enc.Hex.parse(iv);
And pass the IV in as hex:
console.log(encryptByDES("1234567890", "12345678", "0000000000000000"))
Js Fiddle
Another approach would be to use WebAssembly (see webassembly.org), considering that:
you can compile your Go project to wasm
GOOS=js GOARCH=wasm go build -o main.wasm
(see "Best Practices for WebAssembly using GoLang (1.15+)" from Cesar William Alvarenga for more best practices)
you can call said wasm from your Javascript project

AES encryption in JS equivalent of C#

I need to encrypt a string using AES encryption. This encryption was happening in C# earlier, but it needs to be converted into JavaScript (will be run on a browser).
The current code in C# for encryption is as following -
public static string EncryptString(string plainText, string encryptionKey)
{
byte[] clearBytes = Encoding.Unicode.GetBytes(plainText);
using (Aes encryptor = Aes.Create())
{
Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(encryptionKey, new byte[] { 0x49, 0x76, 0x61, 0x6e, 0x20, 0x4d, 0x65, 0x64, 0x76, 0x65, 0x64, 0x65, 0x76 });
encryptor.Key = pdb.GetBytes(32);
encryptor.IV = pdb.GetBytes(16);
using (MemoryStream ms = new MemoryStream())
{
using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
{
cs.Write(clearBytes, 0, clearBytes.Length);
cs.Close();
}
plainText = Convert.ToBase64String(ms.ToArray());
}
}
return plainText;
}
I have tried to use CryptoJS to replicate the same functionality, but it's not giving me the equivalent encrypted base64 string. Here's my CryptoJS code -
function encryptString(encryptString, secretKey) {
var iv = CryptoJS.enc.Hex.parse('Ivan Medvedev');
var key = CryptoJS.PBKDF2(secretKey, iv, { keySize: 256 / 32, iterations: 500 });
var encrypted = CryptoJS.AES.encrypt(encryptString, key,{iv:iv);
return encrypted;
}
The encrypted string has to be sent to a server which will be able to decrypt it. The server is able to decrypt the encrypted string generated from the C# code, but not the encrypted string generated from JS code. I tried to compare the encrypted strings generated by both the code and found that the C# code is generating longer encrypted strings. For example keeping 'Example String' as plainText and 'Example Key' as the key, I get the following result -
C# - eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
JS - 9ex5i2g+8iUCwdwN92SF+A==
The length of JS encrypted string is always shorter than the C# one. Is there something I am doing wrong? I just have to replicated the C# code into the JS code.
Update:
My current code after Zergatul's answer is this -
function encryptString(encryptString, secretKey) {
var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
console.log(keyBytes.toString());
// take first 32 bytes as key (like in C# code)
var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
// skip first 32 bytes and take next 16 bytes as IV
var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
console.log(key.toString());
console.log(iv.toString());
var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
return encrypted;
}
As illustrated in his/her answer that if the C# code converts the plainText into bytes using ASCII instead of Unicode, both the C# and JS code will produce exact results. But since I am not able to modify the decryption code, I have to convert the code to be equivalent of the original C# code which was using Unicode.
So, I tried to see, what's the difference between both the bytes array between ASCII and Unicode byte conversion in C#. Here's what I found -
ASCII Byte Array: [69,120,97,109,112,108,101,32,83,116, 114, 105, 110, 103]
Unicode Byte Array: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0, 114,0, 105,0, 110,0, 103,0]
So some extra bytes are available for each character in C# (So Unicode allocates twice as much bytes to each character than ASCII).
Here's the difference between both Unicode and ASCII conversion respectively -
ASCII
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eQus9GLPKULh9vhRWOJjog==
Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
So since both the key and iv being generated have exact same byte array in both Unicode and ASCII approach, it should not have generated different output, but somehow it's doing that. I think it's because of clearBytes' length, as it's using its length to write to CryptoStream.
I tried to see what's the output of the generated bytes in the JS code is and found that it uses words which needed to be converted into Strings using toString() method.
keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7
Since, I am not able to affect the generated encrypted string's length in the JS code (No access to the write stream directly), thus still stuck here.
Here is the example how to reproduce the same ciphertext between C# and CryptoJS:
static void Main(string[] args)
{
byte[] plainText = Encoding.Unicode.GetBytes("Example String"); // this is UTF-16 LE
string cipherText;
using (Aes encryptor = Aes.Create())
{
var pdb = new Rfc2898DeriveBytes("Example Key", Encoding.ASCII.GetBytes("Ivan Medvedev"));
encryptor.Key = pdb.GetBytes(32);
encryptor.IV = pdb.GetBytes(16);
using (MemoryStream ms = new MemoryStream())
{
using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
{
cs.Write(plainText, 0, plainText.Length);
cs.Close();
}
cipherText = Convert.ToBase64String(ms.ToArray());
}
}
Console.WriteLine(cipherText);
}
And JS:
var keyBytes = CryptoJS.PBKDF2('Example Key', 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
// take first 32 bytes as key (like in C# code)
var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
// skip first 32 bytes and take next 16 bytes as IV
var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
// use the same encoding as in C# code, to convert string into bytes
var data = CryptoJS.enc.Utf16LE.parse("Example String");
var encrypted = CryptoJS.AES.encrypt(data, key, { iv: iv });
console.log(encrypted.toString());
Both codes return: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
TL;DR the final code looks like this -
function encryptString(encryptString, secretKey) {
encryptString = addExtraByteToChars(encryptString);
var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
console.log(keyBytes.toString());
var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv, });
return encrypted;
}
function addExtraByteToChars(str) {
let strResult = '';
for (var i = 0; i < str.length; ++i) {
strResult += str.charAt(i) + String.fromCharCode(0);
}
return strResult;
}
Explanation:
The C# code in the Zergatul's answer (Thanks to him/her) was using ASCII to convert the plainText into bytes, while my C# code was using Unicode. Unicode was assigning extra byte to each character in the resultant byte array, which was not affecting the generation of both key and iv bytes, but affecting the result since the length of the encryptedString was dependent on the length of the bytes generated from plainText.
As seen in the following bytes generated for each of them using "Example String" and "Example Key" as the plainText and secretKey respectively -
ASCII
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eQus9GLPKULh9vhRWOJjog==
Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
The JS result was similar too, which confirmed that it's using ASCII byte conversion -
keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7
Thus I just need to increase the length of the plainText to make it use Unicode equivalent byte generation (Sorry, not familiar with the term). Since Unicode was assigning 2 space for each character in the byteArray, keeping the second space as 0, I basically created gap in the plainText's characters and filled that gap with character whose ASCII value was 0 using the addExtraByteToChars() function. And it made all the difference.
It's a workaround for sure, but started working for my scenario. I suppose this may or may not prove useful to others, thus sharing the findings. If anyone can suggest better implementation of the addExtraByteToChars() function (probably some term for this conversion instead of ASCII to Unicode or a better, efficient, and not hacky way to do that), please suggest it.

java encryption aes value and javascript encryption value does not match

I have java code which produce aes encryption code for me now I am trying to use it on javascript using crypto-js but both codes provides different keys I dont know why and how to get the same key here is my code
public static String encrypt(String text, byte[] iv, byte[] key)throws Exception{
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
SecretKeySpec keySpec = new SecretKeySpec(key, "AES");
System.out.println("KEY SPECCCC: "+keySpec);
IvParameterSpec ivSpec = new IvParameterSpec(iv);
cipher.init(Cipher.ENCRYPT_MODE,keySpec,ivSpec);
byte [] results = cipher.doFinal(text.getBytes("UTF-8"));
BASE64Encoder encoder = new BASE64Encoder();
return encoder.encode(results);
}
JavaScript code
require(["crypto-js/core", "crypto-js/aes"], function (CryptoJS, AES) {
ciphertext = CryptoJS.AES.encrypt(JSON.stringify(jsondata),
arr.toString(),arr.toString());
});
string to utf-8
var utf8 = unescape(encodeURIComponent(key));
var arr = [];
for (var i = 0; i < utf8.length; i++) {
arr.push(utf8.charCodeAt(i));
}
First of all even tough your code works fine you wont be able to decrypt it back properly because while you are creating your AES cipher in Java you are using CBC Cipher and You are implementing a Padding algorithm which is PKCS5Padding.
So your java code does the followings;
When it gets the input it first divide it into the 16 bits blocks then if your input doesnt divide into the 16 overall then the reminders will be padded for filling the block with the same number of reminder.You can see what i mean by the following picture.
So it will do the encryption with the padded ciphers in the java side but in the Javascript Part You neither declare what type of Mode Aes will use nor declaring the what type of Padding it suppose to do. So you should add those values into the your code.You can make search following code parts.
mode:CryptoJS.mode.CBC,
padding: CryptoJS.pad.Pkcs7
About the different keys it is occuring because you are sending a Byte[] in to the your Encrypt method then use this unknown Byte[] while you are creating your Key.You didnt mention why your encryption method will be used in your program but you should create that "Byte[] key" same way in the both method.For instance you can refer following code as a example of generating that but it is not secure way of generating keys I just added it for showing you what I mean by you should generate both keys in the same way.
//DONT USE THIS IMPLEMENTATION SINCE IT IS NOT SAFE!
byte[] key = (username + password).getBytes("UTF-8");
Java code generates an encrypted string and for JavaScript to also generate same encrypted string, Following code works!
(function (CryptoJS) {
var C_lib = CryptoJS.lib;
// Converts ByteArray to stadnard WordArray.
// Example: CryptoJS.MD5(CryptoJS.lib.ByteArray ([ Bytes ])).toString(CryptoJS.enc.Base64);
C_lib.ByteArray = function (arr) {
var word = [];
for (var i = 0; i < arr.length; i += 4) {
word.push (arr[i + 0] << 24 | arr[i + 1] << 16 | arr[i + 2] << 8 | arr[i + 3] << 0);
}
return C_lib.WordArray.create (word, arr.length);
};
})(CryptoJS);
var IVstring = CryptoJS.lib.ByteArray(your IV bytearray).toString(CryptoJS.enc.Base64);
var keystring = CryptoJS.lib.ByteArray(your KEY bytearray).toString(CryptoJS.enc.Base64);
var text = 'texttobeencrypted';
var key = CryptoJS.enc.Base64.parse(keystring);
var iv = CryptoJS.enc.Base64.parse(IVstring);
var encrypted = CryptoJS.AES.encrypt(text, key, {iv: iv});
console.log(encrypted.toString());
Edited: Removed dangerous third party resource reference.
aes encryption javascript cryptojs java

Trouble decrypting openSSL AES CTR encrypted text

I have trouble decrypting an message encrypted in php with the openssl_encrypt method. I am using the new WebCrypto API (so I use crypto.subtle).
Encrypting in php:
$ALGO = "aes-256-ctr";
$key = "ae6865183f6f50deb68c3e8eafbede0b33f9e02961770ea5064f209f3bf156b4";
function encrypt ($data, $key) {
global $ALGO;
$iv = openssl_random_pseudo_bytes(openssl_cipher_iv_length($ALGO), $strong);
if (!$strong) {
exit("can't generate strong IV");
}
return bin2hex($iv).openssl_encrypt($data, $ALGO, $key, 0, $iv);
}
$enc = encrypt("Lorem ipsum dolor", $key);
exit($enc);
example output:
8d8c3a57d2dbb3287aca61be0bce59fbeAQ4ILKouAQ5eizPtlUTeHU=
(I can decrypt that in php and get the cleartext back)
In JS I decrypt like this:
function Ui8FromStr (StrStart) {
const Ui8Result = new Uint8Array(StrStart.length);
for (let i = 0; i < StrStart.length; i++) {
Ui8Result[i] = StrStart.charCodeAt(i);
}
return Ui8Result;
}
function StrFromUi8 (Ui8Start) {
let StrResult = "";
Ui8Start.forEach((charcode) => {
StrResult += String.fromCharCode(charcode);
});
return StrResult;
}
function Ui8FromHex (hex) {
for (var bytes = new Uint8Array(Math.ceil(hex.length / 2)), c = 0; c < hex.length; c += 2)
bytes[c/2] = parseInt(hex.substr(c, 2), 16);
return bytes;
}
const ALGO = 'AES-CTR'
function decrypt (CompCipher, HexKey) {
return new Promise (function (resolve, reject) {
// remove IV from cipher
let HexIv = CompCipher.substr(0, 32);
let B64cipher = CompCipher.substr(32);
let Ui8Cipher = Ui8FromStr(atob(B64cipher));
let Ui8Iv = Ui8FromHex (HexIv);
let Ui8Key = Ui8FromHex (HexKey);
crypto.subtle.importKey("raw", Ui8Key, {name: ALGO}, false, ["encrypt", "decrypt"]). then (function (cryptokey){
return crypto.subtle.decrypt({ name: ALGO, counter: Ui8Iv, length: 128}, cryptokey, Ui8Cipher).then(function(result){
let Ui8Result = new Uint8Array(result);
let StrResult = StrFromUi8(Ui8Result);
resolve(StrResult);
}).catch (function (err){
reject(err)
});
})
})
}
when I now run decrypt("8d8c3a57d2dbb3287aca61be0bce59fbeAQ4ILKouAQ5eizPtlUTeHU=", "ae6865183f6f50deb68c3e8eafbede0b33f9e02961770ea5064f209f3bf156b4").then(console.log) I get gibberish: SÌõÅ°blfçSÑ-
The problem I have is, that I am not sure what is meant with counter. I tried the IV but failed.
This Github tutorial suggests*1, that it is the IV - or at least part of it, as I've seen people talk about that the counter is part of the IV (something like 4 bytes, that means that the IV is made from 12 bytes IV and 4 bytes Counter)
If that is indeed true, my question then becomes: Where do I give the script the other 12 bytes of IV when counter is only 4 bytes of it.
Can anyone maybe give me a working example of encryption in php
*1 It says that the same counter has to be used for en- and decryption. This leads me to believe, that it is at least something similar to the IV
You are handling the key incorrectly in PHP.
In the PHP code you are passing the hex encoded key directly to the openssl_encrypt function, without decoding it. This means the key you are trying to use is twice as long as expected (i.e. 64 bytes). OpenSSL doesn’t check the key length, however—it just truncates it, taking the first 32 bytes and using them as the encryption key.
The Javascript code handles the key correctly, hex decoding it before passing the decoded array to the decryption function.
The overall result is you are using a different key in each case, and so the decryption doesn’t work.
You need to add a call to hex2bin on the key in your PHP code, to convert it from the hex encoding to the actual 32 raw bytes.

Why do EnumPrintersA and EnumPrintersW request the same amount of memory?

I call EnumPrintersA/EnumPrintersW functions using node-ffi to get list of local printers accessible from my PC.
You should create the buffer which will be filled with information by EnumPrinters function.
But you do not know the required size of the buffer.
In this case you need to execute EnumPrintersA/EnumPrintersW twice.
During the first call this function calculates the amount of memory for information about printers, during the second call this function fills the buffer with information about printers.
In case of Unicode version of EnumPrinters function, each letter in printers name will be encoded using two characters in Windows.
Why the first call to EnumPrintersW returns the same required amount of memory as the first call to EnumPrintersA?
Unicode strings are twice as long as not-unicode strings, but required buffer size is the same.
var ffi = require('ffi')
var ref = require('ref')
var Struct = require('ref-struct')
var wchar_t = require('ref-wchar')
var int = ref.types.int
var intPtr = ref.refType(ref.types.int)
var wchar_string = wchar_t.string
var getPrintersA = function getPrinters() {
var PRINTER_INFO_4A = Struct({
'pPrinterName' : ref.types.CString,
'pServerName' : ref.types.CString,
'Attributes' : int
});
var printerInfoPtr = ref.refType(PRINTER_INFO_4A);
var winspoolLib = new ffi.Library('winspool', {
'EnumPrintersA': [ int, [ int, ref.types.CString, int, printerInfoPtr, int, intPtr, intPtr ] ]
});
var pcbNeeded = ref.alloc(int, 0);
var pcReturned = ref.alloc(int, 0);
//Get amount of memory for the buffer with information about printers
var res = winspoolLib.EnumPrintersA(6, ref.NULL, 4, ref.NULL, 0, pcbNeeded, pcReturned);
if (res != 0) {
console.log("Cannot get list of printers. Error during first call to EnumPrintersA. Error: " + res);
return;
}
var bufSize = pcbNeeded.deref();
var buf = Buffer.alloc(bufSize);
console.log(bufSize);
//Fill buf with information about printers
res = winspoolLib.EnumPrintersA(6, ref.NULL, 4, buf, bufSize, pcbNeeded, pcReturned);
if (res == 0) {
console.log("Cannot get list of printers. Eror: " + res);
return;
}
var countOfPrinters = pcReturned.deref();
var printers = Array(countOfPrinters);
for (var i = 0; i < countOfPrinters; i++) {
var pPrinterInfo = ref.get(buf, i*PRINTER_INFO_4A.size, PRINTER_INFO_4A);
printers[i] = pPrinterInfo.pPrinterName;
}
return printers;
};
var getPrintersW = function getPrinters() {
var PRINTER_INFO_4W = Struct({
'pPrinterName' : wchar_string,
'pServerName' : wchar_string,
'Attributes' : int
});
var printerInfoPtr = ref.refType(PRINTER_INFO_4W);
var winspoolLib = new ffi.Library('winspool', {
'EnumPrintersW': [ int, [ int, wchar_string, int, printerInfoPtr, int, intPtr, intPtr ] ]
});
var pcbNeeded = ref.alloc(int, 0);
var pcReturned = ref.alloc(int, 0);
//Get amount of memory for the buffer with information about printers
var res = winspoolLib.EnumPrintersW(6, ref.NULL, 4, ref.NULL, 0, pcbNeeded, pcReturned);
if (res != 0) {
console.log("Cannot get list of printers. Error during first call to EnumPrintersW. Eror code: " + res);
return;
}
var bufSize = pcbNeeded.deref();
var buf = Buffer.alloc(bufSize);
console.log(bufSize);
//Fill buf with information about printers
res = winspoolLib.EnumPrintersW(6, ref.NULL, 4, buf, pcbNeeded.deref(), pcbNeeded, pcReturned);
if (res == 0) {
console.log("Cannot get list of printers. Eror code: " + res);
return;
}
var countOfPrinters = pcReturned.deref();
var printers = new Array(countOfPrinters);
for (var i = 0; i < countOfPrinters; i++) {
var pPrinterInfo = ref.get(buf, i*PRINTER_INFO_4W.size, PRINTER_INFO_4W);
printers[i] = pPrinterInfo.pPrinterName;
}
return printers;
};
https://msdn.microsoft.com/ru-ru/library/windows/desktop/dd162692(v=vs.85).aspx
BOOL EnumPrinters(
_In_ DWORD Flags,
_In_ LPTSTR Name,
_In_ DWORD Level,
_Out_ LPBYTE pPrinterEnum,
_In_ DWORD cbBuf,
_Out_ LPDWORD pcbNeeded,
_Out_ LPDWORD pcReturned
);
https://msdn.microsoft.com/ru-ru/library/windows/desktop/dd162847(v=vs.85).aspx
typedef struct _PRINTER_INFO_4 {
LPTSTR pPrinterName;
LPTSTR pServerName;
DWORD Attributes;
} PRINTER_INFO_4, *PPRINTER_INFO_4;
I can confirm that what you found with EnumPrintersA and EnumPrintersW is reproducible.
In my machine, they both require 240 bytes.
This got me curious, so I decided to allocate a separate buffer for each function and dump each buffer to a file and opened them with a hex editor.
The interesting part of each file is of course the names of the printers.
To keep this short, I'll show you the first 3 names of the printers.
The first line is from EnumPrintersA, the second is from EnumPrintersW:
Fax.x...FX DocuPrint C1110 PCL 6..C.1.1.1.0. .P.C.L. .6...Microsoft XPS Document Writer.o.c.u.m.e.n.t. .W.r.i.t.e.r...
F.a.x...F.X. .D.o.c.u.P.r.i.n.t. .C.1.1.1.0. .P.C.L. .6...M.i.c.r.o.s.o.f.t. .X.P.S. .D.o.c.u.m.e.n.t. .W.r.i.t.e.r...
From this result, it appears that EnumPrintersA calls EnumPrintersW for the actual work and then simply converts each string in the buffer to single byte characters and puts the resulting string in the same place.
To confirm this, I decided to trace EnumPrintersA code and I found that it definitely calls EnumPrintersW at position winspool.EnumPrintersA + 0xA7.
The actual position is likely different in a different Windows version.
This got me even more curious, so I decided to test other functions that have A and W versions.
This is what I found:
EnumMonitorsA 280 bytes needed
EnumMonitorsW 280 bytes needed
EnumServicesStatusA 20954 bytes needed
EnumServicesStatusW 20954 bytes needed
EnumPortsA 2176 bytes needed
EnumPortsW 2176 bytes needed
EnumPrintProcessorsA 24 bytes needed
EnumPrintProcessorsW 24 bytes needed
From this result, my conclusion is that EnumPrintersA calls EnumPrintersW for the actual work and converts the string in the buffer and other functions that have A and W versions also do the same thing.
This appears to be a common mechanism to avoid duplication of code in expense of larger buffers, maybe because buffers can be deallocated anyway.
At the beginning I thought that there's something wrong with your code, so I kept looking for a mistake (introduced by the FFI or JS layers, or a typo or something similar), but I couldn't find anything.
Then, I started to write a program similar to yours in C (to eliminate any extra layers that could introduce errors).
main.c:
#include <stdio.h>
#include <Windows.h>
#include <conio.h> // !!! Deprecated!!!
typedef BOOL (__stdcall *EnumPrintersAFuncPtr)(_In_ DWORD Flags, _In_ LPSTR Name, _In_ DWORD Level, _Out_ LPBYTE pPrinterEnum, _In_ DWORD cbBuf, _Out_ LPDWORD pcbNeeded, _Out_ LPDWORD pcReturned);
typedef BOOL (__stdcall *EnumPrintersWFuncPtr)(_In_ DWORD Flags, _In_ LPWSTR Name, _In_ DWORD Level, _Out_ LPBYTE pPrinterEnum, _In_ DWORD cbBuf, _Out_ LPDWORD pcbNeeded, _Out_ LPDWORD pcReturned);
void testFunc()
{
PPRINTER_INFO_4A ppi4a = NULL;
PPRINTER_INFO_4W ppi4w = NULL;
BOOL resa, resw;
DWORD neededa = 0, returneda = 0, neededw = 0, returnedw = 0, gle = 0, i = 0, flags = PRINTER_ENUM_LOCAL | PRINTER_ENUM_CONNECTIONS;
LPBYTE bufa = NULL, bufw = NULL;
resa = EnumPrintersA(flags, NULL, 4, NULL, 0, &neededa, &returneda);
if (resa) {
printf("EnumPrintersA(1) succeeded with NULL buffer. Exiting...\n");
return;
} else {
gle = GetLastError();
if (gle != ERROR_INSUFFICIENT_BUFFER) {
printf("EnumPrintersA(1) failed with %d(0x%08X) which is different than %d. Exiting...\n", gle, gle, ERROR_INSUFFICIENT_BUFFER);
return;
} else {
printf("EnumPrintersA(1) needs a %d(0x%08X) bytes long buffer.\n", neededa, neededa);
}
}
resw = EnumPrintersW(flags, NULL, 4, NULL, 0, &neededw, &returnedw);
if (resw) {
printf("EnumPrintersW(1) succeeded with NULL buffer. Exiting...\n");
return;
} else {
gle = GetLastError();
if (gle != ERROR_INSUFFICIENT_BUFFER) {
printf("EnumPrintersW(1) failed with %d(0x%08X) which is different than %d. Exiting...\n", gle, gle, ERROR_INSUFFICIENT_BUFFER);
return;
} else {
printf("EnumPrintersW(1) needs a %d(0x%08X) bytes long buffer.\n", neededw, neededw);
}
}
bufa = (LPBYTE)calloc(1, neededa);
if (bufa == NULL) {
printf("calloc failed with %d(0x%08X). Exiting...\n", errno, errno);
return;
} else {
printf("buffera[0x%08X:0x%08X]\n", (long)bufa, (long)bufa + neededa - 1);
}
bufw = (LPBYTE)calloc(1, neededw);
if (bufw == NULL) {
printf("calloc failed with %d(0x%08X). Exiting...\n", errno, errno);
free(bufa);
return;
} else {
printf("bufferw[0x%08X:0x%08X]\n", (long)bufw, (long)bufw + neededw - 1);
}
resa = EnumPrintersA(flags, NULL, 4, bufa, neededa, &neededa, &returneda);
if (!resa) {
gle = GetLastError();
printf("EnumPrintersA(2) failed with %d(0x%08X). Exiting...\n", gle, gle);
free(bufa);
free(bufw);
return;
}
printf("EnumPrintersA(2) copied %d bytes in the buffer out of which the first %d(0x%08X) represent %d structures of size %d\n", neededa, returneda * sizeof(PRINTER_INFO_4A), returneda * sizeof(PRINTER_INFO_4A), returneda, sizeof(PRINTER_INFO_4A));
resw = EnumPrintersW(flags, NULL, 4, bufw, neededw, &neededw, &returnedw);
if (!resw) {
gle = GetLastError();
printf("EnumPrintersW(2) failed with %d(0x%08X). Exiting...\n", gle, gle);
free(bufw);
free(bufa);
return;
}
printf("EnumPrintersW(2) copied %d bytes in the buffer out of which the first %d(0x%08X) represent %d structures of size %d\n", neededw, returnedw * sizeof(PRINTER_INFO_4W), returnedw * sizeof(PRINTER_INFO_4W), returnedw, sizeof(PRINTER_INFO_4W));
ppi4a = (PPRINTER_INFO_4A)bufa;
ppi4w = (PPRINTER_INFO_4W)bufw;
printf("\nPrinting ASCII results:\n");
for (i = 0; i < returneda; i++) {
printf(" Item %d\n pPrinterName: [%s]\n", i, ppi4a[i].pPrinterName ? ppi4a[i].pPrinterName : "NULL");
}
printf("\nPrinting WIDE results:\n");
for (i = 0; i < returnedw; i++) {
wprintf(L" Item %d\n pPrinterName: [%s]\n", i, ppi4w[i].pPrinterName ? ppi4w[i].pPrinterName : L"NULL");
}
free(bufa);
free(bufw);
}
int main()
{
testFunc();
printf("\nPress a key to exit...\n");
getch();
return 0;
}
Note: in terms of variable names (I kept them short - and thus not very intuitive), the a or w at the end of their names means that they are used for ASCII / WIDE version.
Initially, I was afraid that EnumPrinters might not return anything, since I'm not connected to any printer at this point, but luckily I have some (7 to be more precise) "saved". Here's the output of the above program (thank you #qxz for correcting my initial (and kind of faulty) version):
EnumPrintersA(1) needs a 544(0x00000220) bytes long buffer.
EnumPrintersW(1) needs a 544(0x00000220) bytes long buffer.
buffera[0x03161B20:0x03161D3F]
bufferw[0x03165028:0x03165247]
EnumPrintersA(2) copied 544 bytes in the buffer out of which the first 84(0x00000054) represent 7 structures of size 12
EnumPrintersW(2) copied 544 bytes in the buffer out of which the first 84(0x00000054) represent 7 structures of size 12
Printing ASCII results:
Item 0
pPrinterName: [Send To OneNote 2013]
Item 1
pPrinterName: [NPI060BEF (HP LaserJet Professional M1217nfw MFP)]
Item 2
pPrinterName: [Microsoft XPS Document Writer]
Item 3
pPrinterName: [Microsoft Print to PDF]
Item 4
pPrinterName: [HP Universal Printing PCL 6]
Item 5
pPrinterName: [HP LaserJet M4345 MFP [7B63B6]]
Item 6
pPrinterName: [Fax]
Printing WIDE results:
Item 0
pPrinterName: [Send To OneNote 2013]
Item 1
pPrinterName: [NPI060BEF (HP LaserJet Professional M1217nfw MFP)]
Item 2
pPrinterName: [Microsoft XPS Document Writer]
Item 3
pPrinterName: [Microsoft Print to PDF]
Item 4
pPrinterName: [HP Universal Printing PCL 6]
Item 5
pPrinterName: [HP LaserJet M4345 MFP [7B63B6]]
Item 6
pPrinterName: [Fax]
Press a key to exit...
Amazingly (at least for me), the behavior you described could be reproduced.
Note that the above output is from the 032bit compiled version of the program (064bit pointers are harder to read :) ), but the behavior is reproducible when building for 064bit as well (I am using VStudio 10.0 on Win10).
Since there are for sure strings at the end of the buffer, I started debugging:
Above is a picture of VStudio 10.0 Debug window, with the program interrupted at the end of testFunc, just before freeing the 1st pointer. Now, I don't know how familiar are you with debugging on VStudio, so I'm going to walk through the (relevant) window areas:
At the bottom, there are 2 Watch windows (used to display variables while the program is running). As seen, the variable Name, Value and Type are displayed
At the right, (Watch 1): the 1st (0th) and the last (6th - as there are 7) of the structures at the beginning of each of the 2 buffers
At the left, (Watch 2): the addresses of the 2 buffers
Above the Watch windows, (Memory 2) is the memory content for bufw. A Memory window contains a series of rows and in each row there's the memory address (grayed, at the left), followed by its contents in hex (each byte corresponds to 2 hex digits - e.g. 1E), then at the right the same contents in char representation (each byte corresponds to 1 char - I'm going to come back on this), then the next row, and so on
Above Memory 2, (Memory 1): it's the memory content for bufa
Now, going back to the memory layout: not all the chars at the right are necessarily what they seem, some of them are just displayed like that for human readability. For example there are a lot of dots (.) on the right side, but they are not all dots. If you look for a dot at the corresponding hex representation, you'll notice that for many of them it's 00 or NULL (which is a non printable char, but it's displayed as a dot).
Regarding the buffer contents each of the 2 Memory windows (looking at the char representation), there are 3 zones:
The PRINTER_INFO_4* zone or the gibberish at the beginning: 544 bytes corresponding to approximately the 1st 3 rows
The funky chars from the last ~1.5 rows: they are outside of our buffers so we don't care about them
The mid zone: where the strings are stored
Let's look at the WIDE strings zone (Memory 2 - mid zone): as you mentioned, each character has 2 bytes: because in my case they're all ASCII chars, the MSB (or the codepage byte) is always 0 (that's why you see chars and dots interleaved: e.g. ".L.a.s.e.r.J.e.t" in row 4).
Since there are multiple strings in the buffer (or string, if you will) - or even better: multiple TCHAR*s in a TCHAR* - they must be separated: that is done by a NULL WIDE char (hex: 00 00, char: "..") at the end of each string; combined with the fact that the next string's 1st byte (char) is also 00 (.), you'll see a sequence of 3 NULL bytes (hex: 00 00 00, char: "...") and that is the separator between 2 (WIDE) strings in the mid zone.
Now, comparing the 2 mid parts (corresponding to the 2 buffers), you'll notice that the string separators are exactly in the same positions and more: the last parts of each string are the also same (the last halves of each string to be more precise).
Considering this, here's my theory:
I think EnumPrintersA calls EnumPrintersW, and then it iterates through each of the strings (at the end of the buffer), and calls wcstombs or even better: [MS.Docs]: WideCharToMultiByte function on them (converting them in place - and thus the resulting ASCII string only takes the 1st half of the WIDE string, leaving the 2nd half unmodified), without converting all the buffer. I'll have to verify this by looking with a disassembler in winspool.drv.
Personally (if I'm right) I think that it is a lame workaround (or a gainarie as I like to call it), but who knows, maybe all the *A, *W function pairs (at least those who return multiple char*s in a char*) work like this. Anyway, there are also pros for this approach (at least for these 2 funcs):
dev-wise: it's OK for one function to call the other and keep the implementation in 1 place (instead of duping it in both functions)
performance-wise: it's OK not to recreate the buffer since that would
imply additional computation; after all, the buffer consumer doesn't normally reach the second halves of each ASCII string in the buffer

Categories