Length of generated RSA key is different on Puttygen and NodeJS - javascript

I have been trying to generate an RSA key pair on an AWS Lambda in order to replace the current manual generation of keys in PuTTYgen. The key generated in PuTTYgen is of type RSA with a length of 2048 bits, which results in a sample key (without a comment) like the following -
---- BEGIN SSH2 PUBLIC KEY ----
Comment: ""
AAAAB3NzaC1yc2EAAAABJQAAAQEAjloNCA4mycem+WTb49zUhYK7aRmg1uuorUvD
7GzE97C9EmmhUrVbp4d5dWF8zkT2sh5mRFrAnsSogxEtCzvh59mzbqUj+3Xw+xqJ
DMrHmnT8XKIGep++v3e+SV7RLio06ymp0H7zyHhbxLhZEnpGEKwkXmY53+RSUF7s
wfmvxS5mCo7677lbIZxGvvx65tT5as5m+ng7tKlqDAliuPl2vslyFhQw9B49cvOx
Z+UekK2iHD+DNCMQyxEelOru9YMwRozOwgtWPEyHcLinonAn2fUne28POsT3zXbv
rW10hkGH5JIHzGUoPxP6N7RRCnSN/NgS8rrHs51Skvhl0WzV6w==
---- END SSH2 PUBLIC KEY ----
Now I have been trying to replicate the same on a NodeJS lambda with the following code -
const crypto = require('crypto');
const util = require('util');

const generation = util.promisify(crypto.generateKeyPair);
const result = await generation('rsa', {
    modulusLength: 2048,
    publicKeyEncoding: {
        type: 'spki',
        format: 'pem'
    },
    privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem'
    }
});
console.log(result['publicKey']);
console.log(result['privateKey']);
On execution of this lambda, the generated public key looks something like this -
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnF1VDHq0vu5iL0nkbum8
cVzdhxiqmR6XcZbcsilF+Se6tlS9VAbN8QTTLdqwhJ5Dw7DvBGUXpCqUIqyT5IU5
wjQGnWHAWhPmalAgYWDwwdiOxxgd6NnNRR2Q5P4PSruxvFG7BtiSKGXSZpMzTIyZ
sXajEY2vhkf77bMEgzJhpXGAvzZsGEDi9jni8FCabVH6jvXh/svpmoCxwhQY1HHh
9RksscuAfllMwOE4uiQvfq6CpPNJUwU4kWtiaAtgX26nnPvqaUX52xMuYBrWQI2m
vUiXuxynqnrVSAFt/QY/0lMKRgnzwkq6YTIf8PeMQQA6TVQbtGN+j0MFQJDxF2/l
dQIDAQAB
-----END PUBLIC KEY-----
As far as I understand RSA keys, the first and last lines should not make any difference since they are basically markers/comments. But the PuTTYgen key consists of about 5½ lines of base64 content whereas the NodeJS key has 6+ lines. Why is there a difference between the two when both of them have a length of 2048 bits?
Thank you.

Thanks for the comments, they helped me understand the key formats better. The key generated by the NodeJS code is in PEM (SPKI) format, which needs to be converted to the SSH2/OpenSSH format to match the output key from PuTTYgen.
The sshpk module helps convert the PEM key to the OpenSSH format. The code to do that is as follows -
const sshpk = require('sshpk');
const pemPublicKey = sshpk.parseKey(publicKey, 'pem');
const openSSHPublicKey = pemPublicKey.toString('ssh');
console.log(openSSHPublicKey);
This would result in a key with the following format -
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqWWfYMm/q1Ee4AzRJMc+BgsqzVIZgwTidGDb6E3V2EQ85+iX09pmDF8E23Fsmd9WoUMB2L/FXUIlWZmofPmoInKk740UcsQGT0MVYQfDAiVBqrbymIR/lMBCirFpFq7hIDzACgWYAmSEFtDBdP4LmFozx6Vi3+Ss5g8EjkXcDLkPiBSM8YjCQO6CraH1dMIC4/hywitL1G2rngFdwgFAuG1HEqgXZKRZAm2043OVY2SDVnMvHcYjarPk94BydZ7mzZRCdaetDY73+8opVL0uRol2GfYwAoediz65iIm183w4p4F3JH09W4xSoT8FeTRfIc2iAlde3wFA6aK/NV0G5 (unnamed)
The (unnamed) at the end can be removed by adding a name when you are using the parseKey() method.
Now, you could format this string as per your requirements as follows -
let openSSHPublicKeyMaterial = openSSHPublicKey.split(" ")[1];
let formattedSSH2KeyArray = [];
formattedSSH2KeyArray.push('---- BEGIN SSH2 PUBLIC KEY ----');
formattedSSH2KeyArray.push('Comment: ""');
for (let index = 0; index < openSSHPublicKeyMaterial.length; index += 64) { // 64 is the number of characters in a single line
    let singleLine = openSSHPublicKeyMaterial.substring(index, index + 64);
    formattedSSH2KeyArray.push(singleLine);
}
formattedSSH2KeyArray.push('---- END SSH2 PUBLIC KEY ----');
let formattedSSH2Key = formattedSSH2KeyArray.join('\n');
console.log(formattedSSH2Key);
The execution of the above section would give you the key in the required format -
---- BEGIN SSH2 PUBLIC KEY ----
Comment: ""
AAAAB3NzaC1yc2EAAAADAQABAAABAQCqWWfYMm/q1Ee4AzRJMc+BgsqzVIZgwTid
GDb6E3V2EQ85+iX09pmDF8E23Fsmd9WoUMB2L/FXUIlWZmofPmoInKk740UcsQGT
0MVYQfDAiVBqrbymIR/lMBCirFpFq7hIDzACgWYAmSEFtDBdP4LmFozx6Vi3+Ss5
g8EjkXcDLkPiBSM8YjCQO6CraH1dMIC4/hywitL1G2rngFdwgFAuG1HEqgXZKRZA
m2043OVY2SDVnMvHcYjarPk94BydZ7mzZRCdaetDY73+8opVL0uRol2GfYwAoedi
z65iIm183w4p4F3JH09W4xSoT8FeTRfIc2iAlde3wFA6aK/NV0G5
---- END SSH2 PUBLIC KEY ----
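For completeness, here is a minimal end-to-end sketch of how the pieces above could be combined in a Lambda handler (this assumes the function bundles the sshpk package; the SSH2 wrapping from the previous snippet can then be applied to the returned OpenSSH string):
const crypto = require('crypto');
const util = require('util');
const sshpk = require('sshpk');

const generateKeyPair = util.promisify(crypto.generateKeyPair);

exports.handler = async () => {
    // Generate the 2048-bit RSA pair with PEM encodings, as in the question.
    const { publicKey, privateKey } = await generateKeyPair('rsa', {
        modulusLength: 2048,
        publicKeyEncoding: { type: 'spki', format: 'pem' },
        privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
    });
    // Convert the SPKI PEM public key to the one-line OpenSSH format.
    const openSSHPublicKey = sshpk.parseKey(publicKey, 'pem').toString('ssh');
    return { openSSHPublicKey, privateKey };
};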

Related

Solidity and web3 sha3

I'm trying to hash a tokenId with a seed in my smart contract. For simplicity, and to avoid other errors, I leave the seed out for now. I basically just want to hash a number in my contract, hash the same number in my JavaScript code, and receive the same output.
Code looks something like this on Solidity:
function _tokenURI(uint256 tokenId) internal view returns (string memory) {
    string memory currentBaseURI = _baseURI();
    bytes32 hashedToken = keccak256(abi.encodePacked(tokenId));
    return
        bytes(currentBaseURI).length > 0
            ? string(abi.encodePacked(currentBaseURI, hashedToken, baseExtension))
            : "";
}
which also leads to an error on the client side: invalid codepoint at offset. To tackle this I tried to cast the bytes32 to a string using these functions -
function _bytes32ToString(bytes32 _bytes32)
    private
    pure
    returns (string memory)
{
    uint8 i = 0;
    bytes memory bytesArray = new bytes(64);
    for (i = 0; i < bytesArray.length; i++) {
        uint8 _f = uint8(_bytes32[i / 2] & 0x0f);
        uint8 _l = uint8(_bytes32[i / 2] >> 4);
        bytesArray[i] = _toByte(_f);
        i = i + 1;
        bytesArray[i] = _toByte(_l);
    }
    return string(bytesArray);
}

function _toByte(uint8 _uint8) private pure returns (bytes1) {
    if (_uint8 < 10) {
        return bytes1(_uint8 + 48);
    } else {
        return bytes1(_uint8 + 87);
    }
}
though I'm not sure if this is equivalent. Code on the frontend looks like:
const hashed = web3.utils.soliditySha3(
{ type: "uint256", value: tokenId}
);
What do I need to change in order to receive the exact same output? And what does
invalid codepoint at offset
mean?
Maybe the issue is that tokenId is not uint256, or the Web3/Solidity version? I did a few tests with the Remix IDE and I received the same results.
Solidity code:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Hash {
    function getHashValue_1() public view returns (bytes32) {
        return keccak256(abi.encodePacked(uint256(234)));
    }
    // bytes32: 0x61c831beab28d67d1bb40b5ae1a11e2757fa842f031a2d0bc94a7867bc5d26c2

    function getHashValue_3() public view returns (bytes32) {
        return keccak256(abi.encodePacked(uint256(10), string('StringSecretValue')));
    }
    // bytes32: 0x5938b4caf29ac4903ee34628c3dc1eb5c670a6bd392a006d0cb91f1fc5db3819
}
JS code:
(async () => {
    try {
        console.log('Web3 version is ' + Web3.version);
        // Web3 version is 1.3.0
        let theValueYouNeed = web3.utils.soliditySha3("234");
        theValueYouNeed = web3.utils.soliditySha3({type: 'uint256', value: '234'});
        theValueYouNeed = web3.utils.soliditySha3({t: 'uint256', v: '234'});
        // above hashed value is 0x61c831beab28d67d1bb40b5ae1a11e2757fa842f031a2d0bc94a7867bc5d26c2
        console.log('Hashed value 1 is ' + theValueYouNeed);
        theValueYouNeed = web3.utils.soliditySha3({t: 'uint256', v: '10'}, {t: 'string', v: 'StringSecretValue'});
        console.log('Hashed value 2 is ' + theValueYouNeed);
        // above hashed value is 0x5938b4caf29ac4903ee34628c3dc1eb5c670a6bd392a006d0cb91f1fc5db3819
    } catch (e) {
        console.log(e.message);
    }
})();
I'm not sure, but invalid codepoint at offset should mean that a value does not fall within the range or set of allowed values... So maybe there is something wrong with tokenId and you could do some tests with hardcoded values?
You get the invalid codepoint error because you mix string and byte data when you call abi.encodePacked(currentBaseURI, hashedToken, baseExtension).
When JavaScript gets the return value from the contract it expects a UTF-8 string, but inside your hashedToken there are byte values that are not valid in a UTF-8-encoded string.
This kind of error might be "intermittent". It might happen in just some cases. You're lucky to see it during development and not in production.
How to fix it?
You are on the right track converting the hash result to a string.
There is an alternative way to do it in this answer which uses less gas by using only bitwise operations.
To convert the hex value in Javascript you can use web3.utils.hexToNumberString().
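As a small illustrative sketch (assuming web3 v1.x and that you have the bytes32 hash as a hex string on the JS side), the conversion could look like this:
// Compute the hash the same way the contract does, then convert the
// hex value into a decimal number string with hexToNumberString().
const hash = web3.utils.soliditySha3({ t: 'uint256', v: tokenId });
const hashAsNumberString = web3.utils.hexToNumberString(hash);
console.log(hash, hashAsNumberString);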

Different encryption values from NodeJs Crypto and CryptoJS library

I am doing encryption and decryption using both the NodeJS Crypto and CryptoJS libraries, on the backend and frontend. I am able to do the encryption using NodeJS Crypto, which works perfectly fine, and the encrypted string is also consistent with the Java encryption. But when I use the CryptoJS library, the encrypted string is not as expected. I provide the standalone TypeScript code below.
Please help me. I am using the aes-256-cfb8 algorithm. Is it because this algorithm may not be supported by the CryptoJS library? Please give me some direction, I am stuck now.
import * as crypto from "crypto";
import CryptoJS, { mode } from "crypto-js";

export class Test3 {
    private static readonly CRYPTO_ALGORITHM = 'aes-256-cfb8';
    private static readonly DEFAULT_IV: string = "0123456789123456";

    // NodeJS Crypto - works fine
    public encryptValue(valueToEncrypt: string, secretKey: string): string {
        let encryptedValue: string = '';
        const md5 = crypto.createHash('md5').update(secretKey).digest('hex');
        const key: string = md5;
        console.log("MD5 Value : ", md5);
        const iv = Buffer.from(Test3.DEFAULT_IV);
        console.log("iv: ", iv);
        const cipher = crypto.createCipheriv(Test3.CRYPTO_ALGORITHM, key, iv);
        let encrypted = cipher.update(valueToEncrypt, 'utf8', 'base64');
        console.log("encrypted ::: ", encrypted);
        encrypted += cipher.final('base64');
        encryptedValue = encrypted;
        return encryptedValue;
    }

    // CryptoJS library - does not work
    public check3(valueToEncrypt: string, secretKey: string): void {
        const hash = CryptoJS.MD5(secretKey);
        const md5Key: string = hash.toString(CryptoJS.enc.Hex);
        console.log("MD5 Value: ", md5Key); // correct up to here
        const IV11 = CryptoJS.enc.Utf8.parse(Test3.DEFAULT_IV);
        const key11 = CryptoJS.enc.Utf8.parse(md5Key);
        const data = CryptoJS.enc.Utf8.parse(valueToEncrypt);
        const enc1 = CryptoJS.AES.encrypt(data, key11, {
            format: CryptoJS.format.OpenSSL,
            iv: IV11,
            mode: CryptoJS.mode.CFB,
            padding: CryptoJS.pad.NoPadding
        });
        console.log("Cipher text11: ", enc1.toString());
    }
}
const text2Encrypt = "A brown lazy dog";
const someSecretKey: string = "projectId" + "~" + "India";
const test = new Test3();
// Using NodeJS Crypto
const enc = test.encryptValue(text2Encrypt, someSecretKey);
console.log("Encrypted Value Using NodeJS Crypto: ", enc); // ===> nJG4Fk27JZ7E5klLqFAt
// Using CryptoJS
console.log("****************** Using CryptoJS ******************")
test.check3(text2Encrypt, someSecretKey);
Please help me with the check3() method of the above class. I have also checked many related links on SO.
You suspect right. Both codes use different variants of the CFB mode. For CFB a segment size must be defined. The segment size of the CFB mode corresponds to the bits encrypted per encryption step, see CFB.
The crypto module of NodeJS supports several segment sizes, e.g. 8 bits (aes-256-cfb8) or 128 bits (aes-256-cfb), while CryptoJS only supports a segment size of 128 bits (CryptoJS.mode.CFB).
If the segment size is changed to 128 bits (aes-256-cfb) in the crypto module, both ciphertexts match.
Edit:
In this post you will find an extension of CryptoJS for the CFB mode that also supports a variable segment size, e.g. 8 bits.
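To illustrate the first option, here is a minimal sketch (a hypothetical encryptCfb128() helper, not the asker's original method) of the Node side with only the cipher name changed from aes-256-cfb8 to aes-256-cfb, i.e. 128-bit segments, which is what CryptoJS.mode.CFB uses:
import * as crypto from "crypto";

// Same key derivation and IV as encryptValue() above; only the cipher name changes.
function encryptCfb128(valueToEncrypt: string, secretKey: string): string {
    const key = crypto.createHash('md5').update(secretKey).digest('hex'); // 32-byte key
    const iv = Buffer.from("0123456789123456");
    const cipher = crypto.createCipheriv('aes-256-cfb', key, iv);
    let encrypted = cipher.update(valueToEncrypt, 'utf8', 'base64');
    encrypted += cipher.final('base64');
    return encrypted;
}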

AES encryption in JS equivalent of C#

I need to encrypt a string using AES encryption. This encryption was happening in C# earlier, but it needs to be converted into JavaScript (will be run on a browser).
The current code in C# for encryption is as follows -
public static string EncryptString(string plainText, string encryptionKey)
{
    byte[] clearBytes = Encoding.Unicode.GetBytes(plainText);
    using (Aes encryptor = Aes.Create())
    {
        Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(encryptionKey, new byte[] { 0x49, 0x76, 0x61, 0x6e, 0x20, 0x4d, 0x65, 0x64, 0x76, 0x65, 0x64, 0x65, 0x76 });
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(clearBytes, 0, clearBytes.Length);
                cs.Close();
            }
            plainText = Convert.ToBase64String(ms.ToArray());
        }
    }
    return plainText;
}
I have tried to use CryptoJS to replicate the same functionality, but it's not giving me the equivalent encrypted base64 string. Here's my CryptoJS code -
function encryptString(encryptString, secretKey) {
    var iv = CryptoJS.enc.Hex.parse('Ivan Medvedev');
    var key = CryptoJS.PBKDF2(secretKey, iv, { keySize: 256 / 32, iterations: 500 });
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}
The encrypted string has to be sent to a server which will be able to decrypt it. The server is able to decrypt the encrypted string generated from the C# code, but not the encrypted string generated from the JS code. I compared the encrypted strings generated by both and found that the C# code generates longer encrypted strings. For example, keeping 'Example String' as plainText and 'Example Key' as the key, I get the following result -
C# - eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
JS - 9ex5i2g+8iUCwdwN92SF+A==
The JS encrypted string is always shorter than the C# one. Is there something I am doing wrong? I just have to replicate the C# code in the JS code.
Update:
My current code after Zergatul's answer is this -
function encryptString(encryptString, secretKey) {
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    console.log(keyBytes.toString());
    // take first 32 bytes as key (like in C# code)
    var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
    // skip first 32 bytes and take next 16 bytes as IV
    var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
    console.log(key.toString());
    console.log(iv.toString());
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}
As illustrated in his/her answer, if the C# code converts the plainText into bytes using ASCII instead of Unicode, both the C# and JS code produce exactly the same result. But since I am not able to modify the decryption code, I have to make the JS code equivalent to the original C# code, which uses Unicode.
So I tried to see what the difference is between the byte arrays produced by the ASCII and Unicode conversions in C#. Here's what I found -
ASCII Byte Array: [69,120,97,109,112,108,101,32,83,116,114,105,110,103]
Unicode Byte Array: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0]
So there is an extra byte for each character in the Unicode case (Unicode, here UTF-16, allocates two bytes per character whereas ASCII allocates one).
Here's the difference between the ASCII and Unicode conversions respectively -
ASCII
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eQus9GLPKULh9vhRWOJjog==
Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
Since both the key and IV have exactly the same byte arrays in the Unicode and ASCII approaches, they should not have produced different outputs, but somehow they do. I think it's because of clearBytes' length, since its length determines how much is written to the CryptoStream.
I tried to see what the bytes generated in the JS code look like and found that it uses words, which need to be converted into strings using the toString() method.
keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7
Since I am not able to affect the generated encrypted string's length in the JS code (no direct access to the write stream), I was still stuck here.
Here is an example of how to reproduce the same ciphertext between C# and CryptoJS:
static void Main(string[] args)
{
    byte[] plainText = Encoding.Unicode.GetBytes("Example String"); // this is UTF-16 LE
    string cipherText;
    using (Aes encryptor = Aes.Create())
    {
        var pdb = new Rfc2898DeriveBytes("Example Key", Encoding.ASCII.GetBytes("Ivan Medvedev"));
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(plainText, 0, plainText.Length);
                cs.Close();
            }
            cipherText = Convert.ToBase64String(ms.ToArray());
        }
    }
    Console.WriteLine(cipherText);
}
And JS:
var keyBytes = CryptoJS.PBKDF2('Example Key', 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
// take first 32 bytes as key (like in C# code)
var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
// skip first 32 bytes and take next 16 bytes as IV
var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
// use the same encoding as in C# code, to convert string into bytes
var data = CryptoJS.enc.Utf16LE.parse("Example String");
var encrypted = CryptoJS.AES.encrypt(data, key, { iv: iv });
console.log(encrypted.toString());
Both codes return: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
TL;DR the final code looks like this -
function encryptString(encryptString, secretKey) {
    encryptString = addExtraByteToChars(encryptString);
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    console.log(keyBytes.toString());
    var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
    var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}

function addExtraByteToChars(str) {
    let strResult = '';
    for (var i = 0; i < str.length; ++i) {
        strResult += str.charAt(i) + String.fromCharCode(0);
    }
    return strResult;
}
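A brief usage sketch (the function above with the question's sample values; per the comparison later in this answer, the output should match the C# Unicode result):
var cipher = encryptString("Example String", "Example Key");
console.log(cipher.toString()); // expected: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=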
Explanation:
The C# code in Zergatul's answer (thanks to him/her) uses ASCII to convert the plainText into bytes, while my C# code uses Unicode. Unicode assigns an extra byte to each character in the resulting byte array, which does not affect the generation of the key and IV bytes, but does affect the result, since the length of the encrypted string depends on the length of the bytes generated from the plainText.
As seen in the following bytes generated for each of them using "Example String" and "Example Key" as the plainText and secretKey respectively -
ASCII
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eQus9GLPKULh9vhRWOJjog==
Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
The JS result was similar too, which confirmed that it's using ASCII byte conversion -
keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7
Thus I just needed to make the plainText produce the Unicode-equivalent bytes (sorry, not familiar with the exact term). Since Unicode assigns two bytes to each character in the byte array, keeping the second byte as 0, I basically created a gap after each character of the plainText and filled that gap with a character whose code is 0, using the addExtraByteToChars() function. And it made all the difference.
It's a workaround for sure, but it started working for my scenario. I suppose this may or may not prove useful to others, thus sharing the findings. If anyone can suggest a better implementation of the addExtraByteToChars() function (probably a proper term for this conversion instead of ASCII to Unicode, or a better, more efficient, and less hacky way to do it), please suggest it.
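One possibly cleaner alternative (a sketch based on Zergatul's snippet earlier in this answer, not something verified beyond that): instead of interleaving NUL characters by hand, let CryptoJS do the UTF-16 LE conversion inside encryptString() -
// Replace the addExtraByteToChars() call with an explicit UTF-16 LE parse.
var data = CryptoJS.enc.Utf16LE.parse(encryptString);
var encrypted = CryptoJS.AES.encrypt(data, key, { iv: iv });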

java encryption aes value and javascript encryption value does not match

I have Java code which produces an AES encryption value for me. Now I am trying to do the same in JavaScript using crypto-js, but the two pieces of code produce different keys and I don't know why, or how to get the same key. Here is my code -
public static String encrypt(String text, byte[] iv, byte[] key) throws Exception {
    Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
    SecretKeySpec keySpec = new SecretKeySpec(key, "AES");
    System.out.println("KEY SPECCCC: " + keySpec);
    IvParameterSpec ivSpec = new IvParameterSpec(iv);
    cipher.init(Cipher.ENCRYPT_MODE, keySpec, ivSpec);
    byte[] results = cipher.doFinal(text.getBytes("UTF-8"));
    BASE64Encoder encoder = new BASE64Encoder();
    return encoder.encode(results);
}
JavaScript code
require(["crypto-js/core", "crypto-js/aes"], function (CryptoJS, AES) {
ciphertext = CryptoJS.AES.encrypt(JSON.stringify(jsondata),
arr.toString(),arr.toString());
});
string to utf-8
var utf8 = unescape(encodeURIComponent(key));
var arr = [];
for (var i = 0; i < utf8.length; i++) {
    arr.push(utf8.charCodeAt(i));
}
First of all, even though your code works fine, you won't be able to decrypt the result properly, because while creating your AES cipher in Java you are using CBC mode and a padding algorithm, PKCS5Padding.
So your Java code does the following:
When it gets the input it first divides it into 16-byte blocks; if the input doesn't divide evenly into 16-byte blocks, the final block is padded, with each padding byte set to the number of padding bytes that were added.
So the Java side encrypts the padded data, but in the JavaScript part you neither declare which mode AES will use nor what type of padding it is supposed to apply. So you should add those values to your code; see the following code parts.
mode:CryptoJS.mode.CBC,
padding: CryptoJS.pad.Pkcs7
About the different keys: this is occurring because you are passing a byte[] into your encrypt method and then using this unknown byte[] while creating your key. You didn't mention how your encryption method will be used in your program, but you should create that byte[] key the same way in both places. For instance, you can refer to the following code as an example of generating it, but it is not a secure way of generating keys; I just added it to show what I mean by generating both keys the same way.
//DONT USE THIS IMPLEMENTATION SINCE IT IS NOT SAFE!
byte[] key = (username + password).getBytes("UTF-8");
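For clarity, here is a sketch of what the CryptoJS call might look like with the mode and padding made explicit to match Java's "AES/CBC/PKCS5Padding" (this assumes key and iv are CryptoJS WordArrays built from the same bytes the Java side uses, e.g. via the ByteArray helper shown in the next answer):
var encrypted = CryptoJS.AES.encrypt(JSON.stringify(jsondata), key, {
    iv: iv,
    mode: CryptoJS.mode.CBC,
    padding: CryptoJS.pad.Pkcs7
});
console.log(encrypted.toString());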
The Java code generates an encrypted string, and for JavaScript to generate the same encrypted string, the following code works!
(function (CryptoJS) {
    var C_lib = CryptoJS.lib;
    // Converts a ByteArray to a standard WordArray.
    // Example: CryptoJS.MD5(CryptoJS.lib.ByteArray([ Bytes ])).toString(CryptoJS.enc.Base64);
    C_lib.ByteArray = function (arr) {
        var word = [];
        for (var i = 0; i < arr.length; i += 4) {
            word.push(arr[i + 0] << 24 | arr[i + 1] << 16 | arr[i + 2] << 8 | arr[i + 3] << 0);
        }
        return C_lib.WordArray.create(word, arr.length);
    };
})(CryptoJS);
var IVstring = CryptoJS.lib.ByteArray(your IV bytearray).toString(CryptoJS.enc.Base64);
var keystring = CryptoJS.lib.ByteArray(your KEY bytearray).toString(CryptoJS.enc.Base64);
var text = 'texttobeencrypted';
var key = CryptoJS.enc.Base64.parse(keystring);
var iv = CryptoJS.enc.Base64.parse(IVstring);
var encrypted = CryptoJS.AES.encrypt(text, key, {iv: iv});
console.log(encrypted.toString());
Edited: Removed dangerous third party resource reference.

Issue using RSA encryption in javascript

I'm working on a project that currently does encryption in a Salesforce Apex class (using the Crypto library), and that logic needs to be moved into a JavaScript file. The Node.js package I'm trying to use to do the encryption is node-rsa.
Here's the code that currently exists in apex:
String algName = 'RSA';
blob signature;
String signGen = '';
String pKey = 'MIIEvgIBADANBgkqhkiG<rest of key snipped>';
String payload = 'some payload';
blob privateKey = EncodingUtil.base64Decode(pKey);
blob input = Blob.valueOf(payload);
signature = Crypto.sign(algName, input, privateKey);
signGen = EncodingUtil.base64Encode(signature);
And here's the initial javascript implementation:
var tmp = forge.util.decode64(pKey);
var privateKey2 = new NodeRSA(tmp);
var payload = 'some payload';
var encrypted = privateKey2.encrypt(payload, 'base64');
The problem I'm having is that the line:
var privateKey2 = new NodeRSA(tmp);
is causing the following error:
Invalid PEM format
The private key that node-rsa uses in their example has markers at the beginning and end of the key:
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
So I'm not sure if I have to somehow indicate to the node-rsa library that this key is in a different format. Or maybe there's another RSA javascript library I could try using?
I left you a response for how to do this using forge here: https://github.com/digitalbazaar/forge/issues/150
var pkey = 'some base64-encoded private key';
var pkeyDer = forge.util.decode64(pkey);
var pkeyAsn1 = forge.asn1.fromDer(pkeyDer);
var privateKey = forge.pki.privateKeyFromAsn1(pkeyAsn1);
// above could be simplified if pkey is stored in standard PEM format, then just do this:
// var pkey = 'some private key in pem format';
// var privateKey = forge.pki.privateKeyFromPem(pkey);
var payload = 'some string payload';
var md = forge.md.sha1.create();
md.update(payload, 'utf8');
var signature = privateKey.sign(md);
var signature64 = forge.util.encode64(signature);
// signature64 is now a base64-encoded RSA signature on a SHA-1 digest
// using PKCS#1v1.5 padding... see the examples for other padding options if necessary
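(Not part of the linked issue, just a hedged sketch: since pKey in the Apex code is the base64 of the DER bytes, wrapping it in PEM armor is another way to get a key that forge's privateKeyFromPem() will accept. The 'PRIVATE KEY' label assumes the key is PKCS#8, which a key starting with 'MIIEvgIBADANBgkqhkiG' typically is.)
// Wrap base64-encoded DER in PEM armor (64-character lines plus BEGIN/END markers).
function derBase64ToPem(base64Der, label) {
    var body = base64Der.match(/.{1,64}/g).join('\n');
    return '-----BEGIN ' + label + '-----\n' + body + '\n-----END ' + label + '-----\n';
}

var pem = derBase64ToPem(pKey, 'PRIVATE KEY');
var privateKey = forge.pki.privateKeyFromPem(pem);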
