Need help displaying the result of an asynchronous function on a webpage - javascript

I am trying to use Spotify's PKCE authorization with Siri Shortcuts. Unfortunately, none of the solutions I have found apply to my specific situation. I have the bit of code below, and I really have no idea what I am doing. Basically I need a SHA-256 hash of a string of characters, but as raw bytes rather than hex. That result then needs to be base64url encoded. I've tried most of the solutions on Stack Overflow, but I can't seem to output the final product onto a webpage, which is the main way I am able to run JavaScript natively on an iPhone. Any help would be greatly appreciated.
<script>
  function sha256(plain) {
    // returns promise ArrayBuffer
    const encoder = new TextEncoder();
    const data = encoder.encode(plain);
    return window.crypto.subtle.digest('SHA-256', data);
  }

  function base64urlencode(a) {
    // Convert the ArrayBuffer to string using Uint8 array.
    // btoa takes chars from 0-255 and base64 encodes.
    // Then convert the base64 encoded to base64url encoded.
    // (replace + with -, replace / with _, trim trailing =)
    return btoa(String.fromCharCode.apply(null, new Uint8Array(a)))
      .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
  }

  async function pkce_challenge_from_verifier(v) {
    hashed = await sha256(v);
    base64encoded = base64urlencode(hashed);
    return base64encoded;
  }

  const code = await pkce_challenge_from_verifier("Zg6klgrnixQJ629GsawRMV8MjWvwRAr-vyvP1MHnB6X8WKZN")
  document.getElementById("value").innerHTML = code;
</script>
<body>
  <p id="value"></p>
</body>
</html>

await is not allowed at the global level; you can only use await inside async functions. So how do you solve it? Use the promise directly instead of await, like this:
pkce_challenge_from_verifier("Zg6klgrnixQJ629GsawRMV8MjWvwRAr-vyvP1MHnB6X8WKZN")
  .then((code) => document.getElementById("value").innerHTML = code)
  .catch((error) => console.error(error))
Also place the <script> ... </script> block after the <p> element (for example, just before the closing </body> tag); otherwise the JS runs before the element exists and can't find it.
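Putting both fixes together, here is a minimal sketch of a corrected page (same verifier string and element id as in the question):

<!DOCTYPE html>
<html>
<body>
  <p id="value"></p>
  <script>
    function sha256(plain) {
      // returns a Promise that resolves to an ArrayBuffer
      const encoder = new TextEncoder();
      return window.crypto.subtle.digest('SHA-256', encoder.encode(plain));
    }

    function base64urlencode(a) {
      return btoa(String.fromCharCode.apply(null, new Uint8Array(a)))
        .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
    }

    async function pkce_challenge_from_verifier(v) {
      const hashed = await sha256(v);
      return base64urlencode(hashed);
    }

    pkce_challenge_from_verifier("Zg6klgrnixQJ629GsawRMV8MjWvwRAr-vyvP1MHnB6X8WKZN")
      .then((code) => { document.getElementById("value").innerHTML = code; })
      .catch((error) => console.error(error));
  </script>
</body>
</html>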

Related

JavaScript FileReader class is not reading the contents properly

I am using the readAsText method of the FileReader class (JavaScript) with encoding type "UTF-8" to read a file from the client. It works well for all kinds of characters with code points ranging from 1 to 65000. The only problem I have is that when I read the file chunk by chunk, a character with a code point above 3000 is sometimes not read properly. After investigating, I found that it happens only when I read big files this way and that particular character happens to sit as the first letter of a chunk. I tested with multiple chunks of a file; the problem does not happen for all the chunks, only one or two chunks out of ten. This is weird and strange. Am I missing something here? And do we have any other options to read a local file in JavaScript? Any help will be much appreciated.
This might be one solution:
new Blob(['hi']).text().then(console.log)
Here is another, not as cross-browser friendly... but this could work:
new Blob(['foo']) // or new File(['foo'], 'test.txt')
  .stream()
  .pipeThrough(new TextDecoderStream('utf-8'))
  .pipeTo(new WritableStream({
    write(part) {
      console.log(part)
    }
  }))
Another lower-level solution that doesn't depend on WritableStream or TextDecoderStream is to use a regular TextDecoder with the stream option:
var res = ''
const decoder = new TextDecoder()
// chunk1..chunk3 are ArrayBuffer / typed-array pieces of the file
res += decoder.decode(chunk1, { stream: true })
res += decoder.decode(chunk2, { stream: true })
res += decoder.decode(chunk3, { stream: true })
res += decoder.decode() // flush the end
How you get each chunk could be by using new Response(blob).body, by blob.stream(), or simply by slicing the blob with blob.slice(start, end) and using FileReader.prototype.readAsArrayBuffer(). See the sketch below.
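As a rough sketch of that last combination (the chunk size and helper name are just placeholders; blob.slice() plus a single streaming TextDecoder is the part the answer relies on):

// Sketch: read a Blob/File in slices and decode safely across chunk
// boundaries with one streaming TextDecoder.
async function readTextInChunks(blob, chunkSize = 1024 * 1024) {
  const decoder = new TextDecoder('utf-8');
  let result = '';
  for (let offset = 0; offset < blob.size; offset += chunkSize) {
    const slice = blob.slice(offset, offset + chunkSize);
    const buffer = await slice.arrayBuffer(); // or FileReader.readAsArrayBuffer
    result += decoder.decode(buffer, { stream: true });
  }
  result += decoder.decode(); // flush any bytes held back at a boundary
  return result;
}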

MD5 always same hash when drag & drop in JavaScript

I have a drag & drop event and I would like to hash the dragged file. I have this:
var file = ev.dataTransfer.items[i].getAsFile();
var hashf = CryptoJS.SHA512(file).toString();
console.log("hashf", hashf)
But when I drag different files, "hashf" is always the same string.
https://jsfiddle.net/9rfvnbza/1/
The issue is that you are attempting to hash the File object itself. CryptoJS's hash functions expect a string (or WordArray) to hash.
When you pass the File object to CryptoJS.SHA512(), the API converts the object to a string. That conversion results in CryptoJS.SHA512() receiving the same string no matter which File object you provide.
That string is "[object File]" - you can replace file in your code with that literal string and you will get the same hash you've been seeing all along.
To fix this, retrieve the text from the file first and pass that to the hashing algorithm:
file.text().then((text) => {
  const hashf = CryptoJS.SHA512(text).toString();
  console.log("hashf", hashf);
});
If you prefer async/await, you can put it in an IIFE:
(async () => {
  const text = await file.text();
  const hashf = CryptoJS.SHA512(text).toString();
  console.log("hashf", hashf);
})();
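Note that file.text() decodes the file as UTF-8, so for binary files the result may not match a hash computed over the raw bytes by other tools. If you need the byte-level hash, a sketch (assuming crypto-js v4, where WordArray.create accepts typed arrays) would be:

(async () => {
  // Hash the file's raw bytes instead of its decoded text.
  const buffer = await file.arrayBuffer();
  const wordArray = CryptoJS.lib.WordArray.create(new Uint8Array(buffer));
  const hashf = CryptoJS.SHA512(wordArray).toString();
  console.log("hashf", hashf);
})();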

GZip decompression in JavaScript with offset

An API that I am trying to use returns base64-encoded responses. The response is first compressed using GZip with a 4-byte offset (a length prefix) and then base64 encoded. I tried parsing the response using JavaScript (pako and zlib) and in both cases it failed. The API has example C# code showing how the response decompression should work, but I don't really know how to translate that into JavaScript. So can anyone give me a hand transforming this function into JavaScript, or just give me some tips on how to handle the 4-byte offset? I didn't find anything relevant in the libraries' documentation.
public string Decompress(string value)
{
    byte[] gzBuffer = Convert.FromBase64String(value);
    using (MemoryStream ms = new MemoryStream())
    {
        int msgLength = BitConverter.ToInt32(gzBuffer, 0);
        ms.Write(gzBuffer, 4, gzBuffer.Length - 4);
        byte[] buffer = new byte[msgLength];
        ms.Position = 0;
        using (System.IO.Compression.GZipStream zip = new System.IO.Compression.GZipStream(ms, System.IO.Compression.CompressionMode.Decompress))
        {
            zip.Read(buffer, 0, buffer.Length);
        }
        return System.Text.Encoding.Unicode.GetString(buffer, 0, buffer.Length);
    }
}
I'm going to use fflate (disclaimer, I'm the author) to do this. If you want to translate that function line for line:
// This is ES6 code; if you want better browser compatibility
// use the ES5 variant.
import { gunzipSync, strToU8, strFromU8 } from 'fflate';
const decompress = str => {
  // atob converts Base64 to Latin-1
  // strToU8(str, true) converts Latin-1 to binary
  const bytes = strToU8(atob(str), true);
  // subarray creates a new view on the same memory buffer
  // gunzipSync synchronously decompresses
  // strFromU8 converts decompressed binary to UTF-8
  return strFromU8(gunzipSync(bytes.subarray(4)));
}
If you don't know what ES6 is:
In your HTML file:
<script src="https://cdn.jsdelivr.net/npm/fflate/umd/index.js"></script>
In your JS:
var decompress = function(str) {
  var bytes = fflate.strToU8(atob(str), true);
  return fflate.strFromU8(fflate.gunzipSync(bytes.subarray(4)));
}
I'd like to mention that streams are almost totally useless if you're going to accumulate into a string at the end, so the C# code is suboptimal. At the same time, since you're using the standard library, it is the only option.
In addition, I'd highly recommend using the callback variant (i.e. gunzip instead of gunzipSync) if possible because that runs on a separate thread to avoid causing the browser to freeze.
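For reference, here is a sketch of that callback variant with the same UMD build (apiResponseBase64 is just a placeholder for the string the API returns):

var decompressAsync = function(str, cb) {
  var bytes = fflate.strToU8(atob(str), true);
  // gunzip decompresses off the main thread and calls back when done
  fflate.gunzip(bytes.subarray(4), function(err, decompressed) {
    if (err) return cb(err);
    cb(null, fflate.strFromU8(decompressed));
  });
};

decompressAsync(apiResponseBase64, function(err, text) {
  if (err) return console.error(err);
  console.log(text);
});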

Mechanisms for hashing a file in JavaScript

I am relatively new to JavaScript and I want to get the hash of a file, and would like to better understand the mechanism and code behind the process.
So, what I need: An MD5 or SHA-256 hash of an uploaded file to my website.
My understanding of how this works: a file is uploaded via an HTML input tag of type 'file', after which it is converted to a binary string, which is subsequently hashed.
What I have so far: I have managed to get the hash of an input of type 'text', and also, somehow, the hash of an uploaded file, although the hash did not match the ones from websites I checked online, so I'm guessing it hashed some other details of the file instead of the binary content.
Question 1: Am I correct in my understanding of how a file is hashed? Meaning, is it the binary string that gets hashed?
Question 2: What should my code look like to upload a file, hash it, and display the output?
Thank you in advance.
Basically yes, that's how it works.
But to generate such a hash you don't need to do the conversion to a string yourself. Instead, let the SubtleCrypto API handle it, and just pass an ArrayBuffer of your file.
async function getHash(blob, algo = "SHA-256") {
  // convert your Blob to an ArrayBuffer
  // could also use a FileReader for this in browsers that don't support the Response API
  const buf = await new Response(blob).arrayBuffer();
  const hash = await crypto.subtle.digest(algo, buf);
  let result = '';
  const view = new DataView(hash);
  for (let i = 0; i < hash.byteLength; i += 4) {
    // each Uint32 is 8 hex characters
    result += view.getUint32(i).toString(16).padStart(8, '0');
  }
  return result;
}
inp.onchange = e => {
  getHash(inp.files[0]).then(console.log);
};
<input id="inp" type="file">
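To also display the result on the page (the second part of the question), a minimal sketch wiring the getHash() function above to a hypothetical output element could look like this:

<input id="inp" type="file">
<p id="out"></p>
<script>
  document.getElementById('inp').onchange = () => {
    const file = document.getElementById('inp').files[0];
    if (!file) return;
    getHash(file).then(hex => {
      document.getElementById('out').textContent = hex;
    });
  };
</script>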

How to convert character encoding from CP932 to UTF-8 in Node.js, using the nodejs-iconv module (or another solution)

I'm attempting to convert a string from CP932 (aka Windows-31J) to UTF-8 in JavaScript. Basically I'm crawling a site that ignores the UTF-8 request in the request header and returns CP932-encoded text (even though the HTML meta tag indicates that the page is shift_jis).
Anyway, I have the entire page stored in a string variable called "html". From there I'm attempting to convert it to UTF-8 using this code:
var Iconv = require('iconv').Iconv;
var conv = new Iconv('CP932', 'UTF-8//TRANSLIT//IGNORE');
var myBuffer = new Buffer(html.length * 3);
myBuffer.write(html, 0, 'utf8')
var utf8html = (conv.convert(myBuffer)).toString('utf8');
The result is not what it's supposed to be. For example, the string: "投稿者さんの 稚内全日空ホテル のクチコミ (感想・情報)" comes out as "ソスソスソスeソスメゑソスソスソスソスソス ソスtソスソスソスSソスソスソスソスソスzソスeソスソス ソスフクソス`ソスRソス~ (ソスソスソスzソスEソスソスソスソス)"
If I remove //TRANSLIT//IGNORE (which should make it return similar characters for missing characters and, failing that, omit non-transcodable characters), I get this error:
Error: EILSEQ, Illegal character sequence.
I'm open to using any solution that can be implemented in nodejs, but my search results haven't yielded many options outside of the nodejs-iconv module.
nodejs-iconv ref: https://github.com/bnoordhuis/node-iconv
Thanks!
Edit 24.06.2011:
I've gone ahead and implemented a solution in Java. However, I'd still be interested in a JavaScript solution to this problem if somebody can solve it.
I ran into the same trouble today :)
It depends on libiconv. You need libiconv-1.13-ja-1.patch.
Please check the following:
http://d.hatena.ne.jp/ushiboy/20110422/1303481470
http://code.xenophy.com/?p=1529
Or you can avoid the problem by using iconv-jp instead:
npm install iconv-jp
I had the same problem, but with CP1250. I looked for the problem everywhere and everything was OK, except the call to request – I had to add encoding: 'binary'.
request = require('request')
Iconv = require('iconv').Iconv

request({uri: url, encoding: 'binary'}, function(err, response, body) {
  body = new Buffer(body, 'binary')
  iconv = new Iconv('CP1250', 'UTF8')
  body = iconv.convert(body).toString()
  // ...
})
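As a side note, new Buffer() is deprecated in current Node.js; here is a sketch of the same fix applied to the original CP932 case using Buffer.from (variable names are just placeholders):

const request = require('request')
const Iconv = require('iconv').Iconv

request({ uri: url, encoding: 'binary' }, function (err, response, body) {
  if (err) return console.error(err)
  // Keep the raw bytes intact, then convert CP932 -> UTF-8
  const raw = Buffer.from(body, 'binary')
  const conv = new Iconv('CP932', 'UTF-8//TRANSLIT//IGNORE')
  const utf8html = conv.convert(raw).toString('utf8')
  // ...
})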
https://github.com/bnoordhuis/node-iconv/issues/19
I tried /Users/Me/node_modules/iconv/test.js with
node test.js
and it returned an error.
On Mac OS X Lion, this problem seems to depend on gcc.
