I have not yet found a working solution anywhere for this simple thing: I want to read the content of a JSON file that I previously uploaded to IPFS.
const chunks = [];
const client = await create('https://ipfs.infura.io:5001/api/v0');
for await (const chunk of client.cat(
'QmPChd2hVbrJ6bfo3WBcTW4iZnpHm8TEzWkLHmLpXhF68A',
)) {
chunks.push(chunk);
}
console.log(chunks);
Output: 72,101,108,108,111,44,32,60,89,79,85,82,32,78,65,77,69,32,72,69,82,69,62
I tried this snippet, which I found in various forms online, but it gives me a Uint8Array of numbers, and I have not been able to convert the chunks into a JSON object, which is what I need. Can somebody please provide a snippet for this?
I am using
"ipfs-http-client": "^53.0.1"
Thanks in advance!
You'll need to decode the Uint8Array chunks into a string before parsing. Note that calling .toString() directly on a Uint8Array gives you the comma-separated byte values you are seeing, so decode the bytes instead, for example in Node.js:
const result = JSON.parse(Buffer.from(chunk).toString('utf8'));
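If the content spans multiple chunks, concatenate them before decoding. A minimal sketch, reusing the chunks array collected in the question's loop (it assumes a runtime with TextDecoder, which modern browsers and Node.js both provide):
// total length of all chunks
const length = chunks.reduce((sum, c) => sum + c.length, 0);
// copy every chunk into one contiguous Uint8Array
const data = new Uint8Array(length);
let offset = 0;
for (const chunk of chunks) {
  data.set(chunk, offset);
  offset += chunk.length;
}
// decode the bytes as UTF-8 text and parse the JSON
const json = JSON.parse(new TextDecoder().decode(data));
console.log(json);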
I am using the readAsText method of the FileReader class (JavaScript) with the encoding set to "UTF-8" to read a file from the client. It works well for all kinds of characters with code points ranging from 1 to 65000. The only problem I have is that when I read the file chunk by chunk, any character with a code point above 3000 is sometimes not read properly. After investigating, I found that this happens only when I read large files and that particular character happens to sit at the very start of a chunk. I tested with multiple chunks of a file; the problem does not happen for all chunks, only one or two chunks out of ten. This is weird and strange. Am I missing something here? And do we have any other options for reading a local file in JavaScript? Any help will be much appreciated.
This might be one solution:
new Blob(['hi']).text().then(console.log)
Here is another that is not as cross-browser friendly, but it could work:
new Blob(['foo']) // or new File(['foo'], 'test.txt')
  .stream()
  .pipeThrough(new TextDecoderStream('utf-8'))
  .pipeTo(new WritableStream({
    write(part) {
      console.log(part)
    }
  }))
Another, lower-level solution that doesn't depend on WritableStream or TextDecoderStream is to use a regular TextDecoder with the stream option:
var res = ''
const decoder = new TextDecoder()
res += decoder.decode(chunk1, { stream: true })
res += decoder.decode(chunk2, { stream: true })
res += decoder.decode(chunk3, { stream: true })
res += decoder.decode() // flush the end
How you get each chunk is up to you: you could use new Response(blob).body, blob.stream(), or simply slice the blob with blob.slice(start, end) and read each slice with FileReader.prototype.readAsArrayBuffer().
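Putting that together, here is a minimal sketch of reading a file in fixed-size slices while letting the streaming TextDecoder handle multi-byte characters that span chunk boundaries (the chunk size here is an arbitrary choice):
async function readFileInChunks(file, chunkSize = 64 * 1024) {
  const decoder = new TextDecoder('utf-8')
  let text = ''
  for (let start = 0; start < file.size; start += chunkSize) {
    // read one slice of the file as raw bytes
    const buf = await file.slice(start, start + chunkSize).arrayBuffer()
    // stream: true keeps incomplete multi-byte sequences for the next call
    text += decoder.decode(buf, { stream: true })
  }
  text += decoder.decode() // flush any remaining bytes
  return text
}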
I am receiving data read from an S3 bucket, which contains .html files. They are being received by Node like so:
{"type":"Buffer","data":[60,104,116,109,108,32,120,109,108,110,115,58,118,61,34,117,114 ....]}
Is there any way to take this buffer and insert it directly into something like an iframe src? Or what would be the best way to go about taking this buffer and showing it as an HTML page?
I was also thinking of taking the buffer, writing it to a file with fs.writeFile(...), and using the local path as the src.
Any suggestions / advice would be appreciated thank you in advance.
That would work as well
const d = {"type":"Buffer","data":[60,104,116,109,108,32,120,109,108,110,115,58,118,61,34,117,114]}
const r = Buffer.from(d.data).toString()
console.log(r) // <html xmlns:v="ur
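If the goal is to render that markup in the browser rather than write it to disk, one option (a sketch, assuming the serialized Buffer object d reaches the client unchanged) is to decode it there and assign it to an iframe's srcdoc:
// in the browser, given the serialized Buffer object `d` from the server
const html = new TextDecoder().decode(new Uint8Array(d.data))
document.querySelector('iframe').srcdoc = html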
You can convert the byte array to a string easily. Given your partial Buffer, it would look like this (note that String.fromCharCode treats each byte as one character, which is fine for ASCII content but will mangle multi-byte UTF-8):
const dataFromS3 = {"type":"Buffer","data":[60,104,116,109,108,32,120,109,108,110,115,58,118,61,34,117,114]};
const output = String.fromCharCode.apply(null, dataFromS3.data);
console.log(output);
I have a drag-and-drop event and I would like to hash the dragged file. I have this:
var file = ev.dataTransfer.items[i].getAsFile();
var hashf = CryptoJS.SHA512(file).toString();
console.log("hashf", hashf)
But when I drag different files, "hashf" is always the same string.
https://jsfiddle.net/9rfvnbza/1/
The issue is that you are attempting to hash the File object. Hash algorithms expect a string to hash.
When you pass the File object to the CryptoJS.SHA512() method, the API attempts to convert the object to a string. That conversion results in CryptoJS.SHA512() receiving the same string no matter what File object you provide.
That string is "[object File]" - you can replace file in your code with that string and discover it produces the same hash you've seen all along.
To fix this, retrieve the text from the file first and pass that to the hashing algorithm:
file.text().then((text) => {
  const hashf = CryptoJS.SHA512(text).toString();
  console.log("hashf", hashf);
});
If you prefer async/await, you can put it in an IIFE:
(async () => {
  const text = await file.text()
  const hashf = CryptoJS.SHA512(text).toString();
  console.log("hashf", hashf);
})();
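Note that file.text() decodes the file as UTF-8 text, which is fine for text files but can alter bytes in binary files. If you need to hash arbitrary binary files, here is a sketch of hashing the raw bytes instead, assuming a crypto-js build whose WordArray.create() accepts typed arrays:
(async () => {
  const buf = await file.arrayBuffer();
  // wrap the raw bytes so CryptoJS hashes them without a lossy text decode
  const wordArray = CryptoJS.lib.WordArray.create(new Uint8Array(buf));
  const hashf = CryptoJS.SHA512(wordArray).toString();
  console.log("hashf", hashf);
})();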
I am relatively new to JavaScript and I want to get the hash of a file, and would like to better understand the mechanism and code behind the process.
So, what I need: An MD5 or SHA-256 hash of an uploaded file to my website.
My understanding of how this works: A file is uploaded via an HTML input tag of type 'file', after which it is converted to a binary string, which is consequently hashed.
What I have so far: I have managed to get the hash of an input of type 'text', and also, somehow, the hash of an uploaded file, although the hash did not match the ones produced by websites I checked online, so I'm guessing it hashed some other details of the file instead of the binary string.
Question 1: Am I correct in my understanding of how a file is hashed? Meaning, is it the binary string that gets hashed?
Question 2: What should my code look like to upload a file, hash it, and display the output?
Thank you in advance.
Basically yes, that's how it works.
But to generate such a hash, you don't need to do the conversion to a string yourself. Instead, let the SubtleCrypto API handle it, and just pass it an ArrayBuffer of your file.
async function getHash(blob, algo = "SHA-256") {
  // convert your Blob to an ArrayBuffer
  // could also use a FileReader for this in browsers that don't support the Response API
  const buf = await new Response(blob).arrayBuffer();
  const hash = await crypto.subtle.digest(algo, buf);
  let result = '';
  const view = new DataView(hash);
  for (let i = 0; i < hash.byteLength; i += 4) {
    // each Uint32 is 8 hex characters; pad so leading zeros are kept
    result += view.getUint32(i).toString(16).padStart(8, '0');
  }
  return result;
}
inp.onchange = e => {
  getHash(inp.files[0]).then(console.log);
};
<input id="inp" type="file">
To start off, I am currently using npm fast-csv, which is a nice CSV reader/writer that is pretty straightforward and simple. What I'm attempting to do is use this in conjunction with iconv to process "accented" and non-ASCII characters and either convert them to an ASCII equivalent or remove them, depending on the character.
My current process with fast-csv is to bring in a chunk for processing (it comes in as one row) via a read stream, pause the read stream, process the data, pipe the data to a write stream, and then resume the read stream using a callback. fast-csv currently knows where to separate the chunks based on the format of the data coming in from the read stream.
The entire process looks like this:
var stream = fs.createReadStream(inputFileName);
function csvPull(source) {
    csvWrite = csv.createWriteStream({ headers: true });
    writableStream = fs.createWriteStream(outputFileName);

    csvStream = csv()
        .on("data", function (data) {
            csvStream.pause();
            processRow(data, function () {
                csvStream.resume();
            });
        })
        .on("end", function () {
            console.log('END OF CSV FILE');
        });

    csvWrite.pipe(writableStream);
    source.pipe(csvStream);
}
csvPull(stream);
The problem I am currently running into is that, for some reason, my JavaScript does not inherently recognise non-ASCII characters, so I am resorting to using npm iconv-lite to encode the incoming data stream into something usable. However, this presents a bigger issue, as fast-csv will no longer know where to split the chunks (rows) because of the now-encoded data. This is a problem due to the sizes of the CSVs I will be working with; it will not be an option to load the entire CSV into the buffer and then decode it.
Are there any suggestions on how I might get around this without writing my own CSV parser into my code?
Try reading your file with binary as the encoding option. I had to read a few CSVs with some accented characters and it worked fine with that.
var stream = fs.createReadStream(inputFileName, { encoding: 'binary' });
Unless I misunderstand, you should be able to fix this by setting the encoding on the stream to utf-8 (docs).
For the first line:
var stream = fs.createReadStream(inputFileName, {encoding: 'utf8'});
and if needed:
writableStream = fs.createWriteStream(outputFileName, {encoding: 'utf8'});
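Alternatively, if the source file is not UTF-8 to begin with, you can decode it with iconv-lite before it ever reaches the CSV parser, so fast-csv splits rows on properly decoded text. A sketch, reusing the same fast-csv API as above; the win1252 source encoding is an assumption, adjust it to your data:
var fs = require('fs');
var csv = require('fast-csv');
var iconv = require('iconv-lite');

fs.createReadStream(inputFileName)
    // decode the raw bytes to UTF-8 text before fast-csv splits rows
    .pipe(iconv.decodeStream('win1252'))
    .pipe(csv())
    .on("data", function (data) {
        // data arrives here as a properly decoded row
    })
    .on("end", function () {
        console.log('END OF CSV FILE');
    });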