Store file metadata and retrieve the file later (on the web) - JavaScript

I have a cross-platform application for uploading files and I want to add resume capability to it. In the native version I can simply save the path of the file, retrieve the file later, split it, and send the remainder of the file. In the web version, however, I have to store the whole file in binary form in web storage (by reading the file and creating the binary), e.g. in IndexedDB or the Cache API, and to retrieve it I have to read that binary back and turn it into a file.
I wonder whether there is some way to save only the file's metadata and then access the file again later (without reading and storing the whole file as binary).
Reading the file:
const fileToBinary = async (file: File) => {
  return new Promise<string>((resolve, _) => {
    const success = (event) => {
      resolve(event.target.result);
    };
    const fileReader = new FileReader();
    fileReader.addEventListener('load', success);
    fileReader.readAsBinaryString(file);
  });
};
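As an aside, FileReader.readAsBinaryString is deprecated; an ArrayBuffer-based variant of the same helper could look like this (a sketch, not part of the original question):
const fileToArrayBuffer = (file) =>
  new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.addEventListener('load', (event) => resolve(event.target.result));
    reader.addEventListener('error', () => reject(reader.error));
    reader.readAsArrayBuffer(file);
  });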
Retrieving from binary:
const binaryToByte = (binary: string, progress?: number) => {
  let bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++)
    bytes[i] = binary.charCodeAt(i);
  return bytes.slice(progress, binary.length);
};
Creating file from binary:
const bytes = binaryToByte(blob.binary, progress);
const _blob = new Blob([bytes], {type: blob.type});
const file = new File([_blob], blob.name, {type: blob.type, lastModified: blob.lastModified});
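Worth noting, as a sketch rather than a full answer: IndexedDB can store Blob and File objects directly via structured clone, so the manual binary-string round trip above can usually be skipped (the browser still persists the bytes, but you no longer read and re-encode them yourself). The database and store names below ('uploads', 'files') are made up for illustration:
const openDb = () => new Promise((resolve, reject) => {
  const req = indexedDB.open('uploads', 1); // hypothetical database name
  req.onupgradeneeded = () => req.result.createObjectStore('files'); // hypothetical store name
  req.onsuccess = () => resolve(req.result);
  req.onerror = () => reject(req.error);
});
const saveFileForResume = async (key, file) => {
  const db = await openDb();
  // A File/Blob can be put into an object store as-is
  db.transaction('files', 'readwrite').objectStore('files').put(file, key);
};
const getRemainder = async (key, progress) => {
  const db = await openDb();
  const req = db.transaction('files').objectStore('files').get(key);
  const file = await new Promise((resolve) => { req.onsuccess = () => resolve(req.result); });
  // Only the not-yet-uploaded tail needs to be sent
  return file.slice(progress);
};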

Related

Flushing files with WebKit's filesystem API in Safari

I am trying to use the filesystem API to create permanent files, and to write and read data from them.
Although I succeed in creating the file and writing data to it, after calling flush() the file becomes empty (its size is 0).
The files that I created exist and I can still see them in a different run of Safari, but the data is lost and the files are all 0-sized.
Even if I try to read the file just after writing to it and flushing, the data is lost and its size returns to 0.
Does anybody know what I am doing wrong?
I tried running this:
console.log("Starting");
async function files() {
  // Set a message
  const message = "Thank you for reading this.";
  const fileName = "Draft.txt";
  // Get handle to draft file
  const root = await navigator.storage.getDirectory();
  const draftHandle = await root.getFileHandle(fileName, { create: true });
  console.log("File Name: ", fileName);
  // Get sync access handle
  const accessHandle = await draftHandle.createSyncAccessHandle();
  // Get size of the file.
  const fileSize = await accessHandle.getSize();
  console.log("File Size: ", fileSize); // getting 0 here
  // Read file content to a buffer.
  const buffer = new DataView(new ArrayBuffer(fileSize));
  const readBuffer = accessHandle.read(buffer, { at: 0 });
  console.log("Read: ", readBuffer); // getting 0 here because the file was just created
  // Write the message to the file.
  const encoder = new TextEncoder();
  const encodedMessage = encoder.encode(message);
  const writeBuffer = accessHandle.write(encodedMessage, { at: readBuffer });
  console.log("Write: ", writeBuffer); // writing 27 bytes here, succeeding
  // Persist changes to disk.
  accessHandle.flush();
  // Always close a FileSystemSyncAccessHandle when done.
  accessHandle.close();
  console.log("Closed file");
  // Find files under root/ and print their details.
  for await (const handle of root.values()) {
    console.log('Item in root: ', handle.name);
    if (handle.kind == "file") {
      let f = await handle.getFile();
      console.log("File Details: ", f); // I can see the file I created now and earlier created files, but all of them are 0-sized
    }
  }
}
files();
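Not a direct explanation of the empty files, but one thing to check: MDN documents createSyncAccessHandle() as usable only inside dedicated Web Workers, so it may be worth trying the same writes from a worker. A minimal sketch of that pattern (the worker file name opfs-worker.js is made up):
// main thread
const worker = new Worker('opfs-worker.js');
worker.postMessage("Thank you for reading this.");
// opfs-worker.js
onmessage = async (event) => {
  const root = await navigator.storage.getDirectory();
  const draftHandle = await root.getFileHandle("Draft.txt", { create: true });
  const accessHandle = await draftHandle.createSyncAccessHandle();
  const encoded = new TextEncoder().encode(event.data);
  const written = accessHandle.write(encoded, { at: 0 });
  accessHandle.flush();
  accessHandle.close();
  postMessage({ written });
};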

ArrayBuffer conversion error while unzipping and loading a shapefile with SHP.JS

I am trying to unzip a zipped file and, if one of the files is a shapefile, load it as a variable. However, from the JSZip docs, I gather that the shp() function accepts a buffer. I am trying to convert to a buffer, but it is not working.
console.log("Unzipping now: ");
var jsZip = new JSZip();
var fileNum =0;
jsZip.loadAsync(v_objFile).then(function (zip) {
Object.keys(zip.files).forEach(function (filename){
//now we iterate over each zipped file
zip.files[filename].async('string').then(function (fileData){
console.log("\t filename: " + filename);
//if we found the shapefile file
if (filename.endsWith('.zip') == true){
zip.file(filename).async('blob').then( (blob) => {
console.log("Downloading File")
//saveAs(blob, filename);
//const buf = blob.arrayBuffer();
const buffer = new Response(blob).arrayBuffer();
shp(buffer).then(function (geojson) {
console.log(" Loaded");
// THIS CODE IS NOT REACHED
});
});
console.log("Called loadShapeFile")
}
})
})
}).catch(err => window.alert(err))
I tried the attached code, but it did not work.
The code never reaches the place where it says "THIS CODE IS NOT REACHED".
This is the code I found for converting from a Blob to an ArrayBuffer:
(async () => {
  const blob = new Blob(['hello']);
  const buf = await blob.arrayBuffer();
  console.log(buf.byteLength); // 5
})();
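In the larger snippet above, new Response(blob).arrayBuffer() (like blob.arrayBuffer()) returns a Promise that is never awaited, so shp() receives a Promise rather than an ArrayBuffer. A sketch of the same branch with the await applied (assuming shp() from shpjs accepts the ArrayBuffer of a zipped shapefile):
if (filename.endsWith('.zip')) {
  zip.file(filename).async('blob').then(async (blob) => {
    // await the conversion so shp() gets an ArrayBuffer, not a Promise
    const buffer = await blob.arrayBuffer();
    const geojson = await shp(buffer);
    console.log("Loaded", geojson);
  });
}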

Import data from a .json file to a Brain.js neural network

I want to import data from a data.json file into the neural network (which uses the Brain.js framework). Here is the part which is supposed to bring that data to the network and analyse it:
const result = brain.likely(
  require('data.js'),
  net
);
alert("This is the result: " + result);
Then I want that data analysed by the neural network and shown to the user.
Here are the contents of the data.json file for reference:
{
  'Rating1': 0.12434213,
  'Rating2': 0.987653236,
  'Rating3': 0.432543654
}
For your information, this is written in a Node.js environment.
Assuming your data.json file is in the same directory:
fetch('data.json')
  .then(response => response.json())
  .then(json => {
    const result = brain.likely(json, net);
  });
Alternatively, with async/await:
(async () => {
  const json = await (await fetch('data.json')).json();
  const result = brain.likely(json, net);
})();
If done through a file upload:
// target input element
const input = document.querySelector('input');
// upload event
input.addEventListener('change', () => {
  const file = input.files[0]; // `this` is not bound inside an arrow function, so use the element reference
  const reader = new FileReader();
  reader.addEventListener('load', e => {
    const json = JSON.parse(e.target.result);
    const result = brain.likely(json, net);
  });
  reader.readAsText(file);
});
If done through Node:
const json = require('./data.json');
brain.likely(json, net);
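If the JSON needs to be re-read on each run (require caches the module), a readFileSync variant works too; a minimal sketch assuming data.json sits next to the script:
const fs = require('fs');
const json = JSON.parse(fs.readFileSync('./data.json', 'utf8'));
const result = brain.likely(json, net);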
Useful resources for handling files:
Using files from web apps - practical examples on how to use the FileReader API
Fetch API - how to use files already on your server in the browser
Node's File System readFileSync method - to read file contents synchronously in a Node environment
JSON.parse - native JS method to convert a string to JSON, useful in all environments

Downloading an Azure Storage Blob using pure JavaScript and Azure-Storage-Js

I'm trying to do this with just pure Javascript and the SDK. I am not using Node.js. I'm converting my application from v2 to v10 of the SDK azure-storage-js-v10
The azure-storage.blob.js bundled file is compatible with UMD
standard, if no module system is found, following global variable
will be exported: azblob
My code is here:
const serviceURL = new azblob.ServiceURL(`https://${account}.blob.core.windows.net${accountSas}`, pipeline);
const containerName = "container";
const containerURL = azblob.ContainerURL.fromServiceURL(serviceURL, containerName);
const blobURL = azblob.BlobURL.fromContainerURL(containerURL, blobName);
const downloadBlobResponse = await blobURL.download(azblob.Aborter.none, 0);
The downloadBlobResponse looks like this:
[screenshot of the downloadBlobResponse object]
Using v10, how can I convert the downloadBlobResponse into a new blob so it can be used in the FileSaver saveAs() function?
In azure-storage-js-v2 this code worked on smaller files:
let readStream = blobService.createReadStream(containerName, blobName, (err, res) => {
  if (err) {
    // Handle read blob error
  }
});
// Use event listener to receive data
readStream.on('data', data => {
  // Uint8Array retrieved
  // Convert the array back into a blob
  var newBlob = new Blob([new Uint8Array(data)]);
  // Saves file to the user's downloads directory
  saveAs(newBlob, blobName); // FileSaver.js
});
I've tried everything to get v10 working, any help would be greatly appreciated.
Thanks,
You need to get the body by awaiting blobBody.
downloadBlobResponse = await blobURL.download(azblob.Aborter.none, 0);
// data is a browser Blob type
const data = await downloadBlobResponse.blobBody;
Thanx Mike Coop and Xiaoning Liu!
I was busy making a Vuejs plugin to download blobs from a storage account. Thanx to you, I was able to make this work.
var FileSaver = require('file-saver');
const { BlobServiceClient } = require("#azure/storage-blob");
const downloadButton = document.getElementById("download-button");
const downloadFiles = async () => {
  try {
    if (fileList.selectedOptions.length > 0) {
      reportStatus("Downloading files...");
      for await (const option of fileList.selectedOptions) {
        var blobName = option.text;
        const account = '<account name>';
        const sas = '<blob sas token>';
        const containerName = '<container name>';
        const blobServiceClient = new BlobServiceClient(`https://${account}.blob.core.windows.net${sas}`);
        const containerClient = blobServiceClient.getContainerClient(containerName);
        const blobClient = containerClient.getBlobClient(blobName);
        const downloadBlockBlobResponse = await blobClient.download(0); // download() takes an offset, not the blob name
        const data = await downloadBlockBlobResponse.blobBody;
        // Saves file to the user's downloads directory
        FileSaver.saveAs(data, blobName); // FileSaver.js
      }
      reportStatus("Done.");
      listFiles();
    } else {
      reportStatus("No files selected.");
    }
  } catch (error) {
    reportStatus(error.message);
  }
};
downloadButton.addEventListener("click", downloadFiles);
Thanks Xiaoning Liu!
I'm still learning about async javascript functions and promises. Guess I was just missing another "await". I saw that "downloadBlobResponse.blobBody" was a promise and also a blob type, but, I couldn't figure out why it wouldn't convert to a new blob. I kept getting the "Iterator getter is not callable" error.
Here's my final working solution:
// Create a BlobURL
const blobURL = azblob.BlobURL.fromContainerURL(containerURL, blobName);
// Download blob
downloadBlobResponse = await blobURL.download(azblob.Aborter.none, 0);
// In browsers, get downloaded data by accessing downloadBlockBlobResponse.blobBody
const data = await downloadBlobResponse.blobBody;
// Saves file to the user's downloads directory
saveAs(data, blobName); // FileSaver.js
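For completeness: in Node.js the same download response exposes the data as readableStreamBody instead of blobBody, so a server-side variant would collect the stream, e.g. (the helper name is made up):
// Node.js variant: collect the readable stream into a Buffer
const downloadToBuffer = async (blobClient) => {
  const response = await blobClient.download(0);
  const chunks = [];
  for await (const chunk of response.readableStreamBody) {
    chunks.push(chunk);
  }
  return Buffer.concat(chunks);
};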

Calculate SHA-1 checksum of a local HTML5 video file using JavaScript

When a video on my local storage—let's say it's currently located at file:///home/user/video.m4v—is opened by dragging it into a new tab in Chrome, how can I calculate the SHA-1 checksum for the file using JavaScript?
Purpose:
I am planning to write a Chrome extension which will store the calculated checksum of videos (files with extensions matching a pattern) as localStorage objects in order to save the playback position of video upon tab close and then restore it when the file is loaded again, even if the location or filename of the video is changed.
You need a crypto library for this. A well-known one is CryptoJS.
I found a specific example for your task: https://gist.github.com/npcode/11282867
After including the crypto-js source:
function sha1sum() {
  var oFile = document.getElementById('uploadFile').files[0];
  var sha1 = CryptoJS.algo.SHA1.create();
  var read = 0;
  var unit = 1024 * 1024;
  var blob;
  var reader = new FileReader();
  reader.readAsArrayBuffer(oFile.slice(read, read + unit));
  reader.onload = function (e) {
    var bytes = CryptoJS.lib.WordArray.create(e.target.result);
    sha1.update(bytes);
    read += unit;
    if (read < oFile.size) {
      blob = oFile.slice(read, read + unit);
      reader.readAsArrayBuffer(blob);
    } else {
      var hash = sha1.finalize();
      console.log(hash.toString(CryptoJS.enc.Hex)); // print the result
    }
  };
}
I wouldn't recommend calculating a hash over the whole video file, as it can be pretty resource-consuming depending on the file size. Maybe you can use just the meta information, or reconsider the filename and file path approach again?
Web APIs have progressed considerably since I asked this question. Calculating a hex digest is now possible using the built-in SubtleCrypto.digest().
TS Playground link
function u8ToHex (u8: number): string {
  return u8.toString(16).padStart(2, '0');
}
/** Ref: https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/digest#supported_algorithms */
const supportedAlgorithms = [
  'SHA-1',
  'SHA-256',
  'SHA-384',
  'SHA-512',
] as const;
type SupportedAlgorithm = typeof supportedAlgorithms[number];
type Message = string | Blob | BufferSource;
async function hexDigest (
  algorithm: SupportedAlgorithm,
  message: Message,
): Promise<string> {
  let buf: BufferSource;
  if (typeof message === 'string') buf = new TextEncoder().encode(message);
  else if (message instanceof Blob) buf = await message.arrayBuffer();
  else buf = message;
  const hash = await crypto.subtle.digest(algorithm, buf);
  return [...new Uint8Array(hash)].map(u8ToHex).join('');
}
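For example, wiring it to a file input (the element id is made up):
const input = document.querySelector('#video-input');
input.addEventListener('change', async () => {
  const file = input.files[0];
  if (!file) return;
  console.log(await hexDigest('SHA-1', file));
});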
