ArrayBuffer conversion error while unzipping and loading a shapefile with SHP.JS - javascript

I am trying to unzip a zipped file, and if one of the files is a shapefile, load it as a variable. From the shpjs docs, I gather that the shp() function accepts a buffer. I am trying to convert to a buffer, but it is not working.
console.log("Unzipping now: ");
var jsZip = new JSZip();
var fileNum =0;
jsZip.loadAsync(v_objFile).then(function (zip) {
Object.keys(zip.files).forEach(function (filename){
//now we iterate over each zipped file
zip.files[filename].async('string').then(function (fileData){
console.log("\t filename: " + filename);
//if we found the shapefile file
if (filename.endsWith('.zip') == true){
zip.file(filename).async('blob').then( (blob) => {
console.log("Downloading File")
//saveAs(blob, filename);
//const buf = blob.arrayBuffer();
const buffer = new Response(blob).arrayBuffer();
shp(buffer).then(function (geojson) {
console.log(" Loaded");
// THIS CODE IS NOT REACHED
});
});
console.log("Called loadShapeFile")
}
})
})
}).catch(err => window.alert(err))
I tried the code above, but it did not work: execution never reaches the line marked "THIS CODE IS NOT REACHED".

This is the code I found for converting a Blob to an ArrayBuffer:
(async () => {
    const blob = new Blob(['hello']);
    const buf = await blob.arrayBuffer();
    console.log(buf.byteLength); // 5
})();
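Note that both blob.arrayBuffer() and new Response(blob).arrayBuffer() return a Promise, so in the code above shp() receives a pending Promise rather than an ArrayBuffer. A minimal sketch of one possible rework, assuming shpjs is available as the global shp() and using JSZip's 'arraybuffer' output type so no Blob/Response round trip is needed:
zip.file(filename).async('arraybuffer').then(function (arrayBuffer) {
    // JSZip can return an ArrayBuffer directly from the zip entry
    return shp(arrayBuffer);
}).then(function (geojson) {
    console.log("Loaded", geojson);
}).catch(function (err) {
    console.error("shp() failed:", err);
});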

Related

store file metadata and retrieve the file later (in web)

I have a cross-platform application for uploading files and I want to add resume capability to it. In the native version I can simply save the path of the file, retrieve the file later, split it, and send the remainder. In the web version, however, I have to store the whole file (in binary) in web storage such as IndexedDB or the Cache API, and to resume I have to read that binary back and turn it into a file again.
I wonder whether there is some way to save only the file's metadata and still access the file later, without reading and storing the whole file in binary.
Reading the file:
const fileToBinary = async (file: File) => {
    return new Promise<string>((resolve, _) => {
        const success = (event) => {
            resolve(event.target.result);
        }
        const fileReader = new FileReader();
        fileReader.addEventListener('load', success);
        fileReader.readAsBinaryString(file);
    });
}
Retrieving from binary:
const binaryToByte = (binary: string, progress?: number) => {
    let bytes = new Uint8Array(binary.length);
    for (let i = 0; i < binary.length; i++)
        bytes[i] = binary.charCodeAt(i);
    return bytes.slice(progress, binary.length);
}
Creating a file from binary:
const bytes = binaryToByte(blob.binary, progress);
const _blob = new Blob([bytes], {type: blob.type});
const file = new File([_blob], blob.name, {type: blob.type, lastModified: blob.lastModified});
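For illustration, a rough sketch (inside an async function) of how these helpers might fit together for the resume case described above; the progress offset and where the binary string is persisted between sessions are assumptions, not part of the original:
// Hypothetical glue code: read once, persist the binary string plus an upload
// progress offset, then later rebuild only the bytes that still need uploading.
const binary = await fileToBinary(file);               // whole file as a binary string
// ... store `binary` and `progress` (e.g. in IndexedDB) between sessions ...
const remainingBytes = binaryToByte(binary, progress); // bytes from `progress` to the end
const remainingBlob = new Blob([remainingBytes], { type: file.type });
// upload `remainingBlob` to finish the transfer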

How to add new columns to an Excel file in my local system using Vanilla Javascript?

I am getting JSON back from an API and want to add this data as a new column to an Excel file that already has some data. Is this possible using just frontend JavaScript (without involving Node.js)? If yes, how?
Yes, you can do it using the library exceljs
Github: https://github.com/exceljs/exceljs
NPM: https://www.npmjs.com/package/exceljs
CDN: https://cdn.jsdelivr.net/npm/exceljs@1.13.0/dist/exceljs.min.js
<input type="file" onchange="parseChoosenExcelFile(this)">
function parseChoosenExcelFile(inputElement) {
    var files = inputElement.files || [];
    if (!files.length) return;
    var file = files[0];
    console.time();
    var reader = new FileReader();
    reader.onloadend = function(event) {
        var arrayBuffer = reader.result;
        // var buffer = Buffer.from(arrayBuffer)
        // debugger
        var workbook = new ExcelJS.Workbook();
        // workbook.xlsx.read(buffer)
        workbook.xlsx.load(arrayBuffer).then(function(workbook) {
            console.timeEnd();
            var result = ''
            workbook.worksheets.forEach(function (sheet) {
                sheet.eachRow(function (row, rowNumber) {
                    result += row.values + ' | \n'
                })
            })
            // Output somewhere your result file
            // result2.innerHTML = result
        });
    };
    reader.readAsArrayBuffer(file);
}
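The snippet above only reads the workbook; to actually add a column, as the question asks, something along these lines could go inside the load callback. This is only a sketch: apiRows (the values from the API JSON), the target column, and the saveAs download step are assumptions, not part of the original answer.
workbook.xlsx.load(arrayBuffer).then(function (workbook) {
    var sheet = workbook.worksheets[0];
    // Hypothetical: write a header plus one API value per data row into column D
    sheet.getCell('D1').value = 'NewColumn';
    apiRows.forEach(function (value, index) {
        sheet.getRow(index + 2).getCell(4).value = value; // rows 2..n, column D
    });
    // Serialize the modified workbook and offer it as a download
    return workbook.xlsx.writeBuffer();
}).then(function (buffer) {
    var blob = new Blob([buffer], { type: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' });
    saveAs(blob, 'updated.xlsx'); // assumes FileSaver.js is loaded
});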

Downloading an Azure Storage Blob using pure JavaScript and Azure-Storage-Js

I'm trying to do this with just pure Javascript and the SDK. I am not using Node.js. I'm converting my application from v2 to v10 of the SDK azure-storage-js-v10
The azure-storage.blob.js bundled file is compatible with UMD
standard, if no module system is found, following global variable
will be exported: azblob
My code is here:
const serviceURL = new azblob.ServiceURL(`https://${account}.blob.core.windows.net${accountSas}`, pipeline);
const containerName = "container";
const containerURL = azblob.ContainerURL.fromServiceURL(serviceURL, containerName);
const blobURL = azblob.BlobURL.fromContainerURL(containerURL, blobName);
const downloadBlobResponse = await blobURL.download(azblob.Aborter.none, 0);
The downloadBlobResponse looks like this: [screenshot of the downloadBlobResponse object]
Using v10, how can I convert the downloadBlobResponse into a new blob so it can be used in the FileSaver saveAs() function?
In azure-storage-js-v2 this code worked on smaller files:
let readStream = blobService.createReadStream(containerName, blobName, (err, res) => {
    if (err) {
        // Handle read blob error
    }
});
// Use event listener to receive data
readStream.on('data', data => {
    // Uint8Array retrieved
    // Convert the array back into a blob
    var newBlob = new Blob([new Uint8Array(data)]);
    // Saves file to the user's downloads directory
    saveAs(newBlob, blobName); // FileSaver.js
});
I've tried everything to get v10 working, any help would be greatly appreciated.
Thanks,
You need to get the body by awaiting blobBody:
downloadBlobResponse = await blobURL.download(azblob.Aborter.none, 0);
// data is a browser Blob type
const data = await downloadBlobResponse.blobBody;
Thanks Mike Coop and Xiaoning Liu!
I was busy making a Vue.js plugin to download blobs from a storage account. Thanks to you, I was able to make this work.
var FileSaver = require('file-saver');
const { BlobServiceClient } = require("@azure/storage-blob");
const downloadButton = document.getElementById("download-button");

const downloadFiles = async () => {
    try {
        if (fileList.selectedOptions.length > 0) {
            reportStatus("Downloading files...");
            for await (const option of fileList.selectedOptions) {
                var blobName = option.text;
                const account = '<account name>';
                const sas = '<blob sas token>';
                const containerName = '<container name>';
                const blobServiceClient = new BlobServiceClient(`https://${account}.blob.core.windows.net${sas}`);
                const containerClient = blobServiceClient.getContainerClient(containerName);
                const blobClient = containerClient.getBlobClient(blobName);
                // download(offset, count?, options?) - start at offset 0 to fetch the whole blob
                const downloadBlockBlobResponse = await blobClient.download(0);
                const data = await downloadBlockBlobResponse.blobBody;
                // Saves file to the user's downloads directory
                FileSaver.saveAs(data, blobName); // FileSaver.js
            }
            reportStatus("Done.");
            listFiles();
        } else {
            reportStatus("No files selected.");
        }
    } catch (error) {
        reportStatus(error.message);
    }
};
downloadButton.addEventListener("click", downloadFiles);
Thanks Xiaoning Liu!
I'm still learning about async JavaScript functions and promises. I guess I was just missing another "await". I saw that "downloadBlobResponse.blobBody" was a promise and also a blob type, but I couldn't figure out why it wouldn't convert to a new blob. I kept getting the "Iterator getter is not callable" error.
Here's my final working solution:
// Create a BlobURL
const blobURL = azblob.BlobURL.fromContainerURL(containerURL, blobName);
// Download blob
downloadBlobResponse = await blobURL.download(azblob.Aborter.none, 0);
// In browsers, get downloaded data by accessing downloadBlockBlobResponse.blobBody
const data = await downloadBlobResponse.blobBody;
// Saves file to the user's downloads directory
saveAs(data, blobName); // FileSaver.js

writefile with fs, nodejs and express

I'm trying to save an image on the server with fs.writeFile and base64. It writes the file to the correct directory, but the image is blank and the viewer says "no support this type of file".
My server function:
let base64 = '';
let file = req.body.arquivo
let reader = new FileReader()
reader.onloadend = function () {
    base64 = reader.result
}
let img = base64.replace(/^data:image\/\w+;base64,/, "");
let buffer = new Buffer(img, "base64")
fs.writeFile(`./public${caminho}${nome}`, buffer, (err) => { console.log(err) });
const candy = await Candy.create({
    nome: nome,
    doce: doce,
    caminho: caminho,
    tema: tema
});
return res.json(candy);
},
Here is what I see when I open the image in the directory where it was saved: [screenshot omitted]
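For reference: FileReader is a browser API and is not available inside an Express route, and in the code above nothing ever triggers a read anyway, so base64 is still an empty string when the buffer is built, which would produce exactly this kind of blank, unreadable file. Assuming the client sends the image as a base64 data URL string in req.body.arquivo (an assumption, not confirmed by the question), a minimal server-side sketch would be:
// Assumes req.body.arquivo is a data URL string such as "data:image/png;base64,..."
const img = req.body.arquivo.replace(/^data:image\/\w+;base64,/, "");
const buffer = Buffer.from(img, "base64"); // Buffer.from instead of the deprecated new Buffer()
fs.writeFile(`./public${caminho}${nome}`, buffer, (err) => {
    if (err) console.log(err);
});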

Ipfs-mini cat API's output buffer seems corrupted for the hash pointing to an image file

I am a newbie to both JavaScript and IPFS, and I am trying an experiment to fetch an image buffer for the IPFS hash "QmdD8FL7N3kFnWDcPSVeD9zcq6zCJSUD9rRSdFp9tyxg1n" using the ipfs-mini node module.
Below is my code:
const IPFS = require('ipfs-mini');
const FileReader = require('filereader');
var multer = require('multer');
const ipfs = initialize();

app.post('/upload', function (req, res) {
    upload(req, res, function (err) {
        console.log(req.file.originalname);
        ipfs.cat('QmdD8FL7N3kFnWDcPSVeD9zcq6zCJSUD9rRSdFp9tyxg1n', function (err, data) {
            if (err) console.log("could not get the image from the ipfs for hash " + ghash);
            else {
                var wrt = data.toString('base64');
                console.log('size ; ' + wrt.length);
                fs.writeFile('tryipfsimage.gif', wrt, (err) => {
                    if (err) console.log('can not write file');
                    else {
                        //console.log(data);
                        ipfs.stat('QmdD8FL7N3kFnWDcPSVeD9zcq6zCJSUD9rRSdFp9tyxg1n', (err, data) => {
                            // console.log(hexdump(wrt));
                        });
                        console.log("files written successfully");
                    }
                });
            }
        });
    });
});

function initialize() {
    console.log('Initializing the ipfs object');
    return new IPFS({
        host: 'ipfs.infura.io',
        protocol: 'https'
    });
}
I can view the image properly in the browser using the link "https://ipfs.io/ipfs/QmdD8FL7N3kFnWDcPSVeD9zcq6zCJSUD9rRSdFp9tyxg1n", but if I open the file 'tryipfsimage.gif' into which I dump the buffer returned by the cat API in the above program, the image content seems corrupted. I am not sure what mistake I am making in the code; it would be great if someone could point it out.
From the ipfs docs: https://github.com/ipfs/interface-ipfs-core/blob/master/SPEC/FILES.md#javascript---ipfsfilescatipfspath-callback
The data in the callback is actually a Buffer, so by toString('base64')'ing it you are writing actual base64 into the .gif file - no need to do this. You can pass the Buffer directly to the fs.writeFile API:
fs.writeFile('tryipfsimage.gif', data, ...
For larger files I would recommend using the ipfs catReadableStream, where you can do something more like:
const stream = ipfs.catReadableStream('QmdD8FL7N3kFnWDcPSVeD9zcq6zCJSUD9rRSdFp9tyxg1n')
// don't forget to add error handlers to stream and whatnot
const fileStream = fs.createWriteStream('tryipsimage.gif')
stream.pipe(fileStream);
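As the comment above notes, the streams should have error handlers attached; a minimal sketch using plain Node stream events:
stream.on('error', (err) => console.error('read error:', err));
fileStream.on('error', (err) => console.error('write error:', err));
fileStream.on('finish', () => console.log('files written successfully'));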
