How can I read/write images with Node.js?

I'm pretty new to JS and I'm working on a Discord bot that uses Discord.js together with COCO-SSD, which requires me to read/write files from and to my PC.
I know this is probably not the best idea, but I'll cross that bridge when I get to it.
Right now, I need a way to read and write images to and from my PC.
NOTE: The files need to be written to my computer from a URL. However, if there's any way to circumvent having to download the images to my PC, I would appreciate some help with that as well.
I'm using "fs", "https", and "fetch".
The problem with my method is that the pixels I'm receiving from the images are NULL, so I can't do much with them.
Here's my current code (sorry for the horrible formatting; English is not my first language and it's 2 AM here):
const fetchUrl = require("fetch").fetchUrl;
const https = require('https');
const fs = require('fs');

function saveImageToDisk(url, path) {
    var fullUrl = url;
    var localPath = fs.createWriteStream(path);
    var request = https.get(fullUrl, function(response) {
        console.log(response);
        response.pipe(localPath);
    });
}

saveImageToDisk("https://post.greatist.com/wp-content/uploads/sites/3/2020/02/322868_1100-1100x628.jpg", "./images/" + Date.now + ".png");

const img = fs.readFile("./images/" + Date.now + ".png", function(response) { console.log(response); });

I know how to achieve something like this using the node-fetch module, so I'll provide an example below. I don't know exactly what you need this functionality for in your Discord bot, but the code below will properly save an image URL to a specified path.
const fetch = require("node-fetch");
const fs = require("fs");

function saveImageToDisk(url, path) {
    // Return the promise so callers can wait until the file is actually on disk.
    return fetch(url)
        .then(res => res.buffer())
        .then(buffer => fs.writeFileSync(path, buffer));
}
I've tested the above example code and it works perfectly. As for the rest of the code, I do have a few fixes:

const filepath = "./images/" + Date.now() + ".png";
const imageurl = "https://post.greatist.com/wp-content/uploads/sites/3/2020/02/322868_1100-1100x628.jpg";

saveImageToDisk(imageurl, filepath).then(() => {
    fs.readFile(filepath, (err, data) => {
        if (err) throw err;
        console.log(data);
    });
});
Specifically, the main issues were:
Date.now is a function, so it needs to be called: Date.now().
You should only build the filepath string once instead of twice; two separate Date.now() calls return different timestamps, so you were writing one file and trying to read a different one.
The callback of fs.readFile actually has its error object as the first parameter, so you were logging an error (or lack thereof) to the console instead of the file contents.
fetch is asynchronous, so you have to wait for saveImageToDisk() to finish (hence the .then() above) before reading the file back; reading too early gets you an empty or partial file, which would explain the NULL pixels.
It is entirely possible that these issues alone were causing your code to not work, and that your original saveImageToDisk() was working fine. Just be aware that these issues were definitely causing problems with how you were reading the file, regardless.
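As for avoiding the disk entirely: COCO-SSD can run on a tensor decoded straight from memory, so you can skip the write/read round trip altogether. Here's a minimal sketch, assuming you're running COCO-SSD through @tensorflow/tfjs-node and @tensorflow-models/coco-ssd (both package names are assumptions about your setup):

const fetch = require("node-fetch");
const tf = require("@tensorflow/tfjs-node");
const cocoSsd = require("@tensorflow-models/coco-ssd");

async function detectFromUrl(url) {
    const model = await cocoSsd.load();
    // Fetch the image bytes straight into a Buffer instead of a file on disk.
    const buffer = await fetch(url).then(res => res.buffer());
    // decodeImage turns the raw JPEG/PNG bytes into a tensor COCO-SSD can consume.
    const image = tf.node.decodeImage(buffer, 3);
    const predictions = await model.detect(image);
    image.dispose();
    return predictions;
}

detectFromUrl("https://post.greatist.com/wp-content/uploads/sites/3/2020/02/322868_1100-1100x628.jpg")
    .then(predictions => console.log(predictions));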

Related

Reading Avro data compressed with snappy generates a "snappy codec decompression error"

I have an application (Kafka Connect) that is generating Avro files into S3 for me.
These files are compressed with the Avro codec "snappy".
I'm trying to read them with JavaScript (I'm not a very strong JavaScript developer, as you will be able to guess).
I tried to use avro-js or avsc as libraries to help me with this, since they are referenced in most of the online examples I found.
The most complete and useful example I found was here.
Anyway, it seems most examples I found use snappy version 6, which is a bit different from version 7 (the latest).
One of the main things I noticed is that snappy now provides two uncompress methods, one sync and one that returns a promise, but none that accepts a callback function.
Anyway, I don't think this is the issue, because I could make it work regardless. My best attempt at reading these files looks like this (with avsc):
const avsc = require('avsc');
const snappy = require('snappy');

const codecs = {
    snappy: function (buf, cb) {
        // Avro appends checksums to compressed blocks, which we skip here.
        const buffer = snappy.uncompressSync(buf.slice(0, buf.length - 4));
        return cb(buffer);
    }
};

avsc.createFileDecoder('person-10.snappy.avro', {codecs})
    .on('metadata', function (writerType) {
        console.log(writerType.name);
    })
    .on('data', function (obj) {
        console.log('on data');
        console.log(obj);
    })
    .on('end', function () {
        console.log('end');
    });
Anyway, the processing of the metadata works without issues (I can access the full schema information), but reading the data always fails with:
Uncaught Error: snappy codec decompression error
I'm looking for someone who has for some reason worked with Avro and snappy in their latest versions and managed to make this work.
Because I'm really struggling to understand this, I created a fork of the official avsc repo and tried to introduce my examples there to see how this works, but if it's more useful I could try to create a simpler reproducible scenario.
The documentation of the package I was using has been updated, and the problem is now fixed:
https://github.com/mtth/avsc/wiki/API#class-blockdecoderopts
Mainly, I was just wrong about how to call the callback function and how to hand the buffer to snappy.
This is the correct way (as documented):
const avro = require('avsc');
const crc32 = require('buffer-crc32');
const snappy = require('snappy');

const blockDecoder = new avro.streams.BlockDecoder({
    codecs: {
        snappy: (buf, cb) => {
            // Avro appends checksums to compressed blocks.
            const len = buf.length;
            const checksum = buf.slice(len - 4, len);
            snappy.uncompress(buf.slice(0, len - 4))
                .then((inflated) => {
                    if (!checksum.equals(crc32(inflated))) {
                        // We make sure that the checksum matches.
                        throw new Error('invalid checksum');
                    }
                    cb(null, inflated);
                })
                .catch(cb);
        }
    }
});
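The BlockDecoder is a regular duplex stream, so to actually decode a file you can pipe the raw .avro bytes into it; a small usage sketch with the filename from the question:

const fs = require('fs');

fs.createReadStream('person-10.snappy.avro')
    .pipe(blockDecoder)
    .on('metadata', (writerType) => console.log(writerType.name))
    .on('data', (record) => console.log(record))
    .on('end', () => console.log('end'));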

MS Graph API file replace in SharePoint with ReactJS: 404 item not found or stream issue

I am trying to use the MS Graph API and ReactJS to download a file from SharePoint and then replace the file. I have managed the download part after using the #microsoft.graph.downloadUrl value. Here is the code that gets me the XML document from SharePoint.
export async function getDriveFileList(accessToken, siteId, driveId, fileName) {
    const client = getAuthenticatedClient(accessToken);
    // https://graph.microsoft.com/v1.0/sites/{site-id}/drives/{drive-id}/root:/{item-path}
    const files = await client
        .api('/sites/' + siteId + '/drives/' + driveId + '/root:/' + fileName)
        .select('id,name,webUrl,content.downloadUrl')
        .orderby('name')
        .get();
    //console.log(files['#microsoft.graph.downloadUrl']);
    return files;
}
When attempting to upload the same file back up, I get a 404 itemNotFound error in return. Because this user was able to get it to work, I think I have the MS Graph API call correct, although I am not sure I'm translating it correctly to ReactJS syntax. Even though the error message says the item was not found, I think MS Graph might actually be upset with how I'm sending the XML file back. The Microsoft documentation for updating an existing file states that the contents of the file should be provided in a stream. Since I've loaded the XML file into the state, I'm not entirely sure how to send it back. The closest match I found involved converting a PDF to a blob, so I tried that.
export async function putDriveFile(accessToken, siteId, itemId, xmldoc) {
    const client = getAuthenticatedClient(accessToken);
    // /sites/{site-id}/drive/items/{item-id}/content
    let url = '/sites/' + siteId + '/drive/items/' + itemId + '/content';
    var convertedFile = null;
    try {
        convertedFile = new Blob([xmldoc], {type: 'text/xml'});
    } catch (err) {
        console.log(err);
    }
    const file = await client
        .api(url)
        .put(convertedFile);
    console.log(file);
    return file;
}
I'm pretty sure it's the way I'm sending the file back, but the Graph API has some bugs, so I can't be entirely sure. I was convinced I was getting the correct ID of the drive item, but I've seen that the site ID syntax can differ with the Graph API, so maybe it is the item ID.
The correct syntax for putting an (existing) file into a document library in SharePoint is actually: PUT /sites/{site-id}/drive/items/{parent-id}:/{filename}:/content. I also found that the code below worked for taking the XML document and converting it into a blob that could be uploaded:
var xmlText = new XMLSerializer().serializeToString(this.state.xmlDoc);
var blob = new Blob([xmlText], { type: "text/xml"});
var file = new File([blob], this.props.location.state.fileName, {type: "text/xml",});
var graphReturn = await putDriveFile(accessToken, this.props.location.state.driveId, this.state.fileId,file);
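Putting it together, the putDriveFile function from the question only needs the path-based URL swapped in; a sketch under that assumption (the parentId parameter is hypothetical and refers to the ID of the folder containing the file):

export async function putDriveFile(accessToken, siteId, parentId, fileName, file) {
    const client = getAuthenticatedClient(accessToken);
    // PUT /sites/{site-id}/drive/items/{parent-id}:/{filename}:/content
    const url = '/sites/' + siteId + '/drive/items/' + parentId + ':/' + fileName + ':/content';
    return await client
        .api(url)
        .put(file);
}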

Automating Priority-Web-SDK file upload

I would like to create a command-line (or otherwise automated) method for uploading files to Priority using the Web SDK. The best solution I have right now seems to be a simple web form activated by a Python script.
Are there tools/examples for using JavaScript and a file picker without opening the browser? Are there Priority-Web-SDK ports to other environments (C#, Python, etc.)?
Any other suggestions are also welcome.
UPDATE June 14, 2020:
I was able to complete the task for this client using a combination of JavaScript, Python, and C#. A tangled mess indeed, but files were uploaded. I am now revisiting the task and looking for cleaner solutions.
I found a working and usable Node module to compact the program into an executable, which makes this a viable option for deployment.
So the question becomes more focused: how do I create the input for uploadDataUrl() or uploadFile() without a browser form?
You can run Node locally and use the Priority SDK, as long as you work in an environment that is capable of running JS.
You can send files through the function uploadFile.
The data inside the file object needs to be base64-encoded.
This Node.js script will upload a file to Priority. Make sure that fetch-base64 is npm installed:
"use strict";
const priority = require('priority-web-sdk');
const fetch = require('fetch-base64');
const configuration = {...};
async function uploadFile(formName, zoomVal, filepath, filename) {
try {
await priority.login(configuration);
let form = await priority.formStartEx(formName, null, null, null, 1, {zoomValue: zoomVal});
await form.startSubForm("EXTFILES", null ,null);
let data = await fetch.local(filepath + '/' + filename);
let f = await form.uploadDataUrl(data[0], filename.match(/\..+$/i)[0], () => {});
await form.fieldUpdate("EXTFILENAME", f.file); // Path
await form.fieldUpdate("EXTFILEDES", filename); // Name
await form.saveRow(0);
} catch(err) {
console.log('Something bad happened:');
console.dir(err);
}
}
uploadFile('AINVOICES', 'T9679', 'C:/my/path', 'a.pdf');
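If you'd rather not depend on fetch-base64, the same input can be produced with core fs; a sketch, assuming uploadDataUrl() accepts the same plain base64 string that fetch.local() returns in data[0]:

const fs = require('fs');

// Hypothetical helper: replicates fetch-base64's data[0] (plain base64) with core modules.
function fileToBase64(filepath) {
    return fs.readFileSync(filepath).toString('base64');
}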

NodeJs Microsoft Azure Storage SDK Download File to Stream

I just started working with the Microsoft Azure Storage SDK for Node.js (https://github.com/Azure/azure-storage-node) and already successfully uploaded my first PDF files to the cloud storage.
Now I'm looking at the documentation for how to download my files into a Node buffer (so I don't have to use fs.createWriteStream), but it doesn't give any examples of how this works. The only thing it says is "There are also several ways to download files. For example, getFileToStream downloads the file to a stream:", and then it shows a single example, which uses fs.createWriteStream, which I don't want to use.
I was also not able to find anything on Google that really helped me, so I was wondering if anybody has experience with this and could share a code sample with me?
The getFileToStream function needs a writable stream as a parameter. If you want all the data written to a Buffer instead of a file, you just need to create a custom writable stream.
const { Writable } = require('stream');

let chunks = [];
const myWriteStream = new Writable({
    write(chunk, encoding, callback) {
        chunks.push(chunk);
        callback();
    }
});

myWriteStream.on('finish', function () {
    // All the data is stored inside this single Buffer.
    let dataBuffer = Buffer.concat(chunks);
});
Then pass myWriteStream to the getFileToStream function:
fileService.getFileToStream('taskshare', 'taskdirectory', 'taskfile', myWriteStream, function(error, result, response) {
if (!error) {
// file retrieved
}
});
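For convenience, the two pieces can be combined into a promise that resolves with the finished Buffer; a small sketch using the answer's placeholder share/directory/file names:

const { Writable } = require('stream');

function getFileToBuffer(fileService, share, directory, file) {
    return new Promise((resolve, reject) => {
        const chunks = [];
        const stream = new Writable({
            write(chunk, encoding, callback) {
                chunks.push(chunk);
                callback();
            }
        });
        // 'finish' fires once the SDK has written every chunk and ended the stream.
        stream.on('finish', () => resolve(Buffer.concat(chunks)));
        fileService.getFileToStream(share, directory, file, stream, (error) => {
            if (error) reject(error);
        });
    });
}

getFileToBuffer(fileService, 'taskshare', 'taskdirectory', 'taskfile')
    .then((dataBuffer) => console.log(dataBuffer.length));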

How do I save a downloaded file to the file system? (node.js)

I am trying to make a small scraper with Node (Electron) for learning purposes. I am stuck trying to download files from the webpage.
For now I do:
fetch(fileUrl).then(function (response) {
    return response.arrayBuffer();
}).then(function (buffer) {
    var buff = new Int32Array(buffer);
    fsp.writeFile("filename.pdf", buff).then(function () { console.log('Success!'); });
});
But the fs part is wrong - I just can't figure out how to make it right. How do I know what sort of data (uint8, int32, etc.) I should use? I'm really confused about how this should work.
Assuming you're running Electron v0.37.5 or later (so that Buffer.from() accepts an ArrayBuffer), I think this should do the trick:
fetch(fileUrl).then(response => {
    // arrayBuffer() itself returns a promise, so resolve it before wrapping it in a Buffer.
    return response.arrayBuffer();
}).then(arrayBuffer => {
    // A Buffer copies the raw bytes as-is; no need to guess at uint8 vs. int32 views.
    var buff = Buffer.from(arrayBuffer);
    return fsp.writeFile("filename.pdf", buff);
}).then(() => {
    console.log('Success!');
});
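The same flow reads a little more cleanly with async/await; a sketch that keeps the question's fsp promise-based fs wrapper:

async function download(fileUrl, destination) {
    const response = await fetch(fileUrl);
    const buff = Buffer.from(await response.arrayBuffer());
    await fsp.writeFile(destination, buff);
    console.log('Success!');
}

download(fileUrl, "filename.pdf");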
