I would like to create a command-line (or otherwise automated) method for uploading files to Priority using the Web SDK. The best solution I have right now seems to be a simple web form activated by a Python script.
Are there tools/examples for using JavaScript and a file picker without opening the browser? Are there Priority-Web-SDK ports to other environments (C#, Python, etc.)?
Any other suggestions also welcome.
UPDATE June 14, 2020:
I was able to complete the task for this client using a combination of JavaScript, Python and C#. A tangled mess indeed, but files were uploaded. I am now revisiting the task and looking for cleaner solutions.
I found a working, usable Node module for compacting the program into an executable, which makes this a viable option for deployment.
So the question becomes more focused: how do I create the input for uploadDataUrl() or uploadFile() without a browser form?
You run Node locally and use the Priority SDK, as long as you work in an environment capable of running JS.
You can send files through the uploadFile function. The data inside the file object needs to be encoded as base64.
This Node.js script will upload a file to Priority. Make sure fetch-base64 is installed via npm:
"use strict";
const priority = require('priority-web-sdk');
const fetch = require('fetch-base64');
const configuration = {...};
async function uploadFile(formName, zoomVal, filepath, filename) {
try {
await priority.login(configuration);
let form = await priority.formStartEx(formName, null, null, null, 1, {zoomValue: zoomVal});
await form.startSubForm("EXTFILES", null ,null);
let data = await fetch.local(filepath + '/' + filename);
let f = await form.uploadDataUrl(data[0], filename.match(/\..+$/i)[0], () => {});
await form.fieldUpdate("EXTFILENAME", f.file); // Path
await form.fieldUpdate("EXTFILEDES", filename); // Name
await form.saveRow(0);
} catch(err) {
console.log('Something bad happened:');
console.dir(err);
}
}
uploadFile('AINVOICES', 'T9679', 'C:/my/path', 'a.pdf');
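If you want to drop the fetch-base64 dependency, plain fs can produce the same input. A minimal sketch (fileToBase64 is an illustrative helper, not part of the SDK; it returns the same bare base64 string as fetch.local's data[0]):

const fs = require('fs');
const path = require('path');

// Read a file from disk and return its contents as a base64 string,
// suitable as the first argument to uploadDataUrl() above.
function fileToBase64(filepath, filename) {
    return fs.readFileSync(path.join(filepath, filename)).toString('base64');
}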
I am trying the following code (from the parquetjs-lite sample and Stack Overflow) to read a parquet file in Node.js:
const parquet = require('parquetjs-lite');

const readParquetFile = async () => {
    try {
        // create new ParquetReader that reads from test.parquet
        let reader = await parquet.ParquetReader.openFile('test.parquet');
        // create a new cursor
        let cursor = reader.getCursor();
        // read all records from the file and print them
        let record = null;
        while (record = await cursor.next()) {
            console.log(record);
        }
        await reader.close();
    } catch (e) {
        console.log(e);
        throw e;
    }
};
When I run this code nothing happens. There is nothing written to the console. For testing purposes I used only a small CSV file, which I converted to parquet using Python.
Could it be because I converted from CSV to parquet using Python? (I couldn't find any JS equivalent that works on the large files I ultimately have to handle.)
I want my application to be able to take in any parquet file and read it. Does parquetjs-lite have any limitation in this regard?
There are NaN values in my CSV; could that be a problem?
Any pointers would be helpful.
Thanks
Possible failure cases:
You are calling this function in a script with no web server (or anything else) keeping the process alive.
In that case the function body runs asynchronously: its continuations go onto the callback queue, and once the main call stack is empty and the event loop has nothing pending, the program exits, so the queued code never runs or logs anything.
To solve this, keep the process alive (e.g. run a web server) or, better, use synchronous calls.
//app.js (without a web server)
const readParquetFile = async () => {
    // awaiting a promise that never settles stands in for a hung async call;
    // nothing backs it in the event loop, so the process simply exits
    await new Promise(() => {});
    console.log("running")
}
readParquetFile()
console.log("exit")
when you run the above code the output will be
exit
//syncApp.js
const readParquetFile = () => {
    console.log("running")
    // every call in here should be synchronous
}
readParquetFile()
console.log("exit")
here the console output will be
running
exit
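One small addition: chain onto the promise that readParquetFile() returns, so the work completes and any error actually surfaces instead of vanishing when the process goes idle. A sketch using the question's own function:

readParquetFile()
    .then(() => console.log("done reading"))
    .catch(err => console.error("failed:", err));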
I have a local JSON file which I intend to read/write from a Node.js Electron app. I am not sure, but I believe that instead of using readFile() and writeFile(), I should get a FileHandle to avoid repeated open and close actions.
So I've tried to grab a FileHandle from fs.promises.open(), but the problem seems to be that I am unable to get a FileHandle for an existing file without truncating it to zero length.
const { resolve } = require('path');
const fsPromises = require('fs').promises;

function init() {
    // Save table name
    this.path = resolve(__dirname, '..', 'data', `test.json`);
    // Create/Open the json file
    fsPromises
        .open(this.path, 'wx+')
        .then(fileHandle => {
            // Grab the file handle if the file doesn't exist yet;
            // the 'wx+' flag fails when it already does
            this.fh = fileHandle;
        })
        .catch(err => {
            if (err.code === 'EEXIST') {
                // File exists
            }
        });
}
Am I doing something wrong? Are there better ways to do it?
Links:
https://nodejs.org/api/fs.html#fs_fspromises_open_path_flags_mode
https://nodejs.org/api/fs.html#fs_file_system_flags
Because JSON is a text format that has to be read or written all at once and can't easily be modified or appended to in place, you're going to have to read the whole file or write the whole file at once.
So your simplest option is to just use fs.promises.readFile() and fs.promises.writeFile() and let the library open the file, read/write it, and close it. Opening and closing a file in a modern OS takes advantage of disk caching, so reopening a file you opened not long ago is not a slow operation. Further, since Node.js performs these operations on secondary threads in libuv, it doesn't block the main thread either, so it's generally not a performance issue for your app.
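For example, a minimal read-modify-write helper along those lines (updateJson and its mutate callback are illustrative names, not an existing API):

const fsPromises = require('fs').promises;

// Read the whole JSON file, apply an in-memory change, write it all back.
async function updateJson(path, mutate) {
    const data = JSON.parse(await fsPromises.readFile(path, 'utf8'));
    mutate(data); // modify the parsed object in place
    await fsPromises.writeFile(path, JSON.stringify(data, null, 2));
}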
If you really wanted to open the file once and hold it open, you would open it for reading and writing using the r+ flag as in:
const fileHandle = await fsPromises.open(this.path, 'r+');
Reading the whole file is simple, as the fileHandle object has a .readFile() method:
const text = await fileHandle.readFile({encoding: 'utf8'});
For writing the whole file from an open filehandle, you would have to truncate the file, then write your bytes, then flush the write buffer to ensure the last bit of the data got to the disk and isn't sitting in a buffer.
await fileHandle.truncate(0); // clear previous contents
let {bytesWritten} = await fileHandle.write(mybuffer, 0, someLength, 0); // write new data
assert(bytesWritten === someLength);
await fileHandle.sync(); // flush buffering to disk
I just started working with the Microsoft Azure Storage SDK for Node.js (https://github.com/Azure/azure-storage-node) and have already successfully uploaded my first PDF files to cloud storage.
However, now I've started looking at the documentation in order to download my files to a Node Buffer (so I don't have to use fs.createWriteStream), but the documentation gives no examples of how this works. The only thing it says is "There are also several ways to download files. For example, getFileToStream downloads the file to a stream:", and then it shows a single example, which uses fs.createWriteStream, which I don't want to use.
I was also not able to find anything on Google that really helped me, so I was wondering if anybody has experience with this and could share a code sample?
The getFileToStream function needs a writable stream as a parameter. If you want all the data written to a Buffer instead of a file, you just need to create a custom writable stream.
const { Writable } = require('stream');

let chunks = [];
const myWriteStream = new Writable({
    write(chunk, encoding, callback) {
        chunks.push(chunk); // collect each incoming Buffer chunk
        callback();
    }
});

myWriteStream.on('finish', function () {
    // all the data is concatenated into this single Buffer
    let dataBuffer = Buffer.concat(chunks);
})
Then pass myWriteStream to the getFileToStream function:
fileService.getFileToStream('taskshare', 'taskdirectory', 'taskfile', myWriteStream, function (error, result, response) {
    if (!error) {
        // file retrieved
    }
});
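If you prefer a promise at the call site, the same pattern can be wrapped so the Buffer is resolved once the stream finishes. A sketch under the same assumptions (downloadToBuffer is an illustrative name; the share/directory/file arguments are the placeholders from above):

const { Writable } = require('stream');

// Wrap getFileToStream in a Promise that resolves with the file's bytes.
function downloadToBuffer(fileService, share, directory, file) {
    return new Promise((resolve, reject) => {
        const chunks = [];
        const stream = new Writable({
            write(chunk, encoding, callback) {
                chunks.push(chunk);
                callback();
            }
        });
        stream.on('finish', () => resolve(Buffer.concat(chunks)));
        fileService.getFileToStream(share, directory, file, stream, error => {
            if (error) reject(error);
        });
    });
}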
Assuming there's a server storing multiple files (not necessarily text documents):
http://<server>/<path>/file0001.txt ... http://<server>/<path>/file9999.txt
If the user were to download all of those files as one, how would I do that in JavaScript?
Normally the user would have to download 9999 files and join them on their drive.
How can I prompt the download of a single file and stream the data of multiple files into it as JavaScript fetches them, as if it were one big file?
I imagine it would be something like this (excuse the pseudocode, I'm just trying to explain):
With (download prompt of 'onefile.txt') as connection:
While connection is open:
For file in file_list:
get file
return file.contents
connection close
Downloading each file and storing it in memory until the last one is retrieved is not a good idea, since the overall size can be quite big.
I'm wondering if that's even possible. I could write it in Python, but that's another story; I want to make it a JavaScript function on a website.
I'm surprised JavaScript can't just create a "virtual localhost connection" where it uses some generator to "yield" the contents of each file...
Well, if you use a service worker you can manipulate the response and give it a ReadableStream from which you "yield" the content of each file...
This is what StreamSaver does internally, but it takes away all the hassle...
I will show you an example using ES6 and StreamSaver.js.
It's not tested, it's just a rough idea.
This will consume very little memory, but StreamSaver is limited to Blink-based browsers ATM.
// Promise.coroutine comes from bluebird; streamSaver from StreamSaver.js
let download = Promise.coroutine(function* (files) {
    const fileStream = streamSaver.createWriteStream('onefile.txt')
    const writeStream = fileStream.getWriter()
    // Later you will be able to just simply do
    // yield res.body.pipeTo(fileStream) instead of pumping
    for (let file of files) {
        let res = yield fetch(file)
        let reader = res.body.getReader()
        let pump = () => reader.read()
            .then(({ value, done }) => !done &&
                // Write one chunk, then get the next one
                writeStream.write(value).then(pump)
            )
        yield pump()
    }
    // Close the stream when you are done writing
    writeStream.close()
})

download([
    'http://<server>/<path>/file0001.txt',
    'http://<server>/<path>/file9999.txt'
]).then(() => {
    alert('all done')
})
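For reference, where res.body.pipeTo is available, the pump above collapses to one call per file. A rough sketch under the same StreamSaver assumption (downloadAll is an illustrative name; preventClose keeps the sink open between files):

async function downloadAll(files) {
    const fileStream = streamSaver.createWriteStream('onefile.txt')
    for (const file of files) {
        const res = await fetch(file)
        // pipe this response into the shared sink without closing it
        await res.body.pipeTo(fileStream, { preventClose: true })
    }
    // close the sink once every file has been piped
    await fileStream.getWriter().close()
}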
I am trying to make a small scraper with Node (Electron) for learning purposes. I am stuck trying to download files from the webpage.
For now I do:
fetch(fileUrl).then(function (response) {
    return response.arrayBuffer();
}).then(function (buffer) {
    var buff = new Int32Array(buffer);
    fsp.writeFile("filename.pdf", buff).then(function () { console.log('Success!') })
})
But the fs part is wrong - I just can't figure out how to make it right. How do I know what sort of data (uint8, int32, etc.) I should use? I'm really confused about how this should work.
Assuming you're running Electron v0.37.5 or later, I think this should do the trick. Note that response.arrayBuffer() returns a promise, so it has to be resolved before Buffer.from can copy the bytes:
fetch(fileUrl)
    .then(response => response.arrayBuffer())
    .then(arrayBuffer => {
        var buff = Buffer.from(arrayBuffer);
        return fsp.writeFile("filename.pdf", buff);
    })
    .then(() => {
        console.log('Success!')
    });
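The same flow reads a little more clearly with async/await, under the same assumption that fsp is a promisified fs such as fs.promises (savePdf is an illustrative name):

async function savePdf(fileUrl) {
    const response = await fetch(fileUrl);
    const arrayBuffer = await response.arrayBuffer();
    // Buffer.from(arrayBuffer) copies the raw bytes into a Node Buffer
    await fsp.writeFile("filename.pdf", Buffer.from(arrayBuffer));
    console.log('Success!');
}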