Using CSV node module in chrome extension - javascript

I am trying to make a Chrome extension, and one piece of needed functionality is saving the user's data locally (though I do plan to integrate Google Drive saving later). I wanted to save the user's data as CSV, assuming that would be easier, but it has turned out to be a disaster.
I have tried many different methods to make it work, but nothing has helped.
Here's part of the code that doesn't work:
var { parse } = require('csv-parse');
const fs = require('chrome-fs');

const records = [];
const csvFile = chrome.runtime.getURL('save_file.csv');

var parser = parse({ columns: true }, function (err, records) {
  console.log(records);
});

fs.createReadStream(chrome.runtime.getURL('save_file.csv')).pipe(parser);
and here's the error the browser throws at me when I initialize this code:
Uncaught Error
    at chrome.js:511:1
The chrome-fs source around that line reads:
if (err.name === 'NotFoundError') {
  var enoent = new Error()
  enoent.code = 'ENOENT'   // chrome.js line 511
  callback(enoent)
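The NotFoundError/ENOENT suggests chrome-fs is looking in the extension's sandboxed HTML5 filesystem, which does not contain files packaged with the extension. A sketch of an alternative (my own, not from the thread), assuming save_file.csv ships with the extension and is listed under web_accessible_resources in manifest.json: fetch the packaged file by its runtime URL and parse the text directly, with no Node-style fs at all.
const url = chrome.runtime.getURL('save_file.csv');

fetch(url)
  .then((response) => response.text())
  .then((text) => {
    // Naive split-based parsing for illustration only; real CSV with
    // quoted fields needs a proper parser (csv-parse ships a browser build).
    const [header, ...lines] = text.trim().split('\n');
    const columns = header.split(',');
    const records = lines.map((line) => {
      const values = line.split(',');
      return Object.fromEntries(columns.map((c, i) => [c, values[i]]));
    });
    console.log(records);
  })
  .catch((err) => console.error('Could not load save_file.csv:', err));
For persisting new user data, chrome.storage.local is usually the better fit, since an extension cannot write back into its own package.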

Related

Exceljs not load static file

I need to export received data to Excel, but I am facing a problem accessing a static file on my server.
I use NuxtJS + Vue; the static Excel file is located in my /static folder:
/static/Excel.xlsx
/static/Word.docx
If I try to access it via FileReader(), I get an error:
var workbook = new Excel.Workbook();
workbook.xlsx.readFile("/Excel.xlsx").then(function () {
  const ws = workbook.getWorksheet("Sheet1");
  const cell = ws.getCell("A1").value;
  console.log(cell);
});
Error:
TypeError: Cannot read property 'F_OK' of undefined
This error, as I understood from the discussions, is related to browser security.
I do not want to compromise the security of the browser or the application to access the file.
The option where the user uploads the template himself is a little inconvenient; it forces a lot of unnecessary actions.
Link to solution on https://github.com/
At the same time I found a solution for Word using the docxtemplater package (it uses JSZipUtils), and the Word example works fine; however, the equivalent module for xlsx is too expensive for me.
loadFile(url, callback) {
  JSZipUtils.getBinaryContent(url, callback);
},

this.loadFile("/Word.docx", function (error, content) {
  if (error) {
    throw error;
  }
  var zip = new JSZip(content);
  var doc = new Docxtemplater();
  doc.loadZip(zip);
  // ... getting data from the database and inserting it into the Word template
});
Is it possible to somehow use the JSZipUtils package to get access to the Excel file as well? Or is there another way to get hold of a static file without violating browser security?
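One possible approach (a sketch of mine, not from the thread): ExcelJS's browser build can load a workbook from an ArrayBuffer via workbook.xlsx.load(), so the static file can be fetched over HTTP exactly like the Word template, avoiding the Node-only readFile() that triggers the F_OK error. Assuming /Excel.xlsx is served from the static folder:
const Excel = require("exceljs");

async function readStaticWorkbook() {
  // Fetch the static file over HTTP; readFile() only exists in Node,
  // which is why it fails in the browser.
  const response = await fetch("/Excel.xlsx");
  const buffer = await response.arrayBuffer();

  const workbook = new Excel.Workbook();
  await workbook.xlsx.load(buffer); // load() accepts an ArrayBuffer

  const ws = workbook.getWorksheet("Sheet1");
  console.log(ws.getCell("A1").value);
}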

MS Graph API file replace SharePoint ReactJS 404 item not found or stream issue

I am trying to use the MS Graph API and ReactJS to download a file from SharePoint and then replace the file. I have managed the download part using the @microsoft.graph.downloadUrl value. Here is the code that gets me the XML document from SharePoint.
export async function getDriveFileList(accessToken, siteId, driveId, fileName) {
  const client = getAuthenticatedClient(accessToken);
  // https://graph.microsoft.com/v1.0/sites/{site-id}/drives/{drive-id}/root:/{item-path}
  const files = await client
    .api('/sites/' + siteId + '/drives/' + driveId + '/root:/' + fileName)
    .select('id,name,webUrl,content.downloadUrl')
    .orderby('name')
    .get();
  // console.log(files['@microsoft.graph.downloadUrl']);
  return files;
}
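For completeness, a sketch (mine, not from the post) of turning the returned metadata into the file contents: @microsoft.graph.downloadUrl is a short-lived, pre-authenticated URL, so a plain fetch with no Authorization header is enough.
async function downloadDriveFile(driveItem) {
  // The download URL embeds its own token; no bearer header is needed.
  const response = await fetch(driveItem['@microsoft.graph.downloadUrl']);
  if (!response.ok) {
    throw new Error('Download failed: ' + response.status);
  }
  return response.text(); // the XML document as a string
}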
When attempting to upload the same file back up, I get a 404 itemNotFound error. Because this user was able to get it to work, I think I have the MS Graph API call right, although I am not sure I'm translating it correctly to ReactJS syntax. Even though the error message says the item was not found, I suspect MS Graph might actually be upset with how I'm sending the XML file back. The Microsoft documentation for updating an existing file states that the contents of the file should be sent in a stream. Since I've loaded the XML file into component state, I'm not entirely sure how to send it back. The closest match I found involved converting a PDF to a Blob, so I tried that.
export async function putDriveFile(accessToken, siteId, itemId, xmldoc) {
  const client = getAuthenticatedClient(accessToken);
  // /sites/{site-id}/drive/items/{item-id}/content
  let url = '/sites/' + siteId + '/drive/items/' + itemId + '/content';

  var convertedFile = null;
  try {
    convertedFile = new Blob([xmldoc], { type: 'text/xml' });
  } catch (err) {
    console.log(err);
  }

  const file = await client
    .api(url)
    .put(convertedFile);
  console.log(file);
  return file;
}
I'm pretty sure it's how I'm sending the file back, but the Graph API has some bugs, so I can't be entirely sure. I was convinced I had the correct drive item ID, but I've seen cases where the site ID syntax differs in the Graph API, so maybe it is the item ID.
The correct syntax for putting an (existing) file into a SharePoint document library is actually PUT /sites/{site-id}/drive/items/{parent-id}:/{filename}:/content. I also found that the code below works for taking the XML document and converting it into a Blob that can be uploaded:
var xmlText = new XMLSerializer().serializeToString(this.state.xmlDoc);
var blob = new Blob([xmlText], { type: "text/xml" });
var file = new File([blob], this.props.location.state.fileName, { type: "text/xml" });
var graphReturn = await putDriveFile(accessToken, this.props.location.state.driveId,
                                     this.state.fileId, file);
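Putting the two fixes together, a sketch of the upload helper rewritten against the path-based endpoint; the parentId parameter (the ID of the containing folder) and the function shape are my assumptions on top of the answer above.
export async function putDriveFileByPath(accessToken, siteId, parentId, file) {
  const client = getAuthenticatedClient(accessToken);
  // PUT /sites/{site-id}/drive/items/{parent-id}:/{filename}:/content
  const url = '/sites/' + siteId + '/drive/items/' + parentId +
              ':/' + file.name + ':/content';
  // The Graph JS client accepts a Blob/File directly as the PUT body.
  return client.api(url).put(file);
}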

Read and Write text file using create-react-app from the browser

I am trying to read a text file that lives in the source (src) folder of a React project (create-react-app), manipulate its values, and write the new value back to the same text file.
I am unable to read the current values from the file: the code that reads it keeps logging old data, and I'm not sure where that data comes from, because even if I change the text file directly, the new value is never read.
I am using a package called browserify-fs (https://www.npmjs.com/package/browserify-fs) for reading and writing the file.
var fs = require('browserify-fs');
var reader = new FileReader(); // currently unused

export const getData = () => {
  let initialString = "abcd";
  fs.readFile('file.txt', function (err, data) {
    if (err) {
      return console.error(err);
    }
    console.log(initialString + data.toString());
  });
};

export const writeData = () => {
  let data = "abcd";
  fs.writeFile("file.txt", data, err => {
    // In case of an error, throw err.
    if (err) throw err;
  });
};
Does it have something to do with a webpack loader for importing this type of file into the build, or is it specific to the create-react-app setup, which defines the project's file and folder structure?
I am still not sure what the actual issue is. Any help would be appreciated.
P.S.: I know doing CRUD operations in the browser is not recommended practice; I'm just using it for a personal project (learning purposes).
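One likely explanation (my reading, not from the thread): browserify-fs is backed by an in-browser filesystem persisted in IndexedDB (via level-filesystem), so it never touches the real src/file.txt; reads and writes go to the browser's own store, which would explain seeing stale data no matter how the file on disk is edited. A sketch that stays entirely inside that virtual filesystem:
var fs = require('browserify-fs');

// Write then read within browserify-fs's IndexedDB-backed filesystem.
// This file exists only inside the browser store, not in src/.
fs.writeFile('/file.txt', 'new value', function (err) {
  if (err) throw err;
  fs.readFile('/file.txt', function (err, data) {
    if (err) throw err;
    console.log(data.toString()); // "new value"
  });
});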

Write a variable into a buffer

I am very new to Node.js, and while I think I understand the basics of how it functions, I feel like I am missing something vital about how fs.write and buffers work.
I am trying to send a user-defined variable over socket.io and write it into an HTML file. I have a main site with a button; when clicked, it sends the information to the socket in a variable.
The thing I can't figure out is how to insert that variable into the HTML file.
I can write strings that I type into a file:
(e.g.) var writeBuffer = new Buffer('13');
But not variables that I pass in:
(e.g.) var writeBuffer = new Buffer($(newval));
I even tried different encoding methods; I think I am missing something.
Server.js
var newval = "User String";

var fd = fsC.open(fileName, 'rs+', function (error, fd) {
  if (error) { throw error; }

  var writeBuffer = new Buffer($(newval));
  var bufferLength = writeBuffer.length;

  fsC.write(fd, writeBuffer, 0, bufferLength, 937,
    function (error, written) {
      if (error) { throw error; }
      fsC.close(fd, function () {
        console.log('File Closed');
      });
    }
  );
});
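A note on the failing line (my diagnosis, not from the thread): $(newval) is a jQuery call, which does not exist on the server and, even client-side, returns a jQuery object rather than a string, so the Buffer constructor throws. A sketch of the fix under that assumption:
// Build the buffer from the plain string value. Buffer.from() is the
// modern replacement for the deprecated new Buffer() constructor.
var newval = "User String";
var writeBuffer = Buffer.from(String(newval), 'utf8');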
If you are using a version of jsdom 4.0.0 or later, it will not work with Node.js. As per the jsdom github readme:
Note that as of our 4.0.0 release, jsdom no longer works with
Node.js™, and instead requires io.js. You are still welcome to install
a release in the 3.x series if you use Node.js™.

createBlockBlob and commitBlobBlocks create empty files in BlobStorage

I'm developing a web app that can upload large files into Azure Blob Storage.
As a backend, I am using Windows Azure Mobile Services in Node.js (the web app will generate content for mobile devices).
My client can successfully send chunks of data to the backend and everything looks fine, but at the end the uploaded file is empty. The data upload was prepared by following this tutorial: it works perfectly when the file is small enough to be uploaded in a single request, but the process fails when the file needs to be broken into chunks. It uses the ReadableStreamBuffer from the tutorial.
Can somebody help me?
Here the code:
Back-end: createBlobBlockFromStream
[...]
// Get references
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;

var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";

//console.log(request.body);
var blobName = request.body.file;
var blobExt = request.body.ext;
var blockId = request.body.blockId;
var data = new Buffer(request.body.data, "base64");
var stream = new ReadableStreamBuffer(data);
var streamLen = stream.size();
var blobFull = blobName + "." + blobExt;

console.log("BlobFull: " + blobFull + "; id: " + blockId + "; len: " + streamLen + "; " + stream);

var blobService = azure.createBlobService(accountName, accountKey, host);
//console.log("blockId: " + blockId + "; container: " + container + ";\nblobFull: " + blobFull + "streamLen: " + streamLen);

blobService.createBlobBlockFromStream(blockId, container, blobFull, stream, streamLen,
  function (error, response) {
    if (error) {
      request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
    } else {
      request.respond(statusCodes.OK, { message: "block created" });
    }
  });
[...]
Back-end: commitBlobBlock
[...]
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;

var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";

var blobName = request.body.file;
var blobExt = request.body.ext;
var blobFull = blobName + "." + blobExt;
var blockIdList = request.body.blockList;
console.log("blobFull: " + blobFull + "; blockIdList: " + JSON.stringify(blockIdList));

var blobService = azure.createBlobService(accountName, accountKey, host);
blobService.commitBlobBlocks(container, blobFull, blockIdList, function (error, result) {
  if (error) {
    request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
  } else {
    request.respond(statusCodes.OK, result);
    blobService.listBlobBlocks(container, blobFull);
  }
});
[...]
The second method returns the correct list of blockIds, so I think the second part of the process works fine. I think it is the first method that fails to write the data inside the blocks, as if it were creating empty blocks.
On the client side, I read the file as an ArrayBuffer using the FileReader JS API.
Then I convert it to a Base64-encoded string using the following code. This approach works perfectly if I create the blob in a single call, which is fine for small files.
[...]
// data contains the ArrayBuffer read by the FileReader API
var requestData = new Uint8Array(data);
var binary = "";
for (var i = 0; i < requestData.length; i++) {
  binary += String.fromCharCode(requestData[i]);
}
[...]
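(Aside: the snippet above stops at the raw binary string; presumably the elided part Base64-encodes it, since the backend decodes request.body.data as base64. A sketch of that assumed final step:)
// Assumption: this is what the elided client code does next.
var base64Data = btoa(binary); // sent to the backend as request.body.data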
Any idea?
Thank you,
Ric
Which version of the Azure Storage Node.js SDK are you using? It looks like you might be using an older version; if so I would recommend upgrading to the latest (0.3.0 as of this writing). We’ve improved many areas with the new library, including blob upload; you might be hitting a bug that has already been fixed. Note that there may be breaking changes between versions.
Download the latest Node.js Module (code is also on Github)
https://www.npmjs.org/package/azure-storage
Read our blog post: Microsoft Azure Storage Client Module for Node.js v. 0.2.0 http://blogs.msdn.com/b/windowsazurestorage/archive/2014/06/26/microsoft-azure-storage-client-module-for-node-js-v-0-2-0.aspx
If that’s not the issue, can you check a Fiddler trace (or equivalent) to see if the raw data blocks are being sent to the service?
Not sure if you're still suffering from this problem, but I was experiencing exactly the same thing and came across this while looking for a solution. Well, I found one, and thought I'd share.
My problem was not with how I pushed the blocks but with how I committed them. My little proxy server had no knowledge of prior commits; it just pushes the data it is sent and commits it. The trouble was that I wasn't including the previously committed blocks in the commit message, so each commit overwrote them.
So my solution:
var opts = {
  UncommittedBlocks: [IdOfJustCommitedBlock],
  CommittedBlocks: [IdsOfPreviouslyCommittedBlocks]
};
blobService.commitBlobBlocks('containerName', 'blobName', opts, function (e, r) { });
For me, the bit that broke everything was the format of the opts object: I wasn't providing an array of the previously committed block names. It's also worth noting that I had to base64-decode the existing block names, because
blobService.listBlobBlocks('containerName', 'fileName', 'type IE committed', fn)
returns an object for each block whose name is base64 encoded.
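A sketch pulling those pieces together (the 'committed' filter string, the result shape, and the newBlockId name are my assumptions based on the azure-storage SDK of that era): list the committed blocks, decode their names, and include them alongside the newly pushed block in the commit.
blobService.listBlobBlocks('containerName', 'blobName', 'committed',
  function (err, blockList) {
    if (err) { throw err; }

    // Block names come back base64 encoded; decode before re-committing.
    var committedIds = blockList.CommittedBlocks.map(function (b) {
      return new Buffer(b.Name, 'base64').toString();
    });

    blobService.commitBlobBlocks('containerName', 'blobName', {
      UncommittedBlocks: [newBlockId],  // the block just pushed (hypothetical)
      CommittedBlocks: committedIds     // everything committed before
    }, function (e, r) { });
  });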
Just for completeness, here's how I push my blocks; req is from the Express route:
var blobId = blobService.getBlockId('blobName', 'lengthOfPreviouslyCommittedArray + 1 as Int');
var length = req.headers['content-length'];
blobService.createBlobBlockFromStream(blobId, 'containerName', 'blobName', req, length, fn);
Also, with the upload I had a strange issue where the content-length header caused it to break, so I had to delete it from the req.headers object.
Hope this helps and is detailed enough.
