Write a variable into a buffer - javascript

I am very new to Node.js, and while I think I understand the basics of how it works, I feel like I am missing something vital about how fs.write and buffers function.
I am trying to send a user-defined variable over socket.io and write it into an HTML file. I have a main site with a button; when it is clicked, it sends the information to the socket in a variable.
The thing I can't figure out is how to insert the variable into the HTML file.
I can save strings that I type into a file:
(e.g.) var writeBuffer = new Buffer('13');
But not variables that I pass in:
(e.g.) var writeBuffer = new Buffer($(newval));
I even tried different encodings; I think I am missing something.
Server.js
var newval = "User String";
var fd = fsC.open(fileName, 'rs+', function (error, fd) {
    if (error) { throw error; }
    var writeBuffer = new Buffer($(newval));
    var bufferLength = writeBuffer.length;
    fsC.write(fd, writeBuffer, 0, bufferLength, 937,
        function (error, written) {
            if (error) { throw error; }
            fsC.close(fd, function () {
                console.log('File Closed');
            });
        }
    );
});

If you are using jsdom 4.0.0 or later, it will not work with Node.js. Per the jsdom GitHub readme:
Note that as of our 4.0.0 release, jsdom no longer works with Node.js™, and instead requires io.js. You are still welcome to install a release in the 3.x series if you use Node.js™.
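Separately, regarding the Buffer error itself: the difference between the two examples above is the argument type. new Buffer('13') receives a plain string, while $(newval) is presumably a jQuery call that returns an object, which the Buffer constructor cannot turn into the text you want to write. A minimal sketch, assuming newval already holds the user's string on the server:
var newval = "User String";
// Build the buffer from the string itself (Buffer.from(newval) on newer Node versions)
var writeBuffer = new Buffer(newval);
var bufferLength = writeBuffer.length;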

Related

Using CSV node module in chrome extension

I am trying to make a Chrome extension, and one piece of needed functionality is saving the user's data locally (though I do plan to integrate Google Drive saving). I wanted to save the user's data as CSV, assuming that would be easier, but it turned out to be a disaster.
I have tried many different methods to make it work, but nothing seems to help.
Here's part of the code that doesn't work:
var { parse } = require('csv-parse');
const fs = require("chrome-fs");
const records = [];
const csvFile = chrome.runtime.getURL("save_file.csv");
var parser = parse({ columns: true }, function (err, records) {
    console.log(records);
});
fs.createReadStream(chrome.runtime.getURL("save_file.csv")).pipe(parser);
and here's the error the browser throws at me when I try to run this code:
Uncaught Error
    at chrome.js:511:1

if (err.name === 'NotFoundError') {
    var enoent = new Error()
    enoent.code = 'ENOENT'   // chrome.js line 511
    callback(enoent)
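For what it's worth, files packaged with an extension are not on a Node-style filesystem, which is consistent with the ENOENT above. A minimal sketch of an alternative, assuming csv-parse is bundled for the browser and, if this runs in a content script, that save_file.csv is listed under web_accessible_resources:
// Fetch the packaged CSV over its chrome-extension:// URL instead of reading it with fs
fetch(chrome.runtime.getURL('save_file.csv'))
    .then(function (response) { return response.text(); })
    .then(function (text) {
        parse(text, { columns: true }, function (err, records) {
            if (err) { return console.error(err); }
            console.log(records);
        });
    });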

Read and Write text file using create-react-app from the browser

I am trying to read a text file that is in the source (src) folder of my React project (create-react-app), manipulate the values, and write the new value back to the same text file.
I am unable to read the current values from the file: the code that reads the file logs old data, and I am not sure where that is coming from, because even if I change the data in the text file directly, it doesn't read the new value.
I am using a package called browserify-fs (https://www.npmjs.com/package/browserify-fs) for reading and writing to a file.
var fs = require('browserify-fs');
var reader = new FileReader();
export const getData = () => {
    let initialString = "abcd";
    fs.readFile('file.txt', function (err, data) {
        if (err) {
            return console.error(err);
        }
        console.log(initialString + data.toString());
    });
};
export const writeData = () => {
    let data = "abcd";
    fs.writeFile("file.txt", data, err => {
        // In case of an error, throw err.
        if (err) throw err;
    });
};
Does it have something to do with a webpack loader for importing this type of file at build time, or is it related specifically to create-react-app, which defines the file and folder structure for auto-importing file types?
I am still not sure what the actual issue is. Any help would be appreciated.
P.S.: I know doing CRUD operations from the browser is not recommended practice; I am just using it for a personal project (learning purposes).
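One likely explanation, worth verifying: browserify-fs keeps its filesystem inside the browser (on top of IndexedDB), so readFile and writeFile operate on that in-browser store rather than on the real file in src, which would explain why edits to the file on disk never show up. A minimal sketch of reading a file the build can actually serve, assuming the file is moved to the public folder so it is available at /file.txt (writing back to disk would still need a small server endpoint or a download):
// Read the current contents of public/file.txt as served by the dev server or build
export const getData = () =>
    fetch('/file.txt')
        .then(response => response.text())
        .then(text => {
            console.log(text);
            return text;
        });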

Save json object as a text file

I am using an API for a Twitch.tv streaming bot called DeepBot.
Here is the link to it on github https://github.com/DeepBot-API/client-websocket
My goal is to create a text document listing all the information pulled from the bot using the command api|get_users|. The bot's response is always a JSON object. How can I take the JSON object from the bot and save it as a text file?
Edit: My code
var WebSocket = require('ws');
var ws = new WebSocket('ws://Ip and Port/');
ws.on('open', function () {
    console.log('sending API registration');
    ws.send('api|register|SECRET');
});
ws.on('close', function close() {
    console.log('disconnected');
});
ws.on('message', function (message) {
    console.log('Received: ' + message);
});
ws.on('open', function () {
    ws.send('api|get_users|');
});
Well, that depends on what your setup is. You posted this under javascript, so I guess you are either:
using a browser to make the websocket connection, in which case there is no direct way to save a file on the client, but in HTML5 you can store key/value pairs with local storage;
using Node.js (server-side JavaScript), in which case the code is as below;
or some other setup that I can't guess, in which case you might tell us a little more about it.
In a browser with HTML5 capabilities:
// where msg is an object returned from the API
localStorage.setItem('Some key', JSON.stringify(msg));
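To read the value back later, parse the stored string, e.g. JSON.parse(localStorage.getItem('Some key')).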
In Node.js:
var fs = require("fs"); // fs is a built-in Node.js core module, no separate install needed
// where msg is an object returned from the API
fs.writeFile("some-file.json", JSON.stringify(msg), function (err) {
    if (err) throw err;
});
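Tying this to the websocket code above, here is a sketch of persisting each get_users response from the message handler; the filename and the assumption that the payload parses as JSON are mine:
var fs = require('fs');

ws.on('message', function (message) {
    var output = message.toString();
    try {
        // Pretty-print if the bot's reply is valid JSON; otherwise keep the raw text
        output = JSON.stringify(JSON.parse(output), null, 2);
    } catch (e) {
        // not JSON (e.g. a plain api|... acknowledgement), keep as-is
    }
    fs.writeFile('users.txt', output, function (err) {
        if (err) throw err;
    });
});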
Edit: OK, thanks for clearing it up.
I believe Blag's solution is the way to go.
Good luck with your project!
If it's for a client-side JS save:
Create a file in memory for user to download, not through server
and
Convert JS object to JSON string
is what you need. (I haven't tested it, but it'll look something like this:)
var j = {"name":"binchen"};
var s = JSON.stringify(j);
window.location = 'data:text/plain;charset=utf-8,'+encodeURIComponent(s);
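If an actual download with a filename is preferred over navigating to a data URL, a commonly used variant (untested here, standard browser APIs only; the filename is an assumption) is:
var blob = new Blob([JSON.stringify(j)], { type: 'text/plain' });
var a = document.createElement('a');
a.href = URL.createObjectURL(blob);
a.download = 'users.txt';      // suggested filename
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
URL.revokeObjectURL(a.href);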

Parse json object when node server is started

I need to parse a JSON object when the node server.js (which is my entry point to the program) is started; the parsing of the JSON file is done in a different module in my project.
I have two questions:
Is it recommended to invoke the parse function with an event in the server.js file?
I read about the event emitter but am not sure how to invoke a function from a different module... an example would be very helpful.
I have multiple JSON files.
UPDATE, to make it more clear:
If I read 3 JSON files (about 50 lines each) when the server/app is loaded (in the server.js file), this should be fast, I guess. My scenario is that the list of valid paths for the Express calls lives in these JSON files:
app.get('/run1', function (req, res) {
    res.send('Hello World!');
});
So run1 should be defined in the JSON file (like a whitelist of paths). If the user requests run2, which is not defined, I need to return an error. So I think the better approach is, when the server comes up, to do this parsing once and keep an object with all the configured valid paths; then, when a user makes a call, I just read that already-parsed object (loaded when the server started) and verify whether the path is OK, instead of parsing on every call.
UPDATE 2
I'll try to explain more simply.
Let's assume that you have a whitelist of paths you should listen to, like run1:
app.get('/run1', function
These paths are defined in JSON files inside your project under a specific folder. Before every call to your application via Express, you should verify that the requested path is in the path list from the JSON files. That part is a given; the question is how to do it.
Currently I've developed a module which searches these JSON files and checks whether a specific path exists there.
Now I think the right solution is, when the node application starts, to invoke this functionality and keep the list of valid paths in some object that I can access easily during the user's call to check whether the path is there.
My question is how to send some event to the validator module, when the node app (server.js) is up, so that it provides this object.
If it's a part of your application initialization, then you could read and parse this JSON file synchronously, using either fs.readFileSync and JSON.parse, or require:
var config = require('path/to/my/config.json');
Just make sure that the module handling this JSON loading is required in your application root before the app.listen call.
In this case the JSON data will be loaded and parsed by the time your server starts, and there will be no need to trouble yourself with callbacks or event emitters.
I can't see any benefits of loading your initial config asynchronously for two reasons:
The bottleneck of JSON parsing is the parser itself, but since it's synchronous, you won't gain anything here. So, the only part you'll be able to optimize is interactions with your file system (i.e. reading data from disk).
Your application won't be able to work properly until this data is loaded.
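To connect this back to the path-whitelist scenario, here is a minimal sketch of using the synchronously loaded config to reject unknown paths; the config file layout and the 404 response are assumptions, not part of the original answer:
// config/routes.json is assumed to look like: { "paths": ["/run1"] }
var config = require('./config/routes.json');
var express = require('express');
var app = express();

// Reject any request whose path is not in the whitelist
app.use(function (req, res, next) {
    if (config.paths.indexOf(req.path) === -1) {
        return res.status(404).send('Unknown path');
    }
    next();
});

app.get('/run1', function (req, res) {
    res.send('Hello World!');
});

app.listen(8000);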
Update
If for some reason you can't make your initialization synchronous, you could delay starting your application until initialization is done.
The easiest solution here is to move app.listen part inside of initialization callback:
// initialization.js
var glob = require('glob')
var path = require('path')

module.exports = function initialization (done) {
    var data = {}
    glob('./config/*.json', function (err, files) {
        if (err) throw err
        files.forEach(function (file) {
            var filename = path.basename(file)
            data[filename] = require(file)
        })
        done(data)
    })
}

// server.js
var initialization = require('./initialization')
var app = require('express')()

initialization(function (data) {
    app.use(require('./my-middleware')(data))
    app.listen(8000)
})
An alternative solution is to use simple event emitter to signal that your data is ready:
// config.js
var glob = require('glob')
var path = require('path')
var events = require('events')

var obj = new events.EventEmitter()
obj.data = {}

glob('./config/*.json', function (err, files) {
    if (err) throw err
    files.forEach(function (file) {
        var filename = path.basename(file)
        obj.data[filename] = require(file)
    })
    obj.emit('ready')
})

module.exports = obj

// server.js
var config = require('./config')
var app = require('express')()

app.use(require('./my-middleware'))

config.on('ready', function () {
    app.listen(8000)
})
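For completeness, ./my-middleware is not shown in the answer; a hypothetical implementation matching the first example's require('./my-middleware')(data) usage, with an assumed { "paths": [...] } shape for each JSON file, might look like this:
// my-middleware.js (hypothetical sketch)
module.exports = function (data) {
    // data maps each config filename to its parsed JSON content
    return function (req, res, next) {
        var allowed = Object.keys(data).some(function (filename) {
            var paths = data[filename].paths || [];
            return paths.indexOf(req.path) !== -1;
        });
        if (!allowed) {
            return res.status(404).send('Unknown path');
        }
        next();
    };
};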

createBlockBlob and commitBlobBlocks create empty files in BlobStorage

I'm developing a web app that can upload large file into the Azure Blob Storage.
As a backend, I am using Windows Azure Mobile Services (the web app will generate content for mobile devices) in Node.js.
My client can successfully send chunks of data to the backend and everything looks fine, but at the end the uploaded file is empty. The data upload was prepared by following this tutorial: it works perfectly when the file is small enough to be uploaded in a single request. The process fails when the file needs to be broken into chunks. It uses the ReadableStreamBuffer from the tutorial.
Can somebody help me?
Here the code:
Back-end: createBlobBlockFromStream
[...]
//Get references
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;
var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";
//console.log(request.body);
var blobName = request.body.file;
var blobExt = request.body.ext;
var blockId = request.body.blockId;
var data = new Buffer(request.body.data, "base64");
var stream = new ReadableStreamBuffer(data);
var streamLen = stream.size();
var blobFull = blobName+"."+blobExt;
console.log("BlobFull: "+blobFull+"; id: "+blockId+"; len: "+streamLen+"; "+stream);
var blobService = azure.createBlobService(accountName, accountKey, host);
//console.log("blockId: "+blockId+"; container: "+container+";\nblobFull: "+blobFull+"streamLen: "+streamLen);
blobService.createBlobBlockFromStream(blockId, container, blobFull, stream, streamLen,
    function (error, response) {
        if (error) {
            request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
        } else {
            request.respond(statusCodes.OK, { message: "block created" });
        }
    });
[...]
Back-end: commitBlobBlock
[...]
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;
var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";
var blobName = request.body.file;
var blobExt = request.body.ext;
var blobFull = blobName+"."+blobExt;
var blockIdList = request.body.blockList;
console.log("blobFull: "+blobFull+"; blockIdList: "+JSON.stringify(blockIdList));
var blobService = azure.createBlobService(accountName, accountKey, host);
blobService.commitBlobBlocks(container, blobFull, blockIdList, function (error, result) {
    if (error) {
        request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
    } else {
        request.respond(statusCodes.OK, result);
        blobService.listBlobBlocks(container, blobFull);
    }
});
[...]
The second method returns the correct list of block IDs, so I think that part of the process works fine. I think it is the first method that fails to write the data into the blocks, as if it creates empty blocks.
On the client side, I read the file as an ArrayBuffer using the FileReader JS API.
Then I convert it to a Base64-encoded string using the following code. This approach works perfectly if I create the blob in a single call, which is good for small files.
[...]
//data contains the ArrayBuffer read by the FileReader API
var requestData = new Uint8Array(data);
var binary = "";
for (var i = 0; i < requestData.length; i++) {
    binary += String.fromCharCode(requestData[i]);
}
[...]
Any idea?
Thank you,
Ric
Which version of the Azure Storage Node.js SDK are you using? It looks like you might be using an older version; if so I would recommend upgrading to the latest (0.3.0 as of this writing). We’ve improved many areas with the new library, including blob upload; you might be hitting a bug that has already been fixed. Note that there may be breaking changes between versions.
Download the latest Node.js Module (code is also on Github)
https://www.npmjs.org/package/azure-storage
Read our blog post: Microsoft Azure Storage Client Module for Node.js v. 0.2.0 http://blogs.msdn.com/b/windowsazurestorage/archive/2014/06/26/microsoft-azure-storage-client-module-for-node-js-v-0-2-0.aspx
If that’s not the issue, can you check a Fiddler trace (or equivalent) to see if the raw data blocks are being sent to the service?
Not sure if you're still suffering from this problem, but I was experiencing the exact same thing and came across this while looking for a solution. Well, I found one and thought I'd share.
My problem was not with how I pushed the blocks but with how I committed them. My little proxy server had no knowledge of prior commits; it just pushes the data it is sent and commits it. Trouble was, I wasn't providing the commit call with the previously committed blocks, so it was overwriting them with the current commit each time.
So my solution:
var opts = {
    UncommittedBlocks: [IdOfJustCommittedBlock],
    CommittedBlocks: [IdsOfPreviouslyCommittedBlocks]
};
blobService.commitBlobBlocks('containerName', 'blobName', opts, function (e, r) {});
For me, the bit that broke everything was the format of the opts object: I wasn't providing an array of previously committed block names. It's also worth noting that I had to base64-decode the existing block names, because
blobService.listBlobBlocks('containerName', 'fileName', 'type IE committed', fn)
returns an object for each block with its name base64-encoded.
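In Node, decoding such a name back to the original block ID can be done with a Buffer; the Name property here is an assumption about the shape of the listBlobBlocks result, which may vary between SDK versions:
// block.Name is assumed to hold the base64-encoded block ID returned by listBlobBlocks
var decodedId = new Buffer(block.Name, 'base64').toString('utf8');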
Just for completeness, here's how I push my blocks; req is from the Express route:
var blobId = blobService.getBlockId('blobName', 'lengthOfPreviouslyCommittedArray + 1 as Int');
var length = req.headers['content-length'];
blobService.createBlobBlockFromStream(blobId, 'containerName', 'blobName', req, length, fn);
Also, with the upload I had a strange issue where the content-length header caused it to break, so I had to delete it from the req.headers object.
Hope this helps and is detailed enough.
