I'm building something like Ambilight and I need to capture the screen (1920x1080) as fast as possible and process it to get colors for LEDs.
I'm using Node.js as the programming language. I tried to capture the screen using the VNC protocol (with my own client implementation), but it gave me about 1 FPS with a delay of around 3 seconds. I need the fastest way to capture the screen of the computer that runs Node.js.
I'm using an Ubuntu-based Linux distro.
This is going to be difficult to answer because screen capturing is system dependent. If I were writing this type of thing for my current system on macOS, I would use the command line to take a screenshot.
//from Terminal
>screencapture ~/Desktop/test.png
Then incorporate this into Node.js:
// screenshot.js
var path = require('path');
var os = require('os');
var worker = require('child_process');
var fs = require('fs');

// fs does not expand '~', so build an absolute path explicitly
var file = path.join(os.homedir(), 'Desktop', 'test.png');

worker.exec('screencapture ' + file, function(err) {
  if (err) return console.error(err);
  // process image
  var imageBuffer = fs.readFileSync(file);
  for (var i = 0; i < imageBuffer.length; i++) {
    // process bytes in the buffer, e.g. imageBuffer[i]
  }
});
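Since the question mentions an Ubuntu-based distro, the same pattern should carry over by swapping the capture command. A minimal sketch, assuming an X11 session and that ImageMagick's import tool is installed (the output path is arbitrary):
// screenshot-linux.js -- hypothetical Linux counterpart of the above
var worker = require('child_process');
var fs = require('fs');

var file = '/tmp/screen.png';

// 'import -window root' grabs the whole X11 screen into a file
worker.exec('import -window root ' + file, function(err) {
  if (err) return console.error(err);
  var imageBuffer = fs.readFileSync(file);
  // hand the buffer off to the LED color-averaging code here
  console.log('captured', imageBuffer.length, 'bytes');
});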
I have successfully ported a C++ script to wasm but now I am having trouble sending data to it from my web app.
Long story short, in the C++ "version" of the app I am using OpenCV to open two images that get passed in via the CLI, something along the lines of:
#include <opencv2/opencv.hpp>
using namespace cv;

int calc(int argc, char** argv) {
    Mat img1_temp, img2_temp;
    img1_temp = imread(argv[1], -1);
    img2_temp = imread(argv[2], -1);
    // further processing ...
    return 0;
}
This works from the CLI like: ./app one.jpg two.jpg.
When porting the app to WASM, I can no longer use the CLI to pass the images, and of course I cannot send the file(s) directly, so I ended up with the following code (this is now client-side JavaScript):
// url is a remote image
const urlToUint8Array = async url => {
const response = await fetch(url);
const buffer = await response.arrayBuffer();
const arr = new Uint8Array(buffer);
return arr;
};
const waModule = await WAModule();
document.getElementById('calculate').addEventListener('click', () => {
  // `one` and `two` are Uint8Arrays obtained earlier via urlToUint8Array()
  waModule.FS.writeFile('in1.ext', one, { encoding: 'binary' });
  waModule.FS.writeFile('in2.ext', two, { encoding: 'binary' });
  waModule._calc('in1.ext', 'in2.ext'); // this does not work
});
The question is: how can I send the image from JS to WASM? I have also tried modifying the C++ file to use imdecode instead of imread (and recompiling the WASM), but I'm not sure if that's the right path.
To summarise, I am looking for a way to send image data from my client-side JavaScript over to WASM so it can be processed by OpenCV.
Hello, I have mainly used Emscripten and I am by no means an expert in this, but here are my 2 cents in case they help you:
WASM has its own memory. It is a sandbox to which you transfer the data. This means that if you created some data on the JS side, there is going to be a copy of that data when it is transferred to the C++ side.
For security reasons, this memory is not shared. The one way to allocate memory and use it from both WASM and JS is to allocate the memory on the WASM side and access it from JS (Module.HEAP8 etc.).
Now, coming back to your question with this in mind:
You can in fact give WASM permission to read from a certain folder, or grant some permissions to the file system (https://emscripten.org/docs/porting/files/file_systems_overview.html#file-system-runtime-environment), and here is an example based on the Emscripten library: https://github.com/emscripten-core/emscripten/blob/master/tests/fs/test_nodefs_rw.c.
Alternatively, you can allocate the buffer's memory on the WASM side, write the data into it through a Uint8Array view over the heap (which both sides see), and then do your computation.
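To make that second option concrete, here is a minimal sketch of copying a Uint8Array into the WASM heap with Emscripten's Module._malloc and HEAPU8, then passing pointers to an exported function. Note that calcFromMemory is a hypothetical export taking (pointer, length) pairs, urlOne/urlTwo are placeholders, and the build is assumed to export _malloc, _free and the HEAPU8 view:
function copyToHeap(module, uint8Data) {
  var ptr = module._malloc(uint8Data.length);   // allocate inside the WASM heap
  module.HEAPU8.set(uint8Data, ptr);            // copy the JS bytes into that allocation
  return ptr;
}

async function run(waModule, urlOne, urlTwo) {
  const one = await urlToUint8Array(urlOne);
  const two = await urlToUint8Array(urlTwo);
  const p1 = copyToHeap(waModule, one);
  const p2 = copyToHeap(waModule, two);
  waModule._calcFromMemory(p1, one.length, p2, two.length); // hypothetical export
  waModule._free(p1);                           // release the heap memory afterwards
  waModule._free(p2);
}
On the C++ side such an export would be declared extern "C" and would decode the raw bytes with cv::imdecode instead of reading a path with imread, which is the direction the question was already heading.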
OK, so I'm trying to print from a webpage (the typical "print" button, but I don't want the print dialog to appear), so I decided to use my already existing Node.js backend to do the task (mainly because printing from the browser is nearly impossible without the print dialog).
I found the node-printer module (https://github.com/tojocky/node-printer), and it works great, but only with text. I tried to send RAW data, but it just prints the raw characters. What I actually need is to print a logo along with some turn information (this is for a customer care facility).
Also, the printer must be installed locally, so I can't use IPP.
Is there any way to print an image, or a combination of images and text, with Node.js? Can it be done through node-printer, or is there another way?
I ended up calling an exe to do the work for me. I use a child_process to call printhtml, which handles all the printing. My code ended up this way:
var exec = require('child_process').exec;
exec('printhtml.exe file=file.html', function(err, data) {
  if (err) return console.error(err);
  console.log(data.toString());
});
Actually, you can print an image using node-printer. This works for me:
var Printer = require('node-printer');
var fs = require('fs');

// Get available printers list
var listPrinter = Printer.list();

// Create a new Printer from available devices
var printer = new Printer('YOUR PRINTER HERE. GET IT FROM listPrinter');

// Print from a buffer, file path or text
var fileBuffer = fs.readFileSync('PATH TO YOUR IMAGE');
var jobFromBuffer = printer.printBuffer(fileBuffer);

// Listen for events from the job
jobFromBuffer.once('sent', function() {
  jobFromBuffer.on('completed', function() {
    console.log('Job ' + jobFromBuffer.identifier + ' has been printed');
    jobFromBuffer.removeAllListeners();
  });
});
I had success with the Node ipp package: https://www.npmjs.com/package/ipp.
The example code in the docs, which uses another Node module, PDFKit, to convert your HTML/file into a PDF, does not work. See my answer here for a working example: Cannot print with node js ipp module.
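For reference, a minimal sketch of sending a PDF buffer with the ipp package might look like the following; the printer URL and the ticket.pdf file are placeholders, and the exact attribute set should be checked against the package docs:
var ipp = require('ipp');
var fs = require('fs');

// CUPS usually exposes installed printers over IPP on port 631
var printer = ipp.Printer('http://localhost:631/printers/MyPrinter');

var msg = {
  'operation-attributes-tag': {
    'requesting-user-name': 'node',
    'job-name': 'logo-and-ticket',
    'document-format': 'application/pdf'
  },
  data: fs.readFileSync('ticket.pdf')  // any buffer containing the rendered document
};

printer.execute('Print-Job', msg, function(err, res) {
  if (err) return console.error(err);
  console.log(res);
});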
Using Node.js, what is the best way to stream a file from the filesystem, but reading it backwards, from bottom to top? I have a large file, and there doesn't seem to be much sense in reading it from the top if I only want the last 10 lines. Is this possible?
Right now I have this horrible code, where we do a GET request with a browser to view the server logs, and pass a query string parameter to tell the server how many lines at the end of the log file we want to read:
function get(req, res, next) {
  var numOfLinesToRespondWith = req.query.num_lines || 10;
  var fileStream = fs.createReadStream(stderr_path, {encoding: 'utf8'});
  var jsonData = []; // where jsonData gets populated
  var ret = [];
  fileStream.on('data', function processLineOfFileData(chunk) {
    jsonData.push(String(chunk));
  })
  .on('error', function handleFileError(err) {
    log.error(colors.bgRed(err));
    res.status(500).send({"error reading from smartconnect_stdout_log": err.toString()});
  })
  .on('end', function handleEndOfFileData() {
    for (var i = 0; i < numOfLinesToRespondWith; i++) {
      ret.push(jsonData.pop());
    }
    res.status(200).send({"smartconnect_stdout_log": ret});
  });
}
The code above reads the whole file and only then adds the requested number of lines to the response. This is bad; is there a better way to do this? Any recommendations will be gladly received.
(One problem with the code above is that it writes out the last lines of the log, but the lines are in reverse order...)
One potential way to do this is:
process.exec('tail -r ' + file_path).pipe(process.stdout);
But that syntax is incorrect, so my question would be: how do I pipe the result of that command into an array in Node.js and eventually into a JSON HTTP response?
I created a module called fs-backwards-stream that may meet your needs: https://www.npmjs.com/package/fs-backwards-stream
If you need the result parsed by lines rather than byte chunks, you should use the module fs-reverse: https://www.npmjs.com/package/fs-reverse
Both of these modules stream; alternatively, you could simply read the last n bytes of a file.
Here is an example using plain Node fs APIs and no dependencies:
https://gist.github.com/soldair/f250fb497ce592c3694a
Hope that helps.
One easy way, if you're on a Linux machine, would be to execute the tac command from Node via child_process.exec("tac yourfile.dat") and pipe it to your write stream; see the sketch below.
You could also use slice-file and then reverse the order yourself.
Also, look at what @alexmills said in the comments.
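A minimal sketch of the tac approach, collecting the reversed lines into an array for a JSON response (the log path and line count are placeholders):
var exec = require('child_process').exec;

function lastLinesReversed(filePath, numLines, callback) {
  // tac prints the file last line first; head keeps only the first numLines of that output
  exec('tac ' + filePath + ' | head -n ' + numLines, function(err, stdout) {
    if (err) return callback(err);
    callback(null, stdout.split('\n').filter(Boolean));
  });
}

lastLinesReversed('/var/log/my-app.log', 10, function(err, lines) {
  if (err) return console.error(err);
  console.log(JSON.stringify({ log_tail: lines }));
});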
This is the best answer I've got, for now.
The tail command on Mac/UNIX reads the end of a file and pipes it to stdout (correct me if this is loose language):
var cp = require('child_process');

module.exports = function get(req, res, next) {
  var numOfLinesToRespondWith = req.query.num_lines || 100;
  cp.exec('tail -n ' + numOfLinesToRespondWith + ' ' + stderr_path, function(err, stdout, stderr) {
    if (err) {
      log.error(colors.bgRed(err));
      res.status(500).send({"error reading from smartconnect_stderr_log": err.toString()});
    }
    else {
      var data = String(stdout).split('\n');
      res.status(200).send({"stderr_log": data});
    }
  });
}
This seems to work really well. It does, however, run in a separate process, which is expensive in its own way, but probably better than reading an entire 10,000-line log file.
I'm trying to implement a routine for Node.js that would allow one to open a file that is being appended to by some other process at this very time, and then return chunks of data immediately as they are appended to the file. It can be thought of as similar to the tail -f UNIX command, but acting immediately as chunks become available, instead of polling for changes over time. Alternatively, one can think of it as working with a file the way you do with a socket: expecting on('data') to trigger from time to time until the file is closed explicitly.
In C land, if I were to implement this, I would just open the file, feed its file descriptor to select() (or any alternative function with a similar purpose), and then just read chunks as the file descriptor is marked "readable". So, when there is nothing to be read, it won't be readable, and when something is appended to the file, it's readable again.
I somewhat expected this kind of behavior from the following code sample in JavaScript:
const fs = require('fs');

function readThatFile(filename) {
  const stream = fs.createReadStream(filename, {
    flags: 'r',
    encoding: 'utf8',
    autoClose: false // I thought this would prevent the file from closing on EOF too
  });
  stream.on('error', function(err) {
    // handle error
  });
  stream.on('open', function(fd) {
    // save fd, so I can close it later
  });
  stream.on('data', function(chunk) {
    // process chunk
    // fs.close() if I no longer need this file
  });
}
However, this code sample just bails out when EOF is encountered, so I can't wait for a new chunk to arrive. Of course, I could reimplement this using fs.open and fs.read, but that somewhat defeats Node.js's purpose. Alternatively, I could fs.watch() the file for changes, but it won't work over the network, and I don't like the idea of reopening the file all the time instead of just keeping it open.
I've tried to do this:
const fd = fs.openSync(filename, 'r'); // sync for readability's sake
const stream = net.Socket({ fd: fd, readable: true, writable: false });
But had no luck — net.Socket isn't happy and throws TypeError: Unsupported fd type: FILE.
So, any solutions?
UPD: this isn't possible, my answer explains why.
I haven't looked into the internals of the read streams for files, but it's possible that they don't support waiting for a file to have more data written to it. However, the fs package definitely supports this with its most basic functionality.
To explain how tailing would work, I've written a somewhat hacky tail function which will read an entire file and invoke a callback for every line (separated by \n only) and then wait for the file to have more lines written to it. Note that a more efficient way of doing this would be to have a fixed size line buffer and just shuffle bytes into it (with a special case for extremely long lines), rather than modifying JavaScript strings.
var fs = require('fs');
function tail(path, callback) {
var descriptor, bytes = 0, buffer = Buffer.alloc(256), line = '';
function parse(err, bytesRead, buffer) {
if (err) {
callback(err, null);
return;
}
// Keep track of the bytes we have consumed already.
bytes += bytesRead;
// Combine the buffered line with the new string data.
line += buffer.toString('utf-8', 0, bytesRead);
var i = 0, j;
while ((j = line.indexOf('\n', i)) != -1) {
// Callback with a single line at a time.
callback(null, line.substring(i, j));
// Skip the newline character.
i = j + 1;
}
// Only keep the unparsed string contents for next iteration.
line = line.substr(i);
// Keep reading in the next tick (avoids CPU hogging).
process.nextTick(read);
}
function read() {
var stat = fs.fstatSync(descriptor);
if (stat.size <= bytes) {
// We're currently at the end of the file. Check again in 500 ms.
setTimeout(read, 500);
return;
}
fs.read(descriptor, buffer, 0, buffer.length, bytes, parse);
}
fs.open(path, 'r', function (err, fd) {
if (err) {
callback(err, null);
} else {
descriptor = fd;
read();
}
});
return {close: function close(callback) {
fs.close(descriptor, callback);
}};
}
// This will tail the system log on a Mac.
var t = tail('/var/log/system.log', function (err, line) {
console.log(err, line);
});
// Unceremoniously close the file handle after one minute.
setTimeout(t.close, 60000);
All that said, you should also try to leverage the NPM community. With some searching, I found the tail-stream package which might do what you want, with streams.
Previous answers have mentioned tail-stream's approach which uses fs.watch, fs.read and fs.stat together to create the effect of streaming the contents of the file. You can see that code in action here.
Another, perhaps hackier, approach might be to just use tail by spawning a child process with it. This of course comes with the limitation that tail must exist on the target platform, but one of Node's strengths is doing asynchronous systems development via spawn, and even on Windows you can run Node in an alternate shell like msysgit or Cygwin to get access to the tail utility.
The code for this:
var spawn = require('child_process').spawn;

var child = spawn('tail', ['-f', 'my.log']);

child.stdout.on('data', function (data) {
  console.log('tail output: ' + data);
});

child.stderr.on('data', function (data) {
  console.log('err data: ' + data);
});
So, it seems people have been looking for an answer to this question for five years already, and there is still no on-topic answer.
In short: you can't. Not just in Node.js in particular; you can't at all.
Long answer: there are a few reasons for this.
First, POSIX standard clarifies select() behavior in this regard as follows:
File descriptors associated with regular files shall always select true for ready to read, ready to write, and error conditions.
So, select() can't help with detecting a write beyond the file end.
With poll() it's similar:
Regular files shall always poll TRUE for reading and writing.
I can't tell for sure about epoll(), since it's not standardized and you would have to read a quite lengthy implementation, but I would assume it's similar.
Since libuv, which is at the core of the Node.js implementation, uses read(), pread() and preadv() in its uv__fs_read(), none of which block when invoked at the end of a file, it will always return an empty buffer when EOF is encountered. So, no luck here either.
So, to summarize: if such functionality is desired, something must be wrong with your design, and you should revise it.
What you're describing is a FIFO file (First In, First Out), which, as you said, works like a socket.
There's a Node.js module that allows you to work with FIFO files.
I don't know what you want it for, but there are better ways to work with sockets in Node.js. Try socket.io instead.
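For completeness, a minimal sketch of the raw FIFO approach on Linux, with no extra module; the path is a placeholder and mkfifo is assumed to be available:
var fs = require('fs');
var execSync = require('child_process').execSync;

var fifoPath = '/tmp/my_fifo';   // placeholder; mkfifo throws if it already exists
execSync('mkfifo ' + fifoPath);  // create the named pipe

// Opening a FIFO for reading waits for a writer, then data arrives as it is written
var stream = fs.createReadStream(fifoPath, { encoding: 'utf8' });
stream.on('data', function (chunk) {
  console.log('got:', chunk);
});
// Note: the stream ends when the writing side closes the pipe.

// From another shell: echo "hello" > /tmp/my_fifo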
You could also have a look at this previous question:
Reading a file in real-time using Node.js
Update 1
I'm not familiar with any module that would do what you want with a regular file rather than a socket-type one. But, as you said, you could use tail -f to do the trick:
// filename must exist at the time of running the script
var filename = 'somefile.txt';
var spawn = require('child_process').spawn;
var tail = spawn('tail', ['-f', filename]);
tail.stdout.on('data', function (data) {
data = data.toString().replace(/^[\s]+/i,'').replace(/[\s]+$/i,'');
console.log(data);
});
Then, from the command line, try echo someline > somefile.txt and watch the console.
You might also like to have a look at this: https://github.com/layerssss/node-tailer
I'm trying to upload large files (at least 500MB, preferably up to a few GB) using the WebSocket API. The problem is that I can't figure out how to write "send this slice of the file, release the resources used, then repeat". I was hoping I could avoid using something like Flash/Silverlight for this.
Currently, I'm working with something along the lines of:
function FileSlicer(file) {
// randomly picked 1MB slices,
// I don't think this size is important for this experiment
this.sliceSize = 1024*1024;
this.slices = Math.ceil(file.size / this.sliceSize);
this.currentSlice = 0;
this.getNextSlice = function() {
var start = this.currentSlice * this.sliceSize;
var end = Math.min((this.currentSlice+1) * this.sliceSize, file.size);
++this.currentSlice;
return file.slice(start, end);
}
}
Then, I would upload using:
function Uploader(url, file) {
var fs = new FileSlicer(file);
var socket = new WebSocket(url);
socket.onopen = function() {
for(var i = 0; i < fs.slices; ++i) {
socket.send(fs.getNextSlice()); // see below
}
}
}
Basically, this returns immediately, bufferedAmount is unchanged (0), and it keeps iterating, adding all the slices to the queue before attempting to send them. There's no socket.afterSend to allow me to queue them properly, which is where I'm stuck.
Use web workers for large file processing instead of doing it in the main thread, and upload chunks of file data using file.slice().
This article helps you handle large files in workers; change the XHR send to a WebSocket send in the main thread.
// Messages from the worker arrive in the main thread, which relays them over the socket
worker.onmessage = function (event) {
  ws.send(event.data); // event.data is the blob/chunk posted by the worker
};
// Construct the file on the server side based on the blob or chunk information.
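A minimal sketch of that split, with the slicing done in a worker and the sending done in the main thread; the worker filename, WebSocket URL and element id are placeholders, and per-chunk acknowledgements (covered in other answers here) are left out:
// upload-worker.js -- slices the File and posts each chunk back to the main thread
self.onmessage = function (event) {
  var file = event.data;
  var chunkSize = 1024 * 1024;
  for (var offset = 0; offset < file.size; offset += chunkSize) {
    self.postMessage(file.slice(offset, Math.min(offset + chunkSize, file.size)));
  }
  self.postMessage('done');
};

// main.js -- forwards chunks from the worker to the WebSocket
var ws = new WebSocket('ws://localhost:8080/upload');
var worker = new Worker('upload-worker.js');
worker.onmessage = function (event) {
  ws.send(event.data); // Blob chunks, then the 'done' marker
};
document.getElementById('file-input').addEventListener('change', function (e) {
  worker.postMessage(e.target.files[0]); // File objects can be passed to workers
});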
I believe the send() method is asynchronous, which is why it returns immediately. To make it queue, you'd need the server to send a message back to the client after each slice is uploaded; the client can then decide whether it needs to send the next slice or an "upload complete" message back to the server.
This sort of thing would probably be easier using XMLHttpRequest(2); it has built-in callback support and is also more widely supported than the WebSocket API.
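As a rough illustration of the XHR route, here is a sketch that uploads the slices one at a time and only sends the next slice from the previous request's load callback; the /upload endpoint is a placeholder and FileSlicer is the helper from the question:
function uploadWithXhr(url, file) {
  var fs = new FileSlicer(file);
  var sent = 0;
  function sendNext() {
    if (sent >= fs.slices) return console.log('upload complete');
    var xhr = new XMLHttpRequest();
    xhr.open('POST', url + '?index=' + sent, true);
    xhr.onload = function () {
      sent++;
      sendNext(); // callback-driven: the next slice goes out only now
    };
    xhr.onerror = function () { console.error('slice ' + sent + ' failed'); };
    xhr.send(fs.getNextSlice());
  }
  sendNext();
}

// usage: uploadWithXhr('/upload', fileInput.files[0]);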
In order to serialize this operation, you need the server to send you a signal every time a slice is received and written (or an error occurs). This way you can send the next slice in response to the onmessage event, pretty much like this:
function Uploader(url, file) {
var fs = new FileSlicer(file);
var socket = new WebSocket(url);
socket.onopen = function() {
socket.send(fs.getNextSlice());
}
socket.onmessage = function(ms){
if(ms.data=="ok"){
fs.slices--;
if(fs.slices>0) socket.send(fs.getNextSlice());
}else{
// handle the error code here.
}
}
}
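For completeness, a minimal sketch of the matching server side using the ws package on Node.js, appending each slice to disk and acknowledging it with "ok"; the port, file name and the ws dependency itself are assumptions, not part of the original answer:
var fs = require('fs');
var WebSocket = require('ws');

var wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function (socket) {
  var target = 'upload-' + Date.now() + '.bin'; // placeholder file name
  socket.on('message', function (chunk) {
    // Append the received slice, then tell the client to send the next one
    fs.appendFile(target, chunk, function (err) {
      socket.send(err ? 'error' : 'ok');
    });
  });
});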
You could use https://github.com/binaryjs/binaryjs or https://github.com/liamks/Delivery.js if you can run node.js on the server.
EDIT: The web world (browsers, firewalls, proxies) has changed a lot since this answer was written. Right now, sending files using WebSockets can be done efficiently, especially on local area networks.
Websockets are very efficient for bidirectional communication, especially when you're interested in pushing information (preferably small) from the server. They act as bidirectional sockets (hence their name).
WebSockets don't look like the right technology to use in this situation, especially given that using them adds incompatibilities with some proxies, browsers (IE), or even firewalls.
On the other hand, uploading a file is simply sending a POST request to a server with the file in the body. Browsers are very good at that, and the overhead for a big file is really close to nothing. Don't use WebSockets for that task.
I think this socket.io project has a lot of potential:
https://github.com/sffc/socketio-file-upload
It supports chunked uploads and progress tracking, and seems fairly easy to use.