Pass canvas from client (browser) to server (nodejs)

I have a canvas in my browser that displays a feed from my webcam. What I want to do is send the canvas data to my Node.js server, manipulate it, and send it back.
I can do it by sending the canvas data via socket.io like so:
socket.emit('canvas_data', canvas.toDataURL());
And then rebuilding it on the nodejs server:
const { createCanvas, Image } = require('canvas'); // node-canvas on the server

let img = new Image();
img.src = data; // this is the canvas_data from the first step
const canvas = createCanvas(640, 480);
const ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0, 640, 480);
However, this seems really wasteful: I'm taking an already rendered canvas, converting it to base64, sending it, and then rebuilding it on the other side.
The whole point of this is to use tfjs on the server side:
let converted = tfjs.browser.fromPixels(canvas);
If I just send the canvas from the first step:
socket.emit('canvas_data', canvas);
And then run tfjs:
let converted = tfjs.browser.fromPixels(data);
I get the following error:
Error: pixels passed to tf.browser.fromPixels() must be either an HTMLVideoElement, HTMLImageElement, HTMLCanvasElement, ImageData in browser, or OffscreenCanvas, ImageData in webworker or {data: Uint32Array, width: number, height: number}, but was object
Is there a more efficient way to accomplish this?

Using toDataURL is always going to be slow, as the browser needs to encode all the data before sending it.
Your second approach is better; on the node side you just need to create the tensor directly from the Buffer you receive on the socket (that is the fastest way), no need to use higher-level functions such as fromPixels. See the minimal sketch after the links below.
take a look at https://github.com/vladmandic/anime/blob/main/sockets/anime.ts for client-side code and https://github.com/vladmandic/anime/blob/main/sockets/server.ts for server-side code
Note you may also need to account for channel depth (does your model work with RGBA or RGB?) and/or any model-specific pre-processing normalization; that's handled in https://github.com/vladmandic/anime/blob/main/sockets/inference.ts
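A minimal sketch of the Buffer-to-tensor idea, assuming the client sends the raw RGBA bytes from ctx.getImageData(0, 0, 640, 480).data as a binary socket.io payload (the event name and the fixed 640x480 size are assumptions for this example):
const tf = require('@tensorflow/tfjs-node');

socket.on('canvas_data', (buf) => {
  // buf arrives as a Node Buffer of RGBA bytes (640 * 480 * 4)
  const rgba = tf.tensor3d(new Uint8Array(buf), [480, 640, 4], 'int32');
  // drop the alpha channel if the model expects RGB input
  const rgb = rgba.slice([0, 0, 0], [480, 640, 3]);
  // ...run inference on rgb here, then clean up
  rgba.dispose();
});
No base64 encoding or canvas reconstruction is involved; the bytes go straight into a tensor.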

Related

Javascript - imageCapture.takePhoto() function to take pictures

I am building a web application for experimental purposes. The aim is to capture ~15-20 frames per second from the webcam and send them to the server. Once a frame is captured, it is converted to base64 and added to an array. After a certain time, the array is sent to the server. Currently I am using imageCapture.takePhoto() to achieve this. I get a blob as a result, which is then converted to base64. The application runs for ~5 seconds, during which frames are captured and sent to the server.
What are more efficient ways to capture frames from the webcam to achieve this?
You can capture still images directly from the <video> element used to preview the stream from .getUserMedia(). You set up that preview, of course, by doing this sort of thing (pseudocode).
const stream = await navigator.mediaDevices.getUserMedia(options)
const videoElement = document.querySelector('video#whateverId')
videoElement.srcObject = stream
videoElement.play()
Next, make yourself a canvas object and a context for it. It doesn't have to be visible.
const scratchCanvas = document.createElement('canvas')
// read the video's dimensions after its loadedmetadata event has fired
scratchCanvas.width = videoElement.videoWidth
scratchCanvas.height = videoElement.videoHeight
const scratchContext = scratchCanvas.getContext('2d')
Now you can make yourself a function like this.
function stillCapture(video, canvas, context) {
  context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight)
  canvas.toBlob(
    function (jpegBlob) {
      /* do something useful with the Blob containing jpeg */
    }, 'image/jpeg')
}
A Blob containing a jpeg version of a still capture shows up in the callback. Do with it whatever you need to do.
Then, invoke that function every so often. For example, to get approximately 15fps, do this.
const howOften = 1000.0 / 15.0
setInterval (stillCapture, howOften, videoElement, scratchCanvas, scratchContext)
All this saves you the extra work of using .takePhoto().
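Since the original goal is to ship those frames to a server: a browser WebSocket accepts a Blob directly, so the callback can forward each jpeg as binary instead of converting it to base64 first. A variant of the callback, with the socket URL made up for the example:
const ws = new WebSocket('wss://example.com/frames')

function stillCaptureAndSend(video, canvas, context) {
  context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight)
  canvas.toBlob(function (jpegBlob) {
    // forward the binary jpeg frame straight over the socket
    if (ws.readyState === WebSocket.OPEN) ws.send(jpegBlob)
  }, 'image/jpeg')
}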

Alternative way of sending webgl context as binary data WITHOUT using gl.readPixels

I created two HTML clients that communicate with each other through a websocket server. One client draws a 3D model on its canvas using Three.js and sends the canvas's WebGL contents as binary data to the other client through the websocket server.
The problem is that the readPixels() method is too slow. I need to make this stream as fluid as possible.
function animate() {
  requestAnimationFrame( animate );
  render();
  var ctx = renderer.getContext();
  var byteArray = new Uint8Array(1280 * 720 * 4);
  ctx.readPixels(0, 0, 1280, 720, ctx.RGBA, ctx.UNSIGNED_BYTE, byteArray);
  socket.send(byteArray.buffer);
}
renderer is a THREE.WebGLRenderer.
Any suggestions?
EDIT:
Here is the code that I used as basis for the 3D drawing link
You could try using canvas.toDataURL("image/jpeg") (or "image/png") to grab the image data; that's a reliable way of getting the image out of a WebGL context. I'm not sure how the performance compares to readPixels, as I've only ever used it for screenshots.
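A sketch of that suggestion applied to the renderer from the question. One caveat: calling toDataURL on a WebGL canvas generally requires either creating the renderer with preserveDrawingBuffer: true or calling it synchronously right after render(), otherwise the drawing buffer may already be cleared:
var renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });

function animate() {
  requestAnimationFrame( animate );
  render();
  // a jpeg at quality 0.7 is usually far smaller than 1280 * 720 * 4 raw bytes
  var dataUrl = renderer.domElement.toDataURL('image/jpeg', 0.7);
  socket.send(dataUrl);
}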

Cropping a Base64 PNG in-memory using PURE JavaScript on the client side w/o using canvas

Context: JavaScript, as part of an SDK (can be on node.js or the browser).
Start point: I have a base64 string that's actually a base64 encoded PNG image (I got it from selenium webdriver - takeScreenshot).
Question: How do I crop it?
The techniques involving the canvas seem irrelevant (or am I wrong?). My code runs as part of tests - probably on node.js. The canvas approach doesn't seem to fit here and might also cause additional noise in the image.
All the libraries I found either deal with streams (maybe I should convert the string to stream somehow?) or deal directly with the UI by adding a control (irrelevant for me).
Isn't there something like (promises and callbacks omitted for brevity):
var base64png = driver.takeScreenshot();
var png = new PNG(base64png);
return png.crop(50, 100, 20, 80).toBase64();
?
Thanks!
Considering you wish to start with a base64 string and end with a cropped base64 string (image), here is some code:
var stream = require('stream');
var gm = require('gm');

var base64png = driver.takeScreenshot();

// turn the base64 string into a readable stream of PNG bytes
var input = new stream.PassThrough();
input.end(Buffer.from(base64png, 'base64'));

gm(input, 'my_image.png').crop(WIDTH, HEIGHT, X, Y).stream(function (err, stdout, stderr) {
  var chunks = [];
  stdout.on('data', function (chunk) {
    chunks.push(chunk);
  });
  stdout.on('end', function () {
    // DO something with your new base64 cropped img
    var croppedBase64 = Buffer.concat(chunks).toString('base64');
  });
});
Be aware that it might still need some polishing or debugging (I am by no means a node.js guru), but the idea is this:
Convert string into stream
Read stream into GM module
Manipulate the image
Save it into a stream
Convert the stream back into a base64 string
Adding my previous comment as an answer:
Anyone looking to do this will need to decode the image to get the raw image data, using a library such as node-pngjs, and manipulate the data themselves (perhaps there is a library for such operations that doesn't rely on the canvas).
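For example, a sketch of that approach with the pngjs package (PNG.sync.read, PNG.bitblt, and PNG.sync.write are part of its documented API; the helper function name is made up):
const { PNG } = require('pngjs');

function cropBase64Png(base64png, x, y, width, height) {
  const src = PNG.sync.read(Buffer.from(base64png, 'base64'));
  const dst = new PNG({ width: width, height: height });
  // copy the requested rectangle from the source into the destination
  PNG.bitblt(src, dst, x, y, width, height, 0, 0);
  return PNG.sync.write(dst).toString('base64');
}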

Changing the size of an array in Javascript or node.js efficiently

In my node.js code, there is a buffer into which I store the contents of each received image, every time an image arrives via the TCP connection between node.js as the TCP client and another TCP server:
var byBMP = Buffer.alloc(imageSize);
However, the image size, imageSize, differs every time, which makes the size of byBMP change accordingly. That means something like this happens constantly:
var byBMP = Buffer.alloc(10000);
// ... wait to receive another image
byBMP = Buffer.alloc(30000);
// ... wait to receive another image
byBMP = Buffer.alloc(10000);
// ... etc
Question:
Is there any more efficient way of resizing the buffer byBMP? I looked into How to empty an array in JavaScript?, which gave me some ideas about the efficient way of emptying an array, but for resizing I'd appreciate your comments.
The efficient way would be to use streams rather than a buffer. The buffer keeps the entire content of the image in memory, which is not very scalable.
Instead, simply pipe the download stream directly to a file output:
imgDownloadStream.pipe( fs.createWriteStream('tmp-img') );
If you're keeping the image in memory to perform a transformation, then apply the transformation to the stream's chunks as they pass through, as in the sketch below.
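A sketch of that chunk-by-chunk idea, reusing the imgDownloadStream from above (the byte-inversion step is just a placeholder transformation):
const fs = require('fs');
const { Transform } = require('stream');

const invert = new Transform({
  transform(chunk, encoding, callback) {
    // process each chunk as it passes; nothing holds the whole image in memory
    for (let i = 0; i < chunk.length; i++) chunk[i] = 255 - chunk[i];
    callback(null, chunk);
  }
});

imgDownloadStream.pipe(invert).pipe(fs.createWriteStream('tmp-img'));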

WebSocket JavaScript: Sending complex objects

I am using WebSockets as the connection between a Node.js server and my client JS code.
I want to send a number of different media types (Text, Audio, Video, Images) through the socket.
This is not difficult of course. message.data instanceof Blob separates text from media files. The problem is that I want to include several additional attributes with those media files.
For example:
Dimension of an image
Name of the image
. . .
Now I could send one message containing this information in text form and follow it up with another message containing the blob.
I would very much prefer, though, to be able to build an object:
imageObject = {
  xDimension : '50px',
  yDimension : '50px',
  name : 'PinkFlowers.jpg',
  imageData : fs.readFileSync("./resources/images/PinkFlowers.jpg")
}
And send this object as it is via socket.send(imageObject).
So far so good; this actually works, but how do I collect the object and make its fields accessible in the client again?
I have been tampering with it for a while now and I would be grateful for any ideas.
Best regards,
Sticks
Well I did get it to work using base64.
On the server side I am running this piece of code:
var imageObject = newMessageObject('img', 'flower.png');
// fs.readFileSync returns a Buffer; encode it as base64 so the JSON payload stays valid UTF-8
imageObject.image = fs.readFileSync('./resources/images/flower.png').toString('base64');
imageObject.datatype = 'png';
connection.send(JSON.stringify(imageObject));
The base64 conversion is necessary to ensure valid UTF-8. Without it, Chrome (I don't know about Firefox and others) throws an error that an invalid UTF-8 encoding was detected and shuts down execution after JSON.parse(message).
Note: newMessageObject is just an object construction method with the two fields, type and name, that I use.
On the client side it's really straightforward:
websocketConnection.onmessage = function(evt) {
  var message = JSON.parse(evt.data);
  ... // Some app specific stuff
  var image = new Image();
  image.onload = function() {
    canvas.getContext("2d").drawImage(image, 0, 0);
  }
  image.src = "data:image/" + message.datatype + ";base64," + message.image;
}
This draws the image on the canvas.
I am not convinced that this is practical for audio or video files, but for images it does the job.
I will probably fall back to simply sending an obfuscated URL instead of audio/video data and reading the files directly from the server. I don't like the security implications, though. A sketch of the header-plus-blob alternative mentioned in the question follows.
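For completeness, the two-message alternative the question mentions (a JSON header followed by the blob) could look roughly like this; the pairing logic assumes messages arrive in order, which WebSocket guarantees per connection, and the field names are made up:
// server: announce the file, then send the raw bytes as a binary frame
connection.send(JSON.stringify({ type: 'img', name: 'flower.png', datatype: 'png' }));
connection.send(fs.readFileSync('./resources/images/flower.png'));

// client: pair each binary frame with the header that preceded it
var pendingHeader = null;
websocketConnection.onmessage = function(evt) {
  if (typeof evt.data === 'string') {
    pendingHeader = JSON.parse(evt.data);
  } else {
    // evt.data is a Blob described by pendingHeader
    var url = URL.createObjectURL(evt.data);
    // ...load url into an Image, then URL.revokeObjectURL(url) when done
  }
};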
