How to pass camera image frames to a function in wasm (C++)? - javascript

I'm trying to build a C++ function and compile it to Wasm using Emscripten.
The function receives an image, does some processing on it, and returns a result.
My first POC was successful: the user uploads an image using a file input, and I pass the image data using the FileReader API:
const fileReader = new FileReader();
fileReader.onload = (event) => {
    const uint8Arr = new Uint8Array(event.target.result);
    passToWasm(uint8Arr);
};
fileReader.readAsArrayBuffer(file); // I got this `file` from the `change` event of the file input.
But when I implemented the camera feed and started grabbing frames to pass them to Wasm, I started getting exceptions on the C++ side. Here's the JS implementation:
const imageData = canvasCtx.getImageData(0, 0, videoWidth, videoHeight);
const data = imageData.data.buffer;
const uint8Arr = new Uint8Array(data);
passToWasm(uint8Arr);
This one throws an exception on the C++ side.
Now, the passToWasm implementation is:
function passToWasm(uint8ArrData) {
    // copy the data into the Emscripten heap
    const numBytes = uint8ArrData.length * uint8ArrData.BYTES_PER_ELEMENT;
    const dataPtr = Module._malloc(numBytes);
    const dataOnHeap = new Uint8Array(Module.HEAPU8.buffer, dataPtr, numBytes);
    dataOnHeap.set(uint8ArrData);
    // call the Wasm function, then free the heap allocation
    const res = Module._myWasmFunc(dataOnHeap.byteOffset, uint8ArrData.length);
    Module._free(dataPtr);
    return res;
}
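As an aside, Emscripten's Module.cwrap can wrap the export once instead of calling the underscored name directly. A short sketch, assuming the function is exported under the name myWasmFunc:
// wrap the export once: returns a plain JS function
// ('number' covers both the pointer and the integer length)
const myWasmFunc = Module.cwrap('myWasmFunc', 'number', ['number', 'number']);
// inside passToWasm, the call then becomes:
// const res = myWasmFunc(dataPtr, uint8ArrData.length);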
While the C++ implementation will be something like this:
#include <emscripten.h>
#include <opencv2/opencv.hpp>

// extern "C" keeps the exported name unmangled so it is callable from JS
extern "C" void EMSCRIPTEN_KEEPALIVE checkImageQuality(uint8_t* buffer, size_t size) {
    // I'm using OpenCV in C++ to process the image data,
    // so I wrap the raw bytes of the image
    cv::Mat raw_data = cv::Mat(1, size, CV_8UC1, buffer);
    // Then I decode it
    cv::Mat img_data = cv::imdecode(raw_data, cv::IMREAD_COLOR | cv::IMREAD_IGNORE_ORIENTATION);
    // In one of the following steps I'm using the cvtColor function, which causes the issue for some reason
}
The exception I'm getting because of the camera implementation says:
OpenCV(4.1.0-dev) ../modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
What is the difference between getting the data from a file input and getting it from a canvas, given that both of them are converted to a Uint8Array?

I found a solution for this (it may suit my case only).
When you get image data from a canvas, you get it as 4 channels (RGBA, as in PNG), and your image-processing code has to deal with that.
My code assumed the image had 3 channels (RGB, as in JPEG), so I had to convert it using this code:
canvasBuffer.toBlob(function (blob) {
    // read the JPEG-encoded blob into a Uint8Array before passing it to Wasm
    blob.arrayBuffer().then((buf) => passToWASM(new Uint8Array(buf)));
}, 'image/jpeg');
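An alternative that avoids re-encoding every frame as JPEG is to pass the raw RGBA pixels along with the frame's width and height, and let the C++ side build a 4-channel matrix directly (for example a CV_8UC4 cv::Mat followed by cvtColor with COLOR_RGBA2BGR) instead of calling imdecode. A minimal sketch of the JS side, assuming a hypothetical export _myWasmFuncRGBA(ptr, width, height):
function passFrameToWasm(canvasCtx, videoWidth, videoHeight) {
    // raw RGBA pixels straight from the canvas: 4 bytes per pixel
    const imageData = canvasCtx.getImageData(0, 0, videoWidth, videoHeight);
    const pixels = imageData.data; // Uint8ClampedArray, length = width * height * 4
    const dataPtr = Module._malloc(pixels.length);
    Module.HEAPU8.set(pixels, dataPtr);
    // hypothetical export that takes raw pixels plus dimensions
    const res = Module._myWasmFuncRGBA(dataPtr, videoWidth, videoHeight);
    Module._free(dataPtr);
    return res;
}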

Related

How can raw binary data be retrieved from an array buffer?

In trying to send document scans as binary data over a web socket, the first block of code below works fairly well. The first byte contains information about what exactly to do with the blob, so the image data starts at an offset of one byte; and because blob.slice() appears to return a copy of the original blob rather than simply reading it, this is likely not the most efficient way to process the blob, since it copies the entire blob less one byte.
socket.onmessage = function(evt) {
    if (evt.data instanceof Blob) {
        evt.data.slice(0, 1).arrayBuffer()
            .then((b) => {
                let v = new DataView(b),
                    r = v.getUint8(0);
                // Do something based on r and then display the scan.
                let objectURL = URL.createObjectURL(evt.data.slice(1));
                imgscan.onload = () => {
                    URL.revokeObjectURL(objectURL);
                };
                imgscan.src = objectURL;
            });
    }
};
If the websocket's binaryType is changed to arraybuffer, it's a bit easier to read the first byte, and apparently no copy is made; but I don't understand how to get the image data out of the buffer to display it. That is, I don't see which method to apply to the DataView to get the raw binary data of the image. Would you please explain or point me to the correct documentation? Thank you.
This SO question seems similar but was not helpful to me.
socket.onmessage = function(evt) {
    if (evt.data instanceof ArrayBuffer) {
        let v = new DataView(evt.data),
            r = v.getUint8(0);
        // How to get the raw binary image data out of
        // the array buffer starting at the first byte?
    }
};
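For what it's worth, a DataView is only needed for the control byte; a typed-array view over the same buffer at a byte offset of 1 references the remaining bytes without copying, and that view can then be wrapped in a Blob for display. A minimal sketch, reusing the imgscan element from above:
socket.onmessage = function (evt) {
    if (evt.data instanceof ArrayBuffer) {
        const r = new DataView(evt.data).getUint8(0);
        // view over the same buffer, starting after the control byte
        // (the Blob constructor copies the viewed bytes when it is built)
        const imageBytes = new Uint8Array(evt.data, 1);
        const objectURL = URL.createObjectURL(new Blob([imageBytes]));
        imgscan.onload = () => URL.revokeObjectURL(objectURL);
        imgscan.src = objectURL;
    }
};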

Javascript + Node - Possible to edit audio from mediastream on the server and send it back to the client?

So I'm trying to manipulate the audio that comes from the client on the server and then return the newly manipulated audio to the client. This of course runs into the problem that Node does not support the Web Audio API, so I'm using the Node version of web-audio-api along with a WebRTC library.
As I'm new to the WebRTC world, I've been building on these examples, which use the WebRTC library. Using the audio-video-loopback example as the starting point, I've used some of the library's nonstandard APIs to create an audio sink that gives me direct access to the samples from the client. For now I just want to change the volume, so I'm changing the sample values and pushing them into a new track, which is how the doc (scroll down to Programmatic Audio) says to do this. At the end I just want to return the newly created track, which is done using the .replaceTrack method (which I believe triggers a renegotiation).
Here's what I got so far for the server code (client is the same as the original example found in the link above):
const { RTCAudioSink, RTCAudioSource } = require("wrtc").nonstandard;

function beforeOffer(peerConnection) {
    const audioTransceiver = peerConnection.addTransceiver("audio");
    const videoTransceiver = peerConnection.addTransceiver("video");
    let { track } = audioTransceiver.receiver;
    const source = new RTCAudioSource();
    const newTrack = source.createTrack();
    const sink = new RTCAudioSink(track);
    const sampleRate = 48000;
    const samples = new Int16Array(sampleRate / 100); // 10 ms of 16-bit mono audio
    const dataObj = {
        samples,
        sampleRate,
    };
    const interval = setInterval(() => {
        // Update audioData in some way before sending.
        source.onData(dataObj);
    }, 10); // push one 10 ms frame per tick
    sink.ondata = (data) => {
        // Do something with the received audio samples.
        const newArr = data.samples.map((el) => el * 0.5);
        dataObj.samples = newArr; // was dataObj[samples], which set the wrong key
    };
    return Promise.all([
        audioTransceiver.sender.replaceTrack(newTrack),
        videoTransceiver.sender.replaceTrack(videoTransceiver.receiver.track),
    ]);
}
Not surprisingly, this doesn't work: I just get silence back, even though dataObj contains the correctly manipulated samples, which are then passed to newTrack when source.onData is called.
Is what I'm trying to do even possible server side? Any suggestions are welcome; like I said, I'm very green with WebRTC.
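One detail worth checking with this approach: RTCAudioSource.onData expects a steady stream of complete 10 ms frames (480 samples at 48 kHz mono), and the loop above re-sends whatever is in dataObj on every tick, whether or not the sink has delivered a fresh frame. A sketch of one possible rearrangement that queues each processed frame from the sink and drains the queue on a 10 ms timer; this is a guess based on node-webrtc's documented nonstandard API, not a verified fix:
const { RTCAudioSink, RTCAudioSource } = require("wrtc").nonstandard;

// feed processed frames to a new track on a steady 10 ms cadence
function pipeWithGain(receiverTrack, gain) {
    const source = new RTCAudioSource();
    const outTrack = source.createTrack();
    const sink = new RTCAudioSink(receiverTrack);
    const queue = []; // processed 10 ms frames waiting to be sent

    sink.ondata = (data) => {
        // Int16Array.prototype.map keeps the element type (16-bit ints)
        queue.push({ ...data, samples: data.samples.map((el) => el * gain) });
    };

    // drain one frame every 10 ms so the source sees a steady stream
    const interval = setInterval(() => {
        const frame = queue.shift();
        if (frame) source.onData(frame);
    }, 10);

    const stop = () => { clearInterval(interval); sink.stop(); };
    return { outTrack, stop };
}
beforeOffer would then call audioTransceiver.sender.replaceTrack(outTrack) with the returned track.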

Is there a good way to ensure an object gets created before moving through code that uses it?

I have a bit of code that uses fetch() to grab .tiff images and convert them to an html5 canvas to be displayed in a browser, using tiff.js (https://github.com/seikichi/tiff.js/tree/master). It almost works great; however, I'm noticing that sometimes the images don't make it to the browser.
Some images will appear, but occasionally others will not, with the following error message in the browser:
ReferenceError: Tiff is not defined
I need to find out if there is a good way to ensure that these objects get created successfully, and I would appreciate any insight into what causes this behavior.
class tiffImage {
    constructor() {
        this.tiffURL = 'url-to-image';
        this.height;
        this.width;
        this.canvas;
    }

    async loadImage() {
        // retrieve tiff and convert it to an html5 canvas
        let response = await fetch(this.tiffURL);
        let buffer = await response.arrayBuffer();
        let tiff = new Tiff({buffer: buffer}); // error points to this line
        this.canvas = tiff.toCanvas();
        /* Parse some data from image and do DOM stuff */
    }
}
// retrieve and display boards
let someTiff1 = new tiffImage();
let someTiff2 = new tiffImage();
let someTiff3 = new tiffImage();
let someTiff4 = new tiffImage();
let someTiff5 = new tiffImage();
someTiff1.loadImage();
someTiff2.loadImage();
someTiff3.loadImage();
someTiff4.loadImage();
someTiff5.loadImage();
Sometimes all of the images load properly, and sometimes not. If the page is refreshed enough times, some images are guaranteed to fail to load. Note that in my actual project I'm instantiating and calling loadImage() on 13 objects.
Read up on Promises. A promise allows you to wait for asynchronous actions to complete before progressing.
loadImage() {
    return fetch(this.tiffURL)
        .then(response => response.arrayBuffer())
        .then(buffer => {
            // retrieve tiff and convert it to an html5 canvas
            let tiff = new Tiff({buffer: buffer});
            this.canvas = tiff.toCanvas();
            /* Parse some data from image and do DOM stuff */
            return this;
        });
}
Promise.all([someTiff1.loadImage(), someTiff2.loadImage()])
    .then(results => {
        console.log("My results", results);
    });
You can write the same thing with async/await, which is just syntax over Promises; either way, the key is to wait for the returned promise (here via Promise.all) before using the loaded objects.
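For illustration, an async/await equivalent of the loadImage above; this is a sketch, not the answer's original code, and it still requires the tiff.js script to have loaded before Tiff is referenced:
async loadImage() {
    const response = await fetch(this.tiffURL);
    const buffer = await response.arrayBuffer();
    const tiff = new Tiff({ buffer }); // still throws if tiff.js hasn't loaded yet
    this.canvas = tiff.toCanvas();
    return this;
}
It is used the same way: await Promise.all([someTiff1.loadImage(), someTiff2.loadImage()]) before touching the canvases.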

Difference between Javascript & C# Image File array

In C#, I read an image file into a base64 encoded string using:
var img = Image.FromFile(@"C:\xxx.jpg");
MemoryStream ms = new MemoryStream();
img.Save(ms, System.Drawing.Imaging.ImageFormat.Gif);
var arr = ms.ToArray();
var b64 = Convert.ToBase64String(arr);
In javascript, I do this:
var f = document.getElementById('file').files[0], // C:\xxx.jpg
    r = new FileReader();
r.onloadend = function (e) {
    console.log(btoa(e.target.result));
};
r.readAsBinaryString(f);
I'm getting different results.
Ultimately, I'm trying to read it in Javascript, POST that to an API, save in a database, and then retrieve & use later. If I use the C# code above, I can store the file and retrieve/display it fine. If I try the Javascript option, I don't get a valid image. So I'm trying to work out why they're different / where the problem is. Any pointers?
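One difference worth noting between the two snippets: the C# code re-encodes the JPEG as a GIF (ImageFormat.Gif) before base64-encoding, so its output can never match the original file's bytes, while the JS side encodes the file's bytes as-is. Separately, readAsBinaryString is a legacy API; reading into an ArrayBuffer is the more reliable way to base64-encode a file's raw bytes in JS. A sketch of that approach (the chunked loop avoids call-stack limits when applying String.fromCharCode to large files):
var f = document.getElementById('file').files[0];
var r = new FileReader();
r.onloadend = function (e) {
    var bytes = new Uint8Array(e.target.result);
    // build a binary string in chunks, then base64-encode it
    var binary = '';
    var chunk = 0x8000;
    for (var i = 0; i < bytes.length; i += chunk) {
        binary += String.fromCharCode.apply(null, bytes.subarray(i, i + chunk));
    }
    console.log(btoa(binary));
};
r.readAsArrayBuffer(f);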

Cropping a Base64 PNG in-memory using PURE JavaScript on the client side w/o using canvas

Context: JavaScript, as part of an SDK (can be on node.js or in the browser).
Start point: I have a base64 string that's actually a base64-encoded PNG image (I got it from selenium webdriver's takeScreenshot).
Question: How do I crop it?
The techniques involving the canvas seem irrelevant (or am I wrong?). My code runs as part of tests, probably on node.js. The canvas approach doesn't seem to fit here and might also add noise to the image.
All the libraries I found either deal with streams (maybe I should somehow convert the string to a stream?) or deal directly with the UI by adding a control (irrelevant for me).
Isn't there something like (promises and callbacks omitted for brevity):
var base64png = driver.takeScreenshot();
var png = new PNG(base64png);
return png.crop(50, 100, 20, 80).toBase64();
?
Thanks!
Considering you wish to start with a base64 string and end with a cropped base64 string (image), here is the code:
var Stream = require('stream');
var gm = require('gm');

var base64png = driver.takeScreenshot();

var stream = new Stream();
stream.on('data', function (data) {
    console.log(data);
});

gm(stream, 'my_image.png').crop(WIDTH, HEIGHT, X, Y).stream(function (err, stdout, stderr) {
    var data = '';
    stdout.on('readable', function () {
        var chunk;
        while ((chunk = stdout.read()) !== null) {
            data += chunk.toString('base64');
        }
    });
    stdout.on('end', function () {
        // DO something with your new base64 cropped img in `data`
    });
});

// decode the base64 string into raw PNG bytes before feeding the stream
stream.emit('data', Buffer.from(base64png, 'base64'));
Be aware that it is unfinished and might need some polishing or debugging (I am by no means a node.js guru), but the idea is as follows:
Convert the string into a stream
Read the stream into the GM module
Manipulate the image
Save it into a stream
Convert the stream back into a base64 string
Adding my previous comment as an answer:
Anyone looking to do this will need to decode the image to get the raw image data using a library such as node-pngjs and manipulate the data yourself (perhaps there is a library for such operations that doesn't rely on the canvas).
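As a rough illustration of that comment, here is a sketch using the pngjs package (assuming it fits your environment): PNG.sync.read decodes a buffer into raw pixels, PNG.bitblt copies a rectangle between images, and PNG.sync.write re-encodes:
const { PNG } = require('pngjs');

function cropBase64Png(base64png, x, y, width, height) {
    // decode the base64 PNG into raw RGBA pixel data
    const src = PNG.sync.read(Buffer.from(base64png, 'base64'));
    // copy the requested rectangle into a fresh image
    const dst = new PNG({ width, height });
    PNG.bitblt(src, dst, x, y, width, height, 0, 0);
    // re-encode and return as base64
    return PNG.sync.write(dst).toString('base64');
}

// e.g. cropBase64Png(driver.takeScreenshot(), 50, 100, 20, 80)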
