Is it possible to override the camera in JS? - javascript

I'm working with a big company, and part of their flow is scanning a QR code to register some data. The problem is, in order to test this, I need to generate a QR code from the data, photograph it on my phone, and scan it in through my laptop's camera.
There are NPM modules for creating QR codes from data so that's okay, but I was wondering if it's somehow possible to override getUserMedia to return a stream of bytes that is just a QR code? I was thinking of maybe encapsulating all this into one nice chrome extension, but from looking around online, I'm not sure how I'd 'override' the camera input and replace it with a stream of QR code bytes instead.
Thanks

The HTMLCanvasElement has a captureStream() method that does produce a MediaStream with a VideoTrack similar to what getUserMedia({video: true}) produces.
This is a convenient way to test various things with a video stream, without needing a human in the loop:
const width = 1280;
const height = 720;
const canvas = Object.assign(document.createElement("canvas"), { width, height });
const ctx = canvas.getContext("2d");
// you'd do the drawing you wish, here I prepare some noise
const img = new ImageData(width, height);
const data = new Uint32Array(img.data.buffer);
const anim = () => {
  for (let i = 0; i < data.length; i++) {
    data[i] = 0xFF000000 + Math.random() * 0xFFFFFF;
  }
  ctx.putImageData(img, 0, 0);
  requestAnimationFrame(anim);
};
requestAnimationFrame(anim);
// extract the MediaStream from the canvas
const stream = canvas.captureStream();
// Use it in your test (here I'll just display it in the <video>)
document.querySelector("video").srcObject = stream;
/* CSS for the demo */
video { height: 100vh }
<!-- HTML for the demo -->
<video controls autoplay></video>
But in your case, you need to separate the concerns.
The QR code detection tests should be done on their own, and these can certainly use still images instead of a MediaStream.
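That said, if you do want an end-to-end test that shims the camera, you can monkey-patch navigator.mediaDevices.getUserMedia to hand back a canvas stream instead. A minimal sketch, assuming a hypothetical drawQrCode(canvas) helper (e.g. built on one of those QR code NPM modules) that renders your test data onto the canvas:
// Sketch only: replace the real camera with a canvas-backed stream.
// `drawQrCode` is a hypothetical helper, not a library function.
const realGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);
navigator.mediaDevices.getUserMedia = async (constraints) => {
  if (constraints && constraints.video) {
    const canvas = Object.assign(document.createElement("canvas"), { width: 1280, height: 720 });
    drawQrCode(canvas); // draw the QR code generated from your test data
    return canvas.captureStream();
  }
  return realGetUserMedia(constraints); // leave non-video requests alone
};
Run this before the page requests the camera (a Chrome extension content script could inject it), and the scanning flow will see your generated QR code instead of the real camera feed.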

Related

Javascript - imageCapture.takePhoto() function to take pictures

I am building a web application for experimental purposes. The aim is to capture ~15-20 frames per second from the webcam and send them to the server. Once a frame is captured, it is converted to base64 and added to an array. After a certain time, the array is sent back to the server. Currently I am using imageCapture.takePhoto() to achieve this functionality. I get a blob as a result, which is then converted to base64. The application runs for ~5 seconds, and during this time frames are captured and sent to the server.
What are more efficient ways to capture frames from the webcam to achieve this?
You can capture still images directly from the <video> element used to preview the stream from .getUserMedia(). You set up that preview, of course, by doing this sort of thing (pseudocode).
const stream = await navigator.mediaDevices.getUserMedia(options)
const videoElement = document.querySelector('video#whateverId')
videoElement.srcObject = stream
videoElement.play()
Next, make yourself a canvas object and a context for it. It doesn't have to be visible.
// (wait for 'loadedmetadata' so videoWidth/videoHeight are populated)
const scratchCanvas = document.createElement('canvas')
scratchCanvas.width = videoElement.videoWidth
scratchCanvas.height = videoElement.videoHeight
const scratchContext = scratchCanvas.getContext('2d')
Now you can make yourself a function like this.
function stillCapture(video, canvas, context) {
  context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight)
  canvas.toBlob(
    function (jpegBlob) {
      /* do something useful with the Blob containing jpeg */
    }, 'image/jpeg')
}
A Blob containing a jpeg version of a still capture shows up in the callback. Do with it whatever you need to do.
Then, invoke that function every so often. For example, to get approximately 15fps, do this.
const howOften = 1000.0 / 15.0
setInterval (stillCapture, howOften, videoElement, scratchCanvas, scratchContext)
All this saves you the extra work of using .takePhoto().
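If you still need base64, as in the question, the Blob from the callback above converts easily with a FileReader (a quick sketch):
// Convert a captured Blob to a base64 data URL
function blobToDataUrl(jpegBlob) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader()
    reader.onload = () => resolve(reader.result) // "data:image/jpeg;base64,..."
    reader.onerror = reject
    reader.readAsDataURL(jpegBlob)
  })
}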

Adding panner / spatial audio to Web Audio Context from a WebRTC stream not working

I would like to create a Web Audio panner to position the sound from a WebRTC stream.
I have the stream connecting OK and can hear the audio and see the video, but the panner does not have any effect on the audio (changing panner.setPosition(10000, 0, 0) to + or - 10000 makes no difference to the sound).
This is the onaddstream function, where the audio and video get piped into a video element and where I presume I need to add the panner.
There are no errors, it just isn't panning at all.
What am I doing wrong?
Thanks!
peer_connection.onaddstream = function(event) {
  var AudioContext = window.AudioContext || window.webkitAudioContext;
  var audioCtx = new AudioContext();
  audioCtx.listener.setOrientation(0, 0, -1, 0, 1, 0);
  var panner = audioCtx.createPanner();
  panner.panningModel = 'HRTF';
  panner.distanceModel = 'inverse';
  panner.refDistance = 1;
  panner.maxDistance = 10000;
  panner.rolloffFactor = 1;
  panner.coneInnerAngle = 360;
  panner.coneOuterAngle = 0;
  panner.coneOuterGain = 0;
  panner.setPosition(10000, 0, 0); // this doesn't do anything
  peerInput.connect(panner);
  panner.connect(audioCtx.destination);
  // attach the stream to the document element
  var remote_media = USE_VIDEO ? $("<video>") : $("<audio>");
  remote_media.attr("autoplay", "autoplay");
  if (MUTE_AUDIO_BY_DEFAULT) {
    remote_media.attr("muted", "false");
  }
  remote_media.attr("controls", "");
  peer_media_elements[peer_id] = remote_media;
  $('body').append(remote_media);
  attachMediaStream(remote_media[0], event.stream);
}
Try creating an audio source from the event's stream before wiring up the panner:
var source = audioCtx.createMediaStreamSource(event.stream);
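Note that peerInput in the question's code is never defined; the source node created above is what should feed the panner (reusing the question's variables):
source.connect(panner);
panner.connect(audioCtx.destination);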
Reference: Mozilla Developer Network - AudioContext
createPanner Reference: Mozilla Developer Network - createPanner
3rd Party Library: wavesurfer.js
Remove all the options you've set for the panner node and see if that helps. (The cone angles seem a little funny to me, but I always forget how they work.)
If that doesn't work, create a smaller test with the panner but use a simple oscillator as the input. Play around with the parameters and positions to make sure it does what you want.
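A standalone test along those lines might look like this (a sketch; browsers require a user gesture before an AudioContext will start, and setPosition is deprecated in favor of the positionX/Y/Z params, but it matches the question's code):
// Minimal panner sanity check: oscillator -> panner -> speakers.
const testCtx = new AudioContext();
const osc = testCtx.createOscillator();
const testPanner = testCtx.createPanner();
testPanner.panningModel = 'HRTF';
testPanner.setPosition(10, 0, 0); // well off to the right
osc.connect(testPanner);
testPanner.connect(testCtx.destination);
osc.start(); // the tone should be clearly biased toward the right ear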
Put this back into your app. Things should work then.
Figured this out for myself.
The problem was not the code, it was that I was connected via Bluetooth audio.
Bluetooth apparently can only do stereo audio with the microphone turned off. As soon as you activate the mic, it steals one of the channels and the audio output downgrades to mono.
With mono audio you obviously cannot do 3D positioned sound, hence my thinking the code was not working.

ArrayBuffer extracted from HTML canvas is different from the ArrayBuffer that was decoded and loaded onto it using UTIF

My goal was to load a TIFF image onto an HTML canvas. The front-end receives an ArrayBuffer of the TIFF image, and I was able to utilize UTIF to decode the ArrayBuffer and render it on the HTML canvas. However, another functionality requires me to export the canvas contents. For this I'm using UTIF again, encoding back to an ArrayBuffer which I will then pass on to the back-end server.
My functional scenario is:
Load a TIFF image onto canvas.
Add other objects on canvas. For eg. circle, strokes, triangles etc. (ignored in the code below)
Export the canvas content as a TIFF image.
Code to add ArrayBuffer:
private _addArrayBufferAsImageOnCanvas(buffer: ArrayBuffer, meta?: {}) {
  console.log(buffer); // 8 MB input
  // Using UTIF.js to decode the array buffer and convert it to ImageData
  const ifds = UTIF.decode(buffer);
  const timage = ifds[0];
  UTIF.decodeImage(buffer, timage);
  const array = new Uint8ClampedArray(UTIF.toRGBA8(timage));
  // Forming image data
  const imageData = new ImageData(array, timage.width, timage.height);
  // a temporary canvas element
  const canvas = document.createElement('canvas');
  canvas.width = timage.width;
  canvas.height = timage.height;
  // on which we draw the ImageData
  const ctx = canvas.getContext('2d');
  ctx.putImageData(imageData, 0, 0);
  // Get the image data back
  const outImageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // and use UTIF to encode the image
  const binaryTiffImage = UTIF.encodeImage(outImageData.data, outImageData.width, outImageData.height);
  // output
  console.log(binaryTiffImage); // 16 MB output
}
The size/byte length of the buffer which is the input argument is half of the size/byte length of the binaryTiffImage which is extracted from the canvas. (8MB input, 16MB output)
Is it because UTIF encoding does not compress the array? (https://github.com/photopea/UTIF.js/blob/master/README.md#utifencodeimagergba-w-h-metadata)
Is there a way I can get the exact same ArrayBuffer from the canvas as it was loaded?
As for the output being twice as big as the input: yes, the lack of compression is probably the biggest factor.
However, there is no way to get the exact same file back from this process anyway, even if you compressed the image data.
First, the compressor would need to use the exact same settings as the original; this may or may not be possible, but it is not simple in any way.
Then, you are losing all the metadata from your original TIFF file. Your process only extracts the raw bitmap data; any information embedded in the TIFF (EXIF, JPEG preview, etc.) is lost.
Not only is metadata lost, but so are color profiles and color depth: your code converts whatever you originally had in your TIFF to sRGB at 32 bits per pixel (24 bits of color + alpha).
If your image data was using a lossy compression like JPEG (it's rare, but possible), then you have created new data by converting what was compressed, aggregated data into individual pixels.
But even if you were dealing with uncompressed raw pixel data, 32 bits per pixel, already in sRGB, and were able to put back all the original metadata, you would still face one big problem:
The 2D canvas API is lossy:
const ctx = document.createElement('canvas').getContext('2d');
ctx.canvas.width = ctx.canvas.height = 50;
const input = new Uint8ClampedArray(50 * 50 * 4);
crypto.getRandomValues(input); // fill with noise
const input_img = new ImageData(input, 50, 50);
ctx.putImageData(input_img, 0, 0);
const output = ctx.getImageData(0, 0, 50, 50).data;
const are_different = input.some((input_value, index) => output[index] !== input_value);
console.log('input and output are', are_different ? 'different' : 'same');
// check from your browser's dev-tools
console.table([input, output]);
To be fair, since this is caused by alpha premultiplication, it shouldn't happen if all your pixels are fully opaque, but all these points add up.
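To illustrate that last point, you can extend the snippet above: forcing every alpha byte to 255 before the round-trip should make input and output match (a quick sketch):
// Force full opacity so premultiplication can't alter the RGB values.
for (let i = 3; i < input.length; i += 4) {
  input[i] = 255;
}
ctx.putImageData(new ImageData(input, 50, 50), 0, 0);
const opaque_output = ctx.getImageData(0, 0, 50, 50).data;
console.log('lossless:', opaque_output.every((v, i) => v === input[i]));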

JavaScript - How to take a picture in webcam with EXIF data?

I am currently using the webcam (not the native camera app) on a web page to take a photo on users' mobile phones. Like this:
var video: HTMLVideoElement;
...
var context = canvas.getContext('2d');
context.drawImage(video, 0, 0, width, height);
var jpegData = canvas.toDataURL('image/jpeg', compression);
This way, I can successfully generate JPEG image data from the web camera and display it on the web page.
However, I found that the EXIF data is missing.
According to this:
Canvas.drawImage() will ignore all EXIF metadata in images,
including the Orientation. This behavior is especially troublesome
on iOS devices. You should detect the Orientation yourself and use
rotate() to make it right.
I would love the JPEG image to contain the EXIF GPS data. Is there a simple way to include camera EXIF data during the process?
Thanks!
Tested on a Pixel 3 - it works. Please note that it sometimes does not work with some desktop webcams. You will need exif-js to get the EXIF object in the example below.
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const track = stream.getVideoTracks()[0];
let imageCapture = new ImageCapture(track);
imageCapture.takePhoto().then((blob) => {
  const newFile = new File([blob], "MyJPEG.jpg", { type: "image/jpeg" });
  EXIF.getData(newFile, function () {
    const make = EXIF.getAllTags(newFile);
    console.log("All data", make);
  });
});
Unfortunately, there's no way to extract EXIF from a canvas.
However, if you have access to the original JPEG, you can extract EXIF from that. For that I'd recommend exifr over the widely popular exif-js, because exif-js has been unmaintained for two years and still has breaking bugs in it (n is undefined).
With exifr you can either parse everything
exifr.parse('./myimage.jpg').then(output => {
  console.log('Camera:', output.Make, output.Model)
})
or just a few tags
let output = await exifr.parse(file, ['ISO', 'Orientation', 'LensModel'])
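Since GPS data is what the question is after, exifr also has a dedicated helper for just the coordinates:
// Extract only the GPS coordinates from the JPEG.
let { latitude, longitude } = await exifr.gps('./myimage.jpg')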
First off, according to what I have found so far, there is no way to include EXIF data during canvas context drawing.
Second, there is a way to work around this: extract the EXIF data from the original JPEG file, then, after the canvas context drawing, put the extracted EXIF data back into the newly drawn JPEG file.
It's messy and a little hacky, but for now this is the workaround.
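One way to do that re-insertion is with the piexifjs library; a sketch, assuming originalDataUrl holds the source JPEG (the one that still has EXIF) as a data URL, and reusing canvas and compression from the question:
// Copy the EXIF segment from the original JPEG into the redrawn one.
// `originalDataUrl` is an assumption; `canvas`/`compression` are from the question.
var exifObj = piexif.load(originalDataUrl);         // read EXIF from the source
var exifStr = piexif.dump(exifObj);                 // serialize the EXIF segment
var redrawn = canvas.toDataURL('image/jpeg', compression);
var jpegWithExif = piexif.insert(exifStr, redrawn); // data URL with EXIF restored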
Thanks!

Is it possible to control playback of animated webP images?

Does webP have a javascript API?
I want to seek to a specific point in an animated webP. I haven't come across any documentation to suggest it does but no harm asking SO.
Note: I'm not interested in the HTML5 video element, webM or other video formats.
Abstract
Does webP have a javascript API?
It seems that webP has no real JavaScript API of its own, but you can control it on the backend and on the frontend.
The frontend approach, the one I understand best, is not very efficient, but it is simple.
For example?
✓Pausing an animation
(however I wouldn't recommend doing such a thing):
const is_webp_image = (i) => {
  return /^(?!data:).*\.webp/i.test(i.src);
};
const pause_webp = (i) => {
  var c = document.createElement('canvas');
  var w = c.width = i.width;
  var h = c.height = i.height;
  c.getContext('2d').drawImage(i, 0, 0, w, h);
  try {
    i.src = c.toDataURL("image/webp"); // if possible, retain all css aspects
  } catch (e) { // cross-domain -- mimic original with all its tag attributes
    for (var j = 0, a; a = i.attributes[j]; j++)
      c.setAttribute(a.name, a.value);
    i.parentNode.replaceChild(c, i);
  }
};
// (call this after the functions are defined, since const is not hoisted)
[].slice.apply(document.images).filter(is_webp_image).map(pause_webp);
That pauses a webP (or a GIF).
✓Controlling playback:
To control playback, I recommend slicing the webP on the backend, as in the pseudocode below, given a webp file variable and some self-made or adopted backend API:
// pseudocode:
var list = [];
for (var itr = 0; itr < len(webp) /* how long */; itr++) {
  list.push(giveWebURI(webp.slice(itr, len(webp))));
  // ^ e.g. http://example.org/versions/75-sec-until-end.webp
}
Then on the frontend JS:
const playFrom = (time) => {
  document.querySelector(/* selector */).src =
    `http://example.org/versions/${time}-sec-until-end.webp`;
};
I would call this only an introduction to backend/frontend/file interactions.
Still, you could draw something out of this. Blessings!
