We are currently trying to use the native BarcodeDetector in the latest Chrome (59). It is available behind the flag chrome://flags/#enable-experimental-web-platform-features.
You can have a look at another example here.
We are checking for the native BarcodeDetector like this:
typeof window.BarcodeDetector === 'function'
But even when we take this branch and finally manage to feed some image data into the detector, all we get is an exception:
DOMException: Barcode detection service unavailable.
I've googled that, but was not very successful. The most promising hint is this one, but it seems to concern an odd WebKit fork.
What we are doing is the following (pseudocode!):
window.createImageBitmap(canvasContext.canvas) // canvasContext from canvasEl.getContext('2d')
    .then(function (imageBitmap) {
        // BarcodeDetector has to be instantiated; detect() is not a static method
        return new window.BarcodeDetector().detect(imageBitmap);
    })
    .catch(function (e) {
        // this is where we end up: DOMException: Barcode detection service unavailable.
    });
Has anybody ever heard of this and can share some experiences with the BarcodeDetector?
I've had the same error using the desktop version of Chrome. I guess at the moment it is only implemented in the mobile version.
Here is an example that worked for me (Android 5.0.2, Chrome 59 with chrome://flags/#enable-experimental-web-platform-features enabled): https://jsfiddle.net/daniilkovalev/341u3qxz/
navigator.mediaDevices.enumerateDevices().then((devices) => {
    // pick the last video input; on phones this is usually the rear camera
    let id = devices.filter((device) => device.kind === "videoinput").pop().deviceId;
    // note: the old {optional: [{sourceId: id}]} syntax predates the spec;
    // deviceId is the standard constraint
    let constraints = {video: {deviceId: {exact: id}}};
    navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
        let capturer = new ImageCapture(stream.getVideoTracks()[0]);
        step(capturer);
    });
});

function step(capturer) {
    capturer.grabFrame().then((bitmap) => {
        let canvas = document.getElementById("canvas");
        let ctx = canvas.getContext("2d");
        ctx.drawImage(bitmap, 0, 0, bitmap.width, bitmap.height, 0, 0, canvas.width, canvas.height);
        let barcodeDetector = new BarcodeDetector();
        barcodeDetector.detect(bitmap)
            .then(barcodes => {
                document.getElementById("barcodes").innerHTML = barcodes.map(barcode => barcode.rawValue).join(', ');
                step(capturer);
            })
            .catch((e) => {
                console.error(e);
            });
    });
}
Reviving an old thread here.
It is not supported in desktop browsers, only in mobile browsers.
Here's my working code:
getImage(event) {
    let file: File = event.target.files[0];
    // the BarcodeDetector constructor takes an options object listing the formats to look for
    let BarcodeDetectorImpl = window['BarcodeDetector'];
    let pdf417Detector = new BarcodeDetectorImpl({ formats: ['pdf417'] });
    createImageBitmap(file)
        .then(img => pdf417Detector.detect(img))
        .then(barcodes => {
            alert(barcodes[0].rawValue);
        });
}
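For completeness, a hypothetical way to wire the handler above to a file input (the selector is an assumption, and getImage is assumed to be in scope as a plain function):
// hypothetical wiring: forward the file input's change event to getImage()
document.querySelector('input[type="file"]')
    .addEventListener('change', (event) => getImage(event));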
It took some back and forth, but we finally have reliable feature detection for this API; see the article for full details. This is the relevant code snippet:
await BarcodeDetector.getSupportedFormats();
/* On a macOS computer logs
  [
    "aztec",
    "code_128",
    "code_39",
    "code_93",
    "data_matrix",
    "ean_13",
    "ean_8",
    "itf",
    "pdf417",
    "qr_code",
    "upc_e"
  ]
*/
This allows you to detect the specific feature you need, for example, QR code scanning:
if (('BarcodeDetector' in window) &&
    ((await BarcodeDetector.getSupportedFormats()).includes('qr_code'))) {
    console.log('QR code scanning is supported.');
}
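Once that check passes, detection itself follows the same constructor-plus-detect() pattern; a minimal sketch (imageBitmap stands for whatever image source you have, e.g. the result of createImageBitmap()):
const detector = new BarcodeDetector({ formats: ['qr_code'] });
const barcodes = await detector.detect(imageBitmap);
for (const barcode of barcodes) {
    // each DetectedBarcode carries the decoded value, its format, and its location
    console.log(barcode.rawValue, barcode.format, barcode.boundingBox);
}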
I'm trying to create a one-channel (mono) MediaStreamTrack with a MediaStreamAudioDestinationNode. According to the standard, this should be possible.
const ctx = new AudioContext();
const destinationNode = new MediaStreamAudioDestinationNode(ctx, {
    channelCount: 1,
    channelCountMode: 'explicit',
    channelInterpretation: 'speakers',
});
await ctx.resume(); // doesn't make a difference
// this fails
expect(destinationNode.stream.getAudioTracks()[0].getSettings().channelCount).equal(1);
Result:
Chrome 92.0.4515.107 always creates a stereo track.
Firefox 90 returns an empty object from destinationNode.stream.getAudioTracks()[0].getSettings(), even though getSettings() should be fully supported.
What am I doing wrong here?
Edit:
Apparently both Firefox and Chrome actually produce a mono track; they just don't report it correctly. Here's a workaround solution in TypeScript:
async function getNumChannelsInTrack(track: MediaStreamTrack): Promise<number> {
    // unfortunately, we can't use track.getSettings().channelCount, because
    // - Firefox 90 returns {} from getSettings() => see: https://bugzilla.mozilla.org/show_bug.cgi?id=1307808
    // - Chrome 92 always reports 2 channels, even if that's incorrect => see: https://bugs.chromium.org/p/chromium/issues/detail?id=1044645
    // Workaround: record audio and inspect the recorded buffer to determine the number of audio channels in it.
    const stream = new MediaStream();
    stream.addTrack(track);
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start();
    return new Promise<number>((resolve) => {
        // register the handler before stopping, so we can't race the dataavailable event
        mediaRecorder.ondataavailable = async ({ data }) => {
            const offlineAudioContext = new OfflineAudioContext({
                length: 1,
                sampleRate: 48000,
            });
            const audioBuffer = await offlineAudioContext.decodeAudioData(
                await data.arrayBuffer()
            );
            resolve(audioBuffer.numberOfChannels);
        };
        // record for one second, then inspect what was captured
        setTimeout(() => mediaRecorder.stop(), 1000);
    });
}
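A hypothetical usage with the destinationNode from the question above:
const track = destinationNode.stream.getAudioTracks()[0];
getNumChannelsInTrack(track).then((channels) => {
    console.log(`the recorded buffer has ${channels} channel(s)`);
});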
I don't think you're doing anything wrong. It's a known issue (https://bugs.chromium.org/p/chromium/issues/detail?id=1044645) in Chrome which simply hasn't been fixed yet.
I think in Firefox it isn't even implemented. This bug (https://bugzilla.mozilla.org/show_bug.cgi?id=1307808) indicates that getSettings() so far only returns the values that can be changed.
I think it would be helpful if you star/follow these issues or comment on them to make sure they don't get forgotten.
I am working on a video editor in which the video is rendered on a canvas, so I use the JS MediaRecorder API to export it. I have run into an odd problem: because the MediaRecorder API is primarily designed for live streams, my exported WebM file doesn't report its duration until playback has finished, which is kinda annoying.
This is the code I am using:
function exportVideo() {
    const stream = preview.captureStream();
    const dest = audioContext.createMediaStreamDestination();
    const sources = []
        .concat(...layers.map((layer) => layer.addAudioTracksTo(dest)))
        .filter((source) => source);
    // exporting doesn't work if there's no audio, and this adds the tracks
    if (sources.length) {
        dest.stream.getAudioTracks().forEach((track) => stream.addTrack(track));
    }
    const recorder = new MediaRecorder(stream, {
        mimeType: usingExportType,
        videoBitsPerSecond: exportBitrate * 1000000,
    });
    let download = true;
    recorder.addEventListener("dataavailable", (e) => {
        exportedURL = URL.createObjectURL(e.data);
        if (download) {
            const saveLink = document.createElement("a");
            saveLink.href = exportedURL;
            saveLink.download = "video-export.webm";
            document.body.appendChild(saveLink);
            saveLink.click();
            document.body.removeChild(saveLink);
        }
    });
    previewTimeAt(0, false);
    return new Promise((res) => {
        recorder.start();
        audioContext.resume().then(() => play(res));
    }).then((successful) => {
        download = successful;
        recorder.stop();
        sources.forEach((source) => {
            source.disconnect(dest);
        });
    });
}
And if this is too vague, please tell me what is vague about it.
Thanks!
EDIT: I narrowed down the problem; this is a Chrome bug, see https://bugs.chromium.org/p/chromium/issues/detail?id=642012. I discovered a library, https://github.com/legokichi/ts-ebml, that may be able to make the WebM seekable, but unfortunately this is a JavaScript project and I'm not setting up TypeScript.
JS MediaRecorder API exports a non-seekable WebM file
Yes, it does. It's in the nature of streaming.
In order to make that sort of stream seekable, you need to post-process it. There's an npm ebml library, pre-TypeScript, if you want to attempt it; a sketch follows.
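For illustration, a minimal sketch of that post-processing using the ts-ebml library mentioned in the question, following its README (untested here; recordedBlob is assumed to be the Blob that came out of MediaRecorder):
import { Decoder, Reader, tools } from "ts-ebml";

async function makeSeekable(recordedBlob) {
    const buffer = await recordedBlob.arrayBuffer();
    // parse the EBML structure of the recorded WebM
    const reader = new Reader();
    reader.drop_default_duration = false;
    new Decoder().decode(buffer).forEach((element) => reader.read(element));
    reader.stop();
    // rebuild the header so it carries the duration and cue points
    const seekableHeader = tools.makeMetadataSeekable(
        reader.metadatas, reader.duration, reader.cues);
    const body = buffer.slice(reader.metadataSize);
    return new Blob([seekableHeader, body], { type: recordedBlob.type });
}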
I'm trying to record parts of the video from a <video> tag and save them for later use. I found this article: Recording a media element, which describes a method of first calling stream = video.captureStream(), then using new MediaRecorder(stream) to get a recorder.
I've tested some demos, and MediaRecorder works fine if the stream comes from the user's device (such as a microphone). However, when it comes to a media element, my Firefox browser throws an exception: MediaRecorder.start: The MediaStream's isolation properties disallow access from MediaRecorder.
So any idea on how to deal with it?
Browser: Firefox
The page (including the js file) is stored locally.
The src attribute of the <video> tag can be either a file from local storage or a URL from the Internet.
Code snippets:
let chunks = [];

let getCaptureStream = function () {
    let stream;
    const fps = 0;
    if (video.captureStream) {
        console.log("use captureStream");
        stream = video.captureStream(fps);
    } else if (video.mozCaptureStream) {
        console.log("use mozCaptureStream");
        stream = video.mozCaptureStream(fps);
    } else {
        console.error('Stream capture is not supported');
        stream = null;
    }
    return stream;
}

video.addEventListener('play', () => {
    let stream = getCaptureStream();
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.onstop = function () {
        const newVideo = document.createElement('video');
        newVideo.controls = true;
        // the MIME type belongs on the Blob; createObjectURL takes no options
        const blob = new Blob(chunks, { 'type': 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' });
        chunks = [];
        newVideo.src = window.URL.createObjectURL(blob);
        document.body.appendChild(newVideo);
    }
    mediaRecorder.ondataavailable = function (e) {
        chunks.push(e.data);
    }
    stopButton.onclick = function () {
        mediaRecorder.stop();
    }
    mediaRecorder.start(); // This is the line that triggers the exception.
});
I found the solution myself.
When I switched to Chrome, it showed that a CORS issue prevented me from even playing the original video. So I guess it's the security policy that prevents MediaRecorder from accessing the MediaStream. Therefore, I deployed the local files to a local server, following the instructions on this page.
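Any static file server will do; for illustration only (this sketch is mine, not from the linked page), Node's built-in http module is enough:
// minimal static file server: serves the current directory on http://localhost:8080
const http = require("http");
const fs = require("fs");
const path = require("path");

http.createServer((req, res) => {
    const filePath = path.join(__dirname, req.url === "/" ? "index.html" : req.url);
    fs.readFile(filePath, (err, data) => {
        if (err) {
            res.writeHead(404);
            res.end("Not found");
            return;
        }
        res.end(data);
    });
}).listen(8080);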
After that, the MediaRecorder started working. Hope this helps someone in need.
Still, the official documentation doesn't seem to say much about the isolation properties of media elements, so any idea or further explanation is welcome.
It is known that iOS Safari does not support canvas.captureStream() to (e.g.) pipe a canvas's content into a video element; see this demo, which does not work in iOS Safari.
However, canvas.captureStream() is a valid function in iOS Safari and correctly returns a MediaStream with a CanvasCaptureMediaStreamTrack; it just doesn't function as intended. To detect browsers that don't support canvas.captureStream, it would have been easy to test typeof canvas.captureStream === 'function', but at least for iOS Safari we can't rely on that. Neither can we rely on the type of the returned value.
How do I write JavaScript that detects whether the current browser effectively supports canvas.captureStream()?
I have no iOS device to test on here, but according to the comments on the issue you linked to, captureStream() itself actually works; what doesn't work is the HTMLVideoElement's reading of this MediaStream. So that's what you actually want to test.
According to the messages there, the video element doesn't even fail to load (i.e. the metadata are set correctly), so I don't expect events like error to fire. Though if one did fire, it would be quite simple to test: check whether a video is able to play such a MediaStream.
function testReadingOfCanvasCapturedStream() {
    // first check the DOM API is available
    if (!testSupportOfCanvasCaptureStream()) {
        return Promise.resolve(false);
    }
    // create a test canvas
    const canvas = document.createElement("canvas");
    // we need to init a context on the canvas
    const ctx = canvas.getContext("2d");
    const stream = canvas.captureStream();
    const vid = document.createElement("video");
    vid.muted = true;
    vid.playsInline = true;
    vid.srcObject = stream;
    // Safari needs us to draw on the canvas
    // asynchronously, after we requested the MediaStream
    setTimeout(() => ctx.fillRect(0, 0, 5, 5));
    // if reading failed, .play() rejecting would be enough of a test,
    // but according to the comments on the issue, it isn't
    return vid.play()
        .then(() => true)
        .catch(() => false) // the result must be produced here:
                            // a value returned from .finally() is ignored
        .finally(() => {
            // clean up the test track
            stream.getTracks().forEach((track) => track.stop());
        });
}

function testSupportOfCanvasCaptureStream() {
    return "function" === typeof HTMLCanvasElement.prototype.captureStream;
}

testReadingOfCanvasCapturedStream()
    .then((supports) => console.log(supports));
But if the video is able to play and yet no image is painted, then we have to go a bit deeper and check what has been rendered to the video. To do this, we'll draw some color on the canvas, wait for the video to have loaded, and draw it back on the canvas before checking the color of the frame on the canvas:
async function testReadingOfCanvasCapturedStream() {
    // first check the DOM API is available
    if (!testSupportOfCanvasCaptureStream()) {
        return false;
    }
    // create a test canvas
    const canvas = document.createElement("canvas");
    // we need to init a context on the canvas
    const ctx = canvas.getContext("2d");
    const stream = canvas.captureStream();
    const clean = () => stream.getTracks().forEach((track) => track.stop());
    const vid = document.createElement("video");
    vid.muted = true;
    vid.srcObject = stream;
    // Safari needs us to draw on the canvas
    // asynchronously, after we requested the MediaStream
    setTimeout(() => {
        // we draw in a well-known color
        ctx.fillStyle = "#FF0000";
        ctx.fillRect(0, 0, 300, 150);
    });
    try {
        await vid.play();
    } catch (e) {
        // failed to load, no need to go deeper:
        // it's not supported
        clean();
        return false;
    }
    // here we should have our canvas painted on the video;
    // let's keep this image on the video
    vid.pause();
    // now draw it back on the canvas
    ctx.clearRect(0, 0, 300, 150);
    ctx.drawImage(vid, 0, 0);
    const pixel_data = ctx.getImageData(5, 5, 1, 1).data;
    const red_channel = pixel_data[0];
    clean();
    return red_channel > 0; // it has red
}

function testSupportOfCanvasCaptureStream() {
    return "function" === typeof HTMLCanvasElement.prototype.captureStream;
}

testReadingOfCanvasCapturedStream()
    .then((supports) => console.log(supports));
I am currently using getUserMedia(), which only works on Firefox and Chrome. Moreover, it got deprecated and works only over HTTPS (in Chrome). Is there any other/better way to get speech input in JavaScript that works on all platforms?
E.g. how do websites like web.whatsapp.com record audio? getUserMedia() prompts first-time users to permit audio recording, whereas the WhatsApp application doesn't require the user's permission.
The getUserMedia() I am currently using looks like this:
navigator.getUserMedia(
    {
        "audio": {
            "mandatory": {
                "googEchoCancellation": "false",
                "googAutoGainControl": "false",
                "googNoiseSuppression": "false",
                "googHighpassFilter": "false"
            },
            "optional": []
        },
    },
    gotStream,
    function (e) {
        console.log(e);
    }
);
Chrome 60+ does require HTTPS, since getUserMedia is a powerful feature. API access shouldn't work on non-secure origins, as that access could bleed over to non-secure actors. Firefox still supports getUserMedia over HTTP, though.
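For what it's worth, the promise-based navigator.mediaDevices.getUserMedia avoids the legacy goog* constraints. A minimal sketch of the equivalent request (gotStream as in the question; the highpass filter has no standard constraint, so it is omitted):
navigator.mediaDevices.getUserMedia({
    audio: {
        echoCancellation: false,
        autoGainControl: false,
        noiseSuppression: false,
    },
})
    .then(gotStream)
    .catch((e) => console.log(e));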
I've been using RecorderJS and it served my purposes well.
Here is a code example. (source)
function RecordAudio(stream, cfg) {
    var config = cfg || {};
    var bufferLen = config.bufferLen || 4096;
    var numChannels = config.numChannels || 2;
    // `stream` is expected to be an audio node (e.g. a MediaStreamAudioSourceNode),
    // since we read its context and connect it to the processor below
    this.context = stream.context;
    var recordBuffers = [];
    var recording = false;
    this.node = (this.context.createScriptProcessor ||
        this.context.createJavaScriptNode).call(this.context,
        bufferLen, numChannels, numChannels);
    stream.connect(this.node);
    this.node.connect(this.context.destination);
    this.node.onaudioprocess = function (e) {
        if (!recording) return;
        for (var i = 0; i < numChannels; i++) {
            if (!recordBuffers[i]) recordBuffers[i] = [];
            recordBuffers[i].push.apply(recordBuffers[i], e.inputBuffer.getChannelData(i));
        }
    };
    this.getData = function () {
        var tmp = recordBuffers;
        recordBuffers = [];
        return tmp; // returns an array of arrays containing the data from the various channels
    };
    this.start = function () { // note: `this.start = ...`, not `this.start() = ...`
        recording = true;
    };
    this.stop = function () {
        recording = false;
    };
}
The usage is straightforward:
var recorder = new RecordAudio(userMedia);
recorder.start();
recorder.stop();
var recordedData = recorder.getData()
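getData() hands back raw Float32 samples per channel; turning them into something playable is up to you. A minimal sketch (my addition, not part of RecorderJS) that encodes one channel as a 16-bit PCM mono WAV Blob:
function encodeWav(samples, sampleRate) {
    // 44-byte RIFF/WAVE header followed by 16-bit little-endian PCM
    const buffer = new ArrayBuffer(44 + samples.length * 2);
    const view = new DataView(buffer);
    const writeString = (offset, s) => {
        for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
    };
    writeString(0, "RIFF");
    view.setUint32(4, 36 + samples.length * 2, true); // remaining chunk size
    writeString(8, "WAVE");
    writeString(12, "fmt ");
    view.setUint32(16, 16, true);  // fmt chunk size
    view.setUint16(20, 1, true);   // audio format: PCM
    view.setUint16(22, 1, true);   // channels: mono
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * 2, true); // byte rate
    view.setUint16(32, 2, true);   // block align
    view.setUint16(34, 16, true);  // bits per sample
    writeString(36, "data");
    view.setUint32(40, samples.length * 2, true);
    for (let i = 0; i < samples.length; i++) {
        const s = Math.max(-1, Math.min(1, samples[i])); // clamp to [-1, 1]
        view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
    }
    return new Blob([view], { type: "audio/wav" });
}

// e.g. encode the first channel at the recorder's sample rate
const wav = encodeWav(recordedData[0], recorder.context.sampleRate);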
Edit: You may also want to check this answer if nothing else works.
RecorderJS does the easy job for you; it works with Web Audio API nodes.
Chrome and Firefox have evolved since then. There is now a built-in MediaRecorder API which does audio recording for you.
let rec;
const audioChunks = [];

navigator.mediaDevices.getUserMedia({audio: true})
    .then(stream => {
        rec = new MediaRecorder(stream);
        rec.ondataavailable = e => {
            audioChunks.push(e.data);
            if (rec.state == "inactive") {
                // Use the chunks to create a new Object URL and play back/download
            }
        };
        rec.start();
    })
    .catch(e => console.log(e));
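A minimal sketch of what that inactive branch could do (my addition, not part of the original snippet):
// combine the recorded chunks and play them back
const blob = new Blob(audioChunks, { type: rec.mimeType });
const audio = new Audio(URL.createObjectURL(blob));
audio.play();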
Working demo
MediaRecorder support starts from:
Chrome support: 47
Firefox support: 25.0
getUserMedia() isn't deprecated; what is deprecated is using it over HTTP. As far as I know, the only browser which currently requires HTTPS for getUserMedia() is Chrome, which I think is the correct approach.
If you want SSL/TLS for your tests, you can use the free tier of Cloudflare.
The WhatsApp page doesn't provide any recording functions; it just allows you to launch the application.
Good article about getUserMedia
Fully working example with use of MediaRecorder