What is a TrackStartError? - javascript

I am running audio only sessions using the constraints:
var constraints = {
  audio: {
    mandatory: {
      echoCancellation: false
    },
    optional: [{
      sourceId: audioSource
    }]
  },
  video: false
};
I am noticing that in a very small number of sessions I receive a TrackStartError from the getUserMedia request. I cannot see any correlation with browser, browser version, OS, or available devices. Some computers get this error continually, some get it once and then have no problem after a new getUserMedia request, and some don't experience it at all.
Is TrackStartError fully documented? I have seen some issues around mandatory audio flags, but echoCancellation does not seem to have this problem.

TrackStartError is a non-spec Chrome-specific version of NotReadableError:
Although the user granted permission to use the matching devices, a hardware error occurred at the operating system, browser, or Web page level which prevented access to the device.
Seems fitting, given that your constraints are non-spec and Chrome-specific as well. Instead, try:
var constraints = {
  audio: {
    echoCancellation: { exact: false },
    deviceId: audioSource
  }
};
I highly recommend the official adapter.js polyfill to deal with such browser differences.
Some systems (like Windows) give exclusive access to hardware devices, which can cause this error if other applications are currently using a mic or camera. It can also be a bug or driver issue.
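To handle the intermittent cases defensively, here is a minimal sketch that treats Chrome's legacy TrackStartError the same as the spec's NotReadableError. The isDeviceBusyError helper name is my own, not a standard API:

```javascript
// Sketch: Chrome's legacy TrackStartError maps to the spec's NotReadableError.
// isDeviceBusyError is a hypothetical helper, not part of any browser API.
function isDeviceBusyError(err) {
  return err.name === 'NotReadableError' || err.name === 'TrackStartError';
}

async function getAudioStream(audioSource) {
  const constraints = {
    audio: {
      echoCancellation: { exact: false },
      deviceId: audioSource
    }
  };
  try {
    return await navigator.mediaDevices.getUserMedia(constraints);
  } catch (err) {
    if (isDeviceBusyError(err)) {
      // Likely exclusive-access contention or a driver issue; retrying
      // after the user closes other apps often succeeds.
      throw new Error('Microphone busy or unavailable; close other apps and retry.');
    }
    throw err;
  }
}
```

Given that some machines in your data succeed on a second getUserMedia request, a single retry after a short delay is a reasonable mitigation.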

Related

Can a DigitalPersona USB fingerprint scanner be used directly in the browser using only JavaScript?

I can access a DigitalPersona 4500 with the following code:
navigator.usb.requestDevice({ filters: [{ vendorId: 0x05ba }] })
  .then(device => {
    console.log(device.productName);      // "U.are.U® 4500 Fingerprint Reader"
    console.log(device.manufacturerName); // "DigitalPersona, Inc."
    // ...
  })
  .catch(error => { console.error(error); });
Now, is it possible to open the device and start scanning?
(Disclaimer: I do not have access to the DP SDK because the devices I have access to were bought before I was hired, from some third party vendor and, after contacting them, they did not provide me with any support. The devices work just fine, so there are no plans to throw them away and replace them.)
Related question: Fingerprint scanner with WebUSB
With WebUSB, and without clear documentation for the DigitalPersona 4500 and the USB commands this device supports, it's really hard but still possible with lucky guessing. I strongly recommend getting the device.
Check out https://web.dev/devices-introduction/ for tips on how to do this.
TL;DR: watch Exploring WebUSB and its exciting potential from Suz Hinton.
You can also reverse-engineer this device by capturing raw USB traffic and inspecting USB descriptors, with external tools like Wireshark and built-in browser tools such as the internal page about://usb-internals in Chromium-based browsers.
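As a rough illustration of what that guessing looks like, this is the generic WebUSB open/claim sequence. The configuration and interface numbers below are assumptions, and the actual commands the 4500 expects would still have to be reverse-engineered:

```javascript
// Sketch of the generic WebUSB open/claim sequence. The configuration (1)
// and interface (0) numbers are guesses, not documented values for the 4500.
async function openScanner(device) {
  await device.open();
  if (device.configuration === null) {
    await device.selectConfiguration(1); // assumed configuration number
  }
  await device.claimInterface(0); // assumed interface number
  // From here you would issue device.transferIn()/transferOut() calls, but
  // the payloads the 4500 expects must be captured (e.g. with Wireshark)
  // from a machine running the vendor's own software.
}
```

Note that the OS may bind its own driver to the device, in which case claimInterface will fail until that driver is detached or replaced.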

Browser permissions for GetUserMedia from different camera devices

In the site I am coding, I want the user to have the option of toggling between different video input devices and view the stream.
I am able to enumerate all the devices using navigator.mediaDevices.enumerateDevices() and filtering this by kind gives me the video input devices.
However, when I try to use navigator.mediaDevices.getUserMedia({ video: { deviceId: deviceIdOfSelectedDevice } }), I notice that I only get the stream from the camera allowed by the browser, irrespective of the deviceId. I want to prompt for browser permissions to allow a different camera.
The documentation says this about your code:
The above will return the camera you requested, or a different camera if that specific camera is no longer available.
The documentation also says that you can require a specific device with exact:
{ video: { deviceId: { exact: deviceIdOfSelectedDevice } } }
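A minimal sketch of the difference (buildVideoConstraints is a hypothetical helper, not a browser API): with exact, an unavailable or unpermitted camera makes the request fail with an OverconstrainedError instead of silently falling back to another device:

```javascript
// Hypothetical helper: wrap a deviceId in an `exact` constraint so the
// browser must use that camera or reject the request.
function buildVideoConstraints(deviceId) {
  return { video: { deviceId: { exact: deviceId } } };
}

// Browser usage (sketch):
// navigator.mediaDevices.getUserMedia(buildVideoConstraints(selectedId))
//   .then(stream => { videoElement.srcObject = stream; })
//   .catch(err => console.error(err.name)); // e.g. OverconstrainedError
```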

Bypass browser audio processing

I've seen a number of posts discussing removal of automatic audio processing in various browsers, usually in connection with WebRTC. The javascript is along the lines of
navigator.mediaDevices.getUserMedia({
  audio: {
    autoGainControl: false,
    channelCount: 2,
    echoCancellation: false,
    latency: 0,
    noiseSuppression: false,
    sampleRate: 48000,
    sampleSize: 16,
    volume: 1.0
  }
});
I've set up WebRTC live streaming from my home studio to my website and need to implement this, but I'm unclear on where in the signal chain the constraints are placed.
If I am generating the audio in my studio, and the viewers are watching/listening on my website in a given browser, it seems that the proper place to drop the code would be in the html/javascript on the viewer end, not my end. But if the user is simply observing (not generating any audio of their own), a call to
navigator.mediaDevices.getUserMedia
on their end would appear to be inert.
What's the proper method for implementing a javascript snippet on the browser end for removing audio processing? Should this be done instead through the Web Audio API?
These media constraints are audio capture constraints; they apply where you record the audio, i.e. on the source end.
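Concretely, the constraints go into the getUserMedia call in the studio's browser, before the captured track is handed to the peer connection. A sketch, assuming a standard RTCPeerConnection setup on your end:

```javascript
// Sketch: capture-side constraints that disable the browser's automatic
// audio processing. These belong in the studio (source) page, not the
// viewer page - viewers never call getUserMedia for a listen-only stream.
function rawAudioConstraints() {
  return {
    audio: {
      autoGainControl: false,
      echoCancellation: false,
      noiseSuppression: false,
      channelCount: 2,
      sampleRate: 48000
    }
  };
}

// Browser usage on the source end (sketch, assumes `pc` is your
// RTCPeerConnection):
// const stream = await navigator.mediaDevices.getUserMedia(rawAudioConstraints());
// stream.getAudioTracks().forEach(track => pc.addTrack(track, stream));
```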

Confusion over args for Puppeteer

I am a little confused over the arguments needed for Puppeteer, in particular when the puppeteer-extra stealth plugin is used. I am currently just using all the default settings and Chromium however I keep seeing examples like this:
let options = {
  headless: false,
  ignoreHTTPSErrors: true,
  args: [
    '--no-sandbox',
    '--disable-setuid-sandbox',
    '--disable-sync',
    '--ignore-certificate-errors'
  ],
  defaultViewport: { width: 1366, height: 768 }
};
Do I actually need any of these to avoid being detected? I have been using Puppeteer without setting any of them and it passes the bot test out of the box. What is --no-sandbox for?
These are Chromium features, not Puppeteer-specific options.
Please take a look at the following sections for --no-sandbox, for example.
https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md#setting-up-chrome-linux-sandbox
Setting Up Chrome Linux Sandbox
In order to protect the host environment from untrusted web content, Chrome uses multiple layers of sandboxing. For this to work properly, the host should be configured first. If there's no good sandbox for Chrome to use, it will crash with the error No usable sandbox!.
If you absolutely trust the content you open in Chrome, you can launch Chrome with the --no-sandbox argument:
const browser = await puppeteer.launch({
  args: ['--no-sandbox', '--disable-setuid-sandbox']
});
NOTE: Running without a sandbox is strongly discouraged. Consider configuring a sandbox instead.
https://chromium.googlesource.com/chromium/src/+/HEAD/docs/linux/sandboxing.md#linux-sandboxing
Chromium uses a multiprocess model, which allows it to give different privileges and restrictions to different parts of the browser. For instance, we want renderers to run with a limited set of privileges, since they process untrusted input and are likely to be compromised. Renderers will use an IPC mechanism to request access to resources from a more privileged (browser) process. You can find more about this general design here.
We use different sandboxing techniques on Linux and Chrome OS, in combination, to achieve a good level of sandboxing. You can see which sandboxes are currently engaged by looking at chrome://sandbox (renderer processes) and chrome://gpu (gpu process).
...
You can disable all sandboxing (for testing) with --no-sandbox.
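So for detection purposes, none of these flags are needed. If you ever do have to run in an environment without a usable sandbox (e.g. as root inside a container, a common CI compromise), one pattern is to gate the flag explicitly rather than passing it everywhere. The launchArgs helper below is hypothetical, not part of Puppeteer:

```javascript
// Hypothetical helper: only add --no-sandbox in the one environment that
// genuinely needs it; everywhere else, keep Chromium's sandbox enabled.
function launchArgs({ isRootInContainer = false } = {}) {
  return isRootInContainer
    ? ['--no-sandbox', '--disable-setuid-sandbox']
    : [];
}

// Usage (sketch):
// const browser = await puppeteer.launch({ args: launchArgs({ isRootInContainer: true }) });
```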

getUserMedia with MediaStreamAudioSourceNode on Android Chrome

I am trying to use media streams with getUserMedia on Chrome on Android. To test, I worked up the script below, which simply connects the input stream to the output. This code works as expected on Chrome under Windows, but on Android I do not hear anything. The user is prompted to allow microphone access, but no audio comes out of the speaker, handset speaker, or headphone jack.
navigator.webkitGetUserMedia({
  video: false,
  audio: true
}, function (stream) {
  var audioContext = new webkitAudioContext();
  var input = audioContext.createMediaStreamSource(stream);
  input.connect(audioContext.destination);
});
In addition, the feedback beeps when rolling the volume up and down do not sound, as if Chrome is playing back audio to the system.
Is it true that this functionality isn't supported on Chrome for Android yet? The following questions are along similar lines, but neither really has a definitive answer or explanation.
HTML5 audio recording not working in Google Nexus
detecting support for getUserMedia on Android browser fails
As I am new to using getUserMedia, I wanted to make sure there wasn't something I was doing in my code that could break compatibility.
I should also note that this problem doesn't seem to apply to getUserMedia itself. It is possible to use getUserMedia in an <audio> tag, as demonstrated by this code (utilizes jQuery):
navigator.webkitGetUserMedia({
  video: false,
  audio: true
}, function (stream) {
  $('body').append(
    $('<audio>').attr('autoplay', 'true').attr('src', webkitURL.createObjectURL(stream))
  );
});
Chrome on Android now properly supports getUserMedia. I suspect that this originally had something to do with the difference in sample rate between recording and playback (which exhibits the same issue on desktop Chrome). In any case, it all started working in the stable channel some time around February 2014.
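For reference, a modern, unprefixed version of the loopback test might look like the sketch below. Note that current autoplay policies require a user gesture before an AudioContext will produce sound:

```javascript
// Sketch: unprefixed loopback test - connect the microphone stream straight
// to the speakers. Call this from a click handler so the AudioContext is
// allowed to start under modern autoplay policies.
async function loopback() {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: false
  });
  const ctx = new AudioContext();
  await ctx.resume(); // no-op if already running after a user gesture
  const input = ctx.createMediaStreamSource(stream);
  input.connect(ctx.destination); // mic -> speakers (expect feedback!)
}
```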
