I've seen a number of posts discussing removal of automatic audio processing in various browsers, usually in connection with WebRTC. The JavaScript is along the lines of:
navigator.mediaDevices.getUserMedia({
    audio: {
        autoGainControl: false,
        channelCount: 2,
        echoCancellation: false,
        latency: 0,
        noiseSuppression: false,
        sampleRate: 48000,
        sampleSize: 16,
        volume: 1.0
    }
});
I've set up WebRTC live streaming from my home studio to my website and need to implement this, but I'm unclear on where in the signal chain the constraints are placed.
If I am generating the audio in my studio and viewers are watching/listening on my website in a given browser, it seems the proper place to drop the code would be in the HTML/JavaScript on the viewer end, not mine. But if the user is simply observing (not generating any audio of their own), a call to
navigator.mediaDevices.getUserMedia
on their end would appear to be inert.
What's the proper method for implementing a JavaScript snippet on the browser end to remove audio processing? Should this be done through the Web Audio API instead?
These media constraints are audio capture constraints: they apply where the audio is recorded, i.e. on the source end (your studio), not on the viewer end.
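For instance, a minimal sketch of the sending (studio) side might look like the following, where pc is assumed to be an RTCPeerConnection you have already set up:

// Capture with processing disabled, then hand the raw track to the peer connection.
navigator.mediaDevices.getUserMedia({
    audio: {
        autoGainControl: false,
        echoCancellation: false,
        noiseSuppression: false,
        channelCount: 2,
        sampleRate: 48000
    }
}).then(stream => {
    // The unprocessed capture is what gets encoded and sent to viewers.
    stream.getAudioTracks().forEach(track => pc.addTrack(track, stream));
}).catch(err => console.error('getUserMedia failed:', err));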
Related
Trying to lower the audio quality of WebRTC to make a project a bit more realistic for what it's emulating. I've tried similar things that I've seen posted, with no luck. I think what I'm trying to do is decrease the allowed bit rate and sample rate.
I tried (taken from: Is it really possible for webRTC to stream high quality audio without noise?)
navigator.mediaDevices.getUserMedia({
    audio: {
        autoGainControl: false,
        channelCount: 2,
        echoCancellation: false,
        latency: 0,
        noiseSuppression: false,
        sampleRate: 8000,
        sampleSize: 16,
        volume: 1.0
    }
});
but it doesn't seem to make the audio sound bad. The goal is to get it to sound more like AMBE audio.
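In practice, the sampleRate constraint is only a hint to the capture device, and browsers typically ignore values they cannot or will not deliver rather than resampling for you. One possible way to deliberately degrade the audio (a rough sketch, not taken from the linked answer) is to run the captured track through a Web Audio graph that does crude sample-and-hold downsampling, then send the resulting stream instead:

navigator.mediaDevices.getUserMedia({ audio: true }).then(micStream => {
    const ctx = new AudioContext();
    const source = ctx.createMediaStreamSource(micStream);
    const crusher = ctx.createScriptProcessor(4096, 1, 1);  // deprecated, but widely supported
    const step = 6;  // hold every 6th sample: roughly 8 kHz from a 48 kHz context
    crusher.onaudioprocess = e => {
        const input = e.inputBuffer.getChannelData(0);
        const output = e.outputBuffer.getChannelData(0);
        let held = 0;
        for (let i = 0; i < input.length; i++) {
            if (i % step === 0) held = input[i];  // crude sample-and-hold downsampling
            output[i] = held;
        }
    };
    const sink = ctx.createMediaStreamDestination();
    source.connect(crusher);
    crusher.connect(sink);
    // Add sink.stream's track(s) to the RTCPeerConnection instead of micStream's.
});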
I use audio-recorder-polyfill in my React project to make audio recording possible in Safari. It seems to work in getting the recording to take place; however, no audio data becomes available. The "dataavailable" event never fires, and no data seems to be compiled after stopping the recording either.
recordFunc() {
    navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
        recorder = new MediaRecorder(stream);
        // Set record to <audio> when recording will be finished
        recorder.addEventListener('dataavailable', e => {
            this.audio.current.src = URL.createObjectURL(e.data);
        });
        // Start recording
        recorder.start();
    });
}

stopFunc() {
    // Stop recording
    recorder.stop();
    // Remove "recording" icon from browser tab
    recorder.stream.getTracks().forEach(i => i.stop());
}
There have been a number of similar issues posted on audio-recorder-polyfill's issue tracker.
Root cause
One of those issues, #4, is still open. Several comments on it hint that the root issue is that Safari cancels the AudioContext if it was not created in a handler for a user interaction (e.g. a click).
Possible solutions
You may be able to get it to work if you:
Do the initialisation inside a handler for user interaction (e.g. <button onclick="recordFunc()">); see the sketch after this list
Do not attempt to reuse the MediaStream returned from getUserMedia() for multiple recordings
Do not attempt more than 4 (or 6?) audio recordings on the same page (sources [1], [2] mention that Safari will block this)
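A minimal sketch of the first point, assuming the polyfill is registered as shown in its README and that audioElement is a placeholder for the <audio> element you play back into:

import AudioRecorder from 'audio-recorder-polyfill';
window.MediaRecorder = AudioRecorder;  // register the polyfill, per its README

let recorder;

// Called directly from a user interaction, e.g. <button onClick={startRecording}>
function startRecording() {
    navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
        recorder = new MediaRecorder(stream);
        recorder.addEventListener('dataavailable', e => {
            audioElement.src = URL.createObjectURL(e.data);
        });
        recorder.start();
    });
}

function stopRecording() {
    recorder.stop();
    // Stop the tracks so the browser drops the "recording" indicator.
    recorder.stream.getTracks().forEach(track => track.stop());
}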
Alternative libraries
You might also be able to try the StereoAudioRecorder class from the RecordRTC package, which has more users (3K) but appears less actively maintained; it might work for you.
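If you go that route, a rough usage sketch based on RecordRTC's documented options (double-check against the package's README; audioElement is again a placeholder) might look like:

import RecordRTC from 'recordrtc';

navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    const recorder = RecordRTC(stream, {
        type: 'audio',
        mimeType: 'audio/wav',
        recorderType: RecordRTC.StereoAudioRecorder
    });
    recorder.startRecording();

    // Later, e.g. from a stop button:
    // recorder.stopRecording(() => {
    //     audioElement.src = URL.createObjectURL(recorder.getBlob());
    // });
});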
Upcoming support
If you'd prefer to stick to the MediaRecorder API and the tips above don't work for you, the good news is that there is experimental support for MediaRecorder in Safari 12.4 and up (iOS and macOS), and it appears to be supported in the latest Safari Technology Preview.
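If you want to take advantage of that, one option (a sketch, assuming a bundler that supports dynamic import()) is to load the polyfill only when the browser has no native MediaRecorder:

// Prefer the native implementation; fall back to the polyfill otherwise.
if (typeof window.MediaRecorder === 'undefined') {
    import('audio-recorder-polyfill').then(module => {
        window.MediaRecorder = module.default;
    });
}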
See also
The comments in this issue may also be useful
I am running audio-only sessions using the constraints:
var constraints = {
    audio: {
        mandatory: {
            echoCancellation: false
        },
        optional: [{
            sourceId: audioSource
        }]
    },
    video: false
};
I am noticing that in a very small number of sessions I receive a TrackStartError from the getUserMedia request. I cannot see any correlation with browser, browser version, OS, or the devices available. Some computers get this error continually, some get it once and then a subsequent getUserMedia request works fine, and some never experience it at all.
Is TrackStartError fully documented anywhere? I have seen some issues around mandatory audio flags, but echoCancellation does not seem to suffer from this problem.
TrackStartError is a non-spec Chrome-specific version of NotReadableError:
Although the user granted permission to use the matching devices, a hardware error occurred at the operating system, browser, or Web page level which prevented access to the device.
Seems fitting, given that your constraints are non-spec and Chrome-specific as well. Instead, try:
var constraints = {
    audio: {
        echoCancellation: { exact: false },
        deviceId: audioSource
    }
};
I highly recommend the official adapter.js polyfill to deal with such browser differences.
Some systems (like Windows) give exclusive access to hardware devices, which can cause this error if other applications are currently using a mic or camera. It can also be a bug or driver issue.
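As a rough sketch (not part of the answer above), you could catch the error and treat the device-busy case separately from other failures:

navigator.mediaDevices.getUserMedia(constraints)
    .then(stream => {
        // Proceed with the audio session.
    })
    .catch(err => {
        if (err.name === 'NotReadableError' || err.name === 'TrackStartError') {
            // The device is likely held by another application, or there is a driver problem.
            // Ask the user to close other apps using the microphone, then retry.
        } else {
            console.error('getUserMedia failed:', err);
        }
    });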
I am trying to create a live-stream radio website for various radio stations. Many radio stations use RTMP for their live streaming, so I used jwplayer as my default player. However, it doesn't seem to work. Here is my code:
<script type="text/javascript">
    jwplayer("container").setup({
        flashplayer: "jwplayer.flash",
        file: "rtmp://liveRadio.onlinehorizons.net/shabawreada",
        height: 270,
        width: 480,
        autostart: true
    });
</script>
I am confused about what to put in the file parameter and whether I should use the streamer parameter. The above code does not work.
I've tested this stream with rtmpdump and there are two issues:
1) The address of the stream is rtmp://liveRadio.onlinehorizons.net/shabawreada/livestream
2) I've only used JW player once, but I very much doubt this will work. Some RTMP streams are not protected in any way and anyone can connect to them as they please, like you're trying to do here. However, others are (somewhat) protected, and this is one of them.
During the RTMP handshake, this stream, like many others, requires two additional parameters: one is the address of the SWF player from which the RTMP handshake was initiated, and the other is the address of the HTML page where the player is being used. Unfortunately for you, JWPlayer doesn't let you set these fields arbitrarily (see "Configuration Options"), which means you cannot use it for your current purpose.
You could look for a player that does support this, but I wouldn't bet on finding one. Of course, this operation can easily be done with a desktop application.
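For example, rtmpdump lets you supply both values; the SWF and page URLs below are placeholders you would replace with the ones the station's own player actually uses:

rtmpdump -r "rtmp://liveRadio.onlinehorizons.net/shabawreada/livestream" \
    --swfUrl "http://example.com/player.swf" \
    --pageUrl "http://example.com/player.html" \
    --live -o stream.flv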
Try
flashplayer: "jwplayer.flash.swf",
instead.
You also need to specify type for files with no file extension, e.g. type: 'flv'.
It needs the .swf extension, and that should work.
I am trying to use media streams with getUserMedia on Chrome on Android. To test, I worked up the script below, which simply connects the input stream to the output. This code works as expected on Chrome under Windows, but on Android I do not hear anything. The user is prompted to allow microphone access, but no audio comes out of the speaker, handset speaker, or headphone jack.
navigator.webkitGetUserMedia({
    video: false,
    audio: true
}, function (stream) {
    var audioContext = new webkitAudioContext();
    var input = audioContext.createMediaStreamSource(stream);
    input.connect(audioContext.destination);
});
In addition, the feedback beeps when rolling the volume up and down do not sound, as if Chrome were playing back audio to the system.
Is it true that this functionality isn't supported on Chrome for Android yet? The following questions are along similar lines, but neither really has a definitive answer or explanation.
HTML5 audio recording not woorking in Google Nexus
detecting support for getUserMedia on Android browser fails
As I am new to using getUserMedia, I wanted to make sure there wasn't something I was doing in my code that could break compatibility.
I should also note that this problem doesn't seem to apply to getUserMedia itself. It is possible to use a getUserMedia stream in an <audio> tag, as demonstrated by this code (which uses jQuery):
navigator.webkitGetUserMedia({
    video: false,
    audio: true
}, function (stream) {
    $('body').append(
        $('<audio>').attr('autoplay', 'true').attr('src', webkitURL.createObjectURL(stream))
    );
});
Chrome on Android now properly supports getUserMedia. I suspect this originally had something to do with the difference in sample rate between recording and playback (which exhibits the same issue on desktop Chrome). In any case, it all started working on the stable channel sometime around February 2014.
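For reference, a minimal sketch of the same mic-to-speaker loopback using the unprefixed APIs in current browsers (not the code from the original question):

// Route microphone input straight to the speakers with the unprefixed APIs.
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(stream => {
        const audioContext = new AudioContext();
        const input = audioContext.createMediaStreamSource(stream);
        input.connect(audioContext.destination);
        // Some browsers keep the context suspended until a user gesture:
        return audioContext.resume();
    })
    .catch(err => console.error('getUserMedia failed:', err));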