Change the VideoTrack of a MediaStream object - javascript

In a Nutshell: I'm trying to change the VideoTrack of a MediaStream object.
(Documentation: https://developer.mozilla.org/en-US/docs/WebRTC/MediaStream_API)
I have a MediaStream object __o_jsep_stream_audiovideo which is created by the sipml library.
__o_jsep_stream_audiovideo looks like this (screenshot of the inspected object omitted): it has one AudioTrack and one VideoTrack. At first the VideoTrack comes from the user's camera (e.g. label: "FaceTime Camera").
According to the Documentation:
A MediaStream consists of zero or more MediaStreamTrack objects, representing various audio or video tracks.
So we should be fine adding more Tracks to this Stream.
I'm trying to switch/exchange the VideoTrack with one from another stream. The other stream (streamB) originates from Chrome's screen-capture API (label: "Screen").
I tried:
__o_jsep_stream_audiovideo.addTrack(streamB.getVideoTracks()[0])
which doesn't seem to have any effect.
I also tried assigning the videoTracks directly (which was desperate, I know).
I must be missing something obvious. Could you point me in the right direction?
I'm running
Chrome (Version 34.0.1847.131) and
Canary (Version 36.0.1976.2 canary)
OSX 10.9.2

When we talk about changing the video track, there are two cases:
change the remote video track (what others can see from you)
Newer versions of WebRTC deprecate addStream/removeStream. The good news is that they introduce a new method, replaceTrack:
stream.getTracks().forEach(function(track) {
  // remote: for every active call, find the RTP sender carrying the same
  // kind of track (audio/video) and swap its track in place
  qcClient.calls.values().forEach(function(call) {
    var sender = call.pc.getSenders().find(function(s) {
      return s.track.kind == track.kind;
    });
    sender.replaceTrack(track);
  });
});
change your local display video (what you see of yourself)
It is better to just add a new video element (or use the existing one) and assign srcObject to the newly captured stream.
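For that local preview, here is a minimal sketch, assuming a video element with the made-up id "localVideo" exists in the page and using the modern getDisplayMedia API for the screen capture:
navigator.mediaDevices.getDisplayMedia({ video: true }).then(function(stream) {
  // "localVideo" is a hypothetical element id for this sketch
  var videoEl = document.getElementById('localVideo');
  videoEl.srcObject = stream; // assign the captured stream directly
  videoEl.play();
});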

Adding and removing tracks on a MediaStream object does not trigger renegotiation, and there are also issues in Chrome with a MediaStream having two tracks of the same type.
You should probably just add the separate MediaStream to the peer connection so that it fires a renegotiation and handles the streams. The track add/remove functionality in Chrome is very naive and not very granular, and you should move away from it as much as you can.
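A rough sketch of that approach (pc stands in for your existing RTCPeerConnection, and the signaling transport is left out):
pc.onnegotiationneeded = function() {
  // triggered by addTrack below; renegotiate with the remote peer
  pc.createOffer().then(function(offer) {
    return pc.setLocalDescription(offer);
  }).then(function() {
    // send pc.localDescription to the remote peer via your signaling channel
  });
};

streamB.getTracks().forEach(function(track) {
  pc.addTrack(track, streamB); // fires "negotiationneeded"
});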

Related

web worker not communicating with react component

I use audio-recorder-polyfill in my React project to make audio recording possible in Safari. The recording itself seems to take place, but no audio data ever becomes available: the "dataavailable" event never fires, and no data seems to be "compiled" after stopping the recording either.
// "recorder" is kept in module scope, shared by both methods
recordFunc() {
  navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    recorder = new MediaRecorder(stream);
    // Point the <audio> element at the result when recording finishes
    recorder.addEventListener('dataavailable', e => {
      this.audio.current.src = URL.createObjectURL(e.data);
    });
    // Start recording
    recorder.start();
  });
}

stopFunc() {
  // Stop recording
  recorder.stop();
  // Remove "recording" icon from browser tab
  recorder.stream.getTracks().forEach(i => i.stop());
}
There have been a number of similar issues posted on audio-recorder-polyfill's issue tracker.
Root cause
One of those issues, #4, is still open. Several comments on that issue hint that the root cause is that Safari cancels the AudioContext if it was not created in a handler for a user interaction (e.g. a click).
Possible solutions
You may be able to get it to work if you:
Do the initialisation inside a handler for user interaction (e.g. <button onclick="recordFunc()">); see the sketch after this list
Do not attempt to reuse the MediaStream returned from getUserMedia() for multiple recordings
Do not attempt more than 4 (or 6?) audio recordings on the same page (sources [1], [2] mention that Safari will block this)
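For the first point, a minimal sketch of creating the recorder inside a click handler ("recordButton" is a made-up element id):
document.getElementById('recordButton').addEventListener('click', () => {
  // Safari now sees the audio pipeline being set up during a user interaction
  navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    const recorder = new MediaRecorder(stream);
    recorder.addEventListener('dataavailable', e => {
      const url = URL.createObjectURL(e.data); // e.data is a Blob of audio
      console.log('recording available at', url);
    });
    recorder.start();
  });
});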
Alternative libraries
You might also be able to try the StereoAudioRecorder class from the RecordRTC package; it has more users (3K) but appears less maintained, and it might work.
Upcoming support
If you'd prefer to stick to the MediaRecorder API and the tips above don't work for you, the good news is that there is experimental support for MediaRecorder in Safari 12.4 and up (iOS and macOS), and it appears to be supported in the latest Safari Technology Preview.
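If you go that route, one way to prefer the native implementation and fall back to the polyfill only where needed (a sketch, assuming the polyfill's default export is the MediaRecorder replacement):
if (typeof window.MediaRecorder === 'undefined') {
  // no native support: register the polyfill instead
  import('audio-recorder-polyfill').then(module => {
    window.MediaRecorder = module.default;
  });
}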
See also
The comments in this issue may also be useful

Determine device ID of a camera with a specified facing mode

I am working on code to allow for an in-page select element to choose a camera. The default camera should be the "environment" camera while the rest should be listed after.
Using the following call I am able to stream video from an appropriate "environment"-facing camera:
navigator.mediaDevices.getUserMedia({ video: { facingMode: "environment" } }).then(function (stream) {
  // display stream on web page
  ...
});
Similarly, I can get the list of available devices using the following:
navigator.mediaDevices.enumerateDevices().then(
  devices => {
    // build list of options
  }
);
I store the deviceId for each option and use that to display the feed from that camera which works well.
However, the option selected by default is not necessarily the "environment" camera. And the stream object returned from getUserMedia doesn't seem to have an easy way to determine the deviceId of the device providing that stream. Nor can I seem to find any other way to determine the "environment"-facing camera.
Is this not possible or is there some kind of getDeviceIdForFacingMode function that I've just missed?
After digging through the objects a bit more I was ultimately able to find that the following works to get the deviceId from the stream:
stream.getVideoTracks()[0].getSettings().deviceId
I would assume that in other cases you may need to be careful about the [0] if for some reason your stream involves multiple video tracks, but for my purposes this worked well. In general I would expect you can get whatever information you need from stream.getVideoTracks()[i] (MediaStreamTrack) and stream.getVideoTracks()[i].getSettings() (MediaTrackSettings).
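Putting the two together, a sketch of defaulting an in-page camera list to the environment camera ("cameraSelect" is a made-up <select> id):
async function buildCameraList() {
  // open the environment camera first so we can learn its deviceId
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" }
  });
  const environmentId = stream.getVideoTracks()[0].getSettings().deviceId;

  const devices = await navigator.mediaDevices.enumerateDevices();
  const select = document.getElementById('cameraSelect');
  devices
    .filter(d => d.kind === 'videoinput')
    .forEach(d => {
      const option = document.createElement('option');
      option.value = d.deviceId;
      option.text = d.label || d.deviceId;
      option.selected = d.deviceId === environmentId; // default to environment
      select.appendChild(option);
    });
}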

AudioContext compromising app performance the more audio files I play

I'm developing an Electron app (JavaScript) for audio visualization. There is a Playlist() instance which receives the audio file paths the user wants to play. When the first audio file finishes, it plays the next one. So far so good. The app does intense computational work, extracting audio features from each channel, re-rendering canvases and animating plots. It does it beautifully.
The problem is: each time the app plays the next file, it gets slower, as if all the previous audio data were still held somewhere. I found the close() method of AudioContext in the documentation:
"The close() method of the AudioContext Interface closes the audio context, releasing any system audio resources that it uses."
"An AudioContext can now be explicitly closed, thereby releasing any hardware resources associated with the AudioContext. Without this, developers had to depend on garbage collection of the AudioContext to release hardware resources."
I also have found this example of closing and restarting audio contexts:
https://github.com/mdn/webaudio-examples/blob/master/audiocontext-states/index.html
https://mdn.github.io/webaudio-examples/audiocontext-states/
The problem is that I use audioContext.createMediaElementSource(HTMLelementID) and it doesn't allow me to restart everything by recreating all the nodes like in the example. A simplified version of what I did before is:
// note: this class shadows the built-in window.Audio constructor
class Audio {
  constructor(audioElementID, playlistObj) {
    this.audioContext = new AudioContext();
    this.audioElement = document.getElementById(audioElementID);
    this.track = this.audioContext.createMediaElementSource(this.audioElement);
    this.gainNode = this.audioContext.createGain();
    this.track.connect(this.gainNode);
    this.gainNode.connect(this.audioContext.destination);
    this.audioElement.addEventListener('ended', () => {
      // changes the src of the html element (audioElementID)
      // and sets this.audioElement.currentTime to 0
      playlistObj.playNextTrack();
    });
  }
  // everything is a property here for debugging reasons
}
const audio = new Audio('audioID', playlist); // playlist defined somewhere else
To use close() I had to change the code (just like the example: a function that recreates everything again):
class Audio {
  constructor(audioElementID, playlistObj) {
    this.createAudioContext = () => {
      this.audioContext = new AudioContext();
      this.audioElement = document.getElementById(audioElementID);
      this.track = this.audioContext.createMediaElementSource(this.audioElement);
      this.gainNode = this.audioContext.createGain();
      this.track.connect(this.gainNode);
      this.gainNode.connect(this.audioContext.destination);
      this.audioElement.addEventListener('ended', () => {
        // changes the src of the html element (audioElementID)
        // and sets this.audioElement.currentTime to 0
        playlistObj.playNextTrack();
      });
    };
    this.createAudioContext();
  }
}
and in playlist.playNextTrack() I pause the audioElement, call audio.audioContext.close(), wait for it (it's a promise), call audio.createAudioContext() to recreate everything, and play. The logic throws an error at this.track = this.audioContext.createMediaElementSource(this.audioElement) with:
"Failed to execute 'createMediaElementSource' on 'BaseAudioContext': HTMLMediaElement already connected previously to a different MediaElementSourceNode, at Audio.createAudioContext"
In the example, the audio source is just a random oscillator and not a mp3 audio file.
I'm really stuck here. Don't know what to do. I'm not even sure if AudioContext() really holds data from all the previous audio files, causing this performance problem. And if so, how could I reconnect the HTMLMediaElement to the new node that audio.createAudioContext() creates? I've already tried audio.track.disconnect(), but it doesn't work (as it shouldn't, because here I'm disconnecting track from gainNode). And audioElement doesn't have a disconnect() method, as it's just an HTML element.
Any idea?
UPDATE:
I got past the problem of recreating the audio context by deleting and recreating the html element. But the problem persists: the more audio files are played, the slower the app gets. More precisely: the more new AudioContext() instances are created, the slower it gets (even if I close the previous one).
I'm really stuck here. Don't know what to do. I'm not even sure if AudioContext() really holds data from all the audio files before causing this performance problem.
No, it's really unlikely this is the case. The AudioContext sets up things like the sample rate, output destination, and the graph. That's all.
The close() method of the AudioContext Interface closes the audio context, releasing any system audio resources that it uses.
You're misunderstanding what this means. Those "system audio resources" are the sound devices. While the AudioContext is running, there is an audio device requested. This is particularly meaningful in low power environments, like mobile. Another example would be Bluetooth. If the AudioContext is kept running, your Bluetooth headset may just stay on. If the AudioContext is allowed to close, then the Bluetooth headset may go to sleep.
And if so, how could I reconnect the HTMLMediaElement to a new node audio.createAudioContext() creates?
You don't. While it would be nice if the API supported this, it seems it doesn't. Simply create a new HTMLMediaElement.
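A sketch of that idea, reusing the audio/gainNode names from the question: keep one AudioContext alive and create a fresh element (plus a fresh MediaElementSource) per track, since an element can only ever be connected to one MediaElementSourceNode:
function playNext(src) {
  if (audio.audioElement) {
    audio.audioElement.remove(); // discard the already-connected element
  }
  const el = document.createElement('audio');
  el.src = src;
  document.body.appendChild(el);

  // brand-new element, so a new MediaElementSource is allowed
  const track = audio.audioContext.createMediaElementSource(el);
  track.connect(audio.gainNode); // reuse the existing gain -> destination chain
  audio.audioElement = el;
  el.play();
}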
What you should do is properly profile your application to figure out where the slowdown is occurring. Use your developer tools. Might be faster though just to start commenting out sections of things that are running. We certainly can't tell you where the problem is, specifically, from the code you've shown.

Node JS access audio interface

I want to send the audio signal coming from my audio interface (Focusrite Saffire) to my Node.js server. How should I go about doing this? The easiest way would be to access the audio interface from the browser (HTML5), like capturing microphone output with getUserMedia (https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia), but I couldn't find a way to access my audio interface through that API. Otherwise, I'm planning on creating a desktop application, but I don't know if there is a function/library that allows access to my USB-connected audio interface.
This probably has little to do with JavaScript or the MediaDevices API. On Linux, using Firefox, PulseAudio is required to interface with your audio hardware. Since your sound card is an input/output interface, you should be able to test it pretty easily by simply playing any sound file in the browser.
Most of PulseAudio configuration can be achieved using pavucontrol GUI. You should check the "configuration" tab and both "input device" & "output device" tabs to make sure your Focusrite is correctly set up and used as sound I/O.
Once this is done, you should be able to access an audio stream using the following (only available in a "secure context", i.e. localhost or served over HTTPS, as stated in the MDN page you mentioned):
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(function(stream) {
    // do whatever you want with this audio stream
  })
(code snippet taken from the MDN page about MediaDevices)
Sending audio to the server is a whole other story. Have you tried anything? Are you looking for a real-time communication with the server? If so, I'd start by having a look at the WebSockets API. The WebRTC docs might be worth reading too, but it is more oriented to client-to-client communications.
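For the real-time case, a minimal sketch of shipping encoded chunks to the server over a WebSocket (the endpoint URL is made up):
const socket = new WebSocket('wss://example.com/audio'); // hypothetical endpoint

navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
  const recorder = new MediaRecorder(stream);
  recorder.addEventListener('dataavailable', e => {
    if (socket.readyState === WebSocket.OPEN) {
      socket.send(e.data); // e.data is a Blob of encoded audio
    }
  });
  recorder.start(250); // emit a chunk roughly every 250 ms
});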
How to use exact device id
Use a media device constraint to pass the exact device ID.
A sample would be:
const preferedDeviceName = '<your_device_name>';
const deviceList = await navigator.mediaDevices.enumerateDevices();
const audio = deviceList.find((device) => device.kind === 'audioinput' && device.label === preferedDeviceName);
const { deviceId } = audio;
// use { deviceId: { exact: deviceId } } to force exactly this device
navigator.mediaDevices.getUserMedia({ audio: { deviceId } });
How to process and send to the backend
Possible duplicate of this.
If it's still not clear, follow this.

How to stop audio playback in Firefox & use source.connect() hack

I'm developing a webapp that (in part) records some audio using recorder.js, and sends it to a server. I'm trying to target Firefox, so I have to use this hack to keep the audio source from cutting off:
// Hack for a Firefox bug that stops input after a few seconds
window.source = audio_context.createMediaStreamSource(stream);
source.connect(audio_context.destination);
I think that this is causing audio to be played back through the computer speakers, but I'm not sure. I'm kind of a newbie when it comes to web audio. My goal is to eliminate the audio that is being played out of the speakers.
EDIT:
Here's a link to my JS file on Github: https://github.com/miller9904/Jonathan/blob/master/js/main.js#L87
You have to connect the source to the node through which you retrieve the data you are going to record. Replace this.node with whatever variable name you have assigned to the node used for processing.
window.source.connect(this.node);
//this.node.connect(this.context.destination);
Edit: I just checked, and connecting to the destination is not compulsory. Also make sure your node variable does not get garbage collected (which I am assuming is happening in your case, since recording stops after a few seconds).
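A sketch of pinning the nodes so they survive garbage collection (variable names are illustrative; recorder.js-style setups typically process samples through a ScriptProcessorNode):
// keep long-lived references on window so nothing is collected mid-recording
window.source = audio_context.createMediaStreamSource(stream);
window.processorNode = audio_context.createScriptProcessor(4096, 2, 2);
window.processorNode.onaudioprocess = function(e) {
  // hand e.inputBuffer samples to the recorder here
};
window.source.connect(window.processorNode);
// connecting to the destination is optional for recording, but some
// older browsers only run the processor while it is connected
window.processorNode.connect(audio_context.destination);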
