I have a problem: after subscribing to a remote stream I call setVideoProfile(), and after that there is no sound.
this.client.on('stream-subscribed', (event) => {
  let remoteStream = event.stream;
  remoteStream.play(bindTag, {fit: 'contain'});
  remoteStream.setAudioVolume(100);
  remoteStream.setVideoProfile('120p_1');
});
When I comment out remoteStream.setVideoProfile('120p_1');, the sound works.
I use AgoraRTC v2.8.0.
Has anyone encountered this?
setAudioVolume accepts only a number in the range [0, 100], both inclusive.
Setting it to 0 mutes the audio and setting it to 100 gives maximum volume.
You can't pass an arbitrary string like '120p_1'; that's why you do not hear any sound.
Update (from comments below)
You can't set video profiles on remote streams; you can set them only on local streams. Instead, enable dual streams and set a fallback mode if you want the user to receive a low-quality version of the remote video:
client.enableDualStream(function() {
  console.log("Enable dual stream success!");
}, function(err) {
  console.log(err);
});

// Configuration for the receiver. When the network condition is poor, receive audio only.
client.setStreamFallbackOption(remoteStream, 2);
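If you also want to explicitly request the low-quality version (rather than only falling back when the network degrades), the dual-stream API exposes setRemoteVideoStreamType; a minimal sketch, assuming the remoteStream from your stream-subscribed handler:

// Sketch: 0 = high-quality stream, 1 = low-quality stream
client.setRemoteVideoStreamType(remoteStream, 1);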
See the full documentation:
video fallback https://docs.agora.io/en/Interactive%20Broadcast/fallback_web?platform=Web
dual streams https://docs.agora.io/en/Interactive%20Broadcast/API%20Reference/web/interfaces/agorartc.client.html#enabledualstream
I'm building a low-latency sampler in the browser (using JavaScript, WASM, etc.).
How do I choose, from JavaScript, a specific audio output device in Chrome or Firefox?
I see there is an Audio Output API, but there are not many examples (and, by the way, navigator.mediaDevices.selectAudioOutput is undefined on my Chrome 109).
For example, how to make Chrome use Asio4All as main output device?
(Note: using an ASIO device such as the free Asio4All driver can make the latency drop from 30 milliseconds to 5 milliseconds, even on an old computer).
There is also the Audio Output Devices API, which can be used to achieve similar functionality.
// First create a new audio element
var audio = new Audio("https://samplelib.com/lib/preview/mp3/sample-3s.mp3");

// Get the list of available audio output devices
navigator.mediaDevices.enumerateDevices()
  .then(function(devices) {
    // Filter the devices to get only the audio output ones
    var audioOutputDevices = devices.filter(function(device) {
      return device.kind === "audiooutput";
    });

    // Log the devices to the console
    console.log(audioOutputDevices);

    // If there is at least one audio output device, use the first one as the output device
    if (audioOutputDevices.length > 0) {
      // Set the sink ID of the audio element to the device ID of the first audio output device
      audio.setSinkId(audioOutputDevices[0].deviceId)
        .then(function() {
          // Play the audio
          audio.play();
        })
        .catch(function(error) {
          // Handle any errors
          console.error(error);
        });
    }
  });
The audioOutputDevices array can then be used to choose between the various output devices.
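If you want a specific device such as ASIO4ALL rather than just the first entry, one option is to match on the device label; a rough sketch building on the code above (labels are usually only populated after the user has granted media permission, and the regex here is just an illustrative guess at the label text):

var target = audioOutputDevices.find(function(device) {
  return /asio4all/i.test(device.label); // look for a label mentioning ASIO4ALL
});
if (target) {
  audio.setSinkId(target.deviceId).then(function() {
    audio.play();
  });
}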
My team is adapting the sipml5 library to create an HTML5 softphone for use in our organization. The full repository is here: https://github.com/L1kMakes/sipml5-ng . We have the code working well; audio and video calls work flawlessly. In the original code we forked from (which dates from around 2012), screen sharing was accomplished with a browser plugin, but HTML5 and WebRTC now allow this to be done with just JavaScript.
I am having difficulty adapting the code to accommodate this. I am starting with the code here on line 828: https://github.com/L1kMakes/sipml5-ng/blob/master/src/tinyMEDIA/src/tmedia_session_jsep.js This works, though without audio. That makes sense, as the only possible audio stream from a screen share is the screen audio, not the mic audio. I am attempting to initialize an audio stream from getUserMedia, grab a video stream from getDisplayMedia, and present them to the client as a single MediaStream. Here's my adapted code:
if ( this.e_type == tmedia_type_e.SCREEN_SHARE ) {
    // Plugin-less screen share using WebRTC requires "getDisplayMedia" instead of "getUserMedia"
    // Because of this, audio constraints become limited, and we have to use async to deal with
    // the promise variable for the mediastream. This is a change since Chrome 71. We are able
    // to use the .then aspect of the promise to call a second mediaStream, then attach the audio
    // from that to the video of our second screenshare mediaStream, enabling plugin-less screen
    // sharing with audio.
    let o_stream = null;
    let o_streamAudio = null;
    let o_streamVideo = null;
    let o_streamAudioTrack = null;
    let o_streamVideoTrack = null;
    try {
        navigator.mediaDevices.getDisplayMedia(
            {
                audio: false,
                video: !!( this.e_type.i_id & tmedia_type_e.VIDEO.i_id ) ? o_video_constraints : false
            }
        ).then(o_streamVideo => {
            o_streamVideoTrack = o_streamVideo.getVideoTracks()[0];
            navigator.mediaDevices.getUserMedia(
                {
                    audio: o_audio_constraints,
                    video: false
                }
            ).then(o_streamAudio => {
                o_streamAudioTrack = o_streamAudio.getAudioTracks()[0];
                o_stream = new MediaStream( [ o_streamVideoTrack, o_streamAudioTrack ] );
                tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
            });
        });
    } catch ( s_error ) {
        tmedia_session_jsep01.onGetUserMediaError(s_error, This);
    }
} else {
    try {
        navigator.mediaDevices.getUserMedia(
            {
                audio: (this.e_type == tmedia_type_e.SCREEN_SHARE) ? false : !!(this.e_type.i_id & tmedia_type_e.AUDIO.i_id) ? o_audio_constraints : false,
                video: !!(this.e_type.i_id & tmedia_type_e.VIDEO.i_id) ? o_video_constraints : false // "SCREEN_SHARE" contains "VIDEO" flag -> (VIDEO & SCREEN_SHARE) = VIDEO
            }
        ).then(o_stream => {
            tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
        });
    } catch ( s_error ) {
        tmedia_session_jsep01.onGetUserMediaError(s_error, This);
    }
}
My understanding is that o_stream should represent the resolved MediaStream tracks, not a promise, when doing a screen share. On the other end we are using the client "MicroSIP." When making a video call, I get my video preview locally in our web app as soon as the call is placed; when the call is answered, the MicroSIP client shows a green square for a second, then resolves to my video. When I make a screen share call, my local web app sees the local preview of the screen share, but upon answering the call the MicroSIP client just gets a green square and never shows the actual screen share.
The video constraints for both are the same. If I add debugging output to describe what is actually in the media streams, they appear identical as far as I can tell. I made a test video call and a test screen share call, captured debug logs from each, and compared them side by side in Notepad++; everything appears to be identical save for the explicit debug describing the traversal down the permission request tree with "getUserMedia" and "getDisplayMedia." I can't really post the debug logs here, as cleaning them of information about my organization would leave them pretty barren. Apart from the extra debug output on the "getDisplayMedia" call before "getUserMedia", timestamps, and unique IDs related to individual calls, the log files are identical.
I am wondering if the media streams are not resolving from their promises before the "then" completes, but asynchronous JavaScript and promises are still a bit over my head. I do not believe I should convert this function to async, but I have nothing else to debug here; the MediaStream is working, as I can see it locally, but I'm stumped on figuring out what is going on with the remote send.
The solution was... nothing; the code was fine. It turns out the recipient SIP client we were using had an issue where it just aborts if it receives video larger than 640x480.
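If you ever need to work around a receiver like that instead of replacing it, one option would be to cap the capture size in the getDisplayMedia constraints; a minimal sketch (the 640x480 limit is just the threshold mentioned above, not something from the sipml5 code):

navigator.mediaDevices.getDisplayMedia({
    audio: false,
    video: {
        width: { max: 640 },   // keep the captured frames small enough for the picky client
        height: { max: 480 }
    }
}).then(o_streamVideo => {
    // hand the stream to the rest of the session setup as before
});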
It's possible there's no solution to this, but I thought I'd ask anyway. Sometimes a video is really quiet, and if I turn my computer's volume up accordingly, then all my other sounds become way too loud as a result. It would be nice to be able to boost the volume above maximum.
A Google search turned up nothing at all, not even results related to videojs. For some videos my Mac is almost at max volume just to hear the speech clearly, so it would not be feasible to start with everything at a lower volume and adjust accordingly.
I tried with:
var video = document.getElementById("Video1");
video.volume = 1.0;
Setting it to anything above 1.0 makes the video fail to play at all:
var video = document.getElementById("Video1");
video.volume = 1.4; // 2.0, etc.
Based on: http://cwestblog.com/2017/08/17/html5-getting-more-volume-from-the-web-audio-api/
You can adjust the gain by using the Web Audio API:
function amplifyMedia(mediaElem, multiplier) {
  var context = new (window.AudioContext || window.webkitAudioContext)(),
    result = {
      context: context,
      source: context.createMediaElementSource(mediaElem),
      gain: context.createGain(),
      media: mediaElem,
      amplify: function(multiplier) {
        result.gain.gain.value = multiplier;
      },
      getAmpLevel: function() {
        return result.gain.gain.value;
      }
    };
  result.source.connect(result.gain);
  result.gain.connect(context.destination);
  result.amplify(multiplier);
  return result;
}
amplifyMedia(document.getElementById('myVideo'), 1.4);
The multiplier you pass to the function works on the same scale as the video volume, with 1 being 100% volume, but in this case you can pass a higher number.
I can't post a working demo or JSFiddle because the Web Audio API requires a source from the same origin (or one that is CORS-compatible). You can see the error in the console: https://jsfiddle.net/yuriy636/41vrx1z7/1/
MediaElementAudioSource outputs zeroes due to CORS access restrictions
But I have tested it locally and it works as intended.
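If the media lives on another origin that does send CORS headers, the usual way around this restriction is to request it with the crossorigin attribute before wiring it into the Web Audio graph; a small sketch, assuming the server actually allows cross-origin access:

var video = document.getElementById('myVideo');
video.crossOrigin = 'anonymous'; // must be set before the source loads (or call video.load() afterwards)
amplifyMedia(video, 1.4);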
If you have access to the source files, rather than trying to boost on the fly using JavaScript (for which the Web Audio API answer from @yuriy636 is the best solution), you can process the video locally using something like ffmpeg.
ffmpeg -i input.mp4 -filter:a "volume=1.5" output.mp4
This applies a filter to the input.mp4 file that adjusts the volume to 1.5x the input and creates a new file called output.mp4.
You can also set a decibel level:
ffmpeg -i input.mp4 -filter:a "volume=10dB" output.mp4
or review the instructions for normalizing audio, etc.
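For example, ffmpeg's loudnorm filter normalizes loudness to a standard target instead of applying a fixed multiplier (shown here with its default settings):

ffmpeg -i input.mp4 -filter:a loudnorm output.mp4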
Everything works fine (createOffer, createAnswer, iceCandidates, ...), but then the incoming remoteStream has two tracks: the audio track, which works, and the video track, which does not work and has readyState: "muted".
If I do createOffer on page load and then, when starting the call, do createOffer again with the same peerConnection, the video also displays correctly (but then I get "Cannot create offer in state have-local-offer" in Firefox).
Any ideas what the problem could be? (The code is too complex to show here.)
Can you see the local video on both sides?
-> On a single PC, only one browser can access the camera at a time (either Chrome or Firefox).
-> Try calling between two different machines, or Chrome-to-Chrome or Firefox-to-Firefox.
"Cannot create offer in state have-local-offer"
It means you already created an offer and are trying to create another one without setting the remote answer.
Calling createOffer again is not a good idea. Make sure you create the offer in the following order (synchronously):
After receiving the stream in the gUM callback, add it to the peerConnection.
After adding the stream, create the offer; in the case of an answer, set the remote offer before creating the answer.
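A rough sketch of that order, using the modern promise-based API (the variable names and signaling step are illustrative, not from the question's code):

const pc = new RTCPeerConnection();

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(stream => {
    // 1. Add the local tracks to the peer connection first
    stream.getTracks().forEach(track => pc.addTrack(track, stream));
    // 2. Only then create the offer and set it as the local description
    return pc.createOffer();
  })
  .then(offer => pc.setLocalDescription(offer))
  .then(() => {
    // 3. Send pc.localDescription to the remote peer over your signaling channel.
    // On the answering side: setRemoteDescription(offer) first, then createAnswer().
  })
  .catch(err => console.error(err));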
I was having this issue while preparing the MediaStream in an iOS app. It turned out I wasn't passing the correct RTCMediaConstraints.
The problem was resolved after I switched to [RTCMediaConstraints defaultConstraints].
For example:
- (RTCVideoTrack *)createLocalVideoTrack {
    RTCVideoTrack *localVideoTrack = nil;
    RTCMediaConstraints *mediaConstraints = [RTCMediaConstraints defaultConstraints];
    RTCAVFoundationVideoSource *source =
        [[self peerConnectionFactory] avFoundationVideoSourceWithConstraints:mediaConstraints];
    localVideoTrack =
        [[self peerConnectionFactory] videoTrackWithSource:source
                                                   trackId:kARDVideoTrackId];
    return localVideoTrack;
}
I want to change from an audio/video stream to a "screensharing" stream:
peerConnection.removeStream(streamA) // __o_j_sep... in Screenshots below
peerConnection.addStream(streamB) // SSTREAM in Screenshots below
streamA is a video/audio stream coming from my camera and microphone.
streamB is the screencapture I get from my extension.
They are both MediaStream objects that look like this (screenshots omitted; see remark *1 below).
But if I remove streamA from the peerConnection and addStream(streamB) as above, nothing seems to happen.
The following works as expected (the stream on both ends is removed and re-added)
peerConnection.removeStream(streamA) // __o_j_sep...
peerConnection.addStream(streamA) // __o_j_sep...
More Details
I have found this example, which does "the reverse" (switching from screen capture to audio/video with the camera), but can't spot a significant difference.
The peerConnection RTCPeerConnection object is actually created by the SIPML library (source code available here), and I access it like this:
var peerConnection = stack.o_stack.o_layer_dialog.ao_dialogs[1].o_msession_mgr.ao_sessions[0].o_pc
(Yes, this does not look right, but there is no official way to get access to the peer connection; see the discussion here and here.)
Originally I tried to just (ex)change the videoTracks of streamA with the videoTrack of streamB; see the question here. It was suggested that I should try to renegotiate the peer connection (by removing/adding streams), because addTrack does not trigger a re-negotiation.
I've also asked for help here but the maintainer seems very busy and didn't have a chance to respond yet.
* 1 Remark: Why does streamB not have a videoTracks property? The stream plays in an HTML <video> element and seems to "work". Here is how I get it:
navigator.webkitGetUserMedia({
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: streamId,
            maxWidth: window.screen.width,
            maxHeight: window.screen.height
            //, maxFrameRate: 3
        }
    }
},
// success callback
function(localMediaStream) {
    SSTREAM = localMediaStream; //streamB
},
// fail callback
function(error) {
    console.log(error);
});
It also seems to have a videoTrack (screenshot omitted).
I'm running:
OS X 10.9.3
Chrome Version 35.0.1916.153
To answer your first question: when you modify the MediaStream in an active peerconnection, the peerconnection object fires an onnegotiationneeded event. You need to handle that event and re-exchange your SDPs. The main reason for this is so that both parties know what streams are being sent between them. When the SDPs are exchanged, the MediaStream ID is included, and if there is a new stream with a new ID (even with all other things being equal), a re-negotiation must take place.
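A minimal sketch of such a handler, using the callback-style API of that Chrome era (the logError helper and signaling step are placeholders, not part of the SIPML library):

peerConnection.onnegotiationneeded = function() {
    peerConnection.createOffer(function(offer) {
        peerConnection.setLocalDescription(offer, function() {
            // send peerConnection.localDescription to the remote peer
            // over your signaling channel and wait for the new answer
        }, logError);
    }, logError);
};

function logError(error) {
    console.log(error);
}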
For your second question (about SSTREAM): it does indeed contain video tracks, but there is no videoTracks attribute on webkitMediaStreams. You can grab tracks via their ID, however.
Since there is the possibility of having numerous tracks for each media type, there is no single attribute for a video track or audio track, but instead an array of them. The .getVideoTracks() call returns an array of the current video tracks, so you could grab a particular video track by index, e.g. .getVideoTracks()[0].
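For completeness, grabbing a track by its ID might look roughly like this (SSTREAM is the stream from the question; getTrackById is the standard MediaStream method):

var videoTracks = SSTREAM.getVideoTracks();            // array of video tracks
var firstTrack = videoTracks[0];                       // grab by index
var sameTrack = SSTREAM.getTrackById(firstTrack.id);   // or grab by ID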
I do something similar: on clicking a button, I remove the active stream and add the other one.
This is the way I do it, and it works perfectly for me:
// Stop and remove the currently active (camera) stream
_this.rtc.localstream.stop();
_this.rtc.pc.removeStream(_this.rtc.localstream);

// Audio-only stream (Chrome doesn't allow audio together with the 'screen' source)
var constraints_audio = {
    audio: true
};
gotStream = function (localstream_aud) {
    _this.rtc.localstream_aud = localstream_aud;
    _this.rtc.mediaConstraints = constraints_audio;
    _this.rtc.createOffer();
};
getUserMedia(constraints_audio, gotStream);

// Screen-capture stream (video only)
var constraints_screen = {
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'screen'
        }
    }
};
gotStream = function (localstream) {
    _this.rtc.localstream = localstream;
    _this.rtc.mediaConstraints = constraints_screen;
    _this.rtc.createStream();
    _this.rtc.createOffer();
};
getUserMedia(constraints_screen, gotStream);
Chrome doesn't allow audio along with the 'screen' source, so I create a separate stream for it.
You will need to do the opposite in order to switch back to your older video stream, or in fact to any other stream you want.
Hope this helps