Modifying SDP constraints for WebRTC in Firefox - javascript

I am trying to reduce the bitrate of an RTCPeerConnection in Firefox. I have successfully been able to do so in Chrome.
I am modifying the SDP string that is automatically generated by Firefox after calling the createOffer method. My callback modifies the SDP and then tries to set it on the generated RTCSessionDescription (whose sdp property is just a DOMString according to the protocol spec). In Chrome, I can modify that SDP string and then set it (done within a callback passed to createOffer):
desc.sdp = TransFormSDP(desc.sdp);
connection.setLocalDescription(desc);
However, this does not seem to work in Firefox: it will not update the SDP after my assignment and continues to use the string that was generated by the createOffer method.
Specifically, I am trying to add an fmtp: max-fr=15; max-fs=400; restriction on the VP8 codec being offered, and to limit the bandwidth by adding a b=AS:512 line in the video media section of the SDP.
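For reference, my transform looks roughly like this (a simplified sketch of my actual code; the VP8 payload type is taken from the rtpmap line rather than hard-coded):
function TransFormSDP(sdp) {
    // insert a bandwidth cap after the c= line of the video section
    sdp = sdp.replace(/(m=video[\s\S]*?c=IN[^\r\n]*\r\n)/, '$1b=AS:512\r\n');
    // add frame-rate/frame-size constraints for whichever payload type is VP8
    sdp = sdp.replace(/(a=rtpmap:(\d+) VP8\/90000\r\n)/,
                      '$1a=fmtp:$2 max-fr=15;max-fs=400\r\n');
    return sdp;
}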
Does Firefox not allow you to modify the SDP after it has been automatically generated? Or does Firefox disallow specific SDP options that are part of SDP's standardization (like bandwidth limits and codec settings)?
EDIT: Seriously, Firefox??

Well, it seems that for now it is not supported; at least I am assuming so, since there has yet to be a response to this bug. Guess I am stuck using Chrome for now.

Actually, the bitrate of the codec encoding is available through the API; however, it doesn't work very well in Firefox.
The proper API should be the one described in the specs https://www.w3.org/TR/webrtc/#dom-rtcrtpencodingparameters
RTCRtpSender.setParameters has been supported in Firefox since version 64, but as of v66 it is not supported correctly: the bitrate setting works, but fps doesn't.
A snippet using that API to modify the bitrate:
const sender = peerConnection.getSenders().find(s => s.track && s.track.kind === 'video');
const params = sender.getParameters();
// modify the encodings from getParameters() in place; replacing the
// encodings array wholesale can throw in spec-compliant browsers
if (!params.encodings || !params.encodings.length) params.encodings = [{}];
params.encodings[0].maxBitrate = 50 * 1000; // bits per second
sender.setParameters(params);
However, changing the bitrate through the API has only a temporary effect in Firefox: the bitrate goes back to the default after a few seconds. The reason is not clear; it might be connected with the degradationPreference property, since it acts differently for balanced, maintain-framerate, and maintain-resolution. In Chrome it works normally.
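If you want to experiment with that, degradationPreference can be set through the same parameters object (a sketch; support varies by browser and the property has moved around in the spec):
const params = sender.getParameters();
params.degradationPreference = 'maintain-framerate';
sender.setParameters(params);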

Related

How to restrict fetching hls segment files on pause in Safari (MacOS and iOS)

We generate HLS files with a segment size of 3 seconds. We use hls.js for non-Safari browsers; Safari has native HLS support.
In the hls.js world we were able to restrict how far ahead we buffer using maxMaxBufferLength, whereas we are unable to find a similar solution for Safari. In Safari, after loading the video m3u8, even if I pause after a second, I can see in the network tab that all the segments are being fetched, which I would like to restrict.
I won't be able to share our examples due to company policies, but a public example file by hls.js is attached below:
https://test-streams.mux.dev/x36xhzz/url_6/193039199_mp4_h264_aac_hq_7.m3u8
Try opening this URL in Safari and pausing the video; you'll see that it continues to download. Whereas if you open the same one using https://hls-js.netlify.app/demo/ with maxMaxBufferLength: 5, it won't happen.
Is there an ffmpeg option to make the buffering controlled, or some solution we should apply for Safari by listening to events?
I found the same question here: https://developer.apple.com/forums/thread/121074
Checking out that resource, it highlights the fact that:
hls.js tries to buffer up to a maximum number of bytes (60 MB by default) rather than to buffer up to a maximum nb of seconds.
this is to mimic the browser behavior (the buffer eviction algorithm is starting after the browser detects that video buffer size reaches a limit in bytes).
It is worth checking out lines 175 and 176 of that script file, which show 8 × maxBufferSize acting as the maxBufLen. You might think about changing this.
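If forking hls.js is not desirable, both knobs are exposed as configuration (a sketch; the values are only illustrative):
const hls = new Hls({
    maxBufferSize: 10 * 1000 * 1000, // target buffer size in bytes (default is 60 MB)
    maxMaxBufferLength: 5            // hard cap on buffered seconds
});
hls.loadSource('https://test-streams.mux.dev/x36xhzz/url_6/193039199_mp4_h264_aac_hq_7.m3u8');
hls.attachMedia(document.querySelector('video'));
This only helps where hls.js runs, of course; it does not change Safari's native HLS behaviour.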
Try removing the src attribute from the video element:
video.removeAttribute('src');
You may also need to call the video element's load() method to avoid a browser crash.
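A sketch of how that workaround might be wired up (assuming a <video id="player"> element; resuming is kept deliberately simple):
const video = document.getElementById('player');
let savedSrc = '';
let savedTime = 0;

video.addEventListener('pause', () => {
    savedSrc = video.src;
    savedTime = video.currentTime;
    video.removeAttribute('src'); // stop Safari fetching further segments
    video.load();                 // reset the element so the download actually aborts
});

function resume() {
    video.src = savedSrc;
    video.currentTime = savedTime;
    video.play();
}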

Audio/Video syncing problem while recording and playing webm chunks in Chrome on Windows 10

Problem
I am recording webm chunks with MediaRecorder in Chrome 83 on Windows 10 and sending them to another computer. These chunks are played back in another Chrome using Media Source Extensions (MSE).
sourceBuffer.appendBuffer(webmChunkData);
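For context, the receiving side looks roughly like this (a sketch; socket stands in for our transport, and queueing appends while the SourceBuffer is updating is omitted):
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', () => {
    // the mime type must match what MediaRecorder produced
    const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8,opus"');
    socket.onmessage = (e) => sourceBuffer.appendBuffer(new Uint8Array(e.data));
});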
Everything works fine for the first 1 to 1.20 seconds, but after that the audio/video syncing problem starts. The gap between audio and video is minimal at first, but it grows as time passes.
The weird thing is that we can see different behaviour on different browsers. Chrome's version is 83+ on almost all operating systems.
Could the camera be the problem?
I think the camera is not the problem, as I dual-boot Fedora and Windows on the same machine, and the webm chunks play fine under Fedora.
Could the sample rate be the problem?
I doubt it. When I compare the sample rate used by the browsers while playing, chrome://media-internals shows 48000 both with and without the syncing problem.
Warning message from Chrome
The Chrome instance that has the sync problem also shows a warning message in chrome://media-internals.
Questions:
Why is there an audio/video syncing problem when both recording and playback are performed in Chrome on Windows 10?
How can I eliminate this syncing problem?
I believe I have a workaround for you. The problem seems specific to Chrome + MediaRecorder + VP8, and has nothing to do with MSE or the platform. I have the same issues on Chrome 98 on macOS 12.2.1. Additionally, if you decrease the .start(timeslice) argument, the issue appears more rapidly and more severely.
However... when I use VP9 the problem does not manifest!
I'm using code like this:
function supportedMimeTypes(): string[] {
    // From least desirable to most desirable
    return [
        // most compatible, but chrome creates bad chunks
        'video/webm;codecs=vp8,opus',
        // works in chrome, firefox falls back to vp8
        'video/webm;codecs=vp9,opus'
    ].filter(
        (m) => MediaRecorder.isTypeSupported(m)
    );
}

const mimeType = supportedMimeTypes().pop();
if (!mimeType) throw new Error("Could not find a supported mime-type");

const recorder = new MediaRecorder(stream, {
    // be sure to use a mimeType with a specific `codecs=` as otherwise
    // chrome completely ignores it and uses video/x-matroska!
    // https://stackoverflow.com/questions/64233494/mediarecorder-does-not-produce-a-valid-webm-file
    mimeType,
});
recorder.start(1000);
The resulting VP9 appears to play in Firefox, and a VP8 recorded in Firefox plays well in Chrome too.

WebRTC: use of getStats()

I'm trying to get stats of a webRTC app to measure audio/video streaming bandwidth.
I checked this question and I found it very useful; however, when I try to use it I get
TypeError: Not enough arguments to RTCPeerConnection.getStats.
I think that is because something in WebRTC changed in 2016 and there are now MediaStreamTracks; however, I built the project without MediaStreamTracks and I don't know how to change this function to get it to work.
Do you have any ideas?
Thanks for your support!
UPDATE:
My call is
peer.pc.onaddstream = function(event) {
    peer.remoteVideoEl.setAttribute("id", event.stream.id);
    attachMediaStream(peer.remoteVideoEl, event.stream);
    remoteVideosContainer.appendChild(peer.remoteVideoEl);
    getStats(peer.pc);
};
and getStats() is identical to the one in this link, chapter 7.
It has been some time since I used WebRTC. The problem then was that Chrome and Firefox implemented it differently (I believe they still do).
Firefox:
the WebRTC stats tab is about:webrtc
peerConnection.getStats(null).then(function(stats){ ... }); // returns a promise
Chrome:
the WebRTC stats tab is chrome://webrtc-internals/
peerConnection.getStats(function(stats){ ... }); // pass a callback function
One way to circumvent these cross-browser issues is to use adapter.js.
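For reference, a sketch of the spec-compliant promise-based form (which adapter.js shims onto older Chromes), logging the bandwidth-related fields of each outbound RTP stream:
async function logOutboundStats(pc) {
    const report = await pc.getStats(null);
    report.forEach((stat) => {
        if (stat.type === 'outbound-rtp') {
            console.log(stat.kind, 'bytesSent:', stat.bytesSent,
                        'packetsSent:', stat.packetsSent);
        }
    });
}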

Change the VideoTrack of a MediaStream object

In a Nutshell: I'm trying to change the VideoTrack of a MediaStream object.
(Documentation: https://developer.mozilla.org/en-US/docs/WebRTC/MediaStream_API)
I have a MediaStream object __o_jsep_stream_audiovideo which is created by the sipml library.
__o_jsep_stream_audiovideo has one AudioTrack and one VideoTrack. At first the VideoTrack comes from the user's camera (e.g. label: "FaceTime Camera").
According to the Documentation:
A MediaStream consists of zero or more MediaStreamTrack objects, representing various audio or video tracks.
So we should be fine adding more Tracks to this Stream.
I'm trying to switch/exchange the VideoTrack with the one from another stream. The other stream (streamB) originates from Chrome's screen-capture API (label: "Screen").
I tried:
__o_jsep_stream_audiovideo.addTrack(streamB.getVideoTracks()[0])
which doesn't seem to have any effect.
I also tried assigning the video tracks directly (which was desperate, I know).
I must be missing something obvious. Could you point me in the right direction?
I'm running
Chrome (Version 34.0.1847.131) and
Canary (Version 36.0.1976.2 canary)
on OS X 10.9.2
When we talk about changing the video track, there are two areas:
1. Changing the remote video track (what the others can see of you).
Newer versions of WebRTC deprecate addStream/removeStream; the good news is that they introduce a new interface, replaceTrack:
stream.getTracks().forEach(function(track) {
    // remote
    qcClient.calls.values().forEach(function(call) {
        var sender = call.pc.getSenders().find(function(s) {
            return s.track.kind == track.kind;
        });
        sender.replaceTrack(track);
    });
});
2. Changing your local display video (what you see of yourself).
It is better to just add a new video element (or reuse the existing one) and assign srcObject to the newly captured stream, as sketched below.
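A minimal sketch of that (assuming a <video id="localPreview"> element and the modern getDisplayMedia API):
async function showScreenPreview() {
    const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
    document.getElementById('localPreview').srcObject = screenStream;
}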
Adding and removing tracks on a MediaStream object does not signal a renegotiation, and there are also issues in Chrome with a MediaStream having two tracks of the same type.
You should probably just add the separate MediaStream to the peer connection so that it can fire a renegotiation and handle the streams. The track add/remove functionality in Chrome is very naive and not very granular, and you should move away from it as much as you can.
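A sketch of that approach with the modern track-based API (pc and signalingChannel are stand-ins for your own objects):
screenStream.getTracks().forEach((track) => pc.addTrack(track, screenStream));

// adding tracks fires negotiationneeded, so renegotiate there
pc.onnegotiationneeded = async () => {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signalingChannel.send(JSON.stringify({ sdp: pc.localDescription }));
};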

Why aren't Safari or Firefox able to process audio data from MediaElementSource?

Neither Safari nor Firefox is able to process audio data from a MediaElementSource using the Web Audio API.
var audioContext, audioScript, audioSource,
    result = document.createElement('h3'),
    output = document.createElement('span'),
    mp3 = '//www.jonathancoulton.com/wp-content/uploads/encodes/Smoking_Monkey/mp3/09_First_of_May_mp3_3a69021.mp3',
    ogg = '//upload.wikimedia.org/wikipedia/en/4/45/ACDC_-_Back_In_Black-sample.ogg',
    gotData = false, data, audio = new Audio();

function connect() {
    audioContext = window.AudioContext ? new AudioContext() : new webkitAudioContext();
    audioSource = audioContext.createMediaElementSource( audio );
    audioScript = audioContext.createScriptProcessor( 2048 );

    audioSource.connect( audioScript );
    audioSource.connect( audioContext.destination );
    audioScript.connect( audioContext.destination );

    audioScript.addEventListener('audioprocess', function(e){
        if ((data = e.inputBuffer.getChannelData(0)[0]*3)) {
            output.innerHTML = Math.abs(data).toFixed(3);
            if (!gotData) gotData = true;
        }
    }, false);
}

(function setup(){
    audio.volume = 1/3;
    audio.controls = true;
    audio.autoplay = true;
    audio.src = audio.canPlayType('audio/mpeg') ? mp3 : ogg;
    audio.addEventListener('canplay', connect);
    result.innerHTML = 'Channel Data: ';
    output.innerHTML = '0.000';
    document.body.appendChild(result).appendChild(output);
    document.body.appendChild(audio);
})();
Are there any plans to patch this in the near future? Or is there some work-around that would still provide the audio controls to the user?
For Apple, is this something that could be fixed in the WebKit nightlies, or will we have to wait until the Safari 8.0 release to get HTML5 <audio> playing nicely with the Web Audio API? The Web Audio API has existed in Safari since at least version 6.0, and I initially posted this question long before Safari 7.0 was released. Is there a reason this wasn't fixed already? Will it ever be fixed?
For Mozilla, I know you're still in the process of switching over from the old Audio Data API, but is this a known issue with your Web Audio implementation and is it going to be fixed before the next release of Firefox?
This answer is quoted almost exactly from my answer to a related question: Firefox 25 and AudioContext createJavaScriptNote not a function
Firefox does support MediaElementSource if the media adheres to the Same-Origin Policy, however there is no error produced by Firefox when attempting to use media from a remote origin.
The specification is not really specific about it (pun intended), but I've been told that this is an intended behavior, and the issue is actually with Chrome… It's the Blink implementations (Chrome, Opera) that need to be updated to require CORS.
MediaElementSource Node and Cross-Origin Media Resources:
From: Robert O'Callahan <robert@ocallahan.org>
Date: Tue, 23 Jul 2013 16:30:00 +1200
To: "public-audio@w3.org" <public-audio@w3.org>
HTML media elements can play media resources from any origin. When an
element plays a media resource from an origin different from the page's
origin, we must prevent page script from being able to read the contents of
the media (e.g. extract video frames or audio samples). In particular we
should prevent ScriptProcessorNodes from getting access to the media's
audio samples. We should also prevent information about samples leaking in other
ways (e.g. timing channel attacks). Currently the Web Audio spec says
nothing about this.
I think we should solve this by preventing any non-same-origin data from
entering Web Audio. That will minimize the attack surface and the impact on
Web Audio.
My proposal is to make MediaElementAudioSourceNode convert data coming from
a non-same origin stream to silence.
If this proposal makes it into the spec, it will be nearly impossible for a developer to even realize why his MediaElementSource is not working. As it stands right now, calling createMediaElementSource() on an <audio> element in Firefox 26 actually stops the <audio> controls from working at all and throws no errors.
What dangerous things can you do with the audio/video data from a remote origin? The general idea is that without applying the Same-Origin Policy to a MediaElementSource node, some malicious javascript could access media that only the user should have access to (session, vpn, local server, network drives) and send its contents—or some representation of it—to an attacker.
The HTML5 media elements don't have these restrictions by default. You can include remote media across all browsers by using the <audio>, <img>, or <video> elements. It's only when you want to manipulate or extract the data from these remote resources that the Same-Origin Policy comes into play.
[It's] for the same reason that you cannot dump image data cross-origin via <canvas>: media may contain sensitive information and therefore allowing rogue sites to dump and re-route content is a security issue. - #nmaier
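In practice, the standard way to make remote media readable is to serve it with CORS headers and mark the element as anonymous; a sketch (the URL is hypothetical and must actually send Access-Control-Allow-Origin):
var audio = new Audio();
audio.crossOrigin = 'anonymous'; // request the file with CORS
audio.src = 'https://example.com/cors-enabled.mp3';
var source = audioContext.createMediaElementSource(audio);
source.connect(audioContext.destination);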
createMediaElementSource() does not work properly in Safari 8.0.5 (and possibly earlier) but is fixed in Webkit Nightly as of 10600.5.17, r183978
