I've got a simple video stream working via getUserMedia, but I would like to handle the case when the webcam I'm streaming from becomes disconnected or unavailable. I've found the oninactive event on the stream object passed to the success callback. I would also like to restart the video stream when the exact same webcam/media device is plugged back in.
Code example:
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
navigator.getUserMedia(constraints, function successCallback(stream) {
    this.video.src = URL.createObjectURL(stream);
    stream.oninactive = function (error) {
        // This handler runs when the device becomes unavailable.
        this.onStreamInactive(error, stream);
    }.bind(this);
}.bind(this), function errorCallback() {});
Based on the example above, how can I:
Detect a recently connected media device?
Check whether it is the same device I was streaming from?
A better way would be to use MediaDevices.ondevicechange(), as mentioned in the other answer in this thread, but it is still behind a flag on Chrome. Instead of relying on ondevicechange() to enumerate devices, poll MediaDevices.enumerateDevices() at a regular interval when you start the call, and at the end of every poll interval compare the list of devices against the list from the previous poll. This way you know which devices were added or removed during the call.
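For illustration, here is a minimal sketch of that polling approach; the two-second interval and the use of deviceId as the comparison key are my own assumptions, not part of the original answer:

let knownIds = new Set();

async function pollDevices() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const currentIds = new Set(devices.map(d => d.deviceId));

  // Devices present now but not in the previous poll were just added.
  const added = devices.filter(d => !knownIds.has(d.deviceId));
  // Ids present in the previous poll but missing now were just removed.
  const removed = [...knownIds].filter(id => !currentIds.has(id));

  added.forEach(d => console.log('device added:', d.kind, d.label));
  removed.forEach(id => console.log('device removed:', id));

  knownIds = currentIds;
}

// Poll every 2 seconds while the call is active.
const pollTimer = setInterval(pollDevices, 2000);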
A little late to answer, but it looks like you can use MediaDevices.ondevicechange to attach an event handler, and then in the handler query MediaDevices.enumerateDevices() to get the full list. You can then inspect the list, identify the device that was recently added by comparing it against a cached list, and compare its properties against a record you kept of the current device's properties. The links have more thorough examples.
Adapted from the ondevicechange reference page
navigator.mediaDevices.ondevicechange = function(event) {
    navigator.mediaDevices.enumerateDevices()
        .then(function(devices) {
            devices.forEach(function(device) {
                console.log(device);
                // check if this is the device that was disconnected
            });
        });
};
Note that the type of the device objects returned by enumerateDevices (MediaDeviceInfo) is described here
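To address the original question's second part (is the newly connected device the same one?), here is a hedged sketch of the comparison step the answer describes; the cached list and the idea of matching on label are my own assumptions:

var previousDevices = [];   // refreshed after every devicechange
var activeDeviceLabel = ''; // label of the device you were streaming from

navigator.mediaDevices.ondevicechange = function() {
    navigator.mediaDevices.enumerateDevices().then(function(devices) {
        devices.forEach(function(device) {
            var isNew = !previousDevices.some(function(d) {
                return d.deviceId === device.deviceId;
            });
            if (isNew && device.label === activeDeviceLabel) {
                // The same webcam was plugged back in: restart the stream here.
            }
        });
        previousDevices = devices;
    });
};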
Browser Support
Support looks pretty patchy as of this writing. See this related question: Audio devices plugin and plugout event on chrome browser for further discussion, but the short story is that for Chrome you'll need to enable the "Experimental Web Platform features" flag.
Related
I'm trying to add WebRTC with simple-peer to my chat. Everything works, but I would like to add a screen-sharing option. For that I tried this:
$("#callScreenShare").click(async function(){
if(captureStream != null){
p.removeStream(captureStream)
p.addStream(videoStream)
captureStreamTrack.stop()
captureStreamTrack =captureStream= null
$("#callVideo")[0].srcObject = videoStream
$(this).text("screen_share")
}else{
captureStream = await navigator.mediaDevices.getDisplayMedia({video:true, audio:true})
captureStreamTrack = captureStream.getTracks()[0]
$("#callVideo")[0].srcObject = captureStream
p.removeStream(videoStream)
console.log(p)
p.addStream(captureStream)
$(this).text("stop_screen_share")
}
})
But it stops the camera, after that nothing happens, and my video stream on my peer's computer freezes. No errors, nothing but that.
I've put a console.log in the handler for the stream event. It fires the first time, but when I call the addStream method it doesn't fire again.
If someone could help me it would be really helpful.
What I do is replace the track instead of removing and adding the stream:
p.streams[0].getVideoTracks()[0].stop()
p.replaceTrack(p.streams[0].getVideoTracks()[0], captureStreamTrack, p.streams[0])
This replaces the stream's video track with the display's track.
simple-peer docs
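Adapted to the toggle in the question, a hedged sketch might look like this; the variable names follow the question, and I'm assuming p is a connected simple-peer instance that was created with the camera stream (videoStream):

let captureStream = null
const cameraTrack = videoStream.getVideoTracks()[0]

$("#callScreenShare").click(async function() {
    if (captureStream != null) {
        // Switch back to the camera: swap the screen track out for the camera track.
        const screenTrack = captureStream.getVideoTracks()[0]
        p.replaceTrack(screenTrack, cameraTrack, videoStream)
        screenTrack.stop()
        captureStream = null
        $("#callVideo")[0].srcObject = videoStream
        $(this).text("screen_share")
    } else {
        captureStream = await navigator.mediaDevices.getDisplayMedia({ video: true })
        // Swap the camera track out for the screen track on the existing stream.
        p.replaceTrack(cameraTrack, captureStream.getVideoTracks()[0], videoStream)
        $("#callVideo")[0].srcObject = captureStream
        $(this).text("stop_screen_share")
    }
})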
The below function will do the trick. Simply call the replaceTrack function, passing it the new track and the remote peer instance.
function replaceTrack(newTrack, recipientPeer) {
    recipientPeer.replaceTrack(
        recipientPeer.streams[0].getVideoTracks()[0],
        newTrack,
        recipientPeer.streams[0]
    )
}
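For example, inside an async click handler like the one in the question, the call would be (a sketch, assuming captureStream comes from getDisplayMedia):

const captureStream = await navigator.mediaDevices.getDisplayMedia({ video: true })
// Swap the remote peer's current video track for the screen-capture track.
replaceTrack(captureStream.getVideoTracks()[0], p)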
The following script reads the audio from the user's microphone and renders an oscilloscope on an HTML canvas.
The source is taken from an example on the Mozilla Developer Network: Visualizations with Web Audio API
And here is the fiddle: http://jsfiddle.net/b7j8pktp/
mozGetUserMedia
(note: the code has no fork mechanism for different browsers: it works only with Firefox)
It works fine for a few seconds and then abruptly stops rendering.
Whereas this works completely stably: http://mdn.github.io/voice-change-o-matic/
The problem can be reduced to the following code. The microphone activation icon (next to the address bar in Firefox) disappears after about 5 seconds:
navigator.mozGetUserMedia({audio: true},
function() {}, function() {} );
(http://jsfiddle.net/b7j8pktp/2/)
This is a known bug in Firefox. Just take the stream from the getUserMedia call and hook it up to the window like so:
navigator.mozGetUserMedia({audio: true}, function(stream) {
window.stream = stream;
// rest of the code
}, function err() {
// handle error
});
Hopefully we can get it fixed soon. The problem is that we fail to add a reference to the stream when we make the AudioContext.createMediaStreamSource call, so nothing references the stream anymore once the getUserMedia callback returns, and it gets collected by the cycle collector the next time it runs, that is, a couple of seconds later.
You can follow along in https://bugzilla.mozilla.org/show_bug.cgi?id=934512.
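Applied to the oscilloscope example, a minimal sketch of the workaround looks like this; keeping a reference to the source node as well is my own belt-and-braces addition, not part of the answer:

var audioCtx = new AudioContext();

navigator.mozGetUserMedia({ audio: true }, function(stream) {
    // Keep the stream (and, to be safe, the source node) reachable from
    // the window so the cycle collector cannot reclaim them mid-render.
    window.stream = stream;
    window.source = audioCtx.createMediaStreamSource(stream);

    var analyser = audioCtx.createAnalyser();
    window.source.connect(analyser);
    // ...draw the oscilloscope from the analyser as in the MDN example.
}, function(err) {
    console.error(err);
});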
I am using getUserMedia to get access to the webcam. I have a function which toggles the video on and off by doing the following:
var videoTracks = this.stream.getVideoTracks();
if (videoTracks.length === 0) {
    trace('No local video available.');
    return;
}
trace('Toggling video mute state.');
for (var i = 0; i < videoTracks.length; ++i) {
    videoTracks[i].enabled = !videoTracks[i].enabled;
}
trace('Video ' + (videoTracks[0].enabled ? 'unmuted.' : 'muted.'));
How can I receive an event when the value of enabled is changed? I tried to use Object.observe, but it doesn't work.
As far as I can tell there currently is no event fired/callback invoked when the enabled property changes.
From here:
Also, there is no "onmuted" and "onunmuted" event defined or fired in the WebRTC native implementations.
You might have to build this mechanism yourself:
Keep in mind that you're disabling a media track locally; it will not fire any event on the target user's side. If you disable a video track, it will cause a "blank video" on the target user's side.
You can manually fire events like "onmuted" or "onmediatrackdisabled" over the socket that was used for signaling. You can send/emit messages like:
yourSignalingSocket.send({
    isMediaStreamTrackDisabled: true,
    mediaStreamLabel: stream.label
});
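On the receiving side you would then translate that message into your own event. A sketch; the message shape follows the snippet above, and handleRemoteTrackDisabled is a hypothetical handler of yours:

yourSignalingSocket.onmessage = function(message) {
    if (message.isMediaStreamTrackDisabled) {
        // Find the remote stream this label refers to and update the UI,
        // e.g. show a "muted" overlay instead of the blank video.
        handleRemoteTrackDisabled(message.mediaStreamLabel);
    }
};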
According to the spec this should be part of the MediaStreamTrack interface eventually:
onmute of type EventHandler
This event handler, of type mute, must be supported by all objects implementing the MediaStreamTrack interface.
I tried assigning a function to a track's onmute in Chrome (43) but it never got called (looks like this is not implemented yet).
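For completeness, attaching the handlers would look like the sketch below in a browser that implements them; note that, per the spec, these fire when the source mutes the track, not when you flip enabled yourself:

var track = stream.getVideoTracks()[0];
track.onmute = function() {
    console.log('track muted');
};
track.onunmute = function() {
    console.log('track unmuted');
};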
I want to play a single audio file (mp3) and my only problem is media length.
It works just fine on Android 5.0.1, but on 4.4.2/4.4.4 it doesn't work!
With the native implementation I get a value, but it's incorrect, and if I use the Media plugin API (from PhoneGap) then media.duration is undefined and media.getDuration() returns -1.
I'm only reading the duration after the loadedmetadata event has fired, so that shouldn't be the problem.
The native implementation is done through js with new Audio(), no DOM element involved.
The file is stored on sdcard, and src looks like file:///storage/sdcard/audio.mp3. Everything else regarding html5 audio api works, but duration.
Are there any solutions to fix this?
Thanks to @tawpie's answer I figured out a workaround for this issue.
That setInterval made me think about my custom seekbar, which is updated (correctly) while the audio is playing; in calculating its width I was using the audio duration value, which means the duration becomes usable after the media file's play method has fired.
The problem is that the loadedmetadata event doesn't return the correct duration value (in some browsers, like the Android WebView), but after the audio has played for at least one second the duration is updated and you can use it.
So you can forget about the loadedmetadata event, jump straight to the canplay event, and from there do something like this:
var myAudio = new Audio();
myAudio.src = 'file:///mnt/sdcard/audio.mp3';
myAudio.load();
myAudio.correctDuration = null;

myAudio.addEventListener('canplay', function() {
    myAudio.play();
    myAudio.muted = true;
    setTimeout(function() {
        myAudio.pause();
        myAudio.currentTime = 0;
        myAudio.muted = false;
        myAudio.correctDuration = myAudio.duration;
    }, 1000);
});
...of course, you can use volume = 0.0/1.0 instead of mute.
Another method would be to create a helper function (in my case, an AngularJS service) which takes your src value, uses the code above, and returns the correctDuration. This one is preferred if you have listeners on the audio timeupdate event that change the DOM.
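A hedged sketch of such a helper, written as a plain Promise-returning function rather than an AngularJS service; the one-second muted playback is the same assumption as in the code above:

function getCorrectDuration(src) {
    return new Promise(function(resolve, reject) {
        var probe = new Audio();
        probe.src = src;
        probe.load();
        probe.addEventListener('canplay', function onCanPlay() {
            probe.removeEventListener('canplay', onCanPlay);
            probe.muted = true;
            probe.play();
            setTimeout(function() {
                probe.pause();
                resolve(probe.duration); // updated after ~1s of muted playback
            }, 1000);
        });
        probe.addEventListener('error', reject);
    });
}

// Usage:
getCorrectDuration('file:///mnt/sdcard/audio.mp3').then(function(duration) {
    console.log(duration);
});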
The Media plugin works exactly the same way: if the audio hasn't played for at least one second, you cannot use the getDuration() method or the duration property inside an interval/timeout wrapper to get the correct duration.
I think the video element behaves similarly. I'll test it these days.
Hope this workaround helps!
Try Media.node.duration. That works on Windows... For what it's worth, as long as getDuration is called in an interval, I don't have any problems on Android 4.4. But I'm using just the Media plugin, new Media(src, onSuccess, onError, playbackStatus), and not the HTML5 player.
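A hedged sketch of that interval approach, assuming the standard Cordova Media plugin API (getDuration() returns -1 until the duration is known; the callbacks are the ones named above):

var media = new Media('file:///mnt/sdcard/audio.mp3', onSuccess, onError, playbackStatus);
media.play();

// Poll until the plugin reports a real duration.
var durationTimer = setInterval(function() {
    var duration = media.getDuration();
    if (duration > -1) {
        clearInterval(durationTimer);
        console.log('duration: ' + duration + ' seconds');
    }
}, 100);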
Hardcoded values. It's a pain, but you can do this if the files are local.
I ran into an issue where Chrome was reporting different duration values than other browsers, and this is where we landed. I know it's not really a solution, but it works.
Or you can use some external process to generate a JSON file of duration times and reference those values at runtime.
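For illustration, a tiny sketch of that lookup; the file name and the shape of durations.json are assumptions:

// durations.json, generated offline (e.g. with ffprobe):
// { "audio.mp3": 187.4, "intro.mp3": 12.9 }
fetch('durations.json')
    .then(function(res) { return res.json(); })
    .then(function(durations) {
        console.log('duration from lookup: ' + durations['audio.mp3']);
    });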
For the sake of reference:
audio.addEventListener('durationchange', function(e) {
    console.log(e.target.duration); // first 0, then the real duration
});
worked for me.
Credit: this Stack Overflow question
I have been searching high and low for some sort of documentation on the user-agent string that the built-in WeChat browser produces.
I do a lot of really specific browser detection and I cannot find anything remotely related to the UA string that WeChat passes to the website.
It will be something along the lines of this:
Mozilla/5.0 (iPhone; CPU OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5355d Safari/8536.25
Does anyone know if there is a way to differentiate between something like Safari on iOS and WeChat's built-in browser on iOS? (or if that's even possible)
Any suggestions will be much appreciated!
I've edited my answer since I found there is a JS API for Weixin (WeChat):
http://mp.weixin.qq.com/qa/index.php?qa=search&q=weixinjsbridge
Long story short, you just add this to your JS:
document.addEventListener('WeixinJSBridgeReady', function onBridgeReady() {
// bridge initialized, meaning we're in WeChat, not stand-alone browser...
}, false);
There's also an API to share on Moments and to share with a specific friend, and to get a callback when the content has been shared successfully.
P.S.
I just figured out that on iOS WeChat the bridge is initialized much faster than on Android, and then this callback never gets called because the listener was added after the bridge was initialized.
So to complete answer, here is how to properly do it:
// When your webapp is loaded, before adding a listener for the Weixin JS bridge, check if it's already initialized:
var timeoutID = 0;
if (typeof WeixinJSBridge !== "undefined")
{
    // WeChat JS bridge already initialized. Wonderful.
}
else
{
    // Set a timeout of, say, 5 seconds to wait for the bridge:
    timeoutID = window.setTimeout(WeChatBridgeTimeout, 5000);
    // Now add a listener for the bridge:
    document.addEventListener('WeixinJSBridgeReady', WeChatBridgeReady, false);
}

// Now in the bridge timeout:
function WeChatBridgeTimeout()
{
    // Just to be sure the bridge was not initialized right before the line where we added the listener (since it's a separate process from JS), check for it again:
    if (typeof WeixinJSBridge !== "undefined")
    {
        // WeChat JS bridge already initialized. Wonderful.
    }
    else
    {
        // Nope... if it's not initialized by now, we're not in WeChat.
    }
}

// And in the event handler:
function WeChatBridgeReady()
{
    // Remove the listener timeout.
    window.clearTimeout(timeoutID);
    // WeChat JS bridge initialized.
}
It's very simple. Just check the user-agent like:
if (req.headers['user-agent'].indexOf('MicroMessenger') !== -1) {
    // we are in the WeChat browser
    // your code here
}
Sometimes the WeChat browser evilly blocks App Store links. That's when you need to redirect users somewhere else, or lead them to open your page in a friendlier browser. :)
As of now (WeChat 6.x), the UA string of WeChat's built-in browser on iOS contains:
... MicroMessenger/6.1.4 ..., while Safari's contains:
... Safari/600.1.4
so the following code works on both Android and iOS:
var isWeChat = /micromessenger/i.test(navigator.userAgent);
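Usage is then a simple branch (a sketch):

if (isWeChat) {
    // Running inside WeChat's built-in browser: e.g. show an
    // "open in your browser" hint instead of an App Store link.
} else {
    // Regular browser (Safari, Chrome, ...).
}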