In a Chrome extension, I want to use the desktopCapture API to capture only the window audio; I do not want the video stream.
But the streamId always turns out to be undefined when I try to use it as shown below.
chrome.desktopCapture.chooseDesktopMedia(['audio'], (streamId, options) => {
  // retrieve MediaStream using streamId and the navigator API
});
Please let me know what I am doing wrong, or what approach I should take to capture browser audio.
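For reference, the step that the comment in the snippet refers to usually follows Chrome's documented pattern of passing the streamId back through getUserMedia's chromeMediaSource constraints. A minimal sketch (the source list and constraint shape here are assumptions, not a verified fix for the undefined streamId):
chrome.desktopCapture.chooseDesktopMedia(['screen', 'window', 'tab', 'audio'], (streamId) => {
  if (!streamId) {
    // the user cancelled the picker or the request was rejected
    return;
  }
  navigator.mediaDevices.getUserMedia({
    audio: {
      mandatory: { chromeMediaSource: 'desktop', chromeMediaSourceId: streamId }
    },
    video: {
      mandatory: { chromeMediaSource: 'desktop', chromeMediaSourceId: streamId }
    }
  }).then((stream) => {
    // drop the video tracks if only the audio is needed
    stream.getVideoTracks().forEach((track) => {
      track.stop();
      stream.removeTrack(track);
    });
  });
});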
I am working on code that lets an in-page select element choose a camera. The default camera should be the "environment"-facing one, while the rest should be listed after it.
Using the following call I am able to stream video from an appropriate "environment"-facing camera:
navigator.mediaDevices.getUserMedia({ video: { facingMode: "environment" } }).then(function (stream) {
  // display stream on web page
  ...
});
Similarly, I can get the list of available devices using the following:
navigator.mediaDevices.enumerateDevices().then(
  devices => {
    // build list of options
  }
);
I store the deviceId for each option and use it to display the feed from that camera, which works well.
However, the option selected by default is not necessarily the "environment" camera. And the stream object returned from getUserMedia doesn't seem to have an easy way to determine the deviceId of the device providing that stream. Nor can I seem to find any other way to determine the "environment"-facing camera.
Is this not possible or is there some kind of getDeviceIdForFacingMode function that I've just missed?
After digging through the objects a bit more I was ultimately able to find that the following works to get the deviceId from the stream:
stream.getVideoTracks()[0].getSettings().deviceId
I would assume that in other cases you may need to be careful about the [0] if for some reason your stream involves multiple video tracks, but for my purposes this worked well. In general I would expect you can get whatever information you need from stream.getVideoTracks()[i] (a MediaStreamTrack) and stream.getVideoTracks()[i].getSettings() (a MediaTrackSettings).
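Putting the pieces together, a minimal sketch of pre-selecting the environment-facing camera in the list (the <select id="cameraSelect"> element here is hypothetical):
navigator.mediaDevices.getUserMedia({ video: { facingMode: "environment" } })
  .then(async (stream) => {
    // deviceId of the camera actually chosen for the environment-facing request
    const environmentId = stream.getVideoTracks()[0].getSettings().deviceId;
    const devices = await navigator.mediaDevices.enumerateDevices();
    const select = document.getElementById("cameraSelect"); // hypothetical element
    devices
      .filter((device) => device.kind === "videoinput")
      .forEach((device) => {
        const option = document.createElement("option");
        option.value = device.deviceId;
        option.text = device.label || device.deviceId;
        option.selected = device.deviceId === environmentId;
        select.appendChild(option);
      });
  });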
I want to send the audio signal coming from my audio interface (Focusrite Saffire) to my Node.js server. How should I go about doing this? The easiest way would be to access the audio interface from the browser (HTML5), like capturing microphone output with getUserMedia (https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia), but I couldn't find a way to access my audio interface through that API. Otherwise, I'm planning on creating a desktop application, but I don't know if there is a function/library that allows access to my USB-connected audio interface.
This probably has little to do with JavaScript or the MediaDevices API. On Linux, using Firefox, PulseAudio is required to interface with your audio hardware. Since your sound card is an input/output interface, you should be able to test it pretty easily by simply playing any sound file in the browser.
Most of the PulseAudio configuration can be done through the pavucontrol GUI. You should check the "configuration" tab and both the "input device" and "output device" tabs to make sure your Focusrite is correctly set up and used as the sound I/O.
Once this is done, you should be able to access an audio stream using the following (only available in a "secure context", i.e. localhost or a page served over HTTPS, as stated in the MDN page you mentioned):
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(function(stream) {
    // do whatever you want with this audio stream
  });
(code snippet taken from the MDN page about MediaDevices)
Sending the audio to the server is a whole other story. Have you tried anything? Are you looking for real-time communication with the server? If so, I'd start by having a look at the WebSockets API. The WebRTC docs might be worth reading too, but WebRTC is more oriented toward client-to-client communication.
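If hard real-time is not required, one common sketch is to encode the captured audio with MediaRecorder and push the chunks over a WebSocket; the endpoint URL and the 250 ms chunk interval below are assumptions:
const socket = new WebSocket("wss://example.com/audio"); // hypothetical endpoint

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const recorder = new MediaRecorder(stream, { mimeType: "audio/webm" });
  recorder.ondataavailable = (event) => {
    // each chunk is a Blob of encoded audio
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data);
    }
  };
  recorder.start(250); // emit a chunk roughly every 250 ms
});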
How to use exact device id
Use a media device constraint to pass the exact device id.
A sample would be:
const preferredDeviceName = '<your_device_name>';
const devices = await navigator.mediaDevices.enumerateDevices();
const audio = devices.find((device) => device.kind === 'audioinput' && device.label === preferredDeviceName);
const { deviceId } = audio;
const stream = await navigator.mediaDevices.getUserMedia({ audio: { deviceId: { exact: deviceId } } });
How to process and send to the backend
Possible duplicate of this
If it is still not clear, follow this.
I am unable to write the code for this myself, but I was hoping someone could point me in the right direction toward code that would do it. I am good with HTML & CSS, and a complete novice at JS.
What I need is to be able to speak into my microphone on a single-page website and have it stream the audio right back. That way I can have a Chromecast casting my voice to the TV while displaying the website's content.
I hope someone could help me out! I found this JS snippet here: Record audio and play it afterwards
but I need to be able to "stream" it right as I speak.
Cheers!
The Web Audio API can help you here. MDN has an example on their page about sending audio input to the audio output. Here is their sample code:
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  .then(function(stream) {
    let audioCtx = new AudioContext();
    let source = audioCtx.createMediaStreamSource(stream);
    source.connect(audioCtx.destination);
  })
  .catch(function(err) {
    // Handle getUserMedia() error
  });
You will need to attach that function to a user input event, such as clicking a button.
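For example, a sketch that starts the monitoring on a button click (the <button id="startButton"> element is hypothetical):
document.getElementById("startButton").addEventListener("click", function () {
  navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(function (stream) {
      // route the microphone input straight to the speakers
      const audioCtx = new AudioContext();
      audioCtx.createMediaStreamSource(stream).connect(audioCtx.destination);
    })
    .catch(function (err) {
      console.error("getUserMedia failed:", err);
    });
});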
In a Nutshell: I'm trying to change the VideoTrack of a MediaStream object.
(Documentation: https://developer.mozilla.org/en-US/docs/WebRTC/MediaStream_API)
I have a MediaStream object __o_jsep_stream_audiovideo which is created by the sipml library.
__o_jsep_stream_audiovideo looks like this:
So it has one AudioTrack and one VideoTrack. At first the VideoTrack comes from the user's camera (e.g. label: "FaceTime Camera").
According to the Documentation:
A MediaStream consists of zero or more MediaStreamTrack objects, representing various audio or video tracks.
So we should be fine adding more Tracks to this Stream.
I'm trying to switch/exchange the VideoTrack with the one from another stream. The other stream (streamB) originates from Chrome's screen-capture API (label: "Screen").
I tried:
__o_jsep_stream_audiovideo.addTrack(streamB.getVideoTracks()[0])
which doesn't seem to have any effect.
I also tried assigning the videoTracks directly (which was desperate, I know).
I must be missing something obvious; could you point me in the right direction?
I'm running Chrome (Version 34.0.1847.131) and Canary (Version 36.0.1976.2 canary) on OS X 10.9.2.
When we talk about changing the video track, there are two areas:
Change the remote video track (what others can see from you)
WebRTC has a new way of doing this, since addStream/removeStream are deprecated.
The good news is that it introduces a new interface, replaceTrack:
stream.getTracks().forEach(function(track) {
  // remote
  qcClient.calls.values().forEach(function(call) {
    var sender = call.pc.getSenders().find(function(s) {
      return s.track.kind == track.kind;
    });
    sender.replaceTrack(track);
  });
});
Change your local display video (what you see of yourself)
It's better to just add a new video element (or use an existing video element) and assign srcObject to the newly captured stream.
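A minimal sketch of that, assuming screenStream is the newly captured stream and a hypothetical <video id="localPreview"> element exists in the page:
const preview = document.getElementById("localPreview"); // hypothetical element
preview.srcObject = screenStream; // show the new capture locally
preview.play();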
Adding and removing tracks on a MediaStream object does not signal a renegotiation, and there are also issues in Chrome with a MediaStream having two tracks of the same type.
You should probably just add the separate MediaStream to the peer connection so that it can fire a renegotiation and handle the streams. The track add/remove functionality in Chrome is very naive and not very granular, and you should move away from it as much as you can.
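A sketch of that approach, assuming pc is the existing RTCPeerConnection and screenStream comes from the screen-capture API (addTrack is the modern replacement for the deprecated addStream):
screenStream.getTracks().forEach(function (track) {
  // adding the track with its own stream fires negotiationneeded on pc
  pc.addTrack(track, screenStream);
});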
I'm developing a Google Chrome extension, using a content script. I want to interact with pages embedding a YouTube video player. I have included www-widgetapi-vfljlXsRD.js as a JavaScript file, and the YouTube namespace is correctly initialized inside the extension sandbox.
I'm trying to retrieve a reference to an existing iFrame player. To achieve that, I tried this:
var ytplayer = new YT.Player('ytplayer');
ytplayer.pauseVideo();
where div#ytplayer is the iframe embedding the actual player.
I'm getting this error, saying that the method does not exist:
TypeError: Object # has no method 'pauseVideo'
What is the correct way to retrieve a reference to an existing player?
I was using the YouTube player API before and it was working properly. Today I'm having issues like yours, and I did not change anything in my code. It might mean that www-widgetapi-vfljlXsRD.js has changed and now has bugs... I cannot help any further.
OK, I found a way to talk to an existing player, although I have not actually managed to get a working reference to it.
Inspired by the amazing work of Rob (cf. YouTube iframe API: how do I control a iframe player that's already in the HTML?), I noticed that the YouTube player sends messages to the main window, like this:
{"event":"infoDelivery","info":{"currentTime":3.3950459957122803,"videoBytesLoaded":244,"videoLoadedFraction":0.24482703466872344},"id":1}
You can observe it by listening to the "message" event in a window containing a playing player: window.addEventListener('message', function (event) { console.log(event.data); }, false);
This infoDelivery event contains the current player's time and is sent while the video is playing. Listening to this event I am now able to know the current player time.
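A sketch of such a listener, extracting the current time from the infoDelivery messages (the origin check assumes the embedded player is served from https://www.youtube.com):
window.addEventListener('message', function (event) {
  if (event.origin !== 'https://www.youtube.com') return; // only trust the player
  var data;
  try {
    data = JSON.parse(event.data);
  } catch (e) {
    return; // not a JSON message from the player
  }
  if (data.event === 'infoDelivery' && data.info && data.info.currentTime !== undefined) {
    console.log('current player time:', data.info.currentTime);
  }
}, false);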
I have not found a proper solution so far, so this one will do the job for the moment.