I'm currently developing a prototype that logs video chat information from third party services like Hangouts, Zoom, etc.
So far I've been unable to get a simple event to log to the console with navigator.mediaDevices.ondevicechange. I'm using the latest version of Chrome.
https://codepen.io/anon/pen/dqbNKR
I'm using this pen, and all I want to do is just log to the console when my video camera turns on/off. Is ondevicechange the right event?
https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/ondevicechange
A devicechange event is sent to a MediaDevices instance whenever a media device such as a camera, microphone, or speaker is connected to or removed from the system. It's a generic Event with no added properties.
I know I can also look at streams of specific elements, but since it's 3rd party services I don't necessarily know which elements to look at.
So how can I detect when my webcam turns on/off in a third party app in the browser?
Thanks for help =)
As I was typing this I came across the following, but I still need to test it:
How to check with JavaScript that webcam is being used in Chrome
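For what it's worth, here is a minimal sketch of wiring up devicechange and diffing enumerateDevices() snapshots to see which device appeared or disappeared. Note that the event fires when a device is physically connected or removed (or when device access changes), not when a third-party page merely turns the camera on or off, so on its own it won't answer the on/off question:

```javascript
// Pure helper: diff two enumerateDevices() snapshots by deviceId.
function diffDevices(before, after) {
  const beforeIds = new Set(before.map(d => d.deviceId));
  const afterIds = new Set(after.map(d => d.deviceId));
  return {
    added: after.filter(d => !beforeIds.has(d.deviceId)),
    removed: before.filter(d => !afterIds.has(d.deviceId)),
  };
}

// Browser wiring (sketch): keep a snapshot and re-enumerate on devicechange.
async function watchDevices() {
  let snapshot = await navigator.mediaDevices.enumerateDevices();
  navigator.mediaDevices.addEventListener('devicechange', async () => {
    const current = await navigator.mediaDevices.enumerateDevices();
    const { added, removed } = diffDevices(snapshot, current);
    added.forEach(d => console.log('connected:', d.kind, d.label));
    removed.forEach(d => console.log('disconnected:', d.kind, d.label));
    snapshot = current;
  });
}
```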
I am trying to create a screen recorder for myself because my PC (Windows 10) does not have a camera for whatever reason. I know that may sound strange, but whenever I open the Camera app, I get an error saying that no camera can be found. When any other app tries to use my camera, it also says that no camera can be found, and Device Manager does not even list any camera drivers or anything like that. Honestly, it baffles me how a modern PC could lack a camera. Somehow, though, when I play games through the Steam app, I can still use Steam's recording feature to record videos of gameplay; I don't know if this implies that my PC actually does have a camera. Nevertheless, I am trying to build my own recorder to fix my problem.
I learned about the html canvas element, and the method captureStream() that allows you to capture a video stream of the contents of the canvas. However, I've been seeing information online suggesting that this might actually require a camera, but this info has been scattered around the web and organized in confusing ways.
Can someone please clarify this for me? Does the usage of canvas.captureStream() require a camera?
Can I use HTMLCanvasElement.captureStream() if my pc doesn't have a webcam?
Yes you can.
There is no relation between this API and the webcam. You probably got confused with navigator.mediaDevices.getUserMedia(), which also returns a MediaStream object and which does indeed require a capture device.
HTMLCanvasElement#captureStream() will produce said MediaStream from its buffer directly and thus doesn't need any capture device.
The same goes for screen-recording features: they don't need a dedicated capture device; they simply record the frames that they (or the system) generate.
By the way, for recording your screen you'd be better off using the Screen Capture API (getDisplayMedia), which doesn't need a capture device either.
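As a sketch of the camera-free path described above: the code below records a canvas you're already drawing to, via captureStream() and MediaRecorder, with no getUserMedia call anywhere. The mimeType picker is factored out as a pure helper with an injectable support check so it can be tested outside a browser (in a page you'd pass MediaRecorder.isTypeSupported):

```javascript
// Pure helper: pick the first recording mimeType the environment supports.
// `isSupported` is injectable; in a browser, pass
// t => MediaRecorder.isTypeSupported(t). Returns '' if none match.
function pickMimeType(candidates, isSupported) {
  return candidates.find(isSupported) || '';
}

// Browser sketch: record a canvas with no camera involved at all.
function recordCanvas(canvas, seconds) {
  const stream = canvas.captureStream(30); // 30 fps from the canvas buffer
  const mimeType = pickMimeType(
    ['video/webm;codecs=vp9', 'video/webm'],
    t => MediaRecorder.isTypeSupported(t)
  );
  const recorder = new MediaRecorder(stream, { mimeType });
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.start();
  setTimeout(() => recorder.stop(), seconds * 1000);
  return new Promise(resolve => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: mimeType }));
  });
}
```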
I tried using the Twilio JavaScript Client Quickstart and am having some audio issues. I am trying to place a call to a cell phone from the browser. In general, everything seems to work without errors; the issue I am having seems to be an audio problem.
Edge - The call goes through, I can hear the audio from the microphone on the client but I cannot hear the audio from the phone on the client.
Chrome - I cannot even get the call to connect because I cannot click the Call button. It seems to get hung up on the microphone permission.
Firefox - The call goes through but there is no audio on either end.
IE - The Call button and the text box to enter the phone number don't even display.
Does anyone have any suggestions? Thanks!!
EDIT: Based on the comments, I tried using the sample files without changing them. There were some errors, and I had to add the Bootstrap and Modernizr packages via NuGet. Same result: no audio.
I have done some research on SO, but the similar Q&As are about detecting whether there is a connection at all, not about the connection type.
The purpose of my website is that, if a user is on mobile (phone or tablet) and on wifi, the video clip plays; if a user is on mobile and not on wifi, the clip should not play automatically; and if a user is not on mobile, the clip plays.
The reason for the different behavior is to avoid possible surcharges to the user due to the relatively large size of the video clip. It is not really about speed - nowadays the speed difference between LTE and wifi may be small; it is more about the concern of users being charged for data usage when they are not on a wifi connection.
So my question is, using AngularJS (< 2.0):
1) How do I detect whether the device is a desktop or a mobile device?
2) How do I detect whether the device is connected to wifi or not?
(I guess for Q1 the fallback is to use Bootstrap-style @media queries, but that isn't ideal.)
You don't need Angular to do such a check.
In order to detect if a device is a desktop or a mobile, use navigator.userAgent, see this answer
In order to detect the connection type, use navigator.connection, see this answer
Be careful: support for this API is not universal, see here.
Another way to do it is to try this plugin, which relies on internet speed check, but I have never used it.
Finally, if you REALLY need this info for smartphone users, wrap your website with Cordova and distribute it as an app.
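As a sketch of the navigator.connection check above: the type field is the relevant one for wifi vs. cellular, but it is one of the least-supported parts of the Network Information API, so treat "unknown" as a real possibility rather than assuming either answer. The helper takes the connection object as a parameter so the decision logic is testable without a browser:

```javascript
// Pure helper: decide whether a connection looks like cellular data.
// Takes navigator.connection (or null/undefined where unsupported).
// Returns true/false, or null when the answer is unknown - callers
// should treat null as "don't assume anything".
function looksCellular(connection) {
  if (!connection || !connection.type) return null;
  return connection.type === 'cellular';
}

// Browser usage sketch (prefixed forms existed in some engines):
// const conn = navigator.connection
//   || navigator.mozConnection || navigator.webkitConnection;
// if (looksCellular(conn) === true) { /* skip the big video clip */ }
```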
With respect to finding out what device is being used, this Angular plugin can save some headaches: ngx-device-detector
Install it ($ npm install ngx-device-detector --save), inject DeviceDetectorService in your constructor, then call this.deviceService.isMobile(), for example, to check whether the device type is mobile. It has other methods for checking whether the device is a tablet or a desktop, and further methods that return useful information about the browser.
I encourage devs to use feature detection, not browser or desktop/mobile detection. E.g. Modernizr has a feature detection for low-bandwidth connections, though it won't work in all browsers:
https://modernizr.com/download#lowbandwidth-setclasses&q=connect
The danger, as it states, is that unknown devices are assumed to be fast.
To get a sense of desktop vs. mobile, there's a technique of listening for touch events; cf.:
What's the best way to detect a 'touch screen' device using JavaScript?
Regarding whether you should autoplay a video clip: if it's an HTML5 player, it won't autoplay on mobile anyway, for the reasons you mention, unless it's tied to a touch event (like hitting play).
I have gotten around this by "saving off" a touch event from earlier (like the one that got the user to the screen with the video player) and then reusing that event to autoplay. All that said, please consider whether autoplay is truly what you want, as a lot of users find it annoying.
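The touch heuristic linked above can be sketched with injectable globals so the logic is testable outside a browser; in a real page you'd call hasTouch(window, navigator). Like any such heuristic, it detects touch capability, not "is a phone" (touch-screen laptops exist):

```javascript
// Heuristic touch check with injectable globals for testability.
// In a page: hasTouch(window, navigator).
function hasTouch(win, nav) {
  return 'ontouchstart' in win || (nav.maxTouchPoints || 0) > 0;
}
```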
So, I am developing this VR website online with JavaScript.
I have a function that instantiates an object in the 3D room, at the given location that the user is "watching".
However, I don't see how I can execute that function when my iPhone is locked away inside the VR headset. I have been thinking about using the volume-up button on the headset that's connected to it, but I haven't found anyone who has done that before.
Do you guys have any advice? Could there be a way of connecting a Bluetooth remote to it? Or is it simply impossible?
TL;DR: How do you execute a function in the browser on your phone when you can't touch your phone?
It is possible to connect a Bluetooth keyboard to iOS devices, as you suggested in your post.
You might also leverage the other sensors in the phone (such as accelerometer, gyroscope, microphone), but I'm not certain if Safari has access to those. You would have more options if you developed a standalone application rather than trying to go through a browser.
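For instance, the accelerometer idea could be sketched as a spike detector on devicemotion readings - nod or tap the headset to trigger the action. This is an assumption-laden sketch: spawnObject is a stand-in for the questioner's instantiate function, the threshold is arbitrary, and newer iOS Safari versions require a DeviceMotionEvent.requestPermission() call from a user gesture before the phone goes into the headset:

```javascript
// Pure helper: detect a "shake"/tap spike from an accelerometer reading.
// Returns true when the acceleration magnitude exceeds `threshold` (m/s^2).
function isSpike(accel, threshold) {
  const { x = 0, y = 0, z = 0 } = accel;
  return Math.sqrt(x * x + y * y + z * z) > threshold;
}

// Browser wiring sketch (spawnObject is hypothetical):
// window.addEventListener('devicemotion', e => {
//   if (isSpike(e.accelerationIncludingGravity || {}, 25)) spawnObject();
// });
```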
I am developing an app for Firefox OS which is supposed to load the camera when an element is touched.
I searched the internet, but I could not find a way to do such a thing unless I started a "web activity" and let the user choose an application to pick.
I would like to force the camera application to start and not let the user choose the app to launch. Is there a way? (I really hope so!)
Thank you for the answer in advance!
Lorenzo
Launching the camera (app) and getting access to the camera (hardware) are two different things - depending on your needs, you may need the Camera API (as suggested by Jack) to pull images/video off the device camera hardware, or you might just want to launch the built-in camera app, so the user can interact with it (without requiring to retrieve any result, like a photo, from this interaction).
Unfortunately, both use cases are currently restricted by the permission system of Firefox OS.
Direct hardware access to the camera requires a "Certified" level permission, which prevents it from being used in third-party applications. If you need this feature, your best chance is to wait until WebRTC (the getUserMedia() API) lands on Firefox OS devices, which will give third-party applications direct access to the camera and microphone hardware (there are already some experiments on early Nightly builds of FxOS that use the WebRTC getUserMedia API on actual devices, so it shouldn't take long before it is available to end users, too). Keep an eye on bug 750011 to follow implementation progress.
The other use case is launching the built-in Camera application itself from your app. To launch an installed app on the device you need a reference to its App object; invoking that object's .launch() method launches the selected app. Unfortunately, currently the only way of acquiring said App object seems to be via the Apps.mgmt.getAll() function call, which lists all the installed apps on the device - scanning the list, you can pick the Camera app and use its launch() method to launch it. You can see this in action in Kevin Grandon's "Matchscreen" homescreen experiment.

Unfortunately, the permission system has the last word in this use case too, as the Apps.mgmt calls also require a "Certified" level permission (the webapps-manage permission). That is one of the main reasons why third-party homescreens (like the one by Matteo D'Ignazio) currently can't function and actually launch apps. There is an ongoing discussion on relaxing these requirements, though, and work is ongoing regarding third-party home screens, so (in time) this should also be resolved.
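For reference, the Apps.mgmt approach described above might look roughly like this (a sketch only - it works solely with the certified-only webapps-manage permission, which is exactly the restriction the answer explains). The list-scanning part is a plain function so it can be tested outside FxOS:

```javascript
// Pure helper: find an installed app record by its manifest name.
function findAppByName(apps, name) {
  return apps.find(app => app.manifest && app.manifest.name === name) || null;
}

// FxOS sketch (requires the certified-only webapps-manage permission):
// const req = navigator.mozApps.mgmt.getAll();
// req.onsuccess = () => {
//   const camera = findAppByName(req.result, 'Camera');
//   if (camera) camera.launch();
// };
```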
As seen on the MDN page explaining app permissions, the Camera API is not available to third-party developers yet, but there are plans for that to happen in the future.
Note: The reason that camera is limited to certified apps is that the sandbox that apps run in prevents access to the camera hardware. Our goal is to make it available to third party apps as soon as possible, but we don't have time to do that in the initial release.
You will be able to use WebRTC (the getUserMedia API) in FxOS to access the camera, as in modern desktop browsers, in about half a year. It will be the preferred way, rather than the obsolete mozCamera API (which cannot be used by third-party developers).
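When getUserMedia did land, unprefixed and prefixed forms coexisted for a while, so a small resolver helps. A sketch, assuming only the standard variants (note the legacy prefixed forms take success/error callbacks rather than returning a promise, so the usage line below applies to the mediaDevices path):

```javascript
// Resolve whichever getUserMedia variant this environment exposes, or null.
// Takes the navigator object as a parameter so it is testable with mocks.
function resolveGetUserMedia(nav) {
  if (nav.mediaDevices && nav.mediaDevices.getUserMedia) {
    // Modern, promise-based form; bind so `this` stays correct.
    return nav.mediaDevices.getUserMedia.bind(nav.mediaDevices);
  }
  // Legacy, callback-based prefixed forms.
  return nav.getUserMedia || nav.mozGetUserMedia || nav.webkitGetUserMedia || null;
}

// Browser usage sketch (modern path):
// const gum = resolveGetUserMedia(navigator);
// if (gum) gum({ video: true }).then(stream => { video.srcObject = stream; });
```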