AudioContext was not allowed to start in lib-jitsi-meet - JavaScript

I'm developing a video chat application using lib-jitsi-meet. It all works fine in Firefox, but in Chrome it throws the following error in the console:
lib-jitsi-meet.min.js:1 The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page.
Because of this error, participants can't use the microphone in Chrome. I know this is related to the changes in Chrome's autoplay policies. I could have handled it if I were using the pure Web Audio API directly; however, I can't figure out what to do to avoid it in lib-jitsi-meet.
I searched for this error in Jitsi community forums and many other places but didn't find any helpful answer to circumvent this.
How can I overcome this issue so that video chat participants using Chrome can use their microphones and hear what others are saying? Thank you for any helpful suggestions.

This problem occurs with the Jitsi Meet NPM package. Upon importing it into my React app as
import JitsiMeetJS from 'lib-jitsi-meet-dist'
it automatically creates a new AudioContext object. Since that is not allowed in Chrome due to its autoplay policies, I can neither access the microphone nor listen to other participants. I could have resumed the created AudioContext and moved on, but the library exposes no way to do that.
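For reference, this is what resuming would look like with the plain Web Audio API (a sketch; `makeGestureResumer` is a hypothetical helper name, and lib-jitsi-meet does not expose its internal context, which is exactly the problem):

```javascript
// Sketch: an AudioContext suspended by Chrome's autoplay policy can be
// resumed from inside a user-gesture handler such as a click.
function makeGestureResumer(audioContext) {
  return function onGesture() {
    // resume() only succeeds once the user has interacted with the page
    if (audioContext.state === 'suspended') {
      audioContext.resume();
    }
  };
}

// Browser-only wiring: resume on the first click anywhere on the page.
if (typeof document !== 'undefined' && typeof AudioContext !== 'undefined') {
  var ctx = new AudioContext(); // starts 'suspended' under the autoplay policy
  document.addEventListener('click', makeGestureResumer(ctx), { once: true });
}
```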
As a solution, I added the Jitsi Meet library as a script in the index.html of my React app and used the JitsiMeetJS object in the app as window.JitsiMeetJS.
<script src="https://meet.jit.si/libs/lib-jitsi-meet.min.js"></script>
With that approach, an AudioContext object was still initialized, but it is related to collecting local stats, not to Jitsi Meet's core functionality. Therefore, I could ignore it and move on.

Related

Is there a way to detect if a specific app is installed on an (Android/iOS) device from JavaScript?

I want to run JS code on Safari (iOS) or Chrome (Android) to detect, for example, whether WhatsApp is installed on the device.
I played with:
How to check if an app is installed from a web-page on an iPhone?
and
https://github.com/hampusohlsson/browser-deeplink
But the problem is that if the app is installed on the device, the browser redirects to the app. I want to stay on the page after the "detection stage"; is it possible to do that?
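The timeout heuristic those libraries rely on can be sketched like this (`detectApp` is a hypothetical helper; as the question notes, it cannot prevent the redirect when the app is installed):

```javascript
// Navigate to the app's URL scheme, then check whether the timer fired
// roughly on time. If an app switch backgrounded the tab, timers fire late,
// so a timely callback suggests the page was never left, i.e. the app is
// (probably) not installed. If the app IS installed, the browser redirects.
function detectApp(openScheme, timeoutMs, onNotInstalled) {
  var start = Date.now();
  openScheme(); // e.g. function () { window.location.href = 'whatsapp://send'; }
  setTimeout(function () {
    if (Date.now() - start < timeoutMs + 100) {
      onNotInstalled();
    }
  }, timeoutMs);
}
```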
After a few hours of research and consulting with experts, I came to the conclusion that there is no legitimate way to detect whether a specific app is installed on the device without the browser redirecting to the app if it is installed.
On iOS, for example, an app called SysSecInfo was recently removed from the App Store for violating the rules: it managed to pull a list of all running processes.
From: https://www.sektioneins.de/en/blog/16-05-09-system-and-security-info.html
See https://developer.apple.com/videos/play/wwdc2015/703/ "App Detection" starting at 08:34.
During this talk they discuss several APIs used to gather information about processes currently running on your system (around 12:12 in the video) and claimed to have fixed them. However, as so often, Apple has only partially fixed the problems they claim to have fixed. Therefore they have actually never stopped malicious applications from gathering information about what other applications currently run on your device, but only removed access to detail information that is only relevant for harmless system information tools anyway.
System and Security Info is therefore still able to show the list of running processes and enriches this list with information from the codesigning information, including the list of entitlements running processes have.

Mic access in my browser

I am testing the annyang library for voice commands.
Unfortunately, I am facing a problem: it works, but the browser (Chrome) only allows listening for about 5 seconds, then asks me to allow/deny the mic again (the "allow" choice is not saved in Chrome's manage-exceptions settings).
What should I do?
Thanks in advance.
You can reproduce this behavior on annyang's official site by visiting it over plain HTTP instead of the default HTTPS:
http://www.talater.com/annyang/
The reason for this is that Chrome only saves audio permission preferences for sites running on SSL. Annyang appears to close the audio context after a period of time if it isn't being used (presumably a performance thing).
Your two options are to:
1. Add SSL to your site using the library.
2. File a bug report, because it should only attempt to make a connection to the audio context once.

Creating a Google map fails after the internet reconnects

When I am creating a Google map in a hybrid app and the network suddenly disconnects, the map shows nothing after the network has been reconnected. I get the error message below and try to reload the Google Maps JavaScript again. Sometimes the map is created successfully, but it usually fails. Is there any solution to this problem?
GET http://maps.gstatic.com/cat_js/intl/zh_tw/mapfiles/api-3/16/2/%7Bcommon,map%7D.js
net::ERR_PROXY_CONNECTION_FAILED %7Bmain,places%7D.js:10
GET http://maps.gstatic.com/cat_js/intl/zh_tw/mapfiles/api-3/16/2/%7Butil,stats%7D.js
net::ERR_PROXY_CONNECTION_FAILED %7Bmain,places%7D.js:10
watchPositionError 0
I know it can be annoying to think about, but at some point you just have to put it out of your head. An important thing to note is that users will sometimes end up in situations where your app doesn't work exactly as you want it to, and it's out of your control, for example when the user loses their internet connection.
Another example would be a user whose browser doesn't support cookies, or doesn't support JavaScript. As a developer, you won't always have the resources to build software for every person, on every platform, in every scenario. Sure, Google can handle internet disconnects/reconnects in Google Drive, syncing up to the server when a connection is re-established, and Twitter has a mobile site that supports users without JavaScript. As a developer, you will have to make a judgment call on what is and isn't a must-have feature. I'm guessing this isn't one.

Activity Camera FirefoxOS

I am developing an app for Firefox OS which is supposed to load the camera when
an element is touched.
I searched the internet, but I could not find a way to do this other than starting a "web activity" and letting the user choose which application to use.
I would like to force the camera application to start and not let the user choose the app to launch. Is there a way? (I really hope so!)
Thank you for the answer in advance!
Lorenzo
Launching the camera (app) and getting access to the camera (hardware) are two different things - depending on your needs, you may need the Camera API (as suggested by Jack) to pull images/video off the device camera hardware, or you might just want to launch the built-in camera app, so the user can interact with it (without requiring to retrieve any result, like a photo, from this interaction).
Unfortunately, both use cases are currently restricted by the permission system of Firefox OS.
Direct hardware access to the camera requires a "Certified" level permission, which prevents it from being used in third-party applications. If you need this feature, your best chance is to wait until WebRTC (the getUserMedia() API) lands on Firefox OS devices, which will give third-party applications direct access to the camera and microphone hardware. There are already some experiments on early Nightly builds of FxOS that use the WebRTC getUserMedia API on actual devices, so it shouldn't take long before it is available to end users, too. Keep an eye on bug 750011 to follow implementation progress.
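Once it lands, the getUserMedia flow would look roughly like the prefixed, callback-based form already used on desktop (a sketch; `pickGetUserMedia` is a hypothetical helper):

```javascript
// Hypothetical helper: pick whichever prefixed getUserMedia the browser exposes.
function pickGetUserMedia(nav) {
  return nav.getUserMedia || nav.mozGetUserMedia || nav.webkitGetUserMedia || null;
}

// Browser-only wiring: request the camera stream and attach it to a <video>.
if (typeof navigator !== 'undefined') {
  var gum = pickGetUserMedia(navigator);
  if (gum) {
    gum.call(navigator, { video: true, audio: false },
      function (stream) {
        var video = document.querySelector('video');
        video.mozSrcObject = stream; // older Firefox; srcObject in modern browsers
        video.play();
      },
      function (err) {
        console.error('getUserMedia failed:', err);
      });
  }
}
```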
The other use case is launching the built-in camera application itself from your app. To launch an installed app on the device you need a reference to its App object; invoking that object's .launch() method launches the app. Unfortunately, currently the only way to acquire said App object seems to be via the Apps.mgmt.getAll() function call, which lists all the installed apps on your device - scanning the list, you would be able to pick the Camera app and use its launch() method to launch it. You can see this in action in Kevin Grandon's "Matchscreen" homescreen experiment. Unfortunately, the permission system has the last word in this use case too, as the Apps.mgmt calls also require a "Certified" level permission (the webapps-manage permission). That is one of the main reasons why third-party homescreens (like the one by Matteo D'Ignazio) can't currently function and actually launch apps. There is an ongoing discussion about relaxing these requirements, though, and work is ongoing on third-party home screens, so (in time) this should also be resolved.
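The getAll-then-launch flow described above can be sketched as follows (this requires the certified-only webapps-manage permission, so it will not run in a third-party app today; `findAppByName` is a hypothetical helper):

```javascript
// Find an installed app by the name declared in its manifest.
function findAppByName(apps, name) {
  for (var i = 0; i < apps.length; i++) {
    if (apps[i].manifest && apps[i].manifest.name === name) {
      return apps[i];
    }
  }
  return null;
}

// Certified-app-only wiring: list all installed apps, pick the Camera app,
// and launch it.
if (typeof navigator !== 'undefined' && navigator.mozApps && navigator.mozApps.mgmt) {
  var req = navigator.mozApps.mgmt.getAll();
  req.onsuccess = function () {
    var camera = findAppByName(req.result, 'Camera');
    if (camera) {
      camera.launch();
    }
  };
}
```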
As seen on the mdn page explaining App permissions, camera API is not available to third-party developers yet, but there are plans for it happening in the future.
Note: The reason that camera is limited to certified apps is that the sandbox that apps run in prevents access to the camera hardware. Our goal is to make it available to third party apps as soon as possible, but we don't have time to do that in the initial release.
In about half a year you will be able to use WebRTC (the getUserMedia API) in FxOS to access the camera, just as in modern desktop browsers. It will be the preferred way, rather than the obsolete mozCamera API (which is not usable by third-party developers).

What is causing my web app to stall on some Android devices? How do I detect?

I am building a cross-platform mobile app in HTML, CSS & Javascript and publishing it to Android and iOS using PhoneGap as a wrapper.
I have published in the Android Market, and am finding out through customer reviews that the app stalls early during the user setup phase, with some users/devices. I do not have any kind of error handling and reporting set up in the app, and the Android Market's crash report is still giving me 0 crashes after hundreds of downloads and half a dozen disappointed reviews.
I hadn't thought about error reporting while building the app. Now that the app is out there, is there a simple way to implement it? I'm currently trying to intercept the window.onerror event, but is that the right approach?
I've read about ACRA (http://code.google.com/p/acra/), but this seems to be Java error reporting. If JavaScript fails on a missing function or a non-existent object, will ACRA report the details of that?
EDIT:
Since my app has internet access anyway, I implemented a listener for the javascript 'onerror' event, which provides a description of the error.
This listener uses Cordova/PhoneGap to detect if there is a connection: if there is, it sends the error description to the server along with the device. If there isn't, it attaches an event listener to Cordova/PhoneGap's 'online' event to send the error when the device goes online.
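The listener described above can be sketched like this (the endpoint and the `buildErrorReport`/`sendReport` helper names are mine, not from the app):

```javascript
// Build a JSON payload from window.onerror's arguments.
function buildErrorReport(message, source, line) {
  return JSON.stringify({
    message: String(message),
    source: source || 'unknown',
    line: line || 0,
    ts: Date.now()
  });
}

// POST the report to a (hypothetical) logging endpoint.
function sendReport(report) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/log-error');
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(report);
}

// Browser/Cordova wiring: send immediately if online, otherwise wait for
// Cordova's 'online' event, which fires on document when connectivity returns.
if (typeof window !== 'undefined') {
  window.onerror = function (message, source, line) {
    var report = buildErrorReport(message, source, line);
    if (navigator.onLine) {
      sendReport(report);
    } else {
      document.addEventListener('online', function handler() {
        document.removeEventListener('online', handler);
        sendReport(report);
      });
    }
  };
}
```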
It's not running perfectly yet (which is why I'm not adding this as an official answer)
No, and believe me, it's a bad thing for a huge number of us!
