webkitSpeechRecognition no longer prompts for permission - JavaScript

I've been prototyping a few pages that use webkitSpeechRecognition. I learned quickly that you cannot load these from a file; you have to serve them from a web server. I'm on OS X, so I just moved my files to the local Apache that was already running and enabled. This worked fine for quite a while.
For some reason, none of my pages that were working fine will prompt me to allow/deny microphone usage anymore. I even copied a known-working page from another web server, and if I load it from http://localhost/speech.html it will not prompt. It skips the prompt, goes straight to my recognition.onerror handler, and logs "not-allowed".
However, if I load the same page (or any of my other prototypes) from http://127.0.0.1/speech.html, it works fine. This made me think I had accidentally cached a response like "always deny" or something. I think I've cleared/reset all my Chrome settings, but I'm still getting the same behavior: 127.0.0.1 will prompt properly, but localhost will not prompt at all.
Where might Chrome be storing additional settings that I need to clear?
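For reference, the relevant part of the page is roughly this (simplified, but the onerror handler and the "not-allowed" value are exactly what I'm seeing):

    // Simplified version of what my pages do: start recognition, log errors.
    var recognition = new webkitSpeechRecognition();
    recognition.continuous = true;

    recognition.onerror = function (event) {
        // On http://localhost this now fires immediately with "not-allowed",
        // without any permission prompt ever being shown.
        console.log('recognition error:', event.error);
    };

    recognition.onresult = function (event) {
        console.log('result:', event.results[event.results.length - 1][0].transcript);
    };

    recognition.start();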

Your microphone settings might be stored at chrome://settings/contentExceptions#media-stream-mic. You can view the websites that have permissions saved there.

getUserMedia permission prompting in Chrome currently works something like this:
If you make the request over http, getUserMedia will only remember the permission for that session. If you go back to the same page, it asks again.
If you make the same request over https, once you grant permission, you always have permission.
My memory is that an exception is granted for http://localhost/... for debugging purposes. In this case, you don't need to repeatedly grant permission.
If you use http and 127.0.0.1/, I think no exception is made.
http://www.html5rocks.com/en/tutorials/getusermedia/intro/
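To make the rules above concrete, here is a minimal sketch using the prefixed, callback-style API of that era; the per-session vs. persistent behavior applies to the prompt this call triggers:

    // Minimal sketch: request the microphone and observe the prompt behavior.
    // Over http the grant lasts one session; over https it persists.
    navigator.webkitGetUserMedia(
        { audio: true },
        function (stream) {
            console.log('microphone granted');
        },
        function (err) {
            // A blocked or denied permission lands here with no prompt shown,
            // much like the "not-allowed" case in the question.
            console.log('microphone error:', err);
        }
    );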

Related

How to get Chrome to stop asking me whether it's OK to use the camera?

I am doing some html/js/webrtc work with the webcam. Even though I am hosting files from the web server on my machine (thus 127.0.0.1), Chrome asks me whether it's OK to use the camera every time I reload the page.
How can I get it to stop?
Just activate chrome://flags/#allow-insecure-localhost. This makes localhost behave like https and fixes a lot of development problems, including invalid SSL certificates.
Use https. Chrome does not persist permissions over http, and getUserMedia will stop working there soon (though possibly not on localhost).
Alternatively, use a command-line flag like --use-fake-ui-for-media-stream to skip the prompt; see the example below.
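For instance, on a Mac you could launch a testing session of Chrome like this (a sketch; the flag is real, but adjust the application path for your system):

    # Auto-accept the media-stream permission prompt (for testing only).
    open -a "Google Chrome" --args --use-fake-ui-for-media-stream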

webcam permission request doesn't work with local files

While experimenting with WebRTC, I looked at some examples and downloaded one from GitHub. It wasn't working at all. At the right side of the URL bar there was an icon indicating that my webcam was blocked. I clicked on it and allowed the page to use my webcam. Chrome then told me to reload the page, so I did, but everything was the same as before. However, when I loaded the same site through JSFiddle, it asked me with a pop-up for webcam access (the same way every other application does) and it worked without a flaw. I tested some other browsers and got the same result. Does anyone have a suggestion how to solve this problem? Thank you!
In order to use the webcam API, the file must be served from a server. When you run it from JSFiddle, it runs on a server, and thus works. It won't work if you run it as a file:/// URL in your browser; you must run a local web server on your computer and open the web app from there over http://.
Running a server
Well, running a web server can be very complex and requires knowledge of software like Apache or IIS. Luckily, for developers just seeking a simple, straightforward web server for client-side development, there are a couple of easy solutions:
Windows: use a program called WAMP - it automatically runs Apache on your machine and creates a folder on your computer in which you can put all the website content. http://www.wampserver.com/en/
Mac: similar to WAMP, Mac has a piece of software called XAMPP that does pretty much the same thing. http://www.apachefriends.org/en/xampp.html
Both are pretty simple, but I think they will be enough for simple front-end development.
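As a quick sanity check during development, a small snippet like this (just a sketch; the message is mine) will warn you when a page has been opened from a file:// URL, which is exactly the case the webcam API refuses:

    // Warn early if the page was opened as file:// instead of being served.
    if (window.location.protocol === 'file:') {
        alert('Open this page from a web server (http://...), not as a local file.');
    }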
Chrome blocked my webcam on a site where I denied access multiple times (because I was testing).
You might need to visit chrome://settings/contentExceptions#media-stream and clear your settings.

Mobile App Authentication/Sencha Touch

I'm trying to write an app using Sencha Touch that ultimately targets iOS and Android. It's supposed to log into the corporate web server and then retrieve and parse some JSON data. It should be very simple. However, I'm very new to both Sencha and JavaScript, so I'm having a hard time doing this sort of client-side authentication. I can't even seem to make it authenticate from a web browser on my dev machine.
I used this link to help create my login page:
http://miamicoder.com/2012/adding-a-login-screen-to-a-sencha-touch-application/
But when I attempt to log in I seem to get the following error message and a null object:
XMLHttpRequest cannot load https://www.server.com/index.html?_dc=1234567890123
Origin http://localhost:8000 is not allowed by Access-Control-Allow-Origin.
Does anyone have any advice or good resources on getting this app to log in? Any help would be greatly appreciated!
Steve, the "is not allowed" error is returned because your login request violates the browser's same-origin policy (essentially it states that all XhrHttpRequests must go to the same domain the page was initially loaded from).
Some browsers offer ways of disabling this error temporarily (which might be fine for short-term development purposes), but for the long-term you'll either need to host your application in the same domain as your backend server, or look into using CORS or JSONP for your requests.
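If the JSONP route fits your backend, here is a rough sketch of what it looks like in Sencha Touch. The URL and parameter names are placeholders, and it assumes the server wraps its JSON response in the callback it is handed:

    // JSONP sidesteps the same-origin policy by loading a <script> tag;
    // the server must wrap its JSON in the supplied callback function.
    Ext.data.JsonP.request({
        url: 'https://www.server.com/login',   // placeholder URL
        callbackKey: 'callback',
        params: { username: 'user', password: 'pass' },
        success: function (result) {
            console.log('logged in:', result);
        },
        failure: function () {
            console.log('login failed');
        }
    });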
Your AJAX request violates the same-origin policy; that's why you are getting the error message. If you are using Chrome for debugging, you can disable the cross-domain JavaScript security by doing the following:
For Windows:
1) Create a shortcut to Chrome on your desktop. Right-click the shortcut and choose Properties, then switch to the “Shortcut” tab.
2) In the “Target” field, append the following: --args --disable-web-security
For Mac, open a terminal window and run this from the command line:
open ~/Applications/Google\ Chrome.app/ --args --disable-web-security
For Ubuntu, open a terminal window and run this from the command line:
go to /usr/bin and execute ./google-chrome --disable-web-security
There is an extension for Chrome that does the work:
Allow-Control-Allow-Origin.
If you want it to be active when the browser starts, you have to press its icon.

Developer Tools: Follow network requests across popups

We are trying to figure out how something works on the web (for web scraping/automation), and one of the web pages we are working with issues a popup to do some of the work. One of our most commonly used debugging tools is the network tab in Chrome's Developer Tools: hit "record", do some work, then examine what was done and replicate it "offline".
However, the Developer Tools (in Chrome, Safari and Firefox - all work the same) do not follow requests across a popup, even if you hit "record".
Is there some configuration value I'm missing, or some way to record all network events? We can't use tcpdump/wireshark for this because it's all done over SSL. One option we've considered is a man-in-the-middle https proxy, but I can't find anything pre-written, so we'd have to create one ourselves.
I don't know of any way to follow the requests across pop-ups, as each window has its own Web Inspector. However, you can use Fiddler to inspect HTTPS requests. It will man-in-the-middle the connection (and consequently throw a certificate error), which should allow you to inspect all requests in the order that they happened.
You can use the Charles Web Debugging Proxy, an app that lets you see all the traffic and even replace some responses with your own. Of course that may break HTTPS, so you have to accept the certificate errors, but that's usually a minor problem. It works on Windows, Mac, and even Linux.
The inspector cannot inspect what isn't in the current page. Therefore, you will need to open the inspector inside the popup URL, with the same parameters, in order to see what it does.
As a tool, you can also use a web sniffer to see exactly which URLs were called during the process.
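One partial workaround, if you control the opener and the popup is same-origin, is to read the popup's Resource Timing entries once it has done its work. This is only a sketch (the popup URL is a placeholder), and it yields the list of fetched URLs, not the request or response bodies:

    // List the resources (including XHRs) the popup fetched, via Resource Timing.
    var popup = window.open('/child.html');  // placeholder; must be same-origin

    function dumpPopupRequests() {
        popup.performance.getEntriesByType('resource').forEach(function (entry) {
            console.log(entry.initiatorType, entry.name, entry.duration + 'ms');
        });
    }
    // Run dumpPopupRequests() from the opener's console after the popup finishes.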

Replace remote JavaScript file with a local debugging copy using Greasemonkey or userscript

While debugging a client app that uses a Google backend, I have added some debugging versions of the functions and inserted them using the Chrome Developer Tools script editor.
However, there are a number of limitations with this approach. The first is that the editor doesn't always seem to work with de-minified files, and when the JS file is 35K lines long, this is a problem.
Another issue is that all the initialization done at load time uses the original "unpatched" functions, so this is not ideal.
I would like to replace the remote javascript.js file with my own local copy, presumably using some regex on the file name or whatever strategy is suitable. I am happy to use either Firefox or Chrome, if one is easier than the other.
So basically, as @BrockAdams identified, there are a couple of solutions to this type of problem depending on the requirements, and they follow one of two methods:
1. The browser API switcharoo.
2. The proxy-based interception befiddlement.
The browser API switcharoo.
Both Firefox and Chrome support browser extensions that can take advantage of platform-specific APIs to register event handlers for "onbeforeload" or "onBeforeRequest", in the case of Firefox and Chrome respectively. The Chrome APIs are currently experimental, hence these tools are likely to be better developed under Firefox.
Two tools that definitely do something like what is required are Adblock Plus and JSDeminifier, both of which have their source code available.
The key point for these two Firefox add-ons is that they intercept the web request before the browser gets its hands on it and operate on the near side of the http/https encryption stage, hence they can see the decrypted response. However, as identified in the other post, they don't do the whole thing. JSDeminifier was very useful, but I didn't find a Firefox plugin that did exactly what I wanted; still, I can see from those plugins that it is possible with both Firefox and Chrome, even though neither actually does the trick as required.
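On the Chrome side, the shape of that interception is roughly the sketch below, from an extension background page using the webRequest API (experimental at the time of writing). The URLs are placeholders, and the manifest would need the webRequest, webRequestBlocking, and matching host permissions:

    // background.js: redirect the remote script to a local debugging copy.
    chrome.webRequest.onBeforeRequest.addListener(
        function (details) {
            // Serve my own copy instead of the remote javascript.js.
            return { redirectUrl: 'http://localhost:8000/javascript.js' };
        },
        { urls: ['*://*.example.com/javascript.js'] },  // placeholder pattern
        ['blocking']
    );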
The proxy-based interception befiddlement.
This is definitely the better option in a plain HTTP environment. There are a whole bunch of proxies, such as Privoxy, Fiddler2, and the Charles Web HTTP proxy, and presumably some that I didn't look at specifically, such as Snort, that support filtering of some sort.
The simplest solution for me was FoxyProxy and Privoxy on Firefox: configure a user.action and a user.filter to detect the URL of the page, and then apply a filter which swaps out the original src tag for my own, as sketched below.
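The Privoxy side of that setup is just a couple of text rules. A rough sketch, where the site and file names are placeholders for my real ones:

    # user.filter: a substitution that swaps the script src for a local copy.
    FILTER: swap-js Replace remote javascript.js with a local debugging copy
    s@src="http://example\.com/javascript\.js"@src="http://localhost:8000/javascript.js"@g

    # user.action: apply the filter only on the target site.
    { +filter{swap-js} }
    .example.com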
The https case: proxy vs. plugin
When the request is https, the proxy can't see the request URL or the response body, so it can't do the cool swapping stuff. However, there is one option available for those who like to mess with their browser: the man-in-the-middle SSL proxy. The Charles Web HTTP proxy appears to be the main solution to this problem. Basically, the way it works is that when your browser makes a request to the remote HTTPS server, the SSL proxy intercepts the request, generates a server certificate for that host on the fly, signs it with its own root CA, and sends it back to the browser. The browser obviously complains about the self-signed cert, but here you can choose to install the SSL proxy's root CA cert into the browser, befuddling the browser and allowing the SSL proxy to man-in-the-middle the connection and apply replacements and filters on the raw response body.
Alternative: roll your own Chrome extension
I decided to go with rolling my own Chrome extension, which I am planning to make available. Currently it's hardcoded to my own requirements, but it works pretty well, even for https requests, and another benefit is that a browser-plugin solution can be more tightly integrated with the browser developer tools.
