I want to call a browser function, e.g. AddSearchProvider(engineURL), which requires a URL to an XML file. However, I want the user to generate the content of the XML file himself, so I want to call the function by passing (a reference to) the user-generated content directly. It is key that all of this happens client-side only, so that no server is needed to temporarily host files in the process.
I tried to encode the XML file into the URI:
uri = "data:application/xml;charset=utf-8," + encodeURIComponent($('#edit-search-engine').val());
window.external.AddSearchProvider(uri);
But Firefox (57.0) rejects this approach with an error message.
I guess¹ Firefox expects a "true" remote URL. How can I achieve the above functionality without a server in the loop?
¹Update: Firefox does indeed enforce that the URL be HTTP, HTTPS, or FTP:
[...]
// Make sure the URLs are HTTP, HTTPS, or FTP.
let isWeb = ["https", "http", "ftp"];
if (isWeb.indexOf(engineURL.scheme) < 0)
  throw "Unsupported search engine URL: " + engineURL;
if (iconURL && isWeb.indexOf(iconURL.scheme) < 0)
  throw "Unsupported search icon URL: " + iconURL;
[...]
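For illustration, a small guard can mirror Firefox's scheme check before calling the API. A minimal sketch, using the standard URL constructor; canAddSearchProvider is a hypothetical helper name:

// Sketch: refuse schemes that Firefox will reject, mirroring the check above.
function canAddSearchProvider(engineURL) {
  var scheme = new URL(engineURL).protocol.replace(':', '');
  return ["https", "http", "ftp"].indexOf(scheme) >= 0;
}
canAddSearchProvider(uri); // false for the data: URI built above, so Firefox would reject it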
Related
I want to proxy only certain domains through my Chrome extension, but I need to do some checks on the current URL to decide whether to proxy it.
chrome.webRequest.onBeforeRequest.addListener(function (d) {
  chrome.proxy.settings.set({ value: getProxyConfig(d.url), scope: 'regular' }, function () {});
}, {
  urls: ["http://*/*", "https://*/*"]
}, ["blocking"]);
The function getProxyConfig(d.url) simply returns the appropriate ProxyConfig object (mode is direct or fixed_servers) based on the URL. It makes no external calls; it just fetches a list of domains from local storage and compares against it.
What is the problem?
Chrome tries to proxy some URLs that should not have been proxied, leading to ERR_TUNNEL_CONNECTION_FAILED, because the proxy only allows specific domains to be proxied. If I log the getProxyConfig(d.url) output to the console for URLs that are failing, I see the mode as direct, as expected. Note: an HTML page may contain both links that must and must not be proxied.
chrome.proxy.settings.set is asynchronous, so I am thinking that maybe chrome.webRequest.onBeforeRequest finishes executing before chrome.proxy.settings.set takes effect.
You cannot modify the proxy on the fly while a request is being made. You have to create a custom PAC script, where the script decides whether to proxy a URL or not:
const config = {
  mode: "pac_script",
  pacScript: {
    data: "function FindProxyForURL(url, host) {\n" +
          "  if (host == 'foobar.com')\n" +
          "    return 'PROXY blackhole:80';\n" +
          "  return 'DIRECT';\n" +
          "}"
  }
};

chrome.proxy.settings.set(
  { value: config, scope: 'regular' },
  function () {}
);
Regarding chrome.webRequest.onBeforeRequest: it is called AFTER the proxy has been set, which is why you get an isProxy attribute there.
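Since the question keeps its domain list in local storage, one option is to regenerate the PAC script whenever that list changes. A minimal sketch, assuming a hypothetical proxiedDomains key in chrome.storage and a placeholder proxy address myproxy:8080:

// Rebuild the PAC script from the stored domain list; the PAC function then
// decides per URL synchronously, with no race against in-flight requests.
chrome.storage.local.get('proxiedDomains', function (items) {
  var domains = items.proxiedDomains || [];
  var pacBody =
    "function FindProxyForURL(url, host) {\n" +
    "  var proxied = " + JSON.stringify(domains) + ";\n" +
    "  if (proxied.indexOf(host) >= 0)\n" +
    "    return 'PROXY myproxy:8080';\n" +
    "  return 'DIRECT';\n" +
    "}";
  chrome.proxy.settings.set(
    { value: { mode: 'pac_script', pacScript: { data: pacBody } }, scope: 'regular' },
    function () {}
  );
});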
For example, it needs to call a web service hosted with SSL.
If it can, how do I pass the client certificate?
Thanks a lot!!
WinJS.xhr({
    type: "GET",
    url: "https://localhost:442/WebService1.asmx?op=Login"
}).then(function success(res) {
    var debug1 = res.responseText + res.responseURL;
}, function error(err) {
    var debug2 = err.responseText + err.responseURL;
}, function completed(result) {
    // Note: the third argument to then() is the WinJS progress handler, not a
    // completion handler, which is why it is hit while the status is still 0.
    if (result.status === 200) {
        // do something
    }
});
The debugger jumps to the 'completed(result)' function, but the status code is 0. Even if I change the URL to another HTTPS site (e.g. https://www.w3.org), the result is the same.
------------- Update 1 ---------------------
If it were C#, I could use the following code to pass the client certificate. However, if I want to change the original WinJS.xhr call to HttpClient, a simple copy & paste does not work, as a .js file does not understand all of the syntax:
var certQuery = new CertificateQuery();
var cert = (await CertificateStores.FindAllAsync(certQuery)).FirstOrDefault(c=>c.Issuer.StartsWith("xxxx",StringComparison.CurrentCultureIgnoreCase));
var filter = new HttpBaseProtocolFilter();
if (cert != null)
{
    filter.ClientCertificate = cert;
    // Note: ChainValidationResult is not a flags enum, so add each value separately.
    filter.IgnorableServerCertificateErrors.Add(ChainValidationResult.Untrusted);
    filter.IgnorableServerCertificateErrors.Add(ChainValidationResult.InvalidName);
}
var hc = new Windows.Web.Http.HttpClient(filter);
var uri = new Windows.Foundation.Uri(url);
hc.getStringAsync(uri).done({.......});
E.g.:
1) How do I write 'using ....' in a JS file?
2) How do I use 'await' or 'FindAllAsync' in this line? Etc.:
var cert = (await CertificateStores.FindAllAsync(certQuery)).FirstOrDefault(c=>c.Issuer.StartsWith("xxxx",StringComparison.CurrentCultureIgnoreCase));
WinJS.xhr wraps XMLHttpRequest (https://msdn.microsoft.com/en-us/library/windows/apps/br229787.aspx) with a Promise-like interface (a WinJS Promise, not an ES6 Promise, but the concept is similar).
XMLHttpRequest has the withCredentials property which allows you to specify whether client-side credentials, including client-side certificates, should be sent or not - but there is no API that would allow you to specify which specific client-side certificate should be used.
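For example, a minimal sketch (note this only opts in to sending credentials; it cannot choose which certificate is used):

var xhr = new XMLHttpRequest();
xhr.open("GET", "https://localhost:442/WebService1.asmx?op=Login");
xhr.withCredentials = true; // send client-side credentials (cookies, client certificates) if the server requests them
xhr.send();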
Fortunately WinJS exposes the Windows.Web.Http.HttpClient type, which gives you more control over client authentication, including client-side certificates. Note that your UWP application must have the "Enterprise capability" to use the user's My certificate store; non-Enterprise UWP applications only have access to certificates in their Application Certificate Store:
https://blogs.windows.com/buildingapps/2015/11/23/demystifying-httpclient-apis-in-the-universal-windows-platform/#Dr3C9IMHv5pTPOrB.97
You must first add it to the app’s certificate store by following these instructions. Apps with enterprise capability can also use existing client certificates in the user’s ‘My’ store.
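To make that concrete for the two questions above: there is no using or await in a .js file here; namespaces are referenced through aliases, and the async WinRT calls are chained as promises (WinRT members are camelCased in the JS projection). A minimal sketch of the C# logic ported to WinJS, reusing the issuer prefix "xxxx" and the URL from the question:

// Find a client certificate by issuer prefix, then call the service with it.
var Certs = Windows.Security.Cryptography.Certificates;
var query = new Certs.CertificateQuery();
Certs.CertificateStores.findAllAsync(query).then(function (certs) {
  var cert = null;
  for (var i = 0; i < certs.length; i++) {
    if (certs[i].issuer.toLowerCase().indexOf("xxxx") === 0) { cert = certs[i]; break; }
  }
  var filter = new Windows.Web.Http.Filters.HttpBaseProtocolFilter();
  if (cert) {
    filter.clientCertificate = cert;
    // WinRT vectors are projected as JavaScript arrays, so push() works here.
    filter.ignorableServerCertificateErrors.push(Certs.ChainValidationResult.untrusted);
    filter.ignorableServerCertificateErrors.push(Certs.ChainValidationResult.invalidName);
  }
  var hc = new Windows.Web.Http.HttpClient(filter);
  return hc.getStringAsync(new Windows.Foundation.Uri("https://localhost:442/WebService1.asmx?op=Login"));
}).done(function (body) {
  // use the response body
}, function (err) {
  // handle the failure
});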
-------------------- UPDATE 2 ------------------------
I see now that what I am trying to accomplish is not possible with Chrome. But I am still curious: why is the policy stricter in Chrome than in, for example, Firefox? Or is it perhaps that Firefox doesn't actually make the call either, but JavaScript-wise it deems the call failed instead of blocked altogether?
---------------- UPDATE 1 ----------------------
The issue does indeed seem to be about calling HTTP from an HTTPS site; this error is produced in the Chrome console:
Mixed Content: The page at 'https://login.mysite.com/mp/quickstore1' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://localhost/biztv_local/video/video_check.php?video=253d01cb490c1cbaaa2b7dc031eaa9f5.mov&fullscreen=on'. This request has been blocked; the content must be served over HTTPS.
Then the question is why Firefox allows it, and whether there is a way to make Chrome allow it. It did indeed work fine until just a few months ago.
Original question:
I have some jQuery making an Ajax call over HTTP (the site making the call is loaded over HTTPS).
Moreover, the call from my HTTPS site is to a script on the localhost of the client's machine, but the file starts with:
<?php header('Access-Control-Allow-Origin: *'); ?>
So that's fine. A peculiar setup, you might say, but the client is actually a media player.
It has always worked fine before, and it still works fine in Firefox, but since about two months back it isn't working in Chrome.
Has there been a revision to Chrome's policies regarding this type of call? Or is there an error in my code below that Firefox manages to parse but Chrome doesn't?
The error only occurs when the file is NOT present on the localhost (i.e. when a regular web user visits this site with their own browser; naturally they won't have the file on their localhost, and most won't even have a localhost). So one theory is that since the file isn't there, the Access-Control-Allow-Origin: * header is never encountered, and therefore Chrome deems the call insecure or disallowed in its entirety and never completes it?
If so, is there an event handler I can attach to my jQuery.ajax method to catch that outcome instead? As of now, complete is never run if the file on localhost isn't there.
before: function (self) {
  var myself = this;
  var data = self.slides[self.nextSlide - 1].data;
  var html = myself.getHtml(data);
  $('#module_' + self.moduleId + '-slide_' + self.slideToCreate).html(html);

  // This is the fullscreen-always version of the video template
  var fullscreen = 'on';
  //console.log('runnin beforeSlide method for a video template');

  var videoCallStringBase = "http://localhost/biztv_local/video/video_check.php?"; // to call the mediaplayer's localhost
  var videoContent = 'video=' + data['filename_machine'] + '&fullscreen=' + fullscreen;
  var videoCallString = videoCallStringBase + videoContent;

  // TODO: works when file video_check.php is found, but if it isn't, it will wait for a video to play. It should skip then as well...
  // UPDATE: Isn't this fixed already? Debug once env. is set up
  console.log('checking for ' + videoCallString);

  jQuery.ajax({
    url: videoCallString,
    success: function (result) {
      // ...if it isn't, we can't play back the video, so skip to the next slide
      if (result != 1) {
        console.log('found no video_check on localhost so skip slide ' + self.nextSlide);
        self.skip();
      } else {
        // success, proceed as normal
        self.beforeComplete();
      }
    },
    complete: function (xhr, data) {
      if (xhr.status != 200) {
        // we could not find the check-video file on localhost, so skip the next slide
        console.log('found no video_check on localhost so skip slide ' + self.nextSlide);
        self.skip();
      } else {
        // success, proceed as normal
        self.beforeComplete();
      }
    }, // above would cause a double-slide-skip, I think. Removed for now, that should be trapped by the fail clause anyways.
    async: true
  });
}
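For what it's worth, jQuery does expose such a handler: the error callback (or the .fail() method of the returned promise) fires for network-level failures, including requests Chrome blocks as mixed content, typically with xhr.status === 0. A minimal sketch reusing videoCallString and self from the code above:

jQuery.ajax({
  url: videoCallString
}).done(function (result) {
  // reached localhost, so proceed as before
  if (result != 1) { self.skip(); } else { self.beforeComplete(); }
}).fail(function (xhr) {
  // fires when the request is blocked or localhost is unreachable (status is usually 0)
  console.log('video_check unreachable (status ' + xhr.status + '), skipping slide ' + self.nextSlide);
  self.skip();
});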
I am trying to read and write a cell in a Google spreadsheet with HTTP requests from JavaScript. The "read" operation works, but the "write" operation fails.
Please help point out which part of my "write" code I should modify.
The write example I followed is from https://developers.google.com/google-apps/spreadsheets/, and it is not working.
My read operation (this is working):
http_request.onreadystatechange = function() {
process_cellrw(http_request);
};
http_request.open('GET',"https://spreadsheets.google.com/feeds/cells/0Aqed....RHdGc/od6/private/full/R1C1", true);
http_request.setRequestHeader('Authorization','Bearer ' + strAccessToken);
http_request.send(null);
My write operation (this is not working):
var testxml = [
  '<entry xmlns="http://www.w3.org/2005/Atom" xmlns:gs="http://schemas.google.com/spreadsheets/2006">',
  '<id>https://spreadsheets.google.com/feeds/cells/0Aqed....RHdGc/od6/private/full/R1C1</id>',
  '<link rel="edit" type="application/atom+xml" href="https://spreadsheets.google.com/feeds/cells/0Aqed....RHdGc/od6/private/full/R1C2/9zlgi"/>',
  '<gs:cell row="1" col="1" inputValue="xxxx"/>',
  '</entry>'
].join('');
http_request.onreadystatechange = function() {
process_cellrw();
};
http_request.open('PUT',"https://spreadsheets.google.com/feeds/cells/0Aqed....RHdGc/od6/private/full/R1C2/9zlgi");
http_request.setRequestHeader('Authorization','Bearer ' + strAccessToken);
http_request.setRequestHeader('GData-Version','3.0');
http_request.setRequestHeader('Content-Type', 'application/atom+xml');
http_request.setRequestHeader('If-Match','*');
http_request.setRequestHeader('Content-Length', testxml.length.toString()); // note: Content-Length is a forbidden XHR header, so browsers ignore this
http_request.send(testxml);
The write operation always receives http_request.status = 0 in the callback function process_cellrw().
My environment is Windows 7 + the Chrome browser. I also tested it on Android + WebKit; it still fails.
I also tested adding a row via the list feed; that also fails, receiving http_request.status = 0.
I know this doesn't answer your question, but I would open up the Chrome Developer Tools, go to "Network", and inspect the response from Google for the API call. It may contain headers that explain what failed...
I found the root cause: cross-domain XMLHttpRequest POST/PUT requests are not supported by docs.google.com and spreadsheets.google.com.
A cross-domain XMLHttpRequest POST/PUT first sends an HTTP OPTIONS (preflight) request to the resource on the other domain, to determine whether the actual request is safe to send. But docs.google.com and spreadsheets.google.com always reply "404 Not Found" to this request. That's why I always received http_request.status = 0 in the callback function process_cellrw().
One solution is to route the request through server-side code that allows cross-domain HTTP requests, such as a PHP script.
Another solution is to implement the write operation with the UrlFetchApp function in Google Apps Script to send the HTTP PUT request, and then use an XMLHttpRequest GET to trigger that Apps Script.
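A minimal sketch of that second approach, assuming the script is deployed as a web app (doGet is the Apps Script entry point for GET requests), that the browser passes the cell XML in a hypothetical xml parameter, and that authorization is handled inside the script (omitted here); the feed URL is the elided one from the question:

// Apps Script side: receive a GET from the browser and forward the cell
// update as an HTTP PUT to the cells feed via UrlFetchApp.
function doGet(e) {
  var response = UrlFetchApp.fetch(
    'https://spreadsheets.google.com/feeds/cells/0Aqed....RHdGc/od6/private/full/R1C2/9zlgi',
    {
      method: 'put',
      contentType: 'application/atom+xml',
      headers: { 'GData-Version': '3.0', 'If-Match': '*' },
      payload: e.parameter.xml // the <entry> XML built in the browser
    }
  );
  return ContentService.createTextOutput(String(response.getResponseCode()));
}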
I am using WebSync3 (JavaScript API) and subscribing to approximately 9 different channels on one page. Firefox and Chrome have no problems, but IE9 throws an alert error stating "The request is too large for IE to process properly".
Unfortunately the internet has little to no information on this. So does anyone have any clues as to how to remedy this?
var client = fm.websync.client;
client.initialize({
  key: '********-****-****-****-************'
});
client.connect({
  autoDisconnect: true,
  onStreamFailure: function (args) {
    alert("Stream failure");
  },
  stayConnected: true
});
client.subscribe({
  channel: '/channel',
  onSuccess: function (args) {
    alert("Successfully connected to stream");
  },
  onFailure: function (args) {
    alert("Failed to connect to stream");
  },
  onSubscribersChange: function (args) {
    var change = args.change;
    for (var i = 0; i < change.clients.length; i++) {
      var changeClient = change.clients[i];
      if (change.type == 'subscribe') {
        // someone subscribed to the channel
      } else {
        // someone unsubscribed from the channel
      }
    }
  },
  onReceive: function (args) {
    var text = args.data.text.split("=")[1]; // 'var' added so text doesn't leak as a global
    if (text != "status" && text != "dummytext") {
      //receiveUpdates(id, serial_number, args.data.text);
      var update = eval('(' + args.data.text + ')'); // parses the payload; JSON.parse would be safer
    }
  }
});
This error occurs when WebSync is using the JSON-P protocol for transfers. This happens mostly in IE, in cross-domain environments, meaning WebSync is on a different domain than the one your web page is served from, so IE won't make regular XHR requests, for security reasons.
JSON-P basically encodes the upstream data (your 9 channel subscriptions) as a URL-encoded string that is tacked onto a regular request to the server. The server is supposed to interpret that URL-encoded string and send back the response as a JavaScript block that gets executed by the page.
This works fine, except that IE also limits the overall request URL of an HTTP request to roughly 2 KB. So if you pack too much into a single request to WebSync, you may exceed this 2 KB upstream limit.
The easiest solution is to either split up your WebSync requests into smaller pieces (i.e. subscribe to only a few channels at a time in JavaScript, as sketched below), or to subscribe to one "master channel" and then program a WebSync BeforeSubscribe event that watches for that channel and rewrites the subscription channel list.
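A minimal sketch of the first option, chaining subscriptions one at a time via onSuccess so that each JSON-P request URL stays well under the ~2 KB limit (the channel names are illustrative):

// Subscribe sequentially instead of packing all nine channels into one request.
var channels = ['/ch1', '/ch2', '/ch3', '/ch4', '/ch5', '/ch6', '/ch7', '/ch8', '/ch9'];
function subscribeNext(i) {
  if (i >= channels.length) return;
  client.subscribe({
    channel: channels[i],
    onSuccess: function (args) { subscribeNext(i + 1); },
    onFailure: function (args) { subscribeNext(i + 1); },
    onReceive: function (args) { /* handle as in the original code */ }
  });
}
subscribeNext(0);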
I suspect, because you have a key in your example source above, that you are using WebSync On-Demand? If that's the case, the only way to make a BeforeSubscribe event handler is to create a WebSync proxy.
So for the moment, since everyone else is stumped by this question as well, I put a trap in my PHP to not even load this JavaScript if the browser is Internet Destroyer (uhh, I mean Internet Explorer). Maybe a solution will come in the future, though.