Detecting applicationCache viability of remote resource - javascript

I am trying to determine whether a cache (as obtained via applicationCache and an HTML5 cache manifest) is available for a resource located on a different domain (local file system vs. WWW).
The cache-checking resource (a gateway mechanism, if you will) is located on the local filesystem and is loaded via a webview. This is a requirement that I cannot work around.
Until recently, this local gateway file would check whether the device was online and redirect to the remote resource using window.location; if the device was not online, it would display a locally packaged graphic that essentially said, "You must connect to the internet to use this feature."
However, I just recently implemented offline support on that remote resource. It works. If we have the gateway file just redirect to the remote resource whilst offline, it'll load the remote resource from cache.
There's a possibility that a user may try to use the device while offline when they have not yet accessed the resource while online, so logic needs to be placed in the gateway code to test whether the cache exists. I am running into cross-domain issues, which I expected, but I am not sure how to go about fixing them.
Here is the code I have tried:
if (window.navigator.onLine === false) {
    // See if we can reach the content (if it's cached, we'll get HTML; otherwise the request fails)
    cacheCheck = $.ajax(contentLoc, {async: false});
    if (cacheCheck.status === 0) { // no cache available :(
        // The request failed, so we have no cache at all. Just show the offline graphic.
        $("#launchpage").addClass("offline");
    } else { // we have a cache :)
        redirect();
    }
} else {
    redirect();
}
When I was writing it, I was under the naive hope that $.ajax() would fetch the cached version (if it existed) and I could just test the returned object to see whether the status code was an error code.
However, this does not work: the call throws "Error: NETWORK_ERR: XMLHttpRequest Exception 101".
Is there any other method that I can use to determine whether or not it is safe to redirect? It's a requirement that I display a local image if a redirect would fail (due to no cache).

I have figured out a workaround. If I inject an iframe pointing to the remote resource and check whether that iframe loads HTML (specifically, whether the <html> tag contains a manifest attribute), I can assume the cache exists and perform a redirect.
If I don't get what I'm expecting, the error graphic displays. In my implementation, though, the graphic is always shown briefly when offline, since I have to wait for the iframe to load asynchronously.
Here is example code to make it work:
if (window.navigator.onLine === false) {
    // We're offline, so show the offline graphic in case the test below fails (i.e., no cache available)
    $("#launchpage").addClass("offline");
    // Create a hidden iframe pointing to the content. If it loads, we have a cache.
    $('body').append('<iframe id="offlineTest" src="' + contentLoc + '" style="display:none;" />');
    $('#offlineTest').bind('load', function() {
        // See if the resulting <html> tag has a manifest attribute.
        manifest = $("#offlineTest").contents().find('html').attr('manifest');
        if (manifest !== undefined) { // cache is available
            redirect();
        }
    });
} else {
    redirect();
}
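Since the iframe load is asynchronous, the offline graphic flashes even when a cache exists. One way to tighten this up is to pair the load handler with a timeout, so the page commits to the offline graphic only after the iframe check has had a chance to finish. This is a rough sketch, not the original code: the `CHECK_TIMEOUT_MS` constant and the `hasManifest` helper are my own assumptions.

```javascript
// Decide whether the iframe's <html manifest="..."> attribute indicates a usable cache.
// A missing or empty attribute means no cache manifest was served.
function hasManifest(manifestAttr) {
    return typeof manifestAttr === "string" && manifestAttr.length > 0;
}

// Hypothetical timeout (ms) before giving up on the hidden iframe.
var CHECK_TIMEOUT_MS = 5000;

function checkOfflineCache(contentLoc, redirect, showOfflineGraphic) {
    var settled = false;
    var timer = setTimeout(function () {
        settled = true;             // iframe never loaded: fall back to the offline graphic
        showOfflineGraphic();
    }, CHECK_TIMEOUT_MS);

    $('body').append('<iframe id="offlineTest" src="' + contentLoc + '" style="display:none;"></iframe>');
    $('#offlineTest').on('load', function () {
        if (settled) { return; }
        clearTimeout(timer);
        var manifestAttr = $('#offlineTest').contents().find('html').attr('manifest');
        if (hasManifest(manifestAttr)) {
            redirect();             // cached copy exists; safe to navigate
        } else {
            showOfflineGraphic();
        }
    });
}
```

With this shape, the graphic only appears once the check has actually failed or timed out, rather than unconditionally while waiting.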

Related

Javascript: Querying clipboard permissions on Firefox does not work

I'm trying to modify the content of the clipboard by executing the "copy" command on a focused DOM element. However, the new content comes from the server, and arrives from a websocket, which is then processed in a callback that does not come from direct user interaction.
Because it wasn't triggered by the user, it is not allowed to do such a thing as modifying the clipboard content, as specified on Firefox's MDN website. The error message is:
document.execCommand(‘cut’/‘copy’) was denied because it was not
called from inside a short running user-generated event handler.
To overcome this issue, the same page suggests requesting permission from the browser through navigator.permissions.query():
navigator.permissions.query({name: "clipboard-write"}).then(result => {
    if (result.state == "granted" || result.state == "prompt") {
        /* write to the clipboard now */
    }
});
However, throughout the articles they use different names for the permissions:
clipboard-write
clipboard-read
clipboardWrite
clipboardRead
Within the same site, the permissions article shows a browser compatibility table, where it says Firefox supports clipboardWrite from version 51 and clipboardRead from version 54.
The problem is that none of these permission names works on Firefox (I'm using Firefox 63). The query callback is never called. I have tried all four permission names without any luck.
To make sure the mechanism was working, I tested other permissions, such as notifications, which worked flawlessly (it showed "prompt"):
navigator.permissions.query({name: "notifications"}).then(result => {
    alert(result.state)
});
So my question is: am I doing something wrong when requesting the permissions, or have these permissions changed somehow?
It is intentional, i.e. by design in Firefox. They chose not to expose this API to web content, only to browser extensions.
See here and more specifically here for your reference:
We do not intend to expose the ability to read from the clipboard (the Clipboard.read and Clipboard.readText APIs) to the web at this time. These APIs will only be usable by extensions...
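Given that Firefox does not expose these permission names to web content, a practical approach is to feature-detect the asynchronous Clipboard API instead of querying permissions, and fall back to a user-gesture-driven copy when it is absent. A minimal sketch; the helper names are mine, and the check is deliberately written against a navigator-like parameter so it can be exercised without a browser:

```javascript
// Returns true when the page can call navigator.clipboard.writeText directly.
// Takes the navigator object as a parameter to keep the check testable.
function canUseAsyncClipboard(nav) {
    return !!(nav && nav.clipboard && typeof nav.clipboard.writeText === "function");
}

function copyText(text, nav) {
    if (canUseAsyncClipboard(nav)) {
        // Browsers that expose the async API resolve this promise when the
        // write succeeds, subject to their own permission model.
        return nav.clipboard.writeText(text);
    }
    // Fallback path: document.execCommand('copy') only works inside a
    // short-running user-generated event handler, which is exactly the
    // restriction the question runs into for websocket-driven updates.
    return Promise.reject(new Error("Async clipboard unavailable; copy must be user-initiated"));
}
```

This does not remove the Firefox restriction; it just lets the code take the async path where it exists and degrade predictably elsewhere.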

How to detect if a web page is running from a website or local file system?

This question has been asked before, but none of the provided answers are correct. I am not allowed to comment on the original question (or answers), so I am creating a new question as has been suggested to me.
How to detect if a web page is running from a website or local file system
I need to detect if a user has accessed a specific page through a Safari Web Archive, or by going to the proper web url.
None of the answers on the linked question work with Safari webarchives.
The accepted answer was this :
switch (window.location.protocol) {
    case 'http:':
    case 'https:':
        // remote file over http or https
        break;
    case 'file:':
        // local file
        break;
    default:
        // some other protocol
}
However, for some reason, Safari webarchive files seem to behave as if they are being accessed remotely on the server. When testing the location protocol, it always returns http:, never file:.
The only thing different inside a Safari webarchive seems to be the MIME type of the file itself, 'application/x-webarchive'. But there seems to be no reliable way to detect the MIME type of the current page.
I'd love to find a proper solution to detect a local page from a remote accessed page.
I know this question is super old, but I also needed a solution for this. After not finding a good answer anywhere, this is what I ended up coming up with after a lot of testing.
jQuery(document).ready(function () {
    // Check if the page is being loaded from a local copy
    if (jQuery('body').hasClass('localProtection')) {
        window.document.location = 'https://example.com/somepage/';
        return;
    } else {
        jQuery('body').addClass('localProtection');
    }
});
When the page is initially loaded from the remote site, a class of "localProtection" is added to the body element. If the page is then saved locally by the browser, this class is saved along with the body element. Upon loading the page, we check if this class is already present and take an action -- in my case, redirect back to the original site. This is similar to how an eFUSE prevents downgrading of firmware on some devices.
This worked on Safari, Chrome, and Firefox for me.

Service Worker throwing errors when fetching MP3 audio

I would like my web app to be promoted for Add to Home Screen for users on Android+Chrome. (inspired by : Chromium Blog entry)
To do this I need a Service Worker running, even a dummy one. (Chrome needs the Service Worker as proof that I'm serious about web apps)
So I've created a dummy Service Worker with no content. It gets served with the correct no-cache headers, served over HTTPS, and is scoped to the whole domain.
Things generally work; however, every time I try to create an audio element on the fly:
jQuery( '<audio><source src="/beep.mp3" type="audio/mpeg"></source></audio>' );
...my console shows some unhappiness (taken from Chrome Canary for better messaging from the service worker thread, but basically the same output appears in Chrome current):
Mixed Content: The page at 'https://my.domain.com/some/page' was loaded over HTTPS, but requested an insecure video ''. This content should also be served over HTTPS.
GET https://my.domain.com/beep.mp3 400 (Service Worker Fallback Required)
I suppose it's important to note that, obviously, I'm not retrieving the resource directly, just creating the element and letting the browser retrieve the MP3.
The MP3 does actually get fetched (I am able to run the .play() method on the audio element). It's just that the errors in my console log are piling up, which makes me suspicious of how reliable this approach is. Also, incidentally, in Canary (but not current) the failure changes my "HTTPS lock" indicator from green to "warning" (so, a future problem).
The audio source is from the same domain as the page, and both are HTTPS. So the "Mixed Content" message from the service worker thread is strange; it references a video with '' as the url.
Question : Am I doing something wrong or is this a Chrome bug? Do I need more than a dummy (empty) service worker? If I'm doing something wrong, I would like to find a best-practice/long-term type solution rather than hack something together, but I'll take what I can get. ;)
It seems to be a bug. This is the issue on Google Code:
https://code.google.com/p/chromium/issues/detail?id=477685
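Until that bug is resolved, one thing worth trying is giving the "dummy" worker an explicit pass-through fetch handler instead of leaving the file empty, so media requests are answered by the worker itself rather than taking the "Service Worker Fallback" path. This is an experimental sketch, not a guaranteed fix (behavior varied between Chrome versions); the `shouldPassThrough` helper is my own addition:

```javascript
// sw.js - pass-through service worker sketch.

// Decide whether a request should simply be forwarded to the network.
// Here every request with a URL qualifies; a real worker might special-case media.
function shouldPassThrough(requestUrl) {
    return typeof requestUrl === 'string' && requestUrl.length > 0;
}

// The typeof guard lets this file be parsed outside a worker context too.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
    self.addEventListener('fetch', function (event) {
        if (shouldPassThrough(event.request.url)) {
            // Answer the request from the worker by forwarding it verbatim.
            event.respondWith(fetch(event.request));
        }
    });
}
```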

How can I handle HTTP error responses when using an iframe file upload?

I am using the following dirty workaround code to simulate an ajax file upload. This works fine, but when I set maxAllowedContentLength in web.config, my iframe loads 'normally' but with an error message as content:
dataAccess.submitAjaxPostFileRequest = function (completeFunction) {
    $("#userProfileForm").get(0).setAttribute("action", $.acme.resource.links.editProfilePictureUrl);
    var hasUploaded = false;
    function uploadImageComplete() {
        if (hasUploaded === true) {
            return;
        }
        var responseObject = JSON.parse($("#upload_iframe").contents().find("pre")[0].innerText);
        completeFunction(responseObject);
        hasUploaded = true;
    }
    $("#upload_iframe").load(function () {
        uploadImageComplete();
    });
    $("#userProfileForm")[0].submit();
};
In my Chrome console, I can see
POST http://acmeHost:57810/Profile/UploadProfilePicture/ 404 (Not Found)
I would much prefer to detect this error response in my code over the risky business of parsing the iframe content and guessing there was an error. For 'closer-to-home' errors, I have code that sends a JSON response, but for maxAllowedContentLength, IIS sends a 404.13 long before my code is ever hit.
There is not much you can do if you have no control over the error. If the submission target is in the same domain as the submitter and you are not limited by the SOP, you can try to access the content of the iframe and figure out whether it is showing a success message or an error message. However, this is a very bad strategy.
Why an IFRAME? It is a pain.
If you want to upload files without the page flickering or transitioning, you can use the JS File API: File API File Upload - Read XMLHttpRequest in ASP.NET MVC
The support is very good: http://caniuse.com/#feat=filereader
For old browsers that do not support the File API, just provide a normal form POST. Not pretty... but OK for old browsers.
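The File API approach recommended above can be sketched roughly as follows. The upload URL, the element, and the helper names are placeholders; the point is that an XHR upload surfaces HTTP error statuses (like the 404.13 from maxAllowedContentLength) as xhr.status instead of opaque iframe content:

```javascript
// Feature detection: true when the browser exposes the pieces needed
// for a script-driven upload (in the spirit of the caniuse table above).
function supportsFileApi(win) {
    return !!(win && win.File && win.FileReader && win.FormData && win.XMLHttpRequest);
}

// Upload the first file of an <input type="file"> via XHR, so error
// responses can be inspected directly rather than guessed from iframe HTML.
function uploadFile(input, uploadUrl, onDone) {
    var data = new FormData();
    data.append('file', input.files[0]);

    var xhr = new XMLHttpRequest();
    xhr.open('POST', uploadUrl);
    xhr.onload = function () {
        // Unlike the iframe trick, the status code is available here.
        onDone(xhr.status, xhr.responseText);
    };
    xhr.onerror = function () {
        onDone(0, null); // network-level failure
    };
    xhr.send(data);
}
```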
UPDATE
Since there is no chance for you to use that API... Years ago I was in the same situation, and the outcome was not straightforward. Basically, I created an upload ticket system where, to upload a file, you had to:
Create a ticket via POST /newupload/, which would return a GUID.
Create an iframe pointing to /newupload/dialog/<guid> that would show the file submission form pointing to POST /newupload/<guid>/file.
Serve the upload status at GET /newupload/<guid>/status.
Check the status of the upload from the submitter (the iframe's outer container) every 500ms.
When the upload starts, hide the iframe or show something fancy like an endless progress bar.
When the upload operation is completed or faulted, remove the iframe and let the user know how it went.
When I moved to the FileReader API... it was a good day.
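The polling step of that ticket system could look roughly like this. The endpoint path and status values mirror the description above, but the exact names (and the `#uploadFrame` id) are assumptions:

```javascript
// True when an upload status means polling can stop.
function isTerminalStatus(status) {
    return status === 'completed' || status === 'faulted';
}

// Poll GET /newupload/<guid>/status every 500ms until the upload
// finishes, then remove the iframe and notify the caller.
function pollUploadStatus(guid, onFinished) {
    var timer = setInterval(function () {
        $.get('/newupload/' + guid + '/status', function (status) {
            if (isTerminalStatus(status)) {
                clearInterval(timer);
                $('#uploadFrame').remove();  // hypothetical iframe id
                onFinished(status);
            }
        });
    }, 500);
}
```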

Detect and log when external JavaScript or CSS resources fail to load

I have multiple <head> references to external js and css resources. Mostly, these are for things like third party analytics, etc. From time to time (anecdotally), these resources fail to load, often resulting in browser timeouts. Is it possible to detect and log on the server when external JavaScript or CSS resources fail to load?
I was considering some type of lazy loading mechanism that when, upon failure, a special URL would be called to log this failure. Any suggestions out there?
What I think happens:
The user hits our page and the server side processes successfully and serves the page
On the client side, the HTML header tries to connect to our 3rd party integration partners, usually by a javascript include that starts with "http://www.someothercompany.com...".
The other company cannot handle our load or has shitty up-time, and so the connection fails.
The user sees a generic IE Page Not Found, not one from our server.
So even though my site was up and everything else is running fine, just because this one call out to the third party servers failed, one in the HTML page header, we get a whole failure to launch.
If your app/page is dependent on JS, you can load the content with JS; I know it sounds confusing. When loading these with JS, you can attach callbacks that let you enable only the functionality of the content that actually loaded, without having to worry about what didn't load.
var script = document.createElement("script");
script.type = "text/javascript";
script.src = "http://domain.com/somefile.js";
script.onload = CallBackForAfterFileLoaded;
document.body.appendChild(script);

function CallBackForAfterFileLoaded(e) {
    // Do your magic here...
}
I usually make this a bit more complex by having arrays of JS files that are dependent on each other, and if they don't load I enter an error state.
I forgot to mention: obviously I am just showing how to create a JS script tag; you would have to create your own method for the other types of files you want to load.
Hope maybe that helps, cheers
You can look for the presence of an object in JavaScript, e.g. to see if jQuery is loaded or not...
if (typeof jQuery !== 'function') {
    // Was not loaded.
}
You could also check for CSS styles missing, for example, if you know a certain CSS file sets the background colour to #000.
if ($('body').css('backgroundColor') !== 'rgb(0, 0, 0)') {
    // Was not loaded.
}
When these fail, you can make an XHR to the server to log these failings.
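That reporting step can be sketched by combining an error handler on the injected tag with a simple GET beacon. The `/log-load-failure` endpoint is a placeholder, not a real URL:

```javascript
// Build the logging URL for a failed resource (placeholder endpoint).
function buildFailureLogUrl(resourceUrl) {
    return '/log-load-failure?src=' + encodeURIComponent(resourceUrl);
}

function loadScriptWithLogging(src, onload) {
    var script = document.createElement('script');
    script.type = 'text/javascript';
    script.src = src;
    script.onload = onload;
    script.onerror = function () {
        // Fire-and-forget report to the server; an Image GET beacon
        // avoids CORS concerns for same-origin logging endpoints.
        new Image().src = buildFailureLogUrl(src);
    };
    document.body.appendChild(script);
}
```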
What about a ServiceWorker? We can use it to intercept all HTTP requests and inspect the response code to log when an external resource fails to load.
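A sketch of that ServiceWorker idea, assuming a worker registered at the site root; the `/log-resource-failure` endpoint and the `isFailure` helper are placeholders of mine:

```javascript
// True for responses worth reporting: network errors surface as
// status 0 (opaque/failed), and 4xx/5xx are HTTP-level failures.
function isFailure(status) {
    return status === 0 || status >= 400;
}

// The typeof guard lets this file be parsed outside a worker context.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
    self.addEventListener('fetch', function (event) {
        event.respondWith(
            fetch(event.request).then(function (response) {
                if (isFailure(response.status)) {
                    // Fire-and-forget report; placeholder endpoint.
                    fetch('/log-resource-failure?url=' + encodeURIComponent(event.request.url));
                }
                return response;
            }).catch(function (err) {
                fetch('/log-resource-failure?url=' + encodeURIComponent(event.request.url));
                throw err;
            })
        );
    });
}
```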
Make a hash of the JS file name and the session cookie, and send both the plain JS name and the hash. Server-side, compute the same hash: if they match, log the failure; if not, assume it's abuse.