Caching with service workers - Chrome & Android Webview - javascript

I work on an old application that used ApplicationCache to work offline. Because of its deprecation, we would like to move to Service Workers to achieve the same goal.
Firefox 67 works very well with my current service worker implementation: when I first access the application, every file listed as 'to-be-cached' is effectively downloaded and cached. The app can then be accessed offline.
Nevertheless, Chrome 74 and Android WebView (which appears to be based on Chrome 73, inside a Cordova app) have a slightly different behaviour. When I first access the application, a request per 'to-be-cached' file is put inside the cache. Navigating through the app works great while I'm online. But when I switch offline, only pages I have already accessed remain accessible.
Is this a bug or a feature? Whatever it is, is there any workaround?

Finally, here is what I have understood: Firefox compares cached resources by their URLs (strings). Thus, giving a list of URL strings is sufficient for Firefox to cache them and retrieve them later with a Request object (sent when browsing the web application).
Chrome apparently compares cached resources on some other value (I haven't managed to find which one), so giving a list of URL strings wasn't sufficient for Chrome: Request objects and URL strings were not recognized as the same resource.
Based on the W3C specification of the cache.addAll method, Chrome's behaviour should be the correct one, but Firefox's behaviour is simpler.
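For reference, here is a minimal sketch of the kind of worker this implies (hedged: the cache name and file list are placeholders, and the ignoreSearch/ignoreVary options are one possible way to relax Chrome's stricter matching, not a guaranteed fix):
// sw.js - CACHE_NAME and FILES_TO_CACHE are placeholders.
var CACHE_NAME = 'offline-v1';
var FILES_TO_CACHE = ['/', '/index.html', '/app.js', '/app.css'];

self.addEventListener('install', function(event) {
    // cache.addAll() fetches each listed URL and stores the pair.
    event.waitUntil(
        caches.open(CACHE_NAME).then(function(cache) {
            return cache.addAll(FILES_TO_CACHE);
        })
    );
});

self.addEventListener('fetch', function(event) {
    // cache.match() compares request URLs and, by default, the cached
    // response's Vary headers; relaxing both options loosens the
    // comparison so navigation Requests can match entries cached
    // from plain URL strings.
    event.respondWith(
        caches.match(event.request, { ignoreSearch: true, ignoreVary: true })
            .then(function(cached) {
                return cached || fetch(event.request);
            })
    );
});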

Related

Delphi TWebBrowser JavaScript Errors and Cookies

I have been stuck on this for a couple of weeks now, and this is a follow-on from the SO question Delphi REST Debugger Returns Error 429 Too Many Requests but Browser Returns JSON as Expected.
I wanted to get the content of a URL response using the TNetHTTPRequest and TNetHTTPClient components, but I was continually getting 429 "too many requests" errors. Using Firefox's Inspect Element to look at network and storage, I discovered that I needed to receive cookies and then send those cookies with my request. Unfortunately, one of the cookies essential to the website content seems to depend (I think) on the execution of JavaScript. I went back to first principles and dropped a TWebBrowser on a form (VCL), and sure enough the browser shows a JavaScript error "Expected Identifier".
When I use the TWebBrowser in FMX, it does not throw an error; it just does not return the website contents at all and remains blank. I need FMX as I will be in a cross-platform mobile environment.
The URL is https://shop.coles.com.au/a/national/home
I use Delphi Community Edition 10.3.3 Rio.
The URL returns perfectly in commercial browsers: Firefox, Safari, Chrome, and even CEF4Delphi. Unfortunately, I can't use CEF as I need cross-platform support.
I would like to know how to get the website content returned to the browser (or even better NetHTTPClient) without script errors and how to access the browsers current cookies.
Any help will be most appreciated.
Thanks,
John.
URL returns perfectly in commercial browsers ... without script errors and how to access the browsers current cookies
If you inspect the network traffic (F12 > Network, then request your URL) or use uMatrix (to block everything that doesn't belong to the domain by default), you'll see that the JS makes at least one XHR to amazonaws.com. Your HTTP transfer alone (as done by TNetHTTP*) works fine, and you get the same resource that every internet browser gets.
However, you don't operate on what you got (in contrast to the internet browser, which also automatically parses the HTML, sees the JS resources, and executes them). TWebBrowser does not do what you take for granted, most likely due to security settings (try to get an error console in there, preferably F12 again). You need to do the same: parse the HTML resource for JS URIs, request those, and execute what you get, while still providing the same cookie environment.
For executing JS you could use Chakra, mORMot, or BESEN. It's challenging at first, but the more you understand about HTTP (including cookies) and a JS engine, the more you'll see why "things work" in one situation and not in another. There's a reason why an internet browser is a very complex piece of software and not just a downloader.
As per this, forcing IE11 Quirks mode might already cure your problem when using TWebBrowser:
TBrowserEmulationAdjuster.SetBrowserEmulationDWORD(TBrowserEmulationAdjuster.IE11_Quirks);

chrome.storage.sync not syncing between browsers

I am writing a Chrome extension in which I would like some data to be synced across multiple computers. I was under the impression that if I was signed in with the same Gmail account in two separate Chrome browsers (one in a virtual machine running on the same computer), I should be able to use chrome.storage.sync.set in one browser and then retrieve that same data from the other using chrome.storage.sync.get.
I can retrieve the data using the get method from the same browser where I ran the set method (even from a separate incognito window), but the same get command in the other virtual machine returns an empty object.
Both browsers have the 'sync everything' option set in 'advanced sync settings'.
Both browsers have the same chrome version: 33.0.1750.117 m.
Here is my code for setting and getting:
chrome.storage.sync.set({'foo': 'bar'}, function() {});
chrome.storage.sync.get('foo', function(items) {
    console.dir(items);
});
As #abraham and #sowbug both alluded to, the problem was that unpacked extensions get assigned different extension IDs when they are loaded.
I found the solution here: https://developer.chrome.com/extensions/packaging#upload
It involves packing the extension from the extensions page in chrome, and then loading the resulting .crx file. Every time this .crx file is loaded it will have the same extension id.
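For anyone verifying the same symptom, here is a small sketch (hedged: the 'foo' key is just an example) that checks the write actually succeeded and watches for the synced value arriving on the other machine. Since chrome.storage.onChanged only fires across machines when both sides run the extension under the same ID, it doubles as a test for the packing fix above:
// Confirm the write succeeded locally.
chrome.storage.sync.set({'foo': 'bar'}, function() {
    if (chrome.runtime.lastError) {
        console.error('set failed: ' + chrome.runtime.lastError.message);
    }
});
// Fires in every profile running the extension under the same ID
// whenever a synced key changes; it never fires if the IDs differ.
chrome.storage.onChanged.addListener(function(changes, areaName) {
    if (areaName === 'sync' && changes.foo) {
        console.log('foo synced:', changes.foo.oldValue, '->', changes.foo.newValue);
    }
});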

Does Mobile Safari clean the DOM Application Cache - and when?

We're doing a web app using the DOMApplicationCache / cache manifest, and I'm wondering if Mobile Safari will at any point clean/clear the DOMApplicationCache for my project (e.g. if the website is not visited for a certain amount of time).
I'm also told that saving the web app to the home screen will sandbox its application cache, effectively avoiding any time constraints there might be on the DOMApplicationCache in "normal" Mobile Safari.
So does anybody know what the current situation is on this? Or could you maybe point me to a relevant resource?
Thanks!
It doesn't seem like it should be clearing the cache automatically, from what I can read in Apple's document about storing web content on the client: https://developer.apple.com/library/safari/#documentation/AppleApplications/Reference/SafariWebContent/Client-SideStorage/Client-SideStorage.html#//apple_ref/doc/uid/TP40002051-CH4-SW5
However, I would probably implement a measure to ensure that the cache is valid and exists using the status property of the applicationCache object. If you haven't already, check the class reference here: https://developer.apple.com/library/safari/#documentation/DataManagement/Reference/DOMApplicationCacheClassReference/DOMApplicationCache/DOMApplicationCache.html#//apple_ref/javascript/cl/DOMApplicationCache
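As a minimal sketch of that kind of check (the status constants come from the DOMApplicationCache interface itself; the logging is only illustrative):
var cache = window.applicationCache;
switch (cache.status) {
    case cache.UNCACHED:    // 0 - no manifest, or the cache was evicted
        console.warn('App is not cached; offline use will fail.');
        break;
    case cache.IDLE:        // 1 - the cache is current
        console.log('Cache is valid and up to date.');
        break;
    case cache.UPDATEREADY: // 4 - a newer cache has been downloaded
        cache.swapCache();  // switch to the new cache for the next load
        break;
}
// An 'error' event at startup can signal a failed manifest download
// or an evicted cache, which is worth logging in the wild.
cache.addEventListener('error', function() {
    console.error('Manifest fetch failed - offline or cache evicted?');
}, false);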

Do mobile web browsers and mobile web apps share the same localStorage

I am working on a site for mobile devices. The site is available through normal web browsers and also through an app which is just a browser shell that brings up the mobile site. In our efforts to speed up the loading of the site on mobile, we have reduced requests, made use of data URIs, etc. Recently we have started using localStorage to save styles and JavaScript data to the device.
Why you may ask?
In our testing, mobile browsers maintain their cache throughout their session and when the browser is closed and re-opened. The app maintains its cache as long as it is being used, but when it is closed and re-opened it re-requests everything, thus slowing down that initial load.
The problem is, we have styles and JavaScript that are specific to whether you are in the browser or in the app for a few small things. We've seen a few things break around these subtle differences, and my best theory is that localStorage is shared between the browser and the app, so a user who uses both the site and the app may have problems if the localStorage was set by one and needs something else for the other.
I can't find any documentation that confirms or denies this theory, and short of creating an app just to test it, I figured I'd ask if anyone has any ideas.
If you trust Apple...
Like cookies, storage objects are a shared resource common to web content served from the same domain. All pages from the same domain share the same local storage object. Frames and inline frames whose contents are from the same origin also share the same session storage object because they descend from the same window.
Because the storage resource is shared, scripts running in multiple page contexts can potentially modify the data stored in a storage object that is actively being scrutinized or modified by a script running on a different page. If your scripts do not notice these changes, you may not get the results you expect.
If you are populating your app with data from the same place as the web app, I would suspect there are some keys being modified by the other one. I know that using sessionStorage.clear() will wipe out keys if the web app and offline app load data from the same domain.
As Chiguireitor said, it depends on which mobile OS the user is using, but in my experience iOS 4 & 5 share the same localStorage whether you're accessing the mobile site through the Safari browser or as a home-screen web app. And of course if you package it with something like PhoneGap it acts as its own app, so its localStorage is not shared.
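If the storage really is shared, one workaround is to namespace the keys per context so the browser and the app shell stop clobbering each other's values. A hypothetical sketch (isAppShell() and the 'MyAppShell' token are assumptions, to be replaced with however your wrapper identifies itself):
// Assumed detection helper - replace with however your shell app
// identifies itself (an injected global, a custom user-agent token, etc.).
function isAppShell() {
    return navigator.userAgent.indexOf('MyAppShell') !== -1; // hypothetical token
}
// Prefix every key with the current context so the two never collide.
function contextKey(key) {
    return (isAppShell() ? 'app:' : 'web:') + key;
}
localStorage.setItem(contextKey('styles'), cachedCss); // cachedCss: your CSS payload
var styles = localStorage.getItem(contextKey('styles'));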

Replace remote JavaScript file with a local debugging copy using Greasemonkey or userscript

While debugging a client app that uses a Google backend, I have added some debugging versions of the functions and inserted them using the Chrome Developer Tools script editor.
However, there are a number of limitations with this approach. The first is that the editor doesn't always seem to work with de-minified files, and when the JS file is 35K lines long, this is a problem.
Another issue is that all the initialization done at load time uses the original "unpatched" functions, so this is not ideal.
I would like to replace the remote javascript.js file with my own local copy, presumably using some regex on the file name, or whatever strategy is suitable. I am happy to use either Firefox or Chrome, if one is easier than the other.
So basically, as #BrockAdams identified, there are a couple of solutions to this type of problem depending on the requirements, and they follow one of two methods:
1. The browser API switcharoo.
2. The proxy-based interception befiddlement.
The browser API switcharoo
Both Firefox and Chrome support browser extensions that can take advantage of platform-specific APIs to register event handlers for "onbeforeload" or "onBeforeRequest", in the case of Firefox and Chrome respectively. The Chrome APIs are currently experimental, hence these tools are likely to be better developed under Firefox.
Two tools that definitely do something like what is required are Adblock Plus and JSDeminifier, both of which have their source code available.
The key point for these two Firefox add-ons is that they intercept the web request before the browser gets its hands on it and operate on the other side of the HTTP/HTTPS encryption stage, hence they can see the decrypted response. However, as identified in the other post, they don't do the whole thing. Although JSDeminifier was very useful, I didn't find a Firefox plugin that does exactly what I wanted, but I can see from those plugins that it is possible with both Firefox and Chrome, even though neither does the trick as required out of the box.
The proxy-based interception befiddlement
This is definitely the better option in a plain HTTP environment. There is a whole bunch of proxies, such as Privoxy, Fiddler2, and the Charles Web HTTP proxy, and presumably some that I didn't look at specifically, such as Snort, that support filtering of some sort.
The simplest solution for me was FoxyProxy and Privoxy on Firefox: configure a user.action and a user.filter to detect the URL of the page, and then apply a filter which swaps out the original src tag for my own.
The HTTPS case: proxy vs. plugin
When the request is HTTPS, the proxy can't see the request URL or the response body, so it can't do the cool swapping stuff. However, there is one option available for those who like to mess with their browser, and that is the man-in-the-middle SSL proxy. The Charles Web HTTP proxy appears to be the main solution to this problem. Basically, the way it works is that when your browser makes a request to the remote HTTPS server, the SSL proxy intercepts the request and, from the IP address of the server, generates a server certificate on the fly, which it signs with its own root CA and sends back to the browser. The browser obviously complains about the self-signed cert, but here you can choose to install the SSL proxy's root CA cert into the browser, befuddling the browser and allowing the SSL proxy to man-in-the-middle and apply replacements and filters on the raw response body.
Alternative: roll your own Chrome extension
I decided to go with rolling my own Chrome extension, which I am planning to make available. Currently it is hardcoded to my own requirements, but it works pretty well, even for HTTPS requests, and another benefit is that a browser-plugin solution can be more tightly integrated with the browser developer tools.
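For anyone attempting the same, here is a minimal sketch of the swap using Chrome's blocking webRequest API (Manifest V2 era). The URLs are placeholders; the manifest needs the "webRequest" and "webRequestBlocking" permissions plus matching host permissions, and the bundled file must be listed under web_accessible_resources:
// background.js - redirect a remote script to a bundled debugging copy.
// REMOTE_JS and the local path are placeholders for your own files.
var REMOTE_JS = 'https://example.com/javascript.js';
var LOCAL_JS = chrome.runtime.getURL('debug/javascript.js');

chrome.webRequest.onBeforeRequest.addListener(
    function(details) {
        // Returning redirectUrl from a blocking listener swaps the
        // script the page receives for the local copy.
        return { redirectUrl: LOCAL_JS };
    },
    { urls: [REMOTE_JS], types: ['script'] },
    ['blocking']
);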
