I would like my web app to be promoted for Add to Home Screen for users on Android+Chrome (inspired by the Chromium Blog entry).
To do this I need a Service Worker running, even a dummy one. (Chrome needs the Service Worker as proof that I'm serious about web apps)
So I've created a dummy Service Worker with no content. It gets served with the correct no-cache headers, served over HTTPS, and is scoped to the whole domain.
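For completeness, the registration itself is the standard call; here is a minimal sketch (the /sw.js filename and the logging are just illustrative):
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')   // empty worker file at the domain root
    .then(function (reg) { console.log('SW registered with scope:', reg.scope); })
    .catch(function (err) { console.warn('SW registration failed:', err); });
}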
Things generally work; however, every time I try to create an audio element on the fly:
jQuery( '<audio><source src="/beep.mp3" type="audio/mpeg"></source></audio>' );
...my console shows some unhappiness (the output below is taken from Chrome Canary, which has better messaging from the service worker thread, but it is basically the same in current Chrome):
Mixed Content: The page at 'https://my.domain.com/some/page' was loaded over HTTPS, but requested an insecure video ''. This content should also be served over HTTPS.
GET https://my.domain.com/beep.mp3 400 (Service Worker Fallback Required)
I suppose it's important to note that, obviously, I'm not retrieving the resource directly, just creating the element and letting the browser retrieve the MP3.
The MP3 does actually get fetched (I am able to call the .play() method on the audio element); it's just that the errors piling up in my console log make me suspicious of how reliable this approach is. Also, incidentally, in Canary (but not current Chrome) the failure changes my HTTPS lock indicator from green to a warning (so, a future problem).
The audio source is from the same domain as the page, and both are HTTPS. So the "Mixed Content" message from the service worker thread is strange; it references a video with '' as the url.
Question: Am I doing something wrong, or is this a Chrome bug? Do I need more than a dummy (empty) service worker? If I'm doing something wrong, I would like to find a best-practice, long-term solution rather than hack something together, but I'll take what I can get. ;)
It seems to be a bug. This is the issue on Google Code:
https://code.google.com/p/chromium/issues/detail?id=477685
Related
How would I overwrite the response body for an image with a dynamic value in a Manifest V3 Chrome extension?
This overwrite would happen in the background, as per the Firefox example below, meaning no attaching debuggers or requiring users to press a button every time the page loads to modify the response.
I'm creating a web extension that stores an image in the extension's IndexedDB storage and then overrides the response body with that image on requests for a certain image. I have it working in a Manifest V2 extension in Firefox via the browser.webRequest.onBeforeRequest API with the following code, but browser.webRequest and MV2 are deprecated in Chrome. In MV3, browser.webRequest was replaced with browser.declarativeNetRequest, which doesn't have the same level of access: you can only redirect and modify headers, not the body.
Firefox-compatible example:
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Stream filter for this request; it lets the extension rewrite the response body.
    const request = browser.webRequest.filterResponseData(details.requestId);
    request.onstart = async () => {
      // "racetrack" holds the replacement image data (stored in the extension's IndexedDB).
      request.write(racetrack);
      request.disconnect();
    };
  },
  {
    urls: ['https://www.example.com/image.png'],
  },
  ['requestBody', 'blocking']
);
The Firefox solution is the only one that worked for me, albeit being exclusive to Firefox. I attempted to write a proof-of-concept userscript with xhook to modify the content of a DOM image element, but it didn't return the modified image as expected. Previously, I tried using a redirect to a data URI and to an external image; the redirect itself worked fine, but the website threw an error that it couldn't load the required resources.
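For reference, the closest I can see MV3 getting on its own is a declarativeNetRequest redirect, e.g. to an image packaged with the extension; that swaps the URL but cannot rewrite the body. A rough sketch (the rule ID, paths, and URL are illustrative, and the packaged file would need to be listed under web_accessible_resources):
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],   // avoid duplicate-ID errors if the rule is re-registered
  addRules: [{
    id: 1,
    priority: 1,
    action: {
      type: 'redirect',
      redirect: { extensionPath: '/images/replacement.png' },
    },
    condition: {
      urlFilter: 'https://www.example.com/image.png',
      resourceTypes: ['image'],
    },
  }],
});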
I'm guessing I'm going to have to write a content script that injects a Service Worker into the page (unexplored territory for me) and create a page rule that redirects, say, /extension-injected-sw.js to a web-available script. But I'm not sure how to pull that off, whether the service worker could still communicate with the extension, or whether that would work at all. Or is there a better way to do this that I'm overlooking?
Thank you for your time!
I integrated Sentry with my website a few days ago, and I noticed that sometimes users receive this error in their console:
ChunkLoadError: Loading chunk <CHUNK_NAME> failed.
(error: <WEBSITE_PATH>/<CHUNK_NAME>-<CHUNK_HASH>.js)
So I investigated the issue around the web and discovered some similar cases, but those were related to missing chunks caused by release updates during a session or by caching issues.
The main difference between those cases and mine is that the failed chunks are actually reachable from the browser, so the loading error does not depend on the post-release refresh of the chunk hashes but (I guess) on some network-related issue.
This assumption is reinforced by this stat: around 90% of the devices involved are mobile.
Finally, I come to the question: should I manage the issue in some way (e.g. retrying the chunk loading if it fails), or is it better to simply ignore it and let the user refresh manually?
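For context, the retry option I have in mind would be something like this hypothetical wrapper around webpack's dynamic import (the helper name, retry count, and component path are mine):
function importWithRetry(importFn, retries = 2) {
  return importFn().catch((err) => {
    if (retries > 0) return importWithRetry(importFn, retries - 1);
    throw err; // give up and let the error surface (e.g. to Sentry)
  });
}

// Usage with a lazily loaded component:
const Settings = React.lazy(() => importWithRetry(() => import('./Settings')));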
2021.09.28 edit:
A month later, the issue is still occurring, but I have not received any reports from users. I'm also constantly recording user sessions with Hotjar, but nothing relevant has been noticed so far.
I recently had a chat with Sentry support that helped me exclude the network-related hypothesis:
Our React SDK does not have an offline cache by default; when an error is captured it will be sent at that point. If the app is not able to connect to Sentry to send the event, it will be discarded and the SDK will not try to send it again.
Rodolfo from Sentry
I can confirm that the issue is quite unusual. I'll share another interesting stat: the users affected since the first occurrence are 882 out of 332,227 unique visitors (~0.26%), but I noticed that 90% of the occurrences are from iOS (not generic mobile devices, as I noted a month ago), so if I calculate the same proportion with iOS users only (794, i.e. 90% of 882, out of 128,444) we are close to 0.62%. Still small, but definitely more relevant on iOS.
This is most likely happening because the browser is caching your app's main HTML file, like index.html which serves the webpack bundles and manifest.
First I would ensure your web server is sending the correct HTTP response headers to not cache the app's index.html file (let's assume it is called that). If you are using NGINX, you can set the appropriate headers like this:
location ~* ^.+\.html$ {
    add_header Cache-Control "no-store, max-age=0";
}
This file should be relatively small for a SPA, so it is fine not to cache it, as long as you are caching all of the other assets the app needs, such as the JS and CSS. You should be using content hashes on your JS bundles to support cache busting on those. With this in place, visits to your site should always pick up the latest version of index.html with the latest assets, including the latest webpack manifest that records the chunk names.
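A sketch of the relevant webpack output options (assuming webpack 5 and otherwise default settings):
// webpack.config.js
module.exports = {
  output: {
    filename: '[name].[contenthash].js',       // entry bundles get a new name on content change
    chunkFilename: '[name].[contenthash].js',  // lazy-loaded chunks too
  },
};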
If you want to handle the Chunk Load Errors you could set up something like this:
import { ErrorBoundary } from '@sentry/react'

const App = ({ children }) => (
  <ErrorBoundary
    fallback={({ error, resetError }) => {
      if (/ChunkLoadError/.test(error.name)) {
        // If this happens during a release you can show a new version alert
        return <NewVersionAlert />
        // Alternatively, if you are certain the chunk is on your web server or CDN,
        // you can try reloading the page, but be careful of recursion
        // in case the chunk really is not available:
        // if (!localStorage.getItem('chunkErrorPageReloaded')) {
        //   localStorage.setItem('chunkErrorPageReloaded', 'true')
        //   window.location.reload()
        // }
      }
      return <ExceptionRedirect resetError={resetError} />
    }}
  >
    {children}
  </ErrorBoundary>
)
If you do decide to reload the page I would present a message to the user beforehand.
The chunk being reachable doesn't mean the user's browser can parse it; for example, the user's browser may be old while the chunk contains new syntax.
Webpack loads the chunk via JSONP: it inserts a <script> tag into <head>. If the JS chunk file is downloaded but cannot be parsed, a ChunkLoadError will be thrown.
You can reproduce it with the following steps: write some new syntax (the logical nullish assignment below), don't compile it down, and make sure it is output to a separate chunk.
const obj = {};
obj.sub ??= {};
Open your app in Chrome 79 or Safari 13.0. The full error message looks like this:
SyntaxError: Unexpected token '?' // 13.js:2
MAX RELOADS REACHED // chunk-load-handler.js:24
ChunkLoadError: Loading chunk 13 failed. // trackConsoleError.js:25
(missing: http://example.com/13.js)
I have a Chrome extension that adds a panel to the page in a floating iframe (on extension button click). Certain JS code is downloaded from a 3rd-party host and needs to be executed on that page. Obviously there's an XSS issue, and the extension needs to comply with the content security policy of that page.
Previously I had to deal with CSP directives that are delivered via response headers, and I was able to override those by setting a hook in chrome.webRequest.onHeadersReceived, where I added my host URL to the content-security-policy headers. It worked: headers were replaced, the new directives applied to the page, all good.
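The header-based override was along these lines (a simplified sketch, not my exact code; https://myhost.com stands in for my host):
chrome.webRequest.onHeadersReceived.addListener(
  function (details) {
    var headers = details.responseHeaders.map(function (h) {
      if (h.name.toLowerCase() === 'content-security-policy') {
        // Allow my host to be framed by appending it to frame-src
        h.value = h.value.replace('frame-src', 'frame-src https://myhost.com');
      }
      return h;
    });
    return { responseHeaders: headers };
  },
  { urls: ['<all_urls>'], types: ['main_frame', 'sub_frame'] },
  ['blocking', 'responseHeaders']
);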
Now I have discovered websites that set the CSP directives via a <meta> tag rather than response headers. For example, app pages in iTunes (https://itunes.apple.com/us/app/olympics/id808794344?mt=8) do this. There is also an additional meta tag named web-experience-app/config/environment (?) that somewhat duplicates the values set in the content of the tag with http-equiv="Content-Security-Policy".
This time I am trying to add my host name to the meta tag inside chrome.webNavigation.onCommitted or onCompleted event listeners (vanilla JS via chrome.tabs.executeScript). I also experimented with running the same code from webRequest's onCompleted listener (the last step of the lifecycle according to https://developer.mozilla.org/en-US/Add-ons/WebExtensions/API/webRequest).
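Roughly the shape of what I'm running (the replacement value is illustrative):
chrome.webNavigation.onCommitted.addListener(function (details) {
  chrome.tabs.executeScript(details.tabId, {
    frameId: details.frameId,
    code: `
      var meta = document.querySelector('meta[http-equiv="Content-Security-Policy"]');
      if (meta) {
        meta.content = meta.content.replace('frame-src', 'frame-src https://myhost.com');
      }
    `,
  });
});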
When I inspect the page after it loads, I can see the meta tags have changed. But when I click my extension to load the iframe and execute the JS, the console prints the following errors:
Refused to frame 'https://myhost.com' because it violates the following Content Security Policy directive: "frame-src 'self' *.apple.com itmss: itms-appss: itms-bookss: itms-itunesus: itms-messagess: itms-podcasts: itms-watchs: macappstores: musics: apple-musics:".
I.e. my tag update was not effective.
I have several questions. First, am I doing this right, and am I doing the update at the proper event? At what point in the page lifecycle is the content of the meta tags read? Will the change be applied automatically after the tag content is updated?
As of March 2018, Chromium doesn't allow extensions to modify the response body of a request. https://bugs.chromium.org/p/chromium/issues/detail?id=487422#c29
"WebRequest API: allow extensions to read response body" is a ticket from 2015. It is not on a path to being resolved and needs some work/help.
--
Firefox has a webRequest filter implementation that allows modifying the response body before the page's meta directives are applied.
https://developer.mozilla.org/en-US/Add-ons/WebExtensions/API/webRequest/filterResponseData
BUT, my problem is focused on fixing the Chrome extension. Maybe Chrome picks this up one day.
--
In general, Chrome's extension framework does not seem like a reliable foundation for building long-lived software, with browser vendors changing the rules frequently, reacting to newly discovered threats, and no up-to-date, supported cross-browser standard.
--
In my case, a possible way around this issue is to move all the JS code into the extension's source base, so that there is no 3rd-party host to fetch and execute JS from (and thus nothing to conflict with or violate the CSP rules). I haven't explored this yet, as I expected to reuse the code and interactive components I'm using in my main browser application.
I've been interested in the same things; here are a few aspects that could perhaps help:
The chrome.debugger extension API with the Fetch (or Network) domain can be used to modify the response body: https://chromedevtools.github.io/devtools-protocol/tot/Fetch/
This is an example implementation: https://github.com/mr-yt12/Debugger-API-Fetch-example-Chrome-Extension
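A minimal sketch of that approach (the tabId, URL pattern, and base64 body are illustrative):
var target = { tabId: tabId };

chrome.debugger.attach(target, '1.3', function () {
  chrome.debugger.sendCommand(target, 'Fetch.enable', {
    patterns: [{ urlPattern: 'https://www.example.com/image.png', requestStage: 'Response' }],
  });
});

chrome.debugger.onEvent.addListener(function (source, method, params) {
  if (method !== 'Fetch.requestPaused') return;
  chrome.debugger.sendCommand(source, 'Fetch.fulfillRequest', {
    requestId: params.requestId,
    responseCode: 200,
    responseHeaders: [{ name: 'Content-Type', value: 'image/png' }],
    body: base64Image, // base64-encoded replacement body, e.g. read from IndexedDB
  });
});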
However, I'm facing a problem with Fetch.requestPaused not firing on the first page load: Chrome Extension Debugger API, Fetch domain attaches/enables too late for the response body to be intercepted
I haven't found a solution to this yet, besides first redirecting the request to 'http://google.com/gen_204' and then updating the tab. But this creates flicker, and I'm also not sure whether it's possible to redirect the request like this with Manifest V3.
When using the debugger API, Chrome shows a warning bar at the top, which changes the page's size and doesn't go away (perhaps it disappears 5 seconds after the debugger is detached, and also if the user clicks "cancel"). This means it's mostly only suitable for personal use or for distribution as a developer-mode extension. Launching Chrome with the --silent-debugger-extension-api flag disables this warning.
I've tried injecting my script at document_start (when the meta tag is not yet created) and then using a MutationObserver (I also tried other methods) to wait for the meta tag and modify it before it is applied or fully created. Somehow it succeeded once or a few times, but that could be a coincidence or a misinterpretation of the results. Perhaps it's worth experimenting with.
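The document_start attempt looked roughly like this (the selector and replacement value are illustrative; in my tests it mostly seemed to lose the race with the parser):
var observer = new MutationObserver(function () {
  var meta = document.querySelector('meta[http-equiv="Content-Security-Policy"]');
  if (meta) {
    // Try to loosen frame-src before the policy is enforced
    meta.content = meta.content.replace('frame-src', 'frame-src https://myhost.com');
    observer.disconnect();
  }
});
observer.observe(document.documentElement, { childList: true, subtree: true });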
Another idea (which I don't think I got working, but which may be possible) is to call window.stop() at document_start and then rewrite the HTML content programmatically. This needs more research.
It seems that a meta-tag CSP is applied once the meta tag is created (or while it is being created), and there is then no way to cancel what has been applied. More research is needed on how to prevent it from applying, or how to modify it before it is fully created or applied.
I am developing a web app in which I am trying to use the HTML5 application cache.
I am running the application on Apache Tomcat 7. While the server is running everything is OK: the files download in Google Chrome and I get the cached or updateready event. But once I shut down the server and refresh the page, I get the error manifest fetch failed (-1).
How do I get past this error, and why does it occur?
My manifest file is as follows (sample.manifest):
CACHE MANIFEST
# version 4
CACHE:
css/styles.css
js/script.js
js/jquery-latest.js
js/jquery.validate.js
img/blue-line.png
img/main-img.png
img/logo.png
img/green-li.png
img/gline2.png
img/gline3.png
img/gline4.png
img/gline5.png
img/diversity-img.jpg
img/facebook32.png
img/mail40x32.png
img/main-img-298.png
img/ppl-img.jpg
img/twitter32.png
leavevbc.html
diversity.html
NETWORK:
*
I added the correct MIME type but I'm still getting the problem.
The manifest fetch failure is exactly what you have to expect if the server can't be reached: the manifest can't be loaded. It's a little confusing that this is reported as an error, but that's what the standard says. All you have to do is ignore the error and you should have an offline-cached web app.
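If the console noise bothers you, you can listen for the event yourself and deliberately ignore it; a minimal sketch:
window.applicationCache.addEventListener('error', function (e) {
  // Fires when the manifest fetch fails, e.g. while the server is unreachable.
  // The app will still be served from the application cache.
  console.log('AppCache manifest fetch failed (probably offline)');
}, false);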
In Chrome, inspect all your AppCache items. You may be surprised to see that what is inside your cached files is not what you put into them. I've run into this exact situation: I had a JavaScript file whose cached content was my FALLBACK: offline.html page. The WebKit cache loader has issues when the type of content it's loading is not what it expects. To me this is just wrong, but on the upside, it did reveal the problem. In my case, it looked at my JS file and choked when it saw the HTML markup at the top of the file.
If there are resources that must be fetched only when online, list them in the NETWORK: section.
To fix the current situation, do the following:
clear out your browser cache
change the comment at the top of your manifest file so that a new copy will be downloaded
fire up Chrome with the developer tools open
load the web page while online
inspect your Chrome application cache files again
go offline and refresh the browser
http://www.html5rocks.com/en/tutorials/appcache/beginner/
I haven't been able to get something like this to work:
var myWorker = new Worker("http://example.com/js/worker.js");
In my Firebug console, I get an error like this:
Failed to load script:
http://example.com/js/worker.js
(nsresult = 0x805303f4)
Every example of web worker usage I've seen loads a script from a relative path. I tried something like this, and it works just fine:
var myWorker = new Worker("worker.js");
But what if I need to load a worker script that's not at a relative location? I've googled extensively, and I haven't seen this issue addressed anywhere.
I should add that I'm attempting to do this in Firefox 3.5.
For those that don't know, here is the spec for Web Worker:
http://www.whatwg.org/specs/web-workers/current-work/
And a post by John Resig:
http://ejohn.org/blog/web-workers/
JavaScript, generally, can't access anything outside of the origin that the JavaScript file came from.
I believe that is what this part of the spec means (from http://www.w3.org/TR/workers/):
4.2 Base URLs and origins of workers
Both the origin and effective script origin of scripts running in workers are the origin of the absolute URL that the worker's location attribute represents.
This post has a statement about what error should be thrown in your situation:
http://canvex.lazyilluminati.com/misc/cgi/issues.cgi/message/%3Cop.u0ppu4lpidj3kv#zcorpandell.linkoping.osa%3E
According to the Web Worker draft specification, workers must be hosted on the same domain as the "first script", that is, the script that created the worker. The worker URL is resolved against the URL of the first script.
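To illustrate the rule (the hostnames are made up; assume the page itself is served from https://myapp.example):
new Worker('worker.js');                              // relative: resolved against the page, OK
new Worker('https://myapp.example/js/worker.js');     // absolute but same-origin: OK
new Worker('https://cdn.other.example/js/worker.js'); // cross-origin: rejected by the browser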
Not to mention...
Just about any time you have a cross-origin restriction policy, there is no exception for the file system (file://path/to/file.ext); in other words, the file: protocol also triggers handling for this policy.
This goes for "dirty" (tainted) images in the Canvas API as well.
Hope this helps =]