Chrome: disable pre-render of hash change - javascript

We're developing a web application that handles state change via change of the hash of the page (e.g. example.com/#/page1).
Lately, I've been running into an issue with Google Chrome, when the prefetch option is enabled ("Predict network actions to improve page load performance"). Among the different routes, we have #/logout that performs the logout.
In the "normal" state, I'm on the page example.com/#/ (the main page), and as I start typing "l" after that (example.com/#/l), Chrome autocompletes with logout. However, not only it does autocomplete, but it also calls the "haschange" event, so the client is sending a request to log out to the server... Even just by typing a l!
This behaviour is not only unexpected, it's also dangerous. Aside from unchecking "Predict network actions to improve page load performance" on the settings page (it's on by default), is there a way to prevent Chrome from doing this?
EDIT
A small new "discovery". Actually, Chrome is not firing the "hashchange" event, as a console.log in the event handler is not being executed. Chrome learnt that, when visiting the #/logout page, a request to the server (GET /auth/destroy) is called, and so it's firing it by itself! What can we do to stop this?

Answering my own question. This is not really a solution, but rather a workaround.
According to this documentation, prerendering is disabled in certain situations: with POST requests (not an option in our case) and when the resources are served via HTTPS.
Since we were already going to enable HTTPS in the production environment, we simply enabled it in the development environment as well, and the issue disappeared. Still, this feels more like a workaround than a real solution.
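A note on the other prerender-disabling condition mentioned above: while POST wasn't an option for us, switching the logout to an explicit POST triggered by a click would also sidestep the problem, since the prerenderer only issues GET requests. A minimal sketch, assuming a hypothetical logout button and a server that accepts POST on /auth/destroy:

```javascript
// Hedged sketch: log out via an explicit POST on a button click instead of
// a #/logout hash route that Chrome can prerender. The button id, endpoint,
// and redirect target are all illustrative assumptions.
document.querySelector('#logout-button').addEventListener('click', function () {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/auth/destroy'); // prerendering never issues POST requests
  xhr.onload = function () {
    window.location.hash = '#/'; // back to the main page once logged out
  };
  xhr.send();
});
```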

Related

Hitting back on browser to return to a react page does not cause any code on the page to run - why?

So we have a portion of our website built as a React.js app; the rest is a legacy site. After taking the user through a few steps of a wizard, we transfer them to a legacy page. In testing we found that when the browser back button is pressed on the legacy page, we return to the previous React page, but none of the code on that page runs. I've proven this by adding alerts; the code is simply not being executed. The page is displayed with all of its previously rendered output, but without any of the code running. So I'm not sure how or where it's getting cached.
I've checked the Cache-Control header on the React page, and it's set to Cache-Control: max-age=0, so the browser should not be caching the page.
Even if there were code in our legacy app using history.back(), I've tested that history.back() still causes the code to run on the page being navigated back to.
So it's a bit of a mystery as to where it's getting cached. Any thoughts as to what the issue may be?
As you mentioned, you are using React.js for your project, so I think the best solution for your problem is to use React Router. It will render your components according to the current route. If the legacy page lives on the same route, you will have to consider other alternatives.
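A minimal sketch of what that routing could look like (React Router v6 style; the component names are illustrative, not from the question):

```javascript
// Hedged sketch: route-based rendering with React Router v6.
// <Wizard /> and <LegacyBridge /> are hypothetical components.
import { BrowserRouter, Routes, Route } from 'react-router-dom';

function App() {
  return (
    <BrowserRouter>
      <Routes>
        {/* the React wizard lives under its own route */}
        <Route path="/wizard/*" element={<Wizard />} />
        {/* a hand-off route that forwards to the legacy site */}
        <Route path="/legacy" element={<LegacyBridge />} />
      </Routes>
    </BrowserRouter>
  );
}
```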
On December 6th, 2021, Chrome 96 was released with a feature called bfcache, or back/forward cache, which is different from the HTTP cache. It caches the entire page in memory, including the JavaScript heap, so it's like pausing the page. This is a browser optimisation, and it's the reason that when you go back to the page using the browser's back button, no code runs: the page is simply loaded back into memory, including all the React state.
more info on bfcache
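If you need your code to run again after a bfcache restore, the standard hook is the pageshow event and its persisted flag. A minimal sketch (reinitialisePage is a hypothetical stand-in for whatever setup your page needs):

```javascript
// Hedged sketch: detect a restore from the back/forward cache.
// No load event fires on a bfcache restore, but pageshow always fires,
// with event.persisted === true when the page came out of the bfcache.
window.addEventListener('pageshow', function (event) {
  if (event.persisted) {
    reinitialisePage(); // hypothetical: re-run whatever logic you need
  }
});
```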

How can I cancel a Location: header-based page redirect with Greasemonkey?

I have a Greasemonkey script that fixes a rather broken redirect on a web application that I use. However, the behavior of this application has recently changed so that I can no longer use beforescriptexecute for one common case, causing the poorly designed redirect to happen.
This happens with a HTTP 301 status code and a Location header.
I've been digging through https://developer.mozilla.org/en-US/docs/Web/API/WindowEventHandlers but have not been able to find an event that can intercept the page before the browser redirects me. Is there any event that fires before the 301 redirect occurs that I can hook into with Greasemonkey?

How to make a input type="file" field in a web page that is guaranteed to work in Android?

I have a simple <input type="file"> in a web form (to be viewed in a browser) and I need it to work on Android (besides other mobile devices and desktop).
Due to a well-known but still unfixed bug in Android (https://code.google.com/p/android/issues/detail?id=53088), any such input field may fail miserably: while you are choosing the file to upload (with whatever application, e.g. the Gallery or a third-party file browser), the browser activity is in the background and the system may kill it at any time (no matter how much RAM you have). The page may then reload when the browser activity is restored, and the file you selected will be forgotten.
This still happens in Chrome on Android 4.4.4.
Of course it does work at times, but not always, and it's unpredictable.
I can think of (painful to implement) workarounds for a webview within a native app, but I can't think of any workaround in pure html+javascript for a web page to be visited by a browser.
The thing is, some workaround must exist, because there are web pages out there with file uploads that never run into this issue, such as m.facebook.com, to name only one. EDIT: forget this paragraph; Facebook and Twitter are affected as much as every other web page with uploads (and by the way, Instagram's mobile web page does not allow uploads at all, funny huh?)
Does anyone know what the working workaround is? Or if any exists at all?
Just to be clear, I need a workaround that can be applied by adjusting the HTML and/or adding any amount of JavaScript code, but without forcing the user to install any specific extra app.
"interesting" problem...
It is not a ready-to-use solution, but you could save the state of the page before requesting a file:
http://www.html5rocks.com/en/features/storage
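A minimal sketch of that idea using localStorage (field and key names are illustrative); note that the file selection itself cannot be persisted, only the rest of the form:

```javascript
// Hedged sketch: save form state before the file picker may get the page
// killed, and restore it when the page reloads. 'formDraft' is an
// illustrative storage key.
var form = document.querySelector('form');
var fileInput = form.querySelector('input[type="file"]');

// Save every non-file field when the user opens the picker.
fileInput.addEventListener('click', function () {
  var state = {};
  Array.prototype.forEach.call(form.elements, function (field) {
    if (field.name && field.type !== 'file') {
      state[field.name] = field.value;
    }
  });
  localStorage.setItem('formDraft', JSON.stringify(state));
});

// Restore the draft on load, in case the browser activity was killed.
window.addEventListener('load', function () {
  var draft = localStorage.getItem('formDraft');
  if (!draft) return;
  var state = JSON.parse(draft);
  Array.prototype.forEach.call(form.elements, function (field) {
    if (field.name in state) {
      field.value = state[field.name];
    }
  });
});
```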

AJAX activities disappear when browsing forward and back

This is not a Meta question.
I am trying to understand, technically, what principle lies behind the following behaviour. It's very easy to reproduce:
Vote up/down anything on this page [1],
Click on any other link on this page,
Come back by pressing the back button.
Your upvote is no longer there, nor is any other AJAX activity that had appeared on the page.
Why is that? Why is the browser acting like so? How could StackOverflow prevent that?
[1] If you are not signed in, just wait for someone else's activity on the page (a new comment, answer, or vote) before leaving the page.
It’s the browser’s cache that is at play here.
Since you asked how SO could “prevent” this: it could be done by advising the browser to check whether the document has changed every time. But SO does not do so, for performance reasons. So the HTML document is considered “still valid” for a certain amount of time, during which the browser takes it straight from its cache, without making a round trip to the server.
If you look at the HTTP response headers in your browser’s developer tools for the request your browser made for this page, you will see something like this:
Cache-Control: public, no-cache="Set-Cookie", max-age=60
– so this HTML document is to be considered valid for 60 seconds. If you navigate away from it and back in your browser, or close the tab and reopen it from history, within that 60 seconds, the browser is supposed to take the cached version of it and display it, without checking again with the server whether or not something has changed. And since your vote did not manipulate this original HTML document (only the DOM was updated with your vote), you still get the previous vote count shown.
But if you press [F5] in your browser, the cache will be circumvented – it will request the document from SO again, and then you see your vote, because this time the updated numbers are part of the updated HTML document that SO serves you.
If you want to delve more into HTTP caching, here are some resources off the top of Google that seem worth a look:
Caching Tutorial for Web Authors and Webmasters
A Beginner's Guide to HTTP Cache Headers
You are not "unvoting", you just are not seeing your vote because your browser is caching the ajax request.
If your press F12 on Chrome, click on Settings icon and then "Disable cache (while DevTools is open)", when you press back the browser will resend the request.
To prevent that you must specify on your code that you never want that specific request to be cached.
You may want to check the following post:
Prevent browser caching of jQuery AJAX call result
P.S. You must keep the console (F12) open while doing the test.
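In jQuery specifically, the cache option does exactly that; a minimal sketch (the /votes URL is illustrative):

```javascript
// Hedged sketch: opt a single request out of the browser cache.
// With cache: false, jQuery appends a _=<timestamp> parameter to the URL,
// so every call looks unique to the browser.
$.ajax({
  url: '/votes', // illustrative endpoint
  cache: false,
  success: function (data) {
    console.log(data);
  }
});

// Or disable caching globally for all subsequent jQuery AJAX calls:
$.ajaxSetup({ cache: false });
```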

Ajax, back button and DOM updates

If JavaScript modifies the DOM on page A, the user navigates to page B, and then hits the back button to get back to page A, all modifications to the DOM of page A are lost; the user is presented with the version originally retrieved from the server.
It works that way on stackoverflow, reddit and many other popular websites. (try to add test comment to this question, then navigate to different page and hit back button to come back - your comment will be "gone")
This makes sense, yet some websites (apple.com, basecamphq.com etc) are somehow forcing browser to serve user the latest state of the page. (go to http://www.apple.com/ca/search/?q=ipod, click on say Downloads link at the top and then click back button - all DOM updates will be preserved)
Where is the inconsistency coming from?
One answer: Among other things, unload events cause the back/forward cache to be invalidated.
Some browsers store the current state of the entire web page in the so-called "bfcache" or "page cache". This allows them to re-render the page very quickly when navigating via the back and forward buttons, and preserves the state of the DOM and all JavaScript variables. However, when a page contains onunload events, those events could potentially put the page into a non-functional state, and so the page is not stored in the bfcache and must be reloaded (but may be loaded from the standard cache) and re-rendered from scratch, including running all onload handlers. When returning to a page via the bfcache, the DOM is kept in its previous state, without needing to fire onload handlers (because the page is already loaded).
Note that the behavior of the bfcache is different from the standard browser cache with regards to Cache-Control and other HTTP headers. In many cases, browsers will cache a page in the bfcache even if it would not otherwise store it in the standard cache.
jQuery automatically attaches an unload event to the window, so unfortunately using jQuery will disqualify your page from being stored in the bfcache for DOM preservation and quick back/forward. [Update: this has been fixed in jQuery 1.4 so that it only applies to IE]
Information about the Firefox bfcache
Information about the Safari Page Cache and possible future changes to how unload events work
Opera uses fast history navigation
Chrome doesn't have a page cache ([1], [2])
Pages for playing with DOM manipulations and the bfcache:
This page will be stored in the regular cache
This page will not, but will still be bfcached
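If you control the page and want it to stay bfcache-eligible, the usual advice is to avoid unload handlers and use pagehide instead; a minimal sketch:

```javascript
// Hedged sketch: pagehide does not disqualify the page from the bfcache
// the way an unload handler can. event.persisted tells you whether the
// page is being put into the bfcache (true) or discarded for good (false).
window.addEventListener('pagehide', function (event) {
  if (event.persisted) {
    // Page may come back via the back/forward buttons: light cleanup only.
  } else {
    // Page is genuinely going away.
  }
});
```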
I've been trying to get Chrome to behave like Safari does, and the only way I've found that works is to set Cache-Control: no-store in the headers. This forces the browser to re-fetch the page from the server when the user presses the back button. Not ideal, but better than being shown an out-of-date page.
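For reference, a minimal server-side sketch of that header (plain Node.js; the page body is a placeholder):

```javascript
// Hedged sketch: serve a page with Cache-Control: no-store so the browser
// re-fetches it from the server even on back/forward navigation.
var http = require('http');

http.createServer(function (req, res) {
  res.setHeader('Cache-Control', 'no-store'); // never reuse a cached copy
  res.setHeader('Content-Type', 'text/html');
  res.end('<html><body>always fresh</body></html>');
}).listen(8080);
```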
Facebook remembers page state by modifying the hash identifier in the URL for AJAX requests. These changes are recorded in browser history, so when the user clicks the back button, the hash changes back to what it was before. The implication is that you will need some JavaScript to monitor the hash identifier and react when the browser changes it. Andreas Blixt has a hash monitoring script available.
This has nothing to do with the hash (#) symbol.
If you check Apple's HTTP headers, you'll see it's simply caching the page.
Using the URL hash/fragment identifier is a pretty common way to hook/remember state in a web application that relies on Ajax and DOM updates.
Check out the Really Simple History project for some ideas. It's possible to monitor the URL for changes to the hash, and rsh does this, taking into account browser differences.
For anybody running into problems with Rails and this: your issue isn't the bfcache (I thought it was); it's the turbolinks gem. Here is how to remove it.
Hopefully this'll save you some time and banging your head against the wall.
What you are looking for is some type of URL hash management. The # in the URL is for the client side only.
When you change the state of the page with JS, you update the data in the # of the URL.
You also add some type of polling that monitors whether the hash has changed, and loads the state of the page based on the new data in the hash (see the sketch below).
Take a look at this:
http://ajaxpatterns.org/Unique_URLs
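A minimal sketch of that approach, with the native hashchange event used where available and polling as a fallback (renderState is a hypothetical function that rebuilds the page from the hash data):

```javascript
// Hedged sketch: monitor the URL hash and rebuild page state from it.
function onHashChanged() {
  var state = window.location.hash.slice(1); // drop the leading '#'
  renderState(state); // hypothetical: restore the page from the hash data
}

if ('onhashchange' in window) {
  // Modern browsers fire an event; no polling needed.
  window.addEventListener('hashchange', onHashChanged);
} else {
  // Fallback for old browsers: poll for changes.
  var lastHash = window.location.hash;
  setInterval(function () {
    if (window.location.hash !== lastHash) {
      lastHash = window.location.hash;
      onHashChanged();
    }
  }, 100);
}
```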
