Am I right to assume that the biggest difference between location.assign and history.pushState, in terms of the URL displayed in the browser, is that the former reloads the page and the latter doesn't? I'm often asked in job interviews why hashes were needed for routing in single-page applications before the HTML5 History API came into being, and I guess the answer should be that developers had no way to change the URL without the page being reloaded, whereas manipulating the hash part of location could be done without a page reload. Is that correct?
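A minimal sketch of the three options (the /gallery path is just an example):

```javascript
// 1. Full page load: the browser requests /gallery from the server.
location.assign('/gallery');

// 2. No page load: only the fragment changes, which is why pre-HTML5 routers used the hash.
location.hash = '#/gallery';

// 3. No page load either, but the real path in the address bar changes (HTML5 History API).
history.pushState({ view: 'gallery' }, '', '/gallery');
```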
I'm using the following history.go link on search result pages, with acceptable results cross-browser. I'd prefer a PHP solution, but this filled the need until I realized a larger issue.
Return To Search Results
My only issue is when the viewer arrives from a page NOT originating from the search page http://www.domain.com/search/.
Is there a way to modify this so that it keeps the simple script, but, if the previous (-1) history entry is NOT the search page URL, it redirects to the search page URL when the link is clicked?
The PHP variable $_SERVER['HTTP_REFERER'] will give you the page the current tab / window was on before it got to you.
This value might be empty if someone opened your page directly by typing in the URL, or by preventing the value from being transmitted to the server.
All in all, there is no way to access the browser history, for security reasons.
$_SERVER['HTTP_REFERER'] is all there is that you could make use of. Sorry to disappoint you.
Btw, it's commonly used in hotlink prevention throughout various blogs, so people cannot "link" to pictures and files etc.
In your case you just need to figure out if that URL equals the search page.
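If you'd rather keep the check on the client, document.referrer exposes the same information as $_SERVER['HTTP_REFERER'] (and can be empty for the same reasons). A rough sketch, assuming the "Return To Search Results" link has a hypothetical id of back-to-search:

```javascript
var searchUrl = 'http://www.domain.com/search/';

document.getElementById('back-to-search').addEventListener('click', function (e) {
  e.preventDefault();
  if (document.referrer.indexOf(searchUrl) === 0) {
    history.go(-1);             // we really did come from the search page
  } else {
    location.assign(searchUrl); // otherwise fall back to a plain redirect
  }
});
```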
I have a website which has two versions: an all-singing, all-dancing JavaScript-powered application which is served when you request the root URL
/
As you navigate around the lovely website the content updates, as does the URL, thanks to HTML5 pushState or good old correctly formatted #! URLs. However, if you don't have JavaScript enabled you can still use all the functionality of the site, as each piece of content also exists under its own URL. This is great for three reasons:
non javascript users can still use the site
SEO - web crawlers can index the site easily
everything is shareable on social networks
The third reason is very important to me, as every piece of content must be individually shareable on the site. And because each piece of content has its own URL, it is easy to deep-link to that URL, and each piece of content can have its own specific Open Graph data.
However, the issue I hit is the following. You are a normal person with JavaScript enabled, you are browsing an image gallery on the site, and you decide to share the picture of a lovely cat you have found. Using JavaScript, the URL has been updated to
/gallery/lovely-cat
You share this URL and your friend clicks on it. When they click on the link, the server sends them the non-JavaScript / web crawler version of the site, and the experience is nowhere near as nice as the JavaScript version they would have been served if they had gone directly to the root of the site and navigated there.
Does anyone have a nice solution / alternative setup to solve this problem? I have several hacks which work; however, I am not that happy with them. They include:
a JavaScript redirect to the root of the site on every page, storing a cookie / adding a #! to the URL so that on page render the JavaScript router will show the correct content (does Google punish automatic JavaScript redirects?)
render the non-JavaScript page, and add some JavaScript which redirects the user to the root, similar to the above, whenever the user clicks on a link
I don't particularly like either of these solutions, but I can't think of a better one. Rendering the entire JavaScript app for each page doesn't appear to be a solution to me, as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat as you start navigating through the site.
My solution must support old browsers which do not implement pushState.
Make the "non javascript / web crawler version of the site" the same as the JavaScript version. Just build HTML on the server instead of DOM on the client.
Rendering the entire javascript app for each page doesn't appear to be a solution to me,
That is the robust approach
as you would end up with bad looking urls such as /gallery/lovely-cat/gallery/another-lovely-cat
Only if you linked (and pushState'd) to gallery/another-lovely-cat instead of /gallery/another-lovely-cat. (Note the / at the front.)
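To make that concrete (the paths are the ones from the question; assume the current URL is /gallery/lovely-cat/ with a trailing slash):

```javascript
// Relative path: resolved against the current URL, so the segments pile up,
// e.g. /gallery/lovely-cat/gallery/another-lovely-cat
history.pushState(null, '', 'gallery/another-lovely-cat');

// Absolute path (note the leading slash): replaces the whole path.
history.pushState(null, '', '/gallery/another-lovely-cat');
```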
Try out this plugin; it might solve your third reason, along with the other two.
http://www.asual.com/jquery/address/
I am using window.location.replace for various things on an app and passing variables in the URL (www.website.com/?clicked=2).
The problem is that it absolutely floods the history with new entries (at least in Chrome). Is there a way to just have one entry for the page? I thought window.location.replace was supposed to replace the entry, or so I have read?
Are there any other methods out there? If you use the app legitimately for like a minute clicking things, it will take up a page of history.
I do not think you can do that. window.location.replace will simply change the URL and reload the page. As it is a different URL, it will be added to the navigation history. I suppose the only thing you can do here is download the full page with AJAX. A slightly strange use of AJAX, but it will solve your problem with multiple entries in the history. With things like History.js you can control what is in the browser history, but the page will be loaded with AJAX anyway.
location.replace() does not make a history entry.
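If the goal is only to change the ?clicked value without piling up entries, history.replaceState (where supported) rewrites the current entry in place and skips the reload entirely. A hedged sketch along those lines:

```javascript
function setClicked(value) {
  var url = window.location.pathname + '?clicked=' + encodeURIComponent(value);
  if (window.history && window.history.replaceState) {
    // Update the address bar without a reload and without a new history entry.
    window.history.replaceState({ clicked: value }, '', url);
  } else {
    // Older browsers: location.replace reloads, but reuses the current entry.
    window.location.replace(url);
  }
}
```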
Possible Duplicate:
What’s the shebang/hashbang (#!) in Facebook and new Twitter URLs for?
I was wondering how Twitter works its links.
If you look in the source code, you'll see the links are done like /#!/i/connect or /#!/i/discover, but they don't have a JavaScript function attached to them like load('connect') or something, and yet they don't require a page reload. It just swaps out the page content.
I saw this page, but then all of those files would have to exist, and you couldn't just go straight to one of them. I imagine that on Twitter each of those files doesn't exist, and that it is handled in some other way. Please correct me if I'm wrong, though.
Is there a way I could replicate this effect? If so, is there a tutorial on how to go about doing this?
"Hash-Bang" navigation, as it's sometimes called, ...
http://example.com/path/to/#!/some-ajax-state
...is a temporary solution for a temporary problem that is quickly becoming a non-issue thanks to modern browser standards. In all likelihood, Twitter will phase it out, as Facebook is already doing.
It is the combination of several concepts...
In the past, a link served two purposes: It loaded a new document and/or scrolled down to an embedded anchor as indicated with the hash (#).
http://example.com/script.php#fourth-paragraph
Anything in a URL after the hash was not requested from the server, but was searched for in the page by the browser. This all still works just fine.
With the adoption of AJAX, new content could be loaded into the current (already loaded) page. With this dynamic loading, several problems arose: 1) there was no unique URL for bookmarking or linking to this new content, and 2) search engines would never see it.
Some smart people solved the first problem by using the hash as a sort of "state" reference to be included in links & bookmarks. After the document loads, the browser reads the hash and runs the AJAX requests, displaying the page plus its dynamic AJAX changes.
http://example.com/script.php#some-ajax-state
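In code, that hash-as-state trick looks roughly like this (loadAjaxState stands in for whatever request the application actually makes):

```javascript
// Stand-in for the application's real AJAX/render logic.
function loadAjaxState(state) {
  console.log('loading state:', state);
}

window.addEventListener('load', function () {
  var state = window.location.hash.slice(1); // e.g. "some-ajax-state"
  if (state) {
    loadAjaxState(state); // replay the dynamic changes for a bookmarked or shared link
  }
});
```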
This solved the AJAX problem, but the search engine problem still existed. Search engines don't load pages and execute Javascript like a browser.
Google to the rescue. Google proposed a scheme where any URL with a hash-bang (#!) in lieu of just a hash (#) would suggest to the search bot that there was an alternate URL for indexing, which involved an "_escaped_fragment_" variable, among other things. Read about it here: Ajax Crawling: Getting Started.
Today, with the adoption of JavaScript's pushState in most major browsers, all of this is becoming obsolete. With pushState, as content is dynamically loaded or changed, the current page URL can be altered without causing a page load. When desired, this provides a real working URL for bookmarks & history. Links can then be made as they always were, without hashes & hash-bangs.
As of today, if you load Facebook in an older browser, you'll see the hash-bangs, but a current browser will demonstrate the use of pushState.
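A minimal sketch of that pushState flow (showContent and the paths are placeholders):

```javascript
// Placeholder for the real AJAX/render logic.
function showContent(path) {
  console.log('render', path);
}

// Called when the user follows an in-app link.
function navigate(path) {
  showContent(path);
  history.pushState({ path: path }, '', path); // real URL, no reload, no #!
}

// Called when the user presses back/forward.
window.addEventListener('popstate', function (event) {
  showContent(event.state ? event.state.path : location.pathname);
});
```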
You might wanna check out more on Unique URLs.
It's loading the page via AJAX and parsing the "hash" (the values that come after the "#") to determine which page it's going to load. This method is also used because AJAX requests don't get added to the browser's history, so the "back button breaks"; the browser does, however, store hash changes in its history.
Using hashes, plus the fact that you can use hashes to determine pages, means you can keep AJAX-requested pages "in history". On top of that, hashed URLs are just URLs and are bookmarkable, including the hash, so you can also bookmark AJAX-requested pages.
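Because hash changes do land in the history, listening for them is enough to make the back button work with AJAX-loaded pages. A small sketch (loadPage is hypothetical):

```javascript
// Hypothetical AJAX loader for a named page.
function loadPage(name) {
  console.log('loading', name);
}

// Fires when a #-link is clicked or the user presses back/forward.
window.addEventListener('hashchange', function () {
  loadPage(location.hash.slice(1) || 'home');
});
```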
If JavaScript modifies the DOM on page A, the user navigates to page B and then hits the back button to get back to page A, all modifications to the DOM of page A are lost and the user is presented with the version that was originally retrieved from the server.
It works that way on Stack Overflow, Reddit and many other popular websites. (Try to add a test comment to this question, then navigate to a different page and hit the back button to come back - your comment will be "gone".)
This makes sense, yet some websites (apple.com, basecamphq.com etc.) somehow force the browser to serve the user the latest state of the page. (Go to http://www.apple.com/ca/search/?q=ipod, click on, say, the Downloads link at the top and then click the back button - all DOM updates will be preserved.)
Where is the inconsistency coming from?
One answer: Among other things, unload events cause the back/forward cache to be invalidated.
Some browsers store the current state of the entire web page in the so-called "bfcache" or "page cache". This allows them to re-render the page very quickly when navigating via the back and forward buttons, and preserves the state of the DOM and all JavaScript variables. However, when a page contains onunload events, those events could potentially put the page into a non-functional state, and so the page is not stored in the bfcache and must be reloaded (but may be loaded from the standard cache) and re-rendered from scratch, including running all onload handlers. When returning to a page via the bfcache, the DOM is kept in its previous state, without needing to fire onload handlers (because the page is already loaded).
Note that the behavior of the bfcache is different from the standard browser cache with regards to Cache-Control and other HTTP headers. In many cases, browsers will cache a page in the bfcache even if it would not otherwise store it in the standard cache.
jQuery automatically attaches an unload event to the window, so unfortunately using jQuery will disqualify your page from being stored in the bfcache for DOM preservation and quick back/forward. [Update: this has been fixed in jQuery 1.4 so that it only applies to IE]
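Two small snippets for experimenting with this behaviour: the pageshow event reports whether a page came back from the bfcache, and registering an unload handler (even an empty one) is typically enough to opt the page out of it.

```javascript
// event.persisted is true when the page was restored from the bfcache.
window.addEventListener('pageshow', function (event) {
  console.log(event.persisted ? 'restored from bfcache' : 'normal load');
});

// Uncommenting this empty handler is usually enough to keep the page
// out of the bfcache in the browsers described above.
// window.addEventListener('unload', function () {});
```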
Information about the Firefox bfcache
Information about the Safari Page Cache and possible future changes to how unload events work
Opera uses fast history navigation
Chrome doesn't have a page cache ([1], [2])
Pages for playing with DOM manipulations and the bfcache:
This page will be stored in the regular cache
This page will not, but will still be bfcached
I've been trying to get Chrome to behave like Safari does, and the only way I've found that works is to set Cache-control: no-store in the headers. This forces the browser to re-fetch the page from the server when the user presses the back button. Not ideal, but better than being shown an out-of-date page.
Facebook remembers page state by modifying the hash identifier in the URL for AJAX requests. These changes are recorded in the browser history, so when the user clicks the back button, the hash changes back to what it was before. The implication is that you will need some JavaScript to monitor the hash identifier and react when it is changed by the browser. Andreas Blixt has a hash monitoring script available.
This has nothing to do with the hash (#) symbol.
If you would check apple's HTTP headers, it's simply caching the page.
Using the URL hash/fragment identifier is a pretty common way to hook/remember state in a web application that relies on Ajax and DOM updates.
Check out the Really Simple History project for some ideas. It's possible to monitor the URL for changes to the hash, and rsh does this, taking into account browser differences.
For anybody running into problems with Rails and this -- your issue isn't the bfcache (I thought it was) -- it's the turbolinks gem. Here is how to remove it.
Hopefully this'll save you some time and banging your head against the wall.
What you are looking for is some type of URL hash management. The # in the URL is for the client side only.
When you change the state of the page with JS, you then update the data in the # of the URL.
You also add some type of polling that monitors whether the hash has changed, and load the state of the page based on the new data in the hash (a rough sketch follows below).
Take a look at this:
http://ajaxpatterns.org/Unique_URLs
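The polling approach mentioned above, as used by libraries like Really Simple History before the hashchange event was widely available, looks roughly like this (restoreStateFromHash is a stand-in for the application's own logic):

```javascript
// Stand-in for the application's real state-restoring logic.
function restoreStateFromHash(state) {
  console.log('restoring', state);
}

var lastHash = window.location.hash;

// Poll a few times per second; on change, rebuild the page state from the hash.
setInterval(function () {
  if (window.location.hash !== lastHash) {
    lastHash = window.location.hash;
    restoreStateFromHash(lastHash.slice(1));
  }
}, 100);
```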