I'm writing a simple photo album app using ASP.NET Ajax.
The app uses async Ajax calls to pre-load the next photo in the album, without changing the URL in the browser.
The problem is that when the user clicks the browser's back button, the app doesn't go back to the previous photo; instead, it navigates to the home page of the application.
Is there a way to trick the browser into adding each Ajax call to the browsing history?
Update: There is now the HTML5 History API (pushState and the popstate event), which supersedes the HTML4 hashchange functionality. History.js provides cross-browser compatibility and an optional hashchange fallback for HTML4 browsers.
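For reference, a minimal sketch of the History API pattern (the URL scheme, the state shape, and the loadPhotoViaAjax function are illustrative assumptions, not part of the original app):

// Push a history entry when a new photo is loaded via Ajax.
function showPhoto(id) {
    loadPhotoViaAjax(id); // assumed app function that swaps the image in place
    history.pushState({ photoId: id }, '', '/album/photo/' + id);
}

// Restore the right photo when the user presses Back/Forward.
window.addEventListener('popstate', function (e) {
    if (e.state && e.state.photoId) {
        loadPhotoViaAjax(e.state.photoId);
    }
});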
The answer for this question will be more or less the same as my answers for these questions:
How to show Ajax requests in URL?
How does Gmail handle back/forward in rich JavaScript?
In summary, you'll definitely want to check out these two projects, which explain the whole hashchange process and how to add Ajax to the mix (a bare-bones hashchange sketch follows the list):
jQuery History (using hashes to manage your pages state and bind to changes to update your page).
jQuery Ajaxy (ajax extension for jQuery History, to allow for complete ajax websites while being completely unobtrusive and gracefully degradable).
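For context, here is a bare-bones sketch of the hashchange pattern those libraries wrap (the goTo/loadSection names and the hash scheme are assumptions for illustration):

// Navigate by changing the fragment; each change becomes a history entry.
function goTo(section) {
    location.hash = '#/' + section;
}

// React to Back/Forward (and manual hash edits) by re-rendering.
window.addEventListener('hashchange', function () {
    var section = location.hash.replace('#/', '') || 'home';
    loadSection(section); // assumed function that fetches and renders the section
});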
MSDN has an article about Managing Browser History in ASP.NET AJAX
Many websites make use of a hidden iframe to do this: simply refresh the iframe with the new URL, which adds an entry to the browsing history. Then all you have to do is handle how your application reacts to those 'back button' events; you'll either need to detect the state/location of the iframe or refresh the page using that URL.
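A rough sketch of the hidden-iframe trick (the blank.html placeholder page, the restoreState function, and the polling interval are illustrative assumptions):

// A hidden iframe whose navigations create browser history entries.
var frame = document.createElement('iframe');
frame.style.display = 'none';
frame.src = 'blank.html#start'; // assumed same-origin placeholder page
document.body.appendChild(frame);

// Record a new state by navigating the iframe.
function pushHistoryState(state) {
    frame.contentWindow.location.href = 'blank.html#' + state;
}

// Poll the iframe's location to detect back/forward presses.
var lastState = 'start';
setInterval(function () {
    var state = frame.contentWindow.location.hash.replace('#', '');
    if (state && state !== lastState) {
        lastState = state;
        restoreState(state); // assumed app function that re-renders that state
    }
}, 100);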
You can use simple & lightweight PathJS lib.
Usage example:
Path.map("#/page1").to(function(){
...
});
Path.map("#/page2").to(function(){
...
});
Path.root("#/mainpage");
Path.listen();
The .NET 3.5 SP1 update added support for browser history and the back button to ASP.NET AJAX.
None of the back-button solutions are "automatic": with every single one, you are going to have to do some work to persist the state of the page. So no, there isn't a way to "trick" the browser, but there are some great libraries out there that help you with the back button.
Info: Ajax navigation is a built-in feature of the upcoming IE8.
If you are using Rails, then definitely try Wiselinks: https://github.com/igor-alexandrov/wiselinks. It is a Swiss Army knife for browser state management. Here are some details: http://igor-alexandrov.github.io/blog/2013/07/11/the-way-to-wiselinks-1-dot-0/.
Related
Question: Is it possible to make SPA crawlable without server rendering with help of HTML5 History API pushState?
I have found contradictory claims. Can you support or refute one of them?
Yes, it's possible to make an SPA crawlable without server rendering.
The only explanation I found is that when the Google crawler goes through <a href="site.com/go">, it subscribes to onpopstate and waits for you to trigger an HTML5 History pushState. After you get all the async content, you trigger pushState and the crawler starts crawling.
Does Google really know how to subscribe to onpopstate events?
No, that's impossible. I came to the same conclusion, and the same is said in this article: pushState is a replacement for hashbangs that gives the user and the crawler the same URL.
P.S. If it's impossible with pushState, are there any other acceptable ways?
I have investigated several solutions for incorporating browser history (back and forward buttons) into an SPA that uses AJAX. The prevalent solution is the HTML5 History API, in particular history.pushState.
There is also the option of using plugins such as:
jQuery Address
jQuery BBQ
Although the above methods work well going back and forward, they seem to bypass the browser page cache. So if I press the Back button, I could either go back to the server and fetch the data for the URL obtained from the browser history, or I could get the cached item from the browser cache.
I am not sure how to get the cached item from the browser cache after pressing the back or forward buttons (instead of going back to the server).
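One client-side workaround is to maintain your own cache of fetched content keyed by URL, so Back/Forward restores from memory instead of re-fetching. A minimal sketch with jQuery (the #content container, the URL scheme, and the helper names are assumptions):

// Simple in-memory cache keyed by URL.
var pageCache = {};

function render(url) {
    if (pageCache[url]) {
        $('#content').html(pageCache[url]); // restore without hitting the server
    } else {
        $.get(url, function (html) {
            pageCache[url] = html;
            $('#content').html(html);
        });
    }
}

// Push an entry when the user navigates forward.
function goTo(url) {
    history.pushState({ url: url }, '', url);
    render(url);
}

// Back/Forward: restore from the cache when possible.
window.addEventListener('popstate', function (e) {
    if (e.state && e.state.url) {
        render(e.state.url);
    }
});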
What type of client-side redirection on mobile devices is the most reliable?
This is more of a fundamental question. Let's assume we only need to deal with iOS and Android devices at the moment (no BlackBerry or Windows), and that the webpage with the redirect is only a pass-through page (meaning that it does its job, then has to pass the user to the next page).
I found this blog post, which talks about the pros/cons of each of them.
I feel the biggest con of the meta redirect is the fact that it makes an entry in the browser history.
The JavaScript redirect seems less reliable in my opinion, but it has a better UX (no browser history entry, and you can put some logic and dynamic values in it).
Thanks!
You could do a mix of both approaches: create a JavaScript redirect with 300 ms or so of delay. As a backup (in case JavaScript is disabled or just doesn't work for any reason), put a meta redirect on the page with a slightly longer delay than the JavaScript one (note that meta refresh delays are specified in whole seconds).
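A sketch of that combined approach (the target URL is a placeholder; window.location.replace() is used because it avoids leaving a history entry, which addresses the meta-redirect drawback mentioned above):

<!-- Fallback: the meta refresh fires after 1 second if JavaScript never runs
     (meta refresh delays are specified in whole seconds). -->
<meta http-equiv="refresh" content="1;url=http://example.com/next">
<script>
    // Primary: JavaScript redirect after ~300 ms.
    setTimeout(function () {
        // replace() keeps this pass-through page out of the browser history.
        window.location.replace('http://example.com/next');
    }, 300);
</script>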
I find that this little plugin is great for mobile detection: http://detectmobilebrowsers.com. I use the jQuery version on that page, but there are also scripts for PHP, ASP, pure JavaScript, etc.
So with the jQuery version you'd just do this for a redirect -
if ($.browser.mobile) {
    // $.browser.mobile is added by the detectmobilebrowsers.com jQuery script
    window.location = 'http://yourmobilesite.com';
}
I'm sorry if this is a newbie question, but I don't really know what to search for either. How do you keep content from a previous page when navigating through a website? For example, the right-side Activity/Chat bar on Facebook: it doesn't appear to refresh when going to different profiles; it's not an iframe and doesn't appear to be Ajax (I could be wrong).
Thanks,
I believe what you're seeing in Facebook is not actual "page loads", but clever use of AJAX or AHAH.
So ... imagine you've got a web page. It contains links. Each of those links has a "hook" -- a chunk of JavaScript that gets executed when the link gets clicked.
If your browser doesn't support JavaScript, the link works as it normally would on an old-fashioned page, and loads another page.
But if JavaScript is turned on, then instead of navigating to the HREF, the code run by the hook sends a request to a different URL that returns just the HTML needed to replace a DIV that's already showing somewhere on the page.
There's still a real link in the HTML just in case JS doesn't work, so the HTML you're seeing looks as it should. Try disabling JavaScript in your browser and see how Facebook works.
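A condensed sketch of that hook pattern with jQuery (the ajax-link class, the #main-content id, and the fragment-selector trick are assumptions for illustration):

$('a.ajax-link').click(function (e) {
    e.preventDefault(); // skip the normal page load
    // Fetch the target page and swap just its #main-content fragment into the visible DIV.
    $('#main-content').load(this.href + ' #main-content');
});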
Live updates like this are all over the place in Web 2.0 applications, from Facebook to Google Docs to Workflowy to Basecamp. The "better" tools provide the underlying HTML links where possible so that users without JavaScript can still get full use of the applications. (This is called Progressive Enhancement or Graceful Degradation, depending on your perspective.) Of course, nobody would expect Google Docs to work without JavaScript.
In the case of a chat like Facebook's, you must save the entire conversation on the server side (for example, in a database). Then, when the user changes the page, you can restore the state of the conversation on the server side (with PHP) or by querying your server as you do for the chat (JavaScript + AJAX).
This isn't done in JavaScript. It needs to be done using your back-end scripting language.
In PHP, for example, you use sessions. The variables set by server-side scripts can be maintained on the server and tied together (between multiple requests/hits) using a cookie.
One really helpful trick is to run HTTPFox in Firefox so you can actually monitor what's happening as you browse from one page to the next. You can check out the POST/Cookies/Response tabs and watch for which web methods are being called by the AJAX-like behaviors on the page. In doing this you can generally deduce how data is flowing to and from the pages, even though you don't have access to the server side code per se.
As for the answer to your specific question, there are too many approaches to list (cookies, server side persistence such as session or database writes, a simple form POST, VIEWSTATE in .net, etc..)
You can reopen your last closed web page by pressing Ctrl+Shift+T, and then save its content as you like. For example: if I closed a web page about document sharing and am now on a travel web page, pressing Ctrl+Shift+T automatically reopens my last web page. This works in Firefox, Internet Explorer, Opera, and more. Hope this answer is helpful to you.
I'm experimenting with building sites dynamically on the client side, using JavaScript plus a JSON content server: the JS retrieves the content and builds the page client-side.
Now, the content won't be indexed by Google this way. Is there a workaround for this? Like having a crawler version and a user version? Or having some sort of static archives? Has anyone done this already?
You should always make sure that your site works without JavaScript. Make links that point to static versions of the content. Then add JavaScript click handlers to those links that block the default action from happening and make the AJAX request instead. E.g., using jQuery:
HTML:
<a href='static_content.html' id='static_content'>Go to page!</a>
Javascript:
$('#static_content').click(function(e) {
    e.preventDefault(); // stop the browser from following the link
    // Make the AJAX request instead; this assumes a #content container exists on the page.
    $('#content').load($(this).attr('href'));
});
That way the site is usable for crawlers and for users without JavaScript, and it has fancy AJAX for people with JavaScript enabled.
If the site is meant to be indexed by Google, then the "information" you want searchable and public should be available without JavaScript. You can always add the dynamic stuff later, after the page loads, with JavaScript. This will not only make the page indexable but will also make it load faster.
On the other hand, if the site is more of an application, à la Gmail, then you probably don't want Google indexing it anyway.
You could serve a server-rendered version, and then replace it on load with the Ajax version.
But if you are going to do that, why not build the entire site that way and just use Ajax for interaction where the client supports it, à la unobtrusive JavaScript?
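A sketch of that replace-on-load idea (the /api/content endpoint, the #content container, and the renderTemplate helper are assumptions; crawlers see the server-rendered HTML, which is then enhanced client-side):

// After the server-rendered page loads, enhance it from the JSON content server.
$(function () {
    $.getJSON('/api/content', function (data) {
        // Rebuild the container client-side; crawlers already saw the static HTML.
        $('#content').html(renderTemplate(data)); // renderTemplate is an assumed helper
    });
});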
You can use PhantomJS to build a crawler version; see my solution here:
https://github.com/liuwenchao/ajax-seo