Question: Is it possible to make SPA crawlable without server rendering with help of HTML5 History API pushState?
I have found contradictory opinions. Can you support or refute one of them?
YES, it's possible to make a SPA crawlable w/o server rendering.
The only explanation I found is this: when the Google crawler follows <a href="site.com/go">, it subscribes to onpopstate and waits for you to trigger HTML5 History pushState. Once all the async content has loaded, you call pushState and the crawler starts crawling.
Does Google really know how to subscribe to onpopstate events?
No, that's impossible. I came to the same conclusion, and the same is said in this article: pushState is a replacement for hashbangs, meant to give the user and the crawler the same URL.
P.S. If it's impossible with pushState, are there any other acceptable ways?
I'm reading this doc which says:
The router uses the browser's history.pushState for navigation. Thanks to pushState, you can make in-app URL paths look the way you want them to look, e.g. localhost:3000/crisis-center. The in-app URLs can be indistinguishable from server URLs.
Can anyone please explain, with an example, how exactly this makes it possible, and why it wasn't possible before pushState was implemented?
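To make the contrast concrete, here is a rough sketch (the route names and state payload are made up, not taken from the router docs) of a hash-based in-app URL versus a pushState one:

// Before the History API, a client-side router could only change the
// fragment, so every in-app URL looked like a hash route:
//   http://localhost:3000/#/crisis-center
location.hash = '/crisis-center';

// With pushState the router can rewrite the path itself, so the in-app
// URL is indistinguishable from a server-rendered one:
//   http://localhost:3000/crisis-center
history.pushState({ page: 'crisis-center' }, '', '/crisis-center');

// The router restores the right view when the user navigates back/forward:
window.addEventListener('popstate', function (event) {
  // re-render the view for event.state, or for location.pathname
});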
This question may not be tied to a specific software stack, framework or language.
For my current project, we are using AngularJS to build the front end: a static entry page loads the real data asynchronously and renders it, which is easy to serve from a CDN and good for load speed on the browser side. But for some social features, such an architecture can cause problems. For example, when you paste a link you want to share into Facebook, Facebook fetches the page and shows a preview. If the landing page is empty, that preview won't work.
(I heard that Google+ recently started rendering JavaScript server-side before sending back a preview, but that is obviously not common among similar services. Google.com also supports indexing JS-based one-page applications.)
Is there a better way to solve this problem gracefully, rather than falling back to a dynamic page that includes the real data? Have I missed something in my understanding of the problem?
========
... I was even thinking of this: for requests identified as coming from Facebook (e.g. by user agent), redirect them to a special gateway that wraps something like PhantomJS, fetches the page, renders it server-side, and sends back a DOM tree snapshot as the content for Facebook to build the preview from. But I doubt that's a good direction. :(
We are in the same situation. The simple solution is to use Open Graph meta tags in the pages your server will serve to Facebook scrapers.
Basically you need to do server-side what your web app is doing client-side. The amount of work depends heavily on your hosting technology (MVC makes it super easy), your URI format and the APIs you use.
You will find some explanations here:
https://developers.facebook.com/docs/plugins/share-button/
Open graph introduction:
http://ogp.me/
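As a rough illustration of the idea (assuming a Node/Express server, which neither the question nor the answer specifies; the route, fields and URLs are made up), you detect the scraper by user agent and return a plain HTML page that already carries the Open Graph tags:

// Serve pre-rendered Open Graph tags to the Facebook scraper only.
var express = require('express');
var app = express();

app.get('/items/:id', function (req, res, next) {
  var ua = req.get('User-Agent') || '';
  if (!/facebookexternalhit|Facebot/i.test(ua)) {
    return next(); // normal visitors get the regular SPA shell
  }
  // In a real app you would look the item up via your API or database;
  // hard-coded values keep the sketch self-contained.
  var item = {
    title: 'Example item',
    description: 'Example description',
    image: 'https://example.com/photo.jpg'
  };
  res.send('<!DOCTYPE html><html><head>' +
    '<meta property="og:title" content="' + item.title + '" />' +
    '<meta property="og:description" content="' + item.description + '" />' +
    '<meta property="og:image" content="' + item.image + '" />' +
    '<meta property="og:url" content="https://example.com/items/' + req.params.id + '" />' +
    '</head><body></body></html>');
});

app.listen(3000);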
I have a client (a domestic violence center) who wants to know if we can prevent their site from showing up in browser history, or wipe the user's visit from their browser history when they exit the site.
I know that once someone is on the site we can build it in ways that prevent new pages from being added like normal page loads, by using location.replace for navigation, but the initial visit, when someone typed in http://example.org, will still be in the history.
Is it possible on page load to prevent the page from being recorded in the history, or erase the record if it exists?
I have a bad feeling it can't be done, but if anyone knows, it's my smartypants friends on Stack Overflow.
Unfortunately this can't be done.
For your client's scenario, my best advice would be to educate users on how to remove the visit from their browser history, and how to use anonymous browsing/private mode/incognito mode for future visits.
Quoting MDN:
There is no way to clear the session history or to disable the back/forward navigation from unprivileged code. The closest available solution is the location.replace() method, which replaces the current item of the session history with the provided URL.
So I think what your client wishes for is just not possible.
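For completeness, a minimal sketch of the location.replace() approach mentioned above (the link markup is only an illustration, not taken from the question):

// Replace the current history entry instead of pushing a new one, so
// in-site navigation does not pile up under the Back button. Note that
// the very first entry (the URL the visitor typed) cannot be removed.
function go(url) {
  location.replace(url);
}

// Hypothetical usage on an internal link:
// <a href="/resources" onclick="go(this.href); return false;">Resources</a>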
Add a script to recognise each browser, and provide browser-specific steps to erase history or use incognito mode.
A properly configured proxy server may be of use in that context.
I wonder how one would monitor a single-page web app so you can see what the user does in your app, which "pages" they visited, etc.
Kinda like Google Analytics with statistics for a lot of things.
Google Analytics is great for this. Check out custom events: http://code.google.com/apis/analytics/docs/tracking/eventTrackerGuide.html
It takes a fair amount more work than the "set it and forget it" type tracking you can do with traditional websites, but it's also pretty easy.
It will not 'just work': you need to call trackPageView (not trackEvent) so that the requests count as pageviews and include all the visitor information a normal pageview would. This gist is an old but popular solution for Ajax navigation that shows how to call trackPageView. It is called right after the content for the new URL is injected into the page, so the equivalent in a framework like Backbone.js would be on view initialization, as suggested in this guide.
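For example, a minimal sketch of reporting a virtual pageview after each in-app navigation, assuming either the classic ga.js snippet or analytics.js is already loaded on the page (the path below is made up):

// Report a virtual pageview to Google Analytics after the SPA swaps in
// the content for a new "page".
function reportPageview(path) {
  if (window._gaq) {
    // classic ga.js asynchronous syntax
    _gaq.push(['_trackPageview', path]);
  } else if (window.ga) {
    // Universal Analytics (analytics.js) equivalent
    ga('set', 'page', path);
    ga('send', 'pageview');
  }
}

// e.g. call it from the router / view-initialization code:
reportPageview('/photos/42');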
I'm writing a simple photo album app using ASP.NET Ajax.
The app uses async Ajax calls to pre-load the next photo in the album, without changing the URL in the browser.
The problem is that when the user clicks the back button in the browser, the app doesn't go back to the previous photo; instead, it navigates to the home page of the application.
Is there a way to trick the browser into adding each Ajax call to the browsing history?
Update: There is now the HTML5 History API (pushState and the popstate event), which supersedes the HTML4 hashchange functionality. History.js provides cross-browser compatibility and an optional hashchange fallback for HTML4 browsers.
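For the photo album case, a minimal History API sketch could look like this (loadPhotoViaAjax and the URL scheme are placeholders for the app's own code, not part of any library):

// Record each Ajax-loaded photo in the browser history so Back/Forward
// step through photos instead of leaving the application.
function showPhoto(photoId, pushEntry) {
  loadPhotoViaAjax(photoId); // placeholder for the existing async call
  if (pushEntry !== false) {
    history.pushState({ photoId: photoId }, '', '/album/photo/' + photoId);
  }
}

// Back/Forward fire popstate; restore the photo stored in the entry's state.
window.addEventListener('popstate', function (event) {
  if (event.state && event.state.photoId) {
    showPhoto(event.state.photoId, false); // re-render without pushing again
  }
});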
The answer for this question will be more or less the same as my answers for these questions:
How to show Ajax requests in URL?
How does Gmail handle back/forward in rich JavaScript?
In summary, you'll definitely want to check out these two projects, which explain the whole hashchange process and how to add Ajax to the mix (a bare-bones sketch of the idea follows the list):
jQuery History (using hashes to manage your pages state and bind to changes to update your page).
jQuery Ajaxy (ajax extension for jQuery History, to allow for complete ajax websites while being completely unobtrusive and gracefully degradable).
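Under the hood, both projects build on the same hash-based idea (they mainly add fallbacks for browsers without a native hashchange event). Roughly, and with loadContentViaAjax standing in for the app's own loader:

// Hash-based navigation: assigning to location.hash adds a history entry,
// and the hashchange event fires when the user goes Back or Forward.
function navigate(page) {
  location.hash = '#/' + page; // e.g. #/photo/42
}

window.addEventListener('hashchange', function () {
  var page = location.hash.replace(/^#\/?/, '');
  loadContentViaAjax(page); // placeholder for the app's Ajax loader
});

// Handle a direct landing on a hashed URL.
if (location.hash) {
  loadContentViaAjax(location.hash.replace(/^#\/?/, ''));
}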
MSDN has an article about Managing Browser History in ASP.NET AJAX
Many websites make use of a hidden iframe to do this: simply refresh the iframe with the new URL, which adds it to the browsing history. Then all you have to do is handle how your application reacts to those 'back button' events; you'll either need to detect the state/location of the iframe, or refresh the page using that URL.
You can use the simple and lightweight PathJS lib.
Usage example:
Path.map("#/page1").to(function(){
    // render the view for #/page1 here
    ...
});
Path.map("#/page2").to(function(){
    // render the view for #/page2 here
    ...
});
// Fallback route used when the page loads without a matching hash.
Path.root("#/mainpage");
// Start listening for hash changes.
Path.listen();
The .NET 3.5 SP1 update now has support for browser history and the back button in ASP.NET AJAX.
Of all the solutions for the back button, none of them are "automatic". With every single one you are going to have to do some work to persist the state of the page. So no, there isn't a way to "trick" the browser, but there are some great libraries out there that help you with the back button.
Info: Ajax Navigation is a regular feature of the upcoming IE8.
If you are using Rails, then definitely try Wiselinks https://github.com/igor-alexandrov/wiselinks. It is a Swiss Army knife for browser state management. Here are some details: http://igor-alexandrov.github.io/blog/2013/07/11/the-way-to-wiselinks-1-dot-0/.