I'm trying to make an Ajax web application that uses bread-crumbing to allow the use of the Back and Forward buttons, but still has that slick Ajax page movement.
An excellent example is Facebook's image gallery.
When you click 'Next', the URL changes to the respective URL, but the entire page does not reload. It's a really smooth interface and I'd like to mimic it.
Anyone got a tutorial/write up on how this works?
Thanks.
Facebook uses the URL anchor identifier (the fragment after the #) to store the state needed by its AJAX code. This allows changing the URL without the page being reloaded.
Example: http://somedomain.com/#ajax_data_here
Now it's up to you to design a sensible format for your AJAX state and to parse that data.
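As a rough illustration, here is a minimal sketch of that idea (the #photo=42 format and the loadPhoto/goToPhoto functions are made up for this example):
// Read the state encoded after the # and render the matching content.
function applyStateFromHash() {
  var hash = window.location.hash.slice(1);   // e.g. "photo=42"
  var match = /^photo=(\d+)$/.exec(hash);
  if (match) {
    loadPhoto(match[1]);                      // your own AJAX rendering function
  }
}
// Changing only the hash does not reload the page, but it does create a
// history entry, so Back/Forward fire the hashchange event.
window.addEventListener('hashchange', applyStateFromHash);
window.addEventListener('load', applyStateFromHash);
// Navigation then simply updates the hash:
function goToPhoto(id) {
  window.location.hash = 'photo=' + id;
}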
Update Dec 2012:
I've recently encountered the following method for changing the path within the URL without reloading. Although it only works with newer browsers, I thought I'd append it:
Modify the URL without reloading the page
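For reference, the core of that method is a single History API call; a minimal sketch (the path and state object below are arbitrary examples):
// Change the path shown in the address bar without reloading the page.
// Only works in browsers that support the HTML5 History API.
if (window.history && window.history.pushState) {
  history.pushState({ page: 'gallery', photo: 42 }, '', '/gallery/photo/42');
}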
As far as I am aware, there are two main ways this effect is achieved:
Using the anchor portion of the URL (#gallery)
Using a hidden iframe
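For completeness, here is a rough sketch of the hidden-iframe trick (it assumes a same-origin placeholder page such as /blank.html exists, and restoreState is your own function; modern browsers should just use pushState instead):
// Old-school history hack: navigating a hidden iframe creates history
// entries even in browsers where changing the hash alone did not.
var historyFrame = document.createElement('iframe');
historyFrame.style.display = 'none';
document.body.appendChild(historyFrame);
// Each src change pushes a new entry onto the browser history.
function pushHistoryState(state) {
  historyFrame.src = '/blank.html?state=' + encodeURIComponent(state);
}
// Back/Forward re-navigate the iframe; read its URL to know which state to restore.
historyFrame.onload = function () {
  var match = /state=([^&]*)/.exec(historyFrame.contentWindow.location.search);
  if (match) {
    restoreState(decodeURIComponent(match[1])); // your own rendering function
  }
};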
There are pre-built solutions you can use to get this kind of functionality without having to write the code yourself. For example, if you are working with ASP.NET you can use the Ajax History Control:
http://www.asp.net/ajax/videos/how-do-i-use-the-aspnet-ajax-history-control
If you are using jQuery, look at the Address plugin.
http://www.asual.com/jquery/address/
If you're using jQuery, there are lots of suggestions documented here: https://stackoverflow.com/questions/116446/what-is-the-best-back-button-jquery-plugin
I've personally used jQuery Address, and it's super easy and very effective.
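For what it's worth, basic usage looks roughly like this (written from memory, so treat the exact API as an assumption and double-check the plugin's documentation; the a.ajax-nav selector and loadContentFor function are placeholders):
// jQuery Address keeps the part after the # in sync with your app state.
$.address.change(function (event) {
  // event.value should be the current deep link, e.g. "/gallery/42"
  loadContentFor(event.value);          // your own AJAX function
});
// When navigating, update the address instead of following the link.
$('a.ajax-nav').on('click', function (e) {
  e.preventDefault();
  $.address.value($(this).attr('href'));
});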
I recently posted a question about how to log in to Twitter using the requests library. I finally got a solution for that, but another problem I am facing is that I can only scrape the content that is initially present in the page. How do I scrape the dynamically loaded content on that page?
Note: I am not using Selenium. Please suggest other ways to do this.
How to load dynamic content and then scrape it?
Without using something like Selenium or another browser (headless or otherwise) which will actually run the JavaScript in a normal-ish manner, the only other method would be to manually reverse engineer the JavaScript, see what kind of calls it's making, and make them yourself directly.
There wouldn't be any other kind of "one-size-fits-all" solution.
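As a sketch of what "make them yourself" means: suppose the browser's Network tab shows the page pulling its data from a JSON endpoint (the URL and headers below are entirely hypothetical); you can reproduce that request directly, shown here with fetch, and the same request can just as easily be made with the requests library:
// Hypothetical endpoint discovered in the Network tab; the real URL,
// parameters, cookies and headers depend entirely on the site.
fetch('https://example.com/api/timeline?page=2', {
  headers: { 'Accept': 'application/json' },
  credentials: 'include'   // many sites also require auth/CSRF tokens
})
  .then(function (response) { return response.json(); })
  .then(function (data) {
    // The "dynamically loaded" content arrives as plain data, not rendered HTML.
    console.log(data);
  });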
My point is: I use Ajax to load part of the page, but when that part is loaded I would like to change the URL in the address bar.
I could go for history.js as suggested in this post, but I would like to know if there is a solution within Yii2 without using an external library.
I don't want to use Pjax, because I have some complex processing and using Pjax would make it even more complex for me.
Yii2 doesn't deal with this kind of functionality. It's a pure JS thing, and you can use it without any additional libraries. All major browsers already support it: caniuse.com.
Use history.pushState() to update the current URL, and the popstate event to update the page content when the user navigates through the history.
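A minimal sketch of that pattern (the renderPart and loadPartViaAjax functions and the state object are placeholders):
// After the AJAX part has loaded, record the new URL and state.
function onPartLoaded(html, url) {
  renderPart(html);                               // your own rendering code
  history.pushState({ partUrl: url }, '', url);   // updates the address bar
}
// When the user presses Back/Forward, re-render from the saved state.
window.addEventListener('popstate', function (event) {
  if (event.state && event.state.partUrl) {
    loadPartViaAjax(event.state.partUrl);         // your own AJAX call
  }
});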
We have a web app whose content is generated by JavaScript. Can Google index those pages?
When we investigated this issue, we mostly found solutions on old pages about using "#!" in links.
In our app the links are like this:
domain.com/paris
domain.com/london
When we use these kinds of links, JavaScript populates the content.
Is it wise to use HTML snapshots, or do you have any other suggestions?
Short answer
Yes, they can crawl JavaScript-generated content, as long as you are using pushState.
Detailed answer
It depends on your setup. Google and Bing CAN crawl JavaScript and AJAX-based content if you are using pushState. If you do, they will handle content coming from AJAX calls, updates to the page title or meta tags made with JavaScript, and in general anything of that sort.
Most frontend frameworks like Angular, Ember or Backbone already work with pushState, so in those cases you don't need to do anything. Check whatever framework you are using to see how it handles this. If you are not using pushState, you will need to implement it yourself or go the whole _escaped_fragment_ HTML snapshot route.
So if you use pushState then yes, search engines can crawl your page just fine. If you don't, then no, you will need to implement pushState or do HTML snapshots.
Bonus info - unfortunately Facebook does not handle pushState, so the Facebook crawler needs either static og: tags or HTML snapshots.
"Generated by JavaScript" is ambiguous. That could mean that you are running a JS script on the server or it could mean that you are making an AJAX call with a JS API. The difference appears to matter as far as Googlebot is concerned. But you don't have to take my word for it, as there is empirical proof of what Googlebot will and won't currently cache as far as JavaScript content in the form of live experiments using both the XMLHTTPRequest API and the Fetch API. So, as you can see, server-side rendering is still going to be the best way to go for SEO.
YouTube does not reload the entire page when navigating between pages. How can I use that navigation scheme?
Do I need to use JavaScript, or do I need an API?
It's called Ajax loading. YouTube probably uses an API in the background, but you do not need one, and it is a JavaScript technique.
Here is a primer: the Ajax tutorial by W3Schools.
There are two things you can do to achieve it:
Use Ajax: update the dynamic part of the page with the help of jQuery or a similar library.
Use iframes: update the dynamic parts by changing the iframe source. YouTube itself uses iframes to load multiple parts in parallel and dynamically.
It is possible using AJAX, by sending an XMLHttpRequest from the browser.
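A bare-bones version of that idea (the /videos/list URL and the #content container are made up for this example):
// Fetch an HTML fragment and swap it into the page without a full reload.
function loadPage(url) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    if (xhr.status === 200) {
      document.getElementById('content').innerHTML = xhr.responseText;
    }
  };
  xhr.send();
}
loadPage('/videos/list'); // hypothetical fragment URL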
It's better to see the full code: Here
Is it possible to have a navigation system optimized using JavaScript, but, for the sake of search engines, still have the hyperlinks be crawlable?
Or maybe a conditional that serves plain HTML only if JavaScript is not enabled in the browser, or when the page is crawled by a search engine?
What you are describing would be characterized as unobtrusive JavaScript.
See: http://en.wikipedia.org/wiki/Unobtrusive_JavaScript
You write your HTML in the most semantic, SEO-friendly way possible for search engines and for users with JavaScript turned off, then add your script separately to layer on the bells and whistles.
A framework such as jQuery is often useful.
For example, a link like:
<a id="about" href="/about">About</a>
could be given another function via an external JavaScript file containing:
$("#about").click( function() {
//fancy code here
return false;
});
which would stop the user being taken to /about and execute the given JavaScript instead.
Essentially this is the inverse of your suggestion; rather, JavaScript is only used, where it's available, to enhance the existing HTML.
Sure. In addition to being SEO-friendly, this approach is also far more accessible to handicapped users; if you work or may someday work in government or higher education you need to know about accessibility, though in fact everyone should be keeping this issue in mind.
Google "progressive enhancement" for more information; here's a good article.
Basically you want to create your site as if it were using normal link navigation, and then add javascript event handlers to hijack the clicks that would normally trigger navigation.
It's not easy to trigger an event if JavaScript is disabled, because running anything client-side requires JavaScript. What I do for my sites is use static HTML links, and then use JavaScript to change what happens when those links are clicked.
This way you can have a link somewhere that is still crawlable and works fine if JavaScript is disabled, but if JavaScript is enabled, an AJAX method reloads parts of the page instead.
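Roughly like this, using jQuery since it has already come up in this thread (the a.ajax-nav class and #main container are invented for the example):
// Links keep real, crawlable hrefs; JavaScript just changes what a click does.
$('a.ajax-nav').on('click', function (e) {
  e.preventDefault();                      // skip the full page load
  var url = $(this).attr('href');          // the same URL a crawler would follow
  $('#main').load(url + ' #main > *');     // fetch the page, keep only #main's contents
});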
Suckerfish dropdowns, for example, are menus based on nested HTML lists, turned into horizontal drop-down menus. They look nice and clean and have fully crawlable links. Generally, it's better to generate HTML and then use progressive enhancement to turn that HTML into something nice via JavaScript.
On the other hand, if you generate the JavaScript navigation, for example from a JSON object, then it should be easy to generate an XML sitemap for Google.
What do you mean by "optimized"? Optimized for speed, because your navigation tree is huge and would generate unnecessary HTML traffic? Then you should generate the navigation via JavaScript and Ajax calls to keep load times down, and serve a sitemap to the search engines. If you mean "pretty", then use progressive enhancement.
Basically the main thing would be to add real URLs in your href tags, and an onclick event handler to cancel the default behaviour.