Doing links like Twitter, Hash-Bang #! URLs [duplicate] - javascript

Possible Duplicate:
What’s the shebang/hashbang (#!) in Facebook and new Twitter URLs for?
I was wondering how Twitter works its links.
If you look in the source code, you'll see the links are written like /#!/i/connect or /#!/i/discover, but they don't have a JavaScript function attached to them like load('connect') or something, and clicking one doesn't require a page reload. It just swaps out the page content.
I saw this page, but then all of those files would have to exist, and you couldn't just go straight to one of them. I imagine that on Twitter those files don't actually exist, and that the URLs are handled by some other method. Please correct me if I'm wrong, though.
Is there a way I could replicate this effect? If so, is there a tutorial on how to go about doing this?

"Hash-Bang" navigation, as it's sometimes called, ...
http://example.com/path/to/#!/some-ajax-state
...is a temporary solution for a temporary problem that is quickly becoming a non-issue thanks to modern browser standards. In all likelihood, Twitter will phase it out, as Facebook is already doing.
It is the combination of several concepts...
In the past, a link served two purposes: It loaded a new document and/or scrolled down to an embedded anchor as indicated with the hash (#).
http://example.com/script.php#fourth-paragraph
Anything in a URL after the hash was not requested from the server, but was searched for in the page by the browser. This all still works just fine.
With the adoption of AJAX, new content could be loaded into the current (already loaded) page. With this dynamic loading, several problems arose: 1) there was no unique URL for bookmarking or linking to the new content, and 2) search engines would never see it.
Some smart people solved the first problem by using the hash as a sort of "state" reference to be included in links & bookmarks. After the document loads, the browser reads the hash and runs the AJAX requests, displaying the page plus its dynamic AJAX changes.
http://example.com/script.php#some-ajax-state
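A minimal sketch of that load-time restore; the "#content" container and the "/ajax/" endpoint are assumptions made up for the example, not anyone's real code:

// On load, read the hash and re-run the AJAX request it describes.
document.addEventListener('DOMContentLoaded', function () {
  var state = location.hash.slice(1);   // strip the leading "#"
  if (!state) return;                   // no hash: plain page, nothing to restore
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/ajax/' + encodeURIComponent(state));
  xhr.onload = function () {
    document.getElementById('content').innerHTML = xhr.responseText;
  };
  xhr.send();
});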
This solved the AJAX problem, but the search engine problem still existed. Search engines don't load pages and execute Javascript like a browser.
Google to the rescue. Google proposed a scheme where any URL with a hash-bang (#!) in lieu of just a hash (#) would suggest to the search bot that there was an alternate URL for indexing, which involved an "_escaped_fragment_" variable, among other things. Read about it here: Ajax Crawling: Getting Started.
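A rough sketch of the server side of that scheme in Node; every name here is an assumption for illustration, not Google's or Twitter's actual code:

var http = require('http');
var url = require('url');

// Return a bare-HTML snapshot of a given AJAX state for the crawler.
function renderSnapshot(state) {
  return '<html><body><h1>Snapshot of ' + state + '</h1></body></html>';
}

http.createServer(function (req, res) {
  var query = url.parse(req.url, true).query;
  res.writeHead(200, { 'Content-Type': 'text/html' });
  if (query._escaped_fragment_ !== undefined) {
    // The crawler rewrote /#!/some-ajax-state into
    // /?_escaped_fragment_=/some-ajax-state, so serve static HTML.
    res.end(renderSnapshot(query._escaped_fragment_));
  } else {
    // A normal browser: serve the Javascript application as usual.
    res.end('<html><body><script src="/app.js"></script></body></html>');
  }
}).listen(8080);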
Today, with the adoption of Javascript's pushState in most major browsers, all of this is becoming obsolete. With pushState, as content is dynamically loaded or changed, the current page URL can be altered without causing a page load. When desired, this provides a real, working URL for bookmarks & history. Links can then be made as they always were, without hashes & hash-bangs.
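A minimal sketch of that pattern; the link id and the showPage loader are assumptions:

// Swap content and update the address bar without a page load.
function showPage(path) {
  // ...fetch the content for `path` via AJAX and render it (app-specific)...
}

document.getElementById('nav-link').onclick = function () {
  history.pushState({}, '', this.href); // real URL, no hash or hash-bang
  showPage(this.pathname);
  return false;                         // cancel the normal page load
};

// Back/forward fire popstate, so those buttons keep working.
window.addEventListener('popstate', function () {
  showPage(location.pathname);
});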
As of today, if you load Facebook in an older browser, you'll see the hash-bangs, but a current browser will demonstrate the use of pushState.

You might want to check out more on Unique URLs.
It's loading the page via AJAX and parsing the hash (the part after the "#") to determine which page to load. This method is also used because AJAX requests don't get added to the browser's history, so the "back button breaks"; hash changes, however, are stored in history.
Since hashes can identify pages, AJAX-requested pages are kept "in history". On top of that, hashed URLs are just URLs, and they are bookmarkable, hash included, so you can also bookmark AJAX-requested pages.
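A small sketch of that history behaviour; loadPage is a stand-in for whatever fetches and renders the content:

// Every hash change is a history entry, so back/forward move
// between AJAX states instead of "breaking".
function loadPage(page) {
  // ...fetch and display the content for `page` (app-specific)...
}

window.addEventListener('hashchange', function () {
  loadPage(location.hash.replace(/^#!?/, '')); // "#!foo" or "#foo" -> "foo"
});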

Related

Javascript History.Go default to page if not current site

I'm using the following history.go link in search results, with acceptable results cross-browser. I'd prefer a PHP solution, but this filled the need until I realized a larger issue.
Return To Search Results
My only issue is when the viewer arrives from a page other than the search page, http://www.domain.com/search/
Is there a way to keep the simple script, but redirect to the search page URL when the link is clicked and the previous (-1) history entry is NOT the search page?
The PHP variable $_SERVER['HTTP_REFERER'] will give you the page the current tab/window was on before it got to you.
This value might be empty if someone opened your page directly by typing in the URL, or if the browser was prevented from transmitting it to the server.
All in all, there is no way to access the browser history at all, for security reasons.
$_SERVER['HTTP_REFERER'] is all there is that you could make use of. Sorry to disappoint you.
By the way, it's commonly used for hotlinking prevention throughout various blogs, so people cannot "link" to pictures and files, etc.
In your case, you just need to figure out whether that URL equals the search site.
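Since the history itself is off-limits, a client-side sketch using document.referrer (the JS counterpart of HTTP_REFERER) can do the check; the link id is an assumption:

// Go back if we arrived from the search page, otherwise jump straight
// to it. document.referrer can be empty, which safely fails the test.
var SEARCH_URL = 'http://www.domain.com/search/';

document.getElementById('return-link').onclick = function () {
  if (document.referrer.indexOf(SEARCH_URL) === 0) {
    history.go(-1);                     // previous page was the search page
  } else {
    window.location.href = SEARCH_URL;  // came from elsewhere: go direct
  }
  return false;
};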

Deep linking javascript powered websites

I have a website which has two versions: an all-singing, all-dancing Javascript-powered application, which is served when you request the root URL
/
As you navigate around the lovely website, the content updates, as does the URL, thanks to HTML5 pushState or good old correctly formatted #! URLs. However, if you don't have Javascript enabled, you can still use all the functionality of the site, as each piece of content also exists under its own URL. This is great for three reasons:
non javascript users can still use the site
SEO - web crawlers can index the site easily
everything is shareable on social networks
The third reason is very important to me, as every piece of content must be individually shareable on the site. And because each piece of content has its own URL, it is easy to deep-link to it, and each piece of content can have its own specific Open Graph data.
However, here is the issue I hit. You are a normal person with Javascript enabled, you are browsing an image gallery on the site, and you decide to share the picture of a lovely cat you have found. Using Javascript, the URL has been updated to
/gallery/lovely-cat
You share this URL and your friend clicks on it. When they click the link, the server sends them the non-Javascript / web-crawler version of the site, and the experience is nowhere near as nice as the Javascript version they would have been served had they gone straight to the root of the site and navigated there.
Does anyone have a nice solution / alternative setup to solve this problem? I have several hacks which work, but I am not that happy with them. They include:
a Javascript redirect to the root of the site on every page, storing a cookie / adding a #! to the URL so that on page render the Javascript router will show the correct content (does Google punish automatic Javascript redirects?)
rendering the no-Javascript page, plus some Javascript which redirects the user to the root (similar to the above) whenever the user clicks on a link
I don't particularly like either of these solutions, but can't think of a better one. Rendering the entire Javascript app for each page doesn't appear to be a solution to me, as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat as you start navigating through the site.
My solution must support old browsers which do not implement push state
Make the "non javascript / web crawler version of the site" the same as the JavaScript version. Just build HTML on the server instead of DOM on the client.
Rendering the entire javascript app for each page doesn't appear to be a solution to me,
That is the robust approach.
as you would end up with bad looking urls such as /gallery/lovely-cat/gallery/another-lovely-cat
Only if you linked (and pushState'd) to gallery/another-lovely-cat instead of /gallery/another-lovely-cat. (Note the / at the front.)
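To illustrate, assuming the current page is /gallery/lovely-cat/:

// A relative path is resolved against the current URL, which is
// what produces the ugly doubled path:
history.pushState({}, '', 'gallery/another-lovely-cat');
// -> /gallery/lovely-cat/gallery/another-lovely-cat

// A leading slash makes the path absolute, which is what you want:
history.pushState({}, '', '/gallery/another-lovely-cat');
// -> /gallery/another-lovely-cat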
Try out this plugin; it might solve your third reason, along with the other two:
http://www.asual.com/jquery/address/

Clarify the Impact Ajax Hash Navigation has on Search Engines and Non-Javascript visitors

I have an existing site to which I am adding AJAX Hash navigation. There are a couple situations that concern me.
I am using the onclick event of links to update the hash when appropriate.
My updateHash routine simply updates the hash to be #!Galleries/Colored-Pencil/71/1/
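The markup itself isn't shown here; a plausible shape for such a link, with every name below being a guess rather than the question's actual code, would be:

// Links keep a real href for non-JS visitors; onclick loads the
// content via AJAX and records the state in the hash.
function updateHash(state) {
  location.hash = '#!' + state; // e.g. "#!Galleries/Colored-Pencil/71/1/"
}

var link = document.getElementById('gallery-link');
link.onclick = function () {
  updateHash('Galleries/Colored-Pencil/71/1/');
  // ...fetch and render the gallery content via AJAX here...
  return false; // suppress navigation; the real href still serves non-JS visitors
};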
So - all links work for non-js visitors... the problems arrive when a hash-encoded link appears on an external site.
QUESTIONS
1) Inbound links... Say someone copies and pastes a hash-encoded URL (i.e. www.mysite.com/#!Galleries/Colored-Pencil/71/1/). What can I possibly do for a non-Javascript visitor who clicks the link? It seems to me they would just end up at my default page; the server won't know whether the client has JS enabled, or anything about the requested hash. I have to imagine there is no workaround... I would have to display the home page. Besides showing the default home page, is there any way to support non-Javascript visitors who stumble upon a hash-encoded URL?
2) Google... Since all of my URLs are presented without the hashes (which are only used on a JS click), I don't feel I need to implement Google's navigation of hashed links (i.e. the ?_escaped_fragment_= scheme)... but what about external links that contain the hashed navigation? I want the link juice to go to the "normal" URL (without hash-encoding). Should I handle '?_escaped_fragment_=' calls with a redirect to the non-encoded URL? (Doing so would also prevent duplicate content in Google's index.)
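That redirect could look something like this in Node, as a sketch under the assumption that the fragment maps directly onto the normal path:

var http = require('http');
var url = require('url');

http.createServer(function (req, res) {
  var fragment = url.parse(req.url, true).query._escaped_fragment_;
  if (fragment !== undefined) {
    // Permanently redirect the crawler's rewritten URL to the normal
    // one, consolidating link juice and avoiding duplicate content.
    res.writeHead(301, { Location: '/' + fragment });
    res.end();
    return;
  }
  // ...normal request handling here...
  res.end('regular page');
}).listen(8080);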
I can think of ways to alleviate some of these issues (suggesting a permalink, or trying to intercept the "copy" operation on the URL [not sure if that's possible or wise] and replacing it with the desired URL), but I don't like these options and don't plan on investigating them, especially because the original problem would still need to be handled (hash-encoded URLs appearing on external web sites).
THANKS!

Javascript App and SEO

I've got this setup:
Single page app that generates HTML content using Javascript. There is no visible HTML for non-JS users.
History.js (pushState) for handling URLs without hashbangs. So the app on "domain.com" can load the dynamic content for "page-id" and update the URL to "domain.com/page-id". Direct URLs also work nicely via Javascript this way.
The problem is that Google cannot execute Javascript this way. So essentially, as far as Google knows, there is no content whatsoever.
I was thinking of serving cached content to search bots only. So, when a search bot hits "domain.com/page-id", it loads the cached content, but if a user loads the same page, they see the normal (Javascript-injected) content.
A proposed solution for this is using hashbangs, so Google can automatically convert those URLs to alternative URLs with an "_escaped_fragment_" parameter. On the server side, I could then redirect those alternative URLs to cached content. As I won't use hashbangs, this doesn't work.
Theoretically I have everything in place. I can generate a sitemap.xml and I can generate cached HTML content, but one piece of the puzzle is missing.
My question, I guess, is this: how can I filter out search bot access, so I can serve those bots the cached pages, while serving my users the normal JS enabled app?
One idea was parsing the "HTTP_USER_AGENT" string in .htaccess for any bots, but is this even possible, and isn't it considered cloaking? Are there other, smarter ways?
updates the URL to "domain.com/page-id". Also, direct URLS work nicely via Javascript this way.
That's your problem. The direct URLs aren't supposed to work via JavaScript. The server is supposed to generate the content.
Once whatever page the client has requested is loaded, JavaScript can take over. If JavaScript isn't available (e.g. because it is a search engine bot) then you should have regular links / forms that will continue to work (if JS is available, then you would bind to click/submit events and override the default behaviour).
A proposed solution for this is using hashbangs
Hashbangs are an awful solution. pushState is the fix for hashbangs, and you are using it already - you just need to use it properly.
how can I filter out search bot access
You don't need to. Use progressive enhancement / unobtrusive JavaScript instead.
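A sketch of that approach: the server renders every URL as real HTML, and this script merely enhances same-site links when JS is available (load is a stand-in for the app's renderer):

// Real links the server can fulfil; JS upgrades them when present.
function load(path) {
  // ...fetch the content for `path` via AJAX and render it...
}

document.addEventListener('click', function (e) {
  var a = e.target;
  while (a && a.tagName !== 'A') a = a.parentNode; // find the enclosing link
  if (!a || a.host !== location.host) return;      // leave off-site links alone
  e.preventDefault();
  history.pushState({}, '', a.href);               // keep the real URL
  load(a.pathname);
});

window.addEventListener('popstate', function () {
  load(location.pathname);                         // back/forward still work
});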

How do you keep content from your previous web page after clicking a link?

I'm sorry if this is a newbie question, but I don't really know what to search for either. How do you keep content from a previous page when navigating through a web site? For example, the right-side Activity/Chat bar on Facebook: it doesn't appear to refresh when going to different profiles; it's not an iframe, and it doesn't appear to be AJAX (I could be wrong).
Thanks,
I believe what you're seeing in Facebook is not actual "page loads", but clever use of AJAX or AHAH.
So ... imagine you've got a web page. It contains links. Each of those links has a "hook" -- a chunk of JavaScript that gets executed when the link gets clicked.
If your browser doesn't support JavaScript, the link works as it normally would on an old-fashioned page, and loads another page.
But if JavaScript is turned on, then instead of navigating to the HREF, the code run by the hook places a request to a different URL that spits out just the HTML that should replace a DIV already showing somewhere on the page.
There's still a real link in the HTML just in case JS doesn't work, so the HTML you're seeing looks as it should. Try disabling JavaScript in your browser and see how Facebook works.
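A bare-bones version of that hook; the element ids and the ?fragment=1 convention are invented for the sketch:

// A real link, overridden when JS is available: fetch just the HTML
// fragment and drop it into an existing DIV instead of navigating.
document.getElementById('profile-link').onclick = function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', this.href + '?fragment=1'); // endpoint returning bare HTML
  xhr.onload = function () {
    document.getElementById('main').innerHTML = xhr.responseText;
  };
  xhr.send();
  return false; // cancel navigation; the href still works without JS
};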
Live updates like this are all over the place in Web 2.0 applications, from Facebook to Google Docs to Workflowy to Basecamp, etc. The "better" tools provide the underlying HTML links where possible so that users without JavaScript can still get full use of the applications. (This is called Progressive Enhancement or Graceful degradation, depending on your perspective.) Of course, nobody would expect Google Docs to work without JavaScript.
In the case of a chat like Facebook's, you must save the entire conversation on the server side (for example, in a database). Then, when the user changes pages, you can restore the state of the conversation server-side (with PHP) or by querying your server as you do for the chat (Javascript + AJAX).
This isn't done in Javascript. It needs to be done using your back-end scripting language.
In PHP, for example, you use Sessions. The variables set by server-side scripts can be maintained on the server and tied together (between multiple requests/hits) using a cookie.
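The example above is PHP, but the same idea sketched in Node (using the express and express-session packages; the route is made up for illustration):

// Per-visitor state kept on the server, tied together by a cookie.
// This is the same mechanism PHP sessions use.
var express = require('express');
var session = require('express-session');

var app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

app.get('/chat', function (req, res) {
  req.session.messages = req.session.messages || [];
  res.json(req.session.messages); // the conversation survives page changes
});

app.listen(3000);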
One really helpful trick is to run HTTPFox in Firefox so you can actually monitor what's happening as you browse from one page to the next. You can check out the POST/Cookies/Response tabs and watch for which web methods are being called by the AJAX-like behaviors on the page. In doing this you can generally deduce how data is flowing to and from the pages, even though you don't have access to the server side code per se.
As for the answer to your specific question, there are too many approaches to list (cookies, server-side persistence such as session or database writes, a simple form POST, VIEWSTATE in .NET, etc.).
You can reopen your last closed web page by pressing Ctrl+Shift+T, and then save whatever content you like. For example: if I closed a web page about document sharing and am now on a travel web page, pressing Ctrl+Shift+T automatically reopens my last web page. This works in Firefox, Internet Explorer, Opera, and more. Hope this answer is helpful to you.
