Some websites say that after using html5mode and removing the hashtags from the URL, "you do not need to serve different or pre-rendered content to Google."
Google's own AJAX crawling documentation says it is deprecated. Other websites say Google can crawl an AngularJS site fine. Older Stack Overflow questions offer yet another solution.
If you don't use hashtags, you can append an _escaped_fragment_ request to the end of the URL to see how Google sees your website.
My AngularJS application uses html5mode and needs no hashtags (e.g. www.domain.com/app/page-1). What should I do to be sure Google can crawl my AngularJS application fine? Could you tell me more detail about crawling (I am not a senior)?
Some information is given without links because I could not post more than two links.
Thank you.
I'm glad to see from your question that you have already done quite a lot of research on AngularJS and the Google crawler. Since you already know most of the relevant material, there is very little left beyond making sure the bot actually works as expected.
Hashbang URLs are an ugly stopgap requiring the developer to provide a pre-rendered version of the site at a special location. They still work, but you don't need to use them.
Hashbang URLs look like this:
domain.com/#!path/to/resource
This would be paired with a metatag like this:
<meta name="fragment" content="!">
Google will not index them in this form, but will instead pull a static version of the site from the _escaped_fragment_ URL and index that.
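For reference, under that (now deprecated) scheme the crawler rewrites the hashbang into a query parameter and requests that URL instead, roughly like this:

domain.com/#!path/to/resource
is fetched by the crawler as: domain.com/?_escaped_fragment_=path/to/resource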
Pushstate URLs look like any ordinary URL:
domain.com/path/to/resource
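Since the question is about AngularJS with html5mode, this is roughly what enables those pushState URLs; a minimal sketch (the module name 'myApp' is just a placeholder):

// Enable html5mode so Angular uses pushState URLs instead of #/hashbang routes
angular.module('myApp', []).config(function ($locationProvider) {
  $locationProvider.html5Mode(true);
});

You also need a <base href="/"> tag in your index.html, and your server has to return the application shell for any deep URL so that a refresh on domain.com/path/to/resource doesn't 404.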
Making Sure it Works:
Google Webmaster Tools now contains a tool that lets you fetch a URL as Google and render the JavaScript as Google renders it. Link to Googlebot-fetch
AngularJS and Google Crawler Stuff:
1: This is a wonderful article explaining everything in detail about AngularJS SEO
2: This question has already been answered in detail by #superluminary; please take a look at Use PushState and Precomposition
3: Some more answers from a post I wrote earlier: "Link"
Related
Lately Google announced it will be rolling back support for _escaped_fragment_. It was a feature Google used to get the "static" content of a website if it had a hashbang (#!) in the URL.
So now Google advises providing, along with the JS version of the website, a static non-JS version, served both to users with no JS and to Google's bots in the same manner.
So when a person visits, for example, test.com/#!/item/2
I should generate the JS version of the website and, in a noscript tag, a non-JS version. OK.
But since the hashbang is not sent to the server, how should I know that I need to generate a static page for item 2?
So my question is: how do I provide static content for no-JS users on a website that uses the hashbang URL scheme?
You can't, but that isn't what Google is saying.
Instead of using hashbangs, you should use pushState and the rest of the History API.
That will let you have URLs like http://test.com/item/2.
If someone visits http://test.com/item/2 then your server should generate the page in the state it would be in if they had visited http://test.com/item/1 and then triggered the JavaScript event that would convert it into http://test.com/item/2.
No need to use noscript at all.
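A minimal sketch of that flow with the History API (showItem() and the link selector are hypothetical, just to illustrate the idea):

// Clicking through to item 2: update the DOM, then update the URL without a reload
document.querySelector('a[href="/item/2"]').addEventListener('click', function (e) {
  e.preventDefault();
  showItem(2);                                    // hypothetical: render item 2 client-side
  history.pushState({ item: 2 }, '', '/item/2');  // same URL the server can render directly
});

// Back/forward buttons fire popstate, so restore the right state there too
window.addEventListener('popstate', function (e) {
  if (e.state && e.state.item) {
    showItem(e.state.item);
  }
});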
I have a website which has two versions: an all-singing, all-dancing JavaScript-powered application which is served when you request the root URL
/
As you navigate around the lovely website the content updates, as does the URL, thanks to HTML5 pushState or good old correctly formatted #! URLs. However, if you don't have JavaScript enabled you can still use all the functionality of the site, as each piece of content also exists under its own URL. This is great for 3 reasons:
non-JavaScript users can still use the site
SEO - web crawlers can index the site easily
everything is shareable on social networks
The third reason is very important to me, as every piece of content must be individually shareable on the site. And because each piece of content has its own URL, it is easy to deep link to it, and each piece of content can have its own specific Open Graph data.
However, the issue I hit is the following. You are a normal person with JavaScript enabled, you are browsing an image gallery on the site, and you decide to share the picture of a lovely cat you have found. Using JavaScript, the URL has been updated to
/gallery/lovely-cat
You share this URL and your friend clicks on it. When they click on the link, the server sends them the non-JavaScript / web crawler version of the site, and the experience is nowhere near as nice as the JavaScript version they would have been served if they had gone directly to the root of the site and navigated there.
Does anyone have a nice solution / alternative setup to solve this problem? I have several hacks which work, but I am not that happy with them. They include:
a JavaScript redirect to the root of the site on every page, storing a cookie / adding a #! to the URL so that on page render the JavaScript router shows the correct content (does Google punish automatic JavaScript redirects?)
rendering the non-JavaScript page and adding some JavaScript which redirects the user to the root, similar to the above, whenever the user clicks on a link
I don't particularly like either of these, but I can't think of anything better. Rendering the entire JavaScript app for each page doesn't appear to be a solution to me, as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat as you start navigating through the site.
My solution must support old browsers which do not implement pushState.
Make the "non javascript / web crawler version of the site" the same as the JavaScript version. Just build HTML on the server instead of DOM on the client.
Rendering the entire JavaScript app for each page doesn't appear to be a solution to me,
That is the robust approach.
as you would end up with bad-looking URLs such as /gallery/lovely-cat/gallery/another-lovely-cat
Only if you linked (and pushState'd) to gallery/another-lovely-cat instead of /gallery/another-lovely-cat (note the / at the front).
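A quick illustration of the difference (assuming you are currently on /gallery/lovely-cat):

// Absolute target: the leading slash replaces the whole path
history.pushState(null, '', '/gallery/another-lovely-cat');

// Relative target: resolved against the current location, which is how you end up
// with nested URLs like /gallery/lovely-cat/gallery/another-lovely-cat
// history.pushState(null, '', 'gallery/another-lovely-cat');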
Try out this plugin; it might solve your third reason, along with the other two:
http://www.asual.com/jquery/address/
This question may not be tied to an exact software stack, framework, or language.
For my current project, we are using AngularJS to build the front end, with a constant entrance page that loads the real data and renders it, which is easy to put behind a CDN and good for fast loading on the browser side. But for some social features, such an architecture can cause problems. For example, when you paste a link you are interested in into Facebook to share it, Facebook grabs your page and shows a preview. If the landing page is empty, that preview won't work.
(I heard that Google+ recently started rendering JavaScript logic server-side before sending back a preview, but obviously that is not commonly supported by other, similar services. Google.com also supports indexing JS-based single-page applications.)
Is there a better way to solve this problem gracefully, rather than falling back to a dynamically generated page which includes the real data? Have I missed something in my understanding of the problem?
========
... I was even thinking that, for requests identified as coming from Facebook (e.g. by user agent), I could redirect them to a special gateway wrapping something like PhantomJS, fetch the page, render it server-side, and send back a DOM-tree snapshot as the content for Facebook to generate the preview from. But I also doubt that this is a good direction. :(
We are in the same situation. The simple solution is to use Open Graph meta tags in the pages your server will serve to Facebook scrapers.
Basically you need to do server-side what your web app is doing client-side. The amount of work depends heavily on your hosting technology (MVC makes it super easy), your URI format, and the APIs you use.
You will find some explanations here:
https://developers.facebook.com/docs/plugins/share-button/
Open graph introduction:
http://ogp.me/
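For illustration, a rough sketch of the server-side part (Node/Express here; lookupItem() and the field names are hypothetical, substitute whatever your stack and data model provide):

// Serve real Open Graph meta tags for a deep link so the Facebook scraper
// gets a preview even though the UI itself is rendered client-side.
var express = require('express');
var app = express();

app.get('/item/:id', function (req, res) {
  var item = lookupItem(req.params.id); // hypothetical data access
  res.send(
    '<!doctype html><html><head>' +
    '<meta property="og:title" content="' + item.title + '">' +
    '<meta property="og:description" content="' + item.description + '">' +
    '<meta property="og:image" content="' + item.imageUrl + '">' +
    '<meta property="og:url" content="http://example.com/item/' + req.params.id + '">' +
    '</head><body><div ng-app="myApp"></div><script src="/app.js"></script></body></html>'
  );
});

app.listen(3000);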
Per "Google crawling, AJAX and HTML5", Google can crawl dynamic pages that use the History API, yet it says that Google won't execute any JavaScript on the page. To me that means the AJAX request and DOM building won't happen, so Google won't be able to index the contents of the page that is loaded in. Can anyone please elaborate?
As written in the answer, you'll need to provide hard links for bots.
Just treat it like a user without JavaScript. You should support users with no JavaScript. Feel free to implement the <noscript> tag.
Linked on that page is a guide by Google on how to make your AJAX site crawlable by Google. Following the scheme mentioned there, you can do it:
www.example.com/ajax.html#!key=value
This way you tell Google's crawlers that your site is AJAX-crawlable, and they will do the rest.
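On the server side that means answering the _escaped_fragment_ request with a pre-rendered snapshot; a rough sketch (Express-style, snapshotFor() is a hypothetical pre-rendering step such as a cached PhantomJS render):

// www.example.com/ajax.html#!key=value is fetched by the crawler as
// www.example.com/ajax.html?_escaped_fragment_=key=value
var express = require('express');
var app = express();

app.get('/ajax.html', function (req, res, next) {
  var fragment = req.query._escaped_fragment_;
  if (fragment !== undefined) {
    res.send(snapshotFor(fragment)); // hypothetical: return the pre-rendered HTML snapshot
  } else {
    next(); // normal visitors get the regular JavaScript-driven page
  }
});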
I'm making a Web-App (still in "Beta") which uses the Flickr API to get information for the photos of a particular Flickr user and generates IPB code to post any of his/her images.
While Flickr now gives you the IPB code to show the image and link back to the photo page directly on its site, my app also has the option of embedding the title, description, select EXIF data, location information, etc. into the post for the IPB forum.
I've most recently added the option to integrate a Google Maps image of the photo's geolocation data into the post by using the Google Static Maps API.
The problem is that the image URL I have is in the following form (including IPB [IMG] tags):
[IMG]http://maps.google.com/maps/api/staticmap?zoom=16&size=600x600&maptype=hybrid&markers=19.387687,-99.251732&sensor=false[/IMG]
Which shows this example image (In practice the image size is user selectable):
However, some IPB forums seem not to support dynamic image URLs, which gives me a broken image. I'd like to replace the
[IMG]http://maps.google.com/maps/api/staticmap?zoom=16&size=600x600&maptype=hybrid&markers=19.387687,-99.251732&sensor=false[/IMG]
with something like
[IMG]http://maps.google.com/maps/api/staticmap/map0000001.png[/IMG]
which should be supported by all IPB forums. Thanks in advance for your help.
In case you're interested, the most recent "released" version of my Web-App can be found here: http://flickr.argote.mx/ (The changes I mention here are still on local development server).
There are two types of solution as far as I can see:
You create a proxy server to download the images from Google and serve them on nice URLs to the clients. The disadvantage is that you will have to handle high traffic through your servers (I don't know much about your project; you have to decide whether the performance is acceptable).
You create a special BBCode to handle your URLs, which you can then use on any IPB forum.
+1: You could create a server-side script with nice URLs that redirects to the Google URLs, but the problem is you never know how different browsers will handle it. I suppose they normally don't follow redirects for images inside pages.
+2: Ask Google to support nice URLs ;)
Hope that helps.
You should be able to use a URL shortener service, as long as the service supports simple 301 redirects to image resources. You'd have to try out which ones do.
For example, bit.ly has a REST API. It allows you to make calls like this from within PHP:
http://api.bitly.com/v3/shorten?login=abc&apiKey=123&longUrl={myurl}&format=json
returning a bit.ly URL that you can use in BBCode.
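As a sketch of that call (Node-style JavaScript here rather than PHP; login=abc and apiKey=123 are the placeholder credentials from the URL above):

var longUrl = encodeURIComponent(
  'http://maps.google.com/maps/api/staticmap?zoom=16&size=600x600&maptype=hybrid&markers=19.387687,-99.251732&sensor=false'
);

fetch('http://api.bitly.com/v3/shorten?login=abc&apiKey=123&longUrl=' + longUrl + '&format=json')
  .then(function (r) { return r.json(); })
  .then(function (json) {
    // json.data.url is the shortened link to drop into the BBCode
    console.log('[IMG]' + json.data.url + '[/IMG]');
  });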
Edit: According to this JSFiddle, this method works, at least in Chrome and IE8. It would still need scrupulous testing across browsers.
Since both Aston's suggestions are out of the question, maybe you can set up a simple script that redirects the request to Google Maps images (instead of a proxy)?
So you can have something like http://my-simple-script.tld/lat,lng and have that script redirect to the correct Google Maps static image URL.
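A minimal sketch of such a redirect script (Node/Express; my-simple-script.tld and the lat,lng path come from the suggestion above, and the zoom/size values are just the ones from the question):

// http://my-simple-script.tld/19.387687,-99.251732 -> 302 redirect to the static map image
var express = require('express');
var app = express();

app.get('/:coords', function (req, res) {
  // coords is expected as "lat,lng", e.g. "19.387687,-99.251732"
  res.redirect(
    'http://maps.google.com/maps/api/staticmap' +
    '?zoom=16&size=600x600&maptype=hybrid' +
    '&markers=' + req.params.coords +
    '&sensor=false'
  );
});

app.listen(3000);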