Facebook uses all JavaScript... why?

I noticed that, like Gmail, Facebook's source code shows nothing but JavaScript. Why do they use JS to build the page?

This allows them to render pages extremely fast: they first load just enough JavaScript to render the visible page, and then load the rest.
They call it BigPipe. You can read more here: http://www.facebook.com/note.php?note_id=389414033919
Pretty interesting reading.

Because their pages are extremely dynamic: most of the content has to be constructed on the fly.

All of their content is populated using AJAX, which gives the site a dynamic, desktop-like look and feel (for example, the instant messaging features).

Because AJAX (Asynchronous JavaScript and XML) adds dynamic behaviour to web pages: multiple parts of a single page can load and update simultaneously, which makes pages noticeably more flexible and faster to load and use.
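As a rough illustration of that idea (not Facebook's actual code; the fragment URLs and element IDs below are made up for the example), two regions of a page can be requested in parallel and filled in as their responses arrive:

// Request one fragment of the page and drop it into a target element.
function loadFragment(url, targetId) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);              // asynchronous request
    xhr.onload = function () {
        if (xhr.status === 200) {
            document.getElementById(targetId).innerHTML = xhr.responseText;
        }
    };
    xhr.send();
}

// Both requests run at the same time; each region fills in as soon as its
// response arrives, instead of waiting for one big page load.
loadFragment('/fragments/newsfeed.html', 'newsfeed');
loadFragment('/fragments/chat.html', 'chat');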


Can Googlebot crawl javascript generated content?

We have a web app whose content is generated by JavaScript. Can Google index those pages?
When we investigated this issue, we only found solutions on old pages about using "#!" in links.
In our app the links look like this:
domain.com/paris
domain.com/london
When we use these kinds of links, JavaScript populates the content.
Is it wise to use HTML snapshots, or do you have any other suggestions?
Short answer
Yes, they can crawl JavaScript-generated content, as long as you are using pushState.
Detailed answer
It depends on your setup. Google and Bing CAN crawl JavaScript and AJAX-based content if you are using pushState. If you do, they will handle content coming from AJAX calls, updates to the page title or meta tags made with JavaScript, and similar things.
Most frontend frameworks like Angular, Ember or Backbone already work with pushState, so in those cases you don't need to do anything; check whatever system you are using to see how it handles this. If you are not using pushState, you will need to implement it on your own or use the whole _escaped_fragment_ HTML snapshot approach.
So if you use pushState then yes, search engines can crawl your page just fine. If you don't, then no; you will need to implement pushState or provide HTML snapshots.
Bonus info: unfortunately Facebook's crawler does not handle pushState, so it needs either non-dynamic og tags or HTML snapshots.
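To make the pushState approach concrete, here is a minimal sketch (the fragment URL scheme and the #content element are placeholders, not part of the question's app) of pairing an AJAX content load with history.pushState so each view keeps a real, crawlable URL:

// Fetch a fragment and render it; does not touch the browser history.
function loadContent(path) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', path + '/fragment.html', true);   // placeholder URL scheme
    xhr.onload = function () {
        document.getElementById('content').innerHTML = xhr.responseText;
    };
    xhr.send();
}

// Normal navigation: render the view AND record a real URL (e.g. /paris).
function navigateTo(path) {
    loadContent(path);
    history.pushState({ path: path }, '', path);
}

// Back/Forward buttons: re-render without pushing a new history entry.
window.onpopstate = function (event) {
    if (event.state) {
        loadContent(event.state.path);
    }
};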
"Generated by JavaScript" is ambiguous. That could mean that you are running a JS script on the server or it could mean that you are making an AJAX call with a JS API. The difference appears to matter as far as Googlebot is concerned. But you don't have to take my word for it, as there is empirical proof of what Googlebot will and won't currently cache as far as JavaScript content in the form of live experiments using both the XMLHTTPRequest API and the Fetch API. So, as you can see, server-side rendering is still going to be the best way to go for SEO.

Modular approach to client-side applications

This is a follow-up to my previous question.
Suppose there is a single web page with a login form and a sign-up link. When a user clicks the link, a sign-up form is displayed. Suppose also that I create separate HTML, CSS, and JavaScript files for both forms, for the sake of modularity.
Now the web page should contain some JavaScript code to load the login form when the page is loaded, and to load the sign-up form when the link is clicked.
Does this approach make sense? Are there any frameworks/libraries which implement it? How would you suggest implementing it?
I think the idea has some issues. First, you should know that there are some old-fashioned ways to load a completely separate page inside the main document. Using an iframe tag is one of the most popular (and least secure) ways to do it. Showing a popup with window.open is another way to open a new window and load a specific URL, completely separate from the main page. BUT...
There are many reasons why I would suggest not doing it in any of those ways. You can simply use a library like jQuery to load another HTML fragment into the current page, without loading a whole new document and taking the performance hit that comes with it. Search for "jQuery $.get" and you will see how easy it is.
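As a minimal sketch of that approach (the file name signup-form.html, the #signup-link link, and the #form-container element are placeholder names for this example):

// When the sign-up link is clicked, fetch the form's HTML fragment and
// drop it into the page instead of navigating away.
$('#signup-link').click(function () {
    $.get('signup-form.html', function (html) {
        $('#form-container').html(html);
    });
    return false;   // stay on the current page
});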
Hope it helps.
Cheers
Yes, that makes sense to me. I really like this approach, as I think breaking an app into smaller chunks makes development and maintenance much easier.
Basically, you need to load the CSS and JS files by appending a link tag and a script tag, respectively, to the head section of the HTML. For loading the HTML part of the module you can simply use the jQuery.get() method, as suggested in the other answer.
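A rough sketch of that asset-loading step (the module name and file paths are made up for the example):

// Append a module's stylesheet and script to the <head> at runtime.
function loadModuleAssets(name) {
    var head = document.getElementsByTagName('head')[0];

    var css = document.createElement('link');
    css.rel = 'stylesheet';
    css.href = name + '.css';
    head.appendChild(css);

    var js = document.createElement('script');
    js.src = name + '.js';
    head.appendChild(js);
}

loadModuleAssets('signup');   // would load signup.css and signup.js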
I have tried to implement this. I recently released my work on it: a small code base. In my approach each module has its own folder with its JS, HTML and CSS files, and optionally a server-side file too (such as a PHP or ASPX file) that is called by the JavaScript to query the server.
Here is the project page on GitHub, called Yuva.
Take a look and let me know if this makes sense to you.

Using injected JavaScript to copy text from a web page

As part of a job I'm doing on a web site, I have to copy a few thousand lines of text from several pages of the old site and paste them into the HTML for the new site. The long and painstaking way of going to the old page, copying the many lines of text, and then pasting them into my editor line by line is getting really old. I thought of using injected JavaScript to do this, but I'm not quite sure where to start. Thanks in advance for any help.
Here are links to a page of the old site and a page of the new site. As you can see from the tables on each page, it would take a ton of time to copy it all manually.
Old site: http://temp.delridgelegalformscom.officelive.com/macorporation1.aspx
New Site: http://ezwebsites.us/delridge/macorporation1.html
In order to do this type of work, you need two things: a way of injecting or executing your script on that page, and a good working knowledge of the Document Object Model (DOM) of the target site.
I highly recommend the Firefox plugin Firebug, or an equivalent tool in your browser of choice. Firebug lets you execute commands from a JavaScript console, which will help. Hopefully the old site does not have a bunch of <FONT>, <OBJECT> or <IFRAME> tags, which would make this even more tedious.
Using a library like Prototype or jQuery will also help with selecting the parts of the page you need. You can submit the results using jQuery like this:
$(function() {
    // Grab the HTML of the content area and post it to your own server.
    var snippet = $('#content-id').html();
    $.post('http://myserver/page', { content: snippet });
});
A problem you will very likely run into is the same-origin policy that browsers enforce for JavaScript: scripts can generally only make requests back to the origin the page came from. So if your JavaScript was loaded from http://myserver, as in this example, you would be OK.
Perhaps another route you can take is to use a scripting language like Ruby, Python, or (if you really have patience) VBA. The script can work through the list of pages to scrape and write the information to a target location. It can just as easily package the content up as a request to the new server, if that's how pages get updated. This way you don't have to worry about injecting JavaScript and hoping it all works without problems.
I think you need Greasemonkey: http://www.greasespot.net/

SEO friendly javascript and CSS links?

Is it possible to have a navigation system optimized using JavaScript, but, for the sake of search engines, still have the hyperlinks be crawlable?
Or maybe a conditional statement that serves plain HTML only if JavaScript is not enabled in the browser or when the site is crawled by a search engine?
What you are describing would be characterized as unobtrusive JavaScript.
See: http://en.wikipedia.org/wiki/Unobtrusive_JavaScript
You write your HTML in the most semantic, SEO-friendly way possible for search engines and for users with JavaScript turned off, then add your script separately to provide the bells and whistles.
A framework such as jQuery is often useful.
For example, a link such as
<a id="about" href="/about">About</a>
could be given another function via an external JavaScript file containing:
$("#about").click( function() {
//fancy code here
return false;
});
which would stop the user from being taken to /about and execute the given JavaScript instead.
Essentially this is the inverse of your suggestion: JavaScript is only used, when it's available, to enhance the existing HTML.
Sure. In addition to being SEO-friendly, this approach is also far more accessible to disabled users; if you work, or may someday work, in government or higher education you need to know about accessibility, though in fact everyone should keep this issue in mind.
Google "progressive enhancement" for more information; here's a good article.
Basically you want to build your site as if it used normal link navigation, and then add JavaScript event handlers that hijack the clicks which would normally trigger navigation.
It's not easy to trigger behaviour when JavaScript is disabled, because running anything client-side requires JavaScript. What I do for my sites is use static HTML links, and then use JavaScript to change what happens when those links are clicked.
This way you can have a link somewhere that is still crawlable and works fine with JavaScript disabled, but with JavaScript enabled it uses an AJAX call to reload only part of the page.
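A small sketch of that pattern (the class name, element ID, and URLs are placeholders): the link is a plain, crawlable anchor, and when JavaScript is available the click is intercepted and only the main content region is reloaded.

// Markup (crawlable without JavaScript): <a class="ajax-nav" href="/about">About</a>
$('a.ajax-nav').click(function () {
    var url = $(this).attr('href');
    // jQuery's .load() can extract one region of the fetched page;
    // "> *" takes that region's children so the wrapper isn't duplicated.
    $('#main-content').load(url + ' #main-content > *');
    return false;   // cancel the full-page navigation
});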
The Suckerfish dropdowns, for example, are drop-down menus based on nested HTML lists, turned into horizontal menus. They look nice and clean and have fully crawlable links. Generally, it's better to generate the HTML first and then use progressive enhancement to turn that HTML into something nice via JavaScript.
On the other hand, if you generate the navigation in JavaScript, for example from a JSON object, then it should be easy to generate an XML sitemap for Google.
What do you mean by "optimized"? Optimized for speed, because your navigation tree is huge and would generate unnecessary HTML traffic? Then you should generate the navigation via JavaScript and Ajax calls to keep load times down, and serve a sitemap to the search engines. If you mean "pretty", then use progressive enhancement.
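For reference, such a sitemap can be as small as this (the URLs are placeholders); it simply lists the real navigation targets so search engines can find them even if the visible menu is built with JavaScript:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url><loc>http://www.example.com/</loc></url>
    <url><loc>http://www.example.com/about</loc></url>
    <url><loc>http://www.example.com/products</loc></url>
</urlset>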
Basically the main thing is to put real URLs in your href attributes, and add an onclick event handler that cancels the default navigation.

How to force every page to load a certain javascript file?

I've created a pretty basic system here at work that does what Google Analytics does (extremely simplistic by comparison) and it works quite well, but like Google Analytics it requires each page to reference a JavaScript file. Is there any way to make all of the pages served from IIS reference this JavaScript file? I would like to capture these stats for every page.
Any ideas?
Thanks
Hmm, it looks like you are looking for this.
If you're dealing with static HTML files your best bet seems to be this previous question.
If you have an ASP site going, and you already have a header or layout file, I'd recommend putting it in there.
This depends on how you build your web site, but most people do this by adding the reference to their templates, layouts, master pages, or whatever term is used in your development platform.
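For example, with a shared header or layout file the reference only has to live in one place (the file path and script name here are just examples):

<!-- Shared layout/header file: every page rendered through it
     automatically picks up the analytics script. -->
<head>
    <title>My Site</title>
    <script type="text/javascript" src="/scripts/analytics.js"></script>
</head>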
You don't want every page tracked; pages returning data such as JSON or XML, for example, should not be meddled with. This is why it is better to have explicit control over which pages get the analytics JavaScript added to them.
