How to implement caching with jQuery on the client side?

I want to use the values from one page in another page using a cache. Can anyone suggest how to do this with jQuery?
After searching several sites, I found this link: http://markdaggett.com/blog/2012/03/28/client-side-request-caching-with-javascript/. It uses a $.Cache.add(key, value); method.
When I used this in my .aspx page it threw an error. Do I need to include any additional files to make the caching work?

If you want to store a value in one web page that you can retrieve in another page (on the same domain), you can either store it in a cookie or in local storage.
Browsers cache whole web resources (pages, images, scripts, etc.), not individual pieces of data.
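For example, a minimal sketch in plain JavaScript, no plugin required (the key name shared-value is just an illustration):
// On page A: store the value (falls back to a cookie if localStorage is unavailable)
if (window.localStorage) {
    localStorage.setItem('shared-value', 'hello from page A');
} else {
    document.cookie = 'shared-value=' + encodeURIComponent('hello from page A') + '; path=/';
}

// On page B (same domain): read it back
var value;
if (window.localStorage) {
    value = localStorage.getItem('shared-value');
} else {
    var match = document.cookie.match(/(?:^|; )shared-value=([^;]*)/);
    value = match && decodeURIComponent(match[1]);
}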

+1 for using local storage.
Here is a nice little plugin for using localStorage with jQuery (including docs): http://upstatement.com/blog/2012/01/jquery-local-storage-done-right-and-easy/
This plugin uses cookies as a fallback for browsers that do not support the localStorage feature.

Related

Get localStorage from within extension without loading a page

I know how to get the localStorage from any open web page by using content scripts. So I'm basically able to open a new tab with my own web page, read the storage data with a content script, and message it to the background page.
But now I'd like to do this without loading an external page every time. Is there a way to access the localStorage of a page directly from within the extension? Maybe some query to Chrome directly.
I don't see any API for that.
Your options are:
Make a native messaging host application that would read the database files directly from the Local Storage directory in the browser user profile. An example: What's the best way to read Sqlite3 directly in Browser using Javascript?
Put the other page into an iframe: Is it possible to use HTML5 local storage to share data between pages from different sites?
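For the iframe route, the usual pattern is postMessage between the embedding page and a small bridge page served from the other origin. A rough sketch (the bridge URL and the message shape are assumptions for illustration):
// In the consumer page: embed the other origin's page and ask it for a key
var frame = document.createElement('iframe');
frame.src = 'https://example.com/storage-bridge.html'; // hypothetical bridge page
frame.style.display = 'none';
document.body.appendChild(frame);

window.addEventListener('message', function (event) {
    if (event.origin !== 'https://example.com') return; // only trust the bridge origin
    console.log('value from the other page:', event.data.value);
});

frame.onload = function () {
    frame.contentWindow.postMessage({ key: 'shared-value' }, 'https://example.com');
};

// In storage-bridge.html on the other origin: answer with the stored value
window.addEventListener('message', function (event) {
    event.source.postMessage(
        { value: localStorage.getItem(event.data.key) },
        event.origin
    );
});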
P.S. "Ironic side note" quoted from Cross-domain localStorage article by Nicholas C. Zakas.
Who knew cross-domain client-side data storage would be useful? Actually, the WHAT-WG did. In the first draft of the Web Storage specification (at that time, part of HTML5), there was an object called globalStorage that allowed you to specify which domains could access certain data. [...]
The globalStorage interface was implemented in Firefox 2 prematurely as the specification was still evolving. Due to security concerns, globalStorage was removed from the spec and replaced with the origin-specific localStorage.

JavaScript App and SEO

I've got this setup:
A single-page app that generates HTML content using JavaScript. There is no visible HTML for non-JS users.
History.js (pushState) for handling URLs without hashbangs. So, the app on "domain.com" can load the dynamic content of "page-id" and update the URL to "domain.com/page-id". Also, direct URLs work nicely via JavaScript this way.
The problem is that Google cannot execute JavaScript this way. So essentially, as far as Google knows, there is no content whatsoever.
I was thinking of serving cached content to search bots only. So, when a search bot hits "domain.com/page-id", it loads the cached content, but when a user loads the same page, it sees the normal (JavaScript-injected) content.
A proposed solution for this is using hashbangs, so Google can automatically convert those URLs to alternative URLs with an "_escaped_fragment_" string. On the server side, I could then redirect those alternative URLs to the cached content. Since I won't use hashbangs, this doesn't work for me.
Theoretically I have everything in place. I can generate a sitemap.xml and I can generate cached HTML content, but one piece of the puzzle is missing.
My question, I guess, is this: how can I filter out search bot access, so I can serve those bots the cached pages, while serving my users the normal JS enabled app?
One idea was parsing the "HTTP_USER_AGENT" string in .htaccess for known bots, but is this even possible, and isn't it considered cloaking? Are there other, smarter ways?
updates the URL to "domain.com/page-id". Also, direct URLs work nicely via JavaScript this way.
That's your problem. The direct URLs aren't supposed to work via JavaScript. The server is supposed to generate the content.
Once whatever page the client requested has loaded, JavaScript can take over. If JavaScript isn't available (e.g. because the visitor is a search engine bot), you should have regular links / forms that continue to work (if JS is available, you bind to the click/submit events and override the default behaviour).
A proposed solution for this is using hashbangs
Hashbangs are an awful solution. pushState is the fix for hashbangs, and you are using it already; you just need to use it properly.
how can I filter out search bot access
You don't need to. Use progressive enhancement / unobtrusive JavaScript instead.
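Concretely: the server renders every URL as a full page, and JavaScript merely intercepts navigation when it is available. A rough sketch (the a.internal selector, the #content container, and the server returning partial HTML for XHR requests are all assumptions):
// Enhance normal links only when the browser supports pushState
if (window.history && history.pushState) {
    $(document).on('click', 'a.internal', function (e) {
        e.preventDefault();
        var url = this.href;
        // assumes the server can return just the page body for XHR requests
        $('#content').load(url, function () {
            history.pushState({ url: url }, '', url);
        });
    });

    // handle the back/forward buttons
    $(window).on('popstate', function () {
        $('#content').load(location.href);
    });
}
// Without JavaScript (or for a search bot), the links are plain full-page loads.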

Add unused javascript file on login page

I have a web application where many jQuery files are needed after the login page. Can I include them on the login page, so that on the next page the browser doesn't make a request for the files?
In other words, the files would be served from the browser cache. Is this possible?
Yes, you can (as @Juhana already mentioned).
You can also use a CDN such as Google's to deliver jQuery or other common libraries. If someone has already visited another site that includes jQuery via the Google CDN, it will already be in their browser cache when they log in to your site (provided you use the same CDN URL).
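For example (the version number is just an illustration; use whatever your app needs):
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>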
Yes, it is possible. Since you don't want to slow down the login page, it is better to insert the file dynamically after the page has loaded and rendered, so the user can interact with the page without any delay the extra download might impose. Also note that this doesn't give you a 100% guarantee that the file will be in the cache: the browser might choose not to store it, or might evict it before the user visits your next page, according to its own policies and space limitations.
You can request the file anywhere with jQuery's $.getScript() method. If the server sends proper caching headers, everything will be fine.
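One caveat: $.getScript() disables caching by default (it appends a cache-busting timestamp parameter), so you have to opt back in. A minimal sketch of prefetching after the login page has rendered (the file paths are placeholders):
// Let jQuery cache dynamically loaded scripts instead of cache-busting them
$.ajaxSetup({ cache: true });

$(window).on('load', function () {
    // Prefetch the scripts needed after login; the paths are hypothetical
    $.getScript('/js/dashboard.js');
    $.getScript('/js/widgets.js');
});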

Override to read local JS-file in web app wrapper

I'm looking into creating a web wrapper for an existing web app. I clearly want to make it as quick as possible.
Is it possible to host the JS files locally, instead of having to download them, without altering the existing web app?
Using a WebViewClient you can prevent the JavaScript from being loaded from the web server (edit: only in API level 11 and higher, unfortunately). Alternatively, you can disable JavaScript, load the page, then enable JavaScript again. Once the page is loaded, you can modify the DOM using javascript: URLs to load the scripts from a local URL (like file:///android_asset, off the top of my head).
You can also change the cache strategy of the WebView so that it never fetches anything that has already been fetched once before, which might also be what you want in this case. These are set through http://developer.android.com/reference/android/webkit/WebSettings.html; you could set it to LOAD_CACHE_ELSE_NETWORK in this case.
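For the javascript: URL approach, the injected snippet might look roughly like this (the jquery filename and the asset path are assumptions; adjust to the scripts your app actually ships):
// Run via a javascript: URL once the page has loaded with JS disabled:
// remove the remote copy of the library and load the one bundled in the APK
(function () {
    var scripts = document.getElementsByTagName('script');
    for (var i = scripts.length - 1; i >= 0; i--) {
        if (scripts[i].src.indexOf('jquery') !== -1) { // hypothetical target script
            scripts[i].parentNode.removeChild(scripts[i]);
        }
    }
    var local = document.createElement('script');
    local.src = 'file:///android_asset/jquery.min.js'; // copy bundled with the app
    document.getElementsByTagName('head')[0].appendChild(local);
})();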

Enabling SEO on AJAX pages

I'm experimenting with building sites dynamically on the client side, through JavaScript plus a JSON content server: the JS retrieves the content and builds the page client-side.
Now, the content won't be indexed by Google this way. Is there a workaround for this? Like having a crawler version and a user version? Or having some sort of static archives? Has anyone done this already?
You should always make sure that your site works without JavaScript. Make links that point to static versions of the content, then add JavaScript click handlers to those links that block the default action from happening and make the AJAX request instead. E.g. using jQuery:
HTML:
<a href='static_content.html' id='static_content'>Go to page!</a>
Javascript:
$('#static_content').click(function(e) {
    e.preventDefault(); // stop the browser from following the link
    // make the AJAX request instead, e.g. (the #content container is hypothetical):
    $('#content').load(this.href);
});
That way the site is usable for crawlers and users without JavaScript, and has fancy AJAX for people with JavaScript.
If the site is meant to be indexed by Google, then the information you want searchable and public should be available without JavaScript. You can always add the dynamic stuff later, after the page loads, with JavaScript. This will not only make the page indexable but will also make the page load faster.
On the other hand, if the site is more of an application, à la Gmail, then you probably don't want Google indexing it anyway.
You could serve a server-rendered version and then replace it on load with the AJAX version.
But if you are going to do that, why not build the entire site that way and use AJAX only for interaction where the client supports it, in the spirit of unobtrusive JavaScript?
You can use PhantomJS to build a crawler version; see my solution here:
https://github.com/liuwenchao/ajax-seo
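The general shape of that approach, sketched here as Node/Express middleware (the bot list and the snapshot directory are assumptions; the same logic could also be written as .htaccess rewrite rules):
var express = require('express');
var fs = require('fs');
var path = require('path');

var app = express();
var BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider/i; // partial, illustrative list

app.use(function (req, res, next) {
    if (BOT_PATTERN.test(req.headers['user-agent'] || '')) {
        // Serve the PhantomJS-rendered snapshot of this URL, if one exists
        var snapshot = path.join(__dirname, 'snapshots', req.path.replace(/\//g, '_') + '.html');
        fs.readFile(snapshot, 'utf8', function (err, html) {
            if (err) return next(); // no snapshot; fall through to the normal app
            res.send(html);
        });
    } else {
        next(); // regular users get the JavaScript app
    }
});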
