There are plugins for handling history and bookmarking, such as http://plugins.jquery.com/project/history, but none of them looks like a complete solution. For example, a page might have a filter made up of several checkboxes, text boxes, and so on. You would want the history functionality to update all of those controls and to update the URL whenever the value of one of those controls changes. With the jQuery History plugin, you would have to write all of that code yourself, including parsing the hash value out of the URL. Is there a more complete solution to this problem?
Ben Alman has recently released a fantastic plugin to handle things related to the questions you are asking. It is called jQuery BBQ (for Back Button and Query). It has excellent documentation, full unit tests, and is a lot more up to date than the antiquated jQuery History plugin. I especially like the onhashchange work that he did. (5 stars, would do business with again, A+++)
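As a rough illustration of how BBQ handles the filter scenario from the question (the #inStock and #query controls are invented for the example, and this assumes jQuery 1.6+ for .prop()):

$(function () {
  // Push the current control values into the location hash whenever the filter changes.
  $('#inStock, #query').bind('change keyup', function () {
    $.bbq.pushState({
      inStock: $('#inStock').is(':checked') ? 1 : 0,
      query: $('#query').val()
    });
  });

  // Restore the controls from the hash on load and on back/forward.
  $(window).bind('hashchange', function (e) {
    var state = e.getState(); // BBQ parses the hash for you
    $('#inStock').prop('checked', state.inStock === '1');
    $('#query').val(state.query || '');
    // re-run the filter here
  }).trigger('hashchange');
});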
Perhaps try this jQuery History plugin: https://github.com/browserstate/history.js
It provides cross browser support, binding to hashes, overloading hashes, all the rest.
There is also an Ajax extension for it, allowing it to easily upgrade your webpage into a proper Ajax application: http://browserstate.github.com/history.js/demo/
This is the solution chosen by such sites as http://wbhomes.com.au/ and http://gatesonline.com.au/stage/public/
Overall it is well documented, supported, and feature rich. It also won a bounty on the question "How to show Ajax requests in URL?"
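To give a feel for the API (the state data and URL below are just examples):

(function (History) {
  if (!History.enabled) { return; } // e.g. very old browsers

  // Fires for History.pushState/replaceState calls and for back/forward,
  // falling back to hash-based URLs in HTML4 browsers.
  History.Adapter.bind(window, 'statechange', function () {
    var State = History.getState();
    // render the view that corresponds to State.url / State.data here
    console.log(State.data, State.title, State.url);
  });

  // Change the URL without reloading the page.
  History.pushState({ view: 'reports' }, 'Reports', '?view=reports');
}(window.History));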
I have been using Kendo UI in web resources for Dynamics CRM for several years. My components require the use of ClientGlobalContext.js.aspx. In version 9.0.2.54 of Dynamics 365 online, I found that the newer version of ClientGlobalContext.js.aspx loads its own version of jQuery without checking to see if one is already present. It does this with a document.write statement, so this version of jQuery is always loaded after my code. I can work around this temporarily by using the JavaScript produced by this call with the jQuery load line commented out, since most of the instance/context specific information comes from the xhr request that is part of this page, but I am looking for a supported solution that will upgrade without issue and work across instances.
These are the options I've thought of; I am looking for suggestions as to which is best, and for any additional guidance on that option:
1. Wait for ClientGlobalContext to be available, then test for jQuery and use a document.write to include it if it is not there (it won't be with some versions, and they could stop including it at any time); once jQuery is available, load Kendo and proceed with my page. Again, I don't have a way to change the Microsoft page, and since there are asynchronous calls there, this may leave me with a timer loop (a rough sketch follows this list). I can't see how this isn't ugly, but I may be missing something, and ugly or not it may be the best option.
2. Convince Microsoft to check for jQuery before reloading it, or to provide an alternate supported file without the jQuery. Since I haven't seen anyone else expressing this frustration, I am not thinking this is likely. It is not currently an idea on the Dynamics 365 forum; this was Telerik's suggestion, but is this a reasonable expectation?
3. Move away from jQuery-based UI libraries, since I will never control the whole page in Dynamics 365. Very painful, since I know and like the current library, and the jQuery version has features I use that are not yet in the Angular version (the Kendo Angular version would be the easiest migration, even given that I would have to learn Angular). I know this is subjective and not technical, so I can delete this option if it makes the question better, but it is an option, and it will be harder farther into the project.
4. Another solution I haven't thought of, keeping in mind that Dynamics web resources function completely client side. I am writing in TypeScript and using npm modules and Webpack, if that is helpful.
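For what it's worth, here is a rough sketch of what I mean by option 1. The script URLs and the pageMain() entry point are made-up placeholders; the only assumptions are that ClientGlobalContext.js.aspx defines GetGlobalContext once it has run, and that Kendo must be loaded after jQuery:

// Rough sketch of option 1: wait, add jQuery only if it is missing, then load Kendo.
function loadScript(url, done) {
  var s = document.createElement('script');
  s.src = url;
  s.onload = done;
  document.head.appendChild(s);
}

function waitFor(test, done) {
  (function poll() {
    if (test()) { done(); } else { setTimeout(poll, 50); }
  })();
}

// 1. Wait until ClientGlobalContext.js.aspx has run (it defines GetGlobalContext).
waitFor(function () { return typeof GetGlobalContext === 'function'; }, function () {
  // 2. Only add jQuery if the context page did not already load its own copy.
  var ensureJQuery = window.jQuery
    ? function (done) { done(); }
    : function (done) { loadScript('../scripts/jquery.min.js', done); };

  ensureJQuery(function () {
    // 3. Kendo has to load after jQuery is present.
    loadScript('../scripts/kendo.all.min.js', function () {
      pageMain(); // hypothetical entry point for this web resource
    });
  });
});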
The use of jQuery is supported and recommended (in some circumstances).
We recommend that you use jQuery together with HTML web resources
I don't think it's unreasonable to raise it with Microsoft to get some help. That said, it may not be that helpful:
It seems (as far as I can tell) that you are basically asking them to change some code, when you (or maybe Telerik) could change the code yourselves to achieve the same thing.
Even if you did manage to convince them to make a change, it might not appear in the product for a long time (e.g. months).
It would probably be quicker and save you time, to just implement a fix in your own code.
A solution you may want to consider (mentioned in the article above) is using jQuery.noConflict. Scott Durow presented a solution to a similar problem here.
Decide on a custom ‘namespace’ for your jQuery library. I am using ‘xrmjQuery’
At the end of your jquery.js script, add the following line:
/* jQuery script goes here */
window.xrmjQuery = jQuery.noConflict(true);
Inside your jquery_ui.js script (notice the '-' has been changed to an underscore, since CRM doesn't allow hyphens in web resource names), wrap the whole file in the following lines:
(function ($,jQuery) {
/*! jQuery UI Goes here */
})(window.xrmjQuery,window.xrmjQuery);
Inside each of your JavaScript web resources that uses jQuery and jQuery UI, wrap your code in the following:
(function($){
// Your Javascript goes here and can reference $ as usual
// e.g. var someField = $('#fieldName');
})(window.xrmjQuery);
This technique is called encapsulation and namespacing of jQuery.
Regarding supportability and future upgrades, it's worth remembering what staying supported actually means.
... you can assume (with reasonable confidence) that your implementations will:
Function correctly.
Microsoft support will help when they don’t.
Will continue working when an upgrade occurs (unless features are deprecated – this happens but you usually get several years’ notice).
The first article I linked above, and this one, frame the context in which jQuery is supported, but they don't cover the specifics of this situation. I would suggest that any coded solution you implement will probably upgrade without issue. That said, testing and validation are recommended by Microsoft before upgrading production:
Once your Sandbox instance has been updated ... test the update for
your solutions and customizations.
I am creating an internal dashboard on my site which is only accessible to logged in users and therefore is not indexable / crawlable by search engines. This dashboard is mainly a single-page app. Also, I don't care (at least I don't think I care) about having pretty urls: there is no progressive enhancement - if javascript is disabled, then the dashboard is not functional.
What I need is the ability to navigate using the back / forward button between different states - for instance, various modals that are opened. And very importantly, I need to be able to link externally to the specific state of the dashboard (e.g. modal A was open in this configuration) - e.g. via emails to users containing links to the dashboard.
Given all this, is there any reason to prefer "old school" hashbangs (#!) over HTML5 pushState? pushState will require me to use something like History.js for older browser support anyway. And architecturally speaking, if my dashboard is at the following URL:
http://example.com/dashboard
won't I have to perform nearly identical operations to resolve to a particular modal state regardless of whether I'm using pushState or onhashchange? In other words:
http://example.com/dashboard#!modalA/state1
or
http://example.com/dashboard/modalA/state1
both of which will require parsing client side (done by a framework) to figure out how to display the current dashboard state. My backend controller would still be mapping to /dashboard/* for any dashboard url since all of the state concern is handled on the client.
Am I missing something? Not that it should matter, but I am using CanJS which supports both hash events and pushState.
Note: my question is not specific to Google's hashbang proposal, but to the general use of the # (onhashchange).
This article does a pretty good job of summing up the potential issues with using hash/hashbang URLs, though it's pretty biased against them, often with good reason.
You are in a pretty fortunate position given that:
You don't have to worry about SEO concerns
Your app is internal to your organization
To me this makes your choice pretty clear cut, depending on whether or not you can require those within your organization to upgrade their browsers to an HTML5-compatible version, if they haven't already. If you have that ability, use the HTML5 History API. If you don't, use the hash. If you want HTML5 pushState with an HTML4 onhashchange fallback, you can do that as well, though it will require some extra work to ensure all your HTML5 links work for HTML4 users.
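A bare-bones sketch of that last approach, assuming the /dashboard URL from the question (the route names and render() function are placeholders):

var supportsPushState = !!(window.history && history.pushState);

function navigate(route) {
  if (supportsPushState) {
    history.pushState({ route: route }, '', '/dashboard/' + route);
    render(route);
  } else {
    location.hash = '#!' + route; // the hashchange handler will render
  }
}

function currentRoute() {
  return supportsPushState
    ? location.pathname.replace(/^\/dashboard\/?/, '')
    : location.hash.replace(/^#!?/, '');
}

function render(route) {
  // show/hide modals based on the parsed route, e.g. 'modalA/state1'
}

if (supportsPushState) {
  window.onpopstate = function () { render(currentRoute()); };
} else {
  window.onhashchange = function () { render(currentRoute()); };
}
render(currentRoute()); // handle deep links (e.g. from emails) on initial load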
I know that in modern browsers I can change the URL silently (without page reload) like this:
window.history.pushState("object or string", "Title", "/new-url");
But I want to be able to detect changes in URL. When user enters the new URL in the address bar how can I handle the event and prevent the page from reloading?
P.S. I know I can use onhashchange but I just don't want to use hashes :)
As stated in "Event when window.location.href changes", it is not possible without polling. That was in 2010; maybe in the meanwhile a solution for HTML5 browsers has appeared.
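For completeness, the polling approach looks roughly like this; note that it only catches changes that don't already reload the page (hash changes and script-driven pushState/replaceState calls), since typing a new URL into the address bar always navigates away:

var lastHref = location.href;
setInterval(function () {
  if (location.href !== lastHref) {
    lastHref = location.href;
    onUrlChange(lastHref); // hypothetical handler
  }
}, 100);

function onUrlChange(href) {
  console.log('URL is now', href);
}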
Probably you already know that not using hashchange has drawbacks regarding server-side, crawling, 404-detection, ...
But using hashes has disadvantages as well, especially when it comes to deep links sent by mail (redirects don't carry hashes along, and webmail clients typically use redirects to get clean referrers).
I recommend using a tested library for URL routing and the history API. I'm sure you will find one at microjs.com, JSter, or JSDB.IO. There are many which fall back gracefully in less capable browsers like MSIE.
My best bets would be history.js and Crossroads.js (with Hasher).
If you already use an MVC framework like Backbone.js, AngularJS, Ember or the like, you'll get what you want for free.
After all, I'm not really sure whether any of these libraries are able to suppress reloading of the (single) page when location.href changes. Since you're losing most of your state on a reload, I'd use #hashes. IMHO, users of single-page apps shouldn't change the browser location directly but should use the navigation options of your app.
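As a sketch of the Crossroads.js plus Hasher combination mentioned above (the '/modal/{id}' route is just an example):

// Declare a route and what should happen when it matches.
crossroads.addRoute('/modal/{id}', function (id) {
  console.log('show modal', id); // e.g. for #/modal/settings
});

// Feed hash changes into the router.
function parseHash(newHash, oldHash) {
  crossroads.parse(newHash);
}
hasher.initialized.add(parseHash); // current hash on page load
hasher.changed.add(parseHash);     // every later hash change
hasher.init();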
Are there any best practices for implementing a long-lived JavaScript app, i.e. a web app that consists of a single page and loads other pages into the content area via AJAX? (Gmail is a good example of this.)
I already read about pro and cons, SEO, performance, etc. (http://stackoverflow.com/questions/1499129/one-page-only-javascript-applications), I'm interested in patterns how to implement this.
I'd like to avoid large frameworks (e.g. Cappuccino, Echo2, SproutCore, Claypool).
How would I manage dynamically loading content while maintaining the #link portion of the URL (for bookmarking)?
Don't get me wrong, I have an idea how to implement this myself, but this problem must have been solved before.
Are there articles on this? Maybe a tiny JavaScript library?
Thanks!
Mark
I found jQuery Address (http://www.asual.com/jquery/address/) extremely easy to set up. $.address.change() lets you know whenever something was clicked (and it works with back and forward as well), and you just parse self.location.hash and build your app from there. It seems lightweight enough as well, if you can handle using jQuery.
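A small sketch of that pattern (the '/mail/inbox'-style values and the loadSection() helper are invented for the example):

$.address.change(function (event) {
  // event.value is the part after the '#', e.g. '/mail/inbox'; this fires
  // on page load, on $.address.value() calls, and on back/forward.
  var parts = event.value.replace(/^\//, '').split('/');
  loadSection(parts[0] || 'home', parts.slice(1)); // hypothetical view loader
});

// Turn ordinary links into address changes instead of full page loads.
$('a.nav').click(function () {
  $.address.value($(this).attr('href').replace(/^#/, ''));
  return false;
});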
Here is an article to help you with the History bookmarks problem: http://codinginparadise.org/weblog/2005/09/ajax-dhtmlhistory-and-historystorage.html. It's quite old, but the solution still works.
I have made several of these "long lived" apps, and one thing you should take into account is IE's tendency to leak memory.
I would also recommend using a JS library like jQuery to help you with the Ajax and DHTML.
Have you heard about HTML5 pushState?
http://badassjs.com/post/840846392/location-hash-is-dead-long-live-html5-pushstate
It's meant to replace location.hash.
Today a lot of content on the Internet is generated using JavaScript (specifically by background Ajax calls). I was wondering how web crawlers like Google handle it. Are they aware of JavaScript? Do they have a built-in JavaScript engine? Or do they simply ignore all JavaScript-generated content on the page (I guess that's quite unlikely)? Do people use specific techniques to get content indexed which would otherwise only reach a normal Internet user through background Ajax requests?
JavaScript is handled by both Bing and Google crawlers. Yahoo uses the Bing crawler data, so it should be handled as well. I didn't look into other search engines, so if you care about them, you should look them up.
Bing published guidance in March 2014 on how to create JavaScript-based websites that work with their crawler (mostly related to pushState); the recommendations are good practices in general:
Avoid creating broken links with pushState
Avoid creating two different links that link to the same content with pushState
Avoid cloaking. (Here's an article Bing published about their cloaking detection in 2007)
Support browsers (and crawlers) that can't handle pushState.
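One common way to satisfy that last point is progressive enhancement: every link keeps a real, crawlable href, and pushState is only layered on top when the browser supports it (the a.spa-link selector and loadPage() helper below are hypothetical):

if (window.history && history.pushState) {
  $('a.spa-link').on('click', function (e) {
    e.preventDefault();
    history.pushState(null, '', this.href); // the same URL a crawler would follow
    loadPage(this.href);                    // fetch and render without a reload
  });
  $(window).on('popstate', function () {
    loadPage(location.href);                // back/forward keep working
  });
}
// Browsers (and crawlers) without pushState simply follow the plain links.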
Google later published guidance in May 2014 as to how to create JavaScript-based websites that work with their crawler, and their recommendations are also worth following:
Don't block the JavaScript (and CSS) in the robots.txt file.
Make sure you can handle the load of the crawlers.
It's a good idea to support browsers and crawlers that can't handle (or users and organizations that won't allow) JavaScript.
Tricky JavaScript that relies on arcane or specific features of the language might not work with the crawlers.
If your JavaScript removes content from the page, it might not get indexed.
Most of them don't handle Javascript in any way. (At least, all the major search engines' crawlers don't.)
This is why it's still important to have your site gracefully handle navigation without Javascript.
I have tested this by putting pages on my site only reachable by Javascript and then observing their presence in search indexes.
Pages on my site which were reachable only by Javascript were subsequently indexed by Google.
The content was reached through JavaScript with the 'classic' technique of constructing a URL and setting window.location accordingly.
Precisely what Ben S said. And anyone accessing your site with Lynx won't execute JavaScript either. If your site is intended for general public use, it should generally be usable without JavaScript.
Also, related: if there are pages that you would want a search engine to find, and which would normally arise only from JavaScript, you might consider generating static versions of them, reachable by a crawlable site map, where these static pages use JavaScript to load the current version when hit by a JavaScript-enabled browser (in case a human with a browser follows your site map). The search engine will see the static form of the page, and can index it.
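A sketch of the hand-off from such a static page to the live version (the /static/item-42.html to /app/#/item/42 URL scheme is invented for the example); crawlers and script-less browsers see the static markup, while JavaScript-enabled browsers are sent on to the current, script-driven view:

// Placed at the top of each generated static snapshot page.
(function () {
  var match = window.location.pathname.match(/item-(\d+)\.html$/);
  if (match) {
    // A JavaScript-capable visitor is redirected to the live application.
    window.location.replace('/app/#/item/' + match[1]);
  }
}());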
Crawlers don't parse JavaScript to find out what it does.
They may be built to recognise some classic snippets like onchange="window.location.href=this.options[this.selectedIndex].value;" or onclick="window.location.href='blah.html';", but they don't bother with things like content fetched using AJAX. At least not yet, and content fetched like that will always be secondary anyway.
So JavaScript should be used only for additional functionality. The main content that you want the crawlers to find should still be plain text in the page, with regular links that the crawlers can easily follow.
Crawlers can handle JavaScript and Ajax calls if they are built on frameworks like HtmlUnit or Selenium.