Ajax & Link Degradation - javascript

I load HTML pages in via Ajax. These files have no <title>, <head> or <body> tags, etc.; just a few divs inside them.
At the moment, I place links to these ajax pages in my href for browsers with JS disabled:
<a href="...">Honda</a>
The JavaScript click handler returns false, so a user whose browser supports JavaScript is never actually taken to that page.
My concern is people potentially right-clicking and sending these links to other people, which would look bad since the fragment is unstyled, etc. I'm tempted to remove the href because of this.
Are there alternatives to obfuscating the links? It goes against my ideals on best practices to remove the link entirely from the href.

It goes against my ideals on best practices to remove the link entirely from the href.
I strive to follow best practices as well, but what you're doing is actually worse than not including an href at all.
The href attribute should only be used for URLs that users can visit directly. Using it to hold a URL for Ajax use is common (Stack Overflow does it), but it's a misuse of the href attribute.
If possible, href should point to a full page which contains the content that would be loaded by Ajax. If you can't provide that then either remove href or set it to something like "#".
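A minimal sketch of the progressive-enhancement pattern this answer describes: href points at a real, full page for non-JS users, and JavaScript intercepts the click to load an Ajax fragment instead. The /view/ and /ajax/ URL convention, the .ajax-link class and the #content element are all illustrative assumptions, not part of the question.

```javascript
// Hypothetical convention: full pages live under /view/, Ajax fragments under /ajax/.
function fragmentUrl(fullPageUrl) {
  return fullPageUrl.replace('/view/', '/ajax/');
}

// Browser wiring (jQuery): non-JS users simply follow href to the complete page;
// JS users get the fragment loaded into the current page instead.
// $('a.ajax-link').on('click', function (event) {
//   event.preventDefault();
//   $('#content').load(fragmentUrl(this.getAttribute('href')));
// });
```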

You don't need to obfuscate it, and I also don't think you need to remove it. If you are using a server-side language you can check for the
X-Requested-With: XMLHttpRequest
HTTP header on each request. If it isn't set, the page was not requested with Ajax, and you can redirect the user to the right page.
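A sketch of that check in Node.js. Only the header test itself comes from the answer; the Express-style route in the comment, and its paths, are illustrative assumptions.

```javascript
// Detect whether a request came from XMLHttpRequest by checking the
// X-Requested-With header. Node lowercases incoming header names, and
// jQuery sets this header automatically on its Ajax requests.
function isAjaxRequest(headers) {
  const value = headers['x-requested-with'] || '';
  return value.toLowerCase() === 'xmlhttprequest';
}

// In an Express-style handler you might redirect non-Ajax visitors
// to the full page (paths are hypothetical):
// app.get('/ajax/honda.html', (req, res) => {
//   if (!isAjaxRequest(req.headers)) return res.redirect('/honda.html');
//   res.sendFile('fragments/honda.html');
// });
```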

One solution that would work well, but includes some work, would be to create degradation pages for the content. You could create copies of the pages that were complete HTML documents, and use their URL in the links, e.g.:
<a href="/view/honda">Honda</a>
When you fetch the content using AJAX, you would replace the view in the URL with ajax.
This way the pages will also be indexed by web crawlers.

Maybe you could use a data attribute to store/retrieve your value in a div with a mock-link style.
Something like this:
<div class="mockLink" data-link="/test/link/honda.html">Honda</div>
CSS
div.mockLink{color:blue; text-decoration:underline}
div.mockLink:hover{cursor:pointer;}
JS
$('div.mockLink').click(function(){
    alert($(this).data('link'));
    //do AJAX
});

Related

Could HTML file be reused dynamically?

If a website needs to have a page for every item, is it better to manually create pages with the same HTML code but different titles/images/descriptions, or to create only one page and add content through JavaScript depending on the page the user followed, like this?
linkBtn.addEventListener('click', function() {
    contentEl.innerHTML = `<div>
        <div>
            <h3>${title}</h3>
        </div>
        ...etc
    </div>`
})
or is there a less horrible solution?
It is entirely up to you. Compare the Stack Overflow website with the Gmail one. Stack Overflow reloads the whole page as you navigate between pages. This means that your browser is requesting a new resource and the Stack Overflow servers are returning that resource, possibly creating it dynamically with new questions etc., but then just sending you raw HTML.
On the other hand, Gmail loads once, but then fetches each different page entirely through JavaScript. This could involve asking the Gmail server for new messages, but could also just be reworking the HTML to show a settings page, for instance.
There are obviously advantages and disadvantages to both ways of doing things.
As a side note, in JavaScript it is not a great idea to assign to innerHTML, as this requires the browser to do a lot of work re-parsing the new markup. Instead you should use the DOM model fully, with functions such as document.createElement and Element.appendChild.
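A sketch of the DOM-based alternative for the snippet in the question above. buildCard takes the document as a parameter purely to keep it easy to test; the names (buildCard, contentEl, title) are hypothetical, matching the question's example.

```javascript
// Build the same nested markup with DOM APIs instead of innerHTML.
// Using textContent for the title also avoids interpreting it as HTML.
function buildCard(doc, title) {
  const outer = doc.createElement('div');
  const inner = doc.createElement('div');
  const heading = doc.createElement('h3');
  heading.textContent = title;
  inner.appendChild(heading);
  outer.appendChild(inner);
  return outer;
}

// Usage in the click handler from the question:
// linkBtn.addEventListener('click', function () {
//   contentEl.textContent = '';                      // clear previous content
//   contentEl.appendChild(buildCard(document, title));
// });
```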
You need not create different pages for each product. Rather, use HTML as a template.
You can use something like Handlebars to make templates.
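To illustrate the templating idea without pulling in a dependency, here is a tiny hand-rolled stand-in; with Handlebars itself you would compile the template with Handlebars.compile, as shown in the comment.

```javascript
// Minimal template fill, standing in for a library like Handlebars.
// Replaces {{name}} placeholders with values from a data object;
// unknown placeholders become empty strings.
function renderTemplate(source, data) {
  return source.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : '');
}

// With Handlebars proper, the equivalent would be:
// const tpl = Handlebars.compile('<h3>{{title}}</h3>');
// tpl({ title: 'Honda' });
```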
I believe you should create one UI and fill it with data from some REST API. Look into creating dynamic websites, it is basic stuff.

How to load page content by URL using JavaScript

I was asked during an interview to write a piece of code to highlight the url if the content of the url contains a certain keyword. I honestly do not know how I can do that with JavaScript...
I think you can first fetch the content using Ajax, search for the keyword, then highlight the URL if necessary.
But this comes with limitations:
The URLs you want to highlight must allow cross-origin Ajax fetching.
Performance will not be good, since you need to fetch each URL if there are many.
If the target URL is client-side rendered, fetching its HTML simply won't see the content, since it is not rendered yet.
Normally we wouldn't solve this with a front-end approach, but with a search-engine indexing method instead.
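A sketch of that approach. The substring check is the testable core; the browser wiring in the comments assumes the target sites permit cross-origin requests, per the caveats above, and the keyword and highlight style are illustrative.

```javascript
// Case-insensitive check for a keyword in fetched page content.
function containsKeyword(html, keyword) {
  return html.toLowerCase().includes(keyword.toLowerCase());
}

// Browser-only wiring: fetch each link's page and mark matching links.
// document.querySelectorAll('a[href]').forEach(async (link) => {
//   try {
//     const text = await fetch(link.href).then((r) => r.text());
//     if (containsKeyword(text, 'honda')) link.style.background = 'yellow';
//   } catch (e) {
//     // CORS or network failure: leave the link unhighlighted.
//   }
// });
```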

Linkable navigation with AJAX?

Just getting into web programming and when it comes to navigation, so far I prefer the idea of using Javascript and AJAX requests to change the content when the menu links are clicked on. It's fast, no page refresh, brilliant. The only problem is that the website's URL always stays the same. So I can't, for example, link someone to the "About" page. What is the standard way to solve this problem?
I'm currently using only HTML, JavaScript and jQuery.
Usually you would use one of two methods, depending on the set of browsers you need to support:
Change the hash - You can change the hash part of a URL (http://.../index.html#about-page) without reloading the page. So, for example, when you click on the 'About' link, you can set the hash of the URL with something similar to this:
window.location.hash = 'about-page';
When your page loads, you parse the hash part and perform the necessary logic (note that window.location.hash includes the leading '#'):
if (window.location.hash === '#about-page') {
// ...
}
Another method is using the 'History API' - In modern browsers you can use an api called 'History API' which allows you to change the history of the browser, including the URL. You use a method called pushState on the history object, for example:
window.history.pushState({ event: 'event-id' }, "Event Title", "?event=event-id");
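To complete the History API picture, a popstate listener restores the view when the user presses Back or Forward. The ?page= query convention and the showPage function here are hypothetical, not from the answer.

```javascript
// Extract the ?page=... value from a query string; defaults to 'home'.
function pageFromUrl(search) {
  const match = /[?&]page=([^&]*)/.exec(search);
  return match ? decodeURIComponent(match[1]) : 'home';
}

// Browser wiring (showPage is a hypothetical function that swaps in
// the Ajax content for the given page):
// function navigate(page) {
//   history.pushState({ page: page }, '', '?page=' + encodeURIComponent(page));
//   showPage(page);
// }
// window.addEventListener('popstate', function (e) {
//   showPage(e.state ? e.state.page : pageFromUrl(location.search));
// });
```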
For further details, you can see a previous answer I've posted in the past:
How to manage browser "back" and "forward" buttons when creating a javascript widget
Really depends on the framework you're using. That was always the classical criticism of JSF, which had the same effect because it used POST requests as part of its communication approach. They came up with some changes in later releases.
But whether there is a standard approach is hard to say, unequivocally. I have seen POST requests used strategically. That gets the URL changing, but it's the full submission you were avoiding.
Some JavaScript libraries allow you to modify the URL yourself. I can't recall which ones we used.
In either case your app will need to cater for this deep navigation, because you're giving users the ability to move directly to pages they may not have been able to reach previously.

ajax web app : possible to force bookmarking without url's hash-string?

I have a single-page ajax powered web app, however the way my app works is if a hash string is in the url it will load that element which is really useful for people to link to content on it.
When it comes to bookmarking/favouriting, things are different. My users want to bookmark the app, not the current bit of content (hash string) they're on...
I'm thinking this is unlikely but is there anyway to get browsers to not include the hash string when the page is bookmarked?
I'm going to assume you are using the hash as an anchor, rather than as a way to store a page's state in an Ajax application.
There are a few solutions you can implement:
Don't use anchors (and thus don't use a hash), and no hash will be bookmarked. Instead you can use something like jQuery ScrollTo and scroll to the target using JavaScript rather than the built-in anchor support. http://demos.flesler.com/jquery/scrollTo/
Have a toolbar up the top which contains the url without the hash, or a sidebar.
Educate your users.
If you are asking about keeping support for anchors in Ajax Web 2.0 Applications, then you may want to look at jQuery Ajaxy as it supports this; as seen by the "Durian" demo: http://www.balupton.com/sandbox/jquery-ajaxy/demo/

How to neutralize injected remote Ajax content?

I'll be inserting content from remote sources into a web app. The sources should be limited/trusted, but there are still a couple of problems:
The remote sources could
1) be hacked and inject bad things
2) overwrite objects in my global namespace
3) I might eventually open it up for users to enter their own remote source. (It would be up to the user to not get in trouble, but I could still reduce the risk.)
So I want to neutralize any/all injected content just to be safe.
Here's my plan so far:
1) find and remove all inline event handlers
str.replace(/(<[^>]+\bon\w+\s*=\s*["']?)/gi,"$1return;"); // untested
Ex.
<a onclick="doSomethingBad()" ...
would become
<a onclick="return;doSomethingBad()" ...
2) remove all occurrences of these tags:
script, embed, object, form, iframe, or applet
3) find all occurrences of the word script within a tag
and replace the word script with HTML entities for it
str.replace(/(<[^>]*)(script)/gi, toHTMLEntitiesFunc);
which would take care of
<a href="javascript: ..."
4) lastly any src or href attribute that doesn't start with http, should have the domain name of the remote source prepended to it
My question: Am I missing anything else? Other things that I should definitely do or not do?
Edit: I have a feeling that responses are going to fall into a couple of camps.
1) The "Don't do it!" response
Okay, if someone wants to be 100% safe, they need to disconnect the computer.
It's a balance between usability and safety.
There's nothing to stop a user from just going to a site directly and being exposed. If I open it up, it will be a user entering content at their own risk. They could just as easily enter a given URL into their address bar as in my form. So unless there's a particular risk to my server, I'm okay with those risks.
2) The "I'm aware of common exploits and you need to account for this ..." response ... or You can prevent another kind of attack by doing this ... or What about this attack ...?
I'm looking for the second type, unless someone can provide specific reasons why my approach would be more dangerous than what the user can do on their own.
Instead of sanitizing (blacklisting), I'd suggest you set up a whitelist and ONLY allow those very specific things.
The reason for this is that you will never, never, never catch all variations of malicious script. There are just too many of them.
Don't forget to also include <frame> and <frameset> along with <iframe>.
For the sanitization thing, are you looking for this?
If not, perhaps you could learn a few tips from this code snippet.
But it must go without saying that prevention is better than cure: you had better allow only trusted sources than allow all and then sanitize.
On a related note, you may want to take a look at this article, and its slashdot discussion.
It sounds like you want to do the following:
Insert snippets of static HTML into your web page
These snippets are requested via AJAX from a remote site.
You want to sanitise the HTML before injecting into the site, as this could lead to security problems like XSS.
If this is the case, then there are no easy ways to strip out 'bad' content in JavaScript. A whitelist solution is the best, but this can get very complex. I would suggest proxying requests for the remote content through your own server and sanitizing the HTML server side. There are various libraries that can do this. I would recommend either AntiSamy or HTMLPurifier.
For a completely browser-based way of doing this, you can use IE8's toStaticHTML method. However no other browser currently implements this.
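A browser-side whitelist sketch along the lines this answer suggests. The tag and attribute policy here is purely illustrative, and as the answer notes, a server-side sanitizer such as AntiSamy or HTMLPurifier remains the safer choice.

```javascript
// Illustrative whitelist policy: which tags, and which attributes per tag,
// are allowed to survive sanitization. Everything else is dropped.
const ALLOWED_TAGS = ['a', 'p', 'div', 'span', 'em', 'strong', 'img'];
const ALLOWED_ATTRS = { a: ['href'], img: ['src', 'alt'] };

function isAllowed(tag, attr) {
  tag = tag.toLowerCase();
  if (ALLOWED_TAGS.indexOf(tag) === -1) return false;
  if (attr === undefined) return true;            // asking about the tag itself
  const attrs = ALLOWED_ATTRS[tag] || [];
  return attrs.indexOf(attr.toLowerCase()) !== -1;
}

// In the browser, walk a DOMParser tree and remove anything not whitelisted:
// const doc = new DOMParser().parseFromString(remoteHtml, 'text/html');
// doc.body.querySelectorAll('*').forEach((el) => {
//   if (!isAllowed(el.tagName)) { el.remove(); return; }
//   Array.from(el.attributes).forEach((a) => {
//     if (!isAllowed(el.tagName, a.name)) el.removeAttribute(a.name);
//   });
// });
```

Note that even this sketch would still need href/src URL checking (e.g. rejecting javascript: URLs), which is part of why server-side libraries are recommended.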
