As RIAs and SPAs (or web apps with heavy JavaScript usage) have become more and more popular, I keep running into systems that, instead of using good old a href hyperlinks, handle navigation through onclick handlers running JavaScript code. This is particularly true with images.
For example, instead of seeing something like this:
<a href="..."><img src="..."/></a>
I see something like this:
<div ... onclick='SomeJsFunctionThatNavsToAnotherPage()'><img src="..."/></div>
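For reference, a handler like that usually boils down to a one-line location assignment. A minimal sketch (the function name is from the example above; the target URL is a made-up placeholder):

// What such an onclick handler typically does under the hood:
function SomeJsFunctionThatNavsToAnotherPage() {
    window.location.href = "/another-page.html"; // hypothetical target
}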
What is the advantage of this? It makes it incredibly hard to trace where pages transition to when debugging or trying to root-cause a bug. I can see the point when the navigation target can change (yes, in that case you could use a function that computes which page to navigate to).
But I see this pattern even when the pages to navigate to are constant. I find this extremely convoluted and hard to test. Not to mention there are always browser-specific bugs that come out of it (sadly, in my experience, from over-complicating the front-end).
But I am not a RIA/SPA developer (just backend and traditional web development). Am I missing the rationale behind this?
TO CLARIFY
My question is not about the case where we want to redraw the page or change the current content without changing the current location. My question is about plain old transitions, from page A to page B.
In such a case, why use onclick=funcToChangeLocation() over <a href="some location">...</a>?
This has been a pain for me when troubleshooting systems that are already written (for I wouldn't write them like that), but there could be reasons I am not aware of.
Again, my question is not for pages that redraw themselves without changing the browser location, but for navigation from one page to the next.
ALSO
If you are going to vote to close this question, at least leave a message explaining why.
If you are making a web application, sometimes you don't want to redirect the user to another page; you want to dynamically change the content of the page without refreshing it. That has some advantages: it can be faster, you can easily keep the state of the page/application, you are not obliged to communicate with the server, and you can update only part of the page.
You can also dynamically request just the data needed to render the page. If you are displaying a user profile page, you can request only a JSON object that represents the user. This JSON object is smaller than the whole page and will be rendered dynamically. It can help reduce the data transferred between users and the server when your bandwidth is limited.
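To illustrate, a minimal sketch of that pattern with the fetch API (the endpoint and field names are hypothetical; older code would do the same with XMLHttpRequest):

// Request just the user's data as JSON and render it into the page:
fetch("/api/user/42")
    .then(function (response) { return response.json(); })
    .then(function (user) {
        document.getElementById("profile-name").textContent = user.name;
        document.getElementById("profile-bio").textContent = user.bio;
    });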
EDIT: In the case of a simple page redirection, I think it's bad practice and I cannot see an advantage. I think it obfuscates the website when the Google crawler tries to parse it.
I once had a pretty successful web directory website. One day Google decided that "directories" are competing businesses and started penalizing sites that had links on directories. I used the method you describe to cloak outgoing links to try and trick Google.
Related
I just recently started learning JavaScript and have a question regarding its 'proper use'. I'm still trying to identify the role of JavaScript within a website, and I'm curious whether or not it would be considered OK to have JavaScript modify the HTML of a web page.
Let's say I have a panel on a web page. This panel houses a list. I would like users to be prompted to add items to this list.
I was thinking that it would be possible to use Javascript to generate list items to add to the list. However, this would be modifying the actual number of HTML elements on the web page... For some reason, this just seems 'hacky'. When I think of HTML, I think of a static structure that should come to life with CSS and Javascript.
So my question: is it considered okay to have Javascript modify the HTML of a web page? What about the case of adding items to a list?
Thank you!
JavaScript is a programming language designed so it can modify the document being displayed (the DOM); the actual HTML source is never touched.
JavaScript has a place on a website, and modifying the document/DOM is perfectly acceptable; without it, JavaScript would be almost useless. CSS is great for certain tasks, but you can't do everything in CSS, though CSS3 is coming pretty close for many tasks.
Rewriting the entire DOM IS bad practice, but using it to shift an element's position based on an action, or creating a popup overlay is perfectly acceptable.
Remember the golden rule:
Modify as little as possible to accomplish the goal.
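To make that concrete for the list question above, a minimal sketch using the standard DOM API (the "items" id and the sample text are assumptions):

// Create a new list item and append it to an existing list:
function addItem(text) {
    var li = document.createElement("li");
    li.textContent = text; // textContent avoids HTML-injection issues
    document.getElementById("items").appendChild(li);
}
addItem("A user-added entry");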
What matters is the user's experience of the (HTML) document. The representation of "the document" can change by using a language like JavaScript that "manipulates the DOM" - and the DOM is like an instance of the HTML document, a "document session" if you will.
So in a way, yes, the HTML is touched. It is positively manhandled by JavaScript - indirectly and in a non-persistent way. But if you want to be pedantic... we say it isn't, and leave half the readers confused.
As for when to use JavaScript and when not to - it's swings and roundabouts. You need to learn (mostly from experience) when a particular tool suits the situation. It usually boils down to some core considerations:
HTML is for markup. Structure. Meaning.
CSS is for style, feel, appearance.
Javascript is for those situations where none of the above work.
And I'm neglecting to mention server-side processing with the obvious disclaimer that all processing that ought to be done in privacy is done on the server (with a programming language like PHP or Ruby for example).
Sometimes you get the grey area in-between where you can do something either way. Those situations you may ask yourself a question like... would it be processed quicker if the client (user's computer) processes it, or the server... and that's where experience comes in.
It depends on the case whether you should hand-craft the HTML directly or let JS do it.
If you have a static HTML page, just write your HTML and hand-craft the DOM. There is no need for JS to get a hand here.
If you have a semi-static HTML page where user actions change part of it, get JS to do the changing part.
If you have a highly dynamic HTML page (like a single-page app), you should get JS to render the HTML mostly.
Using plain JS alone, however, is not the best approach in this age of rapid JS evolution. Learn it - but also learn to use good libraries and frameworks, which can get you on track very fast.
I recommend learning jQuery and Angular 2, which will feel like learning a superset of JS straight away.
Short disclaimer: JavaScript may modify DOM content in the browser, but it never changes the original HTML served from the web server.
The modern Web is unthinkable without JavaScript. JS makes static HTML interactive and improves the user experience (UX). Like any sharp tool, JS can produce a masterpiece out of a nearly dead page - or cut the throat of blooming static content.
Everything is up to the developer.
When not to use JS
Do not use JS to deliver evergreen content to the page. Web bots (crawlers) don't run JS, and your message "I come to this world to testify to the truth" may end up "a voice crying out in the desert" - non-indexed and thus unread.
When JS is a must
Every time your page visitor does something, the page should respond with a proper action (using JS or, if possible, just CSS).
This is especially important when a prospect fills in a form. To err is human, so the developer should use JS to help the visitor make wrong things right. In many cases this is possible without a request to the server, and in even more cases the answer must come from the server. Either way, JS is your best friend here.
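As a concrete illustration, a minimal client-side check (the form and field ids are hypothetical; the server must still validate, since client-side checks can be bypassed):

// Catch an obviously bad email before the form is submitted:
document.getElementById("signup").addEventListener("submit", function (e) {
    var email = document.getElementById("email").value;
    if (email.indexOf("@") === -1) {
        e.preventDefault(); // stop the submit so the visitor can fix it
        alert("Please enter a valid email address.");
    }
});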
JavaScript never lives alone; the browser object model is its trusted ally. Most modern browsers support XMLHttpRequest, A.K.A. AJAX (you may remember the name Ajax from ancient Greek history), which communicates with the server without reloading the page.
The idea of AJAX, which stands for Asynchronous JavaScript And XML, is to send a request and receive the response from the server asynchronously, without blocking the page in the browser.
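A minimal sketch of that classic pattern (the URL is hypothetical):

// Ask the server for data without reloading the page:
var xhr = new XMLHttpRequest();
xhr.open("GET", "/check-username?name=alice", true); // true = asynchronous
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        console.log("Server said: " + xhr.responseText);
    }
};
xhr.send();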
Native JavaScript may be difficult for many beginner developers to understand. To make things easier, there are a lot of JS libraries, with jQuery being the best known.
Returning to the OP's question - should JavaScript be used to modify HTML?
The answer is: It Depends.
OK, so I'm learning JavaScript and I just wanted to know if it's possible to make it do actions on external pages. For example, say I wanted to 'on load, redirect to somesite.com/page1, then once on somesite.com/page1, fill in the register form with these details'.
Is that possible?
You cannot do this.
This would represent, for lack of a better word, an enormous security hole.
The only way to make an external page "do stuff" is to write code that is on or explicitly included in that page itself. Period.
I have, however, seen external pages get loaded INTO the current page as strings, and then had the JavaScript that loaded those pages modify that markup directly. But that is ugly.
On the first page you could modify some variables/values in a database. Then, in the second page you could check the values in your database, and do different "stuff" depending on those values.
You would need to set up a database and use some server-side scripting along with your JavaScript (the server-side scripting is what talks to your server/database). On the first page, the server-side script, like PHP, would receive info from your JavaScript and store it. On the second page, your JavaScript would fetch info from your server-side script and then do stuff to that page.
This is a much safer way. If you are taking user input from things like HTML fields, you also need to look into sanitizing that input to prevent something called "cross-site scripting (XSS)".
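A rough sketch of the idea; the endpoints are entirely hypothetical and the server-side half (storing and returning the value) is omitted:

// Page A: record what should happen next.
fetch("/api/set-flag", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ next: "showWelcome" })
});

// Page B: ask the server what to do, then do it.
fetch("/api/get-flag")
    .then(function (r) { return r.json(); })
    .then(function (flag) {
        if (flag.next === "showWelcome") {
            document.body.insertAdjacentHTML("afterbegin", "<p>Welcome back!</p>");
        }
    });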
You could do this IF you were rendering the other page in a frame of some sort.
There are multiple ways in which you can render an entire external page as a piece of your page. Many pages take precautions to block being rendered in a frame for just this very reason though (Not to mention copyright issues).
Once you're rendering the external page inside your page you should be able to reference components nested in your frame and do the sort of thing that you're describing.
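A minimal sketch of that, assuming a same-origin iframe with a hypothetical id; for cross-origin pages the browser blocks this access entirely:

// <iframe id="otherPage" src="/page1"> must come from the same origin:
var frame = document.getElementById("otherPage");
frame.addEventListener("load", function () {
    var doc = frame.contentDocument; // blocked for cross-origin frames
    doc.querySelector("input[name='username']").value = "alice";
});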
There's no way to do this with JavaScript. The developers of all the major browsers work very, very hard to prevent this sort of thing. If this were possible, it would open up pretty massive security holes.
If you really want to use something like this for testing, you can look at browser automation software like Selenium. This allows you to automate various testing scenarios in your browser, but it does not affect other clients using your site.
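For example, a minimal sketch using Selenium's Node.js bindings (selenium-webdriver); the URL and field name are hypothetical:

// Drive a real browser from a test script rather than from page JavaScript:
const { Builder, By } = require("selenium-webdriver");

(async function () {
    const driver = await new Builder().forBrowser("firefox").build();
    try {
        await driver.get("https://somesite.example/page1");
        await driver.findElement(By.name("username")).sendKeys("testuser");
    } finally {
        await driver.quit(); // always close the automated browser
    }
})();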
I'm making an HTML5 page (game) which uses a lot of popups and all kinds of widgets appearing and disappearing on the same page.
To implement this I could:
Have all the popups and widgets present in the page but invisible (like in many examples I saw), and toggle only their visibility.
Add and remove them dynamically using JavaScript. I could put each popup as an HTML fragment in a separate file (?).
The second is "modular" and I like the fact that there are no elements in the page which I'm not actually using. But I don't know about performance (loading the HTML each time, DOM insertion, etc.).
Is there a preferred/standard way to do this?
If we are talking about loading HTML from the server, then obviously this won't be efficient.
I don't know what kind of game you are writing, but I don't think there will be any visible difference in performance (except for loading data from the server) unless you create, like, thousands of popups per second (which I doubt). Let's be honest - your game isn't using 4GB of memory. :) And if it is, then you're probably doing something wrong. And I don't think there is any standard way; it's more about how you feel it. :)
For example, I always try to load all the data I can from the server in one request and store it on the client side, because most performance problems are actually related to client-server communication. I also like the DOM to be clean, so in most cases I hold the (hidden) data in JavaScript, except for forms' hidden fields.
On the other hand, if I have, for example, a blog with discussion and I load some additional data (for example user data, which is supposed to appear as a popup after a click on the user's name), I tend to store it in DOM elements, because it is easier (at least for me) to control it (I'm talking about jQuery and jQuery UI).
Note that it is possible that recreating popups may lead to memory leaks, but it is highly unlikely if you use some popular library like (for example) jQuery UI.
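For what it's worth, a minimal sketch of the two approaches side by side (the ids and class names are assumptions):

// Approach 1: the popup already exists in the page; only toggle visibility.
function togglePopup(show) {
    document.getElementById("popup").style.display = show ? "block" : "none";
}

// Approach 2: build the popup on demand and remove it when done.
function openPopup(text) {
    var div = document.createElement("div");
    div.className = "popup";
    div.textContent = text;
    document.body.appendChild(div);
    return div; // caller removes it later, e.g. div.parentNode.removeChild(div)
}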
I've got a webpage that calls Oracle, then does some processing, and then a lot of JavaScript.
The problem is that all of this makes it slow for the user. I have to use Internet Explorer 6, so the JavaScript takes very long to load - around 15 seconds.
How can I make my server do all of this every minute, for example, and save the page, so that if a user requests it the server would serve them that page, already calculated, etc.?
I'm using a Tomcat server; my webpage is mainly JavaScript and HTML.
edit:
By the way I can not rewrite my webpage, it would have to remain as it is
I'm looking for something that would give the user a snapshot of the webpage that the server loaded.
YSlow recommendations would tell you that you should put all your CSS in the head of your page and all JavaScript at the bottom, just before the closing body tag. This will allow the page to fully load the DOM and render it.
You should also minify and compress your JavaScript to reduce download size.
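Put together, the skeleton looks something like this (file names are placeholders):

<html>
  <head>
    <!-- CSS first, so the page can render progressively -->
    <link rel="stylesheet" href="styles.min.css">
  </head>
  <body>
    <!-- page content -->
    <!-- scripts last, so they don't block rendering -->
    <script src="app.min.js"></script>
  </body>
</html>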
To do that, you'd need to have your server build up the DOM, run the JavaScript in an environment that looks (enough) like a web browser, and then serialize the result as HTML.
There have been various attempts to do that, Jaxer is one of them (it was originally a product from Aptana, now an Apache project). Another related answer here on SO pointed to the jsdom project, which is a DOM implementation in JavaScript (video here).
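As a rough illustration of the jsdom route (this uses the modern jsdom API, so treat it as a sketch of the idea rather than a drop-in answer; the URL is hypothetical):

// Load a page server-side, let its scripts run, then snapshot the result:
const { JSDOM } = require("jsdom");

JSDOM.fromURL("https://example.com/slow-page", {
    runScripts: "dangerously", // execute the page's own JavaScript
    resources: "usable"        // also fetch external scripts
}).then(function (dom) {
    var html = dom.serialize(); // the post-JavaScript HTML snapshot
    // cache `html` and serve it to visitors
});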
Re
By the way I can not rewrite my webpage, it would have to remain as it is
That's very unlikely to be successful. There is bound to be some modification involved. At the very least, you're going to have to tell your server-side framework what parts it should process and what parts should be left to the client (e.g., user-interaction code).
Edit:
You might also look for "website thumbnail" services like shrinktheweb.com and similar. Their "pro" account allows full-size thumbnails (what I don't know is whether it's an image or HTML). But I'm not specifically suggesting them, just a line you might pursue. If you can find a project that does thumbnails, you may be able to adapt it to do what you want.
But again, take a look at Jaxer, you may find that it does what you need or very similar (and it's open-source, so you can modify it or extract the bits you want).
"How can i make my server do all of this every minute for example"
If you are asking how you can make your database server 'pre-run' a query, then look into materialized views.
If the Oracle query is responsible for (for example) 10 seconds of the delay, there may be other things you can do to speed it up, but we'd need a lot more information on what the query does.
When developing a web app, versus a web site, what reasons are there to use multiple HTML pages rather than one HTML page doing everything through JavaScript?
I would expect that it depends on the application -- maybe -- but would appreciate any thoughts on the subject.
Thanks in advance.
EDIT:
Based on the responses here, and some of my own research, if you wanted to do a single-page, fully JS-powered site, some useful tools would seem to include:
jQuery Plug-ins:
jQuery History:
http://balupton.com/projects/jquery-history
jQuery Address:
http://plugins.jquery.com/project/jquery-address
jQuery Pagination:
http://plugins.jquery.com/project/pagination
Frameworks:
SproutCore
http://www.sproutcore.com/
Cappuccino
http://cappuccino.org/
Possibly, JMVC:
http://www.javascriptmvc.com/
Page-based applications provide:
ability to work on any browser or device
simpler programming model
They also provide the following (although these are solvable by many JS frameworks):
bookmarkability
browser history
refresh or F5 to repeat action
indexability (in case the application is public and open)
One of the bigger reasons is going to be how searchable your website is.
Doing everything in JavaScript makes it complicated for search engines to crawl all the content of your website, and thus they may not fully index it. There are ways around this (with Google's recent AJAX SEO guidelines), but I'm not sure all search engines support them yet. On top of that, it's a little more complex than just making separate pages.
The bigger issue, whether you decide to build multiple HTML pages by hand or use some sort of framework or CMS to generate them for you, is that the different sections of your website have URLs that are unique to them. E.g., an "about" section would have a URL like mywebsite.com/about, and that URL is used on the actual "about" link within the website.
One of the biggest downfalls of single-page, Ajax-ified websites is complexity. What might otherwise be spread across several pages suddenly finds its way into one huge, master page. Also, it can be difficult to coordinate the state of the page (for example, tracking if you are in Edit mode, or Preview mode, etc.) and adjusting the interface to match.
Also, one master page that is heavy on JS can be a performance drag if it has to load multiple, big JS files.
At the OP's request, I'm going to discuss my experience with JS-only sites. I've written four relevant sites: two JS-heavy (Slide and SpeedDate) and two JS-only (Yazooli and GameCrush). Keep in mind that I'm a JS-only-site bigot, so you're basically reading John Hinckley on the subject of Jodie Foster.
The idea really works. It produces graceful, responsive sites at very low operational cost. My estimate is that the cost of bandwidth, CPU, and such drops to 10% of the cost of running a similar page-based site.
You need fewer but better (or at least better-trained) programmers. JavaScript is a powerful and elegant language, but it has huge problems that a more rigid and unimaginative language like Java doesn't have. If you have a whole bunch of basically mediocre guys working for you, consider JSP or Ruby instead of JS-only. If you are required to use PHP, just shoot yourself.
You have to keep basic session state in the anchor (the part of the URL after the #). Users simply expect the URL to represent the state of the site: reload, bookmark, back, forward. jQuery's Address plug-in will do a lot of the work for you.
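Without the plug-in, the underlying mechanism is just the URL fragment plus the hashchange event. A minimal sketch:

// Store the current view in the fragment so reload/bookmark/back all work:
function showView(name) {
    location.hash = name; // e.g. http://example.com/app#settings
}

window.addEventListener("hashchange", function () {
    var view = location.hash.slice(1) || "home";
    // ...render the view named `view` here...
    console.log("Now showing: " + view);
});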
If SEO is an issue for you, investigate Google Ajax Crawling. Basically, you make a very simple parallel site, just for search engines.
When would I not use JS-only? If I were producing a site that was almost entirely content, where the user did nothing but navigate from one place to another, never interacting with the site in a complicated manner. So, Wikipedia and ... well, that's about it. A big reference site, with a lot of data for the user to read.
Modularization.
Multiple files allow you to more cleanly break out different workflow paths and process parts.
Chances are your business rules do not usually directly impact your layout rules, and multiple files better let you edit what needs to be edited without the risk of breaking something unrelated.
I actually just developed my first application using only one page.
...it got messy.
My idea was to create an application that mimicked the desktop environment as much as possible. In particular, I wanted a detailed view of some app data to be in a popup window that would maintain its state regardless of the section of the application the user was in.
Thus my frankenstein was born.
What ended up happening due to budget/time constraints was the code got out of hand. The various sections of my JavaScript source got muddled together. Maintaining the proper state of various views I had proved to be... difficult.
With proper planning and technique I think the 'one-page' approach is a very easy way to open up some very interesting possibilities (ex: widgets that maintain state across application sections). But it also opens up many... many potential problem areas. including...
Flooding the global namespace (if you don't already have your own namespace object... make one - see the sketch after this list)
Code organization can easily get... out of hand
Context - it's very easy to...
I'm sure there are more...
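On the global-namespace point, the usual remedy is a single application object plus function scope. A minimal sketch (all names are hypothetical):

// One global instead of dozens of stray variables:
var MyApp = MyApp || {};

(function () {
    // Anything declared in here stays out of the global namespace.
    var popupCount = 0;

    MyApp.openPopup = function (text) {
        popupCount++;
        // ...create and show the popup...
    };
}());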
In short, I would urge you to stay away from JavaScript dependency, for the compatibility issues alone. What I've come to realize is that there is simply no need to rely on JavaScript for everything.
I'm actually in the process of removing JavaScript dependencies in favor of Progressive Enhancement. It just makes more sense. You can achieve the same or similar effects with properly coded JavaScript.
The idea is to...
Develop a well-formatted, fully functional application without any JavaScript
Style it
Wrap the whole thing with JavaScript
Using Progressive Enhancement, one can develop an application that delivers the best experience possible for the user.
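A minimal sketch of that layering (the ids and URL are hypothetical): the markup is a plain link that works as a normal page load without JavaScript, and the script upgrades it in place when it is available:

// Enhance <a id="profile-link" href="/profile">, which works without JS:
document.getElementById("profile-link").addEventListener("click", function (e) {
    e.preventDefault(); // take over only when JavaScript actually runs
    fetch(this.href)
        .then(function (r) { return r.text(); })
        .then(function (html) {
            document.getElementById("content").innerHTML = html;
        });
});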
For some additional arguments, check out The Single Page Interface Manifesto and some (mostly) negative reaction to it on Hacker News (link at the bottom of the SPI page):
The Single Page Interface Manifesto: http://itsnat.sourceforge.net/php/spim/spi_manifesto_en.php
stofac, first of all, thanks for the link to the Single Page Interface (SPI) Manifesto (I'm the author of this boring text)
That said, SPI != doing everything through JavaScript.
Take a look at this example (server-centric):
http://www.innowhere.com/insites/
The same in GAE:
http://itsnatsites.appspot.com/
More info about the GAE approach:
http://www.theserverside.com/news/thread.tss?thread_id=60270
In my opinion, coding a complex SPI application/web site fully in JavaScript is very, very complex and problematic. The best approach, in my opinion, is "hybrid programming" for SPI: a mix of server-centric (for big state management) and client-centric, a.k.a. JavaScript by hand (for special effects).
Doing everything on a single page using ajax everywhere would break the browser's history/back button functionality and be annoying to the user.
I utterly despise JS-only sites where it is not needed. That extra condition makes all the difference. By way of example, consider the oft-quoted Google Docs: in that case JS not only improves the experience, it is essential. But some parts of Google Help have been JS-only even though it adds nothing to the experience; they only show static content.
Here are reasons for my upset:
Like many, I am a user of NoScript and love it. Pages load faster, I feel safer and the more distracting adverts are avoided. The last point may seem like a bad thing for webmasters but I don't want anyone to get rewarded for pushing annoying flashy things in my face, if tactless advertisers go out of business I consider it natural selection.
Obviously this means some visitors to your site are either going to be turned away or feel hassled by the need to provide a temporary exclusion. This reduces your audience.
You are duplicating effort. The browser already has a perfectly good history function and you shouldn't need to reinvent the wheel by redrawing the previous page when a back button is clicked. To make matters worse going back a page shouldn't require re-rendering. I guess I am a student of If-it-ain't-broke-don't-fix-it School (from Don't-Repeat-Yourself U.).
There are no HTTP headers when traversing "pages" in JS. This means no cache controls, no expiries, content cannot be adjusted for requested language nor location, no meaningful "page not found" nor "unavailable" responses. You could write error handling routines within your uber-page that respond to failed AJAX fetches but that is more complexity and reinvention, it is redundant.
No caching is a big deal for me, without it proxies cannot work efficiently and caching has the greatest of all load reducing effects. Again, you could mimic some caching in your JS app but that is yet more complexity and redundancy, higher memory usage and poorer user experience overall.
Initial load times are greater. By loading so much Javascript on the first visit you are causing a longer delay.
More JavaScript complexity means more debugging in various browsers. Server-side processing means debugging only once.
Unfuddle (a bug-tracker) left a bad taste. One of my most unpleasant web experiences was being forced to use this service by an employer. On the surface it seems well suited: the JS-heavy section is private, so it doesn't need to worry about search engines, and only repeat visitors will be using it, so they have time to turn off protections and shouldn't mind the initial JS library load.
But its use of JS is pointless; most content is static. "Pages" were still being fetched (via AJAX), so the delay was the same. With the benefit of AJAX it should have been polling in the background to check for changes, but I wouldn't get notified when the visible page had been modified. Sections had different styles, so there was an awkward re-rendering when traversing them; loading external stylesheets via JavaScript is Bad Practice™. Ease of use was sacrificed for whizz-bang "look at our Web 2.0" features. Such a business-orientated application should concentrate on speed of retrieval, but it ended up slower.
Eventually I had to refuse to use it as it was disrupting the team's work flow. This is not good for client-vendor relationships.
Dynamic pages are harder to save for offline use. Some mobile users like to download in advance and turn off their connection to save power and data usage.
Dynamic pages are harder for screen readers to parse. While the number of blind users is probably smaller than the number with NoScript or a mobile connection, it is inexcusable to ignore accessibility - and in some countries it is even illegal; see the Disability Discrimination Act (1995) and the Equality Act (2010).
As mentioned in other answers the "Progressive Enhancement", née "Unobtrusive Javascript", is the better approach. When I am required to make a JS-only site (remember, I don't object to it on principle and there are times when it is valid) I look forward to implementing the aforementioned AJAX crawling and hope it becomes more standardised in future.