This is an extension of an earlier question I asked here:
Django - Parse XML, output as HTML fragments for iFrame?
Basically, we're looking at integrating various HTML fragments into a page. We have a small web app generating little fragments for different results/gadgets at various URLs. These are not necessarily on the same domain as the main page.
I was wondering what's the best way of including these into the main page? We also need the ability to skin the HTML fragments using CSS from the main page.
My initial thought was iFrames; however, I'm thinking the performance of this might not be great, and there are restrictions on CSS/JS manipulation of the included fragments.
Is SSI a better idea? Or should we use PHP includes? Or JS? Any other suggestions? The two main considerations are the performance of the main page and the ability to style/manipulate the included fragments.
Cheers,
Victor
This sounds similar to what Facebook Platform applications do. One kind simply uses IFRAMEs; the other takes output from a backend and transforms it: <fb:whatever> elements are expanded, JavaScript is executed, and things like buttons are skinned. You could look at them for an example.
Using IFRAMEs would probably make things complicated. By default you cannot modify styles inside them from the outer frame, but you could probably use something like Google Closure's net.IframeIo to work around that.
I would try loading widgets using cross-domain scripting. Then you can add the widget's content to the page, however you wish, such as inserting it into the DOM.
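A minimal sketch of what such a cross-domain loader could look like, using the JSONP technique (the endpoint, callback, and element names here are hypothetical):

<script type='text/javascript'>
// Hypothetical JSONP loader: the widget server wraps its response in a
// callback function, which lets the content cross domain boundaries.
function loadWidget(url, containerId) {
    window.renderWidget = function (data) {
        // data.html is assumed to be the HTML fragment built by the server
        document.getElementById(containerId).innerHTML = data.html;
    };
    var s = document.createElement('script');
    s.src = url + '?callback=renderWidget';
    document.body.appendChild(s);
}
loadWidget('http://widgets.example.com/gadget', 'gadget-container');
</script>

Because the fragment ends up as ordinary nodes in the parent DOM, the main page's CSS applies to it directly, which covers the styling requirement.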
iFrames should not be a problem performance-wise - it won't make a difference whether it's the browser doing the querying or your server. You may get design problems, though.
SSI and PHP includes are more or less the same, but they both have the same problem: if the source page is down, rendering of the whole page is delayed.
The best thing performance-wise would be a cached PHP solution that reads the snippet and is thus less vulnerable to outages.
Funnily enough, I have written a PHP-based tool for exactly this purpose, and the company I wrote it for has agreed to publish it as open source. It will be at least another four weeks, though, until I get around to packaging it and setting up the documentation. Should that be of any interest to you despite the timeframe, let me know and I will keep you updated.
Our goal is to let a person, similarly to how you can embed a youtube video, embed a tree of web pages on our SaaS into their own site.
The solution should be good-looking, cross-browser, responsive, and simple for the end user (ideally, it should be search-bot friendly as well).
We are considering two primary options, where the customer only needs to copy+paste a code snippet that supports embedding in a portion of the page (ex: the middle column) or full width (ex: everything under a header):
IFRAME: Let the user embed an iframe inside a div, together with a snippet of JS that would resize the iframe as the window is resized.
JS "APP": Let the user paste in a script tag to a JS script/app which would communicate cross-domain (via CORS or JSONP) with our servers.
Ideally, we would like to be able to launch full screen modals from our content.
Questions/concerns for:
IFRAME:
Can an iframe reliably update the URL of the parent’s browser window?
Can we reliably launch full screen modals from an iframe?
Can we reliably get the iframe to resize when the window is resized or the iframe content changes? (See the sketch after this list.)
JS "APP":
How significant is the overhead of properly encapsulating our app to avoid naming/library conflicts? For example, we will ideally stick to vanilla JS, but what if we want to use a library like Ember and a customer of ours already has an Ember site?
Any non-obvious cross domain gotchas? We will be using CORS or JSONP.
We would like input on both the:
technical limitations of what’s possible to do
practical hurdles we’d have to overcome going down each path.
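For reference, the pattern we are considering for the resize concern is to have the framed page post its height to the parent with postMessage. A rough sketch, with all ids and field names hypothetical:

Inside the framed page:

<script type='text/javascript'>
// Report our rendered height to the embedding page.
function postHeight() {
    parent.postMessage({ widgetHeight: document.body.scrollHeight }, '*');
}
window.onload = postHeight;
window.onresize = postHeight;
</script>

In the embedding page:

<script type='text/javascript'>
// Resize the iframe whenever a height message arrives.
window.addEventListener('message', function (e) {
    // In production, check e.origin against the widget's domain.
    if (e.data && e.data.widgetHeight) {
        document.getElementById('widget-frame').style.height =
            e.data.widgetHeight + 'px';
    }
}, false);
</script>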
p.s. We're also considering a back-up option, which is to "fake" the integration: we host the content on our site under a subdomain URL (similar to how Tumblr lets people host their blog at something like "apple.tumblr.com"). If we go down this path, we imagine letting the user theme the subdomain. We understand the pros and cons of this path. The downsides are that it's more work for the user, and they need to keep two different sites in sync visually.
I think the best approach is the one Google and other big companies have been using for quite some time: the embedded script. It is better looking and semantically harmless (as an iframe, arguably, is not), and it is just as cross-browser and simple as the code that will be pushed. (It is best placed in a <script async> tag, which eliminates the need for a hand-crafted async loader; although async isn't fully compatible everywhere, it degrades okay and will work very well in most cases.)
As for CORS, if basic precautions are taken, there's no need to worry a lot about it. I would say, though, that if you're planning to embed something built with Ember or Angular, that may be a whole load of scripts/bytes that can slow down the site/app and hurt the whole user experience, perhaps too much for a component loaded like that. So whenever possible, use vanilla JS in components, especially if Ember/Angular would be pulled in only for specific features like binding (targeted vanilla JS can cover that at much less weight).
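For concreteness, the snippet the customer pastes could be as small as this (the domain and element id are placeholders, not a real API):

<div id='acme-embed'></div>
<script async src='https://saas.example.com/embed.js'></script>

And embed.js, served from your domain, would be a small self-contained loader along these lines:

// embed.js (sketch): find the placeholder and fill it with content
// fetched from our servers via a CORS request.
(function () {
    var target = document.getElementById('acme-embed');
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://saas.example.com/widget-content', true);
    xhr.onload = function () { target.innerHTML = xhr.responseText; };
    xhr.send();
})();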
I'm making an HTML5 page (a game) which uses a lot of popups and all kinds of widgets appearing and disappearing on the same page.
To implement this I could
Have all the popups and widgets present in the page but invisible (like a lot of examples I've seen), and toggle only their visibility.
Add and remove them dynamically, using JavaScript. I could put each popup as an HTML fragment in a separate file (?).
The second is "modular", and I like the fact that I have no elements in the page which I'm not actually using. But I don't know about performance (loading the HTML each time, DOM insertion, etc.).
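To make the two options concrete, a rough sketch (the element ids and markup are made up):

<script type='text/javascript'>
// Option 1: the popup is already in the page, hidden; only toggle visibility.
function showPopup(id) {
    document.getElementById(id).style.display = 'block';
}
function hidePopup(id) {
    document.getElementById(id).style.display = 'none';
}

// Option 2: build the popup on demand and remove it when closed.
function createPopup(html) {
    var div = document.createElement('div');
    div.className = 'popup';
    div.innerHTML = html;
    document.body.appendChild(div);
    return div; // caller removes it later with div.parentNode.removeChild(div)
}
</script>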
Is there a preferred/standard way to do this?
If we are talking about loading HTML from the server, then obviously this won't be efficient.
I don't know what kind of game you are writing, but I don't think there will be any visible difference in performance (except for loading data from the server) unless you create something like thousands of popups per second (I doubt it). Let's be honest - your game isn't using 4GB of memory. :) And if it is, then you're probably doing something wrong. And I do not think there is any standard way. It's more about how you feel it. :)
For example, I always try to load every possible piece of data from the server with one request and store it on the client side, because most performance problems are actually related to client-server communication. I also like the DOM to be clean, so in most cases I hold the (hidden) data in JavaScript, except for forms' hidden fields.
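As a sketch of that load-once idea (jQuery assumed; the URL is hypothetical):

<script type='text/javascript'>
// Fetch everything the game needs in a single request and keep it
// in memory; later lookups never touch the server.
var gameData = null;
$.getJSON('/api/game-data', function (data) {
    gameData = data;
});
</script>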
On the other hand, if I have, for example, a blog with discussion and I load some additional data (for example user data, which is supposed to appear as a popup after a click on the user's name), I tend to store it in DOM elements, because it is easier (at least for me) to control it that way (I'm talking about jQuery and jQuery UI).
Note that it is possible that recreating popups may lead to memory leaks, but it is highly unlikely if you use some popular library like (for example) jQuery UI.
Recently, I came up with an idea for improving the overall performance of a web application: instead of generating a ready-to-show HTML page on the web server, why not let the page be fully generated on the client side? That way, only pure data (in my case, data in JSON format) needs to be sent to the browser. This offloads the work of HTML generation from the server to the client's browser and reduces the size of the response sent back to users.
After some research, I have found that this framework (http://beebole-apps.com/pure/) does the same thing as mine.
What I want to know is the pros and cons of doing it this way.
It is surely faster and better for web servers, and with modern browsers JavaScript runs fast, so page generation can be done quickly.
The main downside of this method is probably SEO.
I am not sure if search engines like Google will appropriately index my website.
Could you tell me what the downside for this method is?
P.S. I have also attached sample code to help describe the method, as follows:
In the head, simple JavaScript code will be written like this:
<script type='text/javascript' src='html_generator.js'></script>
<script type='text/javascript'>
// Run after the document loads, so that #data exists.
// (jQuery is assumed to be available for the $ helper.)
window.onload = function onPageLoad(){
    htmlGenerate($('#data').val());
};
</script>
In the body, only one element exists, used merely as a data container, as follows:
<input type='hidden' id='data' value='{"a": 1, "b": 2, "c": 3}'/>
When the browser renders the file, the htmlGenerate function in html_generator.js will be called and the whole HTML page will be generated by this function. Note that the html_generator.js file can be large, since it contains a lot of HTML templates, but since it can be cached in the browser, it will be downloaded once and only once.
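For illustration, htmlGenerate would look something like this (a simplified sketch, not the real file; jQuery assumed):

// html_generator.js (sketch): turn the JSON payload into markup
// and insert it into the empty page.
function htmlGenerate(json) {
    var data = JSON.parse(json);
    var html = '<ul>';
    for (var key in data) {
        if (data.hasOwnProperty(key)) {
            html += '<li>' + key + ': ' + data[key] + '</li>';
        }
    }
    html += '</ul>';
    $('body').append(html);
}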
Downsides
Search Engines will not be able to index your page. If they do, you're very lucky.
Users with JavaScript disabled, or on mobile devices, will very likely not be able to use it.
The speed advantages might turn out to be minimal, especially if the user is on a slow JavaScript engine like IE's.
Maintainability: unless you automate the generation of your JavaScript, it's going to be a nightmare!
All in all
This method is not recommended if you're doing it only for the speed increase. However, it is often done in web applications where users stay on the same page, but then you would probably be better off using one of the existing frameworks for this, such as Backbone.js, and making sure it remains crawlable by following Google's hashbang guidelines or using HTML5 PushState (as suggested by @rohk).
If it's just performance you're looking for, and your app doesn't strictly need to work like this, don't do it. But if you do go this way, then make sure you do it in the standardized way so that it remains fast and indexable.
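For reference, the HTML5 PushState approach mentioned above boils down to giving each client-rendered view a real URL; a sketch (renderView is a hypothetical function that swaps the page content):

<script type='text/javascript'>
function showView(name) {
    renderView(name);                                  // draw the view client-side
    history.pushState({ view: name }, '', '/' + name); // update the address bar
}
// Redraw when the user presses Back/Forward.
window.onpopstate = function (e) {
    if (e.state) renderView(e.state.view);
};
</script>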
Client-side templating is often used in Single Page Applications.
You have to weigh the pros and cons:
Pros :
More responsive interface
Reduced load on your web server
Cons :
SEO will be harder than for a classic website (unless you use HTML5 PushState)
Accessibility: you are relying heavily on JavaScript...
If you are going this way I recommend that you look at Backbone.js.
You can find tutorials and examples here : http://www.quora.com/What-are-some-good-resources-for-Backbone-js
Examples of SPA :
Document Cloud
Grooveshark
Gmail
Also look at this answer : https://stackoverflow.com/a/8372246/467003
Yikes.
jQuery templates are closer to what you are thinking. Sanity says you should have some HTML templates that you can easily modify at will. You want MAINTAINABILITY, not string concatenations that keep you tossing and turning in your worst nightmares.
You can continue with this madness, but you will end up learning the hard way. I implore you to first try some prototypes using jQuery templates and your pure-code way. Sanity will surely overcome you, my friend, and I say that having tried your way for a time. :)
It's also possible to dynamically load just the template you need using AJAX. It doesn't make sense to take a panacea approach where every single conceivable template needs to be downloaded in a single request.
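For example, a fragment can be fetched only when it is first needed; a sketch with jQuery (the URL scheme is made up):

<script type='text/javascript'>
// Fetch a template fragment on demand instead of bundling everything.
function loadTemplate(name, done) {
    $.get('/templates/' + name + '.html', function (html) {
        done($(html)); // hand the caller a ready-to-insert fragment
    });
}
loadTemplate('user-popup', function ($el) { $('body').append($el); });
</script>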
The pros? I really can't think of any. You say that it will be easier on the web server, but serving HTML is what web servers are designed to do.
The cons? It goes against just about every best practice when it comes to building websites:
search engines will not be able to index your site well, if at all
reduced maintainability
no matter how fast JS engines are, the DOM is slow, and will never be as fast as building HTML on the server
one JS error and your entire site doesn't render. Oops. Browsers, however, are extremely tolerant of HTML errors.
Ultimately HTML is a great tool for displaying content. JavaScript is a great way of adding interaction. Mixing them up like you suggest is just nuts and will only cause problems.
Cons
SEO
Excluding users without JavaScript
Pros
Faster workflow / more fluid interface
Small reduction in server load
If SEO is not important for you and most of your users have JavaScript, you could create a Single Page Application. It will not reduce your load very much, but it can create a faster workflow on the page.
Btw: I would not pack all templates in one file. It would be too big for big projects.
How good is it to load/render repeating elements in an HTML page via JavaScript? I know a few (5%) of people don't have JS enabled, so is it really worth it? I can get up to a 15-20% reduction in markup, and in turn page size, by doing so.
If you care about page size, then turn on compression on your server instead.
Repeating content compresses very well, so your 15-20% is going to be a much smaller proportion of the page weight (and the HTML is probably going to be insignificant compared to any images you have anyway).
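To give a sense of how little effort that is, here is a sketch for a Node/Express stack, assuming the 'express' and 'compression' npm packages (Apache and nginx have equivalent gzip directives):

// Serve every response gzip-compressed; repetitive markup shrinks the most.
var express = require('express');
var compression = require('compression');
var app = express();
app.use(compression());
app.listen(3000);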
Avoid content generation with JS if you can, it is another point of failure.
I don't think a 15% reduction in size warrants a run of JS, because JS engines differ considerably across platforms, and the extra JS code needed to work on all browsers will be about the same length as the markup it saves.
Moreover, the time the browser takes to compile the JS, build the DOM tree, add it to the document, and render it will make your page slow.
IMO you should use JS to generate dynamic HTML. And if the user has disabled JS, you can give them a warning such as "You should enable JavaScript for this website to work best."
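A minimal way to show that warning, with no library needed:

<noscript>
    <p>You should enable JavaScript for this website to work best.</p>
</noscript>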
If you're not loading the HTML data asynchronously, you could use a server-side language (like PHP) to print the repeating HTML before the page is served to the browser. If that's not an option, then you'll need to either stick with the JavaScript you already have or suck it up and type it out manually.
In this day and age, it's mostly acceptable to tell users that they have to have Javascript enabled to view your website.
In my opinion, most of the people who have JS disabled in their browser for various reasons (official policy, security) won't fit the profile of a client for most websites. The reason is that almost all of us are targeting the regular Joe, who uses Gmail with Ajax and experiences the web in multimedia.
I would recommend using JS if it is making your product work better.
When developing a web app, versus a web site, what reasons are there to use multiple HTML pages rather than one HTML page that does everything through JavaScript?
I would expect that it depends on the application -- maybe -- but would appreciate any thoughts on the subject.
Thanks in advance.
EDIT:
Based on the responses here and some of my own research, if you wanted to build a single-page, fully JS-powered site, some useful tools would seem to include:
jQuery Plug-ins:
jQuery History: http://balupton.com/projects/jquery-history
jQuery Address: http://plugins.jquery.com/project/jquery-address
jQuery Pagination: http://plugins.jquery.com/project/pagination
Frameworks:
Sproutcore: http://www.sproutcore.com/
Cappuccino: http://cappuccino.org/
Possibly, JMVC: http://www.javascriptmvc.com/
Page-based applications provide:
ability to work on any browser or device
simpler programming model
They also provide the following (although these are solvable by many JS frameworks):
bookmarkability
browser history
refresh or F5 to repeat action
indexability (in case the application is public and open)
One of the bigger reasons is going to be how searchable your website is.
Doing everything in JavaScript is going to make it complicated for search engines to crawl all the content of your website, and thus not fully index it. There are ways around this (with Google's recent AJAX SEO guidelines), but I'm not sure if all search engines support this yet. On top of that, it's a little bit more complex than just making separate pages.
The bigger issue, whether you decide to build multiple HTML pages or use some sort of framework or CMS to generate them for you, is that the different sections of your website have URLs that are unique to them. E.g., an about section would have a URL like mywebsite.com/about, and that URL is used on the actual "about" link within the website.
One of the biggest downfalls of single-page, Ajax-ified websites is complexity. What might otherwise be spread across several pages suddenly finds its way into one huge, master page. Also, it can be difficult to coordinate the state of the page (for example, tracking if you are in Edit mode, or Preview mode, etc.) and adjusting the interface to match.
Also, one master page that is heavy on JS can be a performance drag if it has to load multiple, big JS files.
At the OP's request, I'm going to discuss my experience with JS-only sites. I've written four relevant sites: two JS-heavy (Slide and SpeedDate) and two JS-only (Yazooli and GameCrush). Keep in mind that I'm a JS-only-site bigot, so you're basically reading John Hinckley on the subject of Jodie Foster.
The idea really works. It produces graceful, responsive sites at very low operational cost. My estimate is that the cost of bandwidth, CPU, and such drops to 10% of the cost of running a similar page-based site.
You need fewer but better (or at least better-trained) programmers. JavaScript is a powerful and elegant language, but it has huge problems that a more rigid and unimaginative language like Java doesn't have. If you have a whole bunch of basically mediocre guys working for you, consider JSP or Ruby instead of JS-only. If you are required to use PHP, just shoot yourself.
You have to keep basic session state in the anchor tag. Users simply expect the URL to represent the state of the site: reload, bookmark, back, forward. jQuery's Address plug-in will do a lot of the work for you.
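Without the plug-in, the core of the technique is just reading and writing location.hash; a sketch (renderState is a hypothetical function that redraws the page for a given state):

<script type='text/javascript'>
// Keep view state in the URL fragment so reload/bookmark/back all work.
function goTo(state) {
    location.hash = state; // e.g. '#inbox' or '#settings'
}
window.onhashchange = function () {
    renderState(location.hash.slice(1));
};
</script>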
If SEO is an issue for you, investigate Google Ajax Crawling. Basically, you make a very simple parallel site, just for search engines.
When would I not use JS-only? If I were producing a site that was almost entirely content, where the user did nothing but navigate from one place to another, never interacting with the site in a complicated manner. So, Wikipedia and ... well, that's about it. A big reference site, with a lot of data for the user to read.
Modularization.
Multiple files allow you to more cleanly break out different workflow paths and process parts.
Chances are your business rules do not usually directly impact your layout rules, and multiple files help you edit what needs to be edited without the risk of breaking something unrelated.
I actually just developed my first application using only one page.
...it got messy
My idea was to create an application that mimicked the desktop environment as much as possible. In particular, I wanted a detailed view of some app data to live in a popup window that would maintain its state regardless of the section of the application the user was in.
Thus my frankenstein was born.
What ended up happening due to budget/time constraints was the code got out of hand. The various sections of my JavaScript source got muddled together. Maintaining the proper state of various views I had proved to be... difficult.
With proper planning and technique, I think the "one-page" approach is a very easy way to open up some very interesting possibilities (ex: widgets that maintain state across application sections). But it also opens up many... many potential problem areas, including...
Flooding the global namespace (if you don't already have your own... make one)
Code organization can easily get... out of hand
Context - It's very easy to
I'm sure there are more...
In short, I would urge you to stay away from JavaScript dependency, for the compatibility issues alone. What I've come to realize is that there is simply no need to rely on JavaScript for everything.
I'm actually in the process of removing JavaScript dependencies in favor of Progressive Enhancement. It just makes more sense. You can achieve the same or similar effects with properly coded JavaScript layered on top.
The idea is to...
Develop a well-formatted, fully functional application without any JavaScript
Style it
Wrap the whole thing with JavaScript
Using Progressive Enhancement, one can develop an application that delivers the best experience possible for each user.
For some additional arguments, check out The Single Page Interface Manifesto and some (mostly) negative reaction to it on Hacker News (link at the bottom of the SPI page):
The Single Page Interface Manifesto: http://itsnat.sourceforge.net/php/spim/spi_manifesto_en.php
stofac, first of all, thanks for the link to the Single Page Interface (SPI) Manifesto (I'm the author of this boring text).
That said, SPI != doing everything through JavaScript.
Take a look at this example (server-centric):
http://www.innowhere.com/insites/
The same in GAE:
http://itsnatsites.appspot.com/
More info about the GAE approach:
http://www.theserverside.com/news/thread.tss?thread_id=60270
In my opinion, coding a complex SPI application/web site fully in JavaScript is very complex and problematic. The best approach, in my opinion, is "hybrid programming" for SPI: a mix of server-centric code for big state management and client-centric code (a.k.a. JavaScript by hand) for special effects.
Doing everything on a single page using Ajax everywhere would break the browser's history/back-button functionality and be annoying to the user.
I utterly despise JS-only sites where they are not needed. That extra condition makes all the difference. By way of example, consider the oft-quoted Google Docs: there, JS not only improves the experience, it is essential. But some parts of Google Help have been JS-only, and there it adds nothing to the experience; it is only showing static content.
Here are reasons for my upset:
Like many, I am a user of NoScript and love it. Pages load faster, I feel safer and the more distracting adverts are avoided. The last point may seem like a bad thing for webmasters but I don't want anyone to get rewarded for pushing annoying flashy things in my face, if tactless advertisers go out of business I consider it natural selection.
Obviously this means some visitors to your site are either going to be turned away or feel hassled by the need to provide a temporary exclusion. This reduces your audience.
You are duplicating effort. The browser already has a perfectly good history function, and you shouldn't need to reinvent the wheel by redrawing the previous page when the back button is clicked. To make matters worse, going back a page shouldn't require re-rendering. I guess I am a student of the If-it-ain't-broke-don't-fix-it School (from Don't-Repeat-Yourself U.).
There are no HTTP headers when traversing "pages" in JS. This means no cache controls and no expiries; content cannot be adjusted for the requested language or location; and there are no meaningful "page not found" or "unavailable" responses. You could write error-handling routines within your uber-page that respond to failed AJAX fetches, but that is more complexity and reinvention; it is redundant.
No caching is a big deal for me, without it proxies cannot work efficiently and caching has the greatest of all load reducing effects. Again, you could mimic some caching in your JS app but that is yet more complexity and redundancy, higher memory usage and poorer user experience overall.
Initial load times are greater. By loading so much Javascript on the first visit you are causing a longer delay.
More JavaScript complexity means more debugging in various browsers. Server-side processing means debugging only once.
Unfuddle (a bug-tracker) left a bad taste. One of my most unpleasant web experiences was being forced to use this service by an employer. On the surface it seems well suited: the JS-heavy section is private, so it doesn't need to worry about search engines; only repeat visitors will be using it, so they have time to turn off protections and shouldn't mind the initial JS library load.
But its use of JS is pointless; most content is static. "Pages" were still being fetched (via AJAX), so the delay was the same. With the benefit of AJAX it should have been polling in the background to check for changes, but I wouldn't get notified when the visible page had been modified. Sections had different styles, so there was an awkward re-rendering when traversing between them; loading external stylesheets via JavaScript is Bad Practice™. Ease of use was sacrificed for whizz-bang "look at our Web 2.0" features. Such a business-oriented application should concentrate on speed of retrieval, but it ended up slower.
Eventually I had to refuse to use it as it was disrupting the team's work flow. This is not good for client-vendor relationships.
Dynamic pages are harder to save for offline use. Some mobile users like to download in advance and turn off their connection to save power and data usage.
Dynamic pages are harder for screen readers to parse. While the number of blind users is probably smaller than the number with NoScript or a mobile connection, it is inexcusable to ignore accessibility - and in some countries even illegal; see the "Disability Discrimination Act" (1999) and "Equality Act" (2010).
As mentioned in other answers, "Progressive Enhancement", née "Unobtrusive JavaScript", is the better approach. When I am required to make a JS-only site (remember, I don't object to it on principle, and there are times when it is valid), I look forward to implementing the aforementioned AJAX crawling, and I hope it becomes more standardised in future.