When I try to load a very large file using the appropriate loaders provided with the library, the tab my website runs in crashes. I have tried implementing the Worker class, but it doesn't seem to work. Here's what happens:
In the main javascript file I have:
var worker = new Worker('loader.js');
When the user selects one of the available models, I check the extension and pass the file URL/path to the worker (in this instance a PCD file):
worker.postMessage({fileType: "pcd", file: file});
Now the loader.js has the appropriate includes that are necessary to make it work:
importScripts('js/libs/three.js/three.js');
importScripts('js/libs/three.js/PCDLoader.js');
and in its onmessage handler, it uses the appropriate loader depending on the file extension:
var loader = new THREE.PCDLoader();
loader.load(file, function (mesh) {
    postMessage({points: mesh.geometry.attributes.position.array, colors: mesh.geometry.attributes.color.array});
});
The data is passed back successfully to the main JavaScript file, which adds it to the scene. At least for small files; large ones, like I said, take too long and the browser decides there was an error. I thought the Worker class was supposed to work asynchronously, so what's the deal here?
Currently, three.js's loaders rely on strings and arrays of strings to parse the data from a file. They don't split files into pieces, which leads to excessive memory usage that the browser interrupts. Loading a 64 MB file spikes to over 1 GB of memory used during load, which then results in an error.
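One thing that may reduce the pressure at the worker-to-main-thread hand-off (it won't fix the loader's own parsing spike) is to pass the typed arrays as transferables instead of letting postMessage copy them. A minimal sketch of the worker side, reusing the mesh object from the code above:

// Inside the worker's load callback: hand over the underlying ArrayBuffers
// instead of structured-cloning them. After this call the worker loses
// access to the buffers, which is fine since it is done with them.
var positions = mesh.geometry.attributes.position.array;
var colors = mesh.geometry.attributes.color.array;
postMessage(
    {points: positions, colors: colors},
    [positions.buffer, colors.buffer]  // transfer list: zero-copy hand-off
);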
Related
There are examples (especially in vtk.js, which shows a .vtp load) that indicate that all the rendering capabilities are there, but what about file format support: can one render a .pvd scene file directly on the client side, with no rendering/pre-rendering server on the backend (say, on jsfiddle)?
Or is there any conversion pipeline that would turn a traditional ParaView .pvd into something that can at least be loaded client-side?
Using VTK.js, no rendering is done on the server. All the rendering is done on the client using WebGL.
If by rendering you mean processing, here is some info:
Currently, to load a VTP file with VTK.js, you need to pre-process it and use the vtkHttpDataSetReader to load the chunks created. This behavior has been implemented to handle most of the files handled by ParaView and VTK.
However, if your VTP files are not compressed (ASCII files), you could write a dedicated reader. The same behavior has been implemented for the vtkOBJReader. See the OBJReader example.
EDIT:
The VTP file reader I mentioned in my original message has been implemented. It's named vtkXMLPolyDataReader. See the GeometryViewer example.
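For reference, here is a rough sketch of what loading a .vtp file fully client-side can look like, assuming the reader API used in the GeometryViewer example (vtkXMLPolyDataReader with parseAsArrayBuffer) and a hypothetical file URL:

import vtkFullScreenRenderWindow from 'vtk.js/Sources/Rendering/Misc/FullScreenRenderWindow';
import vtkXMLPolyDataReader from 'vtk.js/Sources/IO/XML/XMLPolyDataReader';
import vtkMapper from 'vtk.js/Sources/Rendering/Core/Mapper';
import vtkActor from 'vtk.js/Sources/Rendering/Core/Actor';

// Fetch the .vtp as binary, parse it, and render the resulting polydata.
fetch('/data/mesh.vtp')                          // hypothetical URL
  .then(function (res) { return res.arrayBuffer(); })
  .then(function (buffer) {
    var reader = vtkXMLPolyDataReader.newInstance();
    reader.parseAsArrayBuffer(buffer);

    var fullScreenRenderer = vtkFullScreenRenderWindow.newInstance();
    var renderer = fullScreenRenderer.getRenderer();
    var renderWindow = fullScreenRenderer.getRenderWindow();

    var mapper = vtkMapper.newInstance();
    mapper.setInputData(reader.getOutputData(0));

    var actor = vtkActor.newInstance();
    actor.setMapper(mapper);

    renderer.addActor(actor);
    renderer.resetCamera();
    renderWindow.render();
  });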
I am working on a webpage on which I need to fetch around 10 external .js files, so the browser makes 10 requests to fetch them. Is there any way I can fetch all of them in one shot?
First of all, if the browser gets all 10 JS files, they will be stored in its cache, so subsequent requests will load the files from cache.
As for merging the files into one: you can create a script which fetches all 10 files, merges their content and saves the result in a single file. Or do it by hand if the JS files are static.
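A minimal Node.js sketch of such a merge script, with hypothetical file names and output path:

// concat.js - concatenate a fixed list of JS files into one bundle
var fs = require('fs');

var files = ['a.js', 'b.js', 'c.js'];            // hypothetical source files
var bundle = files
    .map(function (name) { return fs.readFileSync(name, 'utf8'); })
    .join('\n;\n');                              // ';' guards against files missing a trailing semicolon

fs.writeFileSync('bundle.js', bundle);           // serve this single file instead of the 10 originals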
Is there any way I can bring all of them in one shot?
No, but you can bundle and compress them all into one single JS file.
That would save you the round-trip time for all 10 files.
You can use online tools to achieve it easily, like:
http://jscompress.com/
You can try these:
Compress and minify your JavaScript files.
Import all the JS files into one manually and compress the result.
Comments and whitespace are not needed for JavaScript execution;
removing them reduces the file size, which speeds up download and parsing.
OR
If possible, you can load the script asynchronously; HTML5 supports this.
The <script> tag has an async boolean attribute; make use of it.
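For example (the file name is just a placeholder):

<!-- async: fetch in parallel with HTML parsing, execute as soon as it arrives -->
<script async src="bundle.js"></script>
<!-- defer: fetch in parallel, but execute only after the document has been parsed -->
<script defer src="bundle.js"></script>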
I'm working on a large project which is extensible with modules. Every module can have its own JavaScript file, which may be needed on only one page, on multiple pages that use the module, or even on all pages if it is a global extension.
Right now I'm combining all .js files into one file whenever they get updated or a new module gets installed. The client only has to load one "big" .js file, but has to parse it on every page. Let's assume someone has installed a lot of modules and the .js file grows to 1-2 MB. Does it make sense to continue down this route, or should I include each .js file only where it is needed?
That would result in maybe 10-15 more HTTP requests for every page. At the same time, the parsing time for the .js files would be reduced, since only a small portion needs to be loaded for each page, and the browser wouldn't try to execute JS code that isn't required for the current page or perhaps can't even run there.
Comparing both scenarios is rather difficult for me, since I would have to rewrite a lot of code. Before I continue, I would like to know if someone has encountered a similar problem and how he/she solved it. My biggest concern is that the parsing time of the JS file grows too much. Usually network latency is the biggest concern, but I've never had to deal with so many possible modules/extensions -> JS files.
If these 2 conditions are true, then it doesn't really matter which path you take as long as you do the Requirement (below).
Condition 1:
The JavaScript files are being run inside of a standard browser, meaning they are not going to be run inside of an Apple iOS UIWebView app (an HTML5 iPhone/iPad app).
Condition 2:
The initial page load time does not matter so much. In other words, this is more of a web application than a web page. So users log in each day, stay logged in for a long time, do lots of stuff... log out... come back the next day...
Requirement:
Put the JavaScript file(s), CSS files and all images under a /cache directory on the web server. Tell the web server to send a max-age of 1 year in the header (only for this directory and its sub-directories). Then once the browser downloads a file, it will never again waste a round trip to the web server asking if it has the most recent version.
Then you will need to implement javascript versioning, usually this is done by adding "?jsver=1" in the js include line. Then increment the version with each change.
Use the Chrome inspector and make sure this is set up correctly: after the first request, the browser never sends an ETag or asks the web server for the file again for 1 year. (Hard reloads will download the file again, so test using links and a standard navigation path a user would normally take. Also watch the web server log to see which requests are being served.)
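A minimal sketch of both pieces, assuming an Express static server (a hypothetical setup; adapt the header configuration to whatever web server you actually run):

// Serve everything under /cache with a one-year max-age so the browser
// never re-validates those files once it has them.
var express = require('express');
var app = express();
app.use('/cache', express.static('cache', { maxAge: '365d' }));
app.listen(8080);

And the versioned include on the page, bumping jsver with each change so clients fetch the new file despite the long cache lifetime:

<script src="/cache/app.js?jsver=2"></script>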
Good browsers will compile the JavaScript to machine code, and the compiled code will sit in the browser's cache waiting for execution. That's why Condition #1 is important. Today, the only browser which will not JIT-compile JS code is Apple's Safari inside of UIWebView, which only happens if you are running HTML/JS inside of an Apple app (and the app is downloaded from the App Store).
Hope this makes sense. I've done these things and have reduced network round trips considerably. Read up on ETags and how browsers make round trips to determine whether they are using the current version of JS/CSS/images.
On the other hand, if you're building a web site and you want to optimize for the first time visitor, then less is better. Only have the browser download what is absolutely needed for the first page view.
You really REALLY should be using on-demand JavaScript. Only load what 90% of users will use; for things most people won't use, keep them separate and load them on demand. Also, you should seriously reconsider what you're doing if you've got upwards of two megabytes of JavaScript after compression.
// Load scripts/<url>.js on demand; once the function named f exists,
// optionally call it (exe == 1).
function ondemand(url, f, exe)
{
    // If the function already exists, the script was loaded before: just call it.
    if (eval('typeof ' + f) == 'function') {eval(f + '();');}
    else
    {
        var h = document.getElementsByTagName('head')[0];
        var js = document.createElement('script');
        js.setAttribute('defer', 'defer');
        js.setAttribute('src', 'scripts/' + url + '.js');
        js.setAttribute('type', document.getElementsByTagName('script')[0].getAttribute('type'));
        h.appendChild(js);
        // Poll until the script has defined f (or give up).
        ondemand_poll(f, 0, exe);
        h.appendChild(document.createTextNode('\n'));
    }
}

// Check every 50 ms (up to 200 times, roughly 10 seconds) whether f exists yet.
function ondemand_poll(f, i, exe)
{
    if (i < 200) {setTimeout(function() {if (eval('typeof ' + f) == 'function') {if (exe == 1) {eval(f + '();');}} else {i++; ondemand_poll(f, i, exe);}}, 50);}
    else {alert('Error: could not load \'' + f + '\', certain features on this page may not work as intended.\n\nReloading the page may correct the problem; if it does not, check your internet connection.');}
}
Example usage: load example.js (first parameter), poll for the function example_init1() (second parameter), and pass 1 (third parameter) to execute that function once the polling finds it...
function example() {ondemand('example','example_init1',1);}
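In browsers that support ES modules, a rough modern equivalent is dynamic import(), which returns a promise instead of requiring polling (the module path and export name here are hypothetical):

// Load the module only when it is actually needed, then call its init function.
document.getElementById('example-button').addEventListener('click', function () {
    import('./scripts/example.js')
        .then(function (mod) { mod.example_init1(); })
        .catch(function (err) { console.error('Could not load example.js', err); });
});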
I tried unsuccessfully to add a Google map (externally loaded script) to a Meteor app, and I noticed there were two kinds of problems:
If I do the simple thing and add the main API script to my <head></head>, then it gets rendered last.
When this happens, I am obliged to insert any scripts that depend on the API into my template's <head> again, after the main API script (otherwise those scripts complain that they can't see the API, etc.).
Then the time for the actual function call comes, and now putting it inside <head> after the rest won't work. You need to use Template.MyTemplate.rendered.
Basically my question is:
What's the cleanest way to handle these kinds of things?
Is there some other variable/method I can use to make sure the main Google API file is loaded first in my HTML?
I just released a package on atmosphere (https://atmosphere.meteor.com) that might help a bit. It's called session-extras, and it defines a couple functions that I've used to help with integrating external scripts. Code here: https://github.com/belisarius222/meteor-session-extras
The basic idea is to load a script asynchronously, and then in the callback when the script has finished loading, set a Session variable. I use the functions in the session-extras package to try to make this process a bit smoother. I have a few functions that have 3 or 4 different dependencies (scripts and subscriptions), so it was starting to get hairy...
I suppose I should add that you can then conditionally render templates based on whether all the dependencies are there. So if you have a facebook button, for example, with helpers that check the Session variables, you can give it a "disabled" css class and show "loading facebook..." until all the necessary scripts have loaded.
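A minimal sketch of that pattern (the script URL, Session key, template and helper names are all made up; Session.set/get and template helpers are standard Meteor APIs, assuming the current Template.helpers form):

// Client: inject the external script and flag the Session once it has loaded.
var script = document.createElement('script');
script.src = 'https://maps.googleapis.com/maps/api/js?key=YOUR_KEY'; // placeholder key
script.async = true;
script.onload = function () {
    Session.set('mapsApiLoaded', true);
};
document.head.appendChild(script);

// Template helper: only render/enable the map UI once the API is available.
Template.myMap.helpers({
    mapsReady: function () {
        return Session.get('mapsApiLoaded');
    }
});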
edit 03/14/2013
There is also an entirely different approach that is applicable in many cases: create your own package. This is currently possible with Meteorite (instructions), and the functionality should soon be available in Meteor itself. Some examples of this approach are:
jquery-rate-it: https://github.com/dandv/meteor-jquery-rateit
meteor-mixpanel: https://github.com/belisarius222/meteor-mixpanel
If you put a js file in a package, it loads before your app code, which is often a good way to include libraries. Another advantage of making a package is that packages can declare dependencies on each other, so if the script in question is, for example, a jQuery plugin, you can specify in the package's package.js file that the package depends on jQuery, and that will ensure the correct load order.
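For instance, a package.js along these lines declares the dependency so the plugin always loads after jQuery (package and file names are made up; this assumes the newer Package.onUse API, while Meteorite-era packages used Package.on_use / api.add_files instead):

// package.js
Package.describe({
    name: 'me:my-jquery-plugin',         // hypothetical package name
    version: '0.0.1',
    summary: 'Wraps a jQuery plugin as a Meteor package'
});

Package.onUse(function (api) {
    api.use('jquery', 'client');         // ensures jQuery is loaded first
    api.addFiles('plugin.js', 'client'); // the plugin file shipped with this package
});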
Sometimes it gets a little more interesting (in the Chinese curse sense), since many external services, including mixpanel and filepicker.io, have a 2-part loading process: 1) a JS snippet to be included at the end of the body, and 2) a bigger script loaded from a CDN asynchronously by that snippet. The js snippet generally (but not always!) makes some methods available for use before the bigger script loads, so that you can call its functions without having to set up more logic to determine its load status. Mixpanel does that, although it's important to remember that some of the JS snippets from external services expect you to set the API key at the end of the snippet, guaranteed to be before the bigger script loads; in some cases if the script loads before the API key is set, the library won't function correctly. See the meteor-mixpanel package for an example of an attempt at a workaround.
It's possible to simply download the bigger js file yourself from the CDN and stick it in your application; however, there are good reasons not to do this:
1) the hosted code might change, and unless you check it religiously, your code could get out of date and start to use an old version of the API
2) these libraries have usually been optimized to load the snippet quickly in a way that doesn't increase your page load time dramatically. If you include the bigger JS file in your application, then your server has to serve it, not a CDN, and it will serve it on initial page load.
It sounds like you're loading your JavaScript files by linking them with HTML in your template. There's a more Meteor way of doing this:
From the Meteor Docs:
Meteor gathers all JavaScript files in your tree with the exception of
the server and public subdirectories for the client. It minifies this
bundle and serves it to each new client. You're free to use a single
JavaScript file for your entire application, or create a nested tree
of separate files, or anything in between.
So with that in mind, rather than linking gmaps.js into <head>, just download the un-minified version of gmaps and drop it into your application's tree.
Also from the Meteor Docs:
It is best to write your application in such a way that it is
insensitive to the order in which files are loaded, for example by
using Meteor.startup, or by moving load order sensitive code into
Smart Packages, which can explicitly control both the load order of
their contents and their load order with respect to other packages.
However sometimes load order dependencies in your application are
unavoidable. The JavaScript and CSS files in an application are loaded
according to these rules:
Files in the lib directory at the root of your application are loaded
first.
[emphasis added]
And if the sequence is still an issue, drop the js file into client/lib and it will load before all the Javascript you've written.
I've used meteor-external-file-loader and a bit of asynchronous looping to load some scripts; it will load JavaScript (or stylesheets) in the order you specify.
Make sure to have Meteorite installed, then add the package above: mrt add external-file-loader
Here's the function I wrote to make use of this package:
var loadFiles = function(files, callback) {
    if (!callback) callback = function() {};
    (function step(files, timeout, callback) {
        if (!files.length) return callback();
        var loader;
        var file = files.shift();
        var extension = file.split(".").pop();
        if (extension === "js")
            loader = Meteor.Loader.loadJs(file, function() {}, timeout);
        else if (extension === "css") {
            Meteor.Loader.loadCss(file);
            loader = $.Deferred().resolve();
        }
        else {
            return step(files, timeout, callback);
        }
        loader.done(function() {
            console.log("Loaded: " + file);
            step(files, timeout, callback);
        }).fail(function() {
            console.error("Failed to load: " + file);
            step(files, timeout, callback);
        });
    })(files, 5000, callback);
}
Then to use this, add it to one of your templates' created methods, like so:
Template.yourpage.created = function() {
    var files = [
        "//ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js",
        "javascripts/bootstrap.min.js"
    ];
    loadFiles(files, function() {
        console.log("Scripts loaded!");
    });
}
Quick edit: I found it's a good idea to place the functionality from the .created method in the /lib folder, inside Meteor.startup():
Meteor.startup(function() {
    if (Meteor.isClient) {
        // Load files here.
    }
});
Caveat: lots of JavaScript files = a really long load time. I'm not sure how that is affected by the normal Meteor JavaScript files and their load order. I would just make sure that there are no conflicts before users take action on the website (or that, if there are, those files are loaded first).
My web application uses jQuery and some jQuery plugins (e.g. validation, autocomplete). I was wondering if I should stick them into one .js file so that it could be cached more easily, or break them out into separate files and only include the ones I need for a given page.
I should also mention that my concern is not only the time it takes to download the .js files but also how much the page slows down based on the contents of the .js file loaded. For example, adding the autocomplete plugin tends to slow down the response time by 100ms or so from my basic testing even when cached. My guess is that it has to scan through the elements in the DOM which causes this delay.
I think it depends how often they change. Let's take this example:
JQuery: change once a year
3rd party plugins: change every 6 months
your custom code: change every week
If your custom code represents only 10% of the total code, you don't want users to download the other 90% every week. You would split into at least 2 JS files: jQuery + the plugins, and your custom code. Now, if your custom code represents 90% of the full size, it makes more sense to put everything in one file.
When choosing how to combine JS files (and same for CSS), I balance:
relative size of the file
number of updates expected
Common but relevant answer:
It depends on the project.
If you have a fairly limited website where most of the functionality is re-used across multiple sections of the site, it makes sense to put all your script into one file.
In several large web projects I've worked on, however, it has made more sense to put the common site-wide functionality into a single file and put the more section-specific functionality into their own files. (We're talking large script files here, for the behavior of several distinct web apps, all served under the same domain.)
The benefit to splitting up the script into separate files, is that you don't have to serve users unnecessary content and bandwidth that they aren't using. (For example, if they never visit "App A" on the website, they will never need the 100K of script for the "App A" section. But they would need the common site-wide functionality.)
The benefit to keeping the script under one file is simplicity. Fewer hits on the server. Fewer downloads for the user.
As usual, though, YMMV. There's no hard-and-fast rule. Do what makes most sense for your users based on their usage, and based on your project's structure.
If people are going to visit more than one page in your site, it's probably best to put them all in one file so they can be cached. They'll take one hit up front, but that'll be it for the whole time they spend on your site.
At the end of the day, it's up to you.
However, the less information each web page contains, the quicker it will be downloaded by the end viewer.
If you only include the JS files required for each page, your website is likely to be more efficient and streamlined.
If the files are needed on every page, put them in a single file. This will reduce the number of HTTP requests and will improve the response time (over lots of visits).
See the Yahoo best practices for other tips.
I would pretty much concur with what bigmattyh said: it does depend.
As a general rule, I try to aggregate the script files as much as possible, but if you have some scripts that are only used on a few areas of the site, especially ones that perform large DOM traversals on load, it would make sense to leave those in separate file(s).
e.g. if you only use validation on your contact page, why load it on your home page?
As an aside, you can sometimes sneak these files into interstitial pages, where not much else is going on, so when a user lands on an otherwise quite heavy page that needs it, it should already be cached - use with caution - but can be a handy trick when you have someone benchmarking you.
So, as few script files as possible, within reason.
If you are sending a 100K monolith, but only using 20K of it for 80% of the pages, consider splitting it up.
It depends pretty heavily on the way that users interact with your site.
Some questions for you to consider:
How important is it that your first page load be very fast?
Do users typically spend most of their time in distinct sections of the site with subsets of functionality?
Do you need all of the scripts ready the moment that the page is ready, or can you load some in after the page is loaded by inserting <script> elements into the page?
Having a good idea of how users use your site, and what you want to optimize for is a good idea if you're really looking to push for performance.
However, my default method is to just concatenate and minify all of my javascript into one file. jQuery and jQuery.ui are small and have very low overhead. If the plugins you're using are having a 100ms effect on page load time, then something might be wrong.
A few things to check:
Is gzipping enabled on your HTTP server?
Are you generating static files with unique names as part of your deployment?
Are you serving static files with never ending cache expirations?
Are you including your CSS at the top of your page, and your scripts at the bottom?
Is there a better (smaller, faster) jQuery plugin that does the same thing?
I've basically gotten to the point where I reduce an entire web application to 3 files.
vendor.js
app.js
app.css
Vendor is neat because it has all the styles in it too. That is, I convert all my vendor CSS into minified CSS, then convert that to JavaScript and include it in the vendor.js file (after it has been Sass-transformed, too).
Because my vendor stuff does not update often, once in production changes are pretty rare. When it does update, I just rename it to something like vendor_1.0.0.js.
Also there are minified versions of those files. In dev I load the unminified versions and in production I load the minified versions.
I use gulp to handle doing all of this. The main plugins that make this possible are....
gulp-include
gulp-css2js
gulp-concat
gulp-csso
gulp-html-to-js
gulp-mode
gulp-rename
gulp-uglify
node-sass-tilde-importer
Now this also includes my images because I use sass and I have a sass function that will compile images into data-urls in my css sheet.
function sassFunctions(options) {
    options = options || {};
    options.base = options.base || process.cwd();
    var fs = require('fs');
    var path = require('path');
    var types = require('node-sass').types;
    var funcs = {};
    // inline-image($file): read the image and return it as a CSS data URL.
    funcs['inline-image($file)'] = function (file, done) {
        var file = path.resolve(options.base, file.getValue());
        var ext = file.split('.').pop();
        fs.readFile(file, function (err, data) {
            if (err) return done(err);
            data = new Buffer(data);
            data = data.toString('base64');
            data = 'url(data:image/' + ext + ';base64,' + data + ')';
            data = types.String(data);
            done(data);
        });
    };
    return funcs;
}
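A minimal sketch of wiring that into a node-sass build (file paths are hypothetical; the functions option is part of the node-sass render API):

var sass = require('node-sass');
var fs = require('fs');

sass.render({
    file: 'styles/app.scss',                             // hypothetical entry point
    functions: sassFunctions({ base: 'assets/images' })
}, function (err, result) {
    if (err) throw err;
    fs.writeFileSync('dst/app.css', result.css);         // result.css is a Buffer
});

In the SCSS you can then write something like background-image: inline-image('logo.png'); and the image ends up inlined as a data URL in app.css.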
So my app.css will have all of my application's images in the CSS, and I can add the images to any chunk of styles I want. Typically I create unique classes for the images, and I'll just tag elements with that class when I want them to have that image. I avoid using <img> tags completely.
Additionally, using the html-to-js plugin I compile all of my HTML into the JS file as a template object hashed by the path to the HTML files, i.e. 'html\templates\header.html', and then, using something like Knockout, I can data-bind that HTML to an element, or multiple elements.
The end result is I can end up with an entire web application that spins up off one "index.html" that doesn't have anything in it but this:
<html>
<head>
    <script src="dst/vendor.js"></script>
    <link rel="stylesheet" href="dst/app.css">
    <script src="dst/app.js"></script>
</head>
<body id="body">
    <xyz-app params="//xyz.com/api/v1"></xyz-app>
    <script>
        ko.applyBindings(document.getElementById("body"));
    </script>
</body>
</html>
This will kick off my component "xyz-app", which is the entire application, and it doesn't have any server-side events. It's not running on PHP, .NET Core MVC, MVC in general or any of that stuff. It's just static HTML managed with a build system like Gulp, and everything it needs, data-wise, comes from REST APIs.
Authentication -> Rest Api
Products -> Rest Api
Search -> Google Compute Engine (python apis built to index content coming back from rest apis).
So I never have any html coming back from a server (just static files, which are crazy fast). And there are only 3 files to cache other than index.html itself. Webservers support default documents (index.html) so you'll just see "blah.com" in the url and any query strings or hash fragments used to maintain state (routing etc for bookmarking urls).
Crazy quick, all depending on the JS engine running it.
Search optimization is trickier. It's just a different way of thinking about things, i.e. you have Google crawl your APIs, not your physical website, and you tell Google how to get to your website in each result.
So say you have a product page for ABC Thing with a product ID of 129. Google will crawl your products API to walk through all of your products and index them. In there, your API returns a URL in each result that tells Google how to get to that product on the website, i.e. "http://blah#products/129".
So when users search for "ABC thing" they see the listing and clicking on it takes them to "http://blah#products/129".
I think search engines need to start getting smart like this, it's the future imo.
I love building websites like this because it gets rid of all the back-end complexity. You don't need Razor, or PHP, or Java, or ASPX web forms, or whatever; you get rid of those entire stacks. All you need is a way to write REST APIs (WebApi2, Java Spring, etc.).
This separates web design into UI Engineering, Backend Engineering, and Design and creates a clean separation between them. You can have a UX team building the entire application and an Architecture team doing all the rest api work, no need for full stack devs this way.
Security isn't a concern either, because you can pass credentials on AJAX requests, and if your stuff is all on the same domain you can just set your authentication cookie on the root domain and presto: automatic, seamless SSO with all your REST APIs.
Not to mention how much simpler server farm setup is. Load balance needs are a lot less. Traffic capabilities a lot higher. It's way easier to cluster rest api servers on a load balancer than entire websites.
Just setup 1 nginx reverse proxy server to serve up your index .html and also direct api requests to one of 4 rest api servers.
Api Server 1
Api Server 2
Api Server 3
Api Server 4
And your sql boxes (replicated) just get load balanced from the 4 rest api servers (all using SSD's if possible)
Sql Box 1
Sql Box 2
All of your servers can be on internal network with no public ips and just make the reverse proxy server public with all requests coming in to it.
You can load balance reverse proxy servers on round robin DNS.
This means you only need one SSL cert, too, since it's one public domain.
If you're using Google Compute Engine for search and seo, that's out in the cloud so nothing to worry about there, just $.
If you like the code in separate files for development you can always write a quick script to concatenate them into a single file before minification.
One big file is better for reducing HTTP requests as other posters have indicated.
I also think you should go the one-file route, as the others have suggested. However, to your point on plugins eating up cycles by merely being included in your large js file:
Before you execute an expensive operation, use some checks to make sure you're even on a page that needs the operations. Perhaps you can detect the presence (or absence) of a dom node before you run the autocomplete plugin, and only initialize the plugin when necessary. There's no need to waste the overhead of dom traversal on pages or sections that will never need certain functionality.
A simple conditional before an expensive code chunk will give you the benefits of both the approaches you are deciding on.
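For example, a sketch of gating the autocomplete plugin on the presence of its target element (the selector and options are placeholders):

// Only pay for autocomplete setup on pages that actually contain the field.
$(function () {
    var $search = $('#search-box');                       // hypothetical element id
    if ($search.length) {
        $search.autocomplete({ source: '/api/suggest' }); // jQuery UI autocomplete
    }
});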
I tried breaking my JS into multiple files and ran into a problem. I had a login form, the code for which (AJAX submission, etc.) I put in its own file. When the login was successful, the AJAX callback then called functions to display other page elements. Since those elements were not part of the login process, I put their JS code in a separate file. The problem is that JS in one file can't call functions in a second file unless the second file is loaded first (see Stack Overflow Q. 25962958), and so, in my case, the called functions couldn't display the other page elements. There are ways around this loading-sequence problem (see Stack Overflow Q. 8996852), but I found it simpler to put all the code in one larger file and to clearly separate and comment the sections of code that fall into the same functional group, e.g. keep the login code separate and clearly commented as the login code.