Which approach increases site load time when loading data in a static website - javascript

I have a table containing sponsor data in a section of my static website. Currently I keep the data in a JSON file and create the table with JavaScript while the page is loading; the js file is included at the end of the HTML page. So the pseudo code of the page is something like
<body>
...
<section id="sponsors"></section>
...
<script src="sponsors.js"></script>
</body>
The sponsors.js loads the data from sponsors.json using a $.get() and then uses the data to create a table.
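For reference, a minimal sketch of what sponsors.js might look like (the field names name and tier are assumptions about the JSON shape):
// sponsors.js - fetch the JSON and build the table
$.getJSON('sponsors.json', function (sponsors) {
    var rows = sponsors.map(function (s) {
        return '<tr><td>' + s.name + '</td><td>' + s.tier + '</td></tr>';
    });
    $('#sponsors').html('<table>' + rows.join('') + '</table>');
});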
My question is: which of the following approaches increases the site load time?
1. Using the above approach, keeping the data in a separate JSON file.
2. Keeping the data in sponsors.js instead of in a separate JSON file.
3. Hard coding the table with the individual data.
4. Any other approach not listed above?

3 is the fastest approach because the data does not need to be rendered; however, hard coding is not very practical.
2 is slightly faster than 1, as a separate request does not need to be made.
1 is the slowest; however, if you cached the JSON in a cookie it would not have to be downloaded every time. By setting an expiry on the cookie, the data would be downloaded again once it lapses.
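A rough sketch of that caching idea, using localStorage with an expiry timestamp rather than a cookie (the one-day TTL and key name are arbitrary assumptions):
var CACHE_KEY = 'sponsors';
var TTL = 24 * 60 * 60 * 1000; // one day, in milliseconds

function loadSponsors(render) {
    var cached = JSON.parse(localStorage.getItem(CACHE_KEY) || 'null');
    if (cached && Date.now() - cached.savedAt < TTL) {
        render(cached.data); // still fresh, skip the request entirely
        return;
    }
    $.getJSON('sponsors.json', function (data) {
        localStorage.setItem(CACHE_KEY, JSON.stringify({ savedAt: Date.now(), data: data }));
        render(data);
    });
}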

Well, the best approach in terms of performance is always the one that involves as few files as possible.
Inlining your JSON would certainly be best for performance, but then you're making a sacrifice in code readability and modularity. You're going to have to make a judgement for yourself on what the best solution is, by balancing ease of programming with performance.
Personally, I'd stick with what you've got, because changing the JSON later won't be complicated at all, and the performance hit you'll take isn't likely to be noticeable, much less game breaking.
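For option 2, inlining would look roughly like this; buildSponsorsTable stands in for whatever table-building code sponsors.js already contains, and the entries are placeholders:
// sponsors.js with the data inlined - no extra request needed
var sponsors = [
    { "name": "Example Sponsor A", "tier": "Gold" },
    { "name": "Example Sponsor B", "tier": "Silver" }
];
buildSponsorsTable(sponsors);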

Keeping the data in an external JSON file would increase the time to load the table, but would decrease the time needed to load the page itself.
Hard coding the table, on the other hand, will show the table instantly but will also increase the time to load the page.

Related

Strategy for making React image gallery SEO-friendly

I wrote a React image gallery or slideshow. I need to make the alt text indexable by search engines, but because my server is in PHP, React.renderToString is of limited use.
The server is in PHP + MySQL. The PHP uses Smarty, a decent PHP template engine, to render the HTML. The rest of the PHP framework is my own. The Smarty template has this single ungainly line:
<script>
var GalleryData = {$gallery};
</script>
which is populated by the PHP controller function as follows:
return array(
    'gallery' => json_encode($gallery),
);
($gallery being the result table of a MySQL query).
And my .js:
React.render(<Gallery gallery={GalleryData} />, $('.gallery').get(0));
Not the most elegant setup, but given that my server is in PHP there doesn't seem to be much of a better way to do it (?)
I did a super quick hack to fix this at first shot - I copied the rendered HTML from Firebug, and manually inserted it into a new table in the DB. I then simply render this blob of code from PHP and we're good to go on the browser.
There was one complication which is that because React components are only inserted into the DOM as they're first rendered (as I understand it), and because the gallery only shows one image slide at a time, I had to manually click through all slides once before saving the HTML code out.
Now, however, the alt text is editable via the CMS, so I need to automate this process or come up with a better solution.
Rewriting the server in Node.js is out of the question.
My first guess is that I need to install Node, and write a script that creates the same React component. Because the input data (including the alt text) has to come from MySQL, I have a few choices:
connect to the MySQL DB from Node, and replicate the query
create a response URL on the PHP side that returns only the JSON (putting the SQL query into a common function)
fetch the entire page in Node, but extracting GalleryData from it would be a mess
I then have to ensure that all components are rendered into the DOM, so I can script that by manually calling the nextSlide() method as many times as there are slides (less one).
Finally I'll save the rendered DOM into the DB again (so the Node script will require a MySQL connection after all - maybe the 1st option is the best).
This whole process seems very complicated for such a basic requirement. Am I missing something?
I'm completely new to Node and the whole idea of building a DOM outside of the browser is basically new to me. I don't mind introducing Node into my architecture but it will only be to support React being used on the front-end.
Note that the website has about 15,000 pageviews a month, so massive scalability isn't a consideration - I don't use any page caching as it simply isn't needed for this volume of traffic.
I'm likely to have a few React components that need to be rendered statically like this, but maintaining a small technical overhead (e.g. maintaining a set of parallel SQL queries in Node) won't be a big problem.
Can anyone guide me here?
I think you should try rendering React components on server-side using PHP. Here is a PHP lib to do that.
But, yes, you'll basically need to use V8js from your PHP code. However, it's kind of experimental and you may need to go the other way around. (And this "other way around" may mean using Node/Express to render your component. Here are some thoughts on how to do it.)
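If you do end up going the Node route, the prerender step could be a small script along these lines. This is only a sketch: the endpoint URL and component path are assumptions, it uses the 0.13-era React.renderToString API the question already mentions, and it only captures whatever markup the Gallery component renders initially, so the component would still need a mode that emits all slides (and their alt text) at once:
// prerender.js - fetch the gallery JSON from a PHP endpoint and render the component to HTML
var React = require('react');
var request = require('request');
var Gallery = require('./components/Gallery'); // the same component the browser uses

request('http://example.com/gallery.json', function (err, res, body) {
    if (err) throw err;
    var gallery = JSON.parse(body);
    var html = React.renderToString(React.createElement(Gallery, { gallery: gallery }));
    console.log(html); // or write it back to MySQL so PHP/Smarty can serve it
});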

MySQL takes a long time to load

I have a website that mirrors hacked websites.
From the day it started everything was fine, until it grew bigger.
I have 2,197 records in a MySQL DB.
Every time I load this page, http://red-zone.org/archive/, all 2,197 records have to be shown.
The cause is that I'm using JavaScript for pagination, and the JavaScript only runs after all the records have been loaded.
I don't want to use PHP for pagination.
Is there any other solution?
I think the best way to do paging over big data is to use server-side processing with AJAX/JavaScript.
If you want a good example, have a look at DataTables:
https://datatables.net/examples/data_sources/server_side.html
You could write your own JavaScript, but since you're already using DataTables, try its server-side processing; it will resolve your issue.
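On the client, enabling server-side processing is roughly this (the endpoint name is a placeholder; the request/response format it has to implement is described on the page linked above):
$(document).ready(function () {
    $('#archive').DataTable({
        processing: true,          // show a "processing" indicator while fetching
        serverSide: true,          // paging, sorting and filtering happen on the server
        ajax: '/archive-data.php'  // hypothetical endpoint returning the DataTables JSON format
    });
});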

Generating HTML in JavaScript vs loading HTML file

Currently I am creating a website which is completely JS driven. I don't use any HTML pages at all (except index page). Every query returns JSON and then I generate HTML inside JavaScript and insert into the DOM. Are there any disadvantages of doing this instead of creating HTML file with layout structure, then loading this file into the DOM and changing elements with new data from JSON?
EDIT:
All of my pages are loaded with AJAX calls. But I have a structure like this:
<nav></nav>
<div id="content"></div>
<footer></footer>
Basically, I never change the nav or footer elements; they are only loaded once, when index.html is loaded. Then on every navigation click I send an AJAX call to the server, it returns data as JSON, and I generate HTML with jQuery and insert it like this: $('#content').html(content);
Creating separate HTML files, and then for example using $('#someID').html(newContent) to change every element with the JSON data, would take even more code, and I would need one more request to the server to load that file, so I thought I could just generate it in the browser.
EDIT2:
SEO is not very important, because my website requires logging in so I will create all meta tags in index.html file.
In general, it's a nice way of doing things. I assume that you're updating the page with AJAX each time (although you didn't say that).
There are some things to look out for. If you always have the same URL, then your users can't come back to the same page. And they can't send links to their friends. To deal with this, you can use history.pushState() to update the URL without reloading the page.
Also, if you're sending more than one request per page and you don't have an HTML structure waiting for them, you may get them back in a different order each time. It's not a problem, just something to be aware of.
Returning HTML from the AJAX is a bad idea. It means that when you want to change the layout of the page, you need to edit all of your files. If you're returning JSON, it's much easier to make changes in one place.
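As a rough sketch of the pushState() idea (buildHtml, the a.ajax-nav selector and the URL scheme are all assumptions standing in for the asker's existing code):
// Load a section, update #content, and reflect it in the address bar
function loadSection(url) {
    $.getJSON(url + '.json', function (data) {
        $('#content').html(buildHtml(data)); // buildHtml = the existing jQuery rendering code
    });
}

$(document).on('click', 'a.ajax-nav', function (e) {
    e.preventDefault();
    var url = this.getAttribute('href');
    history.pushState({ url: url }, '', url); // new history entry, no page reload
    loadSection(url);
});

// Back/forward buttons: re-render without pushing another entry
window.addEventListener('popstate', function (e) {
    if (e.state && e.state.url) loadSection(e.state.url);
});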
One thing that definitely matters:
How long will it take you to develop a new system that sends the data as JSON, plus the JS required to inject it into the page as HTML?
How long will it take to just return HTML? And how long if you can re-use some of your existing server-side code?
Also check how much server-side interaction your pages actually need.
Some advantages of returning pure HTML:
1) It's simple markup, and often just as compact or actually more compact than JSON.
2) It's less error prone, because all you're getting is markup, not code.
3) It will be faster to program in most cases, because you won't have to write separate code for the client side.
4) The HTML is the content, the JavaScript is the behavior. You're mixing both for absolutely no compelling reason.
In JavaScript, or any other scripting language, if you hit a problem partway through, the rest of the code will not run,
and it is also easier to debug pure HTML pages.
My opinion: use script code wherever necessary and do the rest in HTML;
it will save the round trip of going to the server, fetching the data and then displaying it again.
Keep point No. 4 in mind while coding.
I think that you can consider 3 methods:
Sending only JSON to the client and rendering it against a template (e.g. handlebars.js).
Creating the pages on the server side; rendering is usually faster and you can also cache the page.
Or a mixture of the two: generate partial views on the server and send them to the client. It's like having a handlebars template on the client and applying the JSON data to it, except that the same template lives on the server, is rendered there, and is sent to the client in its final form; on the client you just swap in the partial views.
Some things to think about also depend on the use case of the application: if you are targeting SEO you should consider ColBeseder's advice, and if you are targeting mobile users you would probably be better off with the JSON-only response, as it is more lightweight.
EDIT:
From what you said, you are creating a single-page application. If that is correct, then you can probably go with either JSON or partial views like AngularJS has. But if your server-side logic only returns JSON responses, then you'd probably be better off using a client-side template engine like handlebars.js, underscore, or jQuery templates; you can define reusable portions of your HTML and apply the JSON data to them.
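A rough sketch of the first method with Handlebars (the template id, endpoint and field names are made up). In the page:
<script id="content-template" type="text/x-handlebars-template">
    <h1>{{title}}</h1>
    <ul>
        {{#each items}}<li>{{this}}</li>{{/each}}
    </ul>
</script>
And in the JavaScript:
var template = Handlebars.compile($('#content-template').html()); // compile once, reuse per response
$.getJSON('/page-data', function (json) {
    $('#content').html(template(json)); // apply the JSON data and inject the result
});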
If you cared about SEO you'd want the HTML there at page load, which is closer to your second strategy than your first.
Update May 2014: Google claims to be getting better at executing Javascript: http://googlewebmastercentral.blogspot.com/2014/05/understanding-web-pages-better.html Still unclear what works and what does not.
Further updates probably belong here: Do Google or other search engines execute JavaScript?

Animated infinite scrolling via AJAX: returning JSON or HTML?

I'm trying to create an infinite scrolling page - somewhat similar to tumblr archive pages like this. I understand the concept that I have to load the content with a server call, but I don't know how to achieve this "animated loading" design like in Tumblr.
I don't want to know the exact code, only the overall concept of the solution. So what would be the best practice to do things like this?
What should I get from the server: a bunch of JSON data or a full HTML page?
I have tried to reverse-engineer the Tumblr page above, and I saw in my network traffic that on every scroll event there is a POST request which returns a full HTML page with its own JavaScript and CSS content!
I guess that the animation logic is inside of this JavaScript content.
But I have 2 questions about this method:
When I get the full HTML page from the server (which contains the new page as well), how can I throw away the currently displayed HTML document and set the new one?
Isn't it bad from a performance point of view to return a full HTML document every time? The full document would contain the results of the previous "pages" of the archive as well. Or am I thinking about this wrong?
Wouldn't it be better to return a JSON-only result from the server? (It has to be parsed on the client, but it would be friendlier to network traffic, I guess.)
If it would be better to return JSON, why does Tumblr work the other way?
It surely is beneficial to not send lots of data that will not be used.
However, if your server has a lot of resources, you can do some preprocessing on the server instead of the client. This means that, instead of JSON, you can send an HTML snippet: the block that will be added. Moreover, if your HTML structure is very complex, you don't want to implement it twice, once in HTML and once in JavaScript.
The way Tumblr works might be because they don't want to add much more to the server code base, and instead offload the work to the client. Since only one page is sent at a time, the overhead is constant w.r.t. the number of pages. The client can just take the full HTML, find the corresponding element with DOM manipulation and place it somewhere.
In fact, that is what the AutoPager plugin does: It learns the "next" link and the page body from the user, then fetches additional full pages from the unsuspecting server and inserts their content into the page (and reads the next page url).
In short:
The benefit of JSON is low bandwidth usage.
The benefit of HTML snippets is low demands on client processing power, and little to no code duplication.
The benefit of full HTML is that the server needs not care if it's serving the first page or any other.
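A bare-bones sketch of the JSON variant of this (the endpoint, field names and the 200px threshold are assumptions; the "animated loading" part is just a placeholder element shown while the request is in flight):
var nextPage = 1;
var loading = false;

$(window).on('scroll', function () {
    var nearBottom = $(window).scrollTop() + $(window).height() > $(document).height() - 200;
    if (!nearBottom || loading) return;

    loading = true;
    $('#loader').show(); // the animated loading placeholder
    $.getJSON('/archive', { page: nextPage }, function (posts) {
        var html = posts.map(function (p) {
            return '<article><h2>' + p.title + '</h2>' + p.body + '</article>';
        }).join('');
        $('#posts').append(html);
        nextPage++;
        loading = false;
        $('#loader').hide();
    });
});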

Put javascript in one .js file or break it out into multiple .js files?

My web application uses jQuery and some jQuery plugins (e.g. validation, autocomplete). I was wondering if I should stick them into one .js file so that it could be cached more easily, or break them out into separate files and only include the ones I need for a given page.
I should also mention that my concern is not only the time it takes to download the .js files but also how much the page slows down based on the contents of the .js file loaded. For example, adding the autocomplete plugin tends to slow down the response time by 100ms or so from my basic testing even when cached. My guess is that it has to scan through the elements in the DOM which causes this delay.
I think it depends how often they change. Let's take this example:
JQuery: change once a year
3rd party plugins: change every 6 months
your custom code: change every week
If your custom code represents only 10% of the total code, you don't want the users to download the other 90% every week. You would split it into at least 2 js files: the jQuery + plugins, and your custom code. Now, if your custom code represents 90% of the full size, it makes more sense to put everything in one file.
When choosing how to combine JS files (and same for CSS), I balance:
relative size of the file
number of updates expected
Common but relevant answer:
It depends on the project.
If you have a fairly limited website where most of the functionality is re-used across multiple sections of the site, it makes sense to put all your script into one file.
In several large web projects I've worked on, however, it has made more sense to put the common site-wide functionality into a single file and put the more section-specific functionality into their own files. (We're talking large script files here, for the behavior of several distinct web apps, all served under the same domain.)
The benefit to splitting up the script into separate files, is that you don't have to serve users unnecessary content and bandwidth that they aren't using. (For example, if they never visit "App A" on the website, they will never need the 100K of script for the "App A" section. But they would need the common site-wide functionality.)
The benefit to keeping the script under one file is simplicity. Fewer hits on the server. Fewer downloads for the user.
As usual, though, YMMV. There's no hard-and-fast rule. Do what makes most sense for your users based on their usage, and based on your project's structure.
If people are going to visit more than one page in your site, it's probably best to put them all in one file so they can be cached. They'll take one hit up front, but that'll be it for the whole time they spend on your site.
At the end of the day it's up to you.
However, the less information that each web page contains, the quicker it will be downloaded by the end-viewer.
If you only include the js files required for each page, it seems more likely that your website will be more efficient and streamlined.
If the files are needed in every page, put them in a single file. This will reduce the number of HTTP request and will improve the response time (for lots of visits).
See Yahoo best practice for other tips
I would pretty much concur with what bigmattyh said: it does depend.
As a general rule, I try to aggregate the script files as much as possible, but if you have some scripts that are only used on a few areas of the site, especially ones that perform large DOM traversals on load, it would make sense to leave those in separate file(s).
e.g. if you only use validation on your contact page, why load it on your home page?
As an aside, you can sometimes sneak these files into interstitial pages, where not much else is going on, so when a user lands on an otherwise quite heavy page that needs it, it should already be cached - use with caution - but can be a handy trick when you have someone benchmarking you.
So, as few script files as possible, within reason.
If you are sending a 100K monolith, but only using 20K of it for 80% of the pages, consider splitting it up.
It depends pretty heavily on the way that users interact with your site.
Some questions for you to consider:
How important is it that your first page load be very fast?
Do users typically spend most of their time in distinct sections of the site with subsets of functionality?
Do you need all of the scripts ready the moment that the page is ready, or can you load some in after the page is loaded by inserting <script> elements into the page?
Having a good idea of how users use your site, and what you want to optimize for is a good idea if you're really looking to push for performance.
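The third question above refers to injecting <script> elements after load; a minimal sketch of that pattern (the file name, marker selector and initGallery function are placeholders):
// Only fetch a heavy, page-specific script when the page actually needs it
function loadScript(src, onLoad) {
    var s = document.createElement('script');
    s.src = src;
    s.onload = onLoad;
    document.body.appendChild(s);
}

if (document.querySelector('.photo-gallery')) {
    loadScript('/js/gallery.js', function () {
        initGallery(); // defined in gallery.js
    });
}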
However, my default method is to just concatenate and minify all of my javascript into one file. jQuery and jQuery.ui are small and have very low overhead. If the plugins you're using are having a 100ms effect on page load time, then something might be wrong.
A few things to check:
Is gzipping enabled on your HTTP server?
Are you generating static files with unique names as part of your deployment?
Are you serving static files with never ending cache expirations?
Are you including your CSS at the top of your page, and your scripts at the bottom?
Is there a better (smaller, faster) jQuery plugin that does the same thing?
I've basically gotten to the point where I reduce an entire web application to 3 files.
vendor.js
app.js
app.css
Vendor is neat because it has all the vendor styles in it too. I.e. I convert all my vendor CSS into minified CSS, then convert that to JavaScript and include it in the vendor.js file. That's after it's been run through Sass too.
Because my vendor stuff does not update often, once in production changes are pretty rare. When it does update, I just rename it to something like vendor_1.0.0.js.
Also there are minified versions of those files. In dev I load the unminified versions and in production I load the minified versions.
I use gulp to handle doing all of this. The main plugins that make this possible are....
gulp-include
gulp-css2js
gulp-concat
gulp-csso
gulp-html-to-js
gulp-mode
gulp-rename
gulp-uglify
node-sass-tilde-importer
Now this also includes my images because I use sass and I have a sass function that will compile images into data-urls in my css sheet.
function sassFunctions(options) {
    options = options || {};
    options.base = options.base || process.cwd();
    var fs = require('fs');
    var path = require('path');
    var types = require('node-sass').types;
    var funcs = {};

    // inline-image($file): read the file and return it as a CSS data URL
    funcs['inline-image($file)'] = function (file, done) {
        var resolved = path.resolve(options.base, file.getValue());
        var ext = resolved.split('.').pop();
        fs.readFile(resolved, function (err, data) {
            if (err) return done(err);
            data = data.toString('base64'); // fs.readFile already returns a Buffer
            done(types.String('url(data:image/' + ext + ';base64,' + data + ')'));
        });
    };

    return funcs;
}
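A sketch of how that helper might be wired into the build, assuming gulp-sass passes its options (importer, functions) straight through to node-sass, which is also how node-sass-tilde-importer from the list above is normally hooked up; the file paths are placeholders:
var gulp = require('gulp');
var sass = require('gulp-sass');
var csso = require('gulp-csso');

gulp.task('app-css', function () {
    return gulp.src('src/app.scss')
        .pipe(sass({ functions: sassFunctions() })) // makes inline-image($file) available in the Sass
        .pipe(csso())                               // minify
        .pipe(gulp.dest('dst'));
});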
So my app.css will have all of my application's images in the CSS, and I can attach an image to any chunk of styles I want. Typically I create a unique class for each image, and I just give that class to anything I want to carry that image. I avoid using img tags completely.
Additionally, using the html-to-js plugin, I compile all of my HTML into the JS file as a template object keyed by the path to the HTML file, i.e. 'html\templates\header.html', and then using something like Knockout I can data-bind that HTML to an element, or multiple elements.
The end result is I can end up with an entire web application that spins up off one "index.html" that doesn't have anything in it but this:
<html>
<head>
    <script src="dst/vendor.js"></script>
    <link rel="stylesheet" href="dst/app.css">
    <script src="dst/app.js"></script>
</head>
<body id="body">
    <xyz-app params="//xyz.com/api/v1"></xyz-app>
    <script>
        ko.applyBindings(null, document.getElementById("body"));
    </script>
</body>
</html>
This will kick off my component "xyz-app", which is the entire application, and it doesn't have any server-side events. It's not running on PHP, DotNet Core MVC, MVC in general or any of that stuff. It's just static HTML managed with a build system like Gulp, and everything it needs data-wise comes from REST APIs.
Authentication -> Rest Api
Products -> Rest Api
Search -> Google Compute Engine (python apis built to index content coming back from rest apis).
So I never have any HTML coming back from a server (just static files, which are crazy fast), and there are only 3 files to cache other than index.html itself. Web servers support default documents (index.html), so you'll just see "blah.com" in the URL, plus any query strings or hash fragments used to maintain state (routing, etc., for bookmarkable URLs).
Crazy quick, all depending on the JS engine running it.
Search optimization is trickier. It's just a different way of thinking about things, i.e. you have Google crawl your APIs, not your physical website, and you tell Google how to get to your website from each result.
So say you have a product page for ABC Thing with a product ID of 129. Google will crawl your products API to walk through all of your products and index them. In each result your API returns a URL that tells Google how to get to that product on the website, i.e. "http://blah#products/129".
So when users search for "ABC thing" they see the listing and clicking on it takes them to "http://blah#products/129".
I think search engines need to start getting smart like this, it's the future imo.
I love building websites like this because it gets rid of all the back-end complexity. You don't need Razor, or PHP, or Java, or ASPX web forms, or whatever; you get rid of those entire stacks. All you need is a way to write REST APIs (WebApi2, Java Spring, etc.).
This separates the work into UI engineering, back-end engineering, and design, and creates a clean separation between them. You can have a UX team building the entire application and an architecture team doing all the REST API work; no need for full-stack devs this way.
Security isn't a concern either, because you can pass credentials on AJAX requests, and if your stuff is all on the same domain you can just set your authentication cookie on the root domain and presto: automatic, seamless SSO with all your REST APIs.
Not to mention how much simpler server farm setup is. Load-balancing needs are a lot lower and traffic capacity a lot higher. It's way easier to cluster REST API servers behind a load balancer than entire websites.
Just set up 1 nginx reverse proxy server to serve up your index.html and direct API requests to one of 4 REST API servers.
Api Server 1
Api Server 2
Api Server 3
Api Server 4
And your SQL boxes (replicated) just get load balanced by the 4 REST API servers (all using SSDs if possible).
Sql Box 1
Sql Box 2
All of your servers can be on an internal network with no public IPs; just make the reverse proxy server public, with all requests coming in through it.
You can load balance reverse proxy servers on round robin DNS.
This means you only need 1 SSL cert, since it's one public domain.
If you're using Google Compute Engine for search and seo, that's out in the cloud so nothing to worry about there, just $.
If you like the code in separate files for development you can always write a quick script to concatenate them into a single file before minification.
One big file is better for reducing HTTP requests as other posters have indicated.
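A throwaway build step for that can be very small (the file list and output path are placeholders):
// build.js - concatenate development files into one bundle before minifying
var fs = require('fs');

var files = ['js/jquery.validate.js', 'js/autocomplete.js', 'js/app.js'];
var bundle = files.map(function (f) { return fs.readFileSync(f, 'utf8'); }).join(';\n');

fs.writeFileSync('dist/all.js', bundle); // then run the bundle through your minifier of choice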
I also think you should go the one-file route, as the others have suggested. However, to your point on plugins eating up cycles by merely being included in your large js file:
Before you execute an expensive operation, use some checks to make sure you're even on a page that needs the operations. Perhaps you can detect the presence (or absence) of a dom node before you run the autocomplete plugin, and only initialize the plugin when necessary. There's no need to waste the overhead of dom traversal on pages or sections that will never need certain functionality.
A simple conditional before an expensive code chunk will give you the benefits of both the approaches you are deciding on.
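For example, something along these lines (the selector and source URL are assumptions):
// Only pay for the autocomplete setup on pages that actually contain the field
$(function () {
    var $search = $('#search-box');
    if ($search.length) {
        $search.autocomplete({ source: '/suggest' });
    }
});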
I tried breaking my JS into multiple files and ran into a problem. I had a login form, the code for which (AJAX submission, etc.) I put in its own file. When the login was successful, the AJAX callback then called functions to display other page elements. Since these elements were not part of the login process I put their JS code in a separate file. The problem is that JS in one file can't call functions in a second file unless the second file is loaded first (see Stack Overflow Q. 25962958), and so, in my case, the called functions couldn't display the other page elements. There are ways around this loading-order problem (see Stack Overflow Q. 8996852), but I found it simpler to put all the code in one larger file and clearly separate and comment sections of code that fall into the same functional group, e.g. keeping the login code separate and clearly commented as the login code.
