Should a web worker be used to handle style rendering in css-in-js?

I am writing a css-in-js library, and I recently came across web workers and learnt how they can help me run code in parallel.
Since the main thread is treated as the UI thread, should I push some of the CSS-generating work from my library to a web worker? Would this be considered an anti-pattern?

There is no generic rule. You move something to a web worker to keep the page operable during the calculation. With css-in-js, in most cases the page has to wait for the CSS before it can do anything, so the usefulness of workers looks limited there. The one case I do see: a web worker could prepare a skin update in the background without blocking the page. Generally speaking, you need a real project to check whether it is a good idea.
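To illustrate that one case, a minimal sketch (the file name and message shape are invented here): the worker builds the CSS text off the main thread, and since workers cannot touch the DOM, the main thread only injects the finished stylesheet.
// skin-worker.js: runs off the main thread and builds the CSS string
self.onmessage = function (e) {
  var theme = e.data;
  // stand-in for the expensive css-in-js generation step
  var css = '.button { background: ' + theme.primary + '; }';
  self.postMessage(css);
};

// main.js: the UI thread only injects the finished stylesheet
var worker = new Worker('skin-worker.js');
worker.onmessage = function (e) {
  var style = document.createElement('style');
  style.textContent = e.data;
  document.head.appendChild(style);
};
worker.postMessage({ primary: 'tomato' });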

Related

How can ReactJS manage the browser's RAM/CPU resources?

Today I had a job interview and they asked me an unusual question. The interviewer said that in a Single Page Application (SPA), front-end developers should manage the client's resources and clean up what other processes are using (RAM, CPU).
He said that when clients open more pages or send more requests, RAM and CPU usage grows, and over time the browser will slow down or even crash because usage keeps climbing and components open on top of each other.
He said there are many methods to handle this problem, and everybody has a different solution. I told him I think the Virtual DOM can solve this, and that ReactJS supports it by default.
He agreed, but I have doubts about whether my answer was correct. So, is that the right method or solution for managing an SPA client's computer resources (RAM, CPU)?
The browser (client) does not give you direct access to system resources like RAM or CPU, so you can't really control them. But traversing the DOM too much and making too many frequent API calls can lead to a less performant web app. React does improve rendering through its virtual DOM, which means the real DOM only gets updated when it absolutely needs to be. Caching datasets that rarely change can also reduce the number of API calls your web app has to make. In general, try not to render huge datasets on a single page in data-heavy web apps; always use pagination. Also, try not to fetch large amounts of data in one API call.
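As a hedged sketch of that caching idea (all names here are illustrative), an in-memory cache that lets repeated calls for slowly-changing data share a single network request:
// Cache the promise rather than the value, so concurrent callers
// share one in-flight request instead of each hitting the network.
const cache = new Map();

async function fetchCached(url) {
  if (cache.has(url)) return cache.get(url);
  const promise = fetch(url).then(function (res) { return res.json(); });
  cache.set(url, promise);
  return promise;
}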

Background processes in Node.js

What is a good approach to handling background processes in a Node.js application?
Scenario: after a user posts something to an app, I want to crunch the data, request additional data from external resources, etc. All of this is quite time-consuming, so I want it out of the req/res loop. Ideally I'd just have a queue of jobs that you can quickly dump a job onto, with a daemon or task runner that always takes the oldest job and processes it.
In RoR I would have done it with something like Delayed Job. What is the Node equivalent of this API?
If you want something lightweight, that runs in the same process as the server, I highly recommend Bull. It has a simple API that allows fine-grained control over your queues.
If you're familiar with Ruby's Resque, there is a Node implementation called Node-resque.
Bull and Node-resque are both backed by Redis, which is ubiquitous among Node.js worker queues. Either could do what RoR's DelayedJob does; it's a matter of the specific features you want and your API preferences.
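For a rough idea of what that looks like with Bull (the queue name, payload, and sendEmail helper are invented):
const Queue = require('bull');

// Jobs are stored in Redis, so they survive a process restart.
const emailQueue = new Queue('email', 'redis://127.0.0.1:6379');

// Worker side: take jobs off the queue and process them.
emailQueue.process(async (job) => {
  await sendEmail(job.data.to, job.data.body); // hypothetical app helper
});

// Web side: dump a job on the queue and respond to the client immediately.
emailQueue.add({ to: 'user@example.com', body: 'Welcome!' });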
Background jobs are not directly related to your web service work, so they should not be in the same process. As you scale up, the memory usage of the background jobs will impact the web service performance. But you can put them in the same code repository if you want, whatever makes more sense.
One good choice for messaging between the two processes would be Redis, if dropping a message every now and then is OK. If you want "no message left behind" you'll need a heavier-weight broker like Rabbit. Your web service process can publish and your background job process can subscribe.
It is not necessary for the two processes to be co-hosted, they can be on separate VMs, Docker containers, whatever you use. This allows you to scale out without much trouble.
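Sketched with the node redis client (v4-style API; the channel name and job shape are invented):
const { createClient } = require('redis');

// Web-service process: publish a job notification and move on.
async function publishJob(job) {
  const publisher = createClient();
  await publisher.connect();
  await publisher.publish('jobs', JSON.stringify(job));
}

// Background process: subscribe and do the heavy lifting.
async function startWorker() {
  const subscriber = createClient();
  await subscriber.connect();
  await subscriber.subscribe('jobs', (message) => {
    const job = JSON.parse(message);
    // crunch the data, call external services, etc.
  });
}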
If you're using MongoDB, I recommend Agenda. That way, separate Redis instances aren't running and features such as scheduling, queuing, and Web UI are all present. Agenda UI is optional and can be run separately of course.
I'd also recommend setting up a loosely coupled abstraction between your application logic and the queueing/scheduling system, so that the entire background-processing system can be swapped out if needed. In other words, keep as much application/processing logic as possible out of your Agenda job definitions in order to keep them lightweight.
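A rough Agenda sketch along those lines (the job name, connection string, and crunchPostData are invented); note the definition stays thin and delegates to application logic that knows nothing about Agenda:
const Agenda = require('agenda');

const agenda = new Agenda({ db: { address: 'mongodb://127.0.0.1/agenda-jobs' } });

// Thin job definition: delegate to app-level logic so the queueing
// system can be swapped out later without touching that logic.
agenda.define('crunch post data', async (job) => {
  await crunchPostData(job.attrs.data.postId); // hypothetical app function
});

(async () => {
  await agenda.start();
  await agenda.schedule('in 1 minute', 'crunch post data', { postId: 42 });
})();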
I'd like to suggest using Redis for scheduling jobs. It has plenty of different data structures; you can always pick one that better suits your use case.
You mentioned RoR and DJ, so I assume you're familiar with Sidekiq. You can use node-sidekiq for job scheduling if you want to, but it's suboptimal IMO, since its main purpose is to integrate Node.js with RoR.
For daemonising the worker I'd recommend using PM2. It's widely used and actively maintained. It solves a lot of problems (e.g. deployment, monitoring, clustering), so just make sure it isn't overkill for you.
I tried bee-queue & bull and chose bull in the end.
I first chose bee-queue because it is quite simple and its examples are easy to understand, while Bull's examples are a bit complicated. bee's wiki page Bee Queue's Origin also resonates with me. But the problems with bee are: (1) its issue-resolution time is quite slow, and its latest update was 10 months ago; (2) I can't find an easy way to pause/cancel a job.
Bull, on the other hand, updates its code frequently and responds to issues. Node.js job queue evaluation said Bull's weakness is "slow issue resolution time", but my experience is the opposite!
Anyway, their APIs are similar, so it is quite easy to switch from one to the other.
I suggest using a proper Node.js framework to build your app.
I think the most powerful and easiest to use is Sails.js.
It's an MVC framework, so if you are used to developing in RoR, you will find it very, very easy!
If you use it, a powerful (in JavaScript terms) job manager is already available:
// Run every Sunday at 01:01:00 (cron pattern with a leading seconds
// field) in the Europe/Dublin timezone. The arguments are: cron
// pattern, onTick callback, onComplete callback, start immediately,
// timezone.
new sails.cronJobs('0 01 01 * * 0', function () {
  sails.log.warn("START ListJob");
}, null, true, "Europe/Dublin");
If you need more info, don't hesitate to contact me!

Good Practice with External JavaScript files

I'm new to JavaScript.
How should I split my functions across external scripts? What is considered good practice? Should all my functions be crammed into one external .js file, or should I group related functions together?
I would guess more files mean more HTTP requests to fetch the scripts, and that could hurt performance. However, more files keep things organized: for example, onload.js initializes things on load, data.js retrieves data from the server, ui.js holds the UI handlers...
What do the pros advise on this?
Thanks!
As Pointy mentioned, you should try a tool. Try Grunt or Brunch; both are meant to help you with the build process, and you can configure them to combine all your files when you are ready for production (and minify them, etc.), while keeping the files separate during development.
When releasing a product, you generally want as few HTTP requests as possible to load a page (hence image sprites, for example).
So I'd suggest concatenating your .js files for release, but keeping them separate, in whatever way works well for you, during development.
Keep in mind that if you "use strict", concatenating scripts can be a source of errors.
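A tiny illustration of that pitfall (file names invented):
// a.js: written for strict mode
'use strict';

// b.js: older sloppy-mode code that relies on an implicit global
counter = 0; // fine on its own, quietly creates a global variable

// After naive concatenation (a.js then b.js), the 'use strict'
// directive at the top governs the whole combined file, so the
// undeclared assignment from b.js now throws
// "ReferenceError: counter is not defined" at runtime.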
It depends on the size and number of your scripts and on how many of them you use at any given time.
Many performance best practices claim (and there's good logic in this) that it's good to inline your JavaScript if it's small enough. This lowers the number of HTTP requests, but it also prevents the browser from caching the JavaScript, so you should be very careful. That's why there are even practices of inlining images (using base64 encoding) in some special cases (for example, look at Bing.com: all their JavaScript is inline).
If you have a lot of JavaScript files and you're using just a small part of them at any given time (in size as well as in count), you can load them asynchronously (for example using require.js). But this will require a lot of changes in your application design if you didn't plan for it from the beginning (and it will also increase your application's complexity).
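For instance, a RequireJS-style asynchronous load (the charting module is invented):
// Load the heavy charting module only when the report view opens,
// instead of shipping it with the initial page load.
require(['charting'], function (charting) {
  charting.render(document.getElementById('chart'));
});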
There are even practices of caching your CSS/JavaScript in localStorage. For further information you can read the Web Performance Daybook.
So let's do a short retrospective.
If you inline your JavaScript, the first load of the page will be faster, but the inline JavaScript won't be cached by the browser, so every subsequent load of your page will be slower than if you had used external files.
If you are using several external files, make sure that you're using all of them, or at least a big part of them, because otherwise you'll have redundant HTTP requests for files that are loaded unnecessarily. This leads to better organization of your code but probably a greater load time (though don't forget the browser cache, which will help you).
Putting everything in a single file will reduce your HTTP requests, but you'll have one big file that will block your page from rendering (if you're loading the JS file synchronously) until the file has loaded completely. In that case I'd recommend putting this big file at the end of the body.
For performance tracking you can use tools like YSlow.
When I think about good practice, I think of MVC patterns. One might argue whether this is the way to go in development, but many people use it to structure what they want to achieve. Usually it is not advisable to use MVC at all if the project is just too small, just like creating a full C++ Windows app when you only needed a simple C program with a for loop.
In any case, MVC or MV* in JavaScript will help you structure your code to the extent that all the actions are part of the controllers, while object properties are stored in the model. The views are just for presentation and are rendered for the user via special requests or rendering engines. When I started using MV*, I started with BackboneJS and the guide "Developing Backbone.js Applications" by Addy Osmani. Of course there is a multitude of other frameworks you can use to structure your code. They can be found on the TodoMVC website.
What you can also do is derive your own structure from their apps and then use that directory structure for your development (but without the MV* framework).
I do not share your concern that such a structure leads to more files and therefore more HTTP requests. That is true during development, BUT remember, the user should get a performance-enhanced (i.e. compiled) and minified version as a single script. So even if you develop in such an organized way, it makes sense to minify/uglify and compile your scripts with Google's Closure Compiler.

Do we really need multi-threaded JavaScript?

I have recently heard about the Web Workers spec, which defines an API for multi-threading JavaScript. But after working with client-side scripting for so long (and its event-driven paradigm), I don't really see the point of using multiple threads.
I can see how the JavaScript engine and the browser's rendering engine can benefit from multi-threading, but I don't see much benefit in handing this power to application programmers.
The Wikipedia article actually answers your question fairly well.
The power is given to us developers so that we can specifically offload tasks that would be disruptive to users to a web worker. The browser does not know which scripts are necessary for your custom interface to function properly, but you do.
If you've got a script that blocks the page rendering for 10 seconds but isn't necessary for the website to function, you could offload it to a web worker. Doing so allows your users to interact with the page instead of forcing them to wait 10 seconds for that script to execute. In a way, it's like AJAX in that things can be injected in after the interface loads so as to not delay users' interaction.
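A minimal sketch of that pattern, with the slow script simulated by a long loop (file name invented):
// worker.js: the blocking work runs off the UI thread
self.onmessage = function () {
  var sum = 0;
  for (var i = 0; i < 1e9; i++) sum += i; // stand-in for the 10-second script
  self.postMessage(sum);
};

// main.js: the page stays interactive while the worker grinds away
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  console.log('result:', e.data);
};
worker.postMessage('start');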

What is a "Javascript Bootloader"?

I have seen this mainly in the source of Facebook: Bootloader.setResourceMap({"bMxb7":{"name":.... What exactly is a bootloader in JavaScript? What are its use and purpose?
Bootloader is an important part of Facebook's front-end code, which allows Javascript libraries to be lazy-loaded as needed instead of on page load. A couple of Facebook developers go into further detail here if you'd like to know more.
You can use RequireJS, LABjs or others to achieve the same thing.
Generally speaking, a bootloader is a (relatively) small amount of code responsible for establishing the environment that all subsequent code requires to run; as such, it is also the first code to be executed. The term is usually restricted to OSes, but it makes sense for FB too.
In the case of Facebook, the bootloader will do things like loading additional JS files and other resources that the library needs in addition to the single public <script /> the developer included in the document.
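In spirit the mechanism is roughly this (all names invented): a tiny loader injects extra script tags on demand.
// Inject a <script> tag on demand instead of loading everything up front.
function loadScript(src, onLoad) {
  var s = document.createElement('script');
  s.src = src;
  s.onload = onLoad;
  document.head.appendChild(s);
}

// Load the share-dialog code only when the user first hovers the button.
document.getElementById('share').addEventListener('mouseover', function () {
  loadScript('/js/share-dialog.js', function () {
    // share-dialog.js registers its own handlers once loaded
  });
}, { once: true });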
Strictly speaking, there is no such thing.
A bootstrap loader (which is the full name of the term) is the part of an operating system that loads the disk operating system from disk; thus the computer "lifts itself by its bootstraps" by loading from disk before the disk-loading routines themselves are loaded.
There are no Javascript operating systems, so there is no bootstrap loader for Javascript. This is just some object that is named that way, presumably because it does something early in the page load process.
Here is a video from React.js Conf 2016 that touches on what Bootloader is. To summarize: Bootloader improves the performance of the app by not downloading content unless it is actually needed. The video has the example of the "share button":
https://youtu.be/SnAq9tbeRm4?t=17m16s
