I have recently heard about the Web Workers spec, which defines an API for multi-threading JavaScript. But after working with client-side scripting (and the event-driven paradigm) for so long, I don't really see the point of using multiple threads.
I can see how the JavaScript engine and browser rendering engine can benefit from multi-threading, but I really don't see much benefit in handing this power to application programmers.
The Wikipedia article actually answers your question fairly well.
The power is given to us developers so that we can specifically offload tasks that would be disruptive to users to a web worker. The browser does not know which scripts are necessary for your custom interface to function properly, but you do.
If you've got a script that blocks the page rendering for 10 seconds but isn't necessary for the website to function, you could offload it to a web worker. Doing so allows your users to interact with the page instead of forcing them to wait 10 seconds for that script to execute. In a way, it's like AJAX in that things can be injected in after the interface loads so as to not delay users' interaction.
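For instance, here is a minimal sketch of offloading a long-running computation to a worker; the file name heavy-task.js, the #result element, and the message payload are all hypothetical:

// main.js - runs on the UI thread
var worker = new Worker("heavy-task.js"); // hypothetical worker script

// Receive the result once the background computation finishes
worker.onmessage = function (event) {
  document.getElementById("result").textContent = event.data;
};

// Kick off the expensive work without blocking the page
worker.postMessage({ iterations: 1e9 });

The page stays responsive because the worker runs on its own thread and only talks back through messages.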
Can I write my website only in JavaScript and be sure that my code is concealed from everyone? In that respect, can Node.js, like Apache, be exposed to the Internet through a hosting provider?
The answer to both of your questions is yes.
Node.js can completely replace Apache (assuming you are willing to re-write all of your PHP as JavaScript). If you have Apache running in reverse-proxy mode between your server and client, you can even handle some requests in Node.js while handling others in PHP. This allows you to maintain the original functionality while swapping the code out, and also lets PHP handle the more mundane tasks.
While you cannot prevent raw JavaScript from being read through any means of obfuscation, you can prevent people from reading your code by not making use of standard JavaScript at all. You can use a NativeExtension for Node to add an extension handler for encrypted JavaScript files:
const fs = require("fs");

// Decrypt ".jse" files, then compile the plain JavaScript they contain
require.extensions[".jse"] = function (module, filename) {
  const source = MyNativeExtension.decrypt(fs.readFileSync(filename)).toString();
  module._compile(source, filename);
};
require("./YourCode.jse");
You would encrypt your JavaScript code into .jse files, which you then package for production; the handler above decrypts and compiles them at require time. Since the decryption is done inside the native extension, the encryption key wouldn't be revealed.
Hope this helps! :)
Yes, you definitely can. It may, however, take a while to transition existing code, and if this is for a corporate institution, you'll have to ask your coworkers and your boss/supervisor. Good luck, and remember: always document your code, in JavaScript (which has no type annotations) and in all other languages.
NodeJS is much faster: http://www.hostingadvice.com/blog/comparing-node-js-vs-php-performance/
Many more libraries: http://npmjs.org
Only need one language for everything
When comparing PHP with a technology like Node.js, which is meant for an entirely different type of task, the comparison must account for the use cases and contexts in which one is more suitable than the other.
Let's talk about them in terms of their different areas of execution, because neither deserves to be dismissed and each has its own priorities.
In terms of application domain:
PHP:
CMSs (content management systems) like WordPress and Drupal are built on PHP, which makes it well suited for creating blogs, websites, e-commerce sites, etc.
Used in developing CPU-intensive applications like meteorology applications and scientific applications.
Best suited for applications in which the client does not have to interact with the server repeatedly.
PHP 7 is based on the PHPNG engine, which speeds up PHP applications more than the previous PHP interpreter (Zend Engine 2.0). Thanks to PHPNG, your apps see up to 2x faster performance and 50% better memory consumption than with PHP 5.6.
Node.js:
Node.js is ideal for developing highly scalable server-side solutions because of its non-blocking I/O and event-driven model.
Used massively in real-time applications like chat applications, blogs, and video-streaming applications.
Used in developing single-page applications like resume portfolios and individual websites.
Node.js should be used for applications that require a lot of interaction between client and server.
For some tasks, Node.js may be faster than the “standard” web server with PHP because it runs as a single thread with non-blocking I/O, so each connection does not need its own thread and memory. Because of this, Node.js is useful when there is a need to process data in real time (chats, games, video, big data streams without logic).
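As a rough illustration of that non-blocking model, here is a minimal Node.js sketch; the data.json file and the port are just placeholders:

// A single-threaded Node.js server with non-blocking I/O
const http = require("http");
const fs = require("fs");

http.createServer((req, res) => {
  // readFile is asynchronous: while the disk read is in flight,
  // the one thread is free to accept other connections
  fs.readFile("./data.json", (err, data) => {
    if (err) {
      res.writeHead(500);
      res.end("error");
      return;
    }
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(data);
  });
}).listen(3000);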
PHP is still alive, and it has learned its lessons from Node.js.
ReactPHP enables developers to write PHP-based socket servers that process requests continuously, just as Node.js does (yes, Node.js is faster, but the fact is that PHP can also do it). The same goes for Workers (classes responsible for running asynchronous jobs and synchronizing their results) and Amp (libraries used for writing non-blocking asynchronous code). Hence, it is easy to create long-running processes with PHP, and there are plenty of tools for supporting and managing those processes (such as supervisord).
So the same tasks may be performed with either PHP or Node.js; the question of which tool to use is largely one of personal preference. You can use Node.js for tasks involving big data flows and PHP for tasks involving complex logic, high-load tasks, and dealing with external utilities, applications, and the OS.
From the scalability perspective, there are no big differences between PHP and Node.js; it is more important to consider the project's architecture.
Dayle Rees (a Laravel Framework contributor and developer): For a long time PHP was the butt of many language jokes, but I honestly feel that it’s becoming not only a popular language but a powerful one. PHP7 is great. The speed boost is one thing, but having optional support for full type hinting is a game changer. We’ve also got modern tools like Laravel and Composer, breathing new life into the language and its supporting community. With this in mind, I think it’s unlikely that Laravel will move from PHP. I think it’s more likely to gain further integration with front-end tools to provide a complete application building platform. That’s where I see it heading in terms of future expansion. I’m sure the Node will continue to excel when dealing with microservices and threaded applications.
The most important and most awaited news from PHP is that it is scheduled to receive a Just-In-Time (JIT) compiler in its next major version, PHP 8 (most probably in September 2021). The JIT should give PHP a significant boost and remove many of its current limitations.
Wrap up
To wrap up: both have pros and both have cons, but the amazing thing is that both were created by intelligent people to make web development better. When selecting a technology, the question shouldn't be which one is better, but which one serves your project's needs better. Understanding your project and its business logic will give you a clear idea of the right technology for the job. Moreover, one more important thing to consider is the skill and proficiency of the developers using the technology, and how they apply it to the project.
We all know we can spin up some web workers, give them some mundane tasks to do, and get a response at some point, at which stage we normally parse the data and render it somehow to the user, right?
Well... Can anyone please provide a relatively in-depth explanation as to why Web Workers do not have access to the DOM? Given a well-thought-out OO design approach, I just don't get why they shouldn't.
EDITED:
I know this is not possible and it won't have a practical answer, but I feel I needed a more in depth explanation. Feel free to close the question if you feel it's irrelevant to the community.
The other answers are correct; JS was originally designed to be single-threaded as a simplification, and adding multi-threading at this point has to be done very carefully. I wanted to elaborate a little more on thread safety in Web Workers.
When you send messages to workers using postMessage, one of three things happens:
1. The browser serializes the message into a JSON string and de-serializes it on the other end (in the worker thread). This creates a copy of the object.
2. The browser constructs a structured clone of the message and passes it to the worker thread. This allows messaging with more complex data types than option 1, but is essentially the same thing.
3. The browser transfers ownership of the message (or parts of it). This applies only to certain data types. It is zero-copy, but once the main context transfers the message to the worker context, it becomes unavailable in the main context. This is, again, to avoid sharing memory.
When it comes to DOM nodes:
1. Serialization is obviously not an option, since DOM nodes contain circular references;
2. Structured cloning could work, but only in a read-only fashion, since any changes made by the worker to the clone of the node would not be reflected in the UI; and
3. Transferring ownership is unfortunately not an option, since the DOM is a tree, and transferring ownership of a node would imply transferring ownership of the entire tree. This poses a problem if the UI thread needs to consult the DOM to paint the screen and it doesn't have ownership.
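A small sketch of mechanisms 2 and 3 from the main page's side; worker.js and the buffer size are hypothetical:

var worker = new Worker("worker.js"); // hypothetical worker script

// Structured clone: the worker receives a copy of this object
worker.postMessage({ rows: [1, 2, 3], label: "copied" });

// Transfer: ownership of the ArrayBuffer moves to the worker
var buffer = new ArrayBuffer(1024 * 1024);
worker.postMessage(buffer, [buffer]);
console.log(buffer.byteLength); // 0 - the buffer is no longer usable here

The transfer list passed as the second argument is what triggers mechanism 3; everything else falls back to structured cloning.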
The entire world of Javascript in a browser was originally designed with a huge simplification - single threadedness (before there were webWorkers). This single assumption makes coding a browser a ton simpler because there can never be thread contention and can never be two threads of activity trying to modify the same objects or properties at the same time. This makes writing Javascript in the browser and implementing Javascript in the browser a ton simpler.
Then, along came a need for an ability to do some longer running "background" type things that wouldn't block the main thread. Well, the ONLY practical way to put that into an existing browser with hundreds of millions of pages of Javascript already out there on the web was to make sure that this "new" thread of Javascript could never mess up the existing main thread.
So it was implemented in such a way that the ONLY way it could communicate with the main thread, or with any of the objects the main thread could modify (such as the entire DOM and nearly all host objects), was via messaging, which by its implementation is naturally synchronized and safe from thread contention.
Here's a related answer: Why doesn't JavaScript get its own thread in common browsers?
Web Workers do not have DOM access because DOM code is not thread-safe. By DOM code, I mean the code in the browser that handles your DOM calls.
Making thread-unsafe code safe is a huge project, and none of the major browsers are doing it.
The Servo project is writing a new browser from scratch, and I think they are writing their DOM code with thread safety in mind.
I have a general question that I'm having trouble grappling with about web workers. I understand that they perform background calculations in another thread, taking load off the window the user is in.
However I'm confused on whether that 'other thread' means something like having a different program running on the computer, having a separate browser open, or whether it's like a new tab in the same browser. I feel that this is more of the latter case, but I'm not 100% sure about that and I can't find good explanations.
What implications does this have on the limitations of what we can do with web workers?
Thanks in advance!
A web worker works like an independent thread of execution. Multiple threads can run at the same time in a computer process. If there are multiple processors, these threads can actually run simultaneously. If there is only a single processor, then the OS handles time-slicing between the different threads so that each one runs for a short while, then the next one runs, and, to the casual observer, they appear to be running at the same time.
In a browser, a web worker is indeed a thread of execution that runs independently of the browser window thread (of which there is one for each browser page that is open). The browser window thread has a number of limitations. The main limitation is that it only processes user events (mouse movement, mouse clicks, keyboard events, etc...) when no JavaScript code is running in the main browser thread. So, if you were to run some long-running JavaScript code in the main browser thread, the browser would "appear" to be locked up and wouldn't process any user events while that JavaScript was running. This is generally considered a bad user experience.
But, if you run this JavaScript in a web worker, it can go do its long-running thing without blocking the processing of events in the main browser window thread. When it finishes its long-running computation, it can then send a message to the main browser window thread and the result can be processed (e.g. displayed in the page or whatever the particular action is).
There are ways to work around the limitations of the main browser thread by breaking your work into small chunks and executing each chunk on a recurring timer. But using a web worker thread can significantly simplify the programming.
Web workers themselves cannot access the browser page in any way. They can't read values out of it or modify it - they can't run animations, etc... This limits their usefulness a bit to tasks that are more independent from the page. The classic use is some long running calculation (e.g. analyzing data from an image, carrying out ajax calls, doing some complex calculation, etc...). Web workers can communicate with the main thread via a messaging system. It's kind of like leaving a voicemail. The webworker calls up the main thread and leaves a message for it. The next time the main thread has nothing to do, it checks to see if there are any messages from web workers and if so, it processes them. In this way, the main thread and the web worker thread can communicate, but one cannot interrupt the other while it's doing something else.
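For completeness, here is a sketch of what the worker's half of that "voicemail" exchange might look like; the summing task is just an illustrative example:

// worker.js - no DOM access here, only computation and messaging
self.onmessage = function (event) {
  var numbers = event.data; // data copied in from the main thread
  var sum = 0;
  for (var i = 0; i < numbers.length; i++) { // the long-running work goes here
    sum += numbers[i];
  }
  self.postMessage(sum); // leave a "message" for the main thread to pick up
};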
When I make a website using javascript, do I have the opportunity to take advantage of multiple threads on the client's computer?
I know web programming can give you access to multiple asynchronous http or networking requests. I'm wondering about actual in-browser processing.
Web Workers is the way to go...
It is an HTML5 feature that allows running multiple threads (workers) on the client. The feature is currently a working draft.
You can start any number of workers for a page, and each worker can 'post' its state or result to the main thread.
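For example, here is a sketch that spawns one worker per input value and gathers the posted results on the main thread; square-worker.js is a hypothetical script that posts back the square of whatever number it receives:

var inputs = [2, 4, 8, 16];

Promise.all(inputs.map(function (n) {
  return new Promise(function (resolve) {
    var worker = new Worker("square-worker.js");
    worker.onmessage = function (event) {
      resolve(event.data); // the value the worker posted back
      worker.terminate();  // free the thread once we have the answer
    };
    worker.postMessage(n);
  });
})).then(function (values) {
  console.log(values); // e.g. [4, 16, 64, 256]
});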
Have a peek at this MDN post https://developer.mozilla.org/En/Using_web_workers
The link posted by SRN is also very useful ( http://www.html5rocks.com/en/tutorials/workers/basics/ ).
Also keep in mind that browser support is still not complete: http://caniuse.com/webworkers
It's best to have a fallback in case you hit a browser that's unsupported. Also note that Chrome used to have a bug where a web worker could actually hang the Chrome UI. Maybe it's fixed now, but look out.
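One way to sketch such a fallback; task.js and computeInline are hypothetical names:

function runTask(data, onDone) {
  if (typeof Worker !== "undefined") {
    var worker = new Worker("task.js"); // hypothetical worker script
    worker.onmessage = function (event) { onDone(event.data); };
    worker.postMessage(data);
  } else {
    // Unsupported browser: do the work inline (this will block the UI)
    onDone(computeInline(data)); // computeInline is a hypothetical fallback
  }
}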
There are things called "WebWorkers" that provide some degree of concurrency. They interoperate with "normal" code via a message-passing paradigm, kind of like Erlang processes (though not nearly as sophisticated).
It's a new-ish HTML5 thing, and not supported in old browsers of course.
Web Workers is the technology.
A web worker -- as defined by the World-Wide Web Consortium (W3C) and the Web Hypertext Application Technology Working Group (WHATWG) -- is a JavaScript script -- executed from an HTML page -- that runs in the background, independently of other, user-interface scripts that may also have been executed from the same HTML page
See MDN on usage too:
Dedicated Web Workers provide a simple means for web content to run scripts in background threads. Once created, a worker can send messages to the spawning task by posting messages to an event handler specified by the creator.
There is multithreaded web programming, but there is no multithreaded JavaScript.
When JavaScript executes in a browser on the client, it is interpreted line by line, and the browser won't render anything while it is executing.
You can tap into open-source libraries to "imitate" multithreading, but basically each script needs a page to live and run on.
Some tricks are to pass long-running functions off to run inside an iframe, or to use a setTimeout function to do some work for 50 milliseconds at a time until some flag you use to keep track of the job says the work is done, like isFinished == true.
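A rough sketch of that setTimeout trick; names like processInChunks and handleItem are made up for illustration:

function processInChunks(items, handleItem, onFinished) {
  var index = 0;
  var isFinished = false;

  function doChunk() {
    var start = Date.now();
    // Work for roughly 50 milliseconds, then yield back to the browser
    while (index < items.length && Date.now() - start < 50) {
      handleItem(items[index++]);
    }
    if (index < items.length) {
      setTimeout(doChunk, 0); // schedule the next slice
    } else {
      isFinished = true;      // the flag that says the job is done
      onFinished();
    }
  }

  doChunk();
}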
The latest versions of Flash Player allow multithreading, but it is limited to very basic usage within a single domain.
HTML5 Web Workers are another option, but they won't work in all browsers.
JavaScript is single threaded - Silverlight is not, but interaction between JavaScript and Silverlight must be performed on the Silverlight UI thread.
However, what exactly is the relationship between the Silverlight UI thread and the JavaScript thread? Are they by any definition the same thread, or separate threads with the interactions performed purely through the respective event loops and blocking one thread when waiting for the other (when evaluating/calling JavaScript from Silverlight for example)? Put another way, can JavaScript execute concurrently with Silverlight actions on the UI thread (and can multiple Silverlight instances hosted in the same page have their UI threads running concurrently)?
I haven't used Silverlight, but I've done pretty extensive work with Java Applets and Flash, so I'll comment from that perspective.
You're right that JavaScript is single-threaded. Anything that causes it to block will prevent all other computation and actions. It will even lock the browser in some cases, though newer browsers are getting better at separating out tabs into separate processes, which helps.
Any thread in a plugin like Silverlight is completely separate from JavaScript in the browser. The interfaces between them may be blocking however. If Silverlight's UI thread blocks when communicating with native JS, then no other work will be done on that thread while it's waiting. Other threads can continue to work as normal.
To address your question about whether JS can execute concurrently while actions on the Silverlight UI thread are running, I don't see why not. They have separate runtimes, and as long as they're not intercommunicating (which would cause one to block), they should be able to keep running fine in isolation.
My gut says the same would be true of multiple Silverlight instances in the same page, but that's really an architectural design question that I'm not able to answer.
Hope this helps!