It helps me to understand things through real-world comparisons, in this case fast food.
In Java, for synchronous blocking I/O, I understand that each request is processed by a thread and can only be completed one at a time. It's like ordering through a drive-through: if I'm tenth in line, I have to wait for the 9 cars ahead of me. But I can open up more threads so that multiple orders are completed simultaneously.
In JavaScript you can have asynchronous, non-blocking, but single-threaded execution. As I understand it, multiple requests are made, those requests are accepted immediately, but each request is processed by some background process at some later time before returning. I don't understand how this would be faster. If you order 10 burgers at the same time, the 10 requests would be put in immediately, but since there is only one cook (single thread), it still takes the same time to make the 10 burgers.
I mean, I understand the reasoning for why non-blocking async single-threaded "should" be faster for some things, but the more questions I ask myself, the less I understand it.
I really don't understand how non-blocking async single-threaded can be faster than sync blocking multithreaded for any type of application, including I/O.
Non-blocking async single-threaded is sometimes faster
That's unlikely. Where are you getting this from?
In multi-threaded synchronous I/O, this is roughly how it works:
The OS and app-server platform (e.g. a JVM) work together to create 10 threads. These are data structures represented in memory, and a scheduler running at the kernel/OS level will use these data structures to tell one of your CPU cores to 'jump to' some point in the code and run the commands it finds there.
The data structure that represents a thread contains more or less the following items:
The location in memory of the instruction we were running.
The entire 'stack'. If some function invokes a second function, then we need to remember all local variables and the point we were at in the original method, so that when the second method 'returns', we know how to carry on. Your average Java program is probably ~20 methods deep, so that's 20 sets of local vars and 20 places in the code to track. This is all done on stacks; each thread has one, and they tend to be fixed-size for the entire app.
The cache page(s) that were spun up in the local cache of the core running this code.
The code in the thread is written as follows: all commands to interact with 'resources' (which are orders of magnitude slower than your CPU; think network packets, disk access, etc.) return the data requested immediately, but only if everything you asked for is already available in memory. If that is impossible, because the data you wanted just isn't there yet (say the packet carrying it is still on the wire, heading to your network card), there's only one thing the code that powers the 'get me network data' function can do: wait until that packet arrives and makes its way into memory.
Rather than do nothing at all, the OS and CPU will work together to take the data structure that represents the thread, freeze it, find another such frozen data structure, unfreeze it, and jump to its 'where did we leave things' point in the code.
That's a 'thread switch': Core A was running thread 1. Now core A is running thread 2.
The thread switch involves moving a bunch of memory around: all those 'live' cached pages, and that stack, need to be near the core for the CPU to do its job, so the CPU has to load a bunch of pages from main memory. That takes some time; not a lot (nanoseconds), but not zero either. Modern CPUs can only operate on data loaded into a nearby cache page (which are ~64k to 1MB in size, no more than that, a thousand-plus times less than what your RAM sticks can store).
In single-threaded asynchronous I/O, this is roughly how it works:
There's still a thread, of course (everything runs in one), but this time the app doesn't multithread at all. Instead, it itself creates the data structures required to track multiple incoming connections, and, crucially, the primitives used to ask for data work differently. Remember that in the synchronous case, if the code asks for the next bunch of bytes from a network connection, the thread ends up 'freezing' (telling the kernel to find some other work to do) until the data is there. In asynchronous mode, the data is returned if available, but if it isn't, the 'give me some data!' function still returns; it just says: sorry bud, I have 0 new bytes for you.
The app itself will then decide to go work on some other connection, and in that way a single thread can manage a bunch of connections: Is there data for connection #1? Yes, great, I shall process it. No? Oh, okay. Is there data for connection #2? And so on and so forth.
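A minimal sketch of that loop (Node normally hides this behind its event loop and OS facilities like epoll, so the connections and their tryRead() here are made-up stand-ins, fed with simulated data, just to show the shape):

// Toy connection: tryRead() never blocks -- it returns whatever bytes
// happen to be "available" (simulated randomly here), or null if none.
function makeConnection(id) {
  return {
    id,
    buffer: Buffer.alloc(0), // this connection's own small bit of state
    tryRead() {
      return Math.random() < 0.3 ? Buffer.from(`bytes for #${id} `) : null;
    },
  };
}

const connections = [1, 2, 3].map(makeConnection);

function loop() {
  for (const conn of connections) {
    const chunk = conn.tryRead(); // returns immediately, maybe empty-handed
    if (chunk !== null) {
      // Data for this connection: append it to that connection's buffer.
      conn.buffer = Buffer.concat([conn.buffer, chunk]);
      console.log(`connection #${conn.id}: ${conn.buffer.length} bytes so far`);
    }
  }
  setImmediate(loop); // yield to the runtime, then check every connection again
}

loop();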
Note that if data arrives on, say, connection #5, then this one thread, to handle that incoming data, will presumably need to load a bunch of state info from memory, and may need to write it back.
For example, let's say you are processing an image, and half of the PNG data arrives on the wire. There's not a lot you can do with it, so this one thread will create a buffer and store that half of the PNG inside it. When it later hops back to this connection, it needs to load the part of the image it already got, and add onto that buffer the chunk that just arrived in a network packet.
This app is also causing a bunch of memory to be moved around into and out of cache pages just the same, so in that sense it's not all that different, and if you want to handle 100k things at once, you're inevitably going to end up having to move stuff into and out of cache pages.
So what is the difference? Can you put it in fry cook terms?
Not really, no. It's all just data structures.
The key difference is in what gets moved into and out of those cache pages.
In the case of async it is exactly what the code you wrote wants to buffer. No more, no less.
In the case of synchronous, it's that 'datastructure representing a thread'.
Take Java, for example: that means at the very least the entire stack for that thread, which, depending on the -Xss parameter, is about 128k worth of data. So if you have 100k connections to be handled simultaneously, that's 12.8GB of RAM just for those stacks!
If those incoming images really are all only about 4k in size, you could have done it with 4k buffers, needing at most 0.4GB of memory, had you handrolled it by going async.
That is where the gain lies for async: by handrolling your buffers, you can't avoid moving memory into and out of cache pages, but you can ensure the chunks are smaller, and that will be faster.
Of course, to really make it faster, the buffer for storing state in the async model needs to be small (there's not much point if you need to save 128k into memory before you can operate on it; that's how large those stacks were already), and you need to be handling a great many things at once (10k+ simultaneous connections).
There's a reason we don't write all code in assembler, and why memory-managed languages are popular: handrolling such concerns is tedious and error-prone. You shouldn't do it unless the benefits are clear.
That's why synchronous is usually the better option and, in practice, often actually faster (those OS thread schedulers are written by expert coders and tweaked extremely well; you don't stand a chance of replicating their work): the whole 'by handrolling my buffers I can greatly reduce the number of bytes that need to be moved around' gain needs to outweigh those losses.
In addition, async is complicated as a programming model.
In async mode, you can never block. Wanna do a quick DB query? That could block, so you can't do that; you have to write your code as: okay, fire off this job, and here's some code to run when it gets back. You can't 'wait for an answer', because in async land, waiting is not allowed.
In async mode, any time you ask for data, you need to be capable of dealing with getting only half of what you wanted. In synchronous mode, if you ask for 4k, you get 4k; the fact that your thread may freeze until the 4k is available is not something you need to worry about. You write your code as if the data arrives, complete, the moment you ask for it.
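For example, here is a small self-contained sketch of that callback style; the 'DB query' is simulated with a timer, since the point is the shape of the code, not a real database:

// Simulated async query: the answer arrives later, via a callback.
function query(sql, callback) {
  setTimeout(() => callback(null, [{ id: 1, total: 9.99 }]), 50); // fake I/O delay
}

query("SELECT * FROM orders", (err, rows) => {
  if (err) throw err;
  console.log("result arrived:", rows); // runs later, when the fake I/O completes
});

// This line runs immediately: we fired off the job and moved on,
// because waiting is not allowed.
console.log("query dispatched, moving on");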
Bbbuutt... fry cooks!
Look, CPU design just isn't simple enough to put in terms of a restaurant like this.
You are mentally moving the bottleneck from your process (the burger orderer) to the other process (the burger maker).
This will not make your application faster.
When considering the single-threaded async model, the real benefit is that your process is not blocked while waiting for the other process.
In other words, do not associate async with the word fast but with the word free. Free to do other work.
So basically, I have 2 lists in a Node.js app that I'm planning: one of them I get from a database, and the other I build by copying the names of files on a server into a list.
The problem is: both of them have ~750,000 strings each, and I need to search for each string from one list inside the other.
I am pretty new to Node, so I'm wondering: will my app lock out other users while it compares the lists, it being single-threaded and all? Comparing two huge lists like these seems pretty CPU intensive to me.
It depends on how you are comparing the lists and how your application is built, but generally yes. For example, this code would block execution:
server.on("request", (req, res)=>{
// This function will not be called while the comparasion is running.
// Requests will have to wait until the call stack empties (until nothing
// is running).
}
compareLists();
Maybe you should take a look at worker threads or cluster to make your Node.js application multi-threaded, or you could just execute another Node.js script in parallel with child_process. Also, see this guide about the JavaScript event loop.
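For instance, here is a minimal worker_threads sketch (the tiny lists stand in for your ~750,000 strings); the comparison runs off the main thread, and building a Set makes each lookup O(1) instead of a scan of the second list:

// compare.js -- run with: node compare.js
const { Worker, isMainThread, parentPort, workerData } = require("worker_threads");

if (isMainThread) {
  const dbList = ["a.png", "b.png", "c.png"];   // stand-in for the DB strings
  const fileList = ["b.png", "c.png", "d.png"]; // stand-in for the file names

  const worker = new Worker(__filename, { workerData: { dbList, fileList } });
  worker.on("message", (matches) => console.log("found in both:", matches));
  worker.on("error", (err) => console.error(err));
  // Meanwhile, the main thread stays free to serve requests.
} else {
  const { dbList, fileList } = workerData;
  const fileSet = new Set(fileList); // O(1) membership checks
  parentPort.postMessage(dbList.filter((s) => fileSet.has(s)));
}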
I just started getting into child_process, and all I know is that it's good for delegating blocking functions (e.g. looping over a huge array) to child processes.
I use TypeORM to communicate with the MySQL database. I was wondering if there's any benefit to moving some of the asynchronous database work to child processes. I read in another thread (unfortunately I couldn't find it in my browser history) that there's no good reason to delegate async functions to child processes. Is that true?
example code:
child.js
import {createConnection} from "./dbConnection";
import {SomeTable} from "./entity/SomeTable";

process.on('message', (m) => {
  createConnection().then(async connection => {
    let repository = connection.getRepository(SomeTable);
    let results = await repository
      .createQueryBuilder("t")
      .orderBy("t.postId", "DESC")
      .getMany();
    process.send(results);
  });
});
main.js
const cp = require('child_process');
const child = cp.fork('./child.js');

child.send('Please fetch some data');
child.on('message', (m) => {
  console.log(m);
});
The big gain with JavaScript is its asynchronous nature...
When you call an asynchronous function, the code continues to execute, not waiting for the answer. Only when the function is done and an answer is available does it continue with that part.
Your database call is already asynchronous, so you would be spawning another Node process for nothing. Since your database takes all the heat, having more Node.js processes wouldn't help on that front.
Take the same example but with a file write. What could make the write to the disk faster? Nothing much, really... But do we care? Nope, because our Node.js is not blocked and keeps answering requests and handling tasks. The only thing you might want to check is that you don't send a thousand file writes at the same time; if they are big, there would be a negative impact on the file system. But since a write is not CPU intensive, Node will run just fine.
Child processes really are a great tool, but it is rare to need them. I too wanted to use some when I heard about them, but the thing is that you will most likely not need them at all... The only time I decided to use one was to create a CPU-intensive worker: it would make sure one child process was spawned per core (since Node is single-threaded) and respawn any faulty ones.
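That setup looked roughly like this (a sketch; worker.js stands in for whatever CPU-intensive script you run):

// pool.js -- fork one child per core, respawn any that die.
const cp = require("child_process");
const os = require("os");

function spawnWorker() {
  const worker = cp.fork("./worker.js"); // hypothetical CPU-heavy script
  worker.on("exit", (code) => {
    if (code !== 0) {
      console.log("worker died, respawning");
      spawnWorker();
    }
  });
}

for (let i = 0; i < os.cpus().length; i++) spawnWorker();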
Sometimes I'm having issues with Firebase when the user is on a slow mobile connection. When the user saves an entry to Firebase, I actually have to write to 3 different locations. Sometimes the first one works, but if the connection is slow, the 2nd and 3rd may fail.
This leaves me with entries in the first location that I constantly need to clean up.
Is there a way to help prevent this from happening?
var newTikiID = ref.child("tikis").push(tiki, function(error) {
  if (!error) {
    console.log("new tiki created");
    var tikiID = newTikiID.key();
    saveToUser(tikiID);
    saveToGeoFire(tikiID, tiki.tikiAddress);
  } else {
    console.log("an error occurred during tiki save");
  }
});
There is no Firebase method to write to multiple paths at once, although some tools planned by the team (e.g. Triggers) may resolve this in the future.
This topic has been explored before and the firebase-multi-write README contains a lot of discussion on the topic. The repo also has a partial solution to client-only atomic writes. However, there is no perfect solution without a server process.
It's important to evaluate your use case and see if this really matters. If the second and third writes fail (say, the write for the geo query), chances are there's really no consequence. Most likely, it's essentially the same as if the first write had failed, or if all writes had failed: the entry won't appear in searches by geo location. Thus, the complexity of resolving this issue is probably a time sink.
Of course, it does cost a few bytes of storage, and if you're working with millions of records, that may matter. A simple solution for this scenario would be to run an audit report that detects broken links between the data and GeoFire tables and cleans up old data.
If an atomic operation is really necessary, such as gaming mechanics where fairness or cheating could be an issue, or where integrity is lost by having partial results, there are a couple of options:
1) Master Record approach
Pick a master path (the one that must exist) and use security rules to ensure other records cannot be written unless the master path exists.
".write": "root.child('master_path').child(newData.child('master_record_id').val()).exists()"
2) Server-side script approach
Instead of writing the paths separately, use a queue strategy.
Create a single event by writing one record to a queue
Have a server-side process monitor the queue and process events
The server-side process does the multiple writes and ensures they all succeed
If any fail, the server-side process handles rollbacks or retries
By using the server-side queue, you remove the risk of a client going offline between writes. The server can safely survive restarts and retry failed events when using the queue model.
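The client side of that model shrinks to one write. Here's a sketch in the same legacy SDK style as the code above (the /queue path and payload shape are assumptions, and ref and tiki come from the earlier snippet):

// One atomic client write: the whole intent goes into the queue as a
// single event; a server-side worker fans it out to the real paths.
ref.child("queue").push({
  action: "createTiki",
  tiki: tiki,
  address: tiki.tikiAddress
}, function(error) {
  if (error) console.log("enqueue failed; nothing was partially written");
});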
I had the same problem, and I ended up using a Conditional Request with the Firebase REST API in order to write data transactionally. See my question and answer: Firebase: How to update multiple nodes transactionally? Swift 3.
If you need to write concurrently (but not transactionally) to several paths, you can do that now, as Firebase supports multi-path updates: https://firebase.google.com/docs/database/rest/save-data
https://firebase.googleblog.com/2015/09/introducing-multi-location-updates-and_86.html
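Applied to the earlier tiki example, a multi-path update is a single update() call whose keys are full paths from the root, applied together as one write, so you can't end up with a tiki that is missing its index entries. The exact paths and the userID variable are assumptions:

// Build one update object whose keys are full paths from the root.
var tikiID = ref.child("tikis").push().key();
var updates = {};
updates["tikis/" + tikiID] = tiki;
updates["users/" + userID + "/tikis/" + tikiID] = true;
updates["geofire/" + tikiID] = tiki.tikiAddress;

// All of these land together, or the whole call fails.
ref.update(updates, function(error) {
  if (error) console.log("multi-path update failed as a whole");
});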
I'm working on a game prototype and am worried about the following case: the browser does AJAX to Node.js, which has to do several MongoDB operations using async.series.
What prevents multiple requests arriving at the same time from causing database issues? New events (i.e. DB operations) seem like they could run out of order or in between the async.series steps.
In other words, what happens if a user fires AJAX calls very quickly, before the prior ones have finished their async.series? Hopefully that makes sense.
If this is indeed an issue, what is the proper way to handle it?
First and foremost, #fmodos's comment should be completely disregarded. It is wrong on many levels, but most simply: you could have any number of nodes running (say, on Heroku), and there is no guarantee that subsequent requests will hit the same node.
Now, I'm going to answer your question by asking more questions. (You really didn't give me a choice here)
What are these operations doing? Inserting documents? Updating existing documents? Removing documents? This is very important, because if all you're doing is inserting documents, why does it matter if one finishes before the other? If you're updating documents, you should NOT be issuing a find, grabbing a ref to the object, and then calling save. (I'm assuming you're using Mongoose; if you're not, I would.) Instead, you should be using built-in Mongo update operators like $inc, which properly handle concurrent requests.
http://docs.mongodb.org/manual/reference/operator/update/inc/
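Here's a sketch with Mongoose (the model, field, and connection string are made up): the increment happens inside the database, so two overlapping requests can't clobber each other's read-modify-write.

const mongoose = require("mongoose");
mongoose.connect("mongodb://localhost/test"); // hypothetical connection string

const Player = mongoose.model("Player", new mongoose.Schema({
  name: String,
  score: Number
}));

// Atomic server-side increment: no find + modify + save race.
// Two of these running concurrently still add exactly 2 in total.
Player.updateOne({ name: "joe" }, { $inc: { score: 1 } })
  .then(() => console.log("score bumped atomically"));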
Does that help at all? If not, please let me know and I will give it another shot.
Mongo has database-wide read/write locks. It gives preference to writes on the same collection first, then fulfills reads. So if, by chance, Bill is writing to the DB while Joe is reading at the same time, Bill's write will execute first, while Joe waits until the write is complete and is then given all the data (including Bill's).