EDIT: found the answer - https://www.youtube.com/watch?v=8aGhZQkoFbQ
Okay, so I have some C# background, and C#'s "async environment" is a bit of a mixed bag of concurrency and "small parallelism", i.e. in a heavily async environment you can have race conditions and deadlocks, and you need to protect shared resources.
Now I'm trying to understand how the JavaScript/ES6 async environment works. Consider the following code:
// current context: background "page"
// independent from content page
let busy = false;
// This is an event handler that receives event from content page
// It can happen at any time
runtimeObj.onMessage.addListener((request) =>
{
if(request.action === 'AddEntry')
{
AddEntry(request);
return true;
}
return false;
} );
function AddEntry(data)
{
if (!busy)
    groups.push({url: data.url, time: Date.now(), session: data.session});
else
    setTimeout(() => AddEntry(data), 10000); // simulating Semaphore wait
}
// called from asynchronous function setInterval()
function SendPOST()
{
if (groups === undefined || groups.length < 1)
return;
busy = true; // JS has no semaphores so I "simulate it"
let del = [];
groups.forEach(item =>
{
if (Date.now() - item.time > 3600000)
{
del.push(item);
let xhr = new XMLHttpRequest();
let data = new FormData();
data.append('action', 'leave');
data.append('sessionID', item.session);
xhr.withCredentials = true;
xhr.onreadystatechange = function()
{
    if(xhr.readyState == 4 && xhr.status !== 200) {
        console.log(`Unable to part group ${item.url}! Reason: ${xhr.status}. Leave group manually.`)
    }
}
xhr.open('POST', item.url, true);
xhr.send(data);
}
});
del.forEach(item => groups.splice(groups.indexOf(item), 1));
busy = false;
}
setInterval(SendPOST, 60000);
This is not really the best example, since it does not have a bunch of async keywords paired with functions, but in my understanding neither SendPOST() nor AddEntry() is really a purely sequential operation. Nonetheless, I've been told that in AddEntry(), busy will always be false because:
SendPOST is queued to execute in a minute
an event is added to the event loop
the event is processed and AddEntry is called
since busy === false, the group is pushed
a minute passes and SendPOST is added to the event loop
the event is processed and SendPOST is called
groups.length === 1 so it continues
busy = true
each group causes a request to get queued
an event comes in
busy = false
the event is processed, AddEntry is called
busy is false, like it always will be
the group is pushed
eventually the requests from before are resolved, and the onreadystatechange callbacks are put on the event loop
eventually each of the callbacks is processed and the logging statements are executed
Is this correct? From what I understand, it essentially means there can be no race conditions or deadlocks, and that I never need to protect a shared resource.
If I were to write similar code for the C# runtime, where SendPOST() and AddEntry() are asynchronous task methods that can be called in a non-blocking way from different event sources, there could be a situation where I access a shared resource while the iterating context is temporarily suspended in a context switch by the thread scheduler.
Is this correct?
Yes, it adequately describes what's happening.
JS runs in a single-threaded way, which means a function will always run to its end.
setTimeout(function concurrently() {
console.log("this will never run");
});
while(true) console.log("because this is blocking the only thread JS has");
there can be no race conditions ...
There can (in a logical way). However, they cannot appear in synchronous code, as JavaScript runs single threaded¹. If you add a callback to something, or await a promise, then other code might run in the meantime (but never at the same time!), which might cause a race condition:
let block = false;
async function run() {
if(block) return // only run this once
// If we'd do block = true here, this would be totally safe
await Promise.resolve(); // things might get out of track here
block = true;
console.log("whats going on?");
}
run(); run();
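For comparison, here is a minimal sketch (the runOnce/safeBlock names are just for illustration) that sets the flag before the first await, as the comment above suggests; that closes the window in which the second call could slip through:
let safeBlock = false;
async function runOnce() {
    if (safeBlock) return;      // only run this once
    safeBlock = true;           // claim the flag before yielding control
    await Promise.resolve();    // other code may run here, but the flag is already set
    console.log("runs exactly once");
}
runOnce(); runOnce();           // logs a single line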
... nor deadlocks ...
Yes, they are quite impossible (except if you use Atomics¹).
... I never need to protect a shared resource.
Yes. Because there are no shared resources (okay, except SharedArrayBuffers¹).
¹: By using WebWorkers (or threads on NodeJS), you actually control multiple JS threads. However each thread runs its own JS code, so variables are not shared. These threads can pass messages between each other and they can also share a specific memory construct (a SharedArrayBuffer). That can then be accessed concurrently, and all the effects of concurrent access apply (but you will rarely use it, so ...).
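As an illustration of that footnote, here is a minimal, untested sketch of sharing a counter between the main thread and a worker via a SharedArrayBuffer (the worker.js file name is a placeholder, and browsers require cross-origin isolation for SharedArrayBuffer):
// main.js
const sab = new SharedArrayBuffer(4);       // room for one Int32
const counter = new Int32Array(sab);
const worker = new Worker('worker.js');     // placeholder file name
worker.postMessage(sab);                    // the buffer itself is shared, not copied
Atomics.add(counter, 0, 1);                 // atomic increment, safe under concurrent access

// worker.js
self.onmessage = (e) => {
    const sharedCounter = new Int32Array(e.data);
    Atomics.add(sharedCounter, 0, 1);       // both threads may touch index 0 concurrently
    console.log(Atomics.load(sharedCounter, 0));
};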
Related
I'm working on a client-side simulation that does on-the-fly background computation and view refresh. However, because the simulation is always live, the CPU ends up doing a lot of unnecessary work during intense user inputs and edits.
What I want to achieve is a way to kill the whole sequence on user event.
anticipated usage in the main app:
var sequence = new Sequence(heavyFunc1, heavyFunc2, updateDom);
document.addEventListener("click", sequence.stop)
sequence.run() //all the heavy computation runs until told to stop
anticipated usage in a web worker:
var sequence = new Sequence(heavyFunc1, heavyFunc2, self.postMessage);
self.onmessage = function(e) {
if (e.data === 'stop') sequence.stop();
else sequence.run(e.data); //resets and restarts
};
I've looked around and can think of the following tools and patterns:
setTimeout(fcn,0) || setImmediate(fcn) shim: Wrap the individual steps in setTimeout(fcn,0) inside the sequences, in both the main script and the worker, to process new events before the end of the sequence.
killFlag = false;
window.addEventListener('keypress', function() {killFlag = true});
//this example works only with setTimeout, fails with setImmediate lib
var interruptibleSequence = function(tasks) {
var iterate = function() {
if (killFlag || !tasks.length) return;
tasks.shift()();
if (tasks.length) window.setTimeout(iterate,0);
};
iterate();
};
This example worked with setTimeout but failed with setImmediate where the keypress event always came in last.
debounce: This is the typical answer, but it does not seem to apply in my case. Delaying and batching user inputs would partly reduce the processing intensity at the expense of longer processing time.
Promises: I'm already using promises for worker results and could introduce additional promises between steps to interrupt the sequence and process new events. I've tried (and failed) using either flag checks or Promise.race.
With kill flag
killFlag = false;
window.addEventListener('keypress', function() {killFlag = true});
//does not work. promises get priority and event is only triggered last
var interruptibleSequence = function(tasks) {
var seq = Promise.resolve();
tasks.forEach(function(task){
seq = seq.then(function() {
if (killFlag) return;
else task();
});
});
};
With Promise.race
var killTrigger;
var killPromise = new Promise(function(res,rej){killTrigger = rej});
window.addEventListener('keypress', killTrigger);
//does not work. promises get priority and event is only triggered last
var raceToFailure = function(tasks) {
var seq = Promise.resolve();
tasks.forEach(function(task){
seq = Promise.race([seq.then(task),killPromise]);
});
};
Question
What would be the recommended pattern to kill a sequence on event?
In Short: Call Stack >> Promise >> Message Queue (events & setTimeout)
Turns out the question was badly formulated and too specific to a particular use case. In general, for sync functions that return immediately, wrapping them in a sequence of promises does hand over control to the next function, but it still takes precedence over events in the message queue, even those fired before the promises were created. The accepted answer does not always work.
In the snippet below, the Event, Promise, Sync calls end up being executed in the Sync, Promise, Event order. (native Promises required)
Therefore, the only way to let events interrupt a heavy computation is to stage the heavy sync functions with setTimeout.
//prep
function log(m) {document.getElementsByTagName('pre')[0].innerHTML += m+'<br>'}
function syncDelay(ms) {
for (var tgt=Date.now()+ms; Date.now()<tgt;) Math.random();
}
function async1() {syncDelay(100); log('1. async1')}
window.addEventListener('message', function(e){log(e.data)});
function syncPr1() {syncDelay(100); log('3. syncPromise1')}
function syncPr2() {syncDelay(100); log('4. syncPromise2')}
function syncPr3() {syncDelay(100); log('5. syncPromise3')}
//sequence
setTimeout(async1,0);
window.postMessage('2. events get done last', '*');
Promise.resolve()
.then(syncPr1)
.then(syncPr2)
.then(syncPr3)
.catch(function(e){console.log(e)});
log('6. sync items get executed before next promise call');
<pre></pre>
Something requests a task
Something else pulls the task list out of storage, and checks if there are tasks there.
If there are tasks it removes one and the smaller "task list" is put back in storage.
Between steps 2 and 3 a race condition can occur if multiple requests occur, and the same task will be served twice.
Is the correct resolution to "lock" the "tasks table" while a single task is "checked out", to prevent any other requests?
What is the solution with the least performance impact, such as delay of execution, and how should it be implemented in JavaScript with the chrome.storage API?
Some code for example :
function decide_response ( ) {
if(script.replay_type == "reissue") {
function next_task( tasks ) {
var no_tasks = (tasks.length == 0);
if( no_tasks ) {
target_complete_responses.close_requester();
}
else {
var next_task = tasks.pop();
function notify_execute () {
target_complete_responses.notify_requester_execute( next_task );
}
setTable("tasks", tasks, notify_execute);
}
}
getTable( "tasks", next_tasks );
...
}
...
}
I think you can manage without a lock by taking advantage of the fact that JavaScript is single-threaded within a context, even with the asynchronous chrome.storage API. As long as you're not using chrome.storage.sync, that is - if there may or may not be changes coming in from the cloud, I think all bets are off.
I would do something like this (written off the cuff, not tested, no error handling):
var getTask = (function() {
// Private list of requests.
var callbackQueue = [];
// This function is called when chrome.storage.local.set() has
// completed storing the updated task list.
var tasksWritten = function(nComplete) {
// Remove completed requests from the queue.
callbackQueue = callbackQueue.slice(nComplete);
// Handle any newly arrived requests.
if (callbackQueue.length)
chrome.storage.local.get('tasks', distributeTasks);
};
// This function is called via chrome.storage.local.get() with the
// task list.
var distributeTasks = function(items) {
// Invoke callbacks with tasks.
var tasks = items['tasks'];
for (var i = 0; i < callbackQueue.length; ++i)
callbackQueue[i](tasks[i] || null);
// Update and store the task list. Pass the number of requests
// handled as an argument to the set() handler because the queue
// length may change by the time the handler is invoked.
chrome.storage.local.set(
{ 'tasks': tasks.slice(callbackQueue.length) },
function() {
tasksWritten(callbackQueue.length);
}
);
};
// This is the public function task consumers call to get a new
// task. The task is returned via the callback argument.
return function(callback) {
if (callbackQueue.push(callback) === 1)
chrome.storage.local.get('tasks', distributeTasks);
};
})();
This stores task requests from consumers as callbacks in a queue in local memory. When a new request arrives, the callback is added to the queue and the task list is fetched iff this is the only request in the queue. Otherwise we can assume that the queue is already being processed (this is an implicit lock that allows only one strand of execution to access the task list).
When the task list is fetched, tasks are distributed to requests. Note that there may be more than one request if more have arrived before the fetch completed. This code just passes null to a callback if there are more requests than tasks. To instead block requests until more tasks arrive, hold the unused callbacks and restart request processing when tasks are added. If tasks can be dynamically produced as well as consumed, remember that race conditions will need to be prevented there as well, but that is not shown here.
It's important to prevent reading the task list again until the updated task list is stored. To accomplish this, requests aren't removed from the queue until the update is complete. Then we need to make sure to process any requests that arrived in the meantime (it's possible to short-circuit the call to chrome.storage.local.get() but I did it this way for simplicity).
This approach should be pretty efficient in the sense that it should minimize updates to the task list while still responding as quickly as possible. There is no explicit locking or waiting. If you have task consumers in other contexts, set up a chrome.extension message handler that calls the getTask() function.
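For example, a minimal sketch of such a handler using chrome.runtime.onMessage might look like this (the 'getTask' action name is just an assumption for illustration):
chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
    if (request.action === 'getTask') {       // hypothetical action name
        getTask(function(task) {
            sendResponse({ task: task });     // task is null when none are available
        });
        return true;                          // keep the channel open for the async response
    }
});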
Suppose I load some Flash movie that I know at some point in the future will call window.flashReady and will set window.flashReadyTriggered = true.
Now I have a block of code that I want to have executed when the Flash is ready. I want to execute it immediately if window.flashReady has already been called, and I want to set it as the callback in window.flashReady if it has not yet been called. The naive approach is this:
if(window.flashReadyTriggered) {
block();
} else {
window.flashReady = block;
}
So the concern I have based on this is that the expression in the if condition is evaluated to false, but then, before the callback can be assigned to window.flashReady, the external Flash triggers window.flashReady. Consequently, block is never called.
Is there a better design pattern to accomplish the higher level goal I'm going for (e.g., manually calling the flashReady callback)? If not, am I safe, or are there other things I should do?
All Javascript event handler scripts are handled from one master event queue system. This means that event handlers run one at a time and one runs until completion before the next one that's ready to go starts running. As such, there are none of the typical race conditions in Javascript that one would see in a multithreaded language where multiple threads of the language can be running at once (or time sliced) and create real-time conflict for access to variables.
Any individual thread of execution in javascript will run to completion before the next one starts. That's how Javascript works. An event is pulled from the event queue and then code starts running to handle that event. That code runs by itself until it returns control to the system where the system will then pull the next event from the event queue and run that code until it returns control back to the system.
Thus the typical race conditions that are caused by two threads of execution going at the same time do not happen in Javascript.
This includes all forms of Javascript events including: user events (mouse, keys, etc..), timer events, network events (ajax callbacks), etc...
The only place you can actually do multi-threading in Javascript is with the HTML5 Web Workers or Worker Threads (in node.js), but they are very isolated from regular javascript (they can only communicate with regular javascript via message passing) and cannot manipulate the DOM at all and must have their own scripts and namespace, etc...
While I would not technically call this a race condition, there are situations in Javascript because of some of its asynchronous operations where you may have two or more asynchronous operations in flight at the same time (not actually executing Javascript, but the underlying asynchronous operation is running native code at the same time) and it may be unpredictable when each operation will complete relative to the others. This creates an uncertainty of timing which (if the relative timing of the operations is important to your code) creates something you have to manually code for. You may need to sequence the operations so one runs and you literally wait for it to complete before starting the next one. Or, you may start all three operations and then have some code that collects all three results and when they are all ready, then your code proceeds.
In modern Javascript, promises are generally used to manage these types of asynchronous operations.
So, if you had three asynchronous operations that each return a promise (like reading from a database, fetching a request from another server, etc...), you could manually sequence them like this:
a().then(b).then(c).then(result => {
// result here
}).catch(err => {
// error here
});
Or, if you wanted them all to run together (all in flight at the same time) and just know when they were all done, you could do:
Promise.all([a(), b(), c()]).then(results => {
// results here
}).catch(err => {
// error here
});
While I would not call these race conditions, they are in the same general family of designing your code to control indeterminate sequencing.
There is one special case that can occur in some situations in the browser. It's not really a race condition, but if you're using lots of global variables with temporary state, it could be something to be aware of. When your own code causes another event to occur, the browser will sometimes call that event handler synchronously rather than waiting until the current thread of execution is done. An example of this is:
click
the click event handler changes focus to another field
that other field has an event handler for onfocus
browser calls the onfocus event handler immediately
onfocus event handler runs
the rest of the click event handler runs (after the .focus() call)
This isn't technically a race condition because it's 100% known when the onfocus event handler will execute (during the .focus() call). But, it can create a situation where one event handler runs while another is in the middle of execution.
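A minimal sketch of that situation (assuming a button with id myButton and an input with id other exist on the page):
document.getElementById('myButton').addEventListener('click', function() {
    console.log('click handler: before focus()');
    document.getElementById('other').focus();   // the focus event is dispatched synchronously here
    console.log('click handler: after focus()');
});
document.getElementById('other').addEventListener('focus', function() {
    console.log('focus handler runs in the middle of the click handler');
});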
JavaScript is single threaded. There are no race conditions.
When there is no more code to execute at your current "instruction pointer", the "thread" "passes the baton", and a queued window.setTimeout or event handler may execute its code.
You will get a better understanding of JavaScript's single-threading approach by reading about node.js's design ideas.
Further reading:
Why doesn't JavaScript support multithreading?
It is important to note that you may still experience race conditions if you e.g. use multiple async XMLHttpRequests, where the order of the returned responses is not defined (that is, responses may not come back in the same order they were sent). Here the output depends on the sequence or timing of other uncontrollable events (server latency, etc.). This is a race condition in a nutshell.
So even using a single event queue (like in JavaScript) does not prevent events coming in uncontrollable order and your code should take care of this.
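One common way to take care of it, sketched here with fetch, a hypothetical /search endpoint and a hypothetical render() function, is to tag each request and let only the latest one win:
let latestRequestId = 0;
function search(query) {
    const requestId = ++latestRequestId;                // tag this request
    fetch('/search?q=' + encodeURIComponent(query))     // hypothetical endpoint
        .then(res => res.json())
        .then(results => {
            if (requestId !== latestRequestId) return;  // a newer request was issued; drop this response
            render(results);                            // hypothetical render function
        });
}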
Sure you need to. It happens all the time:
<button id="btn1">Button 1</button>
<button id="btn2">Button 2</button>
<script>
document.getElementById("btn1").onclick = function() {
    const el = document.getElementById("view");
    fetch('/some/api')
        .then((res) => res.json())
        .then((data) => { el.innerHTML = JSON.stringify(data); });
};
document.getElementById("btn2").onclick = function() {
    const el = document.getElementById("view");
    fetch('/some/other/api')
        .then((res) => res.json())
        .then((data) => { el.innerHTML = JSON.stringify(data); });
};
</script>
Some people don't view it as a race condition.
But it really is.
A race condition is broadly defined as "the behavior of an electronic, software, or other system where the output is dependent on the sequence or timing of other uncontrollable events".
If the user clicks these 2 buttons within a brief period, the output is not guaranteed to depend on the order of clicking. It depends on which API request resolves sooner. Moreover, the DOM element you're referencing can be removed by some other event (like changing the route).
You can mitigate this race condition by disabling the button or showing a spinner while the loading operation is in progress, but that's cheating. You should have some mutex/counter/semaphore at the code level to control your asynchronous flow.
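One possible code-level approach (just a sketch, and not necessarily what the author had in mind) is to let a new click abort the previous in-flight request with an AbortController; the url parameter and the #view element are placeholders:
let currentController = null;
function loadInto(url) {
    if (currentController) currentController.abort();   // cancel the previous request
    currentController = new AbortController();
    const el = document.getElementById('view');
    fetch(url, { signal: currentController.signal })
        .then((res) => res.json())
        .then((data) => { el.innerHTML = JSON.stringify(data); })
        .catch((err) => {
            if (err.name !== 'AbortError') throw err;    // aborts are expected; rethrow real errors
        });
}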
To adapt it to your question, it depends on what "block()" is. If it's a synchronous function, you don't need to worry. But if it's asynchronous, you have to worry:
function block() {
window.blockInProgress = true;
// some asynchronous code
return new Promise(/* window.blockInProgress = false */);
}
if(!window.blockInProgress) {
block();
} else {
window.flashReady = block;
}
This code makes sense if you want to prevent block from being called multiple times. But if you don't care, or if "block" is synchronous, you shouldn't worry. If you're worried that a global variable's value can change while you're checking it, you needn't be: it's guaranteed not to change unless you call some asynchronous function.
A more practical example. Consider we want to cache AJAX requests.
function fetchCached(params) {
if(!dataInCache()) {
return fetch(params).then(data => putToCache(data));
} else {
return getFromCache();
}
}
So what happens if we call this code multiple times in quick succession? We don't know which request will return first, so we don't know which data will be cached. The first two times it will return fresh data, but the third time we don't know which response ended up in the cache and therefore what shape of data will be returned.
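A common way out, sketched here while keeping the same loose fetch(params)/putToCache() shape as the example above, is to cache the pending promise itself so that concurrent callers share a single request:
let cachedPromise = null;
function fetchCachedShared(params) {
    if (!cachedPromise) {
        // The promise, not the data, is stored, so every concurrent caller
        // gets the result of the same single request.
        cachedPromise = fetch(params).then(data => putToCache(data));
    }
    return cachedPromise;
}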
Yes, of course there are race conditions in Javascript. It is based on the event loop model and hence exhibits race conditions for async computations. The following program will either log 10 or 16 depending on whether incHead or sqrHead is completed first:
const rand = () => Math.round(Math.random() * 100);
const incHead = xs => new Promise((res, rej) =>
setTimeout(ys => {
ys[0] = ys[0] + 1;
res(ys);
}, rand(), xs));
const sqrHead = xs => new Promise((res, rej) =>
setTimeout(ys => {
ys[0] = ys[0] * ys[0];
res(ys);
}, rand(), xs))
const state = [3];
const foo = incHead(state);
const bar = sqrHead(state);
Promise.all([foo, bar])
.then(_ => console.log(state));
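To make the outcome deterministic, the two operations can be chained so that the second only starts after the first has settled, reusing incHead and sqrHead from above:
const chainedState = [3];
incHead(chainedState)
    .then(sqrHead)                              // sqrHead receives the array resolved by incHead
    .then(_ => console.log(chainedState));      // always logs [16]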
Why is there no function in JavaScript that sets a timeout for its continuation, saves the necessary state (the scope object and the execution point), terminates the script, and gives control back to the browser? After the timeout expires, the browser would load the execution context back and continue the script, and we would have real, non-browser-blocking sleep functionality that would work even though the JS engine is single-threaded.
Why is there still no such functionality in JavaScript? Why do we still have to slice our code into functions and set timeouts to the next step to achieve the sleep effect?
I think 'sleep'ing is something you do not want in your browser.
First of all, it might not be clear what has to happen and how a browser should behave when you actually sleep.
Is the complete script runtime sleeping? Normally it should be, because you only have one thread running your code. So what happens if other events occur during sleep? They would block, and as soon as execution continues all the blocked events would fire. That would cause odd behaviour, as you might imagine (for instance, mouse click events fired some time, maybe seconds, after the actual click). Or these events would have to be ignored, which would lead to a loss of information.
What would happen to your browser? Should it wait for the sleep if the user clicks a button (e.g. close window)? I think not, but this might actually need to call JavaScript code again (unload), which would not be able to run since program execution is sleeping.
On second thought, sleep is a sign of poor program design. A program/function/you name it has a certain task, which should be completed as soon as possible. Sometimes you have to wait for a result (for instance, you wait for an XHR to complete) and you want to continue program execution in the meantime. In this case you can and should use asynchronous calls. This results in two advantages:
The speed of all scripts is enhanced (no blocking of other scripts due to sleep)
The code is executed exactly when it should and not before or after a certain event (which might lead to other problems like deadlocks if two functions check for the same condition ...)
... which leads to another problem: imagine two or more pieces of code calling sleep. They would hinder each other if they tried to sleep at the same time, maybe unnecessarily. This would cause a lot of trouble when you'd like to debug; you might even have difficulty ensuring which function sleeps first if you wanted to control that behavior somehow.
Well, I think it is one of the good parts of JavaScript that sleep does not exist. However, it might be interesting to see how multithreaded JavaScript would perform in a browser ;)
JavaScript is designed for a single-process, single-threaded runtime, and the browser also puts UI rendering in this thread, so if you sleep the thread, UI rendering (such as GIF animations) and element events will also be blocked, and the browser will go into a "not responding" state.
Maybe a combination of setTimeout and yield would work for your needs?
What's the yield keyword in JavaScript?
You could keep local function scope while letting the browser keep going about its work.
Of course that is only in Mozilla at the moment?
Because "sleep()" in JavaScript would make for a potentially horrible user experience, by freezing the web browser and make it unresponsive.
What you want is a combination of yield and Deferreds (from jQuery, for example).
It's sometimes called pseudo-threads, light threading or green threads. And you can do exactly what you want with them in JavaScript > 1.7. Here is how:
You'll need first to include this code:
$$ = function (generator) {
var d = $.Deferred();
var iter;
var recall = function() {
try {var def = iter.send.apply(iter, arguments);} catch(e) {
if (e instanceof StopIteration) {d.resolve(); return;}
if (e instanceof ReturnValueException) {
d.resolve(e.retval); return
};
throw e;
};
$.when(def).then(recall); // close the loop !
};
return function(arguments) {
iter = generator.apply(generator, arguments);
var def = iter.next(); // init iterator
$.when(def).then(recall); // loop in all yields
return d.promise(); // return a deferred
}
}
ReturnValueException = function (r) {this.retval = r; return this; };
Return = function (retval) {throw new ReturnValueException(retval);};
And of course include jQuery to get the $ jQuery access (for the Deferreds).
Then you'll be able to define once for all a Sleep function:
function Sleep(time) {
var def = $.Deferred();
setTimeout(function() {def.resolve();}, time);
return def.promise();
}
And use it (along with other function that could take sometime):
// Sample function that take 3 seconds to execute
fakeAjaxCall = $$(function () {
yield (Sleep(3000));
Return("AJAX OK");
});
And there's a fully featured demo function:
function log(msg) {$('<div>'+msg+'</div>').appendTo($("#log")); }
demoFunction = $$(function (arg1, arg2) {
var args = [].splice.call(arguments,0);
log("Launched, arguments: " + args.join(", "));
log("before sleep for 3secs...");
yield (Sleep(3000));
log("after sleep for 3secs.");
log("before call of fake AjaxCall...");
ajaxAnswer = yield (fakeAjaxCall());
log("after call of fake AjaxCall, answer:" + ajaxAnswer);
// You cannot use return, You'll have to use this special return
// function to return a value
log("returning 'OK'.");
Return("OK");
log("should not see this.");
});
As you can see, the syntax is a little bit different:
Remember:
any function that should have these features should be wrapped in $$(myFunc)
$$ will catch any yielded value from your function and resume it only when the yielded value has finished being calculated. If it's not a Deferred, it'll work as well.
Use 'Return' to return a value.
This will only work with JavaScript 1.7 (which is supported in newer Firefox versions).
It sounds like what you're looking for here is a way to write asynchronous code in a way that looks synchronous. Well, by using Promises and asynchronous functions (standardized in ECMAScript 2017), you actually can do that:
// First we define our "sleep" function...
function sleep(milliseconds) {
// Immediately return a promise that resolves after the
// specified number of milliseconds.
return new Promise(function(resolve, _) {
setTimeout(resolve, milliseconds);
});
}
// Now, we can use sleep inside functions declared as asynchronous
// in a way that looks like a synchronous sleep.
async function helloAfter(seconds) {
console.log("Sleeping " + seconds + " seconds.");
await sleep(seconds * 1000); // Note the use of await
console.log("Hello, world!");
}
helloAfter(1);
console.log("Script finished executing.");
Output:
Sleeping 1 seconds.
Script finished executing.
Hello, world!
(Try in Babel)
As you may have noticed from the output, this doesn't work quite the same way that sleep does in most languages. Rather than block execution until the sleep time expires, our sleep function immediately returns a Promise object which resolves after the specified number of seconds.
Our helloAfter function is also declared as async, which causes it to behave similarly. Rather than block until its body finishes executing, helloAfter returns a Promise immediately when it is called. This is why "Script finished executing." gets printed before "Hello, world!".
Declaring helloAfter as async also allows the use of the await syntax inside of it. This is where things get interesting. await sleep(seconds * 1000); causes the helloAfter function to wait for the Promise returned by sleep to be resolved before continuing. This is effectively what you were looking for: a seemingly synchronous sleep within the context of the asynchronous helloAfter function. Once the sleep resolves, helloAfter continues executing, printing "Hello, world!" and then resolving its own Promise.
For more information on async/await, check out the async function specification.
How could something equivalent to lock in C# be implemented in JavaScript?
So, to explain what I'm thinking a simple use case is:
User clicks button B. B raises an onclick event. If B is in event-state the event waits for B to be in ready-state before propagating. If B is in ready-state, B is locked and is set to event-state, then the event propagates. When the event's propagation is complete, B is set to ready-state.
I could see how something close to this could be done, simply by adding and removing the class ready-state from the button. However, the problem is that a user can click a button twice in a row faster than the variable can be set, so this attempt at a lock will fail in some circumstances.
Does anyone know how to implement a lock that will not fail in JavaScript?
A lock is a questionable idea in JS, which is intended to be threadless and not to need concurrency protection. You're looking to combine calls on deferred execution. The pattern I follow for this is the use of callbacks. Something like this:
var functionLock = false;
var functionCallbacks = [];
var lockingFunction = function (callback) {
if (functionLock) {
functionCallbacks.push(callback);
} else {
$.longRunning(function(response) {
while(functionCallbacks.length){
var thisCallback = functionCallbacks.pop();
thisCallback(response);
}
});
}
}
You can also implement this using DOM event listeners or a pubsub solution.
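For instance, a minimal sketch of the pubsub variant using a plain EventTarget (the workDone event name and the setTimeout stand-in for the long-running call are just illustrative):
const bus = new EventTarget();
let working = false;
function requestWork(callback) {
    bus.addEventListener('workDone', (e) => callback(e.detail), { once: true });
    if (working) return;                 // someone else already started the work
    working = true;
    setTimeout(() => {                   // stand-in for the long-running operation
        working = false;
        bus.dispatchEvent(new CustomEvent('workDone', { detail: 'response' }));
    }, 1000);
}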
JavaScript is, with very few exceptions (XMLHttpRequest onreadystatechange handlers in some versions of Firefox), event-loop concurrent. So you needn't worry about locking in this case.
JavaScript has a concurrency model based on an "event loop". This model is quite different than the model in other languages like C or Java.
...
A JavaScript runtime contains a message queue, which is a list of messages to be processed. To each message is associated a function. When the stack is empty, a message is taken out of the queue and processed. The processing consists of calling the associated function (and thus creating an initial stack frame). The message processing ends when the stack becomes empty again.
...
Each message is processed completely before any other message is processed. This offers some nice properties when reasoning about your program, including the fact that whenever a function runs, it cannot be pre-empted and will run entirely before any other code runs (and can modify data the function manipulates). This differs from C, for instance, where if a function runs in a thread, it can be stopped at any point to run some other code in another thread.
A downside of this model is that if a message takes too long to complete, the web application is unable to process user interactions like click or scroll. The browser mitigates this with the "a script is taking too long to run" dialog. A good practice to follow is to make message processing short and if possible cut down one message into several messages.
For more links on event-loop concurrency, see E
I've had success with mutex-promise.
I agree with other answers that you might not need locking in your case. But it's not true that one never needs locking in Javascript. You need mutual exclusivity when accessing external resources that do not handle concurrency.
Locks are a concept required in a multi-threaded system. Even with worker threads, messages are sent by value between workers so that locking is unnecessary.
I suspect you need to just set a semaphore (flagging system) between your buttons.
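A minimal sketch of such a flag around a button's click handler (the myButton id and doWork(), assumed to return a promise, are placeholders):
let inProgress = false;
document.getElementById('myButton').addEventListener('click', function() {
    if (inProgress) return;      // ignore clicks while the previous one is still being handled
    inProgress = true;
    doWork().finally(() => {     // doWork() is a placeholder returning a promise
        inProgress = false;
    });
});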
Here's a simple lock mechanism, implemented via a closure:
const createLock = () => {
    let lockStatus = false
    const release = () => {
        lockStatus = false
    }
    const acquire = () => {
        if (lockStatus == true)
            return false
        lockStatus = true
        return true
    }
    return {
        isLocked: () => lockStatus, // read the current status (a plain copy would go stale)
        acquire: acquire,
        release: release,
    }
}
lock = createLock() // create a lock
lock.acquire() // acquired the lock
if (lock.acquire()){
    console.log("Was able to acquire");
} else {
    console.log("Was not able to acquire"); // This will execute
}
lock.release() // now the lock is released
if(lock.acquire()){
    console.log("Was able to acquire"); // This will execute
} else {
    console.log("Was not able to acquire");
}
lock.release() // Hey don't forget to release
If it helps anyone in 2022+, all major browsers now support the Web Locks API, although it is still marked experimental.
To quote the example in MDN:
await do_something_without_lock();
// Request the lock.
await navigator.locks.request('my_resource', async (lock) => {
// The lock has been acquired.
await do_something_with_lock();
await do_something_else_with_lock();
// Now the lock will be released.
});
// The lock has been released.
await do_something_else_without_lock();
The lock is automatically released when the callback returns.
Locks are scoped to origins (https://example.com != https://example.org:8080), and work across tabs/workers.
Lock requests are queued (first come, first served), unlike in some other languages where the lock is passed to a waiting thread at random.
navigator.locks.query() can be used to see what currently holds the lock and who is waiting in the queue to acquire it.
There is a mode="shared" to implement a readers-writer lock if you need it (sketched below).
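A minimal sketch of that shared/exclusive split (the 'config' resource name and the loadConfig/saveConfig functions are placeholders):
// Many readers may hold the lock at the same time.
async function readConfig() {
    return navigator.locks.request('config', { mode: 'shared' }, async (lock) => {
        return loadConfig();             // placeholder read operation
    });
}
// A writer waits until all shared holders are done, then runs exclusively.
async function writeConfig(newValue) {
    return navigator.locks.request('config', { mode: 'exclusive' }, async (lock) => {
        await saveConfig(newValue);      // placeholder write operation
    });
}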
Why don't you disable the button and enable it after you finish the event?
<input type="button" id="xx" onclick="checkEnableSubmit('true');yourFunction();">
<script type="text/javascript">
function checkEnableSubmit(status) {
document.getElementById("xx").disabled = status;
}
function yourFunction(){
//add your functionality
checkEnableSubmit(false);
}
</script>
Happy coding !!!
Some additions to JoshRiver's answer according to my case:
var functionCallbacks = [];
var functionLock = false;
var getData = function (url, callback) {
if (functionLock) {
functionCallbacks.push(callback);
} else {
functionLock = true;
functionCallbacks.push(callback);
$.getJSON(url, function (data) {
while (functionCallbacks.length) {
var thisCallback = functionCallbacks.pop();
thisCallback(data);
}
functionLock = false;
});
}
};
// Usage
getData("api/orders",function(data){
barChart(data);
});
getData("api/orders",function(data){
lineChart(data);
});
There will be just one API call, and these two functions will consume the same result.
Locks still have uses in JS. In my experience I only needed to use locks to prevent spam clicking on elements making AJAX calls.
If you have a loader set up for AJAX calls then this isn't required (as well as disabling the button after clicking).
But either way here is what I used for locking:
var LOCK_INDEX = {};
function LockCallback(key, action, manual) {
if (LOCK_INDEX[key])
return;
LOCK_INDEX[key] = true;
action(function () { delete LOCK_INDEX[key] });
if (!manual)
delete LOCK_INDEX[key];
}
Usage:
Manual unlock (usually for XHR)
LockCallback('someKey',(delCallback) => {
//do stuff
delCallback(); //Unlock method
}, true)
Auto unlock
LockCallback('someKey',() => {
//do stuff
})