I'm working on a client-side simulation that does on-the-fly background computation and view refresh. However, because the simulation is always live, the CPU ends up doing a lot of unnecessary work during intense user inputs and edits.
What I want to achieve is a way to kill the whole sequence on a user event.
anticipated usage in the main app:
var sequence = new Sequence(heavyFunc1, heavyFunc2, updateDom);
document.addEventListener("click", sequence.stop)
sequence.run() //all the heavy computation runs until told to stop
anticipated usage in a web worker:
var sequence = new Sequence(heavyFunc1, heavyFunc2, self.postMessage);
self.onmessage = function(e) {
if (e.data === 'stop') sequence.stop();
else sequence.run(e.data); //resets and restarts
};
I've looked around and can think of the following tools and patterns:
setTimeout(fcn,0) || setImmediate(fcn) shim: Wrap the individual steps in setTimeout(fcn,0) inside the sequences, in both the main script and the worker, to process new events before the end of the sequence.
killFlag = false;
window.addEventListener('keypress', function() {killFlag = true});
//this example works only with setTimeout, fails with the setImmediate lib
var interruptibleSequence = function(tasks) {
var iterate = function() {
if (killFlag || !tasks.length) return;
tasks.shift()();
if (tasks.length) window.setTimeout(iterate,0);
};
iterate();
};
This example worked with setTimeout but failed with setImmediate where the keypress event always came in last.
debounce: This is the typical answer, but it does not seem to apply in my case. Delaying and batching user inputs would partly reduce the processing intensity, at the expense of a longer total processing time.
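For reference, a minimal debounce sketch (the helper and the recompute callback are hypothetical, and the 200 ms wait is arbitrary):
function debounce(fn, wait) {
  var timer;
  return function () {
    var args = arguments;
    clearTimeout(timer);                                      // restart the wait on every call
    timer = setTimeout(function () { fn.apply(null, args); }, wait);
  };
}
// Batches rapid keypresses into one call, at the cost of a `wait` ms delay:
window.addEventListener('keypress', debounce(recompute, 200));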
Promises: I'm already using promises for worker results and could introduce additional promises between steps to interrupt the sequence and process new events. I've tried (and failed) with both flag checks and Promise.race.
With kill flag
killFlag = false;
window.addEventListener('keypress', function() {killFlag = true});
//does not work. promises get priority and event is only triggered last
var interruptibleSequence = function(tasks) {
var seq = Promise.resolve();
tasks.forEach(function(task){
seq = seq.then(function() {
if (killFlag) return;
else task();
});
});
};
With Promise.race
var killTrigger;
var killPromise = new Promise(function(res,rej){killTrigger = rej});
window.addEventListener('keypress', killTrigger); //pass the function itself, don't call it
//does not work. promises get priority and event is only triggered last
var raceToFailure = function(tasks) {
var seq = Promise.resolve();
tasks.forEach(function(task){
seq = Promise.race([seq.then(task),killPromise]);
});
};
Question
What would be the recommended pattern to kill a sequence on event?
In Short: Call Stack >> Promise >> Message Queue (events & setTimeout)
Turns out the question was badly formulated and too specific to a particular use case. In general, for sync functions that return immediately, wrapping them in a sequence of promises does hand over control to the next function, but the chain still takes precedence over events in the message queue, even those fired before the promises were created. The accepted answer does not always work.
In the snippet below, calls scheduled in the order Event, Promise, Sync end up being executed in the order Sync, Promise, Event (native Promises required).
Therefore, the only way to let events interrupt a heavy computation is to stage the heavy sync functions with setTimeout.
//prep
function log(m) {document.getElementsByTagName('pre')[0].innerHTML += m+'<br>'}
function syncDelay(ms) {
for (var tgt=Date.now()+ms; Date.now()<tgt;) Math.random();
}
function async1() {syncDelay(100); log('1. async1')}
window.addEventListener('message', function(e){log(e.data)});
function syncPr1() {syncDelay(100); log('3. syncPromise1')}
function syncPr2() {syncDelay(100); log('4. syncPromise2')}
function syncPr3() {syncDelay(100); log('5. syncPromise3')}
//sequence
setTimeout(async1,0);
window.postMessage('2. events get done last', '*');
Promise.resolve()
.then(syncPr1)
.then(syncPr2)
.then(syncPr3)
.catch(function(e){console.log(e)});
log('6. sync items get executed before next promise call');
<pre></pre>
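Applying that conclusion back to the question's API, here is a minimal sketch of such a Sequence wrapper staged with setTimeout (only the method names come from the question; the constructor body is my assumption):
function Sequence() {
  var tasks = Array.prototype.slice.call(arguments);
  var queue = [], killed = false;
  this.stop = function () { killed = true; };
  this.run = function (data) { // resets and restarts
    killed = false;
    queue = tasks.slice();
    (function iterate() {
      if (killed || !queue.length) return;
      data = queue.shift()(data); // feed each result to the next step
      setTimeout(iterate, 0);     // yield, so queued events (e.g. "click") can fire stop()
    })();
  };
}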
Related
Given the following script:
console.log("start of hard script");
const start = performance.now();
setTimeout(() => console.log('setTimeout'),0);
document.addEventListener("DOMContentLoaded", () => {
console.log('fired event DOMContentLoaded')
});
document.addEventListener("click" , () => {
console.log("fired event click")
});
while(start + 1000 > performance.now());
console.log("end of hard script")
I am sure I read somewhere that the user interaction queue will be more prioritized than the timer queue.
I wanted to see how that priority is defined, but saw in the specs:
Let taskQueue be one of the event loop's task queues, chosen in an implementation-defined manner, with the constraint that the chosen task queue must contain at least one runnable task. If there is no such task queue, then jump to the microtasks step below.
If the WHATWG doesn't define a concrete order of queues, I'd like to know by what criteria implementors decide it. How do they evaluate the "important-ness" of queues? And in the end I'd like to see an example that shows the order of these queues, if that is possible.
I'd like to know by what criteria implementors decide it. How do they evaluate the "important-ness" of queues?
That's basically their call, a design choice made from years of experience looking at how their tool is being used and what should be prioritized (and also probably a good part of common sense).
The WHATWG indeed doesn't define at all how this task prioritization should be implemented. All they do is define various task sources (not even task queues), to ensure that two tasks queued on the same source will get executed in the correct order.
The closest we have to a defined prioritization is the incoming Prioritized Task Scheduling API, which will give us, web authors, the means to post prioritized tasks, with three priority levels: "user-blocking", "user-visible" and "background".
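For example, a quick sketch of posting tasks at the three levels (assumes a browser that already ships scheduler.postTask; the task functions are hypothetical):
// Each call queues a task at an explicit priority level.
scheduler.postTask(() => renderPreview(), { priority: "user-blocking" });
scheduler.postTask(() => refreshList(),  { priority: "user-visible" }); // the default
scheduler.postTask(() => prefetchData(), { priority: "background" });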
To check what browsers actually implement, you'd have to go through their source-code and inspect it thoroughly.
I myself already spent a couple hours in there and all I can tell you about it is that you better be motivated if you want to get the full picture.
A few points that may interest you:
Browsers don't all expose the same behavior.
In Chrome, setTimeout() still has a minimum delay of 1ms (https://crbug.com/402694)
In Firefox, because Chrome's 1ms delay was producing different results on some web pages, they created a special very-low-priority task queue only for the timers scheduled before the page load; the ones scheduled after are queued in a normal-priority task queue. (https://bugzil.la/1270059)
At least in Chrome, each task queue has a "starvation" protection, which prevents said queue from flooding the event loop with its own tasks, by letting the queues with lower priority also execute some of their tasks after some time (not sure how much).
And in the end I'd like to see an example that shows the order of these queues, if it is possible.
As hinted before, that's quite complicated, since there is no "one" order.
Your own example though is quite a good test, which in my Chrome browser does show correctly that the UI task queue has a higher priority than the timer one (the while loop takes care of the 1ms minimum delay I talked about). As for DOMContentLoaded, I must admit I'm not entirely sure it shows anything significant: the HTML parser is also blocked by the while loop, and thus the task to fire the event will only get posted after the whole script is executed.
But given that this task is posted on the DOM manipulation task source, we can check its priority by forcing another task that uses this task source, e.g. script.onerror.
So here is an update to your snippet, with a few more task sources, called in reverse order of what my Chrome's prioritization seems to be:
const queueOnDOMManipulationTaskSource = (cb) => {
const script = document.createElement("script");
script.onerror = (evt) => {
script.remove();
cb();
};
script.src = "";
document.head.append(script);
};
const queueOnTimerTaskSource = (cb) => {
setTimeout(cb, 0);
}
const queueOnMessageTaskSource = (cb) => {
const { port1, port2 } = new MessageChannel();
port1.onmessage = (evt) => {
port1.close();
cb();
};
port2.postMessage("");
};
const queueOnHistoryTraversalTaskSource = (cb) => {
history.pushState("", "", location.href);
addEventListener("popstate", (evt) => {
cb();
}, { once: true });
history.back();
}
const queueOnNetworkingTaskSource = (cb) => {
const link = document.createElement("link");
link.onerror = (evt) => {
link.remove();
cb();
};
link.href = ".foo";
link.rel = "stylesheet";
document.head.append(link);
};
const makeCB = (log) => () => console.log(log);
console.log("The page will freeze for 3 seconds, try to click on this frame to queue an UI task");
// let the message show
setTimeout(() => {
window.scheduler?.postTask(makeCB("queueTask background"), {
priority: "background"
});
queueOnHistoryTraversalTaskSource(makeCB("History Traversal"));
queueOnNetworkingTaskSource(makeCB("Networking"));
queueOnTimerTaskSource(makeCB("Timer"));
// the next three are a tie in current Chrome
queueOnMessageTaskSource(makeCB("Message"));
window.scheduler?.postTask(makeCB("queueTask user-visible"), {
priority: "user-visible"
});
queueOnDOMManipulationTaskSource(makeCB("DOM Manipulation"));
window.scheduler?.postTask(makeCB("queueTask user-blocking with delay"), {
priority: "user-blocking",
delay: 1
});
window.scheduler?.postTask(makeCB("queueTask user-blocking"), {
priority: "user-blocking"
});
document.addEventListener("click", makeCB("UI task source"), {
once: true
});
const start = performance.now();
while (start + 3000 > performance.now());
}, 1000);
EDIT: found the answer - https://www.youtube.com/watch?v=8aGhZQkoFbQ
Okay, so I have some C# background, and the C# "async environment" is a bit of a mixed bag of concurrency and "small parallelism", i.e. in a heavy async environment you can have race conditions and deadlocks, and you need to protect shared resources.
Now, I'm trying to understand how the JavaScript/ES6 async environment works. Consider the following code:
// current context: background "page"
// independent from content page
let busy = false;
// This is an event handler that receives event from content page
// It can happen at any time
runtimeObj.onMessage.addListener((request) =>
{
if(request.action === 'AddEntry')
{
AddEntry(request);
return true;
}
return false;
} );
function AddEntry(data)
{
    if (!busy)
        groups.push({url: data.url, time: Date.now(), session: data.session}); // "groups", matching SendPOST below
    else
        setTimeout(function () { AddEntry(data); }, 10000); // simulating semaphore wait; pass a function, don't call it
}
// called from asynchronous function setInterval()
function SendPOST()
{
if (groups === undefined || groups.length < 1) // test for undefined before reading length
return;
busy = true; // JS has no semaphores so I "simulate it"
let del = [];
groups.forEach(item =>
{
if (Date.now() - item.time > 3600000)
{
del.push(item);
let xhr = new XMLHttpRequest();
let data = new FormData();
data.append('action', 'leave');
data.append('sessionID', item.session);
xhr.withCredentials = true;
xhr.onreadystatechange = function()
{
if(xhr.readyState == 4 && xhr.status !== 200) {
console.log(`Unable to part group ${item.url}! Reason: ${xhr.status}. Leave group manually.`)
}
}
xhr.open('POST', item.url, true);
xhr.send(data);
}
});
del.forEach(item => groups.splice(groups.indexOf(item), 1)); // splice, not slice: actually remove the entry
busy = false;
}
setInterval(SendPOST, 60000);
This is not really the best example, since it does not have a bunch of async keywords paired with functions, but in my understanding neither SendPOST() nor AddEntry() is really a pure sequential operation. Nonetheless, I've been told that in AddEntry(), busy will always be false because:
SendPOST is queued to execute in a minute
an event is added to the event loop
the event is processed and AddEntry is called
since busy === false, the group is pushed
a minute passes and SendPOST is added to the event loop
the event is processed and SendPOST is called
groups.length === 1 so it continues
busy is set to true
each group causes a request to get queued
an event comes in
busy is set to false
the event is processed, AddEntry is called
busy is false, like it will always be
the group is pushed
eventually the requests from before are resolved, and the onreadystatechange callbacks are put on the event loop
eventually each of the callbacks is processed and the logging statements are executed
Is this correct? From what I understand, it essentially means there can be no race conditions or deadlocks, and that I never need to protect a shared resource.
If I were to write similar code for the C# runtime, where SendPOST() and AddEntry() are asynchronous task methods that can be called in a non-blocking way from different event sources, there could be a situation where I access a shared resource while the iterating context is temporarily suspended in a context switch by the thread scheduler.
Is this correct?
Yes, it adequately describes what's happening.
JS runs in a single-threaded way, which means that a function will always run to its end.
setTimeout(function concurrently() {
console.log("this will never run");
});
while(true) console.log("because this is blocking the only thread JS has");
there can be no race conditions ...
There can be (in a logical sense). However, they cannot appear in synchronous code, as JavaScript runs single-threaded¹. If you add a callback to something, or await a promise, then other code might run in the meantime (but never at the same time!), which might cause a race condition:
let block = false;
async function run() {
if(block) return // only run this once
// If we'd do block = true here, this would be totally safe
await Promise.resolve(); // things might get out of track here
block = true;
console.log("whats going on?");
}
run(); run();
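For comparison, a sketch of the fix hinted at in the comment above: set the flag before awaiting, so the second call bails out.
let block2 = false;
async function runOnce() {
  if (block2) return;       // only run this once
  block2 = true;            // flag is set before control is handed over
  await Promise.resolve();  // other code may run here, but the flag already guards us
  console.log("runs exactly once");
}
runOnce(); runOnce();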
... nor deadlocks ...
Yes, they are quite impossible (except if you use Atomics¹).
I never need to protect a shared resource.
Yes. Because there are no shared resources (okay, except SharedArrayBuffers¹).
¹: By using WebWorkers (or threads on NodeJS), you actually control multiple JS threads. However, each thread runs its own JS code, so variables are not shared. These threads can pass messages between each other, and they can also share a specific memory construct (a SharedArrayBuffer). That can then be accessed concurrently, and all the effects of concurrent access apply (but you will rarely use it, so ...).
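A minimal sketch of that shared-memory exception (assumes a separate worker.js file and the cross-origin isolation that SharedArrayBuffer requires):
// main.js
const sab = new SharedArrayBuffer(4);      // 4 bytes = one Int32 slot
const counter = new Int32Array(sab);
const worker = new Worker("worker.js");
worker.postMessage(sab);                   // the buffer is shared, not copied
Atomics.add(counter, 0, 1);                // atomic increment, safe across threads

// worker.js
onmessage = (e) => {
  const counter = new Int32Array(e.data);
  Atomics.add(counter, 0, 1);              // concurrent access from the worker
};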
I have a long-running for-loop in my code and I'd like to delay the loop to handle other tasks in the event queue (like a button press). Does JavaScript or jQuery have anything that could help me? Basically, I'm trying to do something similar to the loop-delaying technique described here (https://support.microsoft.com/en-us/kb/118468).
If your application really requires long-running JavaScript code, one of the best ways to deal with it is by using JavaScript web workers. JavaScript code normally runs on the foreground thread, but by creating a web worker you can effectively keep a long-running process on a background thread, and your UI thread will be free to respond to user input.
As an example, you create a new worker like this:
var myWorker = new Worker("worker.js");
You can then post messages to it from the js in the main page like this:
myWorker.postMessage([first.value,second.value]);
console.log('Message posted to worker');
And respond to the messages in worker.js like this:
onmessage = function(e) {
console.log('Message received from main script');
var workerResult = 'Result: ' + (e.data[0] * e.data[1]);
console.log('Posting message back to main script');
postMessage(workerResult);
}
With the introduction of generators in ES6, you can write a helper method that uses yield to emulate DoEvents without much syntactic overhead:
doEventsOnYield(function*() {
... synchronous stuff ...
yield; // Pump the event loop. DoEvents() equivalent.
... synchronous stuff ...
});
Here's the helper method, which also exposes the completion/failure of the function as a Promise:
function doEventsOnYield(generator) {
  return new Promise((resolve, reject) => {
    let g = generator();
    let advance = () => {
      try {
        let r = g.next();
        if (r.done) { resolve(); return; } // finished: stop scheduling
      } catch (ex) {
        reject(ex); // failed: stop scheduling
        return;
      }
      setTimeout(advance, 0); // yield to the event loop between steps
    };
    advance();
  });
}
Note that, at this time, you probably need to run this through an ES6-to-ES5 transpiler for it to run on common browsers.
You can use setTimeout:
setTimeout(function() { }, 3600);
3600 is the time in milliseconds:
http://www.w3schools.com/jsref/met_win_settimeout.asp
There is no exact equivalent to DoEvents. Something close is using setTimeout for each iteration:
(function next(i) {
// exit condition
if (i >= 10) {
return;
}
// body of the for loop goes here
// queue up the next iteration
setTimeout(function () {
// increment
next(i + 1);
}, 0);
})(0); // initial value of i
However, that’s rarely a good solution, and is almost never necessary in web applications. There might be an event you could use that you’re missing. What’s your real problem?
Here's a tested example of how to use Yield as a direct replacement for DoEvents.
(I've used Web Workers and they're great, but they're far removed from DoEvents, and accessing global variables is near impossible.) This has been formatted for ease of understanding, and it attempts to show how the extras required (to make the function handle yield) can be treated as an insertion within the original function. "yield" has all sorts of other features, but used this way, it is a near-direct replacement for DoEvents.
//'Replace DoEvents with Yield ( https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/yield )
var misc = 0; //'sample external var
function myfunction() { //'This is the beginning of your original function which is effectively replaced by a handler inserted as follows..
//'-----------------------------------Insert Handler..
var obj = myfuncGen.next(); //'start it
if (obj.done == false) {
setTimeout(myfunction, 150); //'adjust for the amount of time you wish to yield (depends how much screen drawing is required or etc)
}
}
var myfuncGen = myfuncSurrogate(); //'creates a "Generator" from the surrogate function below.
function* myfuncSurrogate() { //'This the original function repackaged! Note asterisk.
//'-------------------------------------End Insert
var ms; //...your original function continues here....
for (var i = 1; i <= 9; i++) { //'sample 9x loop
ms = new Date().getTime();
while (new Date().getTime() < ms + 500); //'PAUSE (get time & wait 500ms) as an example of being busy
misc++; //'example manipulating an external var
outputdiv.innerHTML = "Output Div<br>demonstrating progress.. " + misc;
yield; //'replacement for your doevents, all internal stack state and variables effectively hibernate.
}
console.log("done");
}
myfunction(); //'and start by calling here. Note that you can't use "return" to get a value back except via callbacks.
<div id='outputdiv' align='center'></div>
If you are new to all this, be aware that without the insertion and the yield keyword, you would simply wait about 5 seconds while nothing happened, and then the progress div would read "9" (because all the intermediate changes to the div were invisible).
I noticed a strange behavior: if I have a series of tasks and wish to defer their execution, then I can use a setTimeout with 0 delay for each of them.
(see http://javascript.info/tutorial/events-and-timing-depth#the-settimeout-func-0-trick)
Everything works perfectly: the tasks are queued and executed as soon as possible.
But... if the invocations of the various setTimeouts are very close together, then I found that sometimes (it rarely happens!) they are not executed in the correct order.
Why?
Nobody ever promised they would be fired in the "correct" order (the tasks with the same timeout will be executed in the order they are set to time out). setTimeout only guarantees that:
each timeout is executed exactly once (unless the page dies in the meantime)
each timeout is executed no sooner than when it is supposed to.
There is no word about execution order. In fact, even if the implementor tried to preserve order (even as a side effect), most likely there is not enough time resolution to provide a unique sort order to all tasks, and a binary heap (which may well be used here) does not preserve insertion order of equal keys.
If you want to preserve the order of your deferred tasks, you should only enqueue one when the previous one is done.
This should work:
var defer = (function(){
//wrapped in IIFE to provide a scope for deferreds and wrap
var running = false;
var deferreds = [];
function wrap(func){
return function(){
func();
var next = deferreds.shift();
if(next){
setTimeout(wrap(next),0);
}else{
running = false;
}
}
}
return function(func){
if(running){
deferreds.push(func);
}else{
setTimeout(wrap(func),0);
running = true;
}
}
})()
Demo: http://jsfiddle.net/x2QuB/1/
You can consider using jQuery Deferreds (or some other implementation of deferreds), which can handle this pattern very elegantly.
The important point to note is that the deferred done callbacks are executed in the order in which they are added.
var createCountFn = function(val){
return function(){
alert(val)
};
}
// tasks
var f1 = createCountFn(1),
f2 = createCountFn('2nd'),
f3 = createCountFn(3);
var dfd = $.Deferred();
dfd.done(f1).done(f2).done(f3);
dfd.resolve();
demo
The HTML5 draft specification states that the setTimeout method can be run asynchronously (implying that the order in which the callbacks are executed may not be preserved), which could be what your browser is doing.
The setTimeout() method must run the following steps:
...
6. Return handle, and then continue running this algorithm asynchronously.
7. If the method context is a Window object, wait until the Document associated with the method context has been fully active for a further timeout milliseconds (not necessarily consecutively).
In any case one could workaround this issue by doing something similar to this:
function inOrderTimeout(/* func1[, func2, func3, ...funcN], timeout */) {
    var timer;                      // for timeout later
    var args = arguments;           // allow parent function arguments to be accessed by nested functions
    var numToRun = args.length - 1; // number of functions passed
    if (numToRun < 1) return;       // nothing to run
    var currentFunc = 0;            // index counter
    var timeout = args[numToRun];   // timeout comes straight after the last function argument
    (function caller(func, timeout) { // named so that recursion is possible
        if (currentFunc > numToRun - 1) {
            // last one, let's finish off
            clearTimeout(timer);
            return;
        }
        timer = setTimeout(function () {
            func();        // calls the current function
            ++currentFunc; // sets the next function to be called
            caller(args[currentFunc], timeout);
        }, Math.floor(timeout));
    }(args[currentFunc], timeout)); // pass in the timeout and the first function to run
}
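Hypothetical usage, with the timeout passed as the last argument:
inOrderTimeout(
    function () { console.log('first'); },
    function () { console.log('second'); },
    function () { console.log('third'); },
    100); // logs first, second, third, 100 ms apart, in argument order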
How could something equivalent to lock in C# be implemented in JavaScript?
So, to explain what I'm thinking a simple use case is:
User clicks button B. B raises an onclick event. If B is in event-state the event waits for B to be in ready-state before propagating. If B is in ready-state, B is locked and is set to event-state, then the event propagates. When the event's propagation is complete, B is set to ready-state.
I could see how something close to this could be done, simply by adding and removing the class ready-state from the button. However, the problem is that a user can click a button twice in a row faster than the variable can be set, so this attempt at a lock will fail in some circumstances.
Does anyone know how to implement a lock that will not fail in JavaScript?
A lock is a questionable idea in JS, which is intended to be threadless and not to need concurrency protection. You're looking to combine calls on deferred execution. The pattern I follow for this is the use of callbacks. Something like this:
var functionLock = false;
var functionCallbacks = [];
var lockingFunction = function (callback) {
    if (functionLock) {
        functionCallbacks.push(callback);
    } else {
        functionLock = true;             // take the lock before starting the long-running work
        functionCallbacks.push(callback); // queue our own callback too, so it also gets the response
        $.longRunning(function(response) { // stand-in for your long-running call
            while(functionCallbacks.length){
                var thisCallback = functionCallbacks.pop();
                thisCallback(response);
            }
            functionLock = false;        // release the lock once all waiters are served
        });
    }
}
You can also implement this using DOM event listeners or a pubsub solution.
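For instance, a bare-bones pubsub variant of the same queueing idea (all names here are hypothetical):
var pendingSubscribers = [];
function subscribe(cb) { pendingSubscribers.push(cb); }
function publish(response) {
  // fan the single result out to every waiter, in subscription order
  while (pendingSubscribers.length) pendingSubscribers.shift()(response);
}
subscribe(function (r) { console.log('first subscriber got', r); });
subscribe(function (r) { console.log('second subscriber got', r); });
publish('result');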
JavaScript is, with very few exceptions (XMLHttpRequest onreadystatechange handlers in some versions of Firefox), event-loop concurrent. So you needn't worry about locking in this case.
JavaScript has a concurrency model based on an "event loop". This model is quite different than the model in other languages like C or Java.
...
A JavaScript runtime contains a message queue, which is a list of messages to be processed. To each message is associated a function. When the stack is empty, a message is taken out of the queue and processed. The processing consists of calling the associated function (and thus creating an initial stack frame). The message processing ends when the stack becomes empty again.
...
Each message is processed completely before any other message is processed. This offers some nice properties when reasoning about your program, including the fact that whenever a function runs, it cannot be pre-empted and will run entirely before any other code runs (and can modify data the function manipulates). This differs from C, for instance, where if a function runs in a thread, it can be stopped at any point to run some other code in another thread.
A downside of this model is that if a message takes too long to complete, the web application is unable to process user interactions like click or scroll. The browser mitigates this with the "a script is taking too long to run" dialog. A good practice to follow is to make message processing short and if possible cut down one message into several messages.
For more links on event-loop concurrency, see E
I've had success with mutex-promise.
I agree with other answers that you might not need locking in your case. But it's not true that one never needs locking in JavaScript. You need mutual exclusion when accessing external resources that do not handle concurrency.
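One common way to get that in plain JS is to serialize access through a promise chain; here's a minimal sketch (my own helper, not the mutex-promise API):
function createMutex() {
  var tail = Promise.resolve();
  return function runExclusive(task) {
    var result = tail.then(function () { return task(); });
    tail = result.catch(function () {}); // keep the chain alive after a failure
    return result;
  };
}
var withLock = createMutex();
// writeToExternalResource is a hypothetical call to a resource that can't handle concurrency
withLock(function () { return writeToExternalResource('a'); });
withLock(function () { return writeToExternalResource('b'); }); // starts only after 'a' settles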
Locks are a concept required in a multi-threaded system. Even with worker threads, messages are sent by value between workers so that locking is unnecessary.
I suspect you need to just set a semaphore (flagging system) between your buttons.
Here's a simple lock mechanism, implemented via closure:
const createLock = () => {
  let lockStatus = false
  const release = () => {
    lockStatus = false
  }
  const acquire = () => {
    if (lockStatus == true)
      return false
    lockStatus = true
    return true
  }
  return {
    isLocked: () => lockStatus, // a getter: copying the boolean out would freeze its value
    acquire: acquire,
    release: release,
  }
}
const lock = createLock() // create a lock
lock.acquire() // acquired the lock
if (lock.acquire()) {
  console.log("Was able to acquire");
} else {
  console.log("Was not able to acquire"); // This will execute
}
lock.release() // now the lock is released
if (lock.acquire()) {
  console.log("Was able to acquire"); // This will execute
} else {
  console.log("Was not able to acquire");
}
lock.release() // Hey, don't forget to release
If it helps anyone in 2022+, all major browsers now support the Web Locks API, although it is still marked experimental.
To quote the example in MDN:
await do_something_without_lock();
// Request the lock.
await navigator.locks.request('my_resource', async (lock) => {
// The lock has been acquired.
await do_something_with_lock();
await do_something_else_with_lock();
// Now the lock will be released.
});
// The lock has been released.
await do_something_else_without_lock();
The lock is automatically released when the callback returns.
Locks are scoped to origins (https://example.com != https://example.org:8080), and work across tabs/workers.
Lock requests are queued (first come, first served), unlike in some other languages where the lock is passed to some thread at random.
navigator.locks.query() can be used to see what holds the lock and who is in the queue to acquire it.
There is a mode="shared" to implement a readers-writer lock if you need it.
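For instance, a sketch of a reader taking the lock in shared mode (do_read_with_lock is hypothetical):
// Several shared holders can run together; an exclusive request queued
// behind them waits until they have all released the lock.
await navigator.locks.request('my_resource', { mode: 'shared' }, async (lock) => {
  await do_read_with_lock();
});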
Why don't you disable the button and enable it after you finish the event?
<input type="button" id="xx" onclick="checkEnableSubmit(true);yourFunction();">
<script type="text/javascript">
function checkEnableSubmit(status) {
  document.getElementById("xx").disabled = status;
}
function yourFunction(){
  //add your functionality
  checkEnableSubmit(false); // pass real booleans: the string 'false' is truthy and would keep the button disabled
}
</script>
Happy coding !!!
Some addition to JoshRiver's answer, adapted to my case:
var functionCallbacks = [];
var functionLock = false;
var getData = function (url, callback) {
if (functionLock) {
functionCallbacks.push(callback);
} else {
functionLock = true;
functionCallbacks.push(callback);
$.getJSON(url, function (data) {
while (functionCallbacks.length) {
var thisCallback = functionCallbacks.pop();
thisCallback(data);
}
functionLock = false;
});
}
};
// Usage
getData("api/orders",function(data){
barChart(data);
});
getData("api/orders",function(data){
lineChart(data);
});
There will be just one API call, and these two functions will consume the same result.
Locks still have uses in JS. In my experience, I've only needed locks to prevent spam clicking on elements that make AJAX calls.
If you have a loader set up for AJAX calls, then this isn't required (nor is disabling the button after clicking).
But either way here is what I used for locking:
var LOCK_INDEX = {}; // a plain object keyed by lock name
function LockCallback(key, action, manual) {
if (LOCK_INDEX[key])
return;
LOCK_INDEX[key] = true;
action(function () { delete LOCK_INDEX[key] });
if (!manual)
delete LOCK_INDEX[key];
}
Usage:
Manual unlock (usually for XHR)
LockCallback('someKey',(delCallback) => {
//do stuff
delCallback(); //Unlock method
}, true)
Auto unlock
LockCallback('someKey',() => {
//do stuff
})