In a few places I have needed to call setTimeout, eg:
setTimeout(() => this.isSaving[index] = false, 500);
When my component is destroyed, will that timeout continue to emit? In other words, do I need to capture the returned observable, like this:
this.subTimeout = setTimeout(() => this.isSaving[index] = false, 500);
and then unsub in my destroy hook:
ngOnDestroy() {
this.subTimeout.unsubscribe();
}
This gets laborious if I have to initiate several setTimeouts in my component. Is there an easier way to destroy them all? Like maybe with takeUntil(destroy$)?
do I need to unsubscribe from setTimeout calls?...When my component is destroyed, will that timeout continue to emit?
You don't need to worry about the callback executing repeatedly: setTimeout executes just once and then it's done. It is still good practice to account for the possibility that the timer fires only after the component has been destroyed, though. In my own applications I push every subscription that needs to be undone into an array and run a standard batch unsubscribe in ngOnDestroy. The same can be done with your timeouts:
// component property
timeOutIDs: number[] = [];
...
// triggering a timeout and capturing the id
this.timeOutIDs.push(
  setTimeout(() => this.isSaving[index] = false, 500)
);
...
// inside ngOnDestroy
this.timeOutIDs.forEach(id => clearTimeout(id));
With this approach you won't need multiple variables to store different timeout ids, and you can be sure that all your timeouts are cleared properly, as long as you always push the return value of setTimeout into your ids array.
Additional note: you should always cancel setInterval calls and always unsubscribe from open-ended subscriptions.
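For instance, a minimal sketch of that for an interval, where intervalId and poll() are illustrative names:
// component property (illustrative name; in the browser setInterval returns a numeric id)
intervalId: number;

ngOnInit() {
  // open-ended: fires forever until cancelled
  this.intervalId = setInterval(() => this.poll(), 1000);
}

ngOnDestroy() {
  // without this, the callback keeps firing after the component is destroyed
  clearInterval(this.intervalId);
}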
It depends on what's going on inside the setTimeout callbacks, but generally they should be cancelled. There is no way they could magically be cancelled for you: their callbacks will fire anyway and may cause errors or undesirable side effects.
It is good practice to assign the timeout handle somewhere, at least for testing purposes. The cleanup you describe in ngOnDestroy, this.subTimeout.unsubscribe(), presumes that the timeout is created through RxJS:
this.subTimeout = Observable.timer(500).subscribe(() => {
  this.isSaving[index] = false;
});
The way it can be improved depends on what purpose these timeouts serve.
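Since the question mentions takeUntil(destroy$): a minimal sketch of that pattern, assuming a current RxJS version and timers instead of raw setTimeout calls (destroy$ is an illustrative name; isSaving and index come from the question):
import { Subject, timer } from 'rxjs';
import { takeUntil } from 'rxjs/operators';

// component property (illustrative name)
destroy$ = new Subject<void>();

// each "timeout" becomes a timer that is cut off automatically on destroy
timer(500)
  .pipe(takeUntil(this.destroy$))
  .subscribe(() => this.isSaving[index] = false);

ngOnDestroy() {
  this.destroy$.next();
  this.destroy$.complete();
}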
Related
I have a question/problem with a while loop.
I need to wait until something changes outside the while loop.
Let's say I have this while loop:
window.changeMe = true;
while(window.changeMe){
}
Now I have these two options:
Change the changeMe variable via the console / JavaScript execution
Change the changeMe variable via a WebSocket event
But neither is working: if I change the variable directly, the change never takes effect.
If I trigger a WebSocket event, its handler never gets called.
Maybe it's BLOCKED... so is there any other way to change the variable?
I know I can use await and it already works that way, but the problem is that these functions with the while loop are called via an addon,
and using many awaits looks kinda ugly for the addon creator :(
A system with setTimeout & callbacks also works, but it also looks kinda ugly...
Yes, you are correct. An infinite while loop occupies the main thread and prevents the JavaScript event loop from executing any other code.
In order to imitate the same behavior you can implement your own while loop that is friendly to asynchronous events and external code execution. You have to use:
tail recursion in order to minimize the memory footprint,
setTimeout as a mechanism to allow other parts of your code to run asynchronously.
EXAMPLE:
window.changeMe = true;
let stop = setTimeout(() => { console.log("External change stop"); window.changeMe = false; }, 4000)
var whileLoop = () => {
  console.log("Inside: ", window.changeMe)
  return window.changeMe
    ? setTimeout(() => { whileLoop(); }, 0)
    : false
}
whileLoop()
console.log("Outside: ", window.changeMe)
Here is a fiddle:
https://jsfiddle.net/qwmosfrd/
Here is a setInterval fiddle:
https://jsfiddle.net/2s6pa1jo/
Promise return value example fiddle:
https://jsfiddle.net/0qum6gnf/
JavaScript is single-threaded. If you have while (true) {}, then nothing else outside the while loop can change the state of your program. You need to change your approach. You probably want to set up event listeners instead or put this inside an async function so you can use await to release execution, or some other asynchronous API. But plain vanilla while () {} is synchronous and cannot be affected by other things while it is running.
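For example, a minimal sketch of the async/await variant, where waitFor is a hypothetical helper and the 50 ms poll interval is arbitrary:
// resolves once condition() returns true, yielding to the event loop between checks
function waitFor(condition, interval = 50) {
  return new Promise(resolve => {
    const check = () => (condition() ? resolve() : setTimeout(check, interval));
    check();
  });
}

async function run() {
  window.changeMe = true;
  // unlike the spinning while loop, this releases control so the console
  // or a WebSocket handler gets a chance to flip the flag
  await waitFor(() => !window.changeMe);
  console.log('changeMe is now false');
}

run();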
You can't use a while loop in that way in nodejs.
Nodejs runs your Javascript in a single thread and the overall architecture of the environment is event driven. What your while loop is doing is a spin loop so while that loop is running, no other events can ever run. You have to return control back to the event loop before any other events can run. That means that timers, network events, etc... cannot run while your spin loop is running. So, in nodejs, this is never the right way to write code. It will not work.
The one exception could be if there was an await inside the loop which would pause the loop and allow other events to run.
So, while this is running:
while(window.changeMe){
}
No other events can run and thus nothing else gets a chance to change the changeMe property. Thus, this is just an infinite loop that can never complete and nothing else gets a chance to run.
Instead, you want to change your architecture to be event driven so that whatever changes the changeMe property emits some sort of event that other code can listen to so it will get notified when a change has occurred. This can be done by having the specific code that changes the property also notify listeners or it can be done by making the property be a setter method so that method can see that the property is being changed and can fire an event to notify any interested listeners that the value has changed.
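A minimal sketch of that setter-based approach, with an illustrative listeners set standing in for whatever event mechanism you prefer:
// back the property with a private variable and notify listeners from the setter
const listeners = new Set();
let _changeMe = true;

Object.defineProperty(window, 'changeMe', {
  get() { return _changeMe; },
  set(value) {
    _changeMe = value;
    listeners.forEach(fn => fn(value));   // push the change out instead of being polled
  }
});

// instead of spinning in a while loop, register interest in the change
listeners.add(value => {
  if (!value) console.log('changeMe became false, continue here');
});

// the console or a WebSocket handler can now simply assign:
window.changeMe = false;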
I was just going through the code of react-slick and came across the following piece of code:
this.callbackTimers.push(
  setTimeout(() => this.setState({ animating }), 10)
);
In componentWillUnmount the callbackTimers are cleared like so:
componentWillUnmount = () => {
  if (this.callbackTimers.length) {
    this.callbackTimers.forEach(timer => clearTimeout(timer));
    this.callbackTimers = [];
  }
};
Is the sole purpose of using the array to free memory, or is there something that I have missed here?
Why not just call the setTimeout directly:
setTimeout(() => this.setState({ animating }), 10)
instead of using an array? I do see callbackTimers being used elsewhere too, but I don't know why exactly this array is needed apart from freeing memory. Is there any other purpose to this array?
The line in question can be found HERE.
The clearTimeout() function is only necessary to cancel pending timers; you don't need to call it just to free memory.
In the code in your question, the timers are cleared in the componentWillUnmount lifecycle method to cancel any timers that are still pending when the component is about to unmount. This prevents the callback of any pending timer from executing once the component has unmounted, which would otherwise call setState on an unmounted component.
I am using lodash throttle like this
const throttledFetch = _.throttle(fetch, 10000, { 'leading': false });
I need to trigger this upon a certain notification event I am getting from a WebSocket. The idea was that if I get 10 notifications at almost the same time, the fetch function should fire only once, after the 10-second wait.
Instead, what is happening is that the fetch function gets fired 10 times after the 10-second delay.
How can I fix this? I could use any other methods.
Any suggestion is welcome
The throttled function should remain the same between re-renders, which means we have to use React's useCallback hook. This would work if you changed:
const throttledFetch = _.throttle(fetch, 10000, { 'leading': false });
To
const throttledFetch = useCallback(_.throttle(fetch, 10000, { 'leading': false }), []);
Keep a counter for the invocations and check it so you invoke the fetch only once.
If we are going to call the fetch function only once for all N requests arriving at the same time, we should be using debounce instead of throttle. That is a better way to respond to user interaction or to an event from the WebSocket listener.
The response should be fast, and the function should not be called frequently. To fulfill that requirement I would go with _.debounce.
As answered by Tuxedo Joe in the answer above, we can go with the useCallback approach, since the reference then stays the same between re-renders.
const asyncFetch = useCallback(_.debounce(fetch, 10000, { 'leading': false }), []);
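As a rough illustration outside React, a trailing-only debounce collapses a burst of notifications into a single call; here socket is an assumed WebSocket instance and the URL is a placeholder:
// ten messages arriving in a burst result in one fetch, 10 s after the last message
const debouncedFetch = _.debounce(fetch, 10000, { leading: false, trailing: true });

socket.addEventListener('message', () => {
  debouncedFetch('/api/data');
});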
I have an asynchronous function running in my web application that enables a chat input. In a different component I need to set a variable to that input and then focus() on it when certain conditions are met. Unfortunately, because of the asynchronous nature of the function that enables it, the chat input DOM element isn't always available when I try to grab it. Being familiar with how setTimeout() works with the call stack, I wrapped my declaration in a setTimeout and everything (seemingly) works as expected now.
So my question is if this is a good practice or not? I'm using React/Redux and will have to do a lot of prop threading and extra logic to get a seemingly easy task accomplished without the setTimeout.
It is an alright practice ;)
It gets the job done, but it is usually preferable to work with callbacks or promises instead of polling to see if the DOM is ready. The main failing of a setTimeout approach is that you are setting a timer, and what if the resource (the chat plugin) takes longer to load than the timer you set?
// Run it
main();

// Supporting code
function main() {
  let attempts = 0;
  const maxAttempts = 10;

  tryUpdate();

  function tryUpdate() {
    // Call it once
    attempts++;
    const success = updateAndFocus();
    console.log(attempts);
    // Keep calling it every 100ms until it succeeds or we give up
    if (!success && attempts < maxAttempts) {
      setTimeout(() => tryUpdate(), 100);
    }
  }
}

function updateAndFocus() {
  const el = document.getElementById('findme');
  if (!el) return false;
  // do work
  el.focus();
  return true;
}
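As an alternative to the timer, a MutationObserver can watch the DOM and run the callback as soon as the element shows up; the 'findme' id is carried over from the sketch above:
// resolves with the element once something with the given id is in the DOM
function waitForElement(id) {
  return new Promise(resolve => {
    const existing = document.getElementById(id);
    if (existing) return resolve(existing);

    const observer = new MutationObserver(() => {
      const el = document.getElementById(id);
      if (el) {
        observer.disconnect();
        resolve(el);
      }
    });
    observer.observe(document.body, { childList: true, subtree: true });
  });
}

waitForElement('findme').then(el => el.focus());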
Suppose I load some Flash movie that I know at some point in the future will call window.flashReady and will set window.flashReadyTriggered = true.
Now I have a block of code that I want to execute when the Flash is ready. I want to execute it immediately if window.flashReady has already been called, and I want to register it as the callback in window.flashReady if it has not yet been called. The naive approach is this:
if (window.flashReadyTriggered) {
  block();
} else {
  window.flashReady = block;
}
So the concern I have is that the expression in the if condition evaluates to false, but then, before window.flashReady = block is assigned, window.flashReady is triggered by the external Flash. Consequently, block is never called.
Is there a better design pattern to accomplish the higher level goal I'm going for (e.g., manually calling the flashReady callback)? If not, am I safe, or are there other things I should do?
All Javascript event handler scripts are handled from one master event queue system. This means that event handlers run one at a time and one runs until completion before the next one that's ready to go starts running. As such, there are none of the typical race conditions in Javascript that one would see in a multithreaded language where multiple threads of the language can be running at once (or time sliced) and create real-time conflict for access to variables.
Any individual thread of execution in javascript will run to completion before the next one starts. That's how Javascript works. An event is pulled from the event queue and then code starts running to handle that event. That code runs by itself until it returns control to the system where the system will then pull the next event from the event queue and run that code until it returns control back to the system.
Thus the typical race conditions that are caused by two threads of execution going at the same time do not happen in Javascript.
This includes all forms of Javascript events including: user events (mouse, keys, etc..), timer events, network events (ajax callbacks), etc...
The only place you can actually do multi-threading in Javascript is with the HTML5 Web Workers or Worker Threads (in node.js), but they are very isolated from regular javascript (they can only communicate with regular javascript via message passing) and cannot manipulate the DOM at all and must have their own scripts and namespace, etc...
While I would not technically call this a race condition, there are situations in Javascript because of some of its asynchronous operations where you may have two or more asynchronous operations in flight at the same time (not actually executing Javascript, but the underlying asynchronous operation is running native code at the same time) and it may be unpredictable when each operation will complete relative to the others. This creates an uncertainty of timing which (if the relative timing of the operations is important to your code) creates something you have to manually code for. You may need to sequence the operations so one runs and you literally wait for it to complete before starting the next one. Or, you may start all three operations and then have some code that collects all three results and when they are all ready, then your code proceeds.
In modern Javascript, promises are generally used to manage these types of asynchronous operations.
So, if you had three asynchronous operations that each return a promise (like reading from a database, fetching a request from another server, etc...), you could manually sequence them like this:
a().then(b).then(c).then(result => {
  // result here
}).catch(err => {
  // error here
});
Or, if you wanted them all to run together (all in flight at the same time) and just know when they were all done, you could do:
Promise.all([a(), b(), c()]).then(results => {
  // results here
}).catch(err => {
  // error here
});
While I would not call these race conditions, they are in the same general family of designing your code to control indeterminate sequencing.
There is one special case that can occur in some situations in the browser. It's not really a race condition, but if you're using lots of global variables with temporary state, it could be something to be aware of. When your own code causes another event to occur, the browser will sometimes call that event handler synchronously rather than waiting until the current thread of execution is done. An example of this is:
a click occurs
the click event handler changes focus to another field
that other field has an event handler for onfocus
browser calls the onfocus event handler immediately
onfocus event handler runs
the rest of the click event handler runs (after the .focus() call)
This isn't technically a race condition because it's 100% known when the onfocus event handler will execute (during the .focus() call). But, it can create a situation where one event handler runs while another is in the middle of execution.
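A small sketch that makes this ordering visible, using illustrative element ids:
const fieldA = document.getElementById('fieldA');
const fieldB = document.getElementById('fieldB');

fieldA.addEventListener('click', () => {
  console.log('click handler: before focus()');
  fieldB.focus();                                 // the focus handler below runs synchronously here
  console.log('click handler: after focus()');    // logged last
});

fieldB.addEventListener('focus', () => {
  console.log('focus handler: runs in the middle of the click handler');
});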
JavaScript is single threaded. There are no race conditions.
When there is no more code to execute at your current "instruction pointer", the "thread" "passes the baton", and a queued window.setTimeout or event handler may execute its code.
You will get a better understanding of JavaScript's single-threading approach by reading about node.js's design ideas.
Further reading:
Why doesn't JavaScript support multithreading?
It is important to note that you may still experience race conditions if you, e.g., use multiple async XMLHttpRequests, where the order of the returned responses is not defined (that is, responses may not come back in the same order they were sent). Here the output depends on the sequence or timing of other uncontrollable events (server latency etc.). This is a race condition in a nutshell.
So even using a single event queue (like in JavaScript) does not prevent events from arriving in an uncontrollable order, and your code should take care of this.
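One common way to take care of it is a "latest request wins" guard; a sketch using fetch, where the URL and the results element are placeholders:
let latestRequestId = 0;

function search(query) {
  const requestId = ++latestRequestId;             // tag this request
  fetch('/search?q=' + encodeURIComponent(query))
    .then(res => res.json())
    .then(results => {
      // drop responses that arrive after a newer request was issued
      if (requestId !== latestRequestId) return;
      document.getElementById('results').textContent = JSON.stringify(results);
    });
}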
Sure you need to worry about it. It happens all the time:
<button id="btn1">Button 1</button>
<button id="btn2">Button 2</button>
<div id="view"></div>

<script>
document.getElementById("btn1").onclick = () => {
  const el = document.getElementById("view");
  fetch('/some/api')
    .then(res => res.json())
    .then(data => { el.innerHTML = JSON.stringify(data); });
};

document.getElementById("btn2").onclick = () => {
  const el = document.getElementById("view");
  fetch('/some/other/api')
    .then(res => res.json())
    .then(data => { el.innerHTML = JSON.stringify(data); });
};
</script>
Some people don't view it as a race condition.
But it really is.
Race condition is broadly defined as "the behavior of an electronic, software, or other system where the output is dependent on the sequence or timing of other uncontrollable events".
If the user clicks these 2 buttons within a brief period, the output is not guaranteed to reflect the order of clicking. It depends on which API request resolves sooner. Moreover, the DOM element you're referencing can be removed by some other event (like a route change).
You can mitigate this race condition by disabling the buttons or showing a spinner while the loading operation is in progress, but that's cheating. You should have some mutex/counter/semaphore at the code level to control your asynchronous flow.
To adapt it to your question, it depends on what "block()" is. If it's a synchronous function, you don't need to worry. But if it's asynchronous, you have to worry:
function block() {
  window.blockInProgress = true;
  // some asynchronous code
  return new Promise(/* window.blockInProgress = false */);
}

if (!window.blockInProgress) {
  block();
} else {
  window.flashReady = block;
}
This code makes sense if you want to prevent block from being called multiple times. But if you don't care, or if block is synchronous, you shouldn't worry. If you're worried that a global variable's value can change while you're checking it, don't be: it is guaranteed not to change unless you call some asynchronous function.
A more practical example: consider that we want to cache AJAX requests.
function fetchCached(params) {
  if (!dataInCache()) {
    return fetch(params).then(data => putToCache(data));
  } else {
    return getFromCache();
  }
}
So what happens if we call this code multiple times? We don't know which data will return first, so we don't know which data will be cached. The first 2 times it will return fresh data, but by the 3rd call we don't know which response will be returned.
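One common mitigation, sketched here with the hypothetical helpers above folded away, is to cache the in-flight promise rather than the resolved data, so concurrent callers share one request:
let cachedPromise = null;

function fetchCached(params) {
  if (!cachedPromise) {
    // the promise itself is cached, so every later caller reuses the same request
    cachedPromise = fetch(params).then(res => res.json());
  }
  return cachedPromise;
}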
Yes, of course there are race conditions in Javascript. It is based on the event loop model and hence exhibits race conditions for async computations. The following program will either log 10 or 16 depending on whether incHead or sqrHead is completed first:
const rand = () => Math.round(Math.random() * 100);

const incHead = xs => new Promise((res, rej) =>
  setTimeout(ys => {
    ys[0] = ys[0] + 1;
    res(ys);
  }, rand(), xs));

const sqrHead = xs => new Promise((res, rej) =>
  setTimeout(ys => {
    ys[0] = ys[0] * ys[0];
    res(ys);
  }, rand(), xs));

const state = [3];

const foo = incHead(state);
const bar = sqrHead(state);

Promise.all([foo, bar])
  .then(_ => console.log(state));
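If a deterministic result is needed, the race goes away once the two operations are sequenced explicitly, for example:
incHead(state)
  .then(() => sqrHead(state))
  .then(() => console.log(state));   // always logs [ 16 ]: (3 + 1) * (3 + 1)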