Doubts regarding variable scope when declaring variables via let - javascript

I've run into an assertion in a book that seems strange to me. I suppose I'm just missing something, but it would be great if you could shed some light on the situation.
ajax('<host1>/items',
  items => {
    for (let item of items) {
      ajax(`<host2>/items/${item.getId()}/info`,
        dataInfo => {
          ajax(`<host3>/files/${dataInfo.files}`,
            processFiles);
        });
    }
  });
The author draws attention to the following:
There’s another hidden problem with this code. Can you guess what it is? It occurs when you mix a synchronous artifact like a for..of imperative block invoking asynchronous functions. Loops aren’t aware that there’s latency in those calls, so they’ll always march ahead no matter what, which can cause some really unpredictable and hard-to-diagnose bugs. In these situations, you can improve matters by creating closures around your asynchronous functions, managed by using forEach() instead of the loop.
Instead, they offer the following:
ajax('/data',
  items => {
    items.forEach(item => {
      // process each item
    });
  });
Frankly speaking, I expected that using let in the loop gives each iteration its own binding for the closures to capture, so I don't see any hidden problem there.

You are correct; if the author's comment refers to that exact code snippet, they were mistaken.
Loops aren’t aware that there’s latency in those calls [...] you can improve matters by [...] using forEach()
That changes nothing: forEach() is just as unaware of async calls made inside its callback as a for loop is of async calls made in its body. forEach() will "always march ahead" in exactly the same way a for loop will.
With let you cannot encounter the issue that the author seems to be worried about, as each iteration of the loop has its own item just like when using items.forEach( item => { ... .
Even with var there is no issue with that code, since the variable item is not used inside the callback to the ajax request. You could produce the author's concern by using var and using item inside the callback, such as: console.log( item.getId() );.
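To make that concrete, here is a minimal sketch of the difference, with setTimeout standing in for the ajax latency and getId() as a hypothetical method on the items:
var items = [{ getId: () => 1 }, { getId: () => 2 }, { getId: () => 3 }];

// With var there is a single `item` binding shared by all callbacks,
// so by the time the timers fire it holds the last element.
for (var item of items) {
  setTimeout(() => console.log('var:', item.getId()));   // 3, 3, 3
}

// With let each iteration gets its own `item` binding for the callback to close over.
for (let item of items) {
  setTimeout(() => console.log('let:', item.getId()));   // 1, 2, 3
}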
Note: It is important to be aware that the callbacks will most likely run in a different (seemingly random) order than they were initiated in. If you aren't aware of that it can cause surprising bugs, but that also has nothing to do with using a loop vs. forEach.

The authors of that book seem to have no clue. There is no problem of for (let … of …) that .forEach(…) would fix.
They talk about
creating closures around your asynchronous functions, managed by using forEach() instead of the loop
but the closure is not created by the forEach callback function, the closure is the callback passed into the ajax function. It closes over the surrounding scope, and there is hardly any difference between the for block scope (when using let or const) and the function body scope (when using forEach).
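For illustration, a small sketch (with setTimeout standing in for the ajax call) showing that the two scopes behave the same way for an asynchronous callback:
const items = ['a', 'b', 'c'];

// for..of with let: each iteration has its own `item` binding.
for (let item of items) {
  setTimeout(() => console.log('for..of:', item));   // a, b, c
}

// forEach: each invocation of the callback has its own `item` parameter.
items.forEach(item => {
  setTimeout(() => console.log('forEach:', item));   // a, b, c
});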

Related

JsHint W083 Don't Make Functions in Loop and jQuery's $.each() function

I'm currently using JsHint and am receiving warning W083: "Don't make functions within a loop". I read this post from JsLint Error Explanations and understand why you should not do this, which essentially boils down to the asynchronous nature of JavaScript and the potential for variables to be overwritten.
However, I also read in a few other posts here on SO that although this is a faux pas it does not always lead to bugs depending on the situation.
My situation in particular that JsHint is complaining about is a for-loop that uses the jQuery $(selector).each() function within it. This function takes a function as a parameter. Below is a snippet of the code that I'm concerned about. Don't worry about what it actually does+ since I'm really just using this as an example:
for (var i = 0; i < sections.length; i++) {
  disableSectionState[sections[i].name] = {};
  $('.' + sections[i].name).each(function (index, element) {
    var id = $(element).attr('id');
    disableSectionState[sections[i].name][id] = $(element).attr('disabled');
  });
  if (sections[i].isChecked) {
    $('.' + sections[i].name).attr('disabled', 'disabled');
  }
}
Essentially, this is just a nested for-each loop within a for-loop, so I didn't think this would be too dangerous, but I'm obviously not familiar with all of the quirks in js.
As of right now, everything is working properly with this function in particular, but I wanted to ask the community about the dangers of this using jQuery's each function within a loop.
To prevent this being marked as a dupe, I did see this SO question, but the only answer doesn't go into any detail or explain anything, and based on the comments it looks like an XY problem anyway. I'm more interested in the why, when at its core this is essentially a nested loop.
Is it really that much safer for this function to be extracted and named outside of the loop? If I copied the loop counter to a variable in scope of the anonymous function, would that eliminate the potential danger of this design? Is that function executed completely asynchronously outside of the main for-loop?
+In case you're actually interested: This code is used to determine if certain fields should be disabled at page load if certain options are enabled.
The problem isn't using jQuery's each within the loop, it's repeatedly declaring a function. That can lead to some odd closure issues (the function closes over the loop counter, which keeps getting updated and can change out from under it) and can be a non-trivial performance problem on less clever VMs.
All JSHint is asking you to change is:
function doStuff(index, element) {
  var id = $(element).attr('id');
  disableSectionState[sections[i].name][id] = $(element).attr('disabled');
}

for (var i = 0; i < sections.length; i++) {
  disableSectionState[sections[i].name] = {};
  $('.' + sections[i].name).each(doStuff);
  if (sections[i].isChecked) {
    $('.' + sections[i].name).attr('disabled', 'disabled');
  }
}
Most of the dangers come when you're calling something asynchronously from within a loop and close over the loop counter. Take, for example:
for (var i = 0; i < urls.length; ++i) {
  $.ajax(urls[i], {success: function () {
    console.log(urls[i]);
  }});
}
You may think it will log each URL as the requests succeed, but since i has probably already reached urls.length before any requests have come back from the server, you're more likely to see the same out-of-range urls[i] (undefined) logged repeatedly. It makes sense if you think about it, but it can be a subtle bug if you aren't paying close attention to closures or have a more complex callback.
Not declaring functions within the loop forces you to explicitly bind or pass the loop counter, among other variables, and prevents this sort of thing from accidentally cropping up.
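For example, one way to make that explicit is to pass the URL into a named function instead of closing over the loop counter. A sketch, using a hypothetical fetchAndLog helper and assuming the same urls array and jQuery as above:
function fetchAndLog(url) {
  $.ajax(url, {success: function () {
    console.log(url);   // `url` is a parameter of this call, not the shared loop counter
  }});
}

for (var i = 0; i < urls.length; ++i) {
  fetchAndLog(urls[i]);
}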
In some more naive implementations, the machine may also actually create a closure scope for the function every iteration of the loop, to avoid any potential oddities with variables that change within the loop. That can cause a lot of unnecessary scopes, which will have performance and memory implications.
JSHint is a very opinion-based syntax checker. It's kind of like deciding which citation style to use on a paper, MLA or APA: if you go with one, you just follow its rules, because most of the time it is "right" and it's rarely ever wrong. JSHint also says to always use === but there may be cases to use == instead.
You can either follow the rules or ignore them with the following:
// Code here will be linted with JSHint.
/* jshint ignore:start */
// Code here will be ignored by JSHint.
/* jshint ignore:end */
If you are going to use JSHint, I would just comply. It tends to keep the code a little more consistent, and when you start trying to work around one warning or error, it tends to start creating a bunch more.
Is it really that much safer for this function to be extracted and named outside of the loop?
In practice, yes. In general, on a case-by-case basis, maybe not.
If I copied the loop counter to a variable in scope of the anonymous function, would that eliminate the potential danger of this design?
No.
Is that function executed completely asynchronously outside of the main for-loop?
Pretty sure it is.

JavaScript callback sequence of operations

I am reading a book called You don't know JS: Async and Performance. They give the following example of a problem with nested callbacks and I was wondering if someone could elaborate on the specifics for me.
doA(function(){
  doC();
  doD(function(){
    doF();
  });
  doE();
});
doB();
According to the author, the code will execute in the order denoted alphabetically, meaning doA, then doB, and so on. I may have been able to guess this based on experience, but I am trying to get a better grasp as to exactly why this happens.
Edit: I posted the question because the example in the book didn't make any sense to me and I was hoping to get some clarification. Maybe I should have just said that instead of trying to rescue the author with some explanation. I edited the question a little to try and make that clear.
Is it because the event loop runs for the entire outer "wrapper" first before it starts the inner wrapper?
No. If the order of execution really is A,B,C,D,E,F, then it is because that is how the functions are written to invoke their callbacks. If the functions were written differently, it could just as easily be A,C,D,E,F,B or A,C,D,F,E,B or, it could even just be A,B, if A does not accept a callback function.
All of this speculation...
Here is what I think is happening. The event loop is first created as doA and doB because JavaScript is not really "concerned" with the contents of those lines at first. When JavaScript runs the line doA(function... it then adds the callback function to the end of the event loop placing doC and doD behind doB.
... is more or less nonsense.
Here is a drastically simplified example:
function doA(callback) { callback(); }
doA(function () {
  doB();
});
doC();
Here, the order is A,B,C because that is how A is written.
However, if we change doA so it invokes its callback asynchronously...
function doA(callback) { setTimeout(callback); }
... then the order changes completely to A,C,B.
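For instance, a runnable sketch of the asynchronous variant, with doB and doC stubbed out as logs and a log added inside doA so the ordering is visible:
function doA(callback) {
  console.log('A');
  setTimeout(callback);   // defer the callback to a later turn of the event loop
}
function doB() { console.log('B'); }
function doC() { console.log('C'); }

doA(function () {
  doB();
});
doC();
// Logs: A, C, B (doC runs before the deferred callback invokes doB)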
This has nothing to do with JavaScript being "concerned" with any of the inner or outer code, or where JavaScript "chooses" to place the callbacks in the "event loop". These are not real things. JavaScript doesn't "choose" to do anything. It just executes your code. It's entirely about what each function does and whether the function accepts a callback and how that callback is invoked.

Is it fine to use Array.prototype.forEach in a loop?

JSHint is complaining at me because I'm looping over an object using for(o in ...), then using o.somearray.forEach(function(){...}); inside. It's saying not to create functions within loops, but does it even matter in this case? It looks a bit nicer since there are fewer lines and it looks (slightly) better, but are there any major implications from it?
Is it any better to use a normal for-loop and iterate over the array like that, or is it fine to create a function and use the ECMA 5 version?
I'm doing something like this:
for (var i in data) {
  data[i].arr.forEach(function(...) {
    // do magic
  });
}
It is fine to use forEach; what JSHint is suggesting here is that the function you are passing to forEach should be created outside of the loop, something like the following:
var doMagic = function(...) {
  // do magic
};

for (var i in data) {
  data[i].arr.forEach(doMagic);
}
Creating functions within a loop is discouraged because it is inefficient: the JavaScript interpreter will create a new instance of the function on every loop iteration. Additional details are provided in JSLint Error Explanations: Don't make functions within a loop.
Yes, it's fine to nest the constructs. It's also fine to create a "new" function each loop. And,
Performance differences when creating many "new" functions within the same execution context are an implementation detail; it is not inherently slower. (See this jsperf test case1)
Even though a new function object is created on each iteration in the "new" case, the same number of execution contexts are created, namely the current execution context and the one created when the function is called. Smarter JavaScript implementations can trivially take advantage of this; or they may not.
I prefer the inline-method in this particular case.
1 Test on your particular JavaScript implementation, of course:
IE 10, Chrome 33, and FF 23 show equivalent performance
FF 27 favors the "new" function case
Safari 5 buggers the numbers and runs slower in the "new" case

Does the use of prototyping in javascript have a negative effect on ajax calls and asynchronous code?

If I had the following object and prototyped functionality added on.
function Hello(input) {
  this.input = input;
}

Hello.prototype.ajaxcall = function(id) {
  $.get("/ajax/method", {parameter: input}, function(data) {
    $("#" + id).html(data);
  });
};
Forgive the syntax if it's not completely correct, but what it should be doing is taking in an element id, performing an ajax call, and assigning the ajax call's result to the innerHTML of that element. Will the fact that the ajaxcall function is shared across all instances of an object cause any problems with regard to which data gets assigned to which id if, for example, 20 objects were all created together and had this function called immediately?
If this is the case, does it make sense to put asynchronous methods inside the object constructor instead?
What would happen if 20 objects were created and the ajaxcall function were called on each? Nothing much. The ajax calls would run asynchronously. When they have finished, their callbacks are queued so that they run on the main thread once the currently running operation on the main thread has finished.
So the callback functions all run synchronously, one after another from that queue, when there is time for them. Nothing bad can happen here.
I don't understand your question about the constructor. What would that change? If you use your Hello objects, they each have an instance variable, which is enclosed in the callback's closure. Creating a new function doesn't change the value in another callback function.
If you use the same IDs, the content could flash when the text changes, and you don't know which callback would run last, but that's the worst thing that could happen.
There should be no issue. You're calling the function 20 distinct times with 20 different ids.
Conceptually, though, I'm not seeing why this is part of your object. The function does not use anything at all from the object itself.
This particular example would work. Your function makes no use of any instance variables, so it doesn't really make sense to declare it that way, but it makes even less sense to move it into the constructor. Still it will work because the id argument will not be shared between calls.
EDIT: So now that you've changed it so that it does use an instance variable, you've got the syntax wrong; it needs to be
{parameter : this.input}
But aside from that it will still work. The asynchronous behaviour is not a problem for the code shown.
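For reference, a sketch of the method with both fixes applied (the missing + in the selector and this.input in the request data):
function Hello(input) {
  this.input = input;
}

Hello.prototype.ajaxcall = function (id) {
  // `this.input` is read before $.get runs, and `id` is local to this call,
  // so 20 instances calling this at once won't interfere with each other.
  $.get("/ajax/method", { parameter: this.input }, function (data) {
    $("#" + id).html(data);
  });
};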

Can I use setTimeout to create a cheap infinite loop?

var recurse = function(steps, data, delay) {
  if (steps == 0) {
    console.log(data.length);
  } else {
    setTimeout(function(){
      recurse(steps - 1, data, delay);
    }, delay);
  }
};
var myData = "abc";
recurse(8000, myData, 1);
What troubles me with this code is that I'm passing a string on 8000 times. Does this result in any kind of memory problem?
Also, if I run this code with node.js, it prints immediately, which is not what I would expect.
If you're worried about the string being copied 8,000 times, don't be: there's only one copy of the string; what gets passed around is a reference.
The bigger question is whether the object created when you call a function (the "variable binding object" of the "execution context") is retained, because you're creating a closure, which has a reference to the variable object for the context and thus keeps it in memory as long as the closure is still referenced somewhere.
And the answer is: Yes, but only until the timer fires, because once it does nothing is referencing the closure anymore and so the garbage collector can reclaim them both. So you won't have 8,000 of them outstanding, just one or two. Of course, when and how the GC runs is up to the implementation.
Curiously, just earlier today we had another question on a very similar topic; see my answer there as well.
It prints immediately because the program executes "immediately". On my Intel i5 machine, the whole operation takes 0.07s, according to time node test.js.
For the memory problems, and whether this is a "cheap infinite loop", you'll just have to experiment and measure.
If you want to create an asynchronous loop in node, you could use process.nextTick. It will be faster than setTimeout(func, 1).
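A minimal sketch of what that could look like in Node (keep in mind that a long chain of process.nextTick callbacks will delay I/O until the chain finishes, so this is only a sketch of the counting part):
function recurse(steps, data) {
  if (steps === 0) {
    console.log(data.length);
  } else {
    process.nextTick(function () {
      recurse(steps - 1, data);   // each call starts on a fresh stack, so no overflow
    });
  }
}

recurse(8000, "abc");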
In general, JavaScript does not support tail call optimization, so writing recursive code normally runs the risk of causing a stack overflow. If you use setTimeout like this, it effectively resets the call stack, so a stack overflow is no longer a problem.
Performance will be the problem though, as each call to setTimeout generally takes a fair bit of time (around 10 ms), even if you set delay to 0.
The '1' is 1 millisecond. It might as well be a for loop. 1 second is 1000. I recently wrote something similar checking on the progress of a batch of processes on the back end and set a delay of 500. Older browsers wouldn't see any real difference between 1 and about 15ms if I remember correctly. I think V8 might actually process faster than that.
I don't think garbage collection will happen to any of the functions until the last iteration is complete, but these newer generations of JS JIT compilers are a lot smarter than the ones I know more about, so it's possible they'll see that nothing is really going on after the timeout and pull those params from memory.
Regardless, even if memory is reserved for every instance of those parameters, it would take a lot more than 8000 iterations to cause a problem.
One way to safeguard against potential problems with more memory-intensive parameters is to pass in an object with the params you want. Then, I believe, the params will just be a reference to a single place in memory.
So something like:
var recurseParams = { steps: 8000, data: "abc", delay: 100 }; // outside of the function
// define the function
recurse(recurseParams);
// Then inside the function, reference it like this:
recurseParams.steps--;
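Put together, a sketch of how that could look end to end (all iterations share the one recurseParams object instead of passing fresh copies of each value):
var recurseParams = { steps: 8000, data: "abc", delay: 100 };

function recurse(params) {
  if (params.steps === 0) {
    console.log(params.data.length);
  } else {
    setTimeout(function () {
      params.steps--;            // mutate the shared object rather than copying values
      recurse(params);
    }, params.delay);
  }
}

recurse(recurseParams);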
