Does this code create any memory leaks? Or is there anything wrong with the code?
HTML:
<div id='info'></div>
Javascript:
var count = 0;
function KeepAlive()
{
    count++;
    $('#info').html(count);
    var t = setTimeout(KeepAlive, 1000);
}
KeepAlive();
Run a test here:
http://jsfiddle.net/RjGav/
You should probably use setInterval instead:
var count = 0;
function KeepAlive() {
    $('#info').html(++count);
}
var KAinterval = setInterval(KeepAlive, 1000);
You can cancel it if you ever need to by calling clearInterval(KAinterval);.
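For instance, a small sketch of cancelling it once it's no longer needed (same #info element and jQuery as above; the stop condition is made up just for illustration):
var count = 0;
var KAinterval = setInterval(function() {
    $('#info').html(++count);
    if (count >= 60) {              // hypothetical stop condition: halt after 60 updates
        clearInterval(KAinterval);
    }
}, 1000);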
I think this will leak because the successive references are never released. That is, the first call immediately creates a closure by referencing the function from within itself. When it calls itself again, the new reference is from the instance created on the first iteration, so the first one again can never be released.
You could test this theory pretty easily by changing the interval to something very small and watching the memory in Chrome...
(edit) Theory tested with your fiddle; actually I'm wrong, it doesn't leak, at least in Chrome. But that's no guarantee that some other browser (e.g. older IE) is as good at garbage collecting.
But whether or not it leaks, there's no reason not to use setInterval instead.
This should not create a leak, because the KeepAlive function will complete in a timely manner and thus release all variables in that function. Also, in your current code, there is no reason to set the t var as it is unused. If you want to use it to cancel your event, you should declare it in a higher scope.
Other than that, I see nothing "wrong" with your code, but it really depends on what you are trying to do. For example, if you are trying to use this as a precise timer, it will be slower than a regular clock. Thus, you should consider either setting the date on page load and calculating the difference when you need it, or using setInterval as g.d.d.c suggested.
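A rough sketch of that date-based idea (the function and variable names here are just for illustration):
var startTime = new Date();                  // captured once, e.g. on page load
function secondsElapsed() {
    // Compute real elapsed time instead of counting timer ticks,
    // so any drift in setTimeout/setInterval doesn't matter.
    return Math.floor((new Date() - startTime) / 1000);
}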
It is good to use the setInterval method, as g.d.d.c mentioned.
Moreover, it is better to store $('#info') in a variable outside the function, so the element is not looked up on every call.
Check out http://jsfiddle.net/RjGav/1/
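In code, that caching might look something like this (a small sketch assuming the same markup and jQuery as above):
var count = 0;
var $info = $('#info');                      // look up the element once, not on every tick
var KAinterval = setInterval(function() {
    $info.html(++count);
}, 1000);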
I've not experimented with canvases before. I've made a project which involves a canvas that gets continually extended, with new rows of "data" being appended to the bottom of it.
I've got the actual rendering part working fine; the final output is what I want it to be... but my intention with the project was to be able to watch as it gets drawn on the screen. However, what instead happens is that the canvas just hangs for a few seconds and then displays everything at once. This happens in Chrome at least; I've not tested other browsers.
I'm using a loop like the following:
for(var i = 0; i < 500; i++){
    addRow(data, canvas);
}
And essentially I want to view each row as it's being drawn.
Any ideas how I could do this?
Since JavaScript is single-threaded, the canvas cannot be redrawn while your loop is running.
You have to create idle moments in between the calls to addRow, so the JavaScript thread is freed to actually act on the new data. You do this by making the calls to addRow asynchronous; the easiest way to do this is to (ab)use the standard function setTimeout [1].
setTimeout takes a function and executes it asynchronously, after a given delay (if you omit the delay parameter, a delay of 0 ms is assumed). I called this abusing the function earlier because you don't use the delay functionality, but I do think this is the standard way to execute code asynchronously; if not, please let me know.
You can also pass an anonymous function (rather than a named one) to setTimeout, like so:
for(var i = 0; i < 500; i++){
    setTimeout(function() {
        data = calculateNextRow(data);
        addRow(data, canvas);
    });
}
If you need to use i inside the anonymous function, there will be some scope issues which are too complicated to explain here, but this answer does an excellent job of explaining the deeper causes of the issues I alluded to, and examples 3 and 5 illustrate exactly these issues.
[1]: In my earlier comment I suggested setInterval, which was a mistake; setInterval calls the passed function repeatedly, which is redundant since you would be calling it from a loop. Otherwise setTimeout and setInterval are very similar.
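For reference, one common way around that scope issue is an immediately invoked function that captures i per iteration; a sketch, assuming data, canvas, calculateNextRow and addRow as in the snippet above (the console.log is only there to show the captured value):
for (var i = 0; i < 500; i++) {
    (function(rowIndex) {                    // rowIndex gets a per-iteration copy of i
        setTimeout(function() {
            data = calculateNextRow(data);
            addRow(data, canvas);
            console.log('queued row ' + rowIndex);   // rowIndex is stable here, unlike i
        });
    })(i);
}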
Does JavaScript (pure, not jQuery, if it matters) know to clear up/free/release from the last reference to an object in a "delayed" function called from a timer or event?
Take the following code:
function myInitFunc()
{
    var myInitObj = new Object();
    myInitObj.properties = lotsOfStuff;
    var myDelayedInitFunc = function ()
    {
        doSomethingWith(myInitObj);
        // I shall not be accessing myInitObj again now.
    };
    // Let's say, *one* of the following:
    setTimeout(myDelayedInitFunc, 1000);
    window.addEventListener('load', myDelayedInitFunc);
    document.addEventListener('DOMContentLoaded', myDelayedInitFunc);
}
Note that myDelayedInitFunc() is deliberately accessing variable myInitObj, which is local to myInitFunc().
In, say, http://javascript.info/tutorial/memory-leaks it states "Functions used in setTimeout/setInterval are also referenced internally and tracked until complete, then cleaned up". Does this "clean up" understand that it can get rid of the myInitObj as well as the function itself? I'm sort of guessing it does....
What about the two event examples? Even though we know they are "one-shot" events, I'm guessing that neither myDelayedInitFunc nor myInitObj will get cleaned up?
If it is the case that some of these do not clean up, should I make myDelayedInitFunc() set myInitObj = null; at its end so as to minimise the wastage?
Yes, it will be cleaned up, so there is no need for you to clean up that reference yourself. If you're worried that your JS code might have memory leaks, you should perhaps read up on memory profiling.
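That said, if the one-shot event listeners still bother you, one common pattern is to have the handler remove itself so nothing keeps the closure reachable afterwards. A sketch, with simple stand-ins for the question's lotsOfStuff and doSomethingWith placeholders:
function myInitFunc() {
    var myInitObj = { properties: 'lots of stuff' };          // stand-in for lotsOfStuff
    var myDelayedInitFunc = function () {
        window.removeEventListener('load', myDelayedInitFunc); // detach the one-shot handler
        console.log(myInitObj.properties);                     // stand-in for doSomethingWith(myInitObj)
        myInitObj = null;                                      // optional; usually unnecessary, as noted above
    };
    window.addEventListener('load', myDelayedInitFunc);
}
myInitFunc();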
I've been reading up a lot on closures in Javascript. I come from a more traditional (C, C++, etc) background and understand call stacks and such, but I am having troubles with memory usage in Javascript. Here's a (simplified) test case I set up:
function updateLater(){
    console.log('timer update');
    var params = new Object();
    for (var y = 0; y < 1000000; y++) {
        params[y] = {'test': y};
    }
}
Alternatively, I've also tried using a closure:
function updateLaterClosure(){
    return (function() {
        console.log('timer update');
        var params = new Object();
        for (var y = 0; y < 1000000; y++) {
            params[y] = {'test': y};
        }
    });
}
Then, I set an interval to run the function...
setInterval(updateLater, 5000); // or var c = updateLaterClosure(); setInterval(c,5000);
The first time the timer runs, the memory usage jumps from 50MB to 75MB (according to Chrome's Task Manager). The second time it goes above 100MB. Occasionally it drops back down a little, but never below 75MB.
Check it out yourself: https://local.phazm.com:4435/Streamified/extension/branches/lib/test.html
Clearly, params is not being fully garbage collected, because the memory from the first timer call is not being freed... yet, neither is it adding 25MB of memory on EACH call, so it is not as if the garbage collection is NEVER happening... it almost seems as though one instance of "params" is always being kept around. I've tried setting up a sub-closure and other things... no dice.
What is MOST disturbing, though, is that the memory usage trends upwards. It might "just" be 75MB for now, but leave it running for long enough (overnight) and it'll get to 500 MB.
Ideas?
Thanks!
Allocating 25MB causes a GC to happen. This GC cleans up the last instance, but of course not the current one. So you always have one instance around.
GC does not happen when the program is idle. It does not happen between your timer calls, so the memory stays around.
That is not even a closure. A closure is when you return something from a function, like an array, function, object, or anything that can contain references, and it carries with it all the local members of that function.
What you have there is just a case of a very long loop that is building a very big object, and maybe your memory does not get reclaimed as fast as you are building the huge objects.
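To make the distinction concrete, an actual closure in that sense would look something like this (names invented for illustration):
function makeCounter() {
    var count = 0;                   // local to makeCounter
    return function() {              // the returned function closes over count...
        return ++count;              // ...so count stays alive as long as this function is referenced
    };
}
var next = makeCounter();
next(); // 1
next(); // 2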
var recurse = function(steps, data, delay) {
    if (steps == 0) {
        console.log(data.length);
    } else {
        setTimeout(function() {
            recurse(steps - 1, data, delay);
        }, delay);
    }
};
var myData = "abc";
recurse(8000, myData, 1);
What troubles me with this code is that I'm passing a string on 8000 times. Does this result in any kind of memory problem?
Also, if I run this code with node.js, it prints immediately, which is not what I would expect.
If you're worried about the string being copied 8,000 times, don't be: there's only one copy of the string; what gets passed around is a reference.
The bigger question is whether the object created when you call a function (the "variable binding object" of the "execution context") is retained, because you're creating a closure, which has a reference to the variable object for the context and thus keeps it in memory as long as the closure is still referenced somewhere.
And the answer is: Yes, but only until the timer fires, because once it does nothing is referencing the closure anymore and so the garbage collector can reclaim them both. So you won't have 8,000 of them outstanding, just one or two. Of course, when and how the GC runs is up to the implementation.
Curiously, just earlier today we had another question on a very similar topic; see my answer there as well.
It prints immediately because the program executes "immediately". On my Intel i5 machine, the whole operation takes 0.07s, according to time node test.js.
For the memory problems, and whether this is a "cheap infinite loop", you'll just have to experiment and measure.
If you want to create an asynchronous loop in node, you could use process.nextTick. It will be faster than setTimeout(func, 1).
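A sketch of the same loop rewritten with process.nextTick (Node only; in newer Node versions setImmediate is often preferred for loops like this):
var recurse = function(steps, data) {
    if (steps === 0) {
        console.log(data.length);
    } else {
        process.nextTick(function() {    // queue the next step instead of using a timer
            recurse(steps - 1, data);
        });
    }
};
recurse(8000, "abc");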
In general Javascript does not support tail call optimization, so writing recursive code normally runs the risk of causing a stack overflow. If you use setTimeout like this, it effectively resets the call stack, so stack overflow is no longer a problem.
Performance will be the problem though, as each call to setTimeout generally takes a fair bit of time (around 10 ms), even if you set delay to 0.
The '1' is 1 millisecond (1 second would be 1000), so it might as well be a for loop. I recently wrote something similar to check on the progress of a batch of processes on the back end and set a delay of 500. Older browsers wouldn't see any real difference between 1 and about 15 ms, if I remember correctly. I think V8 might actually process faster than that.
I don't think garbage collection will happen to any of the functions until the last iteration is complete, but these newer generations of JS JIT compilers are a lot smarter than the ones I know more about, so it's possible they'll see that nothing is really going on after the timeout and pull those params from memory.
Regardless, even if memory is reserved for every instance of those parameters, it would take a lot more than 8000 iterations to cause a problem.
One way to safeguard against potential problems with more memory-intensive parameters is to pass in an object with the params you want. Then, I believe, the params will just be a reference to one place in memory.
So something like:
var recurseParams = { steps: 8000, data: "abc", delay: 100 }; // outside of the function
// define the function
recurse(recurseParams);
// Then inside the function, reference like this:
recurseParams.steps--;
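Putting that together, a runnable sketch of the parameter-object version (the argument values are just examples):
var recurseParams = { steps: 8000, data: "abc", delay: 1 };

function recurse(params) {
    if (params.steps === 0) {
        console.log(params.data.length);
    } else {
        setTimeout(function() {
            params.steps--;              // every call sees the same shared object
            recurse(params);
        }, params.delay);
    }
}

recurse(recurseParams);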
I thought I would try and be clever and create a Wait function of my own (I realise there are other ways to do this). So I wrote:
var interval_id;
var countdowntimer = 0;
function Wait(wait_interval) {
    countdowntimer = wait_interval;
    interval_id = setInterval(function() {
        --countdowntimer <= 0 ? clearInterval(interval_id) : null;
    }, 1000);
    do {} while (countdowntimer >= 0);
}
// Wait a bit: 5 secs
Wait(5);
This all works, except for the infinite looping. Upon inspection, if I take the while loop out, the anonymous function is entered 5 times, as expected. So clearly the global variable countdowntimer is decremented.
However, if I check the value of countdowntimer in the while loop, it never goes down. This is despite the fact that the anonymous function is being called whilst in the while loop!
Clearly, somehow, there are two values of countdowntimer floating around, but why?
EDIT
Ok, so I understand (now) that Javascript is single threaded. And that - sort of - answers my question. But at which point in the processing of this single thread does the so-called asynchronous call using setInterval actually happen? Is it just between function calls? Surely not; what about functions that take a long time to execute?
There aren't two copies of the variable lying around. Javascript in web browsers is single threaded (unless you use the new web workers stuff). So the anonymous function never has the chance to run, because Wait is tying up the interpreter.
You can't use a busy-wait functions in browser-based Javascript; nothing else will ever happen (and they're a bad idea in most other environments, even where they're possible). You have to use callbacks instead. Here's a minimalist reworking of that:
var interval_id;
var countdowntimer = 0;
function Wait(wait_interval, callback) {
    countdowntimer = wait_interval;
    interval_id = setInterval(function() {
        if (--countdowntimer <= 0) {
            clearInterval(interval_id);
            interval_id = 0;
            callback();
        }
    }, 1000);
}
// Wait a bit: 5 secs
Wait(5, function() {
    alert("Done waiting");
});
// Any code here happens immediately, it doesn't wait for the callback
Edit: Answering your follow-up:
But at which point in the processing of this single thread does the so-called asynchronous call using setInterval actually happen? Is it just between function calls? Surely not; what about functions that take a long time to execute?
Pretty much, yeah, and so it's important that functions not be long-running. (Technically it's not even between function calls, in that if you have a function that calls three other functions, the interpreter can't do anything else while that outer function is running.) The interpreter essentially maintains a queue of functions it needs to execute. It starts by executing any global code (rather like a big function call). Then, when things happen (user input events, the time to call a callback scheduled via setTimeout is reached, etc.), the interpreter pushes the calls it needs to make onto the queue. It always processes the call at the front of the queue, and so things can stack up (like your setInterval calls, although setInterval is a bit special: it won't queue a subsequent callback if a previous one is still sitting in the queue waiting to be processed).
So think in terms of when your code gets control and when it releases control (e.g., by returning). The interpreter can only do other things after you release control and before it gives it back to you again. And again, on some browsers (IE, for instance), that same thread is also used for painting the UI and such, so DOM insertions (for instance) won't show up until you release control back to the browser so it can get on with doing its painting.
When using Javascript in web browsers, you really need to take an event-driven approach to designing and coding your solutions. The classic example is prompting the user for information. In a non-event-driven world, you could do this:
// Non-functional non-event-driven pseudo-example
askTheQuestion();
answer = readTheAnswer(); // Script pauses here
doSomethingWithAnswer(answer); // This doesn't happen until we have an answer
doSomethingElse();
That doesn't work in an event-driven world. Instead, you do this:
askTheQuestion();
setCallbackForQuestionAnsweredEvent(doSomethingWithAnswer);
// If we had code here, it would happen *immediately*,
// it wouldn't wait for the answer
So for instance, askTheQuestion might overlay a div on the page with fields prompting the user for various pieces of information with an "OK" button for them to click when they're done. setCallbackForQuestionAnswered would really be hooking the click event on the "OK" button. doSomethingWithAnswer would collect the information from the fields, remove or hide the div, and do something with the info.
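As a concrete sketch of that flow (the element ids and the overlay markup are invented for illustration):
function askTheQuestion() {
    document.getElementById('questionOverlay').style.display = 'block'; // show the prompt div
}

function doSomethingWithAnswer() {
    var answer = document.getElementById('answerField').value;
    document.getElementById('questionOverlay').style.display = 'none';  // hide the prompt div
    console.log('User answered: ' + answer);
}

// "setCallbackForQuestionAnsweredEvent": hook the click event on the "OK" button
document.getElementById('okButton').addEventListener('click', doSomethingWithAnswer);

askTheQuestion();
// Code here runs immediately; doSomethingWithAnswer runs later, when the user clicks "OK".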
Most Javascript implementations are single threaded, so while the while loop is executing, nothing else gets to run. The interval therefore never fires while the while loop is running, which makes an infinite loop.
There are many similar attempts to create a sleep/wait/pause function in javascript, but since most implementations are single threaded, it simply doesn't let you do anything else while sleeping(!).
The alternative way to make a delay is to write timeouts. They can postpone an execution of a chunk of code, but you have to break it in many functions. You can always inline functions so it makes it easier to follow (and to share variables within the same execution context).
There are also some libraries that add some syntactic sugar to Javascript, making this more readable.
EDIT:
There's an excellent blog post by John Resig himself about how JavaScript timers work. He pretty much explains it in detail. Hope it helps.
Actually, it's pretty much guaranteed that the interval function will never run while the loop does, as Javascript is single-threaded.
There is a reason why no-one has made Wait before (and so many have tried); it simply cannot be done.
You will have to resort to breaking up your function into bits and scheduling these using setTimeout or setInterval.
// first part
...
setTimeout(function() {
    // next part
}, 5000 /* ms */);
Depending on your needs this could (should) be implemented as a state machine.
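A rough sketch of what that might look like (the step contents are placeholders):
var step = 0;
function runNextStep() {
    switch (step++) {
        case 0:
            // first part
            setTimeout(runNextStep, 5000);   // "wait" 5 seconds, then continue with the next state
            break;
        case 1:
            // next part
            setTimeout(runNextStep, 5000);
            break;
        case 2:
            // final part; nothing more is scheduled, so the machine stops
            break;
    }
}
runNextStep();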
Instead of using a global countdowntimer variable, why not just change the millisecond argument passed to setInterval? Something like:
var waitId;
function Wait(waitSeconds)
{
    waitId = setInterval(function() { clearInterval(waitId); }, waitSeconds * 1000);
}