Unattached anonymous functions and doubly named methods in JavaScript? - javascript

I'm debugging an app that uses .NET's ScriptManager.
It may be a glitch in Firebug, but when I read through the code there are a lot of lines like the following:
// anonymous functions not attached as handlers and not called immediately
function () {
    //code
}

// named functions added as methods
myObj = {
    myMethod: function myFunctionName() {
        //code
    }
}
Are these lines valid and, if so, what do they do, and what possible reason would there be for coding like this? (And I won't accept "It's Microsoft - what d'you expect" as an answer.)

This might be worth a read: How does an anonymous function in JavaScript work?

They are there because some busy programmer intended to do something, ran out of time, and left the stub as a reminder of work to be done. They do nothing as yet.
Or they watermark the code for checks that are done elsewhere in the logic.
Or they are simply put there to obfuscate...
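Worth noting about the second snippet: giving a function expression a name is a legitimate technique in its own right, whatever ScriptManager's reasons here. The inner name is in scope inside the function (so it can call itself) and shows up in debugger stack traces, which helps when debugging generated code. A minimal sketch, with hypothetical names:

var myObj = {
    // the outer name (myMethod) is how callers reach the function;
    // the inner name (countDown) lets it call itself and labels it
    // in stack traces
    myMethod: function countDown(n) {
        if (n > 0) {
            countDown(n - 1);
        }
    }
};

myObj.myMethod(3);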

Related

Externalize a function in a Typescript method while maintaining closure

Reworded:
A common pattern is to pass callback functions, such as with Mongoose's save (just for example and simplified - no error handling):
someMethod(req: Request, res: Response) {
    document.save(function (err) {
        res.status(200).send({ message: 'all good' });
    });
}
I'd like to externalize the callback. You can do it this way:
var respond = function (err: any, res: Response) {
    res.status(200).send({ message: 'all good' });
}

someMethod(req: Request, res: Response) {
    document.save(function (err) { respond(err, res); });
}
...but ideally I'd like to do this by just passing a function like respond, without having to create a callback function to enclose respond. I wanted to know if this is possible. Since the anonymous function has access to res, I thought there might be some way to gain access to res in a function defined externally. It appears there is not a way to do this, so I'll live with wrapping it.
My original question was trying to isolate the specific issue I was interested in - which is to gain access to the caller's variables implicitly. Doesn't seem like that is possible. Fair enough.
Original Question:
I'd like to externalize a bit of code I use frequently, and I'm having trouble understanding closure in the context of a TypeScript method. Take a look:
var test = function () {
    console.log("Testing external: " + JSON.stringify(this.req.body));
}

class Handler {
    static post(req: Request, res: Response) {
        (function () {
            console.log("TESTING anon: " + JSON.stringify(req.body));
        })();
        test();
    }
}
Besides the fact that it does nothing useful, in this bit of code the inline anonymous function has access to the req object, but the test() function does not: this in test is undefined. Removing this to match the inline function doesn't help.
I believe if I were to bind on this for the call I'd just end up with a reference to the Handler class when I really want to bind on the post method.
My motivation for doing this is that I want to make a function that can be passed as a callback to a bunch of different request handlers. When I write the functions inline it all works, but when I externalize it I can't get a closure over the variables in the enclosing method. I've read "You Don't Know JS: this & Object Prototypes", and in pure Javascript I can manage to make these sorts of things work but I'm obviously doing something wrong here (it may not be Typescript related, maybe I'm just messing it up).
So bottomline - is there a way I can externalize the handler and get access to the method variables as if I were writing it inline? I could just create an inline anonymous function as the callback that calls the external function with all the variables I need, but I want to really understand what is happening here.
This is not an answer, but it will hopefully get me enough feedback to give you one, because it's not at all clear what you're actually trying to accomplish here. Whether or not you actually understand what the terms mean is an open question, since you use them correctly one minute and sketchily the next.
var test = function () {
    console.log("Testing external: " + JSON.stringify(this.req.body));
}
In strict mode this will throw an error; in sloppy mode it will try to access the req property of the global object, which is not likely what you want.
(function () {
    console.log("TESTING anon: " + JSON.stringify(req.body));
})();
The IIFE wrapper is completely unnecessary; it literally adds nothing to the party. So why include it?
static post(req: Request, res: Response) {
    console.log("TESTING anon: " + JSON.stringify(req.body));
    test(); // is this the spot where you are 'in-lining?'
}
What I think you want is this:
var test = function (reqBody) {
    console.log("Testing external: " + JSON.stringify(reqBody));
};

class Handler {
    static post(req: Request, res: Response) {
        test(req.body);
    }
}
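As for the reworded question (passing respond without an inline wrapper), one standard option is to make respond a factory that closes over res and returns the actual callback. A sketch reusing the names from above (plain JavaScript, type annotations omitted):

var respond = function (res) {
    // returns a function with the (err) shape that save() expects,
    // with res captured in the closure
    return function (err) {
        res.status(200).send({ message: 'all good' });
    };
};

function someMethod(req, res) {
    document.save(respond(res)); // no inline wrapper at the call site
}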

How to set JS events at the right time

In the rails.js that came with my rails (3.0.x, still with prototype), I see the following structure:
(function() {
    // ...
    document.on("click", ...
})();
What exactly is accomplished by wrapping the whole code in the anonymous function? Is this a valid way to delay the code until the DOM has loaded, or only the document object?
In my project, I currently have a lot of setup code inside an Event.observe(document, 'dom:loaded', function() { ... }) block. I was wondering if I should adopt the pattern above when I refactor my code.
You have stumbled across the module pattern. It is useful because variables inside the immediately invoked function are local and don't pollute the global namespace.
(function () {
    var something = 17;
    // can use something inside here
}());
// but not here anymore
Note that there is no difference in timing, since the function is invoked immediately (the final () bit).
The self-invoking anonymous function will trigger what is inside immediately, which has nothing to do with delaying the code.
To have the code inside executed after the DOM is ready, you need a DOM-ready listener. I guess the Event.observe(document, 'dom:loaded', function() { ... }) code you mentioned is the one.
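If you want both (the private scope of the wrapper and the delay of dom:loaded), you can nest them. A sketch using the same Prototype calls the question mentions:

(function () {
    // anything declared here stays private to this file
    var clickCount = 0;

    Event.observe(document, 'dom:loaded', function () {
        // safe to touch the DOM from this point on
        document.on('click', function () {
            clickCount++;
        });
    });
})();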

Strange behavior on function calling

When calling a JavaScript function, it seems like JS gives priority to the function without parameters, even if I have the same function name with parameters.
The strange behavior only happens in the following scenario:
I have an HTML page with embedded JavaScript, like this:
// JavaScript in the page
function testAbc() {
    alert('testAbc no params');
}

// JavaScript in common.js
function testAbc(x) {
    alert('testAbc with param: ' + x);
}

function testAbcFunc(x) {
    testAbc(x);
}
Now, from somewhere in the page, I'm calling testAbcFunc from common.js, expecting it to call the testAbc with a parameter, which is the common function. But strangely, JS calls the function in the original page, without params!
I have been debugging this for a few hours now; I tried this short code to reproduce the bug, and it happens each time.
NOTE: if all the functions are in the same page, the correct function (with params) is called, but when they are split between the page and the JS file, JS seems to give priority to the function in the page even though it doesn't have a parameter.
JavaScript does not support method overloading based on parameters. It simply uses the last-defined function if multiple functions have the same name, so the version in the page will override the included version. When it worked for you, I assume that the included version (with the argument signature) was defined after the original.
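A minimal illustration of that rule, with hypothetical names:

function greet() {
    alert('no params');
}

function greet(name) {
    alert('with param: ' + name);
}

greet(); // alerts "with param: undefined" -- the later definition wins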
JavaScript doesn't have overloaded functions. It doesn't care about signatures; it calls functions solely by name and nothing else. It is strange that the later function does not completely hide the first one, but, well, there's no spec about that behaviour.
So just don't do that: check the number of params with arguments.length inside the function, and don't try to use overloading, which will never work.
function testAbc() {
    if (arguments.length == 0) {
        alert('testAbc no params');
    } else {
        var x = arguments[0];
        alert('testAbc with param: ' + x);
    }
}
There is no function overloading in JavaScript. If you define a function twice, with different numbers of parameters, the last one to be defined will be called.
Also, you should be namespacing your JavaScript.
Like so:
var common = {
    testABC: function () {
        // Stuff
    }
};
Then call testABC like this:
common.testABC();

simple javascript question

I have a 2000-line jQuery file that I just broke up into smaller ones. If I have a function in the first file that file #2 refers to, it comes up undefined.
Every file is wrapped in a jQuery ready function. What's the best way to do this?
If the function in question is declared within the scope of the ready handler, it won't be accessible to any other code, including other ready handlers.
What you need to do is define the function in the global scope:
function foo()
{
    alert('foo');
}

$(document).ready(function()
{
    foo();
});
P.S. A more concise way of adding a ready handler is this:
$(function()
{
    foo();
});
Edit: If the contents of each of your divided ready handlers rely on the previous sections, then you can't split them up, for the reasons outlined above. What would be more sensible would be to factor out the bulk of the logic into independent functions, put these in their own files outside the ready event handler, and then call them from within the handler.
Edit: To further clarify, consider this handler:
$(function()
{
    var foo = 'foo';
    var bar = 'bar';
    alert(foo);
    alert(bar);
});
I might then split this up:
$(function()
{
    var foo = 'foo';
    var bar = 'bar';
});

$(function()
{
    alert(foo);
    alert(bar);
});
The problem with this is that foo and bar are defined in the first handler, and when they are used in the second handler, they have gone out of scope.
Any continuous flow of logic like this needs to be in the same scope (in this case, the event handler).
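Following that advice, the split version would instead factor the logic into a named function in its own file and keep a single ready handler. A sketch, with hypothetical file and function names:

// shared.js -- defined in the global scope, outside any ready handler
function showMessages() {
    var foo = 'foo';
    var bar = 'bar';
    alert(foo);
    alert(bar);
}

// page script -- the only ready handler needed
$(function () {
    showMessages();
});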
Function definitions should not be wrapped in another function, not unless you really want those definitions to be private. And if I understand correctly, that's not your intention.
Only wrap function invocation in the jQuery ready function.
If you're worried about your functions clashing with third party function names then namespace them:
var myFunctions = {};
myFunctions.doThis = function () {};
myFunctions.doThat = function () {};
But really, you only need to worry about this if you're creating a mashup or a library for others to use. On your own site, YOU control what gets included in the JavaScript.
Actually, for performance reasons, it may be better to keep it in one file; multiple requests can take up more bandwidth. But as separate files, you would need to arrange them in a particular order so that there is a logical sequence. Instead of having everything in a document.ready, have each script define a method that the page then executes within its own document.ready handler, so that you can maintain that order.
Most likely the reason it's coming up undefined is that each separate ready call creates its own scope for the code inside it.
I would reorganize my code. Any shared functions can be attached to the jQuery object directly, using $.extend. This is what we do for our application and it works well.
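For instance (a sketch, with a hypothetical helper name):

// shared.js -- attach shared helpers to the jQuery object itself
$.extend({
    formatName: function (first, last) {
        return last + ', ' + first;
    }
});

// any other file's ready handler can now reach it
$(function () {
    alert($.formatName('Ada', 'Lovelace'));
});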
See this question. Hope it helps.
Every file shouldn't have a ready function. Only one file should have the ready function, and it should be the last file.
"wrapped in a jquery ready function" is nothing else than binding stuff to the ready event that is fired when jQuery thinks the DOM is ready.
You should only bind methods that is depending on the DOM to the ready event. It doesnt matter how many binds you make, all of the methods will be executed in the binding order in the end.
Functions provide scope in JavaScript. Your code in the jQuery ready handler is an anonymous function, so it is unaware of the other scopes. Remove the wrappings from those functions and declare them as regular functions, a la:
$(document).ready(function ()
{
    functionFromFile1();
    functionFromFile2();
});

Can dynamically loaded JavaScript be unloaded?

I am writing a web application that has a static outer "shell" and a dynamic content section. The dynamic content section has many updates as users navigate the system. When a new content block is loaded, it may also optionally load another JavaScript file. In the name of good housekeeping, I remove script blocks from the DOM that apply to old content blocks, since that JavaScript is no longer needed.
The problem comes next, when I realized that although I have removed the <script> element from the DOM, the JavaScript that was previously evaluated is still available for execution. That makes sense of course, but I'm worried that it may cause a memory leak if the users navigate to a lot of different sections.
The question then, is should I be worried about this situation? If so, is there a way to force the browser to cleanup stale JavaScript?
<theory>You could go with a more object-oriented approach, and build the model so that each block of JavaScript comes in as its own object, with its own methods. Upon unloading it, you simply set that object to null.</theory>
(This is fairly off-the-cuff.)
Memory use is indeed an issue you need to be concerned with in the current browser state of the art, although unless we're talking about quite a lot of code, I don't know that code size is the issue (it's usually DOM size, and leftover event handlers).
You could use a pattern for your loadable modules that would make it much easier to unload them en masse -- or at least, to let the browser know it can unload them.
Consider:
window.MyModule = (function() {
    alert('This happens the moment the module is loaded.');

    function MyModule() {
        function foo() {
            bar();
        }

        function bar() {
        }
    }

    return MyModule;
})();
That defines a closure that contains the functions foo and bar, which can call each other in the normal way. Note that code outside functions runs immediately.
Provided you don't pass out any references to what's inside the closure to anything outside it, then window.MyModule will be the only reference to that closure and its execution context. To unload it:
try {
    delete window.MyModule;
}
catch (e) {
    // Work around IE bug that doesn't allow `delete` on `window` properties
    window.MyModule = undefined;
}
That tells the JavaScript environment you're not using that property anymore, and makes anything it references available for garbage collection. When and whether that collection happens is obviously implementation-dependent.
Note that if you hook up event handlers within the module, it will be important to unhook them before unloading. You could do that by returning a reference to a destructor function instead of the main closure:
window.MyModule = (function() {
    alert('This happens the moment the module is loaded.');

    function foo() {
        bar();
    }

    function bar() {
    }

    function destructor() {
        // Unhook event handlers here
    }

    return destructor;
})();
Unhooking is then:
if (window.MyModule) {
    try {
        window.MyModule();
    }
    catch (e) {
    }

    try {
        delete window.MyModule;
    }
    catch (e) {
        // Work around IE bug that doesn't allow `delete` on `window` properties
        window.MyModule = undefined;
    }
}
If you save the evaluated code in namespaces, such as:
var MYAPP = {
    myFunc: function(a) { ... }
}
"Freeing" the whole thing should be as simple as setting MYPP to some random value, ala
MYAPP = 1
This does depend on there being no other means of referencing the object, which isn't trivial to guarantee.
How about loading the JS files into an iframe? Then (in theory; I've never tested it myself) you can remove the iframe from the DOM and reclaim the "memory" it's using.
I think... or I hope...
If you are worried about memory leaks, then you will want to make certain that there are no event handlers in the code you want removed that refer to the still-existing DOM tree.
It may be that you need to keep a list of all the event handlers your code added and, before unloading, go through and remove them.
I have never done it that way; I always worry, when I remove nodes, that there is still a reference somewhere.
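A minimal sketch of that bookkeeping, assuming addEventListener (older IE would need attachEvent instead):

var handlers = [];

function addHandler(element, type, fn) {
    // register the handler and remember it for later teardown
    element.addEventListener(type, fn, false);
    handlers.push({ element: element, type: type, fn: fn });
}

function removeAllHandlers() {
    // unhook everything we added before unloading the module
    for (var i = 0; i < handlers.length; i++) {
        var h = handlers[i];
        h.element.removeEventListener(h.type, h.fn, false);
    }
    handlers = [];
}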
Here is a good article on JavaScript memory leaks:
http://javascript.crockford.com/memory/leak.html
JavaScript interpreters have garbage collectors. In other words, if nothing references an object any more, it won't be kept around.
That's one of the reasons why it is good to use JSON with a callback function (JSONP).
For example, if your HTTP response for each JS request is:
callback({status: '1', resp: [resp here..]});
And if callback() does not create a reference to the JSON object passed in as an argument, it will be garbage collected after the function completes.
If you really need to make a reference, then you probably need that data around for some reason - otherwise you would/should NOT have referenced it in the first place.
The namespacing methods mentioned just create a reference that persists until the reference count comes to 0. In other words, you have to track every reference and delete it later, which can be hard when you have closures and references from the DOM lying around. Just one reference will keep the object in memory, and some simple operations may create references without you realizing it.
Nice discussion. Clears up a lot of things. I have another worry, though.
If I bind window.MyModule.bar to an event, what happens if the event accidentally gets triggered after window.MyModule is deleted? For me, the whole point of namespacing and separating JS into dynamically loaded modules is to avoid triggering event handlers cross-module by mistake.
For example, if I do (excuse my jQuery):
$('.some-class').click(window.MyModule.bar);
What happens if I delete window.MyModule, load another module, and click on an element which accidentally has a class called some-class?
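For what it's worth, deleting window.MyModule would not unbind anything by itself: jQuery keeps its own reference to the function you passed to click(), so the old handler would still fire on that click. That is exactly what the destructor pattern above is for. A sketch, reusing the hypothetical .some-class binding:

window.MyModule = (function () {
    function bar() {
        // ...
    }

    $('.some-class').click(bar); // jQuery now holds its own reference to bar

    return function destructor() {
        // removing the binding is what actually stops bar from firing;
        // deleting window.MyModule alone would not
        $('.some-class').unbind('click', bar);
    };
})();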
