Below is a code snippet I found on a blog that gives a simple example of using the stream Transform class to alter data streams and output the altered result. There are some things about it that I don't really understand.
var stream = require('stream');
var util = require('util');
// node v0.10+ use native Transform, else polyfill
var Transform = stream.Transform ||
    require('readable-stream').Transform;
Why does the program need to check if the this var points to an instance of the Upper constructor? The Upper constructor is being used to construct the upper object below, so what is the reason to check for this? Also, I tried logging options, but it returns null/undefined, so what's the point of that parameter?
function Upper(options) {
    // allow use without new
    if (!(this instanceof Upper)) {
        return new Upper(options);
    }
I assume this Transform.call is being made to explicitly set the this variable? But why does the program do that, seeing as Transform never seems to be called anywhere else?
    // init Transform
    Transform.call(this, options);
}
After googling the util package, I know that it is being used here to allow Upper to inherit Transform's prototypal methods. Is that right?
util.inherits(Upper, Transform);
The function below is what really confuses me. I understand that the program is setting a method on Upper's prototype which is used to transform data being input into it. But, I don't see where this function is being called at all!
Upper.prototype._transform = function (chunk, enc, cb) {
    var upperChunk = chunk.toString().toUpperCase();
    this.push(upperChunk);
    cb();
};
// try it out - from the original code
var upper = new Upper();
upper.pipe(process.stdout); // output to stdout
After running the code through a debugger, I can see that upper.write calls the aforementioned Upper.prototype._transform method, but why does this happen? upper is an instance of the Upper constructor, and write is a method that doesn't seem to have any relation to the _transform method being applied to the prototype of Upper.
upper.write('hello world\n'); // input line 1
upper.write('another line'); // input line 2
upper.end(); // finish
First, if you haven't already, take a look at the Transform stream implementer's documentation here.
Q: Why does the program need to check if the this var points to an instance of the Upper constructor? The Upper constructor is being used to construct the upper object below, so what is the reason to check for this?
A: It needs to check because anyone can call Upper() without new. So if it's detected that a user called the constructor without new, out of convenience (and to make things work correctly), new is implicitly called on the user's behalf.
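For illustration, both of these calls end up producing an Upper instance because of that guard (reusing the Upper constructor from the question):

var a = new Upper(); // `this` is already an Upper, so the guard is skipped
var b = Upper();     // called without new: `this` is not an Upper,
                     // so the guard returns `new Upper(options)` on your behalf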
Q: Also, I tried logging options, but it returns null/undefined, so what's the point of that parameter?
A: options is just a constructor/function parameter. If you don't pass anything to the constructor, it will naturally be undefined; otherwise it will be whatever value you passed in. You can have as many parameters as you want/need, just like any ordinary function. In the case of Upper(), however, configuration isn't really needed due to the simplicity of the transform (just converting all input to uppercase).
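For example, anything you do pass gets forwarded to the stream machinery via Transform.call(this, options), so you could tune standard stream options this way (highWaterMark is a standard stream option, not something Upper itself defines):

var upper = new Upper({ highWaterMark: 1024 }); // options is now { highWaterMark: 1024 }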
Q: I assume this Transform.call is being made to explicitly set the this variable? But why does the program do that, seeing as Transform never seems to be called anywhere else?
A: No; Transform.call() allows the inherited "class" to perform its own initialization, such as setting up internal state variables. You can think of it as calling super() in an ES6 class.
Q: After googling the util package, I know that it is being used here to allow Upper to inherit Transform's prototypal methods. Is that right?
A: Yes, that is correct. However, these days you can also use ES6 classes to do real inheritance. The node.js stream implementers documentation shows examples of both inheritance methods.
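For comparison, here is a rough sketch of the same stream written as an ES6 class (on a Node version with class support); super(options) plays the role that Transform.call(this, options) plays above. The name UpperClass is only used to avoid clashing with the Upper defined earlier:

const { Transform } = require('stream');

class UpperClass extends Transform {
    constructor(options) {
        super(options); // equivalent to Transform.call(this, options)
    }
    _transform(chunk, enc, cb) {
        this.push(chunk.toString().toUpperCase());
        cb();
    }
}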
Q: The function below is what really confuses me. I understand that the program is setting a method on Upper's prototype which is used to transform data being input into it. But, I don't see where this function is being called at all!
A: This function is called internally by node when it has data for you to process. Think of the method as being part of an interface (or a "pure virtual function" if you are familiar with C++) that you are required to implement in your custom Transform.
Q: After running the code through a debugger, I can see that upper.write calls the aforementioned Upper.prototype._transform method, but why does this happen? upper is an instance of the Upper constructor, and write is a method that doesn't seem to have any relation to the _transform method being applied to the prototype of Upper.
A: As noted in the Transform documentation, Transform streams are merely simplified Duplex streams (meaning they accept input and produce output). When you call .write() you are writing to the Writable (input) side of the Transform stream. This is what triggers the call to ._transform() with the data you just passed to .write(). When you call .push() you are writing to the Readable (output) side of the Transform stream. That data is what is seen when you either call .read() on the Transform stream or attach a 'data' event handler.
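For instance, instead of the upper.pipe(process.stdout) call in the question, the Readable side could be consumed directly with a 'data' handler (a minimal sketch):

upper.on('data', function (chunk) {
    process.stdout.write(chunk); // receives exactly what _transform() pushed
});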
In Backbone 1.1.2, at line 279
// Return a copy of the model's `attributes` object.
toJSON: function(options) {
    return _.clone(this.attributes);
},
options is clearly not used, so why have it there at all? It is just wasted memory.
What am I missing here?
Per a comment, here is one way this code gets called, so why pass options when it will not be used?
toJSON: function(options) {
    return this.map(function(model){ return model.toJSON(options); });
},
It doesn't waste memory, since the argument would have to be available as arguments[0] anyway: either options is the result of a function call, which the VM has to evaluate for its side effects anyway, or it's just an object, in which case only a reference is passed.
It also serves as documentation of what subclasses can implement.
Since JS uses prototypes for its object orientation, if you create a toJSON function in one of your own subclasses, it'll be used instead.
There's no functional reason for including that in the Model.toJSON signature -- it is strictly for developers' benefit.
From the link that @t.niese uncovered:
Morning @aoboturov! Thanks for pointing this out. It's actually intentional and is meant to remind you that collection.toJSON(options) passes along the options argument to each of its models by default. See #1222 and #1098 for details.
(Note that the referenced #1222 is basically just a dupe, and #1098 is where they added the feature in the first place.)
In other words, the parameter is put there for the sake of clarity for developers who may want to override the Model.toJSON implementation. The collection passes along the original Backbone.sync call's options object, since some implementations of Model.toJSON might want to use it.
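As a hypothetical illustration (the includeMeta option and the meta property below are made up, not part of Backbone), an override that actually consults options might look like:

var MyModel = Backbone.Model.extend({
    toJSON: function (options) {
        var json = _.clone(this.attributes);
        if (options && options.includeMeta) {
            json.meta = this.meta; // extra data only when the caller asks for it
        }
        return json;
    }
});

Because Collection.toJSON forwards its options argument, calling collection.toJSON({ includeMeta: true }) would reach every model's override.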
Including that options parameter in the signature doesn't affect memory usage at all (even if it did, the effect would be minuscule), since the Collection.toJSON implementation passes that options object as an argument either way.
Consider the code:
Example 1
var Executors = java.util.concurrent.Executors;
var executor = Executors.newCachedThreadPool();
var fork = function (callable) {
    // Clarify Runnable versus Callable overloaded methods
    executor['submit(java.util.concurrent.Callable)'](callable);
};
fork(function(){ ... }); //ok
This works.
But this does not work:
Example 2
var Executors = java.util.concurrent.Executors;
var executor = Executors.newCachedThreadPool();
var fork = executor['submit(java.util.concurrent.Callable)'];
fork(function(){ ... }); //fails, NullPointerException
I assume it is because fork here is not a JS Function instance; it is actually an instance of jdk.internal.dynalink.beans.SimpleDynamicMethod.
I tried to use fork.apply(executor, function() { ... }); but naturally, SimpleDynamicMethod has no apply.
Why is it, actually, that Example 2 does not work, while Example 1 does?
Is it simply a quirk of Nashorn? Is there a better way to define the fork() function than in Example 1?
Update
In example 2,
print(typeof fork); reports function
print(fork) reports [jdk.internal.dynalink.beans.SimpleDynamicMethod Future java.util.concurrent.AbstractExecutorService.submit(Callable)]
and the exception is (with line 13 reading fork(function() {):
Exception in thread "main" java.lang.NullPointerException
    at jdk.nashorn.internal.scripts.Script$^eval_._L5(<eval>:13)
    at jdk.nashorn.internal.scripts.Script$^eval_.runScript(<eval>:5)
    at jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:527)
Unfortunately, currently you can't use bind, apply, and call with POJO methods. I'd like to add that as a future enhancement. The best you can currently do in your above example is:
executor['submit(Callable)'](function() { ... })
Note that while in general indexed access to a property (using the [] operator) is less efficient than property name access (using the . operator), Nashorn recognizes indexed access with a string literal and treats it just as efficiently as a property name access, so you don't suffer a slowdown here, just a bit of visual noise. In the case above, it will actually end up getting linked to the executor's virtual method directly.
Speaking of visual noise, you don't have to fully qualify java.util.concurrent.Callable. When the non-qualified name of the parameter type is sufficient to disambiguate the overloads (which is pretty much always), you can just use the non-qualified name, regardless of what package it is in (works for your own classes too).
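Putting those two points together, here is a minimal sketch of the wrapper from Example 1 with the shorter overload key (the print call is just for demonstration):

var fork = function (callable) {
    return executor['submit(Callable)'](callable); // returns a java.util.concurrent.Future
};
fork(function () { print('running on the pool'); });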
The problem is that you are missing the receiver 'executor' from the call. In general, 'fetching' Java functions is only practical with static Java functions. For example:
jjs> var abs = java.lang.Math.abs;
jjs> abs(-10);
10
In your example, we could have bound fork to executor and made it equivalently static. This support is currently not present. We should probably have support for adding the receiver as the first argument if 'fetched' from a class. Will file an enhancement request for a future release.
Alex,
In Example 1, var fork is a function that returns the executor array. In Example 2, var fork is an array. That is why you can't use () and apply.
Does fork[0](function(){...}) work for you?
Thanks
Edit: I found this interesting library which looks like it can do exactly what I was describing at the bottom: https://github.com/philbooth/check-types.js
Looks like you can do it by calling check.quacksLike.
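A rough sketch of how I imagine using it, assuming quacksLike takes the candidate object first and the archetype second and returns a boolean:

var check = require('check-types');

var baseCommand = {
    cmd: '',
    getExecutionContext: function () {}
};

var execute = function (args) {
    if (!check.quacksLike(args, baseCommand)) {
        throw new Error('args does not honor the baseCommand interface');
    }
    executor.execute(args); // executor as in the example below
};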
I'm fairly new to using javascript and I'm loving the amount of power it offers, but sometimes it is too flexible for my sanity to handle. I would like an easy way to enforce that some argument honors a specific interface.
Here's a simple example method that highlights my problem:
var execute = function(args)
{
    executor.execute(args);
};
Let's say that the executor expects args to have a property called cmd. If it is not defined, an error might be caught at another level when the program tries to reference cmd but it is undefined. Such an error would be more annoying to debug than explicitly enforcing cmd's existence in this method. The executor might even expect that args has a function called getExecutionContext() which gets passed around a bit. I could imagine much more complex scenarios where debugging would quickly become a nightmare of tracing through function calls to see where an argument was first passed in.
Nor do I want to do something along the lines of:
var execute = function(args)
{
    if (args.cmd === undefined || args.getExecutionContext === undefined ||
        typeof args.getExecutionContext !== 'function') {
        throw new Error("args not setup correctly");
    }
    executor.execute(args);
};
This would entail a significant amount of maintenance for every function that has arguments, especially for complex arguments. I would much rather be able to specify an interface and somehow enforce a contract that tells javascript that I expect input matching this interface.
Maybe something like:
var baseCommand =
{
    cmd: '',
    getExecutionContext: function(){}
};
var execute = function(args)
{
    enforce(args, baseCommand); // throws an error if args does not honor
                                // baseCommand's properties
    executor.execute(args);
};
I could then reuse these interfaces amongst my different functions and define objects that extend them to be passed into my functions without worrying about misspelling property names or passing in the wrong argument. Any ideas on how to implement this, or where I could utilize an existing implementation?
I don't see any other way to enforce this. It's one of the side effects of the dynamic nature of JavaScript. It's essentially a free-for-all, and with that freedom comes responsibility :-)
If you're in need of type checking you could have a look at TypeScript (it's not JavaScript) or Google's Closure Compiler (JavaScript with comments).
Closure Compiler uses comments to figure out what type is expected when you compile. It looks like a lot of trouble but can be helpful in big projects.
There are other benefits that come with Closure Compiler, as you will be forced to produce comments that are used by IDEs like NetBeans; it also minifies your code, removes unused code, and flattens namespaces. So code organized in namespaces like myApp.myModule.myObject.myFunction will be flattened to minimize object lookup.
The cons are that you need to use externs when you use libraries that are not compiler-compatible, like jQuery.
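As a rough sketch, the kind of comment Closure Compiler checks against looks like this (the record type simply mirrors the args shape from the question):

/**
 * @param {{cmd: string, getExecutionContext: function()}} args
 */
var execute = function (args) {
    executor.execute(args); // the compiler warns if a caller's args is missing cmd
};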
The way that this kind of thing is typically dealt with in JavaScript is to use defaults. Most of the time you simply want to provide a guarantee that certain members exist to prevent things like reference errors, but I think that you could use the same principle to get what you want.
By using something like jQuery's extend method, we can guarantee that a parameter implements a set of defined defaults.
var defaults = {
    prop1: 'exists',
    prop2: function() { return 'foo'; }
};
function someCall(args) {
    var options = $.extend({}, defaults, args);
    // Do work with options... It is now guaranteed to have members prop1 and prop2,
    // defined by the caller if they exist, using defaults if not.
}
If you really want to throw errors at run time if a specific member wasn't provided, you could perhaps define a function that throws an error, and include it in your defaults. Thus, if a member was provided by the caller, it would overwrite the default, but if it was missed, it could either take on some default functionality or throw an error as you wish.
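A minimal sketch of that idea, with a made-up required() helper and an onComplete member chosen purely for illustration:

var required = function (name) {
    return function () { throw new Error(name + ' was not provided'); };
};

var defaults = {
    prop1: 'exists',
    onComplete: required('onComplete') // caller must supply this one
};

function someCall(args) {
    var options = $.extend({}, defaults, args);
    options.onComplete(); // throws unless the caller passed an onComplete of their own
}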
I have a set of C++ classes exposed to JavaScript in Qt 5, based on the QJSEngine (because Qt Script seems to be deprecated).
My QObject-derived classes A and B have the Q_OBJECT macro and use the Q_DECLARE_METATYPE macro as well.
I have exposed factory functions for my classes that allow me to create new instances from inside javascript. All the following works fine:
a = namespace.createNewA(); // QJSEngine reports a type A object
b = namespace.createNewB(); // QJSEngine reports a type B object
b.SetParent(a); // SetParent is a slot of B taking a const A& parameter, gets called correctly
// But now.
b.GetParent(); // Reports QVariant(A), even though this is a slot: A GetParent() const
Is there a way for me to ensure that GetParent in javascript gets recognized as an actual type A object, instead of a QVariant?
I figured out the issues with my original code:
Using Qt slots with return values is probably not a good idea in the general case (see e.g. Qt: meaning of slot return value?). I changed the function to Q_INVOKABLE, which didn't actually change the result but it seemed a safer place to continue from.
Returning by value doesn't seem to be a good idea for the scripting/wrapping either. Digging through the code with a debugger I found that the QVariant cast-to-qobject failed, and that made a lightbulb go off: by-value probably breaks the qobject_cast that tries to look up the QObject-derived type. I changed the call to return a pointer-to-QObject and now the QJSEngine reports the object as the correct type.
This required some changes to my code setup, because the returned value used to be a temporary, but I can live with that.
I'm working on a plug-in for jQuery and I'm getting this JSLint error:
Problem at line 80 character 45: Do not use 'new' for side effects.
(new jQuery.fasterTrim(this, options));
I haven't had much luck finding info on this JSLint error or on any side effects that new might have.
I've tried Googling for "Do not use 'new' for side effects." and got 0 results. Binging gives me 2 results but they both just reference the JSLint source. Hopefully this question will change that. :-)
Update #1:
Here's more source for the context:
jQuery.fn.fasterTrim = function(options) {
    return this.each(function() {
        (new jQuery.fasterTrim(this, options));
    });
};
Update #2:
I used the Starter jQuery plug-in generator as a template for my plug-in, which has that code in it.
JSLint itself gives you the reason:
Constructors are functions that are designed to be used with the new prefix. The new prefix creates a new object based on the function's prototype, and binds that object to the function's implied this parameter. If you neglect to use the new prefix, no new object will be made and this will be bound to the global object. This is a serious mistake.
JSLint enforces the convention that constructor functions be given names with initial uppercase. JSLint does not expect to see a function invocation with an initial uppercase name unless it has the new prefix. JSLint does not expect to see the new prefix used with functions whose names do not start with initial uppercase. This can be controlled with the newcap option.
JSLint does not expect to see the wrapper forms new Number, new String, new Boolean.
JSLint does not expect to see new Object (use {} instead).
JSLint does not expect to see new Array (use [] instead).
Travis, I am the developer behind the Starter site.
@Pointy hit the nail on the head. The reason the Starter code is written that way is because we do need a new object, we just don't need to store a reference to it at that point.
Simply changing the command from
(new jQuery.fasterTrim(this, options));
to
var fT = new jQuery.fasterTrim(this, options);
will appease JSLint as you have found.
The Starter plugin setup follows the jQuery UI pattern of storing a reference to the object in the data set for the element. So this is what is happening:
1. A new object is created (via new).
2. The instance is attached to the DOM element using jQuery's data: $(el).data('FasterTrim', this).
3. There is no use for the object that is returned, and thus no var declaration is made.
I will look into changing the declaration and cleaning up the output to pass JSLint out of the box.
A little more background:
The benefit to storing the object using data is that we can access the object later at any time by calling $("#your_selector").data('FasterTrim'). However, if your plugin does not need to be accessed mid-stream that way (meaning it gets set up in a single call and offers no future interaction), then storing a reference is not needed.
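A rough sketch of that pattern (the constructor body here is an assumption for illustration, not the actual Starter/fasterTrim source):

jQuery.fasterTrim = function (el, options) {
    this.options = options || {};
    jQuery(el).data('FasterTrim', this); // stash the instance on the element
};

// Later, from anywhere else on the page:
var instance = jQuery('#your_selector').data('FasterTrim');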
Let me know if you need more info.
It's complaining because you're calling "new" but then throwing away the returned object, I bet. Why is that code using "new"? In other words, why isn't it just
jQuery.fasterTrim(this, options);
Edit: OK, well, that "Starter" tool generates the code that way because it really does want a new object created, and yes, it really is to take advantage of side effects. The constructor code that "Starter" generates stashes a reference to the new object on the affected element, using the jQuery "data" facility.
You are using new to perform some action instead of to create an object and return it. JSLint considers this an invalid use of new.
You should either use it like this:
var x = new SomeConstructor();
Or perform some action like this:
SomeMethod();
But never use new to perform an action like this:
new SomeConstructor(args);
Doing so is considered using new for side effects because you aren't using it to create an object.
Basically JavaScript tends to be a slow beast, so creating a new object just to call a function is quite inefficient. The function is static anyway.
$.fasterTrim(this, options);
From jQuery fasterTrim source code:
* Usage:
*
* $(element).fasterTrim(options); // returns jQuery object
* $.fasterTrim.trim(" string ", options); // returns trimmed string
To answer the question, "Do not use new for side effects" means:
Do not use new for what the constructor will do to its parameters but to create an object; side effects in constructors are baaaad!