I find it strange that all WebGL constants are defined as members of the rendering context. This means that if the context is wrapped in some library, accessing those constants becomes problematic.
Is there any reason why I can't define them all explicitly? Or, if they are implementation defined, maybe the first time a context is created, write all enum values to some global object?
Basically, instead of writing new renderer.Texture(renderer.gl.TEXTURE_2D) or new renderer.Texture("TEXTURE_2D"), I want to write something like new renderer.Texture(WebGL.TEXTURE_2D).
You can access them using WebGLRenderingContext and WebGL2RenderingContext without creating an instance of a context. For example:
console.log(WebGLRenderingContext.TEXTURE_2D); // 3553
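The same works without a context for WebGL2-only constants:
console.log(WebGL2RenderingContext.TEXTURE_3D); // 32879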
You are free to define them as your own constants. In fact, it may make your code faster.
const TEXTURE_2D = 0x0DE1
...
gl.bindTexture(TEXTURE_2D, someTexture);
Is perfectly fine. And, if that code is run through a modern JavaScript compressor it will get turned into this
gl.bindTexture(0x0DE1, someTexture);
Which will arguably be faster: faster than gl.TEXTURE_2D because when you use gl.TEXTURE_2D the JavaScript engine always has to check that someone didn't assign gl.TEXTURE_2D to something else, and faster than TEXTURE_2D because even a const variable represents something being created, whereas 0x0DE1 definitely does not.
Just because I'll probably get some questions later: my point above about speed is that the JavaScript engine has to check, every single time you call
gl.bindTexture(gl.TEXTURE_2D, ...)
that someone somewhere didn't do
gl.TEXTURE_2D = 123
or make a property getter
Object.defineProperty(gl, 'TEXTURE_2D', {
  enumerable: true,
  // note: an accessor property can't also declare "writable"
  get() {
    console.log('TEXTURE_2D was accessed at', (new Error()).stack);
    return 0x0DE1;
  },
});
The JavaScript engine can't assume that the TEXTURE_2D property was not changed. It has to check every time.
As for const, there may or may not be a general speed difference, but consider a function that returns a function, like this:
function makeFuncThatReturnsValue(value) {
  const v = value;
  return function() {
    return v;
  };
}
We can see that every time we call makeFuncThatReturnsValue a new v will be created and captured in the closure.
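For example, calling it twice with the same value still produces two distinct function objects:
const f1 = makeFuncThatReturnsValue(0x0DE1);
const f2 = makeFuncThatReturnsValue(0x0DE1);
console.log(f1(), f2()); // 3553 3553, the same value
console.log(f1 === f2);  // false, two separate closures were created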
Just using a literal directly won't have that issue; nothing will be created. Of course you don't want to use the literal directly (magic numbers are bad), but if you compile your JavaScript with a modern compressor it will swap any consts for literals where appropriate.
Running an example through Google's closure compiler
Code:
const w = {
  TEXTURE_2D: 0x0DE1,
};
gl.bindTexture(w.TEXTURE_2D, null);
Result:
gl.bindTexture(3553,null);
What I need to do
I need to detect whether two objects are the same. By same I mean deep-equal: different objects that look and behave the same are the same to me. For instance {} is the same as {}, even though {} != {}.
I need to do this on Node.js.
Problem
This has been easy with most types I handle (undefined, null, numbers, NaN, strings, objects, arrays), but it's proving really hard with functions.
For functions, I would consider two functions to be the same if their name and code are identical. In case the functions are closures, the variables they capture also need to be the same instances.
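To illustrate the intended semantics, here is a small hypothetical example:
const make = (obj) => () => obj;
const shared = {};
const g1 = make(shared); // g1 and g2 capture the same instance:
const g2 = make(shared); // they should count as the same
const g3 = make({});     // g3 captures a different instance: not the same as g1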
I don't know how to implement that.
Attempted solutions
These are all the approaches I could think of to compare functions, but they all have issues:
Comparing the functions with == or === doesn't work, of course. Example: ()=>0 != ()=>0, but they should be the same.
Comparing the function names doesn't work either. Example: ()=>0 and ()=>1 have the same (empty) name, but they shouldn't be the same.
Comparing the function's code (as reported by Function.prototype.toString) doesn't work:
const fnFactory = (n) => { return () => n; };
const fn1 = fnFactory(0);
const fn2 = fnFactory(1);
fn1.toString() === fn2.toString() // true
But they shouldn't be the same.
Compare the functions' code. If it's the same, parse it and detect whether the function has any captured variables; if it doesn't, the functions are the same.
This solution, however, would be unbearably slow. Besides, it still doesn't work for different functions that capture the same instances of variables.
What I need this for (example)
I need this in order to implement a factory function like this (this is just a minimal example, of course):
// Precondition:
// "fn" is called only once for every combination of "fn" and "opts";
// the result is then reused for every future invocation.
const storage = new MagicStorage();
function createOrReuse(fn, ...opts) {
  if (storage.has(fn, ...opts)) {
    return storage.get(fn, ...opts);
  }
  else {
    const data = fn(...opts);
    storage.set(data, fn, ...opts);
    return data;
  }
}
Then I want to use it in several places around my code. For instance:
function f1(n) {
  // ...
  const buffer = createOrReuse((n) => Buffer.alloc(n * 1024 * 1024), n);
  // ...
}

function f2() {
  // ...
  emitter.on('ev', async () => {
    const buffer = await createOrReuse(() => fs.readFile('file.txt'));
    // ...
  });
  // ...
}
Of course there are other ways to achieve the same result: For instance I could store the allocated values in variables with a lifetime long enough.
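A sketch of that alternative (the names are illustrative):
let f1Buffer; // lives as long as the module
function f1(n) {
  if (!f1Buffer) {
    f1Buffer = Buffer.alloc(n * 1024 * 1024);
  }
  // ...
}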
However, similar solutions are much less ergonomic; I wish to have that small createOrReuse function. In other languages (like C++) such a createOrReuse could be implemented. Can I not implement it in JavaScript?
Question
Is it possible to implement the function-comparison logic I need in pure JavaScript? I can use any ES version.
Otherwise, is it possible to implement it as a native module for Node.js?
If yes, is there any existing Node.js native module that can be used to achieve what I need?
Otherwise, where can I start to develop one?
Having read this article https://www.toptal.com/javascript/es6-class-chaos-keeps-js-developer-up and subsequently "JavaScript: The Good Parts", I shall henceforth commit to becoming a better JavaScript developer. However, one question remained for me. I usually implemented methods like this:
function MyClass() {
  this.myData = 43;
  this.getDataFromObject = function() {
    return this.myData;
  };
}

MyClass.prototype.getDataFromPrototype = function() {
  return this.myData;
};
var myObject = new MyClass();
console.log(myObject.getDataFromObject());
console.log(myObject.getDataFromPrototype());
My assumption that underlies this whole post is that getDataFromObject is faster (during call, not during object creation) because it saves an indirection to the prototype, but that it is also less memory-efficient because every object gets its own instance of the function object. If that is already wrong, please correct me and you can probably stop reading here.
Else: both the article and the book recommend a style like this:
function secretFactory() {
  const secret = "Favor composition over inheritance [...]!"
  const spillTheBeans = () => console.log(secret)
  return {
    spillTheBeans
  }
}
const leaker = secretFactory()
leaker.spillTheBeans()
(quote from the article, the book didn't have ES6 yet but the ideas are similar)
My issue is this:
const leaker1 = secretFactory()
const leaker2 = secretFactory()
console.log(leaker1.spillTheBeans === leaker2.spillTheBeans) // false
Do I not mostly want to avoid that every object gets its own instance of every method? It might be insignificant here, but what if spillTheBeans is more complicated and I create a bazillion objects, each with twelvetythousand other methods?
If so, what is the "good parts" solution? My assumption would be:
const spillStaticBeans = () => console.log("Tabs rule!")
const spillInstanceBeans = (beans) => console.log(beans)

function secretFactory() {
  const secret = "Favor composition over inheritance [...]!"
  return {
    spillStaticBeans,
    spillInstanceBeans: () => spillInstanceBeans(secret)
  }
}
const leaker1 = secretFactory()
const leaker2 = secretFactory()
leaker1.spillStaticBeans()
leaker2.spillInstanceBeans()
console.log(leaker1.spillStaticBeans === leaker2.spillStaticBeans) // true
console.log(leaker1.spillInstanceBeans === leaker2.spillInstanceBeans) // false
The spillInstanceBeans method is still different for each instance because each instance needs its own closure, but at least it just wraps a reference to the same function object, which contains all the expensiveness.
But now I have to write every method name two to three times. Worse, I clutter the namespace with public spillStaticBeans and spillInstanceBeans functions. In order to mitigate the latter, I could write a meta factory module:
const secretFactory = (function() {
  const spillStaticBeans = () => console.log("Tabs rule!")
  const spillInstanceBeans = (beans) => console.log(beans)
  return function() {
    const secret = "Favor composition over inheritance [...]!"
    return {
      spillStaticBeans,
      spillInstanceBeans: () => spillInstanceBeans(secret)
    }
  }
}())
This can be used the same way as before but now the methods are hidden in a closure. However, it gets a bit confusing. Using ES6 modules, I could also leave them in module scope and just not export them. But is this the way to go?
Or am I mistaken in general and JavaScript's internal function representation takes care of all this and there is not actually a problem?
My assumption that underlies this whole post is that getDataFromObject is faster to call than getDataFromPrototype because it saves an indirection to the prototype
No. Engines are very good at optimising the prototype indirection. The instance.getDataFromPrototype always resolves to the same method for instances of the same class, and engines can take advantage of that. See this article for details.
Do I not mostly want to avoid that every object gets an own instance of every method? It might be insignificant here
Yes. In most cases, it actually is insignificant. So write your objects with methods using whatever style you prefer. Only if you actually measure a performance bottleneck should you reconsider the cases where you are creating many instances.
Using ES6 modules, I could also leave them in module scope and just not export them. But is this the way to go?
Yes, that's a sensible solution. However, there's no good reason to extract spillInstanceBeans to the static scope; just leave it where it was, since you have to create a closure over the secret anyway.
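A minimal sketch of that module-based approach (the file and export names are illustrative):
// secret-factory.js: the static helper stays module-private unless exported
const spillStaticBeans = () => console.log("Tabs rule!")

export function secretFactory() {
  const secret = "Favor composition over inheritance [...]!"
  return {
    spillStaticBeans,
    spillInstanceBeans: () => console.log(secret)
  }
}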
The spillInstanceBeans method is still different because each instance needs its own closure but at least they just wrap a reference to the same function object which contains all the expensiveness.
It should be noted that you're just replicating the way the JavaScript VM works internally: a function like spillTheBeans is compiled only once where it occurs in the source code even if it has free variables like secret. In SpiderMonkey for example, the result is called a »proto-function« (not to be confused with prototype). These are internal to the VM and cannot be accessed from JavaScript.
At runtime, function objects are created by binding the free variables of proto-functions to (a part of) the current scope, pretty much like your spillInstanceBeans example.
That said, it's true that using closures instead of prototype methods and this creates more function objects overall; the robustness gained from true privacy and read-only properties might make it worthwhile. The proposed style focuses more on objects than on classes, so a different design could emerge that cannot be compared directly to a class-based design.
As Bergi says, measure and reconsider if performance is more important in (some part of) your code.
I'm building various d3.js dashboards which frequently refer to a javascript_properties.js file which includes properties such as:
var all_charts = (function() {
  return {
    width: 860,
    height: 500,
    from_date: "",
    to_date: "",
    highlight_color: "#00FFFF"
  };
}());
I use these properties frequently within various functions.
My question is:
Is there any harm in accessing each property directly every time I use it, or would it be more efficient to declare a local variable at the beginning of each function if a property is going to be used more than once?
To show an example. A local variable:
var width = all_charts.width;
OR calling
all_charts.width
as many times as required during a function.
There may be little discernible difference?
This isn't about memory usage, it's about lookup time.
Yes, caching the property to a local variable may make it faster when using that repeatedly afterward, as the JavaScript engine doesn't have to traverse the scope chain up to the global level to find all_charts and then look up width on it.
But, it's unlikely to make a noticeable difference unless you're using these properties hundreds of thousands of times in the same function.
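For illustration, the cached version of a hot function might look like this (drawChart is a hypothetical function):
function drawChart(data) {
  // look the properties up once instead of on every iteration
  var width = all_charts.width;
  var height = all_charts.height;
  for (var i = 0; i < data.length; i++) {
    // ... use width and height here ...
  }
}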
Side note: There's no point to the function in the all_charts code, what you have does exactly what this does, just more indirectly:
var all_charts = {
  width: 860,
  height: 500,
  from_date: "",
  to_date: "",
  highlight_color: "#00FFFF"
};
Below is a code snippet I found online on a blog; it is a simple example of using the stream Transform class to alter data streams and output the altered result. There are some things about this that I don't really understand.
var stream = require('stream');
var util = require('util');
// node v0.10+ use native Transform, else polyfill
var Transform = stream.Transform ||
require('readable-stream').Transform;
Why does the program need to check if the this var points to an instance of the Upper constructor? The Upper constructor is being used to construct the upper object below, so what is the reason to check for this? Also, I tried logging options, but it returns null/undefined, so what's the point of that parameter?
function Upper(options) {
  // allow use without new
  if (!(this instanceof Upper)) {
    return new Upper(options);
  }
I assume that this Transform.call is being made to explicitly set the this variable? But why does the program do that, seeing as Transform is never being called anywhere else?
  // init Transform
  Transform.call(this, options);
}
After googling the util package, I know that it is being used here to allow Upper to inherit Transform's prototypal methods. Is that right?
util.inherits(Upper, Transform);
The function below is what really confuses me. I understand that the program is setting a method on Upper's prototype which is used to transform data being input into it. But, I don't see where this function is being called at all!
Upper.prototype._transform = function(chunk, enc, cb) {
  var upperChunk = chunk.toString().toUpperCase();
  this.push(upperChunk);
  cb();
};
// try it out - from the original code
var upper = new Upper();
upper.pipe(process.stdout); // output to stdout
After running the code through a debugger, I can see that upper.write calls the aforementioned Upper.prototype._transform method, but why does this happen? upper is an instance of the Upper constructor, and write is a method that doesn't seem to have any relation to the _transform method being applied to the prototype of Upper.
upper.write('hello world\n'); // input line 1
upper.write('another line'); // input line 2
upper.end(); // finish
First, if you haven't already, take a look at the Transform stream implementer's documentation here.
Q: Why does the program need to check if the this var points to an instance of the Upper constructor? The Upper constructor is being used to construct the upper object below, so what is the reason to check for this?
A: It needs to check because anyone can call Upper() without new. So if it's detected that a user called the constructor without new, out of convenience (and to make things work correctly), new is implicitly called on the user's behalf.
Q: Also, I tried logging options, but it returns null/undefined, so what's the point of that parameter?
A: options is just a constructor/function parameter. If you don't pass anything to the constructor, then obviously it will be undefined; otherwise it will be whatever value you passed. You can have as many parameters as you want/need, just like any ordinary function. In the case of Upper(), however, configuration isn't really needed due to the simplicity of the transform (it just converts all input to uppercase).
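For example, any options you do pass are simply forwarded to the Transform base constructor (highWaterMark is a standard stream option):
var upper = new Upper({ highWaterMark: 1024 }); // handed to Transform.call(this, options)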
Q: I assume that this Transform.call method is being made to explicitly set the this variable? But why does the program do that, seeing as how Transform is never being called anyway.
A: No, the Transform.call() allows the inherited "class" to perform its own initialization, such as setting up internal state variables. You can think of it as calling super() in ES6 classes.
Q: After googling the util package, I know that it is being used here to allow Upper to inherit Transform's prototypal methods. Is that right?
A: Yes, that is correct. However, these days you can also use ES6 classes to do real inheritance. The node.js stream implementers documentation shows examples of both inheritance methods.
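For reference, an ES6 class version of Upper would look roughly like this (a sketch following the pattern in the stream docs):
const { Transform } = require('stream');

class Upper extends Transform {
  _transform(chunk, enc, cb) {
    this.push(chunk.toString().toUpperCase());
    cb();
  }
}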
Q: The function below is what really confuses me. I understand that the program is setting a method on Upper's prototype which is used to transform data being input into it. But, I don't see where this function is being called at all!
A: This function is called internally by node when it has data for you to process. Think of the method as being part of an interface (or a "pure virtual function" if you are familiar with C++) that you are required to implement in your custom Transform.
Q: After running the code through a debugger, I can see that upper.write calls the aforementioned Upper.prototype._transform method, but why does this happen? upper is an instance of the Upper constructor, and write is a method that doesn't seem to have any relation to the _transform method being applied to the prototype of Upper.
A: As noted in the Transform documentation, Transform streams are merely simplified Duplex streams (meaning they accept input and produce output). When you call .write(), you are writing to the Writable (input) side of the Transform stream. This is what triggers the call to ._transform() with the data you just passed to .write(). When you call .push(), you are writing to the Readable (output) side of the Transform stream. That data is what is seen when you either call .read() on the Transform stream or attach a 'data' event handler.
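To make that flow concrete (reusing the Upper stream from the question):
var upper = new Upper();
upper.on('data', function(chunk) {
  // chunk is whatever ._transform() pushed to the Readable side
  console.log('got:', chunk.toString());
});
upper.write('abc'); // Writable side -> ._transform() -> .push() -> 'data' event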
This might sound like a noob question, but here goes:
Basically, I'm passing a large amount of data from one object to another. Below is a simplified example.
// Example 1
function Person(hugeData) {
  this.info = function() {
    console.log(hugeData);
  };
}
Homer = new Person(hugeData);
Homer.info();
Compared with
// Example 2
function Person() {
  var hugeData;
  this.set = function(data) {
    hugeData = data;
  };
  this.info = function() {
    console.log(hugeData);
  };
}
Homer = new Person();
Homer.set(hugeData);
Homer.info();
Is there much of a difference performance-wise between the two code snippets? Please focus on the context of the example rather than the code itself (setting object variable vs passing by arguments).
While the example above is for JavaScript, I would also like to know if the same principle applies to other programming languages like PHP.
Thanks.
No, not at all.
Without going into much detail now: both formal parameters and local variables are stored within the so-called Activation Object (in ES3) or the Lexical Environment Record (in ES5) under the hood.
So access times should be identical by spec.
If you want to know the details, checkout:
http://dmitrysoshnikov.com/ecmascript/javascript-the-core/
and
http://dmitrysoshnikov.com/ecmascript/es5-chapter-3-2-lexical-environments-ecmascript-implementation/
Testcase: http://jsperf.com/formal-parameters-vs-local-variables-access-time
I assume the major point of your question is whether this line...
hugeData = data;
... in your code may affect the performance.
And the answer is no, it does not (at least not in a way that could affect the application's performance).
If hugeData refers to an object (and remember arrays in JS are essentially objects), it actually stores only a reference to this object. And the reference is what will be copied, without any duplication of the object's contents.
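A quick illustration of that reference copy:
var hugeData = { payload: new Array(1000000).fill(0) };
var copy = hugeData;            // only the reference is copied
console.log(copy === hugeData); // true: both names point to the same object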
If hugeData refers to a string, it can be a bit more complicated, but as far as I know most modern browsers (check MDN, for example) now implement a 'copy-on-write' technique. In other words, the string won't be duplicated here either.
Passing it by variable is fine: since your huge data is obviously not a primitive, only a reference to it is used in the function's context.
Look at jAndy's answer.
It's hard to generalize in this case because every implementation of Javascript is different.
I could imagine that when you pass a big chunk of data at object creation, you save a reallocation of memory that would happen if you created the object with little data first and then added a lot of data to it later. But most JS implementations will store the big data chunk as a reference anyway, so it is unlikely that it really matters.