In Python, you can pass any iterable to the str.join() method, like so:
",".join(some_iterable)
The argument may be a list, generator, or any other object as long as it is iterable.
Playing around with ES6, I couldn't find a way to do this without creating an array first; I had to do something like this:
function *myGenerator() { ... }
let output = [...myGenerator()].join(",");
I know that join() is an Array.prototype method. Is it possible for me to call join() or some equivalent to concatenate the values generated by myGenerator without having to create an intermediate array, like the python example above?
The comments above answer your question pretty well. However, I was curious about @jonrsharpe's comment about generating intermediate strings and wondered how that actually affects performance. So I put a join method on the prototype of a generator and tested it.
Here's the join() code:
// A generator that yields the integers from start (inclusive) to stop (exclusive)
function* nGen(start, stop) {
  while (start < stop) {
    yield start
    start++
  }
}

// Objects returned by nGen() inherit from nGen.prototype,
// so a join method placed there is available on every such iterator.
nGen.prototype.join = function(sep) {
  // Note: this consumes the iterator
  let res = this.next().value
  for (let v = this.next(); !v.done; v = this.next()) {
    res += sep + v.value
  }
  return res
}
let g = nGen(2, 20)
console.log(g.join(','))
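As an aside, the method above returns undefined when the generator yields nothing, whereas Array.prototype.join returns an empty string. A slightly more defensive variant (my own tweak, not what was benchmarked) could look like this:

nGen.prototype.join = function(sep) {
  const first = this.next()
  if (first.done) return ''  // mirror [].join(','), which yields ''
  let res = String(first.value)
  for (let v = this.next(); !v.done; v = this.next()) {
    res += sep + v.value
  }
  return res
}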
The original jsPerf tests in Safari and Chrome showed this working faster than the very common idiom [...g].join(','). In a JSBench test in 2022, results were inconsistent across Chrome, Firefox, and Edge, but this method is now slower than both [...g].join(',') and a plain for loop.
I am far from a master of writing jsPerf/JSBench tests, so maybe I'm screwing something up or misinterpreting the result. But if you're interested: https://jsbench.me/zlkz8tm6vw/1
I have an iterator object, and I would like to get the first element that satisfies a testing function.
I know I can use Array.prototype.find if I convert the iterator to an array, using the [...ducks].find((duck) => duck.age >= 3) syntax, for example.
But I do not want to convert the iterator to an array because it's wasteful. In my example, I have many ducks, and I need an old one that will be found after only a few tests.
I'm wondering whether an elegant solution exists.
There's nothing built in, but you can readily write a function for it, stick it in your utilities toolkit, and reuse it:
function find(it, callback) {
  for (const value of it) {
    if (callback(value)) {
      return value;
    }
  }
  // implicitly returns undefined if nothing matches,
  // just like Array.prototype.find
}
Usage:
const duck = find(ducks, (duck) => duck.age >= 3);
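Because find only relies on the iteration protocol, it works with generators and other iterables too, not just arrays, and it stops consuming values as soon as a match is found. For example (the Duck shape here is made up for illustration):

function* hatchDucks() {
  yield { name: 'Huey', age: 1 };
  yield { name: 'Dewey', age: 4 }; // found here; later ducks are never produced
  yield { name: 'Louie', age: 7 };
}

const oldDuck = find(hatchDucks(), (duck) => duck.age >= 3);
console.log(oldDuck.name); // "Dewey"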
In ES6 we can use a rest parameter, effectively creating an array of the arguments. TypeScript transpiles this to ES5 using a for loop. I was wondering: are there any scenarios where the for loop approach is a better option than using Array.prototype.slice? Maybe there are edge cases that the slice option does not cover?
// Written in TypeScript
/*
const namesJoinTS = function (firstName, ...args) {
return [firstName, ...args].join(' ');
}
const res = namesJoinTS('Dave', 'B', 'Smith');
console.log(res)
*/
// TypeScript above transpiles to this:
var namesJoinTS = function (firstName) {
  var args = [];
  for (var _i = 1; _i < arguments.length; _i++) {
    args[_i - 1] = arguments[_i];
  }
  return [firstName].concat(args).join(' ');
};
var res = namesJoinTS('Dave', 'B', 'Smith');
console.log(res); // Dave B Smith
// Vanilla JS
var namesJoinJS = function (firstName) {
  var args = [].slice.call(arguments, 1);
  return [firstName].concat(args).join(' ');
};
var res = namesJoinJS('Dave', 'B', 'Smith');
console.log(res); // Dave B Smith
This weird transpilation is a side effect of the biased optimization that older versions of V8 had (and might still have). They optimized certain patterns greatly but did not care about overall performance, so some strange patterns (like a for loop to copy arguments into an array *) ran much faster. The maintainers of libraries and transpilers therefore started optimizing their code accordingly, as that code runs on millions of devices and every millisecond counts. Now that V8's optimizations have matured and focus on average performance, most of these tricks don't work anymore; it is only a matter of time until they get refactored out of the codebases.
Additionally, JavaScript is moving towards being a language that can be optimized more easily; older features like arguments are being replaced with newer ones (rest parameters) that are stricter and therefore more performant. Use them to get good performance with good-looking code; arguments is a mistake of the past.
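As a concrete illustration of that last point (my own example, not taken from the question), here is the same helper written against arguments and with a rest parameter:

// old pattern: touching `arguments` forces the engine to keep it "live"
function joinOld() {
  return Array.prototype.slice.call(arguments).join(' ');
}

// modern pattern: a rest parameter hands the engine a plain array
function joinNew(...parts) {
  return parts.join(' ');
}

console.log(joinOld('Dave', 'B', 'Smith')); // Dave B Smith
console.log(joinNew('Dave', 'B', 'Smith')); // Dave B Smith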
I was wondering: are there any scenarios where the for loop approach is a better option than using Array.prototype.slice?
Well, it was faster on older V8 versions; whether that is still the case would have to be tested. If you're writing code for your own project, I would always choose the more elegant solution; the millisecond you might theoretically lose doesn't matter in 99% of cases.
Maybe there are edge cases that the slice option does not cover?
No (AFAIK).
* You might ask "why is it faster, though?" That's because arguments itself is hard to optimize:
1) it can be reassigned (arguments = 3);
2) it has to be "live": changing a named parameter gets reflected in arguments, and vice versa.
Therefore it can only be optimized if you access it directly, as the compiler can then replace the array-like accessor with a variable reference:
function slow(a) {
  console.log(arguments[0]);
}

// can be turned into this by the engine:
function fast(a) {
  console.log(a);
}
This also works for loops, if the engine can inline them and fall back to another (possibly slower) version when the number of arguments changes:
function slow() {
  for (let i = 0; i < arguments.length; i++) {
    console.log(arguments[i]);
  }
}

slow(1, 2, 3);
slow(4, 5, 6);
slow("what?");

// can be optimized to:
function fast(a, b, c) {
  console.log(a);
  console.log(b);
  console.log(c);
}

function fast2(a) {
  console.log(a);
}

fast(1, 2, 3);
fast(4, 5, 6);
fast2("what?");
However, if you call another function and pass arguments to it, things get really complicated:
var leaking;
function cantBeOptimized(a) {
  leak(arguments); // uurgh
  a = 1; // this has to be reflected to "leaking" ....
}
function leak(stuff) { leaking = stuff; }
cantBeOptimized(0);
console.log(leaking[0]); // has to be 1
This can't really be optimized; it is a performance nightmare.
Therefore, calling a function and passing arguments to it is a bad idea performance-wise.
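With a rest parameter the problem disappears, because the callee receives an ordinary array with no live link back to the named parameters. A sketch of the same scenario (reusing the leak idea from above, with fresh names):

var leaking2;
function canBeOptimized(a, ...rest) {
  leak2(rest); // rest is a plain array
  a = 1;       // reassigning a parameter affects nothing else
}
function leak2(stuff) { leaking2 = stuff; }
canBeOptimized(0, 'x');
console.log(leaking2[0]); // still "x": no hidden aliasing to deoptimize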
In ES6 we now have iterators and for..of to iterate over them. We also have some built-ins for arrays, notably keys, values and entries.
These methods cover much of the iteration one commonly performs. But what about iteration in reverse? That is also a very common task, and I don't see anything in the spec specifically for it. Or maybe I missed it?
Ok, we have Array.prototype.reverse, but I don't necessarily want to reverse a large array in place and then reverse it again when finished. I also don't want to use Array.prototype.slice to make a temporary shallow copy and reverse that just for iteration.
So I took a look at generators and came up with these working solutions.
(function() {
  'use strict';

  function* reverseKeys(arr) {
    let key = arr.length - 1;
    while (key >= 0) {
      yield key;
      key -= 1;
    }
  }

  function* reverseValues(arr) {
    for (let key of reverseKeys(arr)) {
      yield arr[key];
    }
  }

  function* reverseEntries(arr) {
    for (let key of reverseKeys(arr)) {
      yield [key, arr[key]];
    }
  }

  var pre = document.getElementById('out');

  function log(result) {
    pre.appendChild(document.createTextNode(result + '\n'));
  }

  var a = ['a', 'b', 'c'];
  for (var x of reverseKeys(a)) {
    log(x);
  }
  log('');
  for (var x of reverseValues(a)) {
    log(x);
  }
  log('');
  for (var x of reverseEntries(a)) {
    log(x);
  }
}());
<pre id="out"></pre>
Is this really the way that reverse iteration is intended in ES6 or have I missed something fundamental in the spec?
Is this really the way that reverse iteration is intended in ES6?
There was a proposal for reverse iteration, discussed on esdiscuss, and a Git project outlining a spec, but nothing much seemed to happen with it. ES6 is finalized now, so it's not something that is going to be added this time around. Anyway, for arrays and strings I've written a little code to fill in the gaps (in my opinion), and I will post it here as it may help others. This code is based on my browsers today, and some improvements could be made once more of ES6 is implemented in them. I may get around to a gist or a small GitHub project later.
Update: I have created a GitHub project for work on this.
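The full code isn't reproduced here, but a minimal sketch of the idea (mine, not the original gist) is an object whose [Symbol.iterator] walks the array backwards without copying it:

function reversed(arr) {
  return {
    [Symbol.iterator]() {
      let i = arr.length;
      return {
        next() {
          return i > 0
            ? { value: arr[--i], done: false }
            : { value: undefined, done: true };
        }
      };
    }
  };
}

for (const ch of reversed(['a', 'b', 'c'])) {
  console.log(ch); // c, b, a
}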
Have a look at https://www.npmjs.com/package/itiriri.
It's a library that offers methods similar to those of arrays, but works with iterators.
import { query } from 'itiriri';

const m = new Map();
m.set(1, 'a');
m.set(2, 'b');
m.set(3, 'c');

const result = query(m);
for (const [k, v] of result.reverse()) {
  console.log(k + ' - ' + v);
}
query returns an iterable that has methods similar to those of arrays. In the example above, reverse() is used. There are also filter, slice, map, concat, etc.
If you need back an array, a map, or a set from a query, you can use one of the .toArray(), .toMap() or .toSet() methods.
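For example, reusing the map m from above (the exact chaining here is assumed from the package description, not verified against its docs):

const pairs = query(m).reverse().toArray(); // e.g. [[3, 'c'], [2, 'b'], [1, 'a']]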
It is hard to Google for certain keywords, like the word "with", so I am asking here.
Is the with statement in JavaScript inefficient?
For instance, say I have:
with (obj3) {
  with (obj2) {
    with (obj1) {
      with (obj0) {
        eval("(function() { console.log(aproperty) })();");
      }
    }
  }
}
Would the above be more or less efficient if, for instance, I walked over obj0, obj1, obj2, and obj3, merged them together, and then used either:
1. one with statement alone, or
2. a parameter string built from the keys of obj0, obj1, obj2 and obj3, plus an args array for the values, used like this:
eval("function fn(aproperty, bproperty) { console.log(aproperty); }")
fn.apply(undefined, args);
Which of these three approaches would be quickest? My guess is the with statements, but having this many withs makes me think it can be optimized further.
If you're looking for options, then you may want to consider a third approach, which would be to create (on the fly if needed) a prototype chain of objects.
EDIT: My original solution was broken; it requires the non-standard __proto__ property. I'm updating it to fix that, but be aware that __proto__ isn't supported in all environments.
// Chain the objects so a lookup on obj0 falls through to obj1,
// then obj2, then obj3 (the same resolution order as the nested withs).
var objs = [null, obj3, obj2, obj1, obj0];
for (var i = 1; i < objs.length; i++) {
  objs[i].__proto__ = Object.create(objs[i - 1]);
}
var result = objs.pop();
This avoids with and should be quicker than merging, though only testing will tell.
And then if all you needed was a product of certain properties, this will be very quick.
var props = ["x2","b1","a3"];
var product = result.y3;
for (var i = 0; i < props.length; i++)
product *= result[props[i]];
Newer browsers have an internal tokenizing mechanism that makes JavaScript interpretation cheaper, much like the JIT in newer JVMs. I don't think your deeply nested withs are much of a problem in themselves; in practice the lookup will behave something like:
__get_aproperty() {
  if (obj0.has("aproperty")) return obj0.aproperty;
  if (obj1.has("aproperty")) return obj1.aproperty;
  if (obj2.has("aproperty")) return obj2.aproperty;
  if (obj3.has("aproperty")) return obj3.aproperty;
}
So while the structure of your JS is deeply nested, the structure of the real execution in the browser's JS engine will be simple and linear.
The tokenization of the JS, however, is costly, and whenever the JS engine encounters an eval, it has to tokenize again at runtime.
I voted for the first version.
The with statement will make your code run as if it were 1980: practically every optimization implemented in a JIT is disabled while it is in effect.
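The underlying reason is that with makes every identifier in its body ambiguous at compile time, so the engine cannot resolve variables statically. A small illustration:

function f(obj, x) {
  with (obj) {
    // If obj happens to have an "x" property at runtime, this reads obj.x;
    // otherwise it reads the parameter. The engine can't know which one
    // until the call happens, so it can't optimize the lookup.
    return x;
  }
}

console.log(f({}, 1));       // 1 (the parameter)
console.log(f({ x: 2 }, 1)); // 2 (the property)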
I'm trying to decide whether to use the reduce() method in JavaScript for a function I need to write, which is something like this:
var x = [some array], y = {};
for (...) {
  someOperation(x[i]);
  y[x[i]] = "some other value";
}
Now this can obviously be written as a reduce() function in the following manner:
x.reduce(function(prev, current, index, arr) {
  someOperation(current);
  prev[current] = "some other value";
  return prev;
}, {})
Or something like that. Is there any performance or other difference between the two? Or is there some other reason (browser support, for instance) why one should be favoured over the other in a web programming environment? Thanks.
Even though I prefer these operations (reduce, map, filter, etc.), it's still not feasible to use them, because some browsers do not support them in their implementations. Sure, you can "patch" it by extending the Array prototype, but that's opening a can of worms too.
I don't think there's anything inherently wrong with these functions, and I think they make for better code, but for now it's best not to use them. Once a higher percentage of the population uses a browser that supports them, I think they'll be fair game.
As far as performance goes, these will probably be slower than hand-written for loops because of the overhead of function calls.
map, filter, reduce, forEach and the rest (more info: https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Array#Iteration_methods) are far better than normal loops because:
They are more elegant
They encourage functional programming (see benefits of functional programming)
You will need to write functions anyway and pass iteration variables into them as parameters, because (pre-ES6) JavaScript has no block scope. Functions like map and reduce make your job much easier because they automatically set up the iteration variable and pass it into your function for you.
IE9 claims to support these; they're in the official JavaScript/ECMAScript spec. If you care about people who are using IE8, that is your prerogative. If you really care, you can hack around it by extending Array.prototype for ONLY IE8 and older, to "fix" them.
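For what it's worth, a minimal reduce shim along those lines might look like this (a simplified sketch; a real project should use a complete es5-shim instead):

if (!Array.prototype.reduce) {
  Array.prototype.reduce = function (fn, initial) {
    var i = 0;
    var acc;
    if (arguments.length >= 2) {
      acc = initial;
    } else {
      if (this.length === 0) {
        throw new TypeError('Reduce of empty array with no initial value');
      }
      acc = this[i++]; // simplified: doesn't skip leading holes
    }
    for (; i < this.length; i++) {
      acc = fn(acc, this[i], i, this);
    }
    return acc;
  };
}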
reduce is used to return a single value from an array, produced by combining each element with the result accumulated from the previous elements.
reduceRight does the same, but starts at the end and works backwards.
map is used to return an array whose members have all been passed through a function.
Neither method affects the original array.
var A1= ['1', '2', '3', '4', '5', '6', '7', '8'];
// This use of map returns a new array of the original elements, converted to numbers-
A1=A1.map(Number); // >> each of A1's elements converted to a number
// This reduce totals the array elements-
var A1sum= A1.reduce(function(a, b){ return a+b;});
// A1sum>> returned value: (Number) 36
They are not supported in older browsers, so you'll need to provide a substitute (a shim) for them. That's not worth it if everything you are doing can be replicated with a simple loop.
Computing the standard deviation of a population is an example where both map and reduce can be used effectively:
Math.mean = function(array) {
  return array.reduce(function(a, b) { return a + b; }) / array.length;
};

Math.stDeviation = function(array) {
  var mean = Math.mean(array);
  // note: `var` added so dev doesn't leak as a global
  var dev = array.map(function(itm) { return (itm - mean) * (itm - mean); });
  return Math.sqrt(dev.reduce(function(a, b) { return a + b; }) / array.length);
};
var A2= [6.2, 5, 4.5, 6, 6, 6.9, 6.4, 7.5];
alert ('mean: '+Math.mean(A2)+'; deviation: '+Math.stDeviation(A2))
kennebec - good going, but your stDeviation function calls reduce twice and map once when it only needs a single call to reduce (which makes it a lot faster):
Math.stDev = function (a) {
  var n = a.length;
  var v = a.reduce(function (v, x) {
    v[0] += x * x;
    v[1] += x;
    return v;
  }, [0, 0]);
  return Math.sqrt((v[0] - v[1] * v[1] / n) / n);
};
You should convert to a number when accumulating into v[1], to make sure string numbers don't skew the result, and the divisor in the last line should probably be (n - 1) in most cases, but that's up to the OP. :-)
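Applying both of those suggestions (coercing to number, and using the sample divisor) would look something like this sketch:

Math.stDevSample = function (a) {
  var n = a.length;
  var v = a.reduce(function (v, x) {
    x = Number(x); // guard against string numbers
    v[0] += x * x;
    v[1] += x;
    return v;
  }, [0, 0]);
  return Math.sqrt((v[0] - v[1] * v[1] / n) / (n - 1)); // sample variance divisor
};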