Maximum call stack on non recursive function - javascript

Hello, I'm running into a problem with this function that results in a "maximum call stack size exceeded" error. The function is not recursive, so I don't understand why it is exceeding the call stack.
I copied this function from a blog (maybe Stack Overflow). It converts a WordArray to a byte array for use in pako.js.
It's used to inflate a zlib-compressed string.
When the string is small, it doesn't exceed the call stack, but with longer strings it does.
I've tried rewriting it with setTimeout, but that made it very slow.
Does anyone have any suggestions?
Thanks.
const wordToByteArray = function (word, length) {
    // Split one 32-bit word into up to `length` bytes, most significant first
    const ba = [];
    const xFF = 0xFF;
    if (length > 0)
        ba.push(word >>> 24);
    if (length > 1)
        ba.push((word >>> 16) & xFF);
    if (length > 2)
        ba.push((word >>> 8) & xFF);
    if (length > 3)
        ba.push(word & xFF);
    return ba;
};
const wordArrayToByteArray = function (wordArray, length) {
    // Accept either a CryptoJS-style WordArray object or a plain array of words
    if (wordArray.hasOwnProperty("sigBytes") && wordArray.hasOwnProperty("words")) {
        length = wordArray.sigBytes;
        wordArray = wordArray.words;
    }
    const result = [];
    let bytes;
    let i = 0;
    while (length > 0) {
        bytes = wordToByteArray(wordArray[i], Math.min(4, length));
        length -= bytes.length;
        result.push(bytes);
        i++;
    }
    return [].concat.apply([], result);
};
Solution
Thanks for the answers below; this was the solution.
...
    while (length > 0) {
        bytes = wordToByteArray(wordArray[i], Math.min(4, length));
        length -= bytes.length;
        // Push the bytes one at a time instead of collecting sub-arrays
        // and flattening them with one giant apply at the end
        bytes.forEach(function (byte) {
            result.push(byte);
        });
        i++;
    }
    return result;
};

[].concat.apply([], result); is likely your problem; that's effectively saying "please call [].concat(result[0], result[1], result[2], ..., result[result.length - 1])". For a large input, you have a proportionately large result. Per MDN's warnings on apply:
But beware: in using apply this way, you run the risk of exceeding the JavaScript engine's argument length limit. The consequences of applying a function with too many arguments (think more than tens of thousands of arguments) vary across engines (JavaScriptCore has hard-coded argument limit of 65536), because the limit (indeed even the nature of any excessively-large-stack behavior) is unspecified. Some engines will throw an exception. More perniciously, others will arbitrarily limit the number of arguments actually passed to the applied function. To illustrate this latter case: if such an engine had a limit of four arguments (actual limits are of course significantly higher), it would be as if the arguments 5, 6, 2, 3 had been passed to apply in the examples above, rather than the full array.
If your result array is truly huge, trying to pass tens of thousands (or more) arguments to Array.concat could blow your stack. The MDN docs suggest that in scenarios like yours you could use a hybrid strategy to avoid blowing the stack, applying to chunks of arguments at a time instead of all the arguments, no matter how big.
Luckily, someone has already provided a guide on how to do this safely; just use that and explicitly loop to extend with each sub-array in result.
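For illustration, here is a minimal sketch of that idea (the helper name flattenByPush is mine, not from the linked guide). Since each sub-array produced by wordToByteArray holds at most 4 bytes, applying push per sub-array never passes a huge argument list:

function flattenByPush(arrays) {
    var result = [];
    for (var i = 0; i < arrays.length; i++) {
        // Each arrays[i] is tiny (at most 4 elements here), so apply is safe:
        // only a handful of arguments land on the stack per call.
        Array.prototype.push.apply(result, arrays[i]);
    }
    return result;
}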

Your result array is very large. The problem is in the apply call, where you provide result as the argument list. A function's arguments are pushed onto the stack, and that is what causes the stack overflow.
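For what it's worth, on modern engines (ES2019+) the flattening step can also be done without argument spreading at all, via Array.prototype.flat():

// result is an array of small per-word byte arrays, e.g. [[1, 2, 3, 4], [5, 6]]
const flattened = result.flat(); // one level of flattening, no giant argument list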

Related

Multiply a string by itself in javascript [duplicate]

This question already has answers here:
Repeat a string in JavaScript a number of times
(24 answers)
Closed last year.
What is the best or most concise method for returning a string repeated an arbitrary amount of times?
The following is my best shot so far:
function repeat(s, n) {
    var a = [];
    while (a.length < n) {
        a.push(s);
    }
    return a.join('');
}
Note to new readers: This answer is old and not terribly practical - it's just "clever" because it uses Array stuff to get String things done. When I wrote "less process" I definitely meant "less code" because, as others have noted in subsequent answers, it performs like a pig. So don't use it if speed matters to you.
I'd put this function onto the String object directly. Instead of creating an array, filling it, and joining it with an empty char, just create an array of the proper length, and join it with your desired string. Same result, less process!
String.prototype.repeat = function (num) {
    return new Array(num + 1).join(this);
};

alert("string to repeat\n".repeat(4));
I've tested the performance of all the proposed approaches.
Here is the fastest variant I've got.
String.prototype.repeat = function (count) {
    if (count < 1) return '';
    var result = '', pattern = this.valueOf();
    while (count > 1) {
        if (count & 1) result += pattern;
        count >>= 1, pattern += pattern;
    }
    return result + pattern;
};
Or as stand-alone function:
function repeat(pattern, count) {
    if (count < 1) return '';
    var result = '';
    while (count > 1) {
        if (count & 1) result += pattern;
        count >>= 1, pattern += pattern;
    }
    return result + pattern;
}
It is based on wnrph's algorithm.
It is really fast. And the bigger the count, the faster it goes compared with the traditional new Array(count + 1).join(string) approach.
I've only changed two things:
replaced pattern = this with pattern = this.valueOf() (clears one obvious type conversion);
added the if (count < 1) check from prototypejs to the top of the function to exclude unnecessary actions in that case.
Update: applied the optimisation from Dennis' answer (5-7% speed-up).
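To see why the doubling loop is logarithmic, here's a worked trace (my own) of repeat called on 'ab' with count = 5 (binary 101):

// count=5 (odd):  result = 'ab';       count=2, pattern = 'abab'
// count=2 (even): result unchanged;    count=1, pattern = 'abababab'
// loop exits (count is no longer > 1)
// return result + pattern = 'ab' + 'abababab' = 'ababababab' (5 copies)
// Only 2 doubling steps instead of 5 separate concatenations.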
UPD
Created a little performance-testing playground here for those who are interested.
(Benchmark charts omitted: one for variable count ~ 0..100, one for constant count = 1024.)
Use it and make it even faster if you can :)
This problem is a well-known / "classic" optimization issue for JavaScript, caused by the fact that JavaScript strings are "immutable" and addition by concatenation of even a single character to a string requires creation of, including memory allocation for and copying to, an entire new string.
Unfortunately, the accepted answer on this page is wrong, where "wrong" means by a performance factor of 3x for simple one-character strings, and 8x-97x for short strings repeated more times, to 300x for repeating sentences, and infinitely wrong when taking the limit of the ratios of complexity of the algorithms as n goes to infinity. Also, there is another answer on this page which is almost right (based on one of the many generations and variations of the correct solution circulating throughout the Internet in the past 13 years). However, this "almost right" solution misses a key point of the correct algorithm causing a 50% performance degradation.
(Chart: JS performance results for the accepted answer, the top-performing other answer (based on a degraded version of the original algorithm in this answer), and this answer using my algorithm created 13 years ago.)
~ October 2000 I published an algorithm for this exact problem which was widely adapted, modified, then eventually poorly understood and forgotten. To remedy this issue, in August, 2008 I published an article http://www.webreference.com/programming/javascript/jkm3/3.html explaining the algorithm and using it as an example of simple, general-purpose JavaScript optimizations. By now, Web Reference has scrubbed my contact information and even my name from this article. And once again, the algorithm has been widely adapted, modified, then poorly understood and largely forgotten.
Original string repetition/multiplication JavaScript algorithm by
Joseph Myers, circa Y2K as a text multiplying function within Text.js;
published August, 2008 in this form by Web Reference:
http://www.webreference.com/programming/javascript/jkm3/3.html (The
article used the function as an example of JavaScript optimizations,
which is the only reason for the strange name "stringFill3.")
/*
 * Usage: stringFill3("abc", 2) == "abcabc"
 */
function stringFill3(x, n) {
    var s = '';
    for (;;) {
        if (n & 1) s += x;
        n >>= 1;
        if (n) x += x;
        else break;
    }
    return s;
}
Within two months after publication of that article, this same question was posted to Stack Overflow and flew under my radar until now, when apparently the original algorithm for this problem has once again been forgotten. The best solution available on this Stack Overflow page is a modified version of my solution, possibly separated by several generations. Unfortunately, the modifications ruined the solution's optimality. In fact, by changing the structure of the loop from my original, the modified solution performs a completely unneeded extra step of exponential duplicating (thus joining the largest string used in the proper answer with itself an extra time and then discarding it).
Below ensues a discussion of some JavaScript optimizations related to all of the answers to this problem and for the benefit of all.
Technique: Avoid references to objects or object properties
To illustrate how this technique works, we use a real-life JavaScript function which creates strings of whatever length is needed. And as we'll see, more optimizations can be added!
A function like the one used here can create padding to align columns of text, format money, or fill block data up to a boundary. A text generation function also allows variable-length input for testing any other function that operates on text. This function is one of the important components of the JavaScript text processing module.
As we proceed, we will be covering two more of the most important optimization techniques while developing the original code into an optimized algorithm for creating strings. The final result is an industrial-strength, high-performance function that I've used everywhere--aligning item prices and totals in JavaScript order forms, data formatting and email / text message formatting and many other uses.
Original code for creating strings stringFill1()
function stringFill1(x, n) {
    var s = '';
    while (s.length < n) s += x;
    return s;
}

/* Example of output: stringFill1('x', 3) == 'xxx' */
The syntax here is clear. As you can see, we've used local function variables already, before going on to more optimizations.
Be aware that there's one innocent reference to an object property s.length in the code that hurts its performance. Even worse, the use of this object property reduces the simplicity of the program by making the assumption that the reader knows about the properties of JavaScript string objects.
The use of this object property destroys the generality of the computer program. The program assumes that x must be a string of length one. This limits the application of the stringFill1() function to anything except repetition of single characters. Even single characters cannot be used if they consist of multiple bytes, like an HTML entity.
The worst problem caused by this unnecessary use of an object property is that the function creates an infinite loop if tested on an empty input string x. To check generality, apply a program to the smallest possible amount of input. A program which crashes when asked to exceed the amount of available memory has an excuse. A program like this one which crashes when asked to produce nothing is unacceptable. Sometimes pretty code is poisonous code.
Simplicity may be an ambiguous goal of computer programming, but generality is not. When a program lacks any reasonable level of generality, it's not valid to say, "The program is good enough as far as it goes." As you can see, using the string.length property prevents this program from working in a general setting, and in fact, the incorrect program is ready to cause a browser or system crash.
Is there a way to improve the performance of this JavaScript as well as take care of these two serious problems?
Of course. Just use integers.
Optimized code for creating strings stringFill2()
function stringFill2(x, n) {
    var s = '';
    while (n-- > 0) s += x;
    return s;
}
Timing code to compare stringFill1() and stringFill2()
function testFill(functionToBeTested, outputSize) {
    var i = 0, t = 0, t0 = new Date(); // declare t so it isn't leaked as a global
    do {
        functionToBeTested('x', outputSize);
        t = new Date() - t0;
        i++;
    } while (t < 2000);
    return t / i / 1000;
}

seconds1 = testFill(stringFill1, 100);
seconds2 = testFill(stringFill2, 100);
The success so far of stringFill2()
stringFill1() takes 47.297 microseconds (millionths of a second) to fill a 100-byte string, and stringFill2() takes 27.68 microseconds to do the same thing. That's almost a doubling in performance by avoiding a reference to an object property.
Technique: Avoid adding short strings to long strings
Our previous result looked good--very good, in fact. The improved function stringFill2() is much faster due to the use of our first two optimizations. Would you believe it if I told you that it can be improved to be many times faster than it is now?
Yes, we can accomplish that goal. Right now we need to explain how we avoid appending short strings to long strings.
The short-term behavior appears to be quite good, in comparison to our original function. Computer scientists like to analyze the "asymptotic behavior" of a function or computer program algorithm, which means to study its long-term behavior by testing it with larger inputs. Sometimes without doing further tests, one never becomes aware of ways that a computer program could be improved. To see what will happen, we're going to create a 200-byte string.
The problem that shows up with stringFill2()
Using our timing function, we find that the time increases to 62.54 microseconds for a 200-byte string, compared to 27.68 for a 100-byte string. It seems like the time should be doubled for doing twice as much work, but instead it's tripled or quadrupled. From programming experience, this result seems strange, because if anything, the function should be slightly faster since work is being done more efficiently (200 bytes per function call rather than 100 bytes per function call). This issue has to do with an insidious property of JavaScript strings: JavaScript strings are "immutable."
Immutable means that you cannot change a string once it's created. By adding on one byte at a time, we're not using up one more byte of effort. We're actually recreating the entire string plus one more byte.
In effect, to add one more byte to a 100-byte string, it takes 101 bytes worth of work. Let's briefly analyze the computational cost for creating a string of N bytes. The cost of adding the first byte is 1 unit of computational effort. The cost of adding the second byte isn't one unit but 2 units (copying the first byte to a new string object as well as adding the second byte). The third byte requires a cost of 3 units, etc.
C(N) = 1 + 2 + 3 + ... + N = N(N+1)/2 = O(N^2). The symbol O(N^2) is pronounced Big O of N squared, and it means that the computational cost in the long run is proportional to the square of the string length. To create 100 characters takes 10,000 units of work, and to create 200 characters takes 40,000 units of work.
This is why it took more than twice as long to create 200 characters than 100 characters. In fact, it should have taken four times as long. Our programming experience was correct in that the work is being done slightly more efficiently for longer strings, and hence it took only about three times as long. Once the overhead of the function call becomes negligible as to how long of a string we're creating, it will actually take four times as much time to create a string twice as long.
(Historical note: This analysis doesn't necessarily apply to strings in source code, such as html = 'abcd\n' + 'efgh\n' + ... + 'xyz.\n', since the JavaScript source code compiler can join the strings together before making them into a JavaScript string object. Just a few years ago, the KJS implementation of JavaScript would freeze or crash when loading long strings of source code joined by plus signs. Since the computational time was O(N^2) it wasn't difficult to make Web pages which overloaded the Konqueror Web browser or Safari, which used the KJS JavaScript engine core. I first came across this issue when I was developing a markup language and JavaScript markup language parser, and then I discovered what was causing the problem when I wrote my script for JavaScript Includes.)
Clearly this rapid degradation of performance is a huge problem. How can we deal with it, given that we cannot change JavaScript's way of handling strings as immutable objects? The solution is to use an algorithm which recreates the string as few times as possible.
To clarify, our goal is to avoid adding short strings to long strings, since in order to add the short string, the entire long string also must be duplicated.
How the algorithm works to avoid adding short strings to long strings
Here's a good way to reduce the number of times new string objects are created. Concatenate longer lengths of string together so that more than one byte at a time is added to the output.
For instance, to make a string of length N = 9:
x = 'x';
s = '';
s += x; /* Now s = 'x' */
x += x; /* Now x = 'xx' */
x += x; /* Now x = 'xxxx' */
x += x; /* Now x = 'xxxxxxxx' */
s += x; /* Now s = 'xxxxxxxxx' as desired */
Doing this required creating a string of length 1, creating a string of length 2, creating a string of length 4, creating a string of length 8, and finally, creating a string of length 9. How much cost have we saved?
Old cost C(9) = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 = 45.
New cost C(9) = 1 + 2 + 4 + 8 + 9 = 24.
Note that we had to add a string of length 1 to a string of length 0, then a string of length 1 to a string of length 1, then a string of length 2 to a string of length 2, then a string of length 4 to a string of length 4, then a string of length 8 to a string of length 1, in order to obtain a string of length 9. What we're doing can be summarized as avoiding adding short strings to long strings, or in other words, trying to concatenate strings together that are of equal or nearly equal length.
For the old computational cost we found a formula N(N+1)/2. Is there a formula for the new cost? Yes, but it's complicated. The important thing is that it is O(N), and so doubling the string length will approximately double the amount of work rather than quadrupling it.
The code that implements this new idea is nearly as complicated as the formula for the computational cost. When you read it, remember that >>= 1 means to shift right by 1 bit. So if n = 10011 is a binary number, then n >>= 1 results in the value n = 1001.
The other part of the code you might not recognize is the bitwise and operator, written &. The expression n & 1 evaluates true if the last binary digit of n is 1, and false if the last binary digit of n is 0.
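To connect these bit operations to the behavior, here is a worked trace (my own) of stringFill3('x', 9), where 9 is binary 1001:

// n=9 (1001): low bit set   -> s = 'x' (length 1);   n=4, x = 'xx'
// n=4 (100):  low bit clear                          n=2, x = 'xxxx'
// n=2 (10):   low bit clear                          n=1, x = 'xxxxxxxx'
// n=1 (1):    low bit set   -> s = 'x' + 'xxxxxxxx'; n=0 -> break
// s ends up with length 1 + 8 = 9, matching the set bits of n,
// with no wasted final doubling of x.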
New highly-efficient stringFill3() function
function stringFill3(x, n) {
    var s = '';
    for (;;) {
        if (n & 1) s += x;
        n >>= 1;
        if (n) x += x;
        else break;
    }
    return s;
}
It looks ugly to the untrained eye, but its performance is nothing less than lovely.
Let's see just how well this function performs. After seeing the results, it's likely that you'll never forget the difference between an O(N^2) algorithm and an O(N) algorithm.
stringFill1() takes 88.7 microseconds (millionths of a second) to create a 200-byte string, stringFill2() takes 62.54, and stringFill3() takes only 4.608. What made this algorithm so much better? All of the functions took advantage of using local function variables, but taking advantage of the second and third optimization techniques added a twenty-fold improvement to performance of stringFill3().
Deeper analysis
What makes this particular function blow the competition out of the water?
As I've mentioned, the reason that both of these functions, stringFill1() and stringFill2(), run so slowly is that JavaScript strings are immutable. Memory cannot be reallocated to allow one more byte at a time to be appended to the string data stored by JavaScript. Every time one more byte is added to the end of the string, the entire string is regenerated from beginning to end.
Thus, in order to improve the script's performance, one must precompute longer length strings by concatenating two strings together ahead of time, and then recursively building up the desired string length.
For instance, to create a 16-byte string, first a two-byte string would be precomputed. Then the two-byte string would be reused to precompute a four-byte string. Then the four-byte string would be reused to precompute an eight-byte string. Finally, two eight-byte strings would be reused to create the desired new string of 16 bytes. Altogether four new strings had to be created, one of length 2, one of length 4, one of length 8 and one of length 16. The total cost is 2 + 4 + 8 + 16 = 30.
In the long run this efficiency can be computed by adding in reverse order and using a geometric series starting with a first term a1 = N and having a common ratio of r = 1/2. The sum of a geometric series is given by a_1 / (1-r) = 2N.
This is more efficient than adding one character to create a new string of length 2, creating a new string of length 3, 4, 5, and so on, until 16. The previous algorithm used that process of adding a single byte at a time, and the total cost of it would be n (n + 1) / 2 = 16 (17) / 2 = 8 (17) = 136.
Obviously, 136 is a much greater number than 30, and so the previous algorithm takes much, much more time to build up a string.
To compare the two methods you can see how much faster the recursive algorithm (also called "divide and conquer") is on a string of length 123,457. On my FreeBSD computer this algorithm, implemented in the stringFill3() function, creates the string in 0.001058 seconds, while the original stringFill1() function creates the string in 0.0808 seconds. The new function is 76 times faster.
The difference in performance grows as the length of the string becomes larger. In the limit as larger and larger strings are created, the original function behaves roughly like C1 (constant) times N^2, and the new function behaves like C2 (constant) times N.
From our experiment we can determine the value of C1 to be C1 = 0.0808 / (123457)^2 = .00000000000530126997, and the value of C2 to be C2 = 0.001058 / 123457 = .00000000856978543136. In 10 seconds, the new function could create a string containing 1,166,890,359 characters. In order to create this same string, the old function would need 7,218,384 seconds of time.
This is almost three months compared to ten seconds!
I'm only answering (several years late) because my original solution to this problem has been floating around the Internet for more than 10 years, and apparently is still poorly-understood by the few who do remember it. I thought that by writing an article about it here I would help:
Performance Optimizations for High Speed JavaScript / Page 3
Unfortunately, some of the other solutions presented here are still some of those that would take three months to produce the same amount of output that a proper solution creates in 10 seconds.
I want to take the time to reproduce part of the article here as a canonical answer on Stack Overflow.
Note that the best-performing algorithm here is clearly based on my algorithm and was probably inherited from someone else's 3rd or 4th generation adaptation. Unfortunately, the modifications resulted in reducing its performance. The variation of my solution presented here perhaps did not understand my confusing for (;;) expression which looks like the main infinite loop of a server written in C, and which was simply designed to allow a carefully-positioned break statement for loop control, the most compact way to avoid exponentially replicating the string one extra unnecessary time.
Good news! String.prototype.repeat is now a part of JavaScript.
"yo".repeat(2);
// returns: "yoyo"
The method is supported by all major browsers, except Internet Explorer. For an up-to-date list, see MDN: String.prototype.repeat > Browser compatibility.
MDN has a polyfill for browsers without support.
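For browsers without support, a minimal fallback sketch looks like this (my own simplification; MDN's actual polyfill also validates count more carefully and guards against results exceeding the maximum string length):

if (!String.prototype.repeat) {
    String.prototype.repeat = function (count) {
        // Hypothetical simplified fallback, not MDN's full polyfill
        if (count < 0) throw new RangeError('repeat count must be non-negative');
        return new Array((count | 0) + 1).join(String(this));
    };
}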
This one is pretty efficient
String.prototype.repeat = function (times) {
    var result = "";
    var pattern = this;
    while (times > 0) {
        if (times & 1)
            result += pattern;
        times >>= 1;
        pattern += pattern;
    }
    return result;
};
String.prototype.repeat is now ES6 Standard.
'abc'.repeat(3); //abcabcabc
Expanding P.Bailey's solution:
String.prototype.repeat = function (num) {
    return new Array(isNaN(num) ? 1 : ++num).join(this);
};
This way you should be safe from unexpected argument types:
var foo = 'bar';
alert(foo.repeat(3)); // Will work, "barbarbar"
alert(foo.repeat('3')); // Same as above
alert(foo.repeat(true)); // Same as foo.repeat(1)
alert(foo.repeat(0)); // This and all the following return an empty
alert(foo.repeat(false)); // string while not causing an exception
alert(foo.repeat(null));
alert(foo.repeat(undefined));
alert(foo.repeat({})); // Object
alert(foo.repeat(function () {})); // Function
EDIT: Credits to jerone for his elegant ++num idea!
Use Array(N+1).join("string_to_repeat")
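The N+1 matters because join places the string between array slots, so N+1 empty slots yield N copies:

new Array(4).join("ab"); // "ababab" - three copies from four slots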
Here's a 5-7% improvement on disfated's answer.
Unroll the loop by stopping at count > 1 and performing an additional result += pattern concat after the loop. This avoids the loop's final, previously unused pattern += pattern without having to use an expensive if-check.
The final result would look like this:
String.prototype.repeat = function (count) {
    if (count < 1) return '';
    var result = '', pattern = this.valueOf();
    while (count > 1) {
        if (count & 1) result += pattern;
        count >>= 1, pattern += pattern;
    }
    result += pattern;
    return result;
};
And here's disfated's fiddle forked for the unrolled version: http://jsfiddle.net/wsdfg/
/**
 * #desc: repeat string
 * #param: n - times
 * #param: d - delimiter
 */
String.prototype.repeat = function (n, d) {
    return --n ? this + (d || '') + this.repeat(n, d) : '' + this;
};
This is how to repeat a string several times using a delimiter.
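For example (my own usage sketch of the function above):

"ab".repeat(3, "-"); // "ab-ab-ab"
"ab".repeat(3);      // "ababab" (the delimiter defaults to the empty string)

Note that defining this replaces the native String.prototype.repeat with a different (recursive) signature, so it will hit the call-stack limit for very large n.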
Tests of the various methods:
var repeatMethods = {
    control: function (n, s) {
        /* all of these lines are common to all methods */
        if (n == 0) return '';
        if (n == 1 || isNaN(n)) return s;
        return '';
    },
    divideAndConquer: function (n, s) {
        if (n == 0) return '';
        if (n == 1 || isNaN(n)) return s;
        with (Math) { return arguments.callee(floor(n / 2), s) + arguments.callee(ceil(n / 2), s); }
    },
    linearRecurse: function (n, s) {
        if (n == 0) return '';
        if (n == 1 || isNaN(n)) return s;
        return s + arguments.callee(--n, s);
    },
    newArray: function (n, s) {
        if (n == 0) return '';
        if (n == 1 || isNaN(n)) return s;
        return (new Array(isNaN(n) ? 1 : ++n)).join(s);
    },
    fillAndJoin: function (n, s) {
        if (n == 0) return '';
        if (n == 1 || isNaN(n)) return s;
        var ret = [];
        for (var i = 0; i < n; i++)
            ret.push(s);
        return ret.join('');
    },
    concat: function (n, s) {
        if (n == 0) return '';
        if (n == 1 || isNaN(n)) return s;
        var ret = '';
        for (var i = 0; i < n; i++)
            ret += s;
        return ret;
    },
    artistoex: function (n, s) {
        var result = '';
        while (n > 0) {
            if (n & 1) result += s;
            n >>= 1, s += s;
        }
        return result;
    }
};

function testNum(len, dev) {
    with (Math) { return round(len + 1 + dev * (random() - 0.5)); }
}

function testString(len, dev) {
    return (new Array(testNum(len, dev))).join(' ');
}

var testTime = 1000,
    tests = {
        biggie: { str: { len: 25, dev: 12 }, rep: { len: 200, dev: 50 } },
        smalls: { str: { len: 5, dev: 5 }, rep: { len: 5, dev: 5 } }
    };

var testCount = 0;
var winnar = null;
var inflight = 0;

for (var methodName in repeatMethods) {
    var method = repeatMethods[methodName];
    for (var testName in tests) {
        testCount++;
        var test = tests[testName];
        var testId = methodName + ':' + testName;
        var result = {
            id: testId,
            testParams: test
        };
        result.count = 0;
        (function (result) {
            inflight++;
            setTimeout(function () {
                result.start = +new Date();
                while ((new Date() - result.start) < testTime) {
                    method(testNum(test.rep.len, test.rep.dev), testString(test.str.len, test.str.dev));
                    result.count++;
                }
                result.end = +new Date();
                result.rate = 1000 * result.count / (result.end - result.start);
                console.log(result);
                if (winnar === null || winnar.rate < result.rate) winnar = result;
                inflight--;
                if (inflight == 0) {
                    console.log('The winner: ');
                    console.log(winnar);
                }
            }, (100 + testTime) * testCount);
        }(result));
    }
}
Here's the JSLint-safe version:
String.prototype.repeat = function (num) {
    var a = [];
    // (num << 0) truncates num to an integer; the parentheses matter,
    // because << binds more loosely than +, so num << 0 + 1 means num << 1
    a.length = (num << 0) + 1;
    return a.join(this);
};
For all browsers
This is about as concise as it gets:
function repeat(s, n) { return new Array(n+1).join(s); }
If you also care about performance, this is a much better approach :
function repeat(s, n) { var a=[],i=0;for(;i<n;)a[i++]=s;return a.join(''); }
If you want to compare the performance of both options, see this Fiddle and this Fiddle for benchmark tests. During my own tests, the second option was about 2 times faster in Firefox and about 4 times faster in Chrome!
For modern browsers only:
In modern browsers, you can now also do this :
function repeat(s,n) { return s.repeat(n) };
This option is not only shorter than both other options, but it's even faster than the second option.
Unfortunately, it doesn't work in any version of Internet Explorer; see MDN's browser-compatibility table for the first versions that fully support the method.
function repeat(pattern, count) {
    for (var result = '';;) {
        if (count & 1) {
            result += pattern;
        }
        if (count >>= 1) {
            pattern += pattern;
        } else {
            return result;
        }
    }
}
You can test it at JSFiddle. Benchmarked against the hacky Array.join and mine is, roughly speaking, 10 (Chrome) to 100 (Safari) to 200 (Firefox) times faster (depending on the browser).
Just another repeat function:
function repeat(s, n) {
    var str = '';
    for (var i = 0; i < n; i++) {
        str += s;
    }
    return str;
}
There are several ways to do this in modern JavaScript (ES-Next):
1. ES2015/ES6 standardized the repeat() method!
/**
 * str: String
 * count: Number
 */
const str = `hello repeat!\n`, count = 3;

let resultString = str.repeat(count);

console.log(`resultString = \n${resultString}`);
/*
resultString =
hello repeat!
hello repeat!
hello repeat!
*/
({ toString: () => 'abc', repeat: String.prototype.repeat }).repeat(2);
// 'abcabc' (repeat() is a generic method)
// Examples
'abc'.repeat(0); // ''
'abc'.repeat(1); // 'abc'
'abc'.repeat(2); // 'abcabc'
'abc'.repeat(3.5); // 'abcabcabc' (count will be converted to integer)
// 'abc'.repeat(1/0); // RangeError
// 'abc'.repeat(-1); // RangeError
2. ES2017/ES8 added String.prototype.padStart()
const str = 'abc ';
const times = 3;
const newStr = str.padStart(str.length * times, str.toUpperCase());
console.log(`newStr =`, newStr);
// "newStr =" "ABC ABC abc "
3. ES2017/ES8 added String.prototype.padEnd()
const str = 'abc ';
const times = 3;
const newStr = str.padEnd(str.length * times, str.toUpperCase());
console.log(`newStr =`, newStr);
// "newStr =" "abc ABC ABC "
refs
http://www.ecma-international.org/ecma-262/6.0/#sec-string.prototype.repeat
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/repeat
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/padStart
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/padEnd
function repeat(s, n) { var r = ""; for (var a = 0; a < n; a++) r += s; return r; }
This may be the smallest recursive one:
String.prototype.repeat = function (n, s) {
    s = s || "";
    if (n > 0) {
        s += this;
        s = this.repeat(--n, s);
    }
    return s;
};
Fiddle: http://jsfiddle.net/3Y9v2/
function repeat(s, n) {
    return ((new Array(n + 1)).join(s));
}

alert(repeat('R', 10));
Simple recursive concatenation
I just wanted to give it a bash, and made this:
function ditto(s, r, c) {
    return c-- ? ditto(s, r += s, c) : r;
}

ditto("foo", "", 128);
I can't say I gave it much thought, and it probably shows :-)
This is arguably better
String.prototype.ditto = function (c) {
    return --c ? this + this.ditto(c) : this;
};

"foo".ditto(128);
And it's a lot like an answer already posted - I know this.
But why be recursive at all?
And how about a little default behaviour too?
String.prototype.ditto = function () {
    var c = Number(arguments[0]) || 2,
        r = this.valueOf();
    while (--c) {
        r += this;
    }
    return r;
};

"foo".ditto();
Because, although the non-recursive method will handle arbitrarily large repeats without hitting call-stack limits, it's a lot slower.
Why did I bother adding more methods that aren't half as clever as those already posted?
Partly for my own amusement, and partly to point out in the simplest way I know that there are many ways to skin a cat, and depending on the situation, it's quite possible that the apparently best method isn't ideal.
A relatively fast and sophisticated method may effectively crash and burn under certain circumstances, whilst a slower, simpler method may get the job done - eventually.
Some methods may be little more than exploits, and as such prone to being fixed out of existence, and other methods may work beautifully in all conditions, but are so constructed that one simply has no idea how it works.
"So what if I dunno how it works?!"
Seriously?
JavaScript suffers from one of its greatest strengths; it's highly tolerant of bad behaviour, and so flexible it'll bend over backwards to return results, when it might have been better for everyone if it'd snapped!
"With great power, comes great responsibility" ;-)
But more seriously and importantly, although general questions like this do lead to awesomeness in the form of clever answers that if nothing else, expand one's knowledge and horizons, in the end, the task at hand - the practical script that uses the resulting method - may require a little less, or a little more clever than is suggested.
These "perfect" algorithms are fun and all, but "one size fits all" will rarely if ever be better than tailor made.
This sermon was brought to you courtesy of a lack of sleep and a passing interest.
Go forth and code!
Firstly, the OP's question seems to be about conciseness - which I understand to mean "simple and easy to read" - while most answers are about efficiency, which is obviously not the same thing. Unless you are implementing some very specific large-data-manipulation algorithms, efficiency shouldn't worry you when implementing basic JavaScript data-manipulation functions. Conciseness is much more important.
Secondly, as André Laszlo noted, String.repeat is part of ECMAScript 6 and already available in several popular implementations - so the most concise implementation of String.repeat is not to implement it ;-)
Lastly, if you need to support hosts that don't offer the ECMAScript 6 implementation, MDN's polyfill mentioned by André Laszlo is anything but concise.
So, without further ado - here is my concise polyfill:
String.prototype.repeat = String.prototype.repeat || function (n) {
    return n <= 1 ? this : this.concat(this.repeat(n - 1));
};
Yes, this is a recursion. I like recursions - they are simple and if done correctly are easy to understand. Regarding efficiency, if the language supports it they can be very efficient if written correctly.
From my tests, this method is ~60% faster than the Array.join approach. Although it obviously comes nowhere close to disfated's implementation, it is much simpler than both.
My test setup is node v0.10, using "Strict mode" (I think it enables some sort of TCO), calling repeat(1000) on a 10 character string a million times.
If you think all those prototype definitions, array creations, and join operations are overkill, just use a single line code where you need it. String S repeating N times:
for (var i = 0, result = ''; i < N; i++) result += S;
Use Lodash for Javascript utility functionality, like repeating strings.
Lodash provides nice performance and ECMAScript compatibility.
I highly recommend it for UI development and it works well server side, too.
Here's how to repeat the string "yo" 2 times using Lodash:
> _.repeat('yo', 2)
"yoyo"
Recursive solution using divide and conquer:
function repeat(n, s) {
    if (n == 0) return '';
    if (n == 1 || isNaN(n)) return s;
    with (Math) { return repeat(floor(n / 2), s) + repeat(ceil(n / 2), s); }
}
I came here randomly and never had a reason to repeat a char in javascript before.
I was impressed by artistoex's way of doing it and disfated's results. I noticed that the last string concat was unnecessary, as Dennis also pointed out.
I noticed a few more things when playing with the sampling disfated put together.
The results varied a fair amount, often favoring the last run, and similar algorithms would often jockey for position. One of the things I changed was, instead of using the JSLitmus-generated count as the seed for the calls (as count was generated differently for the various methods), I put in an index. This made the thing much more reliable. I then looked at ensuring that varying-sized strings were passed to the functions. This prevented some of the variations I saw, where some algorithms did better on single chars or smaller strings. However, the top 3 methods all did well regardless of the string size.
Forked test set
http://jsfiddle.net/schmide/fCqp3/134/
// repeated string
var string = '0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789';
// count parameter is changed on every test iteration, limit its maximum value here
var maxCount = 200;

var n = 0;
$.each(tests, function (name) {
    var fn = tests[name];
    JSLitmus.test(++n + '. ' + name, function (count) {
        var index = 0;
        while (count--) {
            fn.call(string.slice(0, index % string.length), index % maxCount);
            index++;
        }
    });
    if (fn.call('>', 10).length !== 10) $('body').prepend('<h1>Error in "' + name + '"</h1>');
});

JSLitmus.runAll();
I then included Dennis' fix and decided to see if I could find a way to eke out a bit more.
Since JavaScript can't really optimize this for you, the best way to improve performance is to manually avoid work. If I took the first 4 trivial results out of the loop, I could avoid 2-4 string stores and write the final store directly to the result.
// final: growing pattern + prototypejs check (count < 1)
'final avoid': function (count) {
    if (!count) return '';
    if (count == 1) return this.valueOf();
    var pattern = this.valueOf();
    if (count == 2) return pattern + pattern;
    if (count == 3) return pattern + pattern + pattern;
    var result;
    if (count & 1) result = pattern;
    else result = '';
    count >>= 1;
    do {
        pattern += pattern;
        if (count & 1) result += pattern;
        count >>= 1;
    } while (count > 1);
    return result + pattern + pattern;
}
This resulted in a 1-2% improvement on average over Dennis' fix. However, different runs and different browsers would show a fair enough variance that this extra code probably isn't worth the effort over the 2 previous algorithms.
(Chart omitted.)
Edit: I did this mostly under Chrome. Firefox and IE will often favor Dennis' version by a couple of percent.
Simple method:
String.prototype.repeat = function (num) {
    num = parseInt(num);
    // guard against NaN as well: new Array(NaN) would throw a RangeError
    if (isNaN(num) || num < 0) return '';
    return new Array(num + 1).join(this);
};
People overcomplicate this to a ridiculous extent or waste performance. Arrays? Recursion? You've got to be kidding me.
function repeat(string, times) {
    var result = '';
    while (times-- > 0) result += string;
    return result;
}
Edit. I ran some simple tests to compare with the bitwise version posted by artistoex / disfated and a bunch of other people. The latter was only marginally faster, but orders of magnitude more memory-efficient. For 1000000 repeats of the word 'blah', the Node process went up to 46 megabytes with the simple concatenation algorithm (above), but only 5.5 megabytes with the logarithmic algorithm. The latter is definitely the way to go. Reposting it for the sake of clarity:
function repeat(string, times) {
    var result = '';
    while (times > 0) {
        if (times & 1) result += string;
        times >>= 1;
        string += string;
    }
    return result;
}
Concatenating strings based on a number:
function concatStr(str, num) {
    var arr = [];

    // Construct an array
    for (var i = 0; i < num; i++)
        arr[i] = str;

    // Join all elements
    str = arr.join('');

    return str;
}

console.log(concatStr("abc", 3));
Hope that helps!
With ES8 you could also use padStart or padEnd for this, e.g.
var str = 'cat';
var num = 23;
var size = str.length * num;
"".padStart(size, str) // outputs: 'catcatcatcatcatcatcatcatcatcatcatcatcatcatcatcatcatcatcatcatcatcatcat'
To repeat a string in a specified number of times, we can use the built-in repeat() method in JavaScript.
Here is an example that repeats the following string for 4 times:
const name = "king";
const repeat = name.repeat(4);
console.log(repeat);
Output:
"kingkingkingking"
or we can create our own version of repeat() like this:
function repeat(str, n) {
    if (!str || !n) {
        return;
    }
    let final = "";
    while (n) {
        final += str; // the original snippet had `final += s`, a typo
        n--;
    }
    return final;
}

console.log(repeat("king", 3));
(originally posted at https://reactgo.com/javascript-repeat-string/)

What's the difference in returning 0 or 1 on a javascript sorting function? [duplicate]

When sorting an array of numbers in JavaScript, I accidentally used < instead of the usual -, but it still works. I wonder why?
Example:
var a = [1, 3, 2, 4];
a.sort(function (n1, n2) {
    return n1 < n2;
});
// result is correct: [4, 3, 2, 1]
And here is an example array for which this does not work (thanks to Nicolas for the example):
[1,2,1,2,1,2,1,2,1,2,1,2]
This sort works on your input array due to its small size and the current implementation of sort in Chrome V8 (and, likely, other browsers).
The return value of the comparator function is defined in the documentation:
If compareFunction(a, b) is less than 0, sort a to an index lower than b, i.e. a comes first.
If compareFunction(a, b) returns 0, leave a and b unchanged with respect to each other, but sorted with respect to all different elements.
If compareFunction(a, b) is greater than 0, sort b to an index lower than a, i.e. b comes first.
However, your function returns the booleans true or false, which evaluate to 1 or 0 respectively when compared to a number. This effectively lumps comparisons where n1 > n2 in with n1 === n2, treating both as ties. If n1 is 9 and n2 is 3, then 9 < 3 === false, or 0. In other words, your sort leaves 9 and 3 "unchanged with respect to each other" rather than insisting "sort 9 to an index lower than 3".
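You can see the coercion at work directly (my own illustration):

console.log(true > 0);  // true  - a 'true' return behaves like returning 1
console.log(false > 0); // false - a 'false' return behaves like returning 0
console.log(false < 0); // false - a boolean comparator can never signal -1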
If your array is shorter than 11 elements, Chrome V8's sort routine switches immediately to an insertion sort and performs no quicksort steps:
// Insertion sort is faster for short arrays.
if (to - from <= 10) {
    InsertionSort(a, from, to);
    return;
}
V8's insertion sort implementation only cares if the comparator function reports b as greater than a, taking the same else branch for both 0 and < 0 comparator returns:
var order = comparefn(tmp, element);
if (order > 0) {
    a[j + 1] = tmp;
} else {
    break;
}
Quicksort's implementation, however, relies on all three comparisons both in choosing a pivot and in partitioning:
var order = comparefn(element, pivot);
if (order < 0) {
    // ...
} else if (order > 0) {
    // ...
}
// move on to the next iteration of the partition loop
This guarantees an accurate sort on arrays such as [1,3,2,4], and dooms arrays with more than 10 elements to at least one almost certainly inaccurate step of quicksort.
Update 7/19/19: Since the version of V8 (6) discussed in this answer, V8's array-sort implementation moved to Torque/Timsort in 7.0, as discussed in this blog post, and insertion sort is now used on arrays of length 22 or less.
The article linked above describes the historical situation of V8 sorting as it existed at the time of the question:
Array.prototype.sort and TypedArray.prototype.sort relied on the same Quicksort implementation written in JavaScript. The sorting algorithm itself is rather straightforward: The basis is a Quicksort with an Insertion Sort fall-back for shorter arrays (length < 10). The Insertion Sort fall-back was also used when Quicksort recursion reached a sub-array length of 10. Insertion Sort is more efficient for smaller arrays. This is because Quicksort gets called recursively twice after partitioning. Each such recursive call had the overhead of creating (and discarding) a stack frame.
Regardless of any changes in the implementation details, if the sort comparator adheres to standard, the code will sort predictably, but if the comparator doesn't fulfill the contract, all bets are off.
After my initial comment, I wondered a little bit about how easy it is to find arrays for which this sorting method fails.
I ran an exhaustive search on arrays of length up to 8 (on an alphabet whose size equals the array length) and found nothing. Since my (admittedly shitty) algorithm started to become too slow, I changed to an alphabet of size 2 and found that binary arrays of length up to 10 are all sorted properly. However, for binary arrays of length 11, many are improperly sorted, for instance [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0].
// Check if 'array' is properly sorted using the "<" comparator function
function sortWorks(array) {
    let sorted_array = array.sort(function (n1, n2) {
        return n1 < n2;
    });
    for (let i = 0; i < sorted_array.length - 1; i++) {
        if (sorted_array[i] < sorted_array[i + 1]) return false;
    }
    return true;
}

// Get a list of all arrays of length 'count' on an alphabet of size 'max_n'.
// I know it's awful, don't yell at me :'(
var arraysGenerator;
arraysGenerator = function (max_n, count) {
    if (count === 0) return [[]];
    let arrays = arraysGenerator(max_n, count - 1);
    let result = [];
    for (let array of arrays) {
        for (let i = 0; i < max_n; i++) {
            result.push([i].concat(array));
        }
    }
    return result;
};

// Check if there are binary arrays of size k which are improperly sorted,
// and if so, log them
function testArraysOfSize(k) {
    let arrays = arraysGenerator(2, k);
    for (let array of arrays) {
        if (!sortWorks(array)) {
            console.log(array);
        }
    }
}
I'm getting some weird false-positives for some reason though, not sure where my mistake is.
EDIT:
After checking for a little while, here's a partial explanation of why the OP's "wrong" sorting method works for lengths <= 10 but fails for lengths >= 11: it looks like (at least some) JavaScript implementations use insertion sort when the array is short (length <= 10) and quicksort otherwise. Quicksort actively uses the "-1" outputs of the compare function, while insertion sort does not, relying only on the "1" outputs.
Source: here, all thanks to the original author.
If we analyze what's being done, this seems to be mostly luck: in this case, 3 and 2 are considered to be "the same" and therefore interchangeable. I suppose in such cases the JS engines keep the original order for any values that have been deemed equal:
let a = [1, 3, 2, 4];
a.sort((n1, n2) => {
    const result = n1 < n2;
    if (result < 0) {
        console.log(`${n1} comes before ${n2}`);
    } else if (result > 0) {
        console.log(`${n2} comes before ${n1}`);
    } else {
        console.log(`${n1} comes same position as ${n2}`);
    }
    return result;
});
console.log(a);
As pointed out in the comments, this isn't guaranteed to work ([1,2,1,2,1,2,1,2,1,2,1,2] being a counter-example).
Depending on the exact sort() implementation, it is likely that it never checks for -1. That is easier and faster, and it makes no difference (as the sort is not guaranteed to be stable anyway, IIRC).
If the check sort() makes internally is compareFunction(a, b) > 0, then effectively true is interpreted as a > b, and false is interpreted as a <= b. And then your result is exactly what one would expect.
Of course the key point is that for the > comparison, true gets converted to 1 and false to 0.
Note: this is all speculation and guesswork, I haven't confirmed it experimentally or in browser source code - but it's reasonably likely to be correct.
The sort function expects a comparator which returns a number (negative, zero, or positive).
Assuming you're running on top of the V8 engine (Node.js, Chrome, etc.), you can find in the array implementation that the returned value is compared to 0 (yourReturnValue > 0). In that comparison the return value is coerced to a number, so:
Truthy values are converted to 1
Falsy values are converted to 0
So, based on the documentation and the above, your function will return an array sorted in descending order in your specific case, but it might break in other cases since it never produces a -1 value.

Can I pre-allocate memory for a Map/Set with a known number of elements?

In the case of a JS array, it's possible to create one with a predefined length. If we pass the length to the constructor, like new Array(itemCount), a JS engine can pre-allocate memory for the array, so it won't need to reallocate as new items are added.
Is it possible to pre-allocate memory for a Map or Set? Neither accepts a length in the constructor the way an array does. If you know that a map will contain 10 000 items, it should be much more efficient to allocate the memory once than to reallocate it multiple times.
Here is the same question about arrays.
There is discussion in the comments about whether a predefined array size has an impact on performance or not. I've created a simple test to check it; the results are:
Chrome - filling an array with a predefined size is about 3 times faster
Firefox - not much difference, but the array with a predefined length is a little faster
Edge - filling an array of predefined size is about 1.5 times faster
There is a suggestion to create the entries array and pass it to the Map constructor, so the map can take the length from the array. I've created such a test too; the results are:
Chrome - constructing a map from an entries array is about 1.5 times slower
Firefox - there is no difference
Edge - passing an entries array to the Map constructor is about 1.5 times faster
There is no way to influence the way the engine allocates memory, so whatever you do, you cannot guarantee that the trick you are using works on every engine or even at another version of the same engine.
There are, however, typed arrays in JS; they are the only data structures with a fixed size. With them, you can implement your own hash map of fixed size.
A very naive implementation is slightly faster² on inserts; I'm not sure how it behaves on reads and other operations (also, it can only store 8-bit integers > 0):
function FixedMap(size) {
    const keys = new Uint8Array(size),
          values = new Uint8Array(size);

    return {
        get(key) {
            let pos = key % size;
            while (keys[pos] !== key) {
                if (keys[pos] % size !== key % size)
                    return undefined;
                pos = (pos + 1) % size;
            }
            return values[pos];
        },
        set(key, value) {
            // Linear probing. The original snippet had two typos here:
            // it probed `key[pos]` instead of `keys[pos]`, and stored
            // `value` into keys instead of `key`.
            let pos = key % size;
            while (keys[pos] && keys[pos] !== key) pos = (pos + 1) % size;
            keys[pos] = key;
            values[pos] = value;
        }
    };
}
console.time("native");
const map = new Map();
for(let i = 0; i < 10000; i++)
map.set(i, Math.floor(Math.random() * 1000));
console.timeEnd("native");
console.time("self");
const fixedMap = FixedMap(10000);
for(let i = 0; i < 10000; i++)
fixedMap.set(i, Math.floor(Math.random() * 1000));
console.timeEnd("self");
² Marketing would say "It is up to 20% faster!", and I would say "It is up to 2 ms faster, and I spent more than 10 minutes on this..."
No, this is not possible. It's doubtful even that the Array(length) trick still works; engines have become much better at optimising array assignments and might choose a different strategy to (pre)determine how much memory to allocate.
For Maps and Sets, no such trick exists. Your best bet will be creating them by passing an iterable to their constructor, like new Map(array), instead of using the set/add methods. This does at least offer the chance for the engine to take the iterable size as a hint - although filtering out duplicates can still lead to differences.
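For example, a sketch of that approach (assuming you can build the entries up front):

// Build all entries first, then hand them to the constructor in one shot,
// instead of calling set() 10 000 times on an empty Map.
const entries = [];
for (let i = 0; i < 10000; i++) entries.push([i, i * 2]);
const map = new Map(entries);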
Disclaimer: I don't know whether any engines implement, or plan to implement, such an optimisation. It might not be worth the effort, especially if the current memory allocation strategy is already good enough to diminish any improvements.

Why are negative array indices much slower than positive array indices?

Here's a simple JavaScript performance test:
const iterations = new Array(10 ** 7);

var x = 0;
var i = iterations.length + 1;
console.time('negative');
while (--i) {
    x += iterations[-i];
}
console.timeEnd('negative');

var y = 0;
var j = iterations.length;
console.time('positive');
while (j--) {
    y += iterations[j];
}
console.timeEnd('positive');
The first loop counts from 10,000,000 down to 1 and accesses an array with a length of 10 million using a negative index on each iteration. So it goes through the array from beginning to end.
The second loop counts from 9,999,999 down to 0 and accesses the same array using a positive index on each iteration. So it goes through the array in reverse.
On my PC, the first loop takes longer than 6 seconds to complete, but the second one only takes ~400ms.
Why is the second loop faster than the first?
Because iterations[-i] evaluates to undefined, which is slow to look up: the engine has to walk the whole prototype chain and can't take a fast path. Doing math with the result is also slow, since x += undefined yields NaN, and NaN arithmetic is the uncommon case.
Also, initializing iterations with numbers would make the whole test more useful.
Pro tip: if you are comparing the performance of two pieces of code, they should both perform the same operation in the end...
Some general words about performance tests:
Performance is the compiler's job these days; code optimized by the compiler will always be faster than code you try to optimize through "tricks". Therefore, you should write code that the compiler is likely to optimize, and that is, in every case, the code that everyone else writes (your coworkers will also love you for it). Optimizing that is the most beneficial from the engine's point of view. Therefore I'd write:
let acc = 0;
for(const value of array) acc += value;
// or
const acc = array.reduce((a, b) => a + b, 0);
However, in the end it's just a loop; you won't waste much time if one loop performs badly, but you will if the whole algorithm performs badly (time complexity of O(n²) or more). Focus on the important things, not the loops.
To elaborate on Jonas Wilms' answer, JavaScript does not support negative indices (unlike languages like Python).
iterations[-1] is equivalent to iterations["-1"], which looks up the property named "-1" on the array object. That's why iterations[-1] evaluates to undefined.
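A quick illustration (my own) that a negative index is just an ordinary property:

const arr = [10, 20, 30];
console.log(arr[-1]);    // undefined - not the last element, as it would be in Python
arr[-1] = 'surprise';
console.log(arr['-1']);  // 'surprise' - it became a plain property on the object
console.log(arr.length); // 3 - the array's length is unaffected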

Array contains anything other than 0

I have an array of numbers with 64 indexes (it's canvas image data).
I want to know if my array contains only zero's or anything other than zero.
We can return a boolean upon the first encounter of any number greater than zero (even if the very last index is non-zero and all the others are zero, we should return true).
What is the most efficient way to determine this?
Of course, we could loop over our array (focus on the testImageData function):
// Setup
var imgData = {
    data: new Array(64)
};
imgData.data.fill(0);

// Set last pixel to black
imgData.data[imgData.data.length - 1] = 255;

// The part in question...
function testImageData(img_data) {
    var retval = false;
    for (var i = 0; i < img_data.data.length; i++) {
        if (img_data.data[i] > 0) {
            retval = true;
            break;
        }
    }
    return retval;
}

var result = testImageData(imgData);
...but this could take a while if my array were bigger.
Is there a more efficient way to test if any index in the array is greater than zero?
I am open to answers using lodash, though I am not using lodash in this project. I would rather the answer be native JavaScript, either ES5 or ES6. I'm going to ignore any jQuery answers, just saying...
Update
I set up a test of various ways to check for a non-zero value in an array, and the results were interesting.
Here is the JSPerf link.
Note that the Array.some test was much slower than the indexed for loop and even for-in. The fastest, of course, was the indexed loop: for (let i = 0; i < arr.length; i++)....
You should note that I also tested a Regex solution, just to see how it compared. If you run the tests, you will find that the Regex solution is much, much slower (not surprising), but still very interesting.
I would like to see if there is a solution that could be accomplished using bitwise operators. If you feel up to it, I would like to see your approach.
Your for loop is the fastest way on Chrome 64 with Windows 10.
I've tested it against two other options; here is the link to the test so you can run them in your environment.
My results are:
// 10776 operations per second (the best)
for (let i = 0; i < arr.length; i++) {
    if (arr[i] !== 0) {
        break;
    }
}

// 4131 operations per second
for (const n of arr) {
    if (n !== 0) {
        break;
    }
}

// 821 operations per second (the worst)
arr.some(x => x)
There is no faster way than looping through every element in the array. Logically, in the worst-case scenario the last pixel in your array is black, so you have to check all of them; the best algorithm can therefore only have an O(n) runtime. The best thing you can do is write a loop that breaks early upon finding a non-zero pixel.
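For completeness, the idiomatic early-exit form is Array.prototype.some(), which short-circuits on the first match, although the benchmarks above show a plain indexed loop is still faster:

const hasNonZero = imgData.data.some(value => value !== 0); // true here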
