Recursive function inside of a loop - javascript

I've been studying recursive functions and I'm starting to understand them more or less. I was working on a free code camp challenge when I came across this and I do not understand it. A recursive function inside of a for loop:
function steamroller(arr) {
  var newArr = [];
  for (var i = 0; i < arr.length; i++) {
    // If the (i)th element is an array
    if (Array.isArray(arr[i])) {
      newArr = newArr.concat(steamroller(arr[i]));
      console.log(newArr);
    } else {
      newArr.push(arr[i]);
    }
  }
  return newArr;
}
steamroller([1, [2],[3, [[4]]]]);
//returns [1, 2, 3, 4]
The line I'm having a hard time understanding is:
newArr = newArr.concat(steamroller(arr[i]));
On that line, newArr is concatenated to what? The function is called again inside of the .concat method, right? But what happens to that for loop? Does the function call inside of the concat method force the loop to exit?
Here is a JSFiddle; I have each newArr logged to the console, but I can't even follow it. The array is built up like so:
[1, 2]
[4]
[3, 4]
[1, 2, 3, 4] //Final
Thanks.

The steamroller function loops over the elements of the array it is given so that every element is seen.
However, some of those elements may themselves be arrays containing further elements, all of which need to be looped over in turn.
The call to concat is applied only to the current element of the loop, meaning that what gets concatenated is the "steamrollered" representation of that element.
Step by step
The original array is passed in to the function: [1, [2],[3, [[4]]]]
The loop begins with the first element, 1, which is not an array, so it is pushed to the resulting array.
The next iteration's element is [2], which is an array, so it is recursed into.
The first recursive call to the function receives [2] and iterates over it.
The first iteration of this recursive call finds the element to be 2, which is not an array, so it is pushed to that call's resulting array.
... continue ...
What we see is that as the nested arrays are iterated over using the recursive function, we always end up obtaining the internal integer values, no matter how nested.
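If it helps, here is a rough sketch of the whole call tree for the sample input. The indentation of the comments mirrors the recursion depth, and each of the four console.log calls is marked:
steamroller([1, [2], [3, [[4]]]])
//  1           -> push                       -> newArr = [1]
//  [2]         -> steamroller([2])
//      2       -> push, the call returns [2]
//              -> concat                     -> newArr = [1, 2]        (logged)
//  [3, [[4]]]  -> steamroller([3, [[4]]])
//      3       -> push                       -> newArr = [3]
//      [[4]]   -> steamroller([[4]])
//          [4] -> steamroller([4]) returns [4]
//              -> concat                     -> newArr = [4]           (logged)
//              -> concat                     -> newArr = [3, 4]        (logged)
//              -> concat                     -> newArr = [1, 2, 3, 4]  (logged)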

It sounds like your mental muddling is mostly to do with the way "the stack" works.
A basic function call pauses execution at the point that it was called, and steps through each of the lines inside the function, then resumes outside of the function. And, you can have functions run inside of other functions; that much you may have already known.
The important thing to understand here is everything that the program is keeping track of. For starters, the program keeps track of the position each function was called from so it can "pop up one" in the stack and continue. E.g., at the midpoint of execution the stack may look like this (knowing which function/line it's at):
steamroller:10
steamroller:8
steamroller:8
main:20
Then its current loop hits "return"...
steamroller:8
steamroller:8
main:20
But that's not all - it also preserves each call's own instance of the variables declared in that run of the function. You can, in fact, have 5 or 6 or 5 million newArrs, because those variables live on the stack, not in one singular spot.
And none of that information - the line number or the local variables - is destroyed when it enters a function; it just saves its spot in the current function and steps through the inner one.
Make sense?
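Here is a tiny sketch (not the steamroller code, just an illustration) showing that every call on the stack keeps its own local variable while the deeper calls run:
function count(depth) {
  var n = depth * 10;      // this n belongs only to this call's stack frame
  if (depth < 3) {
    count(depth + 1);      // pause here, run the inner call...
  }
  console.log(depth, n);   // ...then resume with this call's own n intact
}
count(1);
// logs: 3 30, then 2 20, then 1 10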

First enter:
steamroller([1, [2],[3, [[4]]]])
First element isn't an Array, so newArr receives '1'.
Second element is an Array, so it calls steamroller again:
steamroller([1, [2],[3, [[4]]]]) -> steamroller([2])
Inside that call, 2 is not an array, so it is pushed into that call's own (empty) newArr, and the call returns [2]. The outer call then concatenates that [2] onto its newArr, which already contains [1], so now you have [1, 2], and the console pops out the first line you saw in your fiddle.
I think you can follow what happens after that, but comment if you still need an explanation of what happens with 3 and 4.

With recursive functions, the console output can look like a mess, because unlike a normal loop the function has to finish the deepest position of the recursive tree before the values higher up are printed. If you make one of the positions deeper, like [[[[3]]]], it will descend all the way down before the outer values are returned, so if you add console logging to see how your recursive function is working you will get something like a tree rather than the straight sequence a normal loop gives you. That's the danger with recursive functions, and they can also be heavy on memory and CPU on a production server, so if you can perform this operation without any recursion it will be better.
So if you change your code to
steamroller([1, [2],[3, [4]]]);
//the console output will be
[1, 2]
[3, 4]
[1, 2, 3, 4]
Speaking about levels in the recursive tree
So if you put [3] and [4] at the same depth, the logs for 3 and 4 come from the same level of the tree.
steamroller([1, [2],[[3], [4]]]);
//the console output will be
[1, 2]
[3]
[3, 4]
[1, 2, 3, 4]
I hope that gives you a better idea of how recursive functions work.
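If you do want to avoid recursion entirely, here is a sketch of the same flattening using an explicit stack instead of the call stack (just one possible approach, not the original code):
function steamrollerIterative(arr) {
  var result = [];
  var stack = arr.slice().reverse();   // work through a copy, front first
  while (stack.length) {
    var item = stack.pop();
    if (Array.isArray(item)) {
      // put the nested elements back, reversed, so they come out in order
      for (var i = item.length - 1; i >= 0; i--) {
        stack.push(item[i]);
      }
    } else {
      result.push(item);
    }
  }
  return result;
}
steamrollerIterative([1, [2], [3, [[4]]]]); // [1, 2, 3, 4]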

If we inspect the top-level run, treating each recursive call as the value it returns:
i = 0 => arr[i] = 1 => newArr.push(arr[i]) => newArr = [1]
i = 1 => arr[i] = [2] => steamroller(arr[i]) = [2] => newArr = [1, 2]
i = 2 => arr[i] = [3, [[4]]] => steamroller(arr[i]) = [3, 4] => newArr = [1, 2, 3, 4]
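As an aside, in a modern engine the built-in Array.prototype.flat does the same flattening, which is handy for sanity-checking the recursive version:
[1, [2], [3, [[4]]]].flat(Infinity); // [1, 2, 3, 4]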

Related

infinite inner arrays with push() method in javascript chrome console

Recently I was experimenting with the array push() method in JavaScript.
I created an array and pushed the same array into it using the push method.
var a=['hello', 4]
a.push(a)
The result was surprising: when I explored the array items in the Chrome console, an infinite nesting of arrays was produced. I want to know why this is happening, since the push() method just adds the element at the end.
I was expecting this result:
['hello', 4, ['hello', 4]]
but the result was something else; here is a screenshot of the Chrome console:
[Screenshot: infinitely nested inner arrays shown in the Chrome console]
When you assign an object to a variable in Javascript (and indeed in most similar languages), that variable holds what is called a "reference" to that object. Note that arrays in JS are objects, but primitive values like strings and numbers are not.
One consequence of object assignments being "by reference" is that any change you make to that object - even if it's done through another variable that happens to reference the same object - will "show up" when you inspect the original variable.
So here, you start off with
var a=['hello', 4]
And then do
a.push(a)
then the object (array) to which a refers has been changed: it now has an additional element at the end - which is a, the very same array we're talking about.
So it expands recursively:
a = ['hello', 4, a]
= ['hello', 4, ['hello', 4, a]]
= ['hello', 4, ['hello', 4, ['hello', 4, a]]]
...
And so on, infinitely. This doesn't require an infinite amount of memory, because the third element of the array is simply a reference to the memory location which holds the array.
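A quick way to confirm this in the console: the third element is literally the same object, not a copy, so every level of the "infinite" nesting is just the console following that one reference.
var a = ['hello', 4];
a.push(a);
console.log(a[2] === a);        // true
console.log(a[2][2][2] === a);  // true -- the same object all the way down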

Overwrite the this value in array.prototype

I've found that the native JS sort function screws up from time to time, so I wanted to implement my own. Let's say I have the following:
Array.prototype.customSort = function(sortFunction, updated) {...}
var array = [5, 2, 3, 6]
array.customSort(function(a,b) {return a - b})
console.log(array)
Array should be [2, 3, 5, 6]
Updated is the array that has been sorted.
No matter what I return in customSort, the order of array is still in the original order. How do I overwrite the 'this' value / get it to point to the array with the correct order?
If you consider the actual code you gave above, you have to make sure that your customSort function updates this.
One case is that customSort only uses this as "read-only" input, that is - only puts the sorted array in updated, rather than changing this.
In that case, considering the code above (which you might have run your tests with), no updated argument is passed to the function to receive the sorted values.
Another case is that customSort returns the sorted array, in which case you have to collect it:
array = array.customSort(function(a,b) {return a - b});
console.log(array);
I just ended up iterating over the updated array and replacing each value in this with the value in updated. In code, that looks like...
function customSort(cb) {
  // ...updated is the sorted array that has been built
  var that = this;
  _.each(updated, function (ele, index) {
    that[index] = ele;
  });
}
I wanted the function to operate in the exact same way the native array.sort function does - it overwrites the array provided instead of returning a new, sorted array.
I find it odd that this works... you cannot overwrite the entire this value in one clean sweep, but you can in steps. I couldn't do this in the customSort function:
this = updated;
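For what it's worth, here is a sketch of another in-place option: rather than assigning to `this` (which isn't possible, since `this` is a keyword, not a variable), replace the array's contents with splice. The compare function and the use of the native sort on a copy here are only placeholders for however you actually build the sorted result:
Array.prototype.customSort = function (compareFn) {
  var sorted = this.slice().sort(compareFn); // stand-in for your own sorting
  // splice the sorted values back in, keeping the same array object
  this.splice.apply(this, [0, this.length].concat(sorted));
  return this;
};

var array = [5, 2, 3, 6];
array.customSort(function (a, b) { return a - b; });
console.log(array); // [2, 3, 5, 6]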

Chaining Methods error in Javascript

I would like to know the reason why this simple piece of code fails:
var arr = [1, 2, 3];
arr.push(arr[0]).shift();
console.log(arr);
It returns "TypeError: arr.push(...).shift is not a function" in the Firebug console.
I think it happens because I invoke the shift() method not on an array but on the pushed element.
Is there a more elegant way to obtain the same result that,
var arr = [1, 2, 3];
arr.push(arr[0]);
arr.shift();
console.log(arr);
produces?
Thanks in advance!
From the MDN:
The push() method adds one or more elements to the end of an array and
returns the new length of the array.
arr.push(arr[0]) doesn't return the array but a number which, obviously, has no shift function.
To my knowledge, there's no simple expression pushing an element to an array and returning that array. But in your case you may simply reverse the operations and do
arr.push(arr.shift());
I think it happens because I invoke the shift() method not on an array but on the pushed element.
Almost. push returns the new length of the array. A number obviously doesn't have a shift() method.
Your method of putting it on two lines is the simplest way.
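A quick demonstration of what push actually returns:
var arr = [1, 2, 3];
var result = arr.push(arr[0]);
console.log(result); // 4 -- the new length, not the array
console.log(arr);    // [1, 2, 3, 1]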
Essentially this question is saying, can I somehow "elegantly" express the notion of moving the first item of an array to the end. Luckily, JS is a Turing-complete language, which allows us to define functions, so the "elegant" answer is just
rotate(arr)
Now it merely remains to define rotate. To rotate is to drop the first element in the result of adding the head element to the end:
function rotate(arr) { return drop(add(arr, head(arr))); }
Now drop is
function drop(arr) { return arr.shift(), arr; }
and head of course is
function head(arr) { return arr[0]; }
and add is
function add(arr, elt) { return arr.push(elt), arr; }
Another approach
I could also write a function to move n elements from position i to position j, using splice, as follows:
function move(arr, n, i, j) {
  arr.splice.apply(arr, [j-n+1, 0].concat(arr.splice(i, n)));
  return arr;
}
Then to rotate is to move one element at the beginning to the end:
function rotate(arr) { return move(arr, 1, 0, 999); }
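Usage of either rotate looks like this (999 is just "some index past the end"; splice clamps it to the array length):
var arr = [1, 2, 3];
console.log(rotate(arr)); // [2, 3, 1]
console.log(arr);         // [2, 3, 1] -- rotated in place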

Updating an array with a conditional for loop - Javascript

I have two arrays with different types of complex objects. If there is a certain match of values between objects of each array then I need to take a few values from one of the matching objects and save it. If a few key value pairs from within a matching object get saved then it needs to be removed from its array so that it does not get saved more than once.
The following is my best effort (so far) to show an example of what I'm trying to do.
I want to update my empty array based on the code that follows. This is a simple example to illustrate a much more complex problem, and I think it's probably a better way to go than cutting and pasting dozens of lines of code. The issue has to do with collecting values in emptyArray and filtering array2 during each pass of the outer loop. Suggestions leading with underscore are not helpful.
var array = [1, 2, 3, 4, 1, 2, 3, 4];
var array2 = [2, 4, 6, 8, 10];
var emptyArray = [];
for (i = 0; i < array.length; i++) {
  var something = array[i];
  var array2 = _.without(array2, emptyArray);
  for (a = 0; a < array2.length; a++) {
    var value = array2[a];
    if (something === value) {
      emptyArray.push(value);
      break;
    }
  }
}
I want to update the values in array2 based on the if statement so that those values are not repeated in the nested loop. Instead, my emptyArray stays empty rather than collecting the values from array2 that are equal to elements of array.
To be clear, right now emptyArray remains empty and array2 is never filtered.
I'd like to see emptyArray collect the value 2 at the start of the outer loop's second iteration, and then collect the value 4 at the start of its 4th iteration.
I'd want each of these values filtered out of array2 as they become part of emptyArray, so that they do not trigger the if statement during the 6th and 8th iterations of the outer loop. I imagine that emptyArray = [2, 4] and array2 = [6, 8, 10] when the loops are finished.
The _.without function doesn't take an array as its second argument; it takes the individual items to be removed, e.g. _.without(ar1, 1, 2, 3).
If you need to pass an array, use _.difference(ar1, ar2).
Since you are already using underscore, this can be much simpler.
emptyArray = _.intersection(array, array2)
array2 = _.difference(array2, emptyArray)
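If you'd rather avoid underscore entirely, here is a plain-JS sketch of the same idea, assuming strict equality is the matching condition:
var array = [1, 2, 3, 4, 1, 2, 3, 4];
var array2 = [2, 4, 6, 8, 10];
var emptyArray = [];

for (var i = 0; i < array.length; i++) {
  var index = array2.indexOf(array[i]);
  if (index !== -1) {
    emptyArray.push(array[i]);  // collect the matched value
    array2.splice(index, 1);    // remove it so it can't match again
  }
}
// emptyArray -> [2, 4], array2 -> [6, 8, 10]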

How to extend an existing JavaScript array with another array, without creating a new array

There doesn't seem to be a way to extend an existing JavaScript array with another array, i.e. to emulate Python's extend method.
I want to achieve the following:
>>> a = [1, 2]
[1, 2]
>>> b = [3, 4, 5]
[3, 4, 5]
>>> SOMETHING HERE
>>> a
[1, 2, 3, 4, 5]
I know there's an a.concat(b) method, but it creates a new array instead of simply extending the first one. I'd like an algorithm that works efficiently when a is significantly larger than b (i.e. one that does not copy a).
Note: This is not a duplicate of How to append something to an array? -- the goal here is to add the whole contents of one array to the other, and to do it "in place", i.e. without copying all elements of the extended array.
The .push method can take multiple arguments. You can use the spread operator to pass all the elements of the second array as arguments to .push:
>>> a.push(...b)
If your browser does not support ECMAScript 6, you can use .apply instead:
>>> a.push.apply(a, b)
Or perhaps, if you think it's clearer:
>>> Array.prototype.push.apply(a,b)
Please note that all these solutions will fail with a stack overflow error if array b is too long (trouble starts at about 100,000 elements, depending on the browser).
If you cannot guarantee that b is short enough, you should use a standard loop-based technique described in the other answer.
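That loop-based fallback, in its simplest form, is just:
for (var i = 0; i < b.length; i++) {
  a.push(b[i]);
}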
Update 2018: A better answer is a newer one of mine: a.push(...b). Don't upvote this one anymore, as it never really answered the question, but it was a 2015 hack around first-hit-on-Google :)
For those that simply searched for "JavaScript array extend" and got here, you can very well use Array.concat.
var a = [1, 2, 3];
a = a.concat([5, 4, 3]);
Concat returns a copy, a new array, which is what the thread starter didn't want. But you might not care (certainly for most kinds of uses this will be fine).
There's also some nice ECMAScript 6 sugar for this in the form of the spread operator:
const a = [1, 2, 3];
const b = [...a, 5, 4, 3];
(It also copies.)
You should use a loop-based technique. Other answers on this page that are based on using .apply can fail for large arrays.
A fairly terse loop-based implementation is:
Array.prototype.extend = function (other_array) {
  /* You should include a test to check whether other_array really is an array */
  other_array.forEach(function(v) { this.push(v); }, this);
};
You can then do the following:
var a = [1,2,3];
var b = [5,4,3];
a.extend(b);
DzinX's answer (using push.apply) and other .apply based methods fail when the array that we are appending is large (tests show that for me large is > 150,000 entries approx in Chrome, and > 500,000 entries in Firefox). You can see this error occurring in this jsperf.
An error occurs because the call stack size is exceeded when 'Function.prototype.apply' is called with a large array as the second argument. (MDN has a note on the dangers of exceeding call stack size using Function.prototype.apply - see the section titled "apply and built-in functions".)
For a speed comparison with other answers on this page, check out this jsperf (thanks to EaterOfCode). The loop-based implementation is similar in speed to using Array.push.apply, but tends to be a little slower than Array.slice.apply.
Interestingly, if the array you are appending is sparse, the forEach based method above can take advantage of the sparsity and outperform the .apply based methods; check out this jsperf if you want to test this for yourself.
By the way, do not be tempted (as I was!) to further shorten the forEach implementation to:
Array.prototype.extend = function (array) {
  array.forEach(this.push, this);
};
because this produces garbage results! Why? Because Array.prototype.forEach provides three arguments to the function it calls - these are: (element_value, element_index, source_array). All of these will be pushed onto your first array for every iteration of forEach if you use "forEach(this.push, this)"!
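You can see the garbage for yourself with a tiny test; forEach hands push three arguments per element:
var a = [];
[10, 20].forEach(a.push, a);
console.log(a); // [10, 0, [10, 20], 20, 1, [10, 20]]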
I feel the most elegant these days is:
arr1.push(...arr2);
The MDN article on the spread operator mentions this nice sugary way in ES2015 (ES6):
A better push
Example: push is often used to push an array to the end of an existing
array. In ES5 this is often done as:
var arr1 = [0, 1, 2];
var arr2 = [3, 4, 5];
// Append all items from arr2 onto arr1
Array.prototype.push.apply(arr1, arr2);
In ES6 with spread this becomes:
var arr1 = [0, 1, 2];
var arr2 = [3, 4, 5];
arr1.push(...arr2);
Do note that arr2 can't be huge (keep it under about 100 000 items), because the call stack overflows, as per jcdude's answer.
Overview
a.push(...b) - limited, fast, modern syntax
a.push.apply(a, b) - limited, fast
a = a.concat(b) - unlimited, slow if a is large
for (let i in b) { a.push(b[i]); } - unlimited, slow if b is large
Each snippet modifies a to be extended with b.
The "limited" snippets pass each array element as an argument, and the maximum number of arguments you can pass to a function is limited. From that link, it seems that a.push(...b) is reliable until there are about 32k elements in b (the size of a does not matter).
Relevant MDN documentation: spread syntax, .apply(), .concat(), .push()
Speed considerations
Every method is fast if both a and b are small, so in most web applications you'll want to use push(...b) and be done with it.
If you're handling more than a few thousand elements, what you want to do depends on the situation:
you're adding a few elements to a large array
→ push(...b) is very fast
you're adding many elements to a large array
→ concat is slightly faster than a loop
you're adding many elements to a small array
→ concat is much faster than a loop
you're usually adding only a few elements to any size array
→ loops are about as fast as the limited methods for small additions, but will never throw an exception even if it is not the most performant when you add many elements
you're writing a wrapper function to always get the maximum performance
→ you'll need to check the lengths of the inputs dynamically and choose the right method, perhaps calling push(...b_part) (with slices of the big b) in a loop.
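A sketch of that wrapper idea: push b in slices so the argument-count limit is never hit, while keeping the speed of push for the common case. The chunk size here is an arbitrary "safe" value, not a measured limit.
function extendInPlace(a, b) {
  const CHUNK = 10000; // arbitrary size comfortably below the argument limit
  for (let i = 0; i < b.length; i += CHUNK) {
    a.push(...b.slice(i, i + CHUNK));
  }
  return a;
}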
This surprised me: I thought a=a.concat(b) would be able to do a nice memcpy of b onto a without bothering to do individual extend operations as a.push(...b) would have to do, thus always being the fastest. Instead, a.push(...b) is much, much faster especially when a is large.
The speed of different methods was measured in Firefox 88 on Linux using:
a = [];
for (let i = 0; i < Asize; i++){
a.push(i);
}
b = [];
for (let i = 0; i < Bsize; i++){
b.push({something: i});
}
t=performance.now();
// Code to test
console.log(performance.now() - t);
Parameters and results:
ms | Asize | Bsize | code
----+-------+-------+------------------------------
~0 | any | any | a.push(...b)
~0 | any | any | a.push.apply(a, b)
480 | 10M | 50 | a = a.concat(b)
0 | 10M | 50 | for (let i in b) a.push(b[i])
506 | 10M | 500k | a = a.concat(b)
882 | 10M | 500k | for (let i in b) a.push(b[i])
11 | 10 | 500k | a = a.concat(b)
851 | 10 | 500k | for (let i in b) a.push(b[i])
Note that a Bsize of 500 000 is the largest value accepted by all methods on my system, that's why it is smaller than Asize.
All tests were run multiple times to see if the results are outliers or representative. The fast methods are almost immeasurable in just one run using performance.now(), of course, but since the slow methods are so obvious and the two fast methods both work on the same principle, we needn't bother repeating it a bunch of times to split hairs.
The concat method is always slow if either array is large, but the loop is only slow if it has to do a lot of function calls and doesn't care how large a is. A loop is thus similar to push(...b) or push.apply for small bs but without breaking if it does get large; however, when you approach the limit, concat is a bit faster again.
First a few words about apply() in JavaScript to help understand why we use it:
The apply() method calls a function with a given this value, and
arguments provided as an array.
Push expects a list of items to add to the array. The apply() method, however, takes the expected arguments for the function call as an array. This allows us to easily push the elements of one array into another array with the builtin push() method.
Imagine you have these arrays:
var a = [1, 2, 3, 4];
var b = [5, 6, 7];
and simply do this:
Array.prototype.push.apply(a, b);
The result will be:
a = [1, 2, 3, 4, 5, 6, 7];
The same thing can be done in ES6 using the spread operator ("...") like this:
a.push(...b); //a = [1, 2, 3, 4, 5, 6, 7];
Shorter and better but not fully supported in all browsers at the moment.
Also if you want to move everything from array b to a, emptying b in the process, you can do this:
while (b.length) {
  a.push(b.shift());
}
and the result will be as follows:
a = [1, 2, 3, 4, 5, 6, 7];
b = [];
If you want to use jQuery, there is $.merge()
Example:
a = [1, 2];
b = [3, 4, 5];
$.merge(a,b);
Result: a = [1, 2, 3, 4, 5]
I like the a.push.apply(a, b) method described above, and if you want you can always create a library function like this:
Array.prototype.append = function(array) {
  this.push.apply(this, array);
};
and use it like this
a = [1,2]
b = [3,4]
a.append(b)
It is possible to do it using splice():
b.unshift(b.length)
b.unshift(a.length)
Array.prototype.splice.apply(a,b)
b.shift() // Restore b
b.shift() //
But despite being uglier it is not faster than push.apply, at least not in Firefox 3.0.
As the top-voted answer says, a.push(...b) is probably the correct answer, taking into account the size limit issue.
On the other hand, some of the answers on performance seem out of date.
The numbers below are for 2022-05-20, from here.
It appears that push is fastest across the board in 2022. That may change in the future.
Answers ignoring the question (generating a new array) are missing the point. Lots of code might need or want to modify an array in place, given that there can be other references to the same array:
let a = [1, 2, 3];
let b = [4, 5, 6];
let c = a;
a = a.concat(b); // a and c are no longer referencing the same array
Those other references could be deep in some object, something that was captured in a closure, etc...
As a probably bad design but as an illustration, imagine you had
const carts = [
  { userId: 123, cart: [item1, item2] },
  { userId: 456, cart: [item1, item2, item3] },
];
and a function
function getCartForUser(userId) {
  const entry = carts.find(c => c.userId === userId);
  return entry && entry.cart;
}
Then you want to add items to the cart
const cart = getCartForUser(userId);
if (cart) {
  cart.concat(newItems); // FAIL 😢
  cart.push(...newItems); // Success! 🤩
}
As an aside, the answers suggesting modifying Array.prototype are arguably bad advice. Changing the native prototypes is basically a landmine in your code. Another implementation may be different from yours, so it will break your code or you'll break their code by expecting the other behavior. This includes if/when a native implementation gets added that clashes with yours. You might say "I know what I'm using so no issue", and that might be true at the moment while you're a single dev, but add a second dev and they can't read your mind. And you are that second dev in a few years, when you've forgotten and then graft some other library (analytics? logging? ...) onto your page and forget the landmine you left in the code.
This is not just theory. There are countless stories on the net of people running into these landmines.
Arguably there are just a few safe uses for modifying a native object's prototype. One is to polyfill an existing and specified implementation in an old browser. In that case, the spec is defined and is implemented and shipping in new browsers; you just want the same behavior in old browsers. That's pretty safe. Pre-patching (spec in progress but not shipping) is arguably not safe, as specs change before shipping.
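If you want the extend-in-place behavior without touching Array.prototype at all, a plain helper function is one safe sketch:
function appendAll(target, source) {
  for (const value of source) {
    target.push(value);
  }
  return target;
}

const a = [1, 2, 3];
appendAll(a, [4, 5, 6]);
console.log(a); // [1, 2, 3, 4, 5, 6]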
This solution works for me (using the spread operator of ECMAScript 6):
let array = ['my', 'solution', 'works'];
let newArray = [];
let newArray2 = [];
newArray.push(...array); // Adding to same array
newArray2.push([...array]); // Adding as child/leaf/sub-array
console.log(newArray);
console.log(newArray2);
I'm adding this answer because, despite the question clearly stating "without creating a new array", pretty much every answer ignores it.
Modern JavaScript works well with arrays and the like as iterable objects. This makes it possible to implement a version of concat that builds upon that and spans the array data across its parameters logically.
The example below makes use of the iter-ops library, which features such logic:
import {pipe, concat} from 'iter-ops';
const i = pipe(
  originalArray,
  concat(array2, array3, array4, ...)
); //=> Iterable

for (const a of i) {
  console.log(a); // iterate over values from all arrays
}
Above, no new array is created. Operator concat will iterate through the original array, then will automatically continue into array2, then array3, and so on, in the specified order.
This is the most efficient way of joining arrays in terms of memory usage.
And if, at the end, you decide to convert it into an actual physical array, you can do so via the spread operator or Array.from:
const fullArray1 = [...i]; // pulls all values from iterable, into a new array
const fullArray2 = Array.from(i); // does the same
Combining the answers...
Array.prototype.extend = function(array) {
  if (array.length < 150000) {
    this.push.apply(this, array);
  } else {
    for (var i = 0, len = array.length; i < len; ++i) {
      this.push(array[i]);
    }
  }
};
You can create a polyfill for extend as I have below. It will add to the array; in-place and return itself, so that you can chain other methods.
if (Array.prototype.extend === undefined) {
  Array.prototype.extend = function(other) {
    this.push.apply(this, arguments.length > 1 ? arguments : other);
    return this;
  };
}

function print() {
  document.body.innerHTML += [].map.call(arguments, function(item) {
    return typeof item === 'object' ? JSON.stringify(item) : item;
  }).join(' ') + '\n';
}
document.body.innerHTML = '';
var a = [1, 2, 3];
var b = [4, 5, 6];
print('Concat');
print('(1)', a.concat(b));
print('(2)', a.concat(b));
print('(3)', a.concat(4, 5, 6));
print('\nExtend');
print('(1)', a.extend(b));
print('(2)', a.extend(b));
print('(3)', a.extend(4, 5, 6));
body {
  font-family: monospace;
  white-space: pre;
}
Another solution to merge more than two arrays
var a = [1, 2],
b = [3, 4, 5],
c = [6, 7];
// Merge the contents of multiple arrays together into the first array
var mergeArrays = function() {
  var i, len = arguments.length;
  if (len > 1) {
    for (i = 1; i < len; i++) {
      arguments[0].push.apply(arguments[0], arguments[i]);
    }
  }
};
Then call and print as:
mergeArrays(a, b, c);
console.log(a)
Output will be: Array [1, 2, 3, 4, 5, 6, 7]
The answer is super simple.
>>> a = [1, 2]
[1, 2]
>>> b = [3, 4, 5]
[3, 4, 5]
>>> SOMETHING HERE
(The following code will combine the two arrays.)
a = a.concat(b);
>>> a
[1, 2, 3, 4, 5]
Concat acts very similarly to JavaScript string concatenation. It returns a new array consisting of the array you call it on followed by the parameters you pass to concat. The crux is that you have to assign the returned value to a variable or it gets lost. So for example
a.concat(b); <--- This accomplishes nothing, since it just returns the combined array but the result is never used.
Another option, if you have lodash installed:
import { merge } from 'lodash';
var arr1 = merge(arr1, arr2);
(Note that lodash's merge combines arrays index by index, overwriting existing elements rather than appending them, so the result differs from concat when both arrays have elements at the same indexes.)
Use Array.extend instead of Array.push for > 150,000 records.
if (!Array.prototype.extend) {
  Array.prototype.extend = function(arr) {
    if (!Array.isArray(arr)) {
      return this;
    }
    for (let record of arr) {
      this.push(record);
    }
    return this;
  };
}
You can do that by simply adding new elements to the array with the help of the push() method.
let colors = ["Red", "Blue", "Orange"];
console.log('Array before push: ' + colors);
// append new value to the array
colors.push("Green");
console.log('Array after push : ' + colors);
Another method, this time for appending elements to the beginning of an array, is the unshift() function. It accepts multiple arguments, shifts the indexes of the existing elements up, and returns the new length of the array:
let colors = ["Red", "Blue", "Orange"];
console.log('Array before unshift: ' + colors);
// append new value to the array
colors.unshift("Black", "Green");
console.log('Array after unshift : ' + colors);
There are other methods too. You can check them out here.
Super simple; it does not rely on spread operators or apply, if that's an issue.
b.map(x => a.push(x));
After running some performance tests on this, it's terribly slow, but it answers the question with regard to not creating a new array. Concat is significantly faster; even jQuery's $.merge() whoops it.
https://jsperf.com/merge-arrays19b/1
