The array.slice() method can extract a slice from the beginning, middle, or end of an array for whatever purposes you require, without changing the original array.
const chunkSize = 10;
for (let i = 0; i < array.length; i += chunkSize) {
  const chunk = array.slice(i, i + chunkSize);
  // do whatever
}
The last chunk may be smaller than chunkSize. For example, given an array of 12 elements, the first chunk will have 10 elements and the second only 2.
Note that a chunkSize of 0 will cause an infinite loop.
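Wrapped in a function, the same loop gives a reusable helper. This is a sketch; the name chunk and the RangeError guard are illustrative additions, not part of the original snippet:

```javascript
// Split an array into chunks of at most chunkSize elements.
// slice() returns shallow copies, so the source array is not modified.
function chunk(array, chunkSize) {
  if (chunkSize <= 0) throw new RangeError('chunkSize must be positive'); // avoid the infinite loop
  const result = [];
  for (let i = 0; i < array.length; i += chunkSize) {
    result.push(array.slice(i, i + chunkSize));
  }
  return result;
}

console.log(chunk([1, 2, 3, 4, 5], 2)); // [ [ 1, 2 ], [ 3, 4 ], [ 5 ] ]
```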
Here's an ES6 version using reduce:
const perChunk = 2 // items per chunk
const inputArray = ['a','b','c','d','e']
const result = inputArray.reduce((resultArray, item, index) => {
  const chunkIndex = Math.floor(index / perChunk)

  if (!resultArray[chunkIndex]) {
    resultArray[chunkIndex] = [] // start a new chunk
  }

  resultArray[chunkIndex].push(item)

  return resultArray
}, [])

console.log(result); // result: [['a','b'], ['c','d'], ['e']]
And you're ready to chain further map/reduce transformations. Your input array is left intact.
If you prefer a shorter but less readable version, you can sprinkle some concat into the mix for the same end result:
inputArray.reduce((all, one, i) => {
  const ch = Math.floor(i / perChunk);
  all[ch] = [].concat(all[ch] || [], one);
  return all
}, [])
You can use the remainder operator to put consecutive items into different chunks:
const ch = (i % perChunk);
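For example, with the same perChunk and inputArray as above, the remainder-based index distributes items round-robin rather than in contiguous runs (a quick sketch of the difference):

```javascript
const perChunk = 2;
const inputArray = ['a', 'b', 'c', 'd', 'e'];

const interleaved = inputArray.reduce((all, one, i) => {
  const ch = i % perChunk; // round-robin index instead of Math.floor(i / perChunk)
  all[ch] = [].concat(all[ch] || [], one);
  return all;
}, []);

console.log(interleaved); // [ [ 'a', 'c', 'e' ], [ 'b', 'd' ] ]
```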
First: since you already know how many chunks you want, there's no point in checking whether a chunk exists before declaring it. In general terms, you shouldn't need to check for objects you can define deterministically.
So instead of doing this check on each loop:
if (typeof result[i] == 'undefined') {
  result[i] = [];
}
create an array of N empty arrays beforehand.
const result = Array.from(Array(chunks), item => []);
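The mapping function matters here: the seemingly equivalent Array(chunks).fill([]) would put the same array object in every slot, so pushing to one chunk would appear in all of them. A minimal demonstration:

```javascript
const chunks = 3;

// Pitfall: fill([]) reuses one array object for every slot.
const shared = Array(chunks).fill([]);
shared[0].push('x');
console.log(shared); // [ [ 'x' ], [ 'x' ], [ 'x' ] ]

// Array.from runs the mapping function once per slot,
// so each slot gets its own distinct array.
const distinct = Array.from(Array(chunks), () => []);
distinct[0].push('x');
console.log(distinct); // [ [ 'x' ], [], [] ]
```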
Second: although the performance difference is negligible, checking i's value and conditionally reassigning it is less efficient than simply applying the modulo operator to it.
So instead of
results[i].push(...)
i++
i = (i == chunks) ? 0 : i;
You can do
results[i % chunks].push(...)
i++
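Both indexing strategies produce the same wrap-around sequence; a quick check of the generated indices:

```javascript
const chunks = 3;
const conditional = [];
const modulo = [];

let i = 0;
for (let n = 0; n < 7; n++) {
  conditional.push(i);
  i++;
  i = (i === chunks) ? 0 : i; // reset by comparison
}

for (let j = 0; j < 7; j++) {
  modulo.push(j % chunks);    // wrap by remainder
}

console.log(conditional); // [ 0, 1, 2, 0, 1, 2, 0 ]
console.log(modulo);      // [ 0, 1, 2, 0, 1, 2, 0 ]
```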
With the above, your function could be expressed as
function usingShift(myArray, chunks = 5) {
  const copiedArray = myArray.slice(),
        result = Array.from(Array(chunks), item => []);
  let i = 0;
  while (copiedArray.length) {
    result[i % chunks].push(copiedArray.shift());
    i++;
  }
  return result;
}
Third: as you've been told, shifting from an array is expensive. I understand you're doing it because you want to populate the chunks in the same order as the original array. However, you can achieve the same by popping from a reversed array:
If you declare
const a = myArray.slice().reverse();
You can replace the usage of shift with
result[i].push(a.pop());
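To see why popping from the reversed copy preserves the original order, a minimal sketch:

```javascript
const myArray = [1, 2, 3];
const a = myArray.slice().reverse(); // copy is [3, 2, 1]; myArray is untouched

const out = [];
while (a.length) {
  out.push(a.pop()); // pop() takes from the end: 1, then 2, then 3
}

console.log(out);     // [ 1, 2, 3 ]
console.log(myArray); // [ 1, 2, 3 ]
```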
The function would be something like:
function usingPop(myArray, chunks = 5) {
  const reversedArr = myArray.slice().reverse(),
        result = Array.from(Array(chunks), item => []);
  let i = 0;
  while (reversedArr.length) {
    result[i % chunks].push(reversedArr.pop());
    i++;
  }
  return result;
}
However... you'd still be copying the array and performing a mutation on the copy. @Miklós Mátyás's solution has the advantage of populating the result without copying or extracting items from the source array. Now, you haven't said the source array will always be the same (9 elements from 1 to 9). It could just as well have repeated/unsorted items, so his solution should take into account not the item itself but its index, which can be expressed as:
function filterByModulo(myArray, chunks = 5) {
  return Array.from(Array(chunks), (_, modulo) => {
    return myArray.filter((item, index) => index % chunks === modulo);
  });
}
That's pretty clean, but it filters the original array as many times as there are chunks, so its performance degrades with both the source array length AND the chunk quantity.
Personally I believe this is a case in which reduce would be more appropriate and pretty concise, while avoiding copying or mutating the source array.
function usingReduce(myArray, chunks = 5) {
  const result = Array.from(Array(chunks), i => []);
  return myArray.reduce((accum, item, index) => {
    accum[index % chunks].push(item);
    return accum;
  }, result);
}
Finally, there's the classic for loop:
function classicFor(sourceArr, chunks = 5) {
  const lengthOfArray = sourceArr.length;
  const result = Array.from(Array(chunks), i => []);
  for (let index = 0; index < lengthOfArray; index++) {
    result[index % chunks].push(sourceArr[index]);
  }
  return result;
}
I made a test case at JSPerf which shows that the for loop is the most efficient. (I threw in forEach and for..of implementations too.)
Running with a source array of 5000 items and 5 chunks shows that using pop on the source is more efficient than using shift by a factor of 2.89x. It even looks more efficient than reduce. The classic for loop is the fastest, whereas filtering N times (the modulo-filter approach) comes up last, about 9x slower than the for loop.
If you use a source of 100000 items and 15 chunks, the classic for loop is still the most efficient (still about 9x faster than modulo filtering), and the other implementations scale a bit better than modulo filtering does.
If you want to make the code "as efficient as possible", avoid additional function calls - that includes iterator methods like Array.map(), Array.forEach(), etc. While it would allow the code to be written in a more readable way, the ES6 for...of loop would also affect efficiency, because it calls an iterator function for each element in the array.
I suggest using a basic for loop after creating an array of arrays (for each chunk). As others have mentioned, modulo division can be used to determine which sub-array to put each element into.
function into_subarrays(myArray, chunks = 2) {
  const result = Array(chunks);
  for (let i = chunks; i--; ) {
    result[i] = []; // initialize sub-arrays
  }
  for (let i = 0, length = myArray.length; i < length; i++) {
    result[i % chunks].push(myArray[i]);
  }
  return result;
}
const a = [1,2,3,4,5,6,7,8,9];
console.log(into_subarrays(a, 2));
Notice that the first loop, which sets up the sub-arrays of result, iterates from chunks - 1 down to zero - this minimizes the number of operations in the for loop conditions. This wouldn't work for the second loop, because there the order of i values matters.
I see the function has a default parameter value for chunks - this is an ES6 feature. The let and const keywords, also introduced in ES6, can be used to declare variables scoped to the blocks that contain them.
In the existing code there are these lines towards the end of the while block:
result[i].push(a.shift());
i++;
i = (i == chunks) ? 0 : i; //Wrap around chunk selector
Whenever I see a pre/post increment operator on a single line I look to see if it could be combined with another operation - e.g.
result[i++].push(a.shift());
or
i = (++i == chunks) ? 0 : i; //Wrap around chunk
Also, prefer using === over == to avoid unnecessary type coercion.
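A few quick cases showing what that coercion does (standard JavaScript behavior, nothing specific to the chunking code):

```javascript
// == coerces operand types before comparing; === compares type and value.
console.log(5 == '5');   // true  - the string '5' is coerced to the number 5
console.log(5 === '5');  // false - different types, no coercion
console.log(0 == '');    // true  - '' coerces to 0
console.log(0 === '');   // false
```

In the loop above, i and chunks are both always numbers, so == happens to work, but === states the intent explicitly and avoids surprises if the types ever change.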