The minimum even value in the array - javascript

So, I have a program that asks for the minimum even value in an array. I have written the code, but I don't fully understand one of the loops. I will paste the correct code below, and I hope someone can explain why the while loop is there.
<HTML>
<HEAD>
<SCRIPT LANGUAGE = "JavaScript">
var number = new Array(10)
for (var i = 0; i < number.length; i = i + 1)
{
    number[i] = window.prompt('enter number ', '')
    number[i] = parseFloat(number[i])
}
var y = 0
while (number[y] % 2 != 0) //get the first even number in the array
{
    y = y + 1
}
//after you exit the while loop y will have the index of the first even number
var Min
Min = number[y]
for (var i = 0; i < number.length; i = i + 1)
{
    if (number[i] % 2 == 0)
    {
        if (number[i] < Min)
        {
            Min = number[i]
        }
    }
}
document.write(Min)
</SCRIPT>
</HEAD>
</HTML>
So, this part
var y = 0
while (number[y] % 2 != 0) //get the first even number in the array
{
    y = y + 1
}
//after you exit the while loop y will have the index of the first even number
I'm finding it hard to really grasp this loop. And if I may ask: is there another way to find the minimum even value in an array?
Many thanks!

The while loop finds the index of the first even value so that Min starts out as an even number; after that, the for loop only ever replaces it with a smaller even value. Without it, Min might be initialized to an odd value smaller than every even value, and the result would be wrong. Here's a far simpler and faster way to do the same thing:
var min = Infinity; // Start with the biggest number possible
for (var i = myArray.length; i--; ) {
    var val = myArray[i];
    if (val < min && val % 2 == 0) min = val;
}
This is faster because, unlike the original code, it doesn't iterate over the leading non-even values twice. It would be roughly equivalent in speed if the for loop in the original started at index y, i.e. for (var i=y+1; i<number.length; ++i)
It's also very slightly faster because the for loop caches the length of the array instead of looking it up each time, and because it only looks up the value in the array once each loop, not three times. Modern JavaScript runtimes like V8 can optimize naive code to behave similarly, however, so this is not a very important point.
Edit: For fun, here's a modern, functional programming approach:
var min = Math.min.apply(Math,myArray.filter(function(n){ return n%2==0 }));
The above uses Array.filter to create a new array of just the even-valued items, and then uses Function.prototype.apply to pass the array of values as parameters to Math.min.
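For example, running that one-liner over a small hypothetical array (the sample values are made up purely for illustration) would look like this:
// Hypothetical sample data, just to illustrate the one-liner above
var myArray = [7, 3, 10, 4, 9, 8];
// Keep only the even values, then pass them as arguments to Math.min
var min = Math.min.apply(Math, myArray.filter(function (n) { return n % 2 == 0; }));
console.log(min); // 4
One edge case to be aware of: if the array contains no even values at all, the filtered array is empty and Math.min.apply returns Infinity, so you may want to check for that.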

If you're interested how to do that in modern Javascript, it goes like this:
minEvenElement = Math.min.apply(Math, myArray.filter(function(e) { return !(e % 2) }))
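On engines with ES2015 support you can write the same thing even more compactly with an arrow function and spread syntax (a minimal sketch, assuming ES2015 is available):
// ES2015 variant: filter to even values, then spread them into Math.min
const minEvenElement = Math.min(...myArray.filter(e => e % 2 === 0));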

Related

How to partition array of integers to even and odd?

I want to partition an array (e.g. [1,2,3,4,5,6,7,8]); the first partition should keep the even values and the second the odd values (example result: [2,4,6,8,1,3,5,7]).
I managed to solve this problem twice with built-in Array.prototype methods. The first solution uses map and sort, the second only sort.
I would like to make a third solution which uses a sorting algorithm, but I don't know which algorithms are used to partition lists. I'm thinking about bubble sort, but I think it is already used in my second solution (array.sort((el1, el2)=>(el1 % 2 - el2 % 2)))... I looked at quicksort, but I don't know where to apply the check for whether an integer is even or odd...
What is the best algorithm (scaling linearly as the array grows) to perform such a task in-place while keeping the order of the elements?
You can do this in-place in O(n) time pretty easily. Start the even index at the front, and the odd index at the back. Then, go through the array, skipping over the first block of even numbers.
When you hit an odd number, move backwards from the end to find the first even number. Then swap the even and odd numbers.
The code looks something like this:
var i;
var odd = arr.length - 1; // index that scans backwards looking for even numbers
for (i = 0; i < odd; i++)
{
    if (arr[i] % 2 == 1)
    {
        // move the odd index backwards until you find the first even number.
        while (odd > i && arr[odd] % 2 == 1)
        {
            odd--;
        }
        if (odd > i)
        {
            var temp = arr[i];
            arr[i] = arr[odd];
            arr[odd] = temp;
        }
    }
}
Pardon any syntax errors. Javascript isn't my strong suit.
Note that this won't keep the same relative order. That is, if you gave it the array [1,2,7,3,6,8], then the result would be [8,2,6,3,7,1]. The array is partitioned, but the odd numbers aren't in the same relative order as in the original array.
If you insist on an in-place approach instead of the trivial standard return [arr.filter(predicate), arr.filter(notPredicate)] approach, it can be achieved easily and efficiently using two indices running from both ends of the array and swapping where necessary:
function partitionInplace(arr, predicate) {
    var i = 0, j = arr.length;
    while (i < j) {
        while (predicate(arr[i]) && ++i < j);
        if (i == j) break;
        while (i < --j && !predicate(arr[j]));
        if (i == j) break;
        [arr[i], arr[j]] = [arr[j], arr[i]];
        i++;
    }
    return i; // the index of the first element not to fulfil the predicate
}
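A quick usage sketch (the sample array and the isEven helper below are hypothetical, just to show the call):
// Hypothetical usage of partitionInplace with an "is even" predicate
var arr = [1, 2, 3, 4, 5, 6, 7, 8];
var isEven = function (n) { return n % 2 === 0; };
var firstOddIndex = partitionInplace(arr, isEven);
console.log(arr);           // evens now occupy the front, odds the back (relative order not preserved)
console.log(firstOddIndex); // 4, the index where the odd block starts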

let evens = arr.filter(i => i % 2 == 0);
let odds = arr.filter(i => i % 2 == 1);
let result = evens.concat(odds);
I believe that's O(n). Have fun.
EDIT:
Or if you really care about efficiency:
let evens = [], odds = [];
arr.forEach(i => {
    if (i % 2 == 0) evens.push(i); else odds.push(i);
});
let result = evens.concat(odds);

Array.prototype.getEvenOdd = function (arr) {
    var result = { even: [], odd: [] };
    if (arr.length) {
        for (var i = 0; i < arr.length; i++) {
            if (arr[i] % 2 === 0)
                result.even.push(arr[i]);
            else
                result.odd.push(arr[i]);
        }
    }
    return result;
};

What is the time complexity of this for loop nested in a while loop?

I am trying to optimize a function. I believe this nested for loop is quadratic, but I'm not positive. I have recreated the function below
const bucket = [["e","f"],[],["j"],[],["p","q"]]
let totalLettersIWantBack = 4;
//I'm starting at the end of the bucket
function produceLetterArray(bucket, limit){
let result = [];
let countOfLettersAccumulated = 0;
let i = bucket.length - 1;
while(i > 0){
if(bucket[i].length > 0){
bucket[i].forEach( (letter) =>{
if(countOfLettersAccumulated === totalLettersIWantBack){
return;
}
result.push(letter);
countOfLettersAccumulated++;
})
}
i--;
}
return result;
}
console.log(produceLetterArray(bucket, totalLettersIWantBack));
Here is a trick for such questions. For the code whose complexity you want to analyze, just write down the number of operations each statement would take in the worst case, assuming no other statement existed. Note the comments beginning with #operations worst case:
For the given code:
while(i > 0){ //#operations worst case: bucket.length
    if(bucket[i].length > 0){ //#operations worst case: 1
        bucket[i].forEach( (letter) =>{ //#operations worst case: max(len(bucket[i])) for all i
            if(countOfLettersAccumulated === totalLettersIWantBack){ //#operations worst case: 1
                return;
            }
            result.push(letter); //#operations worst case: 1
            countOfLettersAccumulated++; //#operations worst case: 1
        })
    }
    i--; //#operations worst case: 1
}
We can now multiply all the worst case times (since they all can be achieved in the worst case, you can always set totalLettersIWantBack = 10^9) to get the O complexity of the snippet:
Complexity = O(bucket.length * 1 * max(len(bucket[i])) * 1 * 1 * 1 * 1)
= O(bucket.length * max(len(bucket[i])))
If the length of each of the bucket[i] was a constant, K, then your complexity reduces to:
O(K * bucket.length ) = O(bucket.length)
Note that the complexity of the push operation may not remain constant as the number of elements grow (ultimately, the runtime will need to allocate space for the added elements, and all the existing elements may have to be moved).
Whether or not this is quadratic depends on what you consider N and how bucket is organized. If N is the total number of letters, then the runtime is bound either by the number of bins in your bucket (if that is larger than N) or by the number of letters in the bucket (if N is larger). In either case, the search time grows linearly with whichever bound dominates, so the time complexity is O(N). This is effectively a linear search with "turns" in it; scrunching a linear search up or spacing it out does not change its time complexity. The existence of multiple loops in a piece of code does not, by itself, make it non-linear. Take the linear search example again: we search a list until we've found the largest element.
//12 elements
var array = [0,1,2,3,4,5,6,7,8,9,10,11];
var rows = 3;
var cols = 4;
var largest = -1;
for(var i = 0; i < rows; ++i){
    for(var j = 0; j < cols; ++j){
        var checked = array[(i * cols) + j];
        if (checked > largest){
            largest = checked;
        }
    }
}
console.log("found largest number (eleven): " + largest.toString());
Despite using two loops instead of one, the runtime complexity is still O(N), where N is the number of elements in the input. Scrunching this down so that each index actually holds an array of multiple elements, or spacing the relevant elements out with empty bins, doesn't change the fact that the runtime complexity is bound linearly.
This is technically linear, with n being the total number of elements in your matrix. That is because the exit condition is the length of bucket, and for each array in bucket you check whether countOfLettersAccumulated equals totalLettersIWantBack, continually looking at values.
It gets a lot more complicated if you are looking for an answer in terms of the dimensions of your matrix, because the dimensions of bucket do not appear to be fixed.
You can make the early exit effectively constant by adding an additional check outside the bucket forEach: if countOfLettersAccumulated equals totalLettersIWantBack, break out of the outer loop (a small sketch follows below).
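A minimal sketch of that early exit, keeping the original variable names but using a plain for...of loop instead of forEach so that a return actually stops the work (the function name produceLetterArrayEarlyExit is made up for this example):
// Sketch: stop iterating as soon as the requested number of letters is collected
function produceLetterArrayEarlyExit(bucket, limit) {
    let result = [];
    for (let i = bucket.length - 1; i >= 0; i--) {
        for (const letter of bucket[i]) {
            if (result.length === limit) return result; // early exit: quota reached
            result.push(letter);
        }
    }
    return result;
}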
I like #axiom's explanation of the complexity analysis.
I would just like to add a possible optimized solution.
UPD: .push (O(1)) is faster than .concat (which copies both arrays, so building the result with repeated concat tends toward O(n^2)).
Also, here is a test: Array push vs. concat.
const bucket = [["e","f"],[],["j", 'm', 'b'],[],["p","q"]]
let totalLettersIWantBack = 4;
//I'm starting at the end of the bucket
function produceLetterArray(bucket, limit){
let result = [];
for(let i = bucket.length-1; i > 0 && result.length < totalLettersIWantBack; i--){
//previous version
//result = result.concat(bucket[i].slice(0, totalLettersIWantBack-result.length));
//faster version of merging array
Array.prototype.push.apply(result, bucket[i].slice(0, totalLettersIWantBack-result.length));
}
return result;
}
console.log(produceLetterArray(bucket, totalLettersIWantBack));

Is it possible to refer back to previous increments of a variable?

For example, let's say I have a loop that is doing basic counting: while the variable is less than 16 the loop runs, and at the end of each iteration you add 2 to the variable and add one to a "count" variable.
What I want to know is whether it's possible to call back to any of the previous values of either variable. For example, can I count all the times count % 2 === 0?
I'm not quite sure whether, once a variable changes, all previous versions of that variable are gone.
http://codepen.io/anon/pen/Gojoxm
var two = 0;
var count = 0;
while (two < 16) {
    two += 2;
    count++;
}
console.log(count);
If I understand you right, then no, you cannot. When you assign a new value to a variable, the previous value is lost.
You have to either run this loop again or store intermediate values in an array:
var values = [];
var two = 0;
while (two < 16) {
two += 2;
values.push(two);
}
console.log(values.length); // the same result
Then, you will always be able to do whatever you want with these values.
For example, you can check if there were any odd values:
var anyOddNumbers = values.some(function(x) { return x % 2 === 1; }); // false
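And since the question was about counting matches: once the values are stored, counting how many of them satisfy a condition is just a filter (the even check here is only an example):
// Count how many of the stored values are even
var evenCount = values.filter(function (x) { return x % 2 === 0; }).length;
console.log(evenCount); // 8, since every stored value (2, 4, ..., 16) is even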

Speed differences in JavaScript functions finding the most common element in an array

I'm studying for an interview and have been working through some practice questions. The question is:
Find the most repeated integer in an array.
Here is the function I created and the one they created. They are appropriately named.
var arr = [3, 6, 6, 1, 5, 8, 9, 6, 6]
function mine(arr) {
    arr.sort()
    var count = 0;
    var integer = 0;
    var tempCount = 1;
    var tempInteger = 0;
    var prevInt = null
    for (var i = 0; i < arr.length; i++) {
        tempInteger = arr[i]
        if (i > 0) {
            prevInt = arr[i - 1]
        }
        if (prevInt == arr[i]) {
            tempCount += 1
            if (tempCount > count) {
                count = tempCount
                integer = tempInteger
            }
        } else {
            tempCount = 1
        }
    }
    console.log("most repeated is: " + integer)
}
function theirs(a) {
    var count = 1,
        tempCount;
    var popular = a[0];
    var temp = 0;
    for (var i = 0; i < (a.length - 1); i++) {
        temp = a[i];
        tempCount = 0;
        for (var j = 1; j < a.length; j++) {
            if (temp == a[j])
                tempCount++;
        }
        if (tempCount > count) {
            popular = temp;
            count = tempCount;
        }
    }
    console.log("most repeated is: " + popular)
}
console.time("mine")
mine(arr)
console.timeEnd("mine")
console.time("theirs")
theirs(arr)
console.timeEnd("theirs")
These are the results:
most repeated is: 6
mine: 16.929ms
most repeated is: 6
theirs: 0.760ms
What makes my function slower than theirs?
My test results
I get the following results when I test (JSFiddle) it for a random array with 50 000 elements:
mine: 28.18 ms
theirs: 5374.69 ms
In other words, your algorithm seems to be much faster. That is expected.
Why is your algorithm faster?
You sort the array first, and then loop through it once. Firefox uses merge sort and Chrome uses a variant of quicksort (according to this question). Both take O(n*log(n)) time on average. Then you loop through the array, taking O(n) time. In total you get O(n*log(n)) + O(n), which can be simplified to just O(n*log(n)).
Their solution, on the other hand, has a nested loop where both the outer and inner loops iterate over all the elements. That should take O(n^2). In other words, it is slower.
Why does your test results differ?
So why does your test results differ from mine? I see a number of possibilities:
You used too small a sample. If you just used the nine numbers in your code, that is definitely the case. When you use short arrays in the test, overheads (like running the console.log, as suggested by Gundy in comments) dominate the time it takes. This can make the result appear completely random.
neuronaut suggests that it is related to the fact that their code operates on the array that is already sorted by your code. While that is a bad way of testing, I fail to see how it would affect the result.
Browser differences of some kind.
A note on .sort()
A further note: You should not use .sort() for sorting numbers, since it sorts things alphabetically. Instead, use .sort(function(a, b){return a-b}). Read more here.
A further note on the further note: In this particular case, just using .sort() might actually be smarter. Since you do not care about the sorting, only the grouping, it doesn't matter that it sorts the numbers "wrong". It will still group elements with the same value together. If it is faster without the comparison function (I suspect it is), then it makes sense to sort without one.
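For illustration, the default lexicographic behaviour looks like this (note that equal values still end up adjacent, which is all the grouping needs):
[10, 2, 1, 2].sort();                                  // [1, 10, 2, 2]  (compared as strings)
[10, 2, 1, 2].sort(function (a, b) { return a - b; }); // [1, 2, 2, 10]  (numeric order)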
An even faster algorithm
You solved the problem in O(n*log(n)), but you can do it in just O(n). The algorithm to do that is quite intuitive. Loop through the array, and keep track of how many times each number appears. Then pick the number that appears the most times.
Let's say there are m different numbers in the array. Looping through the array takes O(n) and finding the max takes O(m). That gives you O(n) + O(m), which simplifies to O(n) since m <= n.
This is the code:
function anders(arr) {
    //Instead of an array we use an object and properties.
    //It works like a dictionary in other languages.
    var counts = {};
    //Count how many of each number there is.
    for (var i = 0; i < arr.length; i++) {
        //Make sure the property is defined.
        if (typeof counts[arr[i]] === 'undefined')
            counts[arr[i]] = 0;
        //Increase the counter.
        counts[arr[i]]++;
    }
    var max;            //The number with the largest count.
    var max_count = -1; //The largest count.
    //Iterate through all of the properties of the counts object
    //to find the number with the largest count.
    for (var num in counts) {
        if (counts.hasOwnProperty(num)) {
            if (counts[num] > max_count) {
                max_count = counts[num];
                max = num;
            }
        }
    }
    //Return the result.
    return max;
}
Running this on a random array with 50 000 elements between 0 and 49 takes just 3.99 ms on my computer. In other words, it is the fastest. The downside is that you need O(m) memory to store how many times each number appears.
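As an aside, the same counting idea can be sketched a little more compactly with an ES2015 Map (a hypothetical variant, not part of the original answer):
// Same O(n) counting idea, using a Map instead of a plain object
function mostRepeated(arr) {
    const counts = new Map();
    for (const n of arr) {
        counts.set(n, (counts.get(n) || 0) + 1); // tally each value
    }
    let best, bestCount = -1;
    for (const [value, count] of counts) {
        if (count > bestCount) {
            bestCount = count;
            best = value;
        }
    }
    return best; // returns the number itself, not a string property key
}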
It looks like this isn't a fair test. When you run your function first, it sorts the array. This means their function ends up using already sorted data but doesn't suffer the time cost of performing the sort. I tried swapping the order in which the tests were run and got nearly identical timings:
console.time("theirs")
theirs(arr)
console.timeEnd("theirs")
console.time("mine")
mine(arr)
console.timeEnd("mine")
most repeated is: 6
theirs: 0.307ms
most repeated is: 6
mine: 0.366ms
Also, if you use two separate arrays you'll see that your function and theirs run in the same amount of time, approximately.
Lastly, see Anders' answer -- it demonstrates that larger data sets reveal your function's O(n*log(n)) + O(n) performance vs their function's O(n^2) performance.
Other answers here already do a great job of explaining why theirs is faster, and also how to optimize yours. Yours is actually better with large datasets (#Anders). I managed to optimize the theirs() solution; maybe there's something useful here.
I can get consistently faster results by employing some basic JS micro-optimizations. These optimizations can also be applied to your original function, but I applied them to theirs.
Preincrementing is slightly faster than postincrementing, because the value does not need to be read into memory first
Reverse-while loops are massively faster (on my machine) than anything else I've tried, because JS is translated into opcodes, and guaranteeing >= 0 is very fast. For this test, my computer scored 514,271,438 ops/sec, while the next-fastest scored 198,959,074.
Cache the result of length - for larger arrays, this would make better() more noticeably faster than theirs()
Code:
function better(a) {
    var top = a[0],
        count = 0,
        len = a.length - 1,
        i = len;
    while (i--) {
        var j = len,
            temp = 0;
        while (j--) {
            if (a[j] == a[i]) ++temp;
        }
        if (temp > count) {
            count = temp;
            top = a[i];
        }
    }
    console.log("most repeated is " + top);
}
[fiddle]
It's very similar, if not the same, to theirs, but with the above micro-optimizations.
Here are the results for running each function 500 times. The array is pre-sorted before any function is run, and the sort is removed from mine().
mine: 44.076ms
theirs: 35.473ms
better: 32.016ms

Numeric sort of a stream that keeps alpha order when the number is the same

I tried a few of the regex sorts I found on SO, but I think they may not like the + symbol in the stream I'm needing to sort.
So I'm getting a data stream that looks like this (3 to 30 letters, then '+', then a number from 0 to 64000):
userString = "AAA+800|BBB+700|CCC+600|ZZZ+500|YYY+400|XXX+300|XXA+300|XXZ+300";
the output needs to be in the format:
array[0] = "XXA+300" // 300 being the lowest num and XXA being before XXX
array[...]
array[7] = "AAA+800"
I wish to order it from the lowest number to the highest, and also reversed.
Here is my inefficient code, which loops 8x8 times (my stream may be up to 200 items long).
It works, but it looks messy. Can someone help me improve it so it uses less iterations?
var array = userString.split('|');
var array2 = [], array3 = [], bits;
var len = array.length;
array.sort();
for (var i = 0; i < len; i++) { // array2 contains just the numbers
    bits = array[i].split('+');
    array2[i] = bits[1];
}
array2.sort();
if (sort_order == 2) // sort_order is set elsewhere; 2 means reversed
    array2.reverse();
var c = 0;
for (var a = 0; a < len; a++) { // loop for creating array3 (the output)
    for (var i = 0; i < len; i++) { // loop thru array to find matching score
        bits = array[i].split('+');
        if (bits[1] == array2[a]) { // found matching score
            array3[c++] = bits[0] + '+' + bits[1]; // add to array3
            array[i] = 'z+z'; // so can't rematch array position
        }
    }
}
array = array3;
Kind Regards
Please forgive the terse answer (and lack of testing), as I'm typing this on an iPhone.
var userArr = userString.split('|');
userArr.sort(function(a, b) {
    var aArr = a.split('+'),
        bArr = b.split('+'),
        aLetters = aArr[0],
        bLetters = bArr[0],
        aNumbers = parseInt( aArr[1] ),
        bNumbers = parseInt( bArr[1] );
    if (aNumbers == bNumbers) {
        return aLetters.localeCompare( bLetters );
    }
    return aNumbers - bNumbers;
    /*
    // Or, for reverse order:
    return -(aNumbers - bNumbers);
    // or if you prefer to expand your terms:
    return -aNumbers + bNumbers;
    */
});
Basically we're splitting on | then doing a custom sort in which we split again on +. We convert the numbers to integers, then if they differ (e.g. 300 and 800) we compare them directly and return the result (because in that case the letters are moot). If they're the same, though (300 and 300) we compare the first parts (XXA and XXX) and return that result (assuming you want an ordinary alphabetical comparison). In this fashion the whole array is sorted.
I wasn't entirely sure what you meant by "and reversed" in your question, but hopefully this will get you started.
As you may've guessed, this isn't totally optimal, since we do the split and parseInt on every element in every comparison, even if we already did so in a previous one. This could be solved trivially by pre-processing the input (a rough sketch follows), but with just 200 elements you probably won't see a huge performance hit.
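If you do want to avoid the repeated work, a rough sketch of that pre-processing, splitting and parsing each item once, sorting the parsed records, then rebuilding the strings, might look like this (the field names letters, num and raw are just made up for the example):
// Parse each item once up front, sort the parsed records, then rebuild the strings
var records = userString.split('|').map(function (item) {
    var parts = item.split('+');
    return { letters: parts[0], num: parseInt(parts[1], 10), raw: item };
});
records.sort(function (a, b) {
    if (a.num == b.num) return a.letters.localeCompare(b.letters);
    return a.num - b.num; // negate this for the reversed order
});
var sorted = records.map(function (r) { return r.raw; });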
Good luck!
