Sorting Arrays in JavaScript

I am brand new to JavaScript (I have been exposed to DOM manipulation, but for this assignment we can display output in a simple console.log according to the prof), and I came across this problem for a school assignment: I need to take user input of 3 numbers, display them, and show the max and min number entered, as well as the average.
The code I have below performs as I intended, but what I am looking for is feedback for improvements. I am still in the process of training my brain to break down these types of problems and organize my thinking. I would like to practice the "best" or most effective methods; since my thinking and logic are not yet defined and everything is new to me, I may as well learn the most effective ways/strategies from the start. Any improvements or better ways to solve this question are greatly appreciated.
Thanks!
let num = parseFloat(prompt("enter your first number"));
let num1 = parseFloat(prompt("enter your second number"));
let num2 = parseFloat(prompt("enter your third number"));
let avg = parseFloat(console.log('The Average of The Numbers', num, ',', num1, ',', num2, 'Is:', (num + num1 + num2) / 3));
let numTot = parseFloat(console.log(`The Numbers You Have Entered Are`, num, +num1, +num2));
let total = parseFloat(console.log('The Total Of', num, '+', num1, '+', num2, 'Is :', num + num1 + num2));
let highest = Math.max(num, num1, num2);
let lowest = Math.min(num, num1, num2);
console.log("The Highest Number Entered Is:", highest);
console.log("The Lowest Number Entered Is:", lowest);

Here's how I would do it:
Declare your numbers in an array. Prefix them with + to coerce them from a String to a Number.
Use Array.prototype.reduce to loop over your array and calculate the average.
Use spread syntax to obtain the min/max values in the accumulator object you pass to .reduce.
// prompt returns a String; prefix with +
// to coerce them to Numbers, since we'll be
// working with numbers.
const numbers = [
  +prompt('Enter number 1'),
  +prompt('Enter number 2'),
  +prompt('Enter number 3')
]

const result = numbers.reduce((acc, number, index) => {
  // For each number:
  // Add the number to the accumulator sum.
  acc.sum += number
  // If this is the last iteration:
  if (index === numbers.length - 1) {
    // Calculate the average from the sum.
    acc.avg = acc.sum / numbers.length
    // Also discard the sum property; we no longer need it.
    delete acc.sum
  }
  // Return the accumulator for the next iteration.
  return acc
}, {
  // Our accumulator object, initialised with min/max values.
  min: Math.min(...numbers),
  max: Math.max(...numbers),
  sum: 0,
  avg: 0
})

// Log the processed accumulator.
console.log(result)
Why use a loop at all?
Array.prototype.reduce loops over an array, similarly to how a for loop does. Using a loop-like construct allows you to add more numbers to the numbers array without needing to modify your calculation code.
Why divide at the end of the loop?
Summing the numbers and dividing once at the end of the loop helps avoid numerical errors.
If you do acc.avg = acc.avg + number / numbers.length on each iteration, you'll start noticing that the average turns out to be a bit off. Try it just for the sake of it.
This might look a bit complex for a beginner, but those 2 concepts (especially Array.prototype.reduce) are worth looking into. FWIW, the classic classroom example for teaching Array.prototype.reduce is calculating averages from an array of numbers.
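To see the difference for yourself, here's a small sketch (the variable names are mine, not from the answer above) comparing divide-per-iteration against sum-then-divide:

```javascript
// Dividing on every iteration performs many inexact floating-point
// divisions; summing first and dividing once performs only one.
const numbers = Array.from({ length: 1000 }, (_, i) => 0.1 * (i + 1));

// Divide on every iteration.
const perStep = numbers.reduce((avg, n) => avg + n / numbers.length, 0);

// Sum first, divide once at the end.
const once = numbers.reduce((sum, n) => sum + n, 0) / numbers.length;

console.log(perStep, once);
```

Both results will be very close to the exact value 50.05, but the per-step version typically drifts a little further away from it.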

If you want to use advanced features, refer to the Nik Kyriakides answer. In this approach I will use a for loop to ask for numbers iteratively and calculate the minimum, maximum and total progressively. The average can be obtained by dividing the total by the quantity of numbers asked for:
const numbersToAsk = 3;
let max, min, total = 0;
for (let i = 1; i <= numbersToAsk; i++) {
  // Ask the user for a new number and get it.
  let num = parseFloat(prompt("enter your number " + i));
  // Add the new number to the previously accumulated total.
  total += num;
  // Recalculate the new maximum; if there wasn't a previous one,
  // just assign the current number. (Checking against undefined,
  // rather than !max, keeps an entered 0 from being skipped.)
  max = max === undefined ? num : Math.max(max, num);
  // Recalculate the new minimum; if there wasn't a previous one,
  // just assign the current number.
  min = min === undefined ? num : Math.min(min, num);
}
// Calculate the average.
const avg = total / numbersToAsk;
// Show the obtained results on the console.
console.log(
  "Total: " + total,
  "Average: " + avg,
  "Min: " + min,
  "Max: " + max
);

How do I fix my problem excluding 0s in my javascript?

I am trying to exclude 0 in my average code. The code is supposed to ask the user to write 4 numbers and then output the average of those 4.
The code works fine, but now I want to exclude 0s from it, and that worked, but only if you enter 0 one time. For example: I want to exclude two 0s but I have no idea how to do it. My idea was to delete the num that has 0 in it so the operation calculates only the numbers greater than 0. Here is my code:
var num1 = parseInt(prompt("first number: "), 10);
var num2 = parseInt(prompt("second number: "), 10);
var num3 = parseInt(prompt("third number: "), 10);
var num4 = parseInt(prompt("fourth number: "), 10);
if (num1 == 0) {
  document.getElementById("output").textContent = (num2 + num3 + num4) / 3;
}
if (num2 == 0) {
  document.getElementById("output").textContent = (num1 + num3 + num4) / 3;
}
if (num3 == 0) {
  document.getElementById("output").textContent = (num1 + num2 + num4) / 3;
}
if (num4 == 0) {
  document.getElementById("output").textContent = (num1 + num2 + num3) / 3;
} else {
  document.getElementById("output").textContent = (num1 + num2 + num3 + num4) / 4;
}
Think about your current line of thought.
You wrote some code that averages 3 numbers if one of the 4 numbers is zero. Now you realize it doesn't work when 2 of the 4 numbers are zeroes. How many more if-elses would you need to write to accommodate that? Then imagine if 3 of them were zeroes.
This problem is fairly simple because you only take 4 inputs. Now imagine if you took 100 inputs and were trying to average all the non-zero integers among them.
A much more scalable solution would be to have it be such that no matter how many inputs you take, the code will only look at and average the non-zero integers. Something like this:
let num1 = 3; // imagine it were equivalent to var num1 = parseInt(prompt("first number: "), 10);
let num2 = 4; // imagine it were equivalent to var num2 = parseInt(prompt("second number: "), 10);
let num3 = 0; // imagine it were equivalent to var num3 = parseInt(prompt("third number: "), 10);
let num4 = 0; // imagine it were equivalent to var num4 = parseInt(prompt("fourth number: "), 10);
const numbers = [num1, num2, num3, num4]

function avgNonZeroValues(nums) {
  const nonZeros = nums.filter(number => number !== 0);
  const avgOfNonZeros = nonZeros.reduce(
    (accumulator, currentValue) => accumulator + currentValue,
    0
  ) / nonZeros.length;
  return avgOfNonZeros;
}

console.log(avgNonZeroValues(numbers));
You need to know two functions:
reduce()
Reduce is a very handy method to go through a list/array and do something with it. It takes an initial value (0 for us), and we also tell it what to do with every element; in effect we are saying "keep adding the elements". Here, we are using it to get the sum of all the numbers in the array.
filter()
Filter is pretty self-explanatory. You give this function an array and a condition, and it will filter out the elements that do not satisfy the condition, giving you a new array of all the elements that do satisfy it. Here, we are using it to filter all the zeros out of the array.
You give this function avgNonZeroValues an array of any length containing numbers, it will remove all the 0s and average the rest of them.
Refactoring this to use arrays allows easy use of array functions to filter and transform the numbers.
const prompts = ["first", "second", "third", "fourth"];
const values = prompts
  .map(p => parseInt(prompt(`${p} number: `), 10))
  .filter(v => v !== 0);
const total = values.reduce((acc, val) => acc + val, 0);
const avg = total / values.length;
console.log(total, avg);

Why does it take so many attempts to generate a random value between 1 and 10000?

I have the following code that generates an initial random number between 1 and 10000, then repeatedly generates a second random number until it matches the first:
let upper = 10000;
let randomNumber = getRandomNumber(upper);
let guess;
let attempt = 0;

function getRandomNumber(upper) {
  return Math.floor(Math.random() * upper) + 1;
}

while (guess !== randomNumber) {
  guess = getRandomNumber(upper);
  attempt += 1;
}

document.write('The randomNumber is ' + randomNumber);
document.write(' it took ' + attempt);

I am confused by the attempt variable. Why does the computer take this many attempts to get the randomNumber? Also, attempt isn't used in the loop condition.
Just to give you a start. This is what your code does:
// define the maximum of the randomly generated number. range = 1 - 10,000
let upper = 10000;
// generate a random number in the range 1 - 10,000
let randomNumber = getRandomNumber(upper);
// predefine variable guess
let guess;
// set a counter to 0
let attempt = 0;

// generate and return a random number in the range from 1 to `upper`
function getRandomNumber(upper) {
  return Math.floor(Math.random() * upper) + 1;
}

// loop until guess equals randomNumber
while (guess !== randomNumber) {
  // generate a new random number and assign it to the variable guess
  guess = getRandomNumber(upper);
  // increase the counter by 1
  attempt += 1;
}

// output the initially generated number
document.write('The randomNumber is ' + randomNumber);
// output the number of repetitions
document.write(' it took ' + attempt);

So, once again: you generate a random number at the start, and then you repeatedly generate another random number until this second random number matches the first. As you don't set any limits, e.g. "each random number can only appear once" or "no more than 10,000 tries", your program might need millions of tries until the numbers match, because you have a range of 10,000 possible numbers and each might repeat hundreds of times before the match finally happens.
Try to optimize your program by limiting the number of tries to 10,000. And you could just let your computer count upwards from 1 to 10,000 instead of guessing with a randomly generated number.
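As a sketch of that suggestion (my own code, not the original poster's): counting upward guarantees the answer is found in at most upper attempts, since every candidate is tried exactly once.

```javascript
function getRandomNumber(upper) {
  return Math.floor(Math.random() * upper) + 1;
}

const upper = 10000;
const randomNumber = getRandomNumber(upper);
let attempt = 0;
let found;

// Try each candidate exactly once instead of guessing randomly.
for (let guess = 1; guess <= upper; guess++) {
  attempt += 1;
  if (guess === randomNumber) {
    found = guess;
    break;
  }
}

console.log('Found ' + found + ' after ' + attempt + ' attempts');
```

Because the candidates are tried in order, attempt always equals randomNumber here and can never exceed upper.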
When I repeatedly run your snippet, I am seeing your code take anywhere from a few thousand to a few tens of thousands of repetitions to get a given value for a random number sampled between 1 and 10000. But this is not surprising -- it is expected.
Assuming your getRandomNumber(upper) function does indeed return a number between 1 and upper with a uniform distribution, the expected probability that the number returned will not be the initial, given value randomNumber is:
1 - (1/upper)
And the chance that the first N generated numbers will not include the given value is:
(1 - (1/upper)) ^ N
And so the chance P that the first N generated numbers will include given value is:
P = 1 - (1 - (1/upper)) ^ N
Thus the following formula gives the number of repetitions you will need to make to generate your initial value with a given probability P:
N = ln(1.0 - P) / ln(1.0 - (1.0/upper))
Using this formula, there is only a 50% chance of getting randomNumber after 6932 repetitions, and a 95% chance after 29956 repetitions.
let upper = 10000;

function numberOfRepetitionsToGetValueWithRequiredProbability(upper, P) {
  return Math.ceil(Math.log(1.0 - P) / Math.log(1.0 - (1.0 / upper)))
}

function printNumberOfRepetitionsToGetValueWithRequiredProbability(upper, P) {
  document.write('The number of tries to get a given value between 1 and ' + upper + ' with a ' + P + ' probability: ' + numberOfRepetitionsToGetValueWithRequiredProbability(upper, P) + ".<br>");
}

var probabilities = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99, 0.9999];
probabilities.forEach((p) => printNumberOfRepetitionsToGetValueWithRequiredProbability(upper, p));
This is entirely consistent with the observed behavior of your code. And of course, assuming Math.random() is truly random (which it isn't, it's only pseudorandom, according to the docs) there is always going to be a vanishingly small probability of never encountering your initial value no matter how many repetitions you make.
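A quick empirical check of this analysis (a sketch of mine, using a smaller range so it runs fast): the mean number of attempts over many independent trials should approach upper, since each attempt succeeds with probability 1/upper.

```javascript
const upper = 1000;

function getRandomNumber(upper) {
  return Math.floor(Math.random() * upper) + 1;
}

// Run many independent trials of the guess-until-match loop
// and average the attempt counts.
const trials = 2000;
let totalAttempts = 0;
for (let t = 0; t < trials; t++) {
  const target = getRandomNumber(upper);
  let attempts = 0;
  let guess;
  do {
    guess = getRandomNumber(upper);
    attempts += 1;
  } while (guess !== target);
  totalAttempts += attempts;
}

const meanAttempts = totalAttempts / trials;
console.log('Mean attempts:', meanAttempts); // typically close to upper (1000)
```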

How to accumulate over each number? JavaScript [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
This is my problem; I am having a hard time figuring out what to do to solve it.
The Task: We'll pass you an array of two numbers. Return the sum of those two numbers plus the sum of all the numbers between them. The lowest number will not always come first.
For example, sumAll([4,1]) should return 10 because the sum of all the numbers between 1 and 4 (both inclusive) is 10.
function sumAll(arr) {
  Math.min(arr); // finds the lowest number and takes it 1
  Math.max(arr); // finds the largest number 4
  // must start at the 1st number and loop over until the max value is reached
  // 0 start at the 0th index of the array
  // ++ increment by one so 1 2 3 4
  // multiplies each number
  // .length until the length of the array is reached
  var i;
  for (i = 0; i < arr.length; i++) {
    i * i;
  }
  return 1;
}
sumAll([1, 4]);
If it's always going to be 2 numbers in an array, then you can easily do this with no more fancy code.

var arr = [1, 4];
arr.sort((a, b) => a - b);
var total = 0;
for (var i = arr[0]; i <= arr[1]; i++) {
  total += i;
}
console.log(total);
You can grab the largest number from your input array using Math.max and the smallest number from the array using Math.min, you just need to spread the values from the array into the method calls so that the numbers from the input array are used as the arguments (rather than the array itself).
Once you have the largest and smallest number, you can find the sum between (and including) these two numbers. This can be done using a loop. However, a more efficient way would be to use a formula to compute it for you. If you call the smaller number a and the larger number b, you want to find:
res = a + (a+1) + (a+2) + ... + (b-1) + b
res2 = b + (b-1) + (b-2) + ... + (a+1) + a
As you can see above res2 and res are equal. So we can say res2 = res. So, if we perform res + res2, we will get 2*res. If we add the two together (adding by the columns), we get:
2*res = a+b + (a+1)+(b-1) + (a+2)+(b-2) + ... + (b-1)+(a+1) + b+a
= a+b + a+b + a+b + ... + a+b + a+b
As you can see 2*res results in a+b being repeated for every number in the original equation, which is b-a + 1 times. Thus:
2*res = (b-a + 1)*(a+b)
As we want to find what res is, we can divide both sides by 2 to get:
res = (b-a + 1)*(a+b)/2
So, we can use the above equation to find the sum of numbers between two numbers a and b, where a is the smaller number and b is the larger number.
Using both Math.max(), Math.min() and the above equation, we can do this using the following:
const sumRange = (a, b) => ((b - a + 1)*(a + b))/2;
function sumAll(arr) {
  const smaller = Math.min(...arr);
  const bigger = Math.max(...arr);
  return sumRange(smaller, bigger);
}

console.log(sumAll([4, 1]));
console.log(sumAll([4, 1]));
You could do this a number of ways, in this case I am using a while loop.
function sumAll(arr) {
  // Get the min/max values from the array.
  // Note: you have to spread the array values as individual args using '...' notation
  const min = Math.min(...arr);
  const max = Math.max(...arr);
  // Start at the min value
  let current = min;
  let sum = 0;
  // Loop through all numbers between min and max inclusively
  while (current <= max) {
    sum += current;
    current++;
  }
  return sum;
}
console.log(sumAll([1, 4]));
You can just find the lower number before running the loop that sums all the in-between numbers.
You can just add the condition:

let first, last;
if (arr[0] < arr[1]) {
  first = arr[0];
  last = arr[1];
} else {
  first = arr[1];
  last = arr[0];
}
let temp = 0;
for (let i = first; i <= last; i++) {
  temp = temp + i;
}
return temp;
Just sort the array and run a loop to add the numbers, starting from the first element and ending at the second:

function findSum(arr) {
  let sortedArr = arr.slice().sort((a, b) => a - b);
  let total = 0;
  for (let i = sortedArr[0]; i <= sortedArr[1]; i++) {
    total += i;
  }
  console.log(total);
}
findSum([1, 4])
var points = [40, 100, 1, 5, 25, 10];
points.sort(function (a, b) { return a - b });
points[0]; // this is the min value of the array
You can check this link on w3schools
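As a side note (not part of the snippet above): if you only need the minimum or maximum, spreading the array into Math.min / Math.max avoids the sort, which also mutates the original array:

```javascript
const points = [40, 100, 1, 5, 25, 10];
console.log(Math.min(...points)); // → 1
console.log(Math.max(...points)); // → 100
```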

Is there a simple way to split a number into an array of digits without converting it to a string and back?

I was working on a Javascript exercise that required a number to be converted into an array of single digits and the only way I can seem to achieve this is by converting the number into a string and then converting it back to a number.
let numbers = 12345;
Array.from(numbers.toString(10), Number) // [1, 2, 3, 4, 5]
Basically, I'm wondering if this is the best way to achieve a split like this on a number or if there is a more efficient method that doesn't require such a conversion.
You can always get the smallest digit with n % 10. You can remove this digit with subtraction and division by 10 (or divide and floor). This makes for a pretty simple loop:
function digits(numbers) {
  if (numbers == 0) return [numbers]
  let res = []
  while (numbers) {
    let n = numbers % 10
    res.push(n)
    numbers = (numbers - n) / 10
  }
  return res.reverse()
}

console.log(digits(1279020))

This produces the digits in reverse order, so you either have to unshift the results onto the array or push and reverse at the end.
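For illustration, here is a sketch of the unshift variant just mentioned (my own adaptation of the code above): each digit is inserted at the front, so no final reverse is needed.

```javascript
function digitsUnshift(numbers) {
  if (numbers == 0) return [numbers]
  const res = []
  while (numbers) {
    const n = numbers % 10
    // Insert at the front so the array ends up most-significant-digit first.
    res.unshift(n)
    numbers = (numbers - n) / 10
  }
  return res
}

console.log(digitsUnshift(1279020)) // [1, 2, 7, 9, 0, 2, 0]
```

Note that unshift is O(n) per call, so push-then-reverse is usually the cheaper option for long numbers.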
One of the nice things about this is that you can find the digits in different bases by swapping out 10 for the base of your choice:

function digits(numbers, base) {
  if (numbers == 0) return [numbers]
  let res = []
  while (numbers) {
    let n = numbers % base
    res.push(n)
    numbers = (numbers - n) / base
  }
  return res.reverse()
}

// binary
console.log(digits(20509, 2).join(''))
console.log((20509).toString(2))
// octal
console.log(digits(20509, 8).join(''))
console.log((20509).toString(8))
Although once your base is larger than 10 you will have to map those digits to the appropriate letters.
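For example (a sketch of mine, not from the answer above), for base 16 you can map each digit value through a lookup string:

```javascript
function digits(numbers, base) {
  if (numbers == 0) return [numbers]
  let res = []
  while (numbers) {
    let n = numbers % base
    res.push(n)
    numbers = (numbers - n) / base
  }
  return res.reverse()
}

// Map digit values 10-15 to the letters a-f.
const glyphs = '0123456789abcdef'
const hex = digits(48879, 16).map(d => glyphs[d]).join('')

console.log(hex)                  // 'beef'
console.log((48879).toString(16)) // 'beef', for comparison
```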
One approach would be to iterate through the number of digits and calculate the difference of each modulo by base, and then populate the output list from the result of each iteration.
A quick way to identify the number of digits in your base 10 input would be the following:
Math.floor(Math.log(input) / Math.LN10 + 1) // 5 for input of 12349
Next, iterate through this range and, for each iteration, calculate the base of the current and previous iterations, and take the input modulo each of these. The digit for the current iteration is then derived from the difference of the modulo calculations like this:

function arrayFromInput(input) {
  const output = [];
  for (let i = 0; i < Math.floor(Math.log(input) / Math.LN10 + 1); i++) {
    const lastBase = Math.pow(10, i);
    const nextBase = Math.pow(10, i + 1);
    const lastMod = input % lastBase;
    const nextMod = input % nextBase;
    const digit = (nextMod - lastMod) / lastBase;
    output.unshift(digit);
  }
  return output;
}

console.log(arrayFromInput(12345), '= [1,2,3,4,5]');
console.log(arrayFromInput(12), '= [1,2]');
console.log(arrayFromInput(120), '= [1,2,0]');
console.log(arrayFromInput(9), '= [9]');
console.log(arrayFromInput(100), '= [1,0,0]');

How do I optimally distribute values over an array of percentages?

Let's say I have the following code:
arr = [0.1,0.5,0.2,0.2]; //The percentages (or decimals) we want to distribute them over.
value = 100; //The amount of things we have to distribute
arr2 = [0,0,0,0] //Where we want how many of each value to go
To find out how to equally distribute a hundred over the array is simple, it's a case of:
0.1 * 100 = 10
0.5 * 100 = 50
...
Or doing it using a for loop:

for (var i = 0; i < arr.length; i++) {
  arr2[i] = arr[i] * value;
}
However, let's say each counter is an object and thus has to be whole. How can I distribute them as equally as I can over a different value? Let's say the value becomes 12.
0.1 * 12 = 1.2
0.5 * 12 = 6
...
How do I deal with the decimal when I need it to be whole? Rounding means that I could potentially not have the 12 pieces needed.
A correct algorithm would:
- Take an input/iterate through an array of values (for this example we'll be using the array defined above).
- Turn it into a set of whole values which, added together, equal the value (which will equal 100 for this).
- Output an array of values which, for this example, will look something like [10,50,20,20] (these add up to 100, which is what we need them to add up to, and they are also all whole).
- If any value is not whole, it should make it whole so the whole array still adds up to the value needed (100).

TL;DR: dealing with decimals when distributing values over an array and attempting to turn them into integers.
Note - Should this be posted on a different Stack Exchange site? My need is programming, but the actual question will likely be solved using mathematics. Also, I had no idea how to word this question, which makes googling incredibly difficult. If I've missed something incredibly obvious, please tell me.
You should round all values as you assign them using a rounding that is known to uniformly distribute the rounding. Finally, the last value will be assigned differently to round the sum up to 1.
Let's start slowly or things get very confused. First, let's see how to assign the last value to have a total of the desired value.
// we will need this later on
sum = 0;
// assign all values but the last
for (i = 0; i < output.length - 1; i++) {
  output[i] = input[i] * total;
  sum += output[i];
}
// last value must honor the total constraint
output[i] = total - sum;

That last line needs some explanation. The i will be one more than the last allowed in the for(..) loop, so it will be:
output.length - 1 // last index
The value we assign ensures that the sum of all elements is equal to total. We already computed the sum in a single pass during the assignment of the values, and thus don't need to iterate over the elements a second time to determine it.
Next, we will approach the rounding problem. Let's simplify the above code so that it uses a function on which we will elaborate shortly after:
sum = 0;
for (i = 0; i < output.length - 1; i++) {
  output[i] = u(input[i], total);
  sum += output[i];
}
output[i] = total - sum;
As you can see, nothing has changed but the introduction of the u() function. Let's concentrate on this now.
There are several approaches on how to implement u().
DEFINITION
u(c, total) ::= c * total
By this definition you get the same as above. It is precise and good, but as you said before, you want the values to be natural numbers (e.g. integers). So while for real numbers this is already perfect, for natural numbers we have to round. Let's suppose we use the simple rounding rule for integers:
[ 0.0, 0.5 [ => round down
[ 0.5, 1.0 [ => round up
This is achieved with:

function u(c, total) {
  return Math.round(c * total);
}

When you are unlucky, you may round up (or down) so many values that the last-value correction will not be enough to honor the total constraint, and generally all values will seem to be off by too much. This is a well-known problem for which there exists a multi-dimensional solution used to draw lines in 2D and 3D space, called the Bresenham algorithm.
To make things easy I'll show you here how to implement it in 1 dimension (which is your case).
Let's first discuss a term: the remainder. This is what is left after you have rounded your numbers. It is computed as the difference between what you wish and what you really have:
DEFINITION
WISH ::= c * total
HAVE ::= Math.round(WISH)
REMAINDER ::= WISH - HAVE
Now think about it. The remainder is like the piece of paper that you discard when you cut out a shape from a sheet. That remaining paper is still there but you throw it away. Instead of this, just add it to the next cut-out so it is not wasted:
WISH ::= c * total + REMAINDER_FROM_PREVIOUS_STEP
HAVE ::= Math.round(WISH)
REMAINDER ::= WISH - HAVE
This way you keep the error and carry it over to the next partition in your computation. This is called amortizing the error.
Here is an amortized implementation of u():
// amortized is defined outside u because we need a side-effect across calls of u
function u(c, total) {
  var real, natural;
  real = c * total + amortized;
  natural = Math.round(real);
  amortized = real - natural;
  return natural;
}
Of your own accord you may wish to use another rounding rule, such as Math.floor() or Math.ceil().
What I would advise you to do is use Math.floor(), because it is proven to be correct with respect to the total constraint. When you use Math.round() you will have smoother amortization, but you risk the last value not being positive. You might end up with something like this:
[ 1, 0, 0, 1, 1, 0, -1 ]
Only when ALL VALUES are far away from 0 can you be confident that the last value will also be positive. So, for the general case, the Bresenham algorithm uses flooring, resulting in this last implementation:
function u(c, total) {
  var real, natural;
  real = c * total + amortized;
  natural = Math.floor(real); // just to be on the safe side
  amortized = real - natural;
  return natural;
}

sum = 0;
amortized = 0;
for (i = 0; i < output.length - 1; i++) {
  output[i] = u(input[i], total);
  sum += output[i];
}
output[i] = total - sum;
Obviously, the input and output arrays must have the same size, and the values in input must be a partition (sum up to 1).
This kind of algorithm is very common for probabilistical and statistical computations.
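Putting the fragments above together, here is a runnable sketch of the flooring variant (assembled by me; the `distribute` name is mine, not from the answer), applied to the question's example of distributing 12 over [0.1, 0.5, 0.2, 0.2]:

```javascript
function distribute(input, total) {
  const output = new Array(input.length);
  let sum = 0;
  let amortized = 0;
  let i;
  for (i = 0; i < output.length - 1; i++) {
    // Floor the wished value plus the error carried over so far.
    const real = input[i] * total + amortized;
    const natural = Math.floor(real);
    amortized = real - natural;
    output[i] = natural;
    sum += natural;
  }
  // The last value honors the total constraint.
  output[i] = total - sum;
  return output;
}

const result = distribute([0.1, 0.5, 0.2, 0.2], 12);
console.log(result); // [1, 6, 2, 3] -- whole numbers summing to 12
```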
Alternate implementation - it remembers a pointer to the value with the biggest rounding error, and when the sum differs from 100, increments or decrements the value at that position.
const items = [1, 2, 3, 5];
const total = items.reduce((total, x) => total + x, 0);
let result = [], sum = 0, biggestRound = 0, roundPointer;

items.forEach((votes, index) => {
  let value = 100 * votes / total;
  let rounded = Math.round(value);
  let diff = value - rounded;
  if (diff > biggestRound) {
    biggestRound = diff;
    roundPointer = index;
  }
  sum += rounded;
  result.push(rounded);
});

if (sum === 99) {
  result[roundPointer] += 1;
} else if (sum === 101) {
  result[roundPointer] -= 1;
}
