I need to generate a random number in Meteor - javascript

I need to generate a random number between 100 and 1000 and record it in the database, but every number must be unique from the others.
How can I do this in Meteor?
Thank you.

You can follow this logic:
var arr = [];
for (var i = 100; i <= 1000; i++) {
    arr.push(i);
}
Or, if Underscore is available:
var arr = _.range(100, 1001);
Now we have an array containing all the unique values you want to assign. Then, for generation:
var rand = Math.floor((Math.random()*arr.length));
var randNumber = arr[rand];
arr.splice(rand,1);
There you go: you have a random number between 100 and 1000 called randNumber, and you cannot get the same one the next time you run that bit of code.
But you will need to store the arr array somewhere for as long as you want to generate random numbers. How you do that depends on how persistent this array needs to be: whether the process takes place over a long period of time (for example "each time a user does X") or is a one-time job.
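If the pool needs to survive server restarts, one option (a minimal sketch, assuming a server-side Mongo collection named AvailableNumbers, which is not part of the question) is to persist the unused values and remove each one as it is assigned:

// Server-side sketch: keep the pool of unused numbers in a collection.
// "AvailableNumbers" is a hypothetical collection name.
var AvailableNumbers = new Mongo.Collection('availableNumbers');

Meteor.startup(function () {
    // Seed the pool once, if it is empty.
    if (AvailableNumbers.find().count() === 0) {
        for (var i = 100; i <= 1000; i++) {
            AvailableNumbers.insert({ value: i });
        }
    }
});

function takeUniqueRandomNumber() {
    var remaining = AvailableNumbers.find().fetch();
    if (remaining.length === 0) {
        throw new Meteor.Error('exhausted', 'No numbers left in the pool');
    }
    var pick = remaining[Math.floor(Math.random() * remaining.length)];
    AvailableNumbers.remove(pick._id); // the number can never be handed out again
    return pick.value;
}

On a busy server you would also want to make the pick-and-remove step atomic, but that is beyond the scope of this sketch.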

Related

Kirchhoff's Current Law using JavaScript

I am wondering how I should implement KCL using JavaScript. I have several current meters that detect the value of the current along a line.
I came up with a simple program but was unable to work out how to implement KCL efficiently.
var currentPointsDBA = []; // data captured from the electricity meters is stored in this array
var totalCurrentMDB = [];  // total current per distribution board; each index represents one board
const totalBusBar = 5;

var totalCurrent = 0;
for (var z = 0; z < currentPointsDBA.length; z++) {
    totalCurrent += currentPointsDBA[z];
}
totalCurrentMDB[0] = totalCurrent;

var currentCheck = [
    { name: "DBA", number: 0, threshold: 0 }
];

for (var i = 0; i < totalBusBar; i++) {
    var totalCurrentCheck = currentCheck[i];
    if (totalCurrentMDB[totalCurrentCheck.number] != 0) {
        // DO SOMETHING
    }
}
I realized that with this code the condition will never be met, because I am adding the currents without using the concept that the current entering a node equals the current exiting the node.
Another difficulty I was facing was finding an algorithm that can handle both positive and negative contributions for KCL.
The inputs are power lines connected to transducers (meters that measure current). The outputs are also currents connected to meters.
Points 1, 2, 3, 4 and 5 will be replaced by the meters. Basically, I want an algorithm such that the sum at DBA equals Point 1 + 2 + 3 + 4 + 5. However, Point 1 is bidirectional: it can act as an input or an output. The issue is figuring out the direction of the current and using that to decide whether it counts as input or output.
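No answer is recorded here, but a common way to express KCL in code (a minimal sketch, with hypothetical meter readings whose sign encodes direction: positive into the node, negative out of it) is to sum the signed readings and compare the residual against a tolerance:

// Minimal KCL sketch: readings are signed, positive = current into the node,
// negative = current out of the node. Names and the tolerance are illustrative.
function kclResidual(readings) {
    return readings.reduce(function (sum, amps) {
        return sum + amps;
    }, 0);
}

function kclHolds(readings, toleranceAmps) {
    // KCL says the signed currents at a node should sum to (approximately) zero.
    return Math.abs(kclResidual(readings)) <= toleranceAmps;
}

// Example: three meters feeding a node, two drawing from it.
var nodeReadings = [2.1, 3.4, -5.5, 0.05, -0.03];
console.log(kclHolds(nodeReadings, 0.1)); // true if the imbalance is within 0.1 A

A bidirectional point like Point 1 then needs no special case: its reading simply carries whichever sign the measured direction gives it.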

In JS which one makes sense: searching for a value in object collection by foreach vs keeping multiple collections with different keys

I'm working on a 1:1 chat system; the environment is Node.js.
For each country there is a country room (lobby), and for each socket client a JS class/object is created; each object is kept in a list under its unique user id.
This unique id is preserved even when a user logs in from different browser tabs, etc.
Each object is stored in collections such as "connections" (all of them), "operators" (only operators) and "{countryISO}_clients" (users), and the reference key is the unique id.
In some circumstances, I need to access these connections by their socket ids.
At this point, I can think of two options:
Using a for-each loop to find the desired object.
Creating another collection, this time keyed by socket id (or something else) instead of the unique id.
Which one makes sense? Since in JS the second collection would hold references rather than copies, it feels like it makes sense (and looks cleaner), but I can't be sure. Which one is more expensive in memory/performance terms?
I can't make thorough tests since I don't know how to create dummy (simultaneous) socket connections.
Expected connected socket client count: 300 - 1000 (depends on the time of the day)
e.g. user:
"qk32d2k": {
    "uid": "qk32d2k",
    "name": "Josh",
    "socket": "{socket.io's socket reference}",
    "role": "user",
    "rooms": ["room1"],
    "socketids": ["sid1"],
    "country": "us",
    ...
    info: () => { return gatherSomeData(); },
    update: (obj) => { return updateSomeData(obj); },
    send: (data) => { /* send data to this user */ }
}
e.g. Countries collection:
{
    us: {
        "qk32d2k": { "object above." },
        "l33t71": { "another user object." }
    },
    ca: {
        "asd231": { "other user object." }
    }
}
Pick Simple Design First that Optimizes for Most Common Access
There is no ideal answer here in the absolute. CPUs are wicked fast these days, so if I were you I'd start out with one simple mechanism of storing the sockets that you can access both ways you want, even if one way is kind of a brute force search. Pick the data structure that optimizes the access mechanism that you expect to be either most common or most sensitive to performance.
So, if you are going to be looking up by userID the most, then I'd probably store the sockets in a Map object with the userID as the key. That will give you fast, optimized access to get the socket for a given userID.
For finding a socket by some other property of the socket, you will just iterate the Map item by item until you find the desired match on some other socket property. I'd probably use a for/of loop because it's both fast and easy to bail out of the loop when you've found your match (something you can't do on a Map or Array object with .forEach()). You can obviously make yourself a little utility function or method that will do the brute force lookup for you and that will allow you to modify the implementation later without changing much calling code.
Measure and Add Further Optimization Later (if data shows you need to)
Then, once you get up to scale (or simulated scale in pre-production test), you take a look at the performance of your system. If you have loads of room to spare, you're done - no need to look further. If you have some operations that are slower than desired or higher than desired CPU usage, then you profile your system and find out where the time is going. It's most likely that your performance bottlenecks will be elsewhere in your system and you can then concentrate on those aspects of the system. If, in your profiling, you find that the linear lookup to find the desired socket is causing some of your slow-down, then you can make a second parallel lookup Map with the socketID as the key in order to optimize that type of lookup.
But, I would not recommend doing this until you've actually shown that it is an issue. Premature optimization, before you have actual metrics that prove something is worth optimizing, just adds complexity to a program without any proof that it is required or even anywhere close to a meaningful bottleneck in your system. Our intuition about where the bottlenecks are is often way, way off. For that reason, I tend to pick an intelligent first design that is relatively simple to implement, maintain and use, and only when we have real usage data by which to measure actual performance would I spend more time optimizing it, tweaking it, or making it more complicated in order to make it faster.
Encapsulate the Implementation in a Class
If you encapsulate all operations here in a class:
Adding a socket to the data structure.
Removing a socket from the data structure.
Looking up by userID
Looking up by socketID
Any other access to the data structure
Then, all calling code will access this data structure via the class and you can tweak the implementation some time in the future (to optimize based on data) without having to modify any of the calling code. This type of encapsulation can be very useful if you suspect future changes to the way the data is stored or accessed.
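A minimal sketch of such a wrapper (the class and method names are illustrative, not taken from the question or answer) might look like this:

// Hypothetical encapsulation of the connection store described above.
// Primary index: userID -> user object. Other lookups are brute force for now.
class ConnectionRegistry {
    constructor() {
        this.byUserId = new Map();
    }
    add(user) {
        this.byUserId.set(user.uid, user);
    }
    remove(uid) {
        this.byUserId.delete(uid);
    }
    getByUserId(uid) {
        return this.byUserId.get(uid);
    }
    getBySocketId(socketId) {
        // Linear scan; this could later be replaced by a second Map keyed by
        // socket id without changing any calling code.
        for (const user of this.byUserId.values()) {
            if (user.socketids && user.socketids.includes(socketId)) {
                return user;
            }
        }
        return null;
    }
}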
If You're Still Worried, Design a Quick Bench Measurement
I created a quick snippet that tests how long a brute force lookup takes in a 1000-element Map object (when you want to find an entry by something other than the key) and compares it to an indexed lookup.
On my computer, a brute force (non-indexed) lookup takes about 0.002549 ms per lookup (that's the average over 1,000,000 lookups). For comparison, an indexed lookup on the same Map takes about 0.000017 ms, so you save about 0.002532 ms per lookup. In other words, fractions of a millisecond.
function addCommas(str) {
    var parts = (str + "").split("."),
        main = parts[0],
        len = main.length,
        output = "",
        i = len - 1;
    while (i >= 0) {
        output = main.charAt(i) + output;
        if ((len - i) % 3 === 0 && i > 0) {
            output = "," + output;
        }
        --i;
    }
    // put decimal part back
    if (parts.length > 1) {
        output += "." + parts[1];
    }
    return output;
}

let m = new Map();

// populate the Map with objects that have a property that
// you have to do a brute force lookup on
function rand(min, max) {
    return Math.floor((Math.random() * (max - min)) + min);
}

// keep all randoms here just so we can randomly get one
// to try to find (wouldn't normally do this)
// just for testing purposes
let allRandoms = [];
for (let i = 0; i < 1000; i++) {
    let r = rand(1, 1000000);
    m.set(i, {id: r});
    allRandoms.push(r);
}

// create a set of test lookups
// we do this ahead of time so it's not part of the timed
// section so we're only timing the actual brute force lookup
let numRuns = 1000000;
let lookupTests = [];
for (let i = 0; i < numRuns; i++) {
    lookupTests.push(allRandoms[rand(0, allRandoms.length)]);
}

let indexTests = [];
for (let i = 0; i < numRuns; i++) {
    indexTests.push(rand(0, allRandoms.length));
}

// function to brute force search the map to find one of the random items
function findObj(targetVal) {
    for (let [key, val] of m) {
        if (val.id === targetVal) {
            return val;
        }
    }
    return null;
}

let startTime = Date.now();
for (let i = 0; i < lookupTests.length; i++) {
    // get an id from the allRandoms to search for
    let found = findObj(lookupTests[i]);
    if (!found) {
        console.log("!!didn't find brute force target");
    }
}
let delta = Date.now() - startTime;

//console.log(`Total run time for ${addCommas(numRuns)} lookups: ${delta} ms`);
//console.log(`Avg run time per lookup: ${delta/numRuns} ms`);

// Now, see how fast the same number of indexed lookups are
let startTime2 = Date.now();
for (let i = 0; i < indexTests.length; i++) {
    let found = m.get(indexTests[i]);
    if (!found) {
        console.log("!!didn't find indexed target");
    }
}
let delta2 = Date.now() - startTime2;

//console.log(`Total run time for ${addCommas(numRuns)} lookups: ${delta2} ms`);
//console.log(`Avg run time per lookup: ${delta2/numRuns} ms`);

let results = `
Total run time for ${addCommas(numRuns)} brute force lookups: ${delta} ms<br>
Avg run time per brute force lookup: ${delta/numRuns} ms<br>
<hr>
Total run time for ${addCommas(numRuns)} indexed lookups: ${delta2} ms<br>
Avg run time per indexed lookup: ${delta2/numRuns} ms<br>
<hr>
Net savings of an indexed lookup is ${(delta - delta2)/numRuns} ms per lookup
`;
document.body.innerHTML = results;

Sorting with constraint no consecutive equals

I would like to sort a news feed by created date, which is trivial, but I don't want two consecutive posts with the same userId field.
This might not be theoretically possible; for example, what if I have only two posts with the same userId field?
I am looking for an algorithm that sorts by fieldA but never has two consecutive elements with the same fieldB.
It would also be nice to have a parameterized algorithm for the required number of different posts between the same user's posts. (In the scenario above this parameter is 1.)
I'm not looking for performance (O(n^2) would be okay), but for a clever and simple approach, maybe five lines of code.
Language doesn't matter, but JavaScript is preferred.
Solving this problem in five lines is somewhat difficult, so I'll give short pseudocode that you can translate to JS.
First we group the input into A[1], A[2], ..., A[k], where A[i] is a container holding all posts of the i-th user; this is easily done with an O(n) scan.
code:
for j = 1 to k
    lastOccurPosition[j] = -intervalLength;   // intervalLength is the gap parameter from the question
for j = 1 to k
    sort(A[j]);                               // sort each user's posts by created date
for i = 1 to n
    minElement = INF;                         // find the minimum remaining post
    minUserId = -1;                           // record whose post is the minimum
    for j = 1 to k
        if (A[j] is empty or i - lastOccurPosition[j] <= intervalLength)
            continue;                         // this user occurred within the interval (or has no posts left), so skip
        if (A[j][1] < minElement)
            minElement = A[j][1];
            minUserId = j;
    answer[i] = minElement;                   // put the min element into the answer array
    lastOccurPosition[minUserId] = i;         // update the chosen user's last occurrence position
    A[minUserId].pop_front();                 // delete the first element
It is easy to see that this algorithm's complexity is O(n^2); I haven't thought of a more concise solution.
Hope this is helpful.
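A rough JavaScript translation of the same idea (a sketch, assuming each post has the shape {userId, created} and that gap is the required number of other posts between two posts of the same user; these names are not from the question) could be:

// Sketch: sort by `created` but never place two posts with the same `userId`
// within `gap` positions of each other.
function sortWithGap(posts, gap) {
    const remaining = posts.slice().sort((a, b) => a.created - b.created);
    const lastPos = {};              // userId -> index where that user last appeared
    const result = [];
    while (remaining.length > 0) {
        const i = result.length;
        // pick the earliest remaining post whose user has not appeared within `gap` positions
        const idx = remaining.findIndex(function (p) {
            return lastPos[p.userId] === undefined || i - lastPos[p.userId] > gap;
        });
        if (idx === -1) break;       // no valid post left (e.g. only one user remains)
        const post = remaining.splice(idx, 1)[0];
        lastPos[post.userId] = i;
        result.push(post);
    }
    return result;                   // posts that could not be placed are dropped
}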
Put the attributes in an array, and sort that array:
arr.sort();
Put the second attribute in another array and sort that array according to the first one.
var newarr = [arr[0]];
for (var i = 1; i < arr.length; i++) {
    if (arr[i] != arr[i-1]) newarr.push(arr[i]);
}
Note that this just removes duplicates.
This all feels kind of trivial, am I missing something?
Hope this helps.
Cheers,
Gijs

Random number in JavaScript per day once

I'm in the process of coding an application that does the following:
Generates a random number with 4 digits.
Changes it once per calendar day.
Won't change that full day. Only once in a day.
I tried:
function my_doubt()
{
    var place = document.getElementById("my_div");
    place.innerHTML = Math.floor((Math.random() * 100) + 1);
}
I'm getting a random number with Math.random(). However, I'm rather clueless about how to generate a different number for each day. What are some common approaches to this problem?
Note: it doesn't have to be truly random; a pseudo-random number is also OK.
You need to seed the random number generator with a number derived from the current date, for example "20130927" for today.
You haven't been clear about your requirements, so I don't know how random you need it to be (do you have requirements for how uniform the distribution must be?).
This will generate a random-looking 4-digit number which may be good enough for your purposes, but if you analyse it you'll find the numbers aren't actually very random:
function rand_from_seed(x, iterations) {
    iterations = iterations || 100;
    for (var i = 0; i < iterations; i++) {
        x = (x ^ (x << 1) ^ (x >> 1)) % 10000;
    }
    return x;
}

var random = rand_from_seed(~~((new Date)/86400000)); // seed with the epoch day
Now that your question is a bit more reasonable, clear, and nicer in tone, I can give you a way to get the same result on the client side. However, as others mentioned, to ensure consistency you probably want to maintain the number on the server.
var oneDayInMs = 1000 * 60 * 60 * 24;
var currentTimeInMs = new Date().getTime(); // UTC time
var timeInDays = Math.floor(currentTimeInMs / oneDayInMs);
var numberForToday = timeInDays % 9999;
console.log(numberForToday);

// zero-filling of numbers with fewer than four digits might be optional for you;
// the zero-filled value will be a string so it keeps its leading 0s
var fourDigitNumber = numberForToday.toString();
while (fourDigitNumber.length < 4) {
    fourDigitNumber = 0 + fourDigitNumber;
}
console.log(fourDigitNumber);
// remember that this number repeats every 9999 days and is unique within that period
1) Create a random number in JavaScript.
2) Store it in a cookie that will expire after one day.
3) Get the value from the cookie; if it does not exist, go to 1.
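A minimal sketch of that cookie approach (the cookie name "dailyNumber" and the 4-digit range are illustrative, not from the answer):

// Sketch: keep today's 4-digit number in a cookie that expires at the next local midnight.
function getDailyNumber() {
    var match = document.cookie.match(/(?:^|; )dailyNumber=(\d{4})/);
    if (match) {
        return match[1];                       // reuse the number stored earlier today
    }
    var number = String(Math.floor(Math.random() * 9000) + 1000); // 1000-9999
    var midnight = new Date();
    midnight.setHours(24, 0, 0, 0);            // roll over to the next local midnight
    document.cookie = "dailyNumber=" + number +
        "; expires=" + midnight.toUTCString() + "; path=/";
    return number;
}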

Creating a random number generator in jscript and prevent duplicates

We are trying to create a random number generator to produce serial numbers for products on a virtual assembly line.
We got the random numbers to generate; however, since they are serial numbers we don't want it to create duplicates.
Is there a way for it to go back and check whether the generated number has already been produced, and if it is a duplicate, generate a new number, repeating this process until it has a "unique" number?
The point of a serial number is that it's NOT random. Serial, by definition, means that something is arranged in a series. Why not just use an incrementing number?
The easiest way to fix this problem is to avoid it. Use something that is monotonically increasing (like time) to form part of your serial number, and prepend some fixed value that identifies the line or similar.
So your serial number format could be NNNNYYYYMMDDHHMMSS, where NNNN is a 4-digit line number, YYYY is the 4-digit year, MM is a 2-digit month, and so on.
If you can produce multiple items per second per line, then add date components until only one per unit of time is possible, or simply append the count of items produced that day to the YYYYMMDD component (e.g., NNNNYYYYMMDDCCCCCC).
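A minimal sketch of that format (the line number argument and padding helper are illustrative):

// Sketch: build a serial of the form NNNNYYYYMMDDHHMMSS from a line number and the current time.
function pad(value, width) {
    return String(value).padStart(width, "0");
}

function makeSerial(lineNumber) {
    var now = new Date();
    return pad(lineNumber, 4) +
        now.getFullYear() +
        pad(now.getMonth() + 1, 2) +
        pad(now.getDate(), 2) +
        pad(now.getHours(), 2) +
        pad(now.getMinutes(), 2) +
        pad(now.getSeconds(), 2);
}

console.log(makeSerial(42)); // e.g. "0042" followed by the current YYYYMMDDHHMMSS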
With a truly random number you would have to store the entire collection and check it for each new number. Obviously this means that generation becomes slower and slower the more keys you generate (since it has to retry more often and compare against a larger dataset).
This is exactly why truly random numbers are just never used for this purpose. For serial numbers the standard is always a sequential number; is there any real reason for them to be random?
Unique IDs are NEVER random; GUIDs and the like are based on the system time and (most often) the MAC address. They're globally unique because of the algorithm used and the machine specifics, not because of the size of the value or any level of randomness.
Personally, I would do everything I could to either use a sequential value (perhaps with a unique prefix if you have multiple channels) or, better, use a real GUID for this purpose.
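For completeness, the check-and-retry approach the question describes (a sketch using a Set to remember previously issued numbers; the range and names are illustrative, and the caveats above about slowdown still apply):

// Sketch of the retry approach: remember every issued number and re-roll on a duplicate.
var issued = new Set();

function uniqueRandomSerial(min, max) {
    if (issued.size >= max - min + 1) {
        throw new Error("All serial numbers in the range have been issued");
    }
    var candidate;
    do {
        candidate = Math.floor(Math.random() * (max - min + 1)) + min;
    } while (issued.has(candidate));
    issued.add(candidate);
    return candidate;
}

console.log(uniqueRandomSerial(1000, 9999)); // e.g. 4821; never repeats until the range is exhausted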
Is this what you are looking for?
var rArray;

function fillArray(range) {
    rArray = [];
    for (var x = 0; x < range; x++) {
        rArray[x] = x;
    }
}

function randomND(range) {
    if (rArray == null || rArray.length < 1) {
        fillArray(range);
    }
    var pos = Math.floor(Math.random() * rArray.length);
    var ran = rArray[pos];
    rArray.splice(pos, 1); // remove the chosen value so it can never be returned again
    return ran;
}
