Preventing the same action from being performed twice - javascript

I have a Node.js application that uses Express for the API routes and MongoDB as the database.
I am running a raffle, and a user must be able to join the raffle only once.
I currently keep the participants in an in-memory array, but if two requests hit the API at the same time, the user is entered into the raffle twice.
How can I prevent a user from joining the raffle more than once?

You need to structure your workflow around idempotent operations. That is, performing the same operation twice does not change the result.
For example, incrementing a variable is not idempotent, since calling the increment function twice increments the variable twice (e.g. calling x++ twice adds 2 to x). I think this is the essence of your current design, where you mentioned: "if you make two requests to the API at the same time, the user will be entered into the raffle twice".
An example of an idempotent operation is assigning a value to a variable. For example, running x = 1 multiple times only ever assigns the value 1 to x. No matter how many times that line runs, x will always be 1.
Some resources to help get you started:
What is Idempotency
Database operation that can be applied repeatedly and produce the same results
What is an idempotent function
How to Write Resilient MongoDB Applications
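To make the raffle join itself idempotent, one way to think about it (a sketch only; in production the check belongs in MongoDB, e.g. behind a unique index on the user id, so that two concurrent requests cannot both succeed) is to treat "join" as set membership rather than appending to an array:

```javascript
// Sketch: joining is "add to a set", which is idempotent, instead of
// "push to an array", which is not. Calling it twice has the same
// effect as calling it once.
const participants = new Set();

function joinRaffle(userId) {
  if (participants.has(userId)) {
    return false; // already joined; the second call changes nothing
  }
  participants.add(userId);
  return true;
}

joinRaffle('user-42');
joinRaffle('user-42'); // no effect the second time
```

The same idea carries over to the database: make the "already joined?" check and the insert a single atomic operation (a unique index rejects the duplicate insert), rather than a separate read followed by a write.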

You should store the array in MongoDB; you don't want to lose this list if Node restarts.
As for the "join twice" problem, you can throttle the client-side function that makes the request to your API, so that it can only be called once during the interval passed to your throttle function.
Example throttle function:
function throttle(callback, limit) {
  var wait = false;
  return function () {
    if (!wait) {
      callback.apply(this, arguments);
      wait = true;
      setTimeout(function () {
        wait = false;
      }, limit);
    }
  };
}
and now your request function :
var throttleApiFunc = throttle(apiFunc, 5000);
// this can only trigger once every 5 seconds
throttleApiFunc();
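A quick way to check the behavior (repeating the throttle function from the answer so the snippet is self-contained): with a leading-edge throttle like this, a burst of calls collapses into a single one.

```javascript
// Leading-edge throttle, as in the answer above: the first call runs
// immediately; further calls are ignored until `limit` ms have passed.
function throttle(callback, limit) {
  var wait = false;
  return function () {
    if (!wait) {
      callback.apply(this, arguments);
      wait = true;
      setTimeout(function () {
        wait = false;
      }, limit);
    }
  };
}

var count = 0;
var inc = throttle(function () { count++; }, 100);
inc(); inc(); inc(); // only the first call actually runs
```

Note that client-side throttling only reduces accidental double clicks; it is not a guarantee, so the server-side check still matters.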

Related

Rx.Js: understanding expand operator

I post data to the backend; processing the data takes some time, and long polling is not a solution in my particular case, so I send a request every 5 seconds with the expand operator:
this.apiService.postData(data).pipe(
  expand((status) =>
    status.complete ? this.apiService.askStatus(status.request_id).pipe(delay(5000)) : empty()
  ),
  map((result) => {
    // processing result here
  })
);
The question is: how can I make the delay dynamic (e.g. the first time I want to ask for the status after 1 second, the second time after 2 seconds, and so on)? And two more questions. Have I understood correctly that if I add a take(N) operator, it will limit the askStatus calls to N? And have I understood correctly that I don't need to do any sort of unsubscription here?
expand() also passes an index every time it calls the projection function, so you can calculate the delay based on that:
expand((status, index) =>
  status.complete ? this.apiService.askStatus(...).pipe(delay(...)) : empty()
)
Using take(N) inside expand() won't help, because expand() calls the projection function on every emission from both the source and the inner Observables. But you can of course use take(N) after expand().
You don't have to unsubscribe from askStatus() manually as long as you handle unsubscription later, where you also subscribe.
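A small sketch of that idea (the helper name backoffDelay is illustrative, and the RxJS pipeline is shown only as a comment, not tested): compute the wait time from expand()'s index, so the first poll waits 1 second, the second 2 seconds, and so on.

```javascript
// Illustrative helper: grow the delay linearly with the poll index.
// index 0 -> 1000 ms, index 1 -> 2000 ms, ...
const backoffDelay = (index) => (index + 1) * 1000;

// It would slot into the pipeline roughly like this (sketch):
// expand((status, index) =>
//   status.complete
//     ? this.apiService.askStatus(status.request_id).pipe(delay(backoffDelay(index)))
//     : empty()
// )
```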

Unknown length array used to execute promised functions sequentially

I'm writing a function on a Node.js server which reads a CSV file. I need to read all of its lines and execute several promised operations (MySQL queries) for each one (update or insert, then set a specified column that marks the item as "modified in this execution"), and once this finishes, change another column on the rows that were not updated or inserted to mark those items as "deleted".
At first, the problem was that this CSV has (literally) millions of lines and a hundred columns, so I ran out of memory quite easily, and since the number of lines can grow or shrink, I cannot know how many lines I will have to process each time I receive the file.
I wrote a simple program that lets me split this CSV into several smaller ones with a manageable number of lines, so that my server can work with each of them without dying. This produces an unknown number of files, which leaves me with a different problem.
I want to read each of those CSVs, perform those operations, and, once they are all finished, execute the final operation which flags the rows that were not updated/inserted. The only issue is that I cannot read them simultaneously; I have to process them sequentially, no matter how many there are (as said, after splitting the main CSV, I may have 1 million lines divided into 3 files, or 2 million into 6 files).
At first I thought about using a forEach loop, but the problem is that forEach doesn't respect the promises, so it would launch all of them at once, the server would run out of memory loading all those CSVs, and then die. Honestly, using a while(boolean) in each iteration of the forEach to wait for each promised function to resolve seems pretty... smelly to me, and I feel such a solution would stop the server from working properly, so I'm looking for a different approach.
Let me give you a quick sketch of what I want:
const arrayReader = ([arrayOfCSVs]) => {
  initialFunction();
  functions(arrayOfCSVs[0])
    .then((result) => {
      functions(arrayOfCSVs[1])
        .then((result2) => {
          functions(arrayOfCSVs[2])
          // ...on and on and on...
            .then((resultX) => {
              executeFinalFunction();
            });
        });
    });
};
You can use Array.reduce to chain each new promise onto the previous one, without the need for explicit waiting.
const arrayReader = ([arrayOfCSVs]) => {
  initialFunction();
  return arrayOfCSVs.reduce((prom, csv) => {
    return prom.then(() => functions(csv));
  }, Promise.resolve()).then(resultX => executeFinalFunction());
};
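A minimal sketch of the same pattern with async/await (the name runSequentially and the sample task are illustrative, not from the question): each task is awaited before the next one starts, so only one file is ever in flight.

```javascript
// Run one promise-returning function per item, strictly one at a time.
async function runSequentially(items, fn) {
  const results = [];
  for (const item of items) {
    // Each call only starts after the previous one has resolved.
    results.push(await fn(item));
  }
  return results;
}

// Illustrative usage with a trivial async task:
runSequentially([1, 2, 3], (n) => Promise.resolve(n * 2));
```

This reads more like the nested-.then sketch in the question, while behaving exactly like the reduce chain above.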

What does RxJS.Observable debounce do?

Can anybody explain in plain English what the RxJS Observable debounce function does?
I imagine it emits an event once in a while depending on the parameters, but my code below doesn't work as I expected.
var x$ = Rx.Observable.fromEvent(window, 'click')
  .map(function (e) { return { x: e.x, y: e.y }; })
  .debounce(1000)
  .subscribe(function (el) {
    console.log(el);
  });
and the JsBin version.
I expected this code to print at most one click per second, no matter how fast I click. Instead it prints clicks at what look to me like random intervals.
Debounce will emit a value after a specified time interval has passed without another value being emitted.
Using a simple diagram, the following may help:
Stream 1 | ---1-------2-3-4-5---------6----
after debounce, the emitted stream looks as follows:
Stream 2 | ------1-------------5---------6-
The intermediate items (in this case, 2,3,4) are ignored.
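The marble diagram can be approximated with a small model (a sketch of the timing semantics only, not the actual RxJS implementation): a value is emitted only if no newer value arrives within the debounce interval, and the last value is always flushed.

```javascript
// Simplified model of debounce semantics: given timestamped events,
// a value survives only if the next event is more than `interval`
// away (the final value is emitted when the stream ends).
function debounceModel(events, interval) {
  const emitted = [];
  for (let i = 0; i < events.length; i++) {
    const isLast = i === events.length - 1;
    if (isLast || events[i + 1].time - events[i].time > interval) {
      emitted.push(events[i].value);
    }
  }
  return emitted;
}
```

Feeding it the timings from Stream 1 above with an interval of a few ticks reproduces Stream 2: the rapid-fire 2, 3, 4 are dropped and only 1, 5, 6 come through.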
An example is illustrated below:
var Rx = require('rx-node');
var source = Rx.fromStream(process.stdin).debounce(500);
var subscription = source.subscribe(
  function (x) {
    console.log('Next: %s', x);
  }
);
I used Node to illustrate this. Assuming you have Node installed, you can run it by typing
$ node myfile.js (where the aforementioned code is in myfile.js)
Once this Node program is started, you can type values at the console: if you type quickly, items are ignored, and if you alternate between typing fast and slowly, items appear at the console ("Next: ") after a gap in typing (500 ms in the example above).
There is also some excellent reference material at https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/core/operators/debounce.md
Long story short:
debounce waits for X time during which the stream isn't emitting any new value, then lets the latest value pass.
Long story:
Once a value is emitted, debounce pauses its emission for X time to see if another value is emitted, in effect blocking the stream during this time. If a new value is emitted during the debounce time, the timer is restarted and debounce again waits for the full time.
If its timer expires without any new value being emitted, it lets the latest value pass.
Say you want to add autocomplete to an input box. If the user types "a" you may want to show the choices "acorn, alaska", but if the user presses "l" right after, you would propose just "alaska". In this case it's better to wait for the user to stop pressing keys, to avoid doing unnecessary work. debounce is the right tool here: it waits for X time during which the stream isn't emitting any new value.
.debounce() produces the last received value if no values were received within the specified interval.
That means that as long as you keep clicking within a second of the previous click, nothing will be produced.
If you want to throttle values so they are emitted no more often than once per second, you need .sample(1000) instead.

What's the best way to cycle through each data-XXX cell synchronously?

I have an HTML table with a column that has a data- attribute in each row, like <td data-XXX="####">. I want to find each such cell (which can be done with
$table.find("[data-XXX]")
unless someone has a better way), take the value of the data-XXX attribute, pass it to an AJAX query, parse the JSON result, and place the results in the <td> with the associated data-XXX attribute. And maybe the toughest requirement of all: I'd like it all done synchronously, to avoid firing a lot of server requests at once, so the next AJAX call isn't made until the previous table cell is filled.
So, my question is specifically: what's the best way to cycle through each data-XXX cell synchronously?
Based on your description, I think you want the ajax calls made one after the other, so the next one doesn't start until the previous one has finished. The ajax calls themselves will still be asynchronous (as they should be). You can do that like this:
function processData() {
  var items = $table.find("[data-XXX]");
  var cntr = 0;

  function next() {
    if (cntr < items.length) {
      var obj = items.eq(cntr);
      // do your ajax call with obj
      $.ajax(...).done(function (results) {
        // process results returned from ajax call
        // ...
        // now do the next iteration
        ++cntr;
        next();
      });
    }
  }

  // start first iteration
  next();
}
The basic idea is that you set up two state variables (your jQuery object that contains the list of objects to process and an index variable that keeps track of which one to process next). Then, you create an inner function that you use for each iteration. In the completion handler for the ajax call, you increment your counter and then start the next iteration.
$table.find("[data-XXX]").each(function () {
  var tdEle = this;
});
Also check out dataset:
https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement.dataset
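As a rough sketch of how dataset relates to data-* attributes (the helper toDatasetKey is illustrative, not part of any library): the attribute data-foo-bar is exposed as element.dataset.fooBar, i.e. the name after data- is camelCased.

```javascript
// Illustrative helper: convert a data-* attribute name to its dataset
// key, the way HTMLElement.dataset does (strip "data-", camelCase
// the remaining hyphenated parts).
function toDatasetKey(attrName) {
  return attrName
    .replace(/^data-/, '')
    .replace(/-([a-z])/g, (_, ch) => ch.toUpperCase());
}
```

So a cell written as <td data-XXX="####"> is reachable (attribute names being lowercased in HTML) as td.dataset.xxx, without going through jQuery at all.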

Filtering with large dataset

I have a large data-set with roughly 10,000 records. I want a filter mechanism on this data-set that basically performs a LIKE SQL expression on a field and returns the matching results.
To do this, I've used jQuery to bind the "input" event on my filter textbox to my filter handler function.
The issue at the moment is that if a lot of keys are pressed at once in the textbox, the filter function gets called many times, making many SQL calls, which is very inefficient.
Is there a way I can detect in my handler when the user has finished typing, or when there's been a gap of a certain period, and only then perform the filtering? That way I only make one database call when lots of characters are input at once. If the characters are input slowly, though, I want to filter on each one.
Cheers.
Here is a way of doing it: jsfiddle
var test = 0;
$('body').on('keyup', 'input', function () {
  var val = $.trim($(this).val());
  test++;
  if (val !== "") {
    doSomething(test);
  }
});

function doSomething(t) {
  setTimeout(function () {
    if (t === test) {
      // Place to call the other function or just do SQL stuff
      alert($.trim($('input').val()));
    }
  }, 500);
}
There is a global variable to test against whether the user has typed another letter, and the setTimeout call waits 500 ms to see if they did.
I'm sure there are better ways to do this, but it should get you started.
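A common alternative to the counter approach (a generic sketch, not tied to any library): a small debounce helper that resets a timer on every call, so the wrapped function runs only after the calls have stopped for delay ms.

```javascript
// Generic debounce: each call cancels the pending timer and starts a
// new one, so `fn` runs only once the calls have paused for `delay` ms.
function debounce(fn, delay) {
  var timer = null;
  return function () {
    var args = arguments, ctx = this;
    clearTimeout(timer);
    timer = setTimeout(function () {
      fn.apply(ctx, args);
    }, delay);
  };
}

// Usage sketch with the question's setup (filterRecords is a
// hypothetical name for the handler that runs the SQL query):
// $('input').on('input', debounce(filterRecords, 500));
```

Unlike the counter version, this never fires for intermediate keystrokes at all, so exactly one database call is made per pause in typing.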
