Seat booking logic for meteor react application - javascript

I am working on a Meteor booking application where I have a limited number of seats. My application can be used by many users in parallel, so there is a possibility that two users try to book the same seat. But as per business logic, seats can never be overbooked. In Java, I could have restricted parallel booking using a synchronized block. I don't have much experience in Meteor/React, so I am not sure what the right way to achieve this is.
My current thought is to use a reactive boolean to create a lock, so that if the application gets two booking requests it processes them synchronously and fails the second one, since the seat will already be allocated by the first request. But I am afraid I might get into a deadlock, so I am seeking your opinion/help to implement this in a proper way.
Thanks for your advice!

I'm assuming here your backend is Node.js; seeing as you're using Meteor, you're already using npm, so a backend using Node makes sense.
In that case, let's say you're using Express or Koa to handle your requests: you can simply chain your tasks using promises, which forces the tasks to execute linearly.
Below is a simple working example. If you run the snippet you will notice I'm adding tasks every 700 ms while each task takes 1000 ms to complete, yet there is no overlapping and the tasks get done in order.
const delay = (ms) => new Promise((r) => setTimeout(r, ms));

let lastTask = Promise.resolve();

async function addTask(txt) {
  const ptask = lastTask;
  lastTask = (async () => {
    await ptask;
    console.log(`starting task ${txt}`);
    await delay(1000);
    console.log(`done task ${txt}`);
  })();
}

async function test() {
  for (let l = 0; l < 5; l += 1) {
    setTimeout(() => {
      console.log(`adding task ${l}`);
      addTask(l);
    }, l * 700);
  }
}

test();
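For instance, the same chaining idea applied to a booking endpoint might look like the sketch below; the Express route, the bookSeat helper, and the 409 response are illustrative assumptions, not part of your app, and note this only serializes requests within a single Node process:

// Hypothetical Express route: every booking request joins one promise chain,
// so seat checks and writes never interleave.
const express = require('express');
const app = express();
app.use(express.json());

let bookingChain = Promise.resolve();

app.post('/book/:seatId', (req, res) => {
  bookingChain = bookingChain
    .then(async () => {
      // bookSeat is an assumed helper: it checks availability, writes the booking,
      // and throws if the seat is already taken.
      await bookSeat(req.params.seatId, req.body.userId);
      res.sendStatus(200);
    })
    .catch(() => res.status(409).send('Seat already booked'));
});

app.listen(3000);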

If you are using pub/sub in your Meteor app, the job is done. Your bookings are reactive on a first come, first served basis. As long as your connection is on, when the first booking is written, the seat is taken.
E.g. (rough logic):
1. Publish your bookings within the desired scope.
2. Subscribe on the client within the same scope.
3. If the document's bookedOn field $exists (date of booking), make the seat 'unbookable'/unclickable and have the UX show the necessary colors/experience.
When one user books it, all users online on the platform and viewing that component will get the update.
It would be a bit of an 'issue' if you weren't using pubs/subs, but come on ... you're on Meteor, you should use Meteor's native reactivity. Your boolean is Boolean(bookedOn), or just bookedOn.
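A minimal sketch of that flow, assuming a Bookings collection scoped by a showId and a bookedOn field (all names here are illustrative, not from your app):

// server: publish the bookings within the desired scope (here, one show)
Meteor.publish('bookings.byShow', function (showId) {
  return Bookings.find({ showId });
});

// client: subscribe within the same scope
Meteor.subscribe('bookings.byShow', showId);

// client: a seat is unbookable once bookedOn exists
const seat = Bookings.findOne({ showId, seatNumber });
const isBooked = Boolean(seat && seat.bookedOn);
// use isBooked to disable the click handler and drive the seat's colors/state in the UI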

I think the Meteor way to do it is to call a Meteor method when a user occupies a seat.
In this method, check whether the seat is already occupied.
More info here:
https://forums.meteor.com/t/if-multiple-users-are-trying-to-access-one-method-of-meteor-methods-how-to-make-method-as-synchronous-to-use-one-user-only-at-a-time/24969/8
But methods from different clients run concurrently on the server.
You'll have to use something like a semaphore. The easiest way would be to write a lock document in Mongo and check that no lock already exists for this seat. Later the lock can be destroyed by Mongo with a TTL index: https://docs.mongodb.com/manual/tutorial/expire-data/
You can read more about methods here https://guide.meteor.com/methods.html
To wrap it up, the pseudo code would be something like this:
acquireLock(userId, seatId); // read the lock; if it's free, write it, then read again just in case. If anything fails it should throw an error
takeSeat(userId, seatId);
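A rough sketch of that pseudo code as a Meteor method, using a unique index on an assumed SeatLocks collection as the lock and a TTL index so stale locks expire; the Seats collection, the field names, and the 60-second expiry are all assumptions:

// server: one-time index setup on the assumed SeatLocks collection
SeatLocks.rawCollection().createIndex({ seatId: 1 }, { unique: true });
SeatLocks.rawCollection().createIndex({ createdAt: 1 }, { expireAfterSeconds: 60 });

Meteor.methods({
  bookSeat(seatId) {
    // acquireLock: the unique index makes the second concurrent insert fail
    try {
      SeatLocks.insert({ seatId, userId: this.userId, createdAt: new Date() });
    } catch (e) {
      throw new Meteor.Error('seat-locked', 'Someone else is booking this seat');
    }
    // takeSeat: double-check the seat, then write the booking
    if (Seats.findOne({ _id: seatId, bookedOn: { $exists: true } })) {
      throw new Meteor.Error('seat-taken', 'This seat is already booked');
    }
    Seats.update(seatId, { $set: { bookedOn: new Date(), bookedBy: this.userId } });
  },
});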

Related

How to prevent API calls on a minute timer loop when in the process of logging out of a web app?

In my Ionic/Angular app, I have a 60 second timer observable which just emits the current time synced with server time. Each minute I fetch permissions, settings, etc. I pass a token with each request. On logout I revoke the token. Here's a sample of what my logic looks like.
Side note: There's also a feature where a user can "change login type", for example "becoming" an administrator, and this process may also trigger a similar circumstance.
this.clientTimeSub = this.timeService.clientTime
  .pipe(takeUntil(this.logoutService.isLoggingOut$))
  .subscribe(async (latestClientTime) => {
    this.clientTime = { ...latestClientTime };
    // if client time just rolled over to a new minute, update settings
    if (
      this.clientTime?.time?.length === 7 &&
      this.clientTime?.time?.slice(-1) === '0'
    ) {
      await updateSettings();
      await updatePermissions();
      // etc
      // These functions will:
      // (1) make an api call (using the login token!)
      // (2) update app state
      // (3) save to app storage
    }
  });
When I am logging out of the app, there's a small time window where I could be in the middle of sending multiple API requests while the token is no longer valid, because the timer rolled to a new minute just as I was logging out, or close to it. I am then presented with a 401: Unauthorized in the middle of logging out.
My naive solution was to tell this observable to stop propagation when a Subject or BehaviorSubject fires a value indicating that the user is logging out; you can see this here: .pipe(takeUntil(this.logoutService.isLoggingOut$)).
Then, in any of my logout methods, I would use:
logout() {
  this.isLoggingOut.next(true);
  ...
  // Logout logic here, token becomes invalidated somewhere here
  // then token is deleted from state, etc, navigate back to login...
  ...
  this.isLoggingOut.next(false);
}
In that small time window of logging out, the client timer should stop firing and checking if it's rolled to a new minute, preventing any further api calls that may be unauthenticated.
Is there a way I can easily prevent this issue from happening or is there a flaw in my logic that may be causing this issue?
I appreciate any help, thank you!
First of all, it is not best practice to mix async/await with RxJS. RxJS, as a reactive, functional style of programming, has its own "pipeable" operators, so you can chain everything.
So instead of putting the time-calculation logic in your subscribe callback, you should use, for example, the filter() RxJS operator, and instead of async/await you can use the switchMap operator and, inside it, the forkJoin or concat operators.
import { from, forkJoin } from 'rxjs';
import { filter, switchMap, takeUntil } from 'rxjs/operators';

this.timeService.clientTime
  .pipe(
    // Filter the stream (according to your calculation)
    filter((time) => {
      // here is your logic to calculate if time has passed or whatever else you are doing
      // const isValid = ...
      return isValid;
    }),
    // Switch to another stream so you can make the api calls
    // "from" converts the promises to observables so we can use the magic of RxJS
    switchMap(() => forkJoin([from(updateSettings()), from(updatePermissions())])),
    // Take until your logout
    takeUntil(this.logoutService.isLoggingOut$)
  )
  .subscribe(([settingsResult, permissionsResult]) => {
    // Your promises should just call the API services; the other logic goes here:
    // (2) update app state
    // (3) save to app storage
  });
If you split the actions as in my example, your promises just make the API calls, and when they're done you update app state, save to app storage, etc. in the subscribe callback. So you can have 2 scenarios here:
1. The API calls from the promises are still in progress. If you trigger logout in the meantime, takeUntil will do its thing and you will not update app state etc.
2. Both API calls from the promises are done, so you are in the subscribe callback block, and if it's just synchronous code (hopefully) it will finish. Only after that can async code be executed (your timer can now emit its next value; it's all about the Event Loop in JavaScript).

Optimal way of sharing load between worker threads

What is the optimal way of sharing linear tasks between worker threads to improve performance?
Take the following example of a basic Deno web-server:
Main Thread
// Create an array of four worker threads
const workers = new Array<Worker>(4).fill(
  new Worker(new URL("./worker.ts", import.meta.url).href, {
    type: "module",
  })
);

for await (const req of server) {
  // Pass this request to a worker thread
}
worker.ts
self.onmessage = async (req) => {
  // Perform some linear task on the request and make a response
};
Would the optimal way of distributing tasks be something along the lines of this?
function* generator(): Generator<number> {
  let i = 0;
  while (true) {
    i == 3 ? (i = 0) : i++;
    yield i;
  }
}

const gen = generator();

const workers = new Array<Worker>(4).fill(
  new Worker(new URL("./worker.ts", import.meta.url).href, {
    type: "module",
  })
);

for await (const req of server) {
  // Pass this request to a worker thread
  workers[gen.next().value].postMessage(req);
}
Or is there a better way of doing this? Say, for example, using Atomics to determine which threads are free to accept another task.
When working with WorkerThread code like this, I found that the best way to distribute jobs was to have the WorkerThread ask the main thread for a job once it knew it was done with its prior job. The main thread could then send it a new job in response to that message.
In the main thread, I maintained a queue of jobs and a queue of WorkerThreads waiting for a job. If the job queue was empty, the WorkerThread queue would likely have some WorkerThreads in it waiting for work. Any time a job is added to the job queue, the code checks whether there's a WorkerThread waiting and, if so, removes it from the queue and sends it the job.
Any time a WorkerThread sends a message indicating it is ready for the next job, we check the job queue. If there's a job there, it is removed and sent to that worker. If not, the worker is added to the WorkerThread queue.
This whole bit of logic was very clean, did not need Atomics or shared memory (because everything was gated through the event loop of the main process), and wasn't very much code.
I arrived at this mechanism after trying several other ways that each had their own problems. In one case I had concurrency issues, in another I was starving the event loop, and in another I didn't have proper flow control to the WorkerThreads and was overwhelming them without distributing the load equally.
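A condensed sketch of that dispatcher on the main thread, written against the web-Worker-style API from the question; the worker count and job payloads are placeholders, and Array.from is used so each slot gets its own Worker instance:

// Main thread: a job queue plus an idle-worker queue; everything is gated through
// the main thread's event loop, so no Atomics or shared memory are needed.
const jobQueue = [];
const idleWorkers = [];

const workers = Array.from({ length: 4 }, () =>
  new Worker(new URL("./worker.ts", import.meta.url).href, { type: "module" })
);

for (const worker of workers) {
  // A worker posts a message whenever it finishes its job (the payload is up to you)
  worker.onmessage = () => {
    const nextJob = jobQueue.shift();
    if (nextJob !== undefined) worker.postMessage(nextJob);
    else idleWorkers.push(worker);
  };
  idleWorkers.push(worker); // every worker starts out idle
}

function dispatch(job) {
  const worker = idleWorkers.shift();
  if (worker) worker.postMessage(job);
  else jobQueue.push(job);
}

// e.g. inside the request loop: dispatch(req)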
There are some abstractions in Deno that handle this kind of need very easily, especially the pooledMap utility.
So you have a server which is an async iterator, and you would like to leverage threading to generate responses, since each response depends on a time-consuming heavy computation, right?
Simple.
import { serve } from "https://deno.land/std/http/server.ts";
import { pooledMap } from "https://deno.land/std@0.173.0/async/pool.ts";

const server = serve({ port: 8000 }),
      ress = pooledMap( window.navigator.hardwareConcurrency - 1
                      , server
                      , req => new Promise(v => v(respondWith(req)))
                      );

for await (const res of ress) {
  // respond with res
}
That's it. In this particular case the respondWith function bears the heavy calculation needed to prepare your response object. If it's already an async function, you don't even need to wrap it in a promise. Obviously here I have just used one less than the available number of threads, but it's up to you to decide how many threads to spawn.

Repository pattern practical use cases and implementation in node.js

Can someone please explain what's the use of this pattern, with an example?
What I'm confused about is that I can have a database instance wherever I want and I have the flexibility to do anything with it. Am I wrong?
The repository pattern is a strategy for abstracting data access.
It's like putting a universal adapter between your application and your data, so it doesn't matter what data storage technology you use. All your app wants is a set of defined operations on items; it shouldn't have to care about how they are stored or where they come from.
Also, needless to say, the impact of any change is handled in one place instead of cascading all through your code!
Personally, I love this design pattern because it lets me focus on my business logic in the first steps instead of dealing with various databases. On top of that, it removes a huge amount of headache when it comes to writing tests: instead of stubbing or spying on databases, which can be a pain, I can simply enjoy a mock version of the operations.
Now let's implement a sample in JS; it can be as simple as the code below (a simplified sample, of course):
// userRepository.js
const userDb = [];

module.exports = {
  insert: async (user) => userDb.push(user),
  findAll: async () => userDb,
};
Here is how I use this pattern: first I write something like the code below in about 5 minutes.
// userRepository.js
const userDb = new Map();

module.exports = Object.freeze({
  findById: async (id) => userDb.get(id),
  insert: async (user) => userDb.set(user.id, user),
  findAll: async () => Array.from(userDb.values()),
  removeById: async (id) => userDb.delete(id),
  update: async (updatedUser) => {
    if (!userDb.has(updatedUser.id)) throw new Error('user not found');
    userDb.set(updatedUser.id, { ...userDb.get(updatedUser.id), ...updatedUser });
  },
});
Then I start to write my unit tests for the repository I've just written, the business use-cases, and so on…
Any time I'm satisfied with everything, I can simply switch to a real database, because it's just an IO mechanism, isn't it? :)
So in the above code I'll replace userDb with a real database and write real data access methods, and of course expect all my tests to pass.
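For example, a drop-in replacement backed by a real database could look roughly like this; the mongodb driver, the connection string, and the users collection are assumptions, and the important part is that the exported interface stays identical:

// userRepository.mongo.js - same interface, different IO mechanism
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017'); // assumed connection string
const ready = client.connect(); // open the pool once, reuse it for every call
const users = () => client.db('app').collection('users');

module.exports = Object.freeze({
  findById: async (id) => { await ready; return users().findOne({ id }); },
  insert: async (user) => { await ready; return users().insertOne(user); },
  findAll: async () => { await ready; return users().find().toArray(); },
  removeById: async (id) => { await ready; return users().deleteOne({ id }); },
  update: async (updatedUser) => {
    await ready;
    const result = await users().updateOne({ id: updatedUser.id }, { $set: updatedUser });
    if (result.matchedCount === 0) throw new Error('user not found');
  },
});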

Is it safe to rely on Node.js require behavior to implement Singletons?

Suppose I have a module implemented like this:
const dbLib = require('db-lib');
const dbConfig = require('db-config');

const dbInstance = dbLib.createInstance({
  host: dbConfig.host,
  database: dbConfig.database,
  user: dbConfig.user,
  password: dbConfig.password,
});

module.exports = dbInstance;
Here an instance of a database connection pool is created and exported. Then suppose db-instance.js is required several times throughout the app. Node.js should execute its code only once and always hand back the same single instance of the database pool. Is it safe to rely on this behavior of Node.js's require? I want to use it so that I don't need to implement dependency injection.
Every single file that you require in Node.js is a Singleton.
Also, require is a synchronous operation and it works deterministically. This means it is predictable and will always work the same.
This behaviour is not specific to require: Node.js has only one thread in the Event Loop and it works like this (slightly simplified):
1. Look if there is any task it can do
2. Take the task
3. Run the task synchronously from beginning to end
4. If there are any asynchronous calls, push them to a "do later" list, but never start them before the synchronous part of the task is done (unless a worker is spawned, but you don't need to know the details of that)
5. Repeat the whole process
For example, imagine this code is the file infinite.js:
setTimeout(() => {
  while (true) {
    console.log('only this');
  }
}, 1000);

setTimeout(() => {
  while (true) {
    console.log('never this');
  }
}, 2000);

while (someConditionThatTakes5000Miliseconds) {
  console.log('requiring');
}
When you require this file, it first registers the first setTimeout in the "do later" list as "resolve after 1000 ms" and the second as "resolve after 2000 ms" (note that this is not "run after 1000 ms").
Then it runs the while loop for 5000 ms (given a condition like that) and nothing else happens in your code.
After 5000 ms the require is completed, the synchronous part is finished, and the Event Loop looks for a new task to do. The first one it sees is the setTimeout with the 1000 ms delay (once again, it took 1000 ms just to be marked as "can be taken by the Event Loop"; you don't know when it will actually run).
That callback contains a never-ending while loop, so you will see "only this" in the console. The second setTimeout will never be taken from the Event Loop: it is marked as "can be taken" after 2000 ms, but the Event Loop is already stuck in the never-ending while loop.
With this knowledge, you can use require (and other Node.js features) very confidently.
Conclusion: require is synchronous and deterministic. Once it finishes requiring a file (its output is an object with the methods and properties you export, or an empty object if you don't export anything), the reference to that object is saved in Node.js's module cache. When you require the file from somewhere else, Node first looks into that cache and, if it finds an entry there, it just reuses the reference to the object and therefore never executes the module code twice.
POC:
Create file infinite.js
const time = Date.now();

setTimeout(() => {
  let i = 0;
  console.log('Now I get stuck');
  while (true) {
    i++;
    if (i % 100000000 === 0) {
      console.log(i);
    }
  }
  console.log('Never reach this');
}, 1000);

setTimeout(() => {
  while (true) {
    console.log('never this');
  }
}, 2000);

console.log('Prepare for 5sec wait');
while (new Date() < new Date(time + 5 * 1000)) {
  // blocked
}
console.log('Done, lets allow other');
console.log('Done, lets allow other')
Then create server.js in the same folder with:
console.log('start program');
require('./infinite');
console.log('done with requiring');
Run it with node server
This will be the output (with the numbers never ending):
start program
Prepare for 5sec wait
Done, lets allow other
done with requiring
Now I get stuck
100000000
200000000
300000000
400000000
500000000
600000000
700000000
800000000
900000000
The documentation of Node.js about modules explains:
Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.
Multiple calls to require('foo') may not cause the module code to be executed multiple times.
It is also worth mentioning the situations in which require produces Singletons and those in which this goal cannot be reached (and why):
Modules are cached based on their resolved filename. Since modules may resolve to a different filename based on the location of the calling module (loading from node_modules folders), it is not a guarantee that require('foo') will always return the exact same object, if it would resolve to different files.
Additionally, on case-insensitive file systems or operating systems, different resolved filenames can point to the same file, but the cache will still treat them as different modules and will reload the file multiple times. For example, require('./foo') and require('./FOO') return two different objects, irrespective of whether or not ./foo and ./FOO are the same file.
To summarize, if your module name is unique inside the project then you'll always get a Singleton. Otherwise, when there are two modules with the same name, require-ing that name in different places may produce different objects. To ensure they produce the same object (the desired Singleton), you have to refer to the module in a manner that resolves to the same file in both places.
You can use require.resolve() to find out the exact file that is resolved by a require statement.
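A tiny demonstration of the cache and of require.resolve(), with illustrative file names:

// counter.js - module-level state, evaluated only once
let count = 0;
module.exports = {
  increment: () => ++count,
  current: () => count,
};

// a.js
const counter = require('./counter');
counter.increment();

// main.js
require('./a');
const sameCounter = require('./counter');
console.log(sameCounter.current());         // 1 - the same instance a.js incremented
console.log(require.resolve('./counter'));  // the absolute path used as the cache key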

Text Game: Asynchronous Node - Prompts and MySQL

PREFACE
So it seems I've coded myself into a corner again. I've been teaching myself how to code for about 6 months now and have a really bad habit of starting over and changing projects, so please, if my code reveals other bad practices, let me know, but first help me figure out how to effectively use promises/callbacks in this project before my brain explodes. I currently want to delete it all and start over, but I'm trying to break that habit. I did not anticipate the brain-melting difficulty spike of asynchronicity (is that the word? spellcheck hates it).
The Project
WorldBuilder - simply a CLI that talks to a MySQL database on my machine and mimics basic text-game features to let me build out an environment quickly in my spare time. It should allow me to MOVE [DIRECTION], LOOK at, and CREATE [rooms / objects].
Currently it works like this:
I use Inquirer to handle the prompt...
request_command: function() {
  var command = { type: "input", name: "command", message: "CMD:>" };
  inquirer.prompt(command).then(answers => {
    parser.parse(answers.command);
  }).catch((error) => {
    console.log(error);
  });
}
The prompt passes whatever the user types to the parser:
parse: function(text) {
  player_verbs = [
    'look',
    'walk',
    'build',
    'test'
  ];
  words = text.split(' ');
  found_verb = this.find(player_verbs, words)[0];
  if (found_verb == undefined) {
    console.log('no verb found');
  } else {
    this.verbs.execute(found_verb, words);
  }
}
The parser breaks the string into words and checks those words against possible verbs. Once it finds the verb in the command, it accesses the verbs object...
(ignore scope_mouthwash, that is for another post)
verbs: {
  look: function(cmds) {
    // ToDo: Check for objects in the room that match a word in cmds
    player.look();
  },
  walk: function(cmds) {
    possible_directions = ['north', 'south', 'east', 'west'];
    direction = module.exports.find(possible_directions, cmds);
    player.walk(direction[0]);
  },
  build: function(cmds) {
    // NOTE scope_mouthwash exists because for some
    // reason I cannot access the global constant
    // from within this particular nested function
    // scope_mouthwash == menus
    const scope_mouthwash = require('./menus.js');
    scope_mouthwash.room_creation_menu();
  },
  execute: function(verb, args) {
    try {
      this[verb](args);
    } catch (e) {
      throw new Error(e);
    }
  }
}
Those verbs then access other objects and do various things, but essentially it comes down to querying the database an unknown number of times, either to retrieve info about an object and make a calculation, or to store info about an object. Currently my database query function returns a promise.
query: function(sql) {
  return new Promise(function(resolve, reject) {
    db.query(sql, (err, rows) => {
      if (err) {
        reject(err);
      } else {
        resolve(rows);
      }
    });
  });
},
The Problem
Unfortunately I jumped the gun and started using promises before I fully understood when to use them, and I believe I should have used callbacks in some of these situations, but I just don't quite get it yet. I solved 'callback hell' before I had to experience 'callback hell' and now am trying to avoid 'promise hell'. My prompt used to call itself in a loop after it triggered the required verbs, but this whole approach broke down when I realized I'd get prompt messages in the middle of other prompt cycles for room building and such.
Should my queries be returning promises or should I rewrite them to use callback functions handled by whichever verb calls the query? How should I determine when to prompt the user again in the situation of having an unknown number of asynchronous processes?
So, put another way, my question is..
Given the parameters of my program how should I be visualizing and managing the asynchronous flow of commands, each of which may chain to an unknown number of database queries?
Possible solution directions that have occurred to me:
1. Create an object that keeps track of when there are pending promises and simply prompts the user when all promises are resolved or failed.
2. Rewrite my program to use callback functions where possible and force a known number of promises. If I can get all verbs to work off callback functions I might be able to say something like Prompt.then(resolve verb).then(Prompt)...
Thank you for bearing with me, I know that was a long post. I know enough to get myself in trouble and I'm pretty lost in the woods right now.
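For what it's worth, one way to picture that second option is a single async loop that awaits whatever promise a verb's chain produces before prompting again; the sketch below reuses the module names from the question, but the wiring (parser.parse returning the verb's promise) is only an assumption:

// Repeatedly prompt, let the verb's entire promise chain settle, then prompt again.
async function gameLoop() {
  while (true) {
    const answers = await inquirer.prompt({ type: 'input', name: 'command', message: 'CMD:>' });
    try {
      // parser.parse would need to return the promise produced by the verb it executes
      // (i.e. each verb returns the chain of query() calls it kicks off).
      await parser.parse(answers.command);
    } catch (error) {
      console.log(error);
    }
  }
}

gameLoop();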
