I've searched Stack Overflow and all I can find is how to use Tornado as an HTTP server.
Now, my question is: how do I start doing push notifications using it?
Let me give you some context...
The database
I have a database on some server far away that I know nothing about, other than that it's a PostgreSQL database, and a piece of software on that server updates the database every so often (anywhere from every couple of seconds to every couple of days).
Currently
I have a Django app that displays these database rows. It gets them from a different app - an app called api - using an AJAX call every 5 seconds. As we all know, this polling method is wasteful.
What I'd like to do
Well I'll bullet point it:
I'd like my Django app to stay the same in structure
The Django app will contain, in its view, JS code for connecting to a separate server.
This separate server will check the database for changes every 60 seconds. If the database has changed, it will notify the clients with a message such as "new data available" (see the sketch below).
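To make that concrete, here's a minimal sketch of the browser side I have in mind - the URL, message text, and refreshRows() helper are placeholders, not decided:
var socket = new WebSocket('ws://notify.example.com/updates'); // hypothetical server

socket.onmessage = function (event) {
  // The separate server pushes a plain text message when its
  // 60-second database check sees a change.
  if (event.data === 'new data available') {
    refreshRows(); // the existing AJAX fetch, now triggered on demand
  }
};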
Hopefully that's not too vague.
Thanks,
Andy.
I found that the django-websocket-redis package suits my needs, which are very much comparable to yours, as it can easily be implemented on top of your existing project.
Mind that there are a few dependencies (uWSGI and Redis, primarily) and I've had to switch to a Linux development environment to get everything to work properly.
I have made an API using Express.js and scraped a website using cheeriojs. I have deployed this API using Heroku. I want my web application to fetch the latest data from the scraped website, but my deployed app is not doing so; it is still showing the old data. How can I make it fetch live data continuously?
Hi, I got stuck in a similar problem. The way out was using cron jobs and the database simultaneously.
So I configured cron jobs to visit my website twice a day to wake the server up (you don't need this if you have a good number of active users).
So when the server restarts, my app checks whether the data stored in my db (i.e. the last data that was scraped from the source while the server was previously active) is the same as the data currently present on the target website (from where I scrape data). If it is the same, it does nothing; otherwise it updates the db with the latest data and displays it on the website.
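Here is a minimal sketch of that startup check - the URL, the selector, and the db helper module are made-up placeholders you would swap for your own:
const axios = require('axios');
const cheerio = require('cheerio');
const db = require('./db'); // assumed helper exposing getLatest() and save()

async function refreshIfStale() {
  // Scrape the current state of the target website.
  const { data: html } = await axios.get('https://example.com/products');
  const $ = cheerio.load(html);
  const scraped = $('.product-title').map((i, el) => $(el).text().trim()).get();

  // Compare with the last data scraped while the server was awake.
  const stored = await db.getLatest();
  if (JSON.stringify(scraped) !== JSON.stringify(stored)) {
    await db.save(scraped); // db (and hence the website) now shows live data
  }
}

// Runs once on startup, i.e. whenever the cron ping wakes the dyno.
refreshIfStale().catch(console.error);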
The drawbacks with this approach are:
1. It updates the data only twice a day.
2. If your site has many active users throughout the day, their constant visits won't let your app's server go idle, so at the time you configured the cron job to visit your site there is a chance your server is already online, and the data may not be updated.
However, for less active sites this approach works perfectly fine.
You can configure cron jobs from here: https://cron-job.org/en/members/jobs/
My client wants an application written in Electron which works both offline and online. The application is to connect to the server/database and download data (photos and descriptions of products) so that it can work in the offline version. When it is online, the application checks whether there is any change on the server/database and then an update occurs.
I have already built such applications (a search engine and product filters with the ability to generate PDFs), but I have no idea how the application should check whether the server has any new data and download new product photos, etc.
Yeah, well, the question is: do you want to run your queries on the client itself? Because then, if it's not strictly restricted, someone will find a way to open up the console and send funny queries.
But anyway... your question:
The easiest way I can think of, because we don't want to download everything again, is to have a version number for each element.
So the client has one, and if it does not match the server's, the element gets updated. Now you just have to get all the IDs. But keep in mind, you still have to handle what happens if an item gets removed or a new one is added.
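Roughly like this hedged sketch, where fetchServerIndex() and fetchItem() stand in for whatever endpoints you expose (they are assumptions, not a real API):
// Sync a local cache against server-side version numbers.
async function syncCatalog(localItems, fetchServerIndex, fetchItem) {
  const serverIndex = await fetchServerIndex(); // [{ id, version }, ...]
  const serverIds = new Set(serverIndex.map(e => String(e.id)));

  // Handle removed items: drop anything the server no longer lists.
  for (const id of Object.keys(localItems)) {
    if (!serverIds.has(id)) delete localItems[id];
  }

  // Handle new and changed items: download only what is missing or stale.
  for (const { id, version } of serverIndex) {
    const local = localItems[id];
    if (!local || local.version < version) {
      localItems[id] = await fetchItem(id); // photos, descriptions, etc.
    }
  }
  return localItems;
}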
This is not really an answer but I hope it inspired you a little bit.
Oh, some afterthoughts:
You could set up something like a WebSocket between the two. Yes, you'd have to program one more service, but:
The queries would be safe
You could keep track of removed and added items
You could work out a timestamp system where the client gets all items that are newer than its own timestamp... this will be some work though (rough sketch after this list).
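For the timestamp idea, a hedged server-side sketch, assuming an Express app, a pg-style Postgres client, and an updated_at column (all illustrative):
const express = require('express');
const { Pool } = require('pg'); // assumed Postgres client
const app = express();
const db = new Pool();

// The client sends the timestamp of its last sync; the server returns only
// rows changed since then. Deletions still need tombstones or similar.
app.get('/changes', async (req, res) => {
  const since = new Date(Number(req.query.since) || 0);
  const { rows } = await db.query(
    'SELECT * FROM products WHERE updated_at > $1',
    [since]
  );
  res.json({ rows, serverTime: Date.now() });
});

app.listen(3000);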
Have a nice day, Elias
I'm currently working for a startup which asked me to develop an offline app using PouchDB. This PouchDB instance is used to store the data entered by users.
The offline application works fine. Now I have to add a feature to the online app to sync the dbs. After a login, the online app has to check whether data is stored in a PouchDB on the device that is connecting and, if the check finds data, the online app has to pull it.
I have the following problem: the online app can't get the db stored locally on the device (even if I run both apps in the same browser).
I explained my problem in another StackOverflow post, but the formulation was not so good, so I think it's a good idea to post another question.
My old post here
I've been working on this problem for a few days and I don't have much time left before I have to finish my work, so if someone knows how to solve this, it would be very nice :)
I think the lack of responses is because readers are not very clear on what the problem is. In your other post it sounds like you are saying that if you write a new entry to the local database you cannot retrieve it again. In this question it sounds like once you have a local database entry you cannot make it replicate to the server database - is that the case?
On the PouchDB front page there is a short example of writing to a local database and then replicating it to a server, like this:
// Open (or create) a local database...
var db = new PouchDB('dbname');

// ...write a document to it...
db.put({
  _id: 'dave@gmail.com',
  name: 'David',
  age: 69
});

// ...then replicate the local database to the server.
db.replicate.to('http://example.com/mydb');
(the example assumes the database can be updated by anyone, i.e. no security - otherwise you need a username and password, as explained here)
Does this work for you? If not can you say what happens?
Checking to see if there is data locally should be a case of seeing whether your local database has any entries in it (db.info would be a start, as it returns a document count). Then you could replicate the local database using the db.replicate call.
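Putting the two together, a hedged sketch (same placeholder remote URL as above):
var db = new PouchDB('dbname');

// db.info() resolves with, among other things, a doc_count.
db.info().then(function (info) {
  if (info.doc_count > 0) {
    // Local entries exist - push them to the server database.
    return db.replicate.to('http://example.com/mydb');
  }
}).then(function (result) {
  if (result) { console.log('replication complete', result); }
}).catch(console.error);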
Does this help?
I have an Ionic app and a Parse.com backend. My users can perform CRUD functions on exercise programmes, changing every aspect of the programme, including adding, deleting, and editing the exercises within it.
I am confused about when to save, when to call the server, and how much data can be held in services / $rootScope.
Typical user flow is as below:
Create Programme and Client (create both on the server and store the data in $localStorage).
User goes to the edit screen where they can perform CRUD functions on all exercises within the programme. Currently I perform a server call on each function so it is synced to the backend.
The user may go back and select a different programme - downloading the data and storing it in $localStorage again.
My question is how I can ensure that my users' data is always saved to the server while offering them a responsive, fast user experience.
Would it be normal to have a timeout function that triggers a save periodically? On mobile, the number of calls to the server is quite painful over a poor connection.
Any ideas on full local / remote sync with Ionic and Parse.com would be welcome.
From my experience, the best way to think of this is as follows:
localStorage is essentially a cache layer, which is great when it's up to date because it reduces network calls. However, it can be cleared at any time and should be treated as volatile storage.
Your server is your source of truth, and as such, should always be updated.
What this means is: for reads, localStorage is great - you don't need to fetch your data a million times if it hasn't changed. For writes, always trust your server for long-term storage.
The pattern I suggest is: on load, fetch any relevant data and save it to localStorage. Any further reads come from localStorage. Edits go directly to the server and, on success, you write those changes to localStorage too. This way, if a save fails, the user can be informed, and/or you can use localStorage as a queue and keep retrying to post the data to the server until it fully succeeds.
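A hedged sketch of that pattern - the endpoints and storage keys are made up for illustration:
// Read: localStorage first, server only on a cache miss.
async function getProgramme(id) {
  const cached = localStorage.getItem('programme:' + id);
  if (cached) return JSON.parse(cached);
  const fresh = await fetch('/api/programmes/' + id).then(r => r.json());
  localStorage.setItem('programme:' + id, JSON.stringify(fresh));
  return fresh;
}

// Write: server first (source of truth), mirror locally only on success.
async function saveProgramme(programme) {
  try {
    await fetch('/api/programmes/' + programme.id, {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(programme),
    });
    localStorage.setItem('programme:' + programme.id, JSON.stringify(programme));
  } catch (err) {
    // Failed save: queue it and keep retrying until the server accepts it.
    const queue = JSON.parse(localStorage.getItem('writeQueue') || '[]');
    queue.push(programme);
    localStorage.setItem('writeQueue', JSON.stringify(queue));
  }
}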
This is called "offline sync" or sometimes "4-way data binding". The point is to cache data locally and sync it with a remote backend. This is a very common need, but the solutions are unfortunately not that common... The ideal flow follows this philosophy:
save data locally
try to sync it with the server (performing auto-merges)
periodically sync, using a timer and maybe a "connection resumed" event (the trigger is sketched below)
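The trigger part, at least, is simple to sketch - syncWithServer() is an assumed function that does the actual push and merge:
function startSync(syncWithServer, intervalMs) {
  syncWithServer();                                   // initial attempt
  setInterval(syncWithServer, intervalMs || 60000);   // periodic sync
  window.addEventListener('online', syncWithServer);  // "connection resumed"
}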
This is very hard to achieve manually, especially the auto-merging. I've been searching for modules for a long time, and the only ones that come to mind don't really fit your needs (because they often are backend providers that give you frontend connectors, and you already have an opinionated backend), but here they are anyway:
Strongloop's Loopback.io
Meteor
PouchDB
My app is built in PHP (served on nginx/php-fpm) and I use Node.js with Socket.IO for user push notifications. These are pushed using Redis pub/sub to interlink PHP and Node.js.
The node.js app maintains an array of online userid's. They get added when a user connects to socket.io and removed from the array when they disconnect.
MySQL is used as the main database and I have a standard relationship table which denotes who is following whom. A list of follower userids is retrieved when a user logs in and displayed to them.
I now wish to intersect these two sets of data to provide live online statuses for these relationships, in a similar manner to Facebook (a green light for online and grey for offline).
What would be the most performant and most scalable way of managing this? My current thought process is along these lines:
On the client side we have a JavaScript array of follower user ids. Set up a client-side timer which pushes this array to the Node.js app every 60 seconds or so. Node.js handles intersecting the follower ids with its current array of online users and returns an object depicting which users are online.
Now, this would work, but it feels like it might be a heavy load on Node.js to be constantly looping through users' follower lists for every online user. Or perhaps I am wrong and this would be relatively trivial, considering the main application itself is served by PHP, and Node currently only handles notification pushing?
Regardless, is there a better way? It's worth noting that I also use Redis to build users' activity streams (the data is stored in MySQL and Redis maintains lists of activity ids).
So, seeing as I already have a Redis server active, would there be a better method leveraging Redis itself - something like the set intersection sketched below?
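This hedged sketch is the kind of thing I mean - it assumes an online_users set maintained by the Socket.IO connect/disconnect handlers, per-user followers:<id> sets mirrored from MySQL, and the classic callback-style node_redis API:
const redis = require('redis');
const client = redis.createClient();

// Maintain the online set from the Socket.IO lifecycle events.
function userOnline(userId)  { client.sadd('online_users', userId); }
function userOffline(userId) { client.srem('online_users', userId); }

// Which of a user's followers are online right now? Redis does the
// intersection server-side, so Node never loops over follower lists.
function onlineFollowers(userId, cb) {
  client.sinter('online_users', 'followers:' + userId, cb);
}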
Thanks for any advice
If I remember right, when Socket.IO is connected to the client side, it makes a request to the client periodically to check that the connection is still active. In the success callback you can put code that updates the user's last-active time in the DB, and then fetch the users whose last activity is within the past 5 minutes from the DB as the online set.
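A hedged sketch of that last-seen idea - updateLastActive() is an assumed helper that writes a timestamp to MySQL, and the userId handoff is illustrative:
const { Server } = require('socket.io');
const io = new Server(3000);

function updateLastActive(userId) {
  // assumed: UPDATE activity SET last_active = NOW() WHERE user_id = ?
}

io.on('connection', socket => {
  const userId = socket.handshake.query.userId; // assumed auth handoff
  // Refresh the user's last-active time while the socket stays connected.
  const timer = setInterval(() => updateLastActive(userId), 60 * 1000);
  socket.on('disconnect', () => clearInterval(timer));
});

// "Online" then means: active within the past 5 minutes, e.g.
// SELECT user_id FROM activity WHERE last_active > NOW() - INTERVAL 5 MINUTE;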