I'm developing a GPS tracking system with ASP.NET MVC and SQL Server (database-first). I have a question about the main page, which shows all devices on a map. This page fetches a lot of data from the database, processes it, and has to be refreshed every 15 seconds.
How can I reduce the load on this page?
If you have an idea or a specific way to reduce the load on this page, please guide me.
You could use SignalR, which implements WebSockets. Unlike what you're doing now, which is polling (constantly requesting data from the server every 15 s or so), with WebSockets you don't have to initiate a request again and again; the server pushes data to the connected clients only when there is data to send.
It's a popular C# library, so you'll find plenty of tutorials here and there.
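To make the difference concrete, here is a minimal in-memory sketch of the push model (plain JavaScript, not the actual SignalR API; `DeviceHub` and the method names are invented for the illustration): clients subscribe once, and the server notifies them only when a device position actually changes.

```javascript
// Minimal pub/sub sketch of the push model a SignalR hub gives you:
// clients "connect" once, and the server pushes only when data changes.
class DeviceHub {
  constructor() {
    this.clients = new Set();
  }
  subscribe(callback) {                  // a client connects once
    this.clients.add(callback);
    return () => this.clients.delete(callback); // disconnect handle
  }
  positionChanged(deviceId, lat, lng) {  // called when new GPS data arrives
    for (const notify of this.clients) {
      notify({ deviceId, lat, lng });    // push to every connected client
    }
  }
}

// Usage: one client, two position updates, zero polling requests.
const hub = new DeviceHub();
const received = [];
hub.subscribe(update => received.push(update));
hub.positionChanged('truck-7', 35.69, 51.38);
hub.positionChanged('truck-7', 35.70, 51.39);
```

With SignalR itself, the hub would be a C# `Hub` class on the server and the callback a handler registered by the JavaScript client; the sketch only shows why no 15-second refresh loop is needed.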
Here is a diagram of the main tables. I insert the raw data from each GPS device into a separate table, but the last-log table holds live data for displaying the last state of each carrier.
Related
I have a Django web app that displays stock data.
How it works:
I make requests to a stock data API and get data every 15 minutes while the US stock market is open. This runs as a periodic background task using Celery.
I have a database where I update this data as soon as I get it from the API.
Then I send the updated data from the database to a view, where I visualize it in an HTML table.
Using jQuery, I refresh the table every 5 minutes to give it a feel of "real time", although it is not.
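The periodic refresh described above can be sketched as a small helper (plain JavaScript; `fetchRows` and `render` are placeholder names for your API call and table-rendering code, not part of any library):

```javascript
// Poll-and-render sketch: refresh immediately on start, then on a timer.
// fetchRows stands in for the HTTP call, render for the DOM/table update.
function startRefresh(fetchRows, render, intervalMs) {
  const refresh = () => render(fetchRows());     // one refresh cycle
  const timer = setInterval(refresh, intervalMs); // repeat every interval
  refresh();                                      // draw once right away
  return { refresh, stop: () => clearInterval(timer) };
}

// Usage with stand-ins for the API call and the table renderer:
const rendered = [];
const table = startRefresh(
  () => [{ symbol: 'AAPL', price: 123 }],  // stand-in for the API fetch
  rows => rendered.push(rows),             // stand-in for rebuilding the table
  5 * 60 * 1000                            // the 5-minute interval above
);
table.stop();                              // stop polling on page leave
```

This is the polling model the question wants to move away from; a Channels/WebSocket setup would replace the timer with a server-initiated push.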
My goal is to get the HTML table updated (or item by item) as soon as the database gets updated too, making it 100% real time.
The website will have registered users (up to 2500-5000 users) that will be visualizing this data at the same time.
I have googled around and didn't find much info. There's Django Channels (WebSockets), but all the tutorials I've seen focus on building real-time chats. I'm not sure how efficient WebSockets are, since I have zero experience with them.
The website is hosted on Heroku Hobby, for what it's worth.
My goal is to make a fully real-time web app and make it as efficient as possible.
I have made an API using Express.js that scrapes a website using cheerio. I have deployed this API on Heroku. I want my web application to fetch the latest data from the scraped website, but my deployed app is not doing so; it still shows the old data. How can I make it fetch live data continuously?
Hi, I got stuck on a similar problem. The way out was using cron jobs and the database together.
I configured cron jobs to visit my website twice a day to wake the server (you don't need this if you have a good number of active users).
So when the server restarts, my app checks whether the data stored in my DB (i.e., the last data scraped from the source while the server was previously active) is the same as what is currently on the target website (from where I scrape data). If it is, it does nothing; otherwise, it updates the DB with the latest data and displays the same on the website.
The drawbacks of this approach are:
1. It updates data only twice a day.
2. If your site has many active users throughout the day, their constant visits won't let your app's server go idle, so at the time you configured the cron job to visit your site, the server may already be online and the data may not get updated.
However, for sites with few active users this approach works perfectly fine.
You can configure cron jobs from here: https://cron-job.org/en/members/jobs/
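The restart check described above can be sketched like this (plain JavaScript; `syncOnStartup`, `scrape`, and the `Map`-backed DB are invented stand-ins for the cheerio scraper and a real database):

```javascript
// On startup, compare freshly scraped data with what the DB already has,
// and write only when they differ. scrape() stands in for the cheerio call.
function syncOnStartup(db, scrape) {
  const fresh = scrape();              // scrape the target website now
  if (db.get('latest') !== fresh) {    // changed since the last active run?
    db.set('latest', fresh);           // update the DB with the latest data
    return true;                       // report that an update happened
  }
  return false;                        // nothing changed, nothing written
}

// Usage with an in-memory stand-in for the database:
const db = new Map();
const first = syncOnStartup(db, () => 'price: 100');  // fresh start: writes
const second = syncOnStartup(db, () => 'price: 100'); // same data: no write
```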
I've built an API that delivers live data all at once when a user submits a search for content. I'd like to take this API to the next level by delivering the content to the user as it is received, instead of waiting for all of the data to arrive before displaying anything.
How does one go about this?
The easiest way to do this in Django is to use Django Endless Pagination.
I think a better way is to set a limit in your query. For example, if you have 1,000 records in your database, retrieving all of them at once takes time. So if a user searches for the word 'apple', you initially send the database request with a limit of 10, and add pagination or an infinite-scroll feature on your front end. When the user clicks to the next page or scrolls, you send another request for the next 10 records, so the database never has to read more than a small slice at a time.
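As a sketch of that limit idea (the `products` table and column names are invented for the example), here is a LIMIT/OFFSET query builder plus an in-memory equivalent of what the database does with it:

```javascript
// Build a paged SQL query for a search term. For illustration only:
// real code should use parameterized queries, never string interpolation,
// to avoid SQL injection.
function pageQuery(term, page, pageSize) {
  const offset = (page - 1) * pageSize;        // page 1 -> OFFSET 0
  return `SELECT * FROM products WHERE name LIKE '%${term}%' ` +
         `LIMIT ${pageSize} OFFSET ${offset}`;
}

// In-memory equivalent of LIMIT/OFFSET: take one slice per request.
function pageOf(rows, page, pageSize) {
  const offset = (page - 1) * pageSize;
  return rows.slice(offset, offset + pageSize);
}

// Usage: 1,000 matches, but each request only handles 10 of them.
const rows = Array.from({ length: 1000 }, (_, i) => `apple-${i}`);
const firstPage = pageOf(rows, 1, 10);   // initial search result
const secondPage = pageOf(rows, 2, 10);  // next page / scroll event
```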
From your explanation:
"We're pulling our data from multiple sources with each user search. Being directly connected to the scrapers for those sources, we display the content as each scraper completes content retrieval. I was originally looking to mimic this in the API, which is obviously quite different from traditional pagination - hope this clarifies."
So, in your API, you want to:
take the query from the user
initiate live scrapers
get the data back to the user when the scrapers finish the job
(correct me if I'm wrong)
My Answer
This might feel a little complicated, but it's the best approach I can think of.
When the user submits the query:
1. Initiate the live scrapers in a Celery queue (taking care of priority).
2. Once the queue is finished, get back to the user with the information you have via sockets (this is how Facebook or any website sends users notifications); in your case you will send the resulting HTML data over the socket.
3. Since you will already have the data in the DB (moved there as it was scraped), you can paginate it like a normal DB query.
But this approach gives you a lag of a few seconds or a minute before replying to the user; meanwhile, you keep the user busy with something on the UI front.
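The steps above can be sketched as follows (plain JavaScript instead of Celery; the scraper functions, `saveToDb`, and `notify` are placeholders, with `notify` standing in for the socket push to the user):

```javascript
// Queue all scrapers for a query; when the last one finishes, persist the
// combined results and notify the user over the "socket" callback.
function handleSearch(query, scrapers, saveToDb, notify) {
  const results = [];
  let pending = scrapers.length;           // jobs still in the queue
  for (const scrape of scrapers) {
    scrape(query, rows => {                // each scraper calls back when done
      results.push(...rows);
      if (--pending === 0) {               // the whole queue is finished
        saveToDb(results);                 // now pageable like a normal DB
        notify(results);                   // push results to the user
      }
    });
  }
}

// Usage with two fake scrapers that call back immediately:
const db = [];
let pushed = null;
handleSearch(
  'apple',
  [(q, done) => done([q + ' from siteA']),
   (q, done) => done([q + ' from siteB'])],
  rows => db.push(...rows),                // stand-in for the DB write
  rows => { pushed = rows; }               // stand-in for the socket push
);
```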
Why can I not see newly inserted JSON documents?
Couchbase 4.5
JavaScript
AngularJS
Node.JS
Express Server
I have a web application that performs data maintenance. The user selects from a menu which data to manipulate. Once the web application retrieves the data, the user has the option of inserting, updating, or deleting. I ran several tests of the application and discovered an issue.
Background to the Issue:
Couchbase Server resides on the local computer.
The application is written using JavaScript, AngularJS, and Node.js, running an Express server.
The user enters new data into the web application. Once finished, the user presses an update button, and the application determines whether the user wants to insert or update a JSON document. In this case, the web application determines the user is adding new data to the database. It formats the data into the appropriate JSON document, sends it to the database using the REST paradigm, and the database returns a success status. Upon recognizing the successful update, the web application retrieves the data again in order to display the most current data.
Issue:
After reviewing the newly retrieved data, the record just inserted does not appear. After waiting a few minutes, I retrieved the data again, and the newly inserted data appeared. I ran the process several times; each time I could not immediately retrieve the newly inserted data.
Questions:
Can someone explain why a JSON document of fewer than 1000 characters cannot be retrieved immediately after insertion?
Will I need to insert the newly inserted data into my existing result set myself? If so, why, when the data already resides in the database?
TIA
I guess you are talking about N1QL or view querying. If so, you are probably operating at the default consistency level, which trades immediate visibility of updates for performance. If immediate visibility is critical at some point in your application, you should pick a different consistency level.
Overview of the feature: http://developer.couchbase.com/documentation/server/4.5/architecture/querying-data-with-n1ql.html
Blog post with video demo: http://blog.couchbase.com/2016/july/n1ql-scan-consistency-including-new-atplus-video
N1QL API to change consistency: https://github.com/couchbase/couchnode/blob/771ebf78f82b437999e13b05e4699c88a02dc8d3/lib/n1qlquery.js#L71
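For illustration, here is a sketch of raising the scan consistency for a single query with the 2.x-era Node SDK API from the link above (`bucket` is assumed to be an already-opened bucket, and the bucket and filter names are invented; check the exact names against your SDK version):

```javascript
const couchbase = require('couchbase');
const N1qlQuery = couchbase.N1qlQuery;

// REQUEST_PLUS waits for pending mutations to be indexed before the
// query runs, so a just-inserted document is visible to this query.
const query = N1qlQuery
  .fromString('SELECT * FROM `myBucket` WHERE type = "record"')
  .consistency(N1qlQuery.Consistency.REQUEST_PLUS);

bucket.query(query, (err, rows) => {
  // rows should now include the document you just inserted
});
```

The trade-off is exactly the one described above: the query is slower, so reserve the stronger consistency for the read-after-write paths that need it.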
I'm trying to build a single-page web app using Backbone. The app looks and behaves like a mobile app running on a tablet.
The web app is built to help event organizers manage their lists of people attending their events, and this includes the ability to search and filter those lists of attendees.
I load the full attendee list when the user opens the attendees screen, and whenever the user starts to search or filter attendees, the operation happens on the client side.
This approach works perfectly when the event has about 400 attendees or fewer, but when the number of attendees gets bigger (~1000), the initial download takes longer (which makes sense); once all the data is loaded, though, searching and filtering are still relatively fast.
I originally decided to fully load all the data each time the app loads, so that all search operations happen on the client side, my servers are spared the load, and search results show up faster for the user.
I don't know if this is the best way to build a web/mobile app that processes a lot of data.
I wish there were a known pattern for dealing with these kinds of apps.
In my opinion, your approach of processing the data on the client side makes sense.
But what do you mean with "fully loading all the data each time the app is loaded"?
You could load the data only once at the beginning and then work with this data throughout the app lifecycle without reloading this data every time.
What you could also do is store the initially fetched data in HTML5 localStorage. Then you only have to refetch data from the server if something has changed. This should reduce your startup time.
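The localStorage idea can be sketched like this (the `serverVersion` check is one assumed way to detect "something changed"; the storage object is injected so the same code runs against `window.localStorage` in the browser or a plain object here):

```javascript
// Cache the attendee list in storage with a version stamp, and refetch
// from the server only when the server reports a newer version.
function loadAttendees(storage, serverVersion, fetchAll) {
  const cached = storage.getItem('attendees');
  if (cached && storage.getItem('attendeesVersion') === String(serverVersion)) {
    return JSON.parse(cached);          // cache hit: no download at startup
  }
  const fresh = fetchAll();             // cache miss or stale: full fetch
  storage.setItem('attendees', JSON.stringify(fresh));
  storage.setItem('attendeesVersion', String(serverVersion));
  return fresh;
}

// Usage with an in-memory stand-in for window.localStorage:
const mem = {};
const storage = {
  getItem: k => (k in mem ? mem[k] : null),
  setItem: (k, v) => { mem[k] = v; },
};
let fetches = 0;
const fetchAll = () => { fetches++; return [{ name: 'Ada' }, { name: 'Lin' }]; };
const a = loadAttendees(storage, 1, fetchAll);  // first load: fetches once
const b = loadAttendees(storage, 1, fetchAll);  // same version: cache hit
```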