Updating currency charts on a website: what is the best solution? - javascript

I have the idea to put around 30 charts on a webpage. The charts will be loaded with data from a SQLite database, and a Node.js Express web server serves that data while also receiving new data every second from another program.
I think the Node.js server will be fine reading and writing this amount of data every second, but I am not sure what the best solution is for updating the charts in the webpage.
I have heard about things like Web Workers, Service Workers and Server-Sent Events. I myself would use a setTimeout function in the page that reads the data again every second and writes it into the charts, but in my testing with just around 10 charts I already see that the browser takes too much CPU and RAM.
I am using ApexCharts in the page, and I am not sure whether my setTimeout function reading from the database every second is the problem, or how other professionals do this. What is the best technique to update such charts every second?
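For reference, here is a minimal sketch of the polling approach described above, assuming a single /data endpoint on the Express server that returns the latest points as JSON (the endpoint name and payload shape are assumptions):

```javascript
// Minimal sketch: poll an assumed /data endpoint once per second and
// push the values into an existing ApexCharts instance.
const chart = new ApexCharts(document.querySelector('#chart-eurusd'), {
  chart: { type: 'line', animations: { enabled: false } }, // animations off to save CPU
  series: [{ name: 'EUR/USD', data: [] }],
});
chart.render();

async function refresh() {
  try {
    const res = await fetch('/data?pair=eurusd');   // assumed endpoint
    const points = await res.json();                // assumed shape: [[timestamp, price], ...]
    chart.updateSeries([{ data: points }]);         // replaces the series in place
  } catch (err) {
    console.error('refresh failed', err);
  } finally {
    setTimeout(refresh, 1000);                      // schedule the next poll
  }
}
refresh();
```

Disabling animations and batching the data for all 30 charts into a single request are usually the first steps to bring CPU usage down; Server-Sent Events would replace the polling loop with pushes from the Express server, but the chart update call stays the same.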

Related

How to display data queried through Flask on HTML while the query is still running

So, I am making a SQL query and processing the acquired data into JSON format (in a program that generates JSON). But the process takes a very long time, upwards of 6 minutes, so I am trying to display the data as the processing is being done, but I am quite unsure how to go about that.
The only idea I have had is to split the queries into batches of 10, display the first 10, run the rest in the background and load them as required (but again, no clue how to go about that).
I am familiar with Flask, HTML, JS, and jQuery (barely). I am open to new frameworks, but if possible I would like to stick with what I know. Any solution is appreciated.
You might want to look into Celery.
https://flask.palletsprojects.com/en/2.0.x/patterns/celery/
Basically, you would offload your SQL queries to a Celery task. The task could run each query sequentially and keep saving the results to a database, a Redis cache, or a plain file.
Then your HTML/JS would set a timer/interval to do a fetch() every few seconds to a Flask route that reads from that store until the task is done, as sketched below.
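As a rough illustration of the browser half of this, assuming a Flask route at /results that returns {"done": false} until the Celery task has finished (the route name and response shape are assumptions):

```javascript
// Minimal sketch: poll an assumed /results route every few seconds
// until the background task reports that it is done.
async function pollResults() {
  const res = await fetch('/results');        // assumed Flask route
  const payload = await res.json();           // assumed shape: { done: bool, rows: [...] }
  if (payload.done) {
    renderTable(payload.rows);                // hand off to your existing rendering code
  } else {
    setTimeout(pollResults, 3000);            // try again in a few seconds
  }
}

function renderTable(rows) {
  // placeholder: append the rows to the page however your app already does it
  console.log('received', rows.length, 'rows');
}

pollResults();
```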

React-table loads slower with live API data than with mock data

Currently when I load a React-table locally with ~200 rows of mock data the performance is smooth, but when I do the same thing locally with ~200 rows of data from the API performance takes a huge hit. I'm fine with the actual search taking a hit, but currently, every time I try to navigate to the next page of the table or navigate out of/into the component itself the lag is very noticeable. The API is made through Spring, but it only appears to be accessed when the search is initially executed.
I've tried tracking the call with console logs to see where in the code the issue is occurring; it appears that the issue exists within react-table itself, but I'm still unsure why it only occurs with data pulled from the API.
Some lag is normal whenever you are fetching a large amount of data; there will be a slight delay while fetching from the API. Please also share your code so I can go through it.

A method for updating REST API data periodically?

I don't know how to put it; I hope the title fits my problem or scenario.
I want to build a REST API with data coming from many RSS feed sites. Right now, I am able to fetch the data using a JavaScript script and save it in my database. To be able to fetch that data, I have to keep a page open so the script can run and reload every minute. The REST API is still on localhost, by the way.
The question is: if I want to host it, should I have one PC running 24 hours a day that only keeps a browser open on the REST API address so the script keeps running and the data stays up to date?
Right now this is the only method I can think of. Is there any method that does not require one PC running 24 hours a day, seven days a week?
The best solution to your problem is to set up a scheduler that runs at a predefined interval, fetches the data and stores it in the DB internally; you don't need to open a page to do that if you are not modifying the response returned from the RSS feeds.
You can go through this Post, Tutorial, Node-Schedule, Parse. These are some examples which you can use, depending on your requirements.
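As a small sketch of that idea with node-schedule, assuming a hypothetical fetchFeeds() helper that pulls the RSS data and writes it to the database:

```javascript
// Minimal sketch: run the feed fetch once a minute with node-schedule,
// with no browser or open page involved.
const schedule = require('node-schedule');

async function fetchFeeds() {
  // hypothetical helper: pull each RSS feed and save the items to the DB
}

// cron-style rule with a seconds field: fire at second 0 of every minute
schedule.scheduleJob('0 * * * * *', async () => {
  try {
    await fetchFeeds();
  } catch (err) {
    console.error('feed fetch failed', err);
  }
});
```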

Background job on Heroku: how does the web know it's finished?

So, I'm creating this application that sometimes requires pulling a feed, and it always times out on Heroku because the XML parsing takes time. So I changed it to load asynchronously via Ajax every time the page is loaded. I still get the H12 error from my Ajax call. Now I'm thinking of using Resque to run the job in the background. I can do that, no problem, but how would I know that the job is finished so I can pull the processed feed onto the HTML page via AJAX?
Not sure if my question is clear, so: how would the web layer know that the job is done and that it should signal (e.g. onComplete in JavaScript) to populate the content on the page?
There are a number of ways to do this:
The JavaScript can use AJAX to poll the server, asking for the results; the server responds with 'not yet' or with the results, and you keep asking until you get them (see the sketch below).
You could take a look at Juggernaut (http://juggernaut.rubyforge.org/), which lets your server push to the client.
Web Sockets are the HTML5 way to deal with the problem; there are a few gems around to get you started (see "Best Ruby on Rails WebSocket tool").
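A minimal sketch of the polling option, assuming a hypothetical /feed_status route that responds with HTTP 202 while the Resque job is still running and with the processed feed once it has finished:

```javascript
// Minimal sketch: keep polling an assumed /feed_status route until the
// background job has written its result, then render it.
function pollFeed() {
  fetch('/feed_status')                       // assumed route backed by the Resque job's result
    .then((res) => {
      if (res.status === 202) {               // 202 = accepted, but the job is not finished yet
        setTimeout(pollFeed, 2000);           // ask again in two seconds
      } else {
        return res.json().then(renderFeed);   // job done: render the processed feed
      }
    })
    .catch((err) => console.error('poll failed', err));
}

function renderFeed(feed) {
  // placeholder: insert the feed items into the page
  console.log('feed ready', feed);
}

pollFeed();
```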
You have an architecture problem here. The reason for the H12 is so that the user is not left waiting for more than 30 seconds.
By moving the long-running task into a Resque queue, you are disconnecting it from the front-end web process; the two cannot communicate directly because of process isolation.
Therefore you need to look at what you are doing and how. For instance, if you are pulling a feed, are you able to do this at some point before the user needs to see the output and cache the results in some way, or are you able to take the request for the feed from the user and then email them when you have the data for them to look at, etc.?
The underlying problem is that your users are asking for something which takes longer than a reasonable amount of time to complete, so you need to have a good look at what you are doing and how.

How to handle large data sets for server-side simulation --> client browser

Sorry for the somewhat confusing title; I'm not sure how to title this. My situation is this: I have an academic simulation tool that I am in the process of developing a web front-end for. While the C++-based simulator is computationally quite efficient (several hundredths to a tenth of a second of runtime) for small systems, it can generate a significant (in web-app terms) amount of data (~4-6 MB).
Currently the setup is as follows-
User accesses index.html file. This page on the left side has an interactive form where the user can input simulation parameters. On the right side is a representation of the system they are creating, along with some greyed out tabs for various plots of the simulation data.
User clicks "Run simulation." This submits the requested sim parameters to a runSimulation.php file via an AJAX call. runSimulation.php creates an input file based on the submitted data, then runs the simulator using this input file. The simulator spits out 4-6mb of data in various output files.
Once the simulation is done running, the response to the browser triggers another JavaScript function, which calls returnData.php. This PHP script packages the data in the output files as JSON, returns it to the browser, and then deletes the data files.
This response data is then fed to a few plotting objects in the browser's javascript, and the plot tabs become active. The user can then open and interact with the plotted data.
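As a rough sketch of that flow from the browser's side (the parameter handling and the plotResults() call are assumptions; the two PHP file names are taken from the setup above):

```javascript
// Minimal sketch of the described flow: submit the parameters,
// then fetch the packaged JSON once the simulation has finished.
async function runSimulation(params) {
  // kick off the simulator; runSimulation.php returns once it is done
  await fetch('runSimulation.php', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(params),
  });

  // then pull the packaged output files back as JSON
  const res = await fetch('returnData.php');
  const data = await res.json();            // the 4-6 MB of simulation output
  plotResults(data);                         // hypothetical: hand the data to the plotting objects
}
```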
This setup is working OK; however, I am running into two issues:
The return data is slow: 4-6 MB of data coming back can take a while to load. (That data is being gzipped, which reduces its size considerably, but it can still take 20+ seconds on a slower connection.)
The next goal is to allow the user to plot multiple simulation runs so that they can compare the results.
My thought is that I might want to keep the data files on the server while the user's session is active. This would make it possible to load only the data for the plot the user wants to view (and perhaps load other data in the background as they view the results of the current plot). For multiple runs, I can have multiple data sets sitting on the server, ready for the user to download if/when they are needed.
However, I have a big issue with this line of thinking: how do I recognize (in PHP) that the user has left the site, and delete the data? I don't want users to take over the drive space on the machine. Any thoughts on best practices for this kind of web app?
For problem #1, you don't really have many options. You are already gzipping the data and using JSON, which is a relatively lightweight format, and 4-6 MB of data is indeed a lot. By the way, if you think PHP is taking too long to generate the data, you can have your C++ program generate it and serve it through PHP; you can use exec() to do that.
However, I am not sure how your simulations work, but JavaScript is a Turing-complete language, so you could possibly generate some/most/all of this data on the client side (whatever makes more sense). In this case, you would save a lot of bandwidth and decrease loading times significantly, but mind that JS can be really slow.
For problem #2, if you leave data on the server you'll need to keep track of active sessions (i.e. when the user last interacted with the server) and set a timeout that makes sense for your application. After the timeout, you can delete the data.
To keep track of interaction, you can use JS to check whether a user is active (by sending heartbeats or something like that), as sketched below.
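A minimal sketch of the client-side heartbeat, assuming a hypothetical heartbeat.php endpoint that records the last-seen time for the session:

```javascript
// Minimal sketch: tell the server every 30 seconds that this session is
// still active, so stale result files can be cleaned up after a timeout.
setInterval(() => {
  fetch('heartbeat.php', { method: 'POST' })   // assumed endpoint that updates the last-seen time
    .catch(() => { /* ignore transient network errors */ });
}, 30000);
```

A cron job or similar on the server can then delete any run data whose last heartbeat is older than the chosen timeout.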
