My web app handles large data sets that have to be returned to the user. To speed this up, the first user to access a fresh data set triggers a caching script, so the next user to access the data gets it back in about a tenth of the time.
The issue is that the first user has to wait 30-odd seconds while the data is generated and the caching script runs. During that time the browser, trying to keep the connection alive, fires the GET request again. Because the first caching run hasn't finished, there is no timestamp yet to indicate that a cached version exists, so the repeated request triggers another cache run. I can end up with multiple caching jobs running at the same time, all because the first one hasn't completed.
Is there a way to keep the connection open without timing out, or to stop the browser from firing multiple GET requests when one connection is taking a while to respond?
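One common way to avoid this "cache stampede" is to guard the expensive build with a lock, so that requests arriving while a build is running are told it's in progress instead of starting their own. A minimal in-memory sketch (in PHP you'd typically use a lock file or a database flag instead; all names here are illustrative):

```javascript
// Guard an expensive cache build with a per-key lock so concurrent
// requests don't each start their own build.
const locks = new Set();

function buildCacheOnce(key, build) {
  if (locks.has(key)) {
    return "in-progress"; // another request is already building this key
  }
  locks.add(key);
  try {
    return build(); // the expensive generation + caching step
  } finally {
    locks.delete(key); // release even if the build throws
  }
}
```

A request that sees "in-progress" can simply retry after a short delay rather than kicking off a second build.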
I'm looking for the best way to keep data up to date when dealing with asynchronous web socket requests, especially data that contains time information.
I have a dashboard. When I log/press "feed", I update the nextFeed time to the current logged time plus feedingInterval.
feedingInterval = 2 hrs
I used to do a form submit followed by a page refresh.
I also ran location.reload() every minute to keep my nextFeed time up to date when a user opened the link on a machine connected to a TV.
The nextFeed time was always current; worst case it lagged by one minute.
NOW
To enhance the UX:
I no longer use form submits, so there's no page refresh. Instead I use an Ajax POST to a web socket route to broadcast to all my devices, and when any device receives a push notification from Pusher Cloud, I update the DOM live on all of them.
Issue
I used to refresh the page every minute. Since I no longer call location.reload(), my feeding time becomes stale (not up to date).
Current solution:
I created an API to get the nextFeed time, call it every 5 seconds, and update the DOM every 5 seconds. My browser runs very hot, presumably because of how many requests this makes per minute.
You might think: just call it every minute, then.
But if I make a call only once a minute, my users won't have up-to-date information, which defeats the purpose of using web sockets in the first place.
What should I do to get the best UX without compromising performance?
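One option that keeps the display fresh without polling: include the nextFeed timestamp in the Pusher event payload you already broadcast, store it client-side, and derive the countdown locally every second. A local timer tick is essentially free compared with a network request. A sketch (the render callback and injectable clock are illustrative):

```javascript
// Derive the countdown locally from a nextFeed timestamp received once
// (e.g. in a push event payload) instead of polling an API.
function startCountdown(nextFeedEpochMs, render, now = Date.now) {
  function tick() {
    const remainingMs = Math.max(0, nextFeedEpochMs - now());
    render(remainingMs); // update the DOM; no network request per tick
  }
  tick();
  return setInterval(tick, 1000);
}
```

When a new push notification arrives, clear the old interval and start a new countdown with the updated nextFeed value.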
https://www.bunlongheng.com/baby/5db53c4c-5be4-4aa2-b9f6-b564acb871ac?code=d3s!gn
I am trying to automate my performance testing scenario on two different browsers, Edge and Chrome, since both support the JavaScript performance APIs.
The problem I am facing is that my scenarios differ from one another, but they run in the same browser session.
For example, I need to measure the response time for opening a form, which is the first scenario; the second scenario is the response time taken for the form to save. They are two different scenarios, but they use the same browser session.
Once my first scenario is done, I use the following code:
window.performance.timing.responseEnd - window.performance.timing.responseStart
Here I get a reasonable time, 8 ms. After this I fill in and save the form, which is my second scenario, and check the response time the same way. I still get 8 ms, which is not valid: the actual time taken to save and reload the new form is more than 8 ms. When I check the individual response start and end times, they stay the same for the entire session.
I want to measure a different response time for the form on save and reload. Is this feasible in the first place? If so, what is a better approach?
With reference to this article, we can see that:
PerformanceTiming.responseStart: when the browser received the first byte of the response, from the server, from a cache, or from a local resource.
PerformanceTiming.responseEnd: when the browser received the last byte of the response (or when the connection was closed, if that happened first), from the server, the cache, or a local resource.
So, according to the properties above, we can get the total time taken to download the web page from the server.
For more detail about the timing process, I suggest you check the PerformanceTiming interface and Measure Web Page Performance Using JavaScript.
Besides, if you want to calculate the response time for the form save and reload, you could also try using the Date object to calculate the time, as in this and this thread.
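Since the PerformanceTiming values are navigation-scoped and stay fixed for the page's lifetime, an in-page operation such as the form save can instead be bracketed with performance.now(), which keeps advancing after load. A sketch (the op callback is a placeholder for whatever triggers the save):

```javascript
// Measure an in-page operation with performance.now() rather than the
// navigation-scoped PerformanceTiming values, which never change after load.
function timeOperation(op) {
  const start = performance.now();
  const result = op(); // e.g. the code that triggers the form save
  const elapsedMs = performance.now() - start;
  return { result, elapsedMs };
}
```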
I have a web application that needs to refresh some values often, because changes have to be available almost in real time. To do this, I call a refresh.php routine via Ajax every 15 seconds, which returns the updated information; the interval increases when there is no user activity.
I thought about creating a service worker in the browser (I already use one for the PWA) and opening a web socket in it. Then, only when there is an update on the server, the server would open a socket to the IPs of the logged-in users (which are saved in a database), just to tell each user's browser that there is an update; that message would trigger the JavaScript routine that connects to the server and fetches the update.
I don't know whether it's possible to create the socket just to announce an update. In this case I don't want to leave the socket open; I'd rather establish the connection only when there is new information, which would need to reach several users.
Has anyone ever needed or done anything like this, or would you have any other ideas?
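The "interval that increases with inactivity" polling described at the start can be sketched as a simple backoff, which is worth keeping as a fallback even if the socket idea works out (the numbers are illustrative, not from the question):

```javascript
// Polling cadence: base interval of 15 s, doubling for each idle cycle,
// capped so the data never goes completely stale.
function nextPollDelay(idleCycles, baseMs = 15000, maxMs = 120000) {
  return Math.min(baseMs * 2 ** idleCycles, maxMs);
}
```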
I have a requirement where a user presses a start timer button and it begins keeping track of time. As the user moves through the website, I want the time to continue tracking, until they press the stop button.
Obviously, this cannot be achieved through client-side JavaScript alone, since the time would be lost on each page refresh. One solution I considered was using Faye/WebSockets to push the time to the browser, but pushing data to the client every second would strain the server.
The only solution I can come up with is to track the time in JavaScript, capture the page unload event, send an Ajax request to the server with the elapsed time, and let the server keep incrementing the time until the next page has fully loaded. That means no push technology, just regular Ajax. Is this the best option, or is there a better solution?
What about the case where the user kills the browser? You won't be able to capture the unload event in this case.
If you want a client-side solution, try putting the start time in localStorage, so that it persists across page loads. Then, when the user hits stop, you can make an Ajax call to the server with the elapsed time.
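A sketch of that localStorage approach (the storage parameter is injectable only so the logic can be exercised outside a browser; in a page you would pass window.localStorage, and the key name is illustrative):

```javascript
// Persist the start time so it survives page loads; elapsed time is
// derived on demand rather than counted up second by second.
function startTimer(storage, now = Date.now) {
  if (storage.getItem("timerStart") === null) {
    storage.setItem("timerStart", String(now()));
  }
}

function elapsedMs(storage, now = Date.now) {
  return now() - Number(storage.getItem("timerStart"));
}

function stopTimer(storage) {
  storage.removeItem("timerStart"); // report elapsedMs to the server first
}
```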
I assume you need to display a timer to the user, which updates every second.
I have built a web application like that. It was a single-page application (AngularJS), so the user could navigate from 'page' to 'page' without a complete web page being loaded and the timer kept running. Would that be an option in your case?
Otherwise, you could put the start time in a cookie and every second display the difference between the current time and the start time.
A few other options, which are less preferred:
Run the web site in an iframe and keep the timer outside the iframe.
Let the timer run on the server and make a small AJAX request to the server every second (yes, I know...).
Hybrid: Let the timer run on the server and on the client and synchronize the client with the server on every page load.
Options 2 and 3 require a stateful server (with all its drawbacks).
I have a very long-running process in a PHP script (it generates a huge PDF).
I have a button in my HTML page that launches the PHP script, and I'd like to show some kind of progress bar, or at least an animated GIF, and then display the generated PDF when the script is done.
The generation of the PDF may take 15 minutes, so both the PHP engine and the browser time out.
Is there a way to declare some kind of client-side callback that would be invoked as soon as the server-side process is over?
Thanks for your replies
Edit :
Thanks for your replies :)
If I understand correctly, I must launch the process server-side and "detach" my client, i.e. not wait until the process is over. Instead, my client should periodically check the progress of the server-side process. Right?
If so, I may use the following scenario :
The client sends an Ajax request to the server. The server launches the process and returns a GUID to the client; this GUID identifies the job.
The client periodically checks the progress of the job via an Ajax request, using the GUID.
Once the job is over, the client issues a final Ajax request to download the PDF.
That means the server must save the generated PDF to disk, wait for the final Ajax request to send the file, and then delete it, right?
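The three-step scenario above can be sketched as follows (the endpoint paths and response shapes are assumptions, not an existing API, and fetchJson is injectable so the flow can be exercised without a server):

```javascript
// Launch the job, poll its status by GUID, then return the download URL.
async function generatePdf(fetchJson, { intervalMs = 2000 } = {}) {
  const { guid } = await fetchJson("/pdf/start");              // 1. launch
  for (;;) {
    const { status } = await fetchJson(`/pdf/status/${guid}`); // 2. poll
    if (status === "done") break;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return `/pdf/download/${guid}`;                              // 3. download
}
```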
For something as long as 15 minutes, I wouldn't even use web sockets for this. 15 minutes is a long time and there's really no telling what the user is going to be doing in the meantime. A disconnected notification process is probably going to be more reliable in this case.
Consider something like:
User initiates process, whereby a record is stored in the database "queueing" the process to take place.
User is immediately presented with a page indicating that the process has been queued and that they can continue to use the application.
A separate application which runs periodically (every minute? every few minutes?) checks for "queued" processes in the database, updates their status to "in-progress" (so subsequent runs don't also pick up the same records), and processes them.
As each process completes, it's either removed from the database or updated to a "completed" status.
The user is otherwise notified that the process is complete.
This final notification can be done a number of ways. An email can be sent to the user, for example. Or consider a user experience similar to the Facebook notification bar. Each page in the website can check for "completed" processes when the page loads and present a "notification" in the UI which directs the user to the result of the process. If users spend a lot of time on any given page then this would be a good place to use web sockets or long polling via JavaScript to keep checking for completed processes.
The main thing is to separate the user interface from the long-running process. Web applications by design aren't suited for processes which run for that long. By separating the concerns the web application can focus just on the user interface and the command-line application can focus on the processing. (As an added bonus, this would prevent users from over-loading the server with too many concurrent processes. The command-line application can just run one record at a time, so too many concurrent processes just slows down the response, not the server.)
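The queued-worker loop described above can be sketched with an in-memory array standing in for the database table (the statuses mirror the ones in the list; a real worker would run this periodically):

```javascript
// Pick up queued jobs, claim them so a concurrent run skips them,
// process them one at a time, and mark them completed.
function processQueue(jobs, run) {
  for (const job of jobs) {
    if (job.status !== "queued") continue;
    job.status = "in-progress"; // claim before processing
    run(job);                   // the long-running work
    job.status = "completed";   // ready for the UI notification check
  }
}
```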
As @David said, but no one has covered the progress bar yet. The implementation of this depends on what you know (you being the application creating the PDF):
Do you know the size of the PDF when complete?
Do you know how long it will take to generate?
Do you have code you can hook into to update the progress?
The application needs a way to know when to update the completed percentage, and by how much. If you can do that, then you can store the progress in the database from the script that creates the PDF and read it on a user-facing page, or store it in a file, etc.
The jQuery UI progress bar is easy to use, but you will have to know what percentage is done in order to tell the end user.
After that it is a pretty simple matter of using Ajax (jQuery $.post) and a file; that's how I do it. I just write a simple text file containing a number that represents the completion percentage, load it via Ajax, and feed it to the jQuery UI progress widget.
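A sketch of that file-based approach (the file name and element ID are illustrative, $.get stands in for whichever jQuery Ajax call you prefer, and parsePercent keeps a bad read from breaking the widget):

```javascript
// The PDF script writes a bare number (0-100) to progress.txt;
// the page polls the file and feeds the jQuery UI progressbar.
function parsePercent(text) {
  const n = Number.parseInt(String(text).trim(), 10);
  if (Number.isNaN(n)) return 0;
  return Math.min(100, Math.max(0, n)); // clamp to a valid percentage
}

function pollProgress() {
  $.get("progress.txt", (text) => {
    const value = parsePercent(text);
    $("#progressbar").progressbar("value", value);
    if (value < 100) setTimeout(pollProgress, 1000); // keep polling
  });
}
```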