I have an application that functions correctly 99% of the time. It's a relatively simple checkout system: the user submits a form, it runs through validation (all fields contain something), then fires against a payment processor. The payment processor uses an API to process the order and returns an error or success response; an error returns a message, while a success passes the user to a 'Thank You' page with the order information.
The problem we're having is that we're hearing about customers who say that when processing starts, a message appears in an overlay (as it's supposed to), then just hangs there. I've coded in a timeout which is supposed to wait 25 seconds, then send the user to the success page (minus any success information), which then tells them there was an error. However, in a small number of instances this is not happening.
I've tested this on the gauntlet of browsers and cannot replicate it, so I'm wondering...
If it's possible that a toolbar or plugin on the browser could be preventing the scripts from running correctly.
If there's some way I can programmatically check for errors like this and push the user on regardless (see the sketch below).
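For the second point, the kind of thing I have in mind is roughly this (the URL is a stand-in for our real success page, and the payment call itself is elided):

// Push the user on after 25 seconds no matter what, and also push them
// on immediately if any uncaught script error fires during processing.
var watchdog = setTimeout(function () {
    window.location = '/checkout/complete'; // success page, minus order info
}, 25000);

window.onerror = function () {
    clearTimeout(watchdog);
    window.location = '/checkout/complete';
    return true; // suppress the browser's default error reporting
};

// ... fire the payment request here; on a normal response, clear the
// watchdog and redirect with the real order information instead.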
Here's the code for reference: http://jsfiddle.net/XaP7z/
I know this is a long-winded and somewhat vague question but I'm grasping at straws and the client is not happy (regardless of this being a <1% issue).
I have made a Laravel 5.2 app which records warehouse transactions. Now my client wants a feature that will prevent his employees from closing the browser. I know this is possible in desktop apps. My client wants to stop his employees from stealing money by not saving a transaction. Is there any way I can prevent the user from closing the browser/window, or can I record everything if someone closes the browser/tab/window without clicking the save button (to catch the criminal :D)?
No, you cannot prevent the user from closing the browser via your web site, neither from the server side nor from the client side. Special software installed on the computer might make it possible, but the employee could e.g. simply kill the browser process.
One workaround that would make data loss less likely (though not impossible) would be to automatically save the data every time the user changes anything on the front end, e.g. via XMLHttpRequests.
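For illustration, a minimal front-end sketch of that auto-save idea (the '/autosave' endpoint and the payload shape are assumptions):

// Save on every change to any form field on the page.
document.addEventListener('change', function (e) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/autosave', true); // true = asynchronous
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify({ field: e.target.name, value: e.target.value }));
});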
In any case, you have to ensure in the backend (which is not in the control of the end user), that only complete and valid transactions are being saved and incomplete transactions are discarded. By doing this, the issue you are describing should be completely avoidable.
It is also very simple to check if a transaction was completed or not and who worked on it.
The moment the employee logs in or starts a new transaction, a flag is set which indicates an unfinished transaction. This event is also documented in a log. When the transaction finishes correctly, the flag is removed and another log entry is created. If not, the irregularity is visible in the log.
In this way, it is also possible to set a time limit on the duration of a transaction.
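To illustrate the shape of the idea only (the app in question is Laravel/PHP; this sketch is Node-style JavaScript with a hypothetical parameterized-query helper, not the app's actual code):

// Opening a transaction sets the 'unfinished' flag and writes a log entry.
async function openTransaction(query, employeeId) {
    const tx = await query(
        "INSERT INTO transactions (employee_id, status) VALUES (?, 'open')",
        [employeeId]);
    await query(
        "INSERT INTO transaction_log (tx_id, event) VALUES (?, 'opened')",
        [tx.id]);
    return tx.id;
}

// Closing it correctly clears the flag and logs the completion; any row
// still 'open' past the time limit shows up as an irregularity.
async function closeTransaction(query, txId) {
    await query("UPDATE transactions SET status = 'closed' WHERE id = ?", [txId]);
    await query(
        "INSERT INTO transaction_log (tx_id, event) VALUES (?, 'closed')",
        [txId]);
}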
Our team has a production-level Meteor app. In the app we have a particular Meteor method which sends an email. Today it sent 127 emails from one click of the Submit button (over the course of about 20 minutes).
I cannot post the exact code but the basic flow is pretty straightforward:
We catch the submit event and send everything to the server via Meteor.call
The Meteor method sends a request to a service to render a PDF
The Meteor method sends a request to SendGrid with the attachments and other email data
The Meteor method returns and triggers the callback
We don't have much basis for determining the exact problem yet and are still researching, but we suspect it is partly due to the end user's connection timing out and Meteor re-sending requests for which it did not get a response.
There are two threads we found related to the problem: https://groups.google.com/forum/#!topic/meteor-talk/vu5kk3t0Lr4 and https://github.com/meteor/meteor/issues/1285
Both threads answer that methods should be idempotent. Obviously, sending an email directly from a method is not idempotent, so we proposed that the Meteor method should add these emails to a queue and have a different service process the queue on a schedule. However, we do not want to start implementing speculative solutions that only might solve the problem.
So, this leaves me with two questions:
What exactly would cause a Meteor method to be called 127 times? How do we prevent this from happening? Is this a bug in Meteor or a bug in our app?
If we update the method so it uses EmailQueue.insert(...) (and let something else process the queue), does that just mean we would, in this case, put 127 records into the queue instead? Is the only solution here some sort of lock to ensure duplicate records are not inserted or processed?
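For context, the kind of idempotent insert we are considering (the collection, method, and field names are ours, purely illustrative): key each queued email on an id generated once on the client, so Meteor's re-sent calls collapse into a single record.

// Client: the id is generated once, so every retry carries the same id.
var submissionId = Random.id(); // Meteor's random package
Meteor.call('queueEmail', submissionId, emailData);

// Server: upsert keyed on that id; a retry matches the existing record
// instead of inserting a duplicate.
Meteor.methods({
    queueEmail: function (submissionId, emailData) {
        EmailQueue.upsert(
            { submissionId: submissionId },
            { $set: { emailData: emailData, queuedAt: new Date() } }
        );
    }
});

A unique index on submissionId would enforce the same guarantee at the database level.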
Thank you for any insight.
I would recommend you post the code; the process may be simple, but there are usually small subtleties that can help determine what's going on, e.g. is it async, and is there anything that may be blocking?
What may cause it
If Meteor doesn't respond to your browser, the client thinks the call has not yet fired and re-calls it. At least this is what I can gather may be happening from the information you've provided.
To get past this, ensure that:
When sending the email you either use this.unblock() (http://docs.meteor.com/#method_unblock) or use asynchronous JS for any of the processes you're doing (see the sketch after this list).
Ensure that nothing is blocking the main thread of your app. (It's hard to tell what it could be without any code.)
Errors. If Meteor restarts and reconnects, it will prompt the browser to re-send the Meteor.call.
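A minimal sketch of the this.unblock() point above (the method and helper names are placeholders):

Meteor.methods({
    sendReportEmail: function (orderId) {
        // Let subsequent method calls from this client run without
        // waiting for this slow one to finish.
        this.unblock();
        var pdf = renderPdf(orderId);  // placeholder for the PDF service call
        sendViaSendGrid(pdf);          // placeholder for the SendGrid request
    }
});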
The big clue here is that it took 20 minutes, so it's very likely to be one of the causes above.
How to check: look at the Network tab in your browser to see what's being sent to the server. Is the method being called multiple times? Are there reconnection attempts?
To answer the second question, on whether using an email queue would help: it may not; it depends more on what's causing the initial problem.
Additionally: I think SendGrid queues emails automatically anyway, so you could hand them all the emails at once and they would send them out according to your limits with them. (Not sure on this since I used them quite a while back.)
I have a very long-running process in a PHP script (generating a huge PDF).
I have a button in my HTML page that launches the PHP script, and I'd like to show some kind of progress bar, or at least an animated GIF, and then display the generated PDF when the script is over.
Generating the PDF may take 15 minutes, so both the PHP engine and the browser time out.
Is there a way to declare a kind of client-side callback that would be invoked as soon as the server-side process is over?
Thanks for your replies
Edit:
Thanks for your replies :)
If I understand correctly, I must launch the process on the server side and "detach" my client, i.e. not wait until the process is over. Instead, my client should periodically check the progression of the server-side process. Right?
If so, I might use the following scenario:
1. The client sends an Ajax request to the server. The server launches the process and returns a GUID to the client; this GUID identifies the job.
2. The client periodically checks the progression of the job via an Ajax request, using its GUID.
3. Once the job is over, the client issues a final Ajax request to download the PDF.
That means that the server must save the generated PDF on its disk and wait for the final Ajax request to send the file and delete it, right ?
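On the client, I imagine something like this (jQuery; the endpoint names and the guid field are invented for the sketch):

// Start the job, then poll its status every 5 seconds until it's done.
function startJob() {
    $.post('/job/start', function (data) {
        pollJob(data.guid);
    }, 'json');
}

function pollJob(guid) {
    $.getJSON('/job/status', { guid: guid }, function (status) {
        if (status.done) {
            // Final request: fetch the PDF the server saved to disk.
            window.location = '/job/download?guid=' + encodeURIComponent(guid);
        } else {
            setTimeout(function () { pollJob(guid); }, 5000);
        }
    });
}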
For something as long as 15 minutes, I wouldn't even use web sockets for this. 15 minutes is a long time and there's really no telling what the user is going to be doing in the meantime. A disconnected notification process is probably going to be more reliable in this case.
Consider something like:
User initiates process, whereby a record is stored in the database "queueing" the process to take place.
User is immediately presented with a page indicating that the process has been queued and that they can continue to use the application.
A separate application which runs periodically (every minute? every few minutes?) checks for "queued" processes in the database, updates their status to "in-progress" (so subsequent runs don't also pick up the same records), and processes them.
As each process completes, it's either removed from the database or updated to a "completed" status.
The user is otherwise notified that the process is complete.
This final notification can be done a number of ways. An email can be sent to the user, for example. Or consider a user experience similar to the Facebook notification bar. Each page in the website can check for "completed" processes when the page loads and present a "notification" in the UI which directs the user to the result of the process. If users spend a lot of time on any given page then this would be a good place to use web sockets or long polling via JavaScript to keep checking for completed processes.
The main thing is to separate the user interface from the long-running process. Web applications by design aren't suited for processes which run for that long. By separating the concerns the web application can focus just on the user interface and the command-line application can focus on the processing. (As an added bonus, this would prevent users from over-loading the server with too many concurrent processes. The command-line application can just run one record at a time, so too many concurrent processes just slows down the response, not the server.)
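To illustrate the queue-processing side, here is a hypothetical worker in Node-style JavaScript with the MongoDB driver (collection and field names are invented, and the original stack is PHP, so treat this as a sketch of the pattern rather than a drop-in):

const { MongoClient } = require('mongodb');

// Stand-in for the real long-running work.
async function generatePdf(job) { /* render the PDF here */ }

async function runOnce() {
    const client = await MongoClient.connect('mongodb://localhost:27017');
    const jobs = client.db('app').collection('jobs');

    // findOneAndUpdate claims the job atomically, so an overlapping run
    // cannot pick up the same record. (Driver v6 returns the document
    // directly; older drivers wrap it in { value }.)
    const job = await jobs.findOneAndUpdate(
        { status: 'queued' },
        { $set: { status: 'in-progress', startedAt: new Date() } }
    );
    if (job) {
        await generatePdf(job);
        await jobs.updateOne({ _id: job._id }, { $set: { status: 'completed' } });
    }
    await client.close();
}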
As #David said; but since no one has covered the progress bar: the implementation of this depends on what you know (you being the application creating the PDF).
Do you know the size of the PDF when complete?
Do you know how long it will take to generate?
Do you have code you can hook into to update the progress?
The application needs a way to know when to update the completed percentage, and by how much. If you can do that, then you can either store the progress in the database with the script that creates the PDF and read it on a user-facing page, or store it in a file, etc.
jQuery UI progress bar is easy to use, but you will have to know what percentage is done to be able to tell the end user.
After that it is a pretty simple matter of using Ajax (jQuery $.post) and a file; that's how I do it. I just write a simple text file with a number representing the completion percentage, load it via Ajax, and feed it to the jQuery UI progress widget.
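For example, something along these lines (the file name and polling interval are arbitrary):

// Poll a server-written text file containing the percent complete and
// feed it to the jQuery UI progressbar widget.
$('#progress').progressbar({ value: 0 });

setInterval(function () {
    $.get('progress.txt', function (pct) {
        $('#progress').progressbar('value', parseInt(pct, 10));
    });
}, 2000);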
I have a web server that generates questions for students of a particular subject. The web server needs to keep track of how much time each student has spent on a particular set of questions.
The web pages have a "Finished" button, which, when pressed, causes statistics to be sent to server.
However, I also want the web browser to send statistics if the student navigates away from the page or closes the browser window without pressing "Finished".
For this purpose, I have planned to have "onunload" or "onbeforeunload" send an Ajax request to the server with the relevant information. But apparently different browsers do not fully support these events, and there are also restrictions on what can be done in the event handlers. And, of course, I don't want the browser to freeze if communication with the server fails.
So, I need some advice on the best way to do this.
If I wanted to be sure to handle all the "special events", I would send "tick" requests from the webpage to the server. Granularity depends on the tracking requirements, the load, and whether it is an intranet or internet application; it can be a few seconds or even a minute. That way you are still tracking the time spent on the page even if the browser/OS/network crashes.
The best way to implement this is to use periodic updates. This will pretty much guarantee you have some relevant data when the user disconnects in any way.
An implementation is pretty trivial, although you might have to refactor some of your logic to send out periodic updates instead of everything at once.
function sendStatistics()
{
    // ajax and what not
}

// send a tick every second
setInterval(sendStatistics, 1000);
Another way to make it work is to make your Ajax call in beforeunload and make it synchronous. This will freeze the browser for the duration of the call, and it will also only work when navigating away or closing the browser; I don't recommend this.
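For completeness, the discouraged synchronous variant looks roughly like this (the '/track' endpoint is made up):

var startTime = Date.now();

window.addEventListener('beforeunload', function () {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/track', false); // false = synchronous: blocks the UI
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify({ timeOnPage: Date.now() - startTime }));
});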
I have an HTML5 application. It sends stuff to a database. Originally I was doing the database update instantly; however, now I am allowing for offline usage and instead letting the user choose when to send to the database (e.g. if they are on a plane, they might want to do it later).
So, now the problem is: if they shut down, I want to warn them first if they have failed to send any unsent data, and allow them to return to send it if necessary. I will ultimately be using local storage until it is sent, so it won't be lost, but the important thing is to let the user know it is as yet unsent.
I will have a global g_bUnsentData = true; if there is unsent data.
Can this be done in JavaScript, and how can you do it?
Also, as an aside, how do I test for online status, so I can warn the user if they are offline and trying to send data?
See #SmartK8's comment.
Nonetheless, if you bind a function to the beforeunload event that returns a string, most browsers will display that string in a confirmation to leave the page. For example:
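// (Reconstructed example: returning a string from beforeunload triggers
// the browser's leave-page confirmation; g_bUnsentData is the asker's flag.)
window.onbeforeunload = function () {
    if (g_bUnsentData) {
        return 'You have unsent data. Are you sure you want to leave?';
    }
};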
Some browsers might not display your returned string in the confirmation at all. Other browsers, especially mobile ones, will not even prevent the page from being navigated away from.
For testing the online status, navigator.onLine is a boolean indicating network availability. It's not available everywhere (Opera, Firefox), but it's a start. See this page at HTML5 Rocks for more details.
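A quick illustration of that check (the message wording is arbitrary):

if (!navigator.onLine) {
    alert('You appear to be offline; your data will be kept until you reconnect.');
}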