Browser timing out JavaScript recursive function, how to solve? - javascript

I had to develop a newsletter manager with JS + PHP + MySQL, and I would like to know a few things about browsers timing out JS functions. If I'm running a recursive function that delays a call to itself (while PHP returns a list of emails), how can I be sure that the browser won't time out this JS function?
I'm asking because I remember using a similar newsletter manager that, while doing the AJAX requests, stopped after a few calls for no apparent reason. I know JS is not meant for this, and I should use crontab on the server, but I can't assume the user's server supports cron, so I had to stick with JS + PHP.
PS - This hasn't happened in this app yet; I'm just trying to prevent the worst-case scenario (since I've tested a newsletter manager that worked the same way as the one I'm developing). Since my dummy email list is small and the delays between calls are also small, this works just fine, but let's imagine a 1,000-contact list with a delay between sends of 120 seconds: sending 30 emails every 2 minutes.
By the way, why do this? Well, many hosting servers have a limit on emails sent per day or hour, and this helps avoid violating that policy.

From the MooTools standpoint, there are several possible solutions here.
Request.Periodical - http://mootools.net/docs/more/Request/Request.Periodical
This has plenty of options that allow for handling batches of jobs. Look at it as a more complex .periodical (setInterval) that understands the async nature of the result and can compensate for lag, etc. I think it can do what you set out in your requirements out of the box; all you need is an onComplete callback that clears the finished job from your pending array, for example (see the sketch after these two options).
Request.Queue - http://mootools.net/docs/more/Request/Request.Queue
Basically, set up all your requests to handle the chunks of data and pass them to Request.Queue to run sequentially. This is probably less sophisticated from the point of view of send-rate control.
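A rough, untested sketch of the Request.Periodical idea — assuming the delay option and the startTimer()/stopTimer() methods described in the linked docs; the URL, the pending array, and the batching logic are placeholders, not part of the original answer:

var pending = [/* chunks of your email list, e.g. ids 1-30, 31-60, ... */];

var sender = new Request.Periodical({
    url: 'sendBatch.php',   // hypothetical endpoint that sends one batch per call
    method: 'post',
    delay: 120000,          // 2 minutes between calls, per the question's requirements
    onComplete: function () {
        pending.shift();                          // clear the finished chunk from the pending array
        if (!pending.length) sender.stopTimer();  // nothing left to send, stop polling
    }
});

sender.startTimer();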

How about a meta refresh? That will not cause a timeout in your JavaScript function. You just reload your page after a specific time and then send the next batch of emails. By adding a parameter to the URL, you can keep track of which "round" you are on.
Can this do the job for you?
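A rough sketch of that reload-with-a-round-parameter idea, using a plain JavaScript redirect in place of a literal <meta http-equiv="refresh"> tag; send.php, the ?round= parameter, and the 120-second delay are illustrative, not from the original answer:

// Read the current round from the query string (defaults to 0 on first load).
var match = window.location.search.match(/[?&]round=(\d+)/);
var round = match ? parseInt(match[1], 10) : 0;

// Ask the server to send this round's batch, then reload the page for the next round.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'send.php?round=' + round, true);
xhr.onload = function () {
    setTimeout(function () {
        window.location.href = window.location.pathname + '?round=' + (round + 1);
    }, 120000); // wait 2 minutes before moving on, mirroring the question's send rate
};
xhr.send();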

You need to use setTimeout. The code needs to yield control to the UI thread and let the browser stay responsive, to prevent the script from being stopped (see the sketch below).
Read this post by Nick Z.
http://www.nczonline.net/blog/2009/01/13/speed-up-your-javascript-part-1/
There is also something in the W3C spec about this called "Efficient Script Yielding". I'm not sure how far along it is or whether any browsers support it.
https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/setImmediate/Overview.html
You could also try HTML5 Web Workers.
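To make the setTimeout approach concrete, here is a minimal, untested sketch for the newsletter scenario in the question; sendBatch.php, the batch counter, and the "done" response are hypothetical placeholders:

var batch = 0;

function sendNextBatch() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'sendBatch.php?batch=' + batch, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            batch++;
            if (xhr.responseText !== 'done') {
                // Yield back to the browser and schedule the next batch; each run of
                // the script stays short, so the "slow script" warning never triggers.
                setTimeout(sendNextBatch, 120000); // 120-second delay between sends
            }
        }
    };
    xhr.send();
}

sendNextBatch();

Because each timer callback is a fresh, short script run, there is nothing for the browser to time out; the only thing that stops the loop is the user closing or navigating away from the page.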

Related

Alternatives to executing a script using cron job every second?

I have a radio station at Tunein.com. In order to update album art and artist information, I need to send the following
# Update the song now playing on a station
GET http://air.radiotime.com/Playing.ashx?partnerId=<id>&partnerKey=<key>&id=<stationid>&title=Bad+Romance&artist=Lady+Gaga
The only way I can think of to do this would be to set up a PHP/JS page that updates the &title and &artist parts of the URL and sends the request off if there is a change. But I'd have to execute it every second, or at least every few seconds, using cron.
Are there any other more efficient ways this could be done?
Thank you for your help.
None of the code in this answer was tested. Use at your own risk.
Since you do not control the third-party API and the API is not capable of pushing information to you when it's available (an ideal situation), your only option is to poll the API at some interval to look for changes and to make updates as necessary. (Be sure the API provider is okay with such an approach as it might violate terms of use designed to prevent system abuse.)
You need some sort of long-running process that will execute at a given interval.
You mentioned cron calling a PHP script which is one option (here cron is the long-running process). Cron is very stable and would be a good choice. I believe though that cron has a minimum interval of 1 minute. I'm sure there are similar tools out there, but those might require you to have full control over your server.
You could also make a PHP script the long-running process with something like this:
while (true) {
    doUpdates(); # Call the API, make updates, etc.
    sleep(5);    # Wait 5 seconds
}
If you do go down the PHP route, error handling of some sort will be a must:
while (true) {
    try {
        doUpdates();
    } catch (Exception $e) {
        # manage the error
    }
    sleep(5);
}
Personal Advice
Using PHP as a daemon is possible, but it is not as well tested as the typical use of PHP. If this task were given to me, I'd write a server/application in JavaScript using Node.js. I would prefer Node because it is designed to work as a long-running process, intervals/events are a key part of JavaScript, and I would be more confident in it working well for this specific task than in PHP.
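A minimal, untested sketch of what that Node.js process might look like; getNowPlaying() is a placeholder for however you detect the current track, and the <id>/<key>/<stationid> values are left exactly as the placeholders from the question:

var http = require('http');

function getNowPlaying() {
    // Replace with a real lookup (read a file, query your playout system, etc.).
    return { title: 'Bad Romance', artist: 'Lady Gaga' };
}

var lastTrack = '';

setInterval(function () {
    var track = getNowPlaying();
    var current = track.title + '|' + track.artist;
    if (current === lastTrack) return; // nothing changed, skip this tick
    lastTrack = current;

    var url = 'http://air.radiotime.com/Playing.ashx' +
        '?partnerId=<id>&partnerKey=<key>&id=<stationid>' +
        '&title=' + encodeURIComponent(track.title) +
        '&artist=' + encodeURIComponent(track.artist);

    http.get(url, function (res) {
        console.log('Update sent, status ' + res.statusCode);
    }).on('error', function (err) {
        console.error('Update failed: ' + err.message); // log and keep polling
    });
}, 5000); // poll every 5 seconds, mirroring the sleep(5) in the PHP example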

Way to determine/circumvent if an AJAX request timed out?

I have a simple web page (PHP, JS and HTML) that is displayed to show that a computation is in progress. This computation is triggered by a pure JavaScript AJAX request to a PHP script that does the actual computations.
For details, please see here.
What the actual computation is does not play a role, so for simplicity it is just a sleep() command.
When I execute the same code locally (browser calls the website under localhost: Linux, Apache, PHP module) it works fine, independent of the sleep time.
However, when I let it run on a different machine (not localhost, but also Linux, Apache, PHP module), the PHP script does run through (results are created), but the AJAX request never gets a response, so there is no "onreadystatechange" if the sleep time is >= 900 seconds. When the sleep time is < 900 seconds it works nicely and the AJAX request is correctly terminated (readyState == 4 and status == 200).
The Apache and PHP configurations are more or less default, and I have already verified the crucial options there (max_execution_time etc.), but none seems relevant here, as they are either shorter (< 1 min) or longer, e.g. the garbage collector (24 min).
So I am absolutely confused about what may cause this. I am thinking it might be network-related, although I didn't find any appropriate option in my router.
Also, no error is reported in the Apache logs or in PHP (error logging to file).
When I let the JavaScript display the request.status upon successful return, surprisingly, if I hit "Esc" in the browser window after the sleep is over, I do get the status "200" displayed, but not automatically, as it should.
In any case, I am hoping that you may have an idea of how to circumvent this problem.
Maybe some dummy communication between client and server every 10 minutes or so might do the trick, but I don't have an idea of how best to do something like this, especially in a way that is transparent to the user and does not interfere with the actual work of doing the computations/sleep.
Best,
Shadow
P.S. The post that I am referencing was written by me, but it seems to transmit the idea that the problem might be related to some config option, which seems not to be the case. This is why I am writing this post here, basically asking for a way to circumvent such an issue regardless of its origin.
I'm from the other post you mentioned!
Now that I know more about what you are trying to do (monitor a possibly long-running server job), I can recommend something that should turn out a lot better. It's not a direct answer to your question, but it's a design consideration that by its nature includes a more suitable solution.
Basically, decouple the action of "starting" the server-side task from monitoring its progress.
execute.php kicks off your background job on the server and immediately returns.
Another script/URL (let's call it status.php) is available to check the progress of the task execute.php is performing.
When status.php is requested, it won't return until it has something to report, UNLESS 30 seconds (or some other fixed amount of time) passes, at which point it returns a value that you know means "check again". Do this in a loop, and you will be notified almost immediately when the background task has completed.
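A rough client-side sketch of that loop, assuming a status.php that blocks for up to ~30 seconds and answers either "pending" (meaning "check again") or the final result; the script names and the output element are illustrative:

function checkStatus() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'status.php', true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) return;
        if (xhr.status === 200 && xhr.responseText !== 'pending') {
            showResult(xhr.responseText); // the computation finished
        } else {
            checkStatus();                // "check again" answer: loop right away
        }
    };
    xhr.send();
}

function showResult(result) {
    document.getElementById('output').textContent = result; // hypothetical status element
}

// execute.php kicks off the long computation and returns immediately,
// then we start watching status.php.
var kickoff = new XMLHttpRequest();
kickoff.open('GET', 'execute.php', true);
kickoff.onload = checkStatus;
kickoff.send();

Because no single request stays open longer than the status.php window, the 900-second cutoff you are hitting should never come into play.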
More details on an approach similar to this: http://billhiggins.us/blog/2011/04/27/resty-long-ops
I hope this helps give you some design ideas to address your problem!

How can live search / search suggestions be implemented using Dojo?

I want to implement a 'live search' or 'search suggestions' feature in a web application that uses the Dojo Framework. It would be similar to the way Google and Bing searches display matches as you type: when you type in the search box, a list of potential matches appears below. Searches would be performed server side, with the results sent back to the browser using AJAX.
Does anyone know of a good way to implement this using Dojo?
Here are some potential options:
The built-in widget dijit.form.ComboBox
This has very similar functionality, but I've only seen it used with limited data sets. The examples always use small lists (such as the 50 US states) and preload the entire data set for client-side filtering. However, I presume I could hook it up to a dojox.data.JsonQueryRestStore for server-side search — can anyone confirm whether that works?
QueryBox http://marumushi.com/code/querybox/
This implementation mainly does the job, but it has some minor bugs and doesn't look like it's being maintained. I'd have to do some bugfixes on the code before using it.
Medryx http://blog.medryx.org/2008/09/10/dijitsearch-part-2/
This also looks like it does the job, but it is described as 'alpha-level' code and the link to the code seems to be broken...
I could probably make one of the above work, but I'd like to know if there are any better alternatives out there.
I implemented it 5 years ago when Dojo was at 0.2:
http://www.lazutkin.com/blog/2005/12/23/live-filtering/
While the code is ancient, it is trivial, and hopefully it'll give you ideas on how to attack it. The rough sketch:
Attach an event handler to your input box that is triggered on changes — use "onkeyup" to detect a change in the input box.
Wait until the user has stopped typing by setting a timer in your event handler, if one is not set yet. 200-500 ms is a good waiting time.
The timeout plays a dual role:
It throttles our requests to the server to prevent overloading.
It plays on our perception of time and our typing habits.
If the timeout is up and we are not waiting on the server ⇒ send the server the string we have so far.
If we are still waiting on the server, cancel that request and ask again.
This part is app-specific: we don't want to overload the server, and sometimes a server cannot handle broken connections well.
In the example I don't cancel the XHR call, but wait for it to finish before submitting a new request.
Server responds with relevant results, which are promptly shown.
In the blog post I implemented it as a widget. Obviously the exact packaging is up to you.
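For reference, a small framework-agnostic sketch of that debounce-and-query pattern; the element ids, the search URL, and the 300 ms delay are illustrative, and Dojo's own XHR helpers could stand in for the raw XMLHttpRequest:

var box = document.getElementById('searchBox');
var list = document.getElementById('suggestions');
var timer = null;
var pendingXhr = null;
var retryWhenDone = false;

box.onkeyup = function () {
    if (timer) clearTimeout(timer); // restart the wait on every keystroke
    timer = setTimeout(query, 300); // 200-500 ms is the range suggested above
};

function query() {
    timer = null;
    if (pendingXhr) { retryWhenDone = true; return; } // let the current call finish first
    var term = box.value;
    pendingXhr = new XMLHttpRequest();
    pendingXhr.open('GET', 'search.php?q=' + encodeURIComponent(term), true);
    pendingXhr.onload = function () {
        list.innerHTML = pendingXhr.responseText; // show the relevant results
        pendingXhr = null;
        if (retryWhenDone) { retryWhenDone = false; query(); } // the text changed meanwhile
    };
    pendingXhr.send();
}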

A reasonable number of simultaneous, asynchronous ajax requests

I'm wondering what the consensus is on how many simultaneous asynchronous AJAX requests are generally allowable.
The reason I ask is that I'm working on a personal web app. For the most part I keep my requests down to one. However, there are a few situations where I send up to 4 requests simultaneously. This causes a bit of delay, as the browser will only handle 2 at a time.
The delay is not a problem in terms of usability, for now. And it'll be a while before I have to worry about scalability, if ever. But I am trying to adhere to best practices, as much as is reasonable. What are your thoughts? Is 4 requests a reasonable number?
I'm pretty sure the browser limits the number of connections you can have anyway.
If you have Firefox, type in about:config and look for network.http.max-connections-per-server and that will tell you your maximum. I'm almost positive that this will be the limit for AJAX connections as well. I think IE is limited to 2. I'm not sure about Chrome or Opera.
Edit:
In Firefox 23 the preference with name network.http.max-connections-per-server doesn't exist, but there is a network.http.max-persistent-connections-per-server and the default value is 6.
That really depends on whether it works properly like that. If the logic of the application is built so that 4 simultaneous requests make sense, do it that way. If the logic isn't disturbed by packing multiple requests into one, you can do that, but only if it won't make the code more complicated. Keep it as simple and straightforward as possible until you have problems, then you can start to optimize.
But you can also ask yourself whether the design of the app can be improved so that there is no need for multiple requests.
Also check it on a really slow connection. Simultaneous HTTP requests are not necessarily executed on the server in the proper order, and they might also return in a different order. That might cause problems you'll experience only on slower lines.
It's tough to answer without knowing some details. If you're just firing the requests and forgetting about them, then 4 requests could be fine, as could 20 - as long as the user experience isn't harmed by slow performance. But if you have to collect information back from the service, then coordinating those responses could get tricky. That may be something to consider.
The previous answer from Christian has a good point - check it on a slow connection. Fiddler can help with that as it allows you to test slow connections by simulating different connection speeds (56K and up).
You could also consider firing a single async request that could contain one or more messages to a controlling service, which could then hand the messages off to the appropriate services, collect the results and then return back to the client. Having multiple async requests being fired and then returning at different times could present a choppy experience for the user as each response is then rendered on the page at different times.
In my experience 1 is the best number, but I'll agree there may be some rare situations that might require simultaneous calls.
If I recall, IE is the only browser that still limits connections to 2. This causes your requests to be queued, and if your first 2 requests take longer than expected or time out, the other two requests will automatically fail. In some cases you also get the annoying "allow script to continue" dialog in IE.
If your user can't really do anything until all 4 requests come back (especially with IE's bogged-down JavaScript performance), I would create a transport object that contains the data for all requests, and then a returning transport object that can be parsed and delegated on return (see the sketch below).
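A rough sketch of that single-transport-object idea (which also matches the controlling-service suggestion two answers up); the endpoint name, message shapes, and dispatcher are made up for illustration:

var transport = {
    messages: [
        { service: 'profile',  payload: { userId: 42 } },
        { service: 'messages', payload: { unreadOnly: true } },
        { service: 'settings', payload: {} },
        { service: 'news',     payload: { limit: 10 } }
    ]
};

function handle(service, result) {
    console.log(service, result); // replace with per-service rendering
}

var xhr = new XMLHttpRequest();
xhr.open('POST', 'controller.php', true); // one controlling service on the server
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.onload = function () {
    // The returning transport object mirrors the request: one entry per message,
    // so every response can be delegated to the right handler in a single pass.
    JSON.parse(xhr.responseText).forEach(function (entry) {
        handle(entry.service, entry.result);
    });
};
xhr.send(JSON.stringify(transport));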
I'm not an expert in networking, but four probably wouldn't be much of a problem for a small to medium application; however, the larger it gets, the higher the server load, which could eventually cause problems. This doesn't really answer your question, but here is a suggestion: if delay is not a problem, why don't you use a queue?
var request = []; // a queue of the requests to be sent to the server

request[request.length] = { /* whatever you want to send to the server */ };
startSend();

function startSend() { // if this is the only item in the queue, go ahead and send it
    if (request.length === 1) {
        send();
    }
}

function send() { // the ajax call to the server using the first request in the queue
    var sendData = request[0];
    var xhr = new XMLHttpRequest();      // a plain XHR stands in for the elided send code
    xhr.open('POST', '/endpoint', true); // whatever URL your server-side script lives at
    xhr.onload = function () {
        process(xhr.responseText);       // hand the response to a function that processes the data
    };
    xhr.send(JSON.stringify(sendData));
}

function process(data) {
    request.splice(0, 1);     // remove the request that just finished
    if (request.length > 0) { // check to see if you need to do another ajax call
        send();
    }
    // process data
}
This probably isn't the best way to do it, but that's the idea; you could modify it to do 2 requests at a time instead of just one. Also, you could modify it to send as many requests as there are in the queue as a single request. Then the server splits them up, processes each one, and sends the data back — all at once, or even as it finishes each one, since the server can flush the data several times. You just have to make sure you are parsing the response text correctly.

Captivate - LMS - SCORM communication problems

I'm developing a SCORM compliant LMS, and having some problems with Captivate generated contents.
Basically, the behavior is: if you go through a SCO (Captivate-generated content) with, for example, 15 slides and 1 question on each slide quickly, my LMS does not track all 15 questions, only the first 3 or 4. If you wait a long time at the end, or if you take the content slowly, it works fine.
After a lot of Google searches, debugging, and tracing, I finally found two main issues:
1) Captivate - SCORM API communication is asynchronous (the same as Flash - JavaScript communication). So, when the user goes through the content quickly, the function calls get more and more delayed, and at the end the user may be answering question 15 while the content is still sending question 4's information. I cannot change the Flash or JS-Flash interface, because it is provided by Captivate.
Is there a way to make this synchronous? I mean, to force the Flash to wait somehow?
2) The functions are taking longer each time they are called; for example, SetValue takes 7 milliseconds the first time and 200 the last time it is called.
To understand this problem, here is a little background:
Captivate content (all content really, but Captivate especially) calls a specific function many times: the SetValue function, one of the SCORM API functions. This function takes two parameters (fieldName, value); the first one is the name of the field to be set and the second the new value. In my implementation, this function first validates the value using a regular expression and then sets the value in an object.
OK, I can add a lot more info, but I don't know what is really important. I'm not hoping you'll fix my code without seeing it, but I'm out of ideas and need new opinions, ideas, directions... maybe somebody will ask the right question... Help :)
Thanks
When publishing for SCORM, Captivate does not use synchronous communication methods.* Depending on the browser, Captivate uses either FSCommand or the old-school getURL method to communicate with the HTML file; the HTML file then uses JavaScript to relay the data to the LMS via the SCORM API.
The response (if any) is relayed from JavaScript to either FSCommand or a proxy SWF (for getURL), which is then monitored internally in Captivate via a callback function. This callback function uses timers, and that's probably where your problem lies.
If you're setting g_intAPIType to 0, you're forcing the browser to use FSCommand, which isn't supported in all browsers and operating systems. Setting g_intAPIType to 1 means you're forcing the browser to use getURL, which is cross-browser but has a few drawbacks (including lots of clicking sounds).
In both cases, the data is sent via an internal queue script, which uses the waitForResponse callback function.
The performance problems you're encountering are likely due to the queuing, and the asynchronous communication compounds the problem because of timers attached to waitForResponse. Changing g_intAPIType will probably only have a minor effect on your performance issues, though using getURL (g_intAPIType=1) may help improve consistency from browser to browser.
Regardless of the g_intAPIType settings, you cannot prevent the internal tracking mechanism from using the asynchronous waitForResponse function, so there is no way to stop Captivate from using timers when getting/setting data; over a period of time you will probably start to notice longer and longer delays like the ones you described, esp. if you're making a lot of calls to the LMS.
(* Small exception: I've been informed Captivate 4 and 5 use ExternalInterface if the project is built in AS3 and is published for SCORM 2004, but it appears the queue and waitForResponse timers are still used, basically treating ExternalInterface like the asynchronous methods listed above.)
Some Options:
You could change how you are doing the questions. Instead of 1 per frame, put all the questions on 1 frame.
Otherwise, you will need to do some JavaScript magic in your SCORM Player JavaScript. I would start with minimizing the JS code with a tool like JSMin.
Then try to cache the JS files so they are only loaded once. I suspect that the files are being called over and over with each frame.
"There is a way to make this sync?? I mean, to force the flash wait some way?"
Apparently, the problem is this one :
"Captivate is the only SCO that calls SCORM JavaScript functions asynchronously. Firefox is the only browser that does not force synchronous communications between the SCO and the supporting JavaScript. When a Captivate SCO, running on Firefox, submits a status update to one of the JS functions, Captivate does not wait for a success or fail response before submitting the next status update. Since Captivate is quite verbose in its communications and JavaScript is not multithreaded, quiz status submissions can stack up and overwrite each other. This can cause a loss of data - especially for longer quizzes. [...]
If you'd like to see the asynchronous problem with any other LMS, take a long Captivate quiz using Firefox and answer the questions very quickly. Some of the questions near the end will get dropped.. " (interzoic.com forum)
And maybe a solution:
"The slow issue is resolved when I force the g_intAPIType to 0 (into the
.htm file), so it force Captivate to communicate as if it was into IE."
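Purely for illustration (and assuming the variable name quoted above — the exact declaration varies by Captivate version), forcing the API type in the Captivate-generated .htm file would look something like this:

// In the <script> block of the published .htm file:
var g_intAPIType = 0; // 0 forces FSCommand-style (IE-like) communication; 1 forces getURL

As the earlier answer explains, this only changes which transport Captivate uses; the internal queue and waitForResponse timers remain asynchronous.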
In Captivate, while publishing a SCORM package you will see the option "Send tracking data at the end".
Use this option; it will resolve your problem.
