I have a script that pings a series of URLs with a GET method. I only want to ping them each once and do not expect a response. My script works in Chrome and Safari, but Firefox won't complete the later requests.
Is there a way to trigger Firefox to make a series of calls (five, to be precise) once each, and not care if they fail? It seems that Firefox won't complete the series of requests when the first ones fail.
I'm working in JavaScript and jQuery, with a little jQuery.ajax() thrown in. I've searched to no avail and have reached the limit of my beginner's skill set. Any insight would be appreciated.
(If you're interested in the full scope, there's code at jquery-based standalone port knocker)
Thank you.
Update:
After further research, I believe the issue is that Firefox isn't handling the calls truly asynchronously. I have versions of the code that make the pings with img calls, iframe URL calls, and AJAX calls, and they work in Chrome and Safari, but in Firefox they're not behaving as I need them to.
Our server monitoring for the knock sequence should see requests arrive sequentially at ports 1, 2, 3, 4, 5 (as it does when using Chrome or Safari), but in Firefox, no matter which method I've tried, on the first attempt I see it ping port 1 twice, then port 2, and on subsequent attempts I only see it ping port 1. My status updates appear as expected, but the server isn't receiving the calls in the order it needs them. It seems that Firefox is retrying failed calls rather than executing each one once, in sequence, which is what I need it to do.
Here is a sample of my script using a simple jQuery.ajax() call. It works in Safari and Chrome, but doesn't achieve the desired result in Firefox. All my code runs and I can see the status updates (generated with jQuery's .append() function), but the requests aren't sent once each, sequentially, to my server.
<script src="http://code.jquery.com/jquery-latest.js"></script>
<script type="text/javascript">
$(document).ready(function(){
    $('button').click(function(){
        $('#knocks').append('<p>Knocking...</p>');
        setTimeout(function(){
            $.ajax({url: 'https://example.sample.com:1111'});
            $('#knocks').append("<p>Knock 1 of 5 complete...</p>");
        }, 500);
        setTimeout(function(){
            $.ajax({url: 'https://example.sample.com:2222'});
            $('#knocks').append("<p>Knock 2 of 5 complete...</p>");
        }, 3500);
        setTimeout(function(){
            $.ajax({url: 'https://example.sample.com:3333'});
            $('#knocks').append("<p>Knock 3 of 5 complete...</p>");
        }, 6500);
        setTimeout(function(){
            $.ajax({url: 'https://example.sample.com:4444'});
            $('#knocks').append("<p>Knock 4 of 5 complete...</p>");
        }, 9500);
        setTimeout(function(){
            $.ajax({url: 'https://example.sample.com:5555'});
            $('#knocks').append("<p>Knock 5 of 5 complete...</p>");
        }, 12000);
        setTimeout(function(){
            $('#knocks').append("<p>Knocking is complete... <br>Proceed to site: <a href='http://example-url.sample-url.com'>http://example-url.sample-url.com</a></p>");
        }, 13000);
    });
});
</script>
Seeing that there's no real answer to your question yet and you probably want to move on, I thought I'd give you some suggestions as a starting point.
For your function calls to truly execute in sequential order (synchronously, in order, blocking), you have to make sure each subsequent call (an AJAX request, in your case) is issued only once the preceding request has completed, whether it succeeded or failed (in the failure case you might not want to proceed with the next in-order call at all, and issue a completely different response instead).
The way you're doing it now isn't synchronous; it is asynchronous and merely delayed (running "in the background" after a timeout). This can cause all kinds of problems when you expect your AJAX calls to reach your server in a strict, blocking order. Browsers may re-issue failed or timed-out requests (for various reasons, depending on their feature set and how they handle failed requests, caching, and so on), and some pre-emptively issue requests and cache the results when a pre-fetcher is enabled (or whatever Firefox calls it), then re-issue them if the pre-fetch failed. I believe this is similar to what you observed in Firefox and is probably the main culprit behind the unexpected behavior. As you can't control which features end users enable or disable in their browser, or what new features will be implemented in future versions, you can't really expect setTimeout delays to make your server calls execute in order, even if they appear to do so in other browsers (probably because your server responds fast enough for them to appear that way).
In your code, the second call would only appear to be executing synchronously (waiting for the first one to complete) for up to half a second, the third request for up to three and a half seconds, and so on. But even if setTimeout were blocking execution (which it isn't), which external request would it be waiting for? The first one, or the second? I think you get what I'm trying to say, and why your code doesn't work as expected.
Instead, you should either issue each subsequent AJAX call from your server's response (which is actually the point of using AJAX; otherwise there's no need for it), or, preferably, create an external listener function that handles these calls according to the status and/or return values of your previous external calls. If you need to handle failed requests as well and continue execution regardless, then the external listener (with a preset execution timeout) is the way to go, as you obviously can't depend on the responses of failed requests.
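Just as an illustration, here is a minimal sketch of that listener idea, reusing the URLs from your question (the 3-second timeout is my own assumption; tune it to whatever suits your server):
// chain the knocks so each one fires only after the previous request
// has completed, whether it succeeded or failed
var knockUrls = [
    'https://example.sample.com:1111',
    'https://example.sample.com:2222',
    'https://example.sample.com:3333',
    'https://example.sample.com:4444',
    'https://example.sample.com:5555'
];

function knock(i) {
    if (i >= knockUrls.length) {
        $('#knocks').append('<p>Knocking is complete...</p>');
        return;
    }
    $.ajax({ url: knockUrls[i], timeout: 3000 })
        .always(function () {
            // .always() runs on success and on failure, so the sequence
            // continues even though the knock ports never answer
            $('#knocks').append('<p>Knock ' + (i + 1) + ' of 5 complete...</p>');
            knock(i + 1);
        });
}

knock(0);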
You see, browsers have no problem issuing multiple concurrent requests, and delaying them with setTimeout doesn't stop pre-fetchers from trying to cache their responses for later use either. Nor does it make the requests blocking, with each one waiting for the previous one to finish, as you expected. Most browsers will happily use up to a certain number of concurrent connections (~10 on client machines, a lot more on servers) in an effort to speed up downloads and/or page rendering, and some have even more advanced caching mechanisms in place for the very same reason, Firefox being merely one of them.
I hope this clears things up a bit and you'll be able to rewrite your code to work as expected. As we have no knowledge of how your server-side code is supposed to work, you'll have to write it yourself, though. There are, however, plenty of threads on SE discussing similar techniques that you might decide to use, and you can always ask another question if you get stuck and we'll be glad to help.
Cheers!
Related
Does the JSVM run in just one thread?
I am wondering how JavaScript functions execute inside the VM.
The source code below is interesting:
// include jQuery as $
function test() {
    $.ajax({url: "xxx.com"})
        .success(function() { alert("success 1"); })
        .fail(function() { alert("fail 1"); });
    $.ajax({url: "yyy.com"})
        .success(function() { alert("success 2"); })
        .fail(function() { alert("fail 2"); });
    while (true);
}
It will spin in an infinite loop at the "while" line and never pop up any alert dialog showing either "success" or "fail".
We know that inside $.ajax, the VM creates an XMLHttpRequest and sends an HTTP request.
After sending out the two requests, it reaches the "while" line.
Thus I imagine that the JSVM:
1) can handle only one function call at a time (a function runs atomically), and
2) follows a first-come, first-served rule.
Is my understanding right?
Can anyone explain the internal implementation of the JSVM?
More specifically:
If we use AngularJS to develop a front-end app, we might want to do something and then immediately record a log to a remote server in a form submit event such as ng-submit.
function ngSubmitTest() {
    doA();
    recordA(ajax, remoteServer); // must run after doA()
}
If recordA uses AJAX, we should ensure that recordA completes before ng-submit redirects the page, killing the old page and with it the VM (if the old page is killed, recordA may never complete). One solution is to do the AJAX call with async=false. I wonder whether there are any other solutions?
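For reference, the async=false workaround mentioned above might look roughly like this (just a sketch; the /log URL and the payload are made up):
function recordA() {
    // synchronous request: blocks until the server answers, so the
    // redirect can't kill the page before the log call completes
    $.ajax({
        url: '/log',            // hypothetical logging endpoint
        type: 'POST',
        data: { event: 'A' },   // hypothetical payload
        async: false
    });
}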
Thanks.
The implementation of JS depends on the context you're running it in.
Each browser has its own implementation, and they can do whatever they want as long as they follow the language specification.
It shouldn't bother you whether it runs on one thread or several, but you can be sure JavaScript is not a "threaded" language: it works with an event loop, in which an event fires and the associated functions run after it, until there is nothing more to call. This is why it's pretty hard to block the UI in JavaScript if you're writing "good" code.
A good example of how this works, and of the differences between event loops and classic threading, is node.js; I'll give you an example:
Suppose you're listening for requests on a server, and 2 seconds after a request arrives you send a message. Now suppose you duplicate that listener, and both listeners do the same thing. If you hit the server, you'll get both messages at the same time, 2 seconds after the request is made, instead of one message after 2 seconds and the other after 4. That means both listeners are running at the same time, instead of following the linear execution most systems do.
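A rough sketch of that scenario, using Node's built-in http module (the port number is arbitrary):
var http = require('http');

http.createServer(function (req, res) {
    // the timer doesn't block the event loop, so two requests arriving
    // together are both answered ~2 seconds later, not 2 and 4 seconds later
    setTimeout(function () {
        res.end('message sent 2 seconds after the request arrived\n');
    }, 2000);
}).listen(8080);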
Node runs on Chrome's V8, if you're wondering; it's a very professional JS engine and it was a breakthrough when it came out.
I have an unpleasant situation in which one of my "long" responses somehow blocks other AJAX requests.
I call 3 different resources simultaneously:
var list = ['/api/filters', '/api/criteria/brands', '/api/criteria/genders'];
list.forEach(function(item){ $.post(item); });
On the server side I can see the following times in the log file:
GET /api/filters 304 51ms
GET /api/criteria/genders 200 1ms
GET /api/criteria/brands 200 0ms
That looks fine to me, but in the browser the picture is completely different.
[screenshot of the Google Chrome network tab]
So it looks like the browser waits for the answer to the first (long) request and only afterwards receives the last 2 results.
What could be the reason for this behavior?
Every browser only handles a certain number of simultaneous requests at a time. If you fire 10 AJAX requests at the same time, the browser puts them in a queue and handles them one after the other.
You can find more information about concurrent requests in browsers (the limit covers images, JavaScript, etc. as well) in this question.
The node server is single-threaded, and any piece of code that uses CPU cycles blocks the entire process.
As a result, if GET /api/filters does a lot of CPU-intensive computation, it will block any other requests until it completes. Adding some more information about what it actually does would help in putting together a better answer.
In case you have IO operations in there, try to make them asynchronous. That will allow node to serve the other URLs while the first one is doing IO.
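For illustration only, assuming /api/filters reads something from disk (the file name and port are made up), the non-blocking version would look roughly like this:
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    if (req.url === '/api/filters') {
        // non-blocking read: node keeps serving the /api/criteria/* requests
        // while the file is being read
        fs.readFile('filters.json', function (err, data) {
            res.end(err ? '{}' : data);
        });
    } else {
        res.end('[]');
    }
}).listen(3000);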
I'm wondering what the consensus is on how many simultaneous asynchronous AJAX requests are generally allowable.
The reason I ask is that I'm working on a personal web app. For the most part I keep my requests down to one. However, there are a few situations where I send up to 4 requests simultaneously. This causes a bit of delay, as the browser will only handle 2 at a time.
The delay is not a problem in terms of usability, for now. And it'll be a while before I have to worry about scalability, if ever. But I am trying to adhere to best practices, as much as is reasonable. What are your thoughts? Is 4 requests a reasonable number?
I'm pretty sure the browser limits the number of connections you can have anyway.
If you have Firefox, type in about:config and look for network.http.max-connections-per-server and that will tell you your maximum. I'm almost positive that this will be the limit for AJAX connections as well. I think IE is limited to 2. I'm not sure about Chrome or Opera.
Edit:
In Firefox 23 the preference named network.http.max-connections-per-server no longer exists, but there is a network.http.max-persistent-connections-per-server, whose default value is 6.
That really depends on whether it works properly that way. If the logic of the application is built so that 4 simultaneous requests make sense, do it like that. If the logic isn't disturbed by packing multiple requests into one, you can do that too, but only if it won't make the code more complicated. Keep it as simple and straightforward as possible until you have problems, then you can start to optimize.
But you can also ask yourself whether the design of the app can be improved so that there is no need for multiple requests.
Also check it on a really slow connection. Simultaneous HTTP requests are not necessarily executed on the server in the expected order, and they might also return in a different order. That can cause problems you'll only experience on slower lines.
It's tough to answer without knowing some details. If you're just firing the requests and forgetting about them, then 4 requests could be fine, as could 20 - as long as the user experience isn't harmed by slow performance. But if you have to collect information back from the service, then coordinating those responses could get tricky. That may be something to consider.
The previous answer from Christian has a good point - check it on a slow connection. Fiddler can help with that as it allows you to test slow connections by simulating different connection speeds (56K and up).
You could also consider firing a single async request containing one or more messages to a controlling service, which then hands the messages off to the appropriate services, collects the results, and returns them to the client. Having multiple async requests fired off and returning at different times can present a choppy experience for the user, as each response is rendered on the page at a different time.
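A minimal sketch of that idea (the /dispatch endpoint and the message shapes are hypothetical):
var messages = [
    { service: 'profile',  action: 'load' },
    { service: 'settings', action: 'load' }
];

$.ajax({
    url: '/dispatch',                 // controlling service that fans the messages out
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify(messages),
    success: function (results) {
        // one combined response comes back, so the page can be
        // rendered in a single pass instead of piecemeal
    }
});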
In my experience 1 is the best number, but I'll agree there may be some rare situations that might require simultaneous calls.
If I recall correctly, IE is the only browser that still limits connections to 2. This causes your requests to be queued, and if your first 2 requests take longer than expected or time out, the other two requests will automatically fail. In some cases you also get the annoying "allow script to continue" dialog in IE.
If your user can't really do anything until all 4 requests come back (especially with IE's bogged-down JavaScript performance), I would create a single transport object that contains the data for all the requests, and a returned transport object that can be parsed and delegated out when it comes back.
I'm not an expert in networking, but four probably wouldn't be much of a problem for a small to medium application. However, the larger it gets, the higher the server load, which could eventually cause problems. This doesn't really answer your question, but here is a suggestion: if delay is not a problem, why don't you use a queue?
var request = []; // a queue of the requests to be sent to the server

request.push({ /* whatever you want to send to the server */ });
startSend();

function startSend(){ // if nothing else is in the queue, go ahead and send this one
    if(request.length === 1){
        send();
    }
}

function send(){ // the ajax call to the server using the first request in the queue
    var sendData = request[0];
    $.ajax({
        url: '/your-endpoint', // placeholder endpoint
        type: 'POST',
        data: sendData,
        complete: function(xhr){ // when the response (or an error) comes back,
            process(xhr.responseText); // hand it to a function to process the data
        }
    });
}

function process(data){
    request.splice(0, 1); // remove the request that just finished
    if(request.length > 0){ // check to see if you need to do another ajax call
        send();
    }
    // process data
}
This probably isn't the best way to do it, but that's the idea; you could modify it to allow 2 requests in flight instead of just one. You could also modify it to send all the requests currently in the queue as a single request; the server then splits them up, processes each one, and sends the data back, either all at once or as it goes, since the server can flush the data several times. You just have to make sure you're parsing the response text correctly.
I'm writing a webapp (Firefox-compatible only) which uses long polling (via jQuery's ajax abilities) to send more-or-less constant updates from the server to the client. I'm concerned about the effects of leaving this running for long periods of time, say, all day or overnight. The basic code skeleton is this:
function processResults(xml)
{
// do stuff with the xml from the server
}
function fetch()
{
setTimeout(function ()
{
$.ajax({
type: 'GET',
url: 'foo/bar/baz',
dataType: 'xml',
success: function (xml)
{
processResults(xml);
fetch();
},
error: function (xhr, type, exception)
{
if (xhr.status === 0)
{
console.log('XMLHttpRequest cancelled');
}
else
{
console.debug(xhr);
fetch();
}
}
});
}, 500);
}
(The half-second "sleep" is so that the client doesn't hammer the server if the updates are coming back to the client quickly - which they usually are.)
After leaving this running overnight, it tends to make Firefox crawl. I'd been thinking that this could be partially caused by a large stack depth since I've basically written an infinitely recursive function. However, if I use Firebug and throw a breakpoint into fetch, it looks like this is not the case. The stack that Firebug shows me is only about 4 or 5 frames deep, even after an hour.
One of the solutions I'm considering is changing my recursive function to an iterative one, but I can't figure out how I would insert the delay between AJAX requests without spinning. I've looked at the JS 1.7 "yield" keyword, but I can't quite wrap my head around it enough to figure out if it's what I need here.
Is the best solution just to do a hard refresh on the page periodically, say, once every hour? Is there a better/leaner long-polling design pattern that won't put a hurt on the browser even after running for 8 or 12 hours? Or should I just skip the long polling altogether and use a different "constant update" pattern since I usually know how frequently the server will have a response for me?
It's also possible that it's Firebug. You're console.logging stuff, which means you probably have a network monitor tab open, etc., which means every request is stored in memory.
Try disabling it, see if that helps.
I suspect that memory is leaking from processResults().
I have been using very similar code to yours in a long-polling web application, which is able to run uninterrupted for weeks without a page refresh.
Your stack should not be deep, because fetch() returns immediately. You do not have an infinitely recursive loop.
You may want to use the Firefox Leak Monitor Add-on to assist you in finding memory leaks.
The stack depth of 4-5 is correct. setTimeout and $.ajax are asynchronous calls, which return immediately. The callback is later called by the browser with an empty call stack. Since you cannot implement long polling in a synchronous way, you must use this recursive approach. There is no way to make it iterative.
I suspect the reason for this slowdown is that your code has a memory leak. The leak could either be in jQuery's $.ajax (very unlikely) or in your processResults call.
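If you want to convince yourself that the stack really stays shallow, a quick sketch you can paste into the browser console is:
function fetchDemo() {
    setTimeout(function () {
        console.trace(); // the trace shows only this callback, not a growing
                         // chain of fetchDemo frames
        fetchDemo();     // "recursing" here does not deepen the stack
    }, 500);
}
fetchDemo();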
It is a bad idea to call fetch() from inside the method itself. Recursion is better used when you expect that at some point the method will reach an end and results will start to be returned to the caller. The thing is, when you call a method recursively it keeps the calling method open and using memory. If you are only 3-4 frames deep, it is because jQuery or the browser is somehow "fixing" what you've done.
Recent releases of jQuery support long polling by default. This way you can be sure you are not depending on the browser's intelligence to deal with your infinite recursive call. When calling the $.ajax() method you could use the code below to do a long poll combined with a safe 2-second wait before each new call.
function myLongPoll(){
    setTimeout(function(){
        $.ajax({
            type: 'POST',
            dataType: 'JSON',
            url: 'http://my.domain.com/action',
            data: {},
            cache: false,
            success: function(data){
                //do something with the result
            },
            complete: myLongPoll,
            async: false,
            timeout: 5000
        });
        //Doesn't matter whether the ajax call took 1 millisecond or
        //5 seconds (the timeout), the next call will only happen after 2 seconds.
    }, 2000);
}
This way you can be sure that the $.ajax() call has finished before the next one starts. You can verify it by adding a simple console.log() before and another after your $.ajax() call.
Well, first I want to say I'm a bit new to the world of Internet dev.
Anyway, I'm trying to find out whether it's possible to run two pieces of code in parallel using JavaScript.
What I really need is to call two methods that live on a remote server. For both, I pass a callback function that will be executed as soon as the data I want is ready. As the server running these functions takes time to answer, I'm trying to find a way to call both methods at the same time without needing to wait for the first to finish before calling the second.
Do methods like setTimeout run concurrently? For example:
setTimeout(func1, 0);
setTimeout(func2, 0);
...

function func1()
{
    webMethod1(function() { alert("function 1 returned"); });
}

function func2()
{
    webMethod2(function() { alert("function 2 returned"); });
}
Edited
I've just found this article that may be very relevant for upcoming browser releases: JavaScript web workers
There is a single thread of execution in JavaScript in normal web browsers: your timer handlers will be called serially. Your approach using timers will work in the case you present.
There is a nice piece of documentation on timers by John Resig (author of the very popular jQuery JavaScript framework; if you are new to web development, I would suggest you look it up).
Now, if you are referring to HTML5-based browsers, at some point they should have threading support.
Yes, that's exactly how web requests through AJAX work. There's no need for setTimeout with 0; you can just call them one by one, and each AJAX request is executed asynchronously, letting you pass a callback function to be invoked when the request completes.
The means of creating an AJAX request differs somewhat depending on what browser you're running. If you're going to build something that depends considerably on AJAX, and you want it to work across multiple browsers, you're best off with a library. Here's how it's done in jQuery, for instance:
$.ajax({ url: '/webrequesturl', success: function(result) {
// this will be called upon a successful request
} });
$.ajax({ url: '/webrequest2url', success: function(result) {
// this will be called upon a successful request
// this may or may not be called before the above one, depending on how long it takes for the requests to finish.
} });
Well, JavaScript is single-threaded; the two timers will run sequentially, one after the other, even if you don't notice it.
I would recommend you take a look at the following article; it really explains how timers and asynchronous events work, and it will also help you understand the single-threaded nature of JavaScript:
How JavaScript Timers Work
And as an alternative you could take a look at Web Workers, which are a way to run scripts in separate background threads, but they are only supported by modern browsers.
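A minimal Web Worker sketch (worker.js is a hypothetical file name):
// main page:
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
    console.log('result from worker:', e.data);
};
worker.postMessage({ numbers: [1, 2, 3] });

// worker.js:
onmessage = function (e) {
    // this runs in a separate background thread, so it can't block the page
    var sum = e.data.numbers.reduce(function (a, b) { return a + b; }, 0);
    postMessage(sum);
};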
What you are looking for is asynchronous client-server communication (keyword: async). Asynchronous functions return straight away, but the provided callback will be executed after the specified condition is satisfied.
So, if the function that sends a request to the server is asynchronous, this would let you send both requests to the server without waiting for one to respond.
Using setTimeout may work, as it will schedule both request-sending functions to be called. However, some browsers only run one thread of JavaScript at a time, so if the request-sending functions block while waiting for a reply, one of the scheduled functions would run and block, and the other would wait until the first was done before starting.
It is advisable to use the async support of your server-communication library. For instance, jQuery uses async by default.
It depends on the JavaScript engine.