I have my own stripped-down script that currently uses either XHR or script tags, depending on browser support. These requests ultimately return some JSON. My problem is that elements of this object now need to be updated by the server while on the client, i.e. I need to implement some kind of long-poll/comet solution.
Google turns up lots of solutions that use various frameworks such as jQuery to do this on the client side. However, that is not an option for me.
I was wondering if you had any suggestions on how I could extend my existing approach to allow comet-style updates from the server. One of the standard approaches seems to be using hidden iframes. This is a no-go, as the app server that provides my JSON data is different from the actual web server.
jQuery simply wraps around the XHR/XMLHttpRequest object.
First off, you need a small function to return the object in a cross-browser manner. This can be done in 3 lines or less, which is not too difficult. That said, there are great snippets around for fixing different browser issues, such as memory leaks, and I highly suggest you use one of them. These of course span more than 3 lines (unless minified). But in either case, if you want repeated connections, this isn't something you want to reinvent from scratch.
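For illustration, a minimal cross-browser factory of the kind described above might look like this (the createXHR name is just for the example; the ActiveXObject fallback only matters for old versions of IE):

function createXHR() {
    if (window.XMLHttpRequest) {
        return new XMLHttpRequest();               // modern browsers
    }
    return new ActiveXObject('Microsoft.XMLHTTP'); // old IE fallback
}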
Next, on the server side, assuming you're on PHP:
set_time_limit(300);      // let the connection stay open for up to 5 minutes
ignore_user_abort(false); // if the connection ends, terminate immediately

while (true) {
    if (some_condition()) {
        echo some_response();
        break;   // break the loop and end the request
    }
    sleep(2);    // wait a second or two before checking again
}
On the client side, just repeat the query whenever the connection ends. This is also the point where you handle the output.
Client-side example (using jQuery for brevity):
function poll(){
    jQuery.get('http://somesite.com/poll.php', function(data){
        alert('Just received: ' + data);
        poll(); // repeat poll
    });
}
poll(); // begin polling
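Since jQuery is not an option for you, a rough plain-XHR version of the same loop would look something like this (sketch only; error handling and back-off between retries are left out):

function poll() {
    var xhr = createXHR(); // the cross-browser factory from above
    xhr.open('GET', 'http://somesite.com/poll.php', true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            if (xhr.status === 200) {
                alert('Just received: ' + xhr.responseText);
            }
            poll(); // repeat poll whenever the connection ends
        }
    };
    xhr.send(null);
}
poll(); // begin polling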
At the risk of getting roasted for not posting code, what is the best way of getting around the 6-concurrent-call limitation for AJAX requests?
So currently I'm working with an application that can have up to 40 or so AJAX requests on page load. For background, most of these requests are for various graphs hidden behind tabs. There are some requests that can be triggered by the user (such as updating the name of an entity without refreshing the page). This means that, with the limitation on concurrent requests, the user won't be able to change anything until there are only 5 other requests running, and that's an unacceptable user experience.
This may mean that the app is structured badly, but most of the things loading are not required right away.
Anyway, I've looked a bit into fetch() and webworkers but can't find any information on whether these would help get around the limitation.
One solution would be to put resources on different subdomains, but this makes the backend API unnecessarily complicated (and it's a browser issue, not a server issue).
I've considered these approaches:
delay requests until the user actively needs them (IMO this is a bad user experience, because they will have to wait a little bit, often, which is annoying)
create a queuing system that leaves open one spot for user-initiated requests (I'm not sure how to implement this, but it should be doable; see the sketch after this list)
restructure the API so that more data is returned per request (this again is mainly a backend solution that feels a little dirty and unRESTful; also, it won't necessarily improve the load time)
chaining calls, as in Multiple Async AJAX Calls Best Practice (however, given that there is an unpredictable number of calls to unrelated endpoints, I don't think this is all that practical here)
web workers? (again, not sure if these could help, since they are used to multithread JS)
fetch()? (I can't find info on whether this is subject to the same limitation)
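For the queuing idea, this is roughly the kind of thing I have in mind (just a sketch; the limit of 6, the single reserved slot, and the tasks returning Promises are assumptions):

var MAX_BACKGROUND = 5;   // use at most 5 of the 6 connections for background loads
var activeBackground = 0;
var backgroundQueue = [];

function queueBackground(task) { // task is a function returning a Promise (e.g. a fetch call)
    if (activeBackground < MAX_BACKGROUND) {
        runBackground(task);
    } else {
        backgroundQueue.push(task);
    }
}

function runBackground(task) {
    activeBackground++;
    task().finally(function () {
        activeBackground--;
        if (backgroundQueue.length > 0) {
            runBackground(backgroundQueue.shift());
        }
    });
}

// user-initiated requests are fired directly, so the 6th slot is always free for them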
This is very much opinion based.
40 requests is not unreasonable but depending on your server and site setup it can take quite a while.
With that many calls I would bundle some of them together in an initializePage=X call. This does involve some server-side work.
Delaying requests is not necessarily bad, depending on your estimated time to deliver. If possible, you could present some kind of animation or "expected result" until the response ticks in, to keep the user entertained. The same applies to queuing your requests.
Restructuring your code to return everything in a bundle could also greatly speed up your site if you run a lot of initialization on your server (like security checks).
If performance is still a concern you can look into connections that provide faster results such as EventSource or WebSocket. Such a faster connection also allows for a more flexible approach to chaining. EventSource, for instance, supports events, so you could set several events on a single, bundled request and fire them as the server returns data.
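As a rough sketch of that EventSource idea (the /init-bundle endpoint and the event names are made up; the server would have to label each chunk with a matching event: field):

var source = new EventSource('/init-bundle'); // hypothetical bundled endpoint
source.addEventListener('graphs', function (e) {
    renderGraphs(JSON.parse(e.data)); // hypothetical renderer for the graph data
});
source.addEventListener('user', function (e) {
    renderUser(JSON.parse(e.data));   // hypothetical renderer for user data
});
source.addEventListener('done', function () {
    source.close(); // stop listening once the server says the bundle is complete
});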
Web workers are not the answer, as the problem here is connection speed and concurrent connection limits.
I don't think we can answer this question directly. Several of the solutions you have mentioned are viable but vary in level of effort. If you are willing to adjust the architecture, you can consider a GraphQL approach, which can handle the bundling for you. I would also say that you can keep REST but have a special proxy service that bundles data for you. And don't let RESTfulness dictate or force how you develop.
Also, delaying requests until the user needs them seems like the appropriate choice to me. It's the basis for why we have "above the fold" CSS styling and infinite scrolling. Load what is needed right now first and defer the stuff that might not matter until it's actually needed.
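Lazy-loading the hidden tabs, for example, is not much code; something like this sketch (the data-url attribute and the loadGraph() renderer are assumptions):

document.querySelectorAll('.tab').forEach(function (tab) {
    tab.addEventListener('click', function () {
        if (tab.dataset.loaded) {
            return; // already fetched, nothing to do
        }
        fetch(tab.dataset.url)                           // hypothetical per-tab endpoint
            .then(function (res) { return res.json(); })
            .then(function (data) {
                loadGraph(tab, data);                    // hypothetical renderer
                tab.dataset.loaded = 'true';
            });
    });
});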
Concurrency of AJAX calls comes into the picture when these requests are made from one thread. If a web worker is used with AJAX, then there is no issue at all, the reason being that each instance of a web worker is isolated, in a thread that is not the main thread.
I call it JaxWeb, and I will be pushing a Git repo in the coming week where you can find pure JS code that takes care of it. It is being tested right now, but yes, it does solve the problem.
Example:
Add the code below to JaxWeb.js:
onmessage = function (e) {
    var JaxWeb = function (e) {
        var jax = {
            requestChannel: {},
            get_csrf_token: function () {
                return this._csrf_token;
            },
            set_csrf_token: function (_csrf_token = null) {
                this._csrf_token = _csrf_token;
            },
            prepare: function (e) {
                // open the request inside the worker and post the parsed
                // response back to the main thread when it arrives
                this.requestChannel = new XMLHttpRequest();
                this.requestChannel.onreadystatechange = function () {
                    if (this.readyState == 4 && this.status == 200) {
                        postMessage(JSON.parse(this.responseText));
                    }
                };
                this.requestChannel.open(e.data.method, e.data.callname, true);
                this.requestChannel.setRequestHeader("X-CSRF-TOKEN", e.data.token);
                var postData = '';
                if (e.data.data)
                    postData = JSON.stringify(e.data.data);
                this.requestChannel.send(postData);
            }
        };
        // call prepare() as a method so that `this` refers to the jax object
        jax.prepare(e);
        return jax;
    };
    return JaxWeb(e);
};
Usage:
jaxWebGetServerResponse = function () {
    var wk2 = new Worker('path_to_jaxweb_js/JaxWeb.js');
    wk2.postMessage({
        "callname": '<url end point>',
        "method": '<your http method>',
        "token": '<csrf token, if your endpoint requires one>',
        "data": ''
    });
    wk2.onmessage = function (serverResponse) {
        // process results
        // with data that is received from the server
    };
};
//Invoke the function
jaxWebGetServerResponse();
I have a performance problem in JavaScript that has lately been causing crashes at work. With the objective of modernising our applications, we are looking into running them as web servers, to which our clients would connect via a browser (Chrome, Firefox, ...), with all our interfaces running as HTML+JS web pages.
To give you an overview of our performance needs, our applications run image processing from camera sources, in some cases at more than 20 fps, but in most cases around 2-3 fps max.
Basically, we have a web server written in C++, which handles HTTP requests and provides the user with the HTML pages of the interface and the corresponding JS scripts of the application.
In order to simplify the communication between the two applications, I then open a WebSocket between the web page and the C++ server to send formatted messages back and forth. These messages can be pretty big, up to several MB.
It all works pretty well as long as the FPS stays relatively low. When the fps increases, one of the following two things happens.
Either the C++ web server's memory footprint increases pretty fast and it crashes when no more memory is available. After investigation, this happens when the network is saturated and the WebSocket send cache fills up. I think this is due to the WebSocket's TCP/IP way of doing things, as the socket must wait for one message to be sent and received before sending the next one.
Or the browser crashes after a while, showing the "Aw, Snap!" screen (see figure below). In that case it seems that more or less the same thing happens, but this time due to the garbage-collection strategy. The other figure below shows a screenshot of the memory usage while the application is running, clearly showing a saw-tooth pattern. It seems to indicate that garbage collection is doing its work at intervals that are further and further apart.
I have tracked the problem down to very big messages (>100 KB) being sent at a fast rate (several per second). And the bigger the messages, the faster it happens. In order to use a message I receive, I start a web worker, pass the Blob I received to the web worker, the web worker uses a FileReaderSync to convert the message to an ArrayBuffer, and passes it back to the main thread. I expect this to involve quite a lot of copies under the hood, but I am not well versed enough in JS yet to be sure of this. Also, I initially did the same thing without the web worker (using FileReader), but the frame rate and CPU usage were really bad...
Here is the code I call to decode the messages:
function OnDataMessage(msg)
{
    var webworkerDataMessage = new Worker('/js/EDXLib/MessageDecoderEvent.js'); // please no comments about this, it's actually a bit nicer on the CPU than reusing the same worker :-)
    webworkerDataMessage.onmessage = MessageFileReaderOnLoadComManagerCBack;
    webworkerDataMessage.onerror = ErrorHandler;
    webworkerDataMessage.postMessage(msg.data);
}

function MessageFileReaderOnLoadComManagerCBack(e)
{
    comManager.OnDataMessageReceived(e.data);
}
and the webworker code:
function DecodeMessage(msg)
{
    var retMsg = new FileReaderSync().readAsArrayBuffer(msg);
    postMessage(retMsg);
}

function receiveDecodingRequest(e)
{
    DecodeMessage(e.data);
}

addEventListener("message", receiveDecodingRequest, true);
My questions are the following:
Is there a way to make the GC not have to collect so much memory, for instance by telling some of the parts I use to reuse buffers instead of recreating them, or by keeping the GC work intervals fixed? This is something I know how to do in C++, but how do I do it in JS?
Is there another method I should use for my big payloads? Keep in mind that the transmission should be as fast as possible.
Is there another method for reading Blob data as ArrayBuffers that would be faster than what I did?
Thank you in advance for your help/comments.
As it turns out, the memory problem was due to the new Worker line and the new FileReaderSync line in the web worker.
Removing these greatly improved the performance!
Also, it turns out that this decoding operation is not necessary if I want to use the WebSocket data as an ArrayBuffer. I just need to set the binaryType attribute of the WebSocket to "arraybuffer"...
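The fix is tiny; a minimal sketch (the endpoint URL is made up):

var ws = new WebSocket('wss://example.com/stream'); // hypothetical endpoint on the C++ server
ws.binaryType = 'arraybuffer';                      // frames now arrive as ArrayBuffer instead of Blob
ws.onmessage = function (msg) {
    // msg.data is already an ArrayBuffer, so no FileReader/worker step is needed
    comManager.OnDataMessageReceived(msg.data);
};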
So all in all, a very simple solution to a pain in the *** problem :-)
I have a radio station at Tunein.com. In order to update album art and artist information, I need to send the following
# Update the song now playing on a station
GET http://air.radiotime.com/Playing.ashx?partnerId=<id>&partnerKey=<key>&id=<stationid>&title=Bad+Romance&artist=Lady+Gaga
The only way I can think of to do this would be by setting up a PHP/JS page that updates the &title and &artist parts of the URL and sends the request off if there is a change. But I'd have to execute it every second, or at least every few seconds, using cron.
Are there any other more efficient ways this could be done?
Thank you for your help.
None of the code in this answer was tested. Use at your own risk.
Since you do not control the third-party API and the API is not capable of pushing information to you when it's available (an ideal situation), your only option is to poll the API at some interval to look for changes and to make updates as necessary. (Be sure the API provider is okay with such an approach as it might violate terms of use designed to prevent system abuse.)
You need some sort of long-running process that will execute at a given interval.
You mentioned cron calling a PHP script which is one option (here cron is the long-running process). Cron is very stable and would be a good choice. I believe though that cron has a minimum interval of 1 minute. I'm sure there are similar tools out there, but those might require you to have full control over your server.
You could also make a PHP script the long-running process with something like this:
while (true) {
    doUpdates(); # Call the API, make updates, etc.
    sleep(5);    # Wait 5 seconds
}
If you do go down the PHP route, error handling of some sort will be a must:
while (true) {
    try {
        doUpdates();
    } catch (Exception $e) {
        # manage the error
    }
    sleep(5);
}
Personal Advice
Using PHP as a daemon is possible, but it is not as well tested as the typical use of PHP. If this task were given to me, I'd write a server/application in JavaScript using Node.js. I would prefer Node because it is designed to work as a long-running process, intervals/events are a key part of JavaScript, and I would be more confident in that working well than PHP for this specific task.
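A minimal Node.js sketch of the same polling loop might look like this (doUpdates() is a placeholder for whatever reads your now-playing data and calls the TuneIn API):

async function doUpdates() {
    // hypothetical: read the current track from your playout system and,
    // if it changed, send the Playing.ashx request shown in the question
}

setInterval(async function () {
    try {
        await doUpdates();
    } catch (err) {
        console.error('poll failed:', err); // one failed poll shouldn't kill the process
    }
}, 5000); // every 5 seconds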
I'm creating a web application that allows users to make changes through Javascript. There is not yet any AJAX involved, so those changes to the DOM are being made purely in the user's local browser.
But how can I make those DOM changes occur in the browser of anyone else who is viewing that page at the time? I assume AJAX would be involved here. Perhaps the page could just send the entire, JS-modified source code back to the server and then the other people viewing would receive very frequent AJAX updates?
Screen sharing would obviously be an easy work-around, but I'm interested to know if there's a better way, such as described above.
Thanks!
You are talking about comet; for an easy implementation I'd suggest:
http://www.ape-project.org/
and also check these:
http://meteorserver.org/
http://activemq.apache.org/ajax.html
http://cometdaily.com/maturity.html
and the new HTML5 way of doing it:
http://dev.w3.org/html5/websockets/
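If you go the WebSocket route, the client side is quite small; a sketch (the wss://example.com/sync server and the message format are assumptions):

var socket = new WebSocket('wss://example.com/sync'); // hypothetical sync server
socket.onmessage = function (e) {
    // e.g. { selector: '#title', html: 'New text' } pushed by the server
    var change = JSON.parse(e.data);
    document.querySelector(change.selector).innerHTML = change.html;
};
// broadcast your own DOM change so the other viewers get it too
function broadcastChange(selector, html) {
    socket.send(JSON.stringify({ selector: selector, html: html }));
}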
Hope these help.
Max,
Ajax will have to be involved. If I may, I'd like to suggest jQuery as a starting point for this (I know you didn't tag it as such, but I feel it'd be appropriate, even if only to prototype with). The basic semantics would involve running the AJAX request in combination with a setInterval() timer to fire it off. This could be done in jQuery along the lines of:
$(document).ready(function() {
    // run the initial request
    GetFreshInfo();
    // set the query to run every 15 seconds
    setInterval(GetFreshInfo, 15000);
});
function GetFreshInfo(){
    // do the ajax get call here (could be a .net or php page etc)
    $.get('mypageinfostuff.php', null, function(data){
        $('#myDivToUpdate').html(data);
    });
}
that's the basic premise, i.e. the web page is loaded via GetFreshInfo() straight away, then it's re-queried every 15 seconds. You can add logic to only refresh the div if there is new data there, rather than always updating the page. As it's AJAX, the page won't freeze and the process will be almost invisible to the user (unless you want to flag the changes in any way).
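the "only refresh if there is new data" check could be as simple as caching the last response (just a sketch, lastData is an in-page variable):

var lastData = null;
function GetFreshInfo(){
    $.get('mypageinfostuff.php', null, function(data){
        if (data !== lastData) {        // skip the DOM update when nothing changed
            lastData = data;
            $('#myDivToUpdate').html(data);
        }
    });
}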
Hope this helps
jim
I'm wondering what the consensus is on how many simultaneous asynchronous ajax requests is generally allowable.
The reason I ask, is I'm working on a personal web app. For the most part I keep my requests down to one. However there are a few situations where I send up to 4 requests simultaneously. This causes a bit of delay, as the browser will only handle 2 at a time.
The delay is not a problem in terms of usability, for now. And it'll be awhile before I have to worry about scalability, if ever. But I am trying to adhere to best practices, as much as is reasonable. What are your thoughts? Is 4 requests a reasonable number?
I'm pretty sure the browser limits the number of connections you can have anyway.
If you have Firefox, type in about:config and look for network.http.max-connections-per-server and that will tell you your maximum. I'm almost positive that this will be the limit for AJAX connections as well. I think IE is limited to 2. I'm not sure about Chrome or Opera.
Edit:
In Firefox 23 the preference with name network.http.max-connections-per-server doesn't exist, but there is a network.http.max-persistent-connections-per-server and the default value is 6.
That really depends on whether it works properly like that. If the application's logic is built so that 4 simultaneous requests make sense, do it that way. If the logic isn't disturbed by packing multiple requests into one, you can do that, but only if it won't make the code more complicated. Keep it simple and straightforward until you have problems, then you can start to optimize.
But you can also ask yourself whether the design of the app can be improved so that there is no need for multiple requests.
Also check it on a really slow connection. Simultaneous http requests are not necessarily executed on the server in the proper order and they might also return in a different order. That might give problems you'll experience only on slower lines.
It's tough to answer without knowing some details. If you're just firing the requests and forgetting about them, then 4 requests could be fine, as could 20 - as long as the user experience isn't harmed by slow performance. But if you have to collect information back from the service, then coordinating those responses could get tricky. That may be something to consider.
The previous answer from Christian has a good point - check it on a slow connection. Fiddler can help with that as it allows you to test slow connections by simulating different connection speeds (56K and up).
You could also consider firing a single async request that could contain one or more messages to a controlling service, which could then hand the messages off to the appropriate services, collect the results and then return back to the client. Having multiple async requests being fired and then returning at different times could present a choppy experience for the user as each response is then rendered on the page at different times.
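A rough sketch of that single batched request (the /controller endpoint and the message shapes are made up):

var messages = [
    { service: 'graphs',  params: { id: 1 } },
    { service: 'profile', params: { id: 42 } }
];

var xhr = new XMLHttpRequest();
xhr.open('POST', '/controller', true);           // hypothetical controlling service
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.onload = function () {
    var results = JSON.parse(xhr.responseText);  // one result per message, collected server-side
    // delegate each result to the code that knows how to render it
};
xhr.send(JSON.stringify(messages));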
In my experience 1 is the best number, but I'll agree there may be some rare situations that might require simultaneous calls.
If I recall, IE is the only browser that still limits connections to 2. This causes your requests to be queued, and if your first 2 requests take longer than expected or time out, the other two requests will automatically fail. In some cases you also get the annoying "allow script to continue" dialog in IE.
If your user can't really do anything until all 4 requests come back (especially with IE's bogged down JavaScript performance) I would create a transport object that contains the data for all requests and then a returning transport object that can be parsed and delegated on return.
I'm not an expert in networking, but four probably wouldn't be much of a problem for a small to medium application; however, the larger it gets, the higher the server load, which could eventually cause problems. This doesn't really answer your question, but here is a suggestion: if delay is not a problem, why don't you use a queue?
var request = []; // a queue of the requests to be sent to the server

request[request.length] = { /* whatever you want to send to the server */ };
startSend();

function startSend(){ // if nothing else is in the queue, go ahead and send this one
    if(request.length === 1){
        send();
    }
}

function send(){ // the ajax call to the server using the first request in the queue
    var sendData = request[0];
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'server.php', true); // hypothetical endpoint
    xhr.onreadystatechange = function(){
        if(xhr.readyState === 4 && xhr.status === 200){
            process(xhr.responseText); // hand the response off for processing
        }
    };
    xhr.send(JSON.stringify(sendData));
}

function process(data){
    request.splice(0, 1);   // remove the request that just finished
    if(request.length > 0){ // check to see if you need to do another ajax call
        send();
    }
    // process data
}
This probably isn't the best way to do it, but that's the idea; you could probably modify it to do 2 requests at a time instead of just one. Also, maybe you could modify it to send as many requests as there are in the queue as one combined request. Then the server splits them up, processes each one, and sends the data back, all at once or even as it gets them, since the server can flush the data several times. You just have to make sure you are parsing the response text correctly.