The request is too large for IE to process properly - javascript

I am using WebSync 3, the JavaScript API, and subscribing to approximately 9 different channels on one page. Firefox and Chrome have no problems, but IE9 throws an alert error stating "The request is too large for IE to process properly."
Unfortunately the internet has little to no information on this, so does anyone have any clues as to how to remedy this?
var client = fm.websync.client;

client.initialize({
    key: '********-****-****-****-************'
});

client.connect({
    autoDisconnect: true,
    onStreamFailure: function(args) {
        alert("Stream failure");
    },
    stayConnected: true
});

client.subscribe({
    channel: '/channel',
    onSuccess: function(args) {
        alert("Successfully connected to stream");
    },
    onFailure: function(args) {
        alert("Failed to connect to stream");
    },
    onSubscribersChange: function(args) {
        var change = args.change;
        for (var i = 0; i < change.clients.length; i++) {
            var changeClient = change.clients[i];
            if (change.type == 'subscribe') {
                // Someone subscribed to the channel
            } else {
                // Someone unsubscribed from the channel
            }
        }
    },
    onReceive: function(args) {
        var text = args.data.text;
        text = text.split("=");
        text = text[1];
        if (text != "status" && text != "dummytext") {
            //receiveUpdates(id, serial_number, args.data.text);
            var update = eval('(' + args.data.text + ')');
        }
    }
});

This error occurs when WebSync is using the JSON-P protocol for transfers. This mostly happens in IE in cross-domain environments, meaning WebSync is on a different domain than the page being served, so IE refuses to make regular XHR requests for security reasons.
JSON-P basically encodes the upstream data (your 9 channel subscriptions) as a URL-encoded string that is tacked onto a regular request to the server. The server interprets that URL-encoded string and sends back the response as a JavaScript block that gets executed by the page.
This works fine, except that IE also limits the overall URL of an HTTP request to roughly 2 KB. So if you pack too much into a single request to WebSync, you can exceed this upstream limit.
The easiest solution is either to split your WebSync requests into smaller pieces (i.e. subscribe to only a few channels at a time in JavaScript; a rough sketch follows below), or to subscribe to one "master channel" and then write a WebSync BeforeSubscribe event handler that watches for that channel and rewrites the subscription channel list.
I suspect, because you have a key in your example source above, that you are using WebSync On-Demand? If that's the case, the only way to add a BeforeSubscribe event handler is to create a WebSync proxy.
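For the batching approach, a minimal sketch of what splitting the subscriptions up might look like is below. It reuses the single-channel subscribe call from the question; the channel names, batch size, and delay are made up for illustration, and whether WebSync coalesces closely spaced calls into one JSON-P request is an implementation detail, so the delay value may need tuning:

// Rough sketch (not the author's code): subscribe a few channels at a time
// so each JSON-P request URL stays well under IE's ~2 KB limit.
var channels = ['/channel1', '/channel2', '/channel3', '/channel4', '/channel5',
                '/channel6', '/channel7', '/channel8', '/channel9']; // hypothetical names
var batchSize = 3;

function subscribeBatch(startIndex) {
    if (startIndex >= channels.length) {
        return; // all channels subscribed
    }
    for (var i = startIndex; i < startIndex + batchSize && i < channels.length; i++) {
        client.subscribe({
            channel: channels[i],
            onReceive: function(args) {
                // handle incoming data as in the original code
            }
        });
    }
    // Give the previous batch's request time to go out before queueing more.
    setTimeout(function() {
        subscribeBatch(startIndex + batchSize);
    }, 500);
}

subscribeBatch(0);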

So for the moment, since everyone else is stumped by this question as well, I put a trap in my PHP so it doesn't even load this JavaScript if the browser is Internet Destroyer (uhh, I mean Internet Explorer). Maybe a solution will come along in the future.


What is the best technology for a chat/shoutbox system? Plain JS, jQuery, WebSockets, or other?

I have an old site running which also has a chat system that always used to work fine. But recently I picked up the project again, started improving it, and the user base has been increasing a lot (it runs on a VPS).
This shoutbox (running at http://businessgame.be/shoutbox) has been having issues lately: when there are over 30 people online at the same time, it starts to really slow down the entire site.
The shoutbox system was written years ago by the old me (which ironically was the young me), who was way too much into old-school Plain Old JavaScript (POJS?) and refused to use frameworks like jQuery.
What I do is poll the server every 3 seconds with AJAX to check for new messages, and if there are any, load them all (they are returned as an XML file which the JS code then parses into HTML blocks that are appended to the shoutbox content).
Simplified, the script looks like this:
The AJAX functions
function createRequestObject() {
    var xmlhttp;
    if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    } else { // code for IE6, IE5
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    // Create the object
    return xmlhttp;
}

function getXMLObject(XMLUrl, onComplete, onFail) {
    var XMLhttp = createRequestObject();

    // Check to see if the latest shout time has been initialized
    if (typeof getXMLObject.counter == "undefined") {
        getXMLObject.counter = 0;
    }
    getXMLObject.counter++;

    XMLhttp.onreadystatechange = function() {
        if (XMLhttp.readyState == 4) {
            if (XMLhttp.status == 200) {
                if (onComplete) {
                    onComplete(XMLhttp.responseXML);
                }
            } else {
                if (onFail) {
                    onFail();
                }
            }
        }
    };

    XMLhttp.open("GET", XMLUrl, true);
    XMLhttp.send();

    setTimeout(function() {
        if (typeof XMLhttp != "undefined" && XMLhttp.readyState != 4) {
            XMLhttp.abort();
            if (onFail) {
                onFail();
            }
        }
    }, 5000);
}
Chat functions
function initShoutBox() {
    // Check for new shouts every 3 seconds
    shoutBoxInterval = setInterval("shoutBoxUpdate()", 3000);
}

function shoutBoxUpdate() {
    // Get the XML document
    getXMLObject("/ajax/shoutbox/shoutbox.xml?time=" + shoutBoxAppend.lastShoutTime, shoutBoxAppend);
}

function shoutBoxAppend(xmlData) {
    // process all the XML and add it to the content,
    // also remember the timestamp of the newest shout
}
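For a rough idea of what that placeholder does, something like the following; the element and attribute names (shout, user, message, time) and the container id are illustrative, not the site's actual XML schema:

function shoutBoxAppend(xmlData) {
    // Hypothetical element/attribute names; adjust to the real XML schema.
    var shouts = xmlData.getElementsByTagName("shout");
    var container = document.getElementById("shoutbox-content");
    for (var i = 0; i < shouts.length; i++) {
        var user = shouts[i].getElementsByTagName("user")[0].firstChild.nodeValue;
        var message = shouts[i].getElementsByTagName("message")[0].firstChild.nodeValue;
        var li = document.createElement("li");
        li.appendChild(document.createTextNode(user + ": " + message));
        container.appendChild(li);
        // Remember the timestamp of the newest shout for the next poll.
        shoutBoxAppend.lastShoutTime = shouts[i].getAttribute("time");
    }
}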
The real script is far more convoluted, with slower polling when the page is blurred, tracking of AJAX calls to avoid simultaneous double calls, the ability to post a shout, loading of settings, etc. None of that is very relevant here.
For those interested, the full code is here:
http://businessgame.be/javascripts/xml.js
http://businessgame.be/javascripts/shout.js
Example of the XML file containing the shout data
http://businessgame.be/ajax/shoutbox/shoutbox.xml?time=0
I do the same for fetching the list of online users every 30 seconds and checking for new private messages every 2 minutes.
My main question is: since this old-school JS is slowing down my site, will rewriting the code in jQuery increase the performance and fix this issue? Or should I go for another technology altogether, like Node.js, WebSockets or something else? Or maybe I am overlooking a fundamental bug in this old code?
Rewriting an entire chat and private message system (which share the same backend) requires a lot of effort, so I'd like to do this right from the start, and not rewrite the whole thing in jQuery just to find out it doesn't solve the issue at hand.
Having 30 people online in the chatbox at the same time is not really an exception anymore, so it should be a robust system.
Could changing from XML data files to JSON perhaps increase performance as well?
PS: The backend is PHP + MySQL.
I'm biased, as I love Ruby and I prefer plain JS over jQuery and other frameworks.
I believe you're wasting a lot of resources by using AJAX and should move to WebSockets for your use case.
30 users is not much... When using WebSockets, I would expect a single server process to be able to serve thousands of simultaneous updates per second.
The main reason for this is that WebSocket connections are persistent (no authentication happening with every request) and broadcasting to a multitude of connections uses the same number of database queries as a single AJAX update.
In your case, instead of everyone reading the whole XML every time, a POST event would only broadcast the latest (posted) shout (not the whole XML), while still storing it in the XML for persistence (used for new visitors).
Also, you don't need all the authentication and requests that end up being answered with "No, there aren't any pending updates."
Minimizing the database requests (XML reads) should prove to be a huge benefit when moving from AJAX to WebSockets.
Another benefit relates to the fact that enough simultaneous users will make AJAX polling behave like a DoS attack.
Right now, 30 users polling every 3 seconds means about 10 requests per second. That isn't much, but it can become heavy if each request takes more than 100ms, meaning the server would be answering fewer requests than it receives.
The home page for the Plezi Ruby WebSocket framework has this short example of a shout box (I'm Plezi's author, so I'm biased):
# finish with `exit` if running within `irb`
require 'plezi'

class ChatServer
  def index
    render :client
  end

  def on_open
    return close unless params[:id] # authentication demo
    broadcast :print,
              "#{params[:id]} joined the chat."
    print "Welcome, #{params[:id]}!"
  end

  def on_close
    broadcast :print,
              "#{params[:id]} left the chat."
  end

  def on_message data
    self.class.broadcast :print,
                         "#{params[:id]}: #{data}"
  end

  protected

  def print data
    write ::ERB::Util.html_escape(data)
  end
end

path_to_client = File.expand_path( File.dirname(__FILE__) )
host templates: path_to_client
route '/', ChatServer
The POJS client looks like this (the DOM updates and form data access ($('#text')[0].value) use jQuery):
ws = NaN
handle = ''

function onsubmit(e) {
    e.preventDefault();
    if ($('#text')[0].value == '') { return false }
    if (ws && ws.readyState == 1) {
        ws.send($('#text')[0].value);
        $('#text')[0].value = '';
    } else {
        handle = $('#text')[0].value
        var url = (window.location.protocol.match(/https/) ? 'wss' : 'ws') +
            '://' + window.document.location.host +
            '/' + $('#text')[0].value
        ws = new WebSocket(url)
        ws.onopen = function(e) {
            output("<b>Connected :-)</b>");
            $('#text')[0].value = '';
            $('#text')[0].placeholder = 'your message';
        }
        ws.onclose = function(e) {
            output("<b>Disconnected :-/</b>")
            $('#text')[0].value = '';
            $('#text')[0].placeholder = 'nickname';
            $('#text')[0].value = handle
        }
        ws.onmessage = function(e) {
            output(e.data);
        }
    }
    return false;
}

function output(data) {
    $('#output').append("<li>" + data + "</li>")
    $('#output').animate({ scrollTop: $('#output')[0].scrollHeight }, "slow");
}
If you want to add more events or data, you can consider using Plezi's auto-dispatch feature, which also provides you with an easy-to-use, lightweight JavaScript client with an AJAJ (AJAX + JSON) fallback.
...
But, depending on your server's architecture and whether you mind using heavier frameworks, you could use the more common socket.io (although it starts with AJAX and only moves to WebSockets after a warm-up period).
EDIT
Changing from XML to JSON will still require parsing. The real question is which parses faster, XML or JSON.
JSON will be faster to parse in client-side JavaScript, according to the following SO question and answer: Is parsing JSON faster than parsing XML
JSON also seems to be favored on the server side for PHP (though this might be opinion-based rather than tested): PHP: is JSON or XML parser faster?
BUT... I really don't think your bottleneck is the JSON or the XML. I believe the bottleneck is the multitude of times the data is accessed, parsed, and reviewed by the server when using AJAX.
EDIT 2 (due to a comment about PHP vs. Node.js)
You can add a PHP WebSocket layer using Ratchet... although PHP wasn't designed for long-running processes, so I would consider adding a dedicated WebSocket stack (using a local proxy to route WebSocket connections to a different application).
I love Ruby since it allows you to quickly and easily code a solution. Node.js is also commonly used as a dedicated WebSocket stack.
I would personally avoid socket.io, because it abstracts the connection method (AJAX vs. WebSockets) and always starts as AJAX before "warming up" to an "upgrade" (WebSockets)... Also, socket.io uses long polling when not using WebSockets, which I think is terrible. I'd rather show a message telling the client to upgrade their browser.
Jonny Whatshisface pointed out that using a Node.js solution he reached a limit of ~50K concurrent users (which could be related to the local proxy's connection limit). Using a C solution, he reports no issues with more than 200K concurrent users.
This obviously also depends on the number of updates per second and on whether you're broadcasting the data or sending it to specific clients... If you're sending 2 updates per user per second for 200K users, that's 400K updates per second. However, if you update all the users only once every 2 seconds, that's 100K updates per second. So trying to figure out the maximum load can be a headache.
Personally, I didn't get to reach these numbers in my apps, so I never got to discover Plezi's limits first hand... although, during testing, I had no issues sending hundreds of thousands of updates per second (but I did have a connection limit due to available ports and open-file-handle limits on my local machine).
This definitely shows how vast an improvement you can reach by utilizing WebSockets, especially since you stated that you notice slowdowns with 30 concurrent users.

ajax call to localhost from site loaded over https fails in chrome

-------------------- UPDATE 2 ------------------------
I see now that what I am trying to accomplish is not possible with Chrome. But I am still curious: why is the policy stricter in Chrome than in, for example, Firefox? Or does Firefox not actually make the call either, but JavaScript-wise report the call as failed instead of blocked altogether?
---------------- UPDATE 1 ----------------------
The issue indeed seems to be about calling http from an https site; this error is produced in the Chrome console:
Mixed Content: The page at 'https://login.mysite.com/mp/quickstore1' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://localhost/biztv_local/video/video_check.php?video=253d01cb490c1cbaaa2b7dc031eaa9f5.mov&fullscreen=on'. This request has been blocked; the content must be served over HTTPS.
Then the question is why Firefox allows it, and whether there is a way to make Chrome allow it. It had indeed worked fine until just a few months ago.
Original question:
I have some jQuery making an AJAX call to http (the site making the call is loaded over https).
Moreover, the call from my https site is to a script on localhost on the client's machine, but the file starts with
<?php header('Access-Control-Allow-Origin: *'); ?>
so that's fine. A peculiar setup, you might say, but the client is actually a media player.
It has always worked fine before, and it still works fine in Firefox, but for about two months now it hasn't been working in Chrome.
Has there been a revision to Chrome's policies regarding this type of call? Or is there an error in my code below that Firefox manages to parse but Chrome doesn't?
The error only occurs when the file is NOT present on localhost (i.e. if a regular web user visits this site with their own browser, they naturally won't have the file on their localhost; most won't even have a localhost). So one theory might be that since the file isn't there, the Access-Control-Allow-Origin: * header is never encountered, and therefore the call in its entirety is deemed insecure or not allowed by Chrome and is never completed.
If so, is there an event handler I can attach to my jQuery.ajax call to catch that outcome instead? As of now, complete is never run if the file on localhost isn't there.
before: function(self) {
    var myself = this;
    var data = self.slides[self.nextSlide - 1].data;
    var html = myself.getHtml(data);
    $('#module_' + self.moduleId + '-slide_' + self.slideToCreate).html(html);

    // This is the fullscreen-always version of the video template
    var fullscreen = 'on';
    //console.log('runnin beforeSlide method for a video template');

    var videoCallStringBase = "http://localhost/biztv_local/video/video_check.php?"; // to call the mediaplayer's localhost
    var videoContent = 'video=' + data['filename_machine'] + '&fullscreen=' + fullscreen;
    var videoCallString = videoCallStringBase + videoContent;

    // TODO: works when the file video_check.php is found, but if it isn't, it will wait for a video to play. It should skip then as well...
    // UPDATE: Isn't this fixed already? Debug once env. is set up
    console.log('checking for ' + videoCallString);

    jQuery.ajax({
        url: videoCallString,
        success: function(result) {
            // ...if it isn't, we can't play back the video, so skip to the next slide
            if (result != 1) {
                console.log('found no video_check on localhost so skip slide ' + self.nextSlide);
                self.skip();
            } else {
                // success, proceed as normal
                self.beforeComplete();
            }
        },
        complete: function(xhr, data) {
            if (xhr.status != 200) {
                // we could not find the check-video file on localhost, so skip the next slide
                console.log('found no video_check on localhost so skip slide ' + self.nextSlide);
                self.skip();
            } else {
                // success, proceed as normal
                self.beforeComplete();
            }
        }, // above would cause a double-slide-skip, I think. Removed for now, that should be trapped by the fail clause anyway.
        async: true
    });
}
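For what it's worth, jQuery.ajax also exposes an error callback, and a blocked or unreachable request typically lands there with xhr.status == 0. A rough, untested sketch of trapping that outcome, reusing self.skip() and self.nextSlide from the code above, might look like this:

jQuery.ajax({
    url: videoCallString,
    async: true,
    success: function(result) {
        // video_check.php answered; proceed as in the original code
    },
    error: function(xhr, textStatus) {
        // Blocked (e.g. mixed content) or unreachable requests end up here,
        // usually with xhr.status == 0, so the slide can be skipped instead
        // of waiting forever.
        console.log('video_check request failed (' + textStatus + '), skipping slide ' + self.nextSlide);
        self.skip();
    }
});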

Comet: XMLHttpRequest.responseText grows endlessly

All,
I'm working on implementing a Comet JS library. Right now I'm tracking the size of the response text and returning the new portion as chunks arrive. This gives my callback the new data, but it is a very obvious memory leak. Is there a way to force-close an XMLHttpRequest object, or to reset the contents of responseText periodically?
request.multi = function(type, handler, url, querystring) {
    querystring = (querystring == undefined) ? null : querystring;
    var response = "";
    var handle = makeRequestHandle();
    handle.multipart = true;
    handle.open(type, url, true);
    handle.onreadystatechange = function() {
        var return_val;
        if (handle.readyState == 4) {
            m_log.debug("Connection died");
        } else if (handle.readyState == 3) {
            return_val = handle.responseText.substring(response.length);
            response = handle.responseText;
            handler(return_val);
        } else {
            m_log.debug("readyState %s", handle.readyState);
        }
    };
    handle.send(querystring);
};
Setting the field won't help
You are correct that as long as the request is active and your handler consumes it, you are accumulating data somewhere in memory.
I say 'somewhere' because the socket data passes through several software layers. As long as your XHR is ongoing, your runtime (the browser, or JS runtime) will keep receiving. It has to decode the content (e.g. (un)gzip), and possibly decode the transfer encoding (likely to be 'chunked' if it's a long-running connection).
Regardless of whether you access or set the property, there is an underlying growing buffer containing the (growing) payload.
On Chrome at least, XMLHttpRequest.prototype.responseText is defined as a getter-only property. Setting it (e.g. with xhr.responseText = '') will have no effect on the underlying buffer; memory will keep growing.
// you can verify
var xhr = new XMLHttpRequest();
console.log(Object.getOwnPropertyDescriptor(xhr.__proto__, "responseText"))
// I get
// {... get: function () {...}, set: undefined }
The same goes for the other "views" you may have on the XHR payload: xhr.response (for blobs) and xhr.responseXML.
Do the tango with a second XHR
The most reliable way to have your connection's memory "reset" without losing messages is to start a new AJAX request while the old one is still active, and switch to the new one after the old one receives its last full message. While both are active, you have to ignore data on the new stream until the two are in sync.
Memory efficient message chunk processing using a XMLHttpRequest
This of course assumes that the data received on the two streams is the same, or that there is some way to uniquely identify incoming messages so the two streams can be aligned.
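A very rough, hedged sketch of that handover follows; the openStream helper, the rotation interval, and syncing on byte offsets (which assumes the server replays the same stream on every new connection) are simplifications for illustration, not a production-ready implementation:

// Roll over to a fresh XHR periodically so the old responseText buffer can be
// garbage-collected. Assumes both connections carry identical byte streams.
function openStream(url, onChunk) {
    var seen = 0;
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 3 || xhr.readyState == 4) {
            var chunk = xhr.responseText.substring(seen);
            seen = xhr.responseText.length;
            if (chunk) onChunk(chunk, seen);
        }
    };
    xhr.send();
    return xhr;
}

function startRotatingStream(url, handler, rotateMs) {
    var delivered = 0; // total bytes already handed to `handler`
    var current = openStream(url, deliver);

    function deliver(chunk, totalSeen) {
        // Only pass on bytes not yet delivered (the old and new streams overlap).
        if (totalSeen > delivered) {
            handler(chunk.slice(chunk.length - (totalSeen - delivered)));
            delivered = totalSeen;
        }
    }

    setInterval(function() {
        var old = current;
        current = openStream(url, deliver); // new stream starts from byte 0 again
        // Give the new stream a moment to catch up, then drop the old one.
        setTimeout(function() { old.abort(); }, 2000);
    }, rotateMs);
}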
multipart/x-mixed-replace
There used to be such a content type you could set, which would instruct the client to reset its response whenever a particular boundary string was received.
It was designed for streaming webcam images, got abused for a while for Comet, and does roughly what you want, but support for it was dropped at least from Firefox and Chrome. It was famously never supported by IE from the start, so that's probably still true today.
Content-type: multipart/x-mixed-replace;boundary=<randomstringhere>
where <randomstringhere> is something that is unlikely to appear in your application messages. Between each message "push" that the server makes, you insert <randomstringhere>, and the browser resets responseText at each boundary automatically.
One problem with this approach is that you had to have processed all the messages in the current part before the browser received the next boundary. So it was racy, for AJAX at least.
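For illustration only, a stream using that mechanism would have looked roughly like this on the wire (the boundary name, part headers, and message bodies are made up):

HTTP/1.1 200 OK
Content-Type: multipart/x-mixed-replace;boundary=BoundaryString

--BoundaryString
Content-Type: text/plain

first pushed message
--BoundaryString
Content-Type: text/plain

second pushed message (replaces the first in responseText)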
Simplify your life with x.response
In the example given in the question, responseText is used to get the full payload. Before the send() call, one may set handle.responseType = "text", and then in the progress callback handle.response would carry only the information that is new since the last time the handler was called.
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/response
Chances are, however, that the whole payload is still made available via responseText, so that alone wouldn't be enough to fix the memory runaway problem.

XDomainRequest POST with XML...what am I doing wrong?

This is (hopefully) an easy question. I have to submit a request to a web service via POST with XDomainRequest. I have found only sparse documentation for this across the internet, but I refuse to believe that nobody has figured this out.
Here is my XDomainRequest code:
var testURL = 'http://localhost:4989/testendpoint';

// let us check to see if the browser is IE
if ($.browser.msie && window.XDomainRequest) {
    var xdr2 = new XDomainRequest();
    xdr2.open("POST", testURL);
    xdr2.timeout = 5000;
    xdr2.onerror = function () {
        alert('we had an error!');
    };
    xdr2.onprogress = function () {
        alert('we have some progress!');
    };
    xdr2.onload = function () {
        alert('we load the xdr!');
        var xml2 = new ActiveXObject("Microsoft.XMLDOM");
        xml2.async = true;
        xml2.loadXML(xdr2.responseText);
    };
    // what form should my request take to be sending a string for a POST request?
    xdr2.send("thisisastring");
}
My web service (WCF) takes a single parameter; according to the web service's help page, it looks like this:
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">String content</string>
I've gotten this to work in other HTTP clients (mobile and desktop APIs, Fiddler) by building a string that concatenates the parameter I am trying to pass to the web service with the rest of the string serialization. For example, I have tried:
xdr2.send("thisisastring");
xdr2.send('<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">thisisastring</string>');
but the onerror handler is always tripped. I don't think it has anything to do with the WCF service, because the service succeeds in every other client I call it from, and because if it were the service, the onerror handler would never be tripped: the call would return garbage, but it would still return something.
When I use the console (in the dev tools in IE9) to log the responseText, it says:
LOG: undefined
So I am fairly sure that the issue is in how I am using XDomainRequest.
If anybody comes across this: I ended up converting my web services to return JSON-formatted data. Using JSON negated the need for XDomainRequest, allowing me to use the conventional jQuery AJAX tools instead.

Ajax in Internet Explorer 9 slow after refresh

I'm working on a client-side web application that makes heavy use of JavaScript and AJAX to provide the required functionality.
This is not a problem for most browsers (Chrome, Firefox, ...), but in Internet Explorer the performance is a major issue.
It takes less than a second to load the page initially, even in Internet Explorer. But upon refreshing the page it can take anywhere between 1 and 20 seconds to load and display the page.
It's hard to post code since the application is divided into multiple files. I can only explain the intended behaviour.
The application initializes two content containers, one for static content and one for dynamic content. Each of these content containers is populated via AJAX and updates DOM elements via the innerHTML property.
The first time, it takes less than a second to build the page. Subsequent refreshes take significantly longer.
What changes between the initial load of the page and a refresh that would explain this enormous performance drop? Do I need to uninitialize something when unloading the page?
Communication.request = function (method, target, async, data, callback) {
    var types = ['string', 'string', 'boolean', 'null', 'null']; // Parameter types
    if (data) { // Data was provided
        types[3] = 'string'; // Data must be a string
    }
    if (callback) { // Callback was provided
        types[4] = 'function'; // Callback must be a function
    }
    if (Utils.hasParams(arguments, types)) { // All the required parameters were provided and of the right type
        var request = new XMLHttpRequest(); // Create a new request
        request.open(method, target, async); // Open the request
        if (callback) { // Callback was provided
            request.handleCallback(callback); // Register the callback
        }
        if (data) { // Data was provided
            var contentType = 'application/x-www-form-urlencoded'; // Prepare the content type
            request.setRequestHeader('Content-Type', contentType); // Add a content type request header
        }
        request.send(data); // Send the request
    }
};
The problem appeared to be related to the number of concurrent connections. Depending on the connection and the type of web server, Internet Explorer limits this to 2 or 4 concurrent connections.
After clamping the number of connections to 2, the problem ceased to occur. Other browsers appear to have higher limits, though I have limited those to 4 just in case.
Also, the number of concurrent messages is the number of messages in flight at any given time. This was previously unlimited, and that made Internet Explorer quite sad :-(
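To illustrate the kind of clamping described above, here is a rough sketch of a small request queue. The Communication namespace is reused from the question's code, but the queueing helper itself (queuedRequest, drain, MAX_CONNECTIONS) is hypothetical, not the application's actual implementation:

// Hypothetical helper: limit the number of XHRs in flight at once.
Communication.MAX_CONNECTIONS = 2; // IE-friendly limit
Communication.active = 0;          // requests currently in flight
Communication.pending = [];        // queued request arguments

Communication.queuedRequest = function (method, target, data, callback) {
    Communication.pending.push([method, target, data, callback]);
    Communication.drain();
};

Communication.drain = function () {
    while (Communication.active < Communication.MAX_CONNECTIONS &&
           Communication.pending.length > 0) {
        var args = Communication.pending.shift();
        Communication.active++;
        (function (method, target, data, callback) {
            var request = new XMLHttpRequest();
            request.open(method, target, true);
            request.onreadystatechange = function () {
                if (request.readyState == 4) {
                    Communication.active--;
                    if (callback) {
                        callback(request);
                    }
                    Communication.drain(); // start the next queued request
                }
            };
            if (data) {
                request.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
            }
            request.send(data || null);
        }).apply(null, args);
    }
};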
