XDomainRequest POST with XML...what am I doing wrong? - javascript

This is (hopefully) an easy question. I have to submit a request to a web service via POST with XDomainRequest. I have found sparse documentation for this across the internet, but I refuse to believe that nobody has figured this out.
Here is my XDomainRequest code:
var testURL = 'http://localhost:4989/testendpoint';

// Let us check whether the browser is IE and supports XDomainRequest.
if ($.browser.msie && window.XDomainRequest) {
    var xdr2 = new XDomainRequest();
    xdr2.open("POST", testURL);
    xdr2.timeout = 5000;
    xdr2.onerror = function () {
        alert('we had an error!');
    };
    xdr2.onprogress = function () {
        alert('we have some progress!');
    };
    xdr2.onload = function () {
        alert('we load the xdr!');
        var xml2 = new ActiveXObject("Microsoft.XMLDOM");
        xml2.async = true;
        xml2.loadXML(xdr2.responseText);
    };
    // What form should my request take to be sending a string for a POST request?
    xdr2.send("thisisastring");
}
My web service (WCF) takes a single parameter that, according to the web service's help page, looks like this:
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">String content</string>
I've gotten this to work via other HTTP clients (mobile and desktop APIs, Fiddler) by building a string that concatenates the parameter I am trying to pass to the web service with the rest of the string serialization. For example, I have tried:
xdr2.send("thisisastring");
xdr2.send("<string xmlns=\"http://schemas.microsoft.com/2003/10/Serialization/\">thisisastring</string>");
but the onerror handler is always tripped. I don't think it has anything to do with the WCF, because:
- the WCF succeeds from every other client I call it from, and
- if it were the service, the onerror handler would never be tripped; the service would return garbage, but it would be returning something.
When I use the console (in the dev tools in IE9) to log the responseText, it says:
LOG:undefined
So I am fairly sure that the issue is in how I use the XDomainRequest.
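For reference, XDomainRequest has a few documented restrictions that commonly trip the onerror handler: the page and the target must use the same scheme (an http page cannot call an https endpoint, or vice versa), the service must answer with an Access-Control-Allow-Origin header, and the request body is always sent as text/plain with no way to set Content-Type, so a WCF endpoint that insists on an XML content type may reject the body outright. A minimal sanity check along those lines (variable names match the snippet above):

// Hedged sketch: verify the scheme constraint before creating the XDR.
// XDomainRequest fails (onerror) on a scheme mismatch, and the server
// must also send Access-Control-Allow-Origin for the call to succeed.
var pageScheme = window.location.protocol;   // e.g. "http:"
var targetScheme = testURL.split('//')[0];   // "http:" for the test URL above
if (pageScheme !== targetScheme) {
    alert('XDomainRequest requires the page and the endpoint to share a scheme.');
}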

If anybody comes across this, I ended up converting my web services to return JSON-formatted data. Using JSON negates the need for XDomainRequest, allowing me to use the conventional ajax jquery tools instead.
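For anyone landing here, a minimal sketch of that JSON route, assuming the endpoint is exposed via JSONP (the callback=? parameter makes jQuery use JSONP, which works cross-domain even in older IE; the endpoint URL is the one from the question):

// Hedged sketch: once the service returns JSON(P), the standard jQuery
// helpers replace XDomainRequest entirely.
$.getJSON('http://localhost:4989/testendpoint?callback=?', function (data) {
    console.log(data); // parsed JSON, no manual XML handling needed
});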

Related

Serve different cache versions using the same URL through cloudflare worker

There's a very common problem I have seen from many people who use different versions of their site for mobile and desktop; many themes have this feature. The issue is that Cloudflare caches the same page regardless of the user's device, mixing up the desktop and mobile versions.
The most common solution is to separate the mobile version into another URL, but in my case, I want to use the same URL and make Cloudflare cache work for both desktop and mobile properly.
I found a very nice guide showing how to fix this issue; however, the worker code seems to be outdated, and I had to modify some parts to make it work.
I created a new subdomain for my workers and then assigned the route to my site so it starts running.
The worker is caching everything; however, it does not have the desired behavior of keeping different cached versions according to the device.
async function run(event) {
    const { request } = event;
    const cache = caches.default;

    // Read the user agent of the request (it may be absent, so default to '').
    const ua = request.headers.get('user-agent') || '';
    let uaValue;
    if (ua.match(/mobile/i)) {
        uaValue = 'mobile';
    } else {
        uaValue = 'desktop';
    }
    console.log(uaValue);

    // Construct a new request object which distinguishes the cache key
    // by device type.
    const url = new URL(request.url);
    url.searchParams.set('ua', uaValue);
    const newRequest = new Request(url, request);

    let response = await cache.match(newRequest);
    if (!response) {
        // Use the original request object when fetching the response from the
        // server to avoid passing on the query parameters to our backend.
        response = await fetch(request, { cf: { cacheTtl: 14400 } });
        // Store the cached response with our extended query parameters.
        event.waitUntil(cache.put(newRequest, response.clone()));
    }
    return response;
}

addEventListener('fetch', (event) => {
    event.respondWith(run(event));
});
It is indeed detecting the right user agent, but it should be keeping two separate cache versions according to the assigned query string...
I think maybe I'm missing some configuration; I don't know why it's not working as expected. As it is right now, I still get my mobile and desktop cache versions mixed.
The problem here is that fetch() itself already does normal caching, independent of your use of the Cache API around it. So fetch() might still return a cached response that is for the wrong UA.
If you could make your back-end ignore the query parameter, then you could include the query in the request passed to fetch(), so that it correctly caches the two results differently. (Enterprise customers can use custom cache keys as a way to accomplish this without changing the URL.)
If you do that, then you can also remove the cache.match() and cache.put() calls since fetch() itself will handle caching.
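To make that concrete, here is an untested sketch of the worker with the suggested change applied, assuming the back-end ignores the extra ua query parameter: the rewritten URL is passed to fetch() itself, so Cloudflare's built-in cache keys the desktop and mobile responses separately and the explicit Cache API calls are no longer needed.

async function run(event) {
    const { request } = event;
    const ua = request.headers.get('user-agent') || '';
    const uaValue = /mobile/i.test(ua) ? 'mobile' : 'desktop';

    // Distinguish the cache key by device type via the URL itself.
    const url = new URL(request.url);
    url.searchParams.set('ua', uaValue);

    // fetch() caches by URL, so desktop and mobile now cache separately.
    return fetch(new Request(url, request), { cf: { cacheTtl: 14400 } });
}

addEventListener('fetch', (event) => {
    event.respondWith(run(event));
});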

What is the best technology for a chat/shoutbox system? Plain JS, JQuery, Websockets, or other?

I have an old site running which also has a chat system, which always used to work fine. But recently I picked the project up again, started improving it, and the user base has been growing a lot (it runs on a VPS).
Now this shoutbox I have (running at http://businessgame.be/shoutbox) has been having issues lately: when there are over 30 people online at the same time, it starts to really slow down the entire site.
This shoutbox system was written years ago by the old me (which ironically was the young me), who was way too much into old school Plain Old JavaScript (POJS?) and refused to use frameworks like jQuery.
What I do is poll every 3 seconds with AJAX to see if there are new messages, and if there are, load all those messages (which are handed over as an XML file that is then parsed by the JS code into HTML blocks and added to the shoutbox content).
Simplified, the script looks like this:
The AJAX functions
function createRequestObject() {
    var xmlhttp;
    if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    } else { // code for IE6, IE5
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    // Return the object
    return xmlhttp;
}

function getXMLObject(XMLUrl, onComplete, onFail) {
    var XMLhttp = createRequestObject();

    // Check to see if the latest shout time has been initialized
    if (typeof getXMLObject.counter == "undefined") {
        getXMLObject.counter = 0;
    }
    getXMLObject.counter++;

    XMLhttp.onreadystatechange = function() {
        if (XMLhttp.readyState == 4) {
            if (XMLhttp.status == 200) {
                if (onComplete) {
                    onComplete(XMLhttp.responseXML);
                }
            } else {
                if (onFail) {
                    onFail();
                }
            }
        }
    };

    XMLhttp.open("GET", XMLUrl, true);
    XMLhttp.send();

    // Abort and report failure if the request hasn't finished within 5 seconds.
    setTimeout(function() {
        if (typeof XMLhttp != "undefined" && XMLhttp.readyState != 4) {
            XMLhttp.abort();
            if (onFail) {
                onFail();
            }
        }
    }, 5000);
}
Chat functions
function initShoutBox() {
    // Check for new shouts every 3 seconds
    shoutBoxInterval = setInterval(shoutBoxUpdate, 3000);
}

function shoutBoxUpdate() {
    // Get the XML document
    getXMLObject("/ajax/shoutbox/shoutbox.xml?time=" + shoutBoxAppend.lastShoutTime, shoutBoxAppend);
}

function shoutBoxAppend(xmlData) {
    // Process all the XML and add it to the content;
    // also remember the timestamp of the newest shout.
}
The real script is far more convoluted, with slower polling when the page is blurred, keeping track of AJAX calls to avoid double calls at the same time, the ability to post a shout, loading settings, etc. None of that is very relevant here.
For those interested, full codes here:
http://businessgame.be/javascripts/xml.js
http://businessgame.be/javascripts/shout.js
Example of the XML file containing the shout data
http://businessgame.be/ajax/shoutbox/shoutbox.xml?time=0
I do the same for getting a list of the online users every 30 seconds and checking for new private messages every 2 minutes.
My main question is: since this old school JS is slowing down my site, will changing the code to jQuery increase the performance and fix this issue? Or should I go for another technology altogether, like Node.js, websockets, or something else? Or maybe I am overlooking a fundamental bug in this old code?
Rewriting an entire chat and private messages system (which share the same backend) requires a lot of effort, so I'd like to get this right from the start, not rewrite the whole thing in jQuery just to find out it doesn't solve the issue at hand.
Having 30 people online in the chatbox at the same time is not really an exception anymore, so it should be a robust system.
Could perhaps changing from XML data files to JSON increase performance as well?
PS: Backend is PHP MySQL
I'm biased, as I love Ruby and I prefer using Plain JS over JQuery and other frameworks.
I believe you're wasting a lot of resources by using AJAX and should move to websockets for your use-case.
30 users is not much... When using websockets, I would assume a single server process should be able to serve thousands of simultaneous updates per second.
The main reason for this is that websockets are persistent (no authentication happening with every request) and broadcasting to a multitude of connections will use the same amount of database queries as a single AJAX update.
In your case, instead of everyone reading the whole XML every time, a POST event will only broadcast the latest (posted) shout (not the whole XML), and store it in the XML for persistent storage (used for new visitors).
Also, you don't need all the authentication and requests that end up being answered with a "No, there aren't any pending updates".
Minimizing the database requests (XML reads) should prove to be a huge benefit when moving from AJAX to websockets.
Another benefit relates to the fact that enough simultaneous users will make AJAX polling behave much like a DoS attack.
Right now, 30 users == 10 requests per second. This isn't much, but it can be heavy if each request takes more than 100ms - meaning the server answers fewer requests than it receives.
The home page for the Plezi Ruby Websocket Framework has this short example for a shout box (I'm Plezi's author, I'm biased):
# finish with `exit` if running within `irb`
require 'plezi'

class ChatServer
  def index
    render :client
  end

  def on_open
    return close unless params[:id] # authentication demo
    broadcast :print, "#{params[:id]} joined the chat."
    print "Welcome, #{params[:id]}!"
  end

  def on_close
    broadcast :print, "#{params[:id]} left the chat."
  end

  def on_message data
    self.class.broadcast :print, "#{params[:id]}: #{data}"
  end

  protected

  def print data
    write ::ERB::Util.html_escape(data)
  end
end

path_to_client = File.expand_path( File.dirname(__FILE__) )
host templates: path_to_client
route '/', ChatServer
The POJS client looks like so (the DOM updates and form data access ($('#text')[0].value) use jQuery):
ws = NaN
handle = ''

function onsubmit(e) {
    e.preventDefault();
    if ($('#text')[0].value == '') { return false }
    if (ws && ws.readyState == 1) {
        ws.send($('#text')[0].value);
        $('#text')[0].value = '';
    } else {
        handle = $('#text')[0].value
        var url = (window.location.protocol.match(/https/) ? 'wss' : 'ws') +
            '://' + window.document.location.host +
            '/' + $('#text')[0].value
        ws = new WebSocket(url)
        ws.onopen = function(e) {
            output("<b>Connected :-)</b>");
            $('#text')[0].value = '';
            $('#text')[0].placeholder = 'your message';
        }
        ws.onclose = function(e) {
            output("<b>Disconnected :-/</b>")
            $('#text')[0].value = '';
            $('#text')[0].placeholder = 'nickname';
            $('#text')[0].value = handle
        }
        ws.onmessage = function(e) {
            output(e.data);
        }
    }
    return false;
}

function output(data) {
    $('#output').append("<li>" + data + "</li>")
    $('#output').animate({ scrollTop: $('#output')[0].scrollHeight }, "slow");
}
If you want to add more events or data, you can consider using Plezi's auto-dispatch feature, which also provides you with an easy-to-use, lightweight Javascript client with an AJAJ (AJAX + JSON) fallback.
...
But, depending on your server's architecture and whether you mind using heavier frameworks or not, you can use the more common socket.io (although it starts with AJAX and only moves to websockets after a warmup period).
EDIT
Changing from XML to JSON will still require parsing. The real question is how XML vs. JSON parsing speeds compare.
JSON will be faster in client-side JavaScript, according to the following SO question and answer: Is parsing JSON faster than parsing XML
JSON also seems to be favored on the server side for PHP (though this might be opinion-based rather than tested): PHP: is JSON or XML parser faster?
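As a small illustration of the client-side difference (standard browser APIs, trivial example data):

// JSON arrives ready for JSON.parse...
var jsonShout = JSON.parse('{"user":"bob","text":"hi"}');
console.log(jsonShout.text); // "hi"

// ...while XML needs a DOMParser pass plus manual node traversal.
var xmlShout = new DOMParser().parseFromString(
    '<shout><user>bob</user><text>hi</text></shout>', 'text/xml');
console.log(xmlShout.getElementsByTagName('text')[0].textContent); // "hi"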
BUT... I really think your bottleneck isn't the JSON or the XML. I believe the bottleneck relates to the multitude of times the data is accessed (parsed?) and reviewed by the server when using AJAX.
EDIT2 (due to comment about PHP vs. node.js)
You can add a PHP websocket layer using Ratchet... but PHP wasn't designed for long-running processes, so I would consider adding a dedicated websocket stack (using a local proxy to route websocket connections to a different application).
I love Ruby since it allows you to quickly and easily code a solution. Node.js is also commonly used as a dedicated websocket stack.
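For completeness, a minimal sketch of such a dedicated websocket stack in Node.js, using the third-party ws package (assumed installed with npm install ws; this is not the asker's code, just the broadcast pattern described above):

const WebSocket = require('ws');
const server = new WebSocket.Server({ port: 8080 });

server.on('connection', (socket) => {
    socket.on('message', (message) => {
        // One posted shout is pushed to every open connection,
        // so nobody polls and the backend is hit once per shout.
        server.clients.forEach((client) => {
            if (client.readyState === WebSocket.OPEN) {
                client.send(message.toString());
            }
        });
    });
});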
I would personally avoid socket.io, because it abstracts the connection method (AJAX vs Websockets) and always starts as AJAX before "warming up" to an "upgrade" (websockets)... Also, socket.io uses long-polling when not using websockets, which I think is terrible. I'd rather show a message telling the client to upgrade their browser.
Jonny Whatshisface pointed out that using a node.js solution he reached a limit of ~50K concurrent users (which could be related to the local proxy's connection limit). Using a C solution, he states he has no issues with more than 200K concurrent users.
This obviously also depends on the number of updates per second and on whether you're broadcasting the data or sending it to specific clients... If you're sending 2 updates per user per second for 200K users, that's 400K updates per second. However, updating all the users only once every 2 seconds works out to 100K updates per second. So trying to figure out the maximum load can be a headache.
Personally, I didn't get to reach these numbers on my apps, so I never got to discover Plezi's limits first hand... although, during testing, I had no issues sending hundreds of thousands of updates per second (but I did have a connection limit due to available ports and open file handle limits on my local machine).
This definitely shows how vast an improvement you can achieve by utilizing websockets (especially since you said you noticed slowdowns with 30 concurrent users).

Firefox Extension : Stopping the page load when suspicious url found

I am working on a simple Firefox extension that tracks the URL being requested and calls a web service in the background that detects whether the URL is suspicious or not. Based on the result returned by the service, the extension decides whether to stop the page load and alert the user about possible forgery; if the user still wishes to visit the page, he can be redirected to the page he originally requested.
I have added an http-on-modify-request observer
var observerService = Components.classes["@mozilla.org/observer-service;1"]
                                .getService(Components.interfaces.nsIObserverService);
observerService.addObserver(requestObserverListener.observe, "http-on-modify-request", false);
and the observer
var requestObserverListener = {
    observe: function (subject, topic, data) {
        //alert("Inside observe");
        if (topic == "http-on-modify-request") {
            subject.QueryInterface(Components.interfaces.nsIHttpChannel);
            var url = subject.URI.spec; //url being requested. you might want this for something else
            //alert("inside modify request");
            var urlbarvalue = document.getElementById("urlbar").value;
            urlbarvalue = processUrl(urlbarvalue, url);
            //alert("url bar: "+urlbarvalue);
            //alert("url: "+url);
            document.getElementById("urlbar").style.backgroundColor = "white";
            if (urlbarvalue == url && url != "") {
                var browser = getWindowForRequest(subject);
                if (browser != null) {
                    //alert(""+browser.contentDocument.body.innerHTML);
                    alert("inside browser: "+url);
                    getXmlHttpRequest(url);
                }
            }
        }
    },
}
So when the URL in the URL bar and the requested URL match, the REST service is called through the AJAX getXmlHttpRequest(url) method.
Now when I run this extension, the call to the service is made, but the page finishes loading before the service returns any response, which is not acceptable because the user might enter his credentials in the meantime and be compromised.
I want to first display a warning message in the browser tab, and if the user still wants to visit the page, he can then be redirected to it by clicking a link in the warning message window.
I haven't tried this code out, so I'm not sure that suspend and resume will work well, but here's what I would try. You're working with an nsIRequest object as your subject, so you can call subject.suspend() on it. From there, use callbacks from your XHR call to either cancel() or resume() the nsIRequest.
Here's the relevant (untested) snippet of code. My XHR assumes some kind of promise .then() return, but hopefully you understand the intention:
if (urlbarvalue == url && url != "") {
    var browser = getWindowForRequest(subject);
    if (browser != null) {
        // suspend the pending request
        subject.suspend();
        getXmlHttpRequest(url).then(
            function success() { subject.resume(); },
            function failure() { subject.cancel(Components.results.NS_BINDING_ABORTED); });
    }
}
Just some fair warning that you actually don't want to implement an add-on in this way.
It's going to be extremely slow to do a remote call for every HTTP request. The safe browsing module does a single call to download a database of sites considered 'unsafe'; it can then quickly check each requested page against the database, so it doesn't have to make individual calls every time.
Here's some more info on this kind of intercepting worth reading: https://developer.mozilla.org/en-US/docs/XUL/School_tutorial/Intercepting_Page_Loads#HTTP_Observers
Also, I'd worry that your XHR request will actually loop, because each XHR call generates an http-on-modify-request event of its own, so your code might end up checking the validity of its own XHR requests. You probably want a safety check for your URL-checking domain, as in the sketch below.
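Something like this at the top of the observe handler (the checker host name is hypothetical):

// Hedged sketch: skip requests aimed at the URL-checking service itself,
// so the observer does not recurse on its own XHR traffic.
var CHECKER_HOST = "urlcheck.example.com"; // hypothetical service host
if (subject.URI.host == CHECKER_HOST) {
    return; // don't inspect our own validation requests
}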
And here's another stackoverflow similar question to yours that might be useful: How to block HTTP request on a particular tab?
Good luck!

The request is too large for IE to process properly

I am using WebSync3's JavaScript API and subscribing to approximately 9 different channels on one page. Firefox and Chrome have no problems, but IE9 is throwing an alert error stating "The request is too large for IE to process properly."
Unfortunately the internet has little to no information on this. So does anyone have any clues as to how to remedy this?
var client = fm.websync.client;
client.initialize({
    key: '********-****-****-****-************'
});
client.connect({
    autoDisconnect: true,
    onStreamFailure: function(args) {
        alert("Stream failure");
    },
    stayConnected: true
});
client.subscribe({
    channel: '/channel',
    onSuccess: function(args) {
        alert("Successfully connected to stream");
    },
    onFailure: function(args) {
        alert("Failed to connect to stream");
    },
    onSubscribersChange: function(args) {
        var change = args.change;
        for (var i = 0; i < change.clients.length; i++) {
            var changeClient = change.clients[i];
            if (change.type == 'subscribe') {
                // Someone subscribed to the channel
            } else {
                // Someone unsubscribed from the channel
            }
        }
    },
    onReceive: function(args) {
        text = args.data.text;
        text = text.split("=");
        text = text[1];
        if (text != "status" && text != "dummytext") {
            //receiveUpdates(id, serial_number, args.data.text);
            var update = eval('(' + args.data.text + ')');
        }
    }
});
This error occurs when WebSync is using the JSON-P protocol for transfers. This mostly happens in IE in cross-domain environments, meaning WebSync is on a different domain than the one your webpage is served from, so IE won't make regular XHR requests for security reasons.
JSON-P basically encodes the up-stream data (your 9 channel subscriptions) as a URL encoded string that is tacked onto a regular request to the server. The server is supposed to interpret that URL-encoded string and send back the response as a JavaScript block that gets executed by the page.
This works fine, except that IE also has a limit on the overall request URL for an HTTP request of roughly 2kb. So if you pack too much into a single request to WebSync you might exceed this 2kb upstream limit.
The easiest solution is to either split up your WebSync requests into small pieces (ie: subscribe to only a few channels at a time in JavaScript), or to subscribe to one "master channel" and then program a WebSync BeforeSubscribe event that watches for that channel and re-writes the subscription channel list.
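A rough sketch of the first option, assuming the client API shown in the question (channel names are placeholders): subscribing a few channels at a time keeps each JSON-P request URL small.

// Hedged sketch: issue several small subscribe calls instead of one
// large one, so no single JSON-P URL approaches IE's ~2kb limit.
var channels = ['/channel1', '/channel2', '/channel3']; // placeholder names
for (var i = 0; i < channels.length; i++) {
    client.subscribe({
        channel: channels[i],
        onReceive: function (args) {
            // handle updates as in the question's onReceive
        }
    });
}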
I suspect, because you have a key in your example source above, that you are using WebSync On-Demand? If that's the case, the only way to make a BeforeSubscribe event handler is to create a WebSync proxy.
So for the moment, since everyone else is stumped by this question as well, I put a trap in my PHP so it doesn't even load this JavaScript if the browser is Internet Destroyer (uhh, I mean Internet Explorer). Maybe a solution will come in the future though.

Issuing a synchronous HTTP GET request or invoking shell script in JavaScript from iOS UIAutomation

I am trying to use Apple's UIAutomation to write unit tests for an iOS Application that has a server-side component. In order to setup the test server in various states (as well as simulate two clients communicating through my server), I would like to issue HTTP get requests from within my javascript-based test.
Can anyone provide an example of how to either issue HTTP GET requests directly from within UIAutomation javascript tests, or how to invoke a shell script from within my UIAutomation javascript tests?
FWIW, most of the core objects made available by all browsers are missing within the UIAutomation runtime. Try to use XMLHttpRequest, for example, and you will get an exception reporting that it cannot find the variable.
Thanks!
Folks,
I was able to work around this by sending HTTP requests to the iOS client to process and return the results in a UIAlertView. Note that all iOS code modifications are wrapped in #if DEBUG conditional compilation directives.
First, setup your client to send out notifications in the event of a device shake. Read this post for more information.
Next, in your iOS main app delegate add this code:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(deviceShakenShowDebug:)
                                             name:@"DeviceShaken"
                                           object:nil];
Then add a method that looks something like this:
- (void) deviceShakenShowDebug:(id)sender
{
    if (!self.textFieldEnterDebugArgs)
    {
        self.textFieldEnterDebugArgs = [[[UITextField alloc]
            initWithFrame:CGRectMake(0, 0, 260.0, 25.0)] autorelease];
        self.textFieldEnterDebugArgs.accessibilityLabel = @"AlertDebugArgsField";
        self.textFieldEnterDebugArgs.isAccessibilityElement = YES;
        [self.textFieldEnterDebugArgs setBackgroundColor:[UIColor whiteColor]];
        [self.tabBarController.selectedViewController.view addSubview:self.textFieldEnterDebugArgs];
        [self.tabBarController.selectedViewController.view bringSubviewToFront:self.textFieldEnterDebugArgs];
    }
    else
    {
        if ([self.textFieldEnterDebugArgs.text length] > 0)
        {
            if ([self.textFieldEnterDebugArgs.text hasPrefix:@"http://"])
            {
                [self doDebugHttpRequest:self.textFieldEnterDebugArgs.text];
            }
        }
    }
}

- (void)requestDidFinishLoad:(TTURLRequest*)request
{
    NSString *response = [[[NSString alloc]
        initWithData:((TTURLDataResponse*)request.response).data
            encoding:NSUTF8StringEncoding] autorelease];
    UIAlertView *resultAlert =
        [[[UIAlertView alloc] initWithTitle:NSLocalizedString(@"Request Loaded", @"")
                                    message:response
                                   delegate:nil
                          cancelButtonTitle:NSLocalizedString(@"OK", @"")
                          otherButtonTitles:nil] autorelease];
    resultAlert.accessibilityLabel = @"AlertDebugResult";
    [resultAlert show];
}
This code will add a UITextField to the very top view controller after a shake, slapped right above any navigation bar or other UI element. UIAutomation, or you the user, can manually enter a URL into this UITextField. When you shake the device again, if the text begins with "http" it will issue an HTTP request in code (exercise for the reader to implement doDebugHttpRequest).
Then, in my UIAutomation JavaScript file, I have defined the following two functions:
function httpGet(url, delayInSec) {
    if (!delayInSec) delayInSec = 1;
    var alertDebugResultSeen = false;
    var httpResponseValue = null;
    UIATarget.onAlert = function onAlert(alert) {
        httpResponseValue = alert.staticTexts().toArray()[1].name();
        alert.buttons()[0].tap();
        alertDebugResultSeen = true;
    }
    var target = UIATarget.localTarget();
    var application = target.frontMostApp();
    target.shake(); // bring up the input field
    application.mainWindow().textFields()["AlertDebugArgsField"].setValue(url);
    target.shake(); // send back to be processed
    target.delay(delayInSec);
    assertTrue(alertDebugResultSeen);
    return httpResponseValue;
}

function httpGetJSON(url, delayInSec) {
    var response = httpGet(url, delayInSec);
    return eval('(' + response + ')');
}
Now, in my javascript file, I can call
httpGet('http://localhost:3000/do_something')
and it will execute an HTTP request. If I want JSON data back from the server, I call
var jsonResponse = httpGetJSON('http://localhost:3000/do_something')
If I know it is going to be a long-running call, I call
var jsonResponse = httpGetJSON('http://localhost:3000/do_something', 10 /* timeout */)
I've been using this approach successfully now for several weeks.
Try performTaskWithPathArgumentsTimeout
UIATarget.host().performTaskWithPathArgumentsTimeout("/usr/bin/curl", "http://google.com", 30);
Just a small correction. The answer that suggests using UIATarget.host().performTaskWithPathArgumentsTimeout is an easy way to make a request on a URL in iOS 5.0+, but the syntax of the example is incorrect. Here is the correct way to make this call:
UIATarget.host().performTaskWithPathArgumentsTimeout("/usr/bin/curl", ["http://google.com"], 30);
The "[" around the "args" param is important, and the test will die with an exception similar to the following if you forget the brackets:
Error: -[__NSCFString count]: unrecognized selector sent to instance
Here is a fully working example that hits google.com and logs all the output:
var result = UIATarget.host().performTaskWithPathArgumentsTimeout("/usr/bin/curl", ["http://www.google.com"], 30);
UIALogger.logDebug("exitCode: " + result.exitCode);
UIALogger.logDebug("stdout: " + result.stdout);
UIALogger.logDebug("stderr: " + result.stderr);
+1 for creative use of "shake()". However, that's not an option for some projects, especially those that actually use the shake feature.
Think outside the box. Do the fetching with something else (Python, Ruby, node.js, bash+wget, etc). Then, you can use the pre-canned response and auto-generate the ui-test.js on the fly by including that dynamically generated json payload as the "sample data" into the test. Then you simply execute the test.
In my opinion, the test is the test; leave that alone. The test data you are using, if it's that dynamic, ought to be separated from the test itself. By fetching/generating the JSON this way and referencing it from the test, you can update that JSON however often you like: either immediately before every test, or on a set interval, like when you know the server has been updated. I'm not sure you would want to generate it while the test is running; that seems like it would create problems. Taking it to the next level, you could get fancy and use functions that calculate what values ought to be based on other values, and expose them as "dynamic properties" of the data rather than having that math inside the test, but at that point I think the discussion is more academic than practical.
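A hedged sketch of that generate-then-embed idea, assuming Node.js is available on the host machine; the fixture URL and file names are hypothetical:

// Fetch the JSON fixture ahead of time and emit a ui-test.js that embeds
// it, so the UIAutomation test itself never performs network I/O.
const fs = require('fs');
const https = require('https');

https.get('https://example.com/fixture.json', (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
        // Prepend the payload as "sample data", then append the test body.
        const test = 'var sampleData = ' + body + ';\n' +
                     fs.readFileSync('ui-test.template.js', 'utf8');
        fs.writeFileSync('ui-test.js', test);
    });
});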
Apple has recently updated UIAutomation to include a new UIAHost element for performing a task on the Host that is running the instance of Instruments that is executing the tests.
