Catch Malformed XML in JavaScript

I'm using Prototype to make Ajax requests. Occasionally the responses contain malformed XML. Prototype does invoke the onException callback, but the first error thrown is the one raised when I try to access a particular node.
I know Firefox recognizes the malformed response, because if I request the XML via the address bar, Firefox presents the error.
Is there a way for me to catch a "malformed XML" error in JavaScript?

With JavaScript you're typically relying on the browser to parse the XML for you. If the browser can't parse it because it's malformed, you're going to have to tackle it manually. There appears to be a library for doing that at http://xmljs.sourceforge.net/ . I haven't used it myself, but it looks solid. Then again, it might also throw malformed-XML errors at you.
What's causing the malformed xml? Maybe there's something you can do on that end?
And finally, if you're just trying to access some part of the document's data, you could consider using a regular expression:
doc = "<one><two>three</two></one>";
captures = doc.match(/<two>(.*)<\/two>/); // returns ["<two>three</two>", "three"]
data = captures[1]; // "three"
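If you do want to detect the malformed XML yourself, here is a minimal sketch, assuming the environment provides DOMParser (Firefox does): instead of throwing, browsers report malformed XML by returning a document that contains a parsererror element.
function parseXml(text) {
    var doc = new DOMParser().parseFromString(text, "application/xml");
    // A <parsererror> element in the result signals malformed input
    if (doc.getElementsByTagName("parsererror").length > 0) {
        throw new Error("Malformed XML response");
    }
    return doc;
}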

Related

Chrome Omnibox Special Characters Throw Error

I'm writing a basic Chrome Extension to add suggestions in the Omnibox from a JSON feed. Nearly all queries entered will display results as expected in the suggestions dropdown.
However, it seems that whenever an ampersand (&) is returned as part of the description, Chrome throws an error.
The error thrown reads "xmlParseEntityRef: no name(...)" and is called from the parseOmniboxDescription method within Chrome.
Any help with this matter would be greatly appreciated. I'm not sure if this is the only character with that problem or if it is more widespread.
The current API for omnibox suggestions requires that they be specified as encoded XML text, not just plain text. Some characters, including &, will need to be appropriately encoded.
To encode an entire XML string in browser JavaScript, you may do something like this:
function encodeXml(s) {
    var holder = document.createElement('div');
    holder.textContent = s;    // set as plain text...
    return holder.innerHTML;   // ...read back entity-encoded
}
console.log(encodeXml("Good & Bad > Bad & Good"));
// "Good &amp; Bad &gt; Bad &amp; Good"
If you perform this operation on your text before passing it to the omnibox API, the error should go away.
Per the documentation you can use <url>, <match>, and <dim> to further annotate your result. However, you may want to use a more structured XML-building approach for that, rather than simply concatenating strings. (I don't know if these XML elements have any attributes, but if they do, the approach above may not be adequate for encoding attribute values.)
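As a hedged usage sketch (fetchSuggestions is a hypothetical stand-in for your JSON feed lookup), the encoding step slots into the omnibox listener like this:
chrome.omnibox.onInputChanged.addListener(function(text, suggest) {
    fetchSuggestions(text, function(items) {
        suggest(items.map(function(item) {
            return {
                content: item.url,
                // encode before wrapping in the XML markup the API expects
                description: '<match>' + encodeXml(item.title) + '</match>'
            };
        }));
    });
});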

rdflib.js with RDFa not returning any results

What I use:
Ubuntu 14.04
Node.js 0.10.36
rdflib.js 0.2.0
What I want to achieve:
Parse an HTML file with RDFa information
Print the found RDF triples
What I have in my Node.js script:
var rdflib = require('rdflib');
var kb = new rdflib.Formula();
rdflib.parse("http://ceur-ws.org/Vol-1085/", kb, "http://www.example.com/", "application/rdfa");
console.log(kb.statements);
The rdflib.js source calls the third parameter of the parse method 'base'; however, I don't know what is meant by that, so I just added a dummy URI.
What I get as a result:
[]
Hence, there are no statements in the store. I would expect to see here the different triples that are present in the HTML file. Can anybody see what the problem is?
P.S. Because there is no (recent) documentation on how to use rdflib.js, I might be using the above methods in the wrong way.
The first argument of rdflib.parse takes a string containing the body of the resource; it doesn't fetch the URI for you. You need to use something like request to read the entity body first. Also, the URI http://ceur-ws.org/Vol-1085/ does not seem to return application/rdf+xml.
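A minimal sketch of that approach, assuming the request module alongside the rdflib 0.2.0 API used above (the page's own URI doubles as the base against which relative IRIs are resolved):
var request = require('request');
var rdflib = require('rdflib');

var uri = 'http://ceur-ws.org/Vol-1085/';
request(uri, function(error, response, body) {
    if (error) throw error;
    var kb = new rdflib.Formula();
    // body is the fetched HTML string, not the URI itself
    rdflib.parse(body, kb, uri, 'application/rdfa');
    console.log(kb.statements);
});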

DataCloneError in firefox when posting to web worker

I am working on a helper library called Ozai to make web workers easier, but am running into a problem in Firefox. I create a web worker from a URL Blob and attempt to post this payload to it:
var msg = {
    "id": "0fae0ff8-bfd1-49ea-8139-3d03fb9584e4",
    "fn": "fn",
    "args": [100, 200]
};
Using this code:
worker.postMessage(msg)
But it throws a DataCloneError exception. It looks like Firefox's implementation of structured cloning is failing on a very simple object. The code runs without problems on Chrome and Safari, but fails in the latest version of Firefox. Am I missing something here? How do I get around this (preferably without stringifying the payload)?
Here's a fiddle: http://jsfiddle.net/V8aCy/6/
You're trying to call postMessage with an object that has a property referencing arguments. That doesn't work because the data has to be cloneable: either serializable by the structured clone algorithm or implementing Transferable (e.g. ArrayBuffer), and an arguments object is neither.
Use Array.prototype.slice.call(arguments, 0) to convert arguments into an array, which can be serialized (cloned) if the contents are OK.
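A minimal sketch of the fix (invoke is a hypothetical wrapper around the postMessage call):
function invoke(fn) {
    // slice copies the arguments object into a real Array,
    // which the structured clone algorithm can serialize
    var args = Array.prototype.slice.call(arguments, 1);
    worker.postMessage({
        "id": "0fae0ff8-bfd1-49ea-8139-3d03fb9584e4",
        "fn": fn,
        "args": args
    });
}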
Corrected fiddle.

Uncaught SyntaxError: Unexpected token <

I have an OpenCart shop. Locally, uploading images works without a glitch. Online, it works only if the image is very small. If I upload an image of, say, 300 KB, the loading GIF next to the upload keeps spinning and I get these errors:
ajaxupload.js:
Line 609: if (response) {
Line 610:     response = eval("(" + response + ")");
Line 611: } else {
Line 612:     response = {};
Line 613: }
Why is this happening?
EDIT:
I did console.log(response) and you were right: what came back was the HTML of the 404 page. But how can it be too big? It works if the image is 100 KB but doesn't if it's 130 KB.
Your POST request is 404ing, so the response is not parseable JSON. It looks like your URL has unencoded /s in the query variables. Make sure to use encodeURIComponent() or some other function so that your URL is properly escaped.
From the error I believe that your response is not valid JSON.
By the way, eval is evil, so it's much better to use JSON.parse.
You can check your JSON here: http://jsonlint.com/
Edit: somebody asked why eval is evil, so here are some reasons:
it's slower than JSON.parse, since it actually starts a compiler
if not done correctly, you can end up with XSS attacks
it inherits the execution context and the scope in which it's invoked
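A minimal sketch of the JSON.parse replacement for the eval line quoted above (the variable names are the ones ajaxupload.js already uses):
if (response) {
    try {
        response = JSON.parse(response);
    } catch (e) {
        // The server answered with something other than JSON
        // (e.g. the HTML of a 404 page), so fall back to an empty object
        console.error('Response is not valid JSON:', response);
        response = {};
    }
} else {
    response = {};
}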
Even if you need to support old IE versions, you don't need to use eval(). You can use Douglas Crockford's excellent JSON library: https://github.com/douglascrockford/JSON-js
It offers JSON.parse support for old IEs.
Try retyping the line just before that error (don't copy and paste it, or you may carry over an invisible character). This sometimes works.

How to decompress a gzip XHR response in JavaScript

I have a gzipped response from a web request that I need to decompress in JavaScript (actually, in the success function of an AJAX call - my code is running in a headless browser, and doesn't have the built-in gzip processing support offered by a full browser). I've been looking around for an answer but I'm a bit stumped.
Ideally, what I'd like to have code for is:
var my_decompressed_string = someGzipDecompressor(xhr.responseText);
I found what I thought was an answer at JavaScript implementation of Gzip, but that may not be the answer I was hoping for. When trying to use the mentioned jsxcompressor library by way of the following code snippet
var my_decompressed_string = JXG.decompress(xhr.responseText);
... I get ...
TypeError: 'undefined' is not an object (evaluating '(new JXG.Util.Unzip(JXG.Util.Base64.decodeAsArray(str))).unzip()[0][0]')
Looking at that function in more detail, if I break up the code being executed by the decompress() function, I get what I assume is something good returned by the inner part ...
JXG.Util.Base64.decodeAsArray(xhr.responseText)
... but undefined returned for the outer part...
JXG.Util.Unzip( ... )
So that may be a dead end, of course, but if anyone knows a way to do what my original question asks, or has had better luck with jsxcompressor.js, I'd appreciate it.
Of course, I can force my AJAX request to return a 'deflate' response, but the size of the page (which I have no control over) is pretty large, and requesting gzip is an attempt at speeding up my page load time.
jsxcompressor.js requires a base64-encoded string to decompress, so you should use:
var my_decompressed_string = JXG.decompress(btoa(xhr.responseText));
If your headless browser does not support btoa, you should use a base64 encoding library; if you're on Node or io.js, there are plenty of base64 npm packages.
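For example, on Node or io.js, where btoa is unavailable, a hedged sketch (assuming xhr.responseText holds the raw gzip bytes as a binary string):
// Buffer stands in for btoa; 'binary' preserves the raw bytes
var base64 = new Buffer(xhr.responseText, 'binary').toString('base64');
var my_decompressed_string = JXG.decompress(base64);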
