Is parsing JSON faster than parsing XML - javascript

I'm creating a sophisticated JavaScript library for working with my company's server side framework.
The server side framework encodes its data to a simple XML format. There's no fancy namespacing or anything like that.
Ideally I'd like to parse all of the data in the browser as JSON. However, if I do this I need to rewrite some of the server side code to also spit out JSON. This is a pain because we have public APIs that I can't easily change.
What I'm really concerned about here is performance in the browser of parsing JSON versus XML. Is there really a big difference to be concerned about? Or should I exclusively go for JSON? Does anyone have any experience or benchmarks in the performance difference between the two?
I realize that most modern web developers would probably opt for JSON and I can see why. However, I really am just interested in performance. If there's a proven massive difference then I'm prepared to spend the extra effort in generating JSON server side for the client.

JSON should be faster since it's JS Object Notation, which means it can be recognized natively by JavaScript. In PHP on the GET side of things, I will often do something like this:
<script type="text/javascript">
var data = <?php echo json_encode($data); ?>;
</script>
For more information on this, see here:
Why is Everyone Choosing JSON Over XML for jQuery?
Also...what "extra effort" do you really have to put into "generating" JSON? Surely you can't be saying that you'll be manually building the JSON string? Almost every modern server-side language has libraries that convert native variables into JSON strings. For example, PHP's core json_encode function converts an associative array like this:
$data = array('test'=>'val', 'foo'=>'bar');
into
{"test": "val", "foo": "bar"}
Which is simply a JavaScript object (since there are no associative arrays (strictly speaking) in JS).
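On the client side, the "native" story amounts to a one-liner in modern browsers (a minimal sketch using the same sample data; older IE needs Crockford's json2.js to supply JSON.parse):
var payload = '{"test": "val", "foo": "bar"}';
var obj = JSON.parse(payload); // native JSON parser in modern browsers
console.log(obj.foo); // "bar"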

Firstly, I'd like to say thanks to everyone who's answered my question. I REALLY appreciate all of your responses.
In regard to this question, I've conducted some further research by running some benchmarks. The parsing happens in the browser. IE 8 is the only browser tested that doesn't have a native JSON parser. The XML is the same data as the JSON version.
Chrome (version 8.0.552.224), JSON: 92ms, XML: 90ms
Firefox (version 3.6.13), JSON: 65ms, XML: 129ms
IE (version 8.0.6001.18702), JSON: 172ms, XML: 125ms
Interestingly, Chrome parses the two at almost the same speed. Please note, this is parsing a lot of data; with little snippets of data, this probably isn't such a big deal.
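A harness for reproducing this kind of measurement might look like this (a sketch, not the code actually used for the numbers above; jsonText and xmlText stand in for the real test data, and DOMParser is unavailable in IE 8, which would use ActiveX instead):
function time(label, fn) {
    var start = new Date().getTime();
    fn();
    console.log(label + ": " + (new Date().getTime() - start) + "ms");
}
var jsonText = '[{"name": "New York"}, {"name": "London"}]'; // substitute the large payload
var xmlText = '<cities><city name="New York"/><city name="London"/></cities>';
time("JSON", function () { JSON.parse(jsonText); });
time("XML", function () {
    new DOMParser().parseFromString(xmlText, "text/xml");
});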

Benchmarks have been done. Here's one. In some earlier browsers the difference appeared to be a full order of magnitude (tens of milliseconds for JSON instead of hundreds for XML), but not massive in absolute terms. Part of this is server response time: XML is bulkier as a data format. Part of it is parsing time: JSON parses directly into JavaScript objects, while XML requires parsing a document you then have to traverse.
You could consider adding to your public API a method that returns JSON, instead of modifying the existing functions, if this becomes an issue (unless you don't want to expose JSON at all).
See also the SO question When to prefer JSON over XML?

Performance isn't really a consideration, assuming that you're not talking about gigabytes of XML. Yes, it will take longer (XML is more verbose), but it's not going to be something that the user will notice.
The real issue, in my opinion, is support for XML within JavaScript. E4X is nice, but it isn't supported by Microsoft. So you'll need to use a third-party library (such as jQuery) to parse the XML.
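For example, with jQuery (assuming version 1.5+ for $.parseXML; earlier versions would use the browser's DOMParser or ActiveX directly):
var doc = $.parseXML("<cities><city name='London'/></cities>");
$(doc).find("city").each(function () {
    console.log($(this).attr("name")); // "London"
});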

If possible, it would make sense to just measure it. By 'if possible' I mean that tooling for JavaScript (especially for performance analysis) may not be quite as good as for stand-alone programming languages.
Why measure? Because speculation based solely on properties of data formats is not very useful for performance analysis; developers' intuitions are notoriously poor at predicting performance. In this case it all comes down to the maturity of the respective XML and JSON parsers (and generators) in use. XML has the benefit of having been around longer; JSON is a bit simpler to process. This is based on having actually written libraries for processing both. In the end, if all things are equal (maturity and performance optimization of libraries), JSON can indeed be a bit faster to process. But both can be very fast, or very slow with bad implementations.
However: I suspect that you should not worry all that much about performance, as many have already suggested. Both XML and JSON can be parsed efficiently, and with modern browsers, probably are.
Chances are that if you have performance problems, they are not with reading or writing of data but with something else, and the first step would be figuring out what the actual problem is.

Since JSON is native to and designed for JavaScript, it's going to out-perform XML parsing all day long. You didn't mention your server-side language; in PHP, the json_encode/json_decode functionality is built into the PHP core...

The difference in performance will be so tiny that you won't even notice it (and you shouldn't think about performance problems until you have performance problems; there are a lot of more important points to care about: maintainable, readable, and documented code...).
But to answer your question: JSON will be faster to parse, because it's simply JavaScript object notation.

In this situation, I'd say stick with the XML. All major browsers have a DOM parsing interface that will parse well-formed XML. This link shows a way to use the DOMParser interface in Webkit/Opera/Firefox, as well as the ActiveX DOM Object in IE: https://sites.google.com/a/van-steenbeek.net/archive/explorer_domparser_parsefromstring
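The approach that page describes boils down to something like this (a sketch combining the two interfaces):
function parseXml(text) {
    if (window.DOMParser) {
        // WebKit, Opera, Firefox (and IE 9+)
        return new DOMParser().parseFromString(text, "text/xml");
    }
    // older IE
    var doc = new ActiveXObject("Microsoft.XMLDOM");
    doc.async = false;
    doc.loadXML(text);
    return doc;
}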

It also depends on how your JSON is structured. Tree-like structures lend themselves to more efficient searches than flat lists of objects; this is where a fundamental understanding of data structures comes in handy. Take, for example, a list-like structure in JSON that might look like this:
[
  {
    "name": "New York",
    "country": "USA",
    "lon": -73.948753,
    "lat": 40.712784
  },
  {
    "name": "Chicago",
    "country": "USA",
    "lon": -23.948753,
    "lat": 20.712784
  },
  {
    "name": "London",
    "country": "UK",
    "lon": -13.948753,
    "lat": 10.712784
  }
]
and then compare it to a tree-like structure in XML that might look like this:
<cities>
  <country name="USA">
    <city name="New York">
      <long>-73.948753</long>
      <lat>40.712784</lat>
    </city>
    <city name="Chicago">
      <long>-23.948753</long>
      <lat>20.712784</lat>
    </city>
  </country>
  <country name="UK">
    <city name="London">
      <long>-13.948753</long>
      <lat>10.712784</lat>
    </city>
  </country>
</cities>
The XML structure may yield a faster lookup than the JSON, since if I descend into the UK node to find London, I don't have to loop through the rest of the countries first. In the JSON example I just might, if London is near the bottom of the list. But what we have here is a difference in structure, not in format; I would be surprised to find XML faster when the structures are exactly the same.
Here is an experiment I did using Python. I know the question is looking at this strictly from a JavaScript perspective, but you might find it useful. The results show that JSON is faster than XML. However, the point is: how you structure the data is going to affect how efficiently you can retrieve it.
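To make the comparison concrete in JavaScript terms: the JSON equivalent of that XML tree is an object keyed by country and city, which needs no scanning at all (a sketch reusing the sample data above):
var cities = {
    "USA": {
        "New York": { lon: -73.948753, lat: 40.712784 },
        "Chicago": { lon: -23.948753, lat: 20.712784 }
    },
    "UK": {
        "London": { lon: -13.948753, lat: 10.712784 }
    }
};
var london = cities["UK"]["London"]; // direct lookup, no loop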

Another reason to stick with XML is that if you switch to JSON, you modify the "maintenance contract". XML is more typed than JSON, in the sense that it works more naturally with typed languages (i.e. not JavaScript).
If you change to JSON, some future maintainer of the code base might introduce a JSON array at some point which has mixed type content (e.g. [ "Hello", 42, false ]), which will present a problem to any code written in a typed language.
Yes, you could do that as well in XML but it requires extra effort, while in JSON it can just slip in.
And while it does not seem like a big deal at first glance, it actually is as it forces the code in the typed language to stick with a JSON tree instead of deserializing to a native type.

The best example I have found comparing these two is:
http://www.utilities-online.info/xmltojson/#.VVGOlfCYK7M
It shows that JSON is more human-readable and understandable than XML.

Related

How to parse JSON in Mongo script

As part of a MongoDB script, I need to parse JSON. (Not something you would typically do, but it is JavaScript after all.)
JSON.parse does not exist.
Here is the workaround I made:
function parseJSON(json) {
    // The wrapper function makes eval return the parsed value instead
    // of treating the object literal as a block statement.
    return eval("(function() { return " + json + "; })()");
}
This doesn't seem like it would be performant, and it looks a little ridiculous. Does anyone have a better way?
If you really have to convert a JSON string into a JavaScript object, then what you suggest is perfectly reasonable, although there are reasons to avoid eval due to performance concerns and security risks (see When is JavaScript's eval() not evil?). In the case that you generated the JSON data and you are sure that there's nothing dangerous inside, then you probably don't have to worry about an injection vulnerability.
As far as performance, I'm not aware of anything better other than JSON.parse. Can you alter the design of your program so that you do not have to parse JSON? For example, for importing JSON data into MongoDB, instead of using eval you should use the mongoimport utility.
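For example (a sketch; the database, collection, and file names here are hypothetical):
mongoimport --db mydb --collection places --file data.json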

Is there any well-known method for DRYing JSON

Consider this JSON response:
[{
    "Name": "Saeed",
    "Age": 31
}, {
    "Name": "Maysam",
    "Age": 32
}, {
    "Name": "Mehdi",
    "Age": 27
}]
This works fine for small amounts of data, but when you want to serve larger amounts of data (say, many thousands of records), it seems logical to somehow prevent those repetitions of property names in the response JSON.
I Googled the concept (DRYing JSON) and to my surprise, I didn't find any relevant results. One way of course is to compress the JSON using a simple home-made algorithm and decompress it on the client-side before consuming it (a decoder sketch follows the example):
[['Name', 'Age'],
['Saeed', 31],
['Maysam', 32],
['Mehdi', 27]]
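A client-side decoder for this home-made format only takes a few lines (a sketch; the first row holds the property names, the remaining rows hold the values):
function expand(packed) {
    var keys = packed[0], out = [];
    for (var i = 1; i < packed.length; i++) {
        var obj = {};
        for (var j = 0; j < keys.length; j++) {
            obj[keys[j]] = packed[i][j];
        }
        out.push(obj);
    }
    return out;
}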
However, a best practice would be better than each developer trying to reinvent the wheel. Have you guys seen a well-known widely-accepted solution for this?
First off, JSON is not meant to be the most compact way of representing data. It's meant to be parseable directly into a JavaScript data structure designed for immediate consumption without further parsing. If you want to optimize for size, then you probably don't want self-describing JSON: you need to allow your code to make a bunch of assumptions about how to handle the data and put it to use, and do some manual parsing on the receiving end. It's those assumptions and the extra coding work that can save you space.
If the property names and format of the server response are already known to the code, you could just return the data as an array of alternating values:
['Saeed', 31, 'Maysam', 32, 'Mehdi', 27]
or if it's safe to assume that names don't include commas, you could even just return a comma-delimited string that you could split into its pieces and stick into your own data structures:
"Saeed, 31, Maysam, 32, Mehdi, 27"
or if you still want it to be valid JSON, you can put that string in an array like this which is only slightly better than my first version where the items themselves are array elements:
["Saeed, 31, Maysam, 32, Mehdi, 27"]
These assumptions and this compactness put more of the responsibility for parsing the data on your own JavaScript, but it is the removal of the self-describing nature of the full JSON you started with that makes it more compact.
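Reconstructing objects from the alternating-values form on the receiving end is equally short (a sketch assuming the Name/Age layout above):
var flat = ['Saeed', 31, 'Maysam', 32, 'Mehdi', 27];
var people = [];
for (var i = 0; i < flat.length; i += 2) {
    people.push({ Name: flat[i], Age: flat[i + 1] });
}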
One solution is known as the hpack algorithm:
https://github.com/WebReflection/json.hpack/wiki
You might be able to use a CSV format instead of JSON, as you would only specify the property names once. However, this would require a rigid structure like in your example.
JSON isn't really the kind of thing that lends itself to DRY, since it's already quite well-packaged considering what you can do with it. Personally, I've used bare arrays for JSON data that gets stored in a file for later use, but for simple AJAX requests I just leave it as it is.
DRY usually refers to what you write yourself, so if your object is being generated dynamically you shouldn't worry about it anyway.
Use gzip-compression which is usually readily built into most web servers & clients?
It will still take some (extra) time & memory to generate & parse the JSON at each end, but it will not take that much time to send over the network, and will take minimal implementation effort on your behalf.
Might be worth a shot even if you pre-compress your source-data somehow.
It's actually not a problem for JSON that you've often got massive string or "property" duplication (nor is it for XML).
This is exactly what the duplicate string elimination component of the DEFLATE-algorithm addresses (used by GZip).
While most browser clients can accept gzip-compressed responses, traffic back to the server won't be compressed.
Does that warrant using "JSON compression" (i.e. hpack or some other scheme)?
It's unlikely to be much faster than implementing gzip compression in JavaScript (which is not impossible; on a reasonably fast machine you can compress 100 KB in 250 ms).
It's pretty difficult to safely process untrusted JSON input. You need to use stream-based parsing and decide on a maximum complexity threshold, or else your server might be in for a surprise. See for instance Armin Ronacher's Start Writing More Classes:
If your neat little web server is getting 10000 requests a second through gevent but is using json.loads then I can probably make it crawl to a halt by sending it 16MB of well crafted and nested JSON that hog away all your CPU.
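The cheapest guard is a size cap before the text ever reaches the parser; it does not bound nesting depth on its own, but it does block the 16 MB case quoted above (a sketch; the limit is an arbitrary illustration):
function safeParse(text, maxLength) {
    if (text.length > maxLength) {
        throw new Error("JSON payload too large");
    }
    return JSON.parse(text);
}
var input = '{"ok": true}'; // stands in for the untrusted request body
var data = safeParse(input, 1024 * 1024); // reject anything over ~1 MB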

JSON diff of large JSON data, finding some JSON as a subset of another JSON

I have a problem I'd like to solve to not have to spend a lot of manual work to analyze as an alternative.
I have 2 JSON objects (returned from different web service API or HTTP responses). There is intersecting data between the 2 JSON objects, and they share similar JSON structure, but not identical. One JSON (the smaller one) is like a subset of the bigger JSON object.
I want to find all the intersecting data between the two objects. Actually, I'm more interested in the shared parameters/properties within the objects, not really the actual values of those parameters/properties. That's because I eventually want to use data from one JSON output to construct the other JSON as input to an API call. Unfortunately, I don't have the documentation that defines the JSON for each API. :(
What makes this tougher is the JSON objects are huge. One spans a page if you print it out via Windows Notepad. The other spans 37 pages. The APIs return the JSON output compressed as a single line. Normal text compare doesn't do much, I'd have to reformat manually or w/ script to break up object w/ newlines, etc. for a text compare to work well. Tried with Beyond Compare tool.
I could do manual search/grep but that's a pain to cycle through all the parameters inside the smaller JSON. Could write code to do it but I'd also have to spend time to do that, and test if the code works also. Or maybe there's some ready made code already for that...
Or can look for JSON diff type tools. Searched for some. Came across these:
https://github.com/samsonjs/json-diff or https://tlrobinson.net/projects/javascript-fun/jsondiff
https://github.com/andreyvit/json-diff
Both failed to do what I wanted; presumably the JSON is either too complex or too large for them to process.
Any thoughts on best solution? Or might the best solution for now be manual analysis w/ grep for each parameter/property?
In terms of a code solution, any language will do. I just need a parser or diff tool that will do what I want.
Sorry, can't share the JSON data structure with you either, it may be considered confidential.
Beyond Compare works well, if you set up a JSON file format in it to use Python to pretty-print the JSON. Sample setup for Windows:
Install Python 2.7.
In Beyond Compare, go under Tools, under File Formats.
Click New. Choose Text Format. Enter "JSON" as a name.
Under the General tab:
Mask: *.json
Under the Conversion tab:
Conversion: External program (Unicode filenames)
Loading: c:\Python27\python.exe -m json.tool %s %t
Note that the second parameter in the command line must be %t; if you enter two %s parameters you will suffer data loss.
Click Save.
Jeremy Simmons has created a better file format package for Beyond Compare, posted on their forum as "JsonFileFormat.bcpkg", that does not require Python to be installed.
Just download the file and open it with BC and you are good to go. It's much simpler.
JSON File Format
I needed a file format for JSON files.
I wanted to pretty-print & sort my JSON to make comparison easy.
I have attached my bcpackage with my completed JSON File Format.
The formatting is done via jq - http://stedolan.github.io/jq/
Props to Stephen Dolan for the utility https://github.com/stedolan.
I have sent a message to the folks at Scooter Software asking them to include it in the page with additional formats.
If you're interested in seeing it on there, I'm sure a quick reply to the thread with an up-vote would help them see the value of posting it.
I have a small GPL project that would do the trick for simple JSON. I have not added support for nested entities, as it is more of a simple ObjectDB solution and not actually JSON (despite the fact that it was clearly inspired by it).
Long and short, the API is pretty simple. Make a new group, populate it, and then pull a subset via whatever logical parameters you need.
https://github.com/danielbchapman/groups
The API is used basically like this:
SubGroup items = group
    .notEqual("field", "value")
    .lessThan("field2", 50); // ...etc...
There's actually support for basic unions and joins which would do pretty much what you want.
Long and short you probably want a Set as your data-type. Considering your comparisons are probably complex you need a more complex set of methods.
My only caution is that it is GPL. If your data is confidential, odds are you may not be interested in that license.

Is it bad to store JSON on disk?

Mostly I have just used XML files to store config info and to provide elementary data persistence. Now I am building a website where I need to store some XML-type data. However, I am already using JSON extensively throughout the whole thing. Is it bad to store JSON directly instead of XML, or should I store the XML and introduce an XML parser?
Not bad at all. Although there are more XML editors, so if you're going to need to manually edit the files, XML may be better.
Differences between using XML and JSON are:
It's a lot easier to find an editor supporting a nice way to edit XML. I'm aware of no editors that do this for JSON, but there might be some, I hope :)
Extreme portability/interoperability - not everything can read JSON natively whereas pretty much any language/framework these days has XML libraries.
JSON takes up less space
JSON may be faster to process, ESPECIALLY in a JavaScript app where it's native data.
JSON is more human readable for programmers (this is subjective but everyone I know agrees so).
Now, please notice the common thread: any of the benefits of using pure XML listed above are 100% lost immediately as soon as you store JSON as XML payload.
Therefore, the guidelines are as follows:
If wide interoperability is an issue and you talk to something that can't read JSON (like a DB that can read XML natively), use XML.
Otherwise, I'd recommend using JSON
NEVER EVER use JSON as XML payload unless you must use XML as a transport container due to existing protocol needs AND the cost of encoding and decoding JSON to/from XML is somehow prohibitively high as compared to network/storage lossage due to double encoding (I have a major trouble imagining a plausible scenario like this, but who knows...)
UPDATED: Removed Unicode bullets as per info in comments
It's just data, like XML. There's nothing about it that would preclude saving it to disk.
Define "bad". They're both just plain-text formats. Knock yourself out.
If you're storing the data as a cache (meaning it was in one format and you had to process it programmatically to "make" it JSON), then I say no problem. As long as the consumer of your JSON reads native JSON, it's standard practice to save cache data to disk or memory.
However, if you're storing a configuration file in JSON which needs human interaction to "process", then I might reconsider. Using JSON for simple key:value pairs is cool, but anything beyond that and the format may be too compact (meaning nested { and [ brackets can be hard to decipher).
One potential issue with JSON, when there is deep nesting, is readability: you may actually see runs like ]]]}], which makes debugging difficult.
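Native JSON.stringify helps with this: its third argument controls indentation, so anything meant for human eyes can be written pretty-printed (a small sketch):
var obj = { a: { b: { c: [1, 2, 3] } } };
JSON.stringify(obj);          // '{"a":{"b":{"c":[1,2,3]}}}'
JSON.stringify(obj, null, 2); // indented, one property per line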

Is JSON used only for JavaScript?

I am storing a JSON string in the database that represents a set of properties. In the code behind, I export it and use it for some custom logic. Essentially, I am using it only as a storage mechanism. I understand XML is better suited for this, but I read that JSON is faster and preferred.
Is it a good practice to use JSON if the intention is not to use the string on the client side?
JSON is a perfectly valid way of storing structured data and simpler and more concise than XML. I don't think it is a "bad practice" to use it for the same reason someone would use XML as long as you understand and are OK with its limitations.
Whether it's good practice I can't say, but it strikes me as odd. XML fields in your SQL database are at least queryable (SQL Server 2000 or later, MySQL, and others), but more often than not they are a last resort for metadata.
JSON is usually the carrier between JavaScript and your back end, not the storage itself, unless you have a JSON-backed document-oriented database such as CouchDB or Solr, since JSON lends itself perfectly to storing documents.
Not to say I don't agree with using JSON as a simple (that is, not serializing references) data serializer over XML, but I won't go into a JSON vs XML rant just for the sake of it :).
If you're not using JSON for its portability between two languages, and you're positive you will never query the data from SQL, you will be better off with the default serialization from .NET.
You're not the only one using JSON for data storage. CouchDB is a document-oriented database which uses JSON for storing information. They initially stored that info as XML, then switched to JSON. Querying is done via good old HTTP.
By the way, CouchDB received some attention lately and is backed by IBM and the Apache Software Foundation. It is written in Erlang and promises a lot (IMHO).
Yeah I think JSON is great. It's an open standard and can be used by any programming language. Most programming languages have libraries to parse and encode JSON as well.
JSON is just a data format, like XML, and due to its more restrictive specification it is smaller in size and faster to generate and parse. So from that aspect it's perfectly acceptable to use in the database if you need ad hoc structured markup (and you're absolutely sure that using tables isn't possible and/or the best solution).
However, you don't say what database you're using. If it is SQL Server 2005 or later then the database itself has extensive capabilities when dealing with XML, including an XML data type, XQuery support, and the ability to optimise XQuery expressions as part of an execution plan. So if this is the case I'd be much more inclined to use XML and take advantage of the database's facilities.
Just remember that JSON has all of the same problems that XML has as a back-end data store; namely, it in no way replaces a relational database or even a fixed-size binary format. If you have millions of rows and need random access, you will run into all the same fundamental performance issues with JSON that you would with XML. I doubt you're intending to use it for this kind of data store scenario, but you never know... stranger things have been tried :)
JSON is shorter, so it will use less space in your database. I'd probably use it instead of XML or writing my own format.
On the other hand, searching for matches will suck - you're going to have to use "where json like '% somevalue %'" which will be painfully slow. This is the same limitation with any other text storage strategy (xml included).
You shouldn't use either JSON or XML for storing data in a relational database. JSON and XML are serialization formats which are useful for storing data in files or sending over the wire.
If you want to store a set of properties, just create a table for it, with each row being a property. That way you can query the data using ordinary SQL.
