I'm currently working on a project where a fair amount of JSON data is transmitted back and forth and stored by the browser as lists of JavaScript objects. For example:
person: {
    // Primary key
    key: "id",
    // The actual records
    table: {
        "1": {id: 1, name: "John", surname: "Smith", age: 26},
        "2": {id: 2, name: "Mary", surname: "Brown", age: 19},
        // etc..
    },
    indexes: {
        // Arrays of pointers to records defined above
        "name": [
            {id: 89, name: "Aaron", surname: "Jones", age: 42},
            // etc..
        ]
    }
}
I'm finding myself coding all sorts of indexing and sorting algorithms to efficiently manipulate this data and I'm starting to think that this kind of thing must have been done before.
I have experience using the Ext.data.Store and Ext.data.Record objects for this kind of data manipulation, but I think they are overly complex for junior developers, and the project I'm working on is a small mobile application where we can't afford to add a 300K+ library just for the sake of it, so I need something really minimal.
Any ideas if there is a Javascript JSON manipulation framework that has the following:
Can store,
retrieve,
sort,
and iterate through JSON data,
with a clean API,
minimal performance drag (mobiles don't have a lot of computing power)
and a small payload that is ideally <10K?
I might be asking for too much, but hopefully someone's used something like this... The kind of thing I'm looking for is the JSON equivalent of jQuery; maybe it's not so outlandish.
Take a look at jsonQ.
It fulfills all the requirements listed in the question.
Can store,
retrieve
and iterate through JSON data,
It provides traversal methods (find, siblings, parent, etc.) and manipulation methods (value, append, prepend).
sort
It provides a direct array sort method and a sort method that runs on a jsonQ object (both sort recursively).
with a clean API
It aims to offer the same API for JSON that jQuery offers for the DOM, so if you are familiar with jQuery it's easy to pick up. Moreover, complete documentation of the APIs is available.
minimal performance drag
On initialization it builds an internal format from the JSON data, which makes later traversal more efficient. (It's like doing all the loops once, so you don't have to run loop over loop each time you iterate that JSON.)
and a small payload that is ideally <10K?
The minified version is 11.7 KB.
Actually, I'd argue the question itself is off. From your example one can see that you're trying to emulate SQL-like storage with JSON. Maybe you just need IndexedDB?
jsonpath matches points 4-7 (and maybe 3) of your exact requirements, and the global JSON object covers 1 and 2 with a single call each.
Also, IMHO the requirements are unrealistic, especially the last one about size.
I think Lawnchair is what you're looking for. Its homepage says it is made with mobile in mind; however, I've not used it before, so I cannot comment on that.
It is a simple key-value store, although you can define your own indexes, somewhat like in CouchDB. I've not seen support for selectors, but there is a query plugin available promising easy selection.
Something like jQuery relies on Sizzle, a CSS selector library, which isn't applicable in your case. XPath-style querying is in my opinion your best bet: it's used primarily with XML, which is a tree structure, and a JSON object can be represented as a tree too. I've found JSONSelect, a small library that supports such selectors for JSON in a 'CSS-ish' way.
If you can somehow plug JSONSelect into Lawnchair, I think you've found a great combination :)
I'm working on the front end of an application which communicates with a back end through a REST API. The back end is a kind of standalone device, not a standard web server, so it is not very powerful (but it can run PHP). The API is very generic and returns values as key-value pairs, e.g.
[{ key: "key1", value: "some value" }, { key: "key2", value: "1234" }]
The problem I'm facing is that it does not consider types; it returns everything as quoted strings (numbers: "123", booleans: "1"). Recently I asked for a change (my argument was that manual type conversion is unnecessary work that has to be done by every client app and could be avoided if the server did it), but I need some more convincing arguments. Some counterarguments to my request are:
RESTful communication is natively string-based (so regardless of whether a 1 or a "1" is transferred, a client-side type conversion has to be done)
it is the responsibility of the GUI designer to understand the context of each parameter
the back end is following the KISS principle: keeping everything as strings means no additional processing is needed on the back end, and conversion can be done on the GUI, which typically runs on a much more powerful PC
So what could be some good arguments to convince my colleagues that types in JSON responses are good thing for me and for them as well?
Thanks
RESTful communication and JSON are two different things. JSON is only the format of the data; it could be XML, CSV, or a custom format, and that wouldn't remove the RESTful aspect.
JSON is natively handled by pretty much every JavaScript library that handles server communication: there's no parsing to do and little conversion (maybe date objects from timestamps and that kind of thing).
On the server side there are plenty of libraries that can handle the JSON for you too. And how do they generate the key-value format? Generic code with introspection, or tons of hand-written serializers for every class? That leads to a lot of unnecessary technical code to write and test.
KISS doesn't mean keeping things totally stupid and not thinking about anything. Having a boolean as a number inside a string, and numbers as strings, is nothing like simple for a developer; it's sheer hell to handle across all the object conversions. If you need to check data constraints, you'll end up repeating yourself when validating every field (testing whether the number is a number, ...).
The simplest thing is not to write your own library that converts everything to strings, probably with worse performance than a specialized library. It's to use libraries that do the job for you.
If you send properly typed objects, the JSON deserializer on the back end does part of the validation for you: a boolean will be a boolean, a number will be a number. If you have a lot of validation to do, that means far less code to write to perform all the checks.
Client-side, I'd guess there is a lot of code dealing with this key/value format and converting values. That slows development down badly, and if you do unit testing it adds a lot of tests to write.
it is the responsibility of the GUI designer to understand the context of each parameter
Well, this is true, but I feel that providing well-formatted data is the responsibility of the server. Enforcing a format leads to a fail-fast practice, which is a good thing. Have they really never had a production problem because of this generic format? They wouldn't with proper JSON.
Personally, my JSON code on the server is Java annotations plus one custom serializer, nothing more. I wonder how much code they wrote to serialize/deserialize and convert types.
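To illustrate the client-side burden, here's a sketch of the ad-hoc coercions every consumer of an all-strings payload ends up repeating (the field names are illustrative, not from the real API):

```javascript
// With an all-strings payload, every consumer repeats ad-hoc coercions.
const payload = JSON.parse('{"age": "31", "active": "1", "price": "19.99"}');
const typed = {
  age: parseInt(payload.age, 10),    // "31"    -> 31
  active: payload.active === "1",    // "1"     -> true
  price: parseFloat(payload.price)   // "19.99" -> 19.99
};
// Had the server sent {"age": 31, "active": true, "price": 19.99},
// JSON.parse alone would have produced ready-to-use values.
```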
First of all, the idea of using the data format [{ key: "key1", value: "some value" }, { key: "key2", value: "1234" }] is kind of pointless; it is basically the same as { "key1": "some value", "key2": "1234" }.
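For what it's worth, collapsing the pair array into the equivalent plain object takes a single pass; a sketch:

```javascript
// Collapse [{key, value}, ...] into a plain object in one pass.
const pairs = [{ key: "key1", value: "some value" }, { key: "key2", value: "1234" }];
const obj = {};
pairs.forEach(function (pair) { obj[pair.key] = pair.value; });
// obj is now { key1: "some value", key2: "1234" }
```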
In regards to the points, your colleagues made:
1. While this is indeed a text-based transfer, and conversion between the string representation and an actual object has to be done, there would be no need to recursively walk the tree of key/value pairs yourself if you transferred objects and other complex types as native values.
Let's pretend that you have to transfer following piece of data:
{ "key1": { "x": 10, "y": 20 } }
if you were to encode the inner object as a string, you would get something like this:
{ "key1": "{ \"x\": \"10\", \"y\": \"20\" }" }
With string-encoded values, you'd have to call JSON.parse on the entire object first and then again on each object stored as text inside it, which requires walking the tree yourself. With native types (objects, numbers, arrays) as values, you achieve the same effect with a single call to JSON.parse (which is internally recursive, but still better than managing it yourself).
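A small sketch of the two cases, using the example payload above:

```javascript
// Typed values: a single JSON.parse yields ready-to-use numbers.
const typed = JSON.parse('{"key1": {"x": 10, "y": 20}}');
const sum1 = typed.key1.x + typed.key1.y; // 30

// String-encoded values: a second parse plus manual coercion is needed.
const stringly = JSON.parse('{"key1": "{\\"x\\": \\"10\\", \\"y\\": \\"20\\"}"}');
const inner = JSON.parse(stringly.key1);        // second parse
const sum2 = Number(inner.x) + Number(inner.y); // manual coercion, also 30
```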
The second point is valid, but I feel that any service should return data in ready-to-use form, or at least as ready as it can be. Wouldn't it be silly if a service gave you a piece of raw XML instead of parsed and prepared data? The same principle applies here.
I feel like your friends are just being lazy trying to cover their butts with KISS principle here. It is very likely that their server code has all the info it needs to encode the values in their proper types, which would be much harder to do on the client side. So their third point seems like a blatant 'we are too lazy so you have to do it' thing.
So I've done quite a lot of research about this particular topic, and I still find myself somewhat confused as to the best approach. There are two parts to my question.
1.) Basically my page accesses a MySQL music database which holds the entire discography of a particular artist. There are literally thousands of items. The page's function is to display all items and sort them by a particular value in the array (which the user chooses from a pre-defined list), like title, release date, catalog number, category, etc.
From what I understand, there are two ways I could achieve the display and sorting. Once the user loads the page, the PHP could output the discography array immediately, echo it into a JavaScript object, and then use JavaScript to do the sorting once the user clicks a button.
OR
After the user clicks a button, I could use AJAX to send the value by which the user wants the items sorted, use PHP (or database-side sorting, like ORDER BY) to do the sorting, and output the array as JSON. An array sorted by pagename, for example, would look something like this:
var discography = [
{"id":"1",
"pagename":"item100001",
"category":"Albums",
"Label":"Atlantic",
"title":"Awesome Album",
"date":"July 2, 1998",
"country":"United States",
"catalog":"666 3333 44444",
"format":"CD"},
{"id":"12",
"pagename":"item100002",
"category":"Albums",
"Label":"Epic",
"title":"Fun Music",
"date":"January 22, 1992",
"country":"United Kingdom",
"catalog":"333 4444 5555",
"format":"Cassette"},
{"id":"3",
"pagename":"item100003",
"category":"Single",
"Label":"Atlantic",
"title":"Cool Single",
"date":"October 12, 1988",
"country":"United States",
"catalog":"444 5555 66666",
"format":"CD"}
];
Which I could then turn into HTML with jQuery to display the items on the page.
The user should be able to sort and re-sort the items by whatever value they want, not just once.
Perhaps this question oversimplifies the premise, but which is the smarter, faster, and more efficient approach to array sorting: server-side (using AJAX, output as JSON) or JavaScript? If it's server-side, I'm confident I'd know how to write functions to sort the data properly.
But if the answer is Javascript, that brings me to my second question.
2.) I have found plenty of wonderful JavaScript sorting functions out there, but I keep reading about the issue of "stable" versus unstable sorting in JavaScript. Would this issue even apply here? I definitely need sorting to be consistent across all browsers. Merge sort is apparently a good answer to this quandary, but since I need to sort by a particular field (category, date, etc.), I would need a little direction on how to adapt the function to my purpose. If I can get safe, consistent sorting without merge sort, though, I'll just do that.
If you've read this far I appreciate it. Any help would be welcomed!
It is hard to say which way is right.
I would prefer a JavaScript solution because it produces less traffic to the server.
With array.sort() you get everything you need for sorting your array of objects by passing a compare function: https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Array/sort
array.sort(function(a,b){return a.id - b.id});
If the function returns a positive number, a is sorted after b; a negative number means a comes before b, and 0 means they are considered equal.
If there are many items, JavaScript can freeze your screen. In that case there are two options: handle everything on the server after all, or use a Web Worker to do the sorting.
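The compare-function approach generalizes to sorting by an arbitrary field, and tacking on an index tie-break keeps the sort stable even in engines whose built-in sort is not; a sketch:

```javascript
// Sort by any field; the index tie-break preserves the original order of
// items with equal keys, making the sort stable regardless of engine.
function sortBy(items, field) {
  return items
    .map(function (item, index) { return { item: item, index: index }; })
    .sort(function (a, b) {
      if (a.item[field] < b.item[field]) return -1;
      if (a.item[field] > b.item[field]) return 1;
      return a.index - b.index; // equal keys: keep original order
    })
    .map(function (entry) { return entry.item; });
}

var discography = [
  { id: 3, title: "Cool Single" },
  { id: 1, title: "Awesome Album" }
];
var byTitle = sortBy(discography, "title"); // "Awesome Album" first
```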
Consider this JSON response:
[{
Name: 'Saeed',
Age: 31
}, {
Name: 'Maysam',
Age: 32
}, {
Name: 'Mehdi',
Age: 27
}]
This works fine for small amounts of data, but when you want to serve larger amounts (say, many thousands of records), it seems logical to somehow avoid repeating the property names in the response JSON.
I Googled the concept (DRYing JSON) and, to my surprise, didn't find any relevant results. One way, of course, is to compress the JSON using a simple home-made algorithm and decompress it on the client side before consuming it:
[['Name', 'Age'],
['Saeed', 31],
['Maysam', 32],
['Mehdi', 27]]
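(For reference, the client-side decompression of this [header, ...rows] form is only a few lines; a sketch:)

```javascript
// Expand the [header, ...rows] form back into an array of objects.
var packed = [['Name', 'Age'], ['Saeed', 31], ['Maysam', 32], ['Mehdi', 27]];
var header = packed[0];
var unpacked = packed.slice(1).map(function (row) {
  var obj = {};
  header.forEach(function (field, i) { obj[field] = row[i]; });
  return obj;
});
// unpacked[0] is { Name: 'Saeed', Age: 31 }
```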
However, a best practice would be better than every developer trying to reinvent the wheel. Have you seen a well-known, widely accepted solution for this?
First off, JSON is not meant to be the most compact representation of data. It's meant to parse directly into a JavaScript data structure designed for immediate consumption, without further parsing. If you want to optimize for size, then you probably don't want self-describing JSON: you have to let your code make a bunch of assumptions about how to handle the data and do some manual parsing on the receiving end. Those assumptions and that extra coding work are what save you space.
If the property names and format of the server response are already known to the code, you could just return the data as an array of alternating values:
['Saeed', 31, 'Maysam', 32, 'Mehdi', 27]
or, if it's safe to assume that names don't include commas, you could even just return a comma-delimited string that you split into its pieces and stick into your own data structures:
"Saeed, 31, Maysam, 32, Mehdi, 27"
or, if you still want it to be valid JSON, you can put that string in an array like this, which is only slightly better than my first version where the items themselves are array elements:
["Saeed, 31, Maysam, 32, Mehdi, 27"]
These assumptions and compactness put more of the responsibility for parsing the data on your own javascript, but it is that removal of the self describing nature of the full JSON you started with that leads to its more compact nature.
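For example, rebuilding objects from the flat alternating-values array takes a short loop, assuming both ends agree on the field order (Name, then Age):

```javascript
// Rebuild objects from the flat alternating-values array. The field order
// (Name, then Age) is an assumption shared by server and client.
var flat = ['Saeed', 31, 'Maysam', 32, 'Mehdi', 27];
var people = [];
for (var i = 0; i < flat.length; i += 2) {
  people.push({ Name: flat[i], Age: flat[i + 1] });
}
// people[1] is { Name: 'Maysam', Age: 32 }
```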
One solution is the hpack algorithm:
https://github.com/WebReflection/json.hpack/wiki
You might be able to use a CSV format instead of JSON, as you would only specify the property names once. However, this would require a rigid structure like in your example.
JSON isn't really the kind of thing that lends itself to DRY, since it's already quite well-packaged considering what you can do with it. Personally, I've used bare arrays for JSON data that gets stored in a file for later use, but for simple AJAX requests I just leave it as it is.
DRY usually refers to what you write yourself, so if your object is being generated dynamically you shouldn't worry about it anyway.
Use gzip compression, which is usually readily built into most web servers and clients.
It will still take some (extra) time and memory to generate and parse the JSON at each end, but it will not take much time to send over the network, and it requires minimal implementation effort on your behalf.
It might be worth a shot even if you pre-compress your source data somehow.
Massive string or "property" duplication is actually not a problem for JSON (nor is it for XML).
This is exactly what the duplicate-string elimination in the DEFLATE algorithm (used by gzip) addresses.
While most browser clients can accept gzip-compressed responses, traffic back to the server won't be compressed.
Does that warrant using "JSON compression" (i.e. hpack or some other scheme)?
It's unlikely to be much faster than implementing gzip compression in JavaScript (which is not impossible; on a reasonably fast machine you can compress 100 KB in 250 ms).
It's pretty difficult to safely process untrusted JSON input. You need to use stream-based parsing and decide on a maximum complexity threshold, or else your server might be in for a surprise. See for instance Armin Ronacher's Start Writing More Classes:
If your neat little web server is getting 10000 requests a second through gevent but is using json.loads then I can probably make it crawl to a halt by sending it 16MB of well crafted and nested JSON that hog away all your CPU.
I am creating an offline mobile web app, and am looking at using JSON to replicate some of my database tables and storing that in localStorage. (I am aware of Web SQL Database but it doesn't look particularly future-proof.)
I started with a very basic JSON output from the database, which looks a bit like this:
{
"1": {"id":"1","name":"Hello","alias":"hello","category":"8"},
"2": {"id":"2","name":"World","alias":"world","category":"3"},
...
}
However, there is a lot of data in many tables, and space could become an issue with the constant repetition of field names. Storing the data like this halves the size:
{
    "1": ["1","Hello","hello","8"],
    "2": ["2","World","world","3"],
    ...
}
But now I have to reference a piece of data by numeric index, possibly filling my code with magic numbers. I thought of storing an array like ["id","name",...] in another variable, but the extra lookups seem like they would get messy.
Are there any practical ways to avoid that, but also keeping the Javascript code fairly neat? Any other useful strategies for this kind of development?
Would it be possible to convert it into a format like this:
{
    "id":       {"1": "1", "2": "2"},
    "name":     {"1": "Hello", "2": "World"},
    "alias":    {"1": "hello", "2": "world"},
    "category": {"1": "8", "2": "3"}
}
This way you only store each field name once, but you can still easily find things by their id.
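A sketch of how lookups work in this columnar layout, using the values from the question:

```javascript
// Columnar layout: one object per field, keyed by record id.
var tables = {
  id:       { "1": "1", "2": "2" },
  name:     { "1": "Hello", "2": "World" },
  alias:    { "1": "hello", "2": "world" },
  category: { "1": "8", "2": "3" }
};
// Fields are accessed by name and records by id; no magic numbers:
var recordName = tables.name["2"]; // "World"
```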
JSON is not a database. JSON is a data interchange format.
Not sure if it would work across all mobile platforms, but XML would be an option.
I am creating a micro site that uses JavaScript for data visualisation. I am working with a back-end developer who will pass me customer data to be displayed on the front end, in order to graph and display different customer attributes (like age, sex, and total $ spent).
The problem I am having is that the developer is asking me what data I want, and I have no idea what to tell them. I don't know what I need or want to request; in the past I have always just taken data or content and marked it up. This is a new kind of project for me and I am feeling a little out of my depth.
edit:
After thinking about this further, and working a little with the back-end developer, the specific problem I am having is how to make the actual AJAX requests and update the results on my page. I know I am looking for things like age, sex, and $ spent, but I need to focus on how to request them.
If you're using jQuery you can make asynchronous data requests (AJAX requests) in the JSON format using .getJSON, which makes processing the response quite easy.
You can ask the back-end developer to create a RESTful API which returns whichever data you need in JSON format. As for the data itself, tell him to include whatever you think you will need, or may need in the future. Once you process the JSON data you can determine what you need. Just don't go overboard and ask for stuff you'll never use, or you'll waste bandwidth.
If you work with JavaScript, then the data format JavaScript understands natively is JSON. If they can provide you with data in JSON format, that would be a good start:
http://en.wikipedia.org/wiki/Json
{
"customers":
[
{
"age": "23",
"sex": "male",
"dollars-spent": "7"
},
{
"age": "22",
"sex": "female",
"dollars-spent": "10000"
}
]
}
I would guess you will also need something like a customer ID together with age and sex, so that you can uniquely identify customers.
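Given a format like the one above, one way to keep things testable is to separate the rendering from the transport; a sketch (the endpoint URL is hypothetical, to be agreed with the back-end developer):

```javascript
// Turn the assumed {customers: [...]} response into display strings.
function renderCustomers(data) {
  return data.customers.map(function (c) {
    return c.age + " / " + c.sex + " / $" + c["dollars-spent"];
  });
}

// With jQuery, the AJAX request then becomes something like:
// $.getJSON("/api/customers", function (data) {
//   $("#results").text(renderCustomers(data).join("\n"));
// });
```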