I am creating a micro site that uses JavaScript for data visualisation. I am working with a back-end developer who will pass me customer data to be displayed on the front end, in order to graph and display different customer attributes (like age, sex, and total $ spent).
The problem I am having is that the developer is asking me what data I want, and I have no idea what to tell them. I don't know what I need or want to request; in the past I have always just taken data or content and marked it up. It's a new kind of project for me and I am feeling a little out of my depth.
edit:
After thinking about this further and working a little with the back-end developer, the specific problem I am having is how to make the actual AJAX requests and update the results on my page. I know specifically that I am looking for things like age, sex, and $ spent, but I need to focus more on how to request them.
If you're using jQuery you can make asynchronous data requests (AJAX requests) for JSON using .getJSON, which makes processing the response quite easy.
You can ask the back-end developer to create a RESTful API that returns whatever data you need in JSON format. As for the data itself, ask him to include whatever you think you will need, or may need in the future. Once you process the JSON data you can determine what you actually use. Don't go overboard and ask him to return things you'll never use, though, or you'll just waste bandwidth.
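For example, a minimal sketch of that flow (the "/api/customers" endpoint, the field names, and the helper are assumptions; use whatever schema you and the backend agree on):

```javascript
// Pure helper that turns the raw customer list into the numbers a
// chart needs; keeping it separate from the AJAX call makes it easy
// to test. Field names here are illustrative assumptions.
function summarizeCustomers(customers) {
  const totalSpent = customers.reduce(
    (sum, c) => sum + Number(c["dollars-spent"]),
    0
  );
  return { count: customers.length, totalSpent: totalSpent };
}

// With jQuery loaded, the request itself would look like:
// $.getJSON("/api/customers", function (data) {
//   const summary = summarizeCustomers(data.customers);
//   // ...hand summary off to the charting code...
// });
```

Separating the "fetch" step from the "process" step also means you can develop against a hard-coded sample response before the backend is ready.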
If you work with JavaScript, then the data format JavaScript understands natively is JSON. If they can provide you with data in JSON format, it would be a good start:
http://en.wikipedia.org/wiki/Json
{
    "customers": [
        {
            "age": "23",
            "sex": "male",
            "dollars-spent": "7"
        },
        {
            "age": "22",
            "sex": "female",
            "dollars-spent": "10000"
        }
    ]
}
I would guess you will also need something like a customer ID together with age and sex, so that you can uniquely identify each customer.
Related
I develop a web app with Javascript + React and a REST API in Java as backend.
The frontend receives from backend via REST a list of objects which look like this:
{
"id": "11111",
"operationDate": "2020-02-21 00:00:00",
"status": "A"
...
}
Those objects are grouped by id (which is not unique in this case) and then sorted, within each group, by date. This is already done in JavaScript, but I wonder if I should move the grouping and sorting to Java and send the data via REST already grouped and sorted, for the sake of performance. It would require some work, so I'd like to know whether it's worthwhile.
Answering my own question: I measured the time of grouping and sorting a set of 12k+ objects, and it took only 16 milliseconds. Considering the amount of work needed to move it to the backend, and the fact that incoming lists will hardly ever be larger, it's not worth it.
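For reference, the client-side grouping and sorting being measured can be sketched like this (field names match the example object above; everything else is illustrative):

```javascript
// Group a flat list of objects by their (non-unique) id, then sort
// each group by operationDate. The "YYYY-MM-DD HH:mm:ss" format
// sorts correctly as a plain string, so no Date parsing is needed.
function groupAndSort(items) {
  const groups = {};
  for (const item of items) {
    (groups[item.id] = groups[item.id] || []).push(item);
  }
  for (const id of Object.keys(groups)) {
    groups[id].sort((a, b) => a.operationDate.localeCompare(b.operationDate));
  }
  return groups;
}
```

Both steps are O(n log n) at worst, which is why even 12k+ objects only take milliseconds.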
I'm currently developing a Node/Express/MongoDB Restful Service.
Now, since MongoDB doesn't have "columns", it can easily happen that a response from the same endpoint contains a specific property or not, e.g.
# GET /users/1
{"name": "Alexander", "nickname": "Alex"}
# GET /users/2
{"name": "Simon"}
While this makes no difference in weakly typed languages like JavaScript, one of my coworkers, who's implementing a client in C#, struggles to parse the JSON string when properties are missing.
IMO, the current approach is better from the API perspective, since it means better performance, less code, and less traffic on the server side. Otherwise I would need to normalize objects before sending them, or even run migrations every time a property gets added. It also avoids sending "virtual" data that doesn't actually exist on the resource.
But on the other hand, I also want to build a solid service from the client's perspective, and "normalizing" on the client side is at least as bad as on the server side.
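If server-side normalization turns out to be the lesser evil, it doesn't have to mean migrations: a sketch of filling missing optional properties with explicit nulls just before responding (the field list is an assumption; in practice it would come from the schema):

```javascript
// Defaults for optional fields -- a hypothetical schema for the
// /users example above, not an actual Mongoose/MongoDB feature.
const USER_DEFAULTS = { name: null, nickname: null };

// Merge the stored document over the defaults, so every response
// carries the same set of keys and typed clients can deserialize it.
function normalizeUser(doc) {
  return Object.assign({}, USER_DEFAULTS, doc);
}
```

This keeps the stored documents sparse while giving clients a stable shape, at the cost of a few extra bytes per response.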
There's also another use case, which works well in JS but will cause problems in C#, and relates to the same problem:
# GET /users/1/holidays
{
"2018-12-25": { "title": "Christmas" },
"2019-01-01": { "title": "New Year" }
}
I took this approach to automatically prevent multiple entries for the same day, but I could understand if this is really considered bad practice.
Update
As commented by @jfriend00, the second example is not that handy, so I won't use it anymore.
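Migrating away from the date-keyed shape is mechanical; a sketch of converting it to the array shape that typed clients deserialize easily (the `date`/`title` field names are assumptions):

```javascript
// Convert { "2018-12-25": { title: ... }, ... } into a sorted array
// of { date, title } objects. Uniqueness of days is preserved because
// the source object can only hold one entry per key.
function holidaysToArray(holidays) {
  return Object.keys(holidays)
    .sort()
    .map((date) => ({ date: date, title: holidays[date].title }));
}
```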
I am building a (RESTful) api (using HTTP) that I want to use with javascript.
I find my self writing stuff in javascript like
function getPost(id)
{
$.getJSON("/api/post/" + id, function(data){
// Success!
});
}
There must be a smarter way than hardcoding the API URLs in JavaScript. Maybe something like querying the API itself for what the getPost URL should look like?
function getApiPostUrl()
{
    $.getJSON("/api/getpost/", function(data){
        // data would hold the URL template
    });
}
returning something like
url: "/api/post/:id"
which JavaScript can parse to obtain the URL for actually getting the post with id=:id. I like this approach.
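Expanding such a template on the client could look like this minimal sketch (the ":name" placeholder style follows the example above; it is not any particular library's format):

```javascript
// Replace ":name" placeholders in a URL template with values from
// params, URL-encoding each value. Unknown placeholders are left
// untouched so the caller can notice missing parameters.
function expandUrl(template, params) {
  return template.replace(/:([A-Za-z_]+)/g, (match, name) =>
    name in params ? encodeURIComponent(params[name]) : match
  );
}
```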
Is there a standard way of doing this? I am looking for a good approach so I don't have to invent it all myself, if a good solution already exists.
Well, by definition, a RESTful API should contain full URIs (resource identifiers), not just the resources' paths. So your question is really about how you design your whole API.
So, for example, your API could contain a http://fqdn/api/posts that contains a list of all the posts within your site, e.g.:
[ "http://fqdn/api/posts/1",
"http://fqdn/api/posts/2",
"http://fqdn/api/posts/3" ]
and then your JavaScript only iterates over the values in the list, never needing to craft the path for each resource. You only need one known entry point. This is the HATEOAS concept, which uses hyperlinks as the API to identify the states of an application.
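The iteration itself is trivial; a sketch with the transport injected so the same code works with jQuery, fetch, or a test stub (the function names are illustrative):

```javascript
// Follow every post URI returned by the entry point. fetchJson is a
// function (url) => Promise<object>, injected so the transport can
// be swapped out (e.g. (url) => fetch(url).then((r) => r.json())).
function loadPosts(postUrls, fetchJson) {
  return Promise.all(postUrls.map((url) => fetchJson(url)));
}
```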
All in all, it's a good idea to think your application through thoroughly (you can use UML tools like state machine or sequence diagrams) so that you can cover all your use cases with a simple yet efficient set of sequences defining your API. Then, for each sequence, it's a good idea to have a single first state, and you can have a single first step linking to all the sequences.
Resources:
ACM Article
Restful API Design Second Edition Slides
Restful API design blog
Yes, there are quite a few standard ways of doing this. What you want to look for is "hypermedia APIs" - that is, APIs with embedded hypermedia elements such as link templates like yours, but also plain links and even more advanced actions (forms for APIs).
Here is an example representation using Mason to embed a link template in a response:
{
"id": 1234,
"title": "This is issue no. 1",
"#link-templates": {
"is:issue-query": {
"template": "http://issue-tracker.org/mason-demo/issues-query?text={text}&severity={severity}&project={pid}",
"title": "Search for issues",
"description": "This is a simple search that do not check attachments.",
"parameters": [
{
"name": "text",
"description": "Substring search for text in title and description"
},
{
"name": "severity",
"description": "Issue severity (exact value, 1..5)"
},
{
"name": "pid",
"description": "Project ID"
}
]
}
}
}
The URL template format is standardized in RFC 6570.
Mason is not the only media type available for hypermedia APIs. There are also HAL, Siren, Collection+JSON and Hydra.
And here is a discussion about the benefits of hypermedia.
Your code clearly violates the self-descriptive messages and the hypermedia as the engine of application state (abbreviated HATEOAS) parts of the uniform interface constraint of REST.
According to HATEOAS, you should send back hyperlinks and the client should follow them, so it won't break when the API changes. A hyperlink is not the same as a URL: it contains a URL, an HTTP method, maybe the content type of the body, possibly input fields, etc.
According to self-descriptive messages, you should add semantics to the data, the links, the input fields, etc., and the client should understand those semantics and behave accordingly. So, for example, you can add an API-specific "create-post" link relation to your hyperlink so the client understands that it is for creating posts. Your client should always rely on this kind of semantics instead of parsing URLs.
URLs are always API specific; semantics are not necessarily, so these constraints decouple the client from the API. After that, the client won't break on URL changes, or even data structure changes, because it will use a standard hypermedia format (for example HAL, JSON-LD, Atom or even HTML) and semantics (probably RDF) to parse the response body.
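In client code, "follow the link relation instead of building the URL" can be as small as this sketch (a HAL-style "_links" shape is assumed here):

```javascript
// Resolve a hyperlink by its relation name from a HAL-style resource.
// The client only ever knows the relation ("create-post"), never the
// URL structure, so the server is free to change its URLs.
function linkFor(resource, rel) {
  const link = resource._links && resource._links[rel];
  if (!link) {
    throw new Error("No link with rel: " + rel);
  }
  return link.href;
}
```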
I'm currently working on a project where I'm dealing with a fair amount of JSON data being transmitted backwards and forwards and stored by the browser as lists of javascript objects. For example:
person: {
    // Primary Key
    key: "id",
    // The actual records
    table: {
        "1": {id: 1, name: "John", surname: "Smith", age: 26},
        "2": {id: 2, name: "Mary", surname: "Brown", age: 19},
        // etc..
    },
    indexes: {
        // Arrays of pointers to records defined above
        "name": [
            {id: 89, name: "Aaron", surname: "Jones", age: 42},
            // etc..
        ]
    }
}
I'm finding myself coding all sorts of indexing and sorting algorithms to efficiently manipulate this data and I'm starting to think that this kind of thing must have been done before.
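That hand-rolled indexing tends to look something like the following minimal sketch (field names follow the example above; this is not from any actual library):

```javascript
// Build an index: an array of primary keys ordered by some field.
// Lookups stay O(1) through the table, while sorted iteration walks
// the index instead of re-sorting the records every time.
function buildIndex(table, field) {
  return Object.keys(table).sort((a, b) => {
    const va = table[a][field];
    const vb = table[b][field];
    return va < vb ? -1 : va > vb ? 1 : 0;
  });
}
```

Storing keys rather than copies of the records also avoids the duplication problem hinted at in the `indexes` example above.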
I have experience using the Ext.data.Store and Ext.data.Record objects for this kind of data manipulation, but I think they are overly complex for junior developers, and the project I'm working on is a small mobile application where we can't afford to add a 300K+ library just for the sake of it, so I need something really minimal.
Any ideas whether there is a JavaScript JSON manipulation framework that:
Can store,
retrieve,
sort,
and iterate through JSON data,
with a clean API,
minimal performance drag (mobiles don't have a lot of computing power),
and a small payload that is ideally <10K?
I might be asking for too much, but hopefully someone has used something like this... The kind of thing I'm looking for is the JSON equivalent of jQuery; maybe it's not so outlandish.
Take a look at jsonQ.
It fulfills all the requirements pointed out in the question.
Can store,
retrieve
and iterate through JSON data,
It provides traversal methods (like find, siblings, parent, etc.) and manipulation methods (like value, append, prepend).
sort
It provides a direct array sort method, plus a sort method that runs on a jsonQ object. (Both sort methods run recursively.)
with a clean API
It aims to provide the same API for JSON as jQuery provides for the DOM, so if you are familiar with jQuery it's easy to pick up. Moreover, complete documentation of the API is available.
minimal performance drag
On initialization it builds a new internal representation of the JSON data, which makes traversal more efficient. (It's like running all the loops once up front, so you don't have to loop over loops each time you iterate that JSON.)
and a small payload that is ideally <10K?
The minified version is 11.7 KB.
Actually, I suspect you're asking the wrong question. From your example, it looks like you're trying to emulate SQL-like storage with JSON. Maybe you just need IndexedDB?
jsonpath matches points 4-7 (and maybe 3) of your exact requirements, and the global JSON object handles 1 and 2 with just one call each.
Also, IMHO the requirements are unrealistic, especially the last one about size.
I think Lawnchair is what you're looking for. Its homepage says it is made with mobile in mind; however, I've not used it before, so I cannot comment on that.
It is a simple key-value store, although you can define your own indexes, somewhat like in CouchDB. I've not seen support for selectors, but there is a query plugin available that promises easy selection.
Something like jQuery relies on Sizzle, a CSS selector library, which isn't applicable in your case. XPath is, in my opinion, your best bet, since it's used primarily with XML, a tree structure. Since a JSON object can also be represented as a tree, I've found JSONSelect, a small library that supports XPath-like selectors for JSON in a 'CSS-ish' way.
If you can somehow plug JSONSelect into Lawnchair, I think you've found a great combination :)
Let's say we want to populate some JavaScript models (e.g. Backbone.js models), given a JSON response from a server like this:
{
"todo": {
"title": "My todo",
"items": [
{ "body": "first item." },
{ "body": "second item"}
]
}
}
This data does not contain the type information, so we do not know which model to populate when we see the "todo" key.
Of course, one can create some custom convention to link the keys in the JSON response object to the client-side models. For instance:
{
"todo": {
"_type": "Todo",
"title": "My todo",
...
}
}
While this works for objects, it gets awkward when it comes to lists:
"items": {
"_type": "TodoItem",
"_value": [
{ "body": "first item." },
{ "body": "second item"}
]
}
Before creating these custom rules, the questions are:
Are there any RESTful guidelines on including client side type information in response data?
If not, is it a good idea to include the client side type information in the response json?
Beside this whole approach of populating models, what are other alternatives?
Edit
While the model type can be derived from the URL, e.g. /todo and /user, the problem with this approach is that the initial population of N models would require N HTTP requests.
Instead, the initial population can be done from a single big merged tree with only one request. In that case, the model type information from the URL is lost.
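If the type info is embedded in the tree (as in the `_type` example above), dispatching to models is a small registry lookup. A sketch, with plain constructor functions standing in for Backbone models:

```javascript
// Populate models from one merged tree using embedded "_type" keys.
// The registry maps type names to constructors; the shape of the tree
// and the "_type" convention follow the question's example.
function populate(tree, registry) {
  const models = [];
  for (const key of Object.keys(tree)) {
    const node = tree[key];
    const Ctor = registry[node._type];
    if (Ctor) {
      models.push(new Ctor(node));
    }
  }
  return models;
}
```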
A different endpoint (URL) is used for each REST object, so the URL includes the "which model" information.
And each model is a fixed collection of variables and (fixed) types.
So there is usually no need to send dynamic type information over the wire.
Added: re the comment from @ali--
Correct. But you're now asking a different/more precise question: "How do I handle the initial load of Backbone models without causing many http requests?" I'm not sure of the best answer to this question. One way would be to tell backbone to download multiple collections of models.
That would reduce the number of calls to one per model vs one per model instance.
A second way would be a non-REST call/response that downloads the current tree of data from the server. This is a fine idea. The browser client can receive the response and then feed it, model by model, into Backbone. Be sure to give the user some feedback about what's going on.
Re nested models: here's a SO question on it.
Consider that, as already said in other answers, in REST each resource has its own endpoint, so what you are trying to do (i.e. hide all your models behind a single endpoint) is not fully REST-compliant, IMHO. Not a big deal per se.
Nested collections could be the answer here.
The "wrapper" collection fetches all the models from a single endpoint at init time, and pushes them to the respective collections. Of course you must send the type info in the json.
From that point on, each "inner" collection reacts to its own events, and deals with its own endpoint.
I don't see huge problems with such an optimization, as long as you are aware of it.
REST has nothing to do with the content sent back and forth; it only deals with how state is transferred. JSON (which is the format you seem to be using) would be what determines what gets sent, and as far as I know, it doesn't dictate anything about types.
Including the type info in the JSON payload really depends on the libraries you are using. If including the types makes the JSON easier for you to use, then I would say put them in. If not, leave them out.
It's really useful when you have a model that extends another; indicating which model to use specifically eliminates the confusion.