My current project uses JSON as its data interchange format. The front-end and back-end teams agree on a JSON structure before integrating a service. At times, unannounced changes to the JSON structure by the back-end team break the front-end code.
Is there an external library we could use to compare a mock JSON (fixture) with the server's JSON response? Basically, it should assert the whole JSON object and throw an error if the server's JSON violates the agreed format.
Additional info: the app is built on jQuery consuming REST JSON services.
I would recommend a schema for your JSON objects.
I use Kwalify, but you can also use Rx if you like that syntax better.
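Neither of those is JavaScript-native, but the same idea works in the browser with a JSON Schema validator such as Ajv. A minimal sketch; the schema fields and URL are illustrative, not from your actual contract:

var Ajv = require("ajv"); // or load Ajv via a script tag in the browser
var ajv = new Ajv();

// The agreed-upon contract, expressed as JSON Schema.
var schema = {
    type: "object",
    required: ["id", "name", "tags"],
    properties: {
        id: { type: "integer" },
        name: { type: "string" },
        tags: { type: "array", items: { type: "string" } }
    }
};

var validate = ajv.compile(schema);

$.getJSON("/api/service", function (data) {
    if (!validate(data)) {
        throw new Error("Server JSON violates contract: " +
            JSON.stringify(validate.errors));
    }
});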
I've been using QUnit (http://docs.jquery.com/QUnit) recently for a lot of my JS code.
asyncTest (http://docs.jquery.com/QUnit/asyncTest) could be used pretty effectively to test JSON structure.
Example:
asyncTest("Test JSON API 1", 1, function() {
$.getJSON("http://test.com/json", function(data) {
equals(data.expected, "what you expected", "Found it");
});
});
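To assert the whole object against a fixture rather than a single field, QUnit's deep comparison (deepEqual, called same in older versions) can be used. A sketch, assuming the fixture below stands in for your agreed-upon structure:

// Fixture agreed on by both teams; the fields are illustrative.
var fixture = { id: 1, name: "expected", tags: ["a", "b"] };

asyncTest("Server JSON matches fixture", 1, function() {
    $.getJSON("http://test.com/json", function(data) {
        deepEqual(data, fixture, "Structure and values match the fixture");
        start();
    });
});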
It looks like you're trying to solve the problem from the wrong end. Why should you, as a front-end developer, have to test the back-end developers' work?
JSON that is generated on the server is better tested on the server using the standard approach, i.e. functional tests in xUnit. You could also look at acceptance-test frameworks like FitNesse if you want tests and a documentation wiki in one.
If you still get invalid JSON even after introducing tests on the server, it is a problem of human communication, not of tests.
Since there is no answer, I'll put in my two cents.
If the problem is that you are dealing with shifting requirements from the back-end, then what you need to do is isolate yourself from those changes: put an abstraction between the front-end and the back-end.
Maybe you can call this abstraction the JSON Data Format Interchange.
So when unit-testing the GUI (hopefully you are TDDing your web GUI) you will have a mock for the JSON DIF. Then, when the time comes to integrate the back-end with the front-end*, any software changes will be made in the abstraction layer implementation, and of course you already have tests for those, based upon the agreed-upon JSON structure. A sketch of the idea follows below.
Oh, and by the way, I think the server-side team should have responsibility for specifying the protocol to be used against the server.
*Why does this remind me of the joke "my butt and your face could be twins"?
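A minimal sketch of such an abstraction layer; the endpoint and field names here are placeholders, not from the original post:

// All knowledge of the server's JSON format lives in this one module.
// (Uses the jqXHR promise interface, jQuery 1.8+.)
var QuizService = {
    getQuiz: function (id) {
        return $.getJSON("/api/quiz/" + id).then(function (raw) {
            // Map the server's shape to the internal, agreed-upon shape.
            return {
                id: raw.QuizId,
                questions: raw.Items || []
            };
        });
    }
};

The GUI code and its unit tests depend only on the internal shape, so a back-end format change is absorbed by editing this one mapping.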
https://github.com/skyscreamer/JSONassert may be helpful in eliminating false positives: if the order of fields returned by the server changes but the overall response is the same, it doesn't trigger a failure.
Is it possible to call a stored procedure from JavaScript on the client side?
I know how to do it on the server side, but I am interested in doing it on the client side.
Basically, it boils down to contacting a SQL server directly from the client. Is that possible?
tl;dr: No, it is not possible to connect to SQL Server 'directly' from browser JavaScript [1].
JavaScript can "speak" HTTP and WebSockets, but SQL Server "speaks" TDS. To communicate, there needs to be a common medium/protocol that both the client and the server use.
While there are WebSocket proxies that technically make this possible, it still requires a separate proxy service (and you'd still have to write or find a JavaScript TDS driver). I don't recommend eliminating the controlled access layer.
Likewise, an HTTP proxy that relays raw SQL commands to and from the client could be used. I wouldn't advise this either, but some do exist.
External code/libraries (e.g. ActiveX, Java) could establish the SQL connection and proxy it through to the JavaScript client.
In all of these cases there is an intermediate helper, and browser JavaScript never connects 'directly'.
[1] JavaScript is a language, and this answer focuses on a browser implementation with browser-supported libraries/functions. One could argue that using Node modules would still 'be JavaScript', and they would be correct... in a different environment.
You cannot establish a direct connection to a database from a client's web browser. What you will need to do is create a server-side application that exposes an API for getting the data over HTTP.
Take a look at Microsoft's ASP.NET Web API.
Sort of
You could create an endpoint that wraps your stored procedure(s) and takes the procedure name as a parameter, along with the parameters for the procedure itself.
Once you have such a mechanism in place, you can create endpoints that expose procedures automagically:
http://yourserver/services/yourprocname?prm1=val,prm2=val,etc
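If your middle tier happens to be Node.js, a hedged sketch of such a wrapper (the mssql package, the connection string, and the procedure names are assumptions; the whitelist is there because executing arbitrary procedure names straight from a URL would be dangerous):

const express = require('express');
const sql = require('mssql');

const app = express();

// Only expose explicitly whitelisted procedures.
const allowed = new Set(['GetOrders', 'GetCustomer']);

const poolPromise = sql.connect(process.env.DB_CONN); // one shared pool

app.get('/services/:proc', async (req, res) => {
    if (!allowed.has(req.params.proc)) return res.status(404).end();
    try {
        const pool = await poolPromise;
        const request = pool.request();
        // Forward each query-string value as a procedure parameter.
        for (const [name, value] of Object.entries(req.query)) {
            request.input(name, value);
        }
        const result = await request.execute(req.params.proc);
        res.json(result.recordset);
    } catch (err) {
        res.status(500).json({ error: err.message });
    }
});

app.listen(3000);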
If you feel really ambitious, you can try out SQL Server 2016 and return JSON directly from those procedures. Then you can nest data using subqueries and return the JSON in a single payload: no serialization, no objects, just read the data and return it.
Before 2016, you could put the results into a Dictionary<string, object> and use Newtonsoft.Json to serialize it. Assuming you are returning flat data, you'd be good to go: just use a reader, take the keys from the column-name metadata and the values as objects, and Newtonsoft will convert that into JSON for you.
If you are returning hierarchical data, you could (by convention) create a series of runners that take the reader and pump it into a Dictionary<string, object> where the object is another Dictionary. Again, the Newtonsoft stuff will help you out with the serialization.
Hope this helps. We are using this approach with 2016, and it is very nice to be able to create a stored procedure and call it without any middle-tier code, deployment, etc. It just works.
Yes, you can connect to SQL Server from the client side directly by using WebAssembly. Write a function that calls SQL Server in C or C++ first, compile it to .wasm with the Emscripten compiler, and then call the C or C++ code from JavaScript. In the future you should be able to do this with C# as well, but Microsoft has only just started work on that.
I am writing a post about it, and I will share it when it's ready.
Now, just because you can do it doesn't mean you should, because of the security issues. But I am not here to lecture about what you should or should not do.
In my application I receive JSON data in a POST request and store it as raw JSON data in a table. I use PostgreSQL (9.5) and Node.js.
In this example, the data is an array of about 10 quiz questions experienced by a user, and looks like this:
[{"QuestionId":1, "score":1, "answerList":["1"], "startTime":"2015-12-14T11:26:54.505Z", "clickNb":1, "endTime":"2015-12-14T11:26:57.226Z"},
{"QuestionId":2, "score":1, "answerList":["3", "2"], "startTime":"2015-12-14T11:27:54.505Z", "clickNb":1, "endTime":"2015-12-14T11:27:57.226Z"}]
I need to store (temporarily or permanently) several indicators computed by aggregating data from this JSON at quiz level, as I need these indicators to perform other procedures in my database.
So far I have been computing the indicators in JavaScript functions while handling the POST request, and inserting the values into my table alongside the raw JSON data. I'm wondering whether it would be more performant to have the calculation performed by a stored trigger function in my PostgreSQL DB (knowing that the SQL function would need to retrieve the data from inside the raw JSON).
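For illustration, a minimal sketch of the kind of aggregation involved (the indicator names are placeholders):

// Compute quiz-level indicators from the raw question array.
function computeIndicators(questions) {
    var totalScore = 0;
    var totalTimeMs = 0;
    questions.forEach(function (q) {
        totalScore += q.score;
        totalTimeMs += new Date(q.endTime) - new Date(q.startTime);
    });
    return {
        questionCount: questions.length,
        totalScore: totalScore,
        meanTimeMs: totalTimeMs / questions.length
    };
}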
I have read other posts on this topic, but they were written many years ago and without Node.js in mind, so I thought people might have new insight on the pros and cons of SQL stored procedures vs. server-side JavaScript functions.
Edit: I should probably have mentioned that most of my application's logic already lies in PostgreSQL stored procedures and views.
Generally, I would not use that approach, due to the risk of the triggers getting out of sync with the code. The single responsibility principle should be the guide: the DB stores data and the code manipulates it. Unless you have a really pressing business need to break this pattern, I'd advise against it.
Do you have a migration that will recreate the triggers if you wipe the DB and start from scratch? Will you or a coworker fail to realise they are there at a later point when reading the app code, and wonder what is going on? If there is a standardised way to manage the triggers, where the configuration is stored as code with the rest of your app, then maybe it's not a problem. If not, be wary. A small performance gain may well not be worth the potential for lost developer time and shipped bugs.
I currently work somewhere that has gone all-in on SQL functions. We have over a thousand. I'd strongly advise against it.
Having logic split between JavaScript and SQL is a real pain when debugging issues, especially if, like me, you are much more familiar with JS.
The functions are at least all tracked in source control and get updated/created in the DB as part of the deployment process, but this means you have two places to look when trying to follow the code.
I fully agree with the other answer: single responsibility principle, DB for storage, server/app for logic.
I came across this question and was quite baffled; I could not understand the underlying idea behind it. I have done some API integration using AngularJS, using $http and $resource when it's RESTful, but these two questions were something of a puzzle. I want to understand them in detail.
Does the JavaScript framework you choose support a model abstraction with REST integration? If so, what schema does it expect the JSON replies to use?
Can anyone explain these two questions to me?
Some libraries expect your REST API to return a specifically structured result (HAL, JSONP, HATEOAS, ...).
By default, $resource works best with HAL, but it can easily be extended to support other return formats (https://github.com/jmarquis/angular-hateoas).
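For instance, each $resource action accepts a transformResponse hook, so a differently shaped payload can be adapted before it reaches your models. A sketch, assuming a hypothetical { data: ... } envelope:

var User = $resource('/api/users/:id', { id: '@id' }, {
    get: {
        method: 'GET',
        // Unwrap the assumed envelope before the object reaches
        // the rest of the application.
        transformResponse: function (raw) {
            var json = angular.fromJson(raw);
            return json.data || json;
        }
    }
});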
Maybe the question is asking about something like jQuery.map(), which allows you to convert the JSON object from the server into your own internal object (an abstracted model).
If you use the server's object throughout your code and that object's schema changes (e.g. an email string becomes an emails array), you may have to change your code in many places. But if you've mapped the server data to a local object, you may only need to change the mapping (e.g. set the internal email to the first value from the server's emails).
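A sketch of that mapping (the field names follow the email/emails example above):

// Map the server's shape onto the internal model in one place.
function toInternalUser(serverUser) {
    return {
        id: serverUser.id,
        // The server changed `email` to an `emails` array; only the
        // mapping needs to know.
        email: (serverUser.emails && serverUser.emails[0]) || null
    };
}

$.getJSON('/api/users', function (data) {
    var users = $.map(data, toInternalUser);
    // The rest of the code depends only on the internal shape.
});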
I have to implement an architecture where, unfortunately, we are using SharePoint 2013 as, effectively, our principal database. (Not my choice, in case you hadn't picked that up.) I have an ASP.NET MVC facade application on the server, handling composition of data from SP and a couple of other data sources, and a JavaScript SPA as the client. An additional wrinkle is that the client needs to work offline, so I need to use IndexedDB to store the data for offline access.
This seems a perfect use case for Breeze.js. My basic architecture is to define a strongly typed model in the MVC facade that wraps the untyped data I get from SP (in the form object["property"], using the SP client-side object model). Breeze will handle synchronization between this model and the client, and I will use the export/import functionality to cache data in IndexedDB as required.
So far so good. But... the SOA examples on the Breeze site are still under development (and to me this is fundamentally an SOA architecture, with each SP list a service to be composed). The closest thing I can find is the NoDB sample, but this hard-codes the metadata on the client. I'd like to establish relationships and validation in the MVC model, and then pass these through to the client, so validation can run off the same declaration in both places.
Is this possible? If so, how? If not, does anyone have a workaround or a better idea? I'm already resigned to defining the model in two separate places (the facade and, implicitly, the structure of the SP lists); I would dearly like to avoid implementing it a third time in the client. I'm open to having Breeze.js talk directly to the SP REST endpoints, but my understanding from https://stackoverflow.com/a/15364503/1014822 is that the implementation is flawed and does not expose the required metadata.
Resolution: based on the accepted answer below, I came to the following solution:
1) Generate a service reference from the SP ListData.svc endpoint, thus creating an EDMX and proxy classes.
2) Extend ContextProvider in my repository and override BuildJsonMetadata like so:
protected override string BuildJsonMetadata()
{
    // Load the EDMX generated from the ListData.svc service reference.
    XDocument xDoc = XDocument.Load(HttpContext.Current.Server.MapPath("PATH_TO_EDMX"));

    // Swap the data-service namespace for the app's own namespace.
    String xString = xDoc.ToString();
    xString = xString.Replace("DATA_SERVICE_NAMESPACE", "APP_NAMESPACE");
    xDoc = XDocument.Parse(xString);

    // Convert the CSDL document to the JSON metadata Breeze expects.
    var jsonText = CsdlToJson(xDoc);
    return jsonText;
}
3) Modify breeze.js very slightly, editing line 12383:
var schema = metadata.schema || metadata["edmx:Edmx"]["edmx:DataServices"].schema;
(I could presumably also have fixed this in the ContextProvider by choosing a descendant rather than the root node for my xDoc.)
4) Optionally, use @Christoff's very useful T4TS.tt template script to generate a d.ts from the service proxy classes, so I can have type safety on the data that Breeze loads.
So far so good with this setup: I can perform basic CRUD with metadata, backed by SP as a data source.
As of v1.2.7, we have documented Breeze's metadata schema, and JSON objects that adhere to this schema and are returned from your web service will now be honored by Breeze.
--- previous post below
We are in the process of documenting how to expose arbitrary server-side metadata over the next week or so, followed soon thereafter by some examples of how to consume an arbitrary web service. There are a few small code changes involved as well.
For the time being, until these docs are complete, the best workaround is to create your metadata on the client and use a jsonResultsAdapter to shape the results of your service call into "entities". The metadata you create on the client will be exactly the same as the metadata you will eventually create on the server and send down to the client.
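A rough sketch of wiring up such an adapter; the entity type name and the node test are assumptions about your model, not Breeze requirements:

var adapter = new breeze.JsonResultsAdapter({
    name: "spListData",
    visitNode: function (node, mappingContext, nodeContext) {
        // Assume SP list items carry a __metadata property; map such
        // nodes to a client-defined entity type.
        if (node && node.__metadata) {
            return { entityType: "ListItem" };
        }
    }
});

var dataService = new breeze.DataService({
    serviceName: "api/sharepoint",
    jsonResultsAdapter: adapter
});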
Hope this helps.
I am working on a web application that gets JSON data from the server (Ruby/Rails) into the client using jQuery/Ajax and then renders it to the browser by populating the DOM with jQuery. To simplify access to my data on the client side, I would like to use an object-relational mapper similar to ActiveRecord, but one that starts with JSON data instead of data directly from a SQL data source.
Are there any such ORMs in JavaScript that convert a JSON data set (itself derived from a set of SQL queries on the server side) into a set of ActiveRecord-like objects?
I may be missing something here, but JSON (JavaScript Object Notation) is already a JavaScript object in and of itself.
If the data you are getting from the server doesn't map well to a usable JavaScript object, I would argue that it's the server side that needs to change: return a more useful serialized object rather than a simple recordset.
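To illustrate the difference (both payload shapes are made up):

// A recordset-shaped reply forces the client to reassemble rows:
//   { "columns": ["id", "name"], "rows": [[1, "Ann"], [2, "Bob"]] }
// A more useful serialization maps straight onto client objects:
//   [ { "id": 1, "name": "Ann" }, { "id": 2, "name": "Bob" } ]
$.getJSON("/api/users", function (users) {
    users.forEach(function (u) {
        console.log(u.id, u.name); // no ORM layer needed
    });
});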
ExtJS has a very nice JsonStore class
There is CouchDB, a DB written in Erlang that uses HTTP as its transport. This eliminates the need for middleware and lets you query the DB directly with Ajax calls. I can't speak well or badly of it; I haven't heard much about it in months, and it seems the hype train departed a few years ago.
You can't have an ORM to a remote DB in JavaScript. An ORM requires transcendental knowledge of the DB schema, and sending that out with an API just isn't that pragmatic as of yet.
For persistent local storage, there is the now-deprecated Google Gears and the HTML5 client-side DB.
Yes, there is JSON ODM, which is exactly what you are looking for. If you need a method that is not supported yet, post an issue and I'll try my best to support it as soon as possible.
If you like it, please give it a star!