Is it possible to call a stored procedure from JavaScript on the client side?
I know how to do it on the server side, but I am interested in doing it on the client side.
Basically it boils down to contacting a SQL server directly from within the client. Is that possible?
tldr; No, it is not possible to connect to SQL Server 'directly' from browser-JavaScript¹.
JavaScript can "speak" HTTP and WebSockets, but SQL Server "speaks" TDS. To communicate, there needs to be a common medium/protocol that both the client and the server use.
While there are WebSocket proxies that technically make this possible, it still requires a separate proxy service (and you'd still have to write or find a JavaScript TDS driver). I don't recommend eliminating the controlled access layer.
Likewise, an HTTP proxy that relays raw SQL commands to and from the client could be used. I wouldn't advise this either, but some do exist.
External code/libraries (e.g. ActiveX, Java) could establish the SQL connection and proxy it through to the JavaScript client.
In all of these cases there is an intermediate helper, and browser-JavaScript never connects 'directly'.
¹ JavaScript is a language, and this answer focuses on a browser implementation with browser-supported libraries/functions. One could argue that using Node modules would still 'be JavaScript', and they would be correct .. in a different environment.
You cannot establish a direct connection to a database from a client's web browser. What you will need to do is create a server-side application that exposes an API for getting the data over HTTP.
Take a look at Microsoft's ASP.NET Web API.
Sort of
You could create an endpoint that is a wrapper for stored procedure(s) that takes the procedure name as a parameter, as well as the parameters for the procedure.
Once you have such a mechanism in place, you can create endpoints that expose procedures automagically.
http://yourserver/services/yourprocname?prm1=val&prm2=val&...
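For illustration, calling such a wrapper endpoint from the browser might look like this (the server name, procedure name, and parameter names are placeholders taken from the URL above, and the endpoint is assumed to return JSON):

// Hypothetical client-side call to the stored-procedure wrapper endpoint.
var params = new URLSearchParams({ prm1: 'val1', prm2: 'val2' });

fetch('http://yourserver/services/yourprocname?' + params)
  .then(function (response) {
    if (!response.ok) throw new Error('HTTP ' + response.status);
    return response.json(); // assumes the wrapper returns JSON
  })
  .then(function (rows) { console.log(rows); })
  .catch(function (err) { console.error('Procedure call failed:', err); });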
If you feel really ambitious, you can try out SQL Server 2016 and return JSON directly from those procedures (FOR JSON). Then you can nest data using subqueries and return it in a single JSON payload. No serialization, no objects: just read the data and return it.
Pre-2016, you could put the results into a Dictionary<string, object> and use Newtonsoft's Json.NET to serialize it. Assuming you are returning flat data, you'd be good to go: use a reader, take the column names from the metadata as the keys and the values as objects, and Newtonsoft will convert that into JSON for you.
If you are returning hierarchical data, you could (by convention) create a series of runners that take the reader and pump it into a Dictionary<string, object> whose values are themselves dictionaries. Again, the Newtonsoft library will help you out with the serialization.
Hope this helps. We are using this approach with 2016, and it is very nice to be able to create a stored procedure and call it without any middle-tier code, deployment, etc. It just works.
You can get close by using WebAssembly. You can write your function that calls SQL Server in C or C++ first, compile it to .wasm with the Emscripten compiler, and then call the C or C++ code from JavaScript. (Note that the browser still cannot open a raw TCP connection; Emscripten proxies socket calls over WebSockets, so a proxy is still involved.) In the future you should be able to do the same with C#, but Microsoft has only just started work on that.
I am writing a post about it, and I will share it when it's ready.
Now, just because you can do it doesn't mean you should, because of the security issues. But I am not here to give a lecture about what you should or should not do.
I know memcache and redis are used when caching needs to be shared across more than one server.
I'm creating a Node application which will run on a single server only and uses MySQL as its DB, and I need to cache around 100,000 keys, each holding a JSON string about 200 characters long, so that I don't have to hit MySQL for reads.
If I use memcache or redis I will have to use a callback to get my data, but if I use a JavaScript hash I can get the data synchronously. But will it affect the application somehow, like high memory usage? Which one should I be using for an application like this?
I know memcache and redis are used when caching needs to be shared across more than one server.
Not necessarily; for instance, Facebook puts a memcache instance in front of each of their MySQL servers. You can also use Redis/Memcached for fast computation (e.g. real-time analytics) without having a whole cluster.
and I need to cache around 100,000 keys, each holding a JSON string about 200 characters long, so that I don't have to hit MySQL for reads.
It seems like premature optimization to me: if MySQL has enough RAM (so the dataset lives in memory) you don't have to worry about performance; that's just 100,000 keys.
If I use memcache or redis I will have to use a callback to get my data
It really depends on what language you use (Ruby and Python offer synchronous Redis clients) and what type of paradigm is used (event loop, thread pool, ...).
but if I use a JavaScript hash I can get the data synchronously
To be more precise, that's just because you are using node_redis (an asynchronous client), not because you are using a JavaScript "hash" (which is in fact an object).
but will it affect the application somehow, like high memory usage
It depends on whether you load all the keys into your process or not; if you use a Redis hash, you will be able to query only the field you want instead of the whole hash each time.
Which one should I be using for an application like this?
The best thing to keep in mind is to lower the number of applications you have to maintain in your stack while still using the right tool for the job. Here MySQL could be enough, but if you really want Redis or Memcached, I would go for Redis. It offers broadly the same features as memcached with comparable performance, while letting you use its other data structures in the future without adding another application to your stack.
Moreover, if you put all your data in a Redis hash, you will be able to retrieve a single field (HGET), a group of fields (HMGET), or all fields (HGETALL) with just one call.
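As a rough sketch with the node_redis client (the key, field names, and payloads below are made up):

// Store each record as one field of a single Redis hash, then read back
// one field (hget), several (hmget), or everything (hgetall) in one call.
var redis = require('redis');
var client = redis.createClient();

client.hset('cache', 'user:1', JSON.stringify({ name: 'Ann' }));

client.hget('cache', 'user:1', function (err, json) {
  if (err) throw err;
  var obj = JSON.parse(json); // one ~200-char JSON string, as in the question
});

client.hmget('cache', ['user:1', 'user:2'], function (err, jsons) { /* ... */ });
client.hgetall('cache', function (err, all) { /* every field, one round trip */ });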
Finally, regarding recent statistics and the Redis ecosystem (GUIs, hosting, libraries, ...), Redis seems to be far more future-proof than Memcached if you really want to go that way.
Disclaimer: I am the founder of Redsmin, an online developer-oriented service for administering and monitoring Redis.
It depends; you could even opt for memcached in front of MySQL :). For a simple read-only case, just storing the data within your JavaScript code (as dictionary-like objects) is enough. But be sure that you have enough RAM :).
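A minimal sketch of that option, assuming the mysql npm package and made-up table/column names: load everything from MySQL once at startup, then read synchronously from a plain object.

var mysql = require('mysql');
var connection = mysql.createConnection({ host: 'localhost', user: 'app', database: 'mydb' });
var cache = {};

// One upfront query; ~100,000 values of ~200 chars is only a few tens of MB.
connection.query('SELECT cache_key, json_value FROM cache_table', function (err, rows) {
  if (err) throw err;
  rows.forEach(function (row) {
    cache[row.cache_key] = row.json_value;
  });
});

// Later, anywhere in the app: a synchronous read, no callback needed.
var value = cache['some-key'];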
We have an SQLite database lying around, which is currently filled by an external batch job. The database is not very complex (essentially two tables in a 1:n relationship and some "catalog tables" holding lookup values).
We now have to add a user-frontend as well as some reporting. At any one moment only one user is using the frontend; however, this should be possible from anywhere in our network (= wherever access to the SQLite file is possible).
What's the easiest way to create an easy-to-use frontend with as little effort as possible? I thought about using HTML/JS, but haven't found out how to access a local SQLite DB with JS (is this even possible? We could of course grant the application such access rights, but do browsers even support this?)
If HTML/JS is not an option without a dedicated server, is there any other possibility to get this done with little effort? We do not want to end up with MS Access... :(
Use an HTA application if you are not afraid of the security implications. Rename your HTML file to *.hta, set up an ODBC connection to your database, then:
var connection = new ActiveXObject('ADODB.Connection');
connection.Open('MyOdbcDsn'); // the ODBC data source name you configured (placeholder)
var sql = 'SELECT * FROM MyTable'; // any query; the table name is a placeholder
var records = new ActiveXObject('ADODB.Recordset');
records.Open(sql, connection, 0, 2); // 0 = adOpenForwardOnly, 2 = adLockPessimistic
See the properties and methods of the ADO Recordset object.
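Continuing the sketch above, reading the results back out (the column name is a placeholder):

// Walk the recordset row by row.
while (!records.EOF) {
    var value = records.Fields('some_column').Value; // one column of the current row
    // ... render value into the page ...
    records.MoveNext();
}
records.Close();
connection.Close();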
I have to implement an architecture where, unfortunately, we are using SharePoint 2013 as, effectively, our principal database. (Not my choice, in case you hadn't picked that up.) I have an ASP.NET MVC facade application on the server, handling composition of data from SP and a couple of other data sources, and then a JavaScript SPA as the client. An additional wrinkle is that the client needs to be able to work offline, so I need to use IndexedDB to store the data for offline access.
This seems a perfect use-case for breeze.js. My basic architecture is to define a strongly typed model in the MVC facade that will wrap the untyped data I get from SP (in the form object["property"] - using the SP Client Side Object Model). Breeze will handle synchronization between this model and the client, and I will use the export/import functionality to cache data in IndexedDB as required.
So far so good. But... the SOA examples on the breeze site are still under development (and to me, this is fundamentally an SOA architecture, with each SP List a service to be composed). The closest thing I can find is the NoDB sample but this hard-codes metadata on the client. I'd like to establish relationships and validation in the MVC model, and then pass these through to the client, so validation can run off the same declaration in both places.
Is this possible? If so - how? If not does anyone have a workaround or a better idea? I'm already resigned to defining the model in two separate places (the facade and, implicitly, the structure of the SP lists). I would dearly like to avoid implementing it a third time in the client. I'm open to having breeze.js talk directly to the SP REST endpoints, but my understanding from https://stackoverflow.com/a/15364503/1014822 is that the implementation is flawed and does not expose the required metadata.
Resolution: Based on the accepted answer below, I came to the following solution:
1) Generate a service reference from the SP ListData.svc endpoint - thus creating an edmx and proxy classes.
2) Extend ContextProvider in my Repository and override BuildJsonMetadata like so:
protected override string BuildJsonMetadata()
{
    // Load the EDMX produced by the ListData.svc service reference.
    XDocument xDoc = XDocument.Load(HttpContext.Current.Server.MapPath("PATH_TO_EDMX"));

    // Swap the data service's namespace for the application's own.
    string xString = xDoc.ToString();
    xString = xString.Replace("DATA_SERVICE_NAMESPACE", "APP_NAMESPACE");
    xDoc = XDocument.Parse(xString);

    // Convert the CSDL metadata into the JSON format Breeze expects.
    var jsonText = CsdlToJson(xDoc);
    return jsonText;
}
3) Modify breeze.js very slightly, editing line 12383:
var schema = metadata.schema || metadata["edmx:Edmx"]["edmx:DataServices"].schema;
(I could presumably also have fixed this in the ContextProvider by choosing a descendant rather than the root node for my xDoc)
4) Optionally, use @Christoff's very useful T4TS.tt template script to generate a d.ts from the service proxy classes, so I can have type safety on the data that Breeze loads.
So far so good with this setup - I can perform basic CRUD with metadata, backed by SP as a data source.
As of v1.2.7, we have documented Breeze's metadata schema, and JSON objects that adhere to this schema and are returned from your web service will now be honored by Breeze.
--- previous post below
We are in the process of documenting how to expose arbitrary server side metadata over the next week or so, followed soon thereafter by some examples of how to consume an arbitrary web service. There are a few small code changes involved as well.
For the time being, until these docs are complete, the best workaround is to create your metadata on the client and use a jsonResultsAdapter to shape the results of your service call into "entities". The metadata you create on the client will be exactly the same as the metadata that you will eventually be creating on the server and sending down to the client.
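As a rough sketch of that workaround (the entity type name and service URL are placeholders, and the exact option names should be checked against the Breeze docs):

// Shape each JSON node from the service into a client-defined entity type.
var adapter = new breeze.JsonResultsAdapter({
    name: 'spListAdapter',
    visitNode: function (node, mappingContext, nodeContext) {
        return { entityType: 'Item' }; // 'Item' is defined in the client metadataStore
    }
});

var dataService = new breeze.DataService({
    serviceName: 'api/facade',       // your MVC facade endpoint (placeholder)
    hasServerMetadata: false,        // metadata is hand-built on the client for now
    jsonResultsAdapter: adapter
});

var manager = new breeze.EntityManager({ dataService: dataService });
// ...register the 'Item' type in manager.metadataStore before querying...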
Hope this helps.
I am working on a web application that gets JSON data from the server (Ruby/Rails) into the client using jQuery/ajax and then renders it to the browser using jQuery to populate the DOM. To simplify access to my data on the client side, I would like to use an object-relational mapper similar to ActiveRecord, but which starts with JSON data instead of data directly from a SQL data source.
Are there any such ORMs in JavaScript that convert a JSON data set (itself derived from a set of SQL queries on the server side) into a set of ActiveRecord-like objects?
I may be missing something here, but JSON (JavaScript Object Notation) is already a JavaScript object in and of itself.
If the data you are getting from the server doesn't map well to a usable JavaScript object, I would argue that it's the server side that needs to change to return a more useful serialized object rather than a simple recordset.
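That said, if you just want a thin ActiveRecord-flavored veneer, it is easy to hand-roll one; a minimal sketch (the endpoint and field names are invented):

// Wrap server JSON rows in a small "model" object with behavior.
function User(attrs) {
  $.extend(this, attrs); // copy id, firstName, lastName, ... onto the instance
}
User.prototype.fullName = function () {
  return this.firstName + ' ' + this.lastName;
};
User.find = function (id, done) {
  $.getJSON('/users/' + id + '.json', function (data) {
    done(new User(data)); // hand back a typed object, not a bare hash
  });
};

// Usage: User.find(1, function (u) { alert(u.fullName()); });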
ExtJS has a very nice JsonStore class.
There is CouchDB, a database written in Erlang that uses HTTP as its transport. This eliminates the need for middleware and lets you navigate the DB directly with AJAX calls. I can't speak for or against it; I haven't heard much about it in months, and it seems the hype train departed a few years ago.
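For example, fetching a document is just an HTTP GET of JSON (the host, database, and document id below are placeholders; CouchDB listens on port 5984 by default, and same-origin/CORS rules still apply):

$.getJSON('http://localhost:5984/mydb/some_doc_id', function (doc) {
  console.log(doc); // the document, already parsed as a JavaScript object
});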
You can't have an ORM to a remote DB in JavaScript. An ORM requires intimate knowledge of the DB schema, and shipping that knowledge out with an API just isn't practical as of yet.
For persistent local storage, there is the now-deprecated Google Gears and the HTML5 client-side database.
Yes, there is JSON ODM, which is exactly what you are looking for. If you need a method that is not supported yet, post an issue and I'll try my best to support it as soon as possible.
If you like it, please give it a star!
My current project uses JSON as its data interchange format. The front-end and back-end teams agree upon a JSON structure before starting to integrate a service. At times, unannounced changes to the JSON structure by the back-end team break the front-end code.
Is there any external library that we could use to compare a mock JSON (fixture) with the server's JSON response? Basically, it should assert the whole JSON object and throw an error if there is any violation of the server's JSON format.
Additional info: the app is built on jQuery consuming REST JSON services.
I would recommend a schema for your JSON objects.
I use Kwalify, but you can also use Rx if you like that syntax better.
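The workflow looks roughly the same whichever validator you pick; here is a sketch using a JSON Schema library (Ajv, my substitution here, since Kwalify and Rx use their own schema syntaxes), with a made-up schema and a placeholder serverResponse:

var Ajv = require('ajv');
var ajv = new Ajv();

// The structure both teams agreed upon, expressed as a schema.
var schema = {
  type: 'object',
  required: ['id', 'name'],
  properties: {
    id:   { type: 'integer' },
    name: { type: 'string' }
  }
};

var validate = ajv.compile(schema);
if (!validate(serverResponse)) {
  console.error(validate.errors); // every violation of the agreed structure
}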
I've been using QUnit (http://docs.jquery.com/QUnit) recently for a lot of my JS code.
asyncTest (http://docs.jquery.com/QUnit/asyncTest) could be used pretty effectively to test JSON structure.
Example:
asyncTest("Test JSON API 1", 1, function() {
$.getJSON("http://test.com/json", function(data) {
equals(data.expected, "what you expected", "Found it");
});
});
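To assert the whole payload against your agreed fixture rather than a single field, QUnit's deepEqual does a recursive comparison (the fixture contents here are placeholders):

asyncTest("JSON matches the agreed fixture", 1, function() {
    var fixture = { id: 1, name: "expected name" }; // the mock JSON both teams agreed upon
    $.getJSON("http://test.com/json", function(data) {
        deepEqual(data, fixture, "Server response matches the fixture");
        start();
    });
});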
Looks like you're trying to solve the problem from the other end. Why should you, as a front-end developer, be bothered with testing the back-end developer's work?
JSON that is generated on the server is better tested on the server using a standard approach, i.e. functional tests in xUnit. You could also look at acceptance-test frameworks like FitNesse if you want tests and documentation wiki all in one.
If, even after introducing testing on the server, you still get invalid JSON, it is a problem of human communication, not of tests.
Since there is no answer yet, I'll put my two cents in.
If the problem is that you are dealing with shifting requirements from the back-end, then what you need to do is isolate yourself from those changes. Put an abstraction between the front-end and the back-end.
Maybe you can call this abstraction the JSON Data Format Interchange.
So when GUI unit-testing (hopefully you are TDDing your web GUI) you will have a mock for the JSON DIF. So when the time comes to integrate the back-end with the front-end*, any software changes will be done in the abstraction-layer implementation. And of course you already have tests for those, based upon the agreed-upon JSON structure.
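As a rough sketch of such a layer (all names invented): the rest of the front-end only ever calls the adapter function, so a change in the server's JSON shape is absorbed in one place.

// Hypothetical adapter: the only code that knows the server's JSON shape.
function getOrders(done) {
  $.getJSON('/api/orders', function (raw) {
    done(raw.map(function (o) {
      return { id: o.order_id, total: o.amount_cents / 100 }; // normalize here
    }));
  });
}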
OBTW, I think that the server-side team should have responsibility for specifying the protocol to be used against the server.
*Why does this remind me of the joke that my butt and your face could be twins?
https://github.com/skyscreamer/JSONassert may be helpful in eliminating false positives, so that if the order of fields returned by the server changes but the overall response is the same, it doesn't trigger a failure.