I've been poring over the BreezeJS documentation and Stack Overflow posts looking for definitive information on using BreezeJS to interact with a REST tier (not .NET) that supports CRUD operations. I've read a number of posts that clearly communicate the philosophy behind the default behavior of saveChanges(), which sends a collection of updated entities to the server. While creating server-side code to handle this may be possible, it likely won't be our preferred path, and yes, I do understand the transactional and state issues that decision would impose on the client code.
In my searches on Stack Overflow I've found a number of questions similar to mine, but those questions are all relatively old (at least by JavaScript-library standards) - for example, Save changes to RESTful URLS with Breeze JS.
Some of those posts, including the one above, seem to indicate that work has been, or is being, done to address the desire to perform CRUD operations via normal REST operations.
Finally, my questions:
What is the current state of BreezeJS with regard to supporting RESTful CRUD operations?
If I choose to manage entity updates via a named save operation, what kinds of hoops will I need to jump through to get Breeze to send those CRUD operations to my REST tier?
What else am I missing in my summary above that will make CRUD operations from BreezeJS to a REST tier challenging?
Note: our JavaScript framework is Angular.
I do understand your question and your perspective. I haven't had time to document how to do this yet.
You'll find clues in "breeze.ccjsActiveRecordDataServiceAdapter.js" in the CC-JS Ruby sample; that adapter is easy to examine on GitHub. The same goes for the "breeze.labs.dataservice.sharepoint" adapter, which you can also examine on GitHub.
Both adapters target servers that want PUT/POST/DELETE to specific per-type endpoints (and do not understand "batch saves").
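To give a feel for the shape of those adapters, here is a loose sketch of a custom dataservice adapter whose saveChanges fans the save bundle out as one REST call per entity. It assumes jQuery and Q are loaded; the bundle internals, the id property, and the resource-URL helper are illustrative guesses rather than Breeze's documented API, so treat this as pseudocode and defer to the two samples above:

    // Loose sketch only -- the real save-bundle shape differs by Breeze version.
    var ctor = function () {
        this.name = "restCrud";    // adapter name used at registration
    };

    ctor.prototype.initialize = function () { };

    // Breeze hands saveChanges the pending entities; map each entity's
    // state onto a conventional REST verb against a per-type endpoint.
    ctor.prototype.saveChanges = function (saveContext, saveBundle) {
        var promises = saveBundle.entities.map(function (e) {
            var state = e.entityAspect.entityState;                  // e.g. "Added"
            var url = "/api/" + e.entityAspect.defaultResourceName;  // assumed helper data
            if (state === "Added")    { return $.ajax({ type: "POST",   url: url, data: e }); }
            if (state === "Modified") { return $.ajax({ type: "PUT",    url: url + "/" + e.id, data: e }); }
            if (state === "Deleted")  { return $.ajax({ type: "DELETE", url: url + "/" + e.id }); }
        });
        // Gather the per-entity responses into the saveResult Breeze expects.
        return Q.all(promises);
    };

    breeze.config.registerAdapter("dataService", ctor);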
I intend to do a thorough presentation and sample for these "REST CRUD" scenarios "soon" ... but probably not before May.
I am trying to understand how to perform CRUD operations using AngularJS, Node.js and MySQL. I need to understand the approach for this. Please explain with a basic example.
CRUD is the acronym for the set of operations Create, Read, Update and Delete.
CRUD is the basic set of operations required to use and manipulate a collection, or resource if you will.
In this setting, "performing" CRUD operations would relate to your application in the following way:
AngularJS uses $http to run HTTP requests asynchronously against your RESTful API.
Node.js implements the routes that make up the API, hopefully in accordance with the best practices widely used in existing APIs (an in-depth blog can be found here). Based on the route/API method triggered, Node.js will also send queries to MySQL to persist the necessary changes.
MySQL's job is to persist the data you are working with, using SQL queries to (C)reate, (R)ead, (U)pdate and (D)elete data (the CRUD operations). MySQL of course has a lot of other tools as well, but for CRUD these are what you need. A minimal sketch of how these pieces fit together follows below.
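To make the division of labour concrete, here is a minimal sketch of the Node.js side, assuming the express and mysql npm packages; the notes table and the endpoint paths are made up for the example:

    // server.js -- minimal Express API persisting to MySQL (names illustrative)
    var express = require('express');
    var mysql = require('mysql');

    var app = express();
    app.use(express.json());   // parse JSON request bodies

    var db = mysql.createConnection({
        host: 'localhost', user: 'app', password: 'secret', database: 'demo'
    });

    // (R)ead: return all rows
    app.get('/api/notes', function (req, res) {
        db.query('SELECT * FROM notes', function (err, rows) {
            if (err) { return res.status(500).end(); }
            res.json(rows);
        });
    });

    // (C)reate: insert a row from the posted JSON
    app.post('/api/notes', function (req, res) {
        db.query('INSERT INTO notes SET ?', req.body, function (err, result) {
            if (err) { return res.status(500).end(); }
            res.status(201).json({ id: result.insertId });
        });
    });

    app.listen(3000);

On the Angular side, the matching call is just $http.get('/api/notes').then(function (resp) { $scope.notes = resp.data; }); update and delete follow the same pattern with $http.put and $http.delete.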
Now this is just the practical idea of how to implement a RESTful API with CRUD operations, persisting it with MySQL and consuming it with Angular. If you really are serious about learning more about this, you should:
Google better (really, it's too easy to find info about the subject, as people have mentioned). The terms REST, RESTful, API and CRUD are all good keywords to Google; then append whatever you are more interested in, "best practice REST API" for example.
Read blog posts, follow engineers on social media/blogs, read books, watch YouTube, etc. This information is everywhere.
One important thing that might not be obvious when starting out: always try to find standards or best practices for what you want to do. That means someone has already done the hard work of trying and failing, and basically serves you the best solution on a silver platter.
I am new to SCORM and have been given an assignment to integrate SAP Workforce Performance Builder exported SCORM (can either be 1.2 or 2004) content into an existing PHP website.
To put it simply, I need to be able to display the exported SCORM material in the browser (I can already do this), and be able to get the statistics through the SCORM runtime API.
I understand that I will need to make use of an LMS to allow communication with the SCO through the SCORM runtime API. I have looked into several open-source LMSes, but haven't found a good solution for my purpose. The problem is that a lot of these LMSes are designed to run on the provider's domain, and have built-in tools to follow up on users' progress and scoring.
What I'm looking for is a simple, lightweight solution to be able to interact with the SCORM runtime API, so I can fetch the time a user has spent on a course, his score, etc. I will insert the gathered data into my own database, and code the backend where results can be evaluated myself, all I need is a way to get to the SCORM data.
I feel like I'm missing something, as surely you don't need an entire LMS implementation to simply listen for the basic 8 SCORM API calls, and log the results? Any help or a nudge in the right direction is greatly appreciated!
If you just need to mimic an LMS, providing a pseudo SCORM API so the course can 'speak' to your PHP site, try Claude Ostyn's SCORM Test Wrapper. It's pure client-side JavaScript, as lightweight as you can get with SCORM.
In a nutshell, Claude's test wrapper provides a simple SCORM API for the course to connect to. It receives communication from the course, which you can handle however you like. No backend code is provided; if you want to incorporate with a database, you will need to modify the wrapper to push/pull data from your site's database (this is typically handled via AJAX).
Once you build out the data store, you can make your site behave as an LMS, enabling the site to launch SCORM courses, and enabling the courses to send/receive data to your site via the SCORM API. No LMS or 3rd-party server required.
Notes:
There is no support for unzipping packages or reading manifests. (I suspect you're not interested in going that far.)
SCORM also supports sequencing and navigation, which go way beyond simple JavaScript wrappers. If you need to support the sequencing and navigation features, you'll need to grab them from an existing open-source project (not easy) or pay a 3rd party like Rustici Software (SCORM Cloud). I suspect the content you create via SAP will not use any of SCORM's sequencing or navigation features, so you'll probably be OK.
Claude passed away a while ago, so he can't support you. Shout out to the guys at Rustici Software, who have preserved the site for the SCORM community.
From the courseware's point of view, it is just using JavaScript to call functions on an API or API_1484_11 object. If you can write JavaScript code that sufficiently apes the interface, and stores/returns the necessary data model elements, then you don't need "an entire LMS implementation".
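For illustration, a bare-bones SCORM 1.2 shim covering those basic 8 calls can be this small; the commit endpoint is a made-up placeholder for your PHP site:

    // Minimal SCORM 1.2 API stub. Courseware finds this by walking parent
    // windows looking for a global named API (API_1484_11 for SCORM 2004).
    var cmi = {};   // in-memory CMI data model store
    window.API = {
        LMSInitialize: function () { return "true"; },
        LMSFinish:     function () { return "true"; },
        LMSGetValue:   function (key) { return cmi[key] || ""; },
        LMSSetValue:   function (key, value) { cmi[key] = String(value); return "true"; },
        LMSCommit:     function () {
            // Push the collected data to your backend (hypothetical endpoint).
            var xhr = new XMLHttpRequest();
            xhr.open("POST", "/scorm/commit.php");
            xhr.setRequestHeader("Content-Type", "application/json");
            xhr.send(JSON.stringify(cmi));
            return "true";
        },
        LMSGetLastError:   function () { return "0"; },
        LMSGetErrorString: function (code) { return "No error"; },
        LMSGetDiagnostic:  function (code) { return ""; }
    };

Note the string "true"/"false" return values; as mentioned below, returning real booleans instead is a classic implementation mistake.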
You need to carefully read the Run-Time Environment documentation though.
If you only ever plan to use it for running courseware produced by SAP Workforce Performance Builder, then you can implement just enough of the data model to make that work correctly (although I've seen this done, and then people are surprised/confused/angry when other SCORM-compliant courseware does not work, so beware).
(Aside) You also need a reliable way to install/update your courseware packages from a PIF zip file. Again, for dealing with courseware from a specific content creator and not needing to write a full blown generic interface, you can just pick out the bits of the imsmanifest.xml file you need.
(Digression) Having written the courseware side of the interface a few times, I've seen interesting gotchas in various LMS implementations of the API, including things like returning the boolean true or false instead of the string "true" or "false", which can catch you off guard. My favourite so far is an LMS that truncates the cmi.suspend_data at the first newline character. (Actually, the implementation was so inept that there was a bug in their bug, and it also chopped off the character before the newline as well.)
You'll mainly want to capture, maintain and enforce the Student Attempt Object. I've used this in a JSON format now for a while, and you can take different approaches to how you store information collected by a Shareable Content Object. Normally people pluck the parts they need vs. trying to go 100% into full SCORM support so these types of questions are popular.
By creating the SCORM Runtime for either SCORM 1.2 or 2004 you'll mainly be providing those methods to build the data from the student session.
This can look like https://gist.github.com/cybercussion/4675334 (based on Unit test data for SCORM 2004)
You attempt to route your calls to your server side. Normally this results in a lot of lag, and I don't normally advocate it as an option.
You cache the student attempt, but you post the whole JSON object on a commit call. This normally results in a larger data post, which can balloon on you if there are a lot of journaled interactions.
You take a hybrid approach and only post the data that's changed, merging it on your server and limiting the data-bloat issues that could occur (a sketch follows below).
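A rough sketch of that hybrid approach, with an illustrative merge endpoint:

    // Cache the full attempt locally; track changed keys; post only the
    // delta on commit and let the server merge it into the stored attempt.
    var attempt = {};   // full Student Attempt object, loaded at launch
    var dirty = {};     // keys modified since the last successful commit

    function setValue(key, value) {
        attempt[key] = value;
        dirty[key] = value;          // remember only what changed
        return "true";
    }

    function commit() {
        var delta = dirty;
        dirty = {};                  // reset before the async call
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "/api/attempt/merge");   // hypothetical endpoint
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.send(JSON.stringify(delta));
        return "true";
    }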
I have a bunch of info up on the wiki here too https://github.com/cybercussion/SCOBot/wiki as well as a lot of sample code, tips etc...
I have a bit of conceptual question regarding the structure of users and their documents.
Is it good practice to give each user within CouchDB their own database which holds their documents?
I have read that CouchDB can handle thousands of databases and that it is not that uncommon for each user to have their own.
Reason:
The reason for asking this question is that I am trying to create a system where a logged-in user can only view their own documents and can't view any other user's documents.
Any suggestions?
Thank you in advance.
It's a rather common scenario to create a CouchDB bucket (DB) for each user, although there are some drawbacks:
You must keep ddocs in sync in each user bucket, so deployment of ddoc changes across multiple buckets may become a real adventure.
If docs are shared between users in some way, you get doc and view-index dupes in each bucket.
You must block _info requests to avoid user list leak (or you must name buckets using hashes).
In any case, you need some proxy in front of Couch to create and prepare a new bucket on user registration.
You'd better protect Couch from running out of capacity when it receives too many requests; this also requires a proxy.
Per-doc read ACL can be implemented using _list functions, but this approach has some drawbacks and it also requires a proxy, at least a web server, in front of CouchDB. See CouchDb read authentication using lists for more details, and the rough sketch after this list.
Also you can try to play with CoverCouch which implements a full per-doc read ACL, keeping original CouchDB API untouched, but it’s in very early beta.
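As a rough illustration of the _list-based read ACL mentioned above, assuming every document carries an owner field naming its user:

    // Design-doc _list function: emit only the view rows whose doc
    // belongs to the requesting user (the owner field is an assumption).
    function (head, req) {
        var row, mine = [];
        while ((row = getRow())) {
            if (row.value.owner === req.userCtx.name) {
                mine.push(row.value);
            }
        }
        send(JSON.stringify(mine));
    }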
This is quite a common use case, especially in mobile environments, where the data for each user is synchronized to the device using one of the Android, iOS or JavaScript (pouchdb) libraries.
So in concept, this is fine but I would still recommend testing thoroughly before going into production.
Note that one downside of multiple databases is that you can't write queries that span multiple databases. There are some workarounds though - for more information see Cloudant: Searching across databases.
Update 17 March 2017:
Please take a look at Cloudant Envoy for more information on this approach.
Database-per-user is a common pattern with CouchDB when there is a requirement for each application user to have their own set of documents which can be synced (e.g. to a mobile device or browser). On the surface, this is a good solution - Cloudant handles a large number of databases within a single installation very well. However ...
Source: https://github.com/cloudant-labs/envoy
The solution is as old as web applications - if you think of a MySQL database, there is nothing in the database to stop user B viewing records belonging to user A; it is all coded in the application layer.
In CouchDB there is likewise no completely secure way to prevent user B from accessing documents written by user A. You would need to code this in your application layer just as before.
Provided you have a web application between CouchDB and the users you have no problem. The issue comes when you allow CouchDB to serve requests directly.
Using multiple databases for multiple users has some important drawbacks:
queries over data in different databases are not possible with the native CouchDB API, so analysis of your website's overall status is quite impossible!
maintenance will soon become very hard: think of replicating/compacting thousands of databases each time you want to perform a backup
It depends on your use case, but I think that a nice approach can be:
allow access only through virtual hosts. This can be achieved using a proxy, or much more simply by using a CouchDB hosting provider which lets you fine-tune your "domains->path" mapping
use design docs/couchapps, instead of the direct document CRUD API, for read/write operations
2.1. using _rewrite handler to allow only valid requests: in this way you can instantly block access to sensible handlers like _all_docs, _all_dbs and others
2.2. using _list and _view handlers for read doc/role-based ACLs, as described in CouchDb read authentication using lists
2.3. using _update handlers for write doc/role-based ACLs (a sketch follows this list)
2.4. using authenticated rewriting rules for read/write role based ACL.
2.5. a filtered _changes handler is another way of retrieving all of a user's data with a read doc/role-based ACL. Depending on your use case, this can simplify your read API as much as possible, letting you concentrate on your update API.
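For example, point 2.3's write ACL could be sketched as an _update handler like the following, again assuming an owner field on each document:

    // Design-doc _update handler: enforce that users only write their
    // own documents (the owner field is an illustrative convention).
    function (doc, req) {
        var body = JSON.parse(req.body);
        if (doc && doc.owner !== req.userCtx.name) {
            return [null, { code: 403, body: 'forbidden' }];  // someone else's doc
        }
        body._id = doc ? doc._id : req.uuid;
        if (doc) { body._rev = doc._rev; }   // keep the revision for updates
        body.owner = req.userCtx.name;       // stamp ownership server-side
        return [body, 'ok'];
    }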
I am just starting to look at MVC structure. First I looked at how Backbone.js worked, and now I have just completed Rails for Zombies, by Code School. I know that I haven't delved too far into any of this, but I had a question to begin with.
Can you use these libraries together?
I have learned how to create models, views, etc in both, but when creating a real application do you use both backbone and rails?
If so...
When do you use a backbone.js model vs. a rails model?
Maybe I am just getting ahead of myself and need to keep practicing and doing tutorials but I couldn't seem to find anything directly on this.
Thanks!
Before anything else I'd suggest taking a look at thoughtbot's Backbone.js on Rails book, which is a great starting point, although aimed at an intermediate to advanced audience. I bought this book having already worked with rails but as a total backbone.js beginner and it has served me very well.
Beyond that, there are some fundamental issues with combining these frameworks which go beyond the details covered in this book and other books. Below are some things I'd suggest you think about, from my own experiences pairing RoR and backbone.js. This is a long answer and strays a bit from the specifics of your question, but I hope it might help you out in the "big picture" sense of understanding the problem you're facing.
Rails: Web Framework vs API
The first thing you confront when using backbone.js on top of a rails application is what to do about views, but this is really just the surface of a much deeper issue. The problem goes to the very heart of what it means to create a RESTful web service.
Rails is set up out of the box to encourage its users to create RESTful services, by structuring routing in terms of a set of resources accessed at uniform URIs (defined in your routes.rb file) through standard HTTP actions. So if you have a Post model, you can:
Get all posts by sending GET request to /posts
Create a new post by sending a GET request to /posts/new, filling out the form and sending it (a POST request) to /posts
Update a post with id 123 by sending a GET request to /posts/123/edit, filling out the form and sending it (a PUT request) to posts/123
Destroy a post with id 123 by sending a DELETE request to /posts/123
The key thing to remember about this aspect of Rails is that it is fundamentally stateless: regardless of what I was doing previously, I can create a new Post simply by sending a POST request with valid form data to the correct URI, say /posts. Of course there are caveats: I may need to be logged in (have a session cookie identifying me), but in essence Rails doesn't really care what I was doing before I sent that request. I could follow it up by updating another post, or by sending a valid action to whatever other resources are made available to me.
This aspect of how Rails is designed makes it relatively easy to turn a (Javascript-light) Rails web application into an API: the resources will be similar or the same, the web framework returning HTML pages while the API (typically) returns data in JSON or XML format.
Backbone.js: A new stateful layer
Backbone is also based on RESTful resources. Whenever you create, update or destroy a backbone.js model, you do so via the standard HTTP actions sent to URIs which assume a RESTful architecture of the kind described above. This makes it ideal for integrating with RESTful services like RoR.
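For example, a Backbone model and collection pointed at a Rails resources :posts route get RESTful CRUD essentially for free (the Post name here is just illustrative):

    // Backbone wired to a conventional Rails resource
    var Post = Backbone.Model.extend({
        urlRoot: '/posts'     // PUT/DELETE go to /posts/:id automatically
    });

    var Posts = Backbone.Collection.extend({
        model: Post,
        url: '/posts'
    });

    var posts = new Posts();
    posts.fetch();                       // GET  /posts
    posts.create({ title: 'Hello' });    // POST /posts
    // later: post.save() issues PUT /posts/123, post.destroy() a DELETE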
But there is a subtle point to be stressed here: backbone.js integrates seamlessly with Rails as an API. That is to say, if you strip away the HTML views and just use Rails for serving RESTful resources, integrating with the database, performing session management, etc., then it integrates very nicely with the structure that backbone.js provides for client-side code. Many people argue that there's nothing wrong with using rails this way, and I think in many ways they are right.
The complications arise though from the issue of what to do with that other part of Rails that we've just thrown away: the views and what they represent.
Stateful humans, stateless machines
This is actually more important than it may initially seem. HTML views represent the stateless interface that humans use for accessing the RESTful resources your service provides. Doing away with them leaves you with two access points:
For humans: a rich, client-side interface provided by the backbone.js layer (stateful)
For machines: a resource-oriented RESTful API provided by the rails layer (stateless)
Notice that there is no longer a stateless (RESTful) interface for humans. In contrast, in a traditional rails app with an API, we had something closer to this:
HTML resources for humans (stateless)
JSON/XML resources (API) for machines (stateless)
The latter two interfaces for accessing resources are much closer in nature to each other than the previous two. Just think for example of rails' respond_with, which takes advantage of the similarities to wrap various RESTful responders in a unified method.
Working together
This might all seem very abstract and beside the point, I know. To try to make it more concrete, consider the following problem, which gets back to your question about getting rails and backbone.js to work together. In this problem, you want to:
Create a web service with a rich client-side experience using backbone.js, with rails as the back end serving resources in JSON format.
Use pushState to give each page in the app a URL (e.g. /posts/123) which can be accessed directly, by entering it into the browser bar (see the router sketch below).
For each of these URLs, also serve an HTML page for clients without javascript.
These are not unusual demands for a modern web service, but they create a complex challenge. To make a long story short, you now have to create two "human-oriented" layers:
Stateful client-side interface (backbone.js templates and views)
Stateless HTML resources (Rails HTML views)
The complexity of actually doing this leads many nowadays to abandon the latter of these two and just offer a rich client-side interface. What you decide to do depends on your goals and what you want to achieve, but it's worth thinking about this problem carefully.
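To make the pushState demand (point 2 above) concrete, here is a minimal sketch of the client-side half; it assumes the same illustrative Post model and a Rails server that also answers /posts/:id directly:

    // Client-side routing with real URLs instead of # fragments
    var Post = Backbone.Model.extend({ urlRoot: '/posts' });

    var AppRouter = Backbone.Router.extend({
        routes: { 'posts/:id': 'showPost' },
        showPost: function (id) {
            var post = new Post({ id: id });
            post.fetch();    // GET /posts/123 as JSON from the Rails API
        }
    });

    new AppRouter();
    Backbone.history.start({ pushState: true });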
As another possible reference for doing that, I'd suggest having a look at O'Reilly's RESTful Web Services. It might seem odd to be recommending a book on REST in a question about Rails and Backbone.js, but actually I think this is the key piece that fits these very different frameworks together, and understanding it more fully will help you take advantage of the strengths of both.
Yes, you can use both side by side. Backbone is for storing and manipulating data within the client browser. It generally needs a server to talk to and fetch the data from. This is where Rails comes in. You can have a web application without heavy client-side code. Backbone is for building out sites that feel more like apps--think of Gmail or Pandora.
I advise just learning Rails first. Once you can get static pages loading and styled as you wish, then understanding Backbone's place will make more sense.
I've used rails as a backend server to serve a fairly large website, which included a few one-page apps (built in backbone).
I'd suggest the backbone-on-rails gem. The idea is that your rails server will serve up the backbone app as a script tag in one of your views. You keep your backbone app itself in the rails app/assets folder.
Backbone understands rails routing conventions, and you just need to give it a path to a json api that rails can almost generate for you with rails generate resource.
Other than the syncing between the models, your backbone apps and rails apps are fairly separate. Backbone and Rails don't have quite the same MVC model, but getting them to cooperate is pretty easy.
I'm working on a personal project to build a small web app that is built using AJAX requests and talks to a RESTful API rather than traditional HTML pages and form submissions.
Are there any online articles or tutorials or any books that people could recommend that cover design patterns for this kind of thing?
I finished my part of a similar project on Friday, which included using WCF endpoints to build parts of pages (we were calling them pods). So the pages had boilerplate and then AJAXed in the pods. From your description, I think you will need a 'behavioural' pattern, as these deal with the patterns of communication between objects.
The design pattern we started with was 'Chain of responsibility' (from GOF book):
"Avoid coupling the sender of a
request to its receiver by giving more
than one object a chance to handle the
request. Chain the receiving objects
and pass the request along the chain
until an object handles it"
Although I don't know if this covers your precise requirements.
You are better off googling than asking here. There's a ton of stuff out there.
I recommend using jQuery to help with the client side aspect of AJAX. It does a lot of heavy lifting for you and integrates well with forms, the DOM, etc.
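For instance, the two most common calls against a RESTful JSON API look like this in jQuery (the endpoint paths are illustrative):

    // Read a collection
    $.getJSON('/api/widgets', function (widgets) {
        console.log(widgets);    // render into the DOM here
    });

    // Create a resource
    $.ajax({
        url: '/api/widgets',
        type: 'POST',
        contentType: 'application/json',
        data: JSON.stringify({ name: 'sprocket' })
    }).done(function (created) {
        console.log('created widget', created.id);
    });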