I am new to SCORM and have been given an assignment to integrate SAP Workforce Performance Builder exported SCORM (can either be 1.2 or 2004) content into an existing PHP website.
To put it simply, I need to be able to display the exported SCORM material in the browser (I can already do this), and be able to get the statistics through the SCORM runtime API.
I understand that I will need to make use of an LMS to allow communication with the SCO through the SCORM runtime API. I have looked into several open source LMS's, but haven't found a good solution for my purpose. The problem is that a lot of these LMS's are designed to run on the domain of the provider, and have built in tools to follow up on users' progress and scoring.
What I'm looking for is a simple, lightweight solution to be able to interact with the SCORM runtime API, so I can fetch the time a user has spent on a course, his score, etc. I will insert the gathered data into my own database, and code the backend where results can be evaluated myself, all I need is a way to get to the SCORM data.
I feel like I'm missing something, as surely you don't need an entire LMS implementation to simply listen for the basic 8 SCORM API calls, and log the results? Any help or a nudge in the right direction is greatly appreciated!
If you just need to mimic an LMS, providing a pseudo SCORM API so the course can 'speak' to your PHP site, try Claude Ostyn's SCORM Test Wrapper. It's pure client-side JavaScript, as lightweight as you can get with SCORM.
In a nutshell, Claude's test wrapper provides a simple SCORM API for the course to connect to. It receives communication from the course, which you can handle however you like. No backend code is provided; if you want to integrate with a database, you will need to modify the wrapper to push/pull data from your site's database (this is typically handled via AJAX).
Once you build out the data store, you can make your site behave as an LMS, enabling the site to launch SCORM courses, and enabling the courses to send/receive data to your site via the SCORM API. No LMS or 3rd-party server required.
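To make the AJAX part concrete, here is a rough, hedged sketch of the idea (this is not Claude's code; the /scorm-commit.php endpoint is a made-up name you would implement on your PHP site):

    // Minimal SCORM 1.2 API stub the SCO can discover as window.API.
    // On LMSCommit/LMSFinish it posts the collected cmi data to your site.
    var cmi = {};                        // flat store of cmi.* elements

    window.API = {
      LMSInitialize: function () { return "true"; },
      LMSFinish: function () { postData(); return "true"; },
      LMSGetValue: function (key) { return (key in cmi) ? cmi[key] : ""; },
      LMSSetValue: function (key, value) { cmi[key] = String(value); return "true"; },
      LMSCommit: function () { postData(); return "true"; },
      LMSGetLastError: function () { return "0"; },
      LMSGetErrorString: function (code) { return "No error"; },
      LMSGetDiagnostic: function (code) { return ""; }
    };

    function postData() {
      // Hypothetical endpoint; store the data however your backend expects.
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "/scorm-commit.php", true);
      xhr.setRequestHeader("Content-Type", "application/json");
      xhr.send(JSON.stringify(cmi));
    }

A real wrapper (like Claude's) also validates element names and error codes; this is only meant to show where the AJAX call slots in.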
Notes:
There is no support for unzipping packages or reading manifests. (I suspect you're not interested in going that far.)
SCORM also supports sequencing and navigation, which go way beyond simple JavaScript wrappers. If you need to support the sequencing and navigation features, you'll need to grab them from an existing open-source project (not easy) or pay a 3rd party like Rustici Software (SCORM Cloud). I suspect the content you create via SAP will not use any of SCORM's sequencing or navigation features, so you'll probably be OK.
Claude passed away a while ago, so he can't support you. Shout out to the guys at Rustici Software, who have preserved the site for the SCORM community.
From the courseware's point of view, it is just using javascript to call functions on an API or API_1484_11 object. If you can write the javascript code to sufficiently ape the interface, and store/return the necessary data model elements, then you don't need "an entire LMS implementation".
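For context, this is roughly what the content side does to find that object - a simplified version of the usual window-walking lookup (your API object just has to sit somewhere this search can reach):

    // Simplified SCORM 1.2 API discovery as typically done by the SCO:
    // walk up the parent frames looking for window.API.
    function findAPI(win) {
      var attempts = 0;
      while (!win.API && win.parent && win.parent !== win && attempts < 10) {
        attempts++;
        win = win.parent;
      }
      return win.API || null;   // a SCORM 2004 SCO looks for API_1484_11 instead
    }

    var api = findAPI(window) || (window.opener ? findAPI(window.opener) : null);
    if (api) {
      api.LMSInitialize("");
    }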
You need to carefully read the Run-Time Environment documentation though.
If you only ever plan to use it for running courseware produced by SAP Workforce Performance Builder, then you can implement enough of the data model to make that work correctly (although I've seen this done, and then people are surprised/confused/angry when other SCORM-compliant courseware doesn't work, so beware).
(Aside) You also need a reliable way to install/update your courseware packages from a PIF zip file. Again, for dealing with courseware from a specific content creator and not needing to write a full blown generic interface, you can just pick out the bits of the imsmanifest.xml file you need.
(Digression) Having written the courseware side of the interface a few times, I've seen interesting gotchas in various LMS implementations of the API, including things like returning the boolean true or false instead of a string "true" or "false", which can catch you off guard. My favourite so far is an LMS that truncates the cmi.suspend_data at the first newline character. (Actually, the implementation was that inept that there was a bug in their bug, and it also chopped off the character before the newline as well.)
You'll mainly want to capture, maintain and enforce the Student Attempt Object. I've used this in a JSON format now for a while, and you can take different approaches to how you store information collected by a Shareable Content Object. Normally people pluck the parts they need vs. trying to go 100% into full SCORM support so these types of questions are popular.
By creating the SCORM Runtime for either SCORM 1.2 or 2004 you'll mainly be providing those methods to build the data from the student session.
This can look like https://gist.github.com/cybercussion/4675334 (based on Unit test data for SCORM 2004)
You route your calls straight through to your server side. This normally results in a lot of lag, and I don't normally advocate it as an option.
You cache the student attempt, but post the whole JSON object on a commit call. This normally results in a larger data post, which can balloon on you if there are a lot of journaled interactions.
You take a hybrid approach and only post the data that's changed, merging it on your server, which limits the data-bloat issues that could occur (a rough sketch follows).
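A rough sketch of that third option, assuming a hypothetical /api/commit endpoint on your server (illustration only, not SCOBot's actual code):

    // Cache the full student attempt locally, but track which keys changed
    // since the last commit and only send those.
    var attempt = {};   // full student-attempt object
    var dirty = {};     // keys touched since the last commit

    function setValue(key, value) {
      attempt[key] = value;
      dirty[key] = value;
      return "true";
    }

    function commit() {
      var changes = dirty;
      dirty = {};
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "/api/commit", true);      // hypothetical endpoint
      xhr.setRequestHeader("Content-Type", "application/json");
      xhr.send(JSON.stringify(changes));          // server merges into the stored attempt
      return "true";
    }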
I have a bunch of info up on the wiki here too https://github.com/cybercussion/SCOBot/wiki as well as a lot of sample code, tips etc...
Related
I'm building a relatively complex and data-heavy web application in AngularJS. I'm planning to use PHP as a RESTful backend (with Symfony2 and FOSRestBundle). I have spent weeks looking around for different on/offline synchronization solutions and there seem to be many half-solutions (see the list below for some examples), but none of them seem to fit my situation perfectly. How do I go about deciding which strategy will suit me?
Which issues might determine "best practices" for building an on/offline synchronization system in AngularJS and Symfony2 needs some research, but off the top of my head I want to consider things like speed, ease of implementation, future-proofing (a lasting solution), extensibility, resource usage/requirements on the client side, having multiple offline users editing the same data, and how much and what type of data to store.
Some of my requirements that I'm presently aware of are:
The users will be offline often and then need to synchronize (locally created) data with the database
Multiple users share some of the editable data (potential merging issues need to be considered).
Users might be logged in from multiple devices at the same time.
Allowing a large amount of data to be stored offline (up to a gigabyte)
I probably want the user to be able to decide what he wants to store locally.
Even if the user is online I probably want the user to be able to choose whether he uses all (backend) data or only what's available locally.
Some potential example solutions
PouchDB - Interesting strategies for synchronizing changes from multiple sources
Racer - Node lib for realtime sync, built on ShareJS
Meteor - DDP and strategies for sync
ShareJS - Node.js operational transformation, inspired by Google Wave
Restangular - Alternative to $resource
EmberData - EmberJS’s ORM-like data persistence library
ServiceWorker
IndexedDB Polyfill - Polyfills IndexedDB on browsers that only support WebSQL (Safari)
BreezeJS
JayData
Loopback’s ORM
ActiveRecord
BackBone Models
lawnchair - Lightweight client-side DB lib from Brian Leroux
TogetherJS - Mozilla Labs’ multi-client state sync/collaboration lib.
localForage - Mozilla’s DOMStorage improvement library.
Orbit.js - Content synchronization library
(https://docs.google.com/document/d/1DMacL7iwjSMPP0ytZfugpU4v0PWUK0BT6lhyaVEmlBQ/edit#heading=h.864mpiz510wz)
Any help would be much appreciated :)
You seem to want a lot, and the sync part is hard... I have a solution to some of this in an OSS library I am developing. The idea is that it does versioning of local data, so you can figure out what has changed and therefore do meaningful sync, which also includes conflict resolution etc. This is sort of an offline Meteor, as it is really tuned to offline use (for the London Underground, where we have no mobile data signals).
I have also developed an eco system around it which includes a connection manager and server. The main project is at https://github.com/forbesmyester/SyncIt and is very well documented and tested. The test app for the ecosystem will be at https://github.com/forbesmyester/SyncItTodoMvc but I have yet to write virtually any docs for it.
It is currently using LocalStorage but will be easy to move to localForage as it actually is using a wrapper around localStorage to make it an async API... Another one for the list maybe?
To work offline with your requirements I suggest dividing the problem into two scenarios: content (HTML, JS, CSS) and data (REST API).
The content
This will be stored offline by AppCache for small apps, or for advanced cases with the awesome Service Workers (Chrome 40+).
The data
This requires solving storage and synchronization, and it becomes a more difficult problem.
I suggest a deep reading of the Differential Synchronization algorithm, and taking the following tips into consideration:
Frontend
Store the resource and its shadow (using, for example, the URL as the key) in localStorage for small apps, or in more advanced alternatives (PouchDB, IndexedDB, ...). With the resource you can work offline, and when you need to synchronize with the server, compute a JSON diff between the resource and its shadow and send it to the server in a PATCH request.
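A minimal sketch of that idea, using a naive shallow diff instead of a real JSON-diff library (the URL doubles as the storage key):

    // Keep the working copy and its shadow in localStorage, keyed by URL.
    function saveResource(url, resource) {
      localStorage.setItem(url, JSON.stringify(resource));
    }

    function sync(url) {
      var resource = JSON.parse(localStorage.getItem(url) || "{}");
      var shadow = JSON.parse(localStorage.getItem(url + "::shadow") || "{}");

      // Naive diff: collect the fields that differ from the shadow copy.
      var diff = {};
      Object.keys(resource).forEach(function (key) {
        if (JSON.stringify(resource[key]) !== JSON.stringify(shadow[key])) {
          diff[key] = resource[key];
        }
      });

      // Send only the changes, then promote the resource to be the new shadow.
      var xhr = new XMLHttpRequest();
      xhr.open("PATCH", url, true);
      xhr.setRequestHeader("Content-Type", "application/json");
      xhr.onload = function () {
        if (xhr.status >= 200 && xhr.status < 300) {
          localStorage.setItem(url + "::shadow", JSON.stringify(resource));
        }
      };
      xhr.send(JSON.stringify(diff));
    }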
Backend
On the backend, take into consideration storing the shadow copies in Redis.
The two sides (frontend/backend) need to identify the client node; to do so you could use an x-syn-token HTTP header (sent in every request from the client with Angular interceptors).
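For example, with an AngularJS interceptor (the header name mirrors the one suggested above; adjust it to whatever your backend expects):

    // Attach the client-node token to every outgoing request.
    angular.module('app').factory('synTokenInterceptor', function () {
      return {
        request: function (config) {
          config.headers = config.headers || {};
          config.headers['x-syn-token'] = localStorage.getItem('syn-token') || '';
          return config;
        }
      };
    }).config(function ($httpProvider) {
      $httpProvider.interceptors.push('synTokenInterceptor');
    });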
https://www.firebase.com/
It's reliable and proven, and can be used as a backend and sync library for what you're after. But it costs money, and requires some integration coding.
https://goinstant.com/ is also a good hosted option.
In some of my apps, I prefer to have both: a syncing DB source AND another main database (mongo/express, php/mysql, etc.). Then each DB handles what it's best at, with its own features (real-time vs. security, etc.). This is true regardless of the sync-DB provider (be it Racer or Firebase or GoInstant...).
The app I am developing has many of the same requirements and is being built in AngularJS. In terms of future-proofing, there are two main concerns I have found: one is hacking attempts, which require encryption, possibly one-time keys, and a backend key manager; the other is support for WebSQL being dropped by the standards consortium in preference to IndexedDB, so finding an abstraction layer that can support both is important. The solution set I have come up with is fairly straightforward: offline data is loaded into the UI first, and a request goes out to the REST server if in an online state. As for resolving data conflicts in a multi-user environment, that becomes a business-rule decision. My decision was to simplify the matter and not delve into data merges, but to use a microtime-stamp comparison to determine which version should be kept and pushed out to clients. When in offline mode, data is stored as a dirty write and then pushed to the server when returning to an online state.
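A rough sketch of that approach (names are made up for illustration): a last-write-wins timestamp comparison, plus a dirty-write queue that flushes when the connection comes back.

    // Last-write-wins: keep whichever record carries the newer timestamp.
    function resolve(localRecord, serverRecord) {
      return (localRecord.updatedAt > serverRecord.updatedAt) ? localRecord : serverRecord;
    }

    function pushToServer(record) {
      var xhr = new XMLHttpRequest();                       // hypothetical REST endpoint
      xhr.open('PUT', '/api/records/' + record.id, true);
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.send(JSON.stringify(record));
    }

    // While offline, record writes as "dirty"; flush them when back online.
    var dirtyWrites = JSON.parse(localStorage.getItem('dirtyWrites') || '[]');

    function saveRecord(record) {
      record.updatedAt = Date.now();                        // millisecond timestamp
      if (navigator.onLine) {
        pushToServer(record);
      } else {
        dirtyWrites.push(record);
        localStorage.setItem('dirtyWrites', JSON.stringify(dirtyWrites));
      }
    }

    window.addEventListener('online', function () {
      dirtyWrites.forEach(pushToServer);
      dirtyWrites = [];
      localStorage.removeItem('dirtyWrites');
    });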
Or use ydn-db, which I am evaluating now as it has built-in support for AWS and Google Cloud Storage.
Another suggestion:
Yjs leverages an OT-like algorithm to share a wide range of supported data types, and you have the option to store the shared data in IndexedDB (so it is available for offline editing).
I've read about SPAs and their advantages. I find most of them unconvincing. There are 3 advantages that arouse my doubts.
Question: Can you act as an advocate of SPA and prove that I am wrong about the first three statements?
=== ADVANTAGES ===
1. SPA is extremely good for very responsive sites:
Server-side rendering is hard to implement for all the intermediate states - small view states do not map well to URLs.
Single page apps are distinguished by their ability to redraw any part of the UI without requiring a server roundtrip to retrieve HTML. This is achieved by separating the data from the presentation of data by having a model layer that handles data and a view layer that reads from the models.
What is wrong with having a model layer in a non-SPA? Is SPA the only architecture compatible with MVC on the client side?
2. With SPA we don't need to use extra queries to the server to download pages.
Hah, and how many pages can a user download while visiting your site? Two, three? Instead, other security problems appear and you need to separate your login page, admin page, etc. into separate pages. In turn this conflicts with the SPA architecture.
3. Maybe there are other advantages? I haven't heard of any others.
=== DISADVANTAGES ===
Client must enable javascript.
Only one entry point to the site.
Security.
P.S. I've worked on SPA and non-SPA projects, and I'm asking these questions because I need to deepen my understanding. I mean no harm to SPA supporters. Don't ask me to read a bit more about SPAs; I just want to hear your considerations about that.
Let's look at one of the most popular SPA sites, GMail.
1. SPA is extremely good for very responsive sites:
Server-side rendering is not as hard as it used to be with simple techniques like keeping a #hash in the URL, or more recently HTML5 pushState. With this approach the exact state of the web app is embedded in the page URL. As in GMail, every time you open a mail a special hash tag is added to the URL. If copied and pasted into another browser window, it can open the exact same mail (provided they can authenticate). This approach maps directly to a more traditional query string; the difference is merely in the execution. With HTML5 pushState() you can eliminate the #hash and use completely classic URLs which can resolve on the server on the first request and then load via AJAX on subsequent requests.
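A bare-bones sketch of the pushState part (assuming the server can also render /mail/123 on a full page request, and /api/mail/:id is a hypothetical JSON endpoint):

    // First load: the server renders the full page for the current URL.
    // Subsequent navigation: fetch just the data and update the URL.
    function openMail(id) {
      history.pushState({ mailId: id }, '', '/mail/' + id);   // classic URL, no #hash
      loadMail(id);
    }

    window.addEventListener('popstate', function (event) {
      if (event.state && event.state.mailId) {
        loadMail(event.state.mailId);   // back/forward re-render client-side
      }
    });

    function loadMail(id) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/api/mail/' + id, true);
      xhr.onload = function () {
        var mail = JSON.parse(xhr.responseText);
        document.getElementById('content').innerHTML = '<h1>' + mail.subject + '</h1>';
      };
      xhr.send();
    }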
2. With SPA we don't need to use extra queries to the server to download pages.
How many pages does a user download during a visit to my web site? Really, how many mails does someone read when they open their mail account? I read >50 in one go. Now, the structure of the mails is almost the same; if you use a server-side rendering scheme, the server would render them on every request (the typical case).
- Security concern: whether you should or should not keep separate pages for admins/login entirely depends upon the structure of your site (take paytm.com for example). Also, making a web site an SPA does not mean you open all the endpoints to all users; I use forms auth with my SPA web site.
- In probably the most-used SPA framework, AngularJS, the dev can load HTML templates from the web site, so that can be done depending on the user's authentication level. Preloading HTML for all auth types isn't SPA.
3. May be any other advantages? Don't hear about any else..
These days you can safely assume the client will have a JavaScript-enabled browser.
Only one entry point to the site: as I mentioned earlier, maintenance of state is possible, and you can have as many entry points as you want, but you should have one for sure.
Even in an SPA the user only sees what he has proper rights to. You don't have to inject everything at once; loading different HTML templates and JavaScript asynchronously is also a valid part of SPA.
Advantages that I can think of are:
Rendering HTML obviously takes some resources; now every user visiting your site is doing this. Also, not only rendering but major logic is now done client-side instead of server-side.
Date-time issues: I just give the client UTC time in a preset format and don't even care about time zones; I let JavaScript handle it. This is a great advantage compared to when I had to guess time zones based on the location derived from the user's IP.
To me, state is more nicely maintained in an SPA because once you have set a variable you know it will be there. This gives a feel of developing an app rather than a web page, and helps a lot typically in making sites like foodpanda, flipkart, or amazon, because if you are not using client-side state you are using expensive sessions.
Websites surely are extremely responsive. I'll take an extreme example for this: try making a calculator in a non-SPA website (I know it's weird).
Updates from Comments
It doesn't seem like anyone has mentioned sockets and long-polling.
If you log out from another client, say a mobile app, then your browser should also log out. If you don't use an SPA, you have to re-create the socket connection every time there is a redirect. This should also work with any updates in data like notifications, profile updates, etc.
An alternate perspective: Aside from your website, will your project involve a native mobile app? If yes, you are most likely going to be feeding raw data to that native app from a server (i.e. JSON) and doing client-side processing to render it, correct? So with this assertion, you're ALREADY doing a client-side rendering model. Now the question becomes, why shouldn't you use the same model for the website version of your project? Kind of a no-brainer. Then the question becomes whether you want to render server-side pages only for SEO benefits and the convenience of shareable/bookmarkable URLs.
I am a pragmatist, so I will try to look at this in terms of costs and benefits.
Note that for any disadvantage I give, I recognize that they are solvable. That's why I don't look at anything as black and white, but rather, costs and benefits.
Advantages
Easier state tracking - no need to use cookies, form submission, local storage, session storage, etc. to remember state between 2 page loads.
Boilerplate content that is on every page (header, footer, logo, copyright banner, etc.) only loads once per typical browser session.
No overhead latency on switching "pages".
Disadvantages
Performance monitoring - hands tied: Most browser-level performance monitoring solutions I have seen focus exclusively on page load time only, like time to first byte, time to build DOM, network round trip for the HTML, onload event, etc. Updating the page post-load via AJAX would not be measured. There are solutions which let you instrument your code to record explicit measures, like when clicking a link, start a timer, then end a timer after rendering the AJAX results, and send that feedback. New Relic, for example, supports this functionality. By using a SPA, you have tied yourself to only a few possible tools.
Security / penetration testing - hands tied: Automated security scans can have difficulty discovering links when your entire page is built dynamically by a SPA framework. There are probably solutions to this, but again, you've limited yourself.
Bundling: It is easy to get into a situation when you are downloading all of the code needed for the entire web site on the initial page load, which can perform terribly for low-bandwidth connections. You can bundle your JavaScript and CSS files to try to load in more natural chunks as you go, but now you need to maintain that mapping and watch for unintended files to get pulled in via unrealized dependencies (just happened to me). Again, solvable, but with a cost.
Big bang refactoring: If you want to make a major architectural change, like say, switch from one framework to another, to minimize risk, it's desirable to make incremental changes. That is, start using the new, migrate on some basis, like per-page, per-feature, etc., then drop the old after. With traditional multi-page app, you could switch one page from Angular to React, then switch another page in the next sprint. With a SPA, it's all or nothing. If you want to change, you have to change the entire application in one go.
Complexity of navigation: Tooling exists to help maintain navigational context in SPA's, like history.js, Angular 2, most of which rely on either the URL framework (#) or the newer history API. If every page was a separate page, you don't need any of that.
Complexity of figuring out code: We naturally think of web sites as pages. A multi-page app usually partitions code by page, which aids maintainability.
Again, I recognize that every one of these problems is solvable, at some cost.
But there comes a point where you are spending all your time solving problems which you could have just avoided in the first place. It comes back to the benefits and how important they are to you.
Disadvantages
1. Client must enable javascript. Yes, this is a clear disadvantage of SPA. In my case I know that I can expect my users to have JavaScript enabled. If you can't then you can't do a SPA, period. That's like trying to deploy a .NET app to a machine without the .NET Framework installed.
2. Only one entry point to the site. I solve this problem using SammyJS. 2-3 days of work to get your routing properly set up, and people will be able to create deep-link bookmarks into your app that work correctly. Your server will only need to expose one endpoint - the "give me the HTML + CSS + JS for this app" endpoint (think of it as a download/update location for a precompiled application) - and the client-side JavaScript you write will handle the actual entry into the application.
3. Security. This issue is not unique to SPAs, you have to deal with security in exactly the same way when you have an "old-school" client-server app (the HATEOAS model of using Hypertext to link between pages). It's just that the user is making the requests rather than your JavaScript, and that the results are in HTML rather than JSON or some data format. In a non-SPA app you have to secure the individual pages on the server, whereas in a SPA app you have to secure the data endpoints. (And, if you don't want your client to have access to all the code, then you have to split apart the downloadable JavaScript into separate areas as well. I simply tie that into my SammyJS-based routing system so the browser only requests things that the client knows it should have access to, based on an initial load of the user's roles, and then that becomes a non-issue.)
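To make the routing point concrete, here is a minimal Sammy.js sketch (the routes and the /api/orders endpoint are just illustrative):

    // Deep-linkable #/ routes inside a single-page app.
    var app = Sammy('#main', function () {
      this.get('#/', function (context) {
        context.$element().html('<h1>Dashboard</h1>');
      });

      this.get('#/orders/:id', function (context) {
        // Bookmarkable deep link; fetch only the data this user may see.
        var id = this.params['id'];
        $.getJSON('/api/orders/' + id, function (order) {
          context.$element().html('<h1>Order ' + order.id + '</h1>');
        });
      });
    });

    $(function () { app.run('#/'); });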
Advantages
A major architectural advantage of a SPA (that rarely gets mentioned) in many cases is the huge reduction in the "chattiness" of your app. If you design it properly to handle most processing on the client (the whole point, after all), then the number of requests to the server (read "possibilities for 503 errors that wreck your user experience") is dramatically reduced. In fact, a SPA makes it possible to do entirely offline processing, which is huge in some situations.
Performance is certainly better with client-side rendering if you do it right, but this is not the most compelling reason to build a SPA. (Network speeds are improving, after all.) Don't make the case for SPA on this basis alone.
Flexibility in your UI design is perhaps the other major advantage that I have found. Once I defined my API (with an SDK in JavaScript), I was able to completely rewrite my front-end with zero impact on the server aside from some static resource files. Try doing that with a traditional MVC app! :) (This becomes valuable when you have live deployments and version consistency of your API to worry about.)
So, bottom line: If you need offline processing (or at least want your clients to be able to survive occasional server outages) - dramatically reducing your own hardware costs - and you can assume JavaScript & modern browsers, then you need a SPA. In other cases it's more of a tradeoff.
One major disadvantage of SPA - SEO. Only recently Google and Bing started indexing Ajax-based pages by executing JavaScript during crawling, and still in many cases pages are being indexed incorrectly.
While developing an SPA, you will be forced to handle SEO issues, probably by post-rendering all of your site and creating static HTML snapshots for crawlers' use. This will require a solid investment in proper infrastructure.
Update 19.06.16:
Since writing this answer a while ago, I have gained much more experience with Single Page Apps (namely, AngularJS 1.x) - so I have more info to share.
In my opinion, the main disadvantage of SPA applications is SEO, making them limited to kind of "dashboard" apps only. In addition, you are going to have a much harder time with caching, compared to classic solutions. For example, in ASP.NET caching is extremely easy - just turn on OutputCaching and you are good: the whole HTML page will be cached according to URL (or any other parameters). However, in an SPA you will need to handle caching yourself (by using some solutions like second-level cache, template caching, etc.).
I would like to make the case for SPAs being best for data-driven applications. Gmail, of course, is all about data and thus a good candidate for an SPA.
But if your page is mostly for display, for example, a terms of service page, then a SPA is completely overkill.
I think the sweet spot is having a site with a mixture of both SPA and static/MVC style pages, depending on the particular page.
For example, on one site I am building, the user lands on a standard MVC index page. But then when they go to the actual application, then it calls up the SPA. Another advantage to this is that the load-time of the SPA is not on the home page, but on the app page. The load time being on the home page could be a distraction to first time site users.
This scenario is a little bit like using Flash. After a few years of experience, the number of Flash only sites dropped to near zero due to the load factor. But as a page component, it is still in use.
For such companies as Google, Amazon etc., whose servers are running at max capacity in 24/7 mode, reducing traffic means real money - less hardware, less energy, less maintenance. Shifting CPU usage from server to client pays off, and SPAs shine. The advantages outweigh the disadvantages by far.
So, SPA or not SPA depends much on the use case.
Just for mentioning another, probably not so obvious (for Web-developers) use case for SPAs:
I'm currently looking for a way to implement GUIs in embedded systems, and a browser-based architecture seems appealing to me. Traditionally there were not many possibilities for UIs in embedded systems - Java, Qt, wx, etc. or proprietary commercial frameworks. Some years ago Adobe tried to enter the market with Flash, but that seems to have been not so successful.
Nowadays, as "embedded systems" are as powerful as mainframes were some years ago, a browser-based UI connected to the control unit via REST is a possible solution. The advantage is the huge palette of UI tools at no cost (e.g. Qt requires $20-30 per sold unit in royalty fees plus $3000-4000 per developer).
For such an architecture an SPA offers many advantages - e.g. a more familiar development approach for desktop-app developers, and reduced server access (often in the car industry the UI and system modules are separate hardware, where the system part runs an RT OS).
As the only client is the built-in browser, the mentioned disadvantages like JS availability, server-side logging, and security no longer apply.
2. With SPA we don't need to use extra queries to the server to download pages.
I still have a lot to learn, but since I started learning about SPAs, I have loved them.
This particular point may make a huge difference.
In many web apps that are not SPAs, you will see that they still retrieve and add content to the pages by making AJAX requests. So I think that SPAs go further by considering: what if the content that is going to be retrieved and displayed using AJAX is the whole page, and not just a small portion of a page?
Let me present a scenario. Consider that you have 2 pages:
a page with list of products
a page to view the details of a specific product
Consider that you are at the list page. Then you click on a product to view the details. The client side app will trigger 2 ajax requests:
a request to get a json object with the product details
a request to get an html template where the product details will be inserted
Then, the client side app will insert the data into the html template and display it.
Then you go back to the list (no request is made for this!) and you open another product. This time, there will be only one AJAX request, to get the details of the product. The HTML template is going to be the same, so you don't need to download it again.
You may say that in a non-SPA, when you open the product details, you make only 1 request, and in this scenario we made 2. Yes. But you get the gain from an overall perspective: when you navigate across many pages, the number of requests is going to be lower. And the data that is transferred between the client side and the server is going to be lower too, because the HTML templates are going to be reused. Also, you don't need to download on every request all those CSS, image, and JavaScript files that are present in all the pages.
Also, let's consider that your server-side language is Java. If you analyze the 2 requests that I mentioned, one downloads data (you don't need to load any view file or call the view-rendering engine) and the other downloads a static HTML template, so a plain HTTP web server can serve it directly without having to call the Java application server; no computation is done!
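A small sketch of that flow (endpoints and the template path are illustrative), showing how the template is requested only once and reused afterwards:

    var templateCache = {};

    function getTemplate(url, callback) {
      if (templateCache[url]) {             // second product view: no template request
        return callback(templateCache[url]);
      }
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url, true);           // static file, no application-server work
      xhr.onload = function () {
        templateCache[url] = xhr.responseText;
        callback(xhr.responseText);
      };
      xhr.send();
    }

    function showProduct(id) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/api/products/' + id, true);   // JSON only, no view rendering
      xhr.onload = function () {
        var product = JSON.parse(xhr.responseText);
        getTemplate('/templates/product-detail.html', function (template) {
          document.getElementById('view').innerHTML =
            template.replace('{{name}}', product.name)
                    .replace('{{price}}', product.price);
        });
      };
      xhr.send();
    }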
Finally, the big companies are using SPA: Facebook, GMail, Amazon. They don't play, they have the greatest engineers studying all this. So if you don't see the advantages you can initially trust them and hope to discover them down the road.
But it is important to use good SPA design patterns. You may use a framework like AngularJS. Don't try to implement an SPA without using good design patterns, because you may end up having a mess.
Disadvantages:
Technically, the design and initial development of an SPA is complex and can be avoided. Other reasons for not using an SPA can be:
a) Security: a Single Page Application is less secure compared to traditional pages due to cross-site scripting (XSS).
b) Memory leaks: a memory leak in JavaScript can cause even a powerful computer to slow down. Traditional websites encourage navigation among pages, so any memory leak caused by a previous page is almost entirely cleaned up, leaving less residue behind.
c) The client must enable JavaScript to run an SPA, but in a multi-page application JavaScript can be completely avoided.
d) An SPA can grow to a considerable size, causing long waiting times. E.g. working with Gmail on a slower connection.
Apart from the above, other architectural limitations are navigational data loss, no log of navigational history in the browser, and difficulty in automated functional testing with Selenium.
This link explains Single Page Applications' advantages and disadvantages.
Try not to consider using an SPA without first defining how you will address security and API stability on the server side. Then you will see some of the true advantages to using an SPA. Specifically, if you use a RESTful server that implements OAuth 2.0 for security, you will achieve two fundamental separations of concerns that can lower your development and maintenance costs.
This will move the session (and its security) onto the SPA and relieve your server of all that overhead.
Your APIs become both stable and easily extensible.
Hinted at earlier, but not made explicit: if your goal is to deploy Android & Apple applications, writing a JavaScript SPA that is wrapped by a native call to host the screen in a browser (Android or Apple) eliminates the need to maintain both an Apple code base and an Android code base.
I understand this is an older question, but I would like to add another disadvantage of Single Page Applications:
If you build an API that returns results in a data language (such as XML or JSON) rather than a formatting language (like HTML), you are enabling greater application interoperability, for example, in business-to-business (B2B) applications. Such interoperability has great benefits but does allow people to write software to "mine" (or steal) your data. This particular disadvantage is common to all APIs that use a data language, and not to SPAs in general (indeed, an SPA that asks the server for pre-rendered HTML avoids this, but at the expense of poor model/view separation). The risk exposed by this disadvantage can be mitigated by various means, such as request limiting, connection blocking, etc.
In my development I found two distinct advantages to using an SPA. That is not to say that the following cannot be achieved in a traditional web app, just that I see an incremental benefit without introducing additional disadvantages.
Potential for fewer server requests, as rendering new content isn't always, or even ever, an HTTP server request for a new HTML page. But I say potential because new content could easily require an AJAX call to pull in data; that data, though, could be incrementally lighter than the data plus its markup, providing a net benefit.
The ability to maintain "state". In its simplest terms, set a variable on entry to the app and it will be available to other components throughout the user's experience without passing it around or resorting to a local-storage pattern. Intelligently managing this ability, however, is key to keeping the top-level scope uncluttered.
Other than requiring JS (which is not a crazy thing to require of web apps), the other noted disadvantages are in my opinion either not specific to SPAs or can be mitigated through good habits and development patterns.
Is there any SCORM player which is purely based on JavaScript and HTML? I don't want to use any server-side languages for it. I found an open-source one, Scormpool, but it just plays the SCORM content without tracking it, and no documentation is available. If you know of any, please help.
This may get you started as well, or you can tailor it to your own needs.
https://github.com/skfriese/simple-scorm-api
It's a rudimentary SCORM 1.2-only RTE test environment in a single HTML file that I whipped up for my own needs years ago. I only recently elected to clean it up to share with others. It's easy to throw into your package folder and spin up from there. The SCO will assume it's running in an LMS, which is really all that's necessary for testing most of the time.
You can update the default values in the Javascript, and the data will be stored in LocalStorage or fall back to cookies if necessary. It will also "attempt" to read some values from a manifest, should one exist. It does not support multi-SCO packages or any other complex organizational features of SCORM.
The Reload Tools have been embedded into the page as well, so you'll find that most of the data elements will be validated.
Hope this helps someone.
I was searching for a similar solution myself and was unable to find anything.
So after struggling with docs and bits found online I have created this:
https://bitbucket.org/jugger0/tsp
It is far away from a fully-featured player but can be used as a starting point.
Maybe someone will find it useful.
It would be helpful if the desired result was better defined. Do you just want to be able to run scorm content locally without errors?
The initial question mentions that the content is playing but not tracking... where would it track to? Are you looking to have things stored for later retrieval on the same system, or potentially on any system? If this is the case, where would it be stored? In the browser (single system only and prone to deletion with clearing cookies, cache, etc), or to a server (accessible from multiple systems)?
If to a server, it is going to require at least some server side language, though you could potentially use something like node.js which would let that language still be javascript.
Claude Ostyn's SCORM 1.2 Test Wrapper will act as a standalone in-memory "LMS" for SCORM 1.2 courses.
There is some setup required; though it will work if you run it directly in the browser, you may find it more effective running it (and your course) from a webserver.
Available here: https://www.jcasolutions.com/development/code-repository/func-startdown/7/
This used to be part of a resource kit from Click2Learn. Though Rustici (scorm.com) has adopted some of it, I wasn't able to find the original file there.
Instructions and more details may be found by running "launchme.html" directly.
I've found this one of my more valuable testing tools.
A SCORM 2004 version, and many more useful articles and tools may be found at Claude Ostyn's website http://www.ostyn.com. He passed away several years ago, but someone has kindly kept the site up.
We are currently developing a few web applications and are letting our designers convert signed-off paper prototypes into static web pages. In addition to having hyperlinks between pages, the designers have started adding jQuery calls to update elements on the pages by fetching data from static JSON files. Once the designers are finished and hand off the completed web pages, CSS, and JavaScript files, the server-side developers then edit the pages and replace the references to the local static JSON documents with references to live JSON URLs that return the same JSON data structures.
My question:
What are efficient ways to decouple GUI design from serverside dev and reduce the integration time and effort? Some examples:
Do you have the developers manually change every json reference in the designers' prototype web pages?
Do you add a global variable somewhere to enable the designers' pages to be easily switched back and forth between using static and dynamic data?
Do you make the web pages self-aware of when they are running from a web server or just being served from a folder somewhere?
It depends on how much state information your application needs to manage. The examples you have given mention only read operations from the server. Are there any writes? Both read and write operations can potentially fail. Do your designers take care of those cases, or does the server-side team jump in and patch up the GUI later on?
I think it's best to provide a mock server-side implementation of your services for the designers. The server mock could simulate real-life behavior, such as throwing errors and exceptions beyond just the happy path. It really is much less of a hassle to install a simple web server nowadays and write a single script that acts as a mock service, rather than trying to create processes for later integration.
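As one way to do it (Node/Express chosen only for brevity; the routes and payloads are made up), a mock like this gives designers both the happy path and the failure cases:

    // mock-server.js - serves the same JSON shapes the real backend will,
    // plus a switch to exercise the unhappy paths designers need to handle.
    var express = require('express');
    var app = express();

    app.get('/api/users/:id', function (req, res) {
      // Simulate a server error on demand so error states get designed too.
      if (req.query.fail === '1') {
        return res.status(500).json({ error: 'Something went wrong' });
      }
      res.json({ id: req.params.id, name: 'Jane Doe', email: 'jane@example.com' });
    });

    app.post('/api/users', function (req, res) {
      // Writes can fail as well; let designers trigger that deliberately.
      if (req.query.fail === '1') {
        return res.status(400).json({ error: 'Validation failed' });
      }
      res.status(201).json({ id: 123 });
    });

    app.listen(3000, function () {
      console.log('Mock API listening on http://localhost:3000');
    });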
One of the things I have seen work fairly successfully for the whole "url/path" problem (dev vs test vs production, etc) is to create a subdomain for your project of "proto" (so proto.mycoolapp.company.int) and have that be the basis for all urls used anywhere in the app.
On top of that, using whatever framework you are using, set up global page handlers to identify section/page/function + [url data] for all of your data files (i.e. JSON calls etc.).
This allows you to segregate your app into sections and build up functionality as you go and allows both the UI team and the dev team to work on hints for the other team.
For example say I have an app with three sections (login, account management, preview) I would have a login url service (data, files, whatever) at proto.mycoolapp.company.int/frontend/login/securelogin?user=GrayWizardX. The UI team can add these as they identify a need for data and your app dev team can see what functions are being requested (through server logs, etc) and make sure everything is matching up.
The good part is that when you move to production, the only change is to find "proto" and replace it. If that is in your global var, then it's a quick and simple change.
It's not real clear what exactly your environment is (plain html only? or something like php?), so this is a bit of a shot in the dark...
If it's plain old HTML + JavaScript, you could probably use a JavaScript include on each page to get the right set of addresses. When the include file with all this environment-specific information (i.e. use local or use server) remains in the same relative place, you don't need to worry about modifying the actual page. Just tell the GUI guys to always use this same set of variables for the address information and define the address information in that include file. The variable names don't change whether you're getting data locally or from the server, just the values stored in the variables.
I'm low on Vitamin Coffee, so hopefully that makes sense.
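A minimal sketch of such an include file (the variable and file names are made up):

    // config.js - included on every page; only this file differs between
    // the designers' static setup and the live server.
    var AppConfig = {
      // Designers point these at local static files:
      //   usersUrl:  'data/users.json',
      //   ordersUrl: 'data/orders.json'
      // Developers switch them to the live endpoints:
      usersUrl:  '/api/users',
      ordersUrl: '/api/orders'
    };

    // Pages always go through the variables, never a hard-coded path:
    // $.getJSON(AppConfig.usersUrl, renderUsers);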
I would like to create a database backed interactive AJAX webapp which has a custom (specific kind of events, editing) calendaring system. This would involve quite a lot of JavaScript and AJAX, and I thought about Google Web Toolkit for the interface and Ruby on Rails for server side.
Is Google Web Toolkit reliable and good? What hidden risks might be if I choose Google Web Toolkit? Can one easily combine it with Ruby on Rails on server side? Or should I try to use directly a JavaScript library like jQuery?
I have no experience in web development except some HTML, but I am an experienced programmer (c++, java, c#), and I would like to use only free tools for this project.
RoR is actually one of the things that GWT is made to work well with, as long as you're using REST properly. It's in the Google Web Toolkit Applications book, and you can see a demo from the book using this kind of idea here. That's not to say that you won't have any problems, but I think the support is definitely out there for it.
There's a neat project for making RoR/GWT easy that you can find here (MIT license). I haven't had a chance to try it out yet, but it looks like a good amount of thought has been put into it. One catch is that it looks like it hasn't been fully tested with 2.1 Rails yet, just 2.0, so you may run into a few (probably minor and fixable) errors.
If you are looking to integrate GWT with non-Java backends such as ROR, PHP etc., you should bear in mind that GWT 1.5 now supports JavaScript Overlay types. This feature lets you write classes that can be mapped over the top of native JavaScript objects to easily provide accessor methods for properties of those objects and other extended functionality.
See this link for more details:
JavaScript Overlay Types
So you could return JSON encoded data from your backend via AJAX calls, parse it into a JavaScript Object and then access the data through your GWT Java code using the overlay classes you've created. Or when you render your page you can render static config data as JavaScript Objects and read it in via this mechanism, rather than having to do an AJAX call to grab the data.
If you know JAVA, and have somewhere you can host it (like a tomcat or glassfish container) I would recommend that much more than using Ruby for the back end. The main reason is that then you can share all of your objects, and use the built in RPC mechanism. I've done this for quite a lot of our projects and it's a huge timesaver, not to mention that the code is less error prone, because you don't convert your java objects to anything and then back again.
I have linked my GWT with Rails before, using the to_json function in Rails and then reading the JSON in GWT. It's all supported, but it is far more annoying than just doing the back end in JAVA.
Of course if you have cheap hosting, then Java containers are pretty much out of the question, in which case I would think Rails would be the next best thing.
GWT is very high quality with a great community. However you do need to know CSS if you want to adjust the look of things (you will) - CSS can do a lot of the layout, just like regular web if you want it to. Libraries like GWT-ext or ExtGWT can help a bit as they have stunning "out of the box" looks but for a price (extra size to your app).
You can code everything in Java using GWT, and you can integrate existing 3rd party javascript libraries with it. It's very good. I've never used RoR much though, so can't say anything about that.
If you're experienced in Java but not in Javascript/CSS, then GWT is going to be a lifesaver (unless you want to learn them, of course). CSS has so many little fiddly details. It is not uncommon to spend half a day fixing a 2 pixel misalignment that only occurs in IE6.
I am not sure about how easy it would be to use ROR for the back end... It is possible, I am sure, since GWT ajax communication is just servlets. But they provide some really nice functionality for passing Java objects back and forth which you won't be able to utilize if your server isn't also using Java.
I wrote about some of the disadvantages of GWT recently. Mainly, the disadvantages are: a long deployment cycle for changes to some parts of the application and a rather steep learning curve. As a seasoned Java programmer, the second should be less of a problem, and if you use a separate backend, the first is also mitigated (as a complete redeploy is primarily required when you change the 'server' part of the application).
GWT is a wonderful framework with lots of potential. Keep in mind that it's still quite new, though. There are some unresolved bugs that can really annoy you, and they usually require ugly workarounds to get past. The community is great but you'll probably end up with a few problems sooner or later that Google can't answer yet.
But hey, I say go for it. The potential for GWT is awesome, and I bet its future will be bright.
You should definitely use GWT for a new project (it's pretty easy to use in an old project too).
In my experience it's very fast to learn and use. The compiled JavaScript code is much better than anything you could ever write by hand, and it works fast too.
Another benefit is the ability to debug your code (which is hell with JavaScript alone).
This blog has input from many experienced users of GWT and some great discussion points. I personally have huge experience with varied UI frameworks. I will add my two cents. Let's look at the fundamental advantages and disadvantages of GWT.
Fundamental Advantage
GWT takes the web layer programming to JAVA. So, the obvious advantages of Java start coming into play. It will provide object-oriented programming. It will also provide great debugging and compile-time checks. Since it generates HTML and JavaScript, it also has the ability to hide some complexity within its generator.
Fundamental Disadvantage
The disadvantage starts from the same statement. GWT takes the web layer programming to JAVA. If you know JAVA, you will probably never look for an alternative language to write your business logic. It's self-sufficient and great. But when it comes to writing configurations for a JAVA application, we use property files, databases, XML, etc. We never store configurations in a JAVA class file. Think hard, why is that?
This is because configuration is static data. It often requires hierarchy. It is supposed to be readable. It never requires compilation. It doesn't require knowledge of the JAVA programming language. In short, it is a different ball game. Now the question is, how does this relate to our discussion?
Now, let's think about a web page. Do you think when we write a web page we write business logic? Absolutely not. A web page is just a configuration. It is a configuration of hierarchical containers and fields. We need to write business logic for the data that will be captured from and displayed on the web page, not to create the web page itself.
The previous paragraph makes a very, very strong statement. This explains why HTML- and XML-based web pages are still the most popular ones. XML is the best in the business for writing configurations. A framework must allow a clear separation of the web page from business logic (the goal of an MVC framework). By doing this, a web designer will be able to apply his skills of visualization and artistry to create brilliant-looking web pages just by configuring XML and without being bothered about the intricacies of a programming language. Developers will be able to use their best-in-business JAVA for writing business logic.
Finally, let's talk about the repercussions in direct terms. GWT breaks this principle, so it is bound to fail. The cost of developing a GWT application will be very high because you will need multi-skilled programmers to write the web pages. The required look and feel will be very hard to achieve. The turnaround time for modifying a web page will be very high because of unnecessary compilation. And lastly, since you are writing web pages in JAVA, it is very easy to corrupt them with business logic. Unknowingly you will introduce complexities that must be avoided.
You could also consider Grails ("Groovy on Rails") which gives you the benefits of a Rails framework and the use of the Java VM.
Our team recently asked the same question, and we chose to go with GWT, especially since the designer plugin made working with GWT more accessible to non-Java experts on the team. Whoever makes this choice, just beware: DON'T use the GWT Designer plugin!! It has not been updated (in at least a year, apparently) to create a GWT application that is compatible with IE8.
Our team had almost completed our application layouts, which were working perfectly in Chrome, FF and Safari. Then they blew up in IE. IE 7 would load partial pages (but not composite includes), and IE8 was not even able to load up the application. It just hung.
The designer plugin has buttons that allow the user to add CellTable widgets that are not IE compatible (CellTable, DeckPanel, Horizontal Panel, Vertical Panel, among others). These will cause intense pain when the layouts have to be re-done in java without assistance from the designer.
Experienced GWT users love it, but the designer plugin will kill you.