Delivering a JavaScript library for web developers

This is a broad question about delivering a JavaScript library that other web developers will use on their sites. Here's the scope of my library:
I'm providing a data service that's delivered in the form of a JS file. A similar implementation would be Google Analytics.
Will always be hosted by me. Developers will simply use the src attribute on the <script> tag.
My library consists of an object (let's call it Jeff for now) with a set of properties. No methods, just values.
The library isn't static, but is instead session-based. We're providing data points that can only be determined at request time. (Think of a web service normally called through AJAX, available at page-load.)
This is not a free service; implementors will pay for usage.
The Jeff object will always be returned, though not all properties may be populated if a runtime error occurred back on my server. The Jeff object includes a Response section that indicates success/failure and gives a description.
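For illustration, the delivered object might look something like this (every property name below is a hypothetical placeholder, not an actual schema):

// Sketch of the Jeff object as delivered by the hosted script.
// All names here are illustrative assumptions, not a published schema.
var Jeff = {
    Response: {
        success: false,                      // false when a server-side error occurred
        description: "Geo lookup timed out"  // human-readable status
    },
    // Session-based data points, resolved at request time when available:
    visitorRegion: null,
    sessionScore: null
};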
Now, to my question: what's ideal in terms of best practices for providing a service in the form of a JS library such as I've described? Standard Googling has not given me much to go on, but references to guidelines are greatly appreciated.

Doesn't sound like something I'd use. The fact that you want it always hosted on your server leaves any consumer of the service open to you substituting malicious code after they've reviewed it and determined it's useful and safe. Thus I'd see limited uptake for it unless you're a large corporation with a trustworthy reputation.
No comment on you personally, just how I'd view something like that and how Information Security overseers in larger companies would likely view it as well.

YUI host all their files for developers to access directly, with free use of their CDN to boot. Also, hundreds of thousands of companies worldwide use Google Analytics, which has the same risk profile as "Jeff".
Admittedly the trust profile for Yahoo! and Google is a lot higher than it is for "Jeff", but still, my point is there are plenty of precedents out there for this delivery model.
Personally (btw there is no right answer, except for the market's response) I believe it may have merit depending on the value proposition behind "Jeff". I agree with MadMurf: describe it as a 'web service' that requires only one JS file to integrate into a customer's website.
PS: I'm not sure if "javascript" was the best tag for discussing this. Maybe the "business" tag would have elicited wider feedback. Good luck!

Related

Getting formula from website calculator

So this is an issue I come across frequently... there are many medical website calculators online that health systems would like to use, but the formulas, equations and statistical models aren't readily available. I was wondering if it would be possible to use Developer Tools in Chrome or something similar to find these in the JavaScript? I can find pages of calculations when I dig into the code, but nothing that makes sense to me. (e.g. http://riskcalculator.facs.org/RiskCalculator/PatientInfo.jsp)
Yes and no. If everything works on the client side you should be able to read the code if you really need it. There's an option to "prettify" the minified script in chrome dev tools (sources -> "{}" icon below the editor).
However, I'm not sure whether reusing it would violate the law in some countries.
Also, I believe if this is some kind of expensive information, the website authors would rather send input to a server and send back a result. You could also do some reverse engineering by watching the I/O in numerous ways.
I'd suggest writing to them directly first and asking if they are willing to share the algorithm with you.
Additionally, just do some research on the topic you're interested in. Most of these calculators use publicly accessible knowledge that is pretty easy to write up as a script.

Use SCORM runtime API without LMS?

I am new to SCORM and have been given an assignment to integrate SAP Workforce Performance Builder exported SCORM (can either be 1.2 or 2004) content into an existing PHP website.
To put it simply, I need to be able to display the exported SCORM material in the browser (I can already do this), and be able to get the statistics through the SCORM runtime API.
I understand that I will need to make use of an LMS to allow communication with the SCO through the SCORM runtime API. I have looked into several open source LMSs, but haven't found a good solution for my purpose. The problem is that a lot of these LMSs are designed to run on the provider's domain, and have built-in tools to follow up on users' progress and scoring.
What I'm looking for is a simple, lightweight solution to be able to interact with the SCORM runtime API, so I can fetch the time a user has spent on a course, his score, etc. I will insert the gathered data into my own database, and code the backend where results can be evaluated myself, all I need is a way to get to the SCORM data.
I feel like I'm missing something, as surely you don't need an entire LMS implementation to simply listen for the basic 8 SCORM API calls, and log the results? Any help or a nudge in the right direction is greatly appreciated!
If you just need to mimic an LMS, providing a pseudo SCORM API so the course can 'speak' to your PHP site, try Claude Ostyn's SCORM Test Wrapper. It's pure client-side JavaScript, as lightweight as you can get with SCORM.
In a nutshell, Claude's test wrapper provides a simple SCORM API for the course to connect to. It receives communication from the course, which you can handle however you like. No backend code is provided; if you want to incorporate with a database, you will need to modify the wrapper to push/pull data from your site's database (this is typically handled via AJAX).
Once you build out the data store, you can make your site behave as an LMS, enabling the site to launch SCORM courses, and enabling the courses to send/receive data to your site via the SCORM API. No LMS or 3rd-party server required.
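For orientation, a bare-bones SCORM 1.2 API shim (in the spirit of Claude's wrapper, not a copy of it) looks roughly like this; the commit body is a placeholder you would wire to your own AJAX layer:

// Minimal SCORM 1.2 API shim -- a sketch, not production code.
// The course discovers this object by walking parent frames looking for window.API.
var scormData = {};  // in-memory data model store

window.API = {
    LMSInitialize: function (param) { return "true"; },
    LMSFinish:     function (param) { return "true"; },
    LMSGetValue:   function (key) { return scormData[key] || ""; },
    LMSSetValue:   function (key, value) {
        scormData[key] = String(value);
        return "true";  // note: SCORM wants the string "true", not a boolean
    },
    LMSCommit: function (param) {
        // Placeholder: persist scormData to your PHP backend here (e.g. via AJAX).
        return "true";
    },
    LMSGetLastError:   function () { return "0"; },
    LMSGetErrorString: function (code) { return "No error"; },
    LMSGetDiagnostic:  function (code) { return ""; }
};

A SCORM 2004 shim is the same idea, with the object named API_1484_11 and methods named Initialize, Terminate, GetValue, SetValue, Commit, and so on.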
Notes:
There is no support for unzipping packages or reading manifests. (I suspect you're not interested in going that far.)
SCORM also supports sequencing and navigation, which go way beyond simple JavaScript wrappers. If you need to support the sequencing and navigation features, you'll need to grab them from an existing open-source project (not easy) or pay a 3rd party like Rustici Software (SCORM Cloud). I suspect the content you create via SAP will not use any of SCORM's sequencing or navigation features, so you'll probably be OK.
Claude passed away a while ago, so he can't support you. Shout out to the guys at Rustici Software, who have preserved the site for the SCORM community.
From the courseware's point of view, it is just using javascript to call functions on an API or API_1484_11 object. If you can write the javascript code to sufficiently ape the interface, and store/return the necessary data model elements, then you don't need "an entire LMS implementation".
You need to carefully read the Run-Time Environment documentation though.
If you only ever plan to use it for running courseware produced by SAP Workforce Performance Builder, then you can implement enough of the data model to make that work correctly (although I've seen this done, and then people are surprised/confused/angry when other SCORM-compliant courseware does not work, so beware).
(Aside) You also need a reliable way to install/update your courseware packages from a PIF zip file. Again, for dealing with courseware from a specific content creator and not needing to write a full blown generic interface, you can just pick out the bits of the imsmanifest.xml file you need.
(Digression) Having written the courseware side of the interface a few times, I've seen interesting gotchas in various LMS implementations of the API, including things like returning the boolean true or false instead of the string "true" or "false", which can catch you off guard. My favourite so far is an LMS that truncates cmi.suspend_data at the first newline character. (Actually, the implementation was so inept that there was a bug in their bug, and it also chopped off the character before the newline as well.)
You'll mainly want to capture, maintain and enforce the Student Attempt Object. I've used this in a JSON format now for a while, and you can take different approaches to how you store information collected by a Shareable Content Object. Normally people pluck the parts they need vs. trying to go 100% into full SCORM support so these types of questions are popular.
By creating the SCORM Runtime for either SCORM 1.2 or 2004 you'll mainly be providing those methods to build the data from the student session.
This can look like https://gist.github.com/cybercussion/4675334 (based on Unit test data for SCORM 2004)
There are a few approaches to persisting what the student does:
1. You route your calls straight to your server side. Normally this results in a lot of lag, and I don't normally advocate it as an option.
2. You cache the student attempt, but post the whole JSON object on a commit call. This normally results in a larger data post, which can balloon on you if there are a lot of journaled interactions.
3. You take a hybrid approach and only post the data that's changed, merging it on your server and limiting the ballooning issue (see the sketch below).
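A minimal sketch of that hybrid commit, assuming a hypothetical /scorm/commit endpoint that merges deltas server-side:

// Hybrid commit: send only the data model keys that changed since the last commit.
// The endpoint URL and payload shape are illustrative assumptions.
var lastCommitted = {};

function commitChanges(data) {
    var delta = {};
    for (var key in data) {
        if (data.hasOwnProperty(key) && data[key] !== lastCommitted[key]) {
            delta[key] = data[key];
            lastCommitted[key] = data[key];
        }
    }
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/scorm/commit", true);
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(JSON.stringify(delta));  // server merges the delta into the stored attempt
}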
I have a bunch of info up on the wiki here too https://github.com/cybercussion/SCOBot/wiki as well as a lot of sample code, tips etc...

Prevent others from duplicating your single-page application (SPA)

What's the best way to protect a SPA+REST app built with one of the trending frameworks (backbone, angular, ember, etc.) from being replicated?
In a general environment, anyone can copy all the assets, modify the AJAX endpoint, and replicate the API (which in basic CRUD cases is easy) to have a fully functional copy of your app. Minification and obfuscation can help, but don't completely resolve the problem.
You can't
You can't prevent this from happening. Your front-end is served directly on the client, and can be copied and/or altered. The same goes for the assets.
Back-end
However, in practice you rarely need to worry about this. In almost all cases, the real business value of a web application lies in the back-end. This is where all the core business logic should go, where your awesome algorithms are, and where your security is applied. And more importantly, this is where all the (valuable) data is stored.
The value isn't in the code
A front-end is just an interface to your application. Do not worry about people 'stealing your awesome front-end code'. Your code is very likely not that special, and there's nothing bad about developers learning from looking at it. A good coder can probably reproduce your functionality without ever having seen your code anyway. And if someone blatantly copies your front-end code and reuses it, they are in violation of your rights as owner. They will not be able to just launch a competing product that runs with YOUR code base under the hood, and get away with it. More importantly, you've already established your product on the market, so you have an advantage that is very hard to beat.
Just let it go
Let go of trying to protect your code. It cannot be done. And neither is it necessary. A lot of companies have made a lot of money off of open-source products. The real value does not (just) lie in the source code, especially not on front-end source code.
Disclaimer: In case this comes across as me not appreciating front-end code: I am a full-time front-end developer and architect.

Is it a bad idea to use backbonejs as a paid service site?

I'm working on a Software as a Service site in which we will use Backbone primarily, and what I'm noticing is that most of the application logic lives in Backbone, while we use Ruby mostly as just a session controller and a bridge to the database. So our site is very susceptible to being copied (just a matter of copying the JS files...).
I know this may be a dumb question, but is there any way I can avoid this, or is having a client-side-heavy application like this bad for this type of product?
I'm not sure how I can secure this site structure at this point.
Sure it can be copied; that is a risk you take with JavaScript. You have the same problem with your markup and your CSS as well, but I'd say you rarely see someone stealing them anyway. There is probably more to your service than just your code (your design, your copy, your business model, your customer support). Even if they did copy your code, you will probably be able to deliver a better service than them anyway, since you are devoted to your product, which they clearly are not.
Another way of looking at the whole thing is to see it as the beauty of web development. You are free to open up the code of any web page and learn from it.
If you still want to "protect" your code, your best shot is probably to use something like UglifyJS or similar to minify and obfuscate it. Sure, the "thief" could then use a prettifier to get indentation etc. back, but the code will still be obscure and practically impossible to maintain. So it would probably not be worth the effort of stealing it in the long run.
Protecting your JavaScript libraries is hard because you let your clients download them. The best thing you can do to protect them is to run an obfuscation and minification tool on them before you deploy them into production.
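For example, a build step using the uglify-js Node package might look like this (a sketch; the file names are placeholders):

// Minify-before-deploy step with the uglify-js package.
// "app.js" and "app.min.js" are placeholder file names.
var UglifyJS = require("uglify-js");
var fs = require("fs");

var source = fs.readFileSync("app.js", "utf8");
var result = UglifyJS.minify(source, {
    mangle: true,    // rename local variables to short, meaningless names
    compress: true   // apply compression transforms
});

if (result.error) throw result.error;
fs.writeFileSync("app.min.js", result.code);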

How to protect a site from API outages?

Watching the effects of today's Google outage across the web got me thinking about how to prevent this in the future. This may be a stupid question, but is there a good way to include external APIs (e.g., Google's AJAX libraries) so that if the API is unavailable, the including page can still soldier on without it? Is it a bad idea to rely on libraries hosted on an external server in general?
You can't avoid using a script tag to load cross-domain JavaScript files (and if the host goes down, the page will hang until the request times out). However, in your code, check that the API object actually exists before using it, to avoid errors:
E.g. instead of:
<script type="text/javascript">
    google.load("maps", "2");
    // Use Maps API
</script>
use:
<script type="text/javascript">
    // typeof avoids a ReferenceError when the external script never loaded
    if (typeof google !== "undefined" && google.load) {
        google.load("maps", "2");
        // Use Maps API
    } else {
        // Fallback
    }
</script>
I don't think rare outages are worth rejecting an external API wholesale.
You will want to design your application to degrade gracefully (as others have stated), and there is actually a design pattern that can be useful in doing so. The Proxy Pattern, when implemented correctly, can be used as a gatekeeper to check whether a service is available (among many other uses) and return to the application either the live data, cached data, or a notice that the service is not available.
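A rough sketch of that idea (the service callback and cache here are illustrative placeholders):

// Proxy that guards a remote API: returns live data when the service responds,
// and falls back to the last cached result when it does not. Names are illustrative.
function ApiProxy(fetchFromService) {
    var cache = null;

    this.getData = function (onResult) {
        fetchFromService(function (err, data) {
            if (!err && data) {
                cache = data;     // remember the last good response
                onResult(data);
            } else if (cache) {
                onResult(cache);  // degrade to stale-but-usable data
            } else {
                onResult(null);   // tell the app the service is unavailable
            }
        });
    };
}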
The best general answer I can give is: degrade beautifully and gracefully, and avoid surfacing errors. If the service can become unavailable, expect that and do the best job you can.
But I don't think this is a question that can be answered generically. It depends on what your site does, what external libraries/APIs you are using, etc.
You could do some sort of caching to still serve up pages with older data. If allowed, you could run the API engine on your own server. Or you could just throw up status messages to users.
It's not a bad idea to rely on external APIs, but one of the major drawbacks is that you have little control over it. If it goes away? Welcome to a big problem. Outages? Not much you can do but wait.
I think it's a great idea to use external libraries, since it saves bandwidth for me (read: $$). But it's pretty easy to protect against this kind of API outage: keep a copy on your server, and in your JavaScript check whether the API has been loaded successfully. If not, load the one on your server.
I was thinking about jQuery and YUI here. The other guys are right about the problems when using actual services like mapping.
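The classic fallback for a library like jQuery looks like this (the CDN URL and local path are just examples):

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script>
    // If the CDN failed, window.jQuery is undefined: load the local copy instead.
    window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>');
</script>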
One possibility for mitigating the problem (will only work if your site is dynamically generated):
Set up a cronjob that runs every 10 minutes / hour / whatever, depending how much you care. Have it attempt to download the external file(s) that you are including, one attempt for each external host that you depend on. Have it set a flag in the database that represents whether each individual external host is currently available.
When your pages are being generated, check the external-host flags, and print the source attribute either pointing to the external host if it's up, or a local copy if it's down.
For bonus points, have the successfully downloaded file from the cronjob become the local copy. Then when one does go down, your local copy represents the most-current version from the external host anyway.
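Sketched in Node (purely illustrative; the hosts and file path are assumptions, and a database flag would work the same way as the JSON file used here):

// Cron-style probe: check each external host and record an up/down flag.
var https = require("https");
var fs = require("fs");

var hosts = ["ajax.googleapis.com", "yui.yahooapis.com"];  // example hosts
var flags = {};
var pending = hosts.length;

hosts.forEach(function (host) {
    var settled = false;
    function record(up) {
        if (settled) return;
        settled = true;
        flags[host] = up;
        if (--pending === 0) fs.writeFileSync("host-flags.json", JSON.stringify(flags));
    }
    var req = https.get({ host: host, path: "/", timeout: 5000 }, function (res) {
        record(res.statusCode < 500);
        res.resume();  // drain the response
    });
    req.on("error", function () { record(false); });
    req.on("timeout", function () { req.destroy(); });  // destroy triggers "error"
});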
A lot of the time you need to access third-party libraries over the web. The question you need to ask yourself is how much you need this, and whether you can cache any of it.
If your uptime needs to be as close to 100% as possible then maybe you should look at how much you rely on these third parties.
If all you are obtaining is the weather once an hour, then you can probably cache that so that things carry on regardless. If you are asking a third party for data that is only valid for that millisecond, then you probably need error handling to cover the fact that it may not be there.
The answer to the question is entirely based upon the specifics of your situation.
It'd certainly be fairly easy to include a switch in your application that toggles between Google and your local web server for serving YUI, jQuery, or a similar library, so that you can change provider quickly.
