There's an awful lot on the web these days about how important it is to minify your JavaScript. Speed is all that matters.
But doesn't minification work against the openness of Open Source?
One of the great things about JS (as opposed to Flash and the back-end) is that the source code is right there, available to be viewed by other developers who come along and think "Hey, that looks good, I wonder how they did that". The JS source code is available for everyone to see, so developers can learn from it, adapt it, and use similar JS on their own projects.
Minifying JS makes it unreadable. It stops the external developer from being able to read the code, and so cancels out horizontal sharing and learning.
Obviously there will be some who wish to minify their JS for the express purpose of attempting to hold on to their intellectual property. It's always a shame when people undermine the creativity of the open-source community, but it's somewhat understandable, and certainly not going to stop.
But for the rest of us developers - the people who use open-source every day of our lives - JS minification gets in our way. It makes us unable to take advantage of the openness of the web. It closes down the possibility of creative sharing.
I'm all for some things being minified - libraries, plugins, etc. (and maybe when serving JS to mobile). But for the custom-built code that makes your individual website individual, minifying your code is really not that necessary. Most of the sites on the web probably have less than 20KB of custom JS code; minifying that might cut it roughly in half, which at broadband speeds saves only a few ms of download time. Do a few ms really compare with the benefit of keeping JS code open, readable and available for others?
For sites with more JS, maybe we could start to develop an open-source-based standard, so that developers can type in a slightly different URL in order to be served the unminified code. If the minified code is at domain.com/script.min.js, let's make the unminified always available at domain.com/script.js or /script.full.js. Or are there other suggestions?
I can't really find anything else on the web talking about this issue. Everything is on the other side - pushing minification. And that alarms me. It makes me think that, as developers, we've allowed ourselves to sink into an unquestioned ideology of speed, regardless of other factors. And probably, because of the nature of ideology, some of you reading this will immediately want to dismiss it and argue against it. But think just a little bit longer - is the tiny speed benefit really worth the loss of open-source creativity? I don't believe it is.
So I guess my question is, where's the debate about open-source JavaScripting?
I'm pretty sure most — if not all — open source JavaScript libraries that offer minified versions also offer the original sources for developers to work on. It's just like how open source programs that distribute compiled binaries for general use also distribute their original sources to the public.
If you're referring to custom scripts made on a per-project basis specifically for a certain project, those scripts are not open source by default unless the author specifically cites/includes a FOSS license notice. To that end, I'm not obliged to provide an unminified version of my custom code unless I intend to distribute it freely and license it as such.
If the javascript is meant to be open source then you will also be able to find the un-minified version. For example, jQuery:
http://docs.jquery.com/Downloading_jQuery
There are both "minified" and "uncompressed" files for download.
If you find a javascript file which claims to be open source and does not have the uncompressed file available, then a mistake has been made.
Because there's no debate; I haven't seen many (any?) FOSS JS libs that don't have a non-minified version.
Even if there were, FOSS doesn't mean readable; even non-minified code can be completely illegible.
One of the great things about JS (as opposed to Flash and the back-end) is that the source code is right there, available to be viewed by other developers who come along and think "Hey, that looks good, I wonder how they did that".
I don't think we really want to encourage the practice of learning Javascript from the source of websites stumbled upon randomly. If you want to learn Javascript, it's much better to learn from an actual open source project that's been documented, tested and written with care.
99% of the time, if I don't open-source a piece of JS, it's not because of intellectual property concerns. It's because it's a quick hack - not suitable for community consumption.
Most of the sites on the web probably have less than 20KB of custom JS code, and the benefit of minifying that really is minimal.
Whether the saving is 2kb (which still makes a difference, incidentally) or 2mb, minification is a best practice, and should be instilled in developers from the get-go.
I would appreciate your opinions. I have been put in charge of redeveloping a major site that gets quite a bit of traffic. For the past few months, I have been using Backbone.js to develop applications. I have been researching for the last couple of weeks whether Backbone would be a good fit for the redevelopment of the new site.
My initial concern was SEO. I found a great post here that talks about progressive enhancement, and a bunch of Stack Overflow questions that have helped too. I can't seem to shake the feeling that building a static site and enhancing it with Backbone is quite a feat and will take much more time.
Now my question is: have we not passed the stage where we have to build sites that work with javascript disabled? Is it essential that our site still be functional for screen readers, etc.?
My idea was to serve the relevant SEO meta information from the server into my main app.html file, so search engines will still be able to crawl the different URLs. The Backbone app will be launched from whatever URL you visit that is relevant to the app.
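A minimal sketch of that idea, assuming a Node/Express server and a template engine (the route table and field names here are purely illustrative):

```javascript
// Sketch: serve the same Backbone shell for every URL, but inject
// per-URL <title>/<meta> values server-side so crawlers see them.
var express = require('express');
var app = express();

app.set('view engine', 'ejs'); // assumes a views/app.ejs template

// Hypothetical lookup table of crawlable metadata per app route.
var seoData = {
  '/shows':  { title: 'Browse Shows',  description: 'All shows on the site.' },
  '/movies': { title: 'Browse Movies', description: 'All movies on the site.' }
};

app.get('*', function (req, res) {
  var meta = seoData[req.path] || { title: 'My App', description: '' };
  // The template writes these into <title> and <meta name="description">.
  res.render('app', meta);
});

app.listen(3000);
```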
I have just visited the new hulu.com, and I can't seem to come up with a reason not to redevelop the website into a Backbone application. Most, if not all, websites I have visited will not function without JS. Go to hulu.com with JS disabled and you will see what I mean. So, in closing: is it safe to build a website that will not function without JS, and will the above suffice for SEO?
Thank you
I think there will be a lot of opinions on this. Here's mine.
As a default mindset I always find backward compatibility and graceful fallback important. I normally believe a site should be able to fulfill its main purpose: delivering content (content sells).
However, what if the purpose (aka the content) is delivering some kind of functionality, like an online calculator or drawing application? Then the user would already need and expect things like javascript to be enabled. In those cases I'll happily make design/layout things easier on me, using javascript. Think of a site like jsfiddle: who would care if this site didn't display its UI properly because javascript was disabled? Nobody.
As to SEO: I think there are a lot of things that influence this. If you sell apples and you own the domain apples.com, you're pretty much set anyway. Again, content sells; that is how most engines try to index.
Apart from that, in this (horrible) day and internet age, the most popular search engines will both filter and rank the search results per user, so if one wants to optimize a site for the search engine, then whose personal bubble (search results) do you optimize for?
I have more faith in something that was semantically coded, is maintainable, and has a pretty stable foreseeable future (instead of having to rebuild the same thing over and over again, every 6 months or so). Put more simply: make the core/base 'simple' enough to 'always' be rendered in a useful way, and then add the spice using javascript and CSS edge technology to flavor the content.
Have you looked into node.js at all? Since you're porting the view rendering to javascript anyway, it would be a little friendlier to have more components speaking the same language. Plus, the asynchronous processing model relieves a lot of the server stress that threaded processes usually cause. Threaded processes spend a lot of time (and power) waiting to execute. But in javascript, folks usually set up callback methods. So instead of waiting for the previous process to finish, node just leaves behind a callback method to be executed when needed; meanwhile the rest of the application is still going full speed ahead.
node is really light, too. You can use it alongside other server-side technologies and it won't take up much space. It has some pretty powerful features, but, personally, I find it best for view rendering (it's javascript, after all). It also makes setting up servers and routing really easy - see the sketch below. So setting up the stuff you mention in your fourth paragraph would be a cinch.
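A minimal sketch of that callback style, using only Node's core modules (the file name and port are illustrative):

```javascript
// Sketch: a tiny Node server. Nothing blocks; each request and each
// disk read is handled by a callback while the server keeps going.
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  if (req.url === '/') {
    // Non-blocking read: node registers the callback and moves on.
    fs.readFile('app.html', function (err, html) {
      if (err) { res.writeHead(500); return res.end('error'); }
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(html);
    });
  } else {
    res.writeHead(404);
    res.end('not found');
  }
}).listen(3000);
```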
Anyways, that's my 2 cents.
I often have the jQuery library included on my page, roughly 75 KB or so. Lately, I have been looking into adding more javascript libraries/plugins I found online, which would add another 25 KB. In doing so, I'm concerned that the added size will increase the loading time for my site. Is there a 'rule of thumb' upper size limit I should be considering so I don't add any more unneeded loading time?
My rule has always been, use only what you need.
If you can achieve the goals of your application without something, don't use it. If an important feature set relies on a heavy library, use it.
Of course, no matter what, always make sure your JS files are gzip-compressed (or equivalent) by your server, and use minified files as well.
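As a hedged illustration of the gzip point, assuming a Node/Express stack and the third-party compression middleware (on Apache or nginx you would enable the equivalent module in the server config instead):

```javascript
// Sketch: gzip every compressible response from an Express app.
var express = require('express');
var compression = require('compression'); // npm install compression

var app = express();
app.use(compression());             // gzip/deflate responses
app.use(express.static('public'));  // serve your minified JS from ./public
app.listen(3000);
```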
Finally, it is important to keep your audience in mind. Good usage of analytics will reveal whether or not your users are tolerating your page load times.
No general rules I could suggest other than to keep it as small as is possible and don't link in libraries you don't intend to use.
If you are using common libraries, link your libraries from a common third party like the Google Libraries API. It is likely that many of your end users may already have the library versions you link from there cached in their browsers, improving their load times.
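For example (a sketch; the jQuery version shown is illustrative):

```html
<!-- Load jQuery from Google's CDN instead of your own server. -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
```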
There is no limit to the size (you could make it 1 GB if you wanted), but generally I'd keep it as small as possible.
Some tips to optimize scripts:
Minify - This means shortening variable names, removing whitespace, and so on (see the toy example after this list). Dean Edwards' /packer/ is an example. I'd also suggest the Google Closure Compiler, but I'd reserve it for the post-production phase since it will alter the code structure.
Merge - Combine your scripts so that they are contained in, at the very least, only one file. This saves you a ton of HTTP requests later.
Use only what you need - If, for example, you only need a few parts of jQuery, there is an option to roll your own jQuery build (somewhere in the docs). Another example of this is jQuery UI, where you have the option to download only selected UI features.
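As a toy illustration of the minify step (hand-minified here, so real tool output will differ slightly):

```javascript
// Before minification:
function calculateTotalPrice(itemPrice, quantity, taxRate) {
  var subtotal = itemPrice * quantity;
  return subtotal + subtotal * taxRate;
}

// After minification: whitespace stripped, locals renamed. Note the
// public function name survives; only internals are shortened.
function calculateTotalPrice(a,b,c){var d=a*b;return d+d*c}
```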
I'm not so sure about a 'rule of thumb' but you should certainly be asking yourself the following questions:
Are your scripts minified?
Are third-party scripts being loaded from external hosts?
Can these be linked from a CDN?
For example you're much better off letting someone else host a library like jQuery, as that lets your own server deal with only serving your content.
But most of all include only the absolute minimum you need for the functionality of your site.
It is difficult to make an estimation, since we do not know what functionality your library offers. However, you can compare it with jQuery: it offers a great deal of cross-browser functionality in a roughly 30 KB file.
Every file you add to the page will take some time to transfer. That time can be reduced by caching, minification, CDNs, etc. There are also approaches that bundle all JavaScript files into one, which significantly improves performance by reducing the number of requests.
I already spent some time developing small projects with the GWT and I recently discovered Script#.
Now I am curious about how mature this toolkit is.
I am especially interested in the opinion of someone who has tried both GWT and Script# and is therefore able to compare the two.
How mature is Script#?
Is it true that it is maintained by only one guy?
Where does it lack functionality when compared to the GWT?
Does it have advantages over the GWT?
Personal opinion on Script#?
Thanks for your time.
While this is a duplicate question, I think the answer in the previous question doesn't touch on some important points with regards to GWT.
GWT is a Java to Javascript compiler with a heavy emphasis on optimizing the generated Javascript beyond anything that's possible to do by hand. The generated JS is also browser-specific, so WebKit browsers don't download IE hacks. The generated files are also cacheable because the file name is the MD5 sum of the script contents, so you could cache them forever. This means a user only has to download the code once until it changes. Script#, from my quick skimming of the website, only seems to translate C# to Javascript.
GWT offers advanced features like developer guided code splitting, ClientBundle for bundling resources and CssResource for conditional CSS, etc. Combined with UiBinder, developing a site that has 2 round trips for application start up is possible to do and not very hard. I don't think Script# has any of this, and most JS libraries don't either.
GWT has dev mode for a JS-like development environment (change code, refresh the browser, see the changes); I'm not sure if Script# has something like that.
I could keep going, but I think I'll stop with those. When you combine this with the other answers, GWT is pretty compelling.
I need to write a GUI-related javascript library. It will give my website a bit of an edge (in terms of functionality I can offer) - up until my competitors play with it long enough to figure out how to write it by themselves (or finally hack the downloaded script). I can accept the fact that it will be emulated over time - that's par for the course (it's part of business). I just want to have a few months' breathing space where people go "Wow - how the f*** did they do that?" - which gives me a few months of free publicity and some momentum to move onto other things.
To be clear, I am not even concerned about hard-core hackers who will still hack the source - that's a losing battle not worth fighting (and in any case I accept that my code is not "so precious"). However, what I cannot bear is the idea of effectively, simply handing over all the hard work that went into the library to my competitors, by using plain javascript that anyone can download and use. If someone is going to use what I have worked on, then I sure as hell don't want to simply hand it over to them - I want them to work hard at decoding it. If they can decode it, they deserve to have the code (they'll most likely find out they could have written better code themselves - they just didn't have the business sense to put all the [plain vanilla] components in that particular order). So I'm not claiming that no one could have written this (which would be a preposterous claim in any case) - but rather, what I am saying is that no one (up to now) has made the functionality I am talking about available to this particular industry, and I (thinking as an entrepreneur rather than a geek/coder) want to milk it for all it's worth while it lasts, i.e. until it (inevitably) gets hacked.
It is an established fact that not one website in the industry I am "attacking" has this functionality, so the value of such a library is undeniable and is not up for discussion (i.e. that's not what I'm asking here).
What I am seeking to find out are the pros and cons of obfuscating a javascript library, so that I can come to a final decision.
Two of my biggest concerns are debugging, and subtle errors that may be introduced by the obfuscator.
I would like to know:
How can I manage those risks (being able to debug faulty code, ensuring/minimizing obfuscation errors)?
Are there any good-quality, industry-standard obfuscators you can recommend (preferably something you use yourself)?
What are your experiences of using obfuscated code in a production environment?
If they can decode it, they deserve to have the code (they'll most likely find out they could have written better code themselves - they just didn't have the business sense to put all the [plain vanilla] components in that particular order).
So really, you're trying to solve a business issue with technical measures.
Anybody worth his salt as a Javascript programmer should be able to recreate whatever you do pretty easily by just looking at the product itself, no code needed. It's not like you're inventing some new magical thing never seen before, you're just putting pieces together in a new way, as you admit yourself. It's just Javascript.
Even if you obfuscate the script, it'll still run as-is, competitors could just take it and run with it. A few customizations shouldn't be too hard even with obfuscated code.
In your niche business, you'll probably notice pretty quickly if somebody "stole" your script. If that happens, it's a legal issue. If your competitors want to be in the clear legally, they'll have to rewrite the script from scratch anyway, which will automatically buy you some time.
If your competitors are not technically able to copy your product without outright stealing the code, it won't make a difference whether the code is in the clear or obfuscated.
While you can go down the long, perilous road of obfuscators, you generally don't see them used on real, production applications for the simple reason that they don't really do much. You'll notice that Google apps, which is really a whole heap of proprietary and very valuable JavaScript when you get down to it, is only really minimized and not obfuscated, though the way minimizers work now, they are as good as obfuscated. You really need to know what you're doing to extract the meaning from them, but the determined ones will succeed.
The other problem is that obfuscated code must work, and if it works, people can just rip it wholesale, not understanding much of it, and use it as they see fit in that form. Sure, they can't modify it directly, but it isn't hard to layer on some patches that re-implement parts they don't like without having to get in too deep. That is simply the nature of JavaScript.
The reason Google and the like aren't suffering from a rash of cut-and-paste competitors is because the JavaScript is only part of the package. In order to have any degree of control over how and where these things are used, a large component needs to be server-based. The good news is you can leverage things like Node.js to make it fairly easy to split client and server code without having to re-implement parts in a completely different language.
What you might want to investigate is not so much obfuscating, but splitting up your application into parts that can be loaded on-demand from some kind of service, and as these parts can be highly inter-dependent and mostly non-functional without this core server, you can have a larger degree of control over when and where this library is used.
You can see elements of this in how Google is moving to a meta-library which simply serves as a loader for their other libraries. This is a step towards unifying the load calls for Google Apps, Google AdSense, Google Maps, Google Adwords and so forth.
If you wanted to be a little clever, you can be like Google Maps and add a poison pill to your JavaScript libraries as they are served dynamically, so that they only operate on a particular subdomain. This requires generating them on an as-needed basis, and while it can always be removed with sufficient expertise, it prevents wholesale copy-paste usage of your JavaScript files. Inserting a clever call that validates window.location is not hard, and finding all these instances in an aggressively minimized file would be especially infuriating and probably not worth the effort.
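A hedged sketch of what such a poison pill might look like (the domain, the check, and the failure mode are all illustrative):

```javascript
// Sketch: a domain lock. In practice this check would be buried deep
// inside the minified output rather than sitting at the top.
(function () {
  var allowed = /(^|\.)example\.com$/; // hypothetical allowed domain
  if (!allowed.test(window.location.hostname)) {
    // Fail quietly or subtly; here we just refuse to initialize.
    throw new Error('initialization failed');
  }
  // ... rest of the library ...
})();
```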
Javascript obfuscation facts:
No one can offer 100% crack-proof javascript obfuscation. This means that, with time and knowledge, every obfuscation can be "undone".
Minify != obfuscation: when you minify, your objective is to reduce code size. Minified code looks completely different and is much more complex to read (hint: jsbeautifier.com). Obfuscation has a completely different objective: to protect the code. The transformations used try to protect obfuscated code from debugging and eavesdropping. Obfuscation can even produce an even bigger version of the original code, which is completely contrary to the objectives of minification.
Obfuscation != encryption - This one is obvious, but it's a common mistake people make.
Obfuscation should make debugging much, much harder; it's one of its objectives. So if it is done correctly, you can expect to lose a lot of time trying to debug obfuscated code. That said, if it is done correctly, the introduction of new errors is a rare issue, and you can easily find out whether it is an obfuscation error by temporarily replacing the code with non-obfuscated code.
Obfuscation is NOT a waste of time - It's a tool. If used correctly, you can make others waste lots of time ;)
Javascript obfuscation fiction: ( I will skip this section ;) )
Answer to Q2 - Suggested obfuscation tools:
For an extensive list of javascript obfuscators, see malwareguru.org. My personal choice is jscrambler.com.
Answer to Q3 - experiences of using obfuscated code
To date, no new bugs introduced by obfuscation.
Much better client retention. They must come to the source to get the source ;)
Occasional false positives reported by some anti-virus tools; any new code can be tested with a tool like Virustotal.com before deployment.
Standard answer to obfuscation questions: Is using an obfuscator enough to secure my JavaScript code?
IMO, it's a waste of time. If the competitors can understand your code in the clear (assuming it's anything over a few thousand lines...), they should have no trouble deobfuscating it.
How can I manage those risks (being able to debug faulty code, ensuring/minimizing obfuscation errors)?
Obfuscation will cause more bugs; you can manage them by spending the time to debug them. That cost falls on whoever wrote the obfuscation (be it you or someone else); ultimately it will just waste lots of time.
What are your experiences of using obfuscated code in a production environment?
Being completely bypassed by side channel attacks, replay attacks, etc.
Bugs.
Google's Closure Compiler obfuscates your code after you finish writing it. That is, write your code, run it through the compiler, and publish the optimized (and obfuscated) JS.
You do need to be careful if you're using external JS that interfaces with the lib, though, because the compiler changes the names of your objects, so you can't tell what is what.
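One common mitigation, sketched here, is to export the symbols that must stay stable; the compiler leaves quoted property names and string keys alone, so external code can keep calling them:

```javascript
// In Advanced mode, doWork and myPublicApi would normally be renamed,
// breaking any outside script that calls them by name.
function doWork(input) {
  return input * 2;
}

// Export via string keys: quoted names are not renamed by the compiler,
// so window.myPublicApi.doWork(21) keeps working after compilation.
window['myPublicApi'] = {
  'doWork': doWork
};
```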
Automatic full-code obfuscation is so far only available in the Closure Compiler's Advanced mode.
Code compiled with Closure Advanced mode is almost impossible to reverse-engineer, even after passing through a beautifier, as the entire code base (including the library) is obfuscated. It is also 25% smaller on average.
JavaScript code that is merely minified (YUI Compressor, Uglify etc.) is easy to reverse-engineer after passing through a beautifier.
If you use a JavaScript library, consider Dojo Toolkit which is compatible (after minor modifications) with the Closure Compiler's Advanced mode compilation.
http://dojo-toolkit.33424.n3.nabble.com/file/n2636749/Using_the_Dojo_Toolkit_with_the_Closure_Compiler.pdf?by-user=t
You could adopt an open-source business model and license your scripts with the GPL or Creative Commons BY-NC-ND or similar
While obfuscation in general is a bad thing, IMHO, with Javascript the story is a little different. The idea is not to obfuscate the Javascript itself but to produce shorter code length (bandwidth is expensive, and first-time users may just be pissed off waiting for your Javascript to load the first time). Initially called minification (with programs such as minify), it has evolved quite a bit, and now full compilers are available, such as the YUI Compressor and the Google Closure Compiler. Such compilers perform static checking (which is a good thing, but only if you follow the rules of the compiler), minification (replacing that long variable name with 'ab', for example), and many other optimization techniques. At the end, what you get is the best of both worlds: coding in non-compiled code, and deploying compiled (minified and obfuscated) code. Unfortunately, you would of course need to test it more extensively as well.
The truth is obfuscator or not, any programmer worth his salt could reproduce whatever it is you did in about as much time as it took you. If they stole what you did you could sue them. So bottom line from the business point of view is that you have, from the moment you publish, roughly the same amount of time it took you to implement your design until a competitor catches up. Period. That's all the head start you get. The rest is you innovating faster than your competitors and marketing it at least as well as they do.
Write your web site in Flash, or better yet in Silverlight. This will give your company an unmatched GUI that your competitors will be salivating over. But the compiled nature of Flash/.NET will not allow them to easily peek into your code. It's a win/win situation for you ;)
Coffeescript looks pretty cool. Has anyone used it? What are its Pros & Cons?
We've started to use CoffeeScript in our product - a non-public facing website which is basically an app for browsing certain kinds of data.
We use CoffeeScript as a command-line compiler (not on the server, which we'd eventually like to do).
PROS (for us):
It gets rid of a lot of needless clutter in javascript (e.g. braces, semicolons, some brackets) to the extent that the code is cleaner and easier to comprehend at a glance than javascript
20-30% fewer lines of code than javascript (to do exactly the same thing; see the sketch after this list)
CoffeeScript not only removes noise but adds keywords, classes, and features like heredocs to make coding cleaner and somewhat more enjoyable
Given the previous points, it is undoubtedly faster to code in CoffeeScript once you learn the ropes
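To make the first two points concrete, a small sketch - a couple of lines of CoffeeScript (in the comments) next to JavaScript close to what the compiler emits (simplified slightly):

```javascript
// CoffeeScript source:
//   square = (x) -> x * x
//   squares = (square n for n in [1..5])

// Roughly the JavaScript the compiler produces:
var n, square, squares;

square = function(x) {
  return x * x;
};

squares = (function() {
  var results = [];
  for (n = 1; n <= 5; n++) {
    results.push(square(n));
  }
  return results;
})();
```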
CONS
When using the command-line compiler: to debug, you're looking at different code when solving the problem (javascript) than when writing the fix (coffeescript). However, somewhat unbelievably, our CoffeeScript is so awesome we've never needed to debug it!
Importantly, we can turn back at any time. Our coffeescript compiler is just producing readable javascript, so if anyone changes their mind or can't figure something out, we can just drop back to using the javascript that coffeescript produced - and keep coding.
We use coffeescript for all of the javascript in BusyConf. A large portion of BusyConf is a client-side application that runs in browsers, including support for offline mode.
All of our coffeescript code is fully tested. The tests themselves are written in coffeescript, and use the QUnit framework (which is written in javascript). We also wrote an extension to the QUnit framework that makes the tests nicer. The QUnit extension is written in CoffeeScript. Our application has a mobile version which is written in CoffeeScript, and it uses the Sencha Touch framework (which is written in javascript).
The takeaway from that is that you can freely intermix javascript dependencies in your application, but all of the code you write (your application code, tests, etc.) can (and should!) be coffeescript.
Almost a year later, it's worth posting some updates:
Ruby on Rails 3.1 is incorporating official CoffeeScript support, which means it's going to see far more real-world use. I gave a talk at RailsConf last month, where most of the attendees hadn't heard of CoffeeScript before and—given dhh's strong endorsement—were eager to get into it.
There's a book on CoffeeScript, currently in eBook and soon to be in print from The Pragmatic Bookshelf. It's called CoffeeScript: Accelerated JavaScript Development, and it's by yours truly. It's based on CoffeeScript 1.1.1.
The language has actually changed very little in the six months between 1.0 and 1.1.1; nearly all of the changes qualify as "bugfixes." I had to make very few tweaks to the code in the book for the transition from 1.0.1 to 1.1.1. However, I'm sure the language will see more significant changes in the future.
The most definitive list of CoffeeScript projects is on the CoffeeScript wiki's In the Wild page.
I'd say that most of the production use of CoffeeScript so far is in conjunction with Appcelerator to create iPhone/Android apps. (Wynn Netherland of The Changelog blurbed my book by describing CoffeeScript as "my secret weapon for iOS, Android, and WebOS mobile development"), but there's going to be a lot more use in production Rails apps—and, I hope, elsewhere—in the coming months.
Coffeescript was used in the Ars Technica reader for iPad http://arstechnica.com/apple/news/2010/11/introducing-the-ars-technica-reader-for-ipad.ars
I really love Coffeescript these days. Essentially the entire HotelTonight iPhone application is written in it (using Appcelerator Titanium, which lets you write "native" apps in JavaScript - they are not web apps, like, say, PhoneGap apps). I chose to use Coffeescript in this case because it makes organizing and maintaining a large amount of JS a lot easier. I also find it simply a lot more pleasurable to write code with Coffeescript (vs. JavaScript). We also use Coffeescript for the JS in our Rails app, but this is an incredibly minor/small amount of code in relation to the entire phone app.
The pros mostly have to do with it just being a nicer syntax, but also that it standardizes an OO mechanism and adds some nice extras (list comprehensions, some scoping conveniences, etc.).
The cons are almost zero for me. The primary one is that it's an extra layer to debug. You will need to look at the generated JS (which is VERY readable and nice), and then map that to your Coffeescript code. For us, this hasn't been an issue at all, but YMMV.
In the end, my take is, there is zero risk in terms of using it on a production app, so, don't let that be a blocker. Then, go try it out. Write some code with it, compare that to what you'd write in JS, look at the generated code to see if you are comfy with being able to read that for debugging needs. Also, hang out in the #coffeescript IRC, people are good there. And finally, see how it would integrate with your app, e.g. what's your "build" process (e.g. for Rails, try Barista, for something standalone, just use the included "coffee -w", etc.).
Coffeescript really just makes writing JS easier. You end up with cleaner, more efficient code.
That being said, you still can only do whatever you can do in vanilla JS. Once you use coffeescript enough, it does become a lot easier to write (good) JS.
So if you haven't used JS a ton, I'd suggest learning coffeescript instead. You'll get better, cleaner, less buggy code. If you're already really fluent in JS, it might not be a good idea to start using coffeescript on a "real" app.
(Also, coffeescript does irk me a bit in that it seems to encourage rather "floofy" code. I don't know if it's a good thing or a bad thing, but it seems an extreme case of TMTOWTDI)
Note that although there is a compiler, you don't get static checking due to JavaScript's dynamic nature. As written in the FAQ:
Static Analysis
CoffeeScript uses a straight source-to-source compiler. No type checking is performed, and we can't work out if a variable even exists or not. This means that we can't implement features that other languages can build in natively without costly runtime checks. As a result, any feature which relies on this kind of analysis won't be considered.
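A hedged illustration of the consequence: a misspelled variable compiles without complaint and only fails (if at all) at runtime:

```javascript
// CoffeeScript source -- note the typo, which the compiler accepts:
//   userName = "alice"
//   greeting = "Hello, " + userNmae    # meant userName
//
// The emitted JavaScript looks roughly like this; since the compiler
// cannot know whether `userNmae` exists elsewhere, the mistake only
// surfaces as a ReferenceError when the code runs:
var greeting, userName;

userName = "alice";

greeting = "Hello, " + userNmae; // ReferenceError at runtime
```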
IDE support is less mature than that of JavaScript (Cloud9 has syntax highlight support, but Eclipse JSDT has refactorings and more): https://stackoverflow.com/questions/4084167/ide-or-its-add-in-for-coffescript-programming