I want to improve my CoffeeScript coding style. When I program in Scala, I can write a module in an hour or two, run it, and have only a few minor bugs that I can quickly identify and fix.
In CoffeeScript, I spend about the same time up front, but I end up with a staggering number of small bugs that a static type checker would have caught, and I end up having to compile, reload the browser, step through some code, add some breakpoints, and so on. It's an infuriating experience and takes significantly longer.
It's much harder to abstract and encapsulate functionality because of the lack of interfaces and many other OO features.
Are there design patterns that replace the encapsulation/abstraction generally provided by OO? Or is there a primer or guide on how to think in a more CoffeeScript-y way (or how to solve problems using a prototypal approach)?
What have you done to become more productive in CoffeeScript (or JavaScript, or perhaps any dynamically typed language)?
If you're coming from a statically-typed, class-centric language like Java or Scala, learning JavaScript/CoffeeScript is going to be a challenge. The compiler doesn't help you nearly as much, which means that it takes you minutes to discover small mistakes instead of seconds.
If that's your major bottleneck, then I'd suggest embracing a more test-driven coding methodology. Use a library like QUnit to write small tests for each piece of functionality you develop. Used properly, this style gives you much of the safety of a static compiler without compromising the flexibility of a dynamic language.
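For example, a minimal QUnit sketch (formatPrice is a hypothetical helper under development, not part of any library, and is assumed to validate its input):

    QUnit.test("formatPrice renders cents as dollars", function (assert) {
      assert.equal(formatPrice(1999), "$19.99");
      assert.equal(formatPrice(0), "$0.00");
      // the sort of type confusion a static checker would have caught,
      // assuming formatPrice throws on non-numeric input:
      assert.throws(function () { formatPrice("1999"); });
    });

Run on every save, a suite like this catches the argument and typo mistakes the Scala compiler used to catch for you.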
Don't go straight to CoffeeScript. Learn the core concepts of prototypes and JavaScript OO first. IMO you can learn both at the same time, but you will benefit much more if you get vanilla JavaScript down first. In my personal experience, CoffeeScript's syntactic sugar for classes can be a trap if you don't understand prototypal inheritance (it's easy to get stuck on a bug).
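To see why the sugar can bite, it helps to know roughly what a CoffeeScript class compiles down to; here is a simplified sketch in plain JavaScript (the real compiler output differs in its details):

    function Animal(name) {
      this.name = name;
    }
    Animal.prototype.speak = function () {
      return this.name + " makes a sound";
    };

    function Dog(name) {
      Animal.call(this, name); // run the parent constructor
    }
    Dog.prototype = Object.create(Animal.prototype); // the prototypal link
    Dog.prototype.constructor = Dog;
    Dog.prototype.speak = function () {
      return this.name + " barks";
    };

    new Dog("Rex").speak(); // "Rex barks", found via the prototype chain

Once you can read this shape fluently, the bugs that leak through the class syntax stop being mysterious.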
CoffeeScript debugging is still not a completely solved matter in terms of tools; the only ways I know are to write tests (a pain when you're just starting) or to look at the generated code (at least for the more obscure bugs).
It was odd for me too; in my case coming from a C/C++ background.
What clicked for me is that you can reduce your iteration time significantly with a few tweaks of your work environment. The idea is to reduce it enough that you can write code in small chunks and test your code very very frequently.
On the lack of compile time checks: You'll get used to it. Like significant white space, the lack of compile time type checking just melts away after a few weeks. It's hard to say how exactly, but at least I can tell you that it did happen for me.
On the lack of interfaces: that's a tricky one. It would be nice to get a little more help in larger systems to remind you to implement entire interfaces. If you're finding that you really are losing a lot of time to that, you could write your own run time checks, and insert them where appropriate. E.g. if you register your objects with a central manager, that would be a good time to ensure that the objects qualify for the role they're being submitted to.
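A hedged sketch of what such a run-time check might look like (assertImplements and the manager are illustrative names, not an existing library):

    function assertImplements(obj, methodNames) {
      methodNames.forEach(function (name) {
        if (typeof obj[name] !== "function") {
          throw new Error("missing required method: " + name);
        }
      });
    }

    var manager = {
      renderers: [],
      register: function (renderer) {
        // fail fast at registration time, not deep inside a render loop
        assertImplements(renderer, ["draw", "resize", "destroy"]);
        this.renderers.push(renderer);
      }
    };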
In general, it's good to bear in mind that you have decent reflection abilities to hand.
On the lack of encapsulation: given that CoffeeScript implements a very nice class wrapper over the prototype scheme, I'm assuming you mean the lack of private variables? There are actually a number of ways you can hide details from clients if you feel the need to, and I do, usually to stop myself from shooting myself in the foot later. The key is usually to squirrel things away in closures.
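A minimal sketch of the closure approach (makeAccount is an illustrative example, not from any framework):

    function makeAccount(openingBalance) {
      var balance = openingBalance; // private: visible only to the closures below

      return {
        deposit: function (amount) { balance += amount; },
        getBalance: function () { return balance; }
      };
    }

    var acct = makeAccount(100);
    acct.deposit(50);
    acct.getBalance(); // 150
    acct.balance;      // undefined -- the variable itself is unreachable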
Also, have a look at Object.defineProperty (or the older Object.__defineGetter__). Getters and setters can help a lot in these situations.
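For instance, a property can validate writes without changing the calling code (a sketch):

    var temperature = { _celsius: 0 };

    Object.defineProperty(temperature, "celsius", {
      get: function () { return this._celsius; },
      set: function (value) {
        if (typeof value !== "number") {
          throw new TypeError("celsius must be a number");
        }
        this._celsius = value;
      }
    });

    temperature.celsius = 21; // routed through the setter
    temperature.celsius;      // 21, via the getter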
On reducing iteration time:
I was using the built-in file watcher in coffee to compile the scripts on change. Coupled with TextMate's ability to save all open files on losing focus, this meant that testing was a matter of switching from TextMate to Chrome/Firefox and hitting refresh. Quite fast.
On a node.js project, though, I've set up my views to compile and serve on the fly, so even the file watcher is superfluous. They're cached in release mode, but in debug mode they're always reloaded from disk and recompiled, and on encountering errors I just serve those up instead. So now, every few minutes, I switch to the browser and hit refresh, and either see my test running or the compiler errors.
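For the curious, a sketch of that debug-mode arrangement, assuming Express and the coffee-script npm package (the route and file paths are illustrative):

    var fs = require("fs");
    var express = require("express");
    var coffee = require("coffee-script");

    var app = express();

    app.get("/js/:name.js", function (req, res) {
      var source = fs.readFileSync("views/" + req.params.name + ".coffee", "utf8");
      try {
        // debug mode: recompile from disk on every request
        res.type("application/javascript").send(coffee.compile(source));
      } catch (err) {
        // on a compile error, serve the message as the script so it
        // shows up in the browser instead of failing silently
        res.send("document.body.textContent = " + JSON.stringify(err.message) + ";");
      }
    });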
This is a generic question. I've seen JavaScript on some websites that is obfuscated in such a way that standard deobfuscators (deobfuscatejavascript.com, jsnice.org and jsbeautifier.org) cannot easily reverse it.
I know it's practically impossible to prevent deobfuscation entirely; I just want to make it really tough for an attacker.
Please suggest some ways I can achieve this. Should I write my own obfuscator and then obfuscate its output with another online obfuscator? Would that beat the deobfuscators?
Thanks in advance.
P.S.: I tried Google Closure Compiler, UglifyJS, js-obfuscator and a bunch of other tools. None of them (used individually or in combination) can beat the deobfuscators.
Obfuscation can be accomplished at several levels of sophistication.
Most available obfuscators scramble (or shrink) identifiers and remove whitespace. Pretty-printing the code can restore nice indentation; sweat and lots of guessing can restore sensible identifier names. So people say this is weak obfuscation. They're right; sometimes it is enough.
[Encryption is not obfuscation; it is trivially reversed].
But one can obfuscate code in more complex ways. In particular, one can take advantage of the Turing tarpit and the fact that reasoning about the obfuscated program can be hard or impossible in practice. One way is to scramble the control flow and inject opaque control-flow predicates that are Turing-hard to reason about; you can construct such predicates in a variety of ways, for example by including tests based on artificially constructed pointer-aliasing problems (or array-subscripting problems, which are equivalent) of the form "*p==*q", where p and q are pointers computed from messy, complicated graph data structures.
Such obfuscated programs are much harder to reverse engineer because they build on problems that are Turing hard to solve.
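To make that concrete, here is a toy opaque predicate in JavaScript: for any integer n, n*n + n equals n(n+1) and is always even, so the second branch is dead code, but a reader or a naive analyzer has to prove that before simplifying the control flow. Real obfuscators build predicates from much harder problems, such as the pointer-aliasing constructions above.

    function step(n, secret) {
      if ((n * n + n) % 2 === 0) {
        return secret + n;          // always taken: n*n + n is even
      } else {
        return secret ^ 0xdeadbeef; // decoy branch, never executed
      }
    }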
Here's an example paper that talks about scrambling control flow. Here's a survey on control flow scrambling, including opaque predicates.
What OP wants is an obfuscator that operates at this more complex level. These are available for Java and C#, I believe, because building program analyzers to determine (and harness) control flow is relatively easy once you have a byte code representation of the program rather than just its text. They are not so available for other languages. Probably just a matter of time.
(Full disclosure: my company builds the simpler kind of obfuscators. We think about the fancier ones occasionally but get distracted by shiny objects a lot).
The public deobfuscators you listed use little more than a simple eval() followed by a beautifier to deobfuscate the code. This might need several runs. It works because the majority of obfuscators do their thing and then append a function that deobfuscates the result just enough to let the engine run it. In most cases that is a simple character replacement (a kind of Caesar cipher), so an eval() is enough to get some code back, which a beautifier then makes more or less readable.
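As a toy illustration of that pattern (the payload here is 'alert("hello");' with every character code shifted up by one):

    var payload = "bmfsu)#ifmmp#*<";

    function decode(s) {
      var out = "";
      for (var i = 0; i < s.length; i++) {
        out += String.fromCharCode(s.charCodeAt(i) - 1);
      }
      return out;
    }

    eval(decode(payload)); // runs alert("hello");

Anything of this shape is transparent to the public tools: they simply capture whatever reaches eval().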
To answer your question: you can make it tougher ("tougher" in the sense that just copy-and-pasting it into a deobfuscator no longer works) by using some kind of "encryption" with a key that the code fetches from the server after the first round of deobfuscation, over a relative path that the browser completes instead of a full path. That would require manual intervention. Build that path in a complicated, non-obvious way and you have a deterrent for the average script kiddie.
In general: you need something to deobfuscate the script that is not in the script itself.
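A sketch of that idea (decode and scrambledPayload stand in for whatever reversible scrambling you use; "k" is a hypothetical endpoint on your own server):

    var xhr = new XMLHttpRequest();
    xhr.open("GET", "k", true); // relative path: only resolves on your host
    xhr.onload = function () {
      // without the server-supplied key, the payload never decodes
      eval(decode(scrambledPayload, xhr.responseText));
    };
    xhr.send();

Pasted into an online deobfuscator, this file is inert, because the key never arrives.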
But beware: this only answers your question; that is, it makes it impossible to deobfuscate by simply copy-and-pasting into one of those public deobfuscators, and no more. See Ira's answer for the more complex stuff.
Please be aware of the reasons to obfuscate code:
hide malicious intent/content
hide stolen code
hide bad code
a pointy haired boss/investor
other (I know what that is, but I am too polite to say)
Now, what do the people think, if they see your obfuscated code? That your investor insisted on it to give you money to write that little browser game everyone loves so much?
JavaScript is interpreted from clear text by your browser. If a browser can do it, so can you. It's the nature of the beast. There are plenty of other programming languages out there that allow you to compile/black box before distribution. If you are hell-bent on protecting your intellectual property, compile the server side data providers that your JavaScript uses.
No JavaScript obfuscation or protection can claim to make it impossible to reverse a piece of code. That said, some tools offer very simple obfuscation that is easy to reverse, while others actually turn your JavaScript into something extremely hard and infeasible to reverse. The most advanced product I know that actually protects your code is Jscrambler. They have the strongest obfuscation techniques, and they add code locks and anti-debugging features that turn the process of retrieving your code into complete hell. I've used it to protect my apps and it works; it's worth checking out, IMO.
I'm curious what the view is on "things that compile into JavaScript", e.g. GWT, Script# and WebSharper and their like. These seem to be fairly niche components aimed at allowing folks to write JavaScript without writing JavaScript.
Personally I'm comfortable writing JavaScript (using jQuery/Prototype/ExtJS or some other such library) and view things like GWT as needless abstractions that may end up limiting what a developer needs to accomplish, or, at best, providing a very long-winded workaround. In some cases you still end up writing JavaScript anyway, e.g. JSNI.
Worse still if you don't know what's going on under the covers you run the risk of unintended consequences. E.g. how do you know GWT is creating closures and managing namespaces correctly?
I'm curious to hear others' opinions. Is this where web programming is headed?
Should JavaScript be avoided in favor of X? By all means!
I will start with a disclaimer: my answer is very biased as I am on the WebSharper developer team. The reason I am on this team in the first place is that I found myself a complete failure in writing pure JavaScript, and then suggested to my company that we try and write a compiler from our favorite language, F#, to JavaScript.
For me, JavaScript is the portable assembly of the web, fulfilling the same role as C does in the rest of the world. It is portable, widely used, and it will stay. But I do not want to write JavaScript, no more than I want to write assembly. The reasons that I do not want to use JavaScript as a language include:
1. There is no static analysis; it does not even check whether functions are called with the right number of arguments. Typos bite!
2. There is no (or only a very limited) concept of libraries, namespaces, modules, or classes, so every framework invents its own (a situation similar to that of R5RS Scheme).
3. The tooling (code editors, debuggers, profilers) is rather poor, mostly because of (1) and (2): JavaScript is not amenable to static analysis.
4. There is no (or only a very limited) standard library.
5. There are many rough edges and a bias toward mutation. JavaScript is a poorly designed language even within the untyped family (I prefer Scheme).
We are trying to address all of these issues in WebSharper. For example, WebSharper in Visual Studio has code completion, even when it exposes third-party JavaScript APIs like Ext JS. But whether we succeed or fail is not really the point. The point is that it is possible, and, I would hope, very desirable to address these issues.
"Worse still if you don't know what's going on under the covers you run the risk of unintended consequences. E.g. how do you know GWT is creating closures and managing namespaces correctly?"
This is just about writing the compiler the right way. WebSharper, for instance, maps F# lambdas to JavaScript lambdas in a 1-1 manner (in fact, it never introduces a lambda). I would perhaps accept your argument if you mentioned that, say, WebSharper is not yet mature and tested enough and therefore you are hesitant to trust it. But then GWT has been around for a while and should produce correct code.
The bottom line is that strongly typed languages are strictly better than untyped languages: you can easily write untyped code in them if you need to, but you also have the option of using the type checker, which is the programmer's spell checker. Why wouldn't you? A refusal to do so sounds a bit Luddite to me.
Although I don't personally favor one style over another, I don't think abstraction from JavaScript is the only benefit these frameworks bring to the table. Surely, in abstracting the entire language, some things that were previously possible become impossible, and vice versa. The decision to go with a framework such as GWT over writing vanilla JavaScript depends on many factors.
Making this a discussion of JavaScript vs language X is fruitless as each language has its strengths and weaknesses. Instead, do an objective cost-benefit analysis on what is to be gained or lost by going with such a framework, and that can only be done by you and not the SO community unfortunately.
The issue of not knowing what goes on under the hood applies to JavaScript just as much as it does to any translated source. How many people do you think know exactly what is going on in jQuery when they try a comparison such as $("p") == $("p") and get back false as a result? This is not a hypothetical situation, and there are several questions on SO regarding exactly this. It takes time to learn a language or framework, and given sufficient time, developers could just as well understand the compiled output of these frameworks.
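For reference, a sketch of what is actually going on there:

    $("p") == $("p");        // false: each call builds a brand-new wrapper object
    $("p")[0] === $("p")[0]; // true: the same underlying DOM element
    $("p").is($("p"));       // one idiomatic way to compare selections

Nothing is broken; == on objects compares identity, not contents, and that is a JavaScript rule rather than a jQuery one.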
Another related aspect is trust. We continuously build higher-level abstractions upon lower-level abstractions and rely on the fact that the lower-level stuff is supposed to work as expected. When was the last time you dug into the compiled binary of a C++ or Java program just to ensure it worked correctly? We don't, because we trust the compiler.
Moreover, when using such a framework, there is no shame in falling back to JavaScript using JSNI, for example. It's all about solving the problem in the best possible manner with the tools at hand. There is nothing sacred about JavaScript, or Java, or C#, or Ruby, etc. They are all tools for solving problems, and while it may be a barrier for you, it might be a real time-saver and advantageous to someone else.
As for where I think web programming is headed, there are many interesting trends that I think, or rather hope, will succeed, such as JavaScript on the server side. It solves a very real problem, for me at least, in that we can easily avoid code duplication in a web application. The same validations, logic, etc. can be shared on the client and server sides. It also allows writing a simple (de)serialization mechanism, so RPC or RMI communication becomes possible very easily. I mean, it would be really nice to be able to write:
account.debit(200);
on the client side, instead of:
$.ajax({
    url: "/account",
    data: { amount: 200 },
    success: function(data) {
        // ...
    },
    error: function() {
        // ...
    }
});
Finally, it's great that we have all this diversity in frameworks and solutions for building web applications as the next generation of solutions can learn from the failures of each and focus on their successes to build even better, faster, and more awesome tools.
There are three big practical issues I have with WebSharper and other compilers that claim to avoid the pain of JavaScript.
If you don't know JavaScript well, you can't understand most of the examples on the web of using the DOM, ExtJS, etc., so you have to learn JavaScript anyway. (For the same reason, all F# programmers must be able to at least read C# or VB.NET; otherwise they cannot access most information about the .NET framework.)
On any web project you need a few web experts who know the DOM and CSS inside out; would such a person be willing to work with F# rather than JavaScript?
Being tied to the provider of the compiler: will they be around in five years' time? I want full open source, or tools supported by Microsoft.
The big positives I see with these frameworks are:
Sharing code between the server and client
Having fewer languages a programmer needs to know (JavaScript is a real pain, as it looks like Java/C# but is nothing like them)
The average F# programmer being a lot better than the average JavaScript programmer.
My opinion for what it's worth is that every framework has its pros/cons and a project team should evaluate their use cases before including one. To me any framework is just a tool to be used to solve a problem, and you should pick the best one for the job.
I prefer to stick to pure JavaScript solutions myself, but that being said I can think of a few cases where GWT would be helpful.
GWT would allow a team to share code between the server and client, reducing the need to write the same code twice (JS and Java). And if someone was porting a Java client to a web UI, they may find it easier to stick with GWT (then again, it may make it harder :-)).
I know this is a gross over-simplification, because there are many other things that frameworks like GWT offer, but here is how I view it: if you like JavaScript, write JavaScript; if you don't, use GWT or Cappuccino or whatever.
The reason people use frameworks like GWT is not necessarily the abstraction that they give--you can have that with JavaScript frameworks like ExtJS--but rather the fact that they allow you to write web applications in something other than JavaScript. If I were a Java programmer who wanted to write a web application, I would use GWT because I would not have to learn a new language.
It's all preference, really. I prefer to write JavaScript, but many people don't.
I need to write a GUI-related JavaScript library. It will give my website a bit of an edge (in terms of functionality I can offer) until my competitors play with it long enough to figure out how to write it themselves (or finally hack the downloaded script). I can accept that it will be emulated over time; that's par for the course (it's part of business). I just want a few months' breathing space where people go "Wow - how the f*** did they do that?", which gives me a few months of free publicity and some momentum to move on to other things.
To be clear, I am not even concerned about hard-core hackers who will still get at the source; that's a losing battle not worth fighting (and in any case I accept that my code is not "so precious"). However, what I cannot bear is the idea of effectively handing over all the hard work that went into the library to my competitors by shipping plain JavaScript that anyone can download and use. If someone is going to use what I have worked on, then I sure as hell don't want to simply hand it to them; I want them to work hard at decoding it. If they can decode it, they deserve to have the code (they'll most likely find out they could have written better code themselves; they just didn't have the business sense to put all the [plain vanilla] components together in that particular order). So I'm not claiming that no one could have written this (which would be a preposterous claim in any case); rather, I am saying that no one (up to now) has made this functionality available to this particular industry, and I (thinking as an entrepreneur rather than a geek/coder) want to milk it for all it's worth while it lasts, i.e. until it (inevitably) gets hacked.
It is an established fact that not one website in the industry I am "attacking" has this functionality, so the value of such a library is undeniable and is not up for discussion (i.e. that's not what I'm asking here).
What I am seeking to find out are the pros and cons of obfuscating a javascript library, so that I can come to a final decision.
Two of my biggest concerns are debugging, and subtle errors that may be introduced by the obfuscator.
I would like to know:
How can I manage those risks (being able to debug faulty code, ensuring/minimizing obfuscation errors)?
Are there any good-quality, industry-standard obfuscators you can recommend (preferably something you use yourself)?
What are your experiences of using obfuscated code in a production environment?
If they can decode it, they deserve to have the code (they'll most likely find out they could have written better code themselves - they just didn't have the business sense to put all the [plain vanilla] components in that particular order).
So really, you're trying to solve a business issue with technical measures.
Anybody worth his salt as a Javascript programmer should be able to recreate whatever you do pretty easily by just looking at the product itself, no code needed. It's not like you're inventing some new magical thing never seen before, you're just putting pieces together in a new way, as you admit yourself. It's just Javascript.
Even if you obfuscate the script, it'll still run as-is, competitors could just take it and run with it. A few customizations shouldn't be too hard even with obfuscated code.
In your niche business, you'll probably notice pretty quickly if somebody "stole" your script. If that happens, it's a legal issue. If your competitors want to be in the clear legally, they'll have to rewrite the script from scratch anyway, which will automatically buy you some time.
If your competitors are not technically able to copy your product without outright stealing the code, it won't make a difference whether the code is in the clear or obfuscated.
While you can go down the long, perilous road of obfuscators, you generally don't see them used on real, production applications for the simple reason that they don't really do much. You'll notice that Google apps, which is really a whole heap of proprietary and very valuable JavaScript when you get down to it, is only really minimized and not obfuscated, though the way minimizers work now, they are as good as obfuscated. You really need to know what you're doing to extract the meaning from them, but the determined ones will succeed.
The other problem is that obfuscated code must work, and if it works, people can just rip it wholesale, not understanding much of it, and use it as they see fit in that form. Sure, they can't modify it directly, but it isn't hard to layer on some patches that re-implement parts they don't like without having to get in too deep. That is simply the nature of JavaScript.
The reason Google and the like aren't suffering from a rash of cut-and-paste competitors is because the JavaScript is only part of the package. In order to have any degree of control over how and where these things are used, a large component needs to be server-based. The good news is you can leverage things like Node.js to make it fairly easy to split client and server code without having to re-implement parts in a completely different language.
What you might want to investigate is not so much obfuscating, but splitting up your application into parts that can be loaded on-demand from some kind of service, and as these parts can be highly inter-dependent and mostly non-functional without this core server, you can have a larger degree of control over when and where this library is used.
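A minimal sketch of that shape (loadModule, the URL, and the Charting API are illustrative names):

    function loadModule(name, onReady) {
      var script = document.createElement("script");
      script.src = "https://example.com/modules/" + name + ".js";
      script.onload = onReady;
      document.head.appendChild(script);
    }

    // the client only ever holds the pieces it has asked for
    loadModule("charting", function () {
      Charting.render("#sales");
    });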
You can see elements of this in how Google is moving to a meta-library which simply serves as a loader for their other libraries. This is a step towards unifying the load calls for Google Apps, Google AdSense, Google Maps, Google Adwords and so forth.
If you wanted to be a little clever, you can be like Google Maps and add a poison pill to your JavaScript libraries as they are served dynamically, so that they only operate on a particular subdomain. This requires generating them on an as-needed basis, and while such a check can always be removed with sufficient expertise, it prevents wholesale copy-paste usage of your JavaScript files. Inserting a clever call that validates document.location.href is not hard, and finding all such instances in an aggressively minified file would be especially infuriating and probably not worth the effort.
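A sketch of such a poison pill (MyLib and the domain are illustrative names; a determined attacker can strip this, but it defeats casual copy-paste):

    (function () {
      var allowed = /(^|\.)example\.com$/;
      if (!allowed.test(window.location.hostname)) {
        // sabotage quietly rather than failing loudly, so the check
        // is harder to locate in a minified file
        MyLib.render = function () {};
      }
    })();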
JavaScript obfuscation facts:
No one can offer 100% crack-free JavaScript obfuscation. This means that, with time and knowledge, every obfuscation can be undone.
Minification != obfuscation: when you minify, your objective is to reduce code size. Minified code looks completely different and is much harder to read (hint: jsbeautifier.org). Obfuscation has a completely different objective: to protect the code. The transformations used try to protect obfuscated code from debugging and eavesdropping. Obfuscation can even produce a bigger version of the original code, which is completely contrary to the objectives of minification.
Obfuscation != encryption: this one is obvious, but it's a common mistake people make.
Obfuscation should make debugging much, much harder; that is one of its objectives. So if it is done correctly, you can expect to lose a lot of time trying to debug obfuscated code. That said, if it is done correctly, the introduction of new errors is rare, and you can easily find out whether an error is an obfuscation error by temporarily replacing the code with the non-obfuscated version.
Obfuscation is NOT a waste of time; it's a tool. If used correctly, you can make others waste lots of time ;)
JavaScript obfuscation fiction: (I will skip this section ;))
Answer to Q2 - suggested obfuscation tools:
For an extensive list of JavaScript obfuscators, see malwareguru.org. My personal choice is jscrambler.com.
Answer to Q3 - experiences of using obfuscated code:
To date, no new bugs introduced by obfuscation.
Much better client retention. They must come to the source to get the source ;)
Occasional false positives reported by some anti-virus tools. These can be checked before deploying any new code using a tool like virustotal.com.
Standard answer to obfuscation questions: Is using an obfuscator enough to secure my JavaScript code?
IMO, it's a waste of time. If the competitors can understand your code in the clear (assuming it's anything over a few thousand lines...), they should have no trouble deobfuscating it.
"How can I manage those risks (being able to debug faulty code, ensuring/minimizing against obfuscation errors)"
Obfuscation will cause more bugs; you can manage them by spending the time to debug them. It's up to the person who wrote the obfuscation (be it you or someone else); ultimately it will just waste lots of time.
"What are your experiences of using obfuscated code in a production environment?"
Being completely bypassed by side channel attacks, replay attacks, etc.
Bugs.
Google's Closure Compiler obfuscates your code after you finish writing it. That is, write your code, run it through the compiler, and publish the optimized (and obfuscated) JS.
You do need to be careful if you're using external JS that interfaces with the library, though, because the compiler changes the names of your objects, so you can't tell what is what.
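Two standard remedies, sketched (myLib is an illustrative name):

    // 1. Export your own API with string keys: the compiler never
    //    renames quoted property names, so outside scripts can call it.
    window["myLib"] = {};
    window["myLib"]["init"] = function () { /* ... */ };

    // 2. For third-party code your bundle calls into, declare it in an
    //    externs file so those names are left alone, e.g. in externs.js:
    //    var jQuery;
    //    jQuery.ajax = function (settings) {};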
Automatic full-code obfuscation is so far only available in the Closure Compiler's Advanced mode.
Code compiled with Closure Advanced mode is almost impossible to reverse-engineer, even after passing through a beautifier, as the entire code base (including the library) is obfuscated. It is also 25% smaller on average.
JavaScript code that is merely minified (YUI Compressor, Uglify etc.) is easy to reverse-engineer after passing through a beautifier.
If you use a JavaScript library, consider Dojo Toolkit which is compatible (after minor modifications) with the Closure Compiler's Advanced mode compilation.
http://dojo-toolkit.33424.n3.nabble.com/file/n2636749/Using_the_Dojo_Toolkit_with_the_Closure_Compiler.pdf?by-user=t
You could adopt an open-source business model and license your scripts with the GPL or Creative Commons BY-NC-ND or similar
While obfuscation in general is a bad thing, IMHO, with JavaScript the story is a little different. The idea is not to obfuscate the JavaScript itself but to produce shorter code (bandwidth is expensive, and first-time users may just be pissed off waiting for your JavaScript to load). Initially called minification (with programs such as minify), it has evolved quite a bit, and now full compilers are available, such as the YUI Compressor and the Google Closure Compiler. Such a compiler performs static checking (which is a good thing, but only if you follow the rules of the compiler), minification (replacing that long variable name with 'ab', for example), and many other optimization techniques. At the end, what you get is the best of both worlds: coding in non-compiled code, and deploying compiled (minified and obfuscated) code. Unfortunately, you will of course need to test it more extensively as well.
The truth is, obfuscator or not, any programmer worth his salt could reproduce whatever you did in about as much time as it took you. If they stole what you did, you could sue them. So the bottom line, from the business point of view, is that you have, from the moment you publish, roughly the same amount of time it took you to implement your design until a competitor catches up. Period. That's all the head start you get. The rest is you innovating faster than your competitors and marketing it at least as well as they do.
Write your web site in Flash, or better yet in Silverlight. This will give your company an unmatched GUI that your competitors will be salivating over. But the compiled nature of Flash/.NET will not let them easily peek into your code. It's a win/win situation for you ;)
I was wondering how I can detect code plagiarism with Javascript. I want to test assignment submissions for homework I'm going to hand out.
I've looked at using MOSS, but—from what I've heard—it's pretty poor for anything other than C. Unfortunately, I can't test it yet because I don't have submissions.
How can I go about detecting code plagiarism with JavaScript?
They claim that MOSS works on JavaScript. Why don't you just try it? Write a JavaScript file, then modify it the way a cheater would modify somebody else's code, and feed it to MOSS to see what it says.
I build clone-detection tools that find similar blocks of code across files. See the CloneDR overview and example reports. CloneDR works for a wide variety of languages and uses the language structure to make clone detection efficient and effective.
As per yar's comment, pasting chunks of JavaScript into Google will work pretty well, but is stopping them from cheating realistic?
Could you split the task into two parts, the first part allowing them to 'cheat' if they want to, while telling them there will be a second part of the task in class? Then have the class do exactly the same task in supervised class time.
If everyone has 'cheated' the first time, that's one thing. But if anyone is unable to redo their homework in class, then they a) cheated (which is bad enough) and b) learnt nothing (which is worse).
Using the internet to 'research' is always going to happen, but it's the ones who forget their 'research' who are cheating both you and themselves.
I wouldn't go out of my way to run submissions through a plagiarism checker.
Code is code and bad code is bad code. People who can't code (those who are more likely to copy/paste code**) generally don't have good code. Difficulties (and questionable approaches around them) will be easily detectable if you even take a few seconds to check the source. Something just won't match up and it should smack you in the face.
**I would argue that adapted code isn't plagiarized unless it violates the author's distribution intent (e.g. violates copyright or a license). I would encourage the students simply to document which existing resources, if any, they used as a base and/or incorporated, and to understand and adapt the code to fit their needs (and to make it better; so much code out there is soup). I do this all the time in "real programming work". Of course, it's not my curriculum :-)
I'm talking about things like page/stylesheet caching, minifying JavaScript, etc.
Half of me thinks it's better to do these things as early as possible, while still in development, so I stay consciously aware of realistic speed and response issues and interact with something that closely resembles what will be deployed to production. The other half of my brain thinks it makes more sense to do nothing until just before launch, so that during development I'm constantly working with the raw, unoptimized data.
Is there common or conventional wisdom on this subject?
I do all optimizations at the end. That way, when something doesn't work, I know it's because the code is wrong. I've tried to optimize things too early at times and realized I wasted an hour because I was caching something, etc.
Realize that a user spends most of his time waiting on front-end objects to download. Your application may generate HTML in 0.1 seconds, but the user spends at least 2 seconds waiting for all the images etc. to load. Getting these download times down will noticeably improve the user experience.
A lot can be done already by enabling GZIP and using minified JavaScript libraries. You should download and install YSlow and configure your webserver with appropriate caching headers. That alone can save hundreds of milliseconds of loading time.
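For a Node/Express stack, those two suggestions look roughly like this (a sketch; on Apache/nginx the equivalents are mod_deflate / gzip on, plus Expires or Cache-Control directives):

    var express = require("express");
    var compression = require("compression"); // gzip middleware

    var app = express();
    app.use(compression()); // GZIP every compressible response

    // far-future caching headers for static assets
    app.use(express.static("public", { maxAge: "30d", etag: true }));

    app.listen(3000);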
The last step is to reduce the number of image requests using CSS sprites. Other steps can include minifying CSS and JavaScript, but these will gain the least of all the methods mentioned so far.
To summarize, most of this can be done by properly configuring your webserver; the sprites, however, should be done during development.
I'm a fan of building the site first, then using a user experience profiler like YSlow to do the optimizations at the very end.
http://developer.yahoo.com/yslow/
I should add that a profiler of some sort is essential. Otherwise, you're optimizing without before/after data, which is not exactly scientific (not to mention you won't be able to quantify how much improvement you made).
Premature optimization is the root of all evil :)
Especially early in development and even more so when the optimizations will interfere with your ability to debug the code or understand the flow of the program.
That said, it is important to at least plan for certain optimizations during the design phase so you don't code yourself into a corner where those optimizations are no longer easy to implement (certain kinds of internal caching being a good example).
I agree with your premise that you should do as much optimization as you can in the early stages. It will not only improve development time (think: saving half a second per refresh adds up when you're spamming Ctrl+R!) but will keep your focus at the end, while you're refactoring the code you've implemented, on the important stuff. Minify everything that you won't be modifying right off the bat.
I agree with the other half of your brain that says 'do what you have to do, and then do it faster'. But for anything you do you must know and keep in mind how it can be done faster.
The main problem I see with this approach is that at the end it's easier to ignore the fact that you have to optimize, especially if everything seems 'good enough'. The other problem is that if you are not experienced with optimization issues, you may indeed 'code yourself into a corner', and that's where things tend to get really ugly.
I think this might be one where it's difficult to get a clear answer, as different projects will have different requirements, depending on how much work they are doing on the client side.
My rule of thumb would probably be later rather than sooner, if only because many of the typical front-end optimization techniques (at least the ones I'm aware of) tend to be fairly easy to implement. I'm thinking of whitespace stripping, changing HTTP headers, and so forth. So I would favour focusing on work that directly addresses the problem your project is tackling; once that is handled effectively, move on to things like optimizing front-end response times.
After coding for a number of years, you get a pretty clear idea of what the performance bottlenecks will be:
DB queries
JS/jQuery functions
CFCs
images
JavaScript files
By identifying the bottlenecks ahead of time, you can spend time tweaking as you create or modify each piece of the application that touches them, knowing that the time spent while coding gives you better performance in the end.
It also tends to make us less lazy, by always writing optimal code and learning what that means in practice.