Does anyone have a good strategy for adding TypeScript to a continuous integration system?
I'm planning on compiling it in my build and then running unit tests against its generated JavaScript.
However, I'd like to add code standards checking for the TypeScript. Any ideas?
Thanks.
The TypeScript team are deliberately reserving judgement on the official coding standards because they want to see how people use the language in real life. Anecdotally, people seem to be following the JavaScript naming conventions, i.e. ModuleName, ClassName, functionName.
You can write your unit tests in TypeScript (tsUnit) or JavaScript (Jasmine, QUnit et al).
How you integrate it with CI depends a little on the framework and on the CI platform. I have integrated tsUnit tests with Visual Studio and TFS using the MS Script Engine to execute the tests. If you want more details on this particular setup, I am happy to share.
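For illustration, a minimal Jasmine spec for compiled TypeScript might look like this; the add function is a stand-in for your own code:

    // add.spec.ts – a minimal Jasmine spec ("add" is a placeholder for your own code)
    function add(a: number, b: number): number {
        return a + b;
    }

    describe('add', () => {
        it('sums two numbers', () => {
            expect(add(1, 2)).toBe(3);
        });
    });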
One option is to set up TSLint and integrate it into your build process.
In my case (an Angular 2+ app), I added an npm task to run TSLint, and the CI system (VSTS for me) executes that npm task and fails the build if TSLint detects any errors. The TSLint task I'm using comes from Angular's quickstart project: https://github.com/angular/quickstart/blob/master/package.json (note the "lint" script).
Also note that TSLint's rules are configurable, so you can customize the coding standard to whatever you want to use.
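For illustration, the relevant pieces look roughly like this; the paths and rules here are placeholders, not the quickstart's exact values. In package.json (excerpt):

    "scripts": {
        "lint": "tslint src/**/*.ts -t verbose"
    }

And in tslint.json, every rule can be switched on, off, or configured:

    {
        "rules": {
            "quotemark": [true, "single"],
            "semicolon": [true, "always"]
        }
    }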
Your question is vague; it's difficult to give a precise answer.
For integrating TS compilation into your build system, you'll want to simply invoke the TypeScript command line compiler (tsc.exe) on your .ts files. This will output the JS and you can run your unit tests against those.
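For example, assuming the sources live under src/ (the flags are standard tsc options; run tsc --help for the full list):

    tsc --outDir built --sourceMap src/app.ts

This emits built/app.js plus a source map, and your test runner can load the files from there.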
Regarding TS code standards, I don't think there's any tooling available right now that looks at TS coding standards, given that the language only went public a few months ago.
I want to write a Node CLI application, and I'm wondering how I should structure it. I'm fairly new to Node and a bit confused by all the design patterns used when building such an application.
I want to be able to call the application from the command line, but also use it as a Node module for better testing.
Currently I have one file with lots of functions that get called directly from the CLI, but I feel this is rather difficult to maintain.
Is there any good writing on how to do such things? I looked at rimraf, but it confused me even more. Thanks for your time.
I don't know if there is a "right" way to do it, but I can tell you how I have dealt with a problem similar to yours. I wanted to create a CLI and a Visual Studio Code plugin so people could use the functionality both from VS Code and from the CLI (for those who don't use VS Code). The approach I took was to put all the logic in its own package and then create two other packages, one for the CLI and one for the VS Code plugin, both of which require the "logic" package.
In the CLI package you only have code strictly related to command handling; the real meat happens in the logic package. In my case the VS Code plugin package had very few lines of code: just configuration and the calls to the needed functions.
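A minimal sketch of that split (the file and package names are just placeholders):

    // lib/index.js – the "logic" package: plain functions, no CLI concerns
    function greet(name) {
        return 'Hello, ' + name + '!';
    }
    module.exports = { greet };

    // bin/cli.js – the CLI package: argument handling only, then delegate
    // (the real file would start with: #!/usr/bin/env node)
    const { greet } = require('../lib');
    console.log(greet(process.argv[2] || 'world'));

Because the logic lives in its own module, tests can require it directly without ever touching the command line.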
Then, regarding the structure of the code, some recommendations:
expose only what is strictly necessary
isolate your code in different files/classes based on common functionality (and go to point 1)
test your code
lint your code
But those are common-sense, language-independent recommendations.
There's no one "standard" way to structure Node.js apps; however, you will notice that many authors follow similar patterns. Instead of having one file containing all the code, it should be split into modules grouped by function. Have a look at this repo on GitHub; it has some very good suggestions about Node.js best practice: https://github.com/i0natan/nodebestpractices#1-project-structure-practices.
A couple more pointers I would add: Ensure you're logging any errors, consider using something like Winston.js for this purpose. Also have some mechanism in place to restart the service if a critical error occurs, e.g. Forever.js.
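As a sketch of the logging side (this uses the winston 3.x API; the transport choices are just an example):

    // logger.js – log everything to the console, errors also to a file
    const winston = require('winston');

    const logger = winston.createLogger({
        level: 'info',
        transports: [
            new winston.transports.Console(),
            new winston.transports.File({ filename: 'error.log', level: 'error' })
        ]
    });

    logger.error('something failed', { module: 'payments' });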
Likewise, ensure you're unit testing; there are some good test frameworks: Jasmine, Mocha, Cucumber.js.
Now that I know what "angular compiler" actually means, this is my next question.
I read here about the positive and negative points of ahead of time compiling. To me, it boils down to this:
Use AOT for deployments
Use JIT for development
The only valid reason (imho) for JIT is that it makes builds way faster (which is nice for the development process). Is there any other reason?
AOT has so many HUGE advantages over JIT that I wonder why JIT is even an option for a deployment.
Well, there are many differences, and you have pointed some of them out very well. It also depends on what kind of environment you are in and what your requirements are.
I have been using Angular since angular-2.beta.17, when the Angular CLI did not exist and we had to rely on various build systems and run environments to get a project running anywhere.
SystemJS building and running:
In some situations, where multiple projects and frameworks run together, you cannot make an AOT build, bootstrap your code into the SPA, and run. In a SystemJS environment you have to load the scripts one by one and then bootstrap.
You must have seen this kind of build-and-load script in many online tools like CodePen, Plunker, etc.; I think they all use SystemJS loading.
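As an illustration, a SystemJS setup of that era looked roughly like this (the paths and module names are placeholders):

    // systemjs.config.js – map module names to files
    System.config({
        map: { app: 'dist' },
        packages: { app: { main: 'main.js', defaultExtension: 'js' } }
    });

    // the page then loads and bootstraps the app with:
    System.import('app').catch(function (err) { console.error(err); });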
There are many others, like the CommonJS loader, the Babel build system, and the webpack build system. But now the Angular CLI is a bulletproof tool that internally uses webpack (and was originally based on ember-cli) to handle everything you want.
Ahead of Time compilation and JIT debugging
As the name suggests, AOT does tree shaking: it includes only what is really used in your code, throws away the unused parts, and compacts the code so it can be loaded very fast. But by doing that you lose debugging control, and it doesn't give you a nice error message if, in production, you want to see what went wrong.
At the same time, JIT can point to the exact line number in the TypeScript file that has the error, which makes your life super easy when debugging in dev mode through the Angular CLI.
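The difference is visible in how the app is bootstrapped (this is the pre-Ivy Angular API; the module names are the usual quickstart ones):

    // main.ts, JIT (dev): the compiler ships to the browser and compiles templates at runtime
    import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
    import { AppModule } from './app/app.module';
    platformBrowserDynamic().bootstrapModule(AppModule);

    // main.ts, AOT (prod): templates were compiled at build time; only the compiled factory ships
    import { platformBrowser } from '@angular/platform-browser';
    import { AppModuleNgFactory } from './app/app.module.ngfactory';
    platformBrowser().bootstrapModuleFactory(AppModuleNgFactory);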
You can get a lot more out of the Angular compiler, and there are many tools built around it as well; one of my favorites is ngrev.
I'm an Angular 1 dev currently looking to get to grips with Angular 2; however, with the changes and rewrites so far it's a bit of a quagmire.
All of the full ecosystems I've found are from late 2015, the testing ngdocs are outdated, and there doesn't seem to be a clear, up-to-date scaffold for Angular 2.
Has anyone come across a recommended setup or been able to set something up themselves? Something like Yeoman is not strictly necessary, but I'd need TypeScript compilation and linting, live reload for development, unit testing, minification, SASS compilation, and dependency management.
I would default to Gulp, Jasmine & Karma for the build system and tests but ideally I want to start off learning the best suited technologies from the start.
Bonus points for Webstorm integration :)
I'd check out the Angular-CLI project. It's an official CLI toolchain with testing and build built in. It's still a bit unstable with the changeover to webpack, but if you're working with RC software that's to be expected.
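For reference, the basic workflow covers most of what was asked for (these are the CLI commands of that era; check ng help for your version):

    npm install -g angular-cli
    ng new my-app        # scaffold with TypeScript, Karma and linting wired up
    ng serve             # dev server with live reload
    ng test              # unit tests via Karma/Jasmine
    ng lint              # TSLint
    ng build --prod      # minified production build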
While waiting for angular-cli to become a bit more mature, I've been using the Fountain Webapp generator. Long term, I'll probably move to angular-cli.
I'm creating a JavaScript library, and I want it to be environment-agnostic (it will not use the DOM, AJAX, or the Node.js API; it will be vanilla JavaScript). So it's supposed to run in any JavaScript environment (browsers, npm, Meteor smart packages, V8 C bindings...).
My current approach is to create a git repo for the library, with the whole library inside a single global variable, without thinking about patterns like CommonJS or AMD.
Later, I'll create another git repo that uses my library as a git submodule and adds whatever is needed to release it as an npm module. I'm not sure this is a good approach; I haven't found anyone else doing it this way.
Pros: the code will be vanilla JavaScript, with no awareness of environment patterns. It will not bind itself to CommonJS. It will be repackable (copy and paste, or git submodule) for any JavaScript environment. It will be as small as it needs to be when sent to browsers.
Cons: I'll have to maintain as many git repos as there are environments I want to support; at least a second repo to deliver on npm.
Taking jQuery as an example: it runs in both the browser and Node.js from just one git repo. There is some code that checks the "exports" variable (see the sketch below) so it can run in Node.js or any other CommonJS-compatible environment.
Pros: just one git repo to maintain.
Cons: it will be bound to the CommonJS pattern (to achieve npm compatibility).
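For reference, the export-aware wrapper that jQuery-style libraries use is essentially the UMD pattern; a minimal sketch ("myLib" is a placeholder name):

    (function (root, factory) {
        if (typeof module === 'object' && module.exports) {
            module.exports = factory();        // CommonJS / Node.js
        } else if (typeof define === 'function' && define.amd) {
            define(factory);                   // AMD / RequireJS
        } else {
            root.myLib = factory();            // browser global
        }
    }(this, function () {
        return { hello: function (name) { return 'Hello, ' + name; } };
    }));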
My question is: am I following a correct (or acceptable) approach, or should I follow jQuery's path and try to create a single git repo?
Update 1:
Browserify and other require() libraries are not valid answers. My question is not how to use require() in the browser; it's about an architecture pattern for achieving environment agnosticism.
Update 2:
Creating a browser/Node.js module is not the question; that much is known. The question is: can one make a truly environment-agnostic library? That example is bound to the CommonJS pattern used in Node.js.
If you are looking for a design recommendation for your future library work, then in my opinion you can "think future" and just use the usual object-oriented practices well proven in other languages, systems, and libraries.
Mainly concentrate on the UML view of your design.
Forget the "one variable" requirement.
Use features proposed in the planned next version of JavaScript.
http://wiki.ecmascript.org/doku.php?id=strawman:maximally_minimal_classes
http://wiki.ecmascript.org/doku.php?id=harmony:modules_rationale
There is an experimental compiler available that allows you to write ES6-style code even today (see https://www.npmjs.org/package/es6-module-transpiler-rewrite).
Node.js has a --harmony command line switch that allows for the same (see What does `node --harmony` do?)
So, in my opinion, the correct approach is to follow best practices and "think future".
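As a small sketch of what "thinking future" looks like in code (ES6 module and class syntax, which today needs a transpiler or a harmony flag):

    // greeter.js – an ES6-style module: no global variable, no CommonJS boilerplate
    export class Greeter {
        constructor(name) {
            this.name = name;
        }
        greet() {
            return 'Hello, ' + this.name;
        }
    }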
"Use a build tool" is the answer for this question. With a build tool, you can develop with the best code pratices, without accopling your code to some enviroment standard of today (AMD, commonjs...) and still publish your code to these kind of enviroments.
For example, I'm using Grunt.js to run some tasks, like build, lint, test, etc.
It performs tedious operations (minification, compilation...), much like Make, Maven, Gulp.js, and various others.
The build task can apply standards (like CommonJS) to the compiled code. So the library can be totally environment-agnostic, and the build process handles the environment adaptations.
Note that I'm not talking about compiling to binaries; it's compiling source to another source, like CoffeeScript to JavaScript. In my case, it's compiling JavaScript without an environment standard to JavaScript with the CommonJS standard (to run as a Node.js module).
The final result is that I can compile my project to various standards without messing with my code.
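A minimal sketch of such a Gruntfile (the plugin choice and paths are just examples):

    // Gruntfile.js – lint, then minify, as a single "build" task
    module.exports = function (grunt) {
        grunt.initConfig({
            jshint: { all: ['src/**/*.js'] },
            uglify: { dist: { files: { 'dist/lib.min.js': ['src/lib.js'] } } }
        });
        grunt.loadNpmTasks('grunt-contrib-jshint');
        grunt.loadNpmTasks('grunt-contrib-uglify');
        grunt.registerTask('build', ['jshint', 'uglify']);
    };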
Additionally, with a build phase I can "think future", as xmojmr answered, and use ECMAScript 6 features in my JavaScript code, using Grunt plugins like grunt-es6-transpiler or grunt-traceur to compile the code from ES6 down to ES5 (so it can run in today's environments).
As for making your library modular (if you need modules), read this question: Relation between CommonJS, AMD and RequireJS?
Take Bootstrap, for example: it uses npm to manage project dependencies and Bower to publish itself as static content for other web apps.
Take a look at Browserify as a reference; it's a little heavy because it provides the capability to bundle dependent npm modules as a resource for browsers.
I am writing a Java webapp, but the bulk of the codebase is actually browser-side JavaScript, which depends on Dojo and uses QUnit for unit testing.
With serious abuse of Maven and Git I was able to wire in JavaScript unit testing and JavaScript dependencies.
However, I cannot get pom.xml into a shape where both the Maven command line and the Eclipse Maven plugin are able to compile and test the whole project.
Moreover, Sonar cannot recognize that the project has JavaScript components, and Eclipse cannot help much with the JavaScript parts.
For reference, the project is here: https://github.com/magwas/worldmodel/
What would be the correct project structure to end up with one war that contains all the JavaScript with its dependencies, unit tested, analyzed (with Sonar), and ready to submit to Jenkins?
I have found http://mojo.codehaus.org/javascript-maven-tools/, but I don't know:
whether putting the JavaScript part into a separate project is better, and if so, how it should be included in the Java one;
how I should handle the dependencies (Dojo, Dijit & QUnit).
I realize it is a complex question; pointers to documentation or to the source code of projects that have already solved it would be welcome.
I have found this: http://mojo.codehaus.org/javascript-maven-tools/javascript-ria-archetype/index.html
This might be an answer to parts of the question.
Now I only have to figure out how to use it with dojo.