How to automate JSHint as part of the build process? - javascript

I want to automatically run JSHint on all my JavaScript files as part of our continuous integration environment (actually, probably as part of TFS Gated Checkin, but I'm not sure yet as it will depend on speed).
I tried using rhino-jshint like this:
java -jar js.jar jshint-rhino.js myFile.js
But how can I set the required JSHint options? I know I could list them in a comment at the top of myFile.js, but I've got lots of JavaScript files, and I don't want the options duplicated in all my source files. (Or does JavaScript have an 'include' feature that I'm not aware of?)
I had hoped to pass an options.js file in as a parameter on the command line, and then keep options.js under version control. But I don't think this is possible with jshint-rhino.js.
Additionally, we are using a Visual Studio extension to 'JSHint' all JavaScript files as we save them. But this tool cannot be run on the command line. We want the best of both worlds: running JSHint inside Visual Studio and automated for the CI build, without duplicating the options (and indeed keeping the options under version control).
So the question is, how do other people automate JSHint in their development process?

If you want to check your JavaScript while running a TFS build, I would recommend having a look at SharpLinter:
https://github.com/jamietre/SharpLinter
This contains an executable which allows you to check your JavaScript files with JSLint/JSHint. To run this during your TFS build you could create a Code Activity which can be included in your workflow.
This video by Marcel de Vries from the Techdays 2012 goes through the automated build process step by step, and gives a demonstration on how to include your custom activity.
http://channel9.msdn.com/Events/TechDays/Techdays-2012-the-Netherlands/2361

Since you essentially want to run a command-line tool inside the TFS 2010 build process, this is actually pretty simple. You'll want to add an InvokeProcess workflow activity (see the MSDN documentation) at the point where you want to run the tool.
There is a walkthrough available for how to do this with a different command-line utility; you'll just replace it with your own. The workflow activity even allows you to specify a set of command-line parameters to pass in.

Look at both http://gitcasette.com/ and http://net.tutsplus.com/tutorials/javascript-ajax/meeting-grunt-the-build-tool-for-javascript/
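If you go the Grunt route, grunt-contrib-jshint lets you keep the JSHint options in a single .jshintrc file under version control and run the same checks on the CI box. A minimal sketch (the src/**/*.js glob and the .jshintrc path are placeholders, not from the question):

// Gruntfile.js - minimal sketch; paths are illustrative
module.exports = function (grunt) {
  grunt.initConfig({
    jshint: {
      options: {
        jshintrc: '.jshintrc'   // shared options file kept under version control
      },
      all: ['src/**/*.js']      // every JavaScript file to lint
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.registerTask('default', ['jshint']);
};

Running grunt on the CI machine then fails the build whenever JSHint reports errors, and editor integrations can read the same .jshintrc, so the options live in one place.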

Related

How to deploy a JavaScript automation app?

I have built a web automation program with Selenium & JavaScript. Now I want to make it usable for everyone, so that anyone can use it without any dependencies or coding environment; I mean any non-technical person can use it easily.
How can I do it?
One option is to use: https://www.npmjs.com/package/pkg
This command line interface enables you to package your Node.js project into an executable that can be run even on devices without Node.js installed.
The most useful part will be dependencies:
During packaging process pkg parses your sources, detects calls to require, traverses the dependencies of your project and includes them into executable
However, I expect this will not manage the webdriver executable. You'll most likely need to ship chromedriver/geckodriver/etc. with your resultant exe.
Looking forward, I expect you'll need to manage your distributable for the rollout of new browser versions. Different users will be on different versions at different times, and the relevant drivers will need to be updated and re-shipped.
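To illustrate that last point: inside a pkg-built binary, process.pkg is set and the driver can be looked up next to the executable. The sketch below assumes a recent Node selenium-webdriver (4.x); the file names and site URL are illustrative and not taken from the original project.

// Sketch: point selenium-webdriver at a chromedriver shipped alongside the packaged exe.
const path = require('path');
const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

// When running inside a pkg executable, look next to the exe; otherwise use the project folder.
const driverDir = process.pkg ? path.dirname(process.execPath) : __dirname;
const service = new chrome.ServiceBuilder(path.join(driverDir, 'chromedriver.exe'));

async function run() {
  const driver = await new Builder()
    .forBrowser('chrome')
    .setChromeService(service)   // use the bundled driver instead of one on PATH
    .build();
  await driver.get('https://example.com');
  await driver.quit();
}

run();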

How to make a deployment onto Amazon Web Services (AWS) with ReactJS/NodeJS together?

I currently have ReactJS + NodeJS/ExpressJS + Webpack on EC2 (Amazon Web Services) under one project and would like to get it deployed together at once, in one project.
What are some suggestions on how to go about doing so? I've done the research, and I've only seen tutorials on deploying one of them specifically, whether it be just ReactJS or just NodeJS. Any insights or leads would be greatly appreciated.
Will accept/upvote answer. Thank you in advance
You don't "deploy" ReactJS; it's just a static file or files, like any other JS library in your application. You also don't deploy Webpack. Webpack should run on a developer machine (or in a CI/CD stack or build system).
As for the NodeJS part just use Elastic Beanstalk.
I do not commit builds to source control. I see that a lot and it can make things easier, but you can also forget to rebuild as you have to do it manually, and it adds a lot of bloat to your repo.
I believe builds should be run as part of the deployment process. Assuming you are using git, you can add a hooks/post-receive script in a remote repo there. When you push to that remote, the script will run. This is where I do my webpack build.
You may want to look into https://github.com/git-deploy/git-deploy for context, but I do this manually.
In my projects, on the deployment machine I do git init --bare /var/git/myproject.git and then add the script at /var/git/myproject.git/hooks/post-receive. The hook checks out the code into /var/www/myproject and runs the build, which fills /var/www/myproject/build. It then removes the old /var/www/myproject/public and renames build to public. And done.
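For context, here is a minimal sketch of such a post-receive hook. It is written as a Node script purely for illustration (a shell script is more typical), and the npm run build step and paths are assumptions mirroring the description above.

#!/usr/bin/env node
// Minimal post-receive hook sketch; paths and the build command are illustrative.
const { execSync } = require('child_process');

const GIT_DIR = '/var/git/myproject.git';
const WORK_TREE = '/var/www/myproject';

const run = (cmd) => execSync(cmd, { stdio: 'inherit' });

// Check the pushed code out into the working directory
run(`git --git-dir=${GIT_DIR} --work-tree=${WORK_TREE} checkout -f master`);

// Install dependencies and run the webpack build, which writes into build/
run(`cd ${WORK_TREE} && npm install && npm run build`);

// Swap the freshly built files into place
run(`rm -rf ${WORK_TREE}/public && mv ${WORK_TREE}/build ${WORK_TREE}/public`);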
I'm coming from more of an operations background and would say that if your goal includes keeping that site up as much as possible, then use Packer to generate AMIs and CloudFormation to build an Application Load Balancer (the newer, cheaper brother of ELB) in front of an AutoScalingGroup, which keeps the EC2 instances up and running.
I'm currently working on a large scale project doing exactly what you describe. First off, there are so many different ways to do this, so what you really need is some general guidelines to get started, then we can dig a little deeper into details when some initial decisions are made, if you'd like. If you've already got the app deploying and running in two separate steps, but are just looking to combine those, I can definitely help. I'd just need to know how you're currently building/deploying. If you're just getting started on building your pipeline and need to set up the process from scratch, then read on:
First off, you'll want to set up some kind of build server that will install your npm dependencies and run your webpack build. Most likely you'll want a separate webpack config that's just for your build server; this will give you a build optimized for production or QA/staging environments. This config should split out vendor files that you won't update all the time, pull CSS out into separate files with the extract-text plugin, and uglify the files. If you have an isomorphic React app, or are using ES6 features not supported in your version of Node, then you'll need a webpack build for your server code as well. This is really different from the hot-reloading build you'll want to have on your local machine while you're actually coding the app. I'll be happy to show some examples, if you'd like, of our webpack config files for both local development and our CI build (a rough sketch of the production side is included below). You may also need a build.sh or makefile to do something with the compiled .js files that your webpack build creates, but that'll depend on your deployment, which I'll cover later. You can run your production build locally as you're getting your config just right, and fire up the app from those files to test that it's all working.
Additionally, since you'll likely want to be able to automate all of this, you probably want to run your tests and linting right before you build your app. We run ESLint and mocha/jsdom to run our enzyme/expect specs as part of our build.
Once that's all working nicely, you'll most likely want to set up a build server that can run your builds automatically. My team is using Jenkins for this, which is a little more work to set up, but it's free (aside from the EC2 box we run it on). There are also a ton of subscription-based build/continuous integration servers, such as Travis and CodeShip. There are plenty of articles on the pros and cons of these different products and how to set them up. The bottom line is you'll want a build server that can pull down your code from source control, install npm deps, lint, test and build your app. If anything fails, it should fail your build, and if your build succeeds you'll have some sort of archive that you'll later deploy to an EC2 instance.
In our shop we use a build.sh file to tarball up our build archive (basically a folder with our Node server files as well as our minified client files, CSS files and any fonts or images needed to run the app) and upload it to an S3 bucket that we deploy from. We like this fairly old-school method because the tarball will never change, so we have ultra-reliable rollbacks.
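As a reference point, a production-oriented webpack config along those lines could look roughly like the following. This is a sketch assuming a webpack 2/3-era setup with extract-text-webpack-plugin; the entry points, paths and plugin choices are illustrative, not the poster's actual files.

// webpack.config.prod.js - sketch of a production build config (webpack 2/3 era)
const path = require('path');
const webpack = require('webpack');
const ExtractTextPlugin = require('extract-text-webpack-plugin');

module.exports = {
  entry: {
    app: './src/index.js',
    vendor: ['react', 'react-dom']          // libraries that rarely change
  },
  output: {
    path: path.resolve(__dirname, 'build'),
    filename: '[name].[chunkhash].js'       // cache-busting file names
  },
  module: {
    rules: [
      { test: /\.jsx?$/, exclude: /node_modules/, use: 'babel-loader' },
      {
        test: /\.css$/,
        use: ExtractTextPlugin.extract({ fallback: 'style-loader', use: 'css-loader' })
      }
    ]
  },
  plugins: [
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor' }),  // split vendor bundle
    new ExtractTextPlugin('[name].[contenthash].css'),            // pull CSS into its own file
    new webpack.optimize.UglifyJsPlugin()                          // minify for production
  ]
};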
What you do with your build archive will depend on how you want to do deployments. We have a custom deployment system using Puppet, but there are plenty of products that do this, such as Elastic Beanstalk, that would be much easier to set up. You'll want some kind of process supervisor to actually run your Node app (one example is sketched below), so unless you have a dev ops team that wants to build custom pipelines, using AWS built-in features will probably be the easiest way to get started. As usual, there are many ways to do this, but the basic principle is that you need something to download your build archive and run/supervise your Node process. You also may want to be able to create and configure EC2 boxes on the fly (Puppet, Chef, etc.), or even use containers (Docker), which allow you to move complete stacks around as single units. Using automation to create and configure servers is crucial if you need to scale your app, but it is complicated and may not be necessary for smaller projects. This is definitely an area where you can start simply and add complexity later on, as long as you have good long-term goals and make sure to take the necessary steps to prepare for future complexity.
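The answer doesn't name a specific supervisor; pm2 is one common choice, and it can be driven from a small config file kept in the repo. A sketch (the app name, script path and port are made up):

// ecosystem.config.js - sketch of a pm2 process file; values are illustrative
module.exports = {
  apps: [
    {
      name: 'my-app',             // hypothetical app name
      script: 'server/index.js',  // the built Node entry point
      instances: 2,               // run two processes
      exec_mode: 'cluster',       // load-balance across them
      env: {
        NODE_ENV: 'production',
        PORT: 3000
      }
    }
  ]
};

Running pm2 start ecosystem.config.js then keeps the process up and restarts it if it crashes.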
All of this can get you pretty far into the weeds, so it's best to find the simplest thing that will serve your needs as you get started and then add complexity as real-life situations demand it. I'll be happy to elaborate on any of these details if you provide a little more context about how big and well-funded a project you're working on. If it's a little side project to learn the tech, I'd have very different advice than if you're trying to build an app that'll have a lot of traffic and/or complex features.
This could get 100 different answers and they could all end up being good ideas. First, you mention react + nodejs - keep in mind that these solve different tasks. React is going to be frontend and served out via static files. Nodejs is focused more around the server-side and would be the code that serves data. They can easily work together. You might use Express for the webserver (nodejs) to serve the HTML/React pages.
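A bare-bones sketch of that Express setup, serving the webpack output next to a small JSON API (the build directory, route and port are illustrative, not from the question):

// server.js - minimal Express server for a built React bundle plus an API route
const path = require('path');
const express = require('express');

const app = express();

// Serve the static files produced by the webpack build
app.use(express.static(path.join(__dirname, 'build')));

// Example API route handled by Node
app.get('/api/health', (req, res) => res.json({ ok: true }));

// Fall back to index.html so client-side routing works
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'build', 'index.html'));
});

app.listen(process.env.PORT || 3000);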
Unfortunately, I saw that you mentioned webpack, so you are going to have to 'build' your application with something - either via webpack, gulp, grunt, etc. This is where source control and build servers are great - but if you're new to it, it might be more complex than you need.
If you have just basic EC2 images as webservers and only 1-2, then the biggest hurdle is just pushing up your code. Something like https://deploybot.com/ could work as it can push your git repo down to multiple hosts via ftp, etc. If you wanted to get a bit fancier, you could look at something like Jenkins or some of the other items.
Docker is a great choice and if you are going to be dealing with multiple developers, server environments, deployments - it's worth the time. Otherwise, keep it simple and just get your code on the EC2 instance ;).

Setting the build controller manually (unit test)

I have a question regarding build controllers in Visual Studio.
I have a project where I run multiple C# unit tests. I have now added JavaScript unit tests to the project and I want those tests to be part of the build.
Several tutorials are available on the internet. I used one of them as a guideline for running JS tests integrated with my TFS, which runs on a build server.
The problem I have is that the tutorial says I should check in the files (of Chutzpah) and add the source path to the build controller. Here is my problem: because I do not want to affect all the other unit tests and build processes, I cannot modify the build controller. I can change any build definition, but I cannot change the "Version Control Path to Custom Assemblies". So I was wondering: is there an alternative method where I can still make sure that the JS unit tests are part of the build without changing the version control path for the whole project?
I hope I stated my situation clear enough.
You can enable your build process to leverage binaries that you have uploaded to your Team Foundation Server, for example:
Assemblies that contain your custom workflow activities.
Third-party unit test frameworks. See Run tests in your build process.
Custom MSBuild tasks.
To enable your build processes to leverage these kinds of code, upload the binaries to the folder (or any of its descendant folders) that you specify in the Version control path to custom assemblies box. (MSDN)
So, if you haven't configured this path on your build controller, it's easy: you just need to set a server path. This will not affect other unit tests and build processes because they didn't use this path before.
If there is already a server path, you just need to add the files mentioned in the tutorials into source control under that same path. It works like a shared folder: when a build definition needs a file, the build controller will automatically find and load it from this path. When you set or modify the value in this box, the build server automatically restarts to load the assemblies.

What is Grunt for?

I'm trying to get into Grunt, which I am new to, but I do not understand its utility.
I understand that it is a task runner. I understand that it can be used to do things like bundle, uglify, jshint, minify, etc., anything that can be turned into a scripted task.
But I don't see what advantage this gives. Nearly all of these can be run from the command line anyway, which is to say you could just combine them using a simple shell script. It seems to me that setting up grunt + gruntfiles and writing tasks is more work than writing a shell script, rather than less.
What am I missing about this?
Grunt is basically a build / task manager written on top of NodeJS. I would call it the NodeJS stack equivalent of Ant for Java. Here are some common scenarios where you would want to use Grunt:
You have a project with JavaScript files requiring minification, and you generally generate a front-end build separately (in case you're using, say, Java for your backend). (grunt-contrib-uglify)
When you save code on your machine during development, you want the browser to reload your page automatically (might seem like a small thing, but believe me this has saved me lots of time). (Live reload)
When a developer saves code on his machine, he wants a comprehensive list of JS errors / general best practice violations to be shown. (grunt-contrib-jshint)
You have a project with SASS/LESS files which need to be compiled to CSS files on the developer's machine during development. For example, whenever they save a SASS file, you want it to be compiled to a CSS file automatically, for inclusion in your page. (grunt-contrib-sass)
You have a team of front-end developers working on the UI and a team of backend developers working on the backend, and you want the front-end devs to use the backend REST APIs without having to compile and deploy code every time on their own machines. In case you were wondering, this isn't possible with a typical web server setup because XHR isn't allowed to be cross-domain by the browser. Grunt can set up a proxy for you, redirecting XHR requests made against the Grunt connect server on your own system to another system! (grunt-contrib-proxy, grunt-contrib-connect)
I do not think your shell script can do ALL of these. To summarize: yes, setting up a Gruntfile.js is tedious for someone who has had little exposure to JavaScript or is new to NodeJS, and I went through the same pains as a learner, but Grunt is an amazing piece of software. DO invest the time to set up a proper Gruntfile.js for your front-end project (a trimmed-down example is sketched below), and you'll thank god for making your life a lot easier :)
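To make a few of the scenarios above concrete, a Gruntfile covering the SASS compilation, linting and live-reload cases could look roughly like this (the scss/, css/ and js/ paths are placeholders):

// Gruntfile.js - sketch covering sass, jshint and watch/livereload; paths are illustrative
module.exports = function (grunt) {
  grunt.initConfig({
    sass: {
      dev: {
        files: { 'css/app.css': 'scss/app.scss' }   // compile SASS to CSS
      }
    },
    jshint: {
      all: ['js/**/*.js']                           // lint every source file
    },
    watch: {
      options: { livereload: true },                // reload the browser after each task
      styles: { files: ['scss/**/*.scss'], tasks: ['sass'] },
      scripts: { files: ['js/**/*.js'], tasks: ['jshint'] }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('default', ['sass', 'jshint', 'watch']);
};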
The Advantage vs shell script:
If you write a shell script for every one of these tasks, it is tedious to maintain and then customize for every one of your needs. Gruntfile.js is actually pretty easy: there is a config that you init it with, specifying what tasks you want to perform and the sources and targets for each.
The integration with project seed generators such as Yeoman, and with Gulp, is another major factor to consider. Yeoman and Gulp come with Gruntfile.js files with intelligent defaults. For someone who is the sole UI contributor on their team, this is priceless to me!
For someone who is working on front-end technologies, if you have more than one person working with you, it's rather easier for them to get to know Grunt, which is already well documented with a lot of answers on SO, than to get to know your shell scripts. This might be a factor in large teams.
The numerous plugins for Grunt extend its base functionality. Unless your shell script is VERY popular and VERY modular, I don't see plugins being built for it. This also extends to the inclusion of new front-end technologies in your project. Say you want to use TypeScript in your project tomorrow: your shell script will need to incorporate this and account for it with your own effort. With Grunt, it's as simple as "npm install <plugin>" and adding a config.
Even though I agree with most of the advantages pointed out in the accepted answer, I still have to consider the disadvantages highlighted by Keith Cirkel in Why we should stop using Grunt & Gulp.
Thus, some advantages are rebutted by Grunt's overheads, and you should at least weigh all of this in your final decision of whether to use Grunt or not.

VS2013: Publish minified bundle created on files outside of the project

I use Visual Studio 2013 and .NET 4.5 for an MVC project.
I've been learning to use AngularJS via several videos on Pluralsight, and one of them walks through the process of using Grunt to clean the output directory and then use ngmin to min-safe the JavaScript files.
My process uses a gruntfile.js to clean and run ngmin against the JavaScript files in my solution, then put them in a directory called app_built (a rough sketch of such a gruntfile is below). This is executed via a batch file in the pre-build step for the project, and then I include the output via a ScriptBundle with IncludeDirectory pointing to the app_built directory. My intent is to use the bundling features of .NET 4.5 to do the rest of the minification and concatenation of the JavaScript after all the files have been min-safed via Grunt.
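The gruntfile itself isn't shown in the question, but a clean-then-ngmin setup like the one described might look roughly like this (a sketch assuming the grunt-contrib-clean and grunt-ngmin plugins; the Scripts/app source folder is a placeholder):

// gruntfile.js - sketch of a clean + ngmin pipeline; source paths are illustrative
module.exports = function (grunt) {
  grunt.initConfig({
    clean: ['app_built'],        // wipe the output directory first
    ngmin: {
      dist: {
        expand: true,
        cwd: 'Scripts/app',      // placeholder source folder
        src: ['**/*.js'],
        dest: 'app_built'        // min-safed files picked up by the ScriptBundle
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-clean');
  grunt.loadNpmTasks('grunt-ngmin');

  grunt.registerTask('default', ['clean', 'ngmin']);
};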
I specify the path to the min-safed files with the following:
bundles.Add(new ScriptBundle("~/bundles/minSafed")
.IncludeDirectory("~/app_built/", "*.js", true));
If I run this on my local machine, it runs fine without a hitch. The Javascript is minified and bundled as I'd expect and the resulting web application runs fine as well.
If I publish the website to a remote server, I get a server error that the "Directory does not exist. Parameter name: directoryVirtualPath". I assume this error is saying that it's unable to find the directory populated with my many *.js files. I also assume this is because they weren't published since they aren't part of the solution, even though the folder they reside in is a part of the solution (it's just empty within the solution explorer in Visual Studio).
If my assumption is correct, what can I do to add these files to my solution so they'll be published with the rest of my web application with minimal effort on my end each time?
And if I'm incorrect in the assumption, what I can I do to resolve this otherwise?
Thanks!
I never did find a great way of going about this. I found information at http://sedodream.com/2010/05/01/WebDeploymentToolMSDeployBuildPackageIncludingExtraFilesOrExcludingSpecificFiles.aspx that seems related, but I was unable to make it work.
Instead, since I knew the name of the output file, I simply created such an empty file in my project and referenced it where I needed to. I then had the pre-build task replace the contents of that file with the externally minified version, and it would be packaged with the project as necessary, so it works well enough.
