Strategy for JavaScript files on Azure

I am working on a strategy for storing and deploying JavaScript files on Azure (ASP.NET web role)
My requirements are:
1. Use minified versions in production
2. Use the original (non-minified) local versions in the development environment (to simplify debugging)
3. A simple build/deployment process (VS2010)
4. A simple update process (my files will change from time to time)
There is a great discussion here: Visual Studio 2010: Publish minified javascript files instead of the original ones. However, that discussion does not take into account the benefits Azure can offer or working with multiple instances.
I am considering deploying my minified JavaScript files to blob storage and using those in the production version. They will be stored with a large max-age Cache-Control header for client-side caching, and the filenames will include the version (so I can easily update). I welcome feedback on this strategy.
Thus, in development the rendered HTML would refer to a local script file, i.e.:
<script src="Scripts/myjavascript-0.0.1.js" type="text/javascript"></script>
But in production the rendered HTML should refer to the minified version:
<script src="http://myblob.blob.core.windows.net/Scripts/myjavascript-0.0.1.js" type="text/javascript"></script>
My main question, though, is how best to achieve automatic switching of the paths between development and production. Or would a custom handler be the normal route (and if so, how would that work - I don't want each instance to reload from the blob on each request)?

Regarding #1 and #2:
I discuss a strategy for this here. The basic idea is to use a helper function to emit the script tag. The function can construct a link to the debug files when in debug mode, and to the minified files otherwise (which also makes it easy to test locally with the minified files). The same function can handle adding a version to the path for cache invalidation, etc.
Regarding #3:
Add the minification as an after-build step. I added this to my .csproj (which is just an MSBuild file); it uses the YUI Compressor:
<Target Name="AfterBuild" Condition="'$(Configuration)' != 'Debug'">
  <!-- remove previous minified files -->
  <Exec Command="del $(ProjectDir)Styles\*-min.css" />
  <Exec Command="del $(ProjectDir)Scripts\*-min.js" />
  <!-- Minify javascript and css, unless we're in Debug -->
  <Exec Command="java -jar $(ProjectDir)..\yuicompressor\build\yuicompressor-2.4.6.jar -o .css$:-min.css --charset utf-8 $(ProjectDir)Styles\*.css" />
  <Exec Command="java -jar $(ProjectDir)..\yuicompressor\build\yuicompressor-2.4.6.jar -o .js$:-min.js --charset utf-8 $(ProjectDir)Scripts\*.js" />
</Target>
This will create minified *-min.js and *-min.css files in ~\Scripts and ~\Styles, respectively.
Warning: because of a bug in version 2.4.6 of the YUI Compressor, the above won't work if there is only one .css or .js file in the directory.

Your basic plan sounds good. It will even enable you to make use of the CDN with very little effort (you just need to replace the path to your storage account with the path to the CDN).
I don't think I'd overthink this too much. As suggested elsewhere, a control is a good way to go. Simply have this control look up a web.config setting to get the root directory for your scripts and prepend it to the path of each script (in your local version this setting would be empty). To make sure you don't have to mess around changing the config for every deploy, I'd use config transformations so it just happens automatically.

For switching the URL of the script links dynamically when running from Azure, you should put all the script blocks inside a user control and use that user control in all the pages. You should not put the script links directly in the aspx/master pages; instead, put them in an ascx and use the ascx. This keeps the common script links in a single file, and when you need to make a site-wide change, you just change the ascx.
Another approach is to use my HTTP handler, which changes the URLs of the scripts from relative to absolute in order to facilitate downloading scripts from a different domain than the one the site is running on. You can of course use it to prepend the absolute URL of your Azure site.
http://omaralzabir.com/loading_static_content_in_asp_net_pages_from_different_domain_for_faster_parallel_download/

You may want to check out the Windows Azure CDN helpers project. It should do pretty much everything you are asking for. You can set in its config whether you want your minified files to be deployed automatically to blob storage or to stay on the web roles.
http://cdnhelpers.codeplex.com/
http://ntotten.com/2011/06/windows-azure-cdn-helpers/
http://nuget.org/List/Packages/CdnHelpers.Razor
http://nuget.org/List/Packages/CdnHelpers.ASPX

Related

How to have web page include all js files in a directory tree?

I'm trying to figure out how to set up a JavaScript development project that will allow me to factor my code into several files. I plan to run this eventually on a client web browser, but first I need to set up an efficient development environment.
I've used other programming languages before that let you keep a large number of files in a subdirectory and then let you compile everything into your final deployable (or have an interpreter do something similar). Javascript doesn't seem to allow this - I have to manually add a <script> tag for each js file to the head of my web page to get the browser to load it. This can get very hard to manage once you have more than about 10 files that you need to keep track of. It would be nice if I could write <script src="myscripts/**/*.js"> to suck in everything, at least during development time.
I've found Grunt 'uglify' which looks like it would be a handy tool for creating a final file for deployment, but during development I need to keep everything separate so I can debug properly. Is there any way to have my web page load every js file in my development directory?
As others have mentioned in comments, Webpack (or similar) is the way to go. It bundles up all of your relevant code, and can also process it for minification.
I want to address this comment though:
but during development I need to keep everything separate so I can debug properly
You don't need, or want, that. While developing, you want to be testing against the same sort of build process you'll use in a deployment later. So, how can you easily debug your compiled scripts? A .map file (a source map) gets built alongside the bundle, which tells the browser what your original code looked like.
Chrome and other browsers will automatically load and parse this file when you open your developer tools. Then, you'll be able to see the original source code (and in the original language, for anything transpiled) and debug it as if it were not bundled in the first place.
Don't deploy this map file, unless you want external users to be able to see all your original source code.
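A rough sketch of that workflow from the command line (assuming a default webpack setup with the entry point at src/index.js - adjust the names to your project):
# one-time: add webpack to the project (requires Node.js/npm)
npm install --save-dev webpack webpack-cli
# development build: bundle everything reachable from src/index.js into
# dist/main.js and emit dist/main.js.map for debugging
npx webpack --mode development --devtool source-map
# production build: minified bundle, no source map emitted by default
npx webpack --mode production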

How do I automatically install web assets like bootstrap, jquery and font-awesome without using CDN?

I want to know whether installing jQuery/Bootstrap/Font Awesome can be done automatically, instead of installing them via npm and then manually dragging the code to my css/js/fonts folders.
Is there no program that can update them and automatically move them to the correct folders?
I know people are saying that you can just manually drag the JavaScript file to the correct location, but Bootstrap, for example, consists of more than a single JavaScript file; it includes font and CSS files.
If I were to include them in this manner:
\web
-\css
--\app
---\main.css
--\font-awesome
---\font-awesome.min.css
-\fonts
etc.
Then it wouldn't work, because Font Awesome expects its fonts to be in a folder alongside its CSS (its stylesheet references ../fonts/).
jQuery, Bootstrap and Font Awesome are not software or applications that you install into a webpage. They are just CSS and JavaScript files, so they are like any other JavaScript or CSS file you may have written from scratch for your webpage - except that they are well maintained, highly optimized and made for a particular purpose. (Bootstrap's primary purpose, for example, is to provide a framework for making webpages responsive.)
To include them in a webpage, all you have to do is tell the HTML file to use those files. This is done by linking them into the HTML using the <script> tag and its src attribute.
In the src attribute you may provide a URL to a location on the web containing the file, or a relative local path to a location on your server or local machine containing the file. So yes, you can manually drag the files into your css/js folder and include them using that path. No, I'm not aware of any software to automate the process. But you only need to place a file in one location for an entire webpage and its sub-pages to access it, so it's not a very intensive process.
As for why CDNs host such files for public access, an insight is given here: How Cloudflare provides free service. And security? Well, they are pretty darn secure; it is literally their job to provide secure access to the files they host. And why people use a CDN in the first place comes down to, in short, performance.
Update:
As for how to include files in your HTML, it goes like this (Bootstrap example) :
<link rel="stylesheet" href="static/bootstrap/css/bootstrap.min.css">
<script type="text/javascript" src="static/bootstrap/js/bootstrap.min.js"></script>
You need to provide the path to the required CSS and JS files. In the case of Bootstrap these two are the only ones you need to include to get full functionality of the library.
I think it is not a good idea to use local files instead of CDNs unless you are working offline.
Here you can read about CDNs vs Local Files:
https://halfelf.org/2015/cdn-vs-local/
Multiple files on CDN vs. one file locally
https://www.sitepoint.com/7-reasons-to-use-a-cdn/
Although there is another link that argues just the opposite:
7 Reasons NOT to use a Content Delivery Network
Nevertheless, if you want to use the files locally you can follow these steps:
1. Go to the CDN link in your project.
2. Copy the link from the src or href attribute and open it in your browser.
3. Save the file locally and reference that local file in your project.
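A rough command-line sketch of those steps (the URLs and versions below are illustrative - use whatever your CDN links actually point at):
# create folders that preserve Font Awesome's expected layout
mkdir -p web/js web/css web/font-awesome/css web/font-awesome/fonts
curl -L -o web/js/jquery.min.js https://code.jquery.com/jquery-3.6.0.min.js
curl -L -o web/css/bootstrap.min.css https://cdn.jsdelivr.net/npm/bootstrap@4.6.2/dist/css/bootstrap.min.css
# Font Awesome's CSS looks for its fonts in ../fonts/, so keep them side by side
curl -L -o web/font-awesome/css/font-awesome.min.css https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css
curl -L -o web/font-awesome/fonts/fontawesome-webfont.woff2 https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/fonts/fontawesome-webfont.woff2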

Webstorm JavaScript External Libraries vs Project Directories

In IIS and therefore VS, there are virtual directories which allow simplified, virtual, relative referencing in script tags. They are handy. In WebStorm you can get the same effect with Project Directories and then marking your project root as a Resource Root. If you do this, you also get coding assistance in the text editor.
WebStorm also has External Libraries, what is the point of these?
Is this for when you have a link to a CDN in your script tag and you want to get coding assistance? If you already have Project Directories, what is the point of External Libraries?
I've seen this answer and I kind of get the different modes of referencing/inclusion, but I don't get the big picture. What is the core reason for the External Libraries vs the Project Directories?
Is this for when you have a link to a CDN in your script tag and you want to get coding assistance?
Yes, this is the most common case - WebStorm can't use online resources for code assistance; it needs to have the corresponding JavaScript files available locally. So, if you don't like to pollute your project folder with all these library files, you can have them stored outside of your project and set up as libraries.
What is the core reason for the External Libraries vs the Project Directories?
See above - external libraries allow storing library files in an arbitrary location outside your project folder while still getting code completion/highlighting/etc. Please also see the answer you refer to:
Note also that libraries are 'light-weight' as compared to .js files in your project - they are treated read-only, have the inspections turned off. Plus, you can assign documentation URLs to them, enabling external documentation for library code. So, even if you have your library files in your project, it might make sense to add them as libraries
see also this blog post

Best way to keep files that are used in multiple projects in sync?

I have a few files called "helpers.scss", "helpers.js" and "consolerules.js" that I use in every one of my projects. When I'm working on a project I modify one of the files - for example, I will add a function for replacing all occurrences of a string within another string to "helpers.js" - but then when I open my other project I don't have that function.
Or I will add a helper CSS class to helpers.scss in one project and then I don't have it in the other projects.
What is the best way so I can always keep them in sync when I edit them in one of the projects? I was thinking of bower, gists, git, dropbox, google drive or something like that ...
I have used two ways to handle this:
Get a CDN-like server
Have a single version of those files and place them on a server. For example you could have URLs such as:
https://cdn.example.com/css/helpers.css
https://cdn.example.com/js/helpers.js
If you want to support versions (maybe you should?), you can add that to the filename:
https://cdn.example.com/css/helpers-1.3.css
https://cdn.example.com/js/helpers-1.2.js
Or to the path if you view all your files as having one common version:
https://cdn.example.com/1.2/css/helpers.css
https://cdn.example.com/1.2/js/helpers.js
Versioning is useful if you want to test a website with the newest version before using that version on your live site.
This is most certainly the easiest way if you can implement it that way. Now all your other websites will use those URLs instead of local versions of the files:
<link rel="stylesheet" type="text/css" href="https://cdn.example.com/1.2/css/helpers.css"/>
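How the files get onto that server is up to you; a minimal sketch of publishing a new version (the host name and paths here are illustrative) might be:
# publish version 1.2 of the shared files to the CDN-like server
VERSION=1.2
rsync -av css/helpers.css user@cdn.example.com:/var/www/cdn/$VERSION/css/
rsync -av js/helpers.js user@cdn.example.com:/var/www/cdn/$VERSION/js/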
Pull those files at build time
Depending on how you organize your websites (it is really not clear from your question), and assuming you have folders on your machine with the original source, you can bring in those files as required with a script that you run before you upload your sites.
In my case, I like to do that in three steps:
I write the files
I copy the files to a .../build/... folder
I send the .../build/... folder to my test or production server
One reason for this is to generate a build folder that includes exactly what you want, verify it, then send it to your server. That verification only has to be set up once, when you write your script; after that it should not require any additional work.
So one reason to have such a script is that I can compile my files. For example, if you write PHP code, the servers only need the most compressed version of your code (unless you are debugging and need to find line numbers...). The script that generates the build folder could do:
for p in php/*.php
do
  # php -w strips comments and whitespace; redirect the output into the build folder
  php -w "$p" > "build/$p"
done
Now your PHP code on your server may be something like 20% smaller.
Similarly, you could copy your helpers.css file as in:
cp ../helper-project/css/helpers.css build/public_html/css/.
This copies the helpers.css file to your build folder. Since it grabs that file from your unique ../helper-project folder, you will always end up with the latest.
And instead of a simple cp command, you could also minify that file at the same time:
cleancss --remove-empty ../helper-project/css/helpers.css > build/public_html/css/helpers.css
The only problem here is that if you make changes to the helper-project, it won't automatically update all the other projects. You still have to go into each project, run the script(s) that generate the build folder, and copy that folder to your servers. Yet I find this a practical way of doing things, because that way I know when I do the update, I can test the resulting website(s) before going to production, and once I update a production site, I can verify that it is still working just fine.
You can do this with git (or any modern VCS); I assume you are using some sort of VCS for your code.
If you have a project being managed in git, you can even add multiple remotes, such that you can pull in code from multiple sources.
If you are using a VCS like git, then it is just a matter of doing a git pull <remote ref> <branch ref> whenever you want to sync up.
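A minimal sketch of that setup (the remote name, URL and branch are illustrative):
# one-time, in each project that consumes the shared helpers
git remote add helpers https://example.com/you/shared-helpers.git
# whenever you want to sync up
git pull helpers master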
Otherwise, the comments to your question offer some alternatives.

Should I version control the minified versions of my jQuery plugins?

Let's say I write a jQuery plugin and add it to my repository (Mercurial in my case). It's a single file, say jquery.plugin.js. I'm using BitBucket to manage this repository, and one of its features is a Downloads page. So, I add jquery.plugin.js as one of the downloads.
Now I want to make available a minified version of my plugin, but I'm not sure what the best practice is. I know that it should be available on the Downloads page as jquery.plugin.min.js, but should I also version control it each time I update it to reflect the unminified version?
The most obvious problem I see with version controlling the minified version is that I might forget to update it each time I make a change to the unminified version.
So, should I version control the minified file?
No, you should not need to keep generated minified versions under source control.
We have had problems when adding generated files into source control (TFS), because of the way TFS sets local files to be read-only. Tools that generate files as part of the build process then have write access problems (this is probably not a problem with other version control systems).
But importantly, all the:
tools
scripts
source code
resources
third party libraries
and anything else you need to build, test and deploy your product should be under version control.
You should be able to check out a specific version from source control (by tag or revision number or the equivalent) and recreate the software exactly as it was at that point in time. Even on a 'fresh' machine.
The build should not be dependent on anything which is not under source control.
Scripts: build scripts - whether Ant, Make, MSBuild command files or whatever you are using - and any deployment scripts you may have need to be under version control, not just on the build machine.
Tools: this means the compilers, minifiers, test frameworks - everything you need for your build, test and deployment scripts to work - should be under source control. You need the exact versions of those tools to be available to recreate a build at a given point in time.
The book 'Continuous Delivery' taught me this lesson - I highly recommend it.
Although I believe this is a great idea - and stick to it as best as possible - there are some areas where I am not 100% sure. For example the operating system, the Java JDK, and the Continuous Integration tool (we are using Jenkins).
Do you practice Continuous Integration? It's a good way to test that you have all the above under control. If you have to do any manual installation on the Continuous Integration machine before it can build the software, something is probably wrong.
My simple rule of thumb:
Can this be automatically generated during a build process?
If yes, then it is a resource, not a source file. Do not check it in.
If no, then it is a source file. Check it in.
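For instance, the minified plugin from the question can be regenerated during the build, so it does not need to be checked in. A rough sketch using the UglifyJS command-line tool (assuming it is installed, e.g. via npm install --save-dev uglify-js):
# regenerate the minified file from the committed source on each build
uglifyjs jquery.plugin.js --compress --mangle -o jquery.plugin.min.js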
Here are the Sensible Rules for Repositories™ that I use for myself:
If a blob needs to be distributed as part of the source package in order to build it, use it, or test it from within the source tree, it should be under version control.
If an asset can be regenerated on demand from versioned sources, do that instead. If you can (GNU) make it, (Ruby) rake it, or just plain fake it, don't commit it to your repository.
You can split the difference with versioned symlinks, maintenance scripts, submodules, externals definitions, and so forth, but the results are generally unsatisfactory and error prone. Use them when you have to, and avoid them when you can.
This is definitely a situation where your mileage may vary, but the three Sensible Rules work well for me.
