For d3, or any JavaScript package in general, what is the difference between the JS file that contains the entire source code (say, d3.v3.js) and the GitHub repo for it (in the case of d3, it is https://github.com/mbostock/d3)?
What does the GitHub repo contain that the entire source code does not?
I read in Scott Murray's tutorials that the D3 repository contains "all of the component source code". Can someone explain what is meant by 'component'?
Let's look at the Whatever library. It does whatever. The repo for it is located at https://github.com/someone/whatever.js (this is not a real repo).
The repo itself usually contains a variety of info, including documentation, style guides, and code organization. Whatever.js is actually made up of three files: lib/whatever.js, lib/whatever-tools.js, and lib/whatever-xml.js. These get concatenated for actual use, but for development of whatever.js itself it's easier to work with separate files.
Having to deal with commits when everything lives in a single file is absolutely horrible, and pull requests would be even worse.
The distributed version, aka whatever.js and whatever.min.js, is a version of the repo code after it has been processed however it needs to be. In the case of most libraries the files just get concatenated, but for some libraries fancier things happen. The .min.js version is the same file after being run through a minification tool, these days usually UglifyJS2.
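To make that build step concrete, here is a minimal Node sketch of the concatenate-then-minify idea, using the hypothetical whatever.js file names from above (the script name, output paths, and the choice of shelling out to the uglifyjs command line are all illustrative, not how any particular library actually builds):

    // build.js - hypothetical build script for the whatever.js example above
    var fs = require('fs');
    var childProcess = require('child_process');

    // The three source files the library is developed in (see above).
    var sources = [
        'lib/whatever.js',
        'lib/whatever-tools.js',
        'lib/whatever-xml.js'
    ];

    // Concatenate the sources into the single distributed file.
    var combined = sources.map(function (file) {
        return fs.readFileSync(file, 'utf8');
    }).join('\n');
    fs.writeFileSync('whatever.js', combined);

    // Produce the .min.js version by running a minifier over the result.
    // Here we shell out to the uglifyjs CLI (assumed to be on PATH);
    // any minifier slots in the same way.
    childProcess.execSync('uglifyjs whatever.js -o whatever.min.js');

Real libraries usually drive this with a task runner or a makefile rather than a hand-rolled script, but the output is the same pair of files: the readable concatenation and its minified twin.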
Some libraries will not even have all of the code in the main generated file, usually for usage reasons. For example, Angular.js doesn't include the ng-route module in angular(.min).js; you need to include angular-route(.min).js as well. This is for sanity reasons: quite a lot of Angular applications don't need or want the routing system, and it's a fairly big add-on.
It is the same as with any project that has separate development and deployment environments: the GitHub repo is the development environment for d3.js, while d3.v3.js is the compiled library that you use in your product.
Zeke Sonxx's answer is excellent. I'll just add that in the case of JavaScript, because the source code can be run directly, there might be less need for a GitHub repo. But even in the simplest cases, you get to add additional files when needed, keep track of problems and plans in the GitHub issue system, etc. Example: the gexf-parser repo only has one main source file, src/parser.js, but there is a collection of files for testing as well, and a few other useful files. JavaScript can also be "compiled", but it's not compilation in the sense of some languages (C, Java, Clojure, etc.). The application distributed will often be built from many different source files in the repo.
I've just got started with Swagger and NodeJS. I was able to add Swagger to my Node Express application and was also able to generate TypeScript client code with Swagger-Codegen (Typescript-Angular, to be exact).
One problem I have is that the generated code is spread out across many different files. I was hoping it would output a single file, api.ts, containing everything from the API calls to the interfaces/models.
I've been looking for a way to solve this problem because it is hard to read and maintain the generated client code as the backend grows.
Any suggestions or pointers would be much appreciated.
Happy Holiday! Thank you
EDIT: I have been looking for an answer to this for a couple of days and still haven't found one. I'm currently working on a project with ASP.NET Core, and it has NSwag, which does what I want to achieve with Node Swagger.
TL;DR
You may compile all files into a single *.ts file as shown in this post.
Continuous Integration Approach
The Swagger code generator simplifies maintenance because it allows you to think in terms of continuous integration.
You should not be worried about code review or aesthetics (because it is machine-generated code), but about:
API versioning
Functions, methods and classes signatures
Documentation
If you are working with a CI system such as Jenkins or Ansible, you could automatically deploy the library to a private NPM account (for JS and TS) or a Maven server (for Java and Kotlin).
Keeping the package version number consistently updated will allow the IDE to correctly prompt the user about updates to the API.
Swagger uses Mustache templates for generating the code. For simpler changes you can simply create a copy of one of the built-in templates and modify that.
Then you can use your modified template like this:
swagger-codegen-cli generate -t path/to/template/dir/ -i spec.json -l typescript-angular
The output directory structure, however, cannot be changed using templates alone. For that you'd need a custom codegen module. You can either create your own or modify one of the built-in ones.
I'm very keen to make use of some build techniques in my JavaScript/web app development, such as:
Concatenation
Minification
Image replacement with data URIs
Build vs Source *
App Cache Manifest generation *
It's those last two that I haven't found an answer for yet.
Build vs Source
By this I mean having a "source" version of my HTML and JavaScript that is untouched, so that I do not have to build each time to preview a change. All of my JS files are included with separate <script> tags as usual, with the build step updating those script sections to the final concatenated versions. To be honest, I feel like I'm missing something here with all of these new JavaScript build systems, as this seems like an obvious need, but I can't find anyone else talking about it. How is everyone else dealing with this? Build on each change during development? Surely not.
App Cache Manifest generation
This explains itself - walk through my source tree and build up a manifest and insert it into my <html> tag.
I've searched for these two with no luck - any pointers?
I'd be on the road with a killer build system if it wasn't for those two.
Thanks!!!
Re: Build vs Source
It sounds like you're already familiar with grunt. You may want to consider looking into the grunt node-build-script plugin.
It adds a number of new tasks, notably grunt mkdirs and grunt copy, which duplicate your project directory into a separate staging folder and then copy your optimised project into a publish folder. If I'm not mistaken, this is what you mean by keeping an 'untouched' version of your source files?
Running grunt server will then serve up the contents of your publish files on localhost. You could always point your web server to your initial project directory if you want to examine your application in its unoptimised state.
node-build-script adds a bunch of other super convenient tasks, such as image optimisation, automatic file revving and substitution. It's incredibly easy to use and super customisable.
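To show the moving parts, here is what the same concat-then-minify split can look like when wired up by hand with the standard contrib plugins. This is only an illustrative sketch (it assumes grunt-contrib-concat and grunt-contrib-uglify are installed, and the file paths are placeholders; node-build-script's own configuration is different, so check its docs for the real thing):

    // Gruntfile.js - illustrative sketch, not node-build-script's actual config
    module.exports = function (grunt) {
        grunt.initConfig({
            concat: {
                dist: {
                    // Combine the separate development files...
                    src: ['js/app.js', 'js/helpers.js'],
                    dest: 'publish/js/app.js'
                }
            },
            uglify: {
                dist: {
                    // ...then minify the combined file for production.
                    src: 'publish/js/app.js',
                    dest: 'publish/js/app.min.js'
                }
            }
        });

        grunt.loadNpmTasks('grunt-contrib-concat');
        grunt.loadNpmTasks('grunt-contrib-uglify');

        // 'grunt build' leaves the source tree untouched and writes to publish/.
        grunt.registerTask('build', ['concat', 'uglify']);
    };

The point is the same as node-build-script's staging/publish split: your editable source files never get touched, and the optimised output lands in a separate directory that you serve in production.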
I have a basic single page template which uses node-build-script which also may be of interest.
Re: App Cache Manifest generation
I believe this used to be part of node-build-script but has since been removed; see 1, 2.
Nothing would stop you from creating a custom grunt task that utilised something like confess.js, however.
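For example, a small stand-alone Node script along these lines (the directory name, output path, and manifest layout are placeholders of my own choosing) could walk your built tree and emit a manifest that you then reference from your <html> tag:

    // generate-manifest.js - hedged sketch of a manifest generator, not an existing tool
    var fs = require('fs');
    var path = require('path');

    // Recursively collect every file under a directory.
    function walk(dir) {
        return fs.readdirSync(dir).reduce(function (files, name) {
            var full = path.join(dir, name);
            return fs.statSync(full).isDirectory()
                ? files.concat(walk(full))
                : files.concat(full);
        }, []);
    }

    // List everything in the publish directory, relative to it,
    // skipping the manifest file itself.
    var entries = walk('publish')
        .filter(function (file) { return !/\.appcache$/.test(file); })
        .map(function (file) { return path.relative('publish', file); });

    // The timestamp comment forces clients to re-fetch when anything changes.
    var manifest = ['CACHE MANIFEST', '# build ' + Date.now(), 'CACHE:']
        .concat(entries)
        .concat(['NETWORK:', '*'])
        .join('\n');

    fs.writeFileSync(path.join('publish', 'app.appcache'), manifest + '\n');

Wrapping something like that in grunt.registerTask would make it a first-class step of your build.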
Finally, it looks like Google's upcoming Yeoman might be worth keeping an eye on if you're not already!
Let's say I write a jQuery plugin and add it to my repository (Mercurial in my case). It's a single file, say jquery.plugin.js. I'm using BitBucket to manage this repository, and one of its features is a Downloads page. So, I add jquery.plugin.js as one of the downloads.
Now I want to make available a minified version of my plugin, but I'm not sure what the best practice is. I know that it should be available on the Downloads page as jquery.plugin.min.js, but should I also version control it each time I update it to reflect the unminified version?
The most obvious problem I see with version controlling the minified version is that I might forget to update it each time I make a change to the unminified version.
So, should I version control the minified file?
No, you should not need to keep generated minimized versions under source control.
We have had problems when adding generated files into source control (TFS), because of the way TFS sets local files to be read-only. Tools that generate files as part of the build process then have write access problems (this is probably not a problem with other version control systems).
But importantly, all the:
tools
scripts
source code
resources
third party libraries
and anything else you need to build, test and deploy your product should be under version control.
You should be able to check out a specific version from source control (by tag or revision number or the equivalent) and recreate the software exactly as it was at that point in time. Even on a 'fresh' machine.
The build should not be dependent on anything which is not under source control.
Scripts: build scripts, whether Ant, make, MSBuild command files or whatever you are using, and any deployment scripts you may have, need to be under version control - not just on the build machine.
Tools: this means the compilers, minimizers, test frameworks - everything you need for your build, test and deployment scripts to work - should be under source control. You need the exact versions of those tools to be available to recreate the software at a point in time.
The book 'Continuous Delivery' taught me this lesson - I highly recommend it.
Although I believe this is a great idea - and I stick to it as best as possible - there are some areas where I am not 100% sure, for example the operating system, the Java JDK, and the Continuous Integration tool (we are using Jenkins).
Do you practice Continuous Integration? It's a good way to test that you have all the above under control. If you have to do any manual installation on the Continuous Integration machine before it can build the software, something is probably wrong.
My simple rule of thumb:
Can this be automatically generated during a build process?
If yes, then it is a resource, not a source file. Do not check it in.
If no, then it is a source file. Check it in.
Here are the Sensible Rules for Repositories™ that I use for myself:
If a blob needs to be distributed as part of the source package in order to build it, use it, or test it from within the source tree, it should be under version control.
If an asset can be regenerated on demand from versioned sources, do that instead. If you can (GNU) make it, (Ruby) rake it, or just plain fake it, don't commit it to your repository.
You can split the difference with versioned symlinks, maintenance scripts, submodules, externals definitions, and so forth, but the results are generally unsatisfactory and error prone. Use them when you have to, and avoid them when you can.
This is definitely a situation where your mileage may vary, but the three Sensible Rules work well for me.
Multiple sites reference combining JavaScript and CSS files to improve web page performance, including examples of using ANT build scripts to concatenate the files prior to deployment.
I've searched, and haven't found any information on how to automate updating the references to those files in HTML and other documents. I am looking to avoid hacking together something error prone, and want to learn from others who have automated builds in the past.
Are there automated tools in the wild to complete this task that I'm not seeing? Are there recommended processes to update the script and link tags in HTML? Can these solutions be integrated with ANT or similar build tools?
There sure is and it's a smart thing to do.
I found a PHP solution; I don't know if that's okay for you, but even if it isn't you can still read its source (it's not difficult) and learn a lot. The solution works like this:
Rewrite your requests like this: from css/main.css and css/skin.css to css/main.css,skin.css (of course you can combine many more).
Use Apache's mod_rewrite to redirect this request to a script (in our case combine.php) that will combine all the files into a single one.
The script combines all the files and sends the combined file. Then it saves it to a cache folder.
Next time around it checks if there is an up-to-date version of the cache and serves that one. If the latest file modification time has changed, it discards the cache.
The solution works great and it even makes use of HTTP cache headers and spits out ETags, which you should do anyway.
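To make the idea concrete outside of PHP, here is a rough Node sketch of the same combine-and-cache flow described above (everything here is an assumption: the port, the css/ and cache/ folder names, and the comma-separated URL format; it also skips the ETag and cache-header handling the real script does):

    // combine.js - rough sketch of the combine-and-cache idea, not the PHP script itself
    var http = require('http');
    var fs = require('fs');
    var path = require('path');

    if (!fs.existsSync('cache')) fs.mkdirSync('cache');

    http.createServer(function (req, res) {
        // Expect URLs like /css/main.css,skin.css
        // (note: no path sanitising here; a real version must reject '..' in names)
        var match = /^\/css\/(.+)$/.exec(req.url);
        if (!match) { res.statusCode = 404; return res.end(); }

        var names = match[1].split(',');
        var cacheFile = path.join('cache', names.join('_'));

        // Serve the cached combination only if it is newer than every source file.
        var fresh = fs.existsSync(cacheFile) && names.every(function (name) {
            return fs.statSync(path.join('css', name)).mtime <=
                   fs.statSync(cacheFile).mtime;
        });

        if (!fresh) {
            var combined = names.map(function (name) {
                return fs.readFileSync(path.join('css', name), 'utf8');
            }).join('\n');
            fs.writeFileSync(cacheFile, combined);
        }

        res.setHeader('Content-Type', 'text/css');
        res.end(fs.readFileSync(cacheFile));
    }).listen(8080);

In production you would sit this behind the same kind of rewrite rule, or do the combining once at build time instead of per request.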
You are correct this is a great way to speed up page loading. It will even work in conjunction with a CDN, which the other poster recommended.
Here is a small script that will pack multiple files in to one for deployment. It supports both JS and CSS, and will even "minify" them by removing whitespace, etc. Just hook this in to your build and deploy scripts.
juicer: http://cjohansen.no/en/ruby/juicer_a_css_and_javascript_packaging_tool
What's even better, it will follow JS and CSS import statements, so you only need to point your HTML files to the loader file and it will work in both development and production. (Assuming you replace the loader file with the combined file on deployment.)
There are others, including some run-time solutions. But it sounds like you have a build process in place anyway.
As far as HTML updating goes, if you still need it: automated deployments are very popular in the Ruby world, so you may find some standalone utilities to help even for non-Ruby projects (as above). Methinks this would be best handled by your own project's template language, though (with a static resource revision id, or some such).
Good luck, and let us know what you find.
I think what you really want is a CDN (Content Delivery Network).
Read about it here
http://developer.yahoo.com/performance/rules.html
http://en.wikipedia.org/wiki/Content_delivery_network
I'm starting to use the Dojo toolkit and it has rich features like Dijits and themes which are useful but take forever to load.
I've got a good internet connection but those with slower connections would experience rather slow page loads.
This is also a question about heavy vs light frameworks. If you make heavy use of widgets, what are some techniques to keep page load times down?
Dojo has a build system that will drastically improve load times. Take a look at one of the dojo books or the online docs & look at layered builds. In order to do a build, you need to have the "source" (or "full") version of dojo, which has the build tool included -- you can tell if you have this by the presence of the 'util' directory (which is at the same level as dojo, dijit, dojox). If you don't have the full version, go back to the dojo site & delve down into the download area -- it's not completely obvious perhaps.
Anyway, if you have the right version, you basically just need to make a "build profile" file (or files ... aka a layered build), which is essentially your list of dojo.requires that you would normally have in your html. The build system will jam all the javascript code for all the dijits, dojox stuff, etc. together into a "layered build" (a file) and it will run shrinksafe on it, which sort of minifies the code (removes whitespace, shortens names, etc). It will also do some of this to the css files. Aside from making things much smaller, you get just a single file for all the js code (or a few files if you do more than one layer, but most of the time a single layer suffices).
This will improve your load times at least ten-fold, if not more. It might take you a bit of reading to get down the format of the profiles and the build command itself, but it's not too hard really. Once you create a build file, name it something obvious like "mystuff" and then you can dojo.require the "mystuff" file (which will be in the new build directory that is created when you build, then underneath that & hanging out with the dojo.js file in the dojo directory). Requiring in your built file will satisfy all the dojo.require's you normally do (assuming you have them all listed in the profile to build) and things will load very fast.
Here's a link to the older build docs, which mostly still hold true:
http://www.dojotoolkit.org/book/dojo-book-0-9/part-4-meta-dojo/package-system-and-custom-builds
Here's the updated docs (though perhaps a little incomplete):
docs.dojocampus.org/build/index
It reads as harder than it really is ... use the layer.profile file in the profiles directory as a starting point. Just put in a couple of things, then do a build and see if you get the release directory created (which should be at the same level as dojo, dijit, etc.); it will have the entire dojo system in it (all minified) as well as your built (layered) stuff. Much faster.
Dylan Tynan
It's not that big (28k gzipped). Nevertheless you can use Google's hosted version of Dojo. Many of your users will already have it cached.
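For reference, pulling Dojo from Google's CDN is just a script tag along these lines (substitute whichever hosted version you are actually targeting):

    <script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/dojo/1.10.4/dojo/dojo.js"></script>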
Once you create a build file, name it something obvious like "mystuff" and then you can dojo.require the "mystuff" file (which will be in the new build directory that is created when you build, then underneath that & hanging out with the dojo.js file in the dojo directory). Requiring in your built file will satisfy all the dojo.require's you normally do (assuming you have them all listed in the profile to build) and things will load very fast
Slight correction -- you don't dojo.require that file, you reference it in an ordinary script tag.
<script type="text/javascript" src="js/dojo/dojo/dojo.js" ></script>
<script type="text/javascript" src="js/dojo/mystuff/mystuff.js"></script>
For the directory layout I put the built file "mystuff.js" into the same directory as my package. So at the same level as dojo, dojox, and dijit, I would have a directory named "mystuff", and within that I have MyClass1.js and MyClass2.js. Then a fragment from the profile.js file for the build looks like:
layers: [
    {
        name: "../mystuff/mystuff.js",
        dependencies: [
            "mystuff.MyClass1",
            "mystuff.MyClass2"
        ]
    },
    ...
I know this is an old thread. But I'm posting this answer for the benefit of other users like me who may read this.
If you are serving from Apache there are other factors too. These settings can make a huge difference: MaxClients and MaxRequestsPerChild. You will need to tweak them based on the resources available to the server/machine serving the files.
Changing this worked really well for me.
Using the Google CDN is also a good option, although it may not be practical in some situations.
Custom build also has an effect as pointed out in other answers.