I'm currently working with Ember's build tool, Ember CLI, so there are a lot of JavaScript source files. But this is a more broadly applicable question.
After a source file changes, there's some time spent building and doing file I/O before tests can be run or a browser refresh can happen.
Could the file reading be minimized, and the file-writing step skipped entirely, during development?
So the workflow would be:
Start server
Process each file (e.g. CoffeeScript -> JavaScript -> ES6 transpile -> lint).
Store each processed file output in-memory.
When a file is changed, process that one file and swap it out in memory. (No file writing.)
Concatenate the in-memory files and serve directly to the browser. (Files that didn't change don't need to be read in order to concatenate into a single JS source file.)
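A rough sketch of that idea in Node (the compile step, paths, and port below are placeholders, it assumes a flat source directory, and a real pipeline does far more):

// Sketch only: an in-memory rebuild-and-serve loop.
const fs = require('fs');
const http = require('http');
const path = require('path');

const srcDir = './app';  // assumed flat directory of source files
const cache = new Map(); // filename -> processed output, held in memory

function compile(file) {
  const source = fs.readFileSync(path.join(srcDir, file), 'utf8');
  // stand-in for CoffeeScript -> JavaScript -> ES6 transpile -> lint
  cache.set(file, '/* ' + file + ' */\n' + source);
}

// Initial build: read and process every file once.
for (const file of fs.readdirSync(srcDir)) compile(file);

// On change, reprocess just that one file and swap it in memory. No writes.
fs.watch(srcDir, (event, file) => { if (file) compile(file); });

// Concatenate from memory and serve; unchanged files are never re-read.
http.createServer((req, res) => {
  res.setHeader('Content-Type', 'application/javascript');
  res.end([...cache.values()].join('\n'));
}).listen(4200);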
Would this be noticeably faster than having all of the file I/O?
Related
How are the files in the Node.js source code's lib directory used by Node? Does the node executable interpret the files in the library before running, or are these JavaScript files somehow used during compilation of the node executable?
Those files are what are termed internal JavaScript files. They are packaged into the node executable, and Node.js knows how to retrieve them from the executable when they're needed while a Node.js app is running. Executable files contain a resource system so that, in addition to code, they can also carry other types of resources (text, images, dialogs, etc.).
When you call require() in a Node.js script, the name you're asking for is checked against a list of known internal script filenames. If it matches, the source is fetched from its internal location in the executable file, not from a separate file in your local file system. Similarly, if the require() comes from within one of these internal files, Node knows to look for the required file in its internal location too.
They are run as JavaScript at execution time; they are not precompiled into something other than JavaScript. The main difference is that they are JavaScript script resources contained within the node executable rather than files loaded from the file system.
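You can see the distinction from a script: require.resolve() returns the bare module name for built-ins rather than a file path. A small illustration (./app is just an assumed local file sitting next to the script):

// Built-in ("internal") modules resolve by name, not to a file on disk.
const fs = require('fs'); // fetched from sources embedded in the node binary

console.log(require.resolve('fs'));    // prints "fs" -- no file path, it's internal
console.log(require.resolve('./app')); // prints an absolute path to ./app.js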
I have the following in my Index.cshtml:
@Scripts.Render("/signalr/hubs")
In my BundleConfig.cs, I have the following:
.Include("~/Scripts/jquery.signalR-{version}.min.js")
With EnableOptimizations on, I get a nicely bundled vendor? package. But in my Sources panel, I see the /signalr/hubs script loaded in full.
Why is this raw, unminified JS getting loaded? How do I bundle/minify it?
SignalR's proxy scripts are dynamically generated at runtime at /signalr/hubs by default. They're typically small, on the order of a couple of kilobytes or less, so minifying them will not yield any meaningful performance benefit (perhaps none at all if the script already fits into a single Ethernet frame).
Additionally, the proxies cannot have their internal symbols/identifiers minified, because they expose a "public API" that your code consumes: the dynamic (or interfaced) "client method" calls inside your Hub class are transferred over the pipe by name, so those names must be preserved for the system to work.
Finally, IIS is usually configured to gzip-compress certain dynamically generated content anyway, and this includes the SignalR proxy scripts; further minification can be counterproductive (the entropy of minified scripts can be higher than that of unminified scripts, so they compress less well).
But if you believe you can safely minify them, or if you want to bundle them, and you're certain you don't need the benefit of dynamically generated proxies to keep up with rapidly changing requirements during development, then you can generate them offline:
https://learn.microsoft.com/en-us/aspnet/signalr/overview/guide-to-the-api/hubs-api-guide-javascript-client
How to create a physical file for the SignalR generated proxy
As an alternative to the dynamically generated proxy, you can create a physical file that has the proxy code and reference that file. You might want to do that for control over caching or bundling behavior, or to get IntelliSense when you are coding calls to server methods.
1. Install the Microsoft.AspNet.SignalR.Utils NuGet package.
2. Open a command prompt and browse to the tools folder that contains the SignalR.exe file. The tools folder is at the following location: packages\Microsoft.AspNet.SignalR.Utils.2.1.0\tools
3. Run signalr ghp /path:[path to the .dll that contains your Hub class]. This command creates a file named server.js in the same folder as signalr.exe.
4. Put the server.js file in an appropriate folder in your project, rename it as appropriate for your application, and add a reference to it in place of the "signalr/hubs" reference.
I have been trying to find an answer to why webpack cares about module loading on the backend. Is there a reason why this may be needed?
Does JSPM do backend module loading as well?
Assuming your first question is along the lines of "Why pre-bundle JavaScript code for the client?"
There are many reasons for module bundling. A few:
Simple file aggregation: Bundling related code makes many tasks easier and more intuitive. Instead of deploying a large directory tree of files, after bundling you may deploy just a single bundle file.
Loading performance: Individually loading dependencies that live in separate files on the client side has historically been very slow. Each file must be fetched, parsed, and evaluated separately and, depending on the module system used, may incur considerable delay while waiting for dependencies to be discovered and loaded.
Media type abstraction: Bundlers typically provide ways to bundle non-JavaScript content. Including assets like images and stylesheets is convenient and encourages explicit, clear dependency declarations by the parts of your application that use them.
Tree shaking: By analyzing the dependencies among modules and code, it's often possible to include only what the application actually needs and so reduce the size of your overall code base. This isn't inherently a characteristic of bundling, but it is commonly done because there is some notion of a build step.
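As a small illustration of what tree shaking relies on (the module and function names here are made up):

// utils.js -- two exports, only one of which is ever imported.
export function formatDate(d) { return d.toISOString(); }
export function neverCalled() { /* large, unused helper */ }

// main.js -- because the import is static, a bundler can prove that
// neverCalled() is unreachable and drop it from the output bundle.
import { formatDate } from './utils.js';
console.log(formatDate(new Date()));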
Regarding your second question:
JSPM does offer this functionality. It can be done on the command line with the jspm bundle command.
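For example, with an entry module and output file name that are purely illustrative:

jspm bundle app/main build.js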
The simplest reason is performance. Opening and closing a file is slow relative to the time it takes to actually send (stream) its contents, so the fewer open and close operations involved, the faster the server can send the files requested. By reducing the number of files that make up a JavaScript/web project, the browser finishes fetching them sooner and can start processing them for the end user.
What a good build process can do for your web project goes beyond simply adding all your JS files together: tools such as JSPM can also bring CSS and HTML files together into one bundle.js file, further improving your end-user experience.
I use Visual Studio 2013 and .NET 4.5 for an MVC project.
I've been learning to use AngularJS via several videos on Pluralsight, and one of them walks through the process of using Grunt to clean the output directory and then use ngmin to min-safe the JavaScript files.
My process uses a gruntfile.js to clean and run ngmin against the JavaScript files in my solution, then put the output in a directory called app_built. This is executed via a batch file in the project's pre-build step, and I then include the files via a ScriptBundle with IncludeDirectory pointing to the app_built directory. My intent is to use the bundling features of .NET 4.5 to do the rest of the minification and concatenation of the JavaScript after all the files have been min-safed via Grunt.
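A simplified version of the gruntfile.js (the real paths and target names differ; this is only a sketch of the shape):

// gruntfile.js -- clean the output directory, then min-safe the app scripts.
module.exports = function (grunt) {
  grunt.initConfig({
    // Wipe the output directory so stale files don't linger.
    clean: { built: ['app_built/'] },
    // Rewrite Angular injection annotations so later minification is safe.
    ngmin: {
      app: {
        expand: true,
        cwd: 'Scripts/app',
        src: ['**/*.js'],
        dest: 'app_built/'
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-clean');
  grunt.loadNpmTasks('grunt-ngmin');
  grunt.registerTask('default', ['clean', 'ngmin']);
};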
I specify the path to the min-safed files with the following:
bundles.Add(new ScriptBundle("~/bundles/minSafed")
.IncludeDirectory("~/app_built/", "*.js", true));
If I run this on my local machine, it works without a hitch. The JavaScript is minified and bundled as I'd expect, and the resulting web application runs fine as well.
If I publish the website to a remote server, I get a server error: "Directory does not exist. Parameter name: directoryVirtualPath". I assume this error is saying that it's unable to find the directory populated with my many *.js files. I also assume this is because they weren't published, since they aren't part of the solution, even though the folder they reside in is (it's just empty in Solution Explorer in Visual Studio).
If my assumption is correct, what can I do to add these files to my solution so they'll be published with the rest of my web application with minimal effort on my end each time?
And if I'm incorrect in that assumption, what can I do to resolve this otherwise?
Thanks!
I never did find a great way of going about this. I found information at http://sedodream.com/2010/05/01/WebDeploymentToolMSDeployBuildPackageIncludingExtraFilesOrExcludingSpecificFiles.aspx that seems related, but I was unable to make it work.
Instead, since I knew the name of the outputted file, I created an empty file with that name in my project and referenced it where needed. I then had the pre-build task replace the contents of that file with the externally minified version, so it gets packaged with the project as necessary. That works well enough.
I want to write CoffeeScript that reads a file at compile time and produces a JavaScript file that initializes a variable with the file's contents.
My app has a bunch of error messages and stubs that need to be maintained independently by copy editors and such, but all of them need to be inline in the JS that is served to the client browser.
Are there 'pre-processor' directives that will let me do this?
JavaScript in and of itself doesn't have this capability, so what you are trying to do is not a good practice. Keep the variables you want to change at compile time in a separate JavaScript file. Then, while building your project, initialize those variables in that separate file and concatenate it into the served (possibly minified) JavaScript file. There are tools available to do this; check out UglifyJS and Grunt.
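For instance, a minimal build step along those lines (the file and variable names are made up):

// build-messages.js -- run before concatenation. Turns messages.json,
// maintained by copy editors, into a JS file that initializes a variable,
// ready to be concatenated into the served bundle.
const fs = require('fs');

const messages = fs.readFileSync('messages.json', 'utf8');
fs.mkdirSync('build', { recursive: true }); // ensure the output dir exists
fs.writeFileSync('build/messages.js',
  'var APP_MESSAGES = ' + messages.trim() + ';\n');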