RequireJS optimizer prepend - javascript

I am looking for a way to prepend some information to the minified file.
I've found the option here, but it isn't useful for me because the uglifier runs after the wrapper code has been added.

You could assign a function to the out key to post-process the file. By setting this option, the result will not automatically be written to a file, so you have to do that yourself. For example:
({
    // Let's optimize mainApp.js
    name: "mainApp",
    optimize: "uglify",
    out: function (text) {
        // Transform the compiled result.
        text = '// Stuff to prepend \n' + text;
        var filename = 'outputfile.js';
        // By default, the name is resolved to the current working directory.
        // Let's resolve it to the directory that contains this .build.js:
        filename = path.resolve(this.buildFile, '..', filename);
        // Finally, write the transformed result to the file.
        file.saveUtf8File(filename, text);
    }
})
Note: In the previous snippet, file.saveUtf8File is an internal RequireJS API and path is the path module from the Node.js standard library (available only if you run r.js with Node.js, and not e.g. with Rhino or in the browser).
If you save the previous snippet as test.build.js, create an empty file called mainApp.js and run `r.js -o test.build.js`, then a file called "outputfile.js" will be created with the following content:
// Stuff to prepend
define("mainApp",function(){});

Related

Read environment variables and then replace them in client-side JS when using gulp for building prod or dev code

So let's say I have some code in JS:
const myApiKey = 'id_0001'
But instead of hardcoding it, I want to put it in some bash script with other env vars, read from it, and then replace it in the JS.
So let's say for prod I would read from prod-env.sh, or for dev I would read from dev-env.sh, and then gulp or some other tool does the magic and replaces MY_API_KEY based on whatever is established inside prod-env.sh or dev-env.sh:
const myApiKey = MY_API_KEY
Update: I want to add that I only care about Unix OSes, not Windows. In Go there is a way to read, for example, envVars.get('MY_API_KEY'); I'm looking for something similar, but for JS on the client side.
If you're using gulp, it sounds like you could use any gulp string replacer, like gulp-replace.
As for writing the gulp task(s): if you are willing to import the environment into your shell first, before running node, you can access it via process.env:
const gulp = require('gulp');
const replace = require('gulp-replace');

gulp.task('build', function () {
    return gulp.src(['example.js'])
        .pipe(replace('MY_API_KEY', process.env.MY_API_KEY))
        .pipe(gulp.dest('build/'));
});
If you don't want to import the environment files before running node, you can use a library like env2 to read shell environment files.
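For instance, with env2 the gulpfile itself can load the file before the task runs. A minimal sketch; the file name prod-env.env is illustrative and assumes the KEY=value style that env2 understands:
// gulpfile.js (sketch)
require('env2')('./prod-env.env'); // loads the values into process.env

const gulp = require('gulp');
const replace = require('gulp-replace');

gulp.task('build', function () {
    return gulp.src(['example.js'])
        .pipe(replace('MY_API_KEY', process.env.MY_API_KEY))
        .pipe(gulp.dest('build/'));
});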
Another option would be to use js/json to define those environment files, and load them with require.
prod-env.js
module.exports = {
    MY_API_KEY: "api_key"
};
gulpfile.js
const gulp = require('gulp');
const replace = require('gulp-replace');
const myEnv = require('./prod-env');

gulp.task('build', function () {
    return gulp.src(['example.js'])
        .pipe(replace('MY_API_KEY', myEnv.MY_API_KEY))
        .pipe(gulp.dest('build/'));
});
Also, for a more generic, loopy version of the replace you can do:
gulp.task('build', function () {
    let stream = gulp.src(['example.js']);
    // chain one replace() per environment variable
    for (const key in process.env) {
        stream = stream.pipe(replace('${' + key + '}', process.env[key]));
    }
    return stream.pipe(gulp.dest('build/'));
});
In that last example I added ${} around the environment variable name to make it less prone to accidents. So the source file becomes:
const myApiKey = ${MY_API_KEY}
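Note that the replacement is purely textual, so if the injected value is a string you would normally keep the quotes around the token in the source file, e.g.:
const myApiKey = '${MY_API_KEY}'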
This answer is an easy way to do this for someone who doesn't want to touch the code they are managing. For example you are on the ops team but not the dev team and need to do what you are describing.
The environment variable NODE_OPTIONS can control many things about the node.js runtime - see https://nodejs.org/api/cli.html#cli_node_options_options
One such option we can set is --require which allows us to run code before anything else is even loaded.
So using this you can create an overwrite.js file to perform this replacement on any non-node_modules script files:
const fs = require('fs');
const original = fs.readFileSync;

// set some custom env variables
// API_KEY_ENV_VAR - the value to set
// API_KEY_TEMPLATE_TOKEN - the token to replace with the value
if (!process.env.API_KEY_TEMPLATE_TOKEN) {
    console.error('Please set API_KEY_TEMPLATE_TOKEN');
    process.exit(1);
}
if (!process.env.API_KEY_ENV_VAR) {
    console.error('Please set API_KEY_ENV_VAR');
    process.exit(1);
}

fs.readFileSync = (file, ...args) => {
    if (file.includes('node_modules')) {
        return original(file, ...args);
    }
    const fileContents = original(file, ...args).toString(
        /* set encoding here, or let it default to utf-8 */
    );
    return fileContents
        .split(process.env.API_KEY_TEMPLATE_TOKEN)
        .join(process.env.API_KEY_ENV_VAR);
};
Then use it with a command like this:
export API_KEY_ENV_VAR=123;
export API_KEY_TEMPLATE_TOKEN=TOKEN;
NODE_OPTIONS="--require ./overwrite.js" node target.js
Supposing you had a script target.js
console.log('TOKEN');
It would log 123. You can use this pretty much universally with node, so it should work fine with gulp, grunt, or any others.
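For example, the same approach with a local gulp install might look like this (a sketch; npx is just one way to invoke the local gulp CLI, which is itself a node process, so NODE_OPTIONS is picked up):
export API_KEY_ENV_VAR=123
export API_KEY_TEMPLATE_TOKEN=TOKEN
NODE_OPTIONS="--require ./overwrite.js" npx gulp build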

Using RequireJS with node to optimize creating single output file does not include all the required files

I use FayeJS, and the latest version has been modified to use RequireJS, so there is no longer a single file to link into the browser. Instead, the structure is as follows:
/adapters
/engines
/mixins
/protocol
/transport
/util
faye_browser.js
I am using the following Node.js build script to try to end up with all of the above minified into a single file:
var fs = require('fs-extra'),
    requirejs = require('requirejs');

var config = {
    baseUrl: 'htdocs/js/dev/faye/',
    name: 'faye_browser',
    out: 'htdocs/js/dev/faye/dist/faye.min.js',
    paths: {
        dist: "empty:"
    },
    findNestedDependencies: true
};

requirejs.optimize(config, function (buildResponse) {
    // buildResponse is just a text output of the modules
    // included. Load the built file for the contents.
    // Use config.out to get the optimized file contents.
    var contents = fs.readFileSync(config.out, 'utf8');
}, function (err) {
    // optimization error callback
    console.log(err);
});
The content of faye_browser.js is:
'use strict';

var constants = require('./util/constants'),
    Logging = require('./mixins/logging');

var Faye = {
    VERSION: constants.VERSION,
    Client: require('./protocol/client'),
    Scheduler: require('./protocol/scheduler')
};

Logging.wrapper = Faye;

module.exports = Faye;
As I understand it, the optimizer should pull in the required files, and then if those files have required files it should pull those in too, etc., and output a single minified faye.min.js that contains the whole lot, refactored so no additional server-side calls are necessary.
What happens is faye.min.js gets created, but it only contains the content of faye_browser.js, none of the other required files are included.
I have searched all over the web, and looked at a heap of different examples and none of them work for me.
What am I doing wrong here?
For anyone else trying to do this, I missed that on the download page it says:
The Node.js version is available through npm. This package contains a
copy of the browser client, which is served up by the Faye server when
running.
So to get it you have to pull down the package via npm, go into the install dir, and the pre-built browser client is in the "client" dir...
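For example (a sketch; the exact file name inside the package may differ between Faye versions):
npm install faye
# the pre-built browser client ships inside the package, under the "client" dir
ls node_modules/faye/client/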

How to rename the original files of scripts in index.html using gulp?

I've written a gulp task to rename files so that they can be versioned. The problem is that the actual files referenced by the script tags in index.html are never renamed on disk.
For example, in my index.html:
<script src="pub/main_v1.js"></script>
But if you actually navigate through the build folder to the subdirectory pub, you will find main.js.
Here is the custom gulp task:
const gulp = require('gulp');
const gulpConcat = require('gulp-concat');
const gulpReplace = require('gulp-replace');
const version = require('./package.json').version;

gulp.task('version', function () {
    var vsn = '_' + version + '.js';

    gulp.src('scripts/**/*.js')
        .pipe(gulpConcat(vsn))
        .pipe(gulp.dest('./prodBuild'));

    return gulp.src('./prodBuild/index.html', { base: './prodBuild' })
        .pipe(gulpReplace(/* some regex */, /* append vsn */))
        .pipe(gulp.dest('./prodBuild'));
});
What do I need to fix/add so that the original filename changes to match that in the script tag?
Note: According to the gulp-concat docs, I should be able to find the concatenated files at prodBuild/[vsn], where [vsn] is _v1.js. However, it is nowhere to be found.
Update: The file names are updated properly in index.html, but I can't seem to get the renaming of the original files to work. Here's a snapshot of my build directory:
prodBuild/
pub/
main.js
someDir/
subDirA/
// unimportant stuff
subDirB/
file2.js
file3.js
// ...other files and folders...
EDIT:
The issue is that you return only one of the two streams. The first stream is simply ignored by gulp, since it is not returned. A simple solution: split the work into two tasks and make one depend on the other, like in this SO answer; see the sketch below.
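A minimal sketch of that split, based on the code in the question (gulp 3 style task dependencies; with gulp 4 you would wire the two together with gulp.series instead):
gulp.task('version-scripts', function () {
    var vsn = '_' + version + '.js';
    // return the stream so gulp knows when the concat step has finished
    return gulp.src('scripts/**/*.js')
        .pipe(gulpConcat(vsn))
        .pipe(gulp.dest('./prodBuild'));
});

// runs only after 'version-scripts' has completed
gulp.task('version', ['version-scripts'], function () {
    return gulp.src('./prodBuild/index.html', { base: './prodBuild' })
        .pipe(gulpReplace(/* same regex / replacement as before */))
        .pipe(gulp.dest('./prodBuild'));
});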
Old Answer
This looks like a perfect case for gulp-rename. You could simply pipe your scripts through gulp-rename, like this:
.pipe(rename(function (path) {
    path.basename += vsn;
    path.extname = ".js";
}))
gulp-concat is, AFAIK, made for the concatenation of files, not particularly for renaming them.

How to wait until a file is available in a Jake build (Node.js)?

Is there a way in a Node.js Jake build to wait until a certain file has been copied, and only advance to some operation after the destination file can be found? I think this question pretty much comes down to "is there a way to copy files synchronously in Node.js/Jake?" (Perhaps something other than writing it from scratch with a combination of fs.readSync and fs.writeSync.)
Background:
I'm developing a web app that is run on Node.js (with Express) during development, but will be deployed on a Java server in production. (We use Jade and Stylus in the client and Express enables us to run the app without generating all the HTML files etc. and deploying it after every change.)
I use Jake for making the build, i.e. generating HTML files from Jade files and CSS from Stylus files etc. Now I'm also trying to concatenate all of the app's JavaScript files into one minimized file and change all the HTML files to use that instead of all the separate JS files that are used in "raw" form during development.
However, I now have a problem with that last step. My idea was to copy all of my Jade files into a temporary directory for the deployment build and replace the reference (in a Jade file used as a header on all HTML pages) to a list of all separate JS files to the one that has just been generated by concatenating and minimizing the whole bunch. But as I first copy all of the Jade files to another location (which happens asynchronously) and try to edit one of the files, opening the file always fails since the copy operation hasn't really finished yet.
This is what I have now (in a simplified form) in my jakefile:
var fs = require('fs');
var fse = require('fs-extra');
var path = require('path');
var glob = require('glob');
var Snockets = require('snockets');
var snockets = new Snockets();

// generating the minimized JS file
snockets.getConcatenation(baseDir + '/scripts/all.js', { minify: true }, function (err, allJs) {
    if (err) {
        throw err;
    }
    fs.writeFileSync(generatedJsFileName, allJs);
});

// copying all the Jade files to a temp dir
glob.sync('**/*.*', {
    cwd: srcDir
}).forEach(function (file) {
    var loadPath = srcDir + '/' + file;
    var savePath = targetDir + '/' + file;
    fse.mkdirsSync(path.dirname(savePath));
    fse.copy(loadPath, savePath);
});

// trying to read one of the copied files (which fails, since the file cannot be found yet)
fs.readFile(targetDir + '/views/includes/head.jade', 'utf8', function (err, data) {
    ...
});
This might be a stupid question, and a stupid way to try to solve the problem in the first place. So, also suggestions for a better approach are very welcome.
Update:
I also tried using Parseq, putting each operation (creating the JS file, copying the Jade files, reading one file) in its own function, but even that gives me the same error. If I run the script several times without deleting the target directory of the copy operation in between, the file can be found. So e.g. the path is correct and the problem really seems to be about timing.
I didn't really find an answer to the main question so I don't know if this helps anyone else facing the same problem. But I did find a way to get around the problem.
I ended up using the same original Jade files for the two different conversions, but in the second conversion I use a custom JS function to change the script tag reference to point to the minified file.
I.e.
var data = jade.compile(str, { filename: file, pretty: true })({
    css: function (path) {
        return '<link rel="stylesheet" href="/styles/' + path + '.css" />';
    },
    js: function (path) {
        var name = '<script src="/scripts/';
        if (path == 'all') {
            name += generatedJsFileName;
        }
        else {
            name += path + '.js';
        }
        name += '"></script>';
        return name;
    }
});
It might not be the prettiest workaround but it works.
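For reference, inside the Jade templates those helpers are then called with unescaped buffered output, something like this (illustrative only; the helper names match the snippet above, the arguments are placeholders):
//- head.jade (illustrative)
!= css('main')
!= js('all')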

Is it possible to stop requireJS from adding the .js file extension automatically?

I'm using requireJS to load scripts. It has this detail in the docs:
The path that is used for a module name should not include the .js
extension, since the path mapping could be for a directory.
In my app, I map all of my script files in a config path, because they're dynamically generated at runtime (my scripts start life as things like order.js but become things like order.min.b25a571965d02d9c54871b7636ca1c5e.js, where the hash of the file contents is added for cache-busting purposes).
In some cases, require will add a second .js extension to the end of these paths. Although I generate the dynamic paths on the server side and then populate the config path, I have to then write some extra javascript code to remove the .js extension from the problematic files.
Reading the requireJS docs, I really don't understand why you'd ever want the path mapping to be used for a directory. Does this mean it's possible to somehow load an entire directory's worth of files in one call? I don't get it.
Does anybody know if it's possible to just force require to stop adding .js to file paths so I don't have to hack around it?
thanks.
UPDATE: added some code samples as requested.
This is inside my HTML file (it's a Scala project so we can't write these variables directly into a .js file):
foo.js.modules = {
    order: '#Static("javascripts/order.min.js")',
    reqwest: 'http://5.foo.appspot.com/js/libs/reqwest',
    bean: 'http://4.foo.appspot.com/js/libs/bean.min',
    detect: 'order!http://4.foo.appspot.com/js/detect/detect.js',
    images: 'order!http://4.foo.appspot.com/js/detect/images.js',
    basicTemplate: '#Static("javascripts/libs/basicTemplate.min.js")',
    trailExpander: '#Static("javascripts/libs/trailExpander.min.js")',
    fetchDiscussion: '#Static("javascripts/libs/fetchDiscussion.min.js")',
    mostPopular: '#Static("javascripts/libs/mostPopular.min.js")'
};
Then inside my main.js:
requirejs.config({
    paths: foo.js.modules
});

require([foo.js.modules.detect, foo.js.modules.images, "bean"],
    function (detect, images, bean) {
        // do stuff
    }
);
In the example above, I have to use the string "bean" (which refers to the require path) rather than the direct object reference (like the others use foo.js.modules.bar), otherwise I get the extra .js appended.
Hope this makes sense.
If you don't feel like adding a dependency on noext, you can also just append a dummy query string to the path to prevent the .js extension from being appended, as in:
require.config({
    paths: {
        'signalr-hubs': '/signalr/hubs?noext'
    }
});
This is what the noext plugin does.
requirejs' noext plugin:
Load scripts without appending ".js" extension, useful for dynamic scripts...
Documentation
check the examples folder. All the info you probably need will be inside comments or on the example code itself.
Basic usage
Put the plugins inside the baseUrl folder (usually same folder as the main.js file) or create an alias to the plugin location:
require.config({
    paths: {
        // create alias to plugins (not needed if plugins are on the baseUrl)
        async: 'lib/require/async',
        font: 'lib/require/font',
        goog: 'lib/require/goog',
        image: 'lib/require/image',
        json: 'lib/require/json',
        noext: 'lib/require/noext',
        mdown: 'lib/require/mdown',
        propertyParser: 'lib/require/propertyParser',
        markdownConverter: 'lib/Markdown.Converter'
    }
});

// use plugins as if they were at baseUrl
define([
    'image!awsum.jpg',
    'json!data/foo.json',
    'noext!js/bar.php',
    'mdown!data/lorem_ipsum.md',
    'async!http://maps.google.com/maps/api/js?sensor=false',
    'goog!visualization,1,packages:[corechart,geochart]',
    'goog!search,1',
    'font!google,families:[Tangerine,Cantarell]'
], function (awsum, foo, bar, loremIpsum) {
    // all dependencies are loaded (including gmaps and other google apis)
});
I am using requirejs server side with node.js. The noext plugin does not work for me. I suspect this is because it tries to add ?noext to a url and we have filenames instead of urls serverside.
I need to name my files .njs or .model to separate them from static .js files. Hopefully the author will update requirejs to not force automatic .js file extension conventions on the users.
Meanwhile here is a quick patch to disable this behavior.
To apply this patch (against version 2.1.15 of node_modules/requirejs/bin/r.js):
Save in a file called disableAutoExt.diff or whatever and open a terminal
cd path/to/node_modules/
patch -p1 < path/to/disableAutoExt.diff
add disableAutoExt: true, to your requirejs.config: requirejs.config({disableAutoExt: true,});
Now we can do require(["test/index.njs", ...] ... and get back to work.
Save this patch in disableAutoExt.diff :
--- mod/node_modules/requirejs/bin/r.js 2014-09-07 20:54:07.000000000 -0400
+++ node_modules/requirejs/bin/r.js 2014-12-11 09:33:21.000000000 -0500
@@ -1884,6 +1884,10 @@
//Delegates to req.load. Broken out as a separate function to
//allow overriding in the optimizer.
load: function (id, url) {
+ if (config.disableAutoExt && url.match(/\..*\.js$/)) {
+ url = url.replace(/\.js$/, '');
+ }
+
req.load(context, id, url);
},
The patch simply adds the following around line 1887 to node_modules/requirejs/bin/r.js:
if (config.disableAutoExt && url.match(/\..*\.js$/)) {
    url = url.replace(/\.js$/, '');
}
UPDATE: Improved patch by moving url change deeper in the code so it no longer causes a hang after calling undef on a module. Needed undef because:
To disable caching of modules when developing with node.js add this to your main app file:
requirejs.onResourceLoad = function (context, map) {
    requirejs.undef(map.name);
};
