I'm writing an application where I need two templating languages. Currently, I'm using code like
app.get('/route1', function(req, res) {
  res.render('path/to/jade/template', {title: "Express"})
})

app.get('/route2', function(req, res) {
  var _ = require('lodash')
    , fs = require('fs')
    , template = fs.readFileSync('views/path/to/lodash/template', 'utf-8')
  res.send(_.template(template, {title: "Express"}))
})
I'd really like to move that into another res function, like
res.template('path/to/lodash/template', { data })
While I can just edit the express node_module, that's hardly an elegant solution. Are there any examples of how to do this? Does express.js give you the option to expand on what it already has?
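Something along these lines is what I'm imagining (a rough, untested sketch): attach the helper to res in a middleware, since Express lets you add per-request properties:
var _ = require('lodash');
var fs = require('fs');

// sketch: add a res.template() helper in middleware
app.use(function (req, res, next) {
  res.template = function (path, data) {
    fs.readFile(path, 'utf-8', function (err, file) {
      if (err) return next(err);
      // old lodash API: _.template(str, data) returns the rendered string
      res.send(_.template(file, data));
    });
  };
  next();
});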
Update:
I've looked into express's app.engine function and have the following code:
app.engine('js', function(path, options, fn) {
  var file = fs.readFileSync(path, 'utf-8')
    , template = require('lodash').template(file)
  console.log(template(options)) // this logs the file how I want
  return template(options)
})
While it's logging what I want, nothing is returned to the client and the request just hangs. I've tried return statements such as return function() { return template(options) } and return function(locals) { return template(locals) }, but that function never seems to be called, so I believe that syntax is for an older version of Express.
Also, I'm aware that consolidate.js solves this problem, but if possible I would prefer to solve it without it (or at least know how :)
I got it to work after playing around some. I'd love some comments on my solution :)
The working code is as follows:
app.engine('js', function(path, options, fn) {
  var file = require('fs').readFileSync(path, 'utf-8')
  var str = require('lodash').template(file, options)
  fn(null, str)
})
I still need to add better (some) error handling, probably move the rendering into a readFile callback as opposed to readFileSync, maybe pull out the locals from the options param, but at the very least this works
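For the record, here's roughly what I think the readFile version with basic error handling would look like (a sketch, not battle-tested):
app.engine('js', function (path, options, fn) {
  require('fs').readFile(path, 'utf-8', function (err, file) {
    if (err) return fn(err);
    try {
      // old lodash API: template(str, data) returns the rendered string
      fn(null, require('lodash').template(file, options));
    } catch (e) {
      fn(e);
    }
  });
});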
I'm in the process of splitting my cloud functions into separate files, so as to lazy-load some dependencies and reduce cold start time.
Basically, I'm trying to replicate Doug's answer from here, but without having to create a separate file for each function.
In my index.js file:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
exports.createStripeCustomer = functions.auth.user().onCreate(async (user) => {
  const { createUser } = require('./user/main');
  await createUser(user, admin);
});
And in my 'user/main.js' file:
const functions = require('firebase-functions');
const { Stripe } = require('stripe');
const stripe = new Stripe(functions.config().stripe.secret);
const endpointSecret = functions.config().stripe.singing;
const createStripeCustomer = async (user, admin) => {
  // Do some stuff
};
module.exports = { createUser: createStripeCustomer };
The intention behind this split is that I have some functions which require Stripe and some which do not, so I don't want them all to load it unnecessarily. But I get an error: "missing ) after argument list".
Any suggestions as to what has gone wrong?
Your solution does not look like the real solution...
Maybe you also fixed something which looked insignificant... Like an extra double quote.
let something = "A"
console.log("hello", something")
See the extra double quote after the variable?
It produces the same error you mentioned.
It is a common error due to code editors just adding things for you... And if you are like me and look at the keyboard instead of the screen, it is easy not to notice.
Just in case anyone experiences a similar problem, I managed to get it working, simply by changing:
const createStripeCustomer = async (user, admin) => {
  // Do some stuff
};
to:
async function createStripeCustomer(user, admin) {
  // Do some stuff
}
I have no idea why that was an issue, but it seemed to resolve the problem I had before.
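For what it's worth, if the goal is to keep cold starts small, the Stripe client in user/main.js can also be created lazily instead of at require time. A hedged sketch, reusing the names from the snippets above:
// user/main.js: create the Stripe client on first use, not at require time
const functions = require('firebase-functions');

let stripe; // cached client
function getStripe() {
  if (!stripe) {
    const { Stripe } = require('stripe');
    stripe = new Stripe(functions.config().stripe.secret);
  }
  return stripe;
}

async function createStripeCustomer(user, admin) {
  const client = getStripe();
  // Do some stuff with client
}

module.exports = { createUser: createStripeCustomer };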
I'm using UiPath Orchestrator. This runs as expected. But I now additionally want to reduce the authentication to a single call (instead of always doing an auth when requesting OData). So my idea was to write the object to a file, and on the OData request read that object back and re-use it.
The following orchestrator object comes from the constructor of new Orchestrator. The object is ready to be used, and its structure can be inspected via console.log(orchestrator).
In my tool I need the object functions of odata. So this works:
console.log(orchestrator['v2']['odata']);
I now want to save that object as file to be able to re-use it, so I did:
fs.writeFileSync('./data.json', JSON.stringify(orchestrator), 'utf-8')
But sadly I get the error:
Converting circular structure to JSON
That is expected, as the node package uses a circular structure. So my idea was to use the circular-json package to fix that issue:
const {parse, stringify} = require('circular-json');
...
var savetofile = stringify(orchestrator);
...
var readfromfile = parse(savetofile);
...
console.log(readfromfile['v2']['odata']);
But sadly readfromfile['v2']['odata'] is then no longer available. The reason is that stringify(orchestrator) drops too much: functions are not serialized to JSON, so the odata methods are lost.
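A minimal demonstration of what I assume is happening, using plain JSON:
// functions are silently dropped during JSON serialization
const obj = { v2: { odata: { get() { return 'data'; } } } };
const roundTripped = JSON.parse(JSON.stringify(obj));

console.log(typeof obj.v2.odata.get);          // 'function'
console.log(typeof roundTripped.v2.odata.get); // 'undefined'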
So how do I read the Orchestrator object back from the file and use its functions again? Or is some kind of in-memory store more useful in my case?
The issue was not located in the Orchestrator object itself, so there is no need for a single authentication.
My problem was that I put the res.send outside of the callback, so it never waited for the actual finish of the REST API call.
This was the base code; it just kept the static result of the first request and never updated the results:
app.get('/jobs', function (req, res) {
  ...
  var orchestrator = require('./authenticate');
  var results = {};
  var apiQuery = {};
  orchestrator.get('/odata/Jobs', apiQuery, function (err, data) {
    for (var row in data) {
      results[row] = {
        'id': data[row].id,
        ...
      };
    }
  });
  return res.send({results});
});
The solution is to move the res.send({results}); into the orchestrator.get callback; then it properly populates the results because it waits for the callback to complete:
app.get('/jobs', function (req, res) {
  ...
  var orchestrator = require('./authenticate');
  var results = {};
  var apiQuery = {};
  orchestrator.get('/odata/Jobs', apiQuery, function (err, data) {
    for (var row in data) {
      results[row] = {
        'id': data[row].id,
        ...
      };
    }
    return res.send({results});
  });
});
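For what it's worth, the same fix can also be written by wrapping the callback in a Promise (a sketch; I'm assuming orchestrator.get keeps the (err, data) callback signature shown above):
app.get('/jobs', function (req, res) {
  var orchestrator = require('./authenticate');
  new Promise(function (resolve, reject) {
    orchestrator.get('/odata/Jobs', {}, function (err, data) {
      if (err) return reject(err);
      resolve(data);
    });
  })
    .then(function (data) { res.send({ results: data }); })
    .catch(function (err) { res.status(500).send({ error: err.message }); });
});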
I would like to make use of a function called executeJavaScript() from the Electron webContents API. Since it is very close to eval() I will use this in the example.
The problem:
I have a decent sized script but it is contained in a template string.
Expanding this app, the script could grow a lot as a string.
I am not sure what the best practices are for this.
I also understand that eval() is dangerous, but I am interested in the principle of my question.
Basic eval example for my question:
// Modules
const fs = require('fs');

// CONSTANTS
const EXAMPLE_1 = 'EXAMPLE_1';
const EXAMPLE_2 = 'EXAMPLE_2';
const EXAMPLE_3 = 'EXAMPLE_3';

const exampleScriptFunction = require('./exampleScriptFunction');
const exampleScriptFile = fs.readFileSync('./exampleScriptFile.js', 'utf-8');

// using a direct template string
eval(`console.log(${EXAMPLE_1})`);

// using a function from a separate module, but this doesn't solve the neatness issue
eval(exampleScriptFunction(EXAMPLE_2));

// What I want is to just use a JS file because it is neater.
eval(`${exampleScriptFile}`);
exampleScriptFunction.js
module.exports = function(fetchType) {
  return `console.log(${fetchType});`;
}
This allows me to separate the script into a new file. But what if I have many more than one variable?
exampleScriptFile.js:
console.log(${EXAMPLE_3});
This clearly does not work, but I am just trying to show my thinking: the back ticks are not present in that file, fs loads it as a string, and the main file has the back ticks. Because I am loading this with readFileSync, I figured the ES6 template string substitution would kick in. This would let me write a plain js file with proper syntax highlighting.
The issue is that the variables live on the page running the eval().
Perhaps I am completely wrong here and looking at this the wrong way. I am open to suggestions. Please do not mark me minus 1 because of my infancy in programming. I really do not know how else to ask this question. Thank you.
Assuming your source is stored in exampleScriptFile:
// polyfill
const fs = { readFileSync() { return 'console.log(`${EXAMPLE_3}`);'; } };
// CONSTANTS
const EXAMPLE_1 = 'EXAMPLE_1';
const EXAMPLE_2 = 'EXAMPLE_2';
const EXAMPLE_3 = 'EXAMPLE_3';
const exampleScriptFile = fs.readFileSync('./exampleScriptFile.js');
// What I want is to just use a JS file because it is neater.
eval(exampleScriptFile);
Update
Perhaps I wasn't clear. The ./exampleScriptFile.js should be:
console.log(`${EXAMPLE_3}`);
While what you're describing can be done with eval as #PatrickRoberts demonstrates, that doesn't extend to executeJavaScript.
The former runs in the caller's context, while the latter triggers an IPC call to another process with the contents of the code. Presumably this process doesn't have any information on the caller's context, and therefore, the template strings can't be populated with variables defined in this context.
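One practical consequence (my own sketch, not from the Electron source): any values have to be serialized into the code string before it is sent across, for example:
// main process sketch: bake serializable values into the script string
const EXAMPLE_3 = 'EXAMPLE_3';
const script = `console.log(${JSON.stringify(EXAMPLE_3)});`;

// win is assumed to be an existing BrowserWindow;
// signature matches the (code, hasUserGesture, callback) form quoted below
win.webContents.executeJavaScript(script, false, function (result) {
  console.log('ran in renderer, result:', result);
});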
Relevant snippets from electron/lib/browser/api/web-contents.js:
WebContents.prototype.send = function (channel, ...args) {
  // ...
  return this._send(false, channel, args)
}
// ...
WebContents.prototype.executeJavaScript = function (code, hasUserGesture, callback) {
  // ...
  return asyncWebFrameMethods.call(this, requestId, 'executeJavaScript',
  // ...
}
// ...
const asyncWebFrameMethods = function (requestId, method, callback, ...args) {
  return new Promise((resolve, reject) => {
    this.send('ELECTRON_INTERNAL_RENDERER_ASYNC_WEB_FRAME_METHOD', requestId, method, args)
    // ...
  })
}
Relevant snippets from electron/atom/browser/api/atom_api_web_contents.cc
// ...
void WebContents::BuildPrototype(v8::Isolate* isolate,
                                 v8::Local<v8::FunctionTemplate> prototype) {
  prototype->SetClassName(mate::StringToV8(isolate, "WebContents"));
  mate::ObjectTemplateBuilder(isolate, prototype->PrototypeTemplate())
      // ...
      .SetMethod("_send", &WebContents::SendIPCMessage)
      // ...
}
I'm struggling to come up with a pattern that satisfies both my tests and Travis's ability to run my script.
I'll start off by saying that Travis runs my script via the babel-node command, specified in my .travis.yml like so:
script:
- babel-node ./src/client/deploy/deploy-feature-branch.js
That means that when babel-node runs this file, I need a method in deploy-feature-branch.js to run automatically, which I have: the line let { failure, success, payload } = deployFeatureBranch(). The destructuring assignment forces deployFeatureBranch() to run when the module loads.
In there I also have an options object:
let options = {
  localBuildFolder: 'build',
  domain: 'ourdomain',
  branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}
During a PR build, Travis automatically sets the value of process.env.TRAVIS_PULL_REQUEST_BRANCH. That's great! However, the way I've set up this module doesn't work so well for tests. The problem I have is that if I try to set options from my test, for some reason the options object isn't being set.
The problem I want to address is, first and foremost, why options isn't being set when I set it from my test; and second, whether there is a better way to design this module overall.
Test
import { options, deployFeatureBranch } from '../../../client/deploy/deploy-feature-branch'

it.only('creates a S3 test environment for a pull request', async () => {
  options.branch = 'feature-100'
  options.domain = 'ourdomain'
  options.localDeployFolder = 'build'

  const result = await deployFeatureBranch()
  expect(result.success).to.be.true
})
When deployFeatureBranch() runs in my test, the implementation tries to reference options.branch, but it ends up being undefined even though I set it to 'feature-100'. branch defaults to process.env.TRAVIS_PULL_REQUEST_BRANCH, but I want to be able to override that and set it from tests.
deploy-feature-branch.js
import * as deployApi from './deployApi'

let options = {
  localBuildFolder: 'build',
  domain: 'ourdomain',
  branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}

const deployFeatureBranch = async (options) => {
  console.log(green(`Deploying feature branch: ${options.branch}`))
  let { failure, success, payload } = await deployApi.run(options)
  return { failure, success, payload }
}

let { failure, success, payload } = deployFeatureBranch(options)

export {
  options,
  deployFeatureBranch
}
I can't really think of a better way to structure this and also to resolve the setting options issue. I'm also not limited to using Node Modules either, I would be fine with ES6 exports too.
Instead of exporting options and modifying it, just pass in your new options object when calling the function in your test:
import { deployFeatureBranch } from '../../../client/deploy/deploy-feature-branch'

it.only('creates a S3 test environment for a pull request', async () => {
  const options = {
    branch: 'feature-100',
    domain: 'ourdomain',
    localDeployFolder: 'build'
  };

  const result = await deployFeatureBranch(options)
  expect(result.success).to.be.true
})
The reason it isn't working is that your deployFeatureBranch() function expects options to be passed in when you call it, which you aren't doing.
Also, exporting and changing an object, while it might work, is also really weird and should be avoided. Creating a new object (or cloning the exported object) is definitely the way to go.
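If the Travis environment variable should still act as the default, one pattern (a sketch reusing the names from the question) is to merge the caller's overrides onto a defaults object inside the function:
import * as deployApi from './deployApi'

const defaultOptions = {
  localBuildFolder: 'build',
  domain: 'ourdomain',
  branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}

// caller-supplied values win; anything omitted falls back to the defaults
const deployFeatureBranch = async (overrides = {}) => {
  const options = { ...defaultOptions, ...overrides }
  return deployApi.run(options)
}

export { deployFeatureBranch }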
I'm upgrading from Gulp 3 to 4, and I'm running into an error:
The following tasks did not complete: build
Did you forget to signal async completion?
I understand what it's saying, but can't understand why this code is triggering it.
Error or not, the task completes (the files are concatenated and written to dest). Executing the same code without lazypipe results in no error, and removing the concatenation within lazypipe also fixes the error.
Wrapping the whole thing in something that creates a stream (like merge-stream) fixes the issue. I guess something about the interaction between gulp-concat and lazypipe is preventing a stream from being correctly returned.
Here's the (simplified) task:
gulp.task('build', function() {
  var dest = 'build';

  var buildFiles = lazypipe()
    .pipe(plugins.concat, 'cat.js') // Task will complete if I remove this
    .pipe(gulp.dest, dest);

  // This works
  // return gulp.src(src('js/**/*.js'))
  //   .pipe(plugins.concat('cat.js'))
  //   .pipe(gulp.dest(dest));

  // This doesn't (unless you wrap it in a stream-making function)
  return gulp.src(src('js/**/*.js'))
    .pipe(buildFiles());
});
Any advice appreciated!
This is a known issue when using lazypipe with gulp 4 and it's not going to be fixed in the near future. Quote from that issue:
OverZealous commented on 20 Dec 2015
As of now, I have no intention of making lazypipe work on Gulp 4.
As far as I can tell this issue is caused by the fact that gulp 4 uses async-done which has this to say about its stream support:
Note: Only actual streams are supported, not faux-streams; Therefore, modules like event-stream are not supported.
When you use lazypipe() as the last pipe what you get is a stream that doesn't have a lot of the properties that you usually have when working with streams in gulp. You can see this for yourself by logging the streams:
// console output shows lots of properties
console.log(gulp.src(src('js/**/*.js'))
  .pipe(plugins.concat('cat.js'))
  .pipe(gulp.dest(dest)));

// console output shows much fewer properties
console.log(gulp.src(src('js/**/*.js'))
  .pipe(buildFiles()));
This is probably the reason why gulp considers the second stream to be a "faux-stream" and doesn't properly detect when the stream has finished.
Your only option at this point is some kind of workaround. The easiest workaround (which doesn't require any additional packages) is to just add a callback function cb to your task and listen for the 'end' event:
gulp.task('build', function(cb) {
  var dest = 'build';
  var buildFiles = lazypipe()
    .pipe(plugins.concat, 'cat.js')
    .pipe(gulp.dest, dest);
  gulp.src(src('js/**/*.js'))
    .pipe(buildFiles())
    .on('end', cb);
});
Alternatively, adding any .pipe() after buildFiles() should fix this, even one that doesn't actually do anything like gutil.noop():
var gutil = require('gulp-util');

gulp.task('build', function() {
  var dest = 'build';
  var buildFiles = lazypipe()
    .pipe(plugins.concat, 'cat.js')
    .pipe(gulp.dest, dest);
  return gulp.src(src('js/**/*.js'))
    .pipe(buildFiles())
    .pipe(gutil.noop());
});
So the error is clear. I had to do some refactoring to make things work again for gulp 4. I ended up making some extra methods that take a source and destination and perform the tasks previously done by my lazypipe implementation.
I have to say I don't miss lazypipe now. It's just a different approach. I did end up with some extra tasks but they use a standard method like in the example below:
// previously a lazypipe, now just a method to return from a gulp4 task
const _processJS = (sources, destination) => {
  return src(sources)
    .pipe(minify(...))
    .pipe(uglify(...))
    .pipe(obfuscate(...))
    .pipe(whatever())
    .pipe(dest(destination));
};

const jsTaskXStep1 = () => {
  return src(...).pipe(...).pipe(...).pipe(dest(...));
};

const jsTaskXStep2 = () => {
  return _processJS(['./src/js/x/**/*.js'], './dist/js');
};

const jsTaskYStep1 = () => {
  return src(...).pipe(...).pipe(...).pipe(dest(...));
};

const jsTaskYStep2 = () => {
  return _processJS(['./src/js/y/**/*.js'], './dist/js');
};

const jsTaskX = series(jsTaskXStep1, jsTaskXStep2);
const jsTaskY = series(jsTaskYStep1, jsTaskYStep2);

module.exports = {
  js: parallel(jsTaskX, jsTaskY),
  css: ...,
  widgets: ...,
  ...
  default: parallel(js, css, widgets, series(...), ...)
}
So basically you can put your lazypipe stuff in methods like _processJS in this example, then create tasks that use them and combine everything with gulp series and parallel. Hope this helps out some of you who are struggling with this.