I am having an issue with the Mongoose fixture loader and I am not sure what is wrong.
When I load my data according to the docs, like so:
var data = { User: [{name: 'Alex'}, {name: 'Bob'}] };
It does not load. Exploring the code, I see that in this file there is an async.forEach iterator which doesn't seem to get triggered. Even with a simple test file I cannot get this to work as it should. Evidently the console should print 'User', but it does not. Can someone shed some light on what the issue might be? Note that while I have phrased my question around async, ultimately I am trying to get the mongoose loader to work, so I need to stay within their code structure.
var async = require('async');
var data = { User: [{name: 'Alex'}, {name: 'Bob'}] };
var iterator = function(modelName, next){
// not working
console.log(modelName);
next();
};
async.forEach(data, iterator, function() { });
The pow-mongoose-fixtures module in the NPM repository contains a bug (see bug report).
Your code contains the same bug:
async.forEach(data, ...)
forEach() operates on arrays, but data is an object. In the case of the module, it was fixed by using Object.keys() to get an array of keys. You could use it too:
async.forEach(Object.keys(data), ...);
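For reference, here is the test script from the question with that one-line fix applied; it now prints 'User':
var async = require('async');
var data = { User: [{name: 'Alex'}, {name: 'Bob'}] };

// iterate over the model names (the object's keys), not the object itself
async.forEach(Object.keys(data), function(modelName, next) {
  console.log(modelName); // prints 'User'
  next();
}, function(err) {
  // runs once after every iteration has called next()
});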
To get mongoose-fixtures working, install the GitHub version:
npm install git://github.com/powmedia/mongoose-fixtures.git
There are a couple of changes you need to make to your code as well:
var fixtures = require('mongoose-fixtures'); // renamed from 'pow-mongoose-fixtures'
var client = mongoose.connect(...);
...
fixtures.load(data, client); // need to pass the client object
I am working with the npm ws library on a Node.js server. I was looking at the documentation and found a way to loop through the clients to, for example, send a message to everyone using the wss.clients object:
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });
//...
wss.clients.forEach(client => {
client.send("A message to you!");
});
I initially thought that wss.clients was an array because it let me iterate through it with the array prototype forEach(), but when I tried running wss.clients.find() on it to send a message only to a specific connection, I got an error:
TypeError: wss.clients.find is not a function
I ran console.log(Array.isArray(wss.clients)) and it said false. When I tried console.log(wss.clients), I got an object that looked like this:
Set { WebSocket { ... } }
So, my question is, how is the wss.clients object able to run the array prototype forEach()? It worked without using Object.keys() or anything.
I also tried wss.clients.pop() out of curiosity, and it gave another type error.
What really is wss.clients? An object or an array?
I have discovered (thanks to @waiaan) that the type of wss.clients is a Set.
Sets have different methods than arrays, but they are similar.
The best implementation for Set.prototype.find() would be to define a method like this:
Set.prototype.find = function(cb) {
  // iterate in insertion order and return the first element
  // for which the callback returns a truthy value
  for (const e of this) {
    if (cb(e)) {
      return e;
    }
  }
  // implicitly returns undefined when nothing matches
};
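With that patch in place, picking out a single client works as you'd expect. The predicate below is only an example (ws exposes readyState on each client):
const target = wss.clients.find(client => client.readyState === WebSocket.OPEN);
if (target) {
  target.send("A message just for you!");
}
Alternatively, you can avoid extending the built-in prototype by spreading the Set into an array first: [...wss.clients].find(cb).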
More about arrays and sets in this article.
I'm struggling to come up with a pattern that will satisfy both my tests and Travis's ability to run my script.
I'll start off by saying that the way I have Travis run my script is by specifying it via the babel-node command in my .travis.yml, like so:
script:
- babel-node ./src/client/deploy/deploy-feature-branch.js
That means that when babel-node runs this, I need a method in deploy-feature-branch.js to run automatically, which I have: the line let { failure, success, payload } = deployFeatureBranch(). The destructuring assignment executes at module load, which forces deployFeatureBranch() to run.
In there I also have an options object:
let options = {
localBuildFolder: 'build',
domain: 'ourdomain',
branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}
During a PR build, Travis automatically sets the value of process.env.TRAVIS_PULL_REQUEST_BRANCH. That's great! However, the way I've set up this module doesn't work so well for tests: if I try to set options from my test, the options object isn't actually being set.
So the problem I want to address is, first and foremost, why options isn't being set when I set it from my test, and then whether there is a better way to design this module overall.
Test
import { options, deployFeatureBranch } from '../../../client/deploy/deploy-feature-branch'
it.only('creates a S3 test environment for a pull request', async () => {
options.branch = 'feature-100'
options.domain = 'ourdomain'
options.localDeployFolder = 'build'
const result = await deployFeatureBranch()
expect(result.success).to.be.true
})
When deployFeatureBranch() runs in my test, the implementation tries to reference options.branch, but it ends up being undefined even though I set it to 'feature-100'. branch defaults to process.env.TRAVIS_PULL_REQUEST_BRANCH, but I want to be able to override that and set it from tests.
deploy-feature-branch.js
import * as deployApi from './deployApi'
let options = {
localBuildFolder: 'build',
domain: 'ourdomain',
branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}
const deployFeatureBranch = async (options) => {
console.log(green(`Deploying feature branch: ${options.branch}`))
let { failure, success, payload } = await deployApi.run(options)
return { failure, success, payload }
}
let { failure, success, payload } = deployFeatureBranch(options)
export {
options,
deployFeatureBranch
}
I can't really think of a better way to structure this and also to resolve the setting options issue. I'm also not limited to using Node Modules either, I would be fine with ES6 exports too.
Instead of exporting options and modifying it, just pass in your new options object when calling the function in your test:
import { deployFeatureBranch } from '../../../client/deploy/deploy-feature-branch'
it.only('creates a S3 test environment for a pull request', async () => {
const options = {
branch: 'feature-100',
domain: 'ourdomain',
localDeployFolder: 'build'
};
const result = await deployFeatureBranch(options)
expect(result.success).to.be.true
})
The reason it isn't working is because your deployFeatureBranch() function expects options to be passed in when you call it, which you aren't doing.
Also, exporting a mutable object and modifying it from the outside, while it might work, is really surprising and should be avoided. Creating a new object (or cloning the exported one) is definitely the way to go.
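If you still want the Travis default to kick in when no options are passed, one way to structure it (a sketch reusing the names from the question) is to merge the caller's overrides into the defaults inside the function:
// deploy-feature-branch.js
import * as deployApi from './deployApi'

const defaultOptions = {
  localBuildFolder: 'build',
  domain: 'ourdomain',
  branch: process.env.TRAVIS_PULL_REQUEST_BRANCH
}

// callers (Travis or a test) can override any subset of the defaults
const deployFeatureBranch = async (overrides = {}) => {
  const options = { ...defaultOptions, ...overrides }
  return await deployApi.run(options)
}

export { deployFeatureBranch }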
I have another (and final) question. At the moment I am working on a Node.js project that contains many console.log() calls. This has worked okay so far, but I also want everything that's written to the console to be written to a log file. Can someone please help me?
For example:
console.log('The value of array position [5] is ' + array[5]);
In my real code it's a bit more, but this should give you an idea.
Thanks in advance.
Just run the script in your terminal like this...
node script-file.js > log-file.txt
This tells the shell to write the standard output of the command node script-file.js to your log file instead of the default, which is printing it to the console.
This is called redirection, and it's very powerful. Say you wanted to write all errors to a separate file...
node script-file.js > log-file.txt 2> error-file.txt
Now all console.log output is written to log-file.txt and all console.error output is written to error-file.txt.
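And if you want both streams interleaved in a single file, redirect stderr into stdout:
node script-file.js > log-file.txt 2>&1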
I would use a library instead of re-inventing the wheel. I looked for a log4j-type library on npm, and it came up with https://github.com/nomiddlename/log4js-node
If you want to log to the console and to a file:
var log4js = require('log4js');
log4js.configure({
appenders: [
{ type: 'console' },
{ type: 'file', filename: 'logs/cheese.log', category: 'cheese' }
]
});
Now your code can create a new logger with
var logger = log4js.getLogger('cheese');
and use the logger in your code
logger.warn('Cheese is quite smelly.');
logger.info('Cheese is Gouda.');
logger.debug('Cheese is not a food.');
const fs = require('fs');
const myConsole = new console.Console(fs.createWriteStream('./output.txt'));
myConsole.log('hello world');
This will create an output file containing everything written via myConsole.log('hello world').
This is the easiest way to redirect console.log() output to a text file.
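The Console constructor also accepts a second stream for errors, so you can split regular output and errors into separate files if you like (the file names here are just examples):
const fs = require('fs');
// the first stream receives .log() output, the second receives .error() output
const myConsole = new console.Console(
  fs.createWriteStream('./output.txt'),
  fs.createWriteStream('./errors.txt')
);
myConsole.log('hello world');     // written to output.txt
myConsole.error('something bad'); // written to errors.txt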
You could try overriding the built in console.log to do something different.
var originalLog = console.log;
console.log = function(str){
originalLog(str);
// Your extra code
};
However, this places originalLog in the global scope, so you should wrap it in an immediately-invoked function. This is called a closure, and you can read more about them here.
(function(){
  var originalLog = console.log;
  console.log = function(str){
    originalLog(str);
    // Your extra code
  };
})();
To write files, see this stackoverflow question, and to override console.log even better than the way I showed, see this. Combining these two answers will get you the best possible solution.
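As a rough sketch of that combination (the log path is arbitrary, and this only captures what you pass to console.log as arguments):
const fs = require('fs');

(function() {
  const originalLog = console.log;
  console.log = function(...args) {
    originalLog.apply(console, args); // still print to the console
    fs.appendFileSync('./app.log', args.join(' ') + '\n'); // and append to a file
  };
})();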
Just write your own log function:
function log(message) {
console.log(message);
fs.writeFileSync(...);
}
Then replace all your existing calls to console.log() with log().
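A filled-in version might look like this; note fs.appendFileSync rather than fs.writeFileSync, because writeFileSync would overwrite the file on every call (the file name is just an example):
const fs = require('fs');

function log(message) {
  console.log(message);
  fs.appendFileSync('./app.log', message + '\n');
}

log('server started');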
@activedecay's answer seems the way to go. However, as of April 30th, 2018, I have had trouble with that specific configuration (Node crashed due to the structure of the object passed to .configure, which seems not to work in the latest version). In spite of that, I've managed to work out an updated solution thanks to Node.js's debugging messages...
const myLoggers = require('log4js');
myLoggers.configure({
appenders: { mylogger: { type:"file", filename: "path_to_file/filename" } },
categories: { default: { appenders:["mylogger"], level:"ALL" } }
});
const logger = myLoggers.getLogger("default");
Now if you want to log to said file, you can do it just like activedecay showed you:
logger.warn('Cheese is quite smelly.');
logger.info('Cheese is Gouda.');
logger.debug('Cheese is not a food.');
This, however, will not log anything to the console, and since I haven't figured out how to implement multiple appenders in one logger, you can still fall back on the good old console.log().
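For what it's worth, the current log4js configuration format does seem to allow listing several appenders per category, which would give you file and console output at once; a sketch based on the snippet above:
myLoggers.configure({
  appenders: {
    mylogger: { type: "file", filename: "path_to_file/filename" },
    console: { type: "console" }
  },
  categories: {
    default: { appenders: ["mylogger", "console"], level: "ALL" }
  }
});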
PS: I know that this is a somewhat old thread, and OP's particular problem was already solved, but since I came here for the same purpose, I may as well leave my experience to help anyone visiting this thread in the future.
Here is a simple solution for file logging with the @grdon/logger package:
const logger = require('@grdon/logger')({
defaultLogDirectory : __dirname + "/logs",
})
// ...
logger(someParams, 'logfile.txt')
logger(anotherParams, 'anotherLogFile.log')
So, I'm a big fan of creating global namespaces in JavaScript. For example, if my app is named Xyz, I normally have an object XYZ which I fill with properties and nested objects, for example:
XYZ.Resources.ErrorMessage // = "An error while making request, please try again"
XYZ.DAL.City // = { getAll: function() { ... }, getById: function(id) { .. } }
XYZ.ViewModels.City // = { .... }
XYZ.Models.City // = { .... }
I sort of picked this up while working on a project with Knockout, and I really like it because there are no wild references to objects declared god-knows-where. Everything is in one place.
Now, this is OK for the front-end. However, I'm currently developing a basic skeleton for a project which will start in a month, and it uses Node.
What I wanted was, instead of all the requires in .js files, I'd have a single object ('XYZ') which would hold all requires in one place. For example:
Instead of:
// route.js file
var cityModel = require('./models/city');
var cityService = require('./services/city');
app.get('/city', function() { ...........});
I would make an object:
XYZ.Models.City = require('./models/city');
XYZ.DAL.City = require('./services/city');
And use it like:
// route.js file
var cityModel = XYZ.Models.City;
var cityService = XYZ.DAL.City;
app.get('/city', function() { ...........});
I don't really have in-depth knowledge, but requires get cached, and cached modules are served from memory, so re-requiring in multiple files isn't a problem.
Is this an ok workflow, or should I just stick to the standard procedure of referencing dependencies?
edit: I forgot to say, would this sort-of-factory pattern block the main thread, or delay the starting of the server? I just need to know what the downsides are... I don't mind the requires in code, but I just renamed a single folder and had to go through five files to change the paths, which is really inconvenient.
I think that's a bad idea, because you are going to load a ton of modules every single time, whether you need them all or not, and your namespaced object will get quite monstrous. require checks the module cache first, so repeated requires are cheap; I'd use standard requires in each script that needs them on the server.
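If the main pain point is renaming folders, a common middle ground is an index.js per folder that re-exports its modules, so consumers reference only the folder path (a sketch, not tied to any framework):
// models/index.js
module.exports = {
  City: require('./city')
};

// route.js
var models = require('./models');
var cityModel = models.City;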
I'm trying to do a pretty basic example using Meteor.js.
In my lib folder (shared by client and server) I have the following code:
if (typeof hair === 'undefined') {
hair = {};
}
if (!hair.dao) {
hair.dao = {};
}
hair.dao.store = (function() {
return new Meteor.Collection('store');
})();
In my server/libs folder I have this code:
Meteor.startup(function() {
console.log(hair.dao.store.find().fetch());
});
(which logs one element)
In my client/libs folder I have this code:
var cursorStores;
cursorStores = hair.dao.store.find();
console.log(cursorStores.fetch());
(which logs no elements)
It used to work, but now it doesn't.
Just to be clear, I'm running on Windows, and I removed and re-added the autopublish package.
The data probably hasn't reached the client yet when you do that find. Try wrapping those 3 lines of client code in a Deps.autorun.
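Something like this, reusing the client code from the question (Deps.autorun re-runs the block whenever the reactive data it depends on changes):
// client/libs
Deps.autorun(function() {
  var cursorStores = hair.dao.store.find();
  console.log(cursorStores.fetch());
});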
I think find needs to take an argument. See http://docs.meteor.com/#find
If you want the first element, there are other ways of getting it: http://docs.meteor.com/
Try find({}) with empty curly braces.