Whenever I retrieve an object from my mongo database, it doesn’t have any methods and I am trying to figure out (1) why and (2) a solution.
To summarise, here is the problem:
(1) I can create an object and its methods work:
const newUser = new User(email, hashedPassword);
console.log(newUser.test); // correct - it’s not undefined
(2) I can insert the instance (with its methods) into the database (via saveToDb):
const insertedUser = await collection.insertOne(this);
(3) I can retrieve it from the database (via findByEmail):
const user = await collection.findOne({ email });
(4) But it doesn’t have any methods anymore:
if (!user) return; // temporary code
console.log(user); // correctly displays object keys and values
console.log('user.test', user.test); // undefined - why?
Why does this happen? I’ve read a few other posts about it, but they all seem to use mongoose, which I do not use in my app (maybe I should?). This post, which I think describes a similar issue, did not get an answer.
Any insight would be appreciated, thank you.
In case it’s needed, here’s the class:
export class User {
  email: string;
  hashedPassword: any;
  dateCreated: Date;

  constructor(email: string, hashedPassword: any) {
    this.email = email; // make email.toLowerCase();
    this.hashedPassword = hashedPassword;
    this.dateCreated = new Date();
  }

  async saveToDb() {
    try {
      const collection = getCollection(USERS_COLLECTION_NAME);
      const sanitisedEmail = this.email.toLowerCase();
      const insertedUser = await collection.insertOne(this);
      console.log('THIS HAS BEEN INSERTED TO DB:', this);
      console.log('this.test:', this.test); // works
      this.test();
      const token = jwt.sign(insertedUser, sanitisedEmail, {
        expiresIn: 60 * 24,
      });
      return token;
    } catch (err) {
      throw err;
    }
  }

  test() {
    console.log('test() within class Fn');
    return 5;
  }

  static staticTest() {
    console.log('staticTest() within class Fn');
    return 6;
  }

  signToken() {
    const token = jwt.sign(this, this.email, {
      expiresIn: 60 * 24,
    });
    return token;
  }

  static async fetchAll() {
    try {
      const collection = getCollection(USERS_COLLECTION_NAME);
      const users = await collection.find().toArray();
      return users;
    } catch (err) {
      throw err;
    }
  }

  static async findByEmail(email: string) {
    try {
      const collection = getCollection(USERS_COLLECTION_NAME);
      const user = await collection.findOne({ email });
      if (!user) return; // temporary code
      console.log('FOUND THIS USER: ', user); // works
      console.log('user.test', user.test); // undefined - key problem lies here...
      return user;
    } catch (err) {
      throw err;
    }
  }
}
The objects you get back from query methods such as findOne are plain objects. They are not instances of your class, because the documents were serialised and saved as plain data in the database, and that does not include class information, prototypes, or methods.
So what you can do is change the prototype of the object you get back. Or better, create a new instance of your class with Object.create and inject the stored properties with Object.assign:
const user = Object.assign(
  Object.create(User.prototype),
  await collection.findOne({ email })
);
Now your user object will again have access to the prototype methods.
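If you want to keep that re-hydration in one place, you could fold it into findByEmail itself. A minimal sketch, reusing getCollection and USERS_COLLECTION_NAME from your class:
// Sketch: return a real User instance from findByEmail
static async findByEmail(email: string) {
  const collection = getCollection(USERS_COLLECTION_NAME);
  const doc = await collection.findOne({ email });
  if (!doc) return null;
  // copy the stored fields onto an object whose prototype is User.prototype
  const user = Object.assign(Object.create(User.prototype), doc) as User;
  console.log('user.test', user.test); // now a function again
  return user;
}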
For completeness' sake, I'll also mention the alternative:
const user = await collection.findOne({ email });
Object.setPrototypeOf(user, User.prototype);
But read the disclaimer provided by Mozilla Contributors on setPrototypeOf:
Changing the [[Prototype]] of an object is, by the nature of how modern JavaScript engines optimize property accesses, currently a very slow operation in every browser and JavaScript engine. In addition, the effects of altering inheritance are subtle and far-flung, and are not limited to the time spent in the Object.setPrototypeOf(...) statement, but may extend to any code that has access to any object whose [[Prototype]] has been altered.
I've been reading some articles and posts here on Stack Overflow about when I should mock a function and when I shouldn't, but I have a case where I'm not sure what to do.
I have a UserService class which uses dependency injection to receive its dependencies through the constructor.
class UserService {
  constructor(userRepository) {
    this.userRepository = userRepository;
  }

  async getUserByEmail(userEmail) {
    // would perform some validations to check if the value is an e-mail
    const user = await this.userRepository.findByEmail(userEmail);
    return user;
  }

  async createUser(userData) {
    const isEmailInUse = await this.getUserByEmail(userData.email);
    if (isEmailInUse) {
      return "error";
    }
    const user = await this.userRepository.create(userData);
    return user;
  }
}
I want to test whether the createUser method works properly, and for my tests I created a fake userRepository, which is basically an object with mocked methods that I use when instantiating the UserService class:
const UserService = require('./UserService.js');

describe("User Service tests", () => {
  let userService;
  let userRepository;

  beforeEach(() => {
    userRepository = {
      findByEmail: jest.fn(),
      create: jest.fn(),
    }
    userService = new UserService(userRepository);
  });

  afterEach(() => {
    jest.resetAllMocks();
  });

  describe("createUser", () => {
    it("should be able to create a new user", async () => {
      const newUserData = { name: 'User', email: 'user@test.com.br' }
      const user = { id: 1, name: 'User', email: 'user@test.com.br' }
      userRepository.create.mockResolvedValue(user);
      const result = await userService.createUser(newUserData);
      expect(result).toStrictEqual(user);
    })
  })
})
Note that the createUser method calls the getUserByEmail method, which is also a method of the UserService class, and that is where I got confused.
Should I mock the getUserByEmail method even though it is a method of the class I'm testing? If that is not the correct approach, what should I do?
You should almost always prefer not to mock parts of the thing you're supposed to be testing, in this case UserService. To illustrate why, consider these two tests:
Provides a test double implementation for findByEmail on the repo object:
it("throws an error if the user already exists", async () => {
const email = "foo#bar.baz";
const user = { email, name: "Foo Barrington" };
const service = new UserService({
findByEmail: (_email) => Promise.resolve(_email === email ? user : null),
});
await expect(service.createUser(user)).rejects.toThrow("User already exists");
});
Stubs out the service's own getUserByEmail method:
it("throws an error if the user already exists", async () => {
const email = "foo#bar.baz";
const user = { email, name: "Foo Barrington" };
const service = new UserService({});
service.getUserByEmail = (_email) => Promise.resolve(_email === email ? user : null);
await expect(service.createUser(user)).rejects.toThrow("User already exists");
});
For your current implementation, both pass just fine. But let's think about how things might change.
Imagine that at some point we need to enrich the user model getUserByEmail provides:
async getUserByEmail(userEmail) {
  const user = await this.userRepository.findByEmail(userEmail);
  user.moreStuff = await this.userRepository.getSomething(user.id);
  return user;
}
Obviously we don't need this extra data just to know whether or not the user exists, so we factor out the basic user object retrieval:
async getUserByEmail(userEmail) {
  const user = await this._getUser(userEmail);
  user.moreStuff = await this.userRepository.getSomething(user.id);
  return user;
}

async createUser(userData) {
  if (await this._getUser(userData.email)) {
    throw new Error("User already exists");
  }
  return this.userRepository.create(userData);
}

async _getUser(userEmail) {
  return this.userRepository.findByEmail(userEmail);
}
If we were using test 1, we wouldn't have to change it at all - we're still consuming findByEmail on the repo, and the fact that the internal implementation has changed is opaque to our test. But with test 2, the test now fails even though the code still does the same things. This is a false negative; the functionality works but the test fails.
In fact, you could have applied that refactor, extracting _getUser, before any new feature made the need so clear; the fact that createUser calls getUserByEmail directly reflects accidental duplication of this.userRepository.findByEmail(email) - the two call sites have different reasons to change.
Or imagine we make some change that breaks getUserByEmail. Let's simulate a problem with the enrichment, for example:
async getUserByEmail(userEmail) {
  const user = await this.userRepository.findByEmail(userEmail);
  throw new Error("lol whoops!");
  return user;
}
If we're using test 1, our test for createUser fails too, but that's the correct outcome! The implementation is broken: a user cannot be created. With test 2 we have a false positive; the test passes but the functionality doesn't work.
In this case, you could say that it's better to see that only getUserByEmail is failing because that's where the problem is, but I'd contend that would be pretty confusing when you looked at the code: "createUser calls that method too, but the tests say it's fine...".
You shouldn't mock any of these functions, since they create users and read data from the database; if you mock them, what's the point of the test? In other words, you wouldn't know whether your app works correctly with the database or not. I would, however, mock functions such as the ones that send emails and so on. Don't mock the functions that are the heart of the application. You should have one database for testing and another one for production.
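As a rough sketch of that last point (the file name, env variables, and MongoDB setup here are illustrative assumptions, not from the question), the connection can be switched per environment:
// config/db.js - pick the connection string per environment (illustrative)
const { MongoClient } = require('mongodb');

const uri = process.env.NODE_ENV === 'test'
  ? process.env.TEST_DB_URI // a throwaway database used only by the test suite
  : process.env.DB_URI;     // the real application database

module.exports = async function connect() {
  const client = await MongoClient.connect(uri);
  return client.db();
};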
I have the following user-defined role in security, with a predicate function on create for a collection called formEntryData. I can create documents when I don't use the function below.
Under the Create function option
Lambda("values", Equals(Identity(), Select(["data"], Var("values"))))
Now I am sending a request with the following code. It works when Create is simply set to all by checking the box, but if I use the function above it fails with permission denied. I must be doing something wrong.
import { query as q } from "faunadb";
import { serverClient } from "../../utils/fauna-auth";
import { authClient } from "../../utils/fauna-auth";
import { getAuthCookie } from "../../utils/auth-cookies";

export default async (req, res) => {
  // const { firstName, lastName, telephone, creditCardNumber } = req.body;
  const token = getAuthCookie(req);
  console.log(token);

  const data = req.body.data;
  var element = req.body.data;
  element["FormID"] = req.body.id;

  try {
    await authClient(token).query(
      q.Create(q.Collection("FormEntryData"), {
        data: element,
      })
    );
    res.status(200).end();
  } catch (e) {
    res.status(500).json({ error: e.message });
  }
};
Any help would be greatly appreciated.
UPDATE: I have also added an index for the collection and given it read permissions in the Role.
This was also asked on the Fauna community forums: https://forums.fauna.com/t/roles-membership-auth-token-permissions-denied/1681/4
It looks like two things were needed:
update the create predicate to match the data.user field: Lambda("values", Equals(CurrentIdentity(), Select(["data", "user"], Var("values")))), and
a user field needs to be provided in order to pass the provided predicate.
The answer in the forums used two requests: one to retrieve the calling user document (with CurrentIdentity()), and another to create the FormEntryData document. This can (and should) be done with a single request to limit cost (every request to Fauna takes at least one Transactional Compute Op) and, of course, the network time of two round trips. Consider the following:
await authClient(token).query(
  q.Let(
    {
      userRef: q.CurrentIdentity(),
    },
    q.Create(q.Collection("FormEntryData"), {
      data: {
        ...element,
        user: q.Var("userRef")
      }
    })
  )
);
New to Cloud Functions and trying to understand my error here from the log. It says cannot read property 'uid' of undefined. I am trying to match users together. onCreate will call a matching function to check if a user exists under liveChannels and, if so, will set the channel value under both users in liveUsers to uid+uid2. Does the log also say which line the error comes from? I'm confused about where it shows that.
const functions = require('firebase-functions');

//every time user added to liveLooking node
exports.command = functions.database
  .ref('/liveLooking/{uid}')
  .onCreate(event => {
    const uid = event.params.uid
    console.log(`${uid} this is the uid`)
    const root = event.data.adminRef.root

    //match with another user
    let pr_cmd = match(root, uid)
    const pr_remove = event.data.adminRef.remove()
    return Promise.all([pr_cmd, pr_remove])
  })

function match(root, uid) {
  let m1uid, m2uid
  return root.child('liveChannels').transaction((data) => {
    //if no existing channels then add user to liveChannels
    if (data === null) {
      console.log(`${uid} waiting for match`)
      return { uid: uid }
    }
    else {
      m1uid = data.uid
      m2uid = uid
      if (m1uid === m2uid) {
        console.log(`${m1uid} tried to match with self!`)
        return
      }
      //match user with liveChannel user
      else {
        console.log(`matched ${m1uid} with ${m2uid}`)
        return {}
      }
    }
  },
  (error, committed, snapshot) => {
    if (error) {
      throw error
    }
    else {
      return {
        committed: committed,
        snapshot: snapshot
      }
    }
  },
  false)
  .then(result => {
    // Add channels for each user matched
    const channel_id = m1uid + m2uid
    console.log(`starting channel ${channel_id} with m1uid: ${m1uid}, m2uid: ${m2uid}`)
    const m_state1 = root.child(`liveUsers/${m1uid}`).set({
      channel: channel_id
    })
    const m_state2 = root.child(`liveUsers/${m2uid}`).set({
      channel: channel_id
    })
    return Promise.all([m_state1, m_state2])
  })
}
You are referring to a very old version of the Cloud Functions API. Whatever site or tutorial you might be looking at, it's showing examples that are no longer relevant.
In modern Cloud Functions for Firebase, Realtime Database onCreate triggers receive two parameters: a DataSnapshot and a Context. The handler no longer receives an "event" as its only parameter. You're going to have to port the code you're using now to the new way of doing things. I strongly suggest reviewing the product documentation for modern examples.
If you want to get the wildcard parameters, as you are trying to do with const uid = event.params.uid, you will have to use the second context parameter, as illustrated in the docs. To access the data, use the first snapshot parameter.
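As a rough sketch of how the trigger from the question might be ported to the (snapshot, context) signature (the match() helper is assumed to stay as it is):
// Sketch: same trigger with the modern two-parameter handler
exports.command = functions.database
  .ref('/liveLooking/{uid}')
  .onCreate((snapshot, context) => {
    const uid = context.params.uid;          // wildcard params now live on context
    console.log(`${uid} this is the uid`);

    const root = snapshot.ref.root;          // the root reference comes from the snapshot
    const pr_cmd = match(root, uid);
    const pr_remove = snapshot.ref.remove(); // remove the node that triggered the function
    return Promise.all([pr_cmd, pr_remove]);
  });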
I have a node.js module pg-promise instantiated as follows.
const pgp = require('pg-promise')();

// Database connection details;
const cn = {
  host: 'localhost', // 'localhost' is the default;
  ...
}

// Create db connection and verify it
var db = pgp(process.env.DATABASE_URL || cn);
db.one('Select version()')
  .then(data => {
    log.info('Connected: ', data);
  })
  .catch(error => {
    log.error("Error connecting to db", error);
  })

// extension methods
db.findById = function (table, id) {
  log.debug('read ', table, id);
  return db.one('Select * from ' + table + ' where id = $1', id);
}

module.exports = db;
The db object is an instance of the interface type pgPromise.IDatabase<{}, pg.IClient>.
I would like to be able to call the functions provided by this lib along with my own functions:
const db = require('../db');

db.any('Select query..')
  .then(data => { res.send(data); })
  .catch(err => { log.error(err); });

db.findById('users', 1)
  .then(data => { res.send(data); })
  .catch(err => { log.error(err); });
But when I run it I get the error
TypeError: db.findById is not a function
I tried this too but with the same effect.
module.exports = db;
module.exports.findById = function()...;
The only solution I could come up with was this:
module.exports = {
  db: db,
  findById: function(){
    ...
  }
}
But it is ugly to use in other modules, as I always need to ask specifically for the db property.
From the author of pg-promise.
The database protocol in pg-promise is extendable, supporting the extend event that lets you extend the protocol on all levels. You need this level of automation because, when it comes to the essential Tasks and Transactions, which encapsulate the allocated connection, the protocol becomes dynamic, so you need a special provision to make the protocol extension work automatically, which is exactly what the extend event does.
To help you understand it better, I wrote pg-promise-demo to show how to do it correctly, plus some other high-level stuff that comes in useful most of the time.
pg-promise seems to use an annoying pattern where they freeze every object and make every property read-only, so you'll be unable to simply add properties to it manually like you're attempting. The library supports extensions in the extend property of initOptions like this:
const initOptions = {
  extend(obj, dc) {
    obj.findById = function() {
      ...
    }
    //add other extension methods or properties here
  }
};

const pgp = require('pg-promise')(initOptions);
//now any databases created with pgp will contain those extension methods
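With that in place, the usage from your original module should work unchanged. A quick sketch, reusing the cn and log objects from your question:
// db.js now exports the database created with the configured pgp
const db = pgp(process.env.DATABASE_URL || cn);
module.exports = db;

// elsewhere: both built-in and extension methods are available on db
db.findById('users', 1)
  .then(data => { log.debug(data); })
  .catch(err => { log.error(err); });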
Alternatively, you can define a Proxy over your export object that defers either to the db or to your own custom function:
const extension = {
  findById: function() {
    ...
  },
  //other functions
};

module.exports = new Proxy(extension, {
  get(target, name) {
    if (db[name] !== undefined) return db[name];
    return target[name];
  }
});
But you should prefer the natively supported way to do this using initOptions.
I use NodeJS to insert documents in MongoDB. Using collection.insert I can insert a document into the database, as in this code:
// ...
collection.insert(objectToInsert, function(err){
  if (err) return;
  // Object inserted successfully.
  var objectId; // = ???
});
// ...
How can I get the _id of inserted object?
Is there any way to get the _id without fetching the latest inserted object's _id?
Supposing that many people access the database at the same time, I can't be sure that the latest id is the id of the object I inserted.
A shorter way than using the second parameter of the collection.insert callback is to read objectToInsert._id inside the callback (assuming the operation was successful).
The Mongo driver for NodeJS appends the _id field to the original object reference, so it's easy to get the inserted id using the original object:
collection.insert(objectToInsert, function(err){
  if (err) return;
  // Object inserted successfully.
  var objectId = objectToInsert._id; // this will return the id of object inserted
});
The callback of collection.insert has a second parameter that returns the doc or docs inserted, which should have _ids.
Try:
collection.insert(objectToInsert, function(err, docsInserted){
  console.log(docsInserted);
});
and check the console to see what I mean.
As ktretyak said, the best way to get the inserted document's ID is to use the insertedId property on the result object. In my case result._id didn't work, so I had to use the following:
db.collection("collection-name")
.insertOne(document)
.then(result => {
console.log(result.insertedId);
})
.catch(err => {
// handle error
});
It's the same thing if you use callbacks.
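For instance, with a callback the result object exposes the same property; a minimal sketch, assuming a driver version whose insertOne callback result carries insertedId:
// Sketch: reading insertedId from the insertOne callback result
db.collection("collection-name").insertOne(document, (err, result) => {
  if (err) return console.error(err);
  console.log(result.insertedId); // ObjectId of the newly inserted document
});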
I actually did a console.log() on the second parameter of the insert callback. There is a lot of information returned apart from the inserted object itself. The code below shows how you can access its id.
collection.insert(objToInsert, function (err, result){
  if (err) console.log(err);
  else {
    console.log(result["ops"][0]["_id"]);
    // The above statement will output the id of the
    // inserted object
  }
});
If you want to get the "_id", simply use:
result.insertedId.toString()
// toString() returns the ObjectId's hex string representation
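If you later need to look the document up by that string, you can wrap it back into an ObjectId. A small sketch, assuming the official driver's ObjectId export (the findByIdString helper name is just illustrative):
const { ObjectId } = require('mongodb');

// Sketch: turn the stored hex string back into an ObjectId for a later query
async function findByIdString(db, idString) {
  return db.collection('users').findOne({ _id: new ObjectId(idString) });
}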
Mongo passes the complete document to the callback, so you can simply get it from there.
For example:
collection.save(function(err, room){
  var newRoomId = room._id;
});
You could use async functions to get _id field automatically without manipulating data object:
async function save() {
  const data = {
    name: "John"
  }
  await db.collection('users').insertOne(data)
  return data
}
Returns (data object):
{
  _id: '5dbff150b407cc129ab571ca',
  name: 'John',
}
Now you can use the insertOne method and read insertedId from the promise's result.
@JSideris, sample code for getting insertedId:
db.collection(COLLECTION).insertOne(data, (err, result) => {
  if (err)
    return err;
  else
    return result.insertedId;
});
Similar to other responses, you can grab the value using async/await and ES6+ features:
const insertData = async (data) => {
  const { ops } = await db.collection('collection').insertOne(data)
  console.log(ops[0]._id)
}
Another way to do it in an async function:
const express = require('express')
const path = require('path')
const db = require(path.join(__dirname, '../database/config')).db;
const router = express.Router()

// Create.R.U.D
router.post('/new-order', async function (req, res, next) {
  // security check
  if (Object.keys(req.body).length === 0) {
    res.status(404).send({
      msg: "Error",
      code: 404
    });
    return;
  }

  try {
    // operations
    let orderNumber = await db.collection('orders').countDocuments()
    let number = orderNumber + 1
    let order = {
      number: number,
      customer: req.body.customer,
      products: req.body.products,
      totalProducts: req.body.totalProducts,
      totalCost: req.body.totalCost,
      type: req.body.type,
      time: req.body.time,
      date: req.body.date,
      timeStamp: Date.now(),
    }
    if (req.body.direction) {
      order.direction = req.body.direction
    }
    if (req.body.specialRequests) {
      order.specialRequests = req.body.specialRequests
    }

    // Here newOrder stores the result of the insert operation.
    // You can find the inserted id and some other information there too.
    let newOrder = await db.collection('orders').insertOne({ ...order })
    if (newOrder) {
      // MARK: Server response
      res.status(201).send({
        msg: `Order N°${number} created : id[${newOrder.insertedId}]`,
        code: 201
      });
    } else {
      // MARK: Server response
      res.status(404).send({
        msg: `Order N°${number} not created`,
        code: 404
      });
    }
  } catch (e) {
    console.log(e)
    return
  }
})

// C.Read.U.D
// C.R.Update.D
// C.R.U.Delete

module.exports = router;