A Mongoose schema won't let me use the # sign in a key when inserting into MongoDB using Node.js. For instance:
var blogSchema = new Schema({
#context : Object //error illegal token
#id : String // illegal token
}, {strict: false});
I tried keys with Unicode escape characters like these:
"\u0040context" = Object // ignored unicode, inserted as context
"\x40context" = Object // ignored unicode, inserted as context
\x40context = Object // illegal token
I also tried the normal way from this link (the first approach), but still cannot define a key with #:
http://blog.modulus.io/mongodb-tutorial
My purpose is to create documents in JSON-LD format, which requires using the # symbol in keys. How can I accomplish this? Here are similar links I have looked at for a solution:
variable with mongodb dotnotation
Syntax error Unexpected token ILLEGAL Mongo Console
How to use mongoose model schema with dynamic keys?
How to do a query using dot( . ) through Mongoose in Node.js and How to add an empty array
Create a Schema object in Mongoose/Handlebars with custom keys/values
http://mongoosejs.com/docs/schematypes.html
You can use # directly by putting the key in quotes, such as "#field":
"use strict";
var mongoose = require('./node_modules/mongoose');
var Schema = mongoose.Schema;
var db = mongoose.connection;
var itemSchema = new Schema({
"#field": {
type: String,
required: true
},
"#field2": {
type: String,
required: true
}
});
var Items = mongoose.model('Items', itemSchema);
var db = mongoose.connect('localhost', 'testDB');
Items.create({ "#field": "value", "#field2": "value" }, function(err, doc) {
console.log("created");
if (err)
console.log(err);
else {
console.log(doc);
}
db.disconnect();
});
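One thing to keep in mind: a key containing # is not a valid JavaScript identifier, so it has to be read and written with bracket notation. A small sketch, reusing the doc from the callback above:

// Dot notation (doc.#field) is a syntax error, so keys with special
// characters need bracket notation.
console.log(doc['#field']);    // "value"
doc['#field2'] = 'new value';
doc.save(function(err) {
    if (err) console.log(err);
});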
I'm new to MongoDB and the JavaScript stack and need help understanding the cause of this error.
Here is my model:
const Mongoose = require('mongoose'),
      Schema = Mongoose.Schema,
      Model = Mongoose.model;
module.exports = Model('Project',
new Schema({
icon : String,
name : String,
state : String,
number : String
})
);
This is my MongoDB document:
[screenshot: MongoDB document]
I am attempting to retrieve all the documents in the collection when I call the API, so, per the Mongoose documentation, I am using the find() method.
Here is my API implementation:
const Project = require('../../models/project');
router.get('/projects/:page?/:limit?',
function(req, res, next){
const page = Math.max(req.params.page || 1, 1) - 1;
const limit = Math.max(req.params.limit || 20, 20);
//Verified : I am hitting the API
console.log("Reached API /projects");
Project.find()
.populate('icon')
.populate('name')
.populate('state')
.populate('number')
.limit(limit).skip(page * limit).exec(
function(err, project)
{
if (err) { return next(err); }
res.send(project);
}
); //End of exec()
} //End of function
);
I am successful in making the API call using fetch(), but I am receiving a "Cast to ObjectId failed" error for all the String values.
I believe there is something really simple within my MongoDB document that I might be missing. Please help me understand and solve this issue.
**EDIT:** The error seems to point at the string values of the keys.
Thank you
Population is the process of automatically replacing the specified paths in a document with document(s) from other collection(s). So your ObjectId cast is not valid because those fields are plain strings; populate expects ObjectId references, so some changes are needed first. Let's debug:
const alldata = await Project.find()
console.log(alldata) // ?
Does this return something? (I'm using async/await here.) If it returns data, then the problem is with your populate calls, because the ObjectId cast isn't valid: the schema stores strings, yet you are populating those paths. An example of using populate:
module.exports = Model('Project',
new Schema({
icon : [{ type: Schema.ObjectId, ref: 'your icon document' }],
name : [{ type: Schema.ObjectId, ref: 'you name document' }],
state : [{ type: Schema.ObjectId, ref: 'state document' }],
number : [{ type: Schema.ObjectId, ref: 'number document' }]
})
);
But it seems to me that you don't need populate at all, because you have simple data (name, number, ...), so your original schema is fine; just drop the populate calls, as sketched below.
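A sketch of your route with the populate calls removed, keeping the paging logic from the question:

router.get('/projects/:page?/:limit?', function(req, res, next) {
    const page = Math.max(req.params.page || 1, 1) - 1;
    const limit = Math.max(req.params.limit || 20, 20);

    // icon, name, state and number are plain strings in the schema,
    // so find() returns them directly; no populate needed.
    Project.find()
        .limit(limit)
        .skip(page * limit)
        .exec(function(err, projects) {
            if (err) { return next(err); }
            res.send(projects);
        });
});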
Resources: mongoose fetching data, using populate, relation
I'm currently working on a vocabulary application using node.js, Express, MongoDB and mongoose.
My aim: putting out translations for various languages depending on the choices made in the front-end (e.g. German > English, English > Portuguese, etc.).
Main problem: Interdependent Schemas. The translation of a word stored in WordSchema depends on the language represented by the LanguageSchema.
As I see it, there are two different ways to structure the relevant schemas:
1.
There is one schema representing the language (e.g. German, English, ...). It stores the words belonging to that language. Because Word is another schema, the LanguageSchema references the WordSchema. The problem here is that the value of a word depends on the chosen language.
// LanguageSchema (its own model file)
var Schema = mongoose.Schema;

var LanguageSchema = new Schema({
    language: String, // 'German'
    words: [{ type: Schema.ObjectId, ref: 'Word' }]
    // word: 'Haus' instead of e.g. 'house'
});
module.exports = mongoose.model('Language', LanguageSchema);

// WordSchema (its own model file)
var WordSchema = new Schema({
    name: String // 'house', 'Haus', 'casa' depending on the language
});
module.exports = mongoose.model('Word', WordSchema);
2. I could solve this by using just the WordSchema, adding every existing language as a property, and storing the corresponding translation of the word. But this doesn't seem like the best solution to me, as I won't translate the words into all languages right from the beginning; only those translations that actually exist should be stored (see the sketch below).
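Roughly, that second option would look something like this (the language properties are only illustrative):

var WordSchema = new Schema({
    // one optional property per language; most of them stay empty
    en: String, // 'house'
    de: String, // 'Haus'
    pt: String  // 'casa'
});
module.exports = mongoose.model('Word', WordSchema);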
LanguageSchema
var Schema = mongoose.Schema;
var LanguageSchema = new Schema({
language_name: {type:String}, // English
language_code: {type:String} // en
});
module.exports = mongoose.model('Language', LanguageSchema);
In the Word schema, you need to push objects with word_name and word_language:
WordSchema
var WordSchema = new Schema({
words:[{
word_name:{type:String},
word_language:{type:String}
}]
});
module.exports = mongoose.model('Word', WordSchema);
Example: languages in the database
languages: [
    {
        "_id": "54ef3f374849dcaa649a3abc",
        "language_name": "English",
        "language_code": "en"
    },
    {
        "_id": "54ef3f374849dcaa649a3asd",
        "language_name": "Portuguese",
        "language_code": "pt"
    },
    {
        "_id": "54ef3f374849dcaa649a3xxx",
        "language_name": "German",
        "language_code": "de"
    }
]
Example: words in the database
words: [
    {
        words: [
            {
                "_id": "54ef3f374849dcaa649azzz",
                "word_name": "Friend",
                "word_language": "English"
            },
            {
                "_id": "54ef3f374849dcaa6491111",
                "word_name": "Amigo",
                "word_language": "Portuguese"
            },
            {
                "_id": "54ef3f374849dcaa649a233",
                "word_name": "Freund",
                "word_language": "German"
            }
        ]
    },
    { words: [...] },
    { words: [...] },
    { words: [...] },
    { words: [...] }
]
From the front-end you have to pass 3 parameters:
word, input_language, output_language
Example: you want the meaning of "Friend" from English in Portuguese,
so in this case:
word = "Friend", input_language = "English", output_language = "Portuguese"
Now apply a Mongoose find query to search for the word in the WordSchema:
// underscore (http://underscorejs.org) is used to pick the translation
// from the embedded array: npm i --save underscore
var _ = require('underscore');

Word.findOne({
    words: {
        $elemMatch: {
            word_name: { $regex: word, $options: "i" },
            word_language: input_language
        }
    }
}, function(err, result) {
    if (err) { return err; }
    if (result) {
        // find the entry for the requested output language
        var outputObj = _.find(result.words, { word_language: output_language });
        res.json(outputObj);
    }
});
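Alternatively, if you would rather let MongoDB pick out the translation instead of filtering with underscore, an aggregation sketch (assuming MongoDB 3.2+ for $filter) could look like this:

Word.aggregate([
    // find the document whose words array contains the input word/language
    { $match: {
        words: {
            $elemMatch: {
                word_name: { $regex: word, $options: "i" },
                word_language: input_language
            }
        }
    } },
    // keep only the entries in the requested output language
    { $project: {
        translation: {
            $filter: {
                input: "$words",
                as: "w",
                cond: { $eq: ["$$w.word_language", output_language] }
            }
        }
    } }
], function(err, docs) {
    if (err) { return err; }
    res.json(docs.length ? docs[0].translation : null);
});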
I want to use 2 or more different databases simultaneously; these connections have different properties, and data should be displayed according to the selected database. Is a dynamic db switch possible in mongoose? If anyone has any idea, please help.
Let's say there is a model:
var mongoose = require('mongoose');
var Promise = require("bluebird");
mongoose.Promise = Promise;
var Schema = mongoose.Schema;
var uomSchema = new Schema({
uom: {
type: String,
required: [
true,
"Please enter valid uom"
],
elementType: "TEXT",
elementText: "Unit Of Measure",
placeholder: ""
}
}, { strict: false });
var uom = mongoose.model('uom', uomSchema);
module.exports = uom;
So here the model is created over the default connection foo. If there is another connection bar, and the same model needs to be created over that db to operate on its data, how is that possible?
mongoose-glue provides a somewhat similar solution, but not exactly what I want.
Maintain one JSON file containing { "db1": "asas", "db2": "xczc" } and, while connecting to the db, fetch the db name from this JSON file:
var conUrl = config.db1 or config.db2, as per your condition.
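A minimal sketch of how that could be wired up, assuming the config file holds full connection strings and the schema is exported from its own module (the file and variable names are just assumptions); mongoose.createConnection lets you register the same schema on each connection:

// config.json: { "db1": "mongodb://localhost/foo", "db2": "mongodb://localhost/bar" }
var mongoose = require('mongoose');
var config = require('./config.json');
var uomSchema = require('./uomSchema'); // exports the Schema, not the model

// open a separate connection per database
var connections = {
    db1: mongoose.createConnection(config.db1),
    db2: mongoose.createConnection(config.db2)
};

// register the same schema on each connection
var models = {
    db1: connections.db1.model('uom', uomSchema),
    db2: connections.db2.model('uom', uomSchema)
};

// pick the model at runtime, e.g. per request
function getUomModel(dbKey) {
    return models[dbKey]; // 'db1' or 'db2'
}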
I am doing referencing using the populate method in Node.js, but I don't know much about the populate method. In this code I reference the user collection from my child collection. I have two collections: child and user.
This is the child collection:
userId: {
type: mongoose.Schema.Types.ObjectId,
ref: 'User',
index: true
}
This is the user collection:
"firstname":{type:String},
"lastname":{type:String}
I send the id in the URL (http://localhost:8000/v1/childPopulate/:57a4e4e67429b91000e225a5) and query it against the userId stored in the child schema.
This is the Node.js code:
this.childPopulate = function(req, res, next){
var o_userid = req.params.id;
var query = {userId:o_userid};
Child.findOne(query).populate({
path: 'userId',
model: 'User',
select: 'firstname lastname'
}).exec(function(err,userinfo){
if(err){
return next(err);
}else{
res.send(userinfo);
console.log(userinfo);
}
});
};
But this shows the following error in the browser:
{"message":"Cast to ObjectId failed for value \":57a4e4e67429b91000e225a5\" at path \"userId\""}
The error message indicates that you are passing a string instead of an ObjectId.
Basically, the default _id in MongoDB is not a normal string;
it is a unique 12-byte identifier.
So you should convert the string to the ObjectId type:
var query = {userId: mongoose.Types.ObjectId(stringId)};
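A small sketch of that conversion inside the handler from the question, with a guard for ids that cannot be cast (mongoose.Types.ObjectId throws on malformed strings):

this.childPopulate = function(req, res, next) {
    var stringId = req.params.id;

    // reject malformed ids up front instead of letting the cast throw
    if (!mongoose.Types.ObjectId.isValid(stringId)) {
        return res.status(400).send({ message: 'Invalid id: ' + stringId });
    }

    var query = { userId: mongoose.Types.ObjectId(stringId) };
    Child.findOne(query).populate({
        path: 'userId',
        model: 'User',
        select: 'firstname lastname'
    }).exec(function(err, userinfo) {
        if (err) { return next(err); }
        res.send(userinfo);
    });
};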
The User model contains a SubscriptionSchema and an AccessToken schema; I defined these two embedded schemas with { capped: 234556 } as well.
var User = new Schema({
email : String
, firstName : String
, password : String
, isAdmin : Boolean
, lastSeen : Date
, subscriptions : [ SubscriptionSchema ]
, accessTokens : [ AccessToken ]
}, {
toObject : { virtuals : true }
, toJSON : { virtuals : true }
, capped : 234556
});
var streamTest = User.find().limit(1).tailable().stream();
When I try to run the above code, I still get the error:
MongoError: tailable cursor requested on non capped collection
That doesn't look like a correct usage of a capped collection or tailable stream. But perhaps a little code first to demonstrate a working example:
var async = require('async'),
mongoose = require('mongoose'),
Schema = mongoose.Schema;
var userSchema = new Schema({
email: String,
},{
capped: 2048
});
var User = mongoose.model( "User", userSchema );
mongoose.connect('mongodb://localhost/atest');
var stream;
async.waterfall(
[
function(callback) {
var user = new User({ email: "existing" });
user.save(function(err,doc) {
if (err) throw err;
callback();
});
},
function(callback) {
User.find({}).sort({ "$natural": -1 }).limit(1).exec(function(err,docs) {
if (err) throw err;
console.log( docs );
callback(err,docs[0]);
});
},
function(doc,callback) {
stream = User.find({ "_id": { "$gt": doc._id } }).tailable().stream();
stream.on("data",function(doc) {
console.log( "Streamed:\n%s", doc );
});
callback();
},
function(callback) {
async.eachSeries(
['one','two','three'],
function(item,callback) {
var user = new User({ email: item });
user.save(function(err,doc) {
if (err) throw err;
console.log( "Saved:\n%s", doc );
callback();
});
},
function(err) {
if (err) throw err;
callback();
}
);
}
]
);
First things first, there really needs to be something in the capped collection for anything to work. This presumes that the collection does not exist and it is going to be initialized as a capped collection. Then the first step is making sure something is there.
Generally when you want to "tail", you just want the new documents that are inserted to appear. So before setting up the tailable cursor you want to find the "last" document in the collection.
When you know the last document in the collection, the "tailable stream" is set up to look for anything "greater than" that document, i.e. the new documents. If you did not do this, the first "data" events on the stream would dump all of the current collection items. So the options to .sort() and .limit() here do not apply. Tailing cursors initialize and "follow".
Now that the streaming interface is set up and a listener established, you can add items to the collection. These will then log accordingly, yet as this is "evented" there is no particular sequence to the logging, since either the "save" callback or the "stream" data event may fire first.
Now onto your implementation. These two lines stand out:
, subscriptions : [ SubscriptionSchema ]
, accessTokens : [ AccessToken ]
Those are embedded arrays; they are not "external" documents in another collection, though it would not matter even if they were. The general problem here is that you are (at least in some way) introducing an array, which seems to imply some concept of "growth".
Unless your intention is to never "grow" these arrays (only ever inserting the content with the new document and never updating it), this will cause problems with capped collections.
Documents in capped collections cannot "grow" beyond their initially allocated size. An update that would cause this to happen results in an error. If you think you are going to be smart about it and "pad" your documents, that is likely to fail when replicated secondary hosts "re-sync". All of this is documented for capped collections.
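If the collection really must be capped, a hedged sketch of one way around this, assuming the subscriptions and access tokens can live in their own ordinary collections, is to keep the capped documents at a fixed shape and reference the growable data from elsewhere:

// Sketch only: no growable arrays inside the capped collection's documents.
var userSchema = new Schema({
    email: String,
    firstName: String,
    password: String,
    isAdmin: Boolean,
    lastSeen: Date
}, {
    toObject: { virtuals: true },
    toJSON: { virtuals: true },
    capped: 234556
});

// Subscriptions and access tokens go in separate, non-capped collections
// that point back at the user.
var subscriptionSchema = new Schema({
    user: { type: Schema.Types.ObjectId, ref: 'User' },
    plan: String // hypothetical field
});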