I have read the official documentation about int64, and I understand that I need to use the NumberLong wrapper for int64 values. But I have found that some special values can be used without NumberLong:
From my screenshot, I would expect MongoDB Compass to treat 1128505640310804481 as a double, just like JavaScript, and round it to 1128505640310804500 (which is what I get from JavaScript). The data in the DB is shown as int64, so I think 1128505640310804481 is stored correctly as int64. Since 1128505640310804500 is not equal to 1128505640310804481, I would expect my filter to match no data, but MongoDB Compass gives me the result.
So my question is: when I enter an int64 in the MongoDB Compass filter as in the picture, how does it handle the int64, and why can it match the correct int64 data stored in the DB?
To start with, MongoDB can store 64-bit integer values because data is stored as BSON (a binary serialization format). This solves the issue on the server side. See also BSON Types.
Now, MongoDB Compass is able to identify the type of a number (int32, int64, or Double) by auto-casting. It inspects the value in the editor: when an int32 is edited to a value beyond 32 bits AND the value passes the +/- bounds of Number.isSafeInteger, it casts to int64.
The part of MongoDB Compass that does the type checking has actually been open-sourced. See the type-checker code: mongodb-js/hadron-type-checker/blob/master/src/type-checker.js. The NPM package is hadron-type-checker.
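As a rough sketch of what that auto-casting amounts to (an illustration, not the actual hadron-type-checker source), note that the decision has to be made from the raw string in the editor, because parsing through a JavaScript number first would already lose precision:

```js
// Illustration of the casting decision (not the real Compass code): pick a
// BSON numeric type from the raw editor string, and parse int64 with the
// bson Long class so the digits never pass through a lossy double.
const { Long, Int32, Double } = require('bson');

function castNumber(input) {
  if (/^-?\d+$/.test(input)) {                   // integral text
    const asNumber = Number(input);
    if (asNumber >= -2147483648 && asNumber <= 2147483647) {
      return new Int32(asNumber);                // fits in 32 bits
    }
    return Long.fromString(input);               // exact: parsed digit by digit
  }
  return new Double(Number(input));              // fractional input -> double
}

Number('1128505640310804481');                   // -> 1128505640310804500 (rounded)
Number.isSafeInteger(1128505640310804500);       // -> false
castNumber('1128505640310804481').toString();    // -> '1128505640310804481' (exact)
```

That is why the filter can match the stored int64: the literal is never round-tripped through a double, and the server receives a proper 64-bit BSON long.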
Related
We use Postgres and Prisma for our Next.js app. The previous developers used cuid for every table in our schema. For various reasons we are restructuring the tables, and I was wondering whether it would be better to use int ids. Would that result in any performance gain?
What are the tradeoffs between using an autoincrementing Int id vs. cuid with the Prisma client for Postgres?
If you start comparing GUID vs. int ids for Postgres, please cite an authoritative reference showing that cuid is mapped to GUID for Postgres.
A sequence generating bigint values will certainly be faster than even the most efficient CUID or GUID algorithm, and the result will need less storage space.
The only good reasons to use something else, like a CUID or GUID, are:
- you have cryptographic requirements to obscure the creation order (but CUID doesn't do that; see the sketch after this list)
- you need to generate primary keys outside the database, in a distributed environment
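To illustrate the first point, here's a small sketch, assuming the classic 25-character cuid format ('c', then 8 base-36 characters of Date.now(), then counter, fingerprint, and random blocks): the creation time sits in the id in plain sight, so cuid does not obscure ordering.

```js
// Sketch: recover the creation time embedded in a classic cuid (assumes the
// 25-char format 'c' + 8 base-36 timestamp chars + counter/fingerprint/random).
function cuidTimestamp(id) {
  const base36 = id.slice(1, 9);        // chars 2-9 hold Date.now() in base 36
  return new Date(parseInt(base36, 36));
}

// Example id from the cuid docs:
// cuidTimestamp('cjld2cjxh0000qzrmn831i7rn') -> a timestamp from late August
// 2018, recoverable by anyone who sees the id.
```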
I have a field in Mongo which is of number type. I want to encrypt it into a unique value, save it, and then decrypt it back to the original number, without converting it to a string value.
You can create a Buffer object, write a JavaScript number into the Buffer (producing a binary set of bytes), and then encrypt the contents of the buffer. Then save the encrypted binary data from the buffer in your database.
To decrypt, you reverse the process: read the binary from the database into a Buffer, decrypt the buffer, and then read the number back out of it.
You can write a JavaScript number to a Buffer with something like:
buf.writeDoubleBE(value[, offset])
and there's a complementary method, buf.readDoubleBE([offset]), for reading a number back from the buffer.
So, this doesn't have to be converted to a string anywhere, but it is converted to a binary Buffer.
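Here's a minimal sketch of that round trip (the cipher choice and key handling are placeholder assumptions; in practice, load a securely stored key and use a fresh IV per value):

```js
// Round trip: number -> 8 binary bytes -> encrypted bytes -> ... -> number.
const crypto = require('crypto');

const key = crypto.randomBytes(32); // placeholder: load a securely stored key
const iv = crypto.randomBytes(16);  // placeholder: use a fresh IV per value

function encryptNumber(value) {
  const buf = Buffer.alloc(8);
  buf.writeDoubleBE(value);         // JavaScript number -> 8 bytes, no string
  const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
  return Buffer.concat([cipher.update(buf), cipher.final()]);
}

function decryptNumber(encrypted) {
  const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
  const buf = Buffer.concat([decipher.update(encrypted), decipher.final()]);
  return buf.readDoubleBE();        // 8 bytes -> JavaScript number
}

// decryptNumber(encryptNumber(37.5)) === 37.5; the stored value stays binary
// (e.g., a BSON Binary field) end to end.
```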
FYI, here's an article on encrypting strings, buffers, and streams in Node.js: https://attacomsian.com/blog/nodejs-encrypt-decrypt-data
FYI, MongoDB has some built-in encryption options (depending upon what storage engine you're using). This is more like database-level encryption that protects the whole database on disk rather than encrypting individual fields. Here's some info on that: https://docs.mongodb.com/manual/core/security-encryption-at-rest/#std-label-encrypted-storage-engine.
The MongoDB docs also have some info on client-side encryption of individual fields here: https://docs.mongodb.com/manual/core/security-client-side-encryption/ and here: https://docs.mongodb.com/manual/core/security-explicit-client-side-encryption/
And apparently the Enterprise version of MongoDB 4.2 or later has some automatic field-level encryption.
I am currently working on a Kotlin multi-project solution.
I have one project that defines some data classes and an API to access MongoDB. The ObjectId is created automatically. This project uses morphia:1.3.2.
Entries are stored using this function:
fun store(myClass: MyClass) = db.save(myClass).let { myClass.id?.toHexString() ?: "0" }
Now I'm using this project in a Spring Boot Kotlin project.
I created a small web page with some filters. These filters should be applied on my query. So far so good, everything is working.
The results of my query are returned via my REST controller without any conversion. In my web page I want to print the ObjectId for each result.
But the ObjectId is no longer a String as it used to be; it is an object:
id:
counter:15304909
date:"2018-08-27T23:45:35.000+0000"
machineIdentifier:123456
processIdentifier:1234
time:1535413535000
timeSecond:1535413535
timestamp:1535413535
Is it possible to force Morphia to return the ObjectId in its String representation? Or is there an option to activate the correct mapping? Or do I have to touch each result one by one and convert the ObjectId to its hexadecimal string representation? I hope there is a better and quicker solution than that.
I am also not able to remap the object to a valid id, due to a java.lang.IllegalArgumentException: "Invalid character found in the request target. The valid characters are defined in RFC 7230 and RFC 3986". The request looks like this:
myClass?id={"timestamp":1535413631,"machineIdentifier":123456,"processIdentifier":1234,"counter":16576969,"time":1535413631000,"date":"2018-08-27T23:47:11.000+0000","timeSecond":1535413631}
I'm a little bit out of ideas on how to fix this issue.
Depending on your REST framework, you would need to provide a serializer that writes out that ObjectId as its String form, say. Most such frameworks make that transparent once configured, so you need only worry about returning your objects from your REST service and the framework will serialize them properly.
I, personally, wouldn't muck about with trying to change how it's serialized in the database. ObjectId is a pretty good _id type and I wouldn't change it.
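If configuring the framework's serialization isn't an option, the hex string can also be rebuilt on the web-page side from the fields shown in the question. A sketch, assuming the legacy ObjectId layout those fields come from (4-byte timestamp, 3-byte machine identifier, 2-byte process identifier, 3-byte counter):

```js
// Client-side fallback: reassemble the 24-character hex _id from the
// serialized legacy ObjectId fields (layout assumed from the question).
function objectIdToHex(id) {
  const hex = (value, bytes) => value.toString(16).padStart(bytes * 2, '0');
  return (
    hex(id.timestamp, 4) +          // 4-byte creation time (seconds)
    hex(id.machineIdentifier, 3) +  // 3-byte machine id
    hex(id.processIdentifier, 2) +  // 2-byte process id
    hex(id.counter, 3)              // 3-byte counter
  );
}

// objectIdToHex({ timestamp: 1535413535, machineIdentifier: 123456,
//                 processIdentifier: 1234, counter: 15304909 })
// -> '5b848d1f01e24004d2e988cd'
```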
This might seem a very naive question, but I am having a hard time figuring this out. I have a float value, 37.50378, in my PostgreSQL database. When I fetch this value in my Node.js application, I get 37.5038. I want to fetch the exact number without the decimal digits being rounded. How do I do that?
The data type of the column in Postgres is real.
EDIT
I am using the Knex schema builder and used float(column, precision, scale) to create the column (to store the above-said value). I have tried different numbers for precision and scale in case they were causing the behavior described above, but every time I try to fetch the value 37.50378, all I get back is 37.5038.
Thanks.
You may want to use double(column) in Knex, which is translated to double precision in Postgres.
The rounding happens because real is a 4-byte floating-point type with roughly 6 significant decimal digits of precision. See PostgreSQL Numeric Types.
It's got nothing to do with Node.js or its PostgreSQL driver.
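A minimal sketch with the Knex schema builder (the table and column names are placeholders):

```js
// real is a 4-byte float (~6 significant digits), so 37.50378 reads back as
// 37.5038; double precision (8 bytes, ~15 digits) round-trips it intact.
const knex = require('knex')({
  client: 'pg',
  connection: process.env.DATABASE_URL, // assumption: Postgres connection string
});

async function migrate() {
  await knex.schema.createTable('readings', (table) => {
    table.increments('id');
    // table.float('value');  // -> real: 37.50378 comes back as 37.5038
    table.double('value');    // -> double precision: 37.50378 is preserved
  });
}
```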
The number is larger than 9223372036854775807, which is too big for NumberLong, Mongo's native 64-bit long type. What's the best way to store it / the best field type?
Is it possible to preserve any of the querying functionality of a smaller integer (such as {$lt})?
The big numbers are being generated by bignumber.js, and I'm using Mongoose to interact with MongoDB.
I'm afraid the only viable/safe option is to store such big numbers as strings and serialize them back and forth between the application and MongoDB when reading/writing. You will, however, lose the ability to use MongoDB's built-in functions that work on numbers (you can still cast the values to numbers, but it won't be safe anymore). A sketch of one partial workaround for range queries follows.
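The sketch below (hypothetical model and field names) stores the value as a zero-padded, fixed-width decimal string. For non-negative numbers of one fixed width, lexicographic order equals numeric order, so range operators like $lt keep working on the string field:

```js
// Partial workaround: fixed-width, zero-padded decimal strings sort like
// numbers, so $lt/$gt range queries still behave for non-negative values.
const BigNumber = require('bignumber.js');
const mongoose = require('mongoose');

const WIDTH = 40; // assumption: wide enough for the largest value you expect

const itemSchema = new mongoose.Schema({
  bigValue: { type: String, index: true },
});
const Item = mongoose.model('Item', itemSchema);

// Pad on write and in every query, so comparisons stay consistent.
const pad = (n) => new BigNumber(n).toFixed(0).padStart(WIDTH, '0');

async function demo() {
  await Item.create({ bigValue: pad('9223372036854775808') }); // > int64 max
  // Matches because '...09223372036854775808' < '...10000000000000000000':
  return Item.find({ bigValue: { $lt: pad('10000000000000000000') } });
}
```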