node-redis/search query elements by geospatial position and additional indexes - javascript

I want to be able to query elements in a Redis cache based on 3 different indexes. Those indexes would be:
A MAC address stored as a string.
A number.
A latitude and longitude (to be able to query spatially).
I have seen that Redis has support for multi-indexing using RediSearch and the native geospatial API.
So, using Node.js and node-redis, I have written the following index:
client.ft.create(
  'idx:cits',
  {
    mid: {
      type: SchemaFieldTypes.TEXT
    },
    timestamp: {
      type: SchemaFieldTypes.NUMERIC,
      sortable: true
    },
    position: {
      type: SchemaFieldTypes.GEO
    }
  },
  {
    ON: 'HASH',
    PREFIX: 'CITS'
  }
)
Now, I would like to insert records into the database that include those 3 parameters plus an additional string that stores some payload. I have tried using
await client.hSet('CITS:19123123:0:0:00:00:5e:00:53:af', {
  timestamp: 19123123,
  position: { latitude: 0, longitude: 0 },
  mid: '00:00:5e:00:53:af',
  message: 'payload'
})
But I get the following error:
throw new TypeError('Invalid argument type');
^
TypeError: Invalid argument type
So, I can't add the latitude and longitude that way. I also tried using the module ngeohash, computing an 11-character-wide geohash like so:
await client.hSet('CITS:19123123:0:0:00:00:5e:00:53:af', {
  timestamp: 19123123,
  position: geohash.encode(0, 0, 11),
  mid: '00:00:5e:00:53:af',
  message: 'payload'
})
And it does not give any error, but when using RediSearch queries it does not find points near it.
Is what I am trying to do even possible? If so, how would you input the data into the Redis database?
Here is a minimal reproducible example (I'm using "ngeohash": "^0.6.3" and "redis": "^4.5.0"):
const { createClient, SchemaFieldTypes } = require('redis')
const geohash = require('ngeohash')

const client = createClient()

async function start(client) {
  await client.connect()
  try {
    // We only want to sort by these 3 values
    await client.ft.create(
      'idx:cits',
      {
        mid: {
          type: SchemaFieldTypes.TEXT
        },
        timestamp: {
          type: SchemaFieldTypes.NUMERIC,
          sortable: true
        },
        position: {
          type: SchemaFieldTypes.GEO
        }
      },
      {
        ON: 'HASH',
        PREFIX: 'CITS'
      }
    )
  } catch (e) {
    if (e.message === 'Index already exists') {
      console.log('Skipping index creation as it already exists.')
    } else {
      console.error(e)
      process.exit(1)
    }
  }
  await client.hSet('CITS:19123123:0:0:00:00:5e:00:53:af', {
    timestamp: 19123123,
    position: geohash.encode(0, 0, 11),
    mid: '00:00:5e:00:53:af',
    message: 'payload'
  })
  await client.hSet('CITS:19123123:0.001:0.001:ff:ff:ff:ff:ff:ff', {
    timestamp: 19123123,
    position: geohash.encode(0.001, 0.001, 11),
    mid: 'ff:ff:ff:ff:ff:ff',
    message: 'payload'
  })
  const results = await client.ft.search(
    'idx:cits',
    '@position:[0 0 10000 km]'
  )
  console.log(results)
  await client.quit()
}

start(client)
Additionally, I would like to ask if there is maybe another type of database that better suits my needs. I have chosen Redis because it offers low latency, and that is the biggest constraint in my environment (I will probably do more writes than reads per second). I only want it to act as an immediate cache, as persistent data will be stored in another database that does not need to be fast.
Thank you.

You get the Invalid argument type error because Redis does not support nested fields in hashes.
"GEO allows geographic range queries against the value in this attribute. The value of the attribute must be a string containing a longitude (first) and latitude separated by a comma" (https://redis.io/commands/ft.create/)

Related

Mongoose findOne sending null

I am trying to edit a Discord bot made in Python (I stored data initially in Python) and transfer it to JavaScript (Node.js), and I can't figure out, while connecting to my old DB, why findOne is giving me null when I provide a proper Discord ID.
Without anything inside
Code
anifarm.findOne();
Output
{
  _id: 707876147324518400,
  farmed: 17,
  ordered: 5,
  pimage: 'https://media.tenor.com/images/e830217a5d9926788ef25119955edc7f/tenor.gif',
  pstatus: 'I want you to be happy. I want you to laugh a lot. I don’t know what exactly I’ll be able to do for you, but I’ll always be by your side.',
  avg: 184,
  speed: 2,
  badges: [
    'https://cdn.discordapp.com/attachments/856137319149207563/856137435696332800/Black-and-Yellow-Gaming-Badge--unscreen.gif',
    'https://cdn.discordapp.com/attachments/856137319149207563/862219383866523688/Front-removebg-preview.png',
    'https://cdn.discordapp.com/attachments/856137319149207563/862240758768599100/download-removebg-preview.png'
  ],
  setBadges: 'https://cdn.discordapp.com/attachments/856137319149207563/862240758768599100/download-removebg-preview.png'
}
With id inside
Code
anifarm.findOne({
  _id: 707876147324518400
});
Output
null
anifarm is the model. Declared schema:
module.exports = mongoose.model('anifarm', new mongoose.Schema({
  _id: Number,
  farmed: {
    type: Number,
    default: 0
  },
  ordered: {
    type: Number,
    default: 0
  },
  pimage: {
    type: String,
    default: ""
  },
  pstatus: {
    type: String,
    default: ""
  },
  avg: {
    type: Number,
    default: 200
  },
  speed: {
    type: Number,
    default: 2
  },
  badges: {
    type: Array,
    default: []
  },
  setBadges: {
    type: String,
    default: ""
  }
},
{
  collection: 'anifarm',
  versionKey: false
})
);
I cannot figure out what I am doing wrong. This problem also happens with .find(): with nothing inside, find fetches everything, but if I provide an id it returns an empty array.
A little help would be appreciated.
For your problem, use mongoose-long; that should fix it.
This library handles long-type data for Mongoose, since Mongoose cannot handle long values natively: your _id (707876147324518400) is larger than Number.MAX_SAFE_INTEGER, so it silently loses precision as a plain JavaScript number and no longer matches the value stored in the database.
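A minimal sketch of how that could look, assuming the standard mongoose-long setup (the plugin registers a Long type on Mongoose):
const mongoose = require('mongoose');
require('mongoose-long')(mongoose);

const { Long } = mongoose.Schema.Types;

// Declare _id as Long instead of Number so the full 64-bit id is preserved
const anifarmSchema = new mongoose.Schema({ _id: Long /* ...other fields... */ });

// Query with a Long built from a string, never from a number literal
anifarm.findOne({
  _id: mongoose.Types.Long.fromString("707876147324518400")
});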
You can't pass an id as a number; you will have to use ObjectId to convert the id to an instance of ObjectId.
Change your code like this:
anifarm.findOne({
  _id: mongoose.Types.ObjectId(707876147324518400)
});
If you're querying by _id, use findById() instead.
anifarm.findById("707876147324518400")
Official docs here

How can I limit the objects from a group in a query in Gatsby?

I have this query in my code which allows me to build a tag cloud for the blog's front page:
tagCloud: allContentfulBlogPost {
  group(field: tags, limit: 8) {
    fieldValue
  }
}
It's passing data that I map in my component using {data.tagCloud.group.map(tag => (...))};. The code works nicely, but the result isn't limited by the limit I'm passing in group(field: tags, limit: 8) in my query: it renders all the tags, not only the first eight.
I've unsuccessfully tried the skip filter as well, for the sake of seeing if it works.
Is this the proper way to limit the count for my mapping component in Gatsby?
The Contentful source plugin doesn't define arguments on any of the nodes it creates, unfortunately. Instead you would need to create these yourself. The easiest way to do that is through the createResolvers API.
Here's a similar example from a project of mine:
// in gatsby-node.js
exports.createResolvers = ({ createResolvers }) => {
  createResolvers({
    SourceArticleCollection: {
      // Add articles from the selected section(s)
      articles: {
        type: ["SourceArticle"],
        args: {
          // here's where the `limit` argument is added
          limit: {
            type: "Int",
          },
        },
        resolve: async (source, args, context, info) => {
          // this function just needs to return the data for the field;
          // in this case, I'm able to fetch a list of the top-level
          // entries that match a particular condition, but in your case
          // you might want to instead use the existing data in your
          // `source` and just slice it in JS.
          const articles = await context.nodeModel.runQuery({
            query: {
              filter: {
                category: {
                  section: {
                    id: {
                      in: source.sections.map((s) => s._ref),
                    },
                  },
                },
              },
            },
            type: "SourceArticle",
          })

          return (articles || []).slice(0, args.limit || source.limit || 20)
        },
      },
    },
  })
}
Because resolvers run as part of the data-fetching routines that support the GraphQL API, this will run server-side at build-time and only the truncated/prepared data will be sent down to the client at request time.
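For reference, a query could then pass the new argument like this (assuming SourceArticleCollection is a node type, so Gatsby exposes a sourceArticleCollection root field; the names come from my example, not your Contentful schema):
{
  sourceArticleCollection {
    articles(limit: 8) {
      id
    }
  }
}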

What is the best way to keep track of changes of a document's property in MongoDB?

I would like to know how to keep track of the values of a document in MongoDB.
It's a MongoDB Database with a Node and Express backend.
Say I have a document, which is part of the Patients collection.
{
  "_id": "4k2lK49938d82kL",
  "firstName": "John",
  "objective": "Burn fat"
}
Then I edit the "objective" property, so the document results like this:
{
  "_id": "4k2lK49938d82kL",
  "firstName": "John",
  "objective": "Gain muscle"
}
What's the best/most efficient way to keep track of that change? In other words, I would like to know that the "objective" property had the value "Burn fat" in the past, and access it in the future.
Thanks a lot!
Maintaining/tracking history in the same document is not at all recommended, as the document size will keep on increasing, leading to:
hitting the 16 MB document size limit if there are too many updates
degraded performance
Instead, you should maintain a separate collection for history. You might have used Hibernate's Javers or Envers for auditing your relational databases; if not, you can check how they work. A separate table (xyz_AUD) is maintained for each table (xyz). For each row (with primary key abc) in the xyz table, there exist multiple rows in the xyz_AUD table, where each row is a version of that row.
Moreover, Javers also supports MongoDB auditing. If you are using Java you can use it directly; no need to write your own logic.
Refer - https://nullbeans.com/auditing-using-spring-boot-mongodb-and-javers/
One more thing: Javers, Envers and Hibernate are Java libraries, but I'm sure similar libraries exist for other programming languages as well.
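If you would rather hand-roll that pattern in Node, a minimal sketch with the MongoDB driver could look like this (collection names are hypothetical, mirroring the xyz / xyz_AUD convention):
// Hypothetical sketch: before each update, snapshot the current document
// into a parallel history collection, then apply the change.
async function updatePatient(db, id, changes) {
  const current = await db.collection('patients').findOne({ _id: id });
  if (current) {
    await db.collection('patients_history').insertOne({
      patientId: id,
      snapshot: current,      // full previous version
      versionedAt: new Date() // when it was superseded
    });
  }
  await db.collection('patients').updateOne({ _id: id }, { $set: changes });
}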
There are mongoose plugins as well:
https://www.npmjs.com/package/mongoose-audit (quite outdated, 4 years old)
https://github.com/nassor/mongoose-history#readme (better)
Maybe you can change the type of "objective" to an array and track the changes in it; the last element of the array is the latest value.
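A quick sketch of that idea, assuming "objective" is redeclared as an array (the Patient model name is borrowed from the answer below):
// Append the new objective; the last array element is the current value
await Patient.updateOne(
  { _id: "4k2lK49938d82kL" },
  { $push: { objective: "Gain muscle" } }
);
// Reading it back: doc.objective[doc.objective.length - 1] is the latest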
Maintain it as a sub-document like below
{
  "_id": "4k2lK49938d82kL",
  "firstName": "John",
  "objective": {
    obj1: "Gain muscle",
    obj2: "Burn fat"
  }
}
You can also maintain it as an array field but remember, MongoDB doesn't allow you to enforce uniqueness within an array field, and if you plan to index the "objective" field, you'll have to create a multikey index.
I think the simplest solution would be to use and update an array:
const patientSchema = new Schema({
  firstName: { type: String, required: true },
  lastName: { type: String, required: true },
  objective: { type: String, required: true },
  notes: [{
    // Date.now without parentheses, so the default is evaluated per document
    date: { type: Date, default: Date.now },
    note: { type: String, required: true }
  }],
});
Then when you want to update the objective...
const updatePatientObjective = async (req, res) => {
  try {
    // check if _id and new objective exist in req.body
    const { _id, objective, date } = req.body;
    if (!_id || !objective) throw "Unable to update patient's objective.";

    // make sure provided _id is valid
    const existingPatient = await Patient.findOne({ _id });
    if (!existingPatient) throw "Unable to locate that patient.";

    // pull out objective as previousObjective
    const { objective: previousObjective } = existingPatient;

    // update patient's objective while pushing
    // the previous objective into the notes sub document
    await existingPatient.updateOne({
      // update current objective
      $set: { objective },
      // push an object with a date and note (previousObjective)
      // into the notes array
      $push: {
        notes: {
          date,
          note: previousObjective
        },
      },
    });

    // send back response
    res
      .status(201)
      .json({ message: "Successfully updated your objective!" });
  } catch (err) {
    return res.status(400).json({ err: err.toString() });
  }
};
The document will look like:
{
  firstName: "John",
  lastName: "Smith",
  objective: "Lose body fat.",
  notes: [
    {
      date: 2019-07-19T17:45:43-07:00,
      note: "Gain muscle"
    },
    {
      date: 2019-08-09T12:00:38-07:00,
      note: "Work on cardio."
    },
    {
      date: 2019-08-29T19:00:38-07:00,
      note: "Become a fullstack web developer."
    }
    ...etc
  ]
}
Alternatively, if you're worried about document size, then create a separate schema for patient history and reference the user's id (or just store the patient's _id as a string instead of referencing an ObjectId, whichever you prefer):
const patientHistorySchema = new Schema({
  _id: { type: Schema.Types.ObjectId, ref: "Patient", required: true },
  objective: { type: String, required: true }
});
Then create a new patient history document when the objective is updated...
PatientHistory.create({ _id, objective: previousObjective });
And if you need to access to the patient history documents...
PatientHistory.find({ _id });

AWS Javascript SDK : Greengrass createFunctionDefinition

I'm formatting my parameters according to this https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Greengrass.html#createFunctionDefinition-property
But for whatever reason it's giving me a key error for "Execution" as well as for "DefaultConfig".
Response:
Request ID:
"3ed83472-39af-493b-9df7-7f82d2f14636"
Function Logs:
r: Unexpected key \'Execution\' found in params.InitialVersion.Functions[0].FunctionConfiguration.Environment',
code: 'MultipleValidationErrors',
errors:
[ { UnexpectedParameter: Unexpected key 'DefaultConfig' found in params.InitialVersion
at ParamValidator.fail (/var/runtime/node_modules/aws-
and the code
GG.createFunctionDefinition({
  InitialVersion: {
    DefaultConfig: {
      Execution: {
        IsolationMode: "NoContainer"
      }
    },
    Functions: [
      {
        FunctionArn: "arn:aws:lambda:us-west-2:644226108543:function:SahmCumminsTelemetryTest:1",
        FunctionConfiguration: {
          MemorySize: 524288,
          Pinned: true,
          Timeout: 600,
          Environment: {
            AccessSysfs: false,
            Execution: {
              IsolationMode: "NoContainer",
              RunAs: {
                Gid: 0,
                Uid: 0
              }
            }
          }
        },
        Id: "function_definition",
      },
    ],
  },
  Name: "function_definition",
}, function (err, data) {
  if (err) {
    console.log(err, err.stack);
  } else {
    funcArn = data.LatestVersionArn;
  }
});
I think that the issue is that the configuration data specifies two mutually exclusive options: a memory size value is specified AND it says it should use "NoContainer". When the Greengrass container isn't in use, memory size isn't a valid option.
Try removing memory size and see if that fixes it.
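A sketch of what that would look like, assuming only MemorySize has to go and everything else stays as in the question:
GG.createFunctionDefinition({
  InitialVersion: {
    DefaultConfig: {
      Execution: { IsolationMode: "NoContainer" }
    },
    Functions: [{
      FunctionArn: "arn:aws:lambda:us-west-2:644226108543:function:SahmCumminsTelemetryTest:1",
      FunctionConfiguration: {
        // MemorySize omitted: it only applies when running in a container
        Pinned: true,
        Timeout: 600,
        Environment: {
          AccessSysfs: false,
          Execution: {
            IsolationMode: "NoContainer",
            RunAs: { Gid: 0, Uid: 0 }
          }
        }
      },
      Id: "function_definition",
    }],
  },
  Name: "function_definition",
}, function (err, data) {
  if (err) console.log(err, err.stack);
  else funcArn = data.LatestVersionArn;
});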
The provisioning code I've shared on Github "scrubs" functions when NoContainer is set to get around this issue. The scrubbing process is to set the memory size to NULL so when it is serialized to JSON the field is missing.
https://github.com/awslabs/aws-greengrass-provisioner/blob/e2608654b65682ca9b5b03da962cc8cb29ea1cbf/src/main/java/com/awslabs/aws/greengrass/provisioner/implementations/helpers/BasicGreengrassHelper.java#L390

How to perform date comparisons against postgres with sequelize

I want to delete all records with dates before 20 minutes ago. Postgres (or Sequelize) is not satisfied with the bare javascript Date object I provide as the comparison value.
I'm using sequelize 4.37 on top of a postgres 9.6 database.
The column in question was declared with type: Sequelize.DATE, which research suggests is equivalent to TIMESTAMP WITH TIME ZONE: a full date and time with microsecond precision and a timezone signifier. (That is also what I see when I use the psql CLI tool to describe the table.)
So, I do this:
const Sequelize = require('sequelize')
const { SomeModel } = require('../models.js')

async function deleteStuff() {
  // calculate 20 minutes ago
  const deletionCutoff = new Date()
  deletionCutoff.setMinutes( deletionCutoff.getMinutes() - 20 )

  await SomeModel.destroy({
    where: {
      [ Sequelize.Op.lt ]: { dateColumn: deletionCutoff }
    }
  })
}
But I get this error:
Error: Invalid value { dateColumn: 2018-11-21T21:26:16.849Z }
The docs suggest I should be able to provide either a bare javascript Date, or an ISO8601 string, but both throw the same Invalid Value error. The only difference is that, if I pass a string, the error shows single quotes around the value:
// error when dateColumn: deletionCutoff.toISOString()
Error: Invalid value { dateColumn: '2018-11-21T21:26:16.849Z' }
Well, this is pretty embarrassing. I structured the where clause incorrectly.
// BAD CODE
await SomeModel.destroy({
  where: {
    [ Sequelize.Op.lt ]: {
      dateColumn: deletionCutoff
    }
  }
})

// GOOD CODE
await SomeModel.destroy({
  where: {
    dateColumn: {
      [ Sequelize.Op.lt ]: deletionCutoff
    }
  }
})
Maybe I should delete the question. Maybe not -- the error I got probably could be more helpful.
