Currently I store some data in FaunaDB every week. This is done using a cronjob. In my code I'm trying to fetch the documents from only the last two weeks. I'd like to use the timestamp to do so.
One of the documents to fetch:
{
"ref": Ref(Collection("weeklyContributors"), "350395411XXXXXXXX"),
"ts": 1670421954340000,
"data": {
...allMyDataFields
}
}
My code
const now = Date.now() * 1000;
const twoWeeksAgo = (Date.now() - 12096e5) * 1000;
console.log(now); //returns 1670493608804000
console.log(twoWeeksAgo); // returns 1669284008804000
// the stored document has a timestamp of 1670421954340000, so this should be in between [now] and [twoWeeksAgo]
await client.query(
q.Paginate(
q.Range(
q.Match(q.Index("get_weekly_list_by_ts")),
twoWeeksAgo,
now
)
)
);
This is a screenshot of the index I created in Fauna
The above code should fetch all documents whose timestamp is between twoWeeksAgo and now, but it returns an empty array (so no documents match the query). The code doesn't generate any errors and returns a 200 status code, so the syntax should be fine. Why can't I fetch the document I gave in this example?
UPDATE
Found the solution for the index. The index should filter on Values, not Terms. Adding ts and ref as Values returns results, but now I don't know how to get the corresponding documents.
This returns an error
await client.query(
q.Map(
q.Paginate(
q.Range(
q.Match(q.Index("get_weekly_list_by_ts")),
twoWeeksAgo,
now
)
),
q.Lambda((x) => q.Get(x))
)
);
Changed index screenshot here
Congratulations on figuring out most of the answer for yourself!
As you deduced, the terms definition in an index specifies the fields to search for, and the values definition specifies the field values to return for matching entries.
Since you added the document reference to the values definition, all that you need now is to fetch that document. To do that, you need to Map over the results.
The following example uses Shell syntax, and involves sample documents that I created with a createdAt field recording the creation timestamp (since ts is the last-modified timestamp):
> Map(
Paginate(
Range(
Match(Index("get_weekly_list_by_ts")),
TimeSubtract(Now(), 14, "days"),
Now()
)
),
Lambda(
["ts", "ref"],
Get(Var("ref"))
)
)
{
data: [
{
ref: Ref(Collection("weeklyContributors"), "350498857823502848"),
ts: 1670520608640000,
data: { createdAt: Time("2022-12-01T17:30:08.633Z"), name: 'Fourth' }
},
{
ref: Ref(Collection("weeklyContributors"), "350498864657072640"),
ts: 1670520615160000,
data: { createdAt: Time("2022-12-07T17:30:15.152Z"), name: 'Fifth' }
}
]
}
Since your index returns ts and ref, notice that the Lambda function accepts both parameters in an array. The number of Lambda parameters has to match the number of values returned by the index. The Lambda then calls Get to fetch the document.
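If you want to run the same thing through the JavaScript driver you already use, a sketch might look like the following (it assumes the index now defines values [ts, ref] as in the Shell example; if your index returns the numeric ts instead of a Time, pass your twoWeeksAgo and now microsecond values as the Range bounds rather than TimeSubtract(Now(), ...) and Now()):
// Sketch: JavaScript-driver version of the Shell query above.
// Assumes the "get_weekly_list_by_ts" index defines values [ts, ref].
const result = await client.query(
  q.Map(
    q.Paginate(
      q.Range(
        q.Match(q.Index("get_weekly_list_by_ts")),
        q.TimeSubtract(q.Now(), 14, "days"),
        q.Now()
      )
    ),
    // One Lambda parameter per index value, in the same order.
    q.Lambda(["ts", "ref"], q.Get(q.Var("ref")))
  )
);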
In case you're wondering, here's the index definition that I used for my example:
> Get(Index("get_weekly_list_by_ts"))
{
ref: Index("get_weekly_list_by_ts"),
ts: 1670520331720000,
active: true,
serialized: true,
name: 'get_weekly_list_by_ts',
source: Collection("weeklyContributors"),
values: [ { field: [ 'data', 'createdAt' ] }, { field: [ 'ref' ] } ],
partitions: 8
}
My index is misnamed: I used the same name from your original query to help you correlate what is being used.
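If you manage indexes from the JavaScript driver rather than the Shell, a sketch of creating an index with this shape might look like the following (the name and fields mirror the definition above; adjust them to your own schema):
// Sketch: create an index whose entries return [createdAt, ref] per document.
await client.query(
  q.CreateIndex({
    name: "get_weekly_list_by_ts",
    source: q.Collection("weeklyContributors"),
    values: [
      { field: ["data", "createdAt"] },
      { field: ["ref"] }
    ]
  })
);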
Note: there is no need to mask the document ID in a document that you share. It is only valid for the database containing the document.
I have been spending hours trying to debug this error. I am using Node, Joi, and Oracledb. When I attempt to make a POST request to insert the data from the request payload into my table, it gives me the error: NJS-012: encountered invalid bind data type in parameter 2. Nothing I have tried has fixed the issue. What did I do wrong to cause this? The code I use to update my table is in this manager:
manager.js
async function create(payload) {
try {
return await MassRepublishJob.query().insertAndFetch(payload)
} catch (error) {
log.error(`Error while creating mass republish jobs with payload: ${JSON.stringify(payload)}`, error)
}
}
This manager code is called from controller code:
Controller.js
async save(request, reply) {
const instance = await massRepublishJobManager.create(request.payload)
reply(instance)
}
This controller is the handler of my POST route:
method: 'POST',
path: `/${root}`,
config: {
tags: ['api'],
handler: controller.save,
description: 'Create new mass republish job',
notes: 'You can create a mass republish job by either sending a list of content ids or sending an object of search queries',
validate: {
payload: {
counts: Joi.object().example(Joi.object({
total: Joi.number().example(1),
completed: Joi.number().example(1),
failed: Joi.number().example(0),
queued: Joi.number().example(0),
})),
type: Joi.string().example('manual'),
content_ids: Joi.array().items(Joi.number().example(11111)),
search_query: Joi.object().example(Joi.object({
query: Joi.string().example('string'),
})),
republishing_reason: Joi.string().example('string'),
duration: Joi.number().example(0),
}
}
}
And finally, I set up the table as described in this SQL file. The fields "creation_date", "last_update_date", "created_by", and "last_updated_by" belong to the base model; this model extends that base model with the extra fields "counts", "type", "content_ids", "search_query", "republishing_reason", and "duration".
CREATE TABLE wwt_atc_api.mass_republish_jobs (
id NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
counts CLOB DEFAULT('{}') CHECK(counts IS JSON), -- JSON object of counts: each has completed, failed, total, queued
type VARCHAR(255) NOT NULL, -- right now only support 'manual' but can be more in the future
content_ids CLOB DEFAULT('[]'),
search_query CLOB DEFAULT('{}') CHECK(search_query IS JSON), -- JSON object of query: each has field, operator, value
republishing_reason VARCHAR(255),
duration NUMBER NOT NULL,
creation_date DATE NOT NULL,
last_update_date DATE NOT NULL,
created_by NUMBER NOT NULL,
last_updated_by NUMBER NOT NULL
);
GRANT SELECT, INSERT, UPDATE, DELETE ON WWT_ATC_API.mass_republish_jobs TO wwt_cf_atc_api;
The Knex instance also prints out this debug message:
method: 'insert',
options: {},
timeout: false,
cancelOnTimeout: false,
bindings: [
[ { id: 11111 } ],
'{"total":1,"completed":1,"failed":0,"queued":0}',
310242,
2022-06-23T18:53:28.463Z,
0,
310242,
2022-06-23T18:53:28.463Z,
'string',
'{"query":"string"}',
'manual',
ReturningHelper { columnName: 'ID' }
],
__knexQueryUid: 'c5885af0-f325-11ec-a4bd-05b5ac65699c',
sql: 'insert into "WWT_ATC_API"."MASS_REPUBLISH_JOBS" ("CONTENT_IDS", "COUNTS",
"CREATED_BY", "CREATION_DATE", "DURATION", "LAST_UPDATED_BY", "LAST_UPDATE_DATE",
"REPUBLISHING_REASON", "SEARCH_QUERY", "TYPE") values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
returning "ID" into ?',
outBinding: [ [ 'ID' ] ],
returning: [ 'ID' ]
I would really appreciate any help debugging this error.
The issue here is that CONTENT_IDS is not being stringified by Knex because the value is an array. This is a well-known issue with Knex: its QueryBuilder has no metadata to determine what format a given binding value needs to be converted into for the underlying driver/database to accept it. Typically objects are stringified but arrays are left as-is, hence the inconsistent behavior you are seeing in the bindings. The Knex developers therefore recommend stringifying all JSON values before passing them to the QueryBuilder.
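As a quick workaround (a sketch that keeps your existing manager), you could stringify the JSON-typed fields yourself before they reach the query builder:
// Sketch: stringify JSON fields before Knex builds its bindings.
// Field names are taken from the route validation above.
async function create(payload) {
  const prepared = {
    ...payload,
    counts: JSON.stringify(payload.counts || {}),
    content_ids: JSON.stringify(payload.content_ids || []),
    search_query: JSON.stringify(payload.search_query || {})
  };
  return MassRepublishJob.query().insertAndFetch(prepared);
}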
A cleaner way to ensure that stringification always occurs for these JSON values is to define jsonAttributes on your MassRepublishJob Objection model.
class MassRepublishJob extends Model {
...
static get jsonAttributes() {
return ["counts", "content_ids","search_query"];
}
}
This will ensure both that your model stringifies these values before binding and that it parses the stringified JSON when mapping rows from the database.
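With jsonAttributes in place, the original manager code should work unchanged; here is a sketch of the insert, using the payload shape from the route validation above:
// Sketch: Objection now stringifies these fields before Knex binds them,
// so the raw array never reaches the Oracle driver.
const job = await MassRepublishJob.query().insertAndFetch({
  type: 'manual',
  content_ids: [11111],
  counts: { total: 1, completed: 1, failed: 0, queued: 0 },
  search_query: { query: 'string' },
  republishing_reason: 'string',
  duration: 0
});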
In addition, you can use the $parseJson or $toDatabaseJson lifecycle methods to manually modify how the Objection model will bind arguments before any queries are run. You can further utilize Objection helpers like val, raw, and fn inside $parseJson for fine-grained control over how values for your model properties are bound.
import { fn, ... } from "objection"
...
class MassRepublishJobs extends Model {
  ...
  $parseJson(json: Pojo, opt: ModelOptions) {
    const superJson = super.$parseJson(json, opt);
    superJson.job_id = randomUUID();
    // will format sql with: to_date(?,?) and safely bind the arguments
    // (dateFormat is assumed to be an Oracle date-format string defined elsewhere)
    superJson.creation_date = fn('to_date', new Date().toISOString(), dateFormat);
    return superJson;
  }
}
Given the following MongoDB document structure (which I cannot change), I'm not sure how to read this in JavaScript?
So loginHistory is the main field name. It contains an Object type (okay???).
It can then contain multiple child fields (the example above has only ONE child), which are arrays. These field names are dynamic, but unique.
The 'array' content is a C# DateTimeOffset, I've been told.
So in the example above, Jane is the field and the value is an array, but really it's a DateTimeOffset.
Here's another document I've found:
4x fields
So I don't know how to read this with node/JavaScript. Oh, and the field loginHistory might not exist on some documents, also :(
So given that existing document schema/structure, I need to somehow read in each loginHistory value and then create a new document (which I'll do other stuff with later).
This is some JavaScript code I tried, but it doesn't work:
users.loginHistory.forEach(loginHistory => {
// do stuff, like create a new { id = users._id, name = "Jane", createdOn = "that date/time offset" }
}
Assuming we have an input set of docs like the screenshots, e.g.:
var r = [
{
loginHistory: {
"Jane": [ 1648835363929, 0 ],
"Bob": [ 1648835363929, 0 ]
}
},
{
noLoginHistory: "nope"
},
{
loginHistory: {
"Dan": [ 1648835363929, 0 ],
"Dave": [ 1648835363929, 0 ],
"Jane": [ 1648835363929, 0 ]
}
}
];
then the following pipeline will create new, "converted" docs in a collection named foo2.
db.foo.aggregate([
// OK if loginHistory does not exist. X will be set to null
// and the $unwind will not produce anything:
{$project: {X: {$objectToArray: "$$ROOT.loginHistory"} }},
{$unwind: "$X"},
// Now we have individual docs of
// X: {k: name, v: the array}
// The OP wanted to make a new doc for each one. I don't know
// what function to apply to turn X.v[0] into a MongoDB datetime
// because I don't know what that big int (637807576034080256) is
// supposed to be so I used regular ms since epoch for the example
// instead. The OP will have to get more creative with division
// and such to turn 637807576034080256 into something "toDate-able"
// in MongoDB. You *could* store the big int as is but it is always
// good to try to turn a datetime into a real datetime in mongodb.
{$project: {
_id: false,
name: '$X.k',
createdOn: {$toDate: {$arrayElemAt:['$X.v',0]}}
}}
// Now we have docs like :
// {name: "Jane", createdOn: ISODate(...) }
// By calling $out to a new collection, a new _id will be assigned
// to each:
,{$out: "foo2"}
]);
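If you would rather do the reshaping in Node instead of an aggregation pipeline, here is a rough JavaScript sketch using Object.entries (loginHistory is an object, not an array, which is why the forEach in the question fails). It also shows one common way to convert a .NET DateTimeOffset tick count, assuming the big integer really is ticks (100-nanosecond intervals since 0001-01-01T00:00:00Z); the sample docs above use plain milliseconds, for which new Date(value[0]) would be enough:
// Sketch: reshape fetched documents in Node. Assumes `users` is an array of
// documents read from the collection (e.g. via find().toArray()).

// .NET tick count at the Unix epoch; ticks are 100-nanosecond intervals.
const TICKS_AT_UNIX_EPOCH = 621355968000000000;
const TICKS_PER_MILLISECOND = 10000;

function ticksToDate(ticks) {
  // Numbers this large lose sub-millisecond precision as JS doubles,
  // which is fine for building a Date.
  return new Date((ticks - TICKS_AT_UNIX_EPOCH) / TICKS_PER_MILLISECOND);
}

const converted = [];
for (const user of users) {
  // loginHistory may be missing on some documents, so default to {}.
  for (const [name, value] of Object.entries(user.loginHistory || {})) {
    converted.push({
      id: user._id,
      name, // e.g. "Jane"
      createdOn: ticksToDate(value[0]) // first array element holds the ticks
    });
  }
}

console.log(ticksToDate(637807576034080256)); // ≈ 2022-02-18T05:06:43Z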
EDIT: Re-structured question, clearer and cleaner:
I have a data object from Sequelize that is sent by node-express:
{
"page": 0,
"limit": 10,
"total": 4,
"data": [
{
"id": 1,
"title": "movies",
"isActive": true,
"createdAt": "2020-05-30T19:26:04.000Z",
"updatedAt": "2020-05-30T19:26:04.000Z",
"questions": [
{
"questionsCount": 4
}
]
}
]
}
The BIG question is, how do I get the value of questionsCount?
The PROBLEM is, I just can't extract it; these two expressions give me an undefined result:
category.questions[0].questionsCount
category.questions[0]['questionsCount']
I WAS ABLE to get it using toJSON() (from the Sequelize lib, I think), like so:
category.questions[0].toJSON().questionsCount
But I'd like to know the answer to the question, or at least a clear explanation of why I have to use toJSON() just to get the questionsCount.
More context:
I have this GET in my controller:
exports.getCategories = (req, res) => {
const page = myUtil.parser.tryParseInt(req.query.page, 0)
const limit = myUtil.parser.tryParseInt(req.query.limit, 10)
db.Category.findAndCountAll({
where: {},
include: [
{
model: db.Question,
as: "questions",
attributes: [[db.Sequelize.fn('COUNT', 'id'), 'questionsCount']]
}
],
offset: limit * page,
limit: limit,
order: [["id", "ASC"]],
})
.then(data => {
data.rows.forEach(function(category) {
console.log("------ May 31 ----> " + JSON.stringify(category.questions[0]) + " -->" + category.questions[0].hasOwnProperty('questionsCount'))
console.log(JSON.stringify(category))
console.log(category.questions[0].toJSON().questionsCount)
})
res.json(myUtil.response.paging(data, page, limit))
})
.catch(err => {
console.log("Error get categories: " + err.message)
res.status(500).send({
message: "An error has occured while retrieving data."
})
})
}
I loop through the data.rows to get each category object.
The console.log outputs are:
------ May 31 ----> {"questionsCount":4} -->false
{"id":1,"title":"movies","isActive":true,"createdAt":"2020-05-30T19:26:04.000Z","updatedAt":"2020-05-30T19:26:04.000Z","questions":[{"questionsCount":4}]}
4
https://github.com/sequelize/sequelize/blob/master/docs/manual/core-concepts/model-querying-finders.md
By default, the results of all finder methods are instances of the model class (as opposed to being just plain JavaScript objects). This means that after the database returns the results, Sequelize automatically wraps everything in proper instance objects. In a few cases, when there are too many results, this wrapping can be inefficient. To disable this wrapping and receive a plain response instead, pass { raw: true } as an option to the finder method.
(emphasis by me)
Or directly in the source code, https://github.com/sequelize/sequelize/blob/59b8a7bfa018b94ccfa6e30e1040de91d1e3d3dd/lib/model.js#L2028
@returns {Promise<{count: number, rows: Model[]}>}
So the thing is that you get an array of Model objects, which you can navigate with their get() method. It's an unfortunate coincidence that you expected an array and got an array, so you thought it was "that" array. Try the { raw: true } thing; I guess it looks something like this:
db.Category.findAndCountAll({
where: {},
include: [
{
model: db.Question,
as: "questions",
attributes: [[db.Sequelize.fn('COUNT', 'id'), 'questionsCount']]
}
],
offset: limit * page,
limit: limit,
order: [["id", "ASC"]],
raw: true // <--- hopefully it is this simple
}) [...]
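Alternatively, if you keep the wrapped instances, you can read the aliased aggregate through the instance's get() method instead of toJSON() (a sketch; 'questionsCount' is the alias defined in the attributes array above):
// Sketch: read the aliased COUNT straight off the wrapped model instance.
data.rows.forEach(category => {
  console.log(category.questions[0].get('questionsCount')); // 4
});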
toJSON() is nearby too, https://github.com/sequelize/sequelize/blob/59b8a7bfa018b94ccfa6e30e1040de91d1e3d3dd/lib/model.js#L4341
/**
* Convert the instance to a JSON representation.
* Proxies to calling `get` with no keys.
* This means get all values gotten from the DB, and apply all custom getters.
*
* @see
* {@link Model#get}
*
* @returns {object}
*/
toJSON() {
return _.cloneDeep(
this.get({
plain: true
})
);
}
So it worked exactly because it did what you needed, removed the get() stuff and provided an actual JavaScript object matching your structure (POJSO? - sorry, I could not resist). I rarely use it and thus always forget, but the key background "trick" is that a bit contrary to its name, toJSON() is not expected to create the actual JSON string, but to provide a replacement object which still gets stringified by JSON.stringify(). (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#toJSON_behavior)
Try category.data[0].questions[0].questionCount.
As mentioned by others already, you need category.data[0].questions[0].questionCount.
Let me add to that by showing you why. Look at your object, I annotated it with how each part would be accessed:
category = { // category
"page": 0,
"limit": 10,
"total": 2,
"data": [ // category.data
{ // category.data[0]
"id": 1,
"title": "movies",
"createdAt": "2020-05-30T19:26:04.000Z",
"updatedAt": "2020-05-30T19:26:04.000Z",
"questions": [ // category.data[0].questions
{ // category.data[0].questions[0]
"questionCount": 2 // category.data[0].questions[0].questionCount
}
],
"questionsCount": "newValue here!"
}
]
}
try this
category.data[0].questions[0].questionCount
The reason you have to use toJSON() is that it is sometimes used to customise the stringification behaviour, for example doing some calculation before assigning the value to the object that will be returned. It has most likely been used here to calculate the number of questions and then return an object with the property questionsCount set to the calculated number.
So the object you retrieved more or less looks like this:
var category = {
  data: 'data',
  questions: [{
    // some calculation here to get the questionsCount
    result: 4,
    toJSON() {
      return { "questionsCount": this.result };
    }
  }]
};
console.log(category.questions[0].toJSON().questionsCount) // 4
console.log(JSON.stringify(category)) // {"data":"data","questions":[{"questionsCount":4}]}
console.log("------ May 31 ----> " + JSON.stringify(category.questions[0]) + " -->" + category.questions[0].hasOwnProperty('questionsCount')) // false
I have a JSONB column in my DB.
I'd like to make a request to the DB that checks whether some value in this JSON is true or false:
SELECT *
FROM table
WHERE ("json_column"->'data'->>'data2')::boolean = true AND id = '00000000-1111-2222-3333-456789abcdef'
LIMIT 1
So, my sequelize request:
const someVariableWithColumnName = 'data2';
Model.findOne({
where: {
[`$("json_column"->'data'->>'${someVariableWithColumnName}')::boolean$`]: true,
id: someIdVariable,
},
order: [/* some order, doesn't matter */],
})
And Sequelize generates a bad result like:
SELECT *
FROM table
WHERE "(json_column"."->'data'->>'data2')::boolean" = true AND id = '00000000-1111-2222-3333-456789abcdef'
LIMIT 1
It splits my column by . and adds " around every element.
Any idea how to stop Sequelize from adding " to the column in the where condition?
Edit:
Here is my query with sequelize.literal():
const someVariableWithColumnName = 'data2';
Model.findOne({
where: {
[sequelize.literal(`$("json_column"->'data'->>'${someVariableWithColumnName}')::boolean$`)]: true,
id: someIdVariable,
},
order: [/* some order, doesn't matter */],
})
You can use Sequelize.literal() to avoid spurious quotes. IMHO, wrapping the json handling in a db function might also be helpful.
I just came across a similar use case.
I believe you can use the static sequelize.where method in combination with sequelize.literal.
Here is the corresponding documentation in sequelize API reference: https://sequelize.org/master/class/lib/sequelize.js~Sequelize.html#static-method-where
And here is an example (although I will admit it is hard to find) in the regular documentation:
https://sequelize.org/master/manual/model-querying-basics.html#advanced-queries-with-functions--not-just-columns-
In the end, for your specific situation, try something like this:
const someVariableWithColumnName = 'data2';
Model.findOne({
where: {
[Op.and]: [
// We provide the virtual column sql as the first argument of sequelize.where with sequelize.literal.
// We provide the matching condition as the second argument of sequelize.where, with the usual sequelize syntax.
sequelize.where(sequelize.literal(`("json_column"->'data'->>'${someVariableWithColumnName}')::boolean`), { [Op.eq]: true }),
{ id: someIdVariable }
]
}
})
I have documents that are set up like this:
{ _id: 1, name: "A", timestamp: 1478115739, type: "report" }
{ _id: 2, name: "B", timestamp: 1478103721, type: "transmission" }
{ _id: 3, name: "C", timestamp: 1473114714, type: "report" }
I am trying to create a view that only returns the documents within a specific timestamp range. And I would love to be able to filter by type as well.
Here is my javascript call for the the data:
db.query('filters/timestamp_type', { startKey: 1378115739, endKey: 1478115740 })
.then(function(resp) {
//do stuff
})
I only know where to put the starting and ending timestamps. I'm having a hard time figuring out where I would say I only want the reports returned.
In addition, this is my map function for my filter, which is obviously not even close to being complete. I'm not sure how I even access the start and end key.
function (doc) {
if(type == "report" && startKey >= doc.timestamp && endKey <= doc.timestamp)
emit(doc._id, doc.name);
}
My question remains:
Where do I retrieve the start and end keys in my map function?
How can I add an additional type filter for only getting a specific type of report?
I know I might need to use a reduce function but it's going over my head. Here is the default reduce function but I'm not sure how it would work with the map function.
function (keys, values, rereduce) {
if (rereduce) {
return sum(values);
} else {
return values.length;
}
}
Thank you, any help or guidance would be appreciated.
Use a map function to get reports by a specific type-
function(doc) {
if(doc.type == "report") {
emit(doc.timestamp, doc);
}
}
When the view is queried, only documents with the type 'report' will be returned. If you need to support multiple types, you will have to create a new view for each type.
To query this view and specify the start & end timestamps, just add them to your query-
curl -XGET 'http://localhost:5984/<your-database>/_design/docs/_view/<your-view-name>?startkey=1478103721&endkey=1478115739'
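From JavaScript, the same range query with the db.query call from the question might look like this sketch (assuming the map function above is saved as the timestamp_type view in the filters design document; note the lowercase startkey/endkey option names):
// Sketch: query the view for reports within the timestamp range.
db.query('filters/timestamp_type', {
  startkey: 1478103721, // numeric keys, smaller value first
  endkey: 1478115739
}).then(function (resp) {
  // resp.rows contains only documents with type 'report' in the range;
  // row.value is the full document because the map function emits doc.
  resp.rows.forEach(function (row) {
    console.log(row.key, row.value.name);
  });
});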