I have looked at recent posts and nothing on them has worked for me. I have a string set called "friendRequests" in my DynamoDB table and I am trying to append an element to it. However, I keep getting errors when I try to call db.updateItem with these parameters. Here is my current code; the error is with ExpressionAttributeValues, but I have probably spent an hour changing my syntax to no avail.
var params = {
    TableName: "users",
    Key: { "username": { "S": addFriendInfo.friendUsername } },
    UpdateExpression: "SET friendRequests = list_append(friendRequests, :newFriend)",
    ExpressionAttributeValues: {
        ':newFriend': { "SS": [addFriendInfo.username] }
    }
}
That is my code above. addFriendInfo.username / friendUsername are both just strings. This currently gives me an 'Invalid UpdateExpression: Incorrect operand type for operator or function; operator or function: list_append, operand type: SS'. I have tried many things. Can anyone point me in the right direction to fixing this damn syntax?
As the DynamoDB documentation explains, DynamoDB has two similar but distinct types - a list and a set. A list is an ordered list of items of any type, while a set is a non-empty, non-ordered collection of items of the same type. The "SS" type you used is a "set of strings". This is distinct from a "list", and you cannot use list_append on sets, as the error message tells you.
To add an element to a set (or create a new set if it doesn't yet exist) you can use the ADD operation, e.g., ADD friendRequests :newFriend.
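For example, a minimal sketch of the corrected params, keeping everything else from the question as-is:

var params = {
    TableName: "users",
    Key: { "username": { "S": addFriendInfo.friendUsername } },
    // ADD appends to an existing string set, or creates the set if it doesn't exist yet
    UpdateExpression: "ADD friendRequests :newFriend",
    ExpressionAttributeValues: {
        // the operand must itself be a string set ("SS"), not a plain string
        ":newFriend": { "SS": [addFriendInfo.username] }
    }
};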
When importing a document, I get an error that is attached below.
I guess the problem arose when the data provider (esMapping.js) was changed to use the integer sub-field to sort documents.
Is it possible to use some pattern to sort the document so that this error does not occur again? Does anyone have an idea?
The question refers to the one already asked - Enable ascending and descending sorting of numbers that are of the keyword type (Elasticsearch)
Error:
2022-05-18 11:33:32.5830 [ERROR] ESIndexerLogger Failed to commit bulk. Errors:
index returned 400 _index: adama_gen_ro_importdocument _type: _doc _id: 4c616067-4beb-4484-83cc-7eb9d36eb175 _version: 0 error: Type: mapper_parsing_exception Reason: "failed to parse field [number.sequenceNumber] of type [integer] in document with id '4c616067-4beb-4484-83cc-7eb9d36eb175'. Preview of field's value: 'BS-000011/2022'" CausedBy: "Type: number_format_exception Reason: "For input string: "BS-000011/2022"""
Mapping (sequenceNumber used for sorting):
"number": {
"type": "keyword",
"copy_to": [
"_summary"
],
"fields": {
"sequenceNumber": {
"type": "integer"
}
}
}
In the returned error message, the value being indexed into the number field is a string with alphabetical characters, 'BS-000011/2022'. This is no problem for the number field, which has a keyword type. However, it is an issue for the sequenceNumber sub-field, which has an integer type. The text value passed into number is also passed into the sequenceNumber sub-field, hence the error.
Unfortunately, the text analyzer used in the previous question won't help either, as sorting can't be performed on a text field. However, the tokenizer used by the custom analyzer document_number_analyzer can be repurposed into an ingest pipeline.
For context, the custom tokenizer provided by the author in the previous question:
"tokenizer": {
"document_number_tokenizer": {
"type": "pattern",
"pattern": "-0*([1-9][0-9]*)\/",
"group": 1
}
}
If the custom analyzer is used with the Elasticsearch _analyze API on the value above, like so (stack_index being a temporary index that has the analyzer):
POST stack_index/_analyze
{
"analyzer": "document_number_analyzer",
"text": ["BS-000011/2022"]
}
The analyzer returns one token of 11, but tokens are for search analysis, not sorting.
An Elasticsearch ingest pipeline, using the grok processor, can be applied to the index to extract the desired number from the value and index it as an integer. The processor needs to be configured to expect the value's format, which would be similar to 'BS-000011/2022'. An example is provided below:
PUT _ingest/pipeline/numberSort
{
    "processors": [
        {
            "grok": {
                "field": "number",
                "patterns": ["%{WORD}%{ZEROS}%{SORTVALUES:sequenceNumber:int}%{SEPARATE}%{NUMBER}"],
                "pattern_definitions": {
                    "SEPARATE": "[/]",
                    "ZEROS": "[-0]*",
                    "SORTVALUES": "[1-9][0-9]*"
                }
            }
        }
    ]
}
Grok takes an input text value and extracts structured fields from it. The pattern where the sortable number will be extracted is the SORTVALUES pattern, %{SORTVALUES:sequenceNumber:int}. A new field, called sequenceNumber, will be created in the document. When 'BS-000011/2022' is indexed in the number field, 11 is indexed into the sequenceNumber field as an integer.
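Before wiring the pipeline up, it can be sanity-checked with the _simulate API; for the value above it should return a document whose _source contains sequenceNumber set to 11:

POST _ingest/pipeline/numberSort/_simulate
{
    "docs": [
        { "_source": { "number": "BS-000011/2022" } }
    ]
}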
You can then create an index template to apply the ingest pipeline. The sequenceNumber field will need to be explicitly mapped as an integer type. The ingest pipeline will populate it automatically whenever a value matching the format above is indexed into the number field, and sequenceNumber will then be available to sort on.
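As a sketch of such a template (the index pattern is taken from the index name in the error; composable index templates require Elasticsearch 7.8+):

PUT _index_template/document_number_template
{
    "index_patterns": ["adama_gen_ro_importdocument*"],
    "template": {
        "settings": {
            "index.default_pipeline": "numberSort"
        },
        "mappings": {
            "properties": {
                "sequenceNumber": { "type": "integer" }
            }
        }
    }
}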
I'm using Mongo 4.1 and would like to update a collection named "locations_copy" by adding a new object field named "time" with two subfields: "utcTime", populated from that document's "time" field, and "tz", populated from "subject.contactInfo[0].addresses[0].timeZoneID" of the document in the "subjects" collection of the "Subjects" database (a different database from the first collection's) whose "_id" value corresponds to the "subjectID" field in locations_copy.
I have tried to accomplish this with the following code:
const get_time_zone_id = function(doc) { return doc.contactInfo[0].addresses[0].timeZoneID }
const get_location_doc = function(subjectID) {
    return db.getSiblingDB('Subjects').subjects.find({
        "_id": subjectID,
        "contactInfo": { "$exists": true },
        "$where": function() {
            return (this.contactInfo.length > 0 && this.contactInfo[0].addresses && this.contactInfo[0].addresses.length > 0 && this.contactInfo[0].addresses[0].timeZoneID)
        }
    }, {
        "contactInfo": { "$slice": 1 },
        "contactInfo.addresses": { "$slice": 1 },
        "contactInfo.addresses.timeZoneID": 1
    }).map(get_time_zone_id)
}
db.locations_copy.aggregate([
    { $match: { "subjectID": { "$exists": true } } },
    { $addFields: {
        time: {
            utc: "$timeUTC",
            tz: { "$arrayElemAt": [get_location_doc(ObjectId("$subjectID")), 0] }
        }
    } }
]).forEach(function(x) { db.locations_copy.save(x) })
Everything works except for one thing: when I try to pass ObjectId("$subjectID") as a parameter to "get_location_doc", it parses "$subjectID" as a literal string rather than passing the value of the underlying field in each document. I have also tried passing simply subjectID (without quotes), in which case it was undefined, and "$$subjectID", which led to a literal string again. I understand this is due to client-side versus server-side evaluation at run time.
I have tried to utilize the "$function" operator, but apparently it's only available from version 4.4 (I'm using 4.1).
I should note, that if I replace "$subjectID" with a hard-coded string ID (for example "5ff4c037bc0a716381231277") everything works as you'd expect.
Can anyone please help me accomplish what I intend? Since this script is only meant to be executed once, performance is not much of an issue.
Thank you!
db.getSiblingDB().collection.find() is a client-side operation. It is not something you can use to join collections as part of a query. For that, see https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/.
The second thing you are doing is retrieving nested fields out of a document. You can do this with $set and dot notation. See specifically the example at https://docs.mongodb.com/manual/reference/operator/aggregation/set/#adding-fields-to-an-embedded-document.
You will need to construct a single aggregation pipeline that does everything your current mix of aggregation and javascript does using only the operations documented in https://docs.mongodb.com/manual/reference/operator/aggregation/ and the stages documented in https://docs.mongodb.com/manual/reference/operator/aggregation-pipeline/.
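Putting those pieces together, a minimal sketch of such a pipeline, assuming the subjects collection has been copied into the same database (since $lookup cannot join across databases) and using hypothetical temporary field names:

db.locations_copy.aggregate([
    { $match: { subjectID: { $exists: true } } },
    // the question wraps subjectID in ObjectId(), so convert the stored string first
    { $addFields: { subjectObjId: { $toObjectId: "$subjectID" } } },
    // join each location to its subject document
    { $lookup: {
        from: "subjects",
        localField: "subjectObjId",
        foreignField: "_id",
        as: "subjectDoc"
    } },
    // unwrap one array level at a time with $arrayElemAt
    { $addFields: { subjectDoc: { $arrayElemAt: ["$subjectDoc", 0] } } },
    { $addFields: { firstContact: { $arrayElemAt: ["$subjectDoc.contactInfo", 0] } } },
    { $addFields: { firstAddress: { $arrayElemAt: ["$firstContact.addresses", 0] } } },
    { $addFields: { time: { utc: "$timeUTC", tz: "$firstAddress.timeZoneID" } } },
    // drop the temporary fields before saving the documents back
    { $project: { subjectObjId: 0, subjectDoc: 0, firstContact: 0, firstAddress: 0 } }
]).forEach(function(x) { db.locations_copy.save(x); })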
When I do a create using Sequelize, it returns the newly created entry row in the response.
Sequelize Create Object Code:
let createdObj = await sequelize.ModelName.create(modelObject, {
    transaction: t
    // more options can be added here; need some option that prevents the OUTPUT INSERTED clause
})
Below is the query created:
INSERT INTO [TABLE_NAME] ([COL1],[COL2],[COL3],[COL4]) OUTPUT INSERTED.* VALUES (#0,#1,#2,#3,#4)
Now I don't want the output clause to be part of the query, I want a simple insert like:
INSERT INTO [TABLE_NAME] ([COL1],[COL2],[COL3],[COL4]) VALUES (#0,#1,#2,#3,#4)
I don't want the output clause to be part of the query.
How can I achieve this at the query level as well as at the model level? In some create operations I want the output clause, and in some I don't.
EDIT 1
On further research I found an option, { returning: false }, which does what is required, i.e. it creates an insert query like INSERT INTO [TABLE_NAME] ([COL1],[COL2],[COL3],[COL4]) VALUES (#0,#1,#2,#3,#4), but now Sequelize is breaking because it expects those values back in return, and I don't know why:
C:\Users\MG265X1\project\node_modules\sequelize\lib\dialects\mssql\query.js:389
id = id || results && results[0][this.getInsertIdField()];
^
TypeError: Cannot read property 'id' of undefined
at Query.handleInsertQuery (C:\Users\MG265X1\project\node_modules\sequelize\lib\dialects\mssql\query.js:389:39)
It turns out that if an autoIncrement attribute is present in the model, it will look for the output clause; removing the attribute { autoIncrement: true } from the model hasn't helped, as IDENTITY_INSERT cannot be null. How do I move ahead on this?
Edit 2: I could get it working with a combination of { returning: false } and { hasTriggers: true }. Set the hasTriggers attribute to true in your model; this allows single creates to work, and for bulk creates pass the option returning: false to bulkCreate.
Note: when using bulkCreate with { returning: false } you will not be able to get the autogenerated id. It's a trade-off we had to live with, since we wanted bulkCreate to work with triggers; we ended up fetching the id later from the DB.
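For reference, a rough sketch of that combination (model and column names are placeholders; note the current Sequelize docs spell the model option hasTrigger):

const ModelName = sequelize.define('ModelName', {
    id: { type: DataTypes.INTEGER, primaryKey: true, autoIncrement: true }
    // ...other columns
}, {
    hasTrigger: true // lets the MSSQL dialect handle the insert without relying on OUTPUT INSERTED directly
});

// single creates work as usual; for bulk creates, suppress the returned rows
await ModelName.bulkCreate(rows, { transaction: t, returning: false });
// the autogenerated ids are not returned here and must be fetched from the DB afterwards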
It seems I raised this issue, but it was closed because it wasn't a good SSCCE.
Long story short, I'm making a real estate agent chatbot and I just implemented a filter allowing the user to search within a range of numbers (e.g. at least one bedroom, under $2500). In order to do this, I made an entity_range composite entity composed of the range type (e.g. at most, exactly) and the entity itself (unit-currency for price, plus some custom entities like the number of bedrooms). Prior to creating entity_range, the entities themselves worked fine. But now, it seems as though the entity part of entity_range is undefined. See a sample of my code below:
function get_count(req, res) {
    console.log("price: " + req.queryResult.parameters["entity_range"]["unit-currency"])
    var price, beds, baths, num_filter_funct
    if (req.queryResult.parameters["entity_range"]["unit-currency"] != undefined) {
        price = req.queryResult.parameters["entity_range"]
        console.log("price: " + price)
    } else {
        console.log("could not find parameter")
    }
Before creating entity_range, my code looked exactly the same, except without ["entity_range"] between parameters and ["unit-currency"]. Anyway, this code logs:
price: undefined
could not find parameter
after the input "How many for $2500," with the following diagnostic info:
...
"queryResult": {
"queryText": "how many for $2500",
"parameters": {
"entity_range": [
{
"unit-currency": {
"amount": 2500,
"currency": "USD"
}
}
]
}...
So the entity "unit-currency" is recognized by Dialogflow, but not by my program. entity_range does allow users to not specify a range, so that's not the issue:
see screenshot here.
I would greatly appreciate any advice you have to offer!
That JSON shows entity_range being an array instead of an object.
parameters.entity_range[0]["unit-currency"] should work. Note the [0]. You'll also want to add some checks before this to make sure entity_range exists and its length is > 0.
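Something like this sketch, reusing the names from the question and the diagnostic info, would guard the access:

const ranges = req.queryResult.parameters["entity_range"];
if (Array.isArray(ranges) && ranges.length > 0 && ranges[0]["unit-currency"]) {
    const price = ranges[0]["unit-currency"]; // e.g. { amount: 2500, currency: "USD" }
    console.log("price: " + price.amount);
} else {
    console.log("could not find parameter");
}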
And this part is just a guess, but perhaps you mistakenly checked the "Is List" box for this parameter in Dialogflow? Unchecking it would probably make it an object instead of an array, and your existing code would work.
I will start off by saying while I am not new to CouchDB, I am new to querying the views using JavaScript and the web.
I have looked at multiple other questions on here, including CouchDB - Queries with params, couchDB queries, Couchdb query with AND operator, CouchDB Querying Dates, and Basic CouchDB Queries, just to list a few.
While all have good information in them, I haven't found one that has my particular problem in it.
I have a view set up like so:
function (docu) {
    if (docu.status && docu.doc && docu.orgId.toString() && !docu.deleted) {
        switch (docu.status) {
            case "BASE":
                emit(docu.name, docu);
                break;
            case "AIR":
                emit(docu.eta, docu);
                break;
            case "CHECK":
                emit(docu.checkTime, docu);
                break;
        }
    }
}
with all documents having a status, doc, orgId, deleted, name, eta, and checkTime. (I changed doc to docu because of my custom doc key.)
I am trying to query and emit based on a set of keys (status, doc, orgId), where orgId is an integer.
My jQuery to do this looks like so:
$.couch.db("myDB").view("designDoc/viewName", {
keys : ["status","doc",orgId],
success: function(data) {
console.log(data);
},
error: function(status) {
console.log(status);
}
});
I receive
{"total_rows":59,"offset":59,"rows":[
]}
Sometimes the offset is 0, sometimes it is 59. I feel I must be doing something wrong for this not to be working correctly.
So for my questions:
I did not mention this, but I had to use docu.orgId.toString() because I guess it parses the URL as a string. Is there a way to use this number as a numeric value?
How do I correctly view multiple documents based on multiple keys, i.e. if(key1 && key2) emit(doc.name, doc)
Am I doing something obviously wrong that I lack the knowledge to notice?
Thank you all.
You're so very close. To answer your questions:
When you're using docu.orgId.toString() in that if-statement, you're basically saying: this value must be truthy. If you didn't convert to string, any number other than 0 would be true. Since you are converting to a string, any value other than an empty string will be true. Also, since you do not use orgId as the first argument in an emit call (at least not in the example above), you cannot query by it at all.
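If you do want to query by orgId, the view has to emit it as a key; a one-line sketch:

emit(docu.orgId, null); // view keys are JSON, so a numeric orgId stays a number and can be queried as one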
I'll get to this.
A little.
The thing to remember is emit creates a key-value table (that's really all a view is) that you can use to query. Let's say we have the following documents
{type:'student', dept:'psych', name:'josh'},
{type:'student', dept:'compsci', name:'anish'},
{type:'professor', dept:'compsci', name:'kender'},
{type:'professor', dept:'psych', name:'josh'},
{type:'mascot', name:'owly'}
Now let's say we know that, for this one view, we want to query 1) everything but mascots, and 2) by type, dept, and name, all of the available fields in this example. We would write a map function like this:
function(doc) {
    if (doc.type === 'mascot') { return; } // don't do anything
    // allow for queries by type
    emit(doc.type, null); // the use of null is explained below
    // allow queries by dept
    emit(doc.dept, null);
    // allow for queries by name
    emit(doc.name, null);
}
Then, we would query like this:
// look for all joshs
$.couch.db("myDB").view("designDoc/viewName", {
keys : ["josh"],
// ...
});
// look for everyone in the psych department
$.couch.db("myDB").view("designDoc/viewName", {
keys : ["psych"],
// ...
});
// look for everyone that's a professor and everyone named josh
$.couch.db("myDB").view("designDoc/viewName", {
keys : ["professor", "josh"],
// ...
});
Notice the last query isn't and in the sense of a logical conjunction; it's a union. If you wanted to restrict what was returned to documents that are both professors and joshs, there are a few options. The most basic would be to concatenate the key when you emit, like:
emit('type-' + doc.type + '_name-' + doc.name, null);
You would then query like this: keys : ["type-professor_name-josh"]
It doesn't feel very proper to rely on strings like this, at least it didn't to me when I first started doing it, but it is a quite common method for querying key-value stores. The characters - and _ have no special meaning in this example; I simply use them as delimiters.
Another option would be what you mentioned in your comment, to emit an array like
emit([ doc.type, doc.name ], null);
Then you would query like
key: ["professor", "josh"]
This is perfectly fine, but generally the use case for emitting arrays as keys is aggregating returned rows. For example, you could emit([year, month, day]), and if you had a simple reduce function that basically passed the records through:
function(keys, values, rereduce) {
    if (rereduce) {
        return [].concat.apply([], values);
    } else {
        return values;
    }
}
You could query with the url parameter group_level set to 1 or 2 and start querying by just year, or by year and month, on the exact same view using arrays as keys. Compared to SQL or Mongo it's mad complicated and convoluted, but hey, it's there.
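As a sketch, assuming the [year, month, day] view and pass-through reduce above, such a query via jquery.couch would look like:

// one row per year with group_level: 1; use 2 for year-and-month buckets
$.couch.db("myDB").view("designDoc/viewName", {
    group_level: 1,
    success: function(data) {
        console.log(data.rows);
    }
});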
The use of null in the view is really for resource saving: emitting the whole document as the value (as in the question's view) bloats the view index. When you query a view, the rows contain an _id that you can use in a second ajax call to fetch the full documents, for example from _all_docs.
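For instance, a sketch of that second call, assuming jquery.couch's allDocs accepts a keys list:

// collect the ids from the view response, then fetch the full documents
var ids = data.rows.map(function(row) { return row.id; });
$.couch.db("myDB").allDocs({
    keys: ids,
    include_docs: true,
    success: function(resp) {
        console.log(resp.rows);
    }
});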
I hope that makes sense. If you need any clarification you can use the comments and I'll try my best.