AWS Javascript SDK : Greengrass createFunctionDefinition - javascript

I'm formatting my parameters according to this https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Greengrass.html#createFunctionDefinition-property
But for whatever reason it's giving me a key error for "Execution" as well as for "DefaultConfig".
Response:
Request ID:
"3ed83472-39af-493b-9df7-7f82d2f14636"
Function Logs:
r: Unexpected key 'Execution' found in params.InitialVersion.Functions[0].FunctionConfiguration.Environment',
code: 'MultipleValidationErrors',
errors:
[ { UnexpectedParameter: Unexpected key 'DefaultConfig' found in params.InitialVersion
at ParamValidator.fail (/var/runtime/node_modules/aws-
And the code:
GG.createFunctionDefinition({
  InitialVersion: {
    DefaultConfig: {
      Execution: {
        IsolationMode: "NoContainer"
      }
    },
    Functions: [
      {
        FunctionArn: "arn:aws:lambda:us-west-2:644226108543:function:SahmCumminsTelemetryTest:1",
        FunctionConfiguration: {
          MemorySize: 524288,
          Pinned: true,
          Timeout: 600,
          Environment: {
            AccessSysfs: false,
            Execution: {
              IsolationMode: "NoContainer",
              RunAs: {
                Gid: 0,
                Uid: 0
              }
            }
          }
        },
        Id: "function_definition",
      },
    ],
  },
  Name: "function_definition",
}, function (err, data) {
  if (err) {
    console.log(err, err.stack);
  } else {
    funcArn = data.LatestVersionArn;
  }
});

I think the issue is that the configuration data specifies two mutually exclusive options. There is a memory size value specified AND it says it should use "NoContainer". When the Greengrass container isn't in use, memory size isn't a valid option.
Try removing memory size and see if that fixes it.
The provisioning code I've shared on GitHub "scrubs" functions when NoContainer is set to get around this issue. The scrubbing process sets the memory size to NULL so that when it is serialized to JSON the field is omitted.
https://github.com/awslabs/aws-greengrass-provisioner/blob/e2608654b65682ca9b5b03da962cc8cb29ea1cbf/src/main/java/com/awslabs/aws/greengrass/provisioner/implementations/helpers/BasicGreengrassHelper.java#L390
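If that diagnosis is right, a minimal sketch of the scrubbed call would reuse the parameters from the question with MemorySize simply left out:
GG.createFunctionDefinition({
  InitialVersion: {
    DefaultConfig: {
      Execution: { IsolationMode: "NoContainer" }
    },
    Functions: [
      {
        FunctionArn: "arn:aws:lambda:us-west-2:644226108543:function:SahmCumminsTelemetryTest:1",
        FunctionConfiguration: {
          // MemorySize omitted: it only applies when the Greengrass container is in use
          Pinned: true,
          Timeout: 600,
          Environment: {
            AccessSysfs: false,
            Execution: {
              IsolationMode: "NoContainer",
              RunAs: { Gid: 0, Uid: 0 }
            }
          }
        },
        Id: "function_definition",
      },
    ],
  },
  Name: "function_definition",
}, function (err, data) {
  if (err) console.log(err, err.stack);
});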


node-redis/search query elements by geospatial position and additional indexes

I want to be able to query elements in a Redis cache based on three different indexes. Those indexes would be:
A MAC address stored as a string.
A number.
A latitude and longitude (to be able to query spatially).
I have seen that Redis supports multiple indexes through RediSearch, along with a native geospatial API.
So, using Node.js and node-redis, I have written the following index:
client.ft.create(
  'idx:cits',
  {
    mid: {
      type: SchemaFieldTypes.TEXT
    },
    timestamp: {
      type: SchemaFieldTypes.NUMERIC,
      sortable: true
    },
    position: {
      type: SchemaFieldTypes.GEO
    }
  },
  {
    ON: 'HASH',
    PREFIX: 'CITS'
  }
)
Now, I would like to insert records into the database that include those three fields plus an additional string that stores some payload. I have tried using:
await client.hSet('CITS:19123123:0:0:00:00:5e:00:53:af', {
  timestamp: 19123123,
  position: {latitude: 0, longitude: 0},
  mid: '00:00:5e:00:53:af',
  message: 'payload'
})
But I get the following error:
throw new TypeError('Invalid argument type');
^
TypeError: Invalid argument type
So I can't add the latitude and longitude that way. I also tried using the ngeohash module and computing an 11-character geohash, like so:
await client.hSet('CITS:19123123:0:0:00:00:5e:00:53:af', {
  timestamp: 19123123,
  position: geohash.encode(0, 0, 11),
  mid: '00:00:5e:00:53:af',
  message: 'payload'
})
That does not raise any error, but RediSearch queries do not find points near it.
Is it even possible what I am trying to do? If so, how would you input the data to the redis database?
Here is a minimal reproducible example (I'm using "ngeohash": "^0.6.3" and "redis": "^4.5.0"):
const { createClient, SchemaFieldTypes } = require('redis')
const geohash = require('ngeohash')
const client = createClient()

async function start(client) {
  await client.connect()
  try {
    // We only want to sort by these 3 values
    await client.ft.create(
      'idx:cits',
      {
        mid: {
          type: SchemaFieldTypes.TEXT
        },
        timestamp: {
          type: SchemaFieldTypes.NUMERIC,
          sortable: true
        },
        position: {
          type: SchemaFieldTypes.GEO
        }
      },
      {
        ON: 'HASH',
        PREFIX: 'CITS'
      }
    )
  } catch (e) {
    if (e.message === 'Index already exists') {
      console.log('Skipping index creation as it already exists.')
    } else {
      console.error(e)
      process.exit(1)
    }
  }
  await client.hSet('CITS:19123123:0:0:00:00:5e:00:53:af', {
    timestamp: 19123123,
    position: geohash.encode(0, 0, 11),
    mid: '00:00:5e:00:53:af',
    message: 'payload'
  })
  await client.hSet('CITS:19123123:0.001:0.001:ff:ff:ff:ff:ff:ff', {
    timestamp: 19123123,
    position: geohash.encode(0.001, 0.001, 11),
    mid: 'ff:ff:ff:ff:ff:ff',
    message: 'payload'
  })
  const results = await client.ft.search(
    'idx:cits',
    '#position:[0 0 10000 km]'
  )
  console.log(results)
  await client.quit()
}

start(client)
Additionally, I would like to ask whether there is another type of database that better suits my needs. I have chosen Redis because it offers low latency, and that is the biggest constraint in my environment (I will probably do more writes than reads per second). I only want it to act as an immediate cache, as persistent data will be stored in another database that does not need to be fast.
Thank you.
You get the Invalid argument type error because Redis does not support nested fields in hashes.
"GEO allows geographic range queries against the value in this attribute. The value of the attribute must be a string containing a longitude (first) and latitude separated by a comma" (https://redis.io/commands/ft.create/)

aws dynamodb appending an item to a list

I have a DynamoDB table whose items have four strings and two lists.
Item: {
  uuid: {
    S: uuid
  },
  Age: {
    S: age
  },
  Locale: {
    S: locale
  },
  Gender: {
    S: gender
  },
  Quiz: {
    L: []
  },
  Trail: {
    L: []
  }
}
What I need to do is update either quiz or trail when they are completed. This is the function that I use as part of my call:
updateDemographicTrail = (uuid, id) => {
  console.log('trail');
  console.log(uuid)
  console.log(id)
  dynamodb.updateItem(makeUpdateDemographicTrail(uuid, id))
}
I know that this gets called because I can see my logs in my stack trace on AWS. One possibility I thought of was to use putItem instead of updateItem. However, since I am only changing trail or quiz, I am afraid putItem will just wipe out everything.
I am trying to do most of my work in this function makeUpdateDemographicTrail:
makeUpdateDemographicTrail = (uuid, id) => {
  console.log('make call');
  return {
    TableName: 'demographics',
    ExpressionAttributeNames: {
      "#T": "Trail"
    },
    ExpressionAttributeValues: {
      ":tID": id
    },
    Key: {
      "UUID": {
        S: uuid
      }
    },
    UpdateExpression: "SET #T = list_append(#T, :tID)",
    ReturnConsumedCapacity: 'TOTAL'
  };
}
I know that it goes all the way here because I can again see the console log in my stack trace. So that leaves me to think it has something to do with my construction of the call. I have looked at the documentation: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.WritingData as well as several SO answers, and nothing seems to work. I do not see any errors; my database is simply never updated.
UPDATE
I added a console log to catch any errors. The error that I am getting points back towards my ExpressionAttributeValues:
UnexpectedParameter: Unexpected key '0' found in params.ExpressionAttributeValues
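For what it's worth, that error is consistent with passing a plain string where the low-level updateItem API expects a typed AttributeValue object; and since list_append concatenates two lists, :tID also needs to be a one-element list. A sketch of the corrected builder (same names as above):
makeUpdateDemographicTrail = (uuid, id) => {
  return {
    TableName: 'demographics',
    ExpressionAttributeNames: {
      "#T": "Trail"
    },
    ExpressionAttributeValues: {
      // list_append joins two lists, so wrap the id in a typed one-element list
      ":tID": { L: [{ S: id }] }
    },
    Key: {
      "UUID": {
        S: uuid
      }
    },
    UpdateExpression: "SET #T = list_append(#T, :tID)",
    ReturnConsumedCapacity: 'TOTAL'
  };
}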

Firebase nested queries slow - sendRequest call when we're not connected not allowed

I want to implement a follow system between users.
For that, I want to display all of the 250 users of my app, then add a checkmark button next to the ones I already follow, and an empty button next to the ones I do not follow.
var usersRef = firebase.database().ref('/users');
var followingRef = firebase.database().ref('/followingByUser');
var displayedUsers = [];
// I loop through all users of my app
usersRef.once('value', users => {
  users.forEach(user => {
    // For each user, I check if I already follow him or not
    followingRef.child(myUid).child(user.key).once('value', follow => {
      if (follow.val()) {
        // I do follow this user, follow button is on
        displayedUsers.push({
          name: user.val().name,
          following: true
        });
      } else {
        // I do not follow this user, follow button is off
        displayedUsers.push({
          name: user.val().name,
          following: false
        });
      }
    })
  })
})
When doing that, I often (not always) get the following error: "Error: Firebase Database (4.1.3) INTERNAL ASSERT FAILED: sendRequest call when we're not connected not allowed."

Eventually, all the data is fetched, but after 10 seconds instead of 1 (without the error).
I do not believe it is an internet connection issue, as I have a very fast and stable Wi-Fi connection.
Is it a bad practice to nest queries like that?
If not, why do I get this error?
My data is structured as below:
users: {
userId1: {
name: User 1,
email: email#exemple.com,
avatar: url.com
},
userId2: {
name: User 2,
email: email#exemple.com,
avatar: url.com
},
...
}
followByUser: {
userId1: {
userId2: true,
userId10: true,
userId223: true
},
userId2: {
userId23: true,
userId100: true,
userId203: true
},
...
}
Your current database structure allows you to efficiently look up who each user is following. As you've found out, it does not allow you to look up who a user is followed by. If you also want an efficient lookup of the latter, you should add additional data to your model:
followedByUser: {
  userId2: {
    userId1: true
  },
  userId10: {
    userId1: true
  },
  userId223: {
    userId1: true
  },
  ...
}
This is a quite common pattern in Firebase and other NoSQL databases: you often expand your data model to allow the use-cases that your app needs.
Also see my explanation on modeling many-to-many relations and the AskFirebase video on the same topic.
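With that model in place, each follow action writes to both paths. A minimal sketch using Firebase's multi-location update so the two writes commit atomically (the follow helper name is mine, not from the question):
// Hypothetical helper: fan the relation out to both index paths in one atomic update
function follow(myUid, otherUid) {
  var updates = {};
  updates['/followingByUser/' + myUid + '/' + otherUid] = true;
  updates['/followedByUser/' + otherUid + '/' + myUid] = true;
  return firebase.database().ref().update(updates);
}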

MongoDB Aggregation Cursor with NodeJS Driver

I am using MongoDB v3.2 with the native Node.js driver v2.1. When running the aggregation pipeline on large data sets (1M+ documents), I am encountering the following error:
'aggregation result exceeds maximum document size (16MB)'
Here is my aggregation pipeline code:
var eventCollection = myMongoConnection.db.collection('events');
var cursor = eventCollection.aggregate([
  {
    $match: {
      event_type_id: {$eq: 89012}
    }
  },
  {
    $group: {
      _id: "$user_id",
      score: {$sum: "$points"}
    }
  },
  {
    $sort: {
      score: -1
    }
  }
],
{
  cursor: {
    batchSize: 500
  },
  allowDiskUse: true,
  explain: false
}, function () {
});
Things I've tried:
// Using cursor event listeners. None of the on listeners seem to work. Always get error about 16MB.
cursor.on("data", function (data) {
  console.log("Some data: ", data);
});
cursor.on("end", function (data) {
  console.log("End of data: ", data);
});

// Using forEach, which I thought would allow for >16MB because it's used in conjunction with the batchSize and cursor.
cursor.forEach(function (item) {
})
I've seen in other answers (How could I write aggregation without exceeds maximum document size?) that I need to have the results returned by a cursor, so how do I properly do that? I just can't seem to get it to work. Any suggestions on what the batchSize should be?
I am using the native mongodb package (https://github.com/mongodb/node-mongodb-native) for a Node.js project, not the mongo command line.
OK, I figured it out. It was not working because I was passing a callback function as the last parameter of the aggregate method. By passing null instead, the stream works as expected. Changes shown below:
var cursor = eventCollection.aggregate([
  {
    $match: {
      event_type_id: {$eq: 89012}
    }
  },
  {
    $group: {
      _id: "$user_id",
      score: {$sum: "$points"}
    }
  },
  {
    $sort: {
      score: -1
    }
  }
],
{
  cursor: {
    batchSize: 500
  },
  allowDiskUse: true,
  explain: false
}, null);
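With the callback gone, aggregate returns the cursor, so the streaming approach from the question should now work; a brief usage sketch:
// Results now arrive in batches of batchSize instead of one oversized document
cursor.on("data", function (doc) {
  console.log("Some data: ", doc);
});
cursor.on("end", function () {
  console.log("End of data");
});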

Suppress MongoDB unique errors on Node.js

I am running a bulk insert with continueOnError on, so that I can insert an array of documents to a collection with a unique constraint; adding documents that pass, ignoring the ones that fail.
However, despite the continueOnError flag being on, Mongo still returns the error, which makes the framework I use think there's a problem. I know how to suppress all errors, but this bulk insert is only one of many operations, and I'd still need to see errors from those (mind you, I'd want to see errors of any kind other than 11000 from this bulk insert too).
I am using the official MongoDB driver under Node.js, and I'm controlling the flow with ff.
How do I suppress the unique failures for the bulk insert, without suppressing other errors, or disrupting the flow?
var mongo_options = {
  collection: { strict: true },
  insert: { w: 1, strict: true },
  insertbulk: { w: 1, strict: true, continueOnError: true, keepGoing: true }
}

var f = ff(this, function () {
  this.connection.collection(table, mongo_options.collection, f.slot());
}, function (collection) {
  // DON'T SUPPRESS THIS
  collection.insert(data, mongo_options.insert, f.slot());
}, function () {
  this.connection.collection("tags", mongo_options.collection, f.slot());
}, function (collection) {
  // SUPPRESS THIS
  collection.insert(tag_array, mongo_options.insertbulk, f.slot());
}).cb(next); // <-- error going out into the world
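One approach, sketched under the assumption that duplicate-key failures surface with err.code === 11000, is to replace the final step of the chain with a wrapper around the slot callback that swallows only that code:
function (collection) {
  // SUPPRESS ONLY DUPLICATE-KEY ERRORS: pass anything other than code 11000 through
  var slot = f.slot();
  collection.insert(tag_array, mongo_options.insertbulk, function (err, result) {
    if (err && err.code === 11000) return slot(null, result);
    slot(err, result);
  });
}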
