I'm using protobuf.js to build a Node package that contains our protocol and offers encode and decode functionality for the proto messages defined in the package. I would be fine with using .proto files (the loading of .proto files happens at runtime), but since the module needs to be usable on the client side and I can't pack the .proto files into my resolved .js file (built with Browserify), I need an approach that allows everything to be packaged into the build.js.
Enter JSON Descriptors.
var jsonDescriptor = require("./awesome.json"); // exemplary for node
var root = protobuf.Root.fromJSON(jsonDescriptor);
The JSON file can be packed up (requirement resolved by Browserify), and proto type definitions are also possible in .json.
I translated my .proto file into a .json file and tried it with my example data. Unfortunately it failed with the repeated fields.
The .proto file looks kind of like this:
message Structure {
    map<int32, InnerArray> blocks = 1;
}

message Inner {
    int32 a = 1;
    int32 b = 2;
    bool c = 3;
}

message InnerArray {
    repeated Inner inners = 1;
}
Which I translated into this JSON Descriptor
{
    "nested": {
        "Structure": {
            "fields": {
                "blocks": {
                    "type": "InnerArray",
                    "id": 1,
                    "map": true,
                    "keyType": "int32"
                }
            }
        },
        "InnerArray": {
            "fields": {
                "inners": {
                    "repeated": true,
                    "type": "Inner",
                    "id": 1
                }
            }
        },
        "Inner": {
            "fields": {
                "a": {
                    "type": "int32",
                    "id": 1
                },
                "b": {
                    "type": "int32",
                    "id": 2
                },
                "c": {
                    "type": "bool",
                    "id": 3
                }
            }
        }
    }
}
If I'm not mistaken, repeated is a valid attribute for a field.
When I encode and decode my example data, it stops at the repeated field (note that the map works fine):
{
    "blocks": {
        "0": {
            "inners": {}
        },
        ...
I also examined my root to find out how the loaded type looks, and it looks exactly like my definition EXCEPT that repeated is missing:
"InnerArray" : {
"fields": {
"inners" : {
"type" : "Inner",
"id" : 1
}
}
},
How do I define a repeated field correctly in a JSON Descriptor?
If there is a way to pre-include proto files rather than load them at runtime, so that I can wrap them up with Browserify, I would accept that as a solution, too.
After browsing through the code, I found that you cannot set repeated: true in a JSON descriptor.
The correct way is to set
"rule": "repeated"
since a field is defined by a Field descriptor.
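For reference, here is a minimal sketch of the corrected part of the descriptor, loaded and round-tripped the same way as above (the sample values are made up):

var protobuf = require("protobufjs");

// "rule": "repeated" replaces the invalid "repeated": true
var root = protobuf.Root.fromJSON({
    "nested": {
        "InnerArray": {
            "fields": {
                "inners": { "rule": "repeated", "type": "Inner", "id": 1 }
            }
        },
        "Inner": {
            "fields": {
                "a": { "type": "int32", "id": 1 },
                "b": { "type": "int32", "id": 2 },
                "c": { "type": "bool", "id": 3 }
            }
        }
    }
});

var InnerArray = root.lookupType("InnerArray");
var message = InnerArray.create({ inners: [{ a: 1, b: 2, c: true }] });
var buffer = InnerArray.encode(message).finish();
console.log(InnerArray.decode(buffer)); // inners now round-trips as a repeated field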
OK, I have a map on a webpage using Flask, Python, HTML and JavaScript. My problem is that when I hard-code the GeoJSON data, it works fine. When I pass the GeoJSON data from the Python script to the JavaScript to be used in Leaflet, it does not work. I am not getting any errors other than the visual error of no data.
I am using Python to read data from a CSV file and creating a GeoJSON file using a modified version of:
http://www.andrewdyck.com/how-to-convert-csv-data-to-geojson/
Thank you Andrew
Here is the JavaScript used to call the python method. It is called with an onClick function call in the html:
function getValues(){
    $.ajax({
        data: chks, // there is data passed, but it doesn't affect this issue
        type: 'POST',
        url: '/process'
    })
    .done(function(data){ // it's this passed data that is not being read correctly. When I hard-code this with the geojson found below, it works great.
        var markOptions = {
            radius: 8,
            ... };
        var pntslay = L.geoJson(data, {
            pointToLayer: function(feature, latlng){
                return L.circleMarker(latlng, markOptions);
            }}).addTo(map);
Here is my modified python and flask code:
@app.route('/process', methods=['POST'])
def process():
    data = request.form['chks']
    rawData = csv.reader(open('sample.csv', 'rb'), dialect='excel')
    # the template. where data from the csv will be formatted to geojson
    template = \
    ''' \
    { "type" : "Feature",
        "geometry" : {
            "type" : "Point",
            "coordinates" : [%s,%s]},
        "properties" : { "name" : "%s", "value" : "%s"}
    },
    '''
    # the head of the geojson file
    output = \
    ''' \
    { "type" : "Feature Collection",
    {"features" : [
    '''
    # loop through the csv by row skipping the first
    iter = 0
    for row in rawData:
        iter += 1
        if iter >= 2:
            id = row[0]
            lat = row[1]
            lon = row[2]
            output += template % (row[2], row[1], row[0])
    # the tail of the geojson file
    output += \
    ''' \
    ]};
    '''
    return output
Here is the GeoJSON data. When I hard-code this, it works; when I use the data passed from the Python script, it does not. The hard-coded data was copied from the output in Firefox Firebug.
var geojsonPnts = {
    "type": "Feature Collection",
    "feature" : [
        {"type" : "Feature",
         "geometry" : {
             "type" : "Point",
             "coordinates" : [ -86.27, 32.36 ]},
         "properties" : { "name" : "some place" }
        },
        {"type" : "Feature",
         "geometry" : {
             "type" : "Point",
             "coordinates" : [ -105.45, 40.63 ]},
         "properties" : { "name" : "some other place" }
        },
    ]};
I am not sure why the passed data is not working. Please excuse typos and fat-finger errors; I am not able to copy and paste my working code here.
I suspect you have an extra semicolon (;) at the end of your Python output.
Also make sure the data you pass to the L.geoJSON factory is already a proper JavaScript object. You can log typeof data to check that, for example.
If it is still a string, simply convert it first using JSON.parse(data).
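As a rough sketch, that check could go at the top of the .done callback from the question (markOptions and map as defined there):

.done(function(data){
    console.log(typeof data); // "string" means the response was not parsed
    if (typeof data === 'string') {
        data = JSON.parse(data); // convert the raw response text into an object
    }
    var pntslay = L.geoJson(data, {
        pointToLayer: function(feature, latlng){
            return L.circleMarker(latlng, markOptions);
        }}).addTo(map);
})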
I have a collection of Facebook Page Likes (titled pagelikes) that is stored in a Mongo database/JSON file. Below is an example of one entry.
{
    "_id" : ObjectId("4725bf8731b8faf4c04595bb"),
    "user_id" : "0939bf9w9804842f9f817ad100",
    "page_likes" : [
        {
            "id" : "859302873383",
            "name" : "Hotdogs"
        },
        {
            "id" : "8593683902",
            "name" : "Video Games"
        },
        {
            "id" : "849204859849028",
            "name" : "Road Bikes"
        }
    ]
}
id = the unique Facebook Page identifier, name = the name of a Facebook page.
I would like to export this entire collection to a CSV file with three columns: user_id, page_likes.id, and page_likes.name. It would look like the following:
user_id page_likes.id page_likes.name
0939bf9w9804842f9f817ad100 859302873383 Hotdogs
0939bf9w9804842f9f817ad100 8593683902 Video Games
0939bf9w9804842f9f817ad100 849204859849028 Road Bikes
... ... ...
The JSON file is quite large (4 GB), contains over 120K users, and there is no limit on the number of page likes an entry can have.
I have tried and failed with mongoexport, although the aggregation framework seems most useful (possibly the $project and $unwind stages). That said, I have little experience with Mongo.
Any advice, examples or suggestions would be very helpful.
Many thanks,
R
You can deal with this in a number of ways.
Firstly, if you have MongoDB 3.4 available, then you could use a "View" in order to represent the collection with the array contents "un-wound". A "View" is basically an aggregation pipeline statement that appears to be a normal collection as far as most actions that would use a collection are concerned.
So presuming your source collection is called "pages" here, then you would create the "View" with:
db.createView("pageArray", "pages", [{ "$unwind": "$page_likes" }])
Then you can query the collection as normal:
db.pageArray.find()
/* 1 */
{
    "_id" : ObjectId("4725bf8731b8faf4c04595bb"),
    "user_id" : "0939bf9w9804842f9f817ad100",
    "page_likes" : {
        "id" : "859302873383",
        "name" : "Hotdogs"
    }
}

/* 2 */
{
    "_id" : ObjectId("4725bf8731b8faf4c04595bb"),
    "user_id" : "0939bf9w9804842f9f817ad100",
    "page_likes" : {
        "id" : "8593683902",
        "name" : "Video Games"
    }
}

/* 3 */
{
    "_id" : ObjectId("4725bf8731b8faf4c04595bb"),
    "user_id" : "0939bf9w9804842f9f817ad100",
    "page_likes" : {
        "id" : "849204859849028",
        "name" : "Road Bikes"
    }
}
And subsequently issue the mongoexport as if it were a normal collection:
mongoexport -d test -c pageArray --type=csv --fields user_id,page_likes.id,page_likes.name
2017-07-05T13:14:11.588+1000 connected to: localhost
user_id,page_likes.id,page_likes.name
0939bf9w9804842f9f817ad100,859302873383,Hotdogs
0939bf9w9804842f9f817ad100,8593683902,Video Games
0939bf9w9804842f9f817ad100,849204859849028,Road Bikes
2017-07-05T13:14:11.589+1000 exported 3 records
Of course adding --out or a standard redirect to actually output to a file.
If your MongoDB is an older version but at least has $out available ( from MongoDB 2.6 ) then write to another collection:
db.pages.aggregate([
    { "$unwind": "$page_likes" },
    { "$project": { "_id": 0 } },
    { "$out": "pagesArray" }
])
Then you basically run the same mongoexport as above since it's also a collection that is accessible to do so.
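For example, a sketch of that export (assuming the same test database as earlier):

mongoexport -d test -c pagesArray --type=csv --fields user_id,page_likes.id,page_likes.name --out pagelikes.csv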
If you really don't want to create either a "View" or "another collection", then you could simply send a short script to the mongo shell, albeit in a very hacky way:
mongo --quiet --eval '
print("user_id,page_likes.id,page_likes.name");
db.pages.aggregate([
{ "$unwind": "$page_likes" },
{ "$project": { "_id": 0 } },
]).forEach(p => print(`${p.user_id},${p.page_likes.id},${p.page_likes.name}`))'
Or even without aggregate() and $unwind at all:
mongo --quiet --eval '
print("user_id,page_likes.id,page_likes.name");
db.pages.find({},{ _id: 0 }).forEach(p =>
p.page_likes.forEach(l => print(`${p.user_id},${l.id},${l.name}`)))'
Which gives you the same output:
user_id,page_likes.id,page_likes.name
0939bf9w9804842f9f817ad100,859302873383,Hotdogs
0939bf9w9804842f9f817ad100,8593683902,Video Games
0939bf9w9804842f9f817ad100,849204859849028,Road Bikes
Note also that if you want or "need" a different delimiter than a comma here, then either of the last two approaches with the shell is probably the way to go. Delimiter support is "scheduled" for addition to mongoexport and mongoimport with TOOLS-87, but of course is "yet to be resolved", so if you want different output, you do it yourself.
I'm attempting to filter data sets returned by Meteor's find().fetch() down to a single object. It doesn't appear very useful if I query for a single subdocument but instead receive several, some not even containing any of the matched terms.
I have a simple mixed data collection that looks like this:
{
    "_id" : ObjectId("570d20de3ae6b49a54ee01e7"),
    "name" : "Entertainment",
    "items" : [
        {
            "_id" : ObjectId("57a38b5f2bd9ac8225caff06"),
            "slug" : "this-is-a-long-slug",
            "title" : "This is a title"
        },
        {
            "_id" : ObjectId("57a38b835ac9e2efc0fa09c6"),
            "slug" : "mc",
            "title" : "Technology"
        }
    ]
}
{
    "_id" : ObjectId("570d20de3ae6b49a54ee01e8"),
    "name" : "Sitewide",
    "items" : [
        {
            "_id" : ObjectId("57a38bc75ac9e2efc0fa09c9"),
            "slug" : "example",
            "name" : "Single Example"
        }
    ]
}
I can easily query for a specific object in the nested items array with the MongoDB shell, like this:
db.categories.find( { "items.slug": "mc" }, { "items.$": 1 } );
This returns good data; it contains just the single object I want to work with:
{
    "_id" : ObjectId("570d20de3ae6b49a54ee01e7"),
    "items" : [
        {
            "_id" : ObjectId("57a38b985ac9e2efc0fa09c8"),
            "slug" : "mc",
            "name" : "Single Example"
        }
    ]
}
However, if a similar query is attempted directly within Meteor:
/* server/publications.js */
Meteor.publish('categories.all', function () {
    return Categories.find({}, { sort: { position: 1 } });
});

/* imports/ui/page.js */
Template.page.onCreated(function () {
    this.subscribe('categories.all');
});

Template.page.helpers({
    items: function () {
        var item = Categories.find(
            { "items.slug": "mc" },
            { "items.$": 1 }
        ).fetch();
        console.log('item: %o', item);
    }
});
The outcome isn't ideal as it returns the entire matched block, as well as every object in the nested items array:
{
    "_id" : ObjectId("570d20de3ae6b49a54ee01e7"),
    "name" : "Entertainment",
    "boards" : [
        {
            "_id" : ObjectId("57a38b5f2bd9ac8225caff06"),
            "slug" : "this-is-a-long-slug",
            "name" : "This is a title"
        },
        {
            "_id" : ObjectId("57a38b835ac9e2efc0fa09c6"),
            "slug" : "mc",
            "name" : "Technology"
        }
    ]
}
I can then of course filter the returned cursor even further with a for loop to get just the needed object, but this seems unscalable and terribly inefficient when dealing with larger data sets.
I can't grasp why Meteor's find returns a completely different set of data than MongoDB's shell find; the only reasonable explanation is that the two function signatures are different.
Should I break up my nested collections into smaller collections and take a more relational database approach (i.e. store references to ObjectIDs) and query data from collection-to-collection, or is there a more powerful means available to efficiently filter large data sets into single objects that contain just the matched objects as demonstrated above?
The client-side implementation of Mongo used by Meteor is called Minimongo. It currently implements only a subset of the available Mongo functionality. Minimongo does not currently support $-based projections. From the Field Specifiers section of the Meteor API:
Field operators such as $ and $elemMatch are not available on the client side yet.
This is one of the reasons why you're getting different results between the client and the Mongo shell. The closest you can get with your original query is the result you'll get by changing "items.$" to "items":
Categories.find(
    { "items.slug": "mc" },
    { "items": 1 }
).fetch();
This query still isn't quite right though. Minimongo expects your second find parameter to be one of the allowed option parameters outlined in the docs. To filter fields for example, you have to do something like:
Categories.find(
    { "items.slug": "mc" },
    {
        fields: {
            "items": 1
        }
    }
).fetch();
On the client side (with Minimongo) you'll then need to filter the result further yourself.
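For example, one way to do that filtering yourself (a sketch, reusing the "mc" slug from above):

var categories = Categories.find(
    { "items.slug": "mc" },
    { fields: { "items": 1 } }
).fetch();

// Keep only the subdocuments that actually matched the slug
var matchedItems = categories.map(function (category) {
    return category.items.filter(function (item) {
        return item.slug === "mc";
    });
});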
There is another way of doing this though. If you run your Mongo query on the server, you won't be using Minimongo, which means projections are supported. As a quick example, try the following:
/server/main.js
const filteredCategories = Categories.find(
    { "items.slug": "mc" },
    {
        fields: {
            "items.$": 1
        }
    }
).fetch();

console.log(filteredCategories);
The projection will work, and the logged results will match the results you see when using the Mongo console directly. Instead of running your Categories.find on the client side, you could create a Meteor Method that calls Categories.find on the server and returns the results to the client.
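A minimal sketch of that Method approach (the method name categories.findBySlug and its slug parameter are made up for illustration):

/* server/methods.js */
Meteor.methods({
    'categories.findBySlug': function (slug) {
        check(slug, String);
        // Runs on the server, so the positional $ projection is supported
        return Categories.find(
            { 'items.slug': slug },
            { fields: { 'items.$': 1 } }
        ).fetch();
    }
});

/* client */
Meteor.call('categories.findBySlug', 'mc', function (error, result) {
    if (!error) console.log(result); // matches the Mongo shell result
});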
The Issue
I have tried several approaches, but haven't been able to find out how to add numbers to an NS (number set). This is all running inside a Lambda function.
What I'm trying to accomplish
I am creating a DynamoDB table where different colors in hex map to a set of ids. I am optimizing the table for fast reads and to avoid duplicates, which is why I would like to maintain a set of ids for each hex.
How I'm adding items to the table:
let doc = require('dynamodb-doc');
let dynamo = new doc.DynamoDB();
var object = {
    'TableName': 'Hex',
    'Item': {
        'hex': '#FEFEFE',
        'ids': {
            'NS': [2, 3, 4]
        }
    }
};

dynamo.putItem(object, callback);
Which results in the stored item (shown as a screenshot in the original post; the ids attribute ends up saved as a map rather than a number set, as discussed below).
Then I try to add more ids to the set, following the DynamoDB UpdateItem documentation:
var params = {
    "TableName" : "Hex",
    "Key": {
        "hex": "#FEFEFE"
    },
    "UpdateExpression" : "ADD #oldIds :newIds",
    "ExpressionAttributeNames" : {
        "#oldIds" : "ids"
    },
    "ExpressionAttributeValues": {
        ":newIds" : {"NS": ["5", "6"]}
    },
};
dynamo.updateItem(params, callback);
This returns the following error, so DynamoDB thinks :newIds is a map type instead of a set(?):
"errorMessage": "Invalid UpdateExpression: Incorrect operand type for operator or function; operator: ADD, operand type: MAP"
I have also tried these alternative approaches
Try 2:
var setTest = new Set([5, 6]);
var params = {
    "TableName" : "Hex",
    "Key": {
        "hex": "#FEFEFE"
    },
    "UpdateExpression" : "ADD #oldIds :newIds",
    "ExpressionAttributeNames" : {
        "#oldIds" : "ids"
    },
    "ExpressionAttributeValues": {
        ":newIds" : setTest
    },
};
dynamo.updateItem(params, callback);
Error 2 (same error):
"errorMessage": "Invalid UpdateExpression: Incorrect operand type for operator or function; operator: ADD, operand type: MAP"
Try 3:
var params = {
    "TableName" : "Hex",
    "Key": {
        "hex": "#FEFEFE"
    },
    "UpdateExpression" : "ADD #oldIds :newIds",
    "ExpressionAttributeNames" : {
        "#oldIds" : "ids"
    },
    "ExpressionAttributeValues": {
        ":newIds" : { "NS" : { "L" : [ { "N" : "5" }, { "N" : "6" } ] }}
    },
};
dynamo.updateItem(params, callback);
Error 3 (same error):
"errorMessage": "Invalid UpdateExpression: Incorrect operand type for operator or function; operator: ADD, operand type: MAP"
Try 4:
var params = {
    "TableName" : "Hex",
    "Key": {
        "hex": "#FEFEFE"
    },
    "UpdateExpression" : "ADD #oldIds :newIds",
    "ExpressionAttributeNames" : {
        "#oldIds" : "ids"
    },
    "ExpressionAttributeValues": {
        ":newIds" : [5, 6]
    },
};
dynamo.updateItem(params, callback);
Error 4 (similar error, but DynamoDB thinks I'm adding a list this time):
"errorMessage": "Invalid UpdateExpression: Incorrect operand type for operator or function; operator: ADD, operand type: LIST"
Stack Overflow/GitHub Questions I've Tried
https://stackoverflow.com/a/37585600/4975772 (I'm adding to a set, not a list)
https://stackoverflow.com/a/37143879/4975772 (I'm using JavaScript, not Python, but I basically need this same thing, just with different syntax)
https://github.com/awslabs/dynamodb-document-js-sdk/issues/40#issuecomment-123003444 (I need to do this exact thing, but I'm not using the dynamodb-document-js-sdk, I'm using AWS Lambda)
How to Create a Set and Add Items to a Set
let AWS = require('aws-sdk');
let docClient = new AWS.DynamoDB.DocumentClient();
...
var params = {
    TableName : 'Hex',
    Key: {'hex': '#FEFEFE'},
    UpdateExpression : 'ADD #oldIds :newIds',
    ExpressionAttributeNames : {
        '#oldIds' : 'ids'
    },
    ExpressionAttributeValues : {
        ':newIds' : docClient.createSet([1, 2])
    }
};
docClient.update(params, callback);
Which results in a DynamoDB item whose ids attribute is a proper number set (shown as a screenshot in the original post).
If the set doesn't exist, that code will create it for you. You can also run the same code with a different set to update the set's elements. Super convenient.
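As a usage sketch, a follow-up update that merges more ids into the same set (reusing the params shape from above; the values are illustrative):

// ADD on a set attribute performs a union with the existing elements
var moreParams = {
    TableName: 'Hex',
    Key: { 'hex': '#FEFEFE' },
    UpdateExpression: 'ADD #oldIds :newIds',
    ExpressionAttributeNames: { '#oldIds': 'ids' },
    ExpressionAttributeValues: { ':newIds': docClient.createSet([3, 4]) }
};

docClient.update(moreParams, function (err, data) {
    if (err) console.error(err);
    else console.log('ids now holds the union of the old and new elements');
});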
Create a Set and Add Items to a Set (OLD API)
let doc = require('dynamodb-doc');
let dynamo = new doc.DynamoDB();
var params = {
    TableName : 'Hex',
    Key: {'hex': '#555555'},
    UpdateExpression : 'ADD #oldIds :newIds',
    ExpressionAttributeNames : {
        '#oldIds' : 'ids'
    },
    ExpressionAttributeValues : {
        ':newIds' : dynamo.Set([2, 3], 'N')
    }
};
dynamo.updateItem(params, callback);
(Don't use this code for future development, I only include it to help anyone using the existing DynamoDB Document SDK)
Why the Original Was Failing
Notice how, when I asked the question, the resulting set looked like a literal JSON map object (when viewed in the DynamoDB screenshot), which would explain this error message:
"errorMessage": "Invalid UpdateExpression: Incorrect operand type for operator or function; operator: ADD, operand type: MAP"
So I was using the wrong syntax. The solution is found in the (now deprecated) AWS Labs dynamodb-document-js-sdk docs.
The full documentation for the newer Document Client can be viewed here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/DocumentClient.html
I've been struggling with this too. I discovered that there are actually two APIs for DynamoDB:
AWS.DynamoDB
AWS.DynamoDB.DocumentClient
Like you, I was not able to make the ADD function work. I tried to import and use the AWS.DynamoDB API instead of the AWS.DynamoDB.DocumentClient API, and that solved the problem for me. Below is my code snippet that works with the AWS.DynamoDB API but not with the AWS.DynamoDB.DocumentClient API. I use a string set instead of a number set, but that won't make a difference, I guess.
var AWS = require('aws-sdk');
var dynamo = new AWS.DynamoDB();
// var dynamo = new AWS.DynamoDB.DocumentClient();
...
var params = {};
params.TableName = "MyTable";
params.ExpressionAttributeValues = { ':newIds': { "SS": ["someId"] } };
// :newIds represents a new dynamodb set with 1 element
params.UpdateExpression = "ADD someAttribute :newIds";
dynamo.updateItem(params, function(err, data) { /* ... */ });
This answer might be helpful to those who are using the npm dynamodb module.
We had the same issue. As we were using the npm dynamodb module for our queries rather than AWS.DynamoDB, the AWS.DynamoDB.DocumentClient solution given above was not implementable unless we shifted from the npm dynamodb module to AWS.DynamoDB queries. So instead of shifting/transforming our queries, we downgraded the dynamodb npm module to version ~1.1.0, and it worked.
The previous version was 1.2.x.
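For example, pinning the version could look like this (a sketch; adjust to how your dependencies are managed):

npm install dynamodb@~1.1.0 --save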
Hello guys, I use jsTree and I have multiple trees on the same page. I have two problems:
1) I want cookies in order to distinguish which nodes are open in each tree. I tried to implement this functionality using a prefix, but unfortunately:
"cookies" : { "cookie_options" : { "prefix" : "home" } },
does not work, since only the last opened node is re-opened after a refresh.
2) I do not want to be able to create new root nodes. I only want to be able to create files or transfer files into my root directory.
I am trying to achieve that using:
"types" : {
"types" : {
// The default type
"default" : {
"valid_children" : "none",
"icon" : {
"image" : "./file.png"
}
},
// The `folder` type
"folder" : {
"valid_children" : [ "default", "folder", "file" ],
"icon" : {
"image" : "./folder.png"
}
},
// The `drive` nodes
"drive" : {
// can have files and folders inside, but NOT other `drive` nodes
"valid_children" : [ "default", "folder" ],
"icon" : {
"image" : "./root.png"
},
// those prevent the functions with the same name to be used on `drive` nodes
// internally the `before` event is used
"start_drag" : false,
"move_node" : false,
"delete_node" : false,
"remove" : false
}
}
},
but I am still able to post files into my root directory. Should I create another <li> without rel=drive above the root directory?
Thanks.
The solution for cookies in multiple trees:
.
.
"cookies": {
"save_selected": "node_selected_" + tree_id
"save_opened": "node_opened_" + tree_id
},
.
.
There is no such option as "prefix". "save_selected" and "save_opened" each take either a string or false. By providing a different tree_id you effectively use different cookies for each tree.
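For instance, with two trees the full configuration might look like this (a sketch assuming jsTree 1.x with the cookies plugin enabled; the container ids and the rest of the plugin list are illustrative):

// First tree: its own pair of cookie names
$("#tree_home").jstree({
    "plugins": ["themes", "html_data", "cookies"],
    "cookies": {
        "save_selected": "node_selected_home",
        "save_opened": "node_opened_home"
    }
});

// Second tree: different cookie names, so open/selected state is stored separately
$("#tree_admin").jstree({
    "plugins": ["themes", "html_data", "cookies"],
    "cookies": {
        "save_selected": "node_selected_admin",
        "save_opened": "node_opened_admin"
    }
});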