MongoDB values with undefined script - javascript

I'm trying to run a query across different databases, which I cannot merge into one. The following query probably fails due to an async call that I cannot manage to solve.
var someImageIds = ["111111111111111111"]
use "databaseA"
var transactions = db.transactions.find({"data.transactionImaginaryId": {$in: someImageIds}}) // uses index
use "databaseB"
transactions.forEach(transaction => {
    var report = db.reports.find({"metadata.companyId": parseInt(transaction.data.companyId), "metadata.originReportId": transaction.data.reportId}).project({}) // uses index
    var expenses = db.expenses.find({"metadata.reportId": report._id}) // uses index
    var assets = db.assets.find({"_id": report.assets[0].imaginaryId}) // uses index
    print(`report with status: ${report.reportFlow.value}, ${expenses.count()} expenses, ${assets.count()} assets for ${transaction.data.matchType} transaction _id: ${transaction._id.valueOf()}`)
})
The problem is that
var report = db.reports.find({"metadata.companyId": parseInt(transaction.data.companyId), "metadata.originReportId": transaction.data.reportId}).project({})
returns undefined, and I cannot continue with the query since the next line uses this line's data.
Any ideas on how to solve that?
I'm using NoSqlBooster v6.2.8 with MongoDB 4, and the script is written in the NoSqlBooster console.
Thanks!
Thanks to @Jeremy Thille I managed to write the following WORKING code:
var someImageIds = ["111111111111111111"]
use "databaseA"
var transactions = db.transactions.find({"data.transactionImaginaryId": {$in: someImageIds}}) // uses index
use "databaseB"
transactions.forEach((transaction) => {
    const report = await(db.reports.find({ "metadata.companyId": parseInt(transaction.data.companyId), "metadata.originReportId": transaction.data.reportId }).toArray()) // uses index
    const expenses = await(db.expenses.find({ "metadata.reportId": report[0]._id }).toArray()) // uses index
    const assets = await(db.assets.find({ "_id": report[0].assets[0].imaginaryId }).toArray()) // uses index
    print(`report with status: ${report[0].reportFlow.value}, ${expenses.length} expenses, ${assets.length} assets for ${transaction.data.matchType} transaction _id: ${transaction._id.valueOf()}`)
});

Unfortunately, databases (and HTTP requests, and many other things) are not instantaneous. They need some time to perform an operation, so you need to await them, which can't be done in a .forEach() loop, but can be in a for loop:
const someFunctionName = async () => { // needs async
    for (let transaction of transactions) {
        const report = await db.reports.find({ "metadata.companyId": parseInt(transaction.data.companyId), "metadata.originReportId": transaction.data.reportId }).toArray() // uses index; toArray() resolves the cursor
        const expenses = await db.expenses.find({ "metadata.reportId": report[0]._id }).toArray() // uses index
        const assets = await db.assets.find({ "_id": report[0].assets[0].imaginaryId }).toArray() // uses index
        print(`report with status: ${report[0].reportFlow.value}, ${expenses.length} expenses, ${assets.length} assets for ${transaction.data.matchType} transaction _id: ${transaction._id.valueOf()}`)
    }
}

Related

Using Transactions and batched writes within a Cloud Function

Transactions and batched writes can be used to write multiple documents by means of an atomic operation.
The documentation says that "using the Cloud Firestore client libraries, you can group multiple operations into a single transaction."
I cannot understand what client libraries means here, and whether it's correct to use transactions and batched writes within a Cloud Function.
An example: suppose in the database I have 3 documents (with doc IDs A, B, C). Now I need to insert 3 more (with doc IDs C, D, E). The Cloud Function should add just the new ones and send a push notification to the user telling him that 2 new documents are available.
A doc ID could be the same, but since I need to calculate how many documents are new (the ones that will be inserted), I need a way to read each doc ID first and check for its existence. Hence, I'm wondering whether transactions fit Cloud Functions or not.
Also, each transaction or batch of writes can write to a maximum of 500 documents. Is there any other way to overcome this limit within a Cloud Function?
Firestore transaction behaviour is different between the client SDKs (JS SDK, iOS SDK, Android SDK, ...) and the Admin SDK (a set of server libraries), which is the SDK we use in a Cloud Function. More explanations on the differences are here in the documentation.
Because of the type of data contention used in the Admin SDK, you can, with the getAll() method, retrieve multiple documents from Firestore and hold a pessimistic lock on all returned documents.
So this is exactly the method you need to call in your transaction: you use getAll() to fetch documents C, D & E, and you detect that only C exists, so you know that you only need to add D and E.
Concretely, it could be something along the following lines:
const db = admin.firestore();

exports.lorenzoFunction = functions
    .region('europe-west1')
    .firestore
    .document('tempo/{docId}') // Just a way to trigger the test Cloud Function!!
    .onCreate(async (snap, context) => {
        const c = db.doc('coltest/C');
        const d = db.doc('coltest/D');
        const e = db.doc('coltest/E');
        const docRefsArray = [c, d, e];
        return db.runTransaction(transaction => {
            return transaction.getAll(...docRefsArray).then(snapsArray => {
                let counter = 0;
                snapsArray.forEach(snap => {
                    if (!snap.exists) {
                        counter++;
                        transaction.set(snap.ref, { foo: "bar" });
                    } else {
                        console.log(snap.id + " exists");
                    }
                });
                console.log(counter);
                return;
            });
        });
    });
To test it: create one of the C, D or E docs in the coltest collection, then create a doc in the tempo collection (just a simple way to trigger this test Cloud Function): the CF is triggered. Then look at the coltest collection: the two missing docs were created; and look at the CF log: counter = 2.
Also, each transaction or batch of writes can write to a maximum of 500 documents. Is there any other way to overcome this limit within a Cloud Function?
AFAIK the answer is no.
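Within a single atomic transaction or batch, that is. If you don't need atomicity across all the writes, a Cloud Function can simply commit several batches one after the other, which is what the next answer's import script does at scale. A minimal sketch under that assumption (the collection name and data shape are placeholders, not from the original answer):
// Minimal sketch: write more than 500 docs by chunking them into
// several batches of at most 500 and committing the batches one by one.
// Note: each commit is atomic only within its own batch.
async function writeInChunks(db, docs) {
    const chunkSize = 500; // hard limit per batch
    for (let i = 0; i < docs.length; i += chunkSize) {
        const batch = db.batch();
        docs.slice(i, i + chunkSize).forEach(data => {
            batch.set(db.collection('coltest').doc(), data); // auto-generated doc IDs
        });
        await batch.commit();
    }
}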
There also used to be a required one-second delay between 500-record chunks. I wrote this a couple of years ago. The script below reads the CSV file line by line, creating and setting a new batch object for each line. A counter starts a new batch write per 500 objects, and async/await is used to rate-limit the writes to one batch per second. Last, we notify the user of the write progress with console logging. I published an article on this here >> https://hightekk.com/articles/firebase-admin-sdk-bulk-import
NOTE: In my case I am reading a huge flat text file (a manufacturer's part number catalog) for import. You can use this as a working template, though, and modify it to suit your data source. Also, you may need to increase the memory allocated to Node for this to run:
node --max_old_space_size=8000 app.js
The script looks like:
var admin = require("firebase-admin");
var serviceAccount = require("./your-firebase-project-service-account-key.json");
var fs = require('fs');
var csvFile = "./my-huge-file.csv";
var parse = require('csv-parse');
require('should');

admin.initializeApp({
    credential: admin.credential.cert(serviceAccount),
    databaseURL: "https://your-project.firebaseio.com"
});

var firestore = admin.firestore();
var thisRef;
var obj = {};
var counter = 0;
var commitCounter = 0;
var batches = [];
batches[commitCounter] = firestore.batch();

fs.createReadStream(csvFile).pipe(
    parse({delimiter: '|', relax_column_count: true, quote: ''})
).on('data', function(csvrow) {
    if (counter <= 498) {
        if (csvrow[1]) {
            obj.family = csvrow[1];
        }
        if (csvrow[2]) {
            obj.series = csvrow[2];
        }
        if (csvrow[3]) {
            obj.sku = csvrow[3];
        }
        if (csvrow[4]) {
            obj.description = csvrow[4];
        }
        if (csvrow[6]) {
            obj.price = csvrow[6];
        }
        thisRef = firestore.collection("your-collection-name").doc();
        batches[commitCounter].set(thisRef, obj);
        counter = counter + 1;
    } else {
        // batch is full: start a fresh one
        counter = 0;
        commitCounter = commitCounter + 1;
        batches[commitCounter] = firestore.batch();
    }
}).on('end', function() {
    writeToDb(batches);
});

// Resolves after just over one second; used to pace the batch commits
function oneSecond() {
    return new Promise(resolve => {
        setTimeout(() => {
            resolve('resolved');
        }, 1010);
    });
}

async function writeToDb(arr) {
    console.log("beginning write");
    for (let i = 0; i < arr.length; i++) {
        await oneSecond();
        arr[i].commit().then(function () {
            console.log("wrote batch " + i);
        });
    }
    console.log("done.");
}
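As a side note: newer versions of the Admin SDK ship a BulkWriter (firebase-admin 9.5+, if I recall correctly) that batches, throttles and retries automatically, so the manual 500-per-batch chunking and one-second pacing above may no longer be necessary. A minimal sketch under that assumption:
// Minimal sketch using BulkWriter; it handles batching and throttling
// internally, so no manual chunk counters or delays are needed.
async function writeWithBulkWriter(firestore, rows) {
    const bulkWriter = firestore.bulkWriter();
    rows.forEach(row => {
        bulkWriter.set(firestore.collection("your-collection-name").doc(), row);
    });
    await bulkWriter.close(); // flushes pending writes and waits for completion
}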

Firebase Firestore - Async/Await Not Waiting To Get Data Before Moving On?

I'm new to the async/await aspect of JS and I'm trying to learn how it works.
The error I'm getting is on line 10 of the following code. I have created a Firestore database and am trying to listen for and get a certain document from the 'rooms' collection. I am trying to get the data from the 'joiner' doc and use it to update the innerHTML of other elements.
// References and Variables
const db = firebase.firestore();
const roomRef = await db.collection('rooms');
const remoteNameDOM = document.getElementById('remoteName');
const chatNameDOM = document.getElementById('title');
let remoteUser;

// Snapshot Listener
roomRef.onSnapshot(snapshot => {
    snapshot.docChanges().forEach(async change => {
        if (roomId != null){
            if (role == "creator"){
                const usersInfo = await roomRef.doc(roomId).collection('userInfo');
                usersInfo.doc('joiner').get().then(async (doc) => {
                    remoteUser = await doc.data().joinerName;
                    remoteNameDOM.innerHTML = `${remoteUser} (Other)`;
                    chatNameDOM.innerHTML = `Chatting with ${remoteUser}`;
                })
            }
        }
    })
})
However, I am getting the error:
Uncaught (in promise) TypeError: Cannot read property 'joinerName' of undefined
Similarly, if I change lines 10-12 to:
remoteUser = await doc.data();
remoteNameDOM.innerHTML = `${remoteUser.joinerName} (Other)`;
chatNameDOM.innerHTML = `Chatting with ${remoteUser.joinerName}`;
I get the same error.
My current understanding is that await will wait for the line/function to finish before moving forward, so remoteUser shouldn't be undefined when I try to use it. I will mention that sometimes the code works fine, and the DOM elements are updated and there are no console errors.
My questions: Am I thinking about async/await calls incorrectly? Is this not how I should be getting documents from Firestore? And most importantly, why does it seem to work only sometimes?
Edit: Here are screenshots of the Firestore database as requested by @Dharmaraj. I appreciate the advice.
You are mixing the use of async/await and then(), which is not recommended. I propose below a solution based on Promise.all(), which helps in understanding the different arrays that are involved in the code. You can adapt it with async/await and a for-of loop as @Dharmaraj proposed.
roomRef.onSnapshot((snapshot) => {
    // snapshot.docChanges() returns an array of the document changes since the last snapshot.
    // You may want to check the type of each change. I guess you maybe don’t want to treat deletions.
    const promises = [];
    snapshot.docChanges().forEach(docChange => {
        // No need to use a roomId, you get the doc via docChange.doc
        // see https://firebase.google.com/docs/reference/js/firebase.firestore.DocumentChange
        if (role == "creator") { // It is not clear from where you get the value of role...
            const joinerRef = docChange.doc.ref.collection('userInfo').doc('joiner');
            promises.push(joinerRef.get());
        }
    });
    Promise.all(promises)
        .then(docSnapshotArray => {
            // docSnapshotArray is an array of all the docSnapshots
            // corresponding to all the joiner docs of all
            // the rooms that changed when the listener was triggered
            docSnapshotArray.forEach(docSnapshot => {
                remoteUser = docSnapshot.data().joinerName;
                remoteNameDOM.innerHTML = `${remoteUser} (Other)`;
                chatNameDOM.innerHTML = `Chatting with ${remoteUser}`;
            })
        });
});
However, what is not clear to me is how you differentiate the different elements of the "first" snapshot (i.e. roomRef.onSnapshot((snapshot) => {...})). If several rooms change, the snapshot.docChanges() array will contain several changes and, at the end, you will overwrite the remoteNameDOM and chatNameDOM elements in the last loop iteration.
Or, if you know upfront that this "first" snapshot will ALWAYS contain a single doc (because of the architecture of your app), you could simplify the code by just treating the first and unique element as follows:
roomRef.onSnapshot((snapshot) => {
    const roomDoc = snapshot.docChanges()[0];
    // ...
});
There are a few mistakes in this:
db.collection() does not return a promise, so await is not necessary there.
forEach ignores promises, so you can't actually use await inside forEach; for-of is preferred in that case (see the short illustration after the corrected code below).
Please try the following code:
const db = firebase.firestore();
const roomRef = db.collection('rooms');
const remoteNameDOM = document.getElementById('remoteName');
const chatNameDOM = document.getElementById('title');
let remoteUser;

// Snapshot Listener
roomRef.onSnapshot(async (snapshot) => {
    for (const change of snapshot.docChanges()) {
        if (roomId != null) {
            if (role == "creator") {
                // get the 'joiner' doc directly and await it, instead of mixing await and then()
                const joinerRef = roomRef.doc(roomId).collection('userInfo').doc('joiner');
                const doc = await joinerRef.get();
                remoteUser = doc.data().joinerName;
                remoteNameDOM.innerHTML = `${remoteUser} (Other)`;
                chatNameDOM.innerHTML = `Chatting with ${remoteUser}`;
            }
        }
    }
})
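To see the forEach point in isolation, here is a small self-contained sketch with plain timers, nothing Firestore-specific:
async function demo() {
    const delays = [300, 200, 100];

    // forEach does not wait: all three callbacks start immediately and
    // their logs arrive in timer order (100, 200, 300), not array order.
    delays.forEach(async ms => {
        await new Promise(resolve => setTimeout(resolve, ms));
        console.log('forEach finished', ms);
    });

    // for-of awaits each iteration: logs arrive in array order (300, 200, 100),
    // and the total time is the sum of the delays.
    for (const ms of delays) {
        await new Promise(resolve => setTimeout(resolve, ms));
        console.log('for-of finished', ms);
    }
}

demo();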

Creating objects in Model.js, NodeJs

In my model.js (using mongoose), I initially create 40 objects, which are to be used in the entire program. No other function in any file creates more objects; they only update the existing ones.
My model.js
var TicketSchema = mongoose.model('Tickets', TicketSchema);

for (let i = 1; i <= 40; i++) {
    var new_ticket = new TicketSchema({ticket_number: i});
    new_ticket.save(function(err, ticket) {
    });
}
The problem is that I noticed many more objects than 40 after some time. I wanted to know if model.js runs more than once during execution, or if this is just due to repeatedly calling npm run start and then closing the server?
Also, is there a better way of creating the initial objects that are to be used for the entire program?
It will create 40 new documents every time you start the server. You can use the following function to avoid creating them when records already exist, by checking the count first:
const TicketModel = mongoose.model('Tickets', TicketSchema);

const insertTicketNumber = async () => {
    try {
        const count = await TicketModel.countDocuments({});
        if (count) return;
        await TicketModel.create(
            [...Array(40).keys()]
                .map(i => i + 1)
                .map(number => ({ ticket_number: number }))
        );
    } catch (error) {
        console.log(error.message);
    }
};
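A minimal usage sketch (assuming model.js has the mongoose connection in scope; adapt to wherever you open the connection): call it once when the connection opens, so the seeding runs at most once per server start:
// Seed the tickets once the mongoose connection is ready
mongoose.connection.once('open', () => {
    insertTicketNumber();
});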

Javascript: Is there anything I can do on my end to stop a 403 from occurring?

I'm on JavaScript and I'm currently trying to work with pull requests, issues and commits from a repo. I have the following code:
const axios = require('axios');
var gitPullApiLink = "https://api.github.com/repos/elixir-lang/elixir/pulls";
var listOfCommits = [];
var listOfSHAs = [];
var mapOfInfoObjects = new Map();
var mapPullRequestNumberToCommits = new Map();
var mapPRNumbersToCommitObjects = new Map();
var listOfPrObjects = [];
var setOfFileObjects = new Set();
var listOfNumbersOfTargetedIssues = [];
var mapPRnumberToCloseOpenDateObjects = new Map();
class PullRequestParser {
    async getListOfPullRequests(pullrequestLink) {
        const message = await axios.get(pullrequestLink);
        //console.log(message);
        listOfPrObjects = message['data'];
    }
    async getCommitsForEachPullRequestAndPRinformation() {
        var listOfPrNumbers = [];
        var k;
        // this loop will just make a list of pull request numbers
        for (k = 0; k < listOfPrObjects.length; k++) {
            var currPrNumber = listOfPrObjects[k]['number'];
            listOfPrNumbers.push(currPrNumber);
        }
        // I created a separate list just because... I did it this way because on the GitHub API website it seems
        // like the pull request has the same number as the issue it affects. I explain how you can see this down below.
        listOfNumbersOfTargetedIssues = listOfPrNumbers;
        // the next loop will make objects that contain information about each pull request.
        var n;
        for (n = 0; n < listOfPrNumbers.length; n++) {
            var ApiLinkForEachPullRequest = gitPullApiLink + "/" + listOfPrNumbers[n];
            const mes = await axios.get(ApiLinkForEachPullRequest);
            var temp = {OpeningDate: mes['data']['created_at'],
                ClosingDate: mes['data']['closed_at'],
                IssueLink: mes['data']['_links']['issue']['href']};
            // mapPRnumberToCloseOpenDateObjects will be a map where the key is the pull request number and the value
            // is the object that stores the open date, close date, and issue link for that pull request. The reason
            // why I said I think the pull request number is the same as the number of the issue it affects is because
            // if you take any object from the map, say you do mapPRnumberToCloseOpenDateObjects.get(10), you'll
            // get an object with a pull request number 10. Now if you take this object and look at its "IssueLink"
            // field, the very last part of the link will have the number 10, and if you look at the GitHub API
            // it says for a single issue, you do: /repos/:owner/:repo/issues/:issue_number <---- As you can see,
            // the IssueLink field will have this structure and in place of the issue_number, the field will be 10
            // for our example object.
            mapPRnumberToCloseOpenDateObjects.set(listOfPrNumbers[n], temp);
        }
        // up to this point, we have the pull request numbers. we will now start getting the commits associated with
        // each pull request
        var j;
        for (j = 0; j < listOfPrNumbers.length; j++) {
            var currentApiLink = "https://api.github.com/repos/elixir-lang/elixir/pulls/" + listOfPrNumbers[j] + "/commits";
            const res = await axios.get(currentApiLink);
            // here we map a single pull request to the information containing the commits. I'll just warn you in
            // advance: there's another object called mapPRNumbersToCommitObjects. THIS MAP IS DIFFERENT! I know it's
            // subtle, but I hope the language can make the distinction: mapPullRequestNumberToCommits will just
            // map a pull request number to some data about the commits it's linked to. In contrast,
            // mapPRNumbersToCommitObjects will be the map that actually maps pull request numbers to objects
            // containing information about the commits a pull request is associated with!
            mapPullRequestNumberToCommits.set(listOfPrNumbers[j], res['data']);
        }
        // console.log("hewoihoiewa");
    }
    async createCommitObjects() {
        var x;
        // the initial loop using x will loop over all pull requests and get the associated commits
        for (x = 0; x < listOfPrObjects.length; x++) {
            // here we will get the commits
            var currCommitObjects = mapPullRequestNumberToCommits.get(listOfPrObjects[x]['number']);
            //console.log('dhsiu');
            // the loop using y will iterate over all commits that we get from a single pull request
            var y;
            for (y = 0; y < currCommitObjects.length; y++) {
                var currentSHA = currCommitObjects[y]['sha'];
                listOfSHAs.push(currentSHA);
                var currApiLink = "https://api.github.com/repos/elixir-lang/elixir/commits/" + currentSHA;
                const response = await axios.get(currApiLink);
                //console.log("up to here");
                // here we start extracting some information from a single commit
                var currentAuthorName = response['data']['commit']['committer']['name'];
                var currentDate = response['data']['commit']['committer']['date'];
                var currentFiles = response['data']['files'];
                // this loop will iterate over all changed files for a single commit. Remember, every commit has a list
                // of changed files, so this loop will iterate over all those files and get the necessary information
                // from them.
                var z;
                // we create this temporary list of file objects because for every file, we want to make an object
                // that will store the necessary information for that one file. after we store all the objects for
                // each file, we will add this list of file objects as a field for our bigger commit object (see down below)
                var tempListOfFileObjects = [];
                for (z = 0; z < currentFiles.length; z++) {
                    var fileInConsideration = currentFiles[z];
                    var nameOfFile = fileInConsideration['filename'];
                    var numberOfAdditions = fileInConsideration['additions'];
                    var numberOfDeletions = fileInConsideration['deletions'];
                    var totalNumberOfChangesToFile = fileInConsideration['changes'];
                    //console.log("with file");
                    var tempFileObject = {fileName: nameOfFile, totalAdditions: numberOfAdditions,
                        totalDeletions: numberOfDeletions, numberOfChanges: totalNumberOfChangesToFile};
                    // we add the same file objects to both a temporary, local list and a global set. Don't be tripped
                    // up by this; they're doing the same thing!
                    setOfFileObjects.add(tempFileObject);
                    tempListOfFileObjects.push(tempFileObject);
                }
                // here we make an object that stores information for a single commit. sha, authorName, date are single
                // values, but files will be a list of file objects and these file objects will store further information
                // for each file.
                var tempObj = {sha: currentSHA, authorName: currentAuthorName, date: currentDate, files: tempListOfFileObjects};
                var currPrNumber = listOfPrObjects[x]['number'];
                console.log(currPrNumber);
                // here we map a single pull request number to an object that will contain all the information for
                // every single commit associated with that pull request. So for every pull request, it will map to a list
                // of objects where each object stores information about a commit associated with the pull request.
                mapPRNumbersToCommitObjects.set(currPrNumber, tempObj);
            }
        }
        return mapPRNumbersToCommitObjects;
    }
    startParsingPullRequests() {
        this.getListOfPullRequests(gitPullApiLink + "?state=all").then(() => {
            this.getCommitsForEachPullRequestAndPRinformation().then(() => {
                this.createCommitObjects().then((response) => {
                    console.log("functions were successful");
                    return mapPRNumbersToCommitObjects;
                }).catch((error) => {
                    console.log("printing first error");
                    // console.log(error);
                })
            }).catch((error2) => {
                console.log("printing the second error");
                console.log(error2);
            })
        }).catch((error3) => {
            console.log("printing the third error");
            // console.log(error3);
        });
    }
    // adding some getter methods so they can be used to work with whatever information people may need.
    // I start all of them with the this.startParsingPullRequests() method because by calling that method it gets all
    // the information for the global variables.
    async getSetOfFileObjects() {
        var dummyMap = this.startParsingPullRequests();
        return setOfFileObjects;
    }
    async OpenCloseDateObjects() {
        var dummyMap = this.startParsingPullRequests();
        return mapPRnumberToCloseOpenDateObjects;
    }
    async getNumbersOfTargetedIssues() {
        var dummyMap = this.startParsingPullRequests();
        return listOfNumbersOfTargetedIssues;
    }
}
I then try to play around and run the function to make sure all the data I need is there by doing:
var dummy = new PullRequestParser();
var dummyMap = dummy.startParsingPullRequests();
And when I run it in WebStorm using:
node PullRequestParser.js
It will print out some pull request numbers, then stop about halfway with a 403 error. I know what the 403 error is, but I'm wondering if there's anything on my end to stop it from happening, or is it just a matter of working with another repo that won't throw me this error. Thanks!
The 403 error from the server means that your access is forbidden. You either need to use different credentials (that is, log in as a user with that access), not use that repository, or gracefully handle the error and do something else. Retrying will not be effective, since GitHub wouldn't be very secure if it just let you have access to things you weren't supposed to.
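If the 403 comes from GitHub's API rate limiting (unauthenticated requests have a low hourly quota), authenticating your requests raises the limit considerably. A minimal sketch of both suggestions, passing a token with axios and failing gracefully on 403; the GITHUB_TOKEN environment variable is a placeholder you would have to supply yourself:
const axios = require('axios');

// Fetch a URL with an authenticated request and handle a 403
// gracefully instead of retrying.
async function getWithAuth(url) {
    try {
        const response = await axios.get(url, {
            headers: { Authorization: `token ${process.env.GITHUB_TOKEN}` }
        });
        return response.data;
    } catch (error) {
        if (error.response && error.response.status === 403) {
            console.log("403 Forbidden for " + url + " - skipping instead of retrying");
            return null;
        }
        throw error; // anything else is unexpected, let it propagate
    }
}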

How to use query in Hyperledger composer logic.js if the query have multiple inputs?

For querying in logic.js I can use
await query('selectCommoditiesWithHighQuantity')
But how can I do that if I have multiple inputs?
If the query is defined like this:
query selectCommoditiesByTimeAndOwnerAndDataType {
    description: "Select all commodities based on their sender country"
    statement:
        SELECT org.stock.mynetwork.Commodity
            WHERE (time > _$from AND time < _$to AND owner == _$owner AND dataType == _$dataType)
}
How can I call that query from the JS side?
Edit:
JS code:
/**
 * Track the trade of a commodity from one trader to another
 * @param {org.stock.mynetwork.Receive} receive - the receive to be processed
 * @transaction
 */
async function receiveCommodity(receive) {
    let q1 = await buildQuery('SELECT org.stock.mynetwork.Commodity ' +
        'WHERE (productName == _$productName AND owner == _$owner)');
    let result2 = await query(q1, {productName: receive.productName, owner: receive.newOwner});
}
There is a problem with the let result2 = await query(q1, {productName: receive.productName, owner: receive.newOwner}); part. If I just use productName: receive.productName it works perfectly, but when I add owner: receive.newOwner it needs serialize.json.
You can write a query inside the .qry file and call it, but I do not recommend doing that. You can make the same queries directly from the SDK and in the logic.js file. The reasoning: say a few days later you want to add a new API that queries by a certain value. If you rely on the .qry file (which will work), you will need to deploy a new version of the smart contract, whereas if you use the SDK, you can make the change in the API and deploy a new application server right away.
async function someTransaction(receive) {
    let assetRegistry = await getAssetRegistry('YOUR_NAME_SPACE');
    let ownerRegistry = await getParticipantRegistry('YOUR_NAME_SPACE');
    let statement = 'SELECT NAME_SPACE_OF_ASSET WHERE (owner == _$owner AND dataType == _$dataType)';
    let qry = buildQuery(statement);
    // This query can be done in different ways,
    // assuming newOwner is a string (the id of a participant):
    let allAssets = await query(qry, { owner: receive.newOwner, dataType: receive.dataType });
    // or, assuming newOwner is a participant, resolve its identifier instead:
    // let allAssets = await query(qry, { owner: receive.newOwner.getIdentifier(), dataType: receive.dataType });
    if (allAssets.length === 0) {
        // No assets exist, add one
        // use assetRegistry.add()
    } else {
        for (var i = 0; i < allAssets.length; i++) {
            // Iterate over the assets belonging to an owner of a product type
            // Do whatever here
            // use assetRegistry.update()
        }
    }
}
