I am trying to create a function in JS for k6 scripts that takes a "Transaction Name" as input and creates multiple types of metrics for it, and then a second function to populate those metrics. This would help me avoid writing similar code for different transaction names and keep the metric names consistent.
// lines of code defining the metrics
import { Trend, Rate, Counter } from "k6/metrics";

let Search_RT_Trend = new Trend("Search_duration");
let Search_PassRate = new Rate("Search_PassRate");
let Search_PassCount = new Counter("Search_PassCount");
let Search_FailCount = new Counter("Search_FailCount");

// lines of code populating the data in the metrics
Search_RT_Trend.add(res.timings.duration);
Search_PassRate.add(1);
Search_PassCount.add(1);
Search_FailCount.add(1);
I'm hoping to create two functions that take the transaction name as input, possibly like this:
CreateMetric("Search")
PopulateMetric("Search")
How can I achieve this?
Something like this?
import http from "k6/http";
import { sleep } from "k6";
import { Trend, Rate, Counter } from "k6/metrics";

function MetaMetric(name) {
    this.RT_Trend = new Trend(`${name}_duration`);
    this.PassRate = new Rate(`${name}_PassRate`);
    this.PassCount = new Counter(`${name}_PassCount`);
    this.FailCount = new Counter(`${name}_FailCount`);
}

MetaMetric.prototype.track = function (req) {
    this.RT_Trend.add(req.timings.duration);
    if (req.timings.duration < 200 /* or whatever */) {
        this.PassRate.add(1);
        this.PassCount.add(1);
    } else {
        this.PassRate.add(0);
        this.FailCount.add(1);
    }
};

let myMetaMetric = new MetaMetric("Search");

export default function () {
    let resp = http.get("https://httpbin.test.loadimpact.com/");
    myMetaMetric.track(resp);
    sleep(3 * Math.random());
}
Some things to consider:
You don't need pass and fail Counter metrics when you have a Rate one. Rate is essentially the ratio between passing and failing, so it's basically those two counters combined :)
You might find the k6 checks and thresholds useful.
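For instance, here's a minimal sketch combining both ideas (the URL and the threshold bounds are just placeholders I picked):

import http from "k6/http";
import { check, sleep } from "k6";
import { Trend } from "k6/metrics";

let searchDuration = new Trend("Search_duration");

export const options = {
    thresholds: {
        // fail the test run if the 95th percentile exceeds 500ms (placeholder bound)
        Search_duration: ["p(95)<500"],
        // `checks` is the built-in Rate metric fed by the check() calls below
        checks: ["rate>0.95"],
    },
};

export default function () {
    let res = http.get("https://httpbin.test.loadimpact.com/");
    searchDuration.add(res.timings.duration);
    // each named condition records a pass/fail sample on the `checks` Rate
    check(res, {
        "status is 200": (r) => r.status === 200,
        "duration < 200ms": (r) => r.timings.duration < 200,
    });
    sleep(1);
}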
Transactions and batched writes can be used to write multiple documents by means of an atomic operation.
The documentation says that, using the Cloud Firestore client libraries, you can group multiple operations into a single transaction.
I cannot understand what "client libraries" means here, and whether it's correct to use transactions and batched writes within a Cloud Function.
Example: suppose the database already contains 3 elements (whose doc IDs are A, B, C) and I need to insert 3 more (whose doc IDs are C, D, E). The Cloud Function should add just the new ones and send a push notification to the user telling him that 2 new documents are available.
A doc ID may already exist, and since I need to count how many documents are actually new (the ones that will be inserted), I need a way to read each doc ID first and check for its existence. Hence, I'm wondering if transactions fit Cloud Functions or not.
Also, each transaction or batch of writes can write to a maximum of 500 documents. Is there any other way to overcome this limit within a Cloud Function?
Firestore transaction behaviour is different between the client SDKs (JS SDK, iOS SDK, Android SDK, ...) and the Admin SDK (a set of server libraries), which is the SDK we use in a Cloud Function. More explanations on the differences are here in the documentation.
Because of the type of data contention used in the Admin SDK, you can, with the getAll() method, retrieve multiple documents from Firestore and hold a pessimistic lock on all returned documents.
So this is exactly the method you need to call in your transaction: you use getAll() to fetch documents C, D and E, and you detect that only C exists, so you know you only need to add D and E.
Concretely, it could be something along the following lines:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

const db = admin.firestore();

exports.lorenzoFunction = functions
    .region('europe-west1')
    .firestore
    .document('tempo/{docId}') // Just a way to trigger the test Cloud Function!!
    .onCreate(async (snap, context) => {
        const c = db.doc('coltest/C');
        const d = db.doc('coltest/D');
        const e = db.doc('coltest/E');
        const docRefsArray = [c, d, e];

        return db.runTransaction(transaction => {
            return transaction.getAll(...docRefsArray).then(snapsArray => {
                let counter = 0;
                snapsArray.forEach(snap => {
                    if (!snap.exists) {
                        counter++;
                        transaction.set(snap.ref, { foo: "bar" });
                    } else {
                        console.log(snap.id + " exists");
                    }
                });
                console.log(counter);
                return;
            });
        });
    });
To test it: create one of the C, D or E docs in the coltest collection, then create a doc in the tempo collection (just a simple way to trigger this test Cloud Function): the CF is triggered. Then look at the coltest collection: the two missing docs were created; and look at the CF log: counter = 2.
"Also, each transaction or batch of writes can write to a maximum of 500 documents. Is there any other way to overcome this limit within a Cloud Function?"
AFAIK the answer is no.
There also used to be a required one-second delay between 500-record chunks. I wrote this a couple of years ago. The script below reads the CSV file line by line, building a document object for each line. A counter starts a new batch per 500 objects, and async/await is used to rate limit the writes to 1 batch per second. Last, we notify the user of the write progress with console logging. I published an article on this here >> https://hightekk.com/articles/firebase-admin-sdk-bulk-import
NOTE: In my case I am reading a huge flat text file (a manufacturer's part number catalog) for import. You can use this as a working template, though, and modify it to suit your data source. Also, you may need to increase the memory allocated to node for this to run:
node --max_old_space_size=8000 app.js
The script looks like:
var admin = require("firebase-admin");
var serviceAccount = require("./your-firebase-project-service-account-key.json");
var fs = require('fs');
var csvFile = "./my-huge-file.csv";
var parse = require('csv-parse');
require('should');

admin.initializeApp({
    credential: admin.credential.cert(serviceAccount),
    databaseURL: "https://your-project.firebaseio.com"
});

var firestore = admin.firestore();
var thisRef;
var counter = 0;
var commitCounter = 0;
var batches = [];
batches[commitCounter] = firestore.batch();

fs.createReadStream(csvFile).pipe(
    parse({ delimiter: '|', relax_column_count: true, quote: '' })
).on('data', function(csvrow) {
    // start a fresh batch every 500 writes (Firestore's per-batch limit)
    if (counter >= 500) {
        counter = 0;
        commitCounter = commitCounter + 1;
        batches[commitCounter] = firestore.batch();
    }
    // build a new object per row so fields from previous rows can't leak in
    var obj = {};
    if (csvrow[1]) { obj.family = csvrow[1]; }
    if (csvrow[2]) { obj.series = csvrow[2]; }
    if (csvrow[3]) { obj.sku = csvrow[3]; }
    if (csvrow[4]) { obj.description = csvrow[4]; }
    if (csvrow[6]) { obj.price = csvrow[6]; }
    thisRef = firestore.collection("your-collection-name").doc();
    batches[commitCounter].set(thisRef, obj);
    counter = counter + 1;
}).on('end', function() {
    writeToDb(batches);
});

// resolves after just over one second; used to pace the batch commits
function oneSecond() {
    return new Promise(resolve => {
        setTimeout(() => {
            resolve('resolved');
        }, 1010);
    });
}

async function writeToDb(arr) {
    console.log("beginning write");
    for (let i = 0; i < arr.length; i++) {
        await oneSecond();
        arr[i].commit().then(function() {
            console.log("wrote batch " + i);
        });
    }
    console.log("done.");
}
I have written the script below, which creates multiple WebSocket connections with a smart contract to listen to events. It's working fine, but I feel this is not an optimized solution and it could probably be done in a better way.
const main = async (PAIR_NAME, PAIR_ADDRESS_UNISWAP, PAIR_ADDRESS_SUSHISWAP) => {
    const PairContractHTTPUniswap = new Blockchain.web3http.eth.Contract(
        UniswapV2Pair.abi,
        PAIR_ADDRESS_UNISWAP
    );
    const PairContractWSSUniswap = new Blockchain.web3ws.eth.Contract(
        UniswapV2Pair.abi,
        PAIR_ADDRESS_UNISWAP
    );
    const PairContractHTTPSushiswap = new Blockchain.web3http.eth.Contract(
        UniswapV2Pair.abi,
        PAIR_ADDRESS_SUSHISWAP
    );
    const PairContractWSSSushiswap = new Blockchain.web3ws.eth.Contract(
        UniswapV2Pair.abi,
        PAIR_ADDRESS_SUSHISWAP
    );

    var Price_Uniswap = await getReserves(PairContractHTTPUniswap);
    var Price_Sushiswap = await getReserves(PairContractHTTPSushiswap);

    // subscribe to Sync event of Pair
    PairContractWSSUniswap.events.Sync({}).on("data", (data) => {
        Price_Uniswap = (Big(data.returnValues.reserve0)).div(Big(data.returnValues.reserve1));
        priceDifference(Price_Uniswap, Price_Sushiswap, PAIR_NAME);
    });
    PairContractWSSSushiswap.events.Sync({}).on("data", (data) => {
        Price_Sushiswap = (Big(data.returnValues.reserve0)).div(Big(data.returnValues.reserve1));
        priceDifference(Price_Uniswap, Price_Sushiswap, PAIR_NAME);
    });
};

for (let i = 0; i < pairsArray.length; i++) {
    main(pairsArray[i].tokenPair, pairsArray[i].addressUniswap, pairsArray[i].addressSushiswap);
}
In the end, I invoke the main function once per pair from a pairs array, in a for-loop. I think this way of solving it is brute force, and there is probably a better way of doing it.
Any suggestions/opinions would be really appreciated.
Just to clear up the terms: You're opening a websocket connection to the WSS node provider - not to the smart contracts. But yes, your JS snippet subscribes to multiple channels (one for each contract) within this one connection (to the node provider).
You can collect event logs from multiple contracts through just one WSS channel using the web3.eth.subscribe("logs") function (docs), passing it the list of contract addresses as a param. Example:
const options = {
    // list of contract addresses that you want to subscribe to their event logs
    address: ["0x123", "0x456"]
};
web3.eth.subscribe("logs", options, (err, data) => {
    console.log(data);
});
But it has a drawback - it doesn't decode the event log data for you. So your code will need to find the expected data types based on the event signature (returned in data.topics[0]). Once you know which event log is emitted based on the topics[0] event signature (real-life example value in this answer), you can use the decodeLog() function (docs) to get the decoded values.
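For example, for the Sync(uint112,uint112) event emitted by the Uniswap V2 pairs above, the decoding could look roughly like this (a sketch, assuming a connected web3 instance; I compute the signature hash rather than hardcoding it):

// event signature hash for Sync(uint112 reserve0, uint112 reserve1)
const SYNC_TOPIC = web3.utils.sha3("Sync(uint112,uint112)");

web3.eth.subscribe("logs", { address: [PAIR_ADDRESS_UNISWAP, PAIR_ADDRESS_SUSHISWAP] }, (err, log) => {
    if (err || log.topics[0] !== SYNC_TOPIC) return;
    // Sync has no indexed parameters, so both reserves are ABI-encoded in log.data
    const decoded = web3.eth.abi.decodeLog(
        [
            { type: "uint112", name: "reserve0" },
            { type: "uint112", name: "reserve1" }
        ],
        log.data,
        log.topics.slice(1) // empty here, since no parameters are indexed
    );
    console.log(log.address, decoded.reserve0, decoded.reserve1);
});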
I'm working in JavaScript, and I'm currently trying to work with pull requests, issues and commits from a repo. I have the following code:
const axios = require('axios');
var gitPullApiLink = "https://api.github.com/repos/elixir-lang/elixir/pulls";
var listOfCommits = [];
var listOfSHAs = [];
var mapOfInfoObjects = new Map();
var mapPullRequestNumberToCommits = new Map();
var mapPRNumbersToCommitObjects = new Map();
var listOfPrObjects = [];
var setOfFileObjects = new Set();
var listOfNumbersOfTargetedIssues = [];
var mapPRnumberToCloseOpenDateObjects = new Map();
class PullRequestParser {
    async getListOfPullRequests(pullrequestLink) {
        const message = await axios.get(pullrequestLink);
        //console.log(message);
        listOfPrObjects = message['data'];
    }

    async getCommitsForEachPullRequestAndPRinformation() {
        var listOfPrNumbers = [];
        var k;
        // this loop will just make a list of pull request numbers
        for (k = 0; k < listOfPrObjects.length; k++) {
            var currPrNumber = listOfPrObjects[k]['number'];
            listOfPrNumbers.push(currPrNumber);
        }
        // I created a separate list because on the GitHub API website it seems
        // like the pull request has the same number as the issue it affects. I explain how you can see this down below.
        listOfNumbersOfTargetedIssues = listOfPrNumbers;
        // the next loop will make objects that contain information about each pull request.
        var n;
        for (n = 0; n < listOfPrNumbers.length; n++) {
            var ApiLinkForEachPullRequest = gitPullApiLink + "/" + listOfPrNumbers[n];
            const mes = await axios.get(ApiLinkForEachPullRequest);
            var temp = {
                OpeningDate: mes['data']['created_at'],
                ClosingDate: mes['data']['closed_at'],
                IssueLink: mes['data']['_links']['issue']['href']
            };
            // mapPRnumberToCloseOpenDateObjects will be a map where the key is the pull request number and the value
            // is the object that stores the open date, close date, and issue link for that pull request. The reason
            // why I said I think the pull request number is the same as the number of the issue it affects is because
            // if you take any object from the map, say you do mapPRnumberToCloseOpenDateObjects.get(10), you'll
            // get an object with a pull request number 10. Now if you take this object and look at its "IssueLink"
            // field, the very last part of the link will have the number 10, and if you look at the GitHub API
            // it says for a single issue, you do: /repos/:owner/:repo/issues/:issue_number <---- As you can see,
            // the IssueLink field will have this structure and in place of the issue_number, the field will be 10
            // for our example object.
            mapPRnumberToCloseOpenDateObjects.set(listOfPrNumbers[n], temp);
        }
        // up to this point, we have the pull request numbers. we will now start getting the commits associated with
        // each pull request
        var j;
        for (j = 0; j < listOfPrNumbers.length; j++) {
            var currentApiLink = "https://api.github.com/repos/elixir-lang/elixir/pulls/" + listOfPrNumbers[j] + "/commits";
            const res = await axios.get(currentApiLink);
            // here we map a single pull request to the information containing the commits. I'll just warn you in
            // advance: there's another object called mapPRNumbersToCommitObjects. THIS MAP IS DIFFERENT! I know it's
            // subtle, but I hope the language can make the distinction: mapPullRequestNumberToCommits will just
            // map a pull request number to some data about the commits it's linked to. In contrast,
            // mapPRNumbersToCommitObjects will be the map that actually maps pull request numbers to objects
            // containing information about the commits a pull request is associated with!
            mapPullRequestNumberToCommits.set(listOfPrNumbers[j], res['data']);
        }
        // console.log("hewoihoiewa");
    }
    async createCommitObjects() {
        var x;
        // the initial loop using x will loop over all pull requests and get the associated commits
        for (x = 0; x < listOfPrObjects.length; x++) {
            // here we will get the commits
            var currCommitObjects = mapPullRequestNumberToCommits.get(listOfPrObjects[x]['number']);
            //console.log('dhsiu');
            // we collect one object per commit in this list, so the map value ends up being a list
            var commitObjectsForPr = [];
            // the loop using y will iterate over all commits that we get from a single pull request
            var y;
            for (y = 0; y < currCommitObjects.length; y++) {
                var currentSHA = currCommitObjects[y]['sha'];
                listOfSHAs.push(currentSHA);
                var currApiLink = "https://api.github.com/repos/elixir-lang/elixir/commits/" + currentSHA;
                const response = await axios.get(currApiLink);
                //console.log("up to here");
                // here we start extracting some information from a single commit
                var currentAuthorName = response['data']['commit']['committer']['name'];
                var currentDate = response['data']['commit']['committer']['date'];
                var currentFiles = response['data']['files'];
                // this loop will iterate over all changed files for a single commit. Remember, every commit has a list
                // of changed files, so this loop will iterate over all those files and get the necessary information
                // from them.
                var z;
                // we create this temporary list of file objects because for every file, we want to make an object
                // that will store the necessary information for that one file. after we store all the objects for
                // each file, we will add this list of file objects as a field of our bigger commit object (see down below)
                var tempListOfFileObjects = [];
                for (z = 0; z < currentFiles.length; z++) {
                    var fileInConsideration = currentFiles[z];
                    var nameOfFile = fileInConsideration['filename'];
                    var numberOfAdditions = fileInConsideration['additions'];
                    var numberOfDeletions = fileInConsideration['deletions'];
                    var totalNumberOfChangesToFile = fileInConsideration['changes'];
                    //console.log("with file");
                    var tempFileObject = {
                        fileName: nameOfFile,
                        totalAdditions: numberOfAdditions,
                        totalDeletions: numberOfDeletions,
                        numberOfChanges: totalNumberOfChangesToFile
                    };
                    // we add the same file objects to both a temporary, local list and a global set. Don't be tripped
                    // up by this; they're doing the same thing!
                    setOfFileObjects.add(tempFileObject);
                    tempListOfFileObjects.push(tempFileObject);
                }
                // here we make an object that stores information for a single commit. sha, authorName, and date are single
                // values, but files will be a list of file objects, and those file objects store further information
                // for each file.
                var tempObj = { sha: currentSHA, authorName: currentAuthorName, date: currentDate, files: tempListOfFileObjects };
                commitObjectsForPr.push(tempObj);
            }
            var currPrNumber = listOfPrObjects[x]['number'];
            console.log(currPrNumber);
            // here we map a single pull request number to a list of objects, where each object stores all the
            // information for one commit associated with that pull request.
            mapPRNumbersToCommitObjects.set(currPrNumber, commitObjectsForPr);
        }
        return mapPRNumbersToCommitObjects;
    }
    startParsingPullRequests() {
        this.getListOfPullRequests(gitPullApiLink + "?state=all").then(() => {
            this.getCommitsForEachPullRequestAndPRinformation().then(() => {
                this.createCommitObjects().then((response) => {
                    console.log("functions were successful");
                    return mapPRNumbersToCommitObjects;
                }).catch((error) => {
                    console.log("printing first error");
                    // console.log(error);
                });
            }).catch((error2) => {
                console.log("printing the second error");
                console.log(error2);
            });
        }).catch((error3) => {
            console.log("printing the third error");
            // console.log(error3);
        });
    }

    // adding some getter methods so they can be used to work with whatever information people may need.
    // I start all of them with the this.startParsingPullRequests() method because by calling that method it gets all
    // the information for the global variables.
    async getSetOfFileObjects() {
        var dummyMap = this.startParsingPullRequests();
        return setOfFileObjects;
    }

    async OpenCloseDateObjects() {
        var dummyMap = this.startParsingPullRequests();
        return mapPRnumberToCloseOpenDateObjects;
    }

    async getNumbersOfTargetedIssues() {
        var dummyMap = this.startParsingPullRequests();
        return listOfNumbersOfTargetedIssues;
    }
}
I then try to play around and run the function to make sure all the data I need is there by doing:
var dummy = new PullRequestParser();
var dummyMap = dummy.startParsingPullRequests();
And when I run it in WebStorm using:
node PullRequestParser.js
It will print out some pull request numbers, then stop about halfway with a 403 error. I know what the 403 error is, but I'm wondering if there's anything on my end to stop it from happening, or is it just a matter of working with another repo that won't throw me this error. Thanks!
The 403 error from the server means that your access is forbidden. You either need to use different credentials (that is, log in as a user with that access), not use that repository, or gracefully handle the error and do something else. Retrying will not be effective, since GitHub wouldn't be very secure if it just let you have access to things you weren't supposed to.
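If the 403 comes from GitHub rejecting unauthenticated requests that have exceeded the rate limit, sending credentials with each request raises that limit. A hypothetical sketch with axios (the token value is a placeholder for a personal access token, not a real secret):

const axios = require('axios');

// placeholder: generate a real personal access token in your GitHub settings
const config = {
    headers: {
        Accept: "application/vnd.github.v3+json",
        Authorization: "token YOUR_PERSONAL_ACCESS_TOKEN"
    }
};

axios.get(gitPullApiLink + "?state=all", config).then((message) => {
    console.log(message['data'].length + " pull requests fetched");
});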
I'm trying to keep track of an increment of a certain reactive value in Meteor. If the current value has increased by 1 or more, I want something to happen. I have two problems:
First: I don't know how to turn this into an if-statement.
Second: I don't know how to keep track of the increases.
This is the code I have now, using the Mongo.Collection cars (which is from an API):
const api = DDP.connect('url');
const currentCars = new Meteor.Collection('cars', api);
const newCars = currentCars.find().count();
if (Meteor.isClient) {
    Template.currentCars.helpers({
        carsInCity: function() {
            return currentCars.find({
                location: "city"
            }).count();
        },
    });
}
So there's a current number of cars in the city. Every time there is one more car, I want something to happen in the code. But how on earth can I do that? Maybe by keeping track of when the database has been updated?
A fairly straightforward solution would be to store the current document count of that collection, then run a reactive computation to see if anything changed.
Something like this:
let currentCarsCount = currentCars.find().count();

Tracker.autorun(function checkForAddedCars() {
    // currentCars.find() is our reactive source
    const newCarsCount = currentCars.find().count();
    if (newCarsCount > currentCarsCount) {
        currentCarsCount = newCarsCount;
        // there are new cars, handle them now
        // ...
    }
});
You may also want to use a template-level autorun so that you don't have to manage stopping checkForAddedCars. You could also store currentCarsCount as state on the template instance instead of as a hoisted variable.
For example:
Template.currentCars.onCreated(function() {
    const templateInstance = this;
    // equivalent: const templateInstance = Template.instance();

    templateInstance.currentCarsCount = currentCars.find().count();

    templateInstance.autorun(function checkForAddedCars() {
        // currentCars.find() is our reactive source
        const newCarsCount = currentCars.find().count();
        if (newCarsCount > templateInstance.currentCarsCount) {
            templateInstance.currentCarsCount = newCarsCount;
            // there are new cars, handle them now
            // ...
        }
    });
});
It would also allow you to access currentCarsCount from other places in the template code.
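For instance, an event handler could read it (a minimal sketch; the button selector is hypothetical):

Template.currentCars.events({
    'click .show-count': function(event, templateInstance) {
        // reads the count stored on the template instance by onCreated
        console.log('cars seen so far:', templateInstance.currentCarsCount);
    },
});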
Here is a simple task I would like to accomplish on Parse.com with Cloud Code.
The task consists to delete a Unit and what is related to it.
One Unit has several Sentences related to it and each Sentence has one or more Translations.
So when the task is performed, the Unit as well as the Sentence and Translations should be deleted.
I have a strong feeling I should be using Promises (and chaining them) in order to do this in a good manner.
Below is the code I wrote, but it only works partially (the translations are deleted, but not the rest).
Parse.Cloud.define("deleteUnitAndDependencies", function(request, response) {
    var unitListQuery = new Parse.Query("UnitList");
    unitListQuery.equalTo("objectId", request.params.unitID);
    unitListQuery.equalTo("ownerID", request.params.userID);
    unitListQuery.find().then(function(resUnit) {
        var sentenceListQuery = new Parse.Query("SentenceList");
        sentenceListQuery.equalTo("unit", resUnit[0]);
        return sentenceListQuery.find();
    }).then(function(resSentence) {
        var translatListQuery = new Parse.Query("TranslatList");
        for (var i = 0; i < resSentence.length; i++) {
            var query = new Parse.Query("TranslatList");
            query.equalTo("sentence", resSentence[i]);
            translatListQuery = Parse.Query.or(translatListQuery, query);
        }
        return translatListQuery.find();
    }).then(function(resTranslat) {
        for (var iT = 0; iT < resTranslat.length; iT++) {
            resTranslat[iT].destroy({});
        }
    });
});
I surely need to add some lines of code like:
resSentence[x].destroy({});
and:
resUnit[0].destroy({});
The problem is that I do not quite see where the adequate place for that is.
Collect the objects to be deleted, then use Parse.Object.destroyAll(someArray); to delete them all at once.
In cases like this I like to use a scope variable to hold things for later use.
var scope = {
    sentences: [],
    units: []
};

// later, inside a then block...
scope.sentences.push(resSentence[i]);

// ...now we have them collected safely
.then(function() {
    return Parse.Object.destroyAll(scope.sentences);
})
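Putting it together, the whole chain could look something like this (a sketch, reusing the class and parameter names from the question; I've swapped the Parse.Query.or loop for containedIn(), which does the same lookup more directly, and the success/error handlers are mine):

Parse.Cloud.define("deleteUnitAndDependencies", function(request, response) {
    // scope variable holding objects for deletion at the end
    var scope = { units: [], sentences: [] };

    var unitListQuery = new Parse.Query("UnitList");
    unitListQuery.equalTo("objectId", request.params.unitID);
    unitListQuery.equalTo("ownerID", request.params.userID);
    unitListQuery.find().then(function(resUnit) {
        scope.units = resUnit;
        var sentenceListQuery = new Parse.Query("SentenceList");
        sentenceListQuery.equalTo("unit", resUnit[0]);
        return sentenceListQuery.find();
    }).then(function(resSentence) {
        scope.sentences = resSentence;
        var translatListQuery = new Parse.Query("TranslatList");
        translatListQuery.containedIn("sentence", resSentence);
        return translatListQuery.find();
    }).then(function(resTranslat) {
        // delete translations first, then sentences, then the unit itself
        return Parse.Object.destroyAll(resTranslat);
    }).then(function() {
        return Parse.Object.destroyAll(scope.sentences);
    }).then(function() {
        return Parse.Object.destroyAll(scope.units);
    }).then(function() {
        response.success("deleted unit and dependencies");
    }, function(error) {
        response.error(error);
    });
});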