Avoid an infinite loop but still ensure user validation - javascript

As part of the system I'm building, a registered user needs to validate his email address by clicking a link in the following format:
https://example.com/verification/${A key of 256 random characters}
I use crypto.randomBytes(128) to generate the key.
After creating the key, I check that it is not already in use. The problem is that I have to limit the number of attempts, and I don't want to reach the point where 2 users get the same key.
How should I deal with this situation? Limit the number of attempts to a high number like 10,000, or is there a better way?

The crypto module in Node.js provides the method randomBytes(size[, callback]): you pass the size (256 in your case) and a callback that takes two arguments, err and buf, where buf is a Buffer containing the generated bytes. This is an example of using the method:
const {
  randomBytes,
} = await import('node:crypto');

randomBytes(256, (err, buf) => {
  if (err) throw err;
  // Rest of the code
});
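To address the collision concern directly: 128 random bytes give you 256 hex characters, and a collision between two such keys is practically impossible, so a small retry cap is plenty. A minimal sketch, assuming a hypothetical isKeyInUse(key) lookup against your own storage:
const { randomBytes } = require('node:crypto');

// Hypothetical lookup against your database - replace with your own check.
async function isKeyInUse(key) {
  return false;
}

async function generateVerificationKey(maxAttempts = 10) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    // 128 random bytes -> 256 hex characters, as in the question
    const key = randomBytes(128).toString('hex');
    if (!(await isKeyInUse(key))) {
      return key;
    }
  }
  // With 1024 bits of randomness this should effectively never happen
  throw new Error('Could not generate a unique verification key');
}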

Related

Filter web3.eth.getTransactionCount() based on transaction type

I'm using this line of code to get all the transaction counts under a particular contract.
web3.eth.getTransactionCount("//contract address").then(console.log);
But I only want to count the transactions where the minting of NFTs was done.
As you can see in the screenshot, there are six transactions, and getTransactionCount() would return six. But I only want to count those transactions which have the method "Mint NFT".
Is there any way to do that?
The web3 getTransactionCount() function (docs) is a wrapper to the eth_getTransactionCount JSON-RPC method (docs) returning the amount of mined transactions sent from the specified address.
Since your code specifies the "contract address", I'm assuming that you want to return the amount of transactions performing the specified action on the contract address - by any caller.
Mind that the "Mint NFT" in the "Method" column of the screenshot simply represents the executed function name. It does not mean that an NFT was in fact minted. So theoretically you can have a function named mintNFT() that performs a totally different action:
function mintNFT() external {
    // does not mint any NFT
    counter++;
}
Also - and this is important for the next paragraph - a contract can contain multiple functions with the same name that result in different function selectors because they have different parameter datatypes.
// all of these can be in the same contract
mintNFT(address to) // selector `0x54ba0f27`
mintNFT(address to, uint256 ID) // selector `0x3c168eab`
mintNFT(uint256 ID) // selector `0x92642744`
So if you want to filter by the invoked function, you'll need to know (or calculate) its specific selector - not just the function name. The selector is the first 4 bytes (8 hex characters) of the keccak-256 hash of the function name followed by the argument datatypes, separated by commas and wrapped in parentheses. Example:
// keccak-256 hash of this string, first 4 bytes
// and you get the `0x3c168eab` selector mentioned earlier
mintNFT(address,uint256)
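If you'd rather not compute this by hand, web3 itself can calculate the selector for you. A small sketch (assuming web3.js 1.x; the printed value is taken from the example above):
const Web3 = require('web3');
const web3 = new Web3(); // no provider needed for hashing utilities

// keccak-256 of the canonical signature, truncated to the first 4 bytes
const selector = web3.eth.abi.encodeFunctionSignature('mintNFT(address,uint256)');
console.log(selector); // '0x3c168eab' (per the example above)

// equivalent: hash the string yourself and keep the first 4 bytes (8 hex chars)
const manual = web3.utils.keccak256('mintNFT(address,uint256)').slice(0, 10);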
Finally, there is no native way (in the Ethereum JSON-RPC API that some other blockchains, such as Polygon or BSC, implement as well) to get a list of transactions sent or received by an address. Since web3 is a wrapper library to the JSON-RPC API (plus it contains a few related helpers and utility functions), there's no way to achieve this result using web3.
So you'll need to collect all existing transactions (at least in the block range that you're interested in) to a searchable database. Or - to use an already existing searchable collection.
For the purpose of this answer, I'm going to be using the PolygonScan API (free for limited use), as your question suggests the target contract is on Polygon, together with this contract, which implements two functions with the same name (mint()) but different function selectors. Let's use the latter one, defined on line 2003:
function mint(
    address to,
    uint32 batch,
    uint32 sequence,
    uint32 limit,
    string memory name,
    string memory page,
    string memory description,
    string memory link,
    string memory content,
    string memory created
)
The selector is calculated from the following string
mint(address,uint32,uint32,uint32,string,string,string,string,string,string)
and its value is 0xab2a6d77.
Then the process is straightforward. Once you get the list of all transactions to the contract address, you simply filter them by the chosen function selector, which is always the first 4 bytes of the data field of the raw transaction (named as input in the API).
Note that it's possible to send a transaction to a contract having less than 4 bytes in the data field (usually 0), so your code should also account for that.
const axios = require("axios");
const API_KEY = "YourApiKeyToken";
const CONTRACT_ADDRESS = "0x90410A6bc2285dF5A726b0b89D8bE60C9B6fA26E";
const SELECTOR = "0xab2a6d77";
const run = async () => {
const allTransactions = await _getAllTransactionsTo(CONTRACT_ADDRESS);
const filteredTransactions = _filterTransactionsBySelector(allTransactions, SELECTOR);
const txHashes = _getTransactionHashes(filteredTransactions);
console.log(txHashes);
}
const _getAllTransactionsTo = async (address) => {
const response = await axios(
"https://api.polygonscan.com/api"
+ "?module=account"
+ "&action=txlist"
+ "&address=" + address
+ "&startblock=0"
+ "&endblock=99999999"
+ "&page=1"
+ "&offset=10"
+ "&sort=asc"
+ "&apikey=" + API_KEY
);
// the API returns transaction both `from` and `to` the address
// this filter is unnecessary for contract addresses (as there are no transactions `from` a contract)
return response.data.result.filter((item) => {
// mind that the API might return the params in lowercase, while your input address might be checksum
return item.to.toLowerCase() === address.toLowerCase();
});
}
const _filterTransactionsBySelector = (allTransactions, selector) => {
return allTransactions.filter((item) => {
return item?.input.startsWith(selector);
});
}
const _getTransactionHashes = (transactions) => {
return transactions.map((item) => {
return item.hash;
});
}
run();
This prints the hashes of transactions invoking the specified mint() function (only the one identified by the selector, not the other one). The count the question asks for is then simply filteredTransactions.length.
[
'0xd6a141780585e0833382cfa5940db4bd1b2acde8108c949421242077d6a5d16d',
'0x63bfaae7afbd50f1ee052182aeee7147ba256c6285bc2783f8607a881dbf135f',
'0x57cd8c74c13aded28841f14a79b1901a5e12a87aae38ea4f8af19a7ef976e281',
'0x8dbbb9f7fa11ec0c025d1cec2ac75862a89dc10fe69d4599f2401db2a089c0c9',
'0x34fa6285fbf63313127422be8b84214906642a2399b971e863b0b7950bd57ca4',
'0x4fcab773f03ed5f12b85c2329e14226b5afb4120e9227e8da423bd7d88fb06ed',
'0x67d540059dcf9d6c6a63e03f8882fe1522897d09c03f9185c540c807cf1972b8'
]

How to do find all or get all in dynamo db in NodeJS

I want to get all the data from a table in DynamoDB in Node.js. This is my code:
const READ = async (payload) => {
    const params = {
        TableName: payload.TableName,
    };

    let scanResults = [];
    let items;

    do {
        items = await dbClient.scan(params).promise();
        items.Items.forEach((item) => scanResults.push(item));
        params.ExclusiveStartKey = items.LastEvaluatedKey;
    } while (typeof items.LastEvaluatedKey != "undefined");

    return scanResults;
};
I implemented this and it works fine, but our code review tool flags it as not optimized or as potentially causing a memory leak, and I cannot figure out why. I have read elsewhere that the Scan API is not the most efficient way to get all data from DynamoDB in Node. Is there something else I'm missing to optimize this code?
DO THIS ONLY IF YOUR DATA SIZE IS VERY SMALL (fewer than 100 items or less than 1 MB of data is what I'd prefer, and in that case you don't even need the do-while loop).
Think about the following scenario: what happens when more and more items get added to the DynamoDB table in the future? This code returns all of that data and puts it into the scanResults variable, which impacts memory. The DynamoDB Scan operation is also expensive, in terms of both memory and cost.
It's perfectly okay to use the Scan operation if the data set is very small. Otherwise, go with pagination (I always prefer this). If there are thousands of items, who is going to look at all of them in a single shot? Use pagination instead, as sketched below.
Let's take another scenario: if your requirement is to retrieve all the data for some analytics or aggregation, then it's better to store the aggregated data upfront as an item (in the same or a different DynamoDB table), or to use an analytics database.
If your requirement is something else, elaborate on it in the question.
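If you do go with pagination, the idea is to return one page per call and hand LastEvaluatedKey back to the caller instead of accumulating everything in memory. A minimal sketch in the same AWS SDK v2 style as the question (the page size and the wrapper's parameter names are my own illustration):
// One page per call; the caller passes `nextKey` back in to fetch the following page
const READ_PAGE = async ({ TableName, Limit = 100, nextKey }) => {
    const params = {
        TableName,
        Limit,
        ...(nextKey && { ExclusiveStartKey: nextKey }),
    };

    const result = await dbClient.scan(params).promise();

    return {
        items: result.Items,
        // undefined once there are no more pages
        nextKey: result.LastEvaluatedKey,
    };
};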

Better performance when saving large JSON file to MySQL

I have an issue.
So, my story is:
I have a 30 GB JSON file of all reddit posts in a specific timeframe.
I will not insert all values of each post into the table.
I have followed this series, where he codes what I'm trying to do in Python.
I tried to follow along (in NodeJS), but when I'm testing it, it's way too slow. It inserts one row every 5 seconds, and there are 500,000+ reddit posts, so that would literally take years.
So here's an example of what I'm doing:
var readStream = fs.createReadStream(location)

oboe(readStream)
    .done(async function(post) {
        let { parent_id, body, created_utc, score, subreddit } = post;
        let comment_id = post.name;

        // Checks if there is a comment with the comment id of this post's parent id in the table
        getParent(parent_id, function(parent_data) {
            // Checks if there is a comment with the same parent id, and then checks which one has higher score
            getExistingCommentScore(parent_id, function(existingScore) {
                // other code above but it isn't relevant for my question
                // this function adds the query I made to a table
                addToTransaction()
            })
        })
    })
Basically what that does, is to start a read stream and then pass it on to a module called oboe.
I then get JSON in return.
Then, it checks if there is a parent saved already in the database, and then checks if there is an existing comment with the same parent id.
I need to use both functions in order to get the data that I need (only getting the "best" comment)
This is roughly what addToTransaction looks like:
function addToTransaction(query) {
    // adds the query to a table, then checks if the length of that table is 1000 or more
    if (length >= 1000) {
        connection.beginTransaction(function(err) {
            if (err) throw new Error(err);

            for (var n = 0; n < transactions.length; n++) {
                let thisQuery = transactions[n];
                connection.query(thisQuery, function(err) {
                    if (err) throw new Error(err);
                })
            }

            connection.commit();
        })
    }
}
What addToTransaction does is take the queries I built and push them onto a list, check the length of that list, and once it's long enough start a new transaction, execute all those queries in a for loop, and then commit (to save).
Problem is, it's so slow that the callback function I made doesn't even get called.
My question (finally) is, is there any way I could improve the performance?
(If you're wondering why I am doing this, it is because I'm trying to create a chatbot)
I know I've posted a lot, but I tried to give you as much information as I could so you could have a better chance to help me. I appreciate any answers, and I will answer the questions you have.

Concurrent beforeSave calls allowing duplicates

In an effort to prevent certain objects from being created, I set a conditional in that type of object's beforeSave cloud function.
However, when two objects are created simultaneously, the conditional does not work accordingly.
Here is my code:
Parse.Cloud.beforeSave("Entry", function(request, response) {
var theContest = request.object.get("contest");
theContest.fetch().then(function(contest){
if (contest.get("isFilled") == true) {
response.error('This contest is full.');
} else {
response.success();
});
});
Basically, I don't want an Entry object to be created if a Contest is full. However, if there is 1 spot in the Contest remaining and two entries are saved simultaneously, they both get added.
I know it is an edge-case, but a legitimate concern.
Parse uses MongoDB, a NoSQL database designed to be very scalable, which therefore provides limited synchronisation features. What you really need here is mutual exclusion, which unfortunately is not supported on a Boolean field. However, Parse provides atomicity for counters and array fields, which you can use to enforce some control.
See http://blog.parse.com/announcements/new-atomic-operations-for-arrays/
and https://parse.com/docs/js/guide#objects-updating-objects
Solved this by using increment and then doing the check in the save callback (instead of fetching the object and checking a Boolean on it).
Looks something like this:
Parse.Cloud.beforeSave("Entry", function(request, response) {
var theContest = request.object.get("contest");
theContest.increment("entries");
theContest.save().then(function(contest) {
if (contest.get("entries") > contest.get("maxEntries")) {
response.error('The contest is full.');
} else {
response.success();
}
});
}

No update in the mongoose-cache after change in the collection

I have a MEAN stack based application, and recently I was trying to implement a caching mechanism for query results. I implemented mongoose-cache.
mongoose-cache configuration
require('mongoose-cache').install(mongoose, {max: 150, maxAge:1000*60*10});
I have two documents in a sample collection, say
{name:'dale', dep:2},
{name:'john', dep:4}
I run a query with mongoose-cache enabled and the maxAge is say 10 minutes.
sample.find().cache().exec(function (err, doc) {
    // returns 2 documents
});
Next, I inserted one more document, say
{name:'rasig', dep:4}, and executed the same query:
sample.find().cache().exec(function (err, doc) {
    // returns 2 documents instead of 3
});
I executed the same query twice within 10 minutes; although the collection had changed, I got the previous result. Is there any way to drop the cached result once there is a change in the collection? If not, can you suggest something else that achieves this?
I am the author of a new Mongoose module called Monc.
With Monc it is quite easy to clean up or purge the whole cache, or even the associated Query objects, simply by using:
sample.find().clean().cache().exec(function (err, doc) {});
