Get events from a transaction receipt in hardhat - javascript

I have an ethers contract that I've made a transaction with:
const randomSVG = new ethers.Contract(RandomSVG.address, RandomSVGContract.interface, signer)
let tx = await randomSVG.create()
I have an event with this transaction:
function create() public returns (bytes32 requestId) {
requestId = requestRandomness(keyHash, fee);
emit requestedRandomSVG(requestId);
}
However, I can't see the logs in the transaction receipt (https://docs.ethers.io/v5/api/providers/types/#providers-TransactionReceipt).
// This returns undefined
console.log(tx.logs)

When you create a transaction with Ethers.js, you get back a TransactionResponse, which may not have been included in the blockchain yet, so it cannot yet know which logs will be emitted.
Instead, you want to wait until the transaction is confirmed and you get back a TransactionReceipt. At that point the transaction has been included in a block, and you can see which events were emitted.
const randomSVG = new ethers.Contract(RandomSVG.address, RandomSVGContract.interface, signer)
const tx = await randomSVG.create()
// Wait until the tx has been confirmed (default is 1 confirmation)
const receipt = await tx.wait()
// Receipt should now contain the logs
console.log(receipt.logs)
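Because the contract was instantiated with its ABI, the receipt returned by tx.wait() also exposes the decoded events (in ethers v5, as receipt.events), which is usually more convenient than raw logs. A minimal sketch; the event's parameter name isn't shown in the question, so the argument is accessed positionally:
// each entry in receipt.events carries the event name and decoded arguments
const event = receipt.events.find(e => e.event === 'requestedRandomSVG')
console.log(event.args[0]) // the bytes32 requestId emitted by create()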

Related

How can I fix the MetaMask "Failed to fetch" error?

I call contract.methods.ownerOf 2500 times and then I get MetaMask - RPC Error: Failed to fetch. But if I call the method only about 200 times, there's no error. How can I fix this?
(ownerOf is from the Etherscan contract interface)
here's my code.
export async function list() {
const MAX_TOKEN_ID = 2500;
const web3 = new Web3(Web3.givenProvider || 'http://localhost:8080');
const contract = new web3.eth.Contract(ERC721ABI as AbiItem[], CONTRACT);
const list = new Set<string>();
for (let tokenId = 1; tokenId <= MAX_TOKEN_ID; tokenId++) {
const address = await contract.methods.ownerOf(tokenId).call();
list.add(address);
}
const result = Array.from(list).map((item) => ({ address: item }));
return result;
}
and my error
To make a large number of JSON-RPC requests, you need to host your own node or purchase node-as-a-service.
RPC requests are not free and someone must pay for them. If you are not paying yourself, then whoever is serving you has every right to cut you off.
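One practical mitigation is to throttle the requests, e.g. issuing them in modest batches with a pause in between rather than 2500 back-to-back calls. A rough sketch against the code above; the batch size and delay are arbitrary assumptions to be tuned to your provider's limits:
const BATCH_SIZE = 50; // assumed batch size
const BATCH_DELAY_MS = 500; // assumed pause between batches
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));
for (let tokenId = 1; tokenId <= MAX_TOKEN_ID; tokenId += BATCH_SIZE) {
  const batch = [];
  for (let id = tokenId; id < tokenId + BATCH_SIZE && id <= MAX_TOKEN_ID; id++) {
    batch.push(contract.methods.ownerOf(id).call());
  }
  // resolve the whole batch in parallel, then back off before the next one
  (await Promise.all(batch)).forEach((address) => list.add(address));
  await sleep(BATCH_DELAY_MS);
}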

Gnosis Transaction Failing On Polygon w/ "Transaction Underpriced" error

I've been trying to get a transaction to execute on Polygon but it's been failing with the following error:
reason: 'processing response error',
code: 'SERVER_ERROR',
body: '{"jsonrpc":"2.0","id":89,"error":{"code":-32000,"message":"transaction underpriced"}}',
This error only occurs on Polygon, and only when using the Gnosis SDK. I've tested it using the Gnosis UI and it executes successfully. I've also tested it using the Gnosis SDK on Rinkeby and that works as well.
Here is the code that fails:
const infuraProvider = new ethers.providers.JsonRpcProvider(RPC_PROVIDER);
const wallet = new ethers.Wallet(`0x${PRIVATE_KEY}`, infuraProvider);
const owner1 = wallet.connect(infuraProvider);
const ethAdapterOwner1 = new EthersAdapter({ ethers, signer: owner1 });
const safeSdkInstance = await Safe.create({
ethAdapter: ethAdapterOwner1,
safeAddress: GNOSIS_SAFE_ADDR,
});
const contract = new ethers.Contract(contractJson.address, contractJson.abi, owner1);
const tx = {
to: contract.address,
value: '0',
data: contract.interface.encodeFunctionData('mint', ['[wallet address here]', '[token id here]']),
};
const safeTransaction = await safeSdkInstance.createTransaction(tx);
const executeTxResponse = await safeSdkInstance.executeTransaction(safeTransaction);
Other things I've tried:
Adding a gasPrice - making that gas price incredibly large
Adding a gasLimit
Changing RPC_PROVIDER from an infura provider to a public one
Calling a different function - I tried calling my contract's burn function and it came back with the exact same error
I did notice that even when I provide a gasPrice, the error I get back says the submitted transaction had a gasPrice of null. But when logging the safeTransaction object, I can see that a gasPrice is generated by the createTransaction function. So something must be going wrong in the executeTransaction function.
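One thing worth trying, as a hedged suggestion rather than a confirmed fix: pass the gas overrides directly to executeTransaction instead of attaching them to the transaction object. Recent versions of the Safe Core SDK accept an options argument there (check your SDK version); the values below are placeholders:
// hypothetical overrides; the supported fields depend on the SDK version
const executeTxResponse = await safeSdkInstance.executeTransaction(safeTransaction, {
  gasLimit: 500000, // assumed limit
  gasPrice: ethers.utils.parseUnits('50', 'gwei').toString(), // assumed Polygon gas price
});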

How to modify my code to send custom SPL Token instead of regular SOL?

I'm building a website where people log in to their phantom wallet then by clicking on a button they will send a certain amount of our custom token to one wallet.
The code shown below works with SOL, and I would like to make it work with our custom SPL token. I have the mint address of the token, but I couldn't find any way to make it work. Could anyone help me?
async function transferSOL(toSend) {
// Detecting and storing the phantom wallet of the user (creator in this case)
var provider = await getProvider();
console.log("Public key of the emitter: ",provider.publicKey.toString());
// Establishing connection
var connection = new web3.Connection(
"https://api.mainnet-beta.solana.com/"
);
// I have hardcoded my secondary wallet address here. You can take this address either from user input or your DB or wherever
var receiverWallet = new web3.PublicKey("address of the wallet receiving the custom SPL Token");
var transaction = new web3.Transaction().add(
web3.SystemProgram.transfer({
fromPubkey: provider.publicKey,
toPubkey: receiverWallet,
lamports: (web3.LAMPORTS_PER_SOL)*toSend // Sending `toSend` SOL. Remember 1 lamport = 10^-9 SOL.
}),
);
// Setting the variables for the transaction
transaction.feePayer = provider.publicKey;
let blockhashObj = await connection.getRecentBlockhash();
transaction.recentBlockhash = blockhashObj.blockhash;
// Request creator to sign the transaction (allow the transaction)
let signed = await provider.signTransaction(transaction);
// The signature is generated
let signature = await connection.sendRawTransaction(signed.serialize());
// Confirm whether the transaction went through or not
console.log(await connection.confirmTransaction(signature));
// Print the signature here
console.log("Signature: ", signature);
}
I'd like to specify that people will use Phantom, and I can't have access to their private keys (which was needed in all the answers I found on the internet).
You're very close! You just need to replace the web3.SystemProgram.transfer instruction with an instruction to transfer SPL tokens, referencing the proper accounts. There's an example at the Solana Cookbook covering exactly this situation: https://solanacookbook.com/references/token.html#transfer-token
You can do this with the help of anchor and spl-token, which are used for dealing with custom tokens on Solana.
This is a custom transfer function. You'll need the mint address of the token, the wallet from which the tokens will be taken (which you get in the front end when the user connects their wallet, e.g. via solana-web3), the destination address, and the amount.
import * as splToken from "@solana/spl-token";
import { web3, Wallet } from "@project-serum/anchor";
async function transfer(tokenMintAddress: string, wallet: Wallet, to: string, connection: web3.Connection, amount: number) {
const mintPublicKey = new web3.PublicKey(tokenMintAddress);
const {TOKEN_PROGRAM_ID} = splToken
const fromTokenAccount = await splToken.getOrCreateAssociatedTokenAccount(
connection,
wallet.payer,
mintPublicKey,
wallet.publicKey
);
const destPublicKey = new web3.PublicKey(to);
// Get the derived address of the destination wallet which will hold the custom token
const associatedDestinationTokenAddr = await splToken.getOrCreateAssociatedTokenAccount(
connection,
wallet.payer,
mintPublicKey,
destPublicKey
);
// getOrCreateAssociatedTokenAccount above already creates the destination
// token account if it doesn't exist, so no separate existence check is needed
const instructions: web3.TransactionInstruction[] = [];
instructions.push(
splToken.createTransferInstruction(
fromTokenAccount.address,
associatedDestinationTokenAddr.address,
wallet.publicKey,
amount,
[],
TOKEN_PROGRAM_ID
)
);
const transaction = new web3.Transaction().add(...instructions);
transaction.feePayer = wallet.publicKey;
transaction.recentBlockhash = (await connection.getRecentBlockhash()).blockhash;
// Sign with the sender's wallet before serializing; an unsigned
// transaction can't be serialized for sendRawTransaction
const signedTx = await wallet.signTransaction(transaction);
const transactionSignature = await connection.sendRawTransaction(
signedTx.serialize(),
{ skipPreflight: true }
);
await connection.confirmTransaction(transactionSignature);
}
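For illustration, a hypothetical call site; the mint address, recipient, and amount are placeholders, and the amount is in the token's base units (so it depends on the mint's decimals):
const connection = new web3.Connection("https://api.mainnet-beta.solana.com/");
await transfer(
  "YourTokenMintAddressHere", // placeholder mint address
  wallet, // an anchor Wallet for the sender
  "RecipientWalletAddressHere", // placeholder recipient
  connection,
  1000000 // raw amount in base units
);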

How to end Google Speech-to-Text streamingRecognize gracefully and get back the pending text results?

I'd like to be able to end a Google speech-to-text stream (created with streamingRecognize), and get back the pending SR (speech recognition) results.
In a nutshell, the relevant Node.js code:
// create SR stream
const stream = speechClient.streamingRecognize(request);
// observe data event
const dataPromise = new Promise(resolve => stream.on('data', resolve));
// observe error event
const errorPromise = new Promise((resolve, reject) => stream.on('error', reject));
// observe finish event
const finishPromise = new Promise(resolve => stream.on('finish', resolve));
// send the audio
stream.write(audioChunk);
// for testing purposes only, give the SR stream 2 seconds to absorb the audio
await new Promise(resolve => setTimeout(resolve, 2000));
// end the SR stream gracefully, by observing the completion callback
const endPromise = util.promisify(callback => stream.end(callback))();
// a 5 seconds test timeout
const timeoutPromise = new Promise(resolve => setTimeout(resolve, 5000));
// finishPromise wins the race here
await Promise.race([
dataPromise, errorPromise, finishPromise, endPromise, timeoutPromise]);
// endPromise wins the race here
await Promise.race([
dataPromise, errorPromise, endPromise, timeoutPromise]);
// timeoutPromise wins the race here
await Promise.race([dataPromise, errorPromise, timeoutPromise]);
// I don't see any data or error events, dataPromise and errorPromise don't get settled
What I experience is that the SR stream ends successfully, but I don't get any data events or error events. Neither dataPromise nor errorPromise gets resolved or rejected.
How can I signal the end of my audio, close the SR stream and still get the pending SR results?
I need to stick with streamingRecognize API because the audio I'm streaming is real-time, even though it may stop suddenly.
To clarify, it works as long as I keep streaming the audio: I do receive the real-time SR results. However, when I send the final audio chunk and end the stream as above, I don't get the final results I'd otherwise expect.
To get the final results, I actually have to keep streaming silence for several more seconds, which may increase the STT bill. I feel there must be a better way to get them.
Updated: so it appears the only proper time to end a streamingRecognize stream is upon a data event where StreamingRecognitionResult.is_final is true. It also appears we're expected to keep streaming audio until that data event fires, to get any result at all, final or interim.
This looks like a bug to me, filing an issue.
Updated: it now seems to have been confirmed as a bug. Until it's fixed, I'm looking for a potential workaround.
Updated: for future references, here is the list of the current and previously tracked issues involving streamingRecognize.
I'd expect this to be a common problem for those who use streamingRecognize; I'm surprised it hasn't been reported before. I'm submitting it as a bug to issuetracker.google.com as well.
My bad: unsurprisingly, this turned out to be an obscure race condition in my code.
I've put together a self-contained sample that works as expected (gist). It helped me track down the issue. Hopefully it may help others, and my future self:
// A simple streamingRecognize workflow,
// tested with Node v15.0.1, by @noseratio
import fs from 'fs';
import path from "path";
import url from 'url';
import util from "util";
import timers from 'timers/promises';
import speech from '@google-cloud/speech';
export {}
// need a 16-bit, 16KHz raw PCM audio
const filename = path.join(path.dirname(url.fileURLToPath(import.meta.url)), "sample.raw");
const encoding = 'LINEAR16';
const sampleRateHertz = 16000;
const languageCode = 'en-US';
const request = {
config: {
encoding: encoding,
sampleRateHertz: sampleRateHertz,
languageCode: languageCode,
},
interimResults: false // If you want interim results, set this to true
};
// init SpeechClient
const client = new speech.v1p1beta1.SpeechClient();
await client.initialize();
// Stream the audio to the Google Cloud Speech API
const stream = client.streamingRecognize(request);
// log all data
stream.on('data', data => {
const result = data.results[0];
console.log(`SR results, final: ${result.isFinal}, text: ${result.alternatives[0].transcript}`);
});
// log all errors
stream.on('error', error => {
console.warn(`SR error: ${error.message}`);
});
// observe data event
const dataPromise = new Promise(resolve => stream.once('data', resolve));
// observe error event
const errorPromise = new Promise((resolve, reject) => stream.once('error', reject));
// observe finish event
const finishPromise = new Promise(resolve => stream.once('finish', resolve));
// observe close event
const closePromise = new Promise(resolve => stream.once('close', resolve));
// we could just pipe it:
// fs.createReadStream(filename).pipe(stream);
// but we want to simulate the web socket data
// read RAW audio as Buffer
const data = await fs.promises.readFile(filename, null);
// simulate multiple audio chunks
console.log("Writting...");
const chunkSize = 4096;
for (let i = 0; i < data.length; i += chunkSize) {
stream.write(data.slice(i, i + chunkSize));
await timers.setTimeout(50);
}
console.log("Done writing.");
console.log("Before ending...");
await util.promisify(c => stream.end(c))();
console.log("After ending.");
// race for events
await Promise.race([
errorPromise.catch(() => console.log("error")),
dataPromise.then(() => console.log("data")),
closePromise.then(() => console.log("close")),
finishPromise.then(() => console.log("finish"))
]);
console.log("Destroying...");
stream.destroy();
console.log("Final timeout...");
await timers.setTimeout(1000);
console.log("Exiting.");
The output:
Writing...
Done writing.
Before ending...
SR results, final: true, text: this is a test I'm testing voice recognition This Is the End
After ending.
data
finish
Destroying...
Final timeout...
close
Exiting.
To test it, a 16-bit/16 kHz raw PCM audio file is required. An arbitrary WAV file won't work as-is because it contains a header with metadata.
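If you only have a WAV recording, it can be converted to the expected raw format, for instance with ffmpeg (the file names here are placeholders):
ffmpeg -i input.wav -f s16le -acodec pcm_s16le -ac 1 -ar 16000 sample.raw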
This: "I'm looking for a potential workaround." - have you considered extending from SpeechClient as a base class? I don't have credential to test, but you can extend from SpeechClient with your own class and then call the internal close() method as needed. The close() method shuts down the SpeechClient and resolves the outstanding Promise.
Alternatively you could also Proxy the SpeechClient() and intercept/respond as needed. But since your intent is to shut it down, the below option might be your workaround.
const speech = require('@google-cloud/speech');
class ClientProxy extends speech.SpeechClient {
constructor() {
super();
}
myCustomFunction() {
this.close();
}
}
const clientProxy = new ClientProxy();
try {
clientProxy.myCustomFunction();
} catch (err) {
console.log("myCustomFunction generated error: ", err);
}
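And for the Proxy alternative mentioned above, a minimal sketch that intercepts property access on a SpeechClient instance; what you do inside the trap is up to you:
const speech = require('@google-cloud/speech');
// wrap the client so specific member accesses can be observed or intercepted
const client = new Proxy(new speech.SpeechClient(), {
  get(target, prop) {
    if (prop === 'close') {
      console.log('close() was accessed'); // react to shutdown here
    }
    const value = target[prop];
    // bind methods to the underlying client so its internal state stays intact
    return typeof value === 'function' ? value.bind(target) : value;
  }
});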
Since it's a bug, I don't know if this is suitable for you, but I have used this.recognizeStream.end(); several times in different situations and it worked. However, my code was a bit different...
This thread may be something for you:
https://groups.google.com/g/cloud-speech-discuss/c/lPaTGmEcZQk/m/Kl4fbHK2BQAJ

Delay messages in RabbitMQ on Node.js

I'm trying to add delayed-message functionality with RabbitMQ. Specifically, I need to receive a message after 2 weeks. As far as I know, we do not need any plugin for this. Also, when this message is delivered, how should I schedule a new x-delayed message so it fires again 2 weeks later? Where should I add this x-delay message?
config
"messageQueue": {
"connectionString": "amqp://guest:guest#localhost:5672?heartbeat=5",
"queueName": "history",
"exchange": {
"type": "headers",
"prefix": "history."
},
"reconnectTimeout": 5000
},
service:
import amqplib from 'amqplib'
import config from 'config'
import logger from './logger'
const {reconnectTimeout, connectionString, exchange: {prefix, type: exchangeType}, queueName} = config.messageQueue
const onConsume = (expectedMessages, channel, onMessage) => async message => {
const {fields: {exchange}, properties: {correlationId, replyTo}, content} = message
logger.silly(`consumed message from ${exchange}`)
const messageTypeName = exchange.substring(exchange.startsWith(prefix) ? prefix.length : 0)
const messageType = expectedMessages[messageTypeName]
if (!messageType) {
logger.warn(`Unexpected message of type ${messageTypeName} received. The service only accepts messages of types `, Object.keys(expectedMessages))
return
}
const deserializedMessage = messageType.decode(content)
const object = deserializedMessage.toJSON()
const result = await onMessage(messageTypeName, object)
if (correlationId && replyTo) {
const {type, response} = result
const encoded = type.encode(response).finish()
channel.publish('', replyTo, encoded, {correlationId})
}
}
const startService = async (expectedMessages, onMessage) => {
const restoreOnFailure = e => {
logger.warn('connection with message bus lost due to error', e)
logger.info(`reconnecting in ${reconnectTimeout} milliseconds`)
setTimeout(() => startService(expectedMessages, onMessage), reconnectTimeout)
}
const exchanges = Object.keys(expectedMessages).map(m => `${prefix}${m}`)
try {
const connection = await amqplib.connect(connectionString)
connection.on('error', restoreOnFailure)
const channel = await connection.createChannel()
const handleConsume = onConsume(expectedMessages, channel, onMessage)
const queue = await channel.assertQueue(queueName)
exchanges.forEach(exchange => {
channel.assertExchange(exchange, exchangeType, {durable: true})
channel.bindQueue(queue.queue, exchange, '')
})
logger.debug(`start listening messages from ${exchanges.join(', ')}`)
channel.consume(queue.queue, handleConsume, {noAck: true})
}
catch (e) {
logger.warn('error while subscribing for messages', e)
restoreOnFailure(e)
}
}
export default startService
RabbitMQ has a plug-in for scheduling messages. You can use it, subject to an important design caveat which I explain below.
Use Steps
You must first install it:
rabbitmq-plugins enable rabbitmq_delayed_message_exchange
Then, you have to set up a delayed exchange (this example uses the RabbitMQ Java client):
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-delayed-type", "direct");
channel.exchangeDeclare("my-exchange", "x-delayed-message", true, false, args);
Finally, you can set the x-delay parameter (where delay is in milliseconds).
byte[] messageBodyBytes = "delayed payload".getBytes();
AMQP.BasicProperties.Builder props = new AMQP.BasicProperties.Builder();
headers = new HashMap<String, Object>();
headers.put("x-delay", 5000);
props.headers(headers);
channel.basicPublish("my-exchange", "", props.build(), messageBodyBytes);
Two weeks is 14 * 24 * 60 * 60 * 1000 = 1,209,600,000 milliseconds.
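Since the question is about Node.js, here is a hedged sketch of the same steps using amqplib, assuming the delayed-message plugin is enabled on the broker; the exchange name is a placeholder:
import amqplib from 'amqplib'
const connection = await amqplib.connect('amqp://guest:guest@localhost:5672')
const channel = await connection.createChannel()
// the plugin reads the underlying routing behavior from x-delayed-type
await channel.assertExchange('my-delayed-exchange', 'x-delayed-message', {
  durable: true,
  arguments: { 'x-delayed-type': 'direct' }
})
const queue = await channel.assertQueue('history')
await channel.bindQueue(queue.queue, 'my-delayed-exchange', '')
// publish with an x-delay header of two weeks, in milliseconds
const TWO_WEEKS_MS = 14 * 24 * 60 * 60 * 1000 // 1,209,600,000
channel.publish('my-delayed-exchange', '', Buffer.from('delayed payload'), {
  headers: { 'x-delay': TWO_WEEKS_MS }
})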
Important Caveat
As I explained in this answer, this is a really bad thing to ask the message broker to do.
It's important to keep in mind, when dealing with message queues, they perform a very specific function in a system: to hold messages while the processor(s) are busy processing earlier messages. It is expected that a properly-functioning message queue will deliver messages as soon as reasonable. Basically, the fundamental expectation is that as soon as a message reaches the head of the queue, the next pull on the queue will yield the message -- no delay.
Delay becomes a result of how a system with a queue processes messages. In fact, Little's Law offers some interesting insights into this. If you're going to stick an arbitrary delay in there, you really have no need of a message queue to begin with - all your work is scheduled up front.
So, in a system where a delay is necessary (for example, to join/wait for a parallel operation to complete), you should be looking at other methods. Typically a queryable database would make sense in this particular instance. If you find yourself keeping messages in a queue for a pre-set period of time, you're actually using the message queue as a database - a function it was not designed to provide. Not only is this risky, but it also has a high likelihood of hurting the performance of your message broker.
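To make the alternative concrete, a minimal sketch of the database-backed approach: persist a due date, poll periodically, and only publish to the queue once a job is due. The store shape, poll interval, and exchange name are all assumptions:
// hypothetical scheduled-message store: records of { id, payload, dueAt, sent }
const POLL_INTERVAL_MS = 60 * 1000 // assumed polling period
async function publishDueMessages(db, channel) {
  // fetch jobs whose due date has passed and that haven't been sent yet
  const due = await db.scheduledMessages.findDue(new Date())
  for (const job of due) {
    channel.publish('history.exchange', '', Buffer.from(job.payload)) // placeholder exchange
    await db.scheduledMessages.markSent(job.id)
  }
}
setInterval(() => publishDueMessages(db, channel).catch(console.error), POLL_INTERVAL_MS)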
