How to set MAX_ATTEMPTS for tasks in a Google Cloud Tasks queue in code?
When I create a new task, I want to set how many attempts the task should get. Can I do it from the code below?
I have a Google Cloud Tasks queue set up like this:
const {CloudTasksClient} = require('@google-cloud/tasks');
const client = new CloudTasksClient();

async function createHttpTask() {
  const project = 'my-project-id';
  const queue = 'my-queue';
  const location = 'us-central1';
  const url = 'https://example.com/taskhandler';
  const payload = 'Hello, World!';
  const inSeconds = 180;

  const parent = client.queuePath(project, location, queue);
  const task = {
    httpRequest: {
      httpMethod: 'POST',
      url,
    },
  };
  if (payload) {
    task.httpRequest.body = Buffer.from(payload).toString('base64');
  }
  if (inSeconds) {
    // The time when the task is scheduled to be attempted.
    task.scheduleTime = {
      seconds: inSeconds + Date.now() / 1000,
    };
  }
  console.log('Sending task:');
  console.log(task);
  const request = {parent: parent, task: task};
  await client.createTask(request);
}

createHttpTask();
From the Google Cloud documentation I see that I can do this from the console for the whole queue - https://cloud.google.com/tasks/docs/configuring-queues#retry - but I want to set it dynamically per task.
Thanks for any help!
Unfortunately, the answer is no. You cannot set retry parameters on individual tasks. I make this claim based on this document:
REST Resource: projects.locations.queues
which is the REST API for managing queues. There, under the documentation for retryConfig, we read:
For tasks created using Cloud Tasks: the queue-level retry settings apply to all tasks in the queue that were created using Cloud Tasks. Retry settings cannot be set on individual tasks.
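If queue-level settings are enough, though, they can be changed from code rather than only from the console. Below is a minimal sketch using updateQueue on the same Node client as in the question (the retryConfig field names follow the REST resource above; the project, location, and queue values are the question's placeholders):
// Sketch: update the queue's retry settings programmatically.
// Every task in the queue inherits these settings.
async function setQueueRetries() {
  const queuePath = client.queuePath('my-project-id', 'us-central1', 'my-queue');
  const [updatedQueue] = await client.updateQueue({
    queue: {
      name: queuePath,
      retryConfig: {
        maxAttempts: 5, // total attempts, including the first
      },
    },
    updateMask: {paths: ['retry_config.max_attempts']},
  });
  console.log('Updated retry config:', updatedQueue.retryConfig);
}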
The goal is to call a function from my main script that connects to a database, reads a document from it, stores pieces of that document in a new object, and returns that object to my main script. The problem is that I cannot get it all to work together. If I try one thing, I get the results but my program locks up. If I try something else, I get undefined results.
Long story short: how do I open a database and retrieve something from it for another script?
The program is a quiz site, and I want to return the quiz name and the questions.
const myDb = require('./app.js');
var myData = myDb.fun((myData) => {
  console.log(myData.quizName);
});
Here is the script that tries to open the database and find the data:
const { MongoClient } = require("mongodb");

// Connection URI goes here; mine has my name hard-coded into it at the
// moment, so I removed it for privacy.
const uri = "";
const client = new MongoClient(uri);

const fun = async (cback) => {
  try {
    await client.connect();
    const database = client.db('Quiz-Capstone');
    const quizzes = database.collection('Quiz');
    const query = { quizName: "CIS01" };
    const options = {
      sort: {},
      projection: {}
    };
    const quiz = await quizzes.findOne(query, options);
    var quizObject = {
      quizName: quiz.quizName,
      quizQuestions: quiz.quizQuestions
    };
    //console.log(quizObject);
  } finally {
    await client.close();
    cback(quizObject);
  }
};

// Note: this call passes no callback, so cback is undefined inside fun.
fun().catch(console.dir);

module.exports = {
  fun: fun
};
UPDATE: Still stuck. I have read several different threads here about asynchronous calls and callbacks, but I cannot get my function in one file to return a value to the caller in another file.
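For reference, a minimal sketch of the usual pattern (not the poster's final code): return the value from the async function and let the caller await the returned promise, instead of passing a callback.
// app.js - a sketch; the URI placeholder must be filled in as above.
const { MongoClient } = require("mongodb");

const uri = ""; // connection URI goes here
const client = new MongoClient(uri);

async function getQuiz() {
  try {
    await client.connect();
    const quiz = await client
      .db("Quiz-Capstone")
      .collection("Quiz")
      .findOne({ quizName: "CIS01" });
    // Return the value instead of passing it to a callback.
    return { quizName: quiz.quizName, quizQuestions: quiz.quizQuestions };
  } finally {
    await client.close();
  }
}

module.exports = { getQuiz };

// main.js - the caller must wait for the promise:
// const myDb = require('./app.js');
// myDb.getQuiz().then((quiz) => console.log(quiz.quizName));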
As described in the Firebase docs, it is required to "resolve functions that perform asynchronous processing (also known as "background functions") by returning a JavaScript promise" (https://firebase.google.com/docs/functions/terminate-functions?hl=en). Otherwise it might happen that "the Cloud Functions instance running your function does not shut down before your function successfully reaches its terminating condition or state" (same source).
In this case I am trying to adapt demo code for PDF generation written by Volodymyr Golosay on https://medium.com/firebase-developers/how-to-generate-and-store-a-pdf-with-firebase-7faebb74ccbf.
The demo uses 'https.onRequest' as the trigger and fulfills the termination requirement with 'response.send(result)'. In my adaptation I need to use a 'document.onCreate' trigger and therefore need a different way to terminate.
In other functions I can fulfill this requirement by using async/await, but here I am struggling to get a stable function with good performance. The function shown below logs "finished with status: 'ok'" after 675 ms, but around 2 minutes later it logs again that the PDF file is now saved (see the screenshot of the logger).
What should I do to terminate the function properly?
// adapting the demo code by Volodymyr Golosay published on https://medium.com/firebase-developers/how-to-generate-and-store-a-pdf-with-firebase-7faebb74ccbf
// library installed -> npm install pdfmake
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();
const db = admin.firestore();

const Printer = require('pdfmake');
const fonts = require('pdfmake/build/vfs_fonts.js');
const fontDescriptors = {
  Roboto: {
    normal: Buffer.from(fonts.pdfMake.vfs['Roboto-Regular.ttf'], 'base64'),
    bold: Buffer.from(fonts.pdfMake.vfs['Roboto-Medium.ttf'], 'base64'),
    italics: Buffer.from(fonts.pdfMake.vfs['Roboto-Italic.ttf'], 'base64'),
    bolditalics: Buffer.from(fonts.pdfMake.vfs['Roboto-Italic.ttf'], 'base64'),
  }
};

exports.generateDemoPdf = functions
  // trigger by 'document.onCreate', while demo uses 'https.onRequest'
  .firestore
  .document('collection/{docId}')
  .onCreate(async (snap, context) => {
    const printer = new Printer(fontDescriptors);
    const chunks = [];

    // define the content of the pdf-file
    const docDefinition = {
      content: [{
        text: 'PDF text is here.',
        fontSize: 19
      }]
    };

    const pdfDoc = printer.createPdfKitDocument(docDefinition);
    pdfDoc.on('data', (chunk) => {
      chunks.push(chunk);
    });
    pdfDoc.on('end', async () => {
      const result = Buffer.concat(chunks);
      // Upload generated file to the Cloud Storage
      const docId = "123456789";
      const bucket = admin.storage().bucket();
      const fileRef = bucket.file(`${docId}.pdf`, {
        metadata: {
          contentType: 'application/pdf'
        }
      });
      await fileRef.save(result);
      console.log('result is saved');
      // NEEDS PROPER TERMINATION HERE?? NEEDS TO RETURN A PROMISE?? FIREBASE DOCS: https://firebase.google.com/docs/functions/terminate-functions?hl=en
      // the demo with 'https.onRequest' uses the following line to terminate the function properly:
      // response.send(result);
    });
    pdfDoc.on('error', (err) => {
      return functions.logger.log('An error occurred!');
    });
    pdfDoc.end();
  });
I think everything is fine in your code. It seems it simply takes 1m 34s to render the file and save it to storage.
The Cloud Functions instance is terminated automatically once all micro- and macrotasks are done, right after your last await.
To check how long it takes and whether it terminates right after saving, you can run the Firebase emulator on your local machine. You will see the logs in the terminal and can watch the storage bucket at the same time.
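For example, assuming the Firebase CLI is installed and emulators are configured for the project:
firebase emulators:start
Then create a document under 'collection/{docId}' in the emulated Firestore and compare the timestamp of the "finished" log with that of the 'result is saved' log.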
I suspect you did terminate properly - that's the nature of promises. Your function "terminated" with a 200 status, returning a PROMISE for the result of the PDF save. When the PDF save actually completes later, the result is logged and the promise is resolved. This behavior is WHY you return the promise.
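For completeness, one way to make the handler wait for the upload explicitly (a minimal sketch, assuming the same pdfmake and bucket setup as the question) is to wrap the stream events in a promise and return it from onCreate:
exports.generateDemoPdf = functions.firestore
  .document('collection/{docId}')
  .onCreate((snap, context) => {
    const printer = new Printer(fontDescriptors);
    const pdfDoc = printer.createPdfKitDocument({
      content: [{ text: 'PDF text is here.', fontSize: 19 }],
    });
    // Returning this promise keeps the instance alive until the upload
    // has finished (or failed).
    return new Promise((resolve, reject) => {
      const chunks = [];
      pdfDoc.on('data', (chunk) => chunks.push(chunk));
      pdfDoc.on('error', reject);
      pdfDoc.on('end', () => {
        const result = Buffer.concat(chunks);
        admin.storage().bucket().file('123456789.pdf')
          .save(result, { metadata: { contentType: 'application/pdf' } })
          .then(resolve, reject);
      });
      pdfDoc.end();
    });
  });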
I am fairly new to JS/WinAppDriver.
The application I am trying to test is a Windows-based "ClickOnce" application from .NET, so I have to go to a website in IE and click "Install". This opens the application.
Once the application is running, I have no way to connect to it to perform my UI interactions using JavaScript.
Using C#, I was looping through the processes looking for a process name, getting the window handle, converting it to hex, adding that as a capability, and creating the driver - it worked. Sample code below:
public Setup_TearDown()
{
    string TopLevelWindowHandleHex = null;
    IntPtr TopLevelWindowHandle = new IntPtr();
    foreach (Process clsProcess in Process.GetProcesses())
    {
        if (clsProcess.ProcessName.StartsWith($"SomeName-{exec_pob}-{exec_env}"))
        {
            TopLevelWindowHandle = clsProcess.Handle;
            TopLevelWindowHandleHex = clsProcess.MainWindowHandle.ToString("x");
        }
    }

    var appOptions = new AppiumOptions();
    appOptions.AddAdditionalCapability("appTopLevelWindow", TopLevelWindowHandleHex);
    appOptions.AddAdditionalCapability("ms:experimental-webdriver", true);
    appOptions.AddAdditionalCapability("ms:waitForAppLaunch", "25");
    AppDriver = new WindowsDriver<WindowsElement>(new Uri(WinAppDriverUrl), appOptions);
    AppDriver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(60);
}
How do I do this in JavaScript? I can't seem to find any code examples.
Based on an example from this repo, I tried the following in JS to find the process to latch on to, but without luck.
import { By2 } from "selenium-appium";
// this.appWindow = this.driver.element(By2.nativeAccessibilityId('xxx'));
// this.appWindow = this.driver.element(By2.nativeXpath("//Window[starts-with(@Name,\"xxxx\")]"));
// this.appWindow = this.driver.elementByName('WindowsForms10.Window.8.app.0.13965fa_r11_ad1');
// this.appWindow = this.driver.elementByName('xxxxxxx');
async connectAppDriver() {
  await this.waitForAppWindow();
  const appWindow = await this.appWindow.getAttribute("NativeWindowHandle");
  const hex = Number(appWindow).toString(16);
  const currentAppCapabilities = {
    "appTopLevelWindow": hex,
    "platformName": "Windows",
    "deviceName": "WindowsPC",
    "newCommandTimeout": "120000"
  };
  const driverBuilder = new DriverBuilder();
  await driverBuilder.stopDriver();
  this.driver = await driverBuilder.createDriver(currentAppCapabilities);
  return this.driver;
}
I keep getting this error from WinAppDriver:
{"status":13,"value":{"error":"unknown error","message":"An unknown error occurred in the remote end while processing the command."}}
I've also opened this ticket here.
It seems like such an easy thing to do, but I couldn't figure this one out.
Are there any Node packages I could use to get the top-level window handle easily?
I am open to suggestions on how to tackle this issue using JavaScript with WinAppDriver.
Hope this helps someone out there.
I got around this by creating an exe in C# that returns the hex window handle of the app to connect to, based on the process name. It looks something like this:
public string GetTopLevelWindowHandleHex()
{
    string TopLevelWindowHandleHex = null;
    IntPtr TopLevelWindowHandle = new IntPtr();
    foreach (Process clsProcess in Process.GetProcesses())
    {
        if (clsProcess.ProcessName.StartsWith(_processName))
        {
            TopLevelWindowHandle = clsProcess.Handle;
            TopLevelWindowHandleHex = clsProcess.MainWindowHandle.ToString("x");
        }
    }
    if (!String.IsNullOrEmpty(TopLevelWindowHandleHex))
        return TopLevelWindowHandleHex;
    else
        throw new Exception($"Process: {_processName} cannot be found");
}
I called it from JS to get the hex of the top-level window handle, like this:
async getHex() {
  // Assumes: const path = require('path');
  //          const { execFileSync } = require('child_process');
  const pathToExe = path.join(process.cwd(), "features\\support\\ProcessUtility\\GetWindowHandleHexByProcessName.exe");
  const pathToDir = path.join(process.cwd(), "features\\support\\ProcessUtility");
  // execFileSync is synchronous and returns stdout directly,
  // so no callback is needed here.
  const result = execFileSync(pathToExe, [this.processName],
    { cwd: pathToDir, encoding: 'utf-8' });
  return result.toString().trim();
}
I used the hex to connect to the app like this:
async connectAppDriver(hex) {
  console.log(`Hex received to connect to app using hex: ${hex}`);
  const currentAppCapabilities = {
    "browserName": '',
    "appTopLevelWindow": hex.trim(),
    "platformName": "Windows",
    "deviceName": "WindowsPC",
    "newCommandTimeout": "120000"
  };
  // Builder comes from 'selenium-webdriver'; driver is the selenium-appium
  // driver instance used by the rest of the suite.
  const appDriver = await new Builder()
    .usingServer("http://localhost:4723/wd/hub")
    .withCapabilities(currentAppCapabilities)
    .build();
  await driver.startWithWebDriver(appDriver);
  return driver;
}
Solution:
In WebDriverJS (used by Selenium/Appium), use getDomAttribute instead of getAttribute. Took several hours to find :(
This fails, because WinAppDriver does not implement the endpoint that getAttribute uses:
element.getAttribute("NativeWindowHandle")
POST: /session/270698D2-D93B-4E05-9FC5-3E5FBDA60ECA/execute/sync
Command not implemented: POST: /session/270698D2-D93B-4E05-9FC5-3E5FBDA60ECA/execute/sync
HTTP/1.1 501 Not Implemented
This works:
let topLevelWindowHandle = await element.getDomAttribute('NativeWindowHandle')
topLevelWindowHandle = parseInt(topLevelWindowHandle, 10).toString(16)
GET /session/DE4C46E1-CC84-4F5D-88D2-35F56317E34D/element/42.3476754/attribute/NativeWindowHandle HTTP/1.1
HTTP/1.1 200 OK
{"sessionId":"DE4C46E1-CC84-4F5D-88D2-35F56317E34D","status":0,"value":"3476754"}
and topLevelWindowHandle now holds the hex value :)
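Hypothetical glue code, reusing the connectAppDriver() helper from the earlier answer:
// Read the native handle, convert it to hex, and connect.
const raw = await element.getDomAttribute('NativeWindowHandle');
const appDriver = await connectAppDriver(parseInt(raw, 10).toString(16));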
I would like to use the Google IoT Core API from a Firebase Function.
It all works, but it is very slow. I think this is due to the authentication process that needs to be carried out on every call. Is there a way to speed this up?
Right now I have this:
function getClient(cb) {
  const API_VERSION = 'v1';
  const DISCOVERY_API = 'https://cloudiot.googleapis.com/$discovery/rest';
  const jwtAccess = new google.auth.JWT();
  jwtAccess.fromJSON(serviceAccount);
  // Note that if you require additional scopes, they should be specified as a
  // string, separated by spaces.
  jwtAccess.scopes = 'https://www.googleapis.com/auth/cloud-platform';
  // Set the default authentication to the above JWT access.
  google.options({ auth: jwtAccess });
  const discoveryUrl = `${DISCOVERY_API}?version=${API_VERSION}`;
  google.discoverAPI(discoveryUrl, {}).then(end_point => {
    cb(end_point);
  });
}
And this allows me to do:
export function sendCommandToDevice(deviceId, subfolder, mqtt_data) {
  const cloudRegion = 'europe-west1';
  const projectId = 'my-project-id';
  const registryId = 'my-registry-id';
  getClient(client => {
    const parentName = `projects/${projectId}/locations/${cloudRegion}`;
    const registryName = `${parentName}/registries/${registryId}`;
    const binaryData = Buffer.from(mqtt_data).toString('base64');
    const request = {
      name: `${registryName}/devices/${deviceId}`,
      binaryData: binaryData,
      subfolder: subfolder
    };
    client.projects.locations.registries.devices.sendCommandToDevice(request,
      (err, data) => {
        if (err) {
          console.log('Could not update config:', deviceId);
        }
      });
  });
}
The way that I've found to speed it up is to avoid repeating the authentication on every call. I've solved it by doing this:
const google = new GoogleApis();
const API_VERSION = 'v1';
const DISCOVERY_API = 'https://cloudiot.googleapis.com/$discovery/rest';
const jwtAccess = new google.auth.JWT();
jwtAccess.fromJSON(serviceAccount);
// Note that if you require additional scopes, they should be specified as a
// string, separated by spaces.
jwtAccess.scopes = 'https://www.googleapis.com/auth/cloud-platform';
// Set the default authentication to the above JWT access.
google.options({ auth: jwtAccess });
const discoveryUrl = `${DISCOVERY_API}?version=${API_VERSION}`;

var googleClient;
google.discoverAPI(discoveryUrl, {}).then(client => {
  //cb(end_point);
  googleClient = client;
});

// Returns an authorized API client by discovering the Cloud IoT Core API with
// the provided API key.
function getClient(cb) {
  cb(googleClient);
}
But what happens when the client expires? Is there a good solution for using the Google APIs from Firebase Functions?
The problem may be the discovery pieces. There's a direct IoT Core admin REST API, so you don't have to use discovery... I think. I haven't worked with Firebase Functions, but they're roughly equivalent to Google Cloud Functions, which may mean this works there as well. The code we ran in a live demo to do what you're doing is here, if you want to tinker around and see if you can get it running in a Firebase Function.
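For example, a rough sketch of skipping discovery entirely and calling the REST endpoint directly, reusing the jwtAccess client from the question to sign the request (the endpoint shape follows the Cloud IoT Core REST docs; the project, region, and registry values are the question's placeholders):
// Call sendCommandToDevice via plain REST, with no discovery step. jwtAccess
// is the authenticated JWT client created above; it refreshes tokens itself.
async function sendCommandDirect(deviceId, subfolder, mqtt_data) {
  const name = `projects/my-project-id/locations/europe-west1` +
    `/registries/my-registry-id/devices/${deviceId}`;
  const res = await jwtAccess.request({
    url: `https://cloudiot.googleapis.com/v1/${name}:sendCommandToDevice`,
    method: 'POST',
    data: {
      binaryData: Buffer.from(mqtt_data).toString('base64'),
      subfolder: subfolder,
    },
  });
  return res.data;
}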
I'm trying to add some functionality for RabbitMQ delayed messages. I need a message to be delivered after 2 weeks. As far as I know, we do not need any plugin for this. Also, when this message is consumed, how should I schedule a new x-delay message so that it fires again 2 weeks later? And where should I publish this x-delay message?
config:
"messageQueue": {
"connectionString": "amqp://guest:guest#localhost:5672?heartbeat=5",
"queueName": "history",
"exchange": {
"type": "headers",
"prefix": "history."
},
"reconnectTimeout": 5000
},
service:
import amqplib from 'amqplib'
import config from 'config'
import logger from './logger'

const {reconnectTimeout, connectionString, exchange: {prefix, type: exchangeType}, queueName} = config.messageQueue

const onConsume = (expectedMessages, channel, onMessage) => async message => {
  const {fields: {exchange}, properties: {correlationId, replyTo}, content} = message
  logger.silly(`consumed message from ${exchange}`)
  const messageTypeName = exchange.substring(exchange.startsWith(prefix) ? prefix.length : 0)
  const messageType = expectedMessages[messageTypeName]
  if (!messageType) {
    logger.warn(`Unexpected message of type ${messageTypeName} received. The service only accepts messages of types `, Object.keys(expectedMessages))
    return
  }
  const deserializedMessage = messageType.decode(content)
  const object = deserializedMessage.toJSON()
  const result = await onMessage(messageTypeName, object)
  if (correlationId && replyTo) {
    const {type, response} = result
    const encoded = type.encode(response).finish()
    channel.publish('', replyTo, encoded, {correlationId})
  }
}

const startService = async (expectedMessages, onMessage) => {
  const restoreOnFailure = e => {
    logger.warn('connection with message bus lost due to error', e)
    logger.info(`reconnecting in ${reconnectTimeout} milliseconds`)
    setTimeout(() => startService(expectedMessages, onMessage), reconnectTimeout)
  }
  const exchanges = Object.keys(expectedMessages).map(m => `${prefix}${m}`)
  try {
    const connection = await amqplib.connect(connectionString)
    connection.on('error', restoreOnFailure)
    const channel = await connection.createChannel()
    const handleConsume = onConsume(expectedMessages, channel, onMessage)
    const queue = await channel.assertQueue(queueName)
    exchanges.forEach(exchange => {
      channel.assertExchange(exchange, exchangeType, {durable: true})
      channel.bindQueue(queue.queue, exchange, '')
    })
    logger.debug(`start listening for messages from ${exchanges.join(', ')}`)
    channel.consume(queue.queue, handleConsume, {noAck: true})
  }
  catch (e) {
    logger.warn('error while subscribing for messages', e)
    restoreOnFailure(e)
  }
}

export default startService
RabbitMQ has a plug-in for scheduling messages. You can use it, subject to an important design caveat which I explain below.
Usage Steps
You must first install it:
rabbitmq-plugins enable rabbitmq_delayed_message_exchange
Then, you have to set up a delayed exchange:
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-delayed-type", "direct");
channel.exchangeDeclare("my-exchange", "x-delayed-message", true, false, args);
Finally, you can set the x-delay parameter (where delay is in milliseconds).
byte[] messageBodyBytes = "delayed payload".getBytes();
AMQP.BasicProperties.Builder props = new AMQP.BasicProperties.Builder();
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("x-delay", 5000);
props.headers(headers);
channel.basicPublish("my-exchange", "", props.build(), messageBodyBytes);
Two weeks is equal to (14*24*60*60*1000 = 1,209,600,000) milliseconds.
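Since the service in the question uses amqplib, here is a rough JavaScript equivalent of the Java snippet above (an untested sketch; the exchange name and payload are placeholders):
const amqplib = require('amqplib');

async function publishDelayed() {
  const connection = await amqplib.connect('amqp://guest:guest@localhost:5672');
  const channel = await connection.createChannel();
  // Declare the delayed exchange provided by the plug-in.
  await channel.assertExchange('my-exchange', 'x-delayed-message', {
    durable: true,
    arguments: { 'x-delayed-type': 'direct' },
  });
  // Two weeks, in milliseconds.
  const twoWeeks = 14 * 24 * 60 * 60 * 1000; // 1,209,600,000
  channel.publish('my-exchange', '', Buffer.from('delayed payload'), {
    headers: { 'x-delay': twoWeeks },
  });
  await connection.close();
}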
Important Caveat
As I explained in this answer, this is a really bad thing to ask the message broker to do.
It's important to keep in mind that message queues perform a very specific function in a system: they hold messages while the processor(s) are busy processing earlier messages. It is expected that a properly-functioning message queue will deliver messages as soon as reasonable. Basically, the fundamental expectation is that as soon as a message reaches the head of the queue, the next pull on the queue will yield the message -- no delay.
Delay becomes a result of how a system with a queue processes messages. In fact, Little's Law offers some interesting insights into this. If you're going to stick an arbitrary delay in there, you really have no need of a message queue to begin with - all your work is scheduled up front.
So, in a system where a delay is necessary (for example, to join/wait for a parallel operation to complete), you should be looking at other methods. Typically a queryable database would make sense in this particular instance. If you find yourself keeping messages in a queue for a pre-set period of time, you're actually using the message queue as a database - a function it was not designed to provide. Not only is this risky, but it also has a high likelihood of hurting the performance of your message broker.