Inserting ACL after creating Google Room Resource frequently throws error - javascript

I have a Google Cloud Function which first creates a Google Room resource using the resources.calendars.insert method from the Google Admin SDK,
and right after that tries to insert an ACL using the Acl: insert method from the Google Calendar API.
The code looks similar to this:
const AdminService = google.admin({version: 'directory_v1'});
try {
  const response = await AdminService.resources.calendars.insert(options); // options omitted
} catch (error) {
  console.log(`Google Room Resource FAIL`);
  console.error(error.message);
}

await new Promise((r) => setTimeout(r, 10000));

const CalendarService = google.calendar({version: 'v3'});
try {
  const res = await CalendarService.acl.insert(option); // options omitted
  console.log(res);
} catch (error) {
  console.log(error);
  throw new Error(error.message);
}
As for authentication, I am using a service account with the correct scopes, which impersonates an admin user with the correct permissions. This is how I generate the required JWT client:
const generateJWT = async (scope: string[]) => {
  const jwtClient = new google.auth.JWT(
    client_email, // service account
    undefined,
    private_key,
    scope,
    subject // admin user
  );
  return jwtClient;
};
In the options parameter for each API call I directly acquire the auth client for the auth attribute, like this:
const option = {
  'calendarId': acl.calendarId,
  'auth': await generateJWT(['https://www.googleapis.com/auth/calendar']),
  'resource': {
    'role': acl.role,
    'scope': {
      'type': acl.scopeType,
      'value': acl.scopeValue,
    },
  },
};
Since I await all API calls, I assumed I would only get the response back once everything had propagated in Google Workspace, but when I don't use the setTimeout in between I always get an Error: Not Found back.
At first I had the timeout set to 5 seconds, which worked until it didn't, so I raised it to 10 seconds. That worked for quite a while, but now I sometimes get the Not Found error again...
I don't like the setTimeout hack... even less so when it doesn't work reliably. How should I deal with this asynchronous behavior without spinning up extra infrastructure like queues?

Working with Google Workspace Calendar Resource
As a Super Admin in my organization, when creating a Calendar resource, whether from the API or the web interface, it can take up to 24 hours for the calendar's information to propagate across the organization. This generally affects how long it takes any application to obtain the ID of the newly created calendar, which would explain why you keep having to increase the timeout.
You have already implemented await, which is one of the best things you can do. You can also look into applying exponential backoff in your application, similar to Utilities.sleep() in Google Apps Script.
There are multiple articles and references on how to use it for the retries needed while the resource itself has not yet fully propagated.
The official Calendar API documentation also notes that the "Not Found" error is a 404:
https://developers.google.com/calendar/api/guides/errors#404_not_found
The suggested action there is likewise to set up exponential backoff in the application.
References:
GASRetry - Exponential backoff JavaScript implementation for Google Apps Script
Exponential backoff
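A minimal sketch of that retry-with-backoff approach in JavaScript is below. The helper name and parameters are my own for illustration, not from any Google SDK; the idea is to retry the call only while it fails with a transient error, doubling the delay each attempt.

```javascript
// A generic retry helper with exponential backoff and jitter.
// `operation` is any function returning a promise; `shouldRetry`
// decides whether an error is transient (e.g. the 404 returned while
// the new calendar resource is still propagating).
async function withBackoff(operation, { retries = 5, baseDelayMs = 1000, shouldRetry = () => true } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt >= retries || !shouldRetry(error)) throw error;
      // Delay doubles on each attempt (1s, 2s, 4s, ...) plus random jitter.
      const delay = baseDelayMs * 2 ** attempt + Math.random() * baseDelayMs;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

The fixed setTimeout could then be replaced with something like await withBackoff(() => CalendarService.acl.insert(option), { shouldRetry: (e) => e.code === 404 }), assuming the error object exposes the HTTP status as e.code.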


Manage a long-running operation node.js

I am creating a Telegram bot that lets you get information about the Destiny 2 game world using the Bungie API. The bot is based on the Bot Framework and uses Telegram as a channel (the language I am using is JavaScript).
When I send a request to the bot, it makes a series of HTTP calls to the API endpoints to collect information, formats it, and sends it back via Adaptive Cards. In many cases this process takes more than 15 seconds, which shows the message "POST to DestinyVendorBot timed out after 15s" in the chat (even though this message appears, the bot works perfectly).
Searching online, I noticed there doesn't seem to be a way to hide this message or increase the time before it shows up, so the only thing left is to make sure it doesn't appear at all. I tried to follow this documentation article, but the code shown there is in C#. Could someone give me an idea of how to solve this, or perhaps some sample code?
I leave here an example of a call that takes too long and generates the message:
// Show the gunsmith's inventory
if (LuisRecognizer.topIntent(luisResult) === 'GetGunsmith') {
  // Takes more than 15 seconds
  const mod = await this.br.getGunsmith(accessdata, process.env.MemberShipType, process.env.Character);
  if (mod.error == 0) {
    var card = {
    }
    await step.context.sendActivity({
      text: 'Ecco le mod vendute oggi da Banshee-44:', // "Here are the mods Banshee-44 is selling today:"
      attachments: [CardFactory.adaptiveCard(card)]
    });
  } else {
    await step.context.sendActivity("Codice di accesso scaduto."); // "Access code expired."
    await this.loginStep(step);
  }
}
I have done something similar: you call another function and, once it completes, send the result via a proactive message. In my case I set the function up directly inside the bot rather than as a separate Azure Function. First, you need to save the conversation reference somewhere. I store it in conversation state and re-save it every turn (you could probably do this in onMembersAdded, but I chose onMessage so the conversation reference is re-saved on every turn). You'll need to import const { TurnContext } = require('botbuilder') for this.
// In your onMessage handler
const conversationData = await this.dialogState.get(context, {});
conversationData.conversationReference = TurnContext.getConversationReference(context.activity);
await this.conversationState.saveChanges(context);
You'll need this for the proactive message. When it's time to call the API, send a message (technically optional, but recommended), get the conversation data if you haven't already, and call the API function without awaiting it. If your API always comes back in around 15 seconds, a standard message may be enough (e.g. "One moment while I look that up for you"), but if it will take longer I recommend setting expectations with the user (e.g. "I will look that up for you. It may take up to a minute to get an answer. In the meantime you can continue to ask me questions."). You should be saving user/conversation state further down in your turn handler. Since you are not awaiting the call, the turn ends and the bot won't hang or send the timeout message. Here is what I did with a simulation I created.
await dc.context.sendActivity(`OK, I'll simulate a long-running API call and send a proactive message when it's done.`);
const conversationData = await this.dialogState.get(context, {});
apiSimulation.longRunningRequest(conversationData.conversationReference);
// That is in a switch statement. At the end of my turn handler I save state
await this.conversationState.saveChanges(context);
await this.userState.saveChanges(context);
And then the function that I called. As this was just a simulation, I simply awaited a promise, but you would obviously call and await your API(s). Once that comes back, create a new BotFrameworkAdapter to send the proactive message back to the user.
const request = require('request-promise-native');
const { BotFrameworkAdapter } = require('botbuilder');

class apiSimulation {
  static async longRunningRequest(conversationReference) {
    console.log('Starting simulated API');
    await new Promise(resolve => setTimeout(resolve, 30000));
    console.log('Simulated API complete');
    // Set up the adapter and send the message
    try {
      const adapter = new BotFrameworkAdapter({
        appId: process.env.microsoftAppID,
        appPassword: process.env.microsoftAppPassword,
        channelService: process.env.ChannelService,
        openIdMetadata: process.env.BotOpenIdMetadata
      });
      await adapter.continueConversation(conversationReference, async turnContext => {
        await turnContext.sendActivity('This message was sent after a simulated long-running API');
      });
    } catch (error) {
      //console.log('Bad Request. Please ensure your message contains the conversation reference and message text.');
      console.log(error);
    }
  }
}

module.exports.apiSimulation = apiSimulation;

Unable to get Notify data using Noble

I can't receive any notifications sent from the server peripheral.
I am using an ESP32 as the server, with the "BLE_notify" sketch that you can find in the Arduino IDE (File > Examples > ESP32 BLE Arduino > BLE_notify).
With this code the ESP32 starts notifying new messages every second once a client connects.
The client is a Raspberry Pi with the Noble node library installed (https://github.com/abandonware/noble). This is the code I am using:
noble.on('discover', async (peripheral) => {
  console.log('found peripheral:', peripheral.advertisement);
  await noble.stopScanningAsync();
  await peripheral.connectAsync();
  console.log("Connected");
  try {
    const services = await peripheral.discoverServicesAsync([SERVICE_UUID]);
    const characteristics = await services[0].discoverCharacteristicsAsync([CHARACTERISTIC_UUID]);
    const ch = characteristics[0];
    ch.on('read', function(data, isNotification) {
      console.log(isNotification);
      console.log('Temperature Value: ', data.readUInt8(0));
    });
    ch.on('data', function(data, isNotification) {
      console.log(isNotification);
      console.log('Temperature Value: ', data.readUInt8(0));
    });
    ch.notify(true, function(error) {
      console.log(error);
      console.log('temperature notification on');
    });
  } catch (e) {
    // handle error
    console.log("ERROR: ", e);
  }
});
SERVICE_UUID and CHARACTERISTIC_UUID are, of course, the UUIDs hard-coded in the ESP32.
This code sort of works: it can find services and characteristics and can successfully connect to the peripheral, but it cannot receive notifications.
I also tried an Android app that works as a client; from that app I can receive all the messages the peripheral notifies once connected. So something is missing on the Noble client side.
I think there is something wrong in the on('read')/on('data')/notify(true) callbacks. Maybe these are not the right methods for receiving notifications from the server?
I also tried the subscribe methods, but that still didn't work.
The official documentation is not clear. Has anyone gotten this up and running? Please help.
on('read') and on('data') are event listeners; there is nothing wrong with them. They are invoked when the corresponding event occurs.
For example, calling characteristic.read([callback(error, data)]) would have invoked the on('read') listener.
From the source:
Emitted when:
Characteristic read has completed, result of characteristic.read(...)
Characteristic value has been updated by peripheral via notification or indication, after having been enabled with
characteristic.notify(true[, callback(error)])
I resolved this by setting the following two environment variables: NOBLE_MULTI_ROLE=1 and NOBLE_REPORT_ALL_HCI_EVENTS=1 (see the documentation at https://github.com/abandonware/noble).
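As a sketch of how those variables come into play: noble reads them when the module is first loaded, so they must be set before the require, either on the command line (NOBLE_MULTI_ROLE=1 NOBLE_REPORT_ALL_HCI_EVENTS=1 node client.js) or at the very top of the script, as below. The require line is shown commented out here because it needs the BLE hardware to actually run.

```javascript
// Set the flags before noble is loaded; they are read at require time.
process.env.NOBLE_MULTI_ROLE = '1';
process.env.NOBLE_REPORT_ALL_HCI_EVENTS = '1';

// Only require noble after the environment variables are in place:
// const noble = require('@abandonware/noble');
```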

Google Picker API with a valid access_token asks for sign in on first use

Firstly, I have reviewed multiple SO questions relating to similar issues and the Google Picker API docs but cannot find a solution. Links at the bottom of this post.
Goal
After using the Google auth2 JavaScript library to complete a ux_mode='popup' sign-in flow and obtain an access_token for the user, I wish to open a Google Picker window using the picker JavaScript library and that access_token.
Expected
After checking that the access_token is still valid, when the google.picker.PickerBuilder() object is set visible via picker.setVisible(true), the user should ALWAYS be able to select files as per the configuration set for the picker object.
Actual
On a fresh browser, this flow results in the Google Picker window asking the user to sign in again. If the user does so, an additional popup is triggered that automatically executes the login flow again, and the user is then told "The API developer key is invalid."
What is truly unexpected is that if the user refreshes the page and repeats the exact same actions, the Google Picker works exactly as expected. The only way to reproduce the anomalous behaviour is to clear all of the browser's cache, cookies, and other site-related settings.
On the JavaScript console there are the common errors of:
Failed to execute ‘postMessage’ on ‘DOMWindow’: The target origin provided (‘https://docs.google.com’) does not match the recipient window’s origin (‘http://localhost’).
Invalid X-Frame-Options header was found when loading “https://docs.google.com/picker?protocol=gadgets&origin=http%3A%2F%2Flocalhost&navHidden=true&multiselectEnabled=true&oauth_token=.............: “ALLOW-FROM http://localhost” is not a valid directive.
But otherwise, no other indication of error in the console, and the exact same errors are reported when the Picker works exactly as expected.
List of things I have confirmed
I have added the Google Picker API to the associated project in the Google developer console
I am working with a validated OAuth application configured in the OAuth Consent Screen
I have tried working with both localhost and an https:// top level domain with valid cert and registered with the Google console as verified
I have generated an API Key that is explicitly associated with the Picker API and the relevant URLs
The API key is set as the .setDeveloperKey(APIKey)
The API key does show usage in the GCP console
The clientId is correct and the appId is correct for the GCP project
I have tried with scopes of ['https://www.googleapis.com/auth/drive.file'] and with scopes of ['openid', 'email', 'profile', 'https://www.googleapis.com/auth/drive.file', 'https://www.googleapis.com/auth/documents']
Attempts to Resolve
I can replicate this behaviour with the bare-minimum example provided in the docs (Google Picker Example), used as a direct cut-and-paste with only the required credential strings replaced.
Right before invoking picker = new google.picker.PickerBuilder(), I validate the access_token by making a GET fetch to https://www.googleapis.com/oauth2/v1/tokeninfo; the sign-in behaviour still occurs even when this returns a successful validation of the token.
I check the token using this simple function:
function checkToken(access_token) {
  fetch("https://www.googleapis.com/oauth2/v1/tokeninfo", {
    method: "GET",
    headers: {
      'Authorization': 'Bearer ' + access_token
    }
  }).then(response => {
    if (response.status === 200) {
      return response.json();
    } else {
      console.log('User Token Expired');
      resetLoginCache(); // clears the cached sign-in state (defined elsewhere)
    }
  }).then(data => {
    if (data) { // undefined when the token was expired
      console.log('User Token Still Valid');
    }
  }).catch((error) => {
    console.error('User Token Check Error:', error);
  });
}
The JavaScript APIs are initialized with:
<meta name="google-signin-client_id" content="<clientId>">
<script type="text/javascript" src="https://apis.google.com/js/api.js"></script>
function initApi() {
  gapi.load('signin2:auth2:picker', () => {
    window.gapi.auth2.init({
      'client_id': clientId,
      'scope': scope.join(" ")
    });
  });
};
In my application code I've (poorly) implemented a simple generalization of a picker builder:
// Use the Google API Loader script to load the google.Picker script.
function loadPicker(mimeTypes = null, callback = pickerCallback) {
  gapi.load('picker', {
    'callback': () => {
      console.log("Gonna build you a new picker...");
      createPicker(mimeTypes, callback);
    }
  });
}

// Create and render a Picker object for searching images.
function createPicker(mimeTypes, callback) {
  let gAuthTop = gapi.auth2.getAuthInstance();
  let gUser = gAuthTop.currentUser.get();
  let gAuthResp = gUser.getAuthResponse(true);
  console.log(gAuthResp.access_token);
  checkToken(gAuthResp.access_token);
  if (pickerApiLoaded && oauthToken) {
    // Based on MIME type:
    // FOLDER => google.picker.DocsView(google.picker.ViewId.FOLDERS)
    //   Cannot set mimeTypes to filter the view
    // DOCS => google.picker.View(google.picker.ViewId.DOCS)
    //   Can set MIME types to match
    let selectView = new google.picker.View(google.picker.ViewId.DOCS);
    if (mimeTypes) {
      if (mimeTypes.includes(gdriveFolderMIME)) {
        selectView = new google.picker.DocsView(google.picker.ViewId.FOLDERS);
        selectView.setIncludeFolders(true);
        selectView.setSelectFolderEnabled(true);
        selectView.setParent('root');
      } else {
        selectView.setMimeTypes(mimeTypes);
      }
    }
    let picker = new google.picker.PickerBuilder()
      .enableFeature(google.picker.Feature.NAV_HIDDEN)
      .enableFeature(google.picker.Feature.MINE_ONLY)
      .setAppId(appId)
      .setMaxItems(1)
      .setOAuthToken(gAuthResp.access_token)
      .addView(selectView)
      .setSelectableMimeTypes(mimeTypes)
      .setDeveloperKey(developerKey)
      .setOrigin(window.location.protocol + '//' + window.location.host)
      .setCallback(callback)
      .setTitle('Application Title')
      .build();
    console.log('Origin was set to: ', window.location.protocol + '//' + window.location.host);
    picker.setVisible(true);
  }
}
I've even tried to dig into the minified code loaded by the Picker, but I'm not that good at JavaScript, and the Firefox debugger couldn't help me understand what triggers this. However, once the "error" has occurred once, it will not appear again in the same browser for any user account; within Firefox, Private Mode will also no longer show the sign-in error until all history, cookies, and cache are cleared.
As proof I have done some reasonable research, similar SO questions that I have reviewed and tried working with are:
Google picker asking to sign in even after providing access token
The API developer key is invalid when viewing file in google picker
Is it possible to open google picker with access token which is fetched from server side Oauth2?
How do I use Google Picker to access files using the “drive.file” scope?
Google Picker with AccessToken not working
Google Picker API sign in
Picker API - manually set access_token
Google Picker API - how to load a picker on demand
As well as the following documentation:
G Suite Developer Picker Documentation
Google Sign-In JavaScript client reference
I also cannot find any related issue on the issue tracker.

Error: getaddrinfo EAI_AGAIN api.spotify.com:443

While integrating the Spotify API into a Google Assistant app and implementing Account Linking, the following error keeps appearing in the console:
getaddrinfo EAI_AGAIN api.spotify.com:443
Everything otherwise works as if nothing were wrong with the API implementation: the access token is properly created and received, and the client and secret IDs were filled in without any typo. I also tested the API calls on the Spotify Console (https://developer.spotify.com/console/get-artist-albums/); no error was found and it fetched the expected data from the Spotify server, so this should not be related to Account Linking or the Spotify server. My code is below. I assume there is something wrong around spotify-web-api-node, node, npm, or firebase-functions? I recently changed node versions, so I might have done something wrong there.
Node version: v7.9.0
spotify-web-api-node: ^4.0.0
firebase-functions: ^2.0.3
npm version: 6.4.1
Added engines: { "node": "8" } // this is in package.json to use async and await
app.intent(SomeIntent, async (conv, params) => {
  console.log('user info', JSON.stringify(conv.user));
  conv.ask('lets play'); // okay
  const access_token = conv.user.access.token || ''; // okay
  console.log('Your TOKEN information here: ' + access_token); // okay
  spotifyApi.setAccessToken(access_token); // should be set correctly
  let data = await findMusic(); // error in the findMusic func
  conv.ask('found this song.', data.name); // so no data.name
});

function findMusic() {
  return spotifyApi.getArtistAlbums('43ZHCT0cAZBISjO8DG9PnE').then((data) => {
    console.log('artist song', data.body);
    return data.body; // this does not return because an error is produced
  }).catch(err => {
    console.log('Something went wrong!', err);
    return err; // this error is WebapiError: getaddrinfo EAI_AGAIN api.spotify.com:443
  });
}
UPDATE
@Nick-Felker mentioned in a comment below that external calls are only allowed on paid plans, so this might be the solution (not yet proven to work, because I am not on a paid plan). The detailed explanation below is quoted from an answer comment on another Stack Overflow post:
The Spark plan allows outbound network requests only to Google-owned services. Inbound invocation requests are allowed within the quota. On the Blaze plan, Cloud Functions provides a perpetual free tier. The first 2,000,000 invocations, 400,000 GB-sec, 200,000 CPU-sec, and 5 GB of Internet egress traffic is provided for free each month. You are only charged on usage past this free allotment. Pricing is based on total number of invocations, and compute time. Compute time is variable based on the amount of memory and CPU provisioned for a function. Usage limits are also enforced through daily and 100s quotas. For more information, see Cloud Functions Pricing.
UPDATE
In my case, the above solution worked. Hope this helps others!
I got this error due to a network problem; it was resolved once the connection was restored.

Send FCM to all Android devices using Cloud Functions

Just trying to understand the process for sending a Firebase Cloud Message using Cloud Functions to notify all users who have my app installed on their phone. This would fire whenever a new event has been added at a particular branch, as follows:
var functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

const payload = {
  notification: {
    title: 'New event added'
  }
};

exports.bookingsChanged = functions.database.ref("/Events")
  .onWrite(event => {
    return admin.messaging().sendToDeviceGroup("latest_events", payload);
  });
The above function, once deployed, doesn't appear to send the message to the Android device I'm using at all, despite my having set up and tested FCM via the Firebase console's option to send messages. I've noticed there is little documentation on this at the moment, so any help would be greatly appreciated!
EDIT
I may have missed this, but I've replaced the string 'latest_events' with my Android application's package name, which I assume is required, as per the console option to target a 'User Segment'.
Ended up solving this by waiting for a topic I had set up to appear in the Firebase Notifications dashboard. I then changed the following code to send to this topic directly:
return admin.messaging().sendToTopic("latest_events", payload);
I also found out, after coming across the API documentation, that you have to provide a device-group notification key when using sendToDeviceGroup. Topics are therefore more effective in my use case, as I do not wish to collect tokens in order to target specific user devices.
Hope this helps someone who experiences a similar problem!
ADDITIONAL EDIT
If, like me, you would like to alert users only about new events added to a specific branch (typically including a push ID), I've created the following code to implement this.
With a little help from the examples in the documentation, it compares the number of records at the location with the number in the previous snapshot. It therefore only alerts users when new child records are added, rather than every time a record is edited or deleted.
var functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

exports.bookingsChanged = functions.database.ref("/Bookings").onWrite(event => {
  var payload = {
    notification: {
      title: "A new event has been added!"
    }
  };
  if (event.data.previous.exists()) {
    if (event.data.previous.numChildren() < event.data.numChildren()) {
      return admin.messaging().sendToTopic("latest_events", payload);
    } else {
      return;
    }
  }
  if (!event.data.exists()) {
    return;
  }
  return admin.messaging().sendToTopic("latest_events", payload);
});
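The notify-only-on-additions decision in that handler can be isolated as a small pure function, which makes the logic easy to test on its own. This helper is illustrative only, not part of the Firebase SDK:

```javascript
// Returns true only when a new child record was added, mirroring the
// handler above: if a previous snapshot exists, compare child counts;
// if the node no longer exists, never notify; otherwise the node was
// newly created, so notify.
function shouldNotify(previousExists, previousCount, currentExists, currentCount) {
  if (previousExists) {
    return previousCount < currentCount;
  }
  if (!currentExists) {
    return false;
  }
  return true;
}
```

The handler body would then reduce to calling shouldNotify(...) with the snapshot values and sending to the topic when it returns true.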
