I have a Firebase Storage bucket and I would like to use the Node.js google-cloud notification API to listen for changes in the storage.
What I have so far:
const gcloud = require('google-cloud');
const storage = gcloud.storage({
  projectId: 'projectId',
  credentials: serviceAccount
});
const storageBucket = storage.bucket('bucketId');
Now, from what I understand, I have to create a channel in order to listen for storage changes.
So I have:
const storageBucketNotificationChannel = storage.channel('channelId', 'resourceId');
This is the point where the docs stop being clear, as I can't figure out what channelId and resourceId stand for.
Nor do I understand how to actually listen for changes on the channel itself. Are there any lifecycle-type methods to do so?
Can I do something like this?
storageBucketNotificationChannel.onMessage(message => { ... })
Based on the existing documentation of the Google Cloud Node.js Client and the feedback from this GitHub issue, there is presently no way for the Node client to create a channel or subscribe to object change notifications.
One of the reasons is that the machine using the client may not necessarily be the machine on which the application runs, which would be a security risk. One can still, however, subscribe to object change notifications for a given bucket and have the notifications received by a Node.js GAE application.
Using Objects: watchAll JSON API
When using gsutil to subscribe, gsutil sends a POST request to https://www.googleapis.com/storage/v1/b/bucket/o/watch where bucket is the name of the bucket to be watched. This is essentially a wrapper around the JSON API Objects: watchAll. Once a desired application/endpoint has been authorized as described in Notification Authorization, one can send the appropriate POST request to said API and provide the desired endpoint URL in address. For instance, address could be https://my-node-app.example.com/change.
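A minimal sketch of that watch request (not using the official client; google-auth-library is assumed here for OAuth, and the bucket name, channel id and callback address are placeholders):
const { GoogleAuth } = require('google-auth-library');

async function watchBucket() {
  const auth = new GoogleAuth({ scopes: ['https://www.googleapis.com/auth/devstorage.full_control'] });
  const client = await auth.getClient();
  const res = await client.request({
    url: 'https://www.googleapis.com/storage/v1/b/my-bucket/o/watch',
    method: 'POST',
    data: {
      id: 'my-channel-id', // the channelId: an identifier you choose yourself
      type: 'web_hook',
      address: 'https://my-node-app.example.com/change' // the authorized endpoint
    }
  });
  // The response contains the resourceId, which is needed later to stop the channel:
  console.log(res.data.resourceId);
}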
The Node/Express application would then need to listen for POST requests on the /change path for notifications resembling this, and act upon that data accordingly. Note that the application should respond to the request as described in Reliable Delivery, so that Cloud Storage retries the notification if handling failed, or stops retrying once it succeeded.
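A minimal sketch of such an endpoint (header names follow the object change notification docs; handleChange is a hypothetical application handler):
const express = require('express');
const app = express();
app.use(express.json());

app.post('/change', (req, res) => {
  const state = req.get('X-Goog-Resource-State'); // 'sync', 'exists' or 'not_exists'
  if (state === 'sync') {
    return res.status(200).end(); // initial handshake message, nothing to process
  }
  try {
    handleChange(state, req.body); // hypothetical: act on the notification data
    res.status(200).end();  // 2xx acknowledges delivery, so it is not retried
  } catch (err) {
    res.status(500).end();  // non-2xx tells Cloud Storage to retry
  }
});

app.listen(8080);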
I have a question, please, if possible explain in simple terms. I'm new to React Native. What's the best way to store API keys securely, so that someone can't reverse engineer the app and get the keys right away? Can I just retrieve the key from the server side using a REST API, only when the user is signed in? I'm trying to upload pictures to AWS storage, but I want to store the API key somewhere it's difficult for hackers to retrieve. Also, is there a way to send images through an Express.js server (from React Native to the Express app)? How can I do that, so I can upload them to AWS storage, or even to MongoDB instead of AWS storage?
For example:
const express = require("express");
const requireAuth = require("../middlewares/requireAuth");
const router = express.Router();

router.use(requireAuth); // make sure they are signed in

/**
 * GET: API key for the Amazon S3 bucket storage
 */
router.get("/apikey/amazonstorage", (req, res) => {
  const APIKEY = process.env.APIKEY;
  if (APIKEY) {
    res.status(200).send(APIKEY);
  } else {
    res.status(502).send(null);
  }
});
Thank you in advance
In general, the safest way to handle API secret keys is to store them on your backend server and have the server make those requests to the third party APIs for the client (and send the results back to the client if necessary).
From the React Native docs:
If you must have an API key or a secret to access some resource from your app, the most secure way to handle this would be to build an orchestration layer between your app and the resource. This could be a serverless function (e.g. using AWS Lambda or Google Cloud Functions) which can forward the request with the required API key or secret. Secrets in server side code cannot be accessed by the API consumers the same way secrets in your app code can.
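Applied to the example above, that would mean never returning the key at all: the Express app accepts the image and performs the S3 upload itself. A rough sketch, assuming multer and the v2 aws-sdk, where the bucket name is a placeholder and req.user is presumed to be set by requireAuth:
const multer = require("multer");
const AWS = require("aws-sdk");

const upload = multer({ storage: multer.memoryStorage() });
const s3 = new AWS.S3(); // reads credentials from env vars or an instance role

// POST /upload: the client sends the image; the secret never leaves the server
router.post("/upload", upload.single("photo"), (req, res) => {
  s3.upload(
    {
      Bucket: "my-bucket", // placeholder
      Key: `uploads/${req.user.id}/${Date.now()}`, // req.user assumed from requireAuth
      Body: req.file.buffer,
    },
    (err, data) => {
      if (err) return res.status(500).send(err.message);
      res.status(200).json({ url: data.Location });
    }
  );
});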
I have created a web-based system in CodeIgniter and some Trello integration using its API. I want to achieve something like this: if a new card is created in a particular board, my system also receives a notification that a new card was created. I started reading the documentation on Trello webhooks but I just can't figure it out. Am I heading in the right direction? Would it be valid to provide a callbackURL pointing at localhost, i.e. callbackURL: "localhost/main_controller/trelloCallback"? However, the code below returns a 400 status. Please help me. Thank you.
JavaScript
$.post("https://api.trello.com/1/tokens/5db4c9fbb5b2kaf8420771072b203616f3874fa92a4c57f0c796cf90819fa05c/webhooks?key=a2a93deccc7064dek5f4011c2e9810d6", {
description: "My first webhook",
callbackURL: "localhost/dti_infosys/main_controller/trelloCallback",
idModel: "5a73c33ad9a2dk1b473612eb",
});
main_controller/trelloCallback
function trelloCallback() {
    $json = file_get_contents('php://input');
    $action = json_decode($json, true);
    var_dump($action);
}
I know it's an old question, but this use case of having an external tool access our localhost for development is quite common.
So for anyone (OP included) who would like a cloud-based service to be able to call a local endpoint, you can use tools like ngrok.
Basically, it sets up a URL accessible over the internet that forwards all calls to one of your local ports.
Let's say that the local webserver running your PHP listens on port 8000 on your machine.
With ngrok installed, you could execute the following command:
$> ngrok http 8000
That would set up the forwarding session:
ngrok by @inconshreveable

Session Status                online
Session Expires               6 hours, 21 minutes
Version                       2.3.35
Region                        United States (us)
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://b5d44737.ngrok.io -> http://localhost:8000
Forwarding                    https://b5d44737.ngrok.io -> http://localhost:8000
You can then use either of the Forwarding addresses pointing to ngrok.io to reach your local webserver over the internet. In other words, if you provide a URL using one of these addresses (instead of localhost) to an external tool, the tool will be able to indirectly call your local endpoint.
In your case, your javascript call to create a Trello webhook would be:
$.post("https://api.trello.com/1/tokens/<YOUR_ACCESS_TOKEN>/webhooks?key=<YOUR_API_KEY>", {
description: "My first webhook",
callbackURL: "https://b5d44737.ngrok.io/dti_infosys/main_controller/trelloCallback",
idModel: "<YOUR_MODEL_ID>",
});
A bit of warning though: in the case of ngrok, the forwarding URLs are randomized at each session startup, meaning that webhooks you created in one session would not work in another session because their callbackURLs wouldn't be valid anymore.
Note that you can subscribe to a paid plan to "reserve" ngrok subdomains and have URL consistency between sessions. Or you could manually update your webhooks' callbackURLs with the new forwarding URLs.
Anyway, I hope this will help!
P.S.: When the forwarding session runs, you can access localhost:4040 to inspect calls made on the forwarding URLs and retry some of them.
I'm using a third-party SDK that needs temporary AWS credentials to access AWS services. I'm using this SDK as part of an application that runs on EC2. All SDKs in my application need access to the same role, which is attached to my EC2 instance. Below, I have listed two options I have found for getting temporary credentials. Which of these options is the recommended way to get temporary credentials for my third-party SDK?
AWS.config
var AWS = require("aws-sdk");

// getCredentials resolves the credential chain asynchronously:
AWS.config.getCredentials(function (err) {
  if (err) return console.error(err);
  var creds = AWS.config.credentials;
});
Security Token Service (STS)
var sts = new AWS.STS();
var params = {
  RoleArn: "arn:aws:iam::123456789012:role/demo",
  RoleSessionName: "Bob",
};
sts.assumeRole(params, function (err, data) {
  if (err) return console.error(err);
  var creds = data.Credentials;
});
"Should" in this case is a bit fluid, but when you launch an EC2 instance and assign it an instance profile, (somewhat) temporary credentials are made available as instance metadata. You access instance metadata via a local HTTP server bound to 169.254.169.254,
e.g.
curl http://169.254.169.254/latest/meta-data/ami-id
returns the AMI-ID of the running instance. AWS credentials associated with the instance profile assigned to the instance can be accessed in this manner.
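For example, the credentials themselves sit under the iam/security-credentials/ path: the first call lists the attached role's name, and appending that name returns a JSON document with AccessKeyId, SecretAccessKey, Token and Expiration (the role name here is illustrative):
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/demo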
Anything running on the instance can access this data, meaning that if you're trying to isolate the third-party SDK from your instance profile, you've already failed.
However, it doesn't sound like that's what you're trying to do. When you execute AWS.config.getCredentials();, it uses the instance metadata (among other things) to look up the credentials. This is advantageous because it allows you to supply the credentials in a variety of manners without changing the code that looks them up.
The STS use case, however, is for when you want to temporarily assume a particular role. The identity making the request must have the sts:AssumeRole permission, and the target role's trust policy must allow that identity to assume it. This can be used for auditing purposes, etc.
I have user UIDs in a 'users' array as ['uid1','uid2']. Now I want to send notifications to these users from a Cloud Function.
exports.sendNotificationFromCr = functions.database
  .ref('/cr/{crUid}/notifications/{notificationid}/')
  .onWrite(event => {
    const uid = ['uid1', 'uid2']; // somehow I get this
    // some work to send notifications
    // to all tokens of uid1 and uid2.
  });
here is the database structure:
users/
  uid1/
    name: {name}
    FCM-key/
      token1: true
      token2: true
  uid2/
    ...
    FCM-key/
      token3: true
Using ['uid1','uid2'], I want to send a notification to all 3 tokens in my database. How do I do that?
If you're using something like Firebase, then you would want to have a notifications database model that has the userId, the notification title, body and perhaps an image, plus a seen flag (true or false).
You would then post notifications, either from your clients or from your server-side code, into the database, one per client/notification. If you have thousands of users, you would use some sort of server-side cronjob to offload this so that it runs outside of, say, your client-to-server API.
On the clients, you would listen for new rows in that model, filtering on the userId, and when they appear, display them to the client in your UI. Once the client has seen the notification, you would mark it as seen on the client.
Without knowing what platforms, code base and DB you are using, it's impossible to explain in code terms how this would be done.
There are various APIs for iOS, Android and Firebase that solve this.
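That said, with the exact structure shown in the question, a Cloud Function could also fan the message out itself by reading each user's tokens and calling FCM through the Admin SDK. A rough sketch, assuming firebase-admin and its (legacy) sendToDevice API:
const admin = require('firebase-admin');

// Collect every token stored under users/<uid>/FCM-key and send one payload to all of them.
async function notifyUsers(uids, payload) {
  const snapshots = await Promise.all(
    uids.map(uid => admin.database().ref(`users/${uid}/FCM-key`).once('value'))
  );
  const tokens = [];
  snapshots.forEach(snap => {
    snap.forEach(child => {
      tokens.push(child.key); // the token strings are the keys; the values are just 'true'
    });
  });
  if (tokens.length > 0) {
    await admin.messaging().sendToDevice(tokens, payload);
  }
}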
I'm trying to evaluate using IndexedDB to solve the offline issue. It would be populated with data currently stored in a MongoDB database (as is).
Once data is stored in IndexedDB, it may be changed on the MongoDB server, and I need to propagate those changes. Is there any existing framework or library to do something like this for Mongo? I already know about CouchDB/PouchDB and am not exploring those two.
[Sync solution for 2021]
I know the question asked was for MongoDB specifically, but since this is an old thread, I thought readers might be looking for other solutions for new apps or rebuilds. I can really recommend checking out AceBase, because it does exactly what you were looking for back then.
AceBase is a free and open source realtime database that enables easy storage and synchronization between browser and server databases. It uses IndexedDB in the browser, its own binary db / SQL Server / SQLite storage on the server. Offline edits are synced upon reconnect and clients are notified of remote database changes in realtime through a websocket (FAST!).
On top of this, AceBase has a unique feature called "live data proxies" that allow all changes to in-memory objects to be persisted and synced to local and server databases, and remote changes to automatically update your in-memory objects. This means you can forget about database coding altogether, and code as if you're only using local objects, no matter whether you're online or offline.
The following example shows how to create a local IndexedDB database in the browser, how to connect to a remote database server that syncs with the local database, and how to create a live data proxy that eliminates further database coding. AceBase supports authentication and authorization as well, but I left it out for simplicity.
const { AceBaseClient } = require('acebase-client');
const { AceBase } = require('acebase');

// Create local database with IndexedDB storage:
const cacheDb = AceBase.WithIndexedDB('mydb-local');

// Connect to server database, use local db for offline storage:
const db = new AceBaseClient({ dbname: 'mydb', host: 'db.myproject.com', port: 443, https: true, cache: { db: cacheDb } });

// Wait for remote database to be connected, or ready to use when offline:
db.ready(async () => {
  // Create live data proxy for a chat:
  const emptyChat = { title: 'New chat', messages: {} };
  const proxy = await db.ref('chats/chatid1').proxy(emptyChat); // Use emptyChat if chat node doesn't exist

  // Get object reference containing live data:
  const chat = proxy.value;

  // Update chat's properties to save to local database,
  // sync to server AND all other clients monitoring this chat in realtime:
  chat.title = `Changing the title`;
  chat.messages.push({
    from: 'ewout',
    sent: new Date(),
    text: `Sending a message that is stored in the database and synced automatically was never this easy! ` +
      `This message might have been sent while we were offline. Who knows!`
  });

  // To monitor and handle realtime changes to the chat:
  chat.onChanged((val, prev, isRemoteChange, context) => {
    if (val.title !== prev.title) {
      alert(`Chat title changed to ${val.title} by ${isRemoteChange ? 'someone else' : 'us'}`);
    }
  });
});
For more examples and documentation, see AceBase realtime database engine at npmjs.com
Open up a change stream with a resume token. There's no guarantee of causal consistency, however, since we're talking about multiple disparate databases.
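A minimal sketch with the official MongoDB Node driver; the connection string, collection names, and the saveToken/applyToIndexedDB helpers are hypothetical:
const { MongoClient } = require('mongodb');

async function watchForChanges(resumeToken) {
  const client = new MongoClient('mongodb://localhost:27017'); // placeholder URI
  await client.connect();
  const collection = client.db('mydb').collection('docs');     // placeholder names

  // Resume from the last seen token, if any, so no changes are missed:
  const options = resumeToken ? { resumeAfter: resumeToken } : {};
  const changeStream = collection.watch([], options);

  changeStream.on('change', change => {
    saveToken(change._id);    // change._id is the resume token; persist it across restarts
    applyToIndexedDB(change); // hypothetical: replay the change into the local store
  });
}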
I haven't worked with IndexedDB, but the design problem isn't that uncommon. My understanding of your app is that when the client makes the connection to MongoDB, you pull a set of documents down for local storage and disconnect. The client can then do things locally (not connected to the data server) and then push up the changes.
The way I see it, you've got to handle two general cases:
- When the MongoDB server is updated and breaks continuity with the client, the client will have to:
  - poll for the data (timer?), or
  - keep a websocket open to let notifications free-flow over the pipe.
- When the user needs to push changed data back up the pipe, you can:
  - reconnect asynchronously and check for state changes, resolving conflicts according to your business rules, or
  - have a (light) server-side interface for handling conflicts (depending on the complexity of your app, comparing timestamps of state changes in MongoDB to IndexedDB updates should suffice; see the sketch below).
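For that timestamp comparison, a trivial last-write-wins resolver could look like this sketch (updatedAt is an assumed field on each document, not something MongoDB or IndexedDB provide by default; real business rules would refine it):
// Keep whichever copy changed most recently (last-write-wins).
function resolveConflict(serverDoc, localDoc) {
  return new Date(serverDoc.updatedAt) >= new Date(localDoc.updatedAt)
    ? serverDoc
    : localDoc;
}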