Track message reports using Multicast ID on Google Firebase in PHP/JavaScript

Is there a way to track the delivery report of a particular message that has been sent via FCM to an Android device? I found that we can add delivery_receipt_requested to track delivery, and I have added it to my JSON data as follows:
{"to":"KEY",
"data":{
"data":{
"title":"test message",
"message":"sent",
"image":null}
},
"notification":{
"delivery_receipt_requested":true
}
}
and I receive a response:
{"multicast_id":6417448921485349071,"success":1,"failure":0,"canonical_ids":0,"results":[{"error":"false"}]}
In PHP or JavaScript I need something like: if we pass that multicast_id, we get the current status of the message. I found it almost a nightmare to get the desired result, but it's not impossible. Is there any way, guys?

There is currently no way to manually ask the FCM server about the status of the sent message.
Based on your post, it seems you have already done your homework checking the FCM service. Implementing delivery receipts is the only way (AFAIK) to attain the behavior you mentioned in your post.
Implementing delivery receipts not only requires the delivery_receipt_requested parameter in your payload; you also have to implement an XMPP connection server, along with upstream messaging on your client app (for the acknowledgement part).
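For reference, when a delivery receipt does come back on your XMPP connection server, it arrives as a data message whose JSON payload looks roughly like the following (structure per the GCM/FCM delivery-receipt documentation; the ids shown here are placeholders):

{
  "category": "com.example.yourapp",
  "data": {
    "message_status": "MESSAGE_SENT_TO_DEVICE",
    "original_message_id": "m-1366082849205",
    "device_registration_id": "REGISTRATION_ID"
  },
  "message_id": "dr2:m-1366082849205",
  "message_type": "receipt",
  "from": "gcm.googleapis.com"
}

The original_message_id echoes the message_id you set when sending downstream over XMPP, which is how you correlate the receipt with the message you are tracking.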

Related

Shiny server - Sending huge data through custom message Error

Context
A shiny application I work on sends processed data through:
session$sendCustomMessage("handler", data)
The JavaScript on the front end catches this message and renders some visualizations:
Shiny.addCustomMessageHandler('handler', (data) => visualizer(data))
When the fetch data button is clicked this data is sent.
The shiny server is currently hosted on an Amazon EC2 R6a Large instance.
Issue
Depending on the parameters the user selects, the data can be small or huge. When the parameters are such that the data is a huge JSON object, an error is thrown and the server disconnects.
I have tested the same scenarios but without sending the data through session$sendCustomMessage("handler", data) and am not able to reproduce the error. It seems like sending a huge amount of data through the network causes the shiny server to disconnect.
What could be the fix for this?
Increasing sockjs_heartbeat_delay and sockjs_disconnect_delay helped with this; after increasing them, the error was gone.
The connection simply times out while the huge data is loading, and the comment above was right about that.
https://docs.posit.co/shiny-server/
Simply add these directives to shiny-server.conf with reasonable values.
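For example, a minimal shiny-server.conf sketch with these directives added (the values are illustrative; tune them to how long your payloads take to transfer):

# Give slow transfers more headroom before SockJS gives up on the connection.
# Illustrative values, in seconds.
sockjs_heartbeat_delay 25;
sockjs_disconnect_delay 60;

server {
  listen 3838;
  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
  }
}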

What should I use as the document key to maintain idempotency?

I'm building a text messaging application that uses CouchDB (with PouchDB on the client) to store messages locally. Twilio (SMS provider) generates an ID for each message, and I use that as the CouchDB document ID. This way fetching messages from Twilio's API is idempotent -- if I come across the same message twice, it will only store one copy in my database.
// Twilio API /messages
[
  { smsid: 123, body: 'foo' },
  { smsid: 456, body: 'bar' }
]
// transformed into CouchDB docs
[
  { id: 123, doc: { _id: 123, body: 'foo' } },
  { id: 456, doc: { _id: 456, body: 'bar' } }
]
This is easy to do when fetching messages from Twilio. But when the user sends an outbound message from the client application, there is no Twilio ID yet, because the message hasn't been sent to Twilio.
A traditional approach would involve POSTing the message to some endpoint on my server, and have the server send it to twilio, then add the record to the database once it has the smsid from twilio's response. The problem with this is (a) there's a noticeable delay from when the user presses "send" and when the message shows up in the UI, and (b) we can't take advantage of couchdb's auth system.
Instead, I have it set up so the client generates a random ID and inserts it into the database (via PouchDB with sync). The server then watches for new outbound records being added and dispatches them to Twilio.
This approach works fine, but if I GET /messages again, it's no longer idempotent -- it would create an additional record for the outbound message because I don't have a couchdb document with that message's smsid as its key (it didn't have an smsid when it was added to couchdb).
Is there a way around this or a better approach?
One idea to make this work: rely on other data from each message and ignore Twilio's smsid.
Perhaps hash together the user id, the message body, and a coarse version of the timestamp (for example, int(UNIX-TIMESTAMP-IN-SECONDS / 100) will tolerate a delay of up to 100 seconds between the time your server stores the message and Twilio acknowledges it).
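A minimal Node.js sketch of that idea (the field names and the 100-second bucket are assumptions taken from the suggestion above, not a drop-in solution):

const crypto = require('crypto');

// Derive a deterministic document _id from the message content instead of Twilio's smsid.
// Messages created locally and the same messages later fetched from Twilio hash to the
// same key, as long as they fall into the same 100-second bucket.
function messageKey(userId, body, unixSeconds) {
  const bucket = Math.floor(unixSeconds / 100); // coarse timestamp, tolerates ~100s of skew
  return crypto
    .createHash('sha256')
    .update(`${userId}|${body}|${bucket}`)
    .digest('hex');
}

// e.g. messageKey('user-42', 'foo', Date.now() / 1000)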
Thanks for your replies. This was a tough one. #rnewson from #couchdb in freenode was kind enough to spend some time thinking about this one and proposed a solution that worked out great:
Message documents in couchdb use an arbitrary _id that can be generated by the server or the client
When the client sends a message, it generates an arbitrary _id and puts it into the database. The server observes this and dispatches it to twilio, then updates the database document by adding a twilio_id property to the document
I created a view to index the documents by twilio_id
When the server starts, it fetches the latest messages from twilio. In order to prevent adding duplicate records to the database, it queries the above view for each twilio id. For each match, it uses the match's _id and _rev to perform an update. For records with no matches, it generates a new arbitrary _id to perform an insert.
For anyone curious, here's the code.
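A minimal sketch of that view and reconciliation flow (illustrative names such as messages/by_twilio_id; not the actual linked code), using PouchDB against the CouchDB database:

const PouchDB = require('pouchdb');
const db = new PouchDB('http://localhost:5984/messages'); // assumed CouchDB URL

// View that indexes message docs by the twilio_id the server adds after dispatch.
const designDoc = {
  _id: '_design/messages',
  views: {
    by_twilio_id: {
      map: "function (doc) { if (doc.twilio_id) { emit(doc.twilio_id); } }"
    }
  }
};
db.put(designDoc).catch(() => {}); // ignore the conflict if the view already exists

// On server start: reconcile messages fetched from Twilio with existing docs.
async function reconcile(twilioMessages) {
  for (const msg of twilioMessages) {
    const res = await db.query('messages/by_twilio_id', { key: msg.smsid, include_docs: true });
    if (res.rows.length > 0) {
      // Match found: update the existing doc, reusing its _id and _rev.
      const existing = res.rows[0].doc;
      await db.put({ ...existing, body: msg.body });
    } else {
      // No match: insert with a new arbitrary _id, recording the twilio_id.
      await db.post({ twilio_id: msg.smsid, body: msg.body });
    }
  }
}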
Thanks again for your responses!

App API design advice specifically around security

I'm building an app and would like some feedback on my approach to building the data sync process and API that supports it. For context, these are the guiding principles for my app/API:
Free: I do not want to charge people at all to use the app/API.
Open source: the source code for both the app and API are available to the public to use as they wish.
Decentralised: the API service that supports the app can be run by anyone on any server, and made available for use to users of the app.
Anonymous: the user should not have to sign up for the service, or submit any personal identifying information that will be stored alongside their data.
Secure: the user's data should be encrypted before being sent to the server, anyone with access to the server should have no ability to read the user's data.
I will implement an instance of the API on a public server, which will be selected in the app by default. That way, initial users of the app can sync their data straight away without needing to find or set up an instance of the API service. Over time, if the app is popular, users will hopefully set up other instances of the API service, either for themselves or to make available to other users of the app should they wish to use a different instance (or if the primary instance runs out of space, goes down, etc.). They may even access the API from their own apps. Essentially, I want them to have the choice to be self-sufficient and not necessarily have to rely on others providing an instance of the service for them, for reasons of privacy, resilience, cost saving, etc. Note: the data in question is not sensitive (i.e. financial, etc.), but it is personal.
The user's sync journey works like this:
User downloads the app, and creates their data in the process of using the app.
When the user is ready to initially sync, they enter a "password" in the password field, which is used to create a complex key with which to encrypt their data. Their password is stored locally in plain text but is never sent to the server.
User clicks the "Sync" button; their data is encrypted (using their password) and sent to the specified (or default) API instance, which responds with a unique ID that is saved by the app.
For future syncs, their data is encrypted locally using their saved password before being sent to the API along with their unique ID which updates their synced data on the server.
When retrieving synced data, their unique ID is sent to the API which responds with their encrypted data. Their locally stored password is then used to decrypt the data for use by the app.
I've implemented the app in JavaScript, and the API in Node.js (restify) with MongoDB as a backend, so in practice a sync request to the server looks like this:
1. Initial sync
POST /api/data
Post body:
{
  "data": "DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd..."
}
Response:
{
  "id": "507f191e810c19729de860ea",
  "lastUpdated": "2016-07-06T12:43:16.866Z"
}
2. Get sync data
GET /api/data/507f191e810c19729de860ea
Response:
{
  "data": "DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd...",
  "lastUpdated": "2016-07-06T12:43:16.866Z"
}
3. Update synced data
POST /api/data/507f191e810c19729de860ea
Post body:
{
  "data": "DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd..."
}
Response:
{
  "lastUpdated": "2016-07-06T13:21:23.837Z"
}
Their data in MongoDB will look like this:
{
  "id": "507f191e810c19729de860ea",
  "data": "DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd...",
  "lastUpdated": "2016-07-06T13:21:23.837Z"
}
Encryption is currently implemented using CryptoJS's AES implementation. The app provides the user's password as a passphrase to the AES encrypt function, which generates a 256-bit key with which to encrypt the user's data before it is sent to the API.
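For concreteness, a minimal sketch of those encrypt/decrypt calls (standard CryptoJS passphrase usage; the variable names are illustrative):

// npm install crypto-js
const CryptoJS = require('crypto-js');

const password = 'correct horse battery staple'; // stored locally, never sent to the server
const userData = { notes: ['example item'] };

// Passing a passphrase makes CryptoJS derive the AES key (plus salt and IV)
// internally, using its OpenSSL-compatible key derivation.
const ciphertext = CryptoJS.AES.encrypt(JSON.stringify(userData), password).toString();
// This ciphertext string is what goes into the "data" field sent to the API.

// Decrypting after a fetch, using the locally stored password:
const bytes = CryptoJS.AES.decrypt(ciphertext, password);
const decrypted = JSON.parse(bytes.toString(CryptoJS.enc.Utf8));

Note that the strength of this scheme rests entirely on the passphrase, since the key is derived from it.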
That about sums up the sync process; it's fairly simple, but obviously it needs to be secure and reliable. My concerns are:
As the MongoDB ObjectID is fairly easy to guess, it is possible that a malicious user could request someone else's data (as per step 2. Get sync data) by guessing their ID. However, if they are successful they will only retrieve encrypted data and will not have the key with which to decrypt it. The same applies for anyone who has access to the database on the server.
Given the above, is the CryptoJS AES implementation secure enough so that in the real possibility that a user's encrypted data is retrieved by a malicious user, they will not realistically be able to decrypt the data?
Since the API is open to anyone and doesn't audit or check the submitted data, anyone could potentially submit any data they wish to be stored in the service, for example:
Post body:
{
"data":"This is my anyold data..."
}
Is there anything practical I can do to guard against this whilst adhering to the guiding principles above?
General abuse of the service, such as users spamming initial syncs (step 1 above) over and over to fill up the space on the server, or some users using disproportionately large amounts of server space. I've implemented some features to guard against this, such as logging IPs for initial syncs for one day (not kept any longer than that) in order to limit a single IP to a set number of initial syncs per day. I'm also limiting the post body size for syncs. These options are configurable in the API, however, so if a user doesn't like these limitations on a public API instance, they can host their own instance and tweak the settings to their liking.
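As a rough illustration of those guards (the limits and helper names here are assumptions, not the actual implementation):

// Cap accepted payload size and limit initial syncs per IP per day.
const MAX_BODY_BYTES = 1024 * 1024;    // assumed limit
const MAX_INITIAL_SYNCS_PER_DAY = 10;  // assumed limit

// In-memory counter, cleared daily so IPs are not kept longer than a day.
const initialSyncsByIp = new Map();
setInterval(() => initialSyncsByIp.clear(), 24 * 60 * 60 * 1000);

function allowInitialSync(ip) {
  const count = initialSyncsByIp.get(ip) || 0;
  if (count >= MAX_INITIAL_SYNCS_PER_DAY) return false;
  initialSyncsByIp.set(ip, count + 1);
  return true;
}

function allowBodySize(body) {
  return Buffer.byteLength(body, 'utf8') <= MAX_BODY_BYTES;
}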
So that's it, I would appreciate anyone who has any thoughts or feedback regarding this approach given my guiding principles. I couldn't find any examples where other apps have attempted a similar approach, so if anyone knows of any and can link to them I'd be grateful.
I can't really comment on whether specific AES algorithms/keys are secure or not, but assuming they are (and the keys are generated properly), it should not be a problem if other users can access the encrypted data.
You can maybe protect against abuse, without requiring user accounts, by using captchas or similar guards against automated usage. If you require a captcha on new accounts, and set limits on all accounts for data volume and call frequency, you should be OK.
To guard against accidental clear-text data, you might generate a secondary key for each account, and then check on the server with the public secondary key whether the messages can be decrypted. Something like this:
data = secondary_key(user_private_key(cleartext))
This way the data will always be encrypted; in the worst case the server will be able to read it, but others won't.
A few comments on your API :) If you're already using HTTP and POST, you don't really need an id. The POST usually returns a URI that points to the created data. You can then GET that URI, or PUT to it to make changes:
POST /api/data
{"data": "..."}
Response:
Location: /api/data/12345
{"data": "...", "lastmodified": "..." }
To change it:
PUT /api/data/12345
{"data": "..."}
You don't have to do it this way, but it might be easier to implement on the client side, and maybe even help with caching and cache invalidation.
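A rough restify-flavoured sketch of that shape (in-memory storage standing in for MongoDB; everything here is illustrative):

const restify = require('restify');
const crypto = require('crypto');

const server = restify.createServer();
server.use(restify.plugins.bodyParser());

const store = new Map(); // stand-in for MongoDB

// POST creates the resource and tells the client where it lives via Location.
server.post('/api/data', (req, res, next) => {
  const id = crypto.randomBytes(12).toString('hex');
  const record = { data: req.body.data, lastmodified: new Date().toISOString() };
  store.set(id, record);
  res.header('Location', `/api/data/${id}`);
  res.send(201, record);
  return next();
});

// PUT replaces the resource at the URI returned by POST.
server.put('/api/data/:id', (req, res, next) => {
  const record = { data: req.body.data, lastmodified: new Date().toISOString() };
  store.set(req.params.id, record);
  res.send(200, { lastmodified: record.lastmodified });
  return next();
});

server.listen(8080);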

How to fetch data from my google analytics account to use on site?

Consider that I am owner of http://ethernalsite.com
On my website I've added the Google Analytics script and it's happily collecting data about my precious visitors.
I would like to create a page views counter. How can I fetch data from my own account?
I mean, from the site's account on which the data is collected.
I've tried this:
create a browser key on Google's Developer Console and allow access for ethernalsite.com/*
I've got something like: xxxxSyCl5wAzQVSxxxxxKHFcxn6Uxzi-skoFZgo
make an AJAX GET on the client (JavaScript) side: https://www.googleapis.com/analytics/v3/data/ga?ids=ga%3A00263622&metrics=ga%3AuniquePageviews&start-date=2014-08-19&end-date=2014-09-02&max-results=50&key={KEY}
I got the following response to the AJAX GET query (Firebug):
"NetworkError: 401 Unauthorized - https://www.googleapis.com/analytics/v3/data
/ga?ids=ga%3A00263622&metrics=ga%3AuniquePageviews&start-date=2014-08-19&
end-date=2014-09-02&max-results=50&key=xxxxSyCl5wAzQVSxxxxxKHFcxn6Uxzi-skoFZgo"
With content:
{"error":{"errors":[{"domain":"global","reason":"required",
"message":"LoginRequired","locationType":"header","location":"Authorization"}],
"code":401,"message":"Login Required"}}
In the future I would like to cache this value on the server side and allow the client to fetch this data from my server every minute. If you also have tips about solving this problem using Python, I would be grateful.
The Google Developer Console will only give you the CLIENT_ID and CLIENT_SECRET. With these keys you will be able to generate an ACCESS_TOKEN, and you will have to change your query string to:
?ids=ga%3A00263622&metrics=ga%3AuniquePageviews&start-date=2014-08-19&end-date=2014-09-02&max-results=50&access_token=xxxxACCESS_TOKENxxxx&_src=explorer
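Equivalently, the token can go in an Authorization header instead of the query string; a small client-side sketch against the (legacy) v3 Core Reporting API, with placeholder view id and token:

const ACCESS_TOKEN = 'xxxxACCESS_TOKENxxxx'; // obtained via the OAuth 2.0 flow, placeholder here

const params = new URLSearchParams({
  ids: 'ga:00263622',              // the view (profile) id, placeholder
  metrics: 'ga:uniquePageviews',
  'start-date': '2014-08-19',
  'end-date': '2014-09-02',
  'max-results': '50'
});

fetch(`https://www.googleapis.com/analytics/v3/data/ga?${params}`, {
  headers: { Authorization: `Bearer ${ACCESS_TOKEN}` }
})
  .then((res) => res.json())
  .then((report) => console.log(report.totalsForAllResults));

Caching the result server-side, as you planned, also keeps the token off the client entirely.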

Handling successful payment processing but database update failure

I am trying to implement a stripe checkout process in one of my express.js routes. To do this, I have:
Official Node.js Stripe module
Official client-side Stripe module
A JSON logger I use to log things like JavaScript errors, incoming requests, and responses from external services like Stripe, MongoDB, etc.
An Order model defined using mongoose - a MongoDB ODM
My steps are as follows:
Client:
Submit order details which include a stripe payment token
Server:
Create an unpaid order and save to database (order.status is created)
Use stripe client to charge user's credit/debit card
Update order and save to database (order.status is accepted or failed depending on response from Stripe)
Question: If payment is successful after step 2 but an error occurs updating the order in step 3 (due to database server error, outage or similar), what are some appropriate ways to handle this failure scenario and potentially recover from it?
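For reference, the three server steps might look roughly like this (an illustrative Express/Mongoose/Stripe sketch, not the actual route; error handling is elided, and step 3 is the write that can fail after a successful charge):

const express = require('express');
const mongoose = require('mongoose');
const stripe = require('stripe')('sk_test_xxx'); // placeholder secret key

const Order = mongoose.model('Order', new mongoose.Schema({
  amount: Number,
  status: String,            // 'created' | 'accepted' | 'failed'
  stripeChargeId: String,
  expectedEndTs: Date        // when we expect the charge and update to have completed
}));

const app = express();
app.use(express.json());

app.post('/checkout', async (req, res) => {
  // 1. Create an unpaid order (status 'created').
  const order = await Order.create({
    amount: req.body.amount,
    status: 'created',
    expectedEndTs: new Date(Date.now() + 60 * 1000) // generous window for the charge + update
  });

  // 2. Charge the card using the client-side Stripe token.
  const charge = await stripe.charges.create({
    amount: order.amount,
    currency: 'usd',
    source: req.body.stripeToken,
    metadata: { order_id: order.id } // lets a reconciliation job match charges back to orders
  });

  // 3. Update the order -- the failure under discussion happens here.
  order.status = charge.paid ? 'accepted' : 'failed';
  order.stripeChargeId = charge.id;
  await order.save();

  res.json({ orderId: order.id, status: order.status });
});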
With payment systems, you always need a consolidation process (hourly, daily, monthly) based on sane accounting principles that will check that every money flow is matched.
In your case, I suggest that every external async call logs the parameters sent and the response received. If you do not have a response within a certain time, you know that something has gone wrong on the external system (Stripe, in your case) or on the way back from the external system (you mention a database failure on your side).
Basically, for each async "transaction" that you spawn, you know when you started it and have to decide on a reasonable amount of time for it to complete. Thus you have an expected_end_ts in the database.
If you have not received an answer by expected_end_ts, you know that something is wrong. Then you could ask Stripe (or another PSP) for the status. Hopefully the API will give you a sane answer as to whether the payment went through or not.
Also note that you should add a step between steps 1 and 2: re-read the database. You want to make sure that every payment request you make is really in the database, stored exactly as you are going to send it.
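A sketch of such a consolidation pass, assuming charges carry a metadata.order_id and orders record an expectedEndTs as in the sketch above (the field names and time window are assumptions):

// Order and stripe as in the sketch above.
async function reconcileStaleOrders() {
  // Orders still 'created' past the time we expected them to be settled.
  const stale = await Order.find({ status: 'created', expectedEndTs: { $lt: new Date() } });

  // Pull recent charges from Stripe and index them by the order id stored in metadata.
  const charges = await stripe.charges.list({ limit: 100 });
  const byOrderId = new Map(
    charges.data
      .filter((c) => c.metadata && c.metadata.order_id)
      .map((c) => [c.metadata.order_id, c])
  );

  for (const order of stale) {
    const charge = byOrderId.get(order.id);
    if (charge) {
      // The charge exists: repair the order record whose update failed.
      order.status = charge.paid ? 'accepted' : 'failed';
      order.stripeChargeId = charge.id;
    } else {
      // No matching charge within the window: the payment never went through.
      order.status = 'failed';
    }
    await order.save();
  }
}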
