MongoDB 3.6.x change stream marshalling/unmarshalling in NodeJS driver - javascript

I am using MongoDB 3.6.2's change streams (with the Mongo NodeJS driver 3.0.1) to try to implement resumable streams of data to the browser. At some point in my code I do a JSON.stringify on the resume token that I get back during an update (i.e. the _id for the update from the change stream). I send this across the wire to the front-end app, and when there is a disconnect and subsequent reconnect, this information is sent back to the server to let it know where to resume from. However, I seemingly cannot simply supply this JSON object back to the driver to resume from, as I get an "invalid type for the resume token" runtime error.
An example of what the stringify is resulting in:
{"_data":"glpeTK8AAAABRmRfaWQAZFoygBEXtikxY6F/zgBaEAQkFlJHID5PgaLDUFQD2jMyBA=="}
The actual resume token appears to be a specialized buffer object in the form:
{
    _data: {
        buffer: Buffer(49),
        position: 49,
        sub_type: 0,
        _bsontype: "Binary"
    }
}
My problem is, of course, in getting the string back into an actual resume token. The Buffer(49) itself seems to be getting converted into a base64 string which is then assigned to _data; I am uncertain what the other fields are. I have not been able to find much documentation on this sort of marshalling/unmarshalling of the tokens to handle resumption of the streamed data to the client. (Given multiple node servers for scaling, simply keeping the token on the server is not really a good option: that server may go down, and when the client tries to reconnect it should hold the token for where it left off, so that the next server it connects to can pick up from there.)
In general it seems the resume tokens have been locked down hard from the developer. A token contains valuable information that I could use (what collection we are on, the timestamp for the update, etc.), but none of this is made available to me (although it is apparently a feature they will be adding for 3.7). Likewise, I can't even get a resume token for the current moment in time for a given collection (very useful if I've read a collection in and haven't had any updates, but don't want to read it in fully again after a disconnect/reconnect just because no updates have occurred). But hopefully some of these facilities will be added as Mongo realizes their usefulness.
I have tested successfully using the resume tokens to resume a stream when there is no marshalling/unmarshalling involved (i.e. the token sits as an object on the server and is never converted to a wire-acceptable form), but this is not very useful in a scaled environment.

Just in case anyone else has this problem I thought I would post my current solution, though I still invite better solutions.
Through the magic of BSON, I simply serialize the resume token, convert that buffer to base64, and send that to the browser. Then when the browser sends it back after a disconnect/reconnect, I simply make a buffer from the base64, and use bson to deserialize that buffer. The resulting token works like a charm.
That is, my marshalling of the update token looks like this:
b64String = bson.serialize(resumeToken).toString('base64');
And my unmarshalling of the base64 token sent back after a disconnect/reconnect looks like this:
token = bson.deserialize(Buffer.from(b64String, 'base64'));
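Put together, a minimal round-trip sketch looks like this (assuming the bson 1.x package that accompanies driver 3.0, where the BSON class is instantiated; the variable and collection names are illustrative):
const BSON = require('bson');
const bson = new BSON();

// Marshal: BSON-serialize the token (including its Binary _data field)
// to a Buffer, then base64-encode that Buffer for the wire
function tokenToString(resumeToken) {
    return bson.serialize(resumeToken).toString('base64');
}

// Unmarshal: decode the base64 back into a Buffer, then BSON-deserialize
function stringToToken(b64String) {
    return bson.deserialize(Buffer.from(b64String, 'base64'));
}

// Resuming the stream with the reconstructed token:
// changeStream = collection.watch([], { resumeAfter: stringToToken(b64String) });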

Alternatively, you can use the MongoDB Extended JSON library (npm module mongodb-extjson) to stringify and parse the token.
For example:
const EJSON = require("mongodb-extjson");
resumeToken = EJSON.stringify(changeStreamDoc._id);
and to resume:
changeStream = collection.watch([], { resumeAfter: EJSON.parse(resumeToken) });
Tested on mongodb-extjson version 2.1.0 and MongoDB v3.6.3

Related

React Native bluetooth device authentication

I'm trying to communicate with a bluetooth LE device, but have been told I need to "authenticate" before being able to read / write data. The hardware dev has told me that the device sends a key to the recipient, and I need to reply with 12000000000000000000000000. He has tested this successfully with the NRF Connect desktop app (but I need to replicate this in react native).
I've tried sending 12000000000000000000000000 (converted to base64) to the device's notify characteristic as soon as I connect to it using the code below:
const Buffer = require("buffer").Buffer;
const loginString = "12000000000000000000000000";
const hexToBase64 = Buffer.from(loginString).toString("base64");
characteristics[0].writeWithResponse(hexToBase64).then(()=>...)
However, I keep getting "GATT exception from MAC address C7:7A:16:6B:1F:56, with type BleGattOperation{description='CHARACTERISTIC_WRITE'}", even though the code executes properly (no catch error).
I've looked through the react-native-ble-plx docs and still haven't found a solution to my problem; any help would be appreciated!
In case the BLE device runs an Authorization Control Service (ACS) (UUID 0x183D), your application would play the client role. In ACS, there are two characteristics that a client is able to write to: "ACS Data In" (UUID 0x2B30) and "ACS Control Point" (UUID 0x2B3D), while only the "ACS Data Out Notify" characteristic (UUID 0x2B31) has a Notify property, which is initiated by the server but enabled by the client. The data structure in these characteristics is little-endian in the payload, so converting the key to little-endian before the write operation may work. This is what I know from my recent study of the BLE documents; I hope it helps.
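If the key string is hexadecimal, a byte-order conversion along these lines might produce the little-endian payload (a sketch only, untested against the device):
const { Buffer } = require("buffer");

// Parse the key as raw hex bytes, then reverse the byte order for little-endian
const key = Buffer.from("12000000000000000000000000", "hex").reverse();
const payloadBase64 = key.toString("base64"); // ble-plx write calls take base64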
The data type for writing to a characteristic is typically an array of bytes.
Try converting the string to a byte array, something like this:
const loginString = "12000000000000000000000000";
const byteArray = Array.from(loginString, c => c.charCodeAt(0));
characteristics[0].writeWithResponse(byteArray).then(() => ...)
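One thing to check against the react-native-ble-plx docs: its write methods take a base64-encoded string rather than an array. If the device expects the key as 13 raw hex bytes (instead of the 26 ASCII characters of the string), a sketch like this may be closer:
const { Buffer } = require("buffer");

const loginString = "12000000000000000000000000";
// "hex" interprets each pair of characters as one byte (13 bytes in total),
// whereas the default encoding would send the ASCII codes of the characters
const rawBytes = Buffer.from(loginString, "hex");

characteristics[0]
    .writeWithResponse(rawBytes.toString("base64"))
    .then(() => console.log("key written"))
    .catch((err) => console.warn(err));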

JWTs before decoding and after signing are different

I am in the process of testing a migration of data from an already existing server to a new server.
Part of that is checking to make sure JWTs saved on the old server are being sent to the new server correctly. The process is to fetch tokens from the old server to the test server, and then send them to the new server to check to see if they exist. The old server sends unsigned JWTs over to my test server, and then I need to sign them in order to check them against the new server.
In order to get a signature for these tokens, I run the following code:
// Get the object represented by the token
this.token = jwt.decode(`${this.unsignedToken}.a`)
// Turn the object into a signed token string
this.signedToken = jwt.sign(this.token, this.tokenSecret)
I concatenated the '.a' onto the end of the unsignedToken because jwt.decode needs a "signed" token in order to get the data back.
The problem I am having is that the unsignedToken and the signedToken don't have the same payload section of the JWT, even though they both decode to the exact same object. Because of that, the endpoint the signedTokens are sent to isn't able to match them up to what is on that server properly.
When I manually check the unsigned token against the new server's database, it does exist, but because the signedToken isn't the same string before the signature, the test process won't work.
Am I doing something wrong?
Edit:
Answer:
When I manually decoded the two tokens as base64 at https://www.base64decode.org/, I discovered that the unsignedToken included a URL that looked like "https:\/\/" while the signedToken's url was "https://".
For anyone out there coming across this as well, my final solution was to change how I signed the token:
this.signedToken = jwt.sign(JSON.stringify(this.token).replace(/\//g, '\\/'), this.tokenSecret)
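A minimal sketch of the mismatch and the fix, assuming the jsonwebtoken package (the payload and secret here are illustrative):
const jwt = require('jsonwebtoken');

const payload = { url: 'https://example.com' };
const secret = 'test-secret';

// Signing the plain JSON string encodes the URL as "https://"
const plain = jwt.sign(JSON.stringify(payload), secret);

// Escaping forward slashes reproduces the old server's "https:\/\/" form
const escaped = jwt.sign(JSON.stringify(payload).replace(/\//g, '\\/'), secret);

// Both decode to the same object, but the payload segments differ
console.log(plain.split('.')[1] === escaped.split('.')[1]); // false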

App API design advice specifically around security

I'm building an app and would like some feedback on my approach to building the data sync process and API that supports it. For context, these are the guiding principles for my app/API:
Free: I do not want to charge people at all to use the app/API.
Open source: the source code for both the app and API are available to the public to use as they wish.
Decentralised: the API service that supports the app can be run by anyone on any server, and made available for use to users of the app.
Anonymous: the user should not have to sign up for the service, or submit any personal identifying information that will be stored alongside their data.
Secure: the user's data should be encrypted before being sent to the server, anyone with access to the server should have no ability to read the user's data.
I will implement an instance of the API on a public server which will be selected in the app by default. That way, initial users of the app can sync their data straight away without needing to find or set up an instance of the API service. Over time, if the app is popular, users will hopefully set up other instances of the API service, either for themselves or to make available to other users of the app should they wish to use a different instance (or if the primary instance runs out of space, goes down, etc.). They may even access the API in their own apps. Essentially, I want them to have the choice to be self-sufficient and not have to rely on others providing an instance of the service for them, for reasons of privacy, resilience, cost-saving, etc. Note: the data in question is not sensitive (i.e. financial, etc.), but it is personal.
The user's sync journey works like this:
User downloads the app, and creates their data in the process of using the app.
When the user is ready to initially sync, they enter a "password" in the password field, which is used to create a complex key with which to encrypt their data. Their password is stored locally in plain text but is never sent to the server.
User clicks the "Sync" button, their data is encrypted (using their password) and sent to the specified (or default) API instance and responds by giving them a unique ID which is saved by the app.
For future syncs, their data is encrypted locally using their saved password before being sent to the API along with their unique ID which updates their synced data on the server.
When retrieving synced data, their unique ID is sent to the API which responds with their encrypted data. Their locally stored password is then used to decrypt the data for use by the app.
I've implemented the app in javascript, and the API in Node.js (restify) with MongoDB as a backend, so in practice a sync request to the server looks like this:
1. Initial sync
POST /api/data
Post body:
{
    "data": "DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd..."
}
Response:
{
    "id": "507f191e810c19729de860ea",
    "lastUpdated": "2016-07-06T12:43:16.866Z"
}
2. Get sync data
GET /api/data/507f191e810c19729de860ea
Response:
{
    "data": "DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd...",
    "lastUpdated": "2016-07-06T12:43:16.866Z"
}
3. Update synced data
POST /api/data/507f191e810c19729de860ea
Post body:
{
    "data": "DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd..."
}
Response:
{
    "lastUpdated": "2016-07-06T13:21:23.837Z"
}
Their data in MongoDB will look like this:
{
    "id": "507f191e810c19729de860ea",
    "data": "DWCx6wR9ggPqPRrhU4O4oLN5P09onApoAULX4Xt+ckxswtFNH/QQ+Y/RgxdU+8+8/muo4jo/jKnHssSezvjq6aPvYK+EAzAoRmXenAgUwHOjbiAXFqF8gScbbuLRlF0MsTKn/puIyFnvJd...",
    "lastUpdated": "2016-07-06T13:21:23.837Z"
}
Encryption is currently implemented using CryptoJS's AES implementation. As the app provides the user's password as a passphrase to the AES encrypt function, it generates a 256-bit key with which to encrypt the user's data before it is sent to the API.
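A minimal sketch of that round trip, assuming the crypto-js npm package (the password and data here are illustrative):
const CryptoJS = require('crypto-js');

const password = 'user-passphrase';          // stored locally, never sent
const userData = { notes: ['first note'] };  // the app's data

// Passing a string passphrase makes CryptoJS derive a 256-bit AES key
// (and IV) from it internally; the result is an OpenSSL-compatible string
const ciphertext = CryptoJS.AES.encrypt(JSON.stringify(userData), password).toString();

// After retrieving the encrypted blob from the API, reverse the process
const bytes = CryptoJS.AES.decrypt(ciphertext, password);
const restored = JSON.parse(bytes.toString(CryptoJS.enc.Utf8));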
That about sums up the sync process; it's fairly simple, but obviously it needs to be secure and reliable. My concerns are:
As the MongoDB ObjectID is fairly easy to guess, it is possible that a malicious user could request someone else's data (as per step 2. Get sync data) by guessing their ID. However, if they are successful they will only retrieve encrypted data and will not have the key with which to decrypt it. The same applies for anyone who has access to the database on the server.
Given the above, is the CryptoJS AES implementation secure enough that, in the realistic event that a user's encrypted data is retrieved by a malicious user, they will not be able to decrypt it?
Since the API is open to anyone and doesn't audit or check the submitted data, anyone could potentially submit any data they wish to be stored in the service, for example:
Post body:
{
    "data": "This is my anyold data..."
}
Is there anything practical I can do to guard against this whilst adhering to the guiding principles above?
General abuse of the service, such as users spamming initial syncs (step 1 above) over and over to fill up the space on the server, or some users using disproportionately large amounts of server space. I've implemented some features to guard against this, such as logging IPs for initial syncs for one day (not kept any longer than that) in order to limit a single IP to a set number of initial syncs per day, and limiting the post body size for syncs (a sketch of both guards follows). These options are configurable in the API, however, so if a user doesn't like the limitations on a public API instance, they can host their own instance and tweak the settings to their liking.
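As a rough sketch of those two guards in restify (parameter values are illustrative, and restify's built-in throttle plugin is a per-second rate limiter rather than a per-day counter, so a strict daily cap on initial syncs would still need a small custom store keyed by IP):
const restify = require('restify');
const server = restify.createServer();

// Limit request rate per client IP (burst of 5, then 1 request/second)
server.use(restify.plugins.throttle({ burst: 5, rate: 1, ip: true }));
// Reject sync payloads larger than 1 MB
server.use(restify.plugins.bodyParser({ maxBodySize: 1024 * 1024 }));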
So that's it. I would appreciate any thoughts or feedback on this approach given my guiding principles. I couldn't find any examples of other apps attempting a similar approach, so if anyone knows of any and can link to them, I'd be grateful.
I can't really comment on whether specific AES algorithms/keys are secure or not, but assuming they are (and the keys are generated properly), it should not be a problem if other users can access the encrypted data.
You can maybe protect against abuse, without requiring accounts, by using captchas or similar guards against automated usage. If you require a captcha on new accounts, and set limits for all accounts on data volume and call frequency, you should be OK.
To guard against accidental clear-text data, you might generate a secondary key for each account, and then check on the server with the public secondary key whether the messages can be decrypted. Something like this:
data = secondary_key(user_private_key(cleartext))
This way the data will always be encrypted; in the worst case the server will be able to read it, but others wouldn't.
A few comments on your API :) If you're already using HTTP and POST, you don't really need an id. A POST usually returns a URI that points to the created data. You can then GET that URI, or PUT to it to make changes:
POST /api/data
{"data": "..."}
Response:
Location: /api/data/12345
{"data": "...", "lastmodified": "..." }
To change it:
PUT /api/data/12345
{"data": "..."}
You don't have to do it this way, but it might be easier to implement on the client side, and maybe even help with caching and cache invalidation.
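A sketch of what that POST handler might look like in restify (assuming restify 5+ plugins and a connected MongoDB collection named syncs; all names here are illustrative):
const restify = require('restify');
const server = restify.createServer();
server.use(restify.plugins.bodyParser());

server.post('/api/data', function (req, res, next) {
    const doc = { data: req.body.data, lastUpdated: new Date() };
    syncs.insertOne(doc, function (err, result) {
        if (err) return next(err);
        // Point the client at the resource it just created
        res.header('Location', '/api/data/' + result.insertedId);
        res.send(201, { data: doc.data, lastUpdated: doc.lastUpdated });
        return next();
    });
});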

protect http request URL

I am getting a remote JSON value into my client app as below.
var $Xhr = Ti.Network.createHTTPClient({
    onerror: function($e) {
        Ti.API.info($e);
    },
    timeout: 5000
});
$Xhr.open("GET", "http://***********.json");
$Xhr.send();
$Xhr.onload = function() {
    if ($Xhr.status == 200) {
        try {
            Ti.API.info(this.responseText);
        } catch ($e) {
            Ti.API.info($e);
        } finally {
            $Xhr = null;
        }
    }
};
My JSON URL is static. I would like to protect this URL from stranger eyes after creating the APK file or publishing for iOS.
Also, my server side supports PHP. I have thought about MD5, SHA, etc., but I have never developed a project using these algorithms.
Do you have any suggestion or approach?
Thank you in advance.
I would just say that it is not possible for you to "hide" the endpoint. Your URL will always be visible to the user, because otherwise the user's browser wouldn't know where to send the request.
If you mean to only hide the JSON object, even that is not totally possible. If your javascript knows what the values are, then any client smart enough to understand javascript will be able to decode your encoded JSON object. Remember, your javascript has the decoded object, and a user has full access to it. There is no protection against that. At best, you can hide it from the everyday user by encoding it with MD5 or SHA, as you put it.
If you wish to restrict access to app users only, you will need to authenticate your users first.
Once they are authenticated, you should generate a hash by concatenating the userid (or any user-identifying data) and a key that you know (a string will do), and hashing it using any hashing method; MD5 would be enough for this kind of usage I guess, and SHA is good anyway.
The next step would be to send this hash with every AJAX request to your server. Consider it additional data.
Finally, server-side, before treating the request and fetching the data to be sent, generate a hash the same way you did in your app, using the userid of the requesting user and the same "secret" key you chose. You can now compare both hashes and see if they're identical. If not, then someone probably tried to forge a request from outside your app.
Note that it would be possible for someone authenticated to obtain his hash (which depends on his ID) and then use it in one of his own applications, so it may be a good idea to track requests server-side in order to check for any suspicious usage of your API. You could as well change your "secret" key regularly (forcing an update of your app, though), or define an array with a different key for each day of the year in both your app and server code, so that each individual hash key changes every day, recurring each year.
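A minimal sketch of the scheme in Node.js (the userid and secret are illustrative; in a Titanium client you would use an equivalent JS hashing utility, and on the PHP side hash('sha256', $userId . $secret) would produce the same value):
const crypto = require('crypto');

const userId = 'user-42';            // any user-identifying data
const secret = 'shared-secret-key';  // known to both app and server

// Hash of userid + secret, sent as extra data with every AJAX request;
// the server recomputes it the same way and compares the two values
const requestHash = crypto.createHash('sha256').update(userId + secret).digest('hex');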

Adding couchdb persistence to a socketio json feed in node

I'm currently researching how to add persistence to a realtime twitter JSON feed in node.
I've got my stream set up and it's broadcasting to the client, but how do I go about storing this data in a JSON database such as couchdb, so I can access the stored JSON when the client first visits the page?
I can't seem to get my head around couchdb.
var array = {
    "tweet_id": tweet.id,
    "screen_name": tweet.user.screen_name,
    "text": tweet.text,
    "profile_image_url": tweet.user.profile_image_url
};
db.saveDoc('tweet', strencode(array), function(er, ok) {
    if (er) throw new Error(JSON.stringify(er));
    util.puts('Saved my first doc to the couch!');
});
db.allDocs(function(er, doc) {
    if (er) throw new Error(JSON.stringify(er));
    //client.send(JSON.stringify(doc));
    console.log(JSON.stringify(doc));
    util.puts('Fetched my new doc from couch:');
});
These are the two snippets i'm using to try and save / retrieve tweet data. The array is one individual tweet, and needs to be saved to couch each time a new tweet is received.
I don't understand the id part of saveDoc. When I make it unique, db.allDocs only lists IDs and not the content of each doc in the database; and when it's not unique, it fails after the first db entry.
Can someone kindly explain the correct way to save and retrieve this type of json data to couchdb?
I basically want to load the entire database when the client first views the page. (The database will have less than 100 entries.)
Cheers.
You need to insert the documents into the database. You can do this by inserting the JSON that comes from the twitter API, or you can insert one status at a time (in a for loop).
You should create a view that exposes that information. If you saved the JSON directly from Twitter, you are going to need to emit several times in your map function.
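For example, a map function along these lines (a sketch; field names taken from the question's document) would expose each tweet by its id:
// Map function inside a design document
function (doc) {
    if (doc.tweet_id) {
        // Key by tweet id so the view can be queried per tweet
        emit(doc.tweet_id, {
            screen_name: doc.screen_name,
            text: doc.text,
            profile_image_url: doc.profile_image_url
        });
    }
}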
These operations (ingestion and querying) are not the same thing, so you should really do them at different times in your program.
You should consider running a background process (maybe something as simple as a setInterval) that updates your database. Or you can use something like clarinet (http://github.com/dscape/clarinet) to parse the Twitter streaming API directly.
I'm the author of nano, and here is one of the tests that does most of what you need:
https://github.com/dscape/nano/blob/master/tests/view/query.js
For the actual query semantics and for you learn a bit more of how CouchDB works I would suggest you read:
http://guide.couchdb.org/editions/1/en/index.html
If you find it useful, I would suggest you buy the book :)
If you want to use a module to interact with CouchDB I would suggest cradle or nano.
You can also use the default http module you find in Node.js to make requests to CouchDB. The downside is that the default http module tends to be a little verbose. There are alternatives that give you a better API for dealing with http requests; the request module is really popular.
To get data you need to make a GET request to a view; you can find more information here. If you want to create a document, you have to make a PUT request to your database.
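As a concrete illustration of the insert/fetch cycle with nano (a sketch; the server URL and database name are illustrative), note that passing include_docs: true to the listing call is what returns the document bodies rather than just their IDs:
var nano = require('nano')('http://localhost:5984');
var tweets = nano.db.use('tweets');

// Save one document per tweet; using the tweet id as _id keeps each
// insert unique without inventing a new key every time
tweets.insert({
    _id: String(tweet.id),
    screen_name: tweet.user.screen_name,
    text: tweet.text,
    profile_image_url: tweet.user.profile_image_url
}, function (err) {
    if (err) console.error(err);
});

// include_docs: true returns full document bodies, not just IDs
tweets.list({ include_docs: true }, function (err, body) {
    if (err) return console.error(err);
    var allTweets = body.rows.map(function (row) { return row.doc; });
    console.log(allTweets); // everything needed for the initial page load
});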
