Meteor: How to prevent client from accessing methods - javascript

All Meteor methods can be called the same way from both the client and the server.
Let's say a user knows or can predict all the method names on the server; he is then able to call them and use their results however he wants.
For example:
A method which performs a cross-domain HTTP request and returns the response can be used to overload the server by requesting huge amounts of data, e.g. Meteor.call('httpLoad', "google.com");, or a method which loads data from Mongo can be used to access database documents if the client knows the document _id, e.g. Meteor.call('getUserData', "_jh9d3nd9sn3js");.
So, how can these situations be avoided? Maybe there is a better way to store server-only functions than in Meteor.methods({...})?

Meteor methods are designed to be accessed from the client. If you don't want this, you just need to define a normal JavaScript function on the server. A really basic example would be:
server/server.js:
someFunction = function(params) {
  console.log('hello');
};
As long as it's in the server folder, the function won't be accessible from the client.
For CoffeeScript users, each file is technically a separate scope, so you would have to define a global variable with @, e.g.
@someFunction = (params) ->
  console.log 'hello'
or, if you want to scope the function to a package:
share.someFunction = (params) ->
  console.log 'hello'
If you have methods that need to be accessible from the client, but only for, say, admin users, you need to add those checks at the start of the Meteor method definition:
Meteor.methods({
  'someMethod': function(params) {
    var user = Meteor.user();
    if (user && (user.isAdmin === true)) {
      // Do something
    } else {
      throw new Meteor.Error(403, 'Forbidden');
    }
  }
});
I'm not going to vouch for the security of this example - it's just that, an example - but hopefully it gives you some idea of how you would secure your methods.
EDIT: Noticed the other answers mention using an if (Meteor.isServer) { ... } conditional. Note that if you are doing this inside methods which are also accessible on the client, the user will still be able to see your server code, even if they can't run it. This may or may not be a security problem for you - basically, be careful if you're hardcoding any 3rd-party API credentials or any kind of sensitive data in methods whose code can be accessed from the client. If you don't need the method on the client, it would be better to just use normal JS functions. If you're wrapping the whole Meteor.methods call with an isServer conditional, the code will be on the server only, but it can still be called from the client.

As rightly stated in other answers, your methods will always be accessible from the client (by design). Yet there is a simple workaround to check whether the call originates from the client or from the server. If you do a
if ( this.connection == null )
this will return true if the method was called from the server. That way you can restrict the method body execution to 'secure' calls.
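A minimal sketch of that check inside a method (the method name and error handling are illustrative assumptions; the check only matters in the server-side execution of the method):
Meteor.methods({
  'serverOnlyTask': function() {
    // this.connection is null when the method was invoked from server code
    if (this.connection != null) {
      throw new Meteor.Error(403, 'This method may only be called from the server');
    }
    // ... do the privileged work here ...
  }
});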

I think this page explains it: http://meteortips.com/first-meteor-tutorial/methods/
I'm quoting:
"The safer approach is to move these functions to the isServer conditional, which means:
Database code will execute within the trusted environment of the server.
Users won’t be able to use these functions from inside the Console, since users don’t have direct access to the server.
Inside the isServer conditional, write the following:
Meteor.methods({
  // methods go here
});
This is the block of code we’ll use to create our methods."
and so on. I hope this helps.
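Putting the quoted advice together, a minimal sketch, assuming the file is loaded on both the client and the server:
if (Meteor.isServer) {
  Meteor.methods({
    // methods go here
  });
}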

With proper app design, you shouldn't care whether a request came through the web UI or via something typed in a console window.
Basically, don't put generic, abuse-worthy functions in Meteor.methods, implement reasonable access controls, and rate-limit and/or log anything that could be a problem.
Any server-side function defined in Meteor.methods will have access to the current user id through this.userId. This userId is supplied by Meteor, not by a client API parameter.
Since that Meteor method server-side code knows the login status and userId, it can then do all the checking and rate limiting you want before deciding to do the thing the user asked it to do.
How do you rate limit? I've not looked for a module for this lately. In basic Meteor you would add a Mongo collection for user actions, accessible server-side only. Insert timestamped, userId-specific data on every request that arrives via a Meteor method. Before fulfilling a request in the server method code, do a Mongo find for how many such actions occurred from this userId in a relevant period. This is a little work and generates some overhead, but the alternative of rate-limiting via a server-wide underscore-style debounce leaves a function open to both abuse and denial-of-service by an attacker.
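A minimal sketch of that approach (the collection name, method name, and limits are illustrative assumptions):
// server-only file
UserActions = new Mongo.Collection('userActions');

Meteor.methods({
  'httpLoad': function (url) {
    check(url, String);
    if (!this.userId) {
      throw new Meteor.Error(403, 'Login required');
    }
    // log this request
    UserActions.insert({ userId: this.userId, method: 'httpLoad', at: new Date() });
    // allow at most 30 calls per user in the last 60 seconds
    var windowStart = new Date(Date.now() - 60 * 1000);
    var recent = UserActions.find({
      userId: this.userId,
      method: 'httpLoad',
      at: { $gte: windowStart }
    }).count();
    if (recent > 30) {
      throw new Meteor.Error(429, 'Too many requests');
    }
    // ... perform the actual HTTP request and return its result here ...
  }
});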

Related

Retrieve Cookie, store, and use within Node

I'm using the npm package 'request' to make API calls. Upon initial login, I should receive a cookie back; I need to store that cookie indefinitely to make subsequent calls.
I'm doing this in Python with requests like so:
#set up the session
s = requests.session()
#logs in and stores the cookie in session to be used in future calls
request = s.post(url, data)
How do I accomplish this in Node? I'm not tied to anything right now; the request package seems easy to work with, except I'm having issues getting known usernames and passwords to work. That said, I'm sure that's mostly my inexperience with JS/Node.js.
This is all backend code, no browsers involved.
I need to essentially run a logon function, store the returned encrypted cookie, and use it for all subsequent calls against that API. These calls can have any number of parameters, so I'm not sure a callback in the logon function would be a good answer, but I am toying with that, although that would defeat the purpose of 'logon once, get encrypted cookie, make calls'.
Any advice, direction appreciated on this, but really in need of a way to get the cookie data retrieved/stored for future use.
The request package can retain cookies across calls by setting jar: true:
const request = require('request').defaults({jar: true});
request('http://www.google.com', function () {
  request('http://images.google.com');
});
This is adapted from the request documentation: https://github.com/request/request/blob/master/README.md#requestoptions-callback
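If you want an explicit cookie jar (for example, one per logged-in API session), here is a hedged sketch; the URLs and form fields below are placeholders:
const request = require('request');
const jar = request.jar();  // holds the cookies returned by the server
const api = request.defaults({ jar: jar, baseUrl: 'https://api.example.com' });

// log on once; the session cookie from the response is stored in `jar`
api.post({ url: '/login', form: { username: 'user', password: 'secret' } }, function (err, res) {
  if (err) throw err;
  // all subsequent calls through `api` automatically send the stored cookie
  api.get('/some/protected/resource', function (err, res, body) {
    if (err) throw err;
    console.log(body);
  });
});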

I can't call Accounts.findUserByEmail() server-side via Meteor.call

I'm just trying to verify if an Account exists with a particular email, however I learned that Accounts.findUserByEmail() only works server-side.
It would appear the repeatedly-suggested way is to define a Meteor.method() and do all the work in there. Unfortunately, I apparently have no idea what I'm doing, because I'm getting an error that no one else seems to be getting.
component.js:
Meteor.call('confirm', email);
methods.js:
Meteor.methods({
  'confirm': (email) => {
    if (Accounts.findUserByEmail(email)) {
      return;
    }
  }
});
All I get is this error:
Exception while simulating the effect of invoking 'confirm' TypeError: Accounts.findUserByEmail is not a function
Am I completely misunderstanding the dynamic of Meteor.methods + Meteor.call? Is it not actually server-side??
Currently using Meteor package, accounts-password#1.3.3
Meteor simulates method calls on the front-end too by running "stubs" of your methods. The idea is to give a better user experience, because the UI is updated immediately, before the server has responded. However, this also means that if you run server-only code in Meteor methods, you have to make sure that code is only run on the server:
Meteor.methods({
  'confirm': (email) => {
    if (Meteor.isServer && Accounts.findUserByEmail(email)) {
      return;
    }
  }
});
Alternatively, you can place the above method definition in a file that is only loaded on the server, like any file in the /server directory or (recommended) a file under /imports that is only imported by server code. Then you shouldn't need to use Meteor.isServer separately.
If your client-side code includes a method definition, it is treated as a stub, which means that it is run in a special mode that provides "optimistic UI" and its effects on data are undone once the actual server method returns its response to the client.
It could be worthwhile to implement different versions of (at least some of the) methods for the client and server, and to avoid including some of them on the client altogether.
If you choose to use the same function on both the client and the server, there are Meteor.isServer, Meteor.isClient and this.isSimulation (the latter is specifically for the methods), that allow you to execute some of the blocks only on the client/server.
Note that the code in your question does not do what you expect it to, and you do not check the method argument.
For this specific use case, you should probably only implement the method on the server (simply don't import its code in your client build):
Meteor.methods({
  isEmailInSystem(email) {
    check(email, String);
    return !!Accounts.findUserByEmail(email);
  }
});
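For completeness, a sketch of calling it from the client; the result arrives asynchronously in the callback (the email address is a placeholder):
Meteor.call('isEmailInSystem', 'someone@example.com', function (error, exists) {
  if (error) {
    console.error(error);
  } else {
    console.log('Email registered?', exists);
  }
});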
You can read more about the method lifecycle in The Meteor Guide.
From the guide (gist, some details omitted):
1. Method simulation runs on the client - If we defined this Method in client and server code, as all Methods should be, a Method simulation is executed in the client that called it. The client enters a special mode where it tracks all changes made to client-side collections, so that they can be rolled back later. When this step is complete, the user of your app sees their UI update instantly with the new content of the client-side database, but the server hasn't received any data yet.
2. A method DDP message is sent to the server.
3. The Method runs on the server.
4. The return value is sent to the client.
5. Any DDP publications affected by the Method are updated.
6. An updated message is sent to the client, data is replaced with the server result, and the Method callback fires - After the relevant data updates have been sent to the correct client, the server sends back the last message in the Method life cycle: the DDP updated message with the relevant Method ID. The client rolls back any changes to client-side data made in the Method simulation in step 1 and replaces them with the actual changes sent from the server in step 5. Lastly, the callback passed to Meteor.call actually fires with the return value from step 4. It's important that the callback waits until the client is up to date, so that your Method callback can assume the client state reflects any changes done inside the Method.

Security in JavaScript Code

I am starting to build/design a new single page web application and really wanted to primarily use client-side technology (HTML, CSS, JavaScript/CoffeScript) for the front-end while having a thin REST API back-end to serve data to the front-end. An issue that has come up is about the security of JavaScript. For example, there are going to be certain links and UI elements that will only be displayed depending on the roles and resources the user has attached to them. When the user logs in, it will make a REST call that will validate the credentials and then return back a json object that has all the permissions for that user which will be stored in a JavaScript object.
Let's take this piece of JavaScript:
// Generated by CoffeeScript 1.3.3
(function() {
  var acl, permissions, root;
  root = typeof exports !== "undefined" && exports !== null ? exports : this;
  permissions = {
    //data…
  };
  acl = {
    hasPermission: function(resource, permission, instanceId) {
      //code….
    }
  };
  root.acl = acl;
}).call(this);
Now this code setup makes sure that, even through the console, no one can modify the permissions variable. The issue here is that since this is a single-page application, I might want to update the permissions without having to refresh the page (maybe they add a record that then needs to be added to their permissions). The only way I can think of doing this is by adding something like
setPermission: function(resource, permission, instanceId){
  //code…
}
to the acl object; however, if I do that, that means someone in the browser console could also use it to add permissions to themselves that they should not have. Is there any way to add code that cannot be accessed from the browser console but can be accessed from code in the JavaScript files?
Now even if I could prevent the issue described above, I still have a bigger one. No matter what, I am going to need the hasPermission functionality; however, when it is declared this way, I can overwrite that method in the browser console just by doing:
acl.hasPermission = function(resource, permission, instanceId){ return true; };
and now I would be able to see everything. Is there any way to define this method in such a way that a user cannot override it (like marking it as final or something)?
Something to note is that every REST API call is also going to check the permissions, so even if they were to see something they should not, they would still not be able to do anything, and the REST API would reject the request because of the permissions issue. One suggestion has been to generate the templates on the server side; however, I really don't like that idea, as it creates a very strong coupling between the front-end and back-end technology stacks. If, for example, for whatever reason we need to move from PHP to Python or Ruby, and the templates are built on the client side in JavaScript, I only have to rebuild the REST API and all the front-end code can stay the same, but that is not the case if I am generating templates on the server side.
Whatever you do: you have to check all the permissions on the server-side as well (in your REST backend, as you noted). No matter what hoops you jump through, someone will be able to make a REST call that they are not supposed to make.
This effectively makes your client-side security system an optimization: you try to display only allowed operations to the user and you try to avoid round-trips to the server to fetch what is allowed.
As such you don't really need to care if a user can "hack" it: if they break your application, they can keep both parts. Nothing wrong can happen, because the server won't let them execute an action that they are not authorized to.
However, I'd still write the client-side code in a way that expects an "access denied" as a valid answer (and not necessarily an exception). There are many reasons why that response might come: if the permissions of the logged-in user are changed while he has a browser open, then the security descriptions on the client no longer match the server, and that situation should be handled gracefully (display "Sorry, this operation is not permitted" and reload the security descriptions, for example).
Don't ever trust JavaScript code or the front-end in general. People can even modify the code before it reaches your browser (sniffers, etc.), and most variables are accessible and modifiable anyway... Trust me: you are never going to be safe on the front-end :)
Always check credentials on the server-side, never only on the front-end!
In modern browsers, you can use Object.freeze or Object.defineProperty to make sure the hasPermission method cannot be redefined.
I don't know yet how to overcome the problem with setPermission. Maybe it's best to just rely on the server-side security there, which as you said you have anyway.
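For the hasPermission part, a minimal sketch of the Object.freeze approach (the acl shape mirrors the question's code; the permission logic is a placeholder):
var acl = {
  hasPermission: function (resource, permission, instanceId) {
    // the real check against the permissions data would go here
    return false;
  }
};
Object.freeze(acl);

// attempts to override are ignored (or throw a TypeError in strict mode)
acl.hasPermission = function () { return true; };
console.log(acl.hasPermission('doc', 'read')); // still false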

Node.js Programming Pattern for getting Execution Context

I am writing a web app in Node.js. Every request the server processes is always in the context of a session, which is either retrieved or created at the very first stage when the request hits the server. After this, the execution flows through multiple modules and callbacks within them. What I am struggling with is creating a programming pattern so that at any point in the code the session object is available, without requiring the programmer to pass it as an argument in each function call.
If all of the code were in one single file I could have had a closure, but if there are function calls to other modules in other files, how do I program so that the session object is available in the called function without passing it as an argument? I feel there should be some link between the two functions in the two files, but how to arrange that is where I am getting stuck.
In general, I would say there is always an execution context - which could be a session or a network request - whose processing is spread across multiple files, and the execution context object needs to be available at all points. There can actually be multiple use cases, like having one Log object for each network request or one Log object per session. The plumbing required to make this work should be fitted in sideways, without the application programmer bothering about it; he just knows that the execution context is available at all places.
I think this should be a fairly common problem faced by everyone, so please give me some ideas.
The following illustrates the problem:
MainServer.js
app = require('express').createServer();
app_module1 = require('AppModule1');
var session = get_session();
app.get('/my/page', app_module1.func1);
AppModule1.js
app_module2 = require('AppModule2');
exports.func1 = function(req,res){
  // I want to know which session context this code is running in
  app_module2.func2(req,res);
}
AppModule2.js
exports.func2 = function(req,res){
  // I want to know the session context in which this code is running
}
You can achieve this using Domains - a new Node 0.8 feature. The idea is to run each request in its own domain, providing a space for per-request data. You can get to the current request's domain without having to pass it all over via process.domain.
Here is an example of getting it setup to work with express:
How to use Node.js 0.8.x domains with express?
Note that domains in general are somewhat experimental and process.domain in particular is undocumented (though apparently not going away in 0.8 and there is some discussion on making it permanent). I suggest following their recommendation and adding an app-specific property to process.domain.data.
https://github.com/joyent/node/issues/3733
https://groups.google.com/d/msg/nodejs-dev/gBpJeQr0fWM/-y7fzzRMYBcJ
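A minimal sketch of the per-request domain idea with Express (the domain API is Node 0.8's experimental module; the data property is an app-specific assumption, as suggested above):
var domain = require('domain');
var express = require('express');
var app = express.createServer();

// create a domain per request and stash per-request context on it
app.use(function (req, res, next) {
  var d = domain.create();
  d.add(req);
  d.add(res);
  d.data = { session: req.session };  // app-specific property
  d.run(next);                        // everything downstream runs inside this domain
});

// deep inside any module, without passing the session around:
exports.func2 = function (req, res) {
  var session = process.domain && process.domain.data.session;
};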
Since you are using Express, you can get the session attached to every request. The implementation is as follows:
var express = require('express');
var app = express.createServer();
app.configure('development', function() {
  app.use(express.cookieParser());
  app.use(express.session({secret: 'foo', key: 'express.sid'}));
});
Then upon every request, you can access session like this:
app.get('/your/path', function(req, res) {
  console.log(req.session);
});
I assume you want to have some kind of unique identifier for every session so that you can trace its context. SessionID can be found in the 'express.sid' cookie that we are setting for each session.
app.get('/your/path', function(req, res) {
  console.log(req.cookies['express.sid']);
});
So basically, you don't have to do anything else but add the cookie parser and enable sessions for your Express app, and then when you pass the request to these functions, you can recognize the session ID. You MUST pass the request though; you cannot build a system where it just knows the session, because you are writing a server and the session is available per request.
What Express does, and the common practice for building an HTTP stack on Node.js, is to use HTTP middleware to "enhance" or add functionality to the request and response objects coming into the callback from your server. It's very simple and straightforward.
module.exports = function(req, res, next) {
  req.session = require('my-session-lib');
  next();
};
req and res are automatically passed into your handler, and from there you'll need to keep them available to the appropriate layers of your architecture. In your example, it's available like so:
AppModule2.js
exports.func2 = function(req,res){
  // the session context this code is running in:
  req.session; // <== right here
}
Nodetime is a profiling tool that does internally what you're trying to do. It provides a function that instruments your code in such a way that calls resulting from a particular HTTP request are associated with that request. For example, it understands how much time a request spent in Mongo, Redis or MySQL. Take a look at the video on the site to see what I mean http://vimeo.com/39524802.
The library adds probes to various modules. However, I have not been able to see how exactly the context (url) is passed between them. Hopefully someone can figure this out and post an explanation.
EDIT: Sorry, I think this was a red-herring. Nodetime is using the stack trace to associate calls with one another. The results it presents are aggregates across potentially many calls to the same URL, so this is not a solution for OP's problem.

How to avoid too many ajax calls and cache json data on the client side

I have a calendar application and it loads all of the event data using AJAX and JSON results. The issue is that I have different views, and right now I have to re-call the server when I change views.
Is there any recommendation for ways I can cache this data on the client side, and check whether I have already loaded these events before firing off more AJAX calls?
What is the best practice for this?
Like hvgotcodes said, an MVC framework would help; try backbone.js (http://documentcloud.github.com/backbone/), for instance.
Alternatively, you might want to consider using jStorage (http://www.jstorage.info/). Every time you need to make an AJAX call, check first whether the result is already in your storage object, and only run the AJAX call if it isn't. On the other end, whenever you finish an AJAX call, store the results in the storage object. Make sure you have some kind of index (a CalendarEvent id) to reference when looking it up in the data store. You might want to add some kind of "expire time" to the data in your storage, too... a timestamp after the AJAX call, and re-request up front if it's out of date.
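A minimal sketch of that check-cache-then-fetch pattern with jStorage (the URL, key scheme, and TTL are illustrative assumptions):
function getEvents(calendarId, cb) {
  var key = 'events-' + calendarId;
  var cached = $.jStorage.get(key);
  if (cached) {
    cb(cached);  // served from local storage, no AJAX call
    return;
  }
  $.getJSON('/api/events', { calendar: calendarId }, function (data) {
    $.jStorage.set(key, data);
    $.jStorage.setTTL(key, 5 * 60 * 1000);  // treat as stale after 5 minutes
    cb(data);
  });
}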
It's called MVC.
You need to construct a data model for your application, write some sort of Record objects, and then you can determine their status. So your application would have some sort of CalendarEvent model, and when you load data from the server, you would instantiate instances.
So when changing views, you would first check to see if you had the model object for that view, and if you did, you wouldn't need to load it from the server (unless you want to check for changes).
Your scheme doesn't need to be that complicated. If you load events by Id, you can do something like
window.App = {};
window.App.Models = {};
when you load a record you could put
window.App.Models[id] = InstanceOfYourRecord
and that way it's pretty fast to look for records. Or just use a framework (like SproutCore) that has a robust data layer.
I had similar issues on a recent project.
Conceptually, I have the "real" data model (DM) kept on the server, persisted to a database.
To make life sane, the client keeps its own local data model. Outside of the client DM, all the client code thinks it's pulling results locally.
When reading data (GET) from the client DM it:
checks the cache for existing results
invokes appropriate AJAX queries when cached data is not available, then caches the results.
When changing data (POST) via the client DM it:
invalidates the cache as appropriate
invokes appropriate AJAX queries
emits custom jQuery event indicating client DM changed
Note that this client DM also:
centralizes AJAX error handling
tracks AJAX calls still in-flight. (Lets us warn users when leaving pages with unsaved changes).
allows a drop-in, dummy replacement for unit testing, where all the calls hit local data and are completely synchronous.
Implementation notes:
I coded this as a JavaScript class called DataModel. As the design becomes more complex, it makes sense to further break down the responsibilities into separate objects.
jQuery's custom events let you easily implement the observer pattern. Client components update themselves from the client DM whenever it indicates data has changed.
JSON in your remote API helps simplify the code. My client DM stores the JSON results directly in its cache.
The client DM function arguments include callbacks, so everything can naturally be passed along via AJAX when needed: function listAll( contactId, cb ) { ... }
My project only allowed single user logins. If outside parties can change the server datamodel, some sort of has-data-changed probe should be fired regularly to ensure the client cache is still valid.
For my app, multiple client components would request the same data when receiving a client DM changed event. This resulted in multiple AJAX calls with the same info. I fixed this problem with a getJsonOnce() helper, which manages a queue of client component call-backs awaiting the same result.
Example function in my implementation:
listAll: function( contactId, cb ) {
  // pull from cache
  if ( contactId in this.notesCache ) {
    cb( this.notesCache[contactId] );
    return;
  }
  // init queue if needed
  this.listAllQueue[contactId] = this.listAllQueue[contactId] || [];
  // pull from server
  var self = this;
  dataModelHelpers.getJsonOnce(
    '/teafile/api/notes.php',
    {'req': 'listAll', 'contact': contactId},
    function(resp) { self.notesCache[contactId] = resp; },
    this.listAllQueue[contactId],
    cb
  );
}
The getJsonOnce() helper makes sure that if multiple client components request the exact same (uncached) data, that we only send out a single AJAX request and inform everyone once it comes in.
The notesCache is just a simple javascript object:
this.notesCache = {};
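The getJsonOnce() helper itself isn't shown above; here is one possible sketch matching the call signature used in listAll (the queueing details are assumptions):
var dataModelHelpers = {
  // fetch url once; everyone who asks while the request is in flight
  // gets the same response via their queued callback
  getJsonOnce: function (url, params, storeResult, queue, cb) {
    queue.push(cb);
    if (queue.length > 1) {
      return; // a request for this data is already pending
    }
    $.getJSON(url, params, function (resp) {
      storeResult(resp);
      while (queue.length) {
        queue.shift()(resp);
      }
    });
  }
};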
