I am writing a web app in node.js. Every piece of processing on the server happens in the context of a session, which is either retrieved or created at the very first stage, when the request hits the server. After this the execution flows through multiple modules and the callbacks within them. What I am struggling with is creating a programming pattern so that at any point in the code the session object is available without the programmer having to pass it as an argument in each function call.
If all of the code were in one single file I could have used a closure, but when there are function calls to other modules in other files, how do I arrange things so that the session object is available in the called function without passing it as an argument? I feel there should be some link between the two functions in the two files, but how to arrange that is where I am getting stuck.
In general, I would say there is always an execution context, which could be a session or a network request, whose processing is spread across multiple files, and that execution context object should be made available at all points. There can actually be multiple use cases, like having one Log object per network request or one Log object per session. The plumbing required to make this work should be fitted in sideways, without the application programmer having to bother about it; they just know that the execution context is available everywhere.
I think this should be a fairly common problem faced by everyone, so please give me some ideas.
Following is the problem
MainServer.js
app = require('express').createServer();
app_module1 = require('AppModule1');
var session = get_session();
app.get('/my/page', app_module1.func1);
AppModule1.js
app_module2 = require('AppModule2');
exports.func1 = function(req,res){
// I want to know which the session context this code is running for
app_module2.func2(req,res);
}
AppModule2.js
exports.func2 = function(req,res){
// I want to know where the session context in which this code is running
}
You can achieve this using Domains -- a new node 0.8 feature. The idea is to run each request in its own domain, providing a space for per-request data. You can get to the current request's domain via process.domain without having to pass it around everywhere.
Here is an example of getting it setup to work with express:
How to use Node.js 0.8.x domains with express?
Note that domains in general are somewhat experimental and process.domain in particular is undocumented (though apparently not going away in 0.8 and there is some discussion on making it permanent). I suggest following their recommendation and adding an app-specific property to process.domain.data.
https://github.com/joyent/node/issues/3733
https://groups.google.com/d/msg/nodejs-dev/gBpJeQr0fWM/-y7fzzRMYBcJ
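As a rough illustration, here is a minimal sketch of domain-based per-request context in Express. It assumes a session is already attached to the request (e.g. by Express's session middleware), and the data property is our own app-specific convention rather than an official API:

var domain = require('domain');

// Middleware that runs the rest of the request inside its own domain.
app.use(function(req, res, next) {
  var d = domain.create();
  d.add(req);
  d.add(res);
  d.data = { session: req.session }; // app-specific property, as suggested above
  d.run(next); // everything called from here on sees process.domain === d
});

// Later, in any module, during the same request:
// var session = process.domain.data.session;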
Since you are using Express, you can have the session attached to every request. The implementation is as follows:
var express = require('express');
var app = express.createServer();
app.configure('development', function() {
app.use(express.cookieParser());
app.use(express.session({secret: 'foo', key: 'express.sid'}));
});
Then upon every request, you can access session like this:
app.get('/your/path', function(req, res) {
console.log(req.session);
});
I assume you want to have some kind of unique identifier for every session so that you can trace its context. SessionID can be found in the 'express.sid' cookie that we are setting for each session.
app.get('/your/path', function(req, res) {
console.log(req.cookies['express.sid']);
});
So basically, you don't have to do anything else but add the cookie parser and enable sessions for your Express app; then, when you pass the request to these functions, you can recognize the session ID. You MUST pass the request, though. You cannot build a system where the code just knows the session, because you are writing a server and the session is only available on the request.
What Express does, and what is common practice for building an HTTP stack on node.js, is to use HTTP middleware to "enhance" or add functionality to the request and response objects coming into the callback from your server. It's very simple and straightforward.
// Middleware: attach a session object to every request ('my-session-lib' is a placeholder).
module.exports = function(req, res, next) {
  req.session = require('my-session-lib');
  next(); // hand control to the next middleware/handler
};
req and res are automatically passed into your handler, and from there you'll need to keep them available to the appropriate layers of your architecture. In your example, it's available like so:
AppModule2.js
exports.func2 = function(req,res){
// I want to know where the session context in which this code is running
req.session; // <== right here
}
Nodetime is a profiling tool that does internally what you're trying to do. It provides a function that instruments your code in such a way that calls resulting from a particular HTTP request are associated with that request. For example, it understands how much time a request spent in Mongo, Redis or MySQL. Take a look at the video on the site to see what I mean http://vimeo.com/39524802.
The library adds probes to various modules. However, I have not been able to see how exactly the context (url) is passed between them. Hopefully someone can figure this out and post an explanation.
EDIT: Sorry, I think this was a red herring. Nodetime uses the stack trace to associate calls with one another. The results it presents are aggregates across potentially many calls to the same URL, so this is not a solution for the OP's problem.
Related
I am sorry for how I have framed the question in the title but I have started programming very recently so once again, I am really sorry.
I am developing a project with React js as my front-end and Node js as my backend. I have been successful in doing some basic test API calls to confirm the connection between the two, but now, how am I supposed to actually process different actions? For example, while a user is logging in, I need to first check whether they are an existing user or not, sign them up if they are not, delete a user account, change a username, etc.
I tried very hard to look for relevant articles but all I can find are basic "one-time" api calls, what am I supposed to do for an entire batch of operations? From what I have understood, the process of sending a request from React to getting it processed in Node js is like this:
react js ======> (request for operation) ======> node js (processes the operation) ======> sends a response back to react
Please correct me if there are any mistakes in my question...
Thanks a lot!
This question is really broad, so I'm going to focus in on this part at a high level:
I tried very hard to look for relevant articles but all I can find are basic "one-time" api calls, what am I supposed to do for an entire batch of operations?
I'm guessing when you talk about requests and APIs you're talking about HTTP requests and REST APIs, which are fundamentally "stateless" kinds of things; HTTP requests are self-contained, and the backend doesn't have to keep track of any application state between requests in order to speak HTTP.
But backend applications usually do maintain state, often in the form of databases. For example, many REST APIs expose CRUD (create, read, update, and delete) operations for important business entities — in fact, Stack Overflow probably does exactly this for answers. When I post this answer, my browser will probably send some kind of a "create" request, and when I inevitably notice and fix a typo I will send some kind of "update" request. When you load it in your browser, you will send a "read" request which will get the text of the answer from Stack Overflow's database. Stack Overflow's backend keeps track of my answer between the requests, which makes it possible to send it to many users as part of many different requests.
Your frontend will have to do something similar if your project does things which involve multiple interdependent requests — for example, you would need to keep track of whether the user is authenticated or not in order to show them the login screen, and you would need to hold onto their authentication token to send along with subsequent requests.
Edit: Let's walk through a specific example. As a user, I want to be able to create and read notes. Each note should have a unique ID and a string for its text. On the backend, we'll have a set of CRUD endpoints:
var express = require("express");
var router = express.Router();

class Note {
  constructor(id, text) {
    this.id = id;
    this.text = text;
  }
}

// This is a quick and dirty in-memory database!
var notes = {};

router.put("/note/:id", function(req, res) {
  notes[req.params.id] = new Note(req.params.id, "hello"); // TODO: accept the text from the request body as well
  res.send("");
});

router.get("/note/:id", function(req, res) {
  res.send(notes[req.params.id].text);
});

module.exports = router;
Then on the client side, one could create a note and then read it like this:
// this request will create the note
fetch("http://localhost:9000/note/42", { method: 'PUT', body: 'hello note!' });
// this request will read the note, but only after it is created!
fetch("http://localhost:9000/note/42")
.then(res => res.text())
.then(res => console.log(res));
I am new to node.js and am acquainting myself with Express. The following code is my source of confusion:
var http = require("http");
var server = http.createServer(handleRequest);

function handleRequest(req, res) {
  var path = req.url;
  switch (path) {
    case "/n":
      return renderPage_1(req, res);
    default:
      return renderPage_2(req, res);
  }
}
I understand that the server needs to accept an HTTP request (req). However, if we are returning a response, why is the response also an argument in the callback function? I keep running into a dead end thinking that it has to do with the scope of the response object, though I am not sure.
I would greatly appreciate clarification on this matter. I have not been able to find a resource that clears up my confusion.
Best,
Abid
I think the answer to your question is that this is simply how the authors of Express decided to implement the library. At a high level, Express is really just a light-ish weight wrapper that makes it easy to build middleware-based HTTP services with NodeJS. The reason that both the req and res objects are passed to each Express middleware function is that, in practice, web services are rarely able to fulfill an entire request in a single step. Often services are built as layers of middleware that build up a response in multiple steps.
For example, you might have a middleware function that looks for identity information in the request and fetches any relevant identity metadata while setting some auth-specific headers on the response. The request might then flow to an authorization middleware that uses the fetched metadata to determine whether the current user is authorized; if the user is not authorized, it can end the request early by closing the response stream. If the user is authorized, the request continues to the next piece of middleware, and so on. To make this work, each middleware function (each step of the stack) needs to be able to read information from the request as well as write information to the response. Express handles this by passing the request and response objects as arguments to the middleware function, but this is just one way to do it.
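As a rough illustration only (the middleware names, the lookupUser helper, and the header are made up for this sketch, not part of Express), such a chain might look like this:

var express = require('express');
var app = express();

// Hypothetical identity middleware: reads a token and attaches user metadata.
function identity(req, res, next) {
  req.user = lookupUser(req.headers['authorization']); // lookupUser is a stand-in for your own logic
  res.setHeader('X-Auth-Checked', 'true');             // auth-specific response header
  next();                                              // hand off to the next step
}

// Hypothetical authorization middleware: ends the request early if the user may not proceed.
function authorize(req, res, next) {
  if (!req.user || !req.user.isAllowed) {
    res.statusCode = 403;
    return res.end('Forbidden'); // response stream closed, later middleware never runs
  }
  next();
}

app.use(identity);
app.use(authorize);
app.get('/secret', function(req, res) {
  res.send('Hello, ' + req.user.name); // only runs for authorized users
});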
Now the authors could have decided to implement the library differently, such that each route handler was supposed to return an object like { status: 200, content: "Hello, world" } instead of calling methods on the response object, but that would just be a different convention, and you could pretty easily write a wrapper around Express that lets you write your services like this if you wanted.
Hope this helps.
I'm an html5 developer with mainly JavaScript experience. I'm starting to learn the backend using Node.js. I don't have a particular example of this question/requirements. I'd like to call a back end function with JavaScript, but I'm not sure how. I already researched events and such for Node.js, but I'm still not sure how to use them.
Communicating with node.js is like communicating with any other server-side technology: you need to set up some form of API. What kind you need depends on your use case. That is a separate topic, but as a hint, if you need persistent connections go with WebSockets, and if you just need occasional requests go with REST. Here is an example of calling a node function using a REST API and Express.
var express = require('express');
var app = express();
app.post('/api/foo', foo);
function foo(req, res){
res.send('hello world');
};
app.listen(3000);
From the frontend you can post to this REST endpoint like so.
$.post("/api/foo", function(data) {
console.log( "Foo function result:", data );
});
If you're just starting with node.js, don't worry about WebSockets just yet.
You're going to want to create a REST API (most likely) depending on what you're trying to accomplish. You can put that REST API behind some kind of authentication if desired.
A REST API is going to have endpoints for creating/deleting/updating and getting (finding) a document, like a given user.
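For instance, such endpoints often look roughly like this (the paths and responses are trivial placeholders, not taken from the starter project):

var express = require('express');
var app = express();

// Hypothetical user endpoints for a REST API.
app.post('/users', function(req, res) { res.send('create a user'); });                      // create
app.get('/users/:id', function(req, res) { res.send('read user ' + req.params.id); });      // read / find
app.put('/users/:id', function(req, res) { res.send('update user ' + req.params.id); });    // update
app.delete('/users/:id', function(req, res) { res.send('delete user ' + req.params.id); }); // delete

app.listen(3000);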
My recommendation is to work backwards from something that's already working. Clone this app locally and check out the controllers to see examples of how this application interacts with creating users.
https://github.com/sahat/hackathon-starter
Once you create a controller that returns data when a client hits an endpoint (like http://localhost:3000/user/create), you'll want to create some HTML that will interact with the endpoint through a form HTML element. Or you can interact with that endpoint with JavaScript using a library like jQuery.
Let me know if that makes sense to you. Definitely a good starting point is to clone that app and work backwards from there.
Can I suggest trying api-mount? It basically allows calling the API as simple functions without having to think about AJAX requests, fetch, Express, etc. Basically, on the server you do:
const ApiMount = apiMountFactory()
ApiMount.exposeApi(api)
"api" is basically an object of methods/functions that you are willing to call from your web application.
On the web application you then do this:
const api = mountApi({baseUrl: 'http://your-server.com:3000'})
Having done that you can call your API simply like this:
const result = await api.yourApiMethod()
Try it out. Hope it helps.
All Meteor methods can be called the same way from the client and the server side.
Let's say a user knows or can predict all the method names on the server; then they are able to call them and use the results however they want.
Example:
A method which performs a cross-domain HTTP request and returns the response can be used to overload the server by requesting huge amounts of data: Meteor.call('httpLoad', 'google.com');. Or a method which loads data from Mongo can be used to access database documents if the client knows the document _id: Meteor.call('getUserData', '_jh9d3nd9sn3js');.
So, how do I avoid these situations? Maybe there is a better way to store server-only functions than in Meteor.methods({...})?
Meteor methods are designed to be accessed from the client; if you don't want this, you just need to define a normal JavaScript function on the server. A really basic example would be:
server/server.js:
someFunction = function(params) {
console.log('hello');
}
As long as it's in the server folder, the function won't be accessible from the client.
For coffeescript users, each file is technically a separate scope, so you would have to define a global variable with #, e.g.
#someFunction = (params) ->
console.log 'hello'
or if you want to scope the function to a package:
share.someFunction = (params) ->
console.log 'hello'
If you have methods that need to be accessible from the client but only for say admin users, you need to add those checks at the start of the meteor method definition:
Meteor.methods({
'someMethod': function(params) {
var user = Meteor.user();
if (user && (user.isAdmin === true)) {
// Do something
} else {
throw new Meteor.Error(403, 'Forbidden');
}
}
});
I'm not going to vouch for the security of this example - it's just that, an example - but hopefully it gives you some idea of how you would secure your methods.
EDIT: Noticed the other answers mention using an if (Meteor.isServer) { ... } conditional. Note that if you are doing this inside methods which are also accessible on the client, the user will still be able to see your server code, even if they can't run it. This may or may not be a security problem for you - basically, be careful if you're hardcoding any 3rd-party API credentials or any kind of sensitive data in methods whose code can be accessed from the client. If you don't need the method on the client, it would be better to just use normal JS functions. If you're wrapping the whole Meteor.methods call with an isServer conditional, the code will be on the server only, but it can still be called from the client.
As rightly stated in other answers, your methods will always be accessible from the client (by design). Yet there is a simple workaround to check whether a call originates from the client or from the server. If you check
if ( this.connection == null )
inside the method, the condition is true when the method was called from the server. That way you can restrict the method body execution to 'secure' calls.
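As a small sketch, the check could be used like this (the method name is made up):

Meteor.methods({
  'sensitiveMethod': function() {
    // this.connection is null when the call originates on the server itself.
    if (this.connection !== null) {
      throw new Meteor.Error(403, 'This method may only be called from the server');
    }
    // ... trusted, server-only work here ...
  }
});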
I think this page explains it: http://meteortips.com/first-meteor-tutorial/methods/
I'm quoting:
"The safer approach is to move these functions to the isServer conditional, which means:
Database code will execute within the trusted environment of the server.
Users won’t be able to use these functions from inside the Console, since users don’t have direct access to the server.
Inside the isServer conditional, write the following:
Meteor.methods({
// methods go here
});
This is the block of code we’ll use to create our methods."
and so on. I hope this helps.
With proper app design, you shouldn't care whether a request was through the web UI or via something typed in a console window.
Basically, don't put generic, abuse worthy functions in Meteor.methods, implement reasonable access controls, and rate-limit and/or log anything that could be a problem.
Any server-side function defined in Meteor.methods will have access to the current user id through this.userId. This userId is supplied by Meteor, not by a client API parameter.
Since that Meteor method server-side code knows the login status and userId, it can do all the checking and rate limiting you want before deciding to do the thing the user asked it to do.
How do you rate limit? I've not looked for a module for this lately. In basic Meteor you would add a Mongo collection for user actions, accessible server-side only. Insert timestamped, userId-specific data on every request that arrives via a Meteor method. Before fulfilling a request in the server method code, do a Mongo find for how many such actions occurred from this userId in the relevant period. This is a little work and will generate some overhead, but the alternative of rate-limiting via a server-wide underscore-style debounce leaves a function open to both abuse and denial of service by an attacker.
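A rough sketch of that approach, assuming a server-only collection named userActions and a made-up limit of 30 calls per minute:

// Server-only collection of timestamped user actions (do not publish it to clients).
var UserActions = new Mongo.Collection('userActions');

Meteor.methods({
  'expensiveMethod': function() {
    var oneMinuteAgo = new Date(Date.now() - 60 * 1000);
    var recentCalls = UserActions.find({
      userId: this.userId,
      at: { $gte: oneMinuteAgo }
    }).count();
    if (recentCalls > 30) {
      throw new Meteor.Error(429, 'Rate limit exceeded, try again later');
    }
    UserActions.insert({ userId: this.userId, at: new Date() });
    // ... do the actual work here ...
  }
});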
How should I give the modules of my node.js app access to the current session, considering that I can only get it inside the request handler?
It's a fairly complex but modular app, and passing the session around seems a bit difficult; I would have to reimplement a lot of things.
Ideally I would attach it to an object exported with module.exports and require it in other modules. But I'm afraid that on concurrent requests it would get overwritten and I could end up with conflicts.
Have you tried res.locals?
In the Official Docs, it says "Response local variables are scoped to the request, thus only available to the view(s) rendered during that request / response cycle, if any. This object is useful for exposing request-level information such as the request pathname, authenticated user, user settings etcetera. "
When your middleware/route callback is called, you can define res.locals.myProp = "some value" and access it from a view, using locals.myProp.
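For example, a minimal sketch (the property name and route are made up):

// Attach the session to res.locals early in the request...
app.use(function(req, res, next) {
  res.locals.session = req.session;
  next();
});

// ...and it stays scoped to this request/response cycle only.
app.get('/some/path', function(req, res) {
  res.render('page'); // inside the view you can reference locals.session
});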
I solved this by using express-domain-middleware, which binds the incoming request and response from Express to a new domain.
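Roughly, the setup looks like this (check the express-domain-middleware README for the exact API; stashing the session on the domain is my own convention, not part of the library):

var express = require('express');
var app = express();

// Run every incoming request inside its own domain.
app.use(require('express-domain-middleware'));

// Stash per-request data on the active domain.
app.use(function(req, res, next) {
  process.domain.session = req.session;
  next();
});

// Anywhere else in the codebase, during the same request:
// var session = process.domain.session;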