I've been playing with Node.js for a while now, and I've really been enjoying it, but I've run into a wall...
I'm trying to create a connect module to intercept http.ServerResponse methods. My end goal is to allow the user to apply some kind of filter to outgoing data. For example, they would have the option to apply compression or something before the data goes out.
I am having this weird bug though... this method is getting called twice as often as it should be.
Here's my code:
var http = require('http'), orig;
orig = http.ServerResponse.prototype.write;
function newWrite (chunk) {
  console.log("Called");
  orig.call(this, chunk);
}
http.ServerResponse.prototype.write = newWrite;
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write("Hello");
  res.write(" World");
  res.end();
  console.log("Done");
}).listen(12345);
This works, and I get 'Hello World' as output when I access it from a browser, but I get four 'Called' messages in the console, and 'Done' is output twice. This leads me to believe that my server code is somehow getting called twice. I did a console.log of this.constructor in my newWrite method, and the constructor in both cases is ServerResponse, so that doesn't help much.
I'm really confused about what could possibly be going on here. This doesn't directly affect the output, but I could potentially be serving gigabytes of compressed data to many clients simultaneously, and doing everything twice would put undue strain on my server.
This is going to be part of a larger fileserver, hence the emphasis on not doing everything twice.
Edit:
I've already read this question in case you're wondering:
Can I use an http.ServerResponse as a prototype in node.js?
There is no problem with your code: if you add a console.log(req.url) in the createServer callback, you'll probably see that it is actually called twice by your browser. Most browsers make a request for the requested URL and an additional request for /favicon.ico if no favicon is specified in the HTML markup.
You can use connect's favicon middleware to avoid that problem:
http://senchalabs.github.com/connect/middleware-favicon.html
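If you just want to confirm this without pulling in connect, a minimal sketch (assuming you don't need to serve a favicon at all) is to log the URL and short-circuit the favicon request before it reaches your normal handling:
var http = require('http');
http.createServer(function (req, res) {
  console.log(req.url); // expect to see both '/' and '/favicon.ico' logged
  // Short-circuit the favicon request so it never runs the normal response code
  if (req.url === '/favicon.ico') {
    res.writeHead(404);
    res.end();
    return;
  }
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write("Hello");
  res.write(" World");
  res.end();
  console.log("Done");
}).listen(12345);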
Related
So, I'm trying to host my website on Heroku and have set everything up to get my app running.
Whenever I try to submit the form I get undefined errors.
(Screenshots: "Undefined Errors", "Console Errors")
I've set it up to use the port as shown in the documentation:
app.listen(process.env.PORT || 8081, function () {
  console.log('Example app listening on port 8081!');
});
When starting the app locally with heroku local web, I get TypeError: Failed to fetch and the undefined results, but when I go into my .env file and add port=8081 it works perfectly fine.
(Screenshot: "Good result")
When I open it with heroku open I still have that undefined problem.
I don't really have to set a PORT in .env right? Or do I?
I read that the standard port is 80, but that didn't work either.
Can someone please help me?
Thank you!
Here's the link to the public site: https://shrouded-everglades-61993.herokuapp.com/
Here's the link to my GitHub repo: https://github.com/stefanfeldner/capstone-travel-app
So the reason that they're undefined is that they are being set by these lines in main.js:
uiData.imageURL = data[1].imageUrl;
...
uiData.iconCode = data[0].iconCode;
Where data is the object you're retrieving from your /getData endpoint. The problem is that what /getData actually returns is [{}, {}], so of course these values are both undefined, leading to that visual brokenness.
Now, why does /getData return these empty objects? I can't check your server logs, but there are two obvious possibilities based on the way server.js is written.
The first is that there's an error somewhere and you're simply not making it all the way to the end of your try-catch in callApi, so neither weatherData nor pixabayData is being updated.
Second, it's also possible that these calls are successful but that the desired data is not in the results, i.e. that neither of these if statements is true:
if('city_name' in data) {
...
if('hits' in data) {
Again, in this case, neither weatherData nor pixabayData is being updated.
The way that your handler for /sendFormData is written, it doesn't check that callApi actually got any useful data, just that it has finished execution. So your code flow continues on its merry way despite the data objects still being empty.
However, there's a bigger, unrelated design flaw here: What happens if more than one person uses your website? Your client-side code calls /sendFormData, which hopefully correctly populates the global variables weatherData and pixabayData, and then separately calls /getData to try and retrieve this data.
The thing is, though, between the time your client-side calls /sendFormData and /getData, anyone else using your website could separately call /sendFormData and change the data contained in the global variables from the result of your search to the result of their search. So you'd get their search results back instead of yours, since their results overwrote your results on the server. You need to handle getting the API results and sending them back to the requester in a single transaction.
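A minimal sketch of that single-transaction shape (assuming an Express server, and assuming callApi can be changed to return the data it fetches instead of assigning to the globals; both are assumptions about your code, not what it currently does):
// server.js (sketch, not your actual code)
app.post('/sendFormData', async (req, res) => {
  try {
    // Assumed refactor: callApi returns [weatherData, pixabayData]
    // rather than writing to shared module-level variables.
    const [weatherData, pixabayData] = await callApi(req.body);
    res.json([weatherData, pixabayData]); // answer *this* requester directly
  } catch (err) {
    res.status(500).json({ error: 'Upstream API call failed' });
  }
});
With that shape, the client reads its results straight out of the /sendFormData response, and the separate /getData round trip, along with the shared globals, can go away.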
(Re: all the local Heroku Config, that's hard to answer without messing around with your local computer, sorry.)
I am using res.redirect('page.ejs'); and in my browser I get the message:
Cannot GET /page.ejs
I have not declared this in my routes file in the style of:
app.get('/page', function(req, res) {
  res.render('page.ejs');
});
Should this be included in order for the res.redirect() to work?
When I use res.render() instead of res.redirect(), it still works even if I don't have the app.get() code.
So, to understand this, let's look at what each of these methods does.
res.redirect('page.ejs');
// or, more correctly, you're redirecting to an *endpoint*
// (not a page. the endpoint will render a *page*) so it should be:
res.redirect('/page');
This will tell Express to redirect your request to the GET /page endpoint. An endpoint is the Express route you described above:
app.get('/page', function(req, res) {
  res.render('page.ejs');
});
Since you don't have that endpoint defined, it will not work. If you do have it defined, it will execute the function, and the res.render('page.ejs') line will run, which renders and returns page.ejs. You could return whatever you want, though: it could be someOtherPage.ejs, or you could even return JSON with res.json({ message: 'hi' });
res.render('page.ejs');
This will just respond to the client (the front-end / JS / whatever you want to call it) with the rendered page.ejs template. It doesn't need to know whether the other endpoint is present or not, since it returns the page.ejs template itself.
So it's really up to you which one to use, depending on the scenario. Sometimes one endpoint can't handle the request, so it defers to another endpoint which, theoretically, knows how to handle it. In that case, redirect is used.
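To make that concrete, here is a minimal sketch showing both patterns side by side (the /old-page route is made up purely for illustration):
var express = require('express');
var app = express();

// Rendering: responds to this request directly with the page.ejs template.
app.get('/page', function (req, res) {
  res.render('page.ejs'); // looks up views/page.ejs
});

// Redirecting: sends the browser a 302, and the browser then makes a
// brand-new GET /page request, so the route above has to exist.
app.get('/old-page', function (req, res) {
  res.redirect('/page');
});

app.listen(3000);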
Hope that makes sense and clears up the confusion.
(I'm not an expert on the inner workings of Express, but this is a high-level idea of what it's doing.)
You should do res.redirect('/page')
I'm working with an Express application. There are some Express routes, such as
server.get('*' , ... )
etc., which perform some common operations: authentication, validation, and so on.
They also decorate the response with meaningful information: i.e. for every request to the server, it returns not only the expected JSON/HTML but also information regarding the user, some app metadata that the front-end consumes, and so on.
Let's say all this extra metadata comes in a field called extradata in every response from the server.
Now, there is a bug causing a problem: instead of returning its expected response (a JSON payload with a bunch of system logs), the server is sending only this extradata field.
I'm pretty confident the problem is in one of the middlewares, because the code that sends the response in this case is really simple: it's just a res.send() of a JSON object. So I believe this part of the app is requiring some module that sets up a middleware which causes the error. There are a lot of global vars and implicit parameters in the app, so it is really difficult to debug manually.
I attempted to bypass such middlewares programmatically, like:
delete server._router.stack[2];
but it causes a TypeError: Cannot read property 'route' of undefined and thus prevents my app from building; surely this is not the way to go.
So, is there a way to programmatically ignore or bypass Express routes that are already set?
Even better, is there a way to programmatically tap into Express middlewares and log every request and response?
(AFAIK, there are libraries like morgan that log every request, but I don't think they apply to this case since I need to discriminate between middlewares.)
What I generally do is simply use the next function, which Express passes into every middleware callback as the third argument. Something like:
app.use(function(req, res, next) {
  if(...) {
    next();
  } else {
    ...
  }
});
Calling next() passes control on to the next middleware.
So, if I understand correctly, you can check exactly what you need in the if statement and act accordingly.
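For the logging part of your question, a minimal sketch of hand-rolled middleware (not morgan; the wrapping of res.send is just an illustration, not an official Express API) could look like this:
app.use(function (req, res, next) {
  console.log('-> %s %s', req.method, req.url);

  // Wrap res.send so we can see what each response body looks like.
  var originalSend = res.send;
  res.send = function (body) {
    console.log('<- %s %s responded with:', req.method, req.url, body);
    return originalSend.apply(this, arguments);
  };

  next();
});
Register it before the suspect middlewares so every request and the final response body get logged; that should help narrow down which middleware is overwriting your JSON.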
What I would suggest is that you read the Express API documentation, especially the section about middleware, which you can find here. Moreover, try to isolate the suspect middlewares and fix the underlying problem, rather than deleting handlers and taking the easy way out.
I am writing a web app in Node.js. All processing on the server happens in the context of a session, which is either retrieved or created at the very first stage, when the request hits the server. After that, execution flows through multiple modules and the callbacks within them. What I am struggling with is creating a programming pattern so that at any point in the code the session object is available, without the programmer having to pass it as an argument in every function call.
If all of the code were in one single file I could use a closure, but when there are function calls to other modules in other files, how do I arrange things so that the session object is available in the called function without passing it as an argument? I feel there should be some link between the two functions in the two files, but how to arrange that is where I am getting stuck.
More generally, there is always an execution context, which could be a session or a network request, whose processing is spread across multiple files, and that execution context object should be available at all points. There can actually be multiple use cases, like having one Log object for each network request or one Log object per session. The plumbing required to make this work should be fitted in on the side, without the application programmer having to bother about it; they should just know that the execution context is available everywhere.
I think this should be a fairly common problem, so please give me some ideas.
The following illustrates the problem:
MainServer.js
var app = require('express').createServer();
var app_module1 = require('./AppModule1');
var session = get_session();
app.get('/my/page', app_module1.func1);
AppModule1.js
var app_module2 = require('./AppModule2');
exports.func1 = function (req, res) {
  // I want to know which session context this code is running in
  app_module2.func2(req, res);
};
AppModule2.js
exports.func2 = function (req, res) {
  // I want to know the session context in which this code is running
};
You can achieve this using domains, a new Node 0.8 feature. The idea is to run each request in its own domain, providing a space for per-request data. You can get at the current request's domain via process.domain without having to pass it around.
Here is an example of getting it setup to work with express:
How to use Node.js 0.8.x domains with express?
Note that domains in general are somewhat experimental and process.domain in particular is undocumented (though apparently not going away in 0.8 and there is some discussion on making it permanent). I suggest following their recommendation and adding an app-specific property to process.domain.data.
https://github.com/joyent/node/issues/3733
https://groups.google.com/d/msg/nodejs-dev/gBpJeQr0fWM/-y7fzzRMYBcJ
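A minimal sketch of how that could be wired up with Express (assuming some session middleware has already populated req.session, and using a data property as the app-specific spot suggested above; data is not a built-in domain field):
var domain = require('domain');

// Run every request inside its own domain and stash per-request context on it.
app.use(function (req, res, next) {
  var d = domain.create();
  d.add(req);
  d.add(res);
  d.data = { session: req.session }; // app-specific property, not a core API
  d.run(next);
});

// Later, e.g. in AppModule2.js, no argument passing needed:
exports.func2 = function (req, res) {
  var session = process.domain.data.session;
};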
Since you are using Express, you can get the session attached to every request. The implementation is as follows:
var express = require('express');
var app = express.createServer();
app.configure('development', function() {
  app.use(express.cookieParser());
  app.use(express.session({secret: 'foo', key: 'express.sid'}));
});
Then upon every request, you can access session like this:
app.get('/your/path', function(req, res) {
  console.log(req.session);
});
I assume you want to have some kind of unique identifier for every session so that you can trace its context. SessionID can be found in the 'express.sid' cookie that we are setting for each session.
app.get('/your/path', function(req, res) {
  console.log(req.cookies['express.sid']);
});
So basically, you don't have to do anything other than add the cookie parser and enable sessions for your Express app; then, when you pass the request to these functions, you can recognize the session ID. You MUST pass the request, though: you cannot build a system where the code just "knows" the session, because you are writing a server and the session is only available per request.
What Express does, and the common practice for building an HTTP stack on Node.js, is to use HTTP middleware to "enhance" or add functionality to the request and response objects coming into the callback from your server. It's very simple and straightforward.
module.exports = function (req, res, next) {
  req.session = require('my-session-lib');
  next();
};
req and res are automatically passed into your handler, and from there you'll need to keep them available to the appropriate layers of your architecture. In your example, it's available like so:
AppModule2.js
exports.func2 = function (req, res) {
  // I want to know the session context in which this code is running
  req.session; // <== right here
};
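Wired into the MainServer.js from the question, that might look roughly like this (a sketch; './session-middleware' is a made-up filename for the middleware module above):
// MainServer.js (sketch)
var app = require('express').createServer();
var app_module1 = require('./AppModule1');

app.use(require('./session-middleware')); // attaches req.session to every request
app.get('/my/page', app_module1.func1);   // func1 hands req/res on to func2

app.listen(3000);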
Nodetime is a profiling tool that does internally what you're trying to do. It provides a function that instruments your code in such a way that calls resulting from a particular HTTP request are associated with that request. For example, it understands how much time a request spent in Mongo, Redis or MySQL. Take a look at the video on the site to see what I mean http://vimeo.com/39524802.
The library adds probes to various modules. However, I have not been able to see how exactly the context (url) is passed between them. Hopefully someone can figure this out and post an explanation.
EDIT: Sorry, I think this was a red herring. Nodetime is using the stack trace to associate calls with one another. The results it presents are aggregates across potentially many calls to the same URL, so this is not a solution for the OP's problem.
This is a pretty esoteric issue that I can't produce a small test case for, so sorry in advance. But maybe someone has run into something like it previously.
I have code like this (using restify):
server.put("/whatever", function (serverRequest, serverResponse, next) {
serverRequest.pause();
serverRequest.on("data", function (chunk) {
console.log("from outside", chunk);
});
doSomeAsyncStuff(function (err) {
serverRequest.on("data", function (chunk) {
console.log("from inside", chunk);
});
serverRequest.on("end", function () {
next();
});
serverRequest.resume();
});
});
When I hit this server using CURL, this works great. But when I hit it with XMLHttpRequest, I get one less "from inside" log line than I do "from outside" log lines. It seems one of the data events is getting lost, despite my best efforts to pause ASAP.
Here is the CURL command I am using:
curl -X PUT -T file.pdf http://localhost:7070/whatever -v
And here is the XMLHttpRequest code (works in recent versions of Chrome):
var arrayBuffer = fromElsewhere();
var xhr = new XMLHttpRequest();
xhr.open("PUT", "http://localhost:7070/whatever");
xhr.setRequestHeader("Content-Length", arrayBuffer.byteLength);
xhr.setRequestHeader("Content-Type", "application/pdf");
xhr.send(arrayBuffer);
One notable difference is that CURL seems to send Expect: 100-continue before uploading, whereas XMLHttpRequest does not. I tried adding that header manually, but of course it didn't actually do much (i.e. Chrome did not wait for a response; it just sent up all the PDF data along with the original request). Even so, I don't know why this would affect things.
Somewhat predictably, this didn't have anything to do with curl vs. XMLHttpRequest, but instead with the fact that serverRequest.pause is only advisory; it doesn't actually pause right away. That is, it's pretty much useless.
So presumably in the CURL case the timing was nice enough that pause actually worked as expected, whereas in the XMLHttpRequest case the timing was off, and one of the "from outside" data events managed to slip through the "advisory" pause.
There are apparently various fixes for this, discussed in the thread, but I'm still pretty shaky on this whole streams/buffers universe so I won't try to recommend one in this answer.
I've added a documentation pull request in the hopes nobody else tries to use pause assuming that it actually works.
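Purely as an illustration of the general shape those fixes take (a sketch of the manual-buffering idea, not a recommendation from this answer): collect any chunks that arrive before the async work finishes, then replay them afterwards.
server.put("/whatever", function (serverRequest, serverResponse, next) {
  // Collect anything that arrives before the async work finishes,
  // instead of trusting pause() to hold it back.
  var earlyChunks = [];
  var ended = false;
  function bufferChunk(chunk) { earlyChunks.push(chunk); }
  function noteEnd() { ended = true; }
  serverRequest.on("data", bufferChunk);
  serverRequest.on("end", noteEnd);

  doSomeAsyncStuff(function (err) {
    serverRequest.removeListener("data", bufferChunk);
    serverRequest.removeListener("end", noteEnd);

    function handleChunk(chunk) { console.log("chunk", chunk); }
    earlyChunks.forEach(handleChunk);      // replay whatever was buffered
    serverRequest.on("data", handleChunk); // keep handling live data

    if (ended) {
      next();
    } else {
      serverRequest.on("end", function () { next(); });
    }
  });
});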