In-browser JavaScript is pathetically broken in that the only way to make cross-origin requests is with script tags and JSONP. To make this useful, I'm trying to write a Node.js server that, given a callback name and an address, loads the page at that address, wraps it in a call to the callback, and serves the result. However, I know next to nothing about Node.js. If the server's response is loaded from a script tag, it would effectively load the remote web page. Currently, I'm writing the request as localhost:8000/callback/address, so a script tag might be <script src="localhost:8000/alert/https://www.google.com" type="text/javascript"></script>. Here is my code for the server:
var http = require("http");
var request = require("request");
var server = http.createServer(function(req, res){
req.on("end", function(){
console.log("alive");
var url = req.url;
var i = url.indexOf("/", 1);
request(url.substring(i + 1), function(err, ret, body){
res.writeHead(200);
res.write(url.substring(1, i) + "(\"" + body + "\");");
res.end();
});
});
});
server.listen(8000);
Why does this stay loading for a very long time but never actually finish? Judging by the console.log() output, it seems the req.on("end") callback is never even called.
If you don't care about any request data, you could just add req.resume(); after you add your end event handler.
The reason it's getting "stuck" is that since node v0.10, streams start out in a paused state, so you need to unpause them by reading from them in some way. req.resume(); accomplishes this. Once the request stream has been drained (and there may be nothing in it at all), the end event will be emitted.
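For reference, here is a minimal sketch of the server from the question with req.resume() added. The Content-Type header and the JSON.stringify() escaping are illustrative additions, not part of the original code:

var http = require("http");
var request = require("request");

var server = http.createServer(function (req, res) {
  req.on("end", function () {
    var url = req.url;
    var i = url.indexOf("/", 1);
    // Fetch the target page and wrap its body in a call to the named callback.
    request(url.substring(i + 1), function (err, ret, body) {
      res.writeHead(200, { "Content-Type": "text/javascript" });
      // JSON.stringify escapes quotes and newlines in the fetched body.
      res.write(url.substring(1, i) + "(" + JSON.stringify(body) + ");");
      res.end();
    });
  });
  // Drain the (paused) request stream so that 'end' can fire.
  req.resume();
});

server.listen(8000);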
Related
I'm new to Node.js. I understand that the createReadStream() function performs better than readFile(), because createReadStream() reads and writes data in chunks while readFile() first reads the whole content into memory. Thus, if the file is large, readFile() might take longer before the data can be processed further. So I chose to create my server using createReadStream(), as follows.
// Create a server with fs.createReadStream(): better performance and lower memory usage.
var http = require('http');
var url = require('url');
var fs = require('fs');

http.createServer(function (request, response) {
  // Parse the request containing the file name.
  var pathname = url.parse(request.url).pathname;

  // Create a readable stream.
  var readerStream = fs.createReadStream(pathname.substr(1));

  // Set the encoding to UTF8.
  readerStream.setEncoding('UTF8');

  // Handle stream events --> data, end and error.
  readerStream.on('data', function (chunk) {
    // Page found
    // HTTP Status: 200 : OK
    // Content Type: text/html
    response.writeHead(200, {'Content-type': 'text/html'});
    // Write the content of the file to the response body.
    response.write(chunk);
    console.log('Page is being streamed...');
  });

  readerStream.on('end', function () {
    console.log('Page is streamed and emitted successfully.');
  });

  readerStream.on('error', function (err) {
    // HTTP Status: 404 : NOT FOUND
    // Content Type: text/html
    response.writeHead(404, {'Content-type': 'text/html'});
    console.log('Page streaming error: ' + err);
  });

  console.log('Code ends!');
}).listen(8081);

// Console will print the message
console.log('Server running at http://127.0.0.1:8081/');
My .html or .txt file contains three short lines of text. After starting my server I visit my web page by going to http://127.0.0.1:8081/index.html. Everything works fine and the content of index.html is shown in the browser.
But in the browser tab, the loading spinner keeps turning, as if the page were still loading, for about a minute.
Is that normal with a Node.js server? Does the spinner just keep turning at no cost to the server? Or am I missing something, and it isn't supposed to keep turning?
It doesn't look like you are ending your response. The browser probably thinks the request isn't finished and thus continues to "load".
If you look at the Network tab in the developer console you might see the request hasn't finished.
You should be calling response.end():
This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method, response.end(), MUST be called on each response.
I believe you should be calling response.end() in both the readerStream.on('end') and readerStream.on('error') callbacks, after you write the head. This will tell the browser the request is finished, so it can stop the loading indicator.
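For example, the two handlers from the question might become something like the following (the '404 Not Found' body is an illustrative addition):

readerStream.on('end', function () {
  // Tell the browser the response is complete.
  response.end();
  console.log('Page is streamed and emitted successfully.');
});

readerStream.on('error', function (err) {
  // HTTP Status: 404 : NOT FOUND
  response.writeHead(404, {'Content-type': 'text/html'});
  // End the response here too, or a missing file will also leave the browser hanging.
  response.end('404 Not Found');
  console.log('Page streaming error: ' + err);
});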
I'm trying to write a simple Node.js http server which stops and exits its process when a given url is accessed: in this case, /stop. This is my program so far:
var http = require('http');

var server = http.createServer(function (request, response) {
  response.end(function () {
    if (request.url === '/stop') server.close();
  });
});

server.listen(12345);
What I expect to happen is that the callback is called when the response finishes sending, and the server closes if the URL is /stop. However, all this actually does is show a blank page (which I expected) and nothing else.
On the other hand, this program almost works if I omit response.end():
var http = require('http');

var server = http.createServer(function (request, response) {
  if (request.url === '/stop') server.close();
});

server.listen(12345);
It does manage to stop the server, but not until I terminate the GET request by closing the browser window.
How can I get the server to stop when I GET a certain URL without having to close the browser window? (It would be nice if I could send a "Server stopped" message to the browser, too.)
NOTE: I know that I don't actually have the code to differentiate between GET and POST requests, but I would like to be able to handle both, and I'm assuming the answer isn't different depending upon the type of request.
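For what it's worth, one approach that seems to follow from the above is to finish the response explicitly and only then close the server. This is a rough sketch, not a tested answer; with keep-alive connections the process may still take a moment to exit:

var http = require('http');

var server = http.createServer(function (request, response) {
  if (request.url === '/stop') {
    response.end('Server stopped');  // finish this response first
    server.close();                  // then stop accepting new connections
    return;
  }
  response.end('Still running');
});

server.listen(12345);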
I'm using the npm request library and am running into an issue where the request is never sent if I call express's res.send() after calling request. I realize the request callback won't fire if I close the connection, but I'm not even seeing the request being sent in the first place.
This code is being executed on RunKit (formerly TonicDev), an online code editor that allows code execution via endpoints. I'm not seeing this issue on my local machine, so it seems like it may have to do with RunKit. Anyone have any ideas as to what's going on here or how I might work around this?
You can execute the code yourself by going to:
https://runkit.com/gragland/58056bc6e9d9ed00130c84d5 and clicking the endpoint link at the top.
// Helper to return a RunKit compatible express app (runkit.com/tonic/express-endpoint)
var tonicExpress = require("#runkit/tonic/express-endpoint/1.0.0")
// Provide the exports object to the tonicExpress helper
var app = tonicExpress(module.exports)
var request = require('request')

app.get("/", function (req, res) {
  var request_number = 9

  request({
    // To see if the request is sent go to: https://requestb.in/1coqbqn1?inspect
    url: 'http://requestb.in/1coqbqn1',
    method: 'POST',
    json: {
      request_number: request_number,
      message: 'hello'
    }
  })

  // The line below has to be commented out for the above request to be sent.
  // I don't care about the request callback() firing, I just want the request to be sent.
  res.send('Done')
})
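One workaround that seems plausible here (untested on RunKit) is to delay res.send() until the outgoing request has completed, so the response doesn't tear the sandbox down before the request is flushed:

app.get("/", function (req, res) {
  request({
    url: 'http://requestb.in/1coqbqn1',
    method: 'POST',
    json: { request_number: 9, message: 'hello' }
  }, function (err, response, body) {
    // Only respond once the outgoing request has actually finished.
    res.send('Done')
  })
})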
Code is listed below. The problem is that the console.log() fires twice, indicating that the 'end' event on rs fires twice, even though I registered it with once(). If I comment out res.end() it only fires once, so I know the call to res.end() is somehow causing rs to emit 'end' again; I just don't understand why.
I recognize that this could just be a misunderstanding of the event system or the server streaming objects, neither of which I've looked much into.
Where it gets a bit odd, though, is that if I change that console.log() to a res.write() so that it writes to the browser instead, it only writes once, even with res.end() being called.
Thanks in advance for any help you can offer!
require('http').createServer(function (req, res) {
  var rs = require('fs').createReadStream('sample.txt');

  // Set the end option to false to prevent res.end() being automatically called when rs ends.
  rs.pipe(res, { end: false });

  rs.once('end', function () {
    console.log('Read stream completed');
    res.end();
  });
}).listen(8080);
Most likely you are seeing two separate http requests: the one you explicitly sent and the one sent automatically by the browser for /favicon.ico.
The browser looks for a favicon.ico with every page it loads. In this case, because you didn't send HTML with a <link rel="icon" href="favicon.ico" type="image/x-icon"/>, it sends another request looking for it. If you want to avoid this, you can do something like the following:
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  var rs;
  if (req.url !== "/favicon.ico") {
    rs = fs.createReadStream('sample.txt');
    // Set the end option to false to prevent res.end() being automatically called when rs ends.
    rs.pipe(res, { end: false });
    rs.once('end', function () {
      console.log('Read stream completed');
      res.end();
    });
  } else {
    // End the favicon request too, so it doesn't hang.
    res.writeHead(404);
    res.end();
  }
}).listen(3000);
I'm requesting a remote file using an https.request in node.js. I'm not interested in receiving the whole file, I just want what's in the first chunk.
var req = https.request(options, function (res) {
  res.setEncoding('utf8');
  res.on('data', function (d) {
    console.log(d);
    res.pause(); // I want this to end instead of pausing
  });
});
I want to stop receiving the response altogether after the first chunk, but I don't see any close or end methods, only pause and resume. My worry with pause is that a reference to this response will hang around indefinitely.
Any ideas?
Pop this in a file and run it. You might have to adjust the URL to your local Google if you see a 301 redirect response from Google (which is sent as a single chunk, I believe).
var http = require('http');

var req = http.get("http://www.google.co.za/", function (res) {
  res.setEncoding('utf8');
  res.on('data', function (chunk) {
    console.log(chunk.length);
    res.destroy(); // After one run, try commenting this out to compare.
  });
});
To see that res.destroy() really works, comment it out: the response object will then keep emitting 'data' events until the full body has arrived and it closes itself (at which point node will exit this script).
I also experimented with res.emit('end'); instead of destroy(), but during one of my test runs it still fired a few additional chunk callbacks. destroy() seems to be a more immediate "end".
The docs for the destroy method are here: http://nodejs.org/api/stream.html#stream_stream_destroy
But you should start reading here: http://nodejs.org/api/http.html#http_http_clientresponse (which states that the response object implements the readable stream interface.)
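Applied back to the https.request snippet from the question, the same idea might look roughly like this (assuming the same options object as in the question):

var https = require('https');

var req = https.request(options, function (res) {
  res.setEncoding('utf8');
  res.on('data', function (d) {
    console.log(d);
    res.destroy(); // stop receiving further chunks and release the socket
  });
});

req.end(); // https.request needs an explicit end() to actually send the request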