I have a web service (private-bower) that listens on port 5678 on a remote Ubuntu machine.
netstat output:
tcp 0 0 0.0.0.0:5678 0.0.0.0:* LISTEN 10605/node
I don't have any rules in iptables, and ufw status reports inactive.
I get a response from curl ip:5678 only from within the remote machine; when I run it from my local machine, I get no response whatsoever.
The remote IP is pingable from my local machine, by the way.
I didn't write this web service; here's the line where it starts listening:
var server = app.listen(_config.port, function() {
    logger.log('Bower server started on port ' + _config.port);
});
How can I let external IP addresses make requests on that port?
Thanks!
As I understand it, app.listen is binding only to localhost here; you can pass a different bind address as the second argument, as described in the docs. 0.0.0.0 represents 'all available interfaces', while localhost represents 'only the loopback interface':
var server = app.listen(_config.port, '0.0.0.0', function() {
    logger.log('Bower server started on port ' + _config.port);
});
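If you would rather keep the bind address configurable, a minimal sketch could look like the following. Note that _config.host is a hypothetical field, not part of the original config:

// Hypothetical: _config.host is an assumed config field; fall back to all
// interfaces if it is not set.
var bindHost = _config.host || '0.0.0.0';
var server = app.listen(_config.port, bindHost, function() {
    logger.log('Bower server started on ' + bindHost + ':' + _config.port);
});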
I have two AWS EC2 instances: a server and a client Node.js app.
Locally my code is working fine.
But on AWS the client simply shuts down after some time (about 30 seconds) without any warning or exception; for some reason it can't find and connect to the server.
Both AWS instances are running Windows Server 2016 Base.
Both AWS instances have their own separate AWS security group. Just to make sure I'm not blocking anything, both security groups currently allow:
"All traffic to ANY IP", both inbound and outbound.
Both instances run in the same "Availability zone" in AWS.
The server is listening at host: '0.0.0.0', port: 4080.
The client tries to connect to the server's IP at port 4080. I have tried to connect using all possible address options, such as:
Public DNS (IPv4)
IPv4 Public IP
Elastic IP
Private IPs
I can't even ping the server's IP from the client or from my own PC. I can access both AWS instances fine via RDP.
Here is a bit of my code:
SERVER.JS
var server = require('http').createServer();
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ server: server });

var _port = 4080; // the port mentioned above
server.on('request', app); // `app` is the request handler (defined elsewhere, not shown here)
server.listen(_port, '0.0.0.0', function () {
    console.log('SERVER STARTED! (listening on port # ' + _port + ')');
});
CLIENT.JS
var WebSocket = require('ws');
var _ws = new WebSocket(THE_SERVER_IP);
_ws.on('open', function open() {
...
});
_ws.on('message', function (data, flags) {
...
});
_ws.on('close', function close() {
...
});
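For reference, the ws client expects a full URL including the scheme and port, so THE_SERVER_IP would need to be something along these lines (the address below is a placeholder, not a value from the question):

// Placeholder address: substitute the server instance's public IP or DNS name.
// 4080 is the port the server listens on.
var THE_SERVER_IP = 'ws://<server-public-ip>:4080';
var _ws = new WebSocket(THE_SERVER_IP);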
Solution: I added a custom TCP rule to the Windows Firewall on the server where my server.js app is running, allowing the client's IP to connect to port 4080. Done.
When I run a Node.js application on CentOS 6.5 in AWS:
var http = require( "http" );

// Create our HTTP server.
var server = http.createServer(
    function( request, response ){
        // Create a SUPER SIMPLE response.
        response.writeHead( 200, { "content-type": "text/plain" } );
        response.write( "Hellow world from AWS!\n" );
        response.end();
    }
);

// Point the HTTP server to port 3000.
server.listen( 3000 );

// For logging....
console.log( "Server is running on 3000" );
It runs and shows this on the console:
Server is running on 3000
But when I open my browser and visit the public DNS given by Amazon, http://ec2-54-152-55-189.compute-1.amazonaws.com:3000/, it shows "webpage not available". However, when I run curl http://ec2-54-152-55-189.compute-1.amazonaws.com:3000/ in the terminal of the CentOS instance on AWS, it shows:
Hellow world from AWS!
1) Inbound rules:
HTTP -- Anywhere
SSH -- Anywhere
Custom TCP rule (port 3000) -- Anywhere
HTTPS -- Anywhere
Custom UDP rule (port 3000) -- Anywhere
2) Outbound rules:
All traffic | All protocols | All ports | Anywhere
Any help is appreciated. Thanks a lot.
The issue is resolved. Even with the inbound rules set exactly as above (HTTP, SSH, HTTPS, and the custom TCP/UDP rules for port 3000, all open to anywhere), the firewall on the CentOS 6.5 instance itself was not allowing any connections on ports 80 (HTTP), 3000 (custom), or 443 (HTTPS). Therefore, I allowed the required ports in iptables.
See:
http://www.cyberciti.biz/faq/howto-rhel-linux-open-port-using-iptables/ for how to edit iptables and restart the firewall
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-basic-iptables-firewall-on-centos-6 for how to allow a particular port through the firewall
Whenever I deploy my Hapi.js web application to Azure, it starts the server using the socket protocol (see the output below).
socket:\\.\pipe\b5c0af85-9393-4dcb-bd9a-3ba9b41ed6fb
GET /
GET /{param*}
GET /api/employees
POST /api/employees
GET /api/employees/{id}
PUT /api/employees/{id}
DELETE /api/employees/{id}
POST /api/worklog
GET /login
POST /login
Hapi server started # socket:\\.\pipe\b5c0af85-9393-4dcb-bd9a-3ba9b41ed6fb
150914/214730.270, [response], socket:\\.\pipe\b5c0af85-9393-4dcb-bd9a-3ba9b41ed6fb: get / {} 200 (316ms)
However, whenever I run this locally, it starts using HTTP. I have not run into this issue with Express or LoopBack, only Hapi. Is there some sort of configuration that I am missing? This is the server.connection call:
var server = new Hapi.Server();
var host = process.env.host || '0.0.0.0';
var port = process.env.port || 3000;
server.connection({host: host, port: port});
The reason this is a big deal is that I cannot pass socket://<mydomain> to Google as a callback URI for OAuth.
You shouldn't need to pass socket://<domain> to Google; you'd pass the normal https://yourDomain.com (or even https://yourSiteName.azurewebsites.net) to Google as the OAuth callback, and it should work as you would expect.
The fact that the node application is listening on a pipe rather than a normal TCP socket is just an implementation detail of iisnode. Basically, the problem is that node has its own web server, so you can't use it directly with other web servers like IIS, Apache, nginx, etc. iisnode bridges the gap between IIS and node: it allows IIS to listen on the machine's HTTP port (80), and when IIS gets a request on that port, it forwards it to the node process that's listening on a named pipe. This lets you manage your sites in IIS as you normally would on a Windows Server machine, while actually writing your app in node.
You can think of it as two web servers running on the box: one (IIS) acts as a proxy for the other (node), where all the work actually happens. The fact that the iisnode developer chose a named pipe instead of a normal TCP socket is odd (though kind of understandable, since you can't easily reserve a port the way you can a pipe), but it's the way it is.
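As a minimal sketch (assuming a Hapi version where server.connection() is the API, as in the question), the same code can cover both cases, because under iisnode process.env.PORT contains the pipe path rather than a TCP port number:

var Hapi = require('hapi');
var server = new Hapi.Server();

// Locally this is a TCP port (3000); under iisnode on Azure, process.env.PORT
// holds a named-pipe path like \\.\pipe\<guid>, and IIS forwards traffic from
// ports 80/443 to that pipe.
server.connection({ port: process.env.PORT || 3000 });

server.start(function () {
    // On Azure this logs the pipe path, but the public URL to give Google as
    // the OAuth callback is still https://<yoursite>.azurewebsites.net.
    console.log('Hapi server started @ ' + server.info.uri);
});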
I have an unusual problem. I am running a simple Node.js app. The code below works.
var app = require('http').createServer(handler);
var io = require('socket.io').listen(app);
app.listen(8000, '127.0.0.1');
However, if I use app.listen(8000, '192.168.1.4');, no client is able to connect to the server. 192.168.1.4 is the IP address of my local machine.
One thing I noticed is that even when app.listen(8000, '127.0.0.1'); is used, on the local browser, http://localhost:8000/ works but http://192.168.1.4:8000/ does not work.
Can anyone tell me what I have done wrong?
The line:
app.listen(8000, IP_ADDRESS);
means: listen on port 8000, on the interface (Ethernet, Wi-Fi, loopback) that owns that IP address, for connections destined to that IP address.
Therefore, if you use 127.0.0.1, only localhost can connect to it; if you use 192.168.1.4, localhost (via 127.0.0.1) cannot connect to it, and only machines on the 192.168.1.xxx network can connect to it (I'm assuming a netmask of /24).
In order to allow both networks to connect, you can listen to both IP addresses:
var http = require('http');
var app1 = http.createServer(handler);
app1.listen(8000, '127.0.0.1');
var app2 = http.createServer(handler);
app2.listen(8000, '192.168.1.4');
Or, if you don't care where requests come from and want to accept connections arriving on any interface, simply don't pass an IP address:
// listen to port 8000 on all interfaces:
app.listen(8000);
127.0.0.1 (localhost) is the IP address for the loopback adapter. The loopback adapter is a special interface that essentially allows programs to talk to each other on the same machine (communication bypasses physical interfaces).
Your actual IP address (the one that doesn't work in your example) is bound to a network device such as an ethernet adapter.
As suggested, using 0.0.0.0 (all available interfaces) should work if you want to expose your API externally.
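For example, a minimal sketch using the same app object as above, binding explicitly to all IPv4 interfaces (equivalent to omitting the address):

// Accept connections on every network interface, including loopback.
app.listen(8000, '0.0.0.0');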
I've successfully made a test chat app, and I've got a Node.js server with socket.io running on Heroku. On my local computer I have to specify the port number on the client side, matching the port the server has set up. However, that doesn't seem to be the case when I run my server code on Heroku.
I'm using the process.env.PORT variable, since Heroku sets that up:
var port = process.env.PORT || 3000;
http.listen(port, function(){
console.log('listening on *:' + port);
});
Naturally, I find the port number that the app is running on and place it in the URL:
var socket = io('https://xxxx.herokuapp.com:1111');
However, this gives me a net::ERR_CONNECTION_REFUSED error.
I got it to work by removing the port number after the URL (in this example, :1111). I'm wondering why this works, since most tutorials and articles online specify the port, and why my local machine needs the port as well.
When you connect to your https://xxxx.herokuapp.com subdomain on Heroku on port 443 (the port used for an HTTPS connection when no port is specified), Heroku is probably using a proxy or router to route that incoming connection to the particular port your Node.js server is listening on. In the Heroku infrastructure, they know which internal host your server is running on and which port it is using, so they can map a default-port request on your subdomain to the actual port/host.
This is done so that browsers can connect to your subdomain directly on the default port without having to know the particulars of your node server installation and so that Heroku can auto-manage your server and likely share hardware with other customers. You are each running on a different port, but sharing the same machine. The ports are managed entirely by Heroku and this is one way that they are able to put multiple customers on the same hardware without each having to specify a custom port in the browser URL (which would be a non-starter for most customers).
So, Heroku is hosting some sort of proxy for your sub-domain that is listening to the default https port. Thus, you don't have to specify the port in the URL. Internally, they route that connection to your actual port on your actual server.
When running on your desktop, there is no such proxy to do this for you so you have to make sure client and server port numbers match.
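To make the difference concrete, a minimal client-side sketch (using the herokuapp.com subdomain and local port from the question) might look like this:

// Deployed: no explicit port. The browser uses 443 for https, and Heroku's
// router forwards the request to whichever port the dyno was assigned.
var socket = io('https://xxxx.herokuapp.com');

// Local development: the port must match what the server bound to (3000 here).
var localSocket = io('http://localhost:3000');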