I am listening for HTTP requests to localhost on a specified port using chrome.sockets.tcpServer running in a Chrome App. I'm basing my app on the code samples from Google:
https://developer.chrome.com/apps/app_network#tcpServer.
The HTTP requests are coming from a single instance of a Flash application. It starts off working okay. When the first request comes through, the Chrome app makes a connection with a client socket, and seems to use this connection for subsequent requests.
Sooner or later in the same session, though, Chrome opens another client socket for a given request, then quickly throws a chrome.sockets.tcp.onReceiveError for the new client socket. The original socket then throws the same error, and no sockets remain connected.
The Flash code is just making a regular HTTP request (it's not specifically asking for a new port).
Does anyone know:
what I need to do to keep all requests on the same client socket ID?
or how to get the server to cope with changing sockets?
I've put the server code here if anyone has time to take a look:
https://github.com/tarling/http-test/tree/flash/js
I wondered if it was something to do with "Connection: keep-alive", so I have included this in the HTTP response headers
https://github.com/tarling/http-test/blob/flash/js/http.js#L52
Thanks for any insight
I think I've fixed it. Updated code is on GitHub
Changes were:
recognising a result code of -100 from chrome.sockets.tcp.onReceiveError as a disconnection, and therefore disconnecting and closing the client socket. This was after looking at this code
remembering the client socket ID from the last successful chrome.sockets.tcp.onReceive call for subsequent sends
registering listeners for the onReceive and onReceiveError events only once.
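The first change above can be sketched roughly like this (a minimal sketch; isRemoteDisconnect is my own helper name, and the listener registration is guarded so the snippet also loads outside a Chrome App environment):

```javascript
// Result code -100 corresponds to net::ERR_CONNECTION_CLOSED: the peer closed
// the connection. Treat it as a normal disconnect, not a fatal error.
function isRemoteDisconnect(resultCode) {
  return resultCode === -100;
}

// Register the listener once, at startup (not per accepted connection).
if (typeof chrome !== 'undefined' && chrome.sockets && chrome.sockets.tcp) {
  chrome.sockets.tcp.onReceiveError.addListener(function(info) {
    if (isRemoteDisconnect(info.resultCode)) {
      // Disconnect, then close, the client socket that errored.
      chrome.sockets.tcp.disconnect(info.socketId, function() {
        chrome.sockets.tcp.close(info.socketId);
      });
    }
  });
}
```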
Related
I'm developing an AngularJS-driven SPA. I want to show a fullscreen overlay saying something like "Sorry - Server is not available atm" as soon as the connection between client and server is interrupted.
So I wrote a simple while(true) loop which basically just calls /api/ping, which always returns true. When there is no HTTP response, though, the message should appear.
I don't like this approach since it creates a lot of overhead, especially considering mobile connections.
So what would be a decent solution for my case?
I could imagine working with a WebSocket and an onInterrupted (or similar) event.
Basically, when an HTTP endpoint is offline you'll receive an HTTP 404 Not Found status code.
I would say it's better to wait for the error rather than querying the server over and over to check whether it's online or not.
Use the Angular $http provider's interceptors to catch all unsuccessful HTTP status codes and perform an action when you catch one, such as showing a notification to the user that there's no connection available to the server or the server is offline.
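A minimal sketch of such an interceptor, assuming AngularJS 1.x; offlineOverlay is a hypothetical service that shows/hides the fullscreen overlay, not part of Angular:

```javascript
// Flag the connection as lost whenever a request fails at the network level
// (Angular reports status 0 or -1 when no HTTP response arrived at all).
function makeOfflineInterceptor($q, offlineOverlay) {
  return {
    response: function(response) {
      offlineOverlay.hide();          // any successful response means we're back
      return response;
    },
    responseError: function(rejection) {
      if (rejection.status <= 0) {    // no HTTP response at all: server unreachable
        offlineOverlay.show();
      }
      return $q.reject(rejection);    // keep propagating the error to callers
    }
  };
}

// Registration (inside the app's config block):
//   $httpProvider.interceptors.push(['$q', 'offlineOverlay', makeOfflineInterceptor]);
```

Note that a plain 404 or 500 still reaches this interceptor but does not trigger the overlay; only the "no response at all" case does.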
With WebSockets you still need to have some sort of ping mechanism, since there is functionally no difference between an inactive/very slow connection and a dropped connection.
Having said that, the overhead regarding wire traffic is much lower, since no HTTP headers are sent on each request, and this will result in lower processing overheads as well.
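If you do roll your own ping mechanism over a WebSocket, one way to keep it cheap is to separate the liveness decision from the timers. A clock-agnostic sketch (all names are illustrative):

```javascript
// Connection liveness as a small state machine: call pong() whenever any data
// arrives, and isAlive(now) to decide whether the connection should be
// considered dropped. Passing timestamps in makes the logic easy to test.
function makeHeartbeat(timeoutMs, now) {
  let lastSeen = now;
  return {
    pong: function(at) { lastSeen = at; },
    isAlive: function(at) { return at - lastSeen <= timeoutMs; }
  };
}
```

In practice you would wire pong() to the socket's message handler and check isAlive() on a modest interval, showing the overlay when it returns false.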
I'd like to know if there's any way to establish a P2P connection between two browsers using socket.io-client (but I'm willing to use anything else that may do the trick).
Both browsers are currently connected to a node.js app serving HTTP requests with Express, which stores both clients' IP addresses (and ports when running locally). What I'd like to do is add a third connection that links both clients (let's call them A and B) directly, so that messages/data will go straight from one client to another, without transiting through the node.js server.
Is that feasible? If so, how?
So far, I've tried connecting the two clients (let's call them A and B) with the following code:
Client A:
A_to_server_socket = io();
A_to_server_socket.on('p2p', function(address_B) {
    A_to_B_socket = io(address_B); // Instantiates a new socket
    A_to_B_socket.on('connect', function() {
        console.log('Connected!');
    });
});
I'm not sure about the code for client B. However, I've tried:
repeating the above code for B, using B's own address (to override the default of connecting to the server)
repeating the above code for B, this time using A's address
having B_to_server_socket listen for a new connect event
However, regardless of B's code, when running A's code I'm confronted with a "Cross-Origin Request blocked" error on Firefox, or "Failed to load resource: net::ERR_CONNECTION_REFUSED" followed by "net::ERR_CONNECTION_REFUSED" on Chrome.
Any hints towards a solution, or insights for better understanding the problem and how sockets work would be most welcome.
I'll try to summarize my comments into an answer.
In TCP, a connection is made when one endpoint A connects to another endpoint B. To connect to endpoint B, that host must be "listening" for incoming connections originating from other hosts. In a typical web request, the browser establishes a TCP connection to the web server and this works because the web server is listening for incoming requests on a specific port and when one of those requests comes in, it accepts the incoming request to establish an active TCP connection. So, you must have one side initiating the request and another side listening for the request.
For various security reasons, browsers themselves don't "listen" for incoming connections so you can't connect directly to them. They only allow you to connect outbound to some other listening agent on the internet. So, if a browser never listens for an incoming webSocket connection, then you can't establish a true peer-to-peer (e.g. browser-to-browser) webSocket connection.
Furthermore, TCP is designed so that you can't have two browsers both connect to a common server and then somehow have that server connect up their pipelines such that the two browsers are now wired directly to each other. TCP just doesn't work that way. It is possible to have an agent in the middle forwarding packets from one client to another via a separate connection to each (that's how chat applications generally work), but the agent in the middle can't simply plug the two TCP connections together so that packets go directly from one client to the other (no longer passing through the server), the way a fireman would connect two firehoses. It might be possible to devise a complicated scheme that rewrites packet headers so that a packet sent from endpoint A is forwarded to endpoint B and looks like it came from the server, but that still involves the server as the middleman or proxy.
The usual way to solve this problem is to just let each browser connect to a common server and have the server act as the middleman or traffic cop, forwarding packets from one browser to another.
Existing P2P applications (outside of browsers) work by having each client actually listen for incoming connections (act like a server) so that another client can connect directly to them. P2P itself is more complicated than this, because one needs a means of discovering an IP address to connect to (since clients typically aren't in DNS), and often there are firewalls in the way that need some cooperation between the two ends in order to allow the incoming connection through. But, alas, listening for an incoming connection is not something a browser will let plain JavaScript do.
There is no such thing as a "connection between two browsers using socket.io-client."
But there is "Both browsers are connected to a node.js app serving HTTP requests with Express, which keeps track of both clients' IP addresses (and ports when running locally)."
If you want a P2P-style connection between two browsers, the following may be a way to do so.
when you get client A connection, join to a room "P2P"
when you get client B connection, join to a room "P2P"
to exchange between client A and client B, use
socket.broadcast.to('P2P').emit("message", "Good Morning");
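The room-based relay in the steps above can be sketched as a connection handler (handleConnection is my own name; the broadcast call is standard Socket.IO room usage):

```javascript
// Every peer joins a shared room, and each incoming message is re-broadcast
// to everyone else in that room. The server stays in the middle; nothing
// actually goes browser-to-browser.
function handleConnection(socket) {
  socket.join('P2P');
  socket.on('message', function(msg) {
    // broadcast.to(room) sends to every socket in the room except the sender
    socket.broadcast.to('P2P').emit('message', msg);
  });
}

// With a real Socket.IO server:
//   io.on('connection', handleConnection);
```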
Hope this may help.
I'm creating an app where the server and the clients will run on the same local network. Is it possible to use web sockets, or more specifically socket.io, to have one central server and many clients running native apps? The way I understand socket.io to work is that the clients read the web pages served from the server, but what happens when your clients become tablet devices running native apps instead of web pages in a browser?
The scenario I'm working with at the minute will have one central server containing a MEAN app and the clients (iPads) will make GET requests to the data available on the server. However, I'd also like there to be real-time functionality so if someone triggers a POST request on their iPad, the server acknowledges it and displays it in the server's client-side. The iPad apps will (ideally) be running native phonegap applications rather than accessing 192.168.1.1:9000 from their browser.
Is it technically possible to connect to the socket server from the native apps, or would the devices have to send POST requests to a central server that's constantly listening for new 'messages'? I'm totally new to the whole real-time stuff, so I'm just trying to wrap my head around it all.
Apologies if this isn't totally clear, it's a bit hard to describe with just text but I think you get the idea?
Correct me if I am wrong.
You have multiple iPads running a native app. They send POST requests to your Node.js server, which is running on a computer in the same local network. Whenever the server receives a request from an app, you want to display on your computer screen that a request has been received.
If my assumptions about the scenario are correct, then it is fairly easy to do. Here are the steps to do it.
Create a small webpage (front end). Load socket IO in the front end page like this -
<script type="text/javascript" src="YOUR_SERVER_IP/socket.io/socket.io.js"></script>
Then connect to the server using var socket = io(). This should trigger the connection event in your backend.
Handle all POST requests from the apps normally. Nothing special. Just add a small snippet in between: socket.emit('new_request', request_data). This sends a new_request event to the front end.
Handle the new_request in your front end using socket.on('new_request', function(request_data) { ... });. That's it. No need to add anything to your native app for realtime update.
The second step would be a little complicated as it is necessary to make socket variable available inside all POST requests. Since you chose node.js, I don't think you need any help with that.
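The emitting step might be wired up roughly like this (the route name and payload shape are made up for illustration; io stands for the Socket.IO server instance):

```javascript
// Capture the Socket.IO server in a closure so every POST handler can emit
// to the monitoring page without any globals.
function makeRequestHandler(io) {
  return function(req, res) {
    // ...handle the app's POST normally here...
    io.emit('new_request', { path: req.url, at: Date.now() });
    res.end('ok');
  };
}

// Wiring with Express and Socket.IO (not executed here):
//   app.post('/api/things', makeRequestHandler(io));
```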
Not totally clear on your project, but I'll try to give you some pointers.
An effective way to send data between native apps and a server is using a REST server. REST is based on HTTP requests and allows you to modify data on the server, which can connect to your database. The data returned is typically either JSON or XML formatted. See here for a brief intro: http://www.infoq.com/articles/rest-introduction
Android/iOS/etc have built in APIs for making HTTP requests. Your native app would send a request to the server, parse the response, and update your native UI accordingly. The same server can be used from a website using jQuery ajax HTTP requests.
Express.js is more suited to serving web pages and includes things like templating. Look into "restify" (see here: mcavage.me/node-restify/) if you just want to have a REST server that handles requests. Both run on top of node.js (nodejs.org).
As far as real-time communication, if you're developing for iOS look into APNS (Apple Push Notification Service). Apple maintains a persistent connection, and by going through their servers you can easily send messages to your app. The equivalent of this on Android is GCM (Google Cloud Messaging).
You can also do sockets directly if that's easier for you. Be careful with maintaining an open socket on a mobile device though, it can be a huge battery drain. Here's a library for connecting ObjC to Socket.IO using websockets, it may be useful for you: https://github.com/pkyeck/socket.IO-objc
Hope that helps!
To answer your question, it is definitely possible. Socket.io would serve as the central server that can essentially emit messages to all of the clients. You can also make Socket.io listen for messages from any of the clients and serve the emitted message to the rest of the clients.
Here's an example of how socket.io can be used. Simply clone, npm install, and run using 'node app.js'
All you have to do is to provide a valid server address when you connect your socket from the iPad clients:
var socket = io.connect( 'http://my.external.nodejs.server' );
Let us know if you need help with actual sending/receiving of socket events.
It is possible to connect to Websockets from your apps.
If you are using PhoneGap, then you need a plugin that adds WebSocket support to your app (the client), and then you use WebSockets the normal way from JavaScript; see this.
If your app is native iOS, look into this; it could help you.
The primary use of sockets in your case is to be a bidirectional "pipe" between an app and the server. There is no need for the server to send the whole web page to the native app. All you need is to send some data from the server to the client (app) in response to a POST (or GET) request, and then use this data on the client side to update the client's UI in real time. If you are going to use a moderate number of devices (say, tens of them), you can keep all of them permanently connected to the server, holding an individual socket connection open for every server-to-app link. This way you can deliver data and update each client's state in real time.
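The per-device "open pipe" idea can be sketched as a small hub on the server (an illustrative in-memory model, not a specific library's API):

```javascript
// The server keeps a map of connected clients and pushes updates to all of
// them as soon as something changes, instead of waiting to be polled.
function makePushHub() {
  const clients = new Map();           // socketId -> socket-like object
  return {
    add: function(id, socket) { clients.set(id, socket); },
    remove: function(id) { clients.delete(id); },
    push: function(data) {
      clients.forEach(function(socket) { socket.send(JSON.stringify(data)); });
    }
  };
}
```

With real sockets, add() would be called on connection, remove() on disconnect, and push() whenever server-side state changes.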
In fact, web browsers also employ sockets to communicate with web servers. However, since in the general case there is no control over the number of concurrent clients on the Internet, servers conserve their limited networking resources by not keeping sockets open for long: they close the connection just after the web page has been sent to the client (or a timeout has expired). That's how the HTTP protocol works at the low level: the server waits for HTTP clients (browsers) by listening on port 80, responds to them by sending the whole page content, then closes the connection and keeps waiting for further requests on the same port.
In your case it's basically a good idea to use socket.io, as it's a uniform implementation of sockets (well, WebSockets) on both the client and server side. A good starting point is here
I have a client/server application using Node.js on the server and socket.io as the connection mechanism. For reasons relevant to my application, I want to have only one active connection per browser, and reject all connections from other tabs that may be opened later during the session. This works great with WebSockets, but if WebSockets are not supported by the browser and XHR polling is used instead, the disconnection never happens. So if the user just refreshes the page, this is not interpreted as a reconnection (I have a delay for reconnection and session restoring) but as a new tab, which ends in the connection being rejected because the old connection made by this same tab is still active.
I'm looking for a way to effectively end the connection from the client whenever a refresh occurs. I've tried binding to the beforeunload and calling socket.disconnect() on the client side, and also sending a message like socket.emit('force-disconnect') and triggering the disconnect from the server with no success. Am I missing something here? Appreciate your help!
I've read this question and couldn't find it useful for my particular case.
Solved the issue; it turns out it was a bug introduced in socket.io 0.9.5. If you have this issue, just update BOTH your server and client-side code to socket.io > 0.9.9 and set the socket.io client-side option sync disconnect on unload to true, and you're all set.
Options are set this way:
var socket = io.connect('http://yourdomain.com', {'sync disconnect on unload' : true});
You can also get "Error: xhr poll error" if you run out of available open file descriptors. This is likely to happen during a load test.
Check the current open file descriptor size:
ulimit -n
Increase it to a high number:
ulimit -n 1000000
Here's the scenario, I have a client side application, served by PHP on a different server to the node.js + socket.io application. It can connect and receive broadcasts sent from the server. If the connection drops, the application falls back to polling a database (using setInterval()). It attempts to reconnect every 5 polls, and it can successfully reconnect and continue to receive messages.
My problem occurs when the user loads the page and the node server cannot be reached (I turned it off for testing), I then turn on the server and on the 5th poll, it successfully connects to the server, using socket.socket.reconnect();. However, whenever the server broadcasts messages, it doesn't fire the event. Note that this doesn't happen when testing on a phone (which falls back to a different socket method)
I have already seen the question found here Reconnection in socket.io problem in `socket.on('message',function(){})`, however, the socket has not previously been connected so I don't think it could be the session?
EDIT: I changed the socket.socket.reconnect() to socket.socket.connect() and it fixed the problem. If someone could explain the reasons of why this works I'd like to know. I know its because the server isn't actually reconnecting, but would like more info.
Thanks.
Well, you possibly know the reason for this: the server is not reconnecting, it is actually connecting. When you tell socket.io to reconnect, it searches for the previous connection handle, and that's where the problem arises.