If a browser opens a connection to a remote server, is it possible to access that same connection via Javascript?
I have a small Ethernet module on my network that I program sort of like this (pseudocode):
private var socket
while(true) {
if(socket is disconnected) {
open socket
listen on socket (port 80)
}
if(connection interrupt) {
connect socket
}
if(data receive interrupt) {
serve
}
if(disconnection interrupt) {
disconnect socket
}
}
The point is that it listens on one socket for HTTP requests and serves them.
In my web browser, I can connect to the device, making an HTTP GET request for some HTML/JS that I've written, and it works. A connection is opened on the socket and the files come back as HTTP responses.
Now I want to click a button on the webpage and have the browser send an HTTP POST request over that same connection. In my Javascript, I have (edited and formatted for clarity):
// This function sends an HTTP request
function http(type, url, data, callbacks) {
// make a new HTTP request
var request = new XMLHttpRequest();
// open a connection to the URL
request.open(type, url + (data ? "?" + data : ""));
// add headers
if(type == "POST")
request.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
// register callbacks for the request
Object.keys(callbacks).forEach(function(callback) {
request[callback] = function() {
callbacks[callback](request.responseText);
};
});
// send and return the request
request.send();
return request;
}
// Here is where I call the function
http("POST", // use POST method
"http://192.168.1.99", // IP address of the network device
dataToSend, // the data that needs to be sent
{ // callbacks
onloadend: function(data) {
console.log("success. got: " + data); // print 'success' when the request is done
},
onerror: function(data) {
console.log("There was an error."); // print 'error' when it fails
console.log(data);
}
}
);
The issue here is that this opens a new connection to the device, but I want to use the same socket that the browser is already connected to. Is this possible, and if so, how?
There is no application-level control inside the browser to decide whether a new connection or an existing one is used for the next request. In fact, it is perfectly normal for the browser to use multiple connections in parallel to the same server, and your server has to be able to deal with this.
Since your server architecture seems to be able to deal with only one connection at a time, you would either need to change the architecture to handle multiple parallel connections or make sure that you only ever need to handle a single connection at a time. The latter could be achieved by not supporting HTTP keep-alive, i.e. by closing the connection immediately after each response. This way a new request will result in a new connection (which is not what you wanted according to your question), but your server will also be able to handle this new connection (which is what you likely ultimately need) since the previous one was closed.
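As a rough sketch of that keep-alive-free approach (the header values here are illustrative, not taken from your firmware), each response the device serves could include a Connection: close header, and the socket could be closed as soon as the body has been written:
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: <length of body>
Connection: close
(...response body, then close the socket...)
With Connection: close on every response, the browser opens a fresh connection for the POST, and the device never has to juggle more than one connection at a time.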
Related
So I have a node-js server and an apache server on the same machine, and one of the javascript files is sending an HTTP request to the node-js server. The node-js server receives the file, reads the data, puts it in the database, as it should, but it isn't sending back any status codes or data.
Here is the XMLHttpRequest send code snippet:
// creates a new http request to be sent to the nodejs server
function createNewUser(username, password, email) {
// The url is the URL of our local nodejs server
var userCreateRequest = new XMLHttpRequest();
userCreateRequest.open( "POST", "http://<machine's IP>:8080/api/users" );
// Build URL-encoded form data for the user
var user = "name="+username+"&password="+password+"&email="+email;
alert(user);
// set content type for http request
userCreateRequest.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
// Event listener for server response
// userCreateRequest.addEventListener("readystatechange", processRequest, false);
// Call process request whenever state changes
userCreateRequest.onreadystatechange = function() {
alert(this.readyState + ", " + this.status);
if (this.readyState == 4 && this.status == 200) {
var response = this.response;
alert(response.name);
}
}
// Send user data to server
userCreateRequest.send(user);
}
And here is the code for the node-js server (with express)
router.route('/users')
.post(function(req, res) { //create a new user
var user = new User();
user.name = req.body.name;
user.password = req.body.password;
user.email = req.body.email;
user.save(function(err) { //add user object to database
if(err)
res.send(err);
res.status(200).json(user);
});
});
As I said above, the code works fine in terms of putting the body of the request in the database and what-not, but the server is not sending back the 200 OK response (or I'm failing to receive it for some reason). The only times I get an alert from onreadystatechange is when it's state 2, status 0, and state 4, status 0.
Try the code snippet below.
user.save(function(err, user) {
    if(err)
        return res.send(err);
    res.status(200).json(user);
});
It did end up being a CORS issue. I'm still a little iffy on exactly why, but after configuring the express/CORS package to allow requests from the IP and port of my apache server, it started working.
My understanding was that cross-origin implies a different domain, whereas both of my servers are (as I understand it) on different ports on the same domain.
Either way, enabling CORS fixed the issue. Thank you to Jaromanda X for pointing it out and getting me on the right track.
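For reference, a minimal sketch of this kind of configuration using the cors middleware for Express (the origin value and the route are examples, not the original code; the origin should be the host and port the Apache-served page is loaded from):
var express = require('express');
var cors = require('cors');
var app = express();
// Only allow cross-origin requests from the page served by Apache.
// Example origin; replace with your Apache server's address and port.
app.use(cors({ origin: 'http://192.168.1.50:80' }));
app.post('/api/users', function(req, res) {
    // ... create the user as in the router code above ...
    res.status(200).json({ ok: true });
});
app.listen(8080);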
I send JSON requests one by one to the Node.js server. After the sixth request, the server can't reply to the client immediately; it takes a while (15 seconds or a bit more) before it sends back 200 OK. Each request writes a JSON value into MongoDB, and response time matters for these REST calls. How can I find the cause of the delay (which tool or script could help me)? My server-side code is like this:
var controlPathDatabaseSave = "/save";
app.use('/', function(req, res) {
console.log("req body app use", req.body);
var str= req.path;
if(str.localeCompare(controlPathDatabaseSave) == 0)
{
console.log("controlPathDatabaseSave");
mongoDbHandleSave(req.body);
res.setHeader('Content-Type', 'application/json');
res.write('Message taken: \n');
res.write('Everything all right with database saving');
res.send("OK");
console.log("response body", res.body);
}
});
My client-side code is as below:
function saveDatabaseData()
{
console.log("saveDatabaseData");
var oReq = new XMLHttpRequest();
oReq.open("POST", "http://192.168.80.143:2800/save", true);
oReq.setRequestHeader("Content-type", "application/json;charset=UTF-8");
oReq.onreadystatechange = function() {//Call a function when the state changes.
if(oReq.readyState == 4 && oReq.status == 200) {
console.log("http responseText", oReq.responseText);
}
}
oReq.send(JSON.stringify({links: links, nodes: nodes}));
}
--Mongodb save code
function mongoDbHandleSave(reqParam){
//Connect to the db
MongoClient.connect(MongoDBURL, function(err, db)
{
if(!err)
{
console.log("We are connected in accordance with saving");
} else
{
return console.dir(err);
}
/*
db.createCollection('user', {strict:true}, function(err, collection) {
if(err)
return console.dir(err);
});
*/
var collection = db.collection('user');
//when saving into database only use req.body. Skip JSON.stringify() function
var doc = reqParam;
collection.update(doc, doc, {upsert:true});
});
}
You can see my REST calls in the Google Chrome developer tools. (The first six calls get 200 OK; the last one is stuck in the pending state.)
--Client output
--Server output
Thanks in advance,
Since it looks like these are Ajax requests from a browser, each browser has a limit on the number of simultaneous connections it will allow to the same host. Browsers have varied that setting over time, but it is likely in the 4-6 range. So, if you are trying to run 6 simultaneous ajax calls to the same host, then you may be running into that limit. What the browser does is hold off on sending the latest ones until the first ones finish (thus avoiding sending too many at once).
The general idea here is to protect servers from getting beat up too much by one single client and thus allow the load to be shared across many clients more fairly. Of course, if your server has nothing else to do, it doesn't really need protecting from a few more connections, but the browser doesn't adapt to that; the limit is hard-wired.
If there are any other requests in progress (loading images, scripts, or CSS stylesheets) to the same origin, those will count toward the limit too.
If you run this in Chrome and open the network tab of the debugger, you can actually see on the timeline exactly when a given request was sent and when its response was received. This should show you immediately whether the later requests are being held up at the browser or at the server.
Here's an article on the topic: Maximum concurrent connections to the same domain for browsers.
Also, keep in mind that, depending upon what your requests do on the server and how the server is structured, there may be a maximum number of requests that the server can efficiently process at once. For example, if you had a blocking, threaded server configured with one thread for each of four CPUs, then once the server has four requests going at once, it may have to queue the fifth request until the first one is done, causing it to be delayed more than the others.
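If the goal is simply to stay under that per-host limit, one possible approach (a sketch, not the only fix) is to send the save requests strictly one after another, starting the next only when the previous one has completed:
// Hypothetical helper: send an array of payloads one at a time over XMLHttpRequest.
function sendSequentially(payloads, url) {
    if (payloads.length === 0) return;
    var oReq = new XMLHttpRequest();
    oReq.open("POST", url, true);
    oReq.setRequestHeader("Content-type", "application/json;charset=UTF-8");
    oReq.onreadystatechange = function() {
        if (oReq.readyState == 4) {
            console.log("status", oReq.status, "response", oReq.responseText);
            // Only now kick off the next request, so at most one connection is in flight.
            sendSequentially(payloads.slice(1), url);
        }
    };
    oReq.send(JSON.stringify(payloads[0]));
}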
I have the following nodeJS Server that seems to work fine. I would like to write a client that receives message from the server and invokes some JS based on the message.
The steps involved are:
User accesses the url http://server.xyz.com:8080/pa
nodeJS Server receives that call and broadcasts to the connected clients that pa is the api call received.
nodeJS Clients that are connected to the server invoke some JS related to the pa action.
My questions are:
1. How do I make sure the server broadcasts that message as in Step 2?
2. How do I write a client that performs Step 3 above?
For the client, I am seeing a lot of references to socket.io, but I am not sure what the best framework is in this case.
server.js
var http = require('http');
http.createServer(function(request, response) {
request.on('error', function(err) {
console.error(err);
response.statusCode = 400;
response.end();
});
response.on('error', function(err) {
console.error(err);
});
response.writeHead(200, {'Content-Type': 'application/json'});
var body=[];
if (request.method === 'GET' && request.url === '/pa') {
response.end(JSON.stringify({"action": "pa"}));
}
else if (request.method === 'GET' && request.url === '/pi') {
response.end(JSON.stringify({"action": "pi"}));
}
else {
response.statusCode = 404;
response.end();
}
}).listen(8080);
If the clients are also in Node.js, then you should be able to set up a webhook/push service. Webhooks are used in every major API today; they are especially prevalent in Slack's and Microsoft's APIs. The following is inspired by Microsoft's Office 365 push and streaming services APIs.
Add a route that clients can POST to; let's call it /subscribe. In the request body to the /subscribe route, they include a URL; we'll call it the paReceiverUrl.
If a client wishes to be notified when someone else hits the /pa endpoint, it can send a request to the /subscribe endpoint and include the location to notify it at, the paReceiverUrl (along with any other info, probably some auth data). To be safe, you should store the subscribed clients' information in non-volatile storage: have a database do this for you, or simply write it to a file until things get more complex.
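A rough sketch of what that /subscribe route could look like inside the same request handler (the subscribers.json file name and the JSON body shape are assumptions, not part of your code):
var fs = require('fs');
var SUBSCRIBERS_FILE = 'subscribers.json'; // assumed storage location
if (request.method === 'POST' && request.url === '/subscribe') {
    var chunks = [];
    request.on('data', function(chunk) { chunks.push(chunk); });
    request.on('end', function() {
        // Expect a JSON body like: { "paReceiverUrl": "http://client-host:9000/pa-events" }
        var subscriber = JSON.parse(Buffer.concat(chunks).toString());
        var subscribers = fs.existsSync(SUBSCRIBERS_FILE)
            ? JSON.parse(fs.readFileSync(SUBSCRIBERS_FILE, 'utf8'))
            : [];
        subscribers.push(subscriber);
        fs.writeFileSync(SUBSCRIBERS_FILE, JSON.stringify(subscribers));
        response.end(JSON.stringify({ subscribed: true }));
    });
}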
Now your '/pa' route becomes something more like this (your Step 2):
if (request.method === 'GET' && request.url === '/pa') {
// Send response to the request sender
response.end(JSON.stringify({"action": "pa"}));
// Get subscribed clients' information
var subscribedClients = <Read from file or db>
// Broadcast
for(var i = 0; i < subscribedClients.length; i++) {
// Send them a request at subscribedClients[i].paReceiverUrl
}
}
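The body of that loop can be a plain HTTP POST from your server to each subscriber, for example with Node's http module (a sketch with minimal error handling; notifySubscriber is a hypothetical helper):
var http = require('http');
var url = require('url');
// POST a JSON payload to one subscriber's paReceiverUrl.
function notifySubscriber(paReceiverUrl, payload) {
    var target = url.parse(paReceiverUrl);
    var body = JSON.stringify(payload);
    var req = http.request({
        hostname: target.hostname,
        port: target.port,
        path: target.path,
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(body)
        }
    }, function(res) {
        console.log('notified %s, status %d', paReceiverUrl, res.statusCode);
    });
    req.on('error', function(err) {
        console.error('failed to notify %s: %s', paReceiverUrl, err.message);
    });
    req.end(body);
}
Inside the loop above it would be called as notifySubscriber(subscribedClients[i].paReceiverUrl, { action: 'pa' }).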
Now for your Step 3, the clients simply need a server accepting requests at their paReceiverUrl, and there they can invoke whatever JS they want.
If you need it to be more real-time, then I would go with the WebSocket protocol to set up a persistent connection to stream data over.
Looking at the example given on the Node.js domain doc page (http://nodejs.org/api/domain.html), the recommended way to restart a worker using cluster is to first call disconnect in the worker part, and listen for the disconnect event in the master part. However, if you just copy/paste the example given, you will notice that the disconnect() call does not shut down the current worker:
What happens here is:
try {
var killtimer = setTimeout(function() {
process.exit(1);
}, 30000);
killtimer.unref();
server.close();
cluster.worker.disconnect();
res.statusCode = 500;
res.setHeader('content-type', 'text/plain');
res.end('Oops, there was a problem!\n');
} catch (er2) {
console.error('Error sending 500!', er2.stack);
}
I do a GET request at /error
A timer is started: in 30s the process will be killed if it has not already exited
The HTTP server is shut down
The worker is disconnected (but still alive)
The 500 page is displayed
I do a second GET request at /error (before the 30s are up)
A new timer is started
The server is already closed => server.close() throws an error
The error is caught in the "catch" block and no result is sent back to the client, so on the client side the page keeps waiting without any message.
In my opinion, it would be better to just kill the worker, and listen to the 'exit' event on the master part to fork again. This way, the 500 error is always sent during an error:
try {
var killtimer = setTimeout(function() {
process.exit(1);
}, 30000);
killtimer.unref();
server.close();
res.statusCode = 500;
res.setHeader('content-type', 'text/plain');
res.end('Oops, there was a problem!\n');
cluster.worker.kill();
} catch (er2) {
console.error('Error sending 500!', er2);
}
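For completeness, the master-side counterpart I have in mind is just the standard cluster pattern of forking a replacement whenever a worker exits (a sketch using the documented cluster API):
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;
if (cluster.isMaster) {
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
    // When a worker dies (e.g. killed after an error), start a fresh one.
    cluster.on('exit', function(worker, code, signal) {
        console.log('worker %d died (%s), forking a new one',
                    worker.process.pid, signal || code);
        cluster.fork();
    });
}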
I'm not sure about the downsides of using kill instead of disconnect, but it seems disconnect waits for the server to close; however, that does not appear to be working (at least not like it should).
I would just like some feedback about this. There could be a good reason this example is written this way that I've missed.
Thanks
EDIT:
I've just checked with curl, and it works well.
However I was previously testing with Chrome, and it seems that after the 500 response is sent back, Chrome issues a second request BEFORE the server actually finishes closing.
In this case, the server is closing but not yet closed (which means the worker is also disconnecting without being disconnected), causing the second request to be handled by the same worker as before, so:
It prevents the server from finishing its close
When the second server.close(); line is evaluated, it throws an exception because the server has already stopped listening
All following requests will trigger the same exception until the killtimer callback is called.
I figured it out: when the server is closing and receives a request at the same time, it stops its closing process.
So it still accepts connections, but can no longer be closed.
Even without cluster, this simple example illustrates this:
var PORT = 8080;
var domain = require('domain');
var server = require('http').createServer(function(req, res) {
var d = domain.create();
d.on('error', function(er) {
try {
var killtimer = setTimeout(function() {
process.exit(1);
}, 30000);
killtimer.unref();
console.log('Trying to close the server');
server.close(function() {
console.log('server is closed!');
});
console.log('The server should not now accepts new requests, it should be in "closing state"');
res.statusCode = 500;
res.setHeader('content-type', 'text/plain');
res.end('Oops, there was a problem!\n');
} catch (er2) {
console.error('Error sending 500!', er2);
}
});
d.add(req);
d.add(res);
d.run(function() {
console.log('New request at: %s', req.url);
// error
setTimeout(function() {
flerb.bark();
});
});
});
server.listen(PORT);
Just run:
curl http://127.0.0.1:8080/ http://127.0.0.1:8080/
Output:
New request at: /
Trying to close the server
The server should not now accepts new requests, it should be in "closing state"
New request at: /
Trying to close the server
Error sending 500! [Error: Not running]
Now single request:
curl http://127.0.0.1:8080/
Output:
New request at: /
Trying to close the server
The server should not now accepts new requests, it should be in "closing state"
server is closed!
So with Chrome making one more request (for the favicon, for example), the server is not able to shut down.
For now I'll keep using worker.kill(), which makes the worker not wait for the server to stop.
I ran into the same problem around 6 months ago; sadly I don't have any code to demonstrate as it was from my previous job. I solved it by explicitly sending a message to the worker and calling disconnect at the same time. Disconnect prevents the worker from taking on new work, and in my case, since I was tracking all the work the worker was doing (it was for an upload service with long-running uploads), I was able to wait until all of it finished and then exit with 0.
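A minimal sketch of that idea (the pendingUploads counter and the 'shutdown' message are assumptions, not code from that project): the worker stops taking new connections when told to wind down and exits cleanly once its tracked work has drained.
// Inside the worker; `server` is the worker's HTTP server instance.
var pendingUploads = 0; // incremented when an upload starts, decremented when it finishes
process.on('message', function(msg) {
    if (msg === 'shutdown') {      // master asks this worker to wind down
        server.close();            // stop accepting new connections
        var poll = setInterval(function() {
            if (pendingUploads === 0) {
                clearInterval(poll);
                process.exit(0);   // all tracked work finished, exit cleanly
            }
        }, 1000);
    }
});
On the master side this pairs with worker.send('shutdown') followed by worker.disconnect().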
I want to provide a meaningful error to the client when too many users are connected or when they're connecting from an unsupported domain, so...
I wrote some WebSocket server code:
var http = require('http');
var httpServer = http.createServer(function (request, response)
{
// i see this if i hit http://localhost:8001/
response.end('go away');
});
httpServer.listen(8001);
// https://github.com/Worlize/WebSocket-Node/wiki/Documentation
var webSocket = require('websocket');
var webSocketServer = new webSocket.server({ 'httpServer': httpServer });
webSocketServer.on('request', function (request)
{
var connection = request.reject(102, 'gtfo');
});
And some WebSocket client code:
var connection = new WebSocket('ws://127.0.0.1:8001');
connection.onopen = function (openEvent)
{
alert('onopen');
console.log(openEvent);
};
connection.onclose = function (closeEvent)
{
alert('onclose');
console.log(closeEvent);
}
connection.onerror = function (errorEvent)
{
alert('onerror');
console.log(errorEvent);
};
connection.onmessage = function (messageEvent)
{
alert('onmessage');
console.log(messageEvent);
};
All I get is alert('onclose'); with a CloseEvent object logged to the console, without any status code or message that I can find. When I connect via ws://localhost:8001 the httpServer callback doesn't come into play, so I can't catch it there. The RFC suggests I should be able to send any status code other than 101 when there's a problem, but Chrome throws an error in the console: "Unexpected response code: 102". If I call request.reject(101, 'gtfo'), implying it was successful, I get a handshake error, as I'd expect.
Not really sure what else I can do. Is it just not possible right now to get the server response in Chrome's WebSocket implementation?
ETA: Here's a really nasty hack in the meantime; I hope that's not what I have to end up doing.
var connection = request.accept(null, request.origin);
connection.sendUTF('gtfo');
connection.close();
I'm the author of WebSocket-Node and I've also posted this response to the corresponding issue on GitHub: https://github.com/Worlize/WebSocket-Node/issues/46
Unfortunately, the WebSocket protocol does not provide any specific mechanism for providing a close code or reason at this stage when rejecting a client connection. The rejection is in the form of an HTTP response with an HTTP status of something like 40x or 50x. The spec allows for this but does not define a specific way that the client should attempt to divine any specific error messaging from such a response.
In reality, connections should be rejected at this stage only when you are rejecting a user from a disallowed origin (i.e. someone from another website is trying to connect users to your websocket server without permission) or when a user otherwise does not have permission to connect (i.e. they are not logged in). The latter case should be handled by other code on your site: a user should not be able to attempt to connect the websocket connection if they are not logged in.
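For the disallowed-origin case, that check belongs in the request handler before accepting; a small sketch using the same API the question already uses (the allowed origin value is an example):
webSocketServer.on('request', function (request)
{
    // Only accept connections from pages served by our own site (example origin).
    if (request.origin !== 'http://example.com') {
        request.reject(403, 'Origin not allowed');
        return;
    }
    var connection = request.accept(null, request.origin);
});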
The code and reason that WebSocket-Node allow you to specify here are an HTTP Status code (e.g. 404, 500, etc.) and a reason to include as a non-standard "X-WebSocket-Reject-Reason" HTTP header in the response. It is mostly useful when analyzing the connection with a packet sniffer, such as WireShark. No browser has any facility for providing rejection codes or reasons to the client-side JavaScript code when a connection is rejected in this way, because it's not provided for in the WebSocket specification.