I am currently using Node.js to handle the back end of my website, but I am unsure of how WebSockets and objects are handled together.
This is a template I am using as an example of my main class (it sends web requests to a specific page):
class ViewClass {
    constructor(URL, views) {
        this.link = URL;
        this.views = views;
        this.make_requests();
    }

    make_requests() {
        try {
            const XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;
            const xhr = new XMLHttpRequest();
            let link = this.link;
            let views = this.views;
            for (let index = 1; index < views + 1; index++) {
                xhr.open("GET", link, false);
                xhr.onload = function (e) {
                    if (xhr.readyState === 4) {
                        if (xhr.status === 200) {
                            console.log("View: " + index + " Sent Successfully!");
                        } else {
                            console.error("View: " + index + " Failed!");
                        }
                    }
                };
                xhr.send(null);
            }
        } catch (error) {
            console.log(error.message);
        }
    }
}
This is my main WebSocket file (stripped for simplicity):
server.on('connection', function (socket) {
    console.log("Welcomed Connection from: " + socket.remoteAddress);
    socket.on('close', function (resp) {
        console.log(`[${GetDate(3)}] Bye!`);
    });
    socket.on('data', function (buf) {
        // Take Views/URL from Front-end.
        // Initialise a new Object from ViewClass and let it run until finished.
    });
});
Let's say I receive data from the WebSocket, that data creates a new ViewClass object, and it starts running immediately. Will that now-running code block the input/output of the Node.js server? Or will it be handled in the background?
If there is any information I can provide to make this clearer, let me know, as I am extremely new to WebSockets/JavaScript and am more than likely missing something.
Your ViewClass code launches views XMLHttpRequests (one per view) and then does nothing but wait for the responses to come back. Because a regular XMLHttpRequest is asynchronous (if you don't pass false for the async flag), the server is free to do other things while the code is waiting for the XMLHttpRequest responses.
Will that now-running code block the input/output of the Node.js server?
No. Because this is asynchronous code, it will not block the input/output of the server.
Or will it be handled in the background?
The responses themselves are not handled in the background. Node.js runs your JavaScript in a single thread (assuming no Worker Threads are being used, and none are used here). But waiting for a networking response is asynchronous and is handled by native code in the event loop, in the background. So, while your code is doing nothing but waiting for an event to occur, Node.js and your server are free to respond to other incoming events (such as other incoming requests).
Emergency Edit:
This code:
xhr.open("GET", link, false);
Is attempting a SYNCHRONOUS XMLHttpRequest. That's a horrible thing to do in a node.js server. That WILL block all other activity. Change the false to true to allow the xhr request to be asynchronous.
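For illustration, here is a minimal sketch of an asynchronous version of the make_requests method from the question (same xmlhttprequest package; it uses a fresh request object per iteration so the parallel requests don't interfere):
make_requests() {
    const XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;
    for (let index = 1; index <= this.views; index++) {
        // A fresh request object per iteration so the async requests don't clobber each other
        const xhr = new XMLHttpRequest();
        xhr.open("GET", this.link, true); // true = asynchronous, so the event loop stays free
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                if (xhr.status === 200) {
                    console.log("View: " + index + " Sent Successfully!");
                } else {
                    console.error("View: " + index + " Failed!");
                }
            }
        };
        xhr.send(null);
    }
}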
Related
There is a simple web server that accepts data. Sample code below.
The idea is to track in real time how much data has entered the server and immediately inform the client about it. If you send a small amount of data, everything works well, but if you send more than X data in size, the on.data event on the server is triggered with a huge delay. I can see that data has been transferring for 5 seconds already, but the on.data event is not triggered.
The on.data event seems to be triggered only when the data has been uploaded completely to the server, which is why it works fine with small data (~2..20 MB) but not with big data (50..200 MB).
Or maybe it is due to some kind of buffering...?
Do you have any suggestions why on.data is triggered with a delay and how to fix it?
const express = require('express');
const app = express();
const port = 3000;

// PUBLIC API
// upload file
app.post('/upload', function (request, response) {
    request.on('data', chunk => {
        // message appears with delay
        console.log('upload on data', chunk.length);
        // send message to the client about chunk.length
    });
    response.send({
        message: `Got a POST request ${request.headers['content-length']}`
    });
});

app.listen(port, () => {
    console.log(`Example app listening at http://localhost:${port}`);
});
TLDR:
The delay you are experiencing is probably the Queueing phase of the browser's resource scheduling.
The Test
I did some tests with Express and found that it uses the http module to handle requests/responses, so I used a raw http server listener to test this scenario, which shows the same behavior.
Backend code
This code, based on Node's anatomy-of-an-HTTP-transaction sample, creates an http server and logs the time in three situations:
When a request was received
When the first data event fires
When the end event fires
const http = require('http');

var firstByte = null;

var server = http.createServer((request, response) => {
    const { headers, method, url } = request;
    let body = [];
    request.on('error', (err) => {
    }).on('data', (chunk) => {
        if (!firstByte) {
            firstByte = Date.now();
            console.log('received first byte at: ' + Date.now());
        }
    }).on('end', () => {
        console.log('end receive data at: ' + Date.now());
        // body = Buffer.concat(body).toString();
        // At this point, we have the headers, method, url and body, and can now
        // do whatever we need to in order to respond to this request.
        if (url === '/') {
            response.statusCode = 200;
            response.setHeader('Content-Type', 'text/html');
            response.write('<h1>Hello World</h1>');
        }
        firstByte = null;
        response.end();
    });
    console.log('received a request at: ' + Date.now());
});

server.listen(8083);
Frontend code (snippet from devtools)
This code fires an upload to /upload with some array data. I originally filled the array with random bytes, but then removed that and saw it had no effect on my timing log, so the upload content for now is just an array of zeros.
console.log('building data');
var view = new Uint32Array(new Array(5 * 1024 * 1024));
console.log('start sending at: ' + Date.now());
fetch("/upload", {
    body: view,
    method: "post"
}).then(async response => {
    const text = await response.text();
    console.log('got response: ' + text);
});
Running the backend code and then the frontend code, I get the following logs.
Log capture (screenshots): the backend log, the frontend log, and the time differences between backend and frontend.
Results
Looking at the screenshots, I get two differences between the logs:
The first, and most important, is the difference between the frontend fetch start and the backend request received. I got 1613ms, which is "close" (1430ms) to Resource Scheduling in the network timing tab. I think there are more things happening between the frontend fetch call and the Node backend event, so I can't directly compare the times:
log.backendReceivedRequest - log.frontEndStart
1613
The second is the difference between the first and last data received on the backend: I got 578ms, close to Request sent (585ms) in the network timing tab:
log.backendReceivedAllData - log.backendReceivedFirstData
578
I also changed the frontend code to send different sizes of data, and the network timing tab still matches the log.
The thing that remains unknown to me is: why is Google Chrome queueing my fetch, since I'm not running any other requests and not using the bandwidth of the server/host? I read the conditions for Queueing but did not find the reason; maybe it is allocating resources on disk, but I'm not sure: https://developer.chrome.com/docs/devtools/network/reference/#timing-explanation
References:
https://nodejs.org/es/docs/guides/anatomy-of-an-http-transaction/
https://developer.chrome.com/docs/devtools/network/reference/#timing-explanation
I found the problem. It was in the nginx config. Nginx was set up as a reverse proxy. By default, proxy request buffering is enabled, so nginx first grabs the whole request body and only then forwards it to Node.js; that's why I saw the delay.
https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering
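For reference, a minimal sketch of what the relevant directive looks like inside a proxy location block (the upstream address is just a placeholder, not taken from my setup):
location / {
    # Forward the request body to Node.js as it arrives
    # instead of buffering the whole body on nginx first
    proxy_request_buffering off;
    proxy_pass http://127.0.0.1:3000;  # placeholder upstream for the Node.js app
}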
I'd like to intercept fetch from all parts and libraries in my application, and at the same time I'd like not to break the possibility of working with the application via a file URL - that is useful for Electron and mobile devices (via WebView). For now, I've found two possible ways of doing this:
something like here
const realFetch = window.fetch;
window.fetch = function() {
    // do something
    return realFetch.apply(this, arguments)
}
something like here, with service worker registration:
main.js:
if ('serviceWorker' in navigator) {
    window.addEventListener('load', function() {
        navigator.serviceWorker.register('sw.js').then(function(registration) {
            console.log('Service worker registered with scope: ', registration.scope);
        }, function(err) {
            console.log('ServiceWorker registration failed: ', err);
        });
    });
}
sw.js:
self.addEventListener('fetch', function(event) {
    event.respondWith(
        // intercept requests by handling event.request here
    );
});
With the first approach I cannot intercept fetch requests from web workers. The second approach doesn't work with file URLs, and I want my application to work via a file URL because that allows me to use the app via Electron for desktops or WebView for Android. Is there any other way of intercepting fetch requests?
P.S. I cannot modify the worker I'm trying to intercept requests from.
Update:
On the basis of #Ciro Corvino's answer, I've tried a third approach: start my own worker before anything else and try to redefine fetch from there. It didn't work for me, unfortunately; here is the code:
function redefineFetch() {
    console.log('inside worker');
    if (self.fetch == null) {
        console.log('null!');
    } else {
        console.log(self.fetch.toString());
    }
    const originalFetch: WindowOrWorkerGlobalScope['fetch'] = self.fetch;
    self.fetch = (input: RequestInfo, init: RequestInit) => {
        console.log('overridden');
        return originalFetch(input, init);
    }
}

const blob = new Blob(['(' +
    redefineFetch.toString() + ')()'], {type: 'text/javascript'});
const blobUrl = window.URL.createObjectURL(blob);
const w = new Worker(blobUrl);
I'm sure that this code starts before the other workers (I've added a timeout), but this doesn't redefine fetch for the other workers. Can someone explain why or fix the solution?
Update 2:
Apparently each worker has its own private WorkerGlobalScope, otherwise there would be no sense in using messages for inter-worker communication. Probably another fix for my problem could be overriding the Worker constructor, if that is possible. I will check it.
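Here is a rough, untested sketch of what I have in mind (the bootstrap string and the URL handling are just assumptions, and it would only cover classic workers, since importScripts is not available in module workers):
const NativeWorker = window.Worker;

window.Worker = function (scriptUrl, options) {
    // Bootstrap code that patches fetch inside the worker,
    // then loads the original worker script.
    const bootstrap =
        'const originalFetch = self.fetch;' +
        'self.fetch = function () {' +
        '    /* intercept arguments here */' +
        '    return originalFetch.apply(this, arguments);' +
        '};' +
        'importScripts(' + JSON.stringify(new URL(scriptUrl, location.href).href) + ');';
    const blob = new Blob([bootstrap], { type: 'text/javascript' });
    return new NativeWorker(URL.createObjectURL(blob), options);
};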
Just try to override the fetch method of the current WorkerGlobalScope by calling this function in the main JavaScript context (window) and in each JS file run in a dedicated worker context:
Note that the self property returns the specialized scope for each context.
//works in each worker context you call it and enable fetch interception
function EnableFetchWithArguments() {
    const originalCtxFetch = self.fetch;
    self.fetch = function() {
        // Get the parameter in arguments
        // Intercept the parameter here
        return originalCtxFetch.apply(this, arguments)
    }
}
See WorkerGlobalScope for reference and browser compatibility.
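A minimal usage sketch, assuming EnableFetchWithArguments is defined (or imported) in both files:
// main.js (window context)
EnableFetchWithArguments();
fetch('/api/data'); // hypothetical URL, now goes through the intercepted fetch

// worker.js (dedicated worker context)
EnableFetchWithArguments();
fetch('/api/data'); // intercepted inside the worker as well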
Noob question on using callbacks as a control-flow pattern with Node and the http class. Based on my understanding of the event loop, all code is blocking, and I/O is non-blocking and uses callbacks. Here's a simple HTTP server and a pseudo REST function:
// Require
var http = require("http");

// Class
function REST() {};

// Methods
REST.prototype.resolve = function(request, response, callback) {
    // Pseudo rest function
    function callREST(request, callback) {
        if (request.url == '/test/slow') {
            setTimeout(function(){ callback('time is 30 seconds') }, 30000);
        } else if (request.url == '/test/foo') {
            callback('bar');
        }
    }
    // Call pseudo rest
    callREST(request, callback);
}

// Class
function HTTPServer() {};

// Methods
HTTPServer.prototype.start = function() {
    http.createServer(function (request, response) {
        // Listeners
        request.resume();
        request.on("end", function () {
            // Execute only if not a favicon request
            var faviconCheck = request.url.indexOf("favicon");
            if (faviconCheck < 0) {
                // Print
                console.log('incoming validated HTTP request: ' + request.url);
                // Instantiate and execute on new REST object
                var rest = new REST();
                rest.resolve(request, response, function(responseMsg) {
                    var contentType = {'Content-Type': 'text/plain'};
                    response.writeHead(200, contentType); // Write response header
                    response.end(responseMsg); // Send response and end
                    console.log(request.url + ' response sent and ended');
                });
            } else {
                response.end();
            }
        });
    }).listen(8080);
    // Print to console
    console.log('HTTPServer running on 8080. PID is ' + process.pid);
}

// Process
// Create http server instance
var httpServer = new HTTPServer();
// Start
httpServer.start();
If I open up a browser and hit the server with "/test/slow" in one tab and then "/test/foo" in another, I get the following behavior: "foo" responds with "bar" immediately, and 30 seconds later "slow" responds with "time is 30 seconds". This is what I was expecting.
But if I open up 3 tabs in a browser and hit the server with "/test/slow" successively in each tab, "slow" is processed and responds serially/synchronously, so that the 3 responses appear at 30-second intervals. I was expecting the responses right after each other if they were being processed asynchronously.
What am I doing wrong?
Thank you for your thoughts.
This is actually not the server's fault. Your browser is opening a single connection and re-using it between the requests, but one request can't begin until the previous one finishes. You can see this a couple of ways:
Look in the network tab of the Chrome dev tools - the entry for the longest one will show the request in the blocking state until the first two finish.
Try opening the slow page in different browsers (or one each in normal and incognito windows) - this prevents sharing connections.
Thus, this will only happen if the same browser window is making multiple requests to the same server. Also, note that XHR (AJAX) requests will open separate connections so they can be performed in parallel. In the real world, this won't be a problem.
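For illustration, a quick way to see the parallel behavior is to start several requests programmatically. A sketch, assuming the server from the question is running and the snippet is executed from a page served on http://localhost:8080 (a cross-origin page would additionally need CORS headers):
console.time('three slow requests');
// Each fetch gets its own connection, so all three ~30-second responses
// should come back at roughly the same time instead of one after another.
Promise.all([
    fetch('/test/slow'),
    fetch('/test/slow'),
    fetch('/test/slow')
]).then(function () {
    console.timeEnd('three slow requests'); // roughly 30s, not 90s
});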
I'm using WebSockets to send data from my Node.js server to my clients. Since the data can be kind of large, the UI thread used to block, so no user interaction or video playing was possible while the data was being received. That's when I stumbled over WebWorkers, and I also managed to get them to work together with WebSockets.
app.js:
...
var worker = new Worker('worker.js');
worker.addEventListener('message', function(e) {
    console.log('Worker said: ', e.data);
}, false);
worker.postMessage('init');
...
worker.js:
function initWebSocket() {
    var connection = new WebSocket('ws://host:port', ['soap', 'xmpp']);

    connection.onopen = function () {
        connection.send('Ping'); // Send the message 'Ping' to the server
    };

    // Log errors
    connection.onerror = function (error) {
        console.log('WebSocket Error ' + error);
    };

    // Log messages from the server
    connection.onmessage = function (e) {
        console.log('Server: ' + e.data);
        //self.postMessage('Worker received : ' + e.data);
    };
};

self.addEventListener('message', function(e) {
    switch (e.data) {
        case 'init':
            initWebSocket();
            break;
        default:
            self.postMessage('Unknown command: ' + e.data);
    };
}, false);
All I'm doing so far is receive the data. Of course, later on I intend to do more with it. But my problem is: the UI thread still blocks when large files arrive. Did I get something wrong here?
UPDATE:
Actually, I have to revise my previous comment. Obviously Chrome had cached some of the files I was sending before, so I didn't realize the problem starts already with files way smaller than 300MB (currently, I'm testing a 50MB file). The UI blocks until the file has been completely received. What I'm currently doing is the following: I'm loading an index page with a video playing. Also, on the same page, I put a button which starts a worker. The worker sends an XHR request to the server and gets a 50MB file. So I just dismissed WebSockets for the sake of it. What happens when I click the button: the video freezes until the complete data has been received. When I do the same and let the worker just calculate numbers in a for-loop, the video keeps playing. So it seems to have something to do with using the network, but not specifically with WebSockets. Is it possible that WebWorkers just can't work with network stuff?
I have been using jQuery libraries for implementing AJAX. It was OK and I am comfortable with that. However, I started reading an AJAX book and found the following code.
// stores the reference to the XMLHttpRequest object
var xmlHttp = createXmlHttpRequestObject();

// retrieves the XMLHttpRequest object
function createXmlHttpRequestObject()
{
    // will store the reference to the XMLHttpRequest object
    var xmlHttp;
    // if running Internet Explorer
    if (window.ActiveXObject)
    {
        try
        {
            xmlHttp = new ActiveXObject("Microsoft.XMLHTTP");
        }
        catch (e)
        {
            xmlHttp = false;
        }
    }
    // if running Mozilla or other browsers
    else
    {
        try
        {
            xmlHttp = new XMLHttpRequest();
        }
        catch (e)
        {
            xmlHttp = false;
        }
    }
    // return the created object or display an error message
    if (!xmlHttp)
        alert("Error creating the XMLHttpRequest object.");
    else
        return xmlHttp;
}

// make asynchronous HTTP request using the XMLHttpRequest object
function process()
{
    // proceed only if the xmlHttp object isn't busy
    if (xmlHttp.readyState == 4 || xmlHttp.readyState == 0)
    {
        // retrieve the name typed by the user on the form
        name = encodeURIComponent(document.getElementById("myName").value);
        // execute the quickstart.php page from the server
        xmlHttp.open("GET", "quickstart.php?name=" + name, true);
        // define the method to handle server responses
        xmlHttp.onreadystatechange = handleServerResponse;
        // make the server request
        xmlHttp.send(null);
    }
    else
        // if the connection is busy, try again after one second
        setTimeout('process()', 1000);
}

// executed automatically when a message is received from the server
function handleServerResponse()
{
    // move forward only if the transaction has completed
    if (xmlHttp.readyState == 4)
    {
        // status of 200 indicates the transaction completed successfully
        if (xmlHttp.status == 200)
        {
            // extract the XML retrieved from the server
            xmlResponse = xmlHttp.responseXML;
            // obtain the document element (the root element) of the XML structure
            xmlDocumentElement = xmlResponse.documentElement;
            // get the text message, which is in the first child of
            // the document element
            helloMessage = xmlDocumentElement.firstChild.data;
            // update the client display using the data received from the server
            document.getElementById("divMessage").innerHTML =
                '<i>' + helloMessage + '</i>';
            // restart sequence
            setTimeout('process()', 1000);
        }
        // a HTTP status different than 200 signals an error
        else
        {
            alert("There was a problem accessing the server: " + xmlHttp.statusText);
        }
    }
}
My question here is: why do we use setTimeout('process()', 1000); in the handleServerResponse() function? Can't we do this without setTimeout('process()', 1000);?
To me, it looks like some kind of constant polling. It reuses the AJAX request over and over every second, and when the previous request is still active, it waits another second to send it again. So it's not just creating an AJAX request and dealing with the response.
Using that code, the page would be updating constantly with the information retrieved from the server. Whenever the server response changes, the page will as well, but not in real time (only when the next request finishes). It's similar to Periodic Refresh.
As an evolution, you can have Long Polling, in which you spawn an AJAX request and then wait until the server responds. If any info is waiting for you on the server, you'll receive the response immediately. If, while you are waiting for the response, anything arrives at the server for you, you will receive it. If your request times out, the server will respond with an empty body. Then your client will spawn another AJAX request. You can get some more info from Wikipedia. Extra link: Comet.
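A minimal long-polling sketch with fetch, just to illustrate the loop (the /poll endpoint and the plain-text response are hypothetical):
async function longPoll() {
    try {
        // The server holds this request open until it has new data or times out,
        // in which case it answers with an empty body.
        const response = await fetch('/poll'); // hypothetical endpoint
        const data = await response.text();
        if (data) {
            document.getElementById("divMessage").innerHTML = '<i>' + data + '</i>';
        }
    } catch (e) {
        // On network errors, wait a moment before reconnecting.
        await new Promise(resolve => setTimeout(resolve, 1000));
    }
    longPoll(); // immediately open the next request
}

longPoll();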
In the given example, the book calls the process() function on the body onload event.
When I change the code from onload to onkeyup, i.e. <input type="text" id="myName" onkeyup="process()"/>, I can remove the setTimeout('process()', 1000); call.
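A minimal sketch of that variant, assuming the rest of the book's code stays unchanged (the restarting setTimeout in handleServerResponse is simply dropped):
// <input type="text" id="myName" onkeyup="process()" /> now triggers each request
function handleServerResponse()
{
    if (xmlHttp.readyState == 4)
    {
        if (xmlHttp.status == 200)
        {
            var xmlResponse = xmlHttp.responseXML;
            var helloMessage = xmlResponse.documentElement.firstChild.data;
            document.getElementById("divMessage").innerHTML = '<i>' + helloMessage + '</i>';
            // no setTimeout('process()', 1000) restart here anymore
        }
        else
        {
            alert("There was a problem accessing the server: " + xmlHttp.statusText);
        }
    }
}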