Reconnecting several peerConnections after page reload - javascript

I'm creating a web application for monitoring via smartphones using WebRTC, and for the signalling server I'm using socket.io.
When I send a stream, I create an RTCPeerConnection object on the "watch" page that receives this stream. Each stream is sent from a separate page. The user can attach up to four streams from a smartphone, so on the "watch" page there are up to four RTCPeerConnection objects.
Streams are received automatically: as soon as the offer from the "transmit" page arrives, the RTCPeerConnection object is created on the "watch" page and connected following the standard WebRTC flow.
"transmit" page:
function onCreateOfferSuccess(sdp){
    //console.log(sdp);
    pc.setLocalDescription(new RTCSessionDescription(sdp));
    console.log('ask');
    socket.emit('ask', {"sdp": JSON.stringify(pc.localDescription),
                        "user": loggedUserID,
                        "fromSocket": ownSocket});
}
"watch" page:
socket.on('ask', function(offer){
    if (offer.user === loggedUserID){
        TempDescriptions = JSON.parse(offer.sdp);
        console.log(TempDescriptions);
        currTransmiterSocket = offer.fromSocket;
        console.log(currTransmiterSocket);
        getStream();
    }
});

function getStream(){
    try {
        setTimeout(function(){
            console.log(time, 'getStream()');
            connection = getPeerConnection();
            connection.setRemoteDescription(
                new RTCSessionDescription(TempDescriptions),
                function() {
                    connection.createAnswer(gotDescription, function(error){
                        console.log(error);
                    });
                }, function(error){
                    console.log(error);
                });
        }, getStreamDelay * 3000);
        getStreamDelay++;
    }
    catch(err){
        console.log(err);
    }
};
My web application requires that when the user leaves the "watch" page and returns to it, all previously attached streams are displayed again.
To implement this functionality, I use the oniceconnectionstatechange handler. If the stream is disconnected, the iceRestart function is executed, which creates an offer with the option {iceRestart: true}.
"transmit" page:
var options_with_restart = {offerToReceiveAudio: false,
                            offerToReceiveVideo: true,
                            iceRestart: true};

function iceRestart(event){
    try{
        setTimeout(function(){
            pc.createOffer(options_with_restart).then(onCreateOfferSuccess, onCreateOfferError);
        }, 1000);
    } catch(error) {
        console.log(error);
    }
}
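For context, a minimal sketch of how this handler might be wired up, assuming the single pc object used on this page (the event and state names are from the standard WebRTC API):

pc.oniceconnectionstatechange = function () {
    if (pc.iceConnectionState === 'disconnected' ||
        pc.iceConnectionState === 'failed') {
        iceRestart(); // re-negotiate with {iceRestart: true}
    }
};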
The problem is that when I reload the "watch" page, all the "transmit" pages send their ask at once. Only one object gets connected, although four RTCPeerConnection objects are created at once (assume the user sends four streams).
I have been struggling with this problem for several days. I tried setting an increasing time delay on subsequent calls to the getStream() function, as seen in the code above; I tried checking the signalingState of the connections before executing getStream(); I tried several other methods, but none of them worked.
If you need some other part of my code to help, please let me know.
Edit: the gotDescription() method on the "watch" page:
function gotDescription(sdp) {
    try{
        connection.setLocalDescription(sdp,
            function() {
                registerIceCandidate();
                socket.emit('response', {"sdp": sdp,
                                         "user": loggedUserID,
                                         "fromSocket": ownSocket,
                                         "toSocket": currTransmiterSocket});
            }, function(error){
                console.log(error);
            });
    } catch(err){
        console.log(err);
    }
}
I added a console.log of the RTCPeerConnection object. Console output (screenshot): https://i.stack.imgur.com/dQXkE.png
The log shows that the signalingState of the connection is "stable", but when I expand the object, signalingState is equal to "have-remote-offer".

Remove the TempDescriptions global variable, and pass the sdp to getStream(offer.sdp) directly.
Otherwise, your socket.on('ask', function(offer){ handler gets called 4 times, each time overwriting TempDescriptions. Then 3+ seconds later your 4 setTimeouts fire, all accessing only the final value of TempDescriptions.
That's probably why only one RTCPeerConnection re-connects.
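A minimal sketch of that change, using the names from the question's code (the extra fromSocket parameter is my assumption):

socket.on('ask', function (offer) {
    if (offer.user === loggedUserID) {
        getStream(offer.sdp, offer.fromSocket); // each call closes over its own sdp
    }
});

function getStream(sdp, fromSocket) {
    var description = JSON.parse(sdp); // local, not shared across the 4 offers
    // ...create the RTCPeerConnection and answer as before...
}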
In general, using a time delay to separate connections seems like a bad idea, as it slows down re-connection. Instead, why not send an id? E.g.
socket.emit('ask', {id: connectionNumber++,
                    sdp: JSON.stringify(pc.localDescription),
                    user: loggedUserID,
                    fromSocket: ownSocket});
Update: Stop adding global variables to window
Whenever you assign to an undeclared variable like this:
connection = getPeerConnection();
...it creates a global on window, e.g. window.connection, and you have the same problem: you have 4 connections, but you're storing them in one variable.
Type "use strict"; at the head of your source file to catch this:
ReferenceError: assignment to undeclared variable connection
Scoping: The general problem
You're dealing with 4 connections here, but you lack an approach for scoping each instance.
Most other languages would tell you to create a class and make object instances, and put everything including connection on this. That's one good approach. In JS you can use closures instead. But at minimum you still need 4 variables holding the 4 connections, or an array of connections. Then you look up (e.g. from the id I mentioned) which connection to deal with.
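A rough sketch of the class approach, assuming the getPeerConnection() helper from the question; the Watcher name is illustrative only:

class Watcher {
    constructor(id) {
        this.id = id;
        this.connection = getPeerConnection(); // each instance owns its own pc
    }
    async answer(sdp) {
        await this.connection.setRemoteDescription(JSON.parse(sdp));
        await this.connection.setLocalDescription(await this.connection.createAnswer());
        return this.connection.localDescription;
    }
}

const watchers = new Map(); // id -> Watcher, one per incoming stream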
Also, your try/catches aren't going to catch asynchronous errors. Instead of defining all these callbacks, I strongly recommend using promises, or even async/await when dealing with the highly asynchronous WebRTC API. This makes scoping trivial. E.g.
const connections = [];

socket.on('ask', async ({user, id, sdp, fromSocket}) => {
    try {
        if (user != loggedUserID) return;
        if (!connections[id]) {
            connections[id] = getPeerConnection();
            registerIceCandidate(connections[id]);
        }
        const connection = connections[id];
        await connection.setRemoteDescription(JSON.parse(sdp));
        await connection.setLocalDescription(await connection.createAnswer());
        socket.emit('response', {sdp,
                                 user: loggedUserID,
                                 fromSocket: ownSocket,
                                 toSocket: fromSocket});
    } catch (err) {
        console.log(err);
    }
});
This way the error handling is solid.

Related

Handling service bus error messages in azure function using javascript

I have an Azure Function using a Service Bus topic trigger, and I want to handle error messages gracefully. I want to be able to abandon the message and attach the exception to it, so I can see it in a property when I read the dead-letter queue.
This is my code:
const serviceBusTopicTrigger: AzureFunction = async function(context: Context, mySbMsg: any): Promise<void> {
    // do something messy that may fail.
};
When my function fails with an exception, the message goes to the DLQ as expected, but the problem is that it doesn't save the exception thrown; it only tells you that it tried to execute the method 10 times and couldn't.
What I want is to be able to catch the exception and add it to the message properties, so that when I process the DLQ I can know the reason for the error. Moreover, since the code fails with an exception, I would like it to abandon the message the first time it runs, so it doesn't have to retry 10 times.
I'm thinking something like this:
const serviceBusTopicTrigger: AzureFunction = async function(context: Context, mySbMsg: any): Promise<void> {
    try {
        // do something messy and that may fail
    }
    catch(error){
        context.bindingData.userProperties['DeadLetterReason'] = 'Internal server error';
        context.bindingData.userProperties['DeadLetterErrorDescription'] = JSON.stringify(error);
        context.bindingData.abandonMsg();
    }
};
I haven't been able to find any documentation about something like this, so is it possible? Or can I force the message to the DLQ with something like this:
const serviceBusTopicTrigger: AzureFunction = async function(context: Context, mySbMsg: any): Promise<void> {
    try {
        // do something messy and that may fail
    }
    catch(error){
        context.bindings.deadLetterQueue.userProperties['DeadLetterReason'] = 'Internal server error';
        context.bindings.deadLetterQueue.userProperties['DeadLetterErrorDescription'] = JSON.stringify(error);
        context.bindings.deadLetterQueue = mySbMsg;
    }
};
Or, finally and sadly, do I have to handle the error directly in the function and maybe send it from there to an Azure storage table or queue to report errors? I wouldn't like that, because then I would be handling errors from the dead-letter queue and from my functions in different places. Is this the only way?
Any more ideas?
Thanks.
First of all, I don't think Node.js can do this. Node.js is missing the type information that C# has; in C# the message would be dead-lettered like this:
https://learn.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deadletterasync?view=azure-dotnet
When my function fails with an exception, the message goes to the DLQ as expected, but the problem is that it doesn't save the exception thrown; it only tells you that it tried to execute the method 10 times and couldn't.
Max Delivery Count is set when you create the subscription of the Service Bus topic. The default value is 10, and the minimum is 1.
When you turn on automatic completion for the Service Bus trigger, as below:
host.json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "autoComplete": true
      }
    }
  },
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[1.*, 3.1.0)"
  }
}
With that in place, when an error occurs the function run fails, the runtime calls the abandon method, the delivery count is incremented by 1, and the message is sent back to the queue.
Once you use try-catch, you will not retry 10 times at all. The function run is regarded as successful and the message is completed without being sent back. (That is, with try-catch the message is processed only once.)
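To illustrate, a minimal JavaScript sketch; doSomethingMessy is a hypothetical stand-in for the question's processing code:

module.exports = async function (context, mySbMsg) {
    try {
        await doSomethingMessy(mySbMsg); // hypothetical processing step
    } catch (error) {
        // Swallowing the error makes the run count as successful: with
        // autoComplete the message is completed and never retried.
        context.log.error('Processing failed: ' + error);
        // Re-throwing instead would fail the run, abandon the message, and
        // increment its delivery count until it lands in the DLQ:
        // throw error;
    }
};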
I haven't been able to find any documentation about something like this, so is it possible? Or can I force the message to the DLQ?
No, it can't be done. The required type is missing, so the message cannot be manually sent to the dead-letter queue from JavaScript.
Or, finally and sadly, do I have to handle the error directly in the function and send it from there to an Azure storage table or queue to report errors? I wouldn't like that, because then I would be handling errors from the dead-letter queue and from my functions in different places. Is this the only way?
Why do you say the error still ends up in the dead-letter queue? With try-catch and automatic completion, the message should not be sent to the dead-letter queue.

Nodejs manage different threads

I'm a bit of a newbie with Node.js.
I'm working on a Node.js - Express solution.
I want to send an e-mail when some information is added to an MSSQL database.
This works well for me. The problem is that I want to check every five minutes whether the information added to the database has been modified, and if not, send another e-mail.
The call to add information to the db is this route:
router.post('/postlinevalidation', function(req, res) {
    // insert function into mssql
    silkcartCtrl.sendMail(req, res);
});
The controller part for sending the e-mail:
exports.sendMail = function(req, res) {
    var emails = "";
    fs.readFile('./config/email.conf', 'utf8', function (err, data) {
        if (err) {
            return logger.error(err);
        }
        emails = data;
    });
    var minutes = 5, the_interval = minutes * 60 * 1000;
    var refreshId = setInterval(function() {
        logger.info("I am doing my 5 minutes check FL_PENDIENTE");
        var request = new sql.Request(req.dbsqlserver);
        var sqlpendinglinesvalidation = "SELECT [FK_IDCHECK],[FK_IDPEDIDO],[BK_IDPROVEEDOR],[DE_PROVEEDOR]" +
            ",[FK_FAMILIA],[BK_FAMILIA],[FK_SUBFAMILIA],[BK_SUBFAMILIA],[FK_ARTICULO]" +
            ",[BK_ARTICULO],[FL_VALIDAR],[DT_FECHA],[FL_PENDIENTE],[DES_CHECK],[QNT_PROPUESTA],[FECHA]" +
            " FROM TABLE" +
            " WHERE [FL_PENDIENTE] = 1";
        request.query(sqlpendinglinesvalidation, function (err, recordset) {
            if (recordset.length > 0) {
                var transporter = nodemailer.createTransport('smtps://user%40gmail.com:pwd@smtp.gmail.com');
                var mailOptions = {
                    from: '"Mailer" <mail@mail.com>', // sender address
                    to: emails, // list of receivers
                    subject: 'Tienes compras pendientes de validar', // Subject line
                    text: 'Tienes compras pendientes de validar', // plaintext body
                    html: '<b>Tienes compras pendientes de validar.</b>' // html body
                };
                // send mail with defined transport object
                transporter.sendMail(mailOptions, function(error, info) {
                    if (error) {
                        return logger.error(error);
                    }
                    logger.info('Message sent: ' + info.response);
                });
            } else {
                clearInterval(refreshId);
                return true;
            }
        });
    }, the_interval);
};
As I said, this is working well.
I control the five-minute check with setInterval.
But I supposed that every time the postlinevalidation route is called, a new thread is opened, so I would have several setInterval processes running.
I want to know how to manage this: if the controller function exports.sendMail is running when the route is called again, kill that process and start exports.sendMail again.
Thanks in advance
But I supposed that every time the postlinevalidation route is called, a new thread is opened, so I would have several setInterval processes running.
No, this is not how node.js works. You don't get multiple threads because of multiple setInterval() timers.
node.js by itself is single threaded. So, each time a route is called, it just creates an event in the node.js event queue and they are served FIFO, one at a time. At any point that one of the route handlers makes an async call, it essentially "yields" control back and the next item in the event queue gets to run until it yields or finishes.
Timers like setInterval() also use the event queue, so no additional threads are created by setInterval(). It is possible that node.js modules that use native code may themselves use threads, and node.js uses a small thread pool for disk management, but neither of those has anything to do with setInterval().
If you explicitly want to create another execution context for a long running operation in node.js to separate it from the single node.js thread, then that is usually done with the child process module that is part of node.js. You create a new process (which can be a node.js process or some other program running in the process) and you can then communicate with that other process.
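For example, a minimal sketch using the built-in child_process module (worker.js is a hypothetical script):

const { fork } = require('child_process');

const worker = fork('./worker.js');           // spawns a second Node.js process
worker.send({ cmd: 'start', payload: 42 });   // talk to it over the IPC channel
worker.on('message', function (msg) {
    console.log('result from child:', msg);
});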
If the controller function exports.sendMail is running when the route is called again, kill that process and start exports.sendMail again.
This is something that would need to be an explicit feature of the nodemailer module in order for you to cancel an operation in progress. How in-progress asynchronous operations are implemented and controlled is not a generic node.js thing; it is specific to how that particular module implements and keeps track of things.
Looking into the code for nodemailer, and more specifically the smtp-connection module, it looks like it uses plain async node.js socket code. That means it does not create any new threads or processes on its own.
As for your setInterval() calls, you need to make sure that any body of code that creates a setInterval() keeps track of the interval timer ID and eventually clears the interval so it stops and you don't keep piling up more and more interval timers. Another possibility is that you have only one interval and it does checking for all outstanding operations (rather than have a separate interval for each one).
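A minimal sketch of the single-interval variant, assuming a hypothetical pendingChecks array that other code pushes work onto:

var pendingChecks = []; // each entry is a function performing one check

var checkTimer = setInterval(function () {
    pendingChecks.forEach(function (check) {
        check(); // run every outstanding check in one timer tick
    });
    if (pendingChecks.length === 0) {
        clearInterval(checkTimer); // nothing left to watch, stop the timer
    }
}, 5 * 60 * 1000);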
From a quick look, I think you don't really need to put the sendMail function inside postlinevalidation. If you want to control it, you could run it in a different script from the express app. You can use something like pm2 or parallelshell to run multiple scripts at the same time.
If you are using setInterval, then you can use clearInterval to stop it based on your condition. Whenever you call setInterval, it returns an id which you can use to stop that interval.
var interval = setInterval(doStuff, 5000);

function doStuff() {
    if (your_condition) {
        clearInterval(interval);
    }
}
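Applied to the question, a rough sketch of the "kill and restart" behavior (assuming the router and controller from the question; the interval id lives at module scope so a repeat POST can clear it):

var refreshId = null;

router.post('/postlinevalidation', function (req, res) {
    if (refreshId !== null) {
        clearInterval(refreshId); // stop the check started by a previous request
    }
    refreshId = setInterval(function () {
        // ...query the FL_PENDIENTE rows and send the reminder e-mail...
    }, 5 * 60 * 1000);
    res.sendStatus(200);
});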

Calling socket.disconnect in a forEach loop doesn't actually call disconnect on all sockets

I am new to the JavaScript world. Recently I have been working on a chat application in Node.js, and I have a method called gracefulShutdown as follows.
var gracefulShutdown = function() {
    logger.info("Received kill signal, shutting down gracefully.");
    server.close();
    logger.info('Disconnecting all the socket.io clients');
    if (Object.keys(io.sockets.sockets).length == 0) process.exit();
    var _map = io.sockets.sockets,
        _socket;
    for (var _k in _map) {
        if (_map.hasOwnProperty(_k)) {
            _socket = _map[_k];
            _socket.disconnect(true);
        }
    }
    // ...code here...
    setTimeout(function() {
        logger.error("Could not close connections in time, shutting down");
        process.exit();
    }, 10 * 1000);
}
Here is what happens in the disconnect listener. The removeDisconnectedClient method simply updates an entry in the db to indicate the removed client.
socket.on('disconnect', function() {
    removeDisconnectedClient(socket);
});
In this case, the disconnect event wasn't fired for all sockets; it was fired for only a few sockets, seemingly at random. I was able to fix it using setTimeout(fn, 0) with the help of a teammate.
I read about it online and understood only this much: setTimeout defers the execution of code by adding it to the end of the event queue. I read about the JavaScript execution context, call stack, and event loop, but I couldn't put it all together in this context. I really don't understand why and how this issue occurred. Could someone explain it in detail, and what is the best way to solve or avoid it?
It is hard to say for sure without a little more context about the rest of the code in gracefulShutdown but I'm surprised it is disconnecting any of the sockets at all:
_socket = _map[ _k ];
socket.disconnect(true);
It appears that you are assigning an item from _map to the variable _socket but then calling disconnect on socket, which is a different variable. I'm guessing it is a typo and you meant to call disconnect on _socket?
Some of the sockets might be disconnecting for other reasons and the appearance that your loop is disconnecting some but not all the sockets is probably just coincidence.
As far as I can tell from the code you posted, socket should be undefined and you should be getting errors about trying to call the disconnect method on undefined.
From the name of the method where you use it, I suppose the application exits after attempting to disconnect all sockets. The nature of socket communication is asynchronous, so given a decent number of items in _map, it can happen that not all disconnect messages are sent before the process exits.
You can increase the chances by calling exit after some timeout once all sockets have been disconnected. However, why would you disconnect manually at all? On connection interruption, remote sockets will get disconnected automatically...
UPDATE
Socket.io for Node.js doesn't have a callback to know for sure that the packet with the disconnect command was sent, at least in v0.9. I've debugged it and came to the conclusion that without modifying the sources it is not possible to catch that moment.
In the file "socket.io\lib\transports\websocket\hybi-16.js", the write method is called to send the disconnect packet:
WebSocket.prototype.write = function (data) {
    ...
    this.socket.write(buf, 'binary');
    ...
}
Whereas socket.write is defined in Node.js core transport "nodejs-{your-node-version}-src\core-modules-sources\lib\net.js" as
Socket.prototype.write = function(chunk, encoding, cb)
// cb is a callback to be called on writeRequest complete
However, as you can see, this callback is not provided, so socket.io will not know when the packet has been sent.
At the same time, when disconnect() is called on the websocket, the disconnected member is set to true and the "disconnect" event is indeed broadcast, but synchronously. So the .on('disconnect', ...) handler on the server socket doesn't give any reliable information about whether the packet was sent or not.
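A hypothetical sketch of what such a source modification could look like (v0.9 internals, untested):

// socket.io\lib\transports\websocket\hybi-16.js (modified)
WebSocket.prototype.write = function (data, cb) {
    // ...framing code unchanged...
    this.socket.write(buf, 'binary', cb); // cb fires once the packet is flushed
};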
Solution
I can draw a general conclusion from this. If it is critical to make sure that all clients are immediately informed (rather than waiting for a heartbeat timeout, or if the heartbeat is disabled), then this logic should be implemented manually.
You can send an ordinary message which tells the client that the server is shutting down, and have the client call socket.disconnect() as soon as the message is received. Meanwhile, the server can collect all the acknowledgements:
Server-side:
var sockets = [];
for (var _k in _map) {
    if (_map.hasOwnProperty(_k)) {
        sockets.push(_map[_k]);
    }
}
sockets.map(function (socket) {
    socket.emit('shutdown', function () {
        socket.isShutdown = true;
        var all = sockets.every(function (skt) {
            return skt.isShutdown;
        });
        if (all) {
            // wrap in timeout to let current tick finish before quitting
            setTimeout(function () {
                process.exit();
            });
        }
    });
});
The client side is simple:
socket.on('shutdown', function () {
    socket.disconnect();
});
Thus we make sure each client has explicitly disconnected. We don't care about the server; it will shut down shortly.
In the example code it looks like io.sockets.sockets is an Object; however, at least in the library version I am using, it is a mutable array which the socket.io library is free to modify each time you remove a socket with disconnect(true).
Thus, when you call disconnect(true) and the currently iterated item at index i is removed, an effect like this happens:
var a = [1, 2, 3, 4];
for (var i in a) {
    a.splice(i, 1); // remove item from array
    alert(i);
}
// alerts 0, 1
Thus, the disconnect(true) call asks socket.io to remove the item from the array, and because you are both holding a reference to the same array, the contents of the array are modified during the loop.
The solution is to create a copy of the _map with slice() before the loop:
var _map = io.sockets.sockets.slice(); // copy of the original
It would create a copy of the original array and thus should go through all the items in the array.
The reason why calling setTimeout() also works is that it defers the removal of the items from the array, allowing the whole loop to iterate without the sockets array being modified.
The problem here is that sockjs and socket.io use asynchronous "disconnect" methods, i.e. when you call disconnect, the socket is not immediately terminated; it is just a promise that it WILL be terminated. This has the following effect (assuming 3 sockets):
1. Your for loop grabs the first socket
2. The disconnect method is called on the first socket
3. Your for loop grabs the second socket
4. The disconnect method is called on the second socket
5. The disconnect method on the first socket finishes
6. Your for loop grabs the third socket
7. The disconnect method is called on the third socket
8. The program kills itself
Notice that sockets 2 and 3 haven't necessarily finished yet. This could be for a number of reasons.
Finally, setTimeout(fn, 0) is, as you said, deferring the final call, but it may not be consistent (I haven't dug into this too much). By that I mean you've set the final termination to run AFTER all your sockets have disconnected. The setTimeout and setInterval methods essentially act like a queue: your position in the queue is dictated by the timer you set. Two intervals set for 10s each, where both run synchronously, will cause one to run AFTER the other.
Since Socket.io 1.0, the library does not expose an array of the connected sockets, and io.sockets.sockets.length does not necessarily equal the number of open socket objects. Your best bet is to broadcast a 'disconnect' message to all the clients that you want to drop, and in the client-side on('disconnect') handler close the actual WebSocket.

SignalR-Hub after IIS stop,start will no longer call client functions

I have a queue system using SignalR 2.1.1 with Angular. Everything is actually working perfectly. However, when I decided to test the system against an IIS outage, I noticed a problem. When I stop and then start IIS (an IIS restart doesn't cause the issue), the JavaScript functions that the hub calls will no longer fire. That makes sense to me, but the problem is that the client can still call the server without any issue, so the user has no idea they are disconnected. This would certainly mess up my queue state.
So, the solution would seem to be to detect this disconnect somehow and reconnect if necessary. Is there a way to test whether the client functions my hub calls are still connected? Since I can still call the hub, it seems it should be able to reconnect, although I don't see any of that activity happening. I've tried the disconnected, reconnecting, and stateChanged events on the client side to see if I could catch it happening, with no luck.
Thank you for any assistance
So my solution was to create a method on the hub that only responds to the caller:
public void LastChange()
{
    Clients.Caller.lastChange();
}
I hooked that call back to this function in my Angular controller:
vm.queueHub.client.lastChange = function onLastChange() {
    vm.lastChangeCalledBack = true;
}
Also in my controller, I created this function that tests for the lastChangeCalledBack variable set by the function the hub calls. If it's not set after some interval of testing, I assume we've lost the connection:
vm.stillAlive = function() {
    vm.queueHub.server.lastChange();
    var found = $interval(function() {
        if (vm.lastChangeCalledBack == true) {
            vm.lastChangeCalledBack = false;
            $interval.cancel(found);
        }
    }, 100, 10);
    return found;
}
Finally, I created this function in my controller and call it from any function that makes queue changes from the UI, passing in the callback to invoke if the connection is still valid. For some reason the promise behavior seems to be the reverse of what the Angular documentation says, but I must be misunderstanding it: $interval docs
function verifyConnection(callback) {
    vm.stillAlive().then(
        function (data) {
            console.log("Lost connection with server: " + data);
            signalrFactory.start();
            var reconnectedMessage = "There was a server disconnect. Your connection has been re-established, but you should reload your browser.";
            getQueue(function () { alert(reconnectedMessage); });
        },
        function (data) {
            console.log("Server connection intact: " + data);
            callback();
        }
    );
}
So for example, this is called from the UI to open a modal:
vm.open = function (item) {
    verifyConnection(function () {
        openFlagModal(item);
    });
};
I also plan to call the verifyConnection() function periodically. This solution seems to work and keeps all the clients in sync with the server no matter what. However, I don't like the fact that the SignalR client is already sending pings to the server and re-establishing the connection, just not re-wiring the client callback methods. It makes me wonder if I'm doing something wrong that causes the client functions not to get reconnected.
Any thoughts on this solution?

SignalR: check if hub already started

I have multiple JavaScript blocks with SignalR functions.
I don't know the order of execution, so I want to start the hub with
$.connection.hub.start();
if it isn't started already.
How can I check whether the hub is already started? Starting it multiple times throws an error.
There are a few ways to approach this problem. The first is to create your own connection status tracking variables, which you set with the connection callback events:
$.connection.hub.start().done(function() { ConnectionStarted = true; })
You can check ConnectionStarted before attempting to start the connection. Unfortunately, this won't work well, as start() is asynchronous, and so many instances could try to start a connection before one has finished and set ConnectionStarted to true.
So, working solutions. There are two.
First, have every instance use its own connection object (i.e. don't use the default $.connection.hub, but instead create the connection manually):
var localConnection = $.hubConnection();
var localHubProxy = localConnection.createHubProxy('HubNameHere');
This isn't great, as most browsers have a limited number of connections allowed per page, and also because it is generally overkill.
IMO, the best solution is to use the single automatic connection with default proxy ($.connection.hub) and look at the connection state (something I just came across). Each connection object has a state:
$.signalR.connectionState
// Object {connecting: 0, connected: 1, reconnecting: 2, disconnected: 4}
So, in each instance, go for something like this:
if ($.connection.hub && $.connection.hub.state === $.signalR.connectionState.disconnected) {
    $.connection.hub.start();
}
Also note that when you create a connection, it will sit in the "disconnected" state (4) until start is called on it. Once start is called, the connection will apparently try to reconnect constantly (if it is interrupted) until $.connection.hub.stop() is called, at which point it goes back to the "disconnected" state.
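If you do want an automatic restart after a drop, a minimal sketch based on the documented disconnected event (the 5-second delay is an arbitrary choice):

$.connection.hub.disconnected(function () {
    setTimeout(function () {
        $.connection.hub.start();
    }, 5000); // wait a bit before reconnecting to avoid hammering the server
});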
Refs:
http://www.asp.net/signalr/overview/hubs-api/hubs-api-guide-javascript-client#establishconnection
https://github.com/SignalR/SignalR/wiki
You can check the connection state in each of your functions like:
function doSomething() {
    if ($.connection.hub.state === $.signalR.connectionState.disconnected) {
        $.connection.hub.start().done(function () { myHub.server.myHubMethod(); });
    } else {
        myHub.server.myHubMethod();
    }
}
You can detect when the hub has started using .done()
$.connection.hub.start().done(function () {
});
Using this method, you can do the following (taken from the docs: https://github.com/SignalR/SignalR/wiki/SignalR-JS-Client-Hubs); you can then keep track of whether the connection is open yourself.
function connectionReady() {
    alert("Done calling first hub serverside-function");
};

$.connection.hub.start()
    .done(function() {
        myHub.server.SomeFunction(SomeParam) // e.g. a login or init
            .done(connectionReady);
    })
    .fail(function() {
        alert("Could not Connect!");
    });
