Tokbox streamCreated being called same number of times client is called - javascript

I'm calling on a client, one-to-one, multiple times during a session and the streamCreated event gets called on the host. When I hang up, I unsubscribe and the client unpublishes. However, when I call on the client again, the streamCreated event gets called twice on the host's side. I call on the client 3, 4, 5, etc. more times and the streamCreated event fires the same number of times as I have called on the client. For example on the 7th time I call the client, streamCreated gets called 7 times! It seems like I'm not really destroying the streams although streamDestroyed gets called.
On the client side, I was desperate enough to try and unpublish with:
clientSession.unpublish(clientPublisher, handleError);
clientPublisher.stream.destroy();
clientPublisher.destroy();
clientPublisher = null;
On the host side, I've also tried to make sure the subscriber was destroyed:
clientSession.unsubscribe(clientSubscriber);
clientSubscriber.destroy();
clientSubscriber = null;
The problem shows up when I open a video monitor with multiple clients and have each client publish without audio: I can still hear the client I called on, as if their original stream(s) still exist. What am I doing wrong?

Every time I called on the person, I was using:
clientSession.on('streamCreated', function (event) {
  clientSubscriber = clientSession.subscribe(event.stream, vid, {
  ...
So, each time I called on a client, it created a new event handler. To correct the issue, I added the following code when I disconnected from the client.
clientSession.unsubscribe(clientSubscriber);
clientSession.off();
That killed the event handler and everything works properly now.
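For reference, here is a minimal sketch of that pattern with a named handler, so only the streamCreated listener is removed; the subscribe options and handleError are assumptions carried over from the snippets above:
// Register the handler once per call, and remove exactly that handler on hang-up.
function onStreamCreated(event) {
  clientSubscriber = clientSession.subscribe(event.stream, vid, {
    insertMode: 'append'   // assumed subscriber options
  }, handleError);
}

function startCall() {
  clientSession.on('streamCreated', onStreamCreated);
}

function endCall() {
  clientSession.unsubscribe(clientSubscriber);
  clientSubscriber = null;
  // clientSession.off() with no arguments removes every handler on the session;
  // passing the event name and handler removes only this one.
  clientSession.off('streamCreated', onStreamCreated);
}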

Related

SignalR client event handler not invoked despite receiving message from server

I have already tried all possible troubleshooting for this. The problem is occurring with a single event handler. The message is received from the server side; I confirmed this in the network tab, but the event handler is not invoked and no error is thrown.
objConnection.on("FileTransferAccepted", (res) => {
setTransferStatus(true);
});
This is received in the browser.
I have also confirmed that the event handler is attached.
I'm stuck and can't figure out what's wrong with just this handler. I tried renaming the handler, still no change.
So I finally found the problem.
I created a connection object in one page and forgot to clean up and stop the connection before navigating to another page. There I created another connection object and updated the connection id in the db so that I'm reachable. But even after the component was unmounted, the earlier connection was not destroyed. The message was received by it, but that object had no event handler attached for this event. All I did was close the previous connection on unmount and everything started working fine.
Summary:
Solution 1: If you have created multiple connections in different pages or components, destroy these connections by calling the stop() method on unmount. Also don't forget to update your new connection id in the db.
Solution 2: Don't create multiple connections for different pages; use the same connection object throughout the app.
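A minimal sketch of Solution 1 in a React component, assuming the @microsoft/signalr client; the hub URL and the db-update call are placeholders:
import { useEffect, useState } from 'react';
import { HubConnectionBuilder } from '@microsoft/signalr';

function FileTransferStatus() {
  const [transferAccepted, setTransferAccepted] = useState(false);

  useEffect(() => {
    // One connection per mounted component.
    const connection = new HubConnectionBuilder()
      .withUrl('/hubs/fileTransfer')   // placeholder hub URL
      .build();

    connection.on('FileTransferAccepted', () => setTransferAccepted(true));

    connection.start();
    // .then(() => updateConnectionIdInDb(connection.connectionId)) // your own API call

    return () => {
      // Solution 1: stop the connection on unmount so a stale connection
      // doesn't receive the message with no handler attached.
      connection.stop();
    };
  }, []);

  return transferAccepted ? 'Transfer accepted' : 'Waiting...';
}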

ChatClient websocket binding to events multiple times whenever the socket re-connects

I'm using a simple chat-client for my react-native chat app which uses WebSocket.
Whenever there's an error on the server side and the chat shuts down, it causes an error event in my app and sets the readyState to CLOSED, meaning I need to restart and try to re-connect.
I have a list of listeners when the socket connects such as: 'chatMessage', 'error', 'userDisconnected', 'userConnected'.
But the problem is, whenever I recreate the client by doing:
initWebSocket() {
  this._chatClient = null;
  this._chatClient = new ChatClient({
    url: `${Constants.chatUrl}:${Constants.chatPort}`,
    log: false
  });
}
and then call connect() afterwards, I end up with duplicated event listeners for all of them. So when I send/receive a message, it shows up twice, as if it was indeed sent twice. This is only a look & feel issue, because the server shows the message was sent just once. If I go back to the message list screen and then re-enter the same chat, it updates and shows only 1 message, not 2. So this is definitely a duplicated event listener being called.
How can I solve this specific issue?
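One common way to avoid the duplication is to keep the handlers in one map and detach them before every reconnect; a minimal sketch, assuming the chat client exposes on()/off()-style methods (substitute whatever removal method the library actually provides):
// Handlers live in one map so they can be attached and detached symmetrically.
const listeners = {
  chatMessage: (msg) => { /* render the message once */ },
  error: (err) => { /* surface the error / schedule a reconnect */ },
  userConnected: (user) => { /* ... */ },
  userDisconnected: (user) => { /* ... */ },
};

initWebSocket() {
  if (this._chatClient) {
    // Detach the old bindings first so the new connection doesn't double-fire.
    Object.entries(listeners).forEach(([event, handler]) =>
      this._chatClient.off(event, handler)   // assumed removal API; adjust to the library
    );
  }
  this._chatClient = new ChatClient({
    url: `${Constants.chatUrl}:${Constants.chatPort}`,
    log: false
  });
  Object.entries(listeners).forEach(([event, handler]) =>
    this._chatClient.on(event, handler)
  );
}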

What might be the reason that a mobile device is not registered with OneSignal?

We have a small app that we started alpha testing yesterday among roughly 50 people. It's a React Native app. Push notifications are handled by OneSignal and they're pretty essential to getting the most out of the app. For about 80% of people it's working just fine: their device gets registered, I store the userId in our database, and notifications are sent and received. For the others, I am not even able to get a userId.
All users have the exact same app, there are no variants, so the OneSignal appId is definitely correct there.
I have the following code that's executed whenever the app is started. For the failing users the registerOneSignal function is not even called. It's really strange behavior. Could it be a bug in OneSignal?
react-native-onesignal: v3.0.5
const alternateApproach = (device) => {
  log.debug('onesignal ids', device)
  this.activateOneSignal(device.userId, device)
}
const registerOneSignal = (status: SubscriptionStatus) => {
  log.debug('onesignal state', status)
  if (!status.userId) {
    OneSignal.addEventListener('ids', alternateApproach)
    return
  }
  this.activateOneSignal(status.userId)
}
OneSignal.getPermissionSubscriptionState(registerOneSignal)
In the end, it was indeed a mistake on my side, simply coming from the fact that when an event is fired before the listener is attached, it is missed forever. I have moved listening for the ids event to a very early point in my app and it seems to be working for everyone now.
What's a bit strange, though, is that OneSignal.getPermissionSubscriptionState is not an event; it should invoke a callback to get the data out regardless of timing. So apparently there is some bug in the OneSignal module; perhaps there is an event underneath too. I've moved the call to this method to the same point where I am listening for ids. A couple of tests indeed show that ids fires every time, but the callback for that method is still never invoked in some cases.
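A minimal sketch of that reordering for react-native-onesignal v3.x; activateOneSignal is the app's own helper from the snippet above:
import OneSignal from 'react-native-onesignal';

// Attach the 'ids' listener as early as possible, before querying the state,
// so an event fired during startup is never missed.
function initPushRegistration(activateOneSignal) {
  OneSignal.addEventListener('ids', (device) => {
    console.log('onesignal ids', device);
    activateOneSignal(device.userId, device);
  });

  OneSignal.getPermissionSubscriptionState((status) => {
    console.log('onesignal state', status);
    if (status.userId) {
      activateOneSignal(status.userId);
    }
    // If userId is missing here, the 'ids' listener above still fires once
    // OneSignal finishes registering the device.
  });
}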

What happens with socket's client scope on 'disconnect' event (server side)

So what happens to the socket's client scope on the 'disconnect' event?
I'm trying to avoid race conditions and logic flaws in my node.js + mongoose + socket.io app.
What I mean by scope is:
io.on('connection', function (client) {
  /// CLIENT SCOPE ///
  // define private vars, store and operate client's session info
  // receiving and sending client's socket signals
});
Some background:
Let's say, for example, I implement some function that operates on the db by finding a room and writing a user into it.
BUT, at the moment when the room (to be written into) has been found but the user has not yet been written in, he disconnects. On the disconnect event I must pull him out of his last room in the db, but I cannot: it hasn't been saved to the db at that moment yet.
The only way I see is to set a bool value on the 'disconnect' event that I can check before saving the guy into the room, and if it's true, not save him at all.
What I'm confused about is whether this bool would survive the disconnect event, as it's saved in the client's scope.
What happens to the scope? Is it completely wiped out on disconnect, or only when everything that relies on it has finished?
I'm using 'forceNew': true to force socket.connect() to reconnect immediately if something (hypothetically) goes wrong and a socket error is fired without the user really leaving the site.
If the user reconnects through this 'old' socket, does he get back his scope on the server, or has the socket's previous scope been wiped out on disconnection, or wiped out on reconnection by the 'connection' event?
The client closure will remain alive as long as there is code running that uses that closure so you generally don't have to worry about that issue. A closure is essentially an object in Javascript and it will only be garbage collected when there is no active code that has a reference to anything inside the closure.
As for your concurrency issue with a socket being disconnected while you are writing to the DB, you are correct to recognize that this is an issue to be careful with. Exactly what you need to do about it depends a bit on how your database behaves. Because node.js runs single threaded, your node.js code writing to the database will run to completion before any disconnect event gets processed. This doesn't mean that the database write will have completed before the disconnect event starts processing, but it does mean that it will have already been sent to the database. So, if your database processes multiple requests in the order it receives them, then it will likely just do the right thing and you won't have any extra code to worry about.
If your database could actually process the delete before the write finishes (which seems unlikely), then you'd have to code up some protection for that. The simplest concept there is to implement a database queue for all database operations for a given user. I'd probably create an object with common DB methods on it to implement this queue and create a separate instance in each client closure so it is local to a given user. When a database operation is in progress, this object would have a flag indicating that an operation is in flight. If another database operation is called while this flag is set, that second operation goes in a queue rather than being sent directly to the database. Each time a database operation finishes, it checks the queue to see if there is a next operation waiting to run. If so, it runs it.
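A minimal sketch of such a per-user queue (the names are illustrative, not from the original app):
// Serializes async DB operations: while one is in flight, later ones wait in line.
class DbQueue {
  constructor() {
    this.busy = false;
    this.pending = [];
  }

  run(operation) {   // operation: () => Promise
    return new Promise((resolve, reject) => {
      this.pending.push({ operation, resolve, reject });
      this._next();
    });
  }

  _next() {
    if (this.busy || this.pending.length === 0) return;
    this.busy = true;
    const { operation, resolve, reject } = this.pending.shift();
    Promise.resolve()
      .then(operation)
      .then(resolve, reject)
      .finally(() => {
        this.busy = false;
        this._next();   // run whatever queued up in the meantime
      });
  }
}

// Created inside the connection closure so each client gets its own queue:
// const dbQueue = new DbQueue();
// dbQueue.run(() => saveUserToRoom(userId, roomId));
// dbQueue.run(() => removeUserFromLastRoom(userId));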
FYI, I have a simple node.js app (running on a Raspberry Pi with temperature sensors) that records temperature data and every so-often it writes that data to disk using async writes. Because new temperature data could arrive while I'm in the middle of my async writes of the data, I have a similar issue. I abstracted all operations on the data in an object, implemented a flag that indicates if I'm in the process of writing the data and, if any method calls arrive to modify the data, then those operations go in a queue and they are processed only when the write has finished.
As to your last question, the client scope you have in the io.on('connection', ...) closure is associated only with that particular connection event. If you get another connection event and thus this code is triggered to run again, that will create a new and separate closure that has nothing to do with the prior one, even if the client object is the same. Normally, this works out just fine because your code in this function will just set things up again for a new connection.
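For illustration, a minimal sketch of per-connection state living in that closure, including the bool flag idea from the question (all names are made up):
const io = require('socket.io')(3000);   // assumed standalone server

io.on('connection', function (client) {
  // Everything declared here belongs to this one connection; a reconnect
  // triggers a fresh 'connection' event and therefore a fresh closure.
  let lastRoomId = null;
  let disconnected = false;

  client.on('joinRoom', function (roomId) {
    // ... find the room and write the user into it in the db ...
    if (disconnected) {
      // The client dropped while the write was in flight: undo or skip the save.
      return;
    }
    lastRoomId = roomId;
  });

  client.on('disconnect', function () {
    disconnected = true;   // the closure (and this flag) stays alive while code references it
    if (lastRoomId) {
      // ... pull the user out of the room they were last written into ...
    }
  });
});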

WebRTC: Unable to successfully complete signalling process using DataChannel

I've been having trouble establishing a WebRTC session and am trying to simplify the issue as much as possible. So I've written up a simple copy & paste example, where you just paste the offer/answer into webforms and click submit.
The HTML+JS, all in one file, can be found here: http://pastebin.com/Ktmb3mVf
I'm on a local network, and am therefore removing the ICE server initialisation process to make this example as bare-bones as possible.
Here are the steps I'm carrying out in the example:
Page 1
The page loads; the user enters a channel name (e.g. test) and clicks create.
A new Host object is created, new PeerConnection() and createDataChannel are called.
createOffer is called, and the resulting offerSDP is pasted into the offer textarea.
Page 2
Copy offerSDP from Page 1 and paste into offer textarea on Page 2, click join.
A new Guest object is created; a PeerConnection is created and an ondatachannel handler is set.
setRemoteDescription is called for the Guest object, with the offerSDP data.
createAnswer is called and the result is pasted into the answer textarea box.
Page 1
The answerSDP is copied from Page 2 and pasted into the answer textarea of Page 1, submit answer is clicked.
Host.setRemoteDescription is called with the answerSDP data. This creates a SessionDescription, then peer.setRemoteDescription is called with the resulting data.
Those are the steps carried out in the example, but it seems I'm missing something critical. After the offerer's remoteDescription is set with the answerSDP, I try to send a test message on the dataChannel:
Chrome 40
"-- complete"
> host.dataChannel.send('hello world');
VM1387:2 Uncaught DOMException: Failed to execute 'send' on 'RTCDataChannel': RTCDataChannel.readyState is not 'open'
Firefox 35
"-- complete"
ICE failed, see about:webrtc for more details
> host.dataChannel.send('hello world');
InvalidStateError: An attempt was made to use an object that is not, or is no longer, usable
I also had a more complicated demo operating, with a WebSocket signalling server, and ICE candidates listed, but was getting the same error. So I hope this simplification can help to track down the issue.
Again, the single-file code link: http://pastebin.com/Ktmb3mVf
To enable WebRTC clients to connect to each other, you need ICE. While STUN and TURN, which you don't need for such a local test, are part of that, even without these helpers you still need ICE to tell the other end which IP/port/protocol to connect to.
There are two ways to do this: Google's "trickle ice", where the SDP (answer/offer) is passed on without any ICE candidates. These are then transported over a separate signaling layer and added as they are discovered. This speeds up the connection process, as ICE takes time and some late ICE candidates might not be needed.
The classic method is to wait until all ICE candidates have been gathered, and then generate the SDP with these already included.
I have modified your latest version to do that: http://pastebin.com/g2YVvrRd
You also need to wait for the datachannel/connection to become available before being able to use it, so I've moved the sending of the message to the channels onopen event.
The significant changes to the original code:
The interface callbacks were removed from Host.prototype.createOffer and Guest.prototype.createAnswer; instead we attach the provided callback function to the respective objects for later use.
self.cb = cb;
Both Host and Guest have an added ICE handler for the PeerConnection:
var self = this;
this.peer.onicecandidate = function (event) {
  // This event is called for every discovered ICE candidate.
  // If this was trickle ICE, you'd pass them on here.
  // An event without an actual candidate signals the end of the
  // ICE collection process, which is what we need for classic ICE.
  if (!event.candidate) {
    // We fetch the up-to-date description from the PeerConnection.
    // It now contains lines with the available ICE candidates.
    self.offer = self.peer.localDescription;
    // Now we move on to the deferred callback function.
    self.cb(self.offer);
  }
};
For the Guest, self.offer becomes self.answer.
The interface handler $("#submitAnswer").click() no longer sends the message; instead it is sent when the data channel is ready, in the onopen event defined in setChannelEvents().
channel.onopen = function () {
  console.log('** channel.onopen');
  channel.send('hello world!');
};
