Really quick question.
I'm creating a synced streaming app, so I'd like to emit the current timecode of whatever is playing every second through socket.io and broadcast it to the other clients.
Is this wise? Are there any drawbacks to making the same call every second, and can I make other calls at the same time?
P.S. I'm not making any database calls or doing any processing server-side.
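For reference, a minimal sketch of what I have in mind (the 'timecode' event name and the video variable are made up for illustration):
// Broadcasting client: emit the playback position once per second.
setInterval(function () {
    socket.emit('timecode', { time: video.currentTime });
}, 1000);

// Server: relay the position to every other connected client.
io.on('connection', function (socket) {
    socket.on('timecode', function (data) {
        socket.broadcast.emit('timecode', data);
    });
});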
With not too many clients viewing videos it should be fine, but once the number of viewers grows large you will start to experience small lags.
Another approach is to keep track of playback on the server. For example, you can do this:
When the video loads with autoplay (or starts playing), emit an event to start a timer server-side.
// from client-side
socket.emit('videoplaying', /* some video data */);
On the server side you start small timers keyed by socket ID:
function Timer(videoInformation) {
    this.currentTime = 0;                 // seconds of playback elapsed
    this.startedAt = Date.now();
    this.endedAt = null;
    this.title = videoInformation.title || 'Untitled';
    this.interval = null;
    this.play = function () {
        var self = this;
        this.interval = setInterval(function () {
            // advance the tracked playback position once per second
            self.currentTime = (Date.now() - self.startedAt) / 1000;
        }, 1000);
    };
    this.stop = function () {
        if (this.interval !== null) {
            clearInterval(this.interval);
        }
    };
    // .. pause/end/reset ..
}
// server side
var TimeTracker = {};

// handle a new 'videoplaying' event
socket.on('videoplaying', function (videoInformation) {
    if (!TimeTracker.hasOwnProperty(socket.id)) {
        TimeTracker[socket.id] = [];
    }
    TimeTracker[socket.id].push(new Timer(videoInformation));
});
Finally, you add event listeners to the video the user is currently viewing, to notify the server-side timer when it is paused, stopped, seeked to a specific time, etc.
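A rough sketch of that wiring, using the standard HTMLMediaElement events ('pause', 'seeked', 'ended'; the socket event names are made up):
var video = document.querySelector('video');

video.addEventListener('pause', function () {
    socket.emit('videopaused', { at: video.currentTime });
});
video.addEventListener('seeked', function () {
    socket.emit('videoseeked', { to: video.currentTime });
});
video.addEventListener('ended', function () {
    socket.emit('videoended');
});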
Hope it helps. This isn't a working solution, it's more of a concept.
Related: https://github.com/BuildFire/sdk/wiki/BuildFire-Audio-Player-Service
In index.html:
function Player() {} // make it global
Player.instance = buildfire.services.media.audioPlayer;
In the triggering function inside player.js:
let track = {};
track.title = "Given Title";
track.url = <insert url here>;
track.image = <insert img.jpg url here>;
Player.instance.play(track);
When initializing the Media Player UI in player.js:
Player.instance.onEvent(function (e) {
    console.log(e.event);
    // other code goes here
});
Right now, on the offline tester, my media player works perfectly, updating the time and applying the changes when I pause or play. Unfortunately, when I move to the online tester and the actual app, neither my phone nor my client's phone receives the "timeUpdate" event. And I can confirm that, with the code above, I DO get "play" and "pause" events, just not the "timeUpdate" event.
Can anyone confirm if this happens for them too, or if there are any fixes?
I think the problem you're having is not with the code but with the server where the audio is hosted. Try hosting the audio on an S3 bucket or other cloud storage instead of bundling it into the app. (Sorry, can't comment yet.)
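If you want to sanity-check that on the device, you can log every event the player emits (using only the onEvent callback already shown in the question) and see whether timeUpdate ever arrives:
Player.instance.onEvent(function (e) {
    // log every event so you can see exactly which ones reach the device
    console.log('audioPlayer event:', e.event, e);
});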
So far I have successfully established an RTC connection between two peers with a data channel (running a node.js server). I can send data back and forth.
I have also successfully streamed the webcam from one peer to another and vice versa.
How exactly am I doing this?
Both peers do this:
function handleRemoteStreamAdded(event) {
    console.log('Remote stream added.');
    remoteStream = event.stream;
    remoteVideo.srcObject = remoteStream;
}

function gotStream(stream) {
    ...
    pc.addStream(stream);
    ...
}

navigator.mediaDevices.getUserMedia(constraints).then(gotStream).catch(err);
...
pc = new RTCPeerConnection();
...
pc.onaddstream = handleRemoteStreamAdded;
So I'm basically saying that whenever I add my own stream (pc.addStream), handleRemoteStreamAdded runs on the other end. It all works fine.
But what I really want as a next step is to add a button to each client, giving each of them the option of whether or not to stream their cam to the other side. If they want to, the stream should start automatically on the other end. Unfortunately, I just can't figure out how.
Theoretically, what I thought is to add an event listener to a button so that the click triggers:
navigator.mediaDevices.getUserMedia(constraints).then(gotStream).catch(err);
By doing this I basically also call pc.addStream(stream) via gotStream. Then I send a message to the other end like "display my cam", and on receiving this message the other peer should somehow trigger handleRemoteStreamAdded. But that function hangs off a pre-defined event which I can only "access" locally via pc.onaddstream = handleRemoteStreamAdded;
How can I automatically start streaming the other side's cam as soon as I either get a message like "display my cam" or by some event?
Are you creating another offer and doing a new signaling exchange after calling pc.addStream? (Which, fwiw, is deprecated; prefer addTrack and ontrack.)
See https://webrtc.github.io/samples/src/content/peerconnection/upgrade/ for a similar thing adding video to an audio-only call.
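A rough sketch of that renegotiation flow with the newer API; sendMessage stands in for whatever signaling channel you already use, and the button wiring is illustrative:
const pc = new RTCPeerConnection();

// Remote side: fires whenever a new track arrives mid-call.
pc.ontrack = (event) => {
    remoteVideo.srcObject = event.streams[0];
};

// Local side: fires whenever addTrack makes a new offer necessary.
pc.onnegotiationneeded = async () => {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendMessage({ sdp: pc.localDescription }); // your signaling channel
};

// Button click: grab the cam and add its tracks; renegotiation starts automatically.
shareCamButton.onclick = async () => {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
};
The receiving peer answers the new offer exactly like the initial one (setRemoteDescription, createAnswer, setLocalDescription), and its ontrack handler attaches the video with no extra "display my cam" message needed.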
We are building a chatroom with our own notification system, using a service worker + SSE instead of depending on GCM.
On desktop it is fine, but on the mobile Android app (which uses cordova-crosswalk, Chromium 53) the long-running notification connection gets stuck after 20-30 minutes, even while the activity is in the foreground.
It doesn't die with an error, it just stops receiving data. There is no error at all, which is very weird, and there is no way to reconnect since we cannot tell whether the connection is dead.
What would be the cleanest way to handle this? Restarting the connection every 5 minutes is one idea, but it is not clean.
Code:
runEvtSource(url, fn) {
    if (this.get('session.content.isAuthenticated') === true) {
        var evtSource = new EventSource(url, {
            withCredentials: true
        });
        return evtSource; // the reconnect code below expects this return
    }
}
Aggressive reconnect code:
var evtSource = this.runEvtSource(url, fn);

var evtSourceErrorHandler = (event) => {
    var txt;
    switch (event.target.readyState) {
        case EventSource.CONNECTING:
            txt = 'Reconnecting...';
            evtSource.onerror = evtSourceErrorHandler;
            break;
        case EventSource.CLOSED:
            txt = 'Reinitializing...';
            evtSource = this.runEvtSource(url, fn);
            evtSource.onerror = evtSourceErrorHandler;
            break;
    }
    console.log(txt);
};

evtSource.onerror = evtSourceErrorHandler;
I normally add a keep-alive layer on top of the SSE connection. It doesn't happen that often, but sockets can die without dying properly, so your connection just goes quiet and you never get an error.
So, one way is, inside your get-data function:
if (timer) clearTimeout(timer);
timer = setTimeout(reconnect, 30 * 1000);
// ...process the data
In other words, reconnect if it has been over 30 seconds since you last received data. Choose the value based on the frequency of the data you send: if 10% of the time there is a 60-second gap between data events, but never a 120-second gap, then setting the time-out to something higher than 120 seconds makes sense.
You might also want to keep things alive by pushing regular messages from the server to the client. This is a good idea if the frequency of messages from the server is very irregular. E.g. I might have the server send the current timestamp every 30 seconds, and use a keep-alive time-out of 45 seconds on the client.
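Putting both halves together, a minimal sketch (connect and resetKeepAlive are illustrative names, not from the question's code):
var timer = null;

function connect(url) {
    var es = new EventSource(url, { withCredentials: true });

    es.onmessage = function (event) {
        resetKeepAlive(es, url);
        // ...process event.data (server heartbeats land here too)
    };

    resetKeepAlive(es, url);
    return es;
}

// If nothing arrives for 45s (with the server sending a heartbeat
// every 30s), assume the socket died silently and reconnect.
function resetKeepAlive(es, url) {
    if (timer) clearTimeout(timer);
    timer = setTimeout(function () {
        es.close();
        connect(url);
    }, 45 * 1000);
}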
As an aside, if this is a mobile app, bear in mind that the user will weigh the benefit of lower-latency chat messages against the downside of reduced battery life.
I know that many solutions to my problem can be found on the web, but none of them are working for me. That's why I'm asking this question.
First let me explain what I'm looking to achieve -
-> I'm developing a multi-user web application [ASP.NET].
-> I'm using SignalR to get real-time database change notifications, and SignalR instantly transmits the change notifications to all the users logged in to the application.
-> Now, in addition, I want to play a notification sound for all the logged-in users so that they understand a new notification needs attention.
This is what I've done so far -
JavaScript
<script>
function playSound(mysound) {
//thisSound = document.getElementById(mysound);
//thisSound.Play();
var audio = new Audio(mysound);
audio.play();
}
</script>
Code Behind -
ScriptManager.RegisterClientScriptBlock(Me, [GetType](), "OpenWindow", "javascript: playSound('./audio/notification.wav')", True)
The problem with this solution is that the user needs to reload the page to hear the notification sound, which I think is pointless when they can't hear it instantly, the way SignalR delivers the notifications themselves.
Is it possible to push the sound to all the clients so that they don't need to reload the page?
Any help would be highly appreciated. If you need any further clarification please let me know.
Finally I got it working. I had to change the jQuery a little bit -
<script type="text/javascript">
    function playSound(mysound) {
        //thisSound = document.getElementById(mysound);
        //thisSound.Play();
        var audio = new Audio(mysound);
        audio.play();
    }

    $(function () {
        var notify = $.connection.notificationsHub;
        notify.client.displayNotification = function (s_Not, s_Path) {
            $("#newNot").html(s_Not);
            var iLevel = "<%=System.Web.HttpContext.Current.Session("USER_LEVEL")%>";
            var i_OldNot = "<%=System.Web.HttpContext.Current.Session("NOT_COUNT")%>";
            if (iLevel == 2) {
                //alert(i_OldNot + " " + s_Not);
                // play only when the notification count has gone up
                if (Number(i_OldNot) < Number(s_Not)) {
                    playSound("/audio/notification.wav");
                }
            }
        };
        $.connection.hub.start();
    });
</script>
In the code behind I had to set a session variable to store the notification count before the update. If the present number of notifications is higher than the session value, the notification sound plays -
System.Web.HttpContext.Current.Session("NOT_COUNT") = i_LastNotCount
Now the sound plays without reloading the page. Special thanks to rdans; his comments below gave me the idea -
"Then in a comment you say: 'I already have implemented SignalR. So it's not the problem at all.' This suggests to me that you probably don't understand what SignalR is doing, because the whole point of SignalR is to push to the browser without having to post back or use an ajax call."
At the point where you send the logged-in-user notification through SignalR, you can also call a JavaScript function from the server.
Hope this is what you are looking for.
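For illustration, a minimal client-side sketch of that pattern (playSound is a hypothetical client method; the server would invoke it with Clients.All.playSound(...)):
$(function () {
    var notify = $.connection.notificationsHub;

    // The server invokes this on every connected client; no reload needed.
    notify.client.playSound = function (path) {
        new Audio(path).play();
    };

    $.connection.hub.start();
});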
I've made a messenger web application using PHP, JavaScript and Ajax. Now I want a sound to be played for every message the user receives. For that, I've used this algorithm:
var sound1 = "";
var sound2 = "";

$(document).ready(function () {
    // every 800 ms, sample the last received message
    var soundvar = setInterval(soundFunc, 800);

    function soundFunc() {
        var chatattr4 = $(".chatwindow").css("visibility");
        if (chatattr4 == "visible") {
            sound1 = $("#conversation #receivermessage").last().text();
            // take a second sample 700 ms later
            var soundvar2 = setTimeout(soundFunc2, 700);
            function soundFunc2() {
                sound2 = $("#conversation #receivermessage").last().text();
            }
        }
        // 750 ms later, compare the two samples
        var soundvar3 = setTimeout(soundFunc3, 750);
        function soundFunc3() {
            if (sound1 != sound2) {
                var audio = new Audio('fb.mp3');
                audio.play();
            }
        }
    }
});
So what I'm doing is sampling the last received message twice on each tick of an ~800 ms interval: once immediately and once 700 ms later, then comparing the two samples after 750 ms. If they differ, the sound is played. It's working almost okay, but sometimes the sound is not played. (If you receive messages too quickly, the sound won't play for each one; that is a different case and I don't have a problem with it.) I don't think there's any problem with the browser or the sound file; I think it's a problem with this algorithm. I don't want to use any other algorithm.
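In essence, each tick boils down to this (a condensed sketch of the code above; lastMessageText is a hypothetical helper wrapping the same selector):
function lastMessageText() {
    return $("#conversation #receivermessage").last().text();
}

// Snapshot now, snapshot again 700 ms later,
// and play the sound if the two snapshots differ.
var before = lastMessageText();
setTimeout(function () {
    var after = lastMessageText();
    if (before !== after) {
        new Audio('fb.mp3').play();
    }
}, 700);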
Can this algorithm be modified/improved/fixed so that it works better? (95-99% accuracy).