I am running streaming on a Wowza streaming server, but I am not able to find the exact duration of a streaming session.
I run a music website that plays streams in the user's native player. For our tracking, we want to record the exact duration of audio listened to by each user.
As others have said in the comments, it isn't possible to determine the exact streaming time.
Different clients behave differently in how they handle streams. Consider the case where a browser client pre-buffers data. If the user goes to a page and the browser begins downloading audio data, the server will think the client is listening to the stream when in fact the data is just sitting in memory. When the user does start playing the audio, say 1 minute later, the server now believes they have already been listening for a minute. When the user goes to a new page, the connection to the server is dropped, so the server assumes the audio stopped at the moment of disconnection.
In other cases, media players can actually be paused mid-stream where they buffer the data for several seconds before disconnecting.
The best you could do is use client-side analytics, but this isn't possible in all circumstances since you are not always in control over the client, and not all clients would be capable.
You can't do that on the server (Wowza) side. Well, you can, but the data won't be accurate, because of buffering and how HTTP streaming protocols work in general.
However, you can still aggregate this data using JavaScript on the client side.
You have to listen for player events like play, pause, stop, even seek. Most web players have callbacks to track those events. Then collect the data and send it to your database for storage.
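As a rough sketch of the accumulation logic (the element id, the /analytics endpoint, and the event wiring below are placeholders, not any specific player's API):

```javascript
// Sketch: accumulate actual listening time from player events.
// A small tracker with an injectable clock so it can be tested.
function createListenTracker(now = Date.now) {
  let startedAt = null;
  let listenedMs = 0;
  return {
    onPlay() { if (startedAt === null) startedAt = now(); },
    onStop() {
      if (startedAt !== null) {
        listenedMs += now() - startedAt;
        startedAt = null;
      }
    },
    // Total listened time so far, including any in-progress play segment
    total() { return listenedMs + (startedAt !== null ? now() - startedAt : 0); }
  };
}

// Wiring it to an HTML5 <audio> element (browser only; ids/paths are placeholders):
// const tracker = createListenTracker();
// const player = document.getElementById('player');
// player.addEventListener('play',  tracker.onPlay);
// player.addEventListener('pause', tracker.onStop);
// player.addEventListener('ended', tracker.onStop);
// window.addEventListener('beforeunload', () =>
//   navigator.sendBeacon('/analytics', JSON.stringify({ listenedMs: tracker.total() })));
```

Using sendBeacon on unload (rather than fetch) gives the final report a better chance of surviving the page navigation.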
To get the duration of the stream, you have to develop a custom module. There is an event called onMediaStreamDestroy; from its IMediaStream object you can get the duration.
public class MyMediaStreamListener implements IMediaStreamNotify
{
    @Override
    public void onMediaStreamDestroy(IMediaStream stream)
    {
        stream.length(); // returns the length of the stream in seconds
    }
}
I need some protection in a WebRTC app: I need to stop clients from receiving a large data packet, e.g. over 2 KB.
I need to cut off, so to speak, if someone sends me data larger than 2 KB, and delete the message. Is there a setting somewhere I can use to limit the data received? Or can I intercept the data while it is being downloaded and stop the download part way?
I've been searching around but could not find any information on this.
According to the Mozilla foundation's WebRTC documentation on Using WebRTC data channels, there is not:
WebRTC data channels support buffering of outbound data. This is handled automatically. While there's no way to control the size of the buffer, you can learn how much data is currently buffered, and you can choose to be notified by an event when the buffer starts to run low on queued data. This makes it easy to write efficient routines that make sure there's always data ready to send without over-using memory or swamping the channel completely.
However, if your intention is to use the amount of data as a trigger, you may be able to use RTCDataChannel.bufferedAmount.
This returns the number of bytes currently queued to be sent over the data channel (note that this measures outbound data, not received data). More details here.
Read that value and build logic to limit or stop transfers as your needs dictate.
Syntax
var amount = aDataChannel.bufferedAmount;
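For the receiving side specifically, there is no built-in limit; the closest you can get is checking each message's size after it arrives and discarding it. A sketch under that assumption (the 2 KB cap and handler names are placeholders):

```javascript
// Sketch: discard inbound data-channel messages over a size cap.
// This runs after the message has already arrived; it cannot stop
// the transfer mid-flight, only refuse to process oversized data.
const MAX_BYTES = 2048;

// Data channel messages can be strings, ArrayBuffers, or Blobs
function messageSize(data) {
  if (typeof data === 'string') return new TextEncoder().encode(data).length;
  if (data instanceof ArrayBuffer) return data.byteLength;
  if (data && typeof data.size === 'number') return data.size; // Blob
  return 0;
}

function guardChannel(channel, onMessage) {
  channel.onmessage = (event) => {
    if (messageSize(event.data) > MAX_BYTES) {
      // Oversized: drop it (or call channel.close() to drop the peer entirely)
      return;
    }
    onMessage(event.data);
  };
}
```

If a peer repeatedly violates the cap, closing the channel is usually the more robust response than silently dropping messages.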
Here's a high level overview of my problem:
There will be a central computer at an art gallery, and three separate remote sites, say up to a mile away from central. Each site has a musician. The central computer sends a live backing track over the internet to each of the three musicians, who play along to it and are each recorded as a live stream. Each of the three streams is then played back at the gallery, in-sync with the backing track and with the other musicians, as though all the musicians were playing live in the same room. The client has requested that the musicians appear to play PRECISELY in time with each other, i.e. no apparent latency between each musician. The musicians cannot hear each other, they only hear the backing track.
Here's what I see as the technical solution:
Each backing track packet is sent out from the gallery with the current timestamp. As a musician plays and is recorded, the packet currently being recorded is marked with the timestamp of the current backing track packet. When the three audio streams are sent back, they are buffered. Each packet is then played, say, ten seconds after its timestamp. i.e. At 11:00:00 AM, all of the packets marked 10:59:50 AM are played.
Or to think of it another way, each incoming stream is delayed 10 seconds behind real time. This buffering should allow for any network blips. It is also acceptable since there is no apparent latency to the viewers at the gallery, and everything is being played "as-live." We are assuming there is a good quality internet connection to each remote site.
I'm ideally looking for a JavaScript solution to this, as it's what I'm most familiar with (but other solutions would be interesting to know about as well).
Does anyone know of any JavaScript libraries with built-in functionality to allow this sort of buffering?
To be clear, it sounds like it doesn't matter that the musicians play back in time with each other... only that they play in time with the backing track, and within ~10 seconds of each other, correct? Assuming that's the case...
Site-to-Site Connectivity
You can use WebRTC for this, but we'll only be using the data channel. No need for media streams. This is a performance that requires precise timing, and I'm assuming decent quality audio as well. Let's just leave it in PCM and send that over the WebRTC data channels.
Alternatively, you could have a server host Web Socket connections which relay data to the sites.
Audio Recording and Playback
You can use the ScriptProcessorNode to play and record. This gives you raw access to the PCM stream. Just send/receive the bytes via your data channel. If you want to save some bandwidth, you can reduce the floating point samples down to 16-bit integers. Just crank them back up to floats on the receiving end.
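A minimal sketch of that 16-bit packing (the conversion scheme below is one common convention, not the only one; the onaudioprocess wiring is browser-only):

```javascript
// Sketch: pack Float32 PCM samples into Int16 for the data channel,
// and unpack them back to floats on the receiving end.
function floatTo16(float32) {
  const int16 = new Int16Array(float32.length);
  for (let i = 0; i < float32.length; i++) {
    const s = Math.max(-1, Math.min(1, float32[i])); // clamp to [-1, 1]
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
  }
  return int16;
}

function int16ToFloat(int16) {
  const float32 = new Float32Array(int16.length);
  for (let i = 0; i < int16.length; i++) {
    const s = int16[i];
    float32[i] = s < 0 ? s / 0x8000 : s / 0x7FFF;
  }
  return float32;
}

// In the ScriptProcessorNode's onaudioprocess handler (browser only):
// node.onaudioprocess = (e) => {
//   const samples = e.inputBuffer.getChannelData(0);
//   dataChannel.send(floatTo16(samples).buffer);
// };
```

This halves the bandwidth relative to raw 32-bit floats at the cost of some dynamic range, which is generally acceptable for this kind of live relay.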
Synchronization
The main synchronization needs to occur where playback of the backing track and recording occur at the same time. Immediately upon starting your playback, start recording. If you're using the ScriptProcessorNode as mentioned previously, you can actually do both in the same node, guaranteeing sample-accurate synchronization.
On playback, simply buffer all your tracks until you have your desired buffer level, and then play them back simultaneously inside your ScriptProcessorNode. Again, this is sample-accurate.
The only thing you might have to deal with now is clock drift. What's 44.1 kHz to you might actually be 44.099 kHz to me. This can add up over time, but it's generally not something you need to worry about as long as you reset everything once in a while. That is, as long as you're not recording for a whole day or more without stopping, it probably won't be an issue for you.
The recorded packets should be marked with the timestamp of the incoming backing track packet
No, this synchronization should not happen at the network layer. If you're using a reliable transport with WebRTC data channels or Web Socket, you don't have to do anything but start all your streams at byte 0. Don't use timestamps, use sample offsets.
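To illustrate the sample-offset approach (the buffer size and the simple additive mix below are illustrative assumptions, not a prescribed design): over a reliable, ordered channel each stream is just a growing array of samples starting at offset 0, and playback reads every stream at the same offset once enough is buffered.

```javascript
// Sketch: align incoming streams by sample offset rather than timestamps.
function createSyncBuffer(streamCount, bufferSamples) {
  const streams = Array.from({ length: streamCount }, () => []);
  let readOffset = 0;
  return {
    // Append decoded samples for one stream as they arrive
    push(streamIndex, samples) { streams[streamIndex].push(...samples); },
    // Ready when every stream has at least bufferSamples unread samples
    ready() {
      return streams.every((s) => s.length - readOffset >= bufferSamples);
    },
    // Pull one mixed frame of n samples from the same offset in every stream
    read(n) {
      const frame = new Float32Array(n);
      for (const s of streams) {
        for (let i = 0; i < n; i++) frame[i] += s[readOffset + i] || 0;
      }
      readOffset += n;
      return frame;
    }
  };
}
```

In practice read() would be driven from the ScriptProcessorNode callback, pulling exactly one processing quantum at a time so all streams stay sample-locked.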
Does anyone know of any JavaScript libraries with built-in functionality to allow this sort of buffering?
I've actually built a project for doing similar things that allows for sample-accurate internet radio hand-offs from one site to another. It builds up a buffer over time, and then for the hand-off it basically re-syncs to a master clock from the new site. Since the new site is behind the old site, and since we can't bend space/time very easily, I drop out of the buffer a bit and pick up at the new site's master clock. (Not any different if there were a buffer underrun from a single site!) Anyway, I don't know of any other code that does this.
I am currently building a webRTC application that streams audio (the classic server, client, one to many model). Communication and signaling is done through sockets.
The problem I have found is that there is a lot of variability when streaming to smart devices (mainly due to varying processing power), even on a local network.
Hence, I am trying to add functionality that syncs the stream between devices. At a high level, I was thinking of buffering the incoming stream; once all devices are connected, the last peer to connect would share something that indicates where that specific peer's buffer starts, and all peers would play the buffer from that position.
Does this sound possible? Is there a better way to sync up remote streams? If I was to go along this path, how would I go about buffering a remote MediaStream object (or data from a BlobURL) potentially into some form of array which can be used to identify a common starting location between the streams?
Would I potentially use the JavaScript AudioContext API?
I have also looked at NTP and other syncing mechanisms, but I couldn't find how to apply them in the context of a WebRTC application.
Any help, pointers, or direction would be greatly appreciated.
I have access to IBM Watson's Speech-To-Text API which allows streaming via WebSockets, and I'm able to call getUserMedia() to instantiate a microphone device in the browser, but now I need to work out the best way to stream this information in real-time.
I intend for a three-way WebSocket connection from browser <=> my server <=> Watson using my server as a relay for CORS reasons.
I have been looking at WebRTC and various experiments, but all of these seem to be inter-browser peer-to-peer and not client-to-server like I intend.
The only other examples (e.g. RecordRTC) I've come across are seemingly based around recording a WAV or a FLAC file from the MediaStream returned by getUserMedia() and then sending the file to the server, but this itself has two problems:
The user shouldn't have to press a start or a stop button - the app should just be able to listen to the user at all times.
Even if I make a recording and stop it when there's a period of silence, there will be an unreasonable time delay between speaking and getting a response from the server.
I'm making a proof of concept and if possible, I'd like this to work on as many modern browsers as it can - but most importantly, mobile browsers. iOS seems to be out of the question on this one though.
http://caniuse.com/#feat=stream
http://caniuse.com/#search=webrtc
Let's assume I just have this code for now:
// Shimmed with https://raw.githubusercontent.com/webrtc/adapter/master/adapter.js
navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function (mediaStream) {
        // Continuously send raw or compressed microphone data to the server
        // Continuously receive speech-to-text results
    }, function (err) {
        console.error(err);
    });
I was wondering if it is possible to capture video input from a client, like the following: https://apprtc.appspot.com/?r=91737737, and display it on another so that any viewer can see it. My issue is that I do not have a webcam on my second computer and I would like to receive the video using WebRTC. Is it possible to capture on one end and display it on another? If this isn't possible, are WebSockets the best way to do this?
I see no reason it shouldn't be possible, apart from being imperfect due to performance/bandwidth issues.
The most widely supported HTML5 solution at the moment would be getUserMedia, which is available in Chrome (consider getUserMedia.js for broader support for camera input, although I haven't used it).
Scenario
We will have a capturer, a server that broadcasts the stream and the watchers that receive the final stream.
Plan
Capture phase
Use getUserMedia to get data from the camera
Draw it on a canvas (maybe you could skip this)
Post the frame as image data over WebSockets (e.g. via socket.io for broader support) to the server (e.g. Node.js).
Broadcast phase
Receive the image data and just broadcast it to the subscribed watchers
Watch phase
The watcher will have a websocket connection with the server
On every new frame received from the server it will have to draw the received frame to a canvas
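The phases above can be sketched as follows. The broadcast phase is written transport-agnostically so it can be tested; the capture and watch wiring is browser-only and shown in comments (the 'frame' event name, JPEG quality, and FPS are arbitrary placeholder choices):

```javascript
// Sketch: broadcast phase. The server keeps a set of watcher send
// functions and forwards every frame it receives from the capturer.
function createBroadcaster() {
  const watchers = new Set();
  return {
    // Register a watcher; returns a function that unsubscribes it
    addWatcher(send) {
      watchers.add(send);
      return () => watchers.delete(send);
    },
    broadcast(frame) {
      for (const send of watchers) send(frame);
    }
  };
}

// Capture phase (browser only), e.g. with socket.io:
// navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
//   const video = document.createElement('video');
//   video.srcObject = stream;
//   video.play();
//   const canvas = document.createElement('canvas');
//   const ctx = canvas.getContext('2d');
//   setInterval(() => {
//     canvas.width = video.videoWidth;
//     canvas.height = video.videoHeight;
//     ctx.drawImage(video, 0, 0);
//     socket.emit('frame', canvas.toDataURL('image/jpeg', 0.6));
//   }, 1000 / 10); // ~10 FPS
// });

// Watch phase (browser only): draw each received frame onto a canvas
// socket.on('frame', (dataUrl) => {
//   const img = new Image();
//   img.onload = () => ctx.drawImage(img, 0, 0);
//   img.src = dataUrl;
// });
```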
Considerations
You should take into account that performance of the network will affect the playback.
You could enforce an FPS rate on the client side to avoid jittery playback speed.
A buffer pool would be nice if it fits your case for smoother playback.
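The FPS cap mentioned above could be a small client-side limiter (a sketch; the rate and clock injection are my own assumptions for testability):

```javascript
// Sketch: allow a frame through only if enough time has passed
// since the last accepted frame.
function createFrameLimiter(fps, now = Date.now) {
  const interval = 1000 / fps;
  let last = -Infinity;
  return () => {
    const t = now();
    if (t - last >= interval) {
      last = t;
      return true; // accept and draw this frame
    }
    return false;  // too soon: skip it
  };
}

// Usage on the watcher side (browser only):
// const allowFrame = createFrameLimiter(10);
// socket.on('frame', (dataUrl) => { if (allowFrame()) drawFrame(dataUrl); });
```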
Future
You could use the PeerConnection and MediaSource APIs when they become widely available, since this is exactly what they are made for, although that will probably increase CPU usage depending on the browser's performance.