I'm developing a collaborative audio recording platform for musicians (something like a cloud DAW married with GitHub).
In a nutshell, a session (song) is made up of a series of audio tracks, encoded in AAC and played through HTML5 <audio> elements. Each track is connected to the Web Audio API through a MediaElementAudioSourceNode and routed through a series of nodes (gain and pan, at the moment) to the destination. So far so good.
I am able to play them in sync, pause, stop and seek with no problems at all, and have successfully implemented the usual mute and solo functionality of a typical DAW, as well as waveform visualization and navigation. This is the playback part.
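For reference, the per-track routing looks roughly like this (a simplified sketch; the variable names and the choice of StereoPannerNode are illustrative, not the exact code):

    const audioCtx = new AudioContext();

    // One routing chain per track: <audio> -> source -> gain -> pan -> destination.
    function connectTrack(audioEl) {
      const source = audioCtx.createMediaElementSource(audioEl); // MediaElementAudioSourceNode
      const gain = audioCtx.createGain();                        // per-track volume / mute / solo
      const pan = audioCtx.createStereoPanner();                 // per-track pan
      source.connect(gain).connect(pan).connect(audioCtx.destination);
      return { source, gain, pan };
    }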
As for the recording part, I connected the output from getUserMedia() to a MediaStreamAudioSourceNode, which is then routed to a ScriptProcessorNode that writes the recorded buffer to an array, using a web worker.
When the recording process ends, the recorded buffer is written into a PCM WAV file and uploaded to the server, but at the same time it is hooked up to an <audio> element for immediate playback (otherwise I would have to wait for the WAV file to finish uploading before it became available).
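The recording chain, roughly (a simplified sketch reusing audioCtx from the sketch above; the worker protocol is only hinted at and recorderWorker is a placeholder name):

    // getUserMedia -> MediaStreamAudioSourceNode -> ScriptProcessorNode -> worker.
    const recorderWorker = new Worker('recorderWorker.js'); // accumulates PCM, builds the WAV

    navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
      const micSource = audioCtx.createMediaStreamSource(stream);
      const processor = audioCtx.createScriptProcessor(4096, 1, 1);

      processor.onaudioprocess = (e) => {
        // Copy the block (the underlying buffer is reused) and hand it to the worker.
        const block = new Float32Array(e.inputBuffer.getChannelData(0));
        recorderWorker.postMessage({ command: 'record', buffer: block }, [block.buffer]);
      };

      const silent = audioCtx.createGain();
      silent.gain.value = 0;                // keep the processor running without monitoring the mic
      micSource.connect(processor);
      processor.connect(silent);
      silent.connect(audioCtx.destination);
    });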
Here is the problem: I can play the recorded track in perfect sync with the previous ones if I play them from the beginning, but I can't seek properly. If I change the currentTime property of the newly recorded track, everything becomes messy and terribly out of sync. To be clear, this happens only when the "local" track is added; the other tracks behave just fine when I change their position.
Does anyone have any idea of what may be causing this? Is there any other useful information I can provide?
Thank you in advance.
Fundamentally, there's no guarantee that media elements will sync properly. If you really want audio to be in sync, you'll have to load the audio files into AudioBuffers and play them with AudioBufferSourceNodes.
You'll find that in some relatively straightforward circumstances you can get them to sync, but it won't necessarily work across devices and OSes, and once you start trying to seek, as you found, it will fall apart. The way <audio> wraps downloading, decoding and playing into one step doesn't lend itself to syncing.
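A rough sketch of that approach, including seeking (the offset handling and variable names here are only illustrative):

    const ctx = new AudioContext();

    // Decode each track fully up front instead of streaming through <audio>.
    async function loadTrack(url) {
      const response = await fetch(url);
      const data = await response.arrayBuffer();
      return ctx.decodeAudioData(data); // resolves to an AudioBuffer
    }

    let playing = [];

    // Play (or seek) every track from the same offset on the same clock.
    function playAllFrom(buffers, offsetSeconds) {
      playing.forEach((node) => node.stop()); // source nodes are single-use
      const startAt = ctx.currentTime + 0.1;  // small scheduling margin, shared by all tracks
      playing = buffers.map((buffer) => {
        const node = ctx.createBufferSource();
        node.buffer = buffer;
        node.connect(ctx.destination);
        node.start(startAt, offsetSeconds);   // same start time and offset -> sample-accurate sync
        return node;
      });
    }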
I have a video that I'm splitting into its individual video and audio streams and then DASHing with MP4Box. I'm playing them with Media Source Extensions, appending byte ranges from the MPD files to the video/audio source buffers. It's all working nicely, but one video I have has audio that is delayed by about 1.1 seconds. I couldn't get it to sync up, and the audio would always play ahead of the video.
Currently I'm trying to set the audio source buffer's timestampOffset to 1.1, and that gets it to sync up perfectly. The issue I'm running into now, though, is that the video refuses to play unless the audio source buffer has data, so the video stalls right away. If I skip a few seconds in (past the offset), everything works because both video and audio are buffered.
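Roughly what I have now (a simplified sketch; videoElement, the codec strings, the videoSb/audioSb names and the appendSegments() helper are placeholders):

    const mediaSource = new MediaSource();
    videoElement.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', () => {
      const videoSb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.640028"');
      const audioSb = mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');

      // Shifting the audio by the measured delay fixes the sync...
      audioSb.timestampOffset = 1.1;

      // ...but it also means the audio buffer has nothing for 0 to 1.1 s,
      // which is exactly where playback stalls.
      appendSegments(videoSb, audioSb); // fetches MPD byte ranges and appendBuffer()s them
    });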
Is there a way to get around this? Either make it play without the audio loaded, somehow fill the audio buffer with silence (can I generate something with the Web Audio API?), add silence to the audio file in ffmpeg, or something else?
I first tried adding a delay in ffmpeg with ffmpeg -i video.mkv -map 0:a:0 -acodec aac -af "adelay=1.1s:all=true" out.aac, but nothing seemed to change. Was I doing something wrong? Is there a better way to demux the audio while keeping exactly the same timing it had in the container with the video, so that I don't have to worry about delays/offsets at all?
I managed to fix it with ffmpeg using -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0", which is mentioned in this article: https://videoblerg.wordpress.com/2017/11/10/ffmpeg-and-how-to-use-it-wrong/
I am trying to create a web event where multiple broadcasters live stream and viewers watch them live. Each broadcaster has a turn, which is selected by the event host. So basically there are three user types in an event: (1) one event host (who is also a broadcaster), (2) multiple broadcasters (event speakers) and (3) multiple viewers.
I also need to record the event stream from start to end with the host's and the current speaker's cameras. Note that the host's video and audio must be recorded throughout the event; if video is not possible, then at least audio.
I successfully integrated this library: https://github.com/muaz-khan/RTCMultiConnection
With this library I am able to show the cameras of the host and the event's broadcasters, and viewers are able to watch them live without sharing a camera.
For recording, I am using RecordRTC to record each event speaker's video (recording starts when their turn starts and stops when it ends) and upload it to the server; then, after each speaker's turn finishes, I merge all the recorded videos using ffmpeg.
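The per-turn recording I have now is roughly this (simplified; uploadToServer() stands in for my actual upload code):

    let recorder = null;

    // Called when the host gives a speaker their turn.
    function onTurnStarted(speakerStream) {
      recorder = RecordRTC(speakerStream, { type: 'video', mimeType: 'video/webm' });
      recorder.startRecording();
    }

    // Called when the turn ends: one video/webm blob per speaker turn.
    function onTurnEnded() {
      recorder.stopRecording(() => {
        uploadToServer(recorder.getBlob()); // later merged on the server with ffmpeg
      });
    }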
But I want to remove the ffmpeg merging step and record the full event, with the host's video and the selected speaker's video one after another, as a single recorded video.
For that, I stored all the video/webm blobs and want to merge them into a single video file, but I have found no solution.
I also tried the webpage canvas recording solution, but it does not record the video elements.
Canvas recording demo: https://www.webrtc-experiment.com/RecordRTC/Canvas-Recording/webpage-recording.html
Can anybody help me with either recording all the webm blobs as a single video file, or using a canvas to record the streamed videos as a single recording?
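For what it's worth, the canvas route usually means compositing the <video> elements yourself and recording the canvas stream; a minimal, untested sketch (hostVideo, speakerVideo, hostStream and uploadToServer() are assumptions about the setup):

    // Draw the host video and the active speaker's video onto one canvas...
    const canvas = document.createElement('canvas');
    canvas.width = 1280;
    canvas.height = 720;
    const ctx2d = canvas.getContext('2d');

    function drawFrame() {
      ctx2d.drawImage(hostVideo, 0, 0, 640, 720);      // host on the left
      ctx2d.drawImage(speakerVideo, 640, 0, 640, 720); // current speaker on the right
      requestAnimationFrame(drawFrame);
    }
    drawFrame();

    // ...then record the canvas stream plus the host's audio as one file.
    const mixed = new MediaStream([
      ...canvas.captureStream(30).getVideoTracks(),
      ...hostStream.getAudioTracks(),
    ]);

    const chunks = [];
    const recorder = new MediaRecorder(mixed, { mimeType: 'video/webm' });
    recorder.ondataavailable = (e) => chunks.push(e.data);
    recorder.onstop = () => uploadToServer(new Blob(chunks, { type: 'video/webm' }));
    recorder.start(1000); // gather data every second for the whole event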
I'm doing a project where I have a server offering up audio via WebSocket to the browser. The browser is then using WebSocket to capture the audio blocks and play them out via the Web Audio API.
I create an AudioContext, which has a predefined sample rate of X for playing back this audio on the local hardware. Fair enough. However, the sample rate of the audio I am getting from the server is Y (which is not necessarily the same value as X, but I do know what it is via the WebSocket protocol I have defined).
After doing a lot of digging I can't seem to see a way in which the input audio would be resampled via Web Audio itself. I can't imagine this is an uncommon problem but since I'm not quite an expert JavaScript coder I have yet to discern the solution.
Just assigning the samples via getChannelData only really works when X == Y; otherwise the audio skips and plays at the wrong pitch. Using decodeAudioData seems to give me a place to do the resampling, but I don't want to have to write my own resampler.
Thoughts?
I'm confused: do you actually need to use decodeAudioData?
If you just have raw PCM data transferred, you simply create an AudioBuffer in your local AudioContext at the served buffer's sample rate (i.e. you set the third parameter of createBuffer, http://webaudio.github.io/web-audio-api/#widl-AudioContext-createBuffer-AudioBuffer-unsigned-long-numberOfChannels-unsigned-long-length-float-sampleRate, based on the data you're getting), and call start() at an appropriate time (with a playbackRate of 1). You WILL get minor glitches at block boundaries, but only minor ones.
There's an active discussion on this; watch this issue: https://github.com/WebAudio/web-audio-api/issues/118.
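Something like this (a sketch; serverSampleRate, the mono assumption and the queueing logic are illustrative):

    const ctx = new AudioContext();   // runs at the hardware rate (your X)
    let nextStartTime = 0;

    // Called for each block of raw PCM received over the WebSocket.
    function playBlock(float32Samples, serverSampleRate) { // serverSampleRate is your Y
      const buffer = ctx.createBuffer(1, float32Samples.length, serverSampleRate);
      buffer.copyToChannel(float32Samples, 0);

      const node = ctx.createBufferSource();
      node.buffer = buffer;
      node.connect(ctx.destination);

      // Queue the blocks back to back on the context clock; the context resamples Y -> X.
      nextStartTime = Math.max(nextStartTime, ctx.currentTime);
      node.start(nextStartTime);
      nextStartTime += buffer.duration; // duration = length / serverSampleRate
    }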
I know there are several video streaming solutions out there; however, I am looking for one that would allow me to arrange video content on some sort of "timeline", or allocate video files to particular time slots, similar to how television works.
The user would see whatever video content had been edited, prepared ahead of time and "dropped in" for that specific time slot.
You can use Wowza with the StreamPublisher module. You will then have a schedule (which you can generate however you please) that controls the playback. For a simple example, check out streamNerd and its simple playlist generator.
I want to extract audio from a video file using JavaScript. Is it possible to produce an MP3 file (or any other format) so that I can play it in an HTML5 audio tag? If yes, how? Or what API should I use?
JavaScript runs on the client side and is usually meant for handling user interactions. Long computations cause the browser to halt the script anyway and can make the page unresponsive; besides, it is not suitable for video encoding/decoding.
You could use a Java applet for video processing/audio extraction, so that processing is still offloaded to the client. Processing on the server is expensive in terms of memory and processor cycles, so it is not recommended.
So, this is NOT an optimal solution; it is sort of terrible, actually, since you will be forcing users to download a video without seeing it. What you could do is play the file as HTML5 video and just hide the video display, then show controls as if it were just audio. This is like playing a video but only listening to the audio instead of watching it.
Check out jPlayer: http://jplayer.org/
It will let you play an MP4 in basically any browser by telling it to use Flash and then falling back to HTML5 when you need it to. Then you can just style things however you want using CSS/HTML/images.
You might want to try a small test first to see whether the display:none; CSS style also prevents the video's audio from playing. If that is the case, you could use height:0; instead.
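Something along these lines (a quick sketch; the file path and the playButton/pauseButton controls are placeholders):

    // Load the video, hide the picture, and drive it from your own "audio player" UI.
    const video = document.createElement('video');
    video.src = 'song.mp4';       // placeholder path
    video.style.width = '0';
    video.style.height = '0';     // or test display:none, as suggested above
    document.body.appendChild(video);

    playButton.addEventListener('click', () => video.play());
    pauseButton.addEventListener('click', () => video.pause());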
Once again, this is sub-optimal since you are taking up the user's bandwidth. There are other solutions out there that could potentially strip out just the audio for you in a streaming environment.