HTML5 web audio - long recordings being truncated - javascript

Does anyone have any thoughts as to why a recording would get truncated when using HTML5 audio? It only seems to happen when the recording is around a minute and a half long; the saved audio usually gets cut off at around 30 seconds (or less). Shorter recordings do not get truncated.
We're using Matt Diamond's recorder.js functionality. There can be more than one HTML5 player dynamically created on a page (up to 5), but the number of players doesn't seem to be part of the issue, as the truncation happens even when there are only two players on the page.
The buffer length is 4096. The audio is saved to a file on disk as a WAV file. The user can hear the full recording when they play it back through the HTML5 player if they're still on the same page, but the file that is saved to disk is truncated.
Any ideas on why this happens or how to overcome it would be greatly appreciated!
I forgot to mention this is in an ASP.NET web application. We use SoX in the code-behind to normalize the audio file prior to saving it to disk.
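For context, the capture path follows the standard recorder.js pattern, roughly like this (a minimal sketch; the names are from Matt Diamond's recorder.js, and the stop/export part is abbreviated):

    var audioContext = new (window.AudioContext || window.webkitAudioContext)();

    navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
        var source = audioContext.createMediaStreamSource(stream);
        var rec = new Recorder(source, { bufferLen: 4096 }); // the buffer length mentioned above
        rec.record();

        // Later, when the user stops recording:
        // rec.stop();
        // rec.exportWAV(function (wavBlob) {
        //     // POST wavBlob to the server for normalization and saving
        // });
    });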

Related

How to play video Media Source Extensions when the audio start is delayed? Or how to fix it with ffmpeg?

I have a video whose individual video/audio streams I'm splitting out and then DASHing with MP4Box, and I'm playing them with Media Source Extensions, appending byte ranges (taken from the MPD files) to the video/audio source buffers. It's all working nicely, but one video I have has audio that is delayed by about 1.1 seconds. I couldn't get it to sync up, and the audio would always play ahead of the video.
Currently I'm trying to set audioBuffer.timestampOffset = 1.1, and that gets it to sync up perfectly. The issue I'm running into now, though, is that the video refuses to play unless the audio source buffer has data, so the video stalls right away. If I skip a few seconds in (past the offset), everything works because both video and audio are buffered.
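For reference, the offset is applied on the audio SourceBuffer like this (a sketch; mediaSource and the codec strings are stand-ins for whatever the MPD actually specifies):

    // Shift all appended audio 1.1 s later on the media timeline.
    var audioBuffer = mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
    audioBuffer.timestampOffset = 1.1;

    // The video buffer is appended as usual, starting at 0, so its
    // first ~1.1 s of playback has no audio data to go with it.
    var videoBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f"');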
Is there a way to get around this? Could I make it play without the audio loaded, or somehow fill the audio buffer with silence (can I generate something with the Web Audio API)? Could I add silence to the audio file in ffmpeg? Something else?
I first tried adding a delay in ffmpeg with ffmpeg -i video.mkv -map 0:a:0 -acodec aac -af "adelay=1.1s:all=true" out.aac but nothing seemed to change. Was I doing something wrong? Is there a better way to demux audio while keeping the exact same timing as when it was in the container with the video so I don't have to worry about delays/offsets at all?
I managed to fix it with ffmpeg using -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" which was mentioned in this article https://videoblerg.wordpress.com/2017/11/10/ffmpeg-and-how-to-use-it-wrong/

Can I speed up decodeAudioData sampling from Web Audio API

I'm using the Web Audio API's decodeAudioData() method to decode audio files (45 min duration, about 40 MB, MP3 format) stored on my Ubuntu web server. When a user lands on a certain page I need to preload the audio file, but it takes too long, about 1 min 10 s. Is it possible to do something to speed up this process? I'm getting an ArrayBuffer through XMLHttpRequest and then decoding it, but it is very, very slow.
EDIT: I was totally wrong. After taking measurements I realised that the problem is downloading the file. Decoding takes only about 10 seconds, which is fine.
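For anyone else hitting this, the two phases are easy to time separately (a minimal sketch; the URL is a placeholder):

    var ctx = new (window.AudioContext || window.webkitAudioContext)();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/audio/episode.mp3', true); // placeholder URL
    xhr.responseType = 'arraybuffer';
    var t0 = performance.now();
    xhr.onload = function () {
        var t1 = performance.now();
        console.log('download: ' + Math.round(t1 - t0) + ' ms');
        ctx.decodeAudioData(xhr.response, function (buffer) {
            console.log('decode: ' + Math.round(performance.now() - t1) + ' ms');
        });
    };
    xhr.send();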

Stream part of the video to the client

Given a Windows Server backend, is there a way to implement a pure JavaScript/HTML5 client that would be able to play only a designated part of a video file (e.g. from the 10th second to the 15th of a 2-hour video)?
From what I know, the standard HTML5 video tag will download the entire file, which is not suitable for my situation.
Streaming solutions on the server would probably be an answer, but are there any that would work with a pure JavaScript/HTML client? Thanks.
To do this you should encode your video into one of the segmented/fragmented formats, such as MPEG-DASH or Apple HLS. The result will be a playlist file and one or more media files containing 2-to-10-second fragments of your (long) video file. For DASH you will normally have one fragmented MP4 file containing 2-second fragments of video; the playlist file will tell your player which parts of the file to download for the time you wish to play. For this to work your web server needs to support HTTP Range headers (which most do).
For HLS you will normally end up with multiple 10-second files. The playlist file will tell the player which file to download for the time to play.
Here's how to build an HTML5 player to play DASH streams (a minimal dash.js sketch follows the links):
http://blogs.msdn.com/b/interoperability/archive/2014/01/03/mpeg-dash-tutorial-embedding-an-adaptive-streaming-video-within-your-html5-application.aspx
http://www-itec.uni-klu.ac.at/dash/?page_id=746
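With dash.js, for example, the player setup is only a few lines (a sketch; the MPD URL is a placeholder):

    // Minimal dash.js setup: hand the player a <video> element and the MPD playlist.
    var video = document.querySelector('video');
    var player = dashjs.MediaPlayer().create();
    player.initialize(video, 'http://example.com/video/manifest.mpd', true); // true = autoplay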
Besides more complex methods like HLS or MPEG-DASH, you can consider pseudo-streaming (progressive download). Its seeking capability, supported by a number of media servers, will allow you to start the MP4 video from any moment. Using JavaScript you should be able to set up play and stop where you need them, as in the sketch below (though it's up to you to deal with how different browsers handle playback in the HTML5 video element).
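Something along these lines plays only the 10th-to-15th-second window (a sketch; it assumes the server honours Range requests so that seeking works, and the file URL is a placeholder):

    var video = document.querySelector('video');
    video.src = 'http://example.com/long-video.mp4'; // placeholder URL

    video.addEventListener('loadedmetadata', function () {
        video.currentTime = 10; // the browser fetches this region via a Range request
        video.play();
    });

    video.addEventListener('timeupdate', function () {
        if (video.currentTime >= 15) {
            video.pause(); // stop at the 15th second
        }
    });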

Seeking HTML5 audio element causes delay (breaks sync)

I'm developing a collaborative audio recording platform for musicians (something like a cloud DAW married with GitHub).
In a nutshell, a session (song) is made up of a series of audio tracks, encoded in AAC and played through HTML5 <audio> elements. Each track is connected to the Web Audio API through a MediaElementAudioSourceNode and routed through a series of nodes (gain and pan, at the moment) to the destination. So far so good.
I am able to play them in sync, pause, stop and seek with no problems at all, and have successfully implemented the usual mute and solo functionality of a common DAW, as well as waveform visualization and navigation. That is the playback part.
As for the recording part, I connected the output from getUserMedia() to a MediaStreamAudioSourceNode, which is then routed to a ScriptProcessorNode that writes the recorded buffer to an array, using a web worker.
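Roughly, the recording chain looks like this (a simplified sketch; the buffer size is an assumption and the hand-off to the web worker is omitted):

    var ctx = new AudioContext();
    var recorded = []; // Float32Array chunks, later assembled into the PCM wave file

    navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
        var source = ctx.createMediaStreamSource(stream);
        var processor = ctx.createScriptProcessor(4096, 1, 1); // assumed buffer size, mono

        processor.onaudioprocess = function (e) {
            // Copy the data: the underlying buffer is reused between callbacks.
            recorded.push(new Float32Array(e.inputBuffer.getChannelData(0)));
        };

        source.connect(processor);
        processor.connect(ctx.destination); // ScriptProcessorNode must be connected to fire
    });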
When the recording process ends, the recorded buffer is written into a PCM wave file and uploaded to the server, but at the same time it is hooked up to an <audio> element for immediate playback (otherwise I would have to wait for the WAV file to be uploaded to the server before it was available).
Here is the problem: I can play the recorded track in perfect sync with the previous ones if I play them from the beginning, but I can't seek properly. If I change the currentTime property of the newly recorded track, it becomes messy and terribly out of sync. I repeat that this happens only with the locally recorded track; the other tracks behave just fine when I change their position.
Does anyone have any idea of what may be causing this? Is there any other useful information I can provide?
Thank you in advance.
Fundamentally, there's no guarantee that <audio> elements will sync properly. If you really want audio to be in sync, you'll have to load the audio files into AudioBuffers and play them with AudioBufferSourceNodes.
You'll find that in some relatively straightforward circumstances you can get them to sync, but it won't necessarily work across devices and OSes, and once you start trying to seek, as you found, it will fall apart. The way <audio> wraps downloading, decoding and playing into one step doesn't lend itself to precise syncing.
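A minimal sketch of that approach: decode every track into an AudioBuffer, then start all the sources against the same AudioContext clock (the URLs and the 30-second seek offset are placeholders):

    var ctx = new AudioContext();

    function loadTrack(url) {
        return fetch(url)
            .then(function (res) { return res.arrayBuffer(); })
            .then(function (data) { return ctx.decodeAudioData(data); });
    }

    Promise.all([loadTrack('/tracks/drums.m4a'), loadTrack('/tracks/bass.m4a')])
        .then(function (buffers) {
            var startAt = ctx.currentTime + 0.1; // small scheduling margin
            var seekOffset = 30;                 // "seek": start 30 s into every track
            buffers.forEach(function (buffer) {
                var src = ctx.createBufferSource();
                src.buffer = buffer;
                src.connect(ctx.destination);
                src.start(startAt, seekOffset); // same clock time + same offset = in sync
            });
        });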

Capture Audio Input with flash or html5

I am trying to capture the microphone and send the recording to my server. I tried this method here, but it records only one big WAV file and the upload can be slow sometimes.
Is there a way to capture the voice and compress it on the client side?
The best method would be to send the recording while it is being recorded, but I have no idea if this is possible. (It works for YouTube live webcam recording, so it must work for audio only too.)
Hey, check out this post where I replied to someone with a similar question:
How do I embed a Flash audio recorder in my site
I don't know about client-side compression (I have looked into it before and couldn't find anything), but I know you can severely reduce the size of the file by limiting the recording rate via these numbers here, where, if I recall correctly, 16 is 16 kHz recording:
recorder = new MicRecorder(wavencoder,null,50,16);
Also, sending it to the server is not that hard; just look up how to POST data, because the WAV file is essentially binary data (see the sketch below).
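A sketch of the upload ('/upload' is a hypothetical endpoint; wavBlob stands for the Blob your recorder produces):

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/upload', true); // hypothetical endpoint
    xhr.setRequestHeader('Content-Type', 'audio/wav');
    xhr.send(wavBlob);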
You can compress the file on the client side using libmp3lame.js: https://github.com/akrennmair/libmp3lame-js
There is already a GitHub project that uses this library to record audio and save it in MP3 format directly in the browser:
https://github.com/nusofthq/Recordmp3js
