Peak detection with the Web Audio API? - javascript

TL;DR - I want to use JavaScript to detect every click in a drummer's click track (an mp3 containing only beats) and then replace these with .wav samples of a different click sound. The drummer's click track is not in constant time, so I can't simply detect the BPM and place replacement samples from that.
I have a task I'd like to achieve with JavaScript and the Web Audio API, but I'm not sure if it's actually possible using either of these...
Basically, I regularly use recorded backing tracks for songs, and I replace the default click track (the metronome track that a drummer plays along to) with custom click samples (one .wav sample for the first beat of a bar and another sample for the remaining beats in any given bar). Annoyingly, many of these drummer click tracks are not in constant time, so they do not have a constant BPM from start to finish.
I want to detect every click in a click track (every peak in the waveform) and then replace these with the .wav samples, and download the final file as an MP3. Is this possible?

There's no built-in way to do this in Web Audio. You will have to implement your own peak detector using a ScriptProcessorNode or an AudioWorkletNode. Once you have the location of each peak, you can schedule your replacement clicks to start playing at those times. With an OfflineAudioContext, you can capture the resulting PCM output. To get a compressed version (probably not MP3), I think you need to use a MediaRecorder.
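As a rough sketch of the overall flow (not the only way to do it): since the whole file is available up front, you can also skip the node-based detector and scan the decoded PCM directly. Here, clickTrackBuffer and replacementBuffer are AudioBuffers assumed to come from decodeAudioData(), and the threshold/hold-off values are guesses that would need tuning for a real click track:

    // Scan channel 0 for samples above a threshold; after each hit, skip a
    // short hold-off window so one click only produces one detected peak.
    function detectPeaks(buffer, threshold = 0.5, holdOffSeconds = 0.1) {
      const data = buffer.getChannelData(0);
      const holdOff = Math.floor(holdOffSeconds * buffer.sampleRate);
      const peaks = [];
      for (let i = 0; i < data.length; i++) {
        if (Math.abs(data[i]) >= threshold) {
          peaks.push(i / buffer.sampleRate); // click time in seconds
          i += holdOff;                      // skip the rest of this click
        }
      }
      return peaks;
    }

    // Render a new track with a replacement click at each detected peak.
    async function renderReplacedTrack(clickTrackBuffer, replacementBuffer) {
      const ctx = new OfflineAudioContext(
        1, clickTrackBuffer.length, clickTrackBuffer.sampleRate
      );
      for (const t of detectPeaks(clickTrackBuffer)) {
        const src = ctx.createBufferSource();
        src.buffer = replacementBuffer;
        src.connect(ctx.destination);
        src.start(t);
      }
      return ctx.startRendering(); // resolves with the rendered PCM AudioBuffer
    }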

Related

How to make web app like "Epic Sax Gandalf" using JavaScript?

I want to create an application that, when launched on different devices, will display the same content (music, photo or video) at the same time.
Here is a simple example.
And a real-life example. :)
My first idea was based on the machine's local time:
timestamp = new Date().getTime()
If the last four digits of the timestamp are "0000" (i.e. the timestamp is a multiple of 10,000 ms), play the music and animation:
music = 10s,
animation = 10s
and repeat this every 10 seconds.
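A runnable sketch of that idea, where playMusicAndAnimation() is a hypothetical helper that starts the 10-second media:

    // Fire on every 10-second boundary of the local clock. Each device
    // computes the boundary from its own Date.now(), which is exactly the
    // weakness noted below.
    function scheduleNextCycle() {
      const msUntilBoundary = 10000 - (Date.now() % 10000);
      setTimeout(() => {
        playMusicAndAnimation(); // hypothetical: plays the 10 s music + animation
        scheduleNextCycle();
      }, msUntilBoundary);
    }
    scheduleNextCycle();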
I know, however, that this solution may not work and the content will still be unsynchronized.
So, does anyone know how to achieve the effect I'm talking about using JavaScript?
I actually had the same idea and implemented a little proof of concept for it.
You can try it out like this:
Open the deployed site (either in two browsers/windows or even on different devices)
Choose the same unique channel-id for all your opened sites
Click "Start"
One window should have a heading with "Leader". The others should be "Follower"s. By clicking on the Video-id field you can paste the video id of any YouTube video (or choose from a small list of recommended ones).
Click "Play" on each window
Wait a bit - it can take up to 1 minute until the videos synchronize
Each follower has a "System time offset". On the same device it should be nearly 0 ms. This is the amount by which the system time (Date.now()) in the browser differs from the system time in the Leader window.
On the top left of each video you can see a time that changes every few seconds and should be under 20 ms (once the videos are synchronized). This is the amount by which the video feed differs from its optimal position relative to the synchronized system time.
(I would love to know whether it works for you too. My Pusher deployment is EU-based, so problems with the increased latency could occur...)
How does it work?
The synchronisation happens in two steps:
Step 1.: Synchronizing the system times
I basically implemented the NTP (Network Time Protocol) algorithm in JS, using WebSockets or Pusher JS as the channel of communication between each Follower client and the Leader client. Look under "Clock synchronization algorithm" in the Wikipedia article for more information.
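The core of that algorithm is a four-timestamp exchange. A minimal sketch, where requestLeaderTime() is a hypothetical helper that asks the Leader over the channel and resolves with its receive/reply timestamps:

    // Standard NTP-style clock-offset estimation against the Leader.
    async function estimateOffset() {
      const t0 = Date.now();                        // client send time
      const { t1, t2 } = await requestLeaderTime(); // leader receive / reply times
      const t3 = Date.now();                        // client receive time
      const offset = ((t1 - t0) + (t2 - t3)) / 2;   // estimated clock offset
      const roundTrip = (t3 - t0) - (t2 - t1);      // network delay, for filtering
      return { offset, roundTrip };
    }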
Step 2.: Synchronizing the video feed to the "reference time"
At the currentTime (= synchronized system time) we want the currentVideoTime to be at currentTime % videoLength. Because the currentTime (or system time) has been synchronized between the clients in Step 1, and the videoLength is obviously the same in all the clients (they are supposed to play the same video), the currentVideoTime is the same too.
The big problem is that if I just started the video at the correct time on all clients (via setTimeout()), they probably wouldn't play at the same time, because one system may have network problems and still be buffering the video, or another program may demand the processing power of the OS at that moment. The time between calling the video player's start function and the video actually starting also differs from device to device.
I'm solving this by checking every second whether the video is at the right position (= currentTime % videoLength). If the difference from the right position is bigger than 20 ms, I stop the video, skip it to the position where it should be in 5 s plus the amount it was late before, and start it again.
The code is a bit more sophisticated (and complicated) but this is the general idea.
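A simplified sketch of that correction loop. Here player stands in for whatever video-player wrapper is in use (its pause/play/currentTime API is an assumption), offset comes from Step 1, and videoLengthMs is the video length in milliseconds:

    const TOLERANCE_MS = 20;     // allowed drift before correcting
    const SEEK_AHEAD_MS = 5000;  // restart the video 5 s in the future

    setInterval(() => {
      const syncedNow = Date.now() + offset;       // synchronized system time
      const targetMs = syncedNow % videoLengthMs;  // where the video should be
      const actualMs = player.currentTime * 1000;  // where the video actually is
      const lateBy = targetMs - actualMs;          // positive when the video lags
      if (Math.abs(lateBy) > TOLERANCE_MS) {
        // Seek to where the video should be 5 s from now, compensating for
        // how late it was before, then resume playback at that moment.
        player.pause();
        player.currentTime =
          ((targetMs + SEEK_AHEAD_MS + lateBy) % videoLengthMs) / 1000;
        setTimeout(() => player.play(), SEEK_AHEAD_MS);
      }
    }, 1000);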
sync-difference-estimator
synchronized-player

HTML5: how to record a filtered canvas

I am going to use my webcam as a source and show my view on a webpage, then I will manipulate the view (black-and-white, fisheye, etc.) and show that manipulated video in my canvas.
An example ( http://photobooth.orange-coding.net/ )
OK, everything is cool for now. I can capture that manipulated canvas as an image.
Is there any way to record that manipulated canvas as video?
I also found an example (https://www.webrtc-experiment.com/ffmpeg/audio-plus-canvas-recording.html)
But when I tried that code in my webcam recording project, it just records my source view (not the black-and-white version). It does not apply my effect to the recording.
Any idea or is it possible ?
Thank you.
Recording video in the browser is like getting blood out of a stone. If you hit it hard and long enough against your head, there will be blood, eventually. But it's a painful experience, and it will certainly give you a headache!
There is currently no way of recording video in real time from a canvas element. There is a proposed MediaStream Recording API which includes video (but excludes the canvas part). Currently only audio is supported, and only in FF.
You can grab an image as often as possible and use it as a sequence, but there are several issues you will run into:
You will not get full frame rate if you choose to grab the image as JPEG or PNG (and PNG is not very useful with video, as there is no alpha)
If you choose to grab the raw data you may achieve full frame rate (note that frame rate for video is typically never above 30 FPS), but you will fill up memory very quickly, and you will need a point in time to process the frames into something that can be transferred to a server or downloaded (see the sketch after this list). JavaScript is single-threaded, and no matter how you twist and turn this stage, you will get gaps in the video when this processing is invoked (unless you have a lot of memory and can wait until the end - but that is not good for a publicly available solution, if that's the goal).
You will have no proper sync mechanism like a time-code (to sync by), so the video will be variable, like the movies from Chaplin's day. You can get close by recording high-resolution timestamps, but not accurately enough, as you have no way of getting the stamp at the very moment you grab the image.
No sound is recorded; and if you do record audio in FF using the API, you have no way to properly sync the audio with the video anyway (which already has its own problems, ref. above)
Up until now we are still at single-frame sequences. If you record one minute @ 30 FPS you have 60 x 30 frames, or 1800 pictures/buffers per minute. If you record in HD720 and choose to grab the raw buffer (the most realistic option here) you will end up with 1800 x 1280 x 720 x 4 (RGBA) bytes per minute, or 6,635,520,000 bytes, i.e. 6.18 GB per minute - and that's just the raw size. Even if you lower the resolution to, let's say, 720x480, you'll end up with 2.32 GB/min.
You can alternatively process the frames into a video format. It's possible, but currently there are next to no solutions for this (there has been one, but it had varying results, which is probably why it's hard to find...), so you are left to do this yourself - and that is a complete project involving writing an encoder, compressor etc. Memory usage will also be quite high, as you need to keep each frame in a separate buffer until you know the full length, then create a storage buffer to hold them all, and so forth. And even if you did, compressing more than 6 GB worth of data (or even "just" 2 GB) is not going to make the user or the browser very happy (if there is any memory left)...
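To make the raw-grab approach concrete, here is a minimal sketch (the canvas variable and the frame rate are assumptions; this illustrates the memory and timing problems above, it does not solve them):

    const ctx = canvas.getContext('2d');
    const frames = [];   // raw RGBA buffers pile up in memory very quickly
    const stamps = [];   // high-resolution timestamps, only approximate

    function grabFrame() {
      // getImageData() copies the raw RGBA pixels: at 1280x720 that is
      // ~3.7 MB per frame, i.e. the ~6.18 GB/min figure computed above.
      frames.push(ctx.getImageData(0, 0, canvas.width, canvas.height).data);
      stamps.push(performance.now());
    }

    // Aim for ~30 FPS; in practice setInterval drifts, which is the
    // variable frame-timing problem described above.
    setInterval(grabFrame, 1000 / 30);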
Or bite the dust and go with a commercial Flash-based solution (but that excludes your image processing and pretty much takes over the camera... so not really an option in this case).
The only realistic option, IMO, is to wait for the aforementioned API - this will let your browser do all the hard work in compiled, optimized code, enable frame-by-frame compression that leaves memory pretty much intact, and give very little headache compared to the alternative(s) above. There may be an option to apply shaders to the stream at some point, or to integrate it with some canvas processing (not on the table in this proposal, AFAICS), so recording in real time from a canvas will still be a challenge.
This is where server side processing comes in...
(Of course, a screen recorder is an option, which is of course completely non-integrated, but will at least enable you to demo your effects...).

flag start point and end point of streaming video node.js

I'm trying to write a Node.js website for streaming HTML5 video (WebM, MP4), but I don't know how to tell when a user has finished viewing a video. That means we need a start point (the point where the user starts viewing the video) and an end point (the point where the user finishes viewing it), so we can know what percentage of the video they have viewed.
The videos are located on our server.
The video HTML element (which is a subclass of HTMLMediaElement) has currentTime and duration properties that you can read in JavaScript, and also a bunch of events you can listen for, like onplay, onplaying, onpause, ontimeupdate, etc.
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement
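For example, a minimal sketch of tracking viewing progress with those properties and events (the element selection and logging are placeholders; you would report the figures back to your Node.js server yourself):

    const video = document.querySelector('video');
    let startPoint = null;

    video.addEventListener('play', () => {
      if (startPoint === null) startPoint = video.currentTime; // start point
    });

    video.addEventListener('timeupdate', () => {
      // fires several times per second during playback
      const percentViewed = (video.currentTime / video.duration) * 100;
      console.log(`viewed: ${percentViewed.toFixed(1)}%`);
    });

    video.addEventListener('ended', () => {
      // end point: the user played through to the end
      console.log(`watched from ${startPoint}s to ${video.duration}s`);
    });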

Generating a graphical timeline in JavaScript/jQuery

Let's say I can fetch the total time of a certain video in seconds. I can also get the starting times of sub-events in that video.
Is there a way to output some kind of timeline with the sub-events arranged by their starting times relative to the end time of the video?
The sub-events should also be linked to the video in such a way that when I click on one, the video jumps to the beginning of that sub-event.
Basically, I would like to hide the timeline of the HTML5 video and put a new, dynamically generated one under the video, with a few selected events to navigate through - no slider, just clicks (for example, a tennis match with a set-and-game timeline instead of a min:sec timeline).
I'm using HTML5 video and JavaScript.
Well, you can use this jQuery plugin for the timeline. It would look cool.
And for jumping, you can read here.
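The jump itself boils down to setting currentTime. A minimal sketch, where the .sub-event elements and their data-start attributes are assumptions about your generated timeline markup:

    const video = document.querySelector('video');
    document.querySelectorAll('.sub-event').forEach((el) => {
      el.addEventListener('click', () => {
        video.currentTime = Number(el.dataset.start); // sub-event start, in seconds
        video.play();
      });
    });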

Javascript to select snippet from audio track

I'm working on a project in PHP, where from a library of tracks, a user can select a track, choose a snippet from that track of a few seconds in length, and use that snippet as a search parameter on the next page.
The client would like a waveform of the track to be displayed (these have already been generated as PNG files), with a 'playhead' showing the current playback position (should be pretty easy to do). The user should be able to select a start and endpoint for their selection by dragging vertical bars, audition the selected snippet (by pressing the space bar, or similar), and then click 'Search' to submit an HTML form. The only parameters I actually need from the form are the start and end positions of the selected audio snippet.
So, this is what the client wants, and I'm at the brainstorming phase. So far, I have a bunch of mp3 files and corresponding waveform graphics. In terms of browser support, it's going to be OK to specify that the browser must be 'the latest version of...', but I would like to offer support for all the big names: Safari, Firefox, Opera, IE, Chrome.
Do you have any suggestions of JavaScript libraries or solutions I should consider to help with implementation: embedding the audio file, the playback control interface, and the snippet selection interface. Although it might be possible to generate corresponding OGG files for all the mp3s (to help with an HTML5 implementation), I'd prefer not to if possible, as it complicates things. So ideally I'd like an mp3-only solution which offers cross-browser support. Perhaps something like jPlayer would be a possibility?
I'm certainly familiar with jQuery, so using that would be a bonus.
Are there any existing libraries I could use which might help me with the 'snippet selection' interface? In a totally ideal world, it would be a solution with a 'scrub bar' - i.e. as you drag the start and endpoint handles, the sound at the 'playhead' is previewed instantly.
Many thanks for any ideas and suggestions!
EDIT (more information):
I'm hoping to be able to create something similar to this demo: http://www.happyworm.com/jquery/jplayer/latest/demo-07.htm except with the added feature that a user can select a 'snippet' from the track, using start- and end-point scrub bars, and can audition the snippet by pressing play.
So, my desired interface is like this:
on page load, the track would start buffering, and the play button would play from the beginning as normal if pressed
if the user drags a scrub bar from the far left edge of the track, the sound at the playhead would be previewed (as with the demo I linked to), and on releasing the scrub bar, playback would continue from there to the end of the track
if the user drags another scrub bar from the far right of the track, the sound at the playhead would again be previewed. However, as the bar is released, the playback should jump back to the LEFT scrub bar and continue from there. (If this is too difficult to achieve, I'd be happy for the right-hand scrub bar not to preview the sound as it is dragged, but just to act as a 'marker' for the end of the selection.)
the two scrub bars cannot pass each other, so the left one is always the 'start' and the right is the 'end' of the selection.
once the bars have been moved and a selection has been made, playback should loop the selected snippet indefinitely, until 'stop' is pressed.
in an ideal world, I'd like the scrub bars to 'snap' to a grid of whole seconds, so that the snippet start, end and length values are always a whole number of seconds. Failing that, I'll just round the numbers to the nearest integer later in JavaScript.
For the selection:
http://docs.jquery.com/UI/Slider
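For instance, a minimal sketch using that slider's range option (the element ids, hidden form fields and trackLengthSeconds are assumptions):

    $('#snippet-slider').slider({
      range: true,                     // two handles: snippet start and end
      min: 0,
      max: trackLengthSeconds,         // assumed known from the track metadata
      step: 1,                         // snap to whole seconds, as requested
      values: [0, trackLengthSeconds],
      slide: function (event, ui) {
        $('#start').val(ui.values[0]); // hidden fields submitted with 'Search'
        $('#end').val(ui.values[1]);
      }
    });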
As for JavaScript doing playback scrubbing, good luck with that... It sounds more like a Flash or HTML5 thing.
