The idea is to have multiple audio files streaming on a website, with each audio stream routed to a separate audio channel. Alternatively, the audio could come from separate web pages, or even separate windows, and then be filtered to separate audio channels.
As I understand it, audio coming from anywhere in the browser is streamed through audio channels 1 (left) and 2 (right). The advantage of streaming audio sources on different channels is the ability to access that sound data independently for further manipulation.
How can this be accomplished?
Check out this tutorial about creating positional audio with the Web Audio API:
http://www.html5rocks.com/en/tutorials/webaudio/positional_audio/
The Web Audio API provides the low-level audio functions needed to mix and route audio channels freely.
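For example, here is a minimal sketch (the file names are placeholders) that routes two audio elements to the left and right output channels with a ChannelMergerNode. Note that each merger input is downmixed to mono before being placed on its channel:

```js
const ctx = new AudioContext();

// A 2-input merger whose inputs map to output channel 0 (left) and 1 (right).
const merger = ctx.createChannelMerger(2);
merger.connect(ctx.destination);

// Hypothetical source files.
const a = new Audio('stream-a.mp3');
const b = new Audio('stream-b.mp3');

const srcA = ctx.createMediaElementSource(a);
const srcB = ctx.createMediaElementSource(b);

// connect(destination, outputIndex, inputIndex):
// each source feeds exactly one input (channel) of the merger.
srcA.connect(merger, 0, 0); // left
srcB.connect(merger, 0, 1); // right

// Autoplay policies require a user gesture before audio starts.
document.addEventListener('click', () => {
  ctx.resume();
  a.play();
  b.play();
}, { once: true });
```

Because each stream is now an independent node in the graph, you can also insert per-stream gain, filter, or analyser nodes before the merger.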
Related
Since AudioContext.decodeAudioData() does not support streaming, what are my options for playing streamed audio chunks? Are there any libraries that I can use? Any resources that you might be able to point me to?
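One common workaround is to bypass decodeAudioData entirely and feed the chunks into Media Source Extensions, letting an audio element do the decoding. A rough sketch, where the endpoint URL and MIME type are assumptions to adapt:

```js
const audio = new Audio();
const mediaSource = new MediaSource();
audio.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  // The MIME type must match the stream; check MediaSource.isTypeSupported() first.
  const sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg');

  const response = await fetch('/stream.mp3'); // hypothetical endpoint
  const reader = response.body.getReader();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // appendBuffer() throws while a previous append is still in flight.
    if (sourceBuffer.updating) {
      await new Promise((r) =>
        sourceBuffer.addEventListener('updateend', r, { once: true }));
    }
    sourceBuffer.appendBuffer(value);
  }
  // In production, wait for the final append to finish before this call.
  mediaSource.endOfStream();
});

audio.play(); // may require a prior user gesture (autoplay policy)
```

If you still need Web Audio processing, you can then tap the element with createMediaElementSource().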
I want to use the Web Audio API AnalyserNode without access to an MP3 source file. I'd like to just analyze the sound playing from the browser. Is that possible?
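The Web Audio API can only analyse audio it is handed as a node, but Chromium-based browsers let you capture a tab's audio with getDisplayMedia and feed it to an AnalyserNode. A rough sketch; note that audio capture support varies by browser and platform, so treat this as an assumption to verify:

```js
async function analyseTabAudio() {
  // The user picks a tab; tab audio capture is Chromium-only today.
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true, // the API requires video even if we only want audio
    audio: true,
  });
  if (stream.getAudioTracks().length === 0) {
    throw new Error('No audio was shared');
  }

  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser); // analyse only; no connection to destination

  const data = new Uint8Array(analyser.frequencyBinCount);
  setInterval(() => {
    analyser.getByteFrequencyData(data);
    console.log('loudest bin:', data.indexOf(Math.max(...data)));
  }, 250);
}
```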
I am trying to create a web event where multiple broadcasters live-stream and viewers watch them live. Each broadcaster takes a turn, which is selected by the event host. So basically there are three user types in an event: (1) one event host (who is also a broadcaster), (2) multiple broadcasters (event speakers), and (3) multiple viewers.
I also want to record the event stream from start to end with the host and speaker cameras. Note that the host's video and audio must be recorded throughout the event. If video is not possible, then at least audio.
I successfully integrated this library: https://github.com/muaz-khan/RTCMultiConnection
With this library I am able to show the cameras of the host and the event's multiple broadcasters, and viewers are able to watch them live without a camera.
For recording, I am using RecordRTC to record each event speaker's video (recording starts when their turn starts and stops when it ends) and upload it to the server; after each speaker's turn finishes, I merge all the recorded videos using ffmpeg.
However, I want to remove the ffmpeg merging and record the full event, with the host video and each selected event speaker's video one after another, as a single recorded video.
To that end, I stored all the video/webm blobs in a file and tried to merge them to create a single video file, but found no solution.
I also tried a webpage canvas recording solution, but it does not record the video elements.
Canvas recording demo: https://www.webrtc-experiment.com/RecordRTC/Canvas-Recording/webpage-recording.html
Can anybody help me either record all the webm blobs as a single video file or use a canvas to record the streaming videos as a single recording?
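One way to avoid server-side merging is to composite the video elements onto a canvas yourself and record the canvas stream with MediaRecorder. A rough sketch, assuming hostStream/speakerStream are the live MediaStreams and the element IDs and layout are placeholders:

```js
const hostVideo = document.getElementById('host');       // placeholder IDs
const speakerVideo = document.getElementById('speaker');

const canvas = document.createElement('canvas');
canvas.width = 1280;
canvas.height = 720;
const ctx2d = canvas.getContext('2d');

// Repaint the canvas from the live video elements every frame.
function draw() {
  ctx2d.drawImage(hostVideo, 0, 0, 640, 720);
  ctx2d.drawImage(speakerVideo, 640, 0, 640, 720);
  requestAnimationFrame(draw);
}
draw();

// Mix host + current speaker audio into one track with Web Audio.
const audioCtx = new AudioContext();
const mixDest = audioCtx.createMediaStreamDestination();
audioCtx.createMediaStreamSource(hostStream).connect(mixDest);    // assumed host MediaStream
audioCtx.createMediaStreamSource(speakerStream).connect(mixDest); // assumed speaker MediaStream

// Combine the canvas video track with the mixed audio track.
const recordedStream = new MediaStream([
  ...canvas.captureStream(30).getVideoTracks(),
  ...mixDest.stream.getAudioTracks(),
]);

const recorder = new MediaRecorder(recordedStream, { mimeType: 'video/webm' });
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  const fullEvent = new Blob(chunks, { type: 'video/webm' });
  // Upload fullEvent once, instead of merging per-speaker files with ffmpeg.
};
recorder.start(1000); // emit a data chunk every second
```

When the active speaker changes, you only need to point speakerVideo (and the mixed audio source) at the new stream; the canvas recording continues uninterrupted, so the result is a single file.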
Can I separate the audio layer from the video of a YouTube video and add only the audio to my website? I want to create an app like Spotify, but playing music from, for example, YouTube.
Use youtube-dl to download the audio streams. You could probably stream them in real time too if you wanted to get fancy.
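For example, extracting the audio of a single video looks roughly like this (the URL is a placeholder; -x/--extract-audio and --audio-format are standard youtube-dl flags):

```
youtube-dl -x --audio-format mp3 https://www.youtube.com/watch?v=VIDEO_ID
```

Be aware that, as the answer further down this page notes, separating the audio from YouTube content is prohibited by the YouTube API Terms of Service.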
Does anyone know if it is possible to render an audio waveform from a video playing in a YouTube player using JavaScript?
Thank you!
On the client side, it's not possible to isolate the audio from the video data.
You would need to get the raw audio data to be able to process it with the Web Audio API (e.g. to display it).
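For comparison, if you did have access to the raw audio (e.g. a CORS-accessible file of your own), drawing a waveform is straightforward with an AnalyserNode. A minimal sketch, with the file name as a placeholder:

```js
const audio = new Audio('my-own-track.mp3'); // must be CORS-accessible
audio.crossOrigin = 'anonymous';

const ctx = new AudioContext();
const source = ctx.createMediaElementSource(audio);
const analyser = ctx.createAnalyser();
source.connect(analyser);
analyser.connect(ctx.destination); // keep the audio audible

const canvas = document.querySelector('canvas');
const c2d = canvas.getContext('2d');
const data = new Uint8Array(analyser.fftSize); // time-domain samples

function drawWave() {
  analyser.getByteTimeDomainData(data);
  c2d.clearRect(0, 0, canvas.width, canvas.height);
  c2d.beginPath();
  data.forEach((v, i) => {
    const x = (i / data.length) * canvas.width;
    const y = (v / 255) * canvas.height;
    i === 0 ? c2d.moveTo(x, y) : c2d.lineTo(x, y);
  });
  c2d.stroke();
  requestAnimationFrame(drawWave);
}
audio.play();
drawWave();
```

None of this works with the YouTube iframe player, because you never get a media element or stream that you are allowed to tap.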
There are some server-side solutions (extracting the audio from the video, sending it back, etc.), but that is not legal, as stated in YouTube's Terms of Service:
(https://developers.google.com/youtube/terms?hl=fr)
You are not allowed to:
separate, isolate, or modify the audio or video components of any YouTube audiovisual content made available through the YouTube API;
promote separately the audio or video components of any YouTube audiovisual content made available through the YouTube API;
See this question for more details:
Is there a Youtube API that gives only audio?