In JavaScript, I have come across a situation where I need to play two audio files simultaneously in iOS Safari. One is the voice script, while the other is the background sound (playing in a loop). I've used the following code to achieve this:
var audio = new Audio();
audio.src = "voice.mp3";
audio.type = "audio/mp3";
audio.play();
var bg_audio = new Audio();
bg_audio.src = "background.mp3";
bg_audio.type = "audio/mp3";
bg_audio.loop = true;
bg_audio.play();
However, this creates an issue in the iOS Control Center (Lock Screen): sometimes it shows the progress bar of the voice player, and sometimes it shows the controls for the background player. I want to show controls for the voice player only. Is there any workaround?
P.S. I've been searching for an answer everywhere and even posted on Apple's Developer Forum, but didn't get any response.
Thanks!
In iOS Safari, only one channel of audio can play at a time, so there is no layering or overlapping of sounds.
You need to use the Web Audio API instead.
This article may be helpful -
web audio on iOS
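For reference, here is a minimal sketch of the Web Audio API approach, using the voice.mp3 and background.mp3 files from the question. Two buffer sources share one AudioContext, which iOS mixes on a single output. Note that the context should be created or resumed inside a user gesture on iOS, and that older Safari versions only support the callback form of decodeAudioData:

var ctx = new (window.AudioContext || window.webkitAudioContext)();

function load(url) {
    return fetch(url)
        .then(function (res) { return res.arrayBuffer(); })
        .then(function (data) { return ctx.decodeAudioData(data); });
}

Promise.all([load('voice.mp3'), load('background.mp3')]).then(function (buffers) {
    var voice = ctx.createBufferSource();
    voice.buffer = buffers[0];
    voice.connect(ctx.destination);

    var bg = ctx.createBufferSource();
    bg.buffer = buffers[1];
    bg.loop = true; // background bed loops, as in the original code
    bg.connect(ctx.destination);

    // Both sources play through the same context, so they layer correctly.
    voice.start();
    bg.start();
});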
Background
I am trying to create an interactive video player using the HTML5 video element.
I came across the Media Source Extensions API and got it working:
var mediaSource = new MediaSource();
video.src = window.URL.createObjectURL(mediaSource);
// A SourceBuffer can only be added once the MediaSource has opened.
mediaSource.addEventListener('sourceopen', function () {
    sourceBuffer = mediaSource.addSourceBuffer(mime);
});
Next I make a REST call to fetch the video segment and attach it to the source buffer.
sourceBuffer.appendBuffer(arrayBuffer);
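For context, the fetch-and-append step might look something like the following; the /segments/next endpoint name is a hypothetical placeholder:

// Fetch the segment as an ArrayBuffer and append it to the buffer.
// (appendBuffer throws if a previous append is still in progress, so real
// code should queue appends and wait for the 'updateend' event.)
fetch('/segments/next')
    .then(function (response) { return response.arrayBuffer(); })
    .then(function (arrayBuffer) {
        sourceBuffer.appendBuffer(arrayBuffer);
    });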
Problem
I am trying to fire events when the newly loaded video segment reaches a certain time.
Also, I want to be able to seamlessly loop a video segment, and have it exit and continue to another video segment on interaction.
How can I achieve these features?
I'm working on a web project where the user chooses a design for a mobile mockup and saves some chat conversations.
As output, the application should produce a high-quality video (1080p at least) of the saved chat, so that it looks like a real chat conversation was captured.
As of now I'm drawing the mockup and conversation on an HTML5 canvas and recording it with the canvas.captureStream() method.
It is able to record a canvas up to 1280px wide, but when I tried increasing the size to achieve 1080p video, the canvas animations slowed down and the browser sometimes stopped working.
I'm done googling how to optimize the canvas and everything else that might help me.
It looks like canvas can no longer work for me, so is there any way to record the DOM and render it as video?
I was using the captureStream method of the canvas:
const stream = canvas.captureStream();
And mediaRecorder to capture it.
let options = {mimeType: 'video/webm'};
let mediaRecorder = new MediaRecorder(stream, options);
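For completeness, a fuller sketch of this capture flow looks roughly like this; the 30 fps hint and the chunk handling are assumptions, not part of the original code:

const stream = canvas.captureStream(30); // 30 fps frame-rate hint
const mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
const chunks = [];

mediaRecorder.ondataavailable = (e) => {
    if (e.data.size > 0) chunks.push(e.data);
};
mediaRecorder.onstop = () => {
    // Assemble the recorded chunks into a playable/downloadable Blob.
    const blob = new Blob(chunks, { type: 'video/webm' });
    const url = URL.createObjectURL(blob);
};

mediaRecorder.start();
// ... drive the chat animation with JavaScript ...
// then call mediaRecorder.stop() once the conversation has played through.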
I expect to find a way of recording video of the DOM in high quality, so that I can run the chat with JavaScript and record it to achieve the desired output.
I'm using Howler JS to play songs on a website. I want just a portion of each song to be played.
I'm making a sprite of each MP3 so those sprites can be played. However, it takes really long before the audio plays. It's as if the whole MP3 is downloaded first and then the sprite begins, which really hurts performance and consumes bandwidth.
I'm not familiar with Howler at all; maybe there's a method to download just the portion to be played, or if not, is there any other library or way to accomplish this?
<div
  className="playExtrait"
  onClick={() => {
    Howler.unload();
    let song = new Howl({
      src: [url],
      html5: true,
      sprite: {
        extrait: [0, 30000]
      }
    });
    let songID = song.play("extrait");
    setPlayPause("playing");
    song.fade(1, 0, 30000, songID);
    song.on("end", () => {
      setPlayPause("paused");
    });
  }}
>
You can create recordings of specific time slices of the media by using Media Fragments URI, for example by setting the src of an <audio> element to /path/to/media#t=10,15 for playback of seconds 10 through 15 of the media resource, and using MediaRecorder to record the playback and save it as a .webm media file, where the MediaRecorder is stopped at the pause event of the HTMLMediaElement.
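A minimal sketch of that approach, assuming a Chromium-style captureStream() on the media element (Firefox uses the prefixed mozCaptureStream()):

const media = new Audio('/path/to/media#t=10,15'); // fragment: seconds 10-15
media.addEventListener('canplay', () => {
    const stream = media.captureStream
        ? media.captureStream()
        : media.mozCaptureStream(); // Firefox prefix
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    recorder.ondataavailable = (e) => chunks.push(e.data);
    recorder.onstop = () => {
        const blob = new Blob(chunks, { type: 'audio/webm' });
        // Save or play back the recorded slice here.
    };
    // The element fires 'pause' on its own at the end of the fragment.
    media.addEventListener('pause', () => recorder.stop(), { once: true });
    recorder.start();
    media.play();
}, { once: true });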
See
How to edit (trim) a video in the browser?
How to get a precise timeupdate on a video to return upto 2 decimal numbers (milliseconds)?
Javascript - Seek audio to certain position when at exact position in audio track
How to use Blob URL, MediaSource or other methods to play concatenated Blobs of media fragments?
For an example of concatenating multiple media fragments into a single recording, see MediaFragmentRecorder (I am the author of the code at that repository). MediaSource at Chromium/Chrome has issues when MediaRecorder is used to record a MediaSource stream, though the code should still produce the expected result in Firefox.
I'm working on a web-based music sequencer/tracker, and I've noticed that in my sample playback routine, audio contexts seem to exist only for the duration of a sample, and that the Web Audio API doesn't seem to adjust playback duration when I pitch-shift a sample. For instance, if I shift a note down an octave, the routine only plays the first half of the sound before cutting off. More intense downward pitch shifts result in even less of the sound playing, and while I'm not sure I can confirm this, I suspect that speeding up the audio results in relatively long periods of silence before the sound exits the buffer.
Here's my audio playback routine at the moment. So far, a lot more work has gone into making sure other functions send the right data to this routine than into extending its functionality.
function playSound(buffer, pitch, dspEffect, dspValue, volume) {
    var source = audioEngine.createBufferSource();
    source.buffer = buffer;
    source.playbackRate.value = pitch;

    // Volume adjustment is handled before other DSP effects are added.
    var volumeAdjustment = audioEngine.createGain();
    source.connect(volumeAdjustment);

    // Very basic error trapping in case of bad volume input.
    if (volume >= 0 && volume <= 1) {
        volumeAdjustment.gain.value = volume;
    } else {
        volumeAdjustment.gain.value = 0.6;
    }

    switch (dspEffect) {
        case 'lowpass':
            var lowPass = audioEngine.createBiquadFilter();
            volumeAdjustment.connect(lowPass);
            lowPass.connect(audioEngine.destination);
            lowPass.type = 'lowpass';
            lowPass.frequency.value = dspValue;
            break;
        // There are a couple of other optional DSP effects,
        // but so far they all operate in about the same fashion.
        default:
            // Without an effect, the gain node still has to reach the
            // destination, or the sound is never heard.
            volumeAdjustment.connect(audioEngine.destination);
    }

    source.start();
}
My intent is for samples to play back fully no matter how much pitch shifting is applied, and to limit the amount of pitch shifting allowed if necessary. I've found that appending several seconds of silence to a sound works around this issue, but it's cumbersome due to the large number of sounds I would need to process, and I'd prefer a code-based solution.
EDIT: Of the browsers I can test in, this only appears to be an issue in Google Chrome. Samples play back fully in Firefox; Internet Explorer does not yet support the Web Audio API; and I do not have ready access to Safari or Opera. This definitely changes the nature of the help I'm looking for.
I've found that appending several seconds of silence to a sound works around this issue, but it's cumbersome due to the large amount of sounds I would need to process, and I'd prefer a code-based solution.
You could upload a sound file that is just several seconds of silence and append it to the actual audio file. Here is an SO answer that shows how to do this...
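Alternatively, since a code-based solution was requested, here is a hedged sketch that pads an already-decoded AudioBuffer with trailing silence, assuming audioEngine is the AudioContext from the question:

function padWithSilence(buffer, extraSeconds) {
    var padded = audioEngine.createBuffer(
        buffer.numberOfChannels,
        buffer.length + Math.ceil(extraSeconds * buffer.sampleRate),
        buffer.sampleRate
    );
    for (var ch = 0; ch < buffer.numberOfChannels; ch++) {
        // Copy the original samples; the extra tail stays at 0 (silence).
        padded.getChannelData(ch).set(buffer.getChannelData(ch));
    }
    return padded;
}

You would then pass something like padWithSilence(buffer, 2) to playSound instead of the raw buffer, avoiding any editing of the source files.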
I'm building a video for my website with HTML5. Ideally, I'd have only one silent video file, and five different audio tracks in different languages that sync up with the video.
Then I'd have a button that allows users to switch between audio tracks, even as the video is playing; and the correct audio track would come to life (without the video pausing or starting over or anything; much like a DVD audio track selection).
I can do this quite simply in Flash, but I don't want to. There has to be a way to do this in pure HTML5 or HTML5+jQuery. I'm thinking you'd play all the audio files at 0 volume, and only increase the volume of the active track... but I don't know how to even do that, let alone handle it when the user pauses or rewinds the video...
Thanks in advance!
Synchronization between audio and video is far more complex than simply starting the audio and video at the same time. Sound cards will play back at slightly different rates. (What is 44.1 kHz to me might actually be 44.095 kHz to you.)
Often, the video is synchronized to the audio stream, but the player is what handles this. By loading up multiple objects for playback, you are effectively pulling them out of sync.
The only way this is going to work is if you can find a way to control the different audio streams from the video player. I don't know if this is possible.
Instead, I propose that you encode the video multiple times, with the different streams. You can use FFmpeg for this, and even automate the process, depending on your workflow. Switching among streams becomes a problem, but most video players are robust enough to guess the byte offset in the file when given the bitrate.
If you only needed two languages, you could simply adjust the balance between a left and right stereo audio channel.
If you're willing to let all five tracks download, why not just mux them into the video? Videos are not limited to a single audio track (even AVI could do multiple audio tracks). Then syncing should be handled for you. You'd just enable/disable the audio tracks as needed.
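A hedged sketch of the enable/disable step, assuming the browser exposes the audioTracks API on the video element (support is limited; the 'fr' language code is a hypothetical example):

var video = document.querySelector('video');

function selectLanguage(lang) {
    for (var i = 0; i < video.audioTracks.length; i++) {
        var track = video.audioTracks[i];
        // Enable only the muxed track whose language matches the selection.
        track.enabled = (track.language === lang);
    }
}

selectLanguage('fr');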
It is doable with the Web Audio API.
Part of my program listens to video element events and stops or restarts audio tracks created using the Web Audio API. This gives me the ability to turn any of the tracks on and off in perfect sync.
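A minimal sketch of that technique, assuming currentBuffer is a decoded AudioBuffer for the active language (a hypothetical name) and video is the playing video element:

var ctx = new (window.AudioContext || window.webkitAudioContext)();
var source = null;

function startTrack(buffer) {
    stopTrack();
    source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    // Start the buffer at the video's current position to stay in sync.
    source.start(0, video.currentTime);
}

function stopTrack() {
    if (source) {
        source.stop();
        source = null;
    }
}

video.addEventListener('play', function () { startTrack(currentBuffer); });
video.addEventListener('seeked', function () {
    if (!video.paused) startTrack(currentBuffer);
});
video.addEventListener('pause', stopTrack);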
There are some drawbacks.
There is no Web Audio API support in Internet Explorer; only Edge supports it.
The technique works with buffered audio only and that's limiting. There are some problems with large files: https://bugs.chromium.org/p/chromium/issues/detail?id=71704