I implemented web push notifications. The notification arrives, but the custom notification sound I added is not working; the default system Windows sound plays instead. I added the sound in the code below. Can anyone tell me why my custom notification sound is not playing when a push is received?
self.addEventListener('push', function (event) {
  const data = event.data.json();
  console.log(data);
  const title = 'Sound Notification';
  const options = {
    // The custom sound that should play instead of the system default
    sound: '../public/messageNotification.mp3',
  };
  event.waitUntil(self.registration.showNotification(title, options));
});
I think you can use HTMLAudioElement for this purpose. For example:
const sound = new Audio(); // an HTMLAudioElement
sound.src = '../public/messageNotification.mp3';
sound.load();
sound.play();
We'd need to take a look at what registration.showNotification does, but if it is working correctly and the sound is still not playing, it might be because modern browsers block autoplay in some situations.
For example, Chrome has an autoplay policy. Firefox and Safari have slightly different policies.
In these cases, you might need to find workarounds for each browser, or you can instruct users to always enable autoplay for your site. In Chrome 104, you do this by clicking the lock icon (next to the URL), selecting Site settings, and then choosing Allow under Sound. After that, playing audio the usual way should work:
const song = new Audio("url");
song.play(); // to play the audio
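Note that a service worker has no document, so new Audio() is not available inside the push handler itself. A minimal sketch of one workaround, under the assumption that a page of your site is open (and that messageNotification.mp3 is served from the site root): have the worker post a message to its window clients and let the page play the sound.
// In the service worker: show the notification, then ask any open
// pages to play the sound, since the "sound" option is widely unsupported.
self.addEventListener('push', (event) => {
  event.waitUntil((async () => {
    await self.registration.showNotification('Sound Notification');
    const windows = await self.clients.matchAll({ type: 'window' });
    for (const client of windows) {
      client.postMessage({ type: 'play-sound' });
    }
  })());
});

// In the page: play the sound when the service worker asks.
navigator.serviceWorker.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'play-sound') {
    new Audio('/messageNotification.mp3').play().catch(console.error);
  }
});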
I'm having trouble accessing the microphone and camera while using Firefox on Windows after running this script the second time. Chrome/Edge are fine.
let stream;

// Start capturing when "record" is clicked
document.getElementById('record').onclick = async () => {
  try {
    stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    document.getElementById('video').srcObject = stream;
  } catch (e) {
    console.error(e);
  }
};

// Stop every track to release the devices when "stop" is clicked
document.getElementById('stop').onclick = () => {
  stream.getTracks().forEach(track => track.stop());
  document.getElementById('video').srcObject = null;
  stream = null;
};
On the second go the stream seems to be legitimate (it contains a video track and an audio track), but it won't display video correctly, whereas Chrome and Safari handle it without any issues. Should I treat Firefox in a specific way? What could be wrong? I'll add that my camera and microphone are fine, and I've granted the permissions.
fiddle link to the example code
Closing and reopening the browser seems to make the issue go away, until I run that script again. Thanks in advance.
Your code is correct. It's just that webcams tend to take a little extra time between when they're closed and when they're re-opened. It's a big issue for webcams that don't support multiple clients simultaneously.
I've experienced this problem occasionally on Chrome, as well as Firefox.
The best thing to do is handle errors and try again.
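For example, a minimal retry sketch (the attempt count and delay are my own assumptions, not values from any spec):
async function getStreamWithRetry(constraints, attempts = 3, delayMs = 500) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await navigator.mediaDevices.getUserMedia(constraints);
    } catch (e) {
      // On the last attempt, give up and surface the error
      if (i === attempts - 1) throw e;
      // The webcam may still be releasing; wait briefly and retry
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}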
I have a video call application based on WebRTC. It is working as expected. However, while a call is in progress, if I disconnect and reconnect the audio device (mic + speaker), only the speaker part keeps working. The mic seems to stop working: the other side can't hear anymore.
Is there any way to inform WebRTC to take audio input again once the audio device is connected back?
Your question appears simple—the symmetry with speakers is alluring—but once we're dealing with users who have multiple cameras and microphones, it's not that simple: If your user disconnects their bluetooth headset they were using, should you wait for them to reconnect it, or immediately switch to their laptop microphone? If the latter, do you switch back if they reconnect it later? These are application decisions.
The APIs to handle these things are: primarily the ended and devicechange events, and the replaceTrack() method. You may also need the deviceId constraint, and the enumerateDevices() method to handle multiple devices.
However, to keep things simple, let's take the assumptions in your question at face value to explore the APIs:
When the user unplugs their sole microphone (not their camera) mid-call, our job is to resume the conversation with it when they reinsert it, without dropping video:
First, we listen to the ended event to learn when our local audio track drops.
When that happens, we listen for a devicechange event to detect re-insertion (of anything).
When that happens, we could check what changed using enumerateDevices(), or simply try getUserMedia again (microphone only this time).
If that succeeds, use await sender.replaceTrack(newAudioTrack) to send our new audio.
This might look like this:
let sender;

(async () => {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
    pc.addTrack(stream.getVideoTracks()[0], stream);
    // Keep the audio sender around so we can replace its track later
    sender = pc.addTrack(stream.getAudioTracks()[0], stream);
    // When the mic goes away, start watching for device re-insertion
    sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
  } catch (e) {
    console.log(e);
  }
})();

async function tryAgain() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({audio: true});
    // Swap the dead track for the new microphone without renegotiating
    await sender.replaceTrack(stream.getAudioTracks()[0]);
    navigator.mediaDevices.ondevicechange = null;
    // Re-arm the handler in case the device is unplugged again
    sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
  } catch (e) {
    // devicechange may fire for non-microphone devices; keep waiting
    if (e.name == "NotFoundError") return;
    console.log(e);
  }
}

// Your usual WebRTC negotiation code goes here
The above is for illustration only. I'm sure there are lots of corner cases to consider.
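If you prefer the enumerateDevices() route mentioned in the steps above, a sketch might look like this (checking only whether any microphone is present is a simplification of mine; a real app might compare the device lists before and after the change):
async function hasMicrophone() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices.some(d => d.kind === 'audioinput');
}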
A simple usage of the Web Audio API:
var UnprefixedAudioContext = window.AudioContext || window.webkitAudioContext;
var context;
var volumeNode;
var soundBuffer;

context = new UnprefixedAudioContext();
volumeNode = context.createGain();
volumeNode.connect(context.destination);
volumeNode.gain.value = 1;

// base64ToArrayBuffer and getTapWarm are the app's own helpers that
// return the sound file as an ArrayBuffer
context.decodeAudioData(base64ToArrayBuffer(getTapWarm()), function (decodedAudioData) {
  soundBuffer = decodedAudioData;
});

function play(buffer) {
  var source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(volumeNode);
  // noteOn is the legacy name for start() in older WebKit
  (source.start || source.noteOn).call(source, 0);
}

function playClick() {
  play(soundBuffer);
}
This works fine inside a UIWebView (it plays the sound), but when you switch to the Music app and play a song, and then come back to the app with the UIWebView, the song stops playing.
The same code inside Safari doesn't have this problem.
Is there a workaround to avoid this behavior?
Here's the full fiddle:
http://jsfiddle.net/gabrielmaldi/4Lvdyhpx/
Are you on iOS? This sounds like an audio session category issue to me. iOS apps define how their audio interacts with other apps' audio. From Apple's documentation:
Each audio session category specifies a particular pattern of "yes" and "no" for each of the following behaviors, as detailed in Table B-1:
Interrupts non-mixable apps audio: If yes, non-mixable apps will be interrupted when your app activates its audio session.
Silenced by the Silent switch: If yes, your audio is silenced when the user moves the Silent switch to silent. (On iPhone, this switch is called the Ring/Silent switch.)
Supports audio input: If yes, app audio input (recording) is allowed.
Supports audio output: If yes, app audio output (playback) is allowed.
Looks like the default category silences audio from other apps:
AVAudioSessionCategorySoloAmbient: (Default) Playback only. Silences audio when the user switches the Ring/Silent switch to the "silent" position and when the screen locks. This category differs from the AVAudioSessionCategoryAmbient category only in that it interrupts other audio.
The key here is in the last sentence: "it interrupts other audio".
There are a number of other categories you can use, depending on whether or not you want your audio silenced when the screen is locked, and so on. AVAudioSessionCategoryAmbient does not interrupt other apps' audio.
Give this a try in the objective-c portion of your app:
NSError *setCategoryError = nil;
BOOL success = [[AVAudioSession sharedInstance]
setCategory: AVAudioSessionCategoryAmbient
error: &setCategoryError];
if (!success) { /* handle the error in setCategoryError */ }
In my small HTML5 web-app, I want to play sounds in response to user actions. When the user clicks a button, in the onclick handler I play a sound like this:
url = "assets/sounds/buzz" + (this.canPlayMP3 ? ".mp3" : ".ogg");
sound = new Audio(url);
sound.load();
sound.play();
This works great on Firefox. Unfortunately, on an iPad (iPad 2 running iOS 5.1.1), I get a 2-second delay before the sound is played. This happens every time I play the sound sample, not just the first time.
The MP3 file is 9KB long. The iPad is connected to the network using exactly the same Wifi connection as the computer running Firefox.
How can I figure out what's going on?
You might want to create a single instance of the audio element for each sound:
var Sounds = {
cat: new Audio('/sounds/meow.ogg'),
bird: new Audio('/sounds/tweet.ogg')
};
Then you can play the same element over and over again:
function playSound(name) {
Sounds[name].currentTime = 0;
Sounds[name].play();
}
playSound('cat');
If iOS destroys your Audio objects, you could cache sound files in the cache manifest:
CACHE MANIFEST
# 2012-08-09:v1.3
NETWORK:
*
CACHE:
/sounds/meow.ogg
/sounds/tweet.ogg
How about moving the loading outside the handler, e.g. making the sound global/preloaded? Then inside the handler, only call the play method, as in the sketch below.
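A minimal sketch of that idea, reusing the buzz sound from the question (the button id is a placeholder of mine):
// Preload once, at startup
var buzz = new Audio("assets/sounds/buzz.mp3");
buzz.load();

// Inside the handler, only rewind and play
document.getElementById("button").onclick = function () {
  buzz.currentTime = 0;
  buzz.play();
};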
I've been experimenting with connecting an audio element to the Web Audio API using createMediaElementSource, and I got it to work, but one thing I need to do is change the playback rate of the audio tag, and I couldn't get that to work.
If you try to run the code below, you'll see that it works until you uncomment the line where we set the playback rate. When that line is in, the audio gets muted.
I know I can set the playback rate on an AudioBufferSourceNode using source.playbackRate.value, but this is not what I'd like to do. I need to set the playback rate on the audio element while it's connected to the Web Audio API using createMediaElementSource, so I don't have any AudioBufferSourceNode.
Has anyone managed to do that?
var _source,
_audio,
_context,
_gainNode;
_context = new webkitAudioContext();
function play(url) {
if (_audio) {
_audio.pause();
}
_audio = new Audio(url);
//_audio.playbackRate = 0.6; // uncommenting this mutes the audio
setTimeout(function() {
if (!_gainNode) {
_gainNode = _context.createGainNode();
_gainNode.gain.value = 0.1;
_gainNode.connect(_context.destination);
}
_source = _context.createMediaElementSource(_audio);
_source.connect(_gainNode);
_audio.play();
}, 0);
}
play("http://geo-samples.beatport.com/items/volumes/volume2/items/3000000/200000/40000/9000/400/60/3249465.LOFI.mp3");
setTimeout(function () {
_audio.pause();
}, 4000);
You have to set the playback rate after the audio has started playing. The only portable way I have found to make this work is by waiting until you get a timeupdate event with a valid currentTime:
_audio.addEventListener('timeupdate', function () {
  if (!isNaN(_audio.currentTime)) {
    _audio.playbackRate = 0.6;
  }
});
Note that playbackRate isn't currently supported on Android, and that Chrome (on desktop) doesn't support playback rates lower than 0.5.
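An alternative sketch (an assumption of mine, not part of the original answer): set the rate once the playing event fires, which also guarantees that playback has actually started:
// Runs once, the first time playback actually begins
_audio.addEventListener('playing', function () {
  _audio.playbackRate = 0.6;
}, { once: true });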
Which browser are you using to test this? It seems this is not yet implemented in Firefox, but it should be working in Chrome.
Mozilla bug for implementing playbackRate:
https://bugzilla.mozilla.org/show_bug.cgi?id=495040