I have the following code in preload:
this.load.audio('theme', './audio/theme.mp3');
and in create():
gameState.music = this.sound.add('theme');
gameState.music.play();
gameState.music.loop = true;
I also added in config following some info i found out there:
audio: {
disableWebAudio: true
}
It works on desktop and iphone, but not on android. Funny thing that it worked before i made some - unrelated to audio - changes to the game.
There are some points to unpack, so I opted for an answer instead of a comment. (Sorry it's so long; I tried to keep it short.)
You changed the audio file type from mp3 (which is well supported across many devices, especially older ones) to ogg, which is not as well supported (tip: check a browser feature-support table for details).
The Android version support tables say Android 5.0+ should be able to play ogg files; I tested it on Android 5.5 and it works. So if the Android version is not below 5.0, you have tapped the screen at least once, and no sound is playing, there is a different issue.
iPhone should not work (ogg is not supported, if I can believe the Apple forum posts), but I tested it on an iPhone SE 2015 and it works. Strange.
btw: fullscreen also works, but only in the Safari browser; in Chrome my tap just scrolled the page.
Mobile devices need user interaction to play sounds/music, so if you let the page load and don't tap the screen, the music won't play; after a tap the music starts. This is the part where we are entering uncharted territory:
You are starting the music in the Scene 'FirstScene', but this Scene is stopped after the first tap. Theoretically everything in the Scene should be unloaded, but the music plays on. (This behavior seems to be known, but it still seems strange to me.)
Personally I would:
call the play() function only after a user interaction; this prevents possible problems due to the browser blocking audio,
use mp3, since it is more widely supported,
[and optionally] if possible keep the sounds/music in the same Scene, and if the Scene is closed/stopped, stop any audio that was initiated in that Scene. Or even handle all the sound in a separate, always-running Scene, as mentioned in this forum post. (I usually have an "always-running" Scene anyway, which manages my Scene switches and other background actions.)
But it seems okay, since currently it works on my local server for:
Win10 Chrome 104+
iPhone SE 2015 Safari (fullscreen won't work on iPhone, if we can trust this site)
Android 5.5 Chrome
Android 11.0 Chrome
Update Sound Manager Option:
an "easy" way is to add 2 extra Scenes
BootScene:
class BootScene extends Phaser.Scene {
    constructor() {
        super({ key: 'BootScene' });
    }
    create() {
        // `launch` starts a scene without stopping others
        this.scene.launch('FirstScene');
        this.scene.launch('SoundScene');
    }
}
SoundScene:
A Scene that runs parallel to the game:
class SoundScene extends Phaser.Scene {
    constructor() {
        super({ key: 'SoundScene' });
    }
    preload() {
        this.load.audio('theme', './audio/theme.ogg');
    }
    create() {
        this.input.once('pointerdown', _ => {
            gameState.music = this.sound.add('theme', { loop: true });
            gameState.music.play();
        });
    }
}
And make some minor tweaks:
index.html
(add new Scenes)
...
<script src="BootScene.js"></script>
<script src="SoundScene.js"></script>
...
game.js
(add new Scenes to config)
...
config = {
    ...
    scene: [BootScene, SoundScene, FirstScene, ...],
    ...
}
...
FirstScene.js:
Remove the theme stuff from the preload and create functions.
Add extra checks to prevent errors (check whether gameState.music is already set):
update() {
    if (gameState.music && !document.hasFocus()) {
        gameState.music.pause();
    }
    if (gameState.music && document.hasFocus()) {
        gameState.music.resume();
    }
}
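As a side note: if the focus check above is only there to pause the music when the game loses focus, Phaser 3's sound manager already has a built-in flag for that (a minimal sketch, relying on the pauseOnBlur property of the BaseSoundManager, which defaults to true):

// In any Scene: pause/resume all sounds automatically on blur/focus
this.sound.pauseOnBlur = true;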
After some digging and some testing, here is some more info on how to get the audio working consistently on iPhone or Android:
It seems that on mobile (iOS or Android) you might need to unlock the audio/sound.
In the SoundScene, just adapt the create function to mirror these changes:
create() {
    this.input.once('pointerdown', _ => {
        // UNLOCK audio; doesn't work immediately on iPhone
        this.sound.unlock();
        gameState.music = this.sound.add('theme', { loop: true });
        // if unlock worked, just play the sound
        if (!this.sound.locked) {
            gameState.music.play();
        }
        else { // if not, wait for the unlock event
            this.sound.once(Phaser.Sound.Events.UNLOCKED, () => {
                gameState.music.play();
            });
        }
    });
}
With the current config/code, this should work on Android, but not on iPhone.
If you remove disableWebAudio: true from the config in the game.js file, then it works on iPhone, but not on Android. Strange.
Currently these are the only two configurations in which I could get the audio to work; I couldn't find a single configuration that works on both Android and iPhone.
Update: I retested everything, and it seems to work with the SoundScene changes on Win 10, Android and iPhone, with disableWebAudio: true and using an mp3 file.
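One more option worth mentioning, since mp3-vs-ogg support was the starting point: Phaser 3's loader accepts an array of URLs for a single audio key and uses the first format the current browser can actually play. A minimal sketch (assuming both files exist side by side):

preload() {
    // Phaser tries the formats in order and loads the first playable one
    this.load.audio('theme', ['./audio/theme.mp3', './audio/theme.ogg']);
}

That way one build can serve devices that only handle one of the two formats.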
Today I upgraded to macOS Big Sur 11.0.1 and Safari 14, and my website (one-to-one video chat based on WebRTC) stopped working on Safari. After 10 seconds of a video call, the following console error appears: "A MediaStreamTrack ended due to a capture failure" and the other person can no longer see the video.
My code looks like this:
const userMedia = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
});
if (userMedia != null) {
    userMedia.getTracks().forEach((track) => {
        otherRtcPeer.addTrack(track, userMedia);
    });
}
Is it a Safari bug or an implementation issue? And how do I solve it?
After going through this guide, I made changes and resolved the issue:
Keep the stream object in React state.
When the video element renders/re-renders, clone the stream object and assign the clone to the video element's srcObject.
After capturing the picture, stop all media tracks in the stream.
This way the error mentioned above was overcome.
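A minimal sketch of those steps (videoRef and the function names are illustrative, not from the original code):

// Steps 1 + 2: on each (re-)render, hand a clone of the stream to the element
function attachStream(videoRef, stream) {
    videoRef.current.srcObject = stream.clone();
}

// Step 3: after the capture, release the camera/microphone completely
function stopStream(stream) {
    stream.getTracks().forEach((track) => track.stop());
}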
I was able to fix the issue on my end by styling a video element I'm using as a WebGL texture with display: block; opacity: 0 (instead of display: none).
Perhaps they removed the ability to play offscreen video textures on iOS 14/Big Sur.
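In other words (a sketch; videoEl is an illustrative name), keep the element rendered but invisible:

// display: none broke playback here; keep the element laid out but transparent
videoEl.style.display = 'block';
videoEl.style.opacity = '0';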
I'm trying to create a video conference web app. The problem is that when I disable my camera in the middle of the conference, it works, but my laptop's camera indicator light stays on, while on my web page the video shows a blank screen. Is that normal, or did I miss something?
Here is what I tried:
videoAction() {
    navigator.mediaDevices.getUserMedia({
        video: true,
        audio: true
    }).then(stream => {
        this.myStream = stream
    })
    this.myStream.getVideoTracks()[0].enabled = !(this.myStream.getVideoTracks()[0].enabled)
    this.mediaStatus.video = this.myStream.getVideoTracks()[0].enabled
}
There is also a stop() method which should do the trick in Chrome and Safari. Firefox should already mark the camera as unused by setting the enabled property.
this.myStream.getVideoTracks()[0].stop();
Firstly, MediaStreamTrack.enabled is a Boolean, so you can simply assign the value false.
To simplify your code, you might call:
var vidTrack = myStream.getVideoTracks();
vidTrack.forEach(track => track.enabled = false);
When MediaStreamTrack.enabled = false, the track passes empty frames to the stream, which is why black video is sent. The camera/source itself is not stopped; I believe the webcam light will turn off on Mac devices, but perhaps not on Windows etc.
.stop(), on the other hand, completely stops the track and tells the source it is no longer needed. If the source is only connected to this track, the source itself will stop completely. Calling .stop() will definitely turn off the camera and webcam light, but you won't be able to turn it back on in your stream instance (since its video track was destroyed). Therefore, completely turning off the camera is not what you want; just stick to .enabled = false to temporarily disable video and .enabled = true to turn it back on.
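A minimal toggle along those lines (a sketch, assuming myStream already holds the active MediaStream):

function toggleVideo(myStream) {
    // Flip `enabled` on every video track; the source stays open,
    // so video can be switched back on later
    myStream.getVideoTracks().forEach((track) => {
        track.enabled = !track.enabled;
    });
}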
We need to assign window.localStream = stream; inside the navigator.mediaDevices.getUserMedia callback. Then, to stop the webcam and turn the LED light off:
localStream.getVideoTracks()[0].stop();
video.srcObject = null; // if the stream was attached via srcObject (use video.src = '' for the older URL-based attachment)
I'm trying to get my Processing.js application working in web browsers. When I run it in Processing 3.5.4 (on Windows) everything's fine, but in the browser it draws everything while the sounds don't play (fonts don't load either, so I can't see any text). Do I need another JavaScript audio and font library?
That's how I import the sound library:
import processing.sound.*;
SoundFile file;
And that's how I play it (it does play in the desktop app, but doesn't play in browsers):
file = new SoundFile(this, "song.wav");
file.play();
What approach would be the easiest to solve this?
Try p5.js and p5.sound library and see the loadSound() example:
let mySound;
function preload() {
    soundFormats('mp3', 'ogg');
    mySound = loadSound('assets/doorbell');
}
function setup() {
    let cnv = createCanvas(100, 100);
    cnv.mousePressed(canvasPressed);
    background(220);
    text('tap here to play', 10, 20);
}
function canvasPressed() {
    // playing a sound file on a user gesture
    // is equivalent to `userStartAudio()`
    mySound.play();
}
Processing.js is no longer updated AFAIK.
Suppose you use the Web Audio API to play a pure tone:
ctx = new AudioContext();
src = ctx.createOscillator();
src.frequency.value = 261.63; // play middle C (frequency is an AudioParam)
src.connect(ctx.destination);
src.start();
But, later on you decide you want to stop the sound:
src.stop();
From this point on, src is now completely useless; if you try to start it again, you get:
src.start()
VM564:1 Uncaught DOMException: Failed to execute 'start' on 'AudioScheduledSourceNode': cannot call start more than once.
at <anonymous>:1:5
If you were making, say, a little online keyboard, you're constantly turning notes on and off. It seems really clunky to remove the old object from the audio node graph, create a brand-new object, and connect() it into the graph (and then discard the object later), when it would be simpler to just turn it on and off as needed.
Is there some important reason the Web Audio API does things like this? Or is there some cleaner way of restarting an audio source?
Use connect() and disconnect(). You can then change the values of any AudioNode to change the sound.
(The button is because AudioContext requires a user action to run in Snippet.)
play = () => {
    d.addEventListener('mouseover', () => src.connect(ctx.destination));
    d.addEventListener('mouseout', () => src.disconnect(ctx.destination));
    ctx = new AudioContext();
    src = ctx.createOscillator();
    src.frequency.value = 261.63; // play middle C
    src.start();
}
div {
    height: 32px;
    width: 32px;
    background-color: red;
}
div:hover {
    background-color: green;
}
<button onclick='play();this.disabled=true;'>play</button>
<div id='d'></div>
This is exactly how the web audio api works. Sound generator nodes like oscillator nodes and audio buffer source nodes are intended to be used once. Every time you want to play your oscillator, you have to create it and set it up, just like you said. I know it seems like a hassle, but you can abstract it into a play() method that handles those details for you, so you don't have to think about it every time you play an oscillator. Also, don't worry about the performance implications of creating so many nodes. The web audio api is intended to be used this way.
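For example, a throwaway-oscillator helper along the lines described above (a sketch; playNote and its parameters are illustrative, not part of any library):

// (Browsers require a user gesture before audio will actually start)
const ctx = new AudioContext();

function playNote(frequency, duration) {
    // Create a fresh oscillator per note; source nodes are single-use by design
    const osc = ctx.createOscillator();
    osc.frequency.value = frequency;
    osc.connect(ctx.destination);
    osc.start();
    // Schedule the stop on the audio clock; the node is simply discarded afterwards
    osc.stop(ctx.currentTime + duration);
}

playNote(261.63, 0.5); // middle C for half a second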
If you just want to make music on the internet, and you're not as interested in learning the ins and outs of the web audio api, you might be interested in using a library I wrote that makes things like this easier: https://github.com/rserota/wad
I am working on a 12-voice polyphonic synthesizer with 2 oscillators per voice.
I now never stop the oscillators; I disconnect them instead. You can do that with setTimeout: for the delay, take the longest release phase (of the 2) from the amp envelope for this set of oscillators, subtract AudioContext.currentTime, and multiply by 1000 (setTimeout works in milliseconds, Web Audio works in seconds).
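A sketch of that timing calculation (assuming releaseEndTime is the audio-clock time, in seconds, at which the longer of the two release phases ends):

// Milliseconds until the release phase is over, measured on the audio clock
const delayMs = (releaseEndTime - ctx.currentTime) * 1000;

setTimeout(() => {
    // Disconnect instead of stopping, so the oscillators stay reusable
    osc1.disconnect();
    osc2.disconnect();
}, delayMs);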
I'm running a three.js PhoneGap project on an iPhone 5s, which rotates the camera according to the compass heading readout. All works as it should, except that the compass readout freezes after 10 seconds. The app continues to work fine, but the function rotateCameraWithCompass isn't called anymore. There is also no call to compassError; the watchHeading method seems to have frozen.
A frequency of 30 ms seemed realistic, or is this technically out of bounds?
UPDATE: I lowered the frequency to 100 ms; it still freezes after about 10 seconds.
function rotateCameraWithCompass( heading ) {
    camera.rotation.y = -1 * Math.toRad( heading.magneticHeading );
}

function compassError( compassError ) { console.log( 'Compass error: ' + compassError.code ); }

var watchID = navigator.compass.watchHeading( rotateCameraWithCompass, compassError, { frequency: 30 } );
Answer posted to Google Groups:
https://groups.google.com/forum/#!topic/phonegap/OEZO1-0LMT4
Read the docs Luke!
iOS has quirks noted in the documentation:
Only one watchHeading can be in effect at one time in iOS. If a watchHeading uses a filter, calling getCurrentHeading or watchHeading uses the existing filter value to specify heading changes. Watching heading changes with a filter is more efficient than with time intervals.
http://docs.phonegap.com/en/4.0.0/cordova_plugins_pluginapis.md.html#Plugin%20APIs
http://plugins.cordova.io/#/package/org.apache.cordova.device-orientation
Bottom line: use the filter as described by Jan.
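Concretely, that means replacing the frequency option with a filter (a sketch; the filter value is in degrees of heading change, and 1 is an illustrative choice):

// Fire the callback whenever the heading changes by at least 1 degree,
// instead of polling on a fixed interval (more efficient on iOS)
var watchID = navigator.compass.watchHeading(
    rotateCameraWithCompass,
    compassError,
    { filter: 1 }
);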