https://github.com/BuildFire/sdk/wiki/BuildFire-Audio-Player-Service
In index.html:
function Player() {} //make it global
Player.instance = buildfire.services.media.audioPlayer;
In the triggering function inside player.js:
let track = {};
track.title = "Given Title";
track.url = <insert url here>;
track.image = <insert img.jpg url here>;
Player.instance.play(track);
When initializing the Media Player UI in player.js:
Player.instance.onEvent(function (e) {
    console.log(e.event);
    // other code goes here
});
Right now, on the offline tester, my media player works perfectly: it updates the time and applies the changes when I pause or play. Unfortunately, when I move it to the online tester and the actual app, neither my phone nor my client's phone receives the "timeUpdate" event. And I can confirm that, with the code above, I DO get the "play" and "pause" events, but not the "timeUpdate" event.
Can anyone confirm if this happens for them too, or if there are any fixes?
I think the problem you're having is not with the code, but with the server where the audio is hosted. Try hosting the audio in an S3 bucket or other cloud storage instead of bundling it into the app (sorry, I can't comment yet).
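For example, a minimal sketch of pointing the track at remotely hosted audio instead of a bundled file (the bucket URLs below are purely hypothetical):

let track = {};
track.title = "Given Title";
track.url = "https://my-bucket.s3.amazonaws.com/audio/given-title.mp3"; // hypothetical remote URL, not a path inside the app bundle
track.image = "https://my-bucket.s3.amazonaws.com/images/cover.jpg"; // hypothetical image URL
Player.instance.play(track);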
Related
A simple usage of the Web Audio API:
var UnprefixedAudioContext = window.AudioContext || window.webkitAudioContext;
var context;
var volumeNode;
var soundBuffer;

context = new UnprefixedAudioContext();
volumeNode = context.createGain();
volumeNode.connect(context.destination);
volumeNode.gain.value = 1;

context.decodeAudioData(base64ToArrayBuffer(getTapWarm()), function (decodedAudioData) {
    soundBuffer = decodedAudioData;
});

function play(buffer) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(volumeNode);
    (source.start || source.noteOn).call(source, 0);
}

function playClick() {
    play(soundBuffer);
}
inside a UIWebView works fine (plays the sound); but when you switch to the Music app and play a song, and then come back to the app with the UIWebView, the song stops playing.
The same code inside Safari doesn't have this problem.
Is there a workaround to avoid this behavior?
Here's the full fiddle:
http://jsfiddle.net/gabrielmaldi/4Lvdyhpx/
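(The snippet assumes base64ToArrayBuffer() and getTapWarm() helpers from the fiddle; a minimal sketch of the former, in case it helps reproduce the issue:)

// Decode a base64-encoded sound into an ArrayBuffer for decodeAudioData().
function base64ToArrayBuffer(base64) {
    var binary = atob(base64);
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    return bytes.buffer;
}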
Are you on iOS? This sounds like an audio session category issue to me. iOS apps use audio session categories to define how their audio interacts with audio from other apps. From Apple's documentation:
Each audio session category specifies a particular pattern of “yes” and “no” for each of the following behaviors, as detailed in Table B-1:
Interrupts non-mixable apps audio: If yes, non-mixable apps will be interrupted when your app activates its audio session.
Silenced by the Silent switch: If yes, your audio is silenced when the user moves the Silent switch to silent. (On iPhone, this switch is called the Ring/Silent switch.)
Supports audio input: If yes, app audio input (recording), is allowed.
Supports audio output: If yes, app audio output (playback), is allowed.
Looks like the default category silences audio from other apps:
AVAudioSessionCategorySoloAmbient—(Default) Playback only. Silences audio when the user switches the Ring/Silent switch to the “silent” position and when the screen locks. This category differs from the AVAudioSessionCategoryAmbient category only in that it interrupts other audio.
The key here is in the last sentence: "it interrupts other audio".
There are a number of other categories you can use depending on whether or not you want your audio silenced when the screen is locked, etc. AVAudioSessionCategoryAmbient does not silence audio.
Give this a try in the Objective-C portion of your app:
NSError *setCategoryError = nil;
BOOL success = [[AVAudioSession sharedInstance]
setCategory: AVAudioSessionCategoryAmbient
error: &setCategoryError];
if (!success) { /* handle the error in setCategoryError */ }
I am using Monaca.mobi to build a hybrid app. When I build the app for iOS, everything is fine; however, when I build it for an Android device (Nexus 7), no audio comes through. In the Monaca debugger, however, the audio works fine. Is there something about Android devices that I am not aware of, maybe some permission the app needs?
Sound is played through an AngularJS function called on certain button clicks. I know this code is correct, but I thought I would share it anyway:
function DontAsk($scope) {
    $scope.play = function () {
        var audio = new Audio();
        audio.src = 'sounds/DontEventAsk.mp3';
        audio.play();
    };
}
Thanks for any insight.
Your code above only works on iOS. On Android, the path to your local audio file is not recognized. The following code will work for both OSes; I've already tested it with the built app too.
$scope.play = function () {
    var os = navigator.platform;
    if (os == 'iPhone') {
        var url = "sounds/DontEventAsk.mp3";
    } else {
        var url = getPhoneGapPath() + "sounds/DontEventAsk.mp3";
    }
    var my_media = new Media(url,
        // success callback
        function () {
            console.log("playAudio(): Audio Success");
        },
        // error callback
        function (err) {
            console.log("playAudio(): Audio Error: " + JSON.stringify(err));
        });
    // Play audio
    my_media.play();
};
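The answer above assumes a getPhoneGapPath() helper that resolves the app's local www directory; a minimal sketch of one possible implementation (assuming the page is loaded from the app bundle, e.g. file:///android_asset/www/index.html):

function getPhoneGapPath() {
    // Strip the page name from the current URL, leaving the base directory
    // with a trailing slash, e.g. "file:///android_asset/www/".
    var path = window.location.href;
    return path.substring(0, path.lastIndexOf('/') + 1);
}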
The big question here is which browser the Monaca.mobi app uses internally. The default Android browser is notorious for poor support of newer HTML5 features such as the Audio element. You might be better off setting some kind of flag that the app can watch, and then having the app play the sound natively instead of relying on the browser.
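If it helps, here is a minimal sketch (purely hypothetical, not a Monaca API) of feature-detecting HTML5 audio support before deciding whether to fall back to a native plugin:

function canPlayMp3() {
    // canPlayType returns "", "maybe", or "probably"; anything non-empty
    // means the WebView at least claims MP3 support.
    var a = document.createElement('audio');
    return !!(a.canPlayType && a.canPlayType('audio/mpeg') !== '');
}

if (canPlayMp3()) {
    new Audio('sounds/DontEventAsk.mp3').play();
} else {
    // Fall back to a native mechanism, e.g. the Cordova Media plugin shown above.
}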
I'm developing a Firefox extension, and I'd like it to play a notification sound when an event occurs. However, despite following the instructions from "Play audio from firefox extension's data directory", the sound doesn't play. It's stored in my data directory. Here is my code:
var { data } = require("sdk/self"); // needed for data.url(); not shown in the original snippet
var pageworker = require("sdk/page-worker");
let { setTimeout } = require("sdk/timers"); // setTimeout is not enabled by default in ff extensions

function playSound(sound) {
    console.log("playing sound: " + sound);
    var soundPlayer = pageworker.Page({
        contentURL: data.url("blank.html"),
        contentScript: "new Audio('notification-1.mp3').play();console.log('the audio did run')",
        // This is where it should play, but it doesn't, even if I remove the console.log.
        // The script does work if I type it into the javascript console, but replace the file name with a file:/// URL.
    });
    setTimeout(function () { soundPlayer.destroy(); }, 5000); // destroy the sound player after 5 seconds
}
Both console.log calls fire, but no audio ever plays.
This happens both in the XPI and when running with cfx.
As noted in the comments, try using an absolute URL in the contentScript string:
"new Audio("+data.url("notification-1.mp3")+").play();"
In my small HTML5 web-app, I want to play sounds in response to user actions. When the user clicks a button, in the onclick handler I play a sound like this:
url = "assets/sounds/buzz" + (this.canPlayMP3 ? ".mp3" : ".ogg");
sound = new Audio(url);
sound.load();
sound.play();
This works great on Firefox. Unfortunately, on an iPad (iPad 2 running iOS 5.1.1), I get a 2-second delay before the sound is played. This happens every time I play the sound sample, not just the first time.
The MP3 file is 9KB long. The iPad is connected to the network using exactly the same Wifi connection as the computer running Firefox.
How can I figure out what's going on?
You might want to create a single instance of the audio element for each sound:
var Sounds = {
    cat: new Audio('/sounds/meow.ogg'),
    bird: new Audio('/sounds/tweet.ogg')
};
Then you can play the same element over and over again:
function playSound(name) {
    Sounds[name].currentTime = 0;
    Sounds[name].play();
}
playSound('cat');
If iOS destroys your Audio objects, you could cache sound files in the cache manifest:
CACHE MANIFEST
# 2012-08-09:v1.3
NETWORK:
*
CACHE:
/sounds/meow.ogg
/sounds/tweet.ogg
How about moving the loading outside the handler, e.g. making the Audio object global and preloading it? Then inside the handler, only call the play method.
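A minimal sketch of that idea, reusing the question's URL logic (canPlayMP3 and button stand in for whatever the app already has):

// Create and start loading the Audio object once, up front.
var buzz = new Audio("assets/sounds/buzz" + (canPlayMP3 ? ".mp3" : ".ogg"));
buzz.load();

// In the click handler, only rewind and play.
button.onclick = function () {
    buzz.currentTime = 0;
    buzz.play();
};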
I have searched for a lot of demos and examples of getUserMedia, but most only capture the camera, not the microphone.
So I downloaded some examples and tried them on my own computer. Camera capture works, but when I changed
navigator.webkitGetUserMedia({video : true},gotStream);
to
navigator.webkitGetUserMedia({audio : true},gotStream);
the browser asks me to allow microphone access first, and then it fails at
document.getElementById("audio").src = window.webkitURL.createObjectURL(stream);
The message is:
GET blob:http%3A//localhost/a5077b7e-097a-4281-b444-8c1d3e327eb4 404 (Not Found)
This is my code: getUserMedia_simple_audio_test
Did I do something wrong? Or does getUserMedia only work with the camera for now?
It is currently not available in Google Chrome. See Issue 112367.
You can see in the demo that it will always throw an error saying
GET blob:http%3A//whatever.it.is/b0058260-9579-419b-b409-18024ef7c6da 404 (Not Found)
You also can't listen to the microphone with
{
    video: true,
    audio: true
}
It is currently supported in Chrome Canary. You need to type about:flags into the address bar then enable Web Audio Input.
The following code connects the audio input to the speakers. WATCH OUT FOR THE FEEDBACK!
http://jsfiddle.net/2mLtM/
<script>
// this is to store a reference to the input so we can kill it later
var liveSource;

// creates an audiocontext and hooks up the audio input
function connectAudioInToSpeakers() {
    var context = new webkitAudioContext();
    navigator.webkitGetUserMedia({audio: true}, function (stream) {
        console.log("Connected live audio input");
        liveSource = context.createMediaStreamSource(stream);
        liveSource.connect(context.destination);
    });
}

// disconnects the audio input
function makeItStop() {
    console.log("killing audio!");
    liveSource.disconnect();
}

// run this when the page loads
connectAudioInToSpeakers();
</script>

<input type="button" value="please make it stop!" onclick="makeItStop()"/>
It's working; you just need to add a toString parameter after audio: true.
Check this article - link