I've been trying to play audio linked to a toast notification. I'm currently using toastr.js.
toastr.warning(eventData.type);
self.eventAudio[eventData.id] = {};
self.eventAudio[eventData.id].audio = new Audio();
self.eventAudio[eventData.id].audio.src = './images/someaudio.mp3';
self.eventAudio[eventData.id].audio.loop = true;
self.eventAudio[eventData.id].audio.play();
eventData has a unique ID in its id key: eventData.id.
What I wanted was for audio to play for a particular toastr notification, and for toastr.options.onHidden I've bound a function with the contents:
var self = this;
toastr.options.onHidden = function() {
self.eventAudio[eventData.id].audio.pause();
}
toastr.options.onclose = function() {
self.eventAudio[eventData.id].audio.pause();
}
toastr.options.onCloseClick = function() {
self.eventAudio[eventData.id].audio.pause();
}
The issue I'm facing is that the audio always loops and is never paused, not even after the toast notification is hidden. This happens when the toast notification has more than one instance, i.e. when I have 2 or more notifications.
What am I doing wrong?
Please help me out.
Thanks in advance.
This is difficult because the toastr.js library doesn't pass any identifiable event information to the onHidden/onClose/onCloseClick callbacks. If you look at the source code, it's calling the same options.onHidden() function for all toast events.
if (options.onHidden && response.state !== 'hidden') {
options.onHidden();
}
https://github.com/CodeSeven/toastr/blob/master/toastr.js
Right now what you're doing is overwriting the callback function for every new event. (If your last event.id is abc, every time any toast is closed it will try to pause the audio associated with abc.)
The easiest (and probably most user friendly) way to do this is to create a single Audio element that gets restarted every time a new toast appears.
var audio = new Audio();
audio.src = './audio/test.mp3';
audio.play();
toastr.options.onHidden = function() {
audio.pause();
}
toastr.options.onclose = function() {
audio.pause();
}
toastr.options.onCloseClick = function() {
audio.pause();
}
If you need it to keep playing continuously until all toasts are closed, you could keep a counter and only pause the audio when there are no more active toasts.
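A minimal sketch of that counter idea (assuming every toast that should have sound goes through one helper function; the names here are placeholders, not part of toastr):
var audio = new Audio();
audio.src = './audio/test.mp3';
audio.loop = true;
var activeToasts = 0;
function showToastWithSound(message, title) {
    // start the audio only when the first toast appears
    if (activeToasts === 0) {
        audio.currentTime = 0;
        audio.play();
    }
    activeToasts++;
    toastr.warning(message, title);
}
toastr.options.onHidden = function () {
    // pause only once the last toast has gone away
    activeToasts--;
    if (activeToasts <= 0) {
        activeToasts = 0;
        audio.pause();
    }
};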
I used jQuery to access the unique ID in eventData.id.
toastr.warning(), toastr.success(), toastr.error(), and toastr.info() take a message and a title for the notification as input:
var message = '<input type="hidden" id="eventId" value="'+ eventData.id +'"/>';
toastr.warning(message,eventData.type);
$('.toast-warning').on('mouseleave',function(){
var eventDataId = $(this).find('#eventId').val();
self.eventAudio[eventDataId].audio.pause();
self.eventAudio[eventDataId].audio.src = '';
});
I bound onHidden(), onclose(), and onCloseClick() with:
$('.toast-warning').trigger('mouseleave');
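Roughly, each binding is just a one-liner that fires that trigger (a sketch of what this looks like, not the verbatim code):
toastr.options.onHidden = function () {
    $('.toast-warning').trigger('mouseleave');
};
toastr.options.onCloseClick = function () {
    $('.toast-warning').trigger('mouseleave');
};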
Since the <input> has its type set to hidden, it is not visible in the toast notification, and I can access the element easily using $(this).find().
I used this to reach the answer: jQuery toastr onHidden function
Apologies in advance for any terminology mistakes; I'm a student and trying my hardest to be as clear as possible! And thanks in advance for any help!
I'm trying to use Azure Speech-To-Text services. I'd like the user to be able to press a start and stop button to record themselves and print out the transcription. My app will eventually be a React Frontend and Rails backend, but right now I am just trying to understand and work through the demo.
I'm confused by the documentation but was able to get things half working. However, right now it just continuously listens to the speaker and never stops.
I want to use stopContinuousRecognitionAsync() or recognizer.close() once a button is pressed, but I cannot seem to get it working. The farthest I've gotten is that the result is logged only once the stop button is pressed, but it continues to listen and print out results. I've also tried recognizer.close() -> recognizer = undefined, but to no avail. I am guessing that, due to the asynchronous behavior, it closes out the recognizer before logging a result.
The latest code I've tried is below. It starts listening on the start click and prints speech on stop, but it continues to listen and log results.
// subscription key and region for speech services.
var subscriptionKey, serviceRegion;
var authorizationToken;
var SpeechSDK;
var recognizer;
document.addEventListener("DOMContentLoaded", function () {
startRecognizeOnceAsyncButton = document.getElementById("startRecognizeOnceAsyncButton");
subscriptionKey = document.getElementById("subscriptionKey");
serviceRegion = document.getElementById("serviceRegion");
phraseDiv = document.getElementById("phraseDiv");
startRecognizeOnceAsyncButton.addEventListener("click", function () {
startRecognizeOnceAsyncButton.disabled = true;
phraseDiv.innerHTML = "";
// if we got an authorization token, use the token. Otherwise use the provided subscription key
var speechConfig;
if (authorizationToken) {
speechConfig = SpeechSDK.SpeechConfig.fromAuthorizationToken(authorizationToken, serviceRegion.value);
} else {
speechConfig = SpeechSDK.SpeechConfig.fromSubscription("API_KEY", serviceRegion.value);
}
speechConfig.speechRecognitionLanguage = "en-US";
var audioConfig = SpeechSDK.AudioConfig.fromDefaultMicrophoneInput();
recognizer = new SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);
recognizer.startContinuousRecognitionAsync(function () {}, function (err) {
console.trace("err - " + err);});
stopButton = document.querySelector(".stopButton")
stopButton.addEventListener("click", () =>{
console.log("clicked")
recognizer.recognized = function(s,e) {
console.log("recognized text", e.result.text)
}
})
});
});
Assuming the recognizer is conjured correctly outside of the code, there are a few things to change to get the result you want.
The events should be hooked to the recognizer before calling startContinuousRecognitionAsync().
In the stop button handler, call stop. I'd also hook the stop event outside of the start button click handler.
Quick typed changes, didn't compile. :-)
startRecognizeOnceAsyncButton.addEventListener("click", function () {
startRecognizeOnceAsyncButton.disabled = true;
//div where text is being shown
phraseDiv.innerHTML = "";
// The event recognized signals that a final recognition result is received.
recognizer.recognized = function(s,e) {
console.log("recognized text", e.result.text)
}
//start listening to speaker
recognizer.startContinuousRecognitionAsync(function () {}, function (err) {
console.trace("err - " + err);});
});
stopButton = document.querySelector(".stopButton")
stopButton.addEventListener("click", () =>{
console.log("clicked");
recognizer.stopContinuousRecognitionAsync();
});
Can anybody tell me how to fix this issue I have with Picture in Picture? I added it for an HTML5 player and took the code from the Apple website, but it's not working for me. It gives me an error saying:
TypeError: null is not an object (evaluating 'video.webkitSupportsPresentationMode')
(anonymous function)-jquery-3.min.js:2:31697
The code:
var video = document.getElementById('video');
var PiP = document.getElementById('picture-in-picture');
// picture-in-picture
if (video.webkitSupportsPresentationMode && typeof video.webkitSetPresentationMode === "function") {
// Toggle PiP when the user clicks the button.
PiP.addEventListener("click", function(event) {
video.webkitSetPresentationMode(video.webkitPresentationMode === "picture-in-picture" ? "inline" : "picture-in-picture");
});
} else {
PiP.disabled = true;
}
I wasn't sure where to place this code. I just put it inside a JavaScript script tag in the footer.
Updated:
I replaced:
var video = document.getElementById('video');
var PiP = document.getElementById('picture-in-picture');
with just:
var video = $( "video" );
var PiP = $( "#picture-in-picture" );
I put these inside the jQuery ready handler, and the error is gone, but it still doesn't work. I put an alert in each of the if and else branches, and it looks like it doesn't even recognize the function.
if (video.webkitSupportsPresentationMode && typeof video.webkitSetPresentationMode === "function") {
// Toggle PiP when the user clicks the button.
PiP.addEventListener("click", function(event) {
video.webkitSetPresentationMode(video.webkitPresentationMode === "picture-in-picture" ? "inline" : "picture-in-picture");
});
alert("works")
} else {
PiP.disabled = true;
alert("no works") //<--- This is the alert I get
}
I have never tried PiP until now. Could it be that Apple removed this function in the new Safari? It seems like any HTML5 video I go to already has this option next to the fullscreen control, but that won't work for a custom HTML5 player, which is why I want to add this function to a button.
Why the error?
document.getElementById returns a DOM element, while the jQuery object (created by the $ method) is a wrapper around a DOM element or a set of DOM elements.
This means that if you want to use jQuery you need to change:
var video = $( "video" );
to
var video = $( "video" )[0];
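Note that the PiP button has the same problem: addEventListener and the disabled property live on the DOM element, not on the jQuery wrapper. A sketch of the whole block with both elements unwrapped (assuming the page really has a <video> element and a #picture-in-picture button):
$(function () {
    var video = $( "video" )[0];
    var PiP = $( "#picture-in-picture" )[0];

    if (video && video.webkitSupportsPresentationMode && typeof video.webkitSetPresentationMode === "function") {
        // Toggle PiP when the user clicks the button.
        PiP.addEventListener("click", function () {
            video.webkitSetPresentationMode(video.webkitPresentationMode === "picture-in-picture" ? "inline" : "picture-in-picture");
        });
    } else {
        PiP.disabled = true;
    }
});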
Strange issue
<button ng-show="scene.audio" class="button icon {{scene.audioIcon}}"
ng-click="playAudio(scene)"/>
$scope.playAudio = function ($scene){
if($scene.audioIcon == "ion-ios-play-outline") {
$scene.audioIcon = "ion-ios-pause-outline";
media = new Media($scene.audio.src,function(){
$scene.audioIcon = "ion-ios-play-outline"; // <-- this is the line whose change doesn't show up in the UI
media.stop();
media.release();
},null);
media.scene = $scene;
media.play();
}
else if(media){
media.stop();
media.release();
$scene.audioIcon = "ion-ios-play-outline";
}
};
I can update $scene.audioIcon in the two click branches, which updates the button in the UI. However, in the onComplete callback of new Media (the function that is called when the audio finishes), $scene.audioIcon changes but the button doesn't get updated in the UI.
I assume that's because it happens later?
Is there a way I can trigger an update of the button?
AngularJS doesn't know that it should check for changes because the completion event is called from native code. You should wrap your code in $scope.$apply().
Something like this:
media = new Media($scene.audio.src,function() {
$scope.$apply(function() {
$scene.audioIcon = "ion-ios-play-outline";
media.stop();
media.release();
});
},null);
I am building a JavaScript game, and I want to create background music from sound-file snippets: short MP3 files played as one continuous track. I have tried binding an "ended" event handler on the audio element, but this causes a delay between audio fragments.
To solve this I made a hacky solution that still does not work: switching the audio 1 second before it finishes.
Ebuc.manageAudio = function(){
var listener = function (event) {
if (this.currentTime > (this.duration - 1) && Ebuc.bgnext) {
Ebuc.manageAudio();
console.log("aduio");
Ebuc.bgnext = false;
}
if(this.currentTime < 2){
Ebuc.bgnext = true;
console.log("reset");
}
console.log(event);
console.log("listener active")
};
var color = Level.current.color;
if(Ebuc.bgsong == null) {
Ebuc.bgsong = new Audio('assets/sound/' + Resources.audioSetList[color].getcurrentsong());
Ebuc.bgsong.addEventListener('timeupdate', listener, true);
}
else{
Ebuc.bgsong = new Audio('assets/sound/' + Resources.audioSetList[color].getcurrentsong());
}
Ebuc.bgsong.play();
Resources.audioSetList[color].next();
};
This sample works once; when it is time to switch from fragment 2 to fragment 3, the loop stops. Console logging in the event listener shows four logs before it stops.
Q1: Why does this event listener suddenly disappear?
Q2: Is there a non-hack solution for chaining these audio fragments?
I thank you in advance.
You're going to have more than just pausing issues trying to rapidly switch between two short audio clips; you'll probably want to crossfade between the two audio tracks quickly as well to prevent any popping, artifacts, etc.
Here's an example of crossfading using Howler, taken from Howler's GitHub issues. You could probably use this example and keep a queue of loaded instances to transition to. I hope that helps.
//you'll probably want this crossfade duration to be shorter.
var crossfadeDuration = 5000,
volume = 0.7;
var instance1, instance2, soundDuration;
// Singleton helper to build similar instances
var createHowlerInstance = function (urls, onload) {
return new Howl({
urls: urls,
loop: false,
volume: 0,
onload: onload
});
};
// Create "slave" instance. This instance is meant
// to be played after the first one is done.
instance2 = createHowlerInstance(['file2.mp3']);
// Create "master" instance. The onload function passed to
// the singleton creator will coordinate the crossfaded loop
instance1 = createHowlerInstance(['file1.mp3'], function(){
// Get the sound duration in ms from the Howler engine
soundDuration = Math.floor(instance1._duration * 1000);
(function crossfadedLoop(enteringInstance, leavingInstance){
// Fade in entering instance
enteringInstance.pos(0).play().fade(0, volume, crossfadeDuration);
// Wait for the audio end to fade out entering instance
// while fading in the leaving instance
setTimeout(function(){
enteringInstance.fade(volume, 0, crossfadeDuration);
crossfadedLoop(leavingInstance, enteringInstance);
}, soundDuration - crossfadeDuration);
})(instance1, instance2);
});
Using the idea of setting a timeout from pantalohnes' answer, I have created the following code to close the gap:
Ebuc.manageAudio = function(){
var color = Level.current.color;
Ebuc.bgsong = new Audio('assets/sound/' + Resources.audioSetList[color].getcurrentsong());
Ebuc.bgsong.addEventListener("loadedmetadata",function(){
setTimeout(Ebuc.manageAudio, (Ebuc.bgsong.duration * 1000) - 50);
Ebuc.bgsong.play();
console.log(Ebuc.bgsong.duration);
Resources.audioSetList[color].next();
});
};
The 50-millisecond timeout bridges the gap between the sequenced files exactly.
Answering your question (although I see you found another solution), I think I found your bug:
The second time you enter Ebuc.manageAudio(), Ebuc.bgsong is already set, so you just create a new Audio (Ebuc.bgsong = new Audio(...)) without attaching the listener to it, which means you're never notified of any 'timeupdate' events while the second audio file plays.
You should also remove the listener from the previously playing audio.
So, if all else is OK, I think this should fix it:
Ebuc.manageAudio = function(){
var listener = function (event) {
if (this.currentTime > (this.duration - 1) && Ebuc.bgnext) {
Ebuc.manageAudio();
console.log("aduio");
Ebuc.bgnext = false;
}
if(this.currentTime < 2){
Ebuc.bgnext = true;
console.log("reset");
}
console.log(event);
console.log("listener active")
};
var color = Level.current.color;
if(Ebuc.bgsong != null) {
Ebuc.bgsong.removeEventListener('timeupdate', listener, true);
}
Ebuc.bgsong = new Audio('assets/sound/' + Resources.audioSetList[color].getcurrentsong());
Ebuc.bgsong.addEventListener('timeupdate', listener, true);
Ebuc.bgsong.play();
Resources.audioSetList[color].next();
};
More than that, I think that if you properly remove the listener from the previously playing audio, you won't need that bgnext hack at all:
var listener = function (event) {
if (this.currentTime > (this.duration - 1)) {
Ebuc.manageAudio();
console.log("aduio");
}
console.log(event);
console.log("listener active")
};
Ebuc.manageAudio = function () {
var color = Level.current.color;
if (Ebuc.bgsong != null) {
Ebuc.bgsong.removeEventListener('timeupdate', listener, true);
}
Ebuc.bgsong = new Audio('assets/sound/' + Resources.audioSetList[color].getcurrentsong());
Ebuc.bgsong.addEventListener('timeupdate', listener, true);
Ebuc.bgsong.play();
Resources.audioSetList[color].next();
};
Let me know if that worked :)
I've seen different web apps like Playmoss, Whyd, and Songdrop that, I believe, HAVE to utilize the SoundCloud Embedded Widget in order to play multiple tracks in succession that are not part of a set/(playlist). Currently I am having issues reproducing this functionality with the following library, so I decided to attempt to write my own:
https://github.com/eric-robinson/SCLPlayer
I am very new to writing JavaScript, but my code below will load a first track and play it once it hits the "ready" bind. Once it hits the "finish" bind, it jumps to the loadNextTrack() function and loads the next track's URL into the src of the widget's iframe. After that, it never hits the original "ready" bind, which would begin playback.
So to clear things up: playback doesn't begin for the second track.
<script type = "text/javascript">
var SCLPlayer = {
isPlayerLoaded : false,
isPlayerFullLoaded : false,
needsFirstTrackSkip : true,
isPaused: true,
scPlayer : function() {
widgetContainer = document.getElementById('sc');
widget = SC.Widget(widgetContainer);
return widget;
},
loadNextTrack : function() {
var ifr = document.getElementById('sc');
ifr.src = 'http://w.soundcloud.com/player/?url=https://api.soundcloud.com/tracks/231758952';
console.log ('Loading Next Track');
SCLPlayer.scPlayer().bind(SC.Widget.Events.READY, function() {
console.log ('Player is Ready, next Track');
SCLPlayer.scPlayer().play();
});
}
};
$( '#sc' ).ready(function() {
SCLPlayer.scPlayer().bind(SC.Widget.Events.READY, function() {
SCLPlayer.isPlayerLoaded = true;
//window.location = 'sclplayer://didLoad';
console.log ('Player is Ready');
SCLPlayer.scPlayer().play();
});
SCLPlayer.scPlayer().bind(SC.Widget.Events.PLAY, function() {
SCLPlayer.isPaused = false;
//window.location = 'sclplayer://didPlay';
console.log ('Player did Play');
});
SCLPlayer.scPlayer().bind(SC.Widget.Events.PAUSE, function() {
SCLPlayer.isPaused = true;
//window.location = 'sclplayer://didPause';
console.log ('Player did Pause');
});
SCLPlayer.scPlayer().bind(SC.Widget.Events.FINISH, function() {
SCLPlayer.isPaused = true;
//window.location = 'sclplayer://didFinish';
console.log ('Player did Finish');
SCLPlayer.loadNextTrack();
});
});
</script>
</head>
<body>
<iframe id = "sc" width="100%" height="100%" scrolling="no" frameborder="no" src="http://w.soundcloud.com/player/?url=https://api.soundcloud.com/tracks/226183306"></iframe>
</body>
The whole point of me writing this JavaScript is so that I can use a Swift-to-JavaScript bridge in my iOS app to control the loading of tracks into the embedded player. For some reason, over a slower connection the next track doesn't always load into the player using the "bridge". I hope to provide the nextTrackURL to the JavaScript side of things before the current track finishes, so that the bridge conveys nothing and the JavaScript handles loading the new track solely on its own.
I think you want to use the load function to specify the URL for the new track.
From the SoundCloud Widget API docs:
load(url, options) — reloads the iframe element with a new widget specified by the url. All previously added event listeners will continue working. options is an object which allows you to define all possible widget parameters as well as a callback function which will be executed as soon as new widget is ready. See below for detailed list of widget parameters.
var url = "https://api.soundcloud.com/";
var options = {};
// if a track
url += "tracks/";
// if a playlist
url += "playlists/"
// append the id of the track / playlist to the url
url += id;
// set any options you want for the player
options.show_artwork = false;
options.liking = false;
options.auto_play = true;
widget.load(url, options, OPTIONAL_CALLBACK_FUNCTION);
Edited to show binding...
The bind code is called once, after the widget is initially loaded.
The ready event is only fired once, when the widget is initially loaded; it is not fired for each subsequent call to load().
try {
widget.bind(SC.Widget.Events.FINISH,
function finishedPlaying() {
// your code / function call
}
);
widget.bind(SC.Widget.Events.PAUSE,
function paused() {
// your code / function call
}
);
widget.bind(SC.Widget.Events.PLAY,
function playing() {
// your code / function call
widget.getCurrentSound(function scCurrentSound(sound) {
// this also binds getCurrent sound which is called
// each time a new sound is loaded
});
}
);
widget.bind(SC.Widget.Events.PLAY_PROGRESS,
function position(pos) {
// your code / function call
}
);
widget.bind(SC.Widget.Events.SEEK,
function seek(pos) {
// your code / function call
}
);
widget.bind(SC.Widget.Events.READY,
function ready() {
// your code / function call
}
);
} catch(e) {
// exception handler code
}
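Putting the two pieces together, the FINISH handler can call load() with auto_play so the next track starts without rebinding READY (a rough sketch; nextTrackUrl() stands in for however you pick the next track's URL):
widget.bind(SC.Widget.Events.FINISH, function () {
    widget.load(nextTrackUrl(), { auto_play: true });
});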