jPlayer ends tracks 2-8% (a few seconds) too early?

I'm not sure what this could be... it's kind of hard to debug.
Basically, when using jPlayer, each track ends a few seconds too early (MP3 format only).
I'm using an S3/CloudFront CDN for distribution, but I don't think that has anything to do with it (unless there's some weird header issue that could create symptoms like this). I've tried about five different MP3s so far, all to the same effect.
The .progress-bar doesn't reach 100% either; it stops at about 95% and then jumps to the next playlist item.
var fnmApp = (function() {
    var player = function() {
        var options = {
            swfPath: '<%= asset_path 'Jplayer.swf' %>',
            supplied: 'mp3',
            solution: 'html,flash',
            wmode: 'transparent',
            smoothPlayBar: false
        };
        var fnmPlaylist = new jPlayerPlaylist({
            jPlayer: '#fnmp',
            cssSelectorAncestor: '#fnmp-container'
        }, mixtapePlaylist, options);
        $('.fnmp-container .jp-gui a').click(function(e) {
            e.preventDefault();
        });
    };
    return {
        player: player
    };
})();

Streaming MP3 files over HTTP is a bit problematic because it typically isn't possible to know how long the file is (in time or samples) until it has been completely downloaded and the frames counted. Most players get around this by estimating the length, then either updating that estimate as playback continues or simply playing past the estimated end, should there still be data left.
It sounds like the original estimated length is being used as the playback length. This is likely a bug in whatever is playing back the audio, or in the codec it is using. With jPlayer, playback could be going through either Flash or the browser via HTML5. Since forcing Flash over HTML5 works in your case, I believe this is a bug in the build of Chrome that you are using. Unfortunately, there is no direct way to fix the problem, since it is out of your control; you can only work around it.
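For reference, a minimal sketch of that workaround, assuming the same options object as in the question: listing flash first in jPlayer's solution option makes jPlayer try Flash before falling back to HTML5.

var options = {
    swfPath: '<%= asset_path 'Jplayer.swf' %>',
    supplied: 'mp3',
    solution: 'flash,html', // try Flash first; fall back to HTML5
    wmode: 'transparent',
    smoothPlayBar: false
};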

Related

Asynchronous javascript in a synchronous function when combining mediaStreams from getUserMedia and getDisplayMedia?

My team is adapting the sipml5 library to create an HTML5 softphone for use in our organization. The full repository is here: https://github.com/L1kMakes/sipml5-ng . We have the code working well; audio and video calls work flawlessly. In the original code we forked from (which was from around 2012), screen sharing was accomplished with a browser plugin, but HTML5 and WebRTC have since changed to allow this to be done with just JavaScript.
I am having difficulty adapting the code to accommodate this. I am starting with the code on line 828 here: https://github.com/L1kMakes/sipml5-ng/blob/master/src/tinyMEDIA/src/tmedia_session_jsep.js . This works, though without audio, which makes sense: the only possible audio stream from a screen share is the screen audio, not the mic audio. I am attempting to initialize an audio stream from getUserMedia, grab a video stream from getDisplayMedia, and present them to the client as a single MediaStream. Here's my adapted code:
if (this.e_type == tmedia_type_e.SCREEN_SHARE) {
    // Plugin-less screen share using WebRTC requires "getDisplayMedia" instead of "getUserMedia".
    // Because of this, audio constraints become limited, and we have to use async code to deal
    // with the promise for the media stream. This is a change since Chrome 71. We use the
    // promise's .then to request a second media stream, then attach its audio track to the
    // video track of our screen-share stream, enabling plugin-less screen sharing with audio.
    let o_stream = null;
    let o_streamAudioTrack = null;
    let o_streamVideoTrack = null;
    try {
        navigator.mediaDevices.getDisplayMedia({
            audio: false,
            video: !!(this.e_type.i_id & tmedia_type_e.VIDEO.i_id) ? o_video_constraints : false
        }).then(o_streamVideo => {
            o_streamVideoTrack = o_streamVideo.getVideoTracks()[0];
            navigator.mediaDevices.getUserMedia({
                audio: o_audio_constraints,
                video: false
            }).then(o_streamAudio => {
                o_streamAudioTrack = o_streamAudio.getAudioTracks()[0];
                o_stream = new MediaStream([o_streamVideoTrack, o_streamAudioTrack]);
                tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
            });
        });
    } catch (s_error) {
        tmedia_session_jsep01.onGetUserMediaError(s_error, This);
    }
} else {
    try {
        navigator.mediaDevices.getUserMedia({
            audio: (this.e_type == tmedia_type_e.SCREEN_SHARE) ? false : !!(this.e_type.i_id & tmedia_type_e.AUDIO.i_id) ? o_audio_constraints : false,
            video: !!(this.e_type.i_id & tmedia_type_e.VIDEO.i_id) ? o_video_constraints : false // "SCREEN_SHARE" contains the "VIDEO" flag -> (VIDEO & SCREEN_SHARE) = VIDEO
        }).then(o_stream => {
            tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
        });
    } catch (s_error) {
        tmedia_session_jsep01.onGetUserMediaError(s_error, This);
    }
}
My understanding is that o_stream should represent the resolved MediaStream tracks, not a promise, when doing a screen share. On the other end we are using the client "MicroSIP." When I make a video call, I get my video preview locally in our web app; when the call is answered, the MicroSIP client shows a green square for a second, then resolves to my video. When I make a screen-share call, my local web app shows the local preview of the screen share, but upon answering the call the MicroSIP client just gets a green square and never shows the actual screen share.
The video constraints for both are the same. If I add debugging output to describe what is actually in the media streams, they appear identical as far as I can tell. I made a test video call and a test screen-share call, captured debug logs from each, and held them side by side in Notepad++. All appears to be identical save for the explicit debug output describing the traversal down the permission-request tree with getUserMedia vs. getDisplayMedia. I can't really post the debug logs here, as cleaning out my organization's information would leave them pretty barren. Save for the extra debug output on the getDisplayMedia call before getUserMedia, timestamps, and unique IDs related to individual calls, the log files are identical.
I am wondering if the media streams are not resolving from their promises before the .then completes, but asynchronous JavaScript and promises are still a bit over my head. I don't believe I should convert this function to async, but I have nothing else to debug here; the MediaStream is working, as I can see it locally, but I'm stumped on what is going on with the remote send.
The solution was... nothing; the code was fine. It turned out the recipient SIP client we were using had an issue where it simply aborts if it receives video larger than 640x480.
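For anyone hitting the same symptom who can't fix the receiving client, a hypothetical workaround (my addition, not part of the original fix) is to cap the capture resolution with standard getDisplayMedia constraints:

// Sketch only: limit the screen capture to 640x480 so a client that
// aborts on larger frames can still render the stream.
navigator.mediaDevices.getDisplayMedia({
    audio: false,
    video: { width: { max: 640 }, height: { max: 480 } }
}).then(o_streamVideo => {
    // ...combine with the getUserMedia audio track exactly as above...
});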

How to set the currentTime in HTML5 audio object when audio file is online?

I have a JavaScript audio player with skip forward/back 10 second buttons. I do this by setting the currentTime of my audio element:
function Player(skipTime)
{
    this.skipTime = skipTime;
    this.waitLoad = false;
    // initialise main narration audio
    this.narration = new Audio(getFileName(dynamicNarration));
    this.narration.preload = "auto";
    this.narration.addEventListener('canplaythrough', () => { this.loaded(); });
    this.narration.addEventListener('timeupdate', () => { this.seek(); });
    this.narration.addEventListener('ended', () => { this.ended(); });
    this.narration.addEventListener('waiting', () => { this.audioWaiting(); });
    this.narration.addEventListener('playing', () => { this.loaded(); });
}
Player.prototype = {
    rew: function rew()
    {
        if (!this.waitLoad) {
            this.skip(-this.skipTime);
        }
    },
    ffw: function ffw()
    {
        if (!this.waitLoad) {
            this.skip(this.skipTime);
        }
    },
    skip: function skip(amount)
    {
        const curTime = this.narration.currentTime;
        const newTime = curTime + amount;
        console.log(`Changing currentTime (${curTime}) to ${newTime}`);
        this.narration.currentTime = newTime;
        console.log(`Result: currentTime = ${this.narration.currentTime}`);
    },
    loaded: function loaded()
    {
        if (this.waitLoad) {
            this.waitLoad = false;
            playButton.removeClass('loading');
        }
    },
    audioWaiting: function audioWaiting()
    {
        if (!this.waitLoad) {
            this.waitLoad = true;
            playButton.addClass('loading');
        }
    },
};
(I'm including some of the event listeners I'm attaching here because I'd previously debugged a similar problem as being down to conflicts between event listeners. Having thoroughly debugged the event listeners this time, though, I don't think that's the root of the problem.)
Though this all works fine on my local copy, when I test an online version I get the following results:
Chrome: resets the play position to 0. The final console line reads Result: currentTime = 0.
Safari: doesn't change the play position at all. The final console.log line gives a value for currentTime equal to newTime (even though the play position doesn't actually change).
Firefox: skipping forward works; skipping backward interrupts the audio for a few seconds, then playback resumes from a couple of seconds before where the playhead had been. In both cases the final console.log line gives a value for currentTime equal to newTime.
The issue must have something to do with the way the audio is loaded. I tried adding another console log line to show the start and end values of buffered: in Chrome it extends to 2 seconds past the current play position, in Safari it extends to ~170 seconds, and in Firefox it seems to cover the full audio length.
However, in each case the start of the buffered range is 0.
Does anyone have any idea what might be going wrong?
There are some requirements for properly loading an audio file and using its properties.
The response serving the file needs to include the following headers:
Accept-Ranges: bytes
Content-Length: BYTE_LENGTH_OF_YOUR_FILE
Content-Range: bytes 0-LAST_BYTE_INDEX/BYTE_LENGTH_OF_YOUR_FILE
Content-Type: audio/mpeg
Note that the range's end index is the byte length minus one, and audio/mpeg is the registered MIME type for MP3.
My colleagues and I struggled with this for a few days, and finally this worked.
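To illustrate (a sketch of my own, not from the original answer, with a hypothetical file name), a minimal Node.js handler that sends those headers and honors Range requests could look like this:

// Minimal range-aware MP3 handler in plain Node.js.
const http = require('http');
const fs = require('fs');

const FILE = './narration.mp3'; // hypothetical file path

http.createServer((req, res) => {
    const { size } = fs.statSync(FILE);
    const range = req.headers.range; // e.g. "bytes=0-" or "bytes=1000-2000"

    if (!range) {
        // No Range header: serve the whole file, but advertise range support.
        res.writeHead(200, {
            'Accept-Ranges': 'bytes',
            'Content-Length': size,
            'Content-Type': 'audio/mpeg'
        });
        fs.createReadStream(FILE).pipe(res);
        return;
    }

    const [startStr, endStr] = range.replace('bytes=', '').split('-');
    const start = parseInt(startStr, 10);
    const end = endStr ? parseInt(endStr, 10) : size - 1;

    // 206 Partial Content is what makes seeking work in the browser.
    res.writeHead(206, {
        'Accept-Ranges': 'bytes',
        'Content-Range': `bytes ${start}-${end}/${size}`,
        'Content-Length': end - start + 1,
        'Content-Type': 'audio/mpeg'
    });
    fs.createReadStream(FILE, { start, end }).pipe(res);
}).listen(8080);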
If the browser has not loaded your audio, the audio cannot be played from an arbitrary position. The browser doesn't yet know anything about your audio file, and because of this it tries to play it from the start; for all it knows, the audio might be only one second long or even shorter.
Solution
You have to wait for the loadedmetadata event; after it fires you can play your audio from any time position, because by then the browser knows all the relevant information about the file.
Please change your code as follows:
function Player(skipTime)
{
    this.skipTime = skipTime;
    // initialise main narration audio
    this.narration = new Audio(getFileName(dynamicNarration));
    this.narration.preload = "auto";
    this.narration.addEventListener('canplaythrough', () => { this.loaded(); });
    this.narration.addEventListener('timeupdate', () => { this.seek(); });
    this.narration.addEventListener('ended', () => { this.ended(); });
    this.narration.addEventListener('waiting', () => { this.audioWaiting(); });
    this.narration.addEventListener('playing', () => { this.loaded(); });
    this.narration.addEventListener('loadedmetadata', () => { playButton.removeClass('loading'); });
    playButton.addClass('loading');
}
Player.prototype =
{
    rew: function()
    {
        this.skip(-this.skipTime);
    },
    ffw: function()
    {
        this.skip(this.skipTime);
    },
    skip: function(amount)
    {
        var curTime = this.narration.currentTime;
        var newTime = curTime + amount;
        console.log(`Changing currentTime (${curTime}) to ${newTime}`);
        this.narration.currentTime = newTime;
        console.log(`Result: currentTime = ${this.narration.currentTime}`);
    }
};
But if you don't want to wait long for the audio to load, you have only one more option: convert all your audio files to data-URL format, which looks like this:
var data = "data:audio/mp3;base64,...
But in that case you have to wait for the page to load even longer than for a single audio file to load. With a normal audio file load, only the metadata is needed, which is faster.
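As a rough sketch of that conversion (my own illustration; the file name is hypothetical), you can fetch a file in the browser and turn it into a data URL like this:

fetch('narration.mp3')
    .then(response => response.blob())
    .then(blob => new Promise(resolve => {
        const reader = new FileReader();
        // reader.result will be a string like "data:audio/mpeg;base64,..."
        reader.onload = () => resolve(reader.result);
        reader.readAsDataURL(blob);
    }))
    .then(dataUrl => {
        const audio = new Audio(dataUrl);
        // duration and currentTime now work without any range requests
    });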
This solved my issue...
private refreshSrc() {
    const src = this.media.src;
    this.media.src = '';
    this.media.src = src;
}
I found a solution to my problem, if not exactly an explanation.
My hosting provider uses a CDN, for which it must replace resources' URLs with those of a different domain. The URLs of my audio resources are dynamically constructed in JS, because there's a random element to them; as such, the deployment process that replaces URLs wasn't catching those of my audio files. To get around this, I manually excluded the audio files from the CDN, meaning I could refer to them using relative file paths.
This was how things stood when I was having this issue.
Then, due to a separate issue, I took a different approach: I put the audio files back on the CDN and wrote a function to extract the domain name I needed to use to retrieve them. When I did that, all my problems with setting currentTime suddenly disappeared. Somehow, not having the files on the CDN was severely interfering with the browser's ability to load them in an orderly manner.
If anyone can volunteer an explanation for why this might have been, I'd be very curious to hear it...
Edit
I've been working on another project that involves streaming audio, this time with PWA support as well, so I had to implement a caching mechanism for audio files in my service worker. Through this guide I learned all about the pitfalls of range requests, and I now understand that failing to serve correct responses to requests with Range headers will break seeking in some browsers.
It seems that in the case above, when I excluded my files from the CDN, they were served from somewhere that didn't support Range headers. Moving them back onto the CDN fixed this, as it must have been built with explicit support for streaming media.
Here is a good explanation of correct responses to range requests. But for anyone having this issue while using a third-party hosting service, it suffices to know that they probably don't support Range headers for streaming media. If you want to verify this, you can query the audio object's duration: at least in Safari's case, the duration is set to Infinity when the browser can't successfully make a range request, and at that point seeking is disabled.
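A quick sketch of that check (my own addition, assuming an Audio element like the narration object above):

narration.addEventListener('loadedmetadata', () => {
    // Safari reports Infinity here when the server mishandles range
    // requests, and seeking will be disabled as a result.
    if (narration.duration === Infinity) {
        console.warn('Range requests appear unsupported; seeking will not work.');
    }
});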

VideoJS HTML5 Video JS How to boost volume above maximum?

It's possible there's no solution to this, but I thought I'd inquire anyway. Sometimes a video is really quiet, and if I turn my computer's volume up accordingly, other sounds become way too loud as a result. It would be nice to be able to boost the volume above maximum.
A Google search literally turned up nothing at all, not even results related to videojs. For some videos my Mac is almost at max volume just to hear the speech well, so it would not be feasible to start with everything at a lower volume and adjust accordingly.
I tried with:
var video = document.getElementById("Video1");
video.volume = 1.0;
Setting it to anything above 1.0 makes the video fail to play at all:
var video = document.getElementById("Video1");
video.volume = 1.4; /// 2.0 etc
Based on: http://cwestblog.com/2017/08/17/html5-getting-more-volume-from-the-web-audio-api/
You can adjust the gain by using the Web Audio API:
function amplifyMedia(mediaElem, multiplier) {
    var context = new (window.AudioContext || window.webkitAudioContext)(),
        result = {
            context: context,
            source: context.createMediaElementSource(mediaElem),
            gain: context.createGain(),
            media: mediaElem,
            amplify: function(multiplier) {
                result.gain.gain.value = multiplier;
            },
            getAmpLevel: function() {
                return result.gain.gain.value;
            }
        };
    result.source.connect(result.gain);
    result.gain.connect(context.destination);
    result.amplify(multiplier);
    return result;
}
amplifyMedia(document.getElementById('myVideo'), 1.4);
The multiplier you pass to the function works on the same scale as the video's volume: 1 is 100% volume, but here you can pass a higher number.
I can't post a working demo or JSFiddle because the Web Audio API requires a source from the same origin (or one that is CORS-enabled). You can see the error in the console: https://jsfiddle.net/yuriy636/41vrx1z7/1/
MediaElementAudioSource outputs zeroes due to CORS access restrictions
But I have tested it locally and it works as intended.
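One side note (my own addition, not part of the original answer): if the media file is served with CORS headers, setting the element's crossOrigin property before it loads lets createMediaElementSource read the samples instead of outputting the zeroes mentioned above.

// Assumes the server sends Access-Control-Allow-Origin for the media file.
var video = document.getElementById('myVideo');
video.crossOrigin = 'anonymous'; // set before the media starts loading
amplifyMedia(video, 1.4);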
If you have access to the source files, rather than trying to boost on the fly with JavaScript (for which the Web Audio API answer from @yuriy636 is the best solution), you can process the video locally using something like ffmpeg:
ffmpeg -i input.mp4 -filter:a "volume=1.5" output.mp4
This will apply a filter to the input.mp4 file that just adjusts the volume to 1.5x the input and creates a new file called output.mp4.
You can also set a decibel level:
ffmpeg -i input.mp4 -filter:a "volume=10dB" output.mp4
or review the instructions to normalize audio etc.
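For normalization specifically, one option (my addition; see the ffmpeg docs for the full parameter set) is the loudnorm filter, which implements EBU R128 loudness normalization:
ffmpeg -i input.mp4 -filter:a loudnorm output.mp4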

IE 9 and 10 yield unexpected and inconsistent MediaErrors

We have a set of HTML blocks -- say around 50 of them -- which are iteratively parsed and have Audio objects dynamically added:
var SomeAudioWrapper = function(name) {
    this.internal_player = new Audio();
    this.internal_player.src = this.determineSrcFromName(name); // ultimately an MP3

    this.play = function() {
        if (someOtherConditionsAreMet()) {
            this.internal_player.play();
        }
    }
}
Suppose we generate about 40 to 80 of these on page load, but always the same set for a particular configuration. In all browsers tested, this basic strategy appears to work: the audio loads and plays successfully.
In IE 9 and 10, a transient bug surfaces. On occasion, calling .play() on the inner Audio object fails. Upon inspection, the inner Audio object has an .error.code of 4 (MEDIA_ERR_SRC_NOT_SUPPORTED), and the file's .duration shows NaN.
However, this only happens occasionally, and to some random subset of the audio files. E.g., usually file_abc.mp3 plays, but sometimes it generates the error. The network monitor shows a successful download in either case. And attempting to reload the file via the console also fails -- no request appears in IE's network monitor:
var a = new Audio();
a.src = "the_broken_file.mp3";
a.play(); // fails
a.error.code; // 4
Even appending a query value fails to refetch the audio or trigger any network requests:
var a = new Audio();
a.src = "the_broken_file.mp3?v=12345";
a.play(); // fails
a.error.code; // 4
However, attempting to load the broken audio file in a new tab using the same code works: the "unsupported" src plays perfectly.
Are there any resource limits we could be hitting? (Maybe the "unsupported" audio finishes downloading late?) Are there any known bugs? Workarounds?
I think we can pretty easily detect when a file fails. For other compatibility reasons we run a loop checking audio progress and completion stats to prevent progression through the app (an assessment) until the audio is complete. We could easily look for .error values -- but if we find one, what do we do about it!?
Addendum: I just found a related question (IE 9/10/11 sound file limit) that suggests there's an undocumented limit of 41 -- though I'm not sure whether that's a limit of 41 requests for audio files, 41 in-memory audio objects, or something else. I have yet to find any M$ documentation on the matter -- or known solutions.
Have you seen these pages on the audio file limits within IE? They are specific to SoundJS, but the information may be applicable to your issue:
https://github.com/CreateJS/SoundJS/issues/40 ...
A possible solution, as mentioned in the last comment: "control the maximum number of audio tags depending on the platform and reuse these instead of recreating them"
Additional info: http://community.createjs.com/kb/faq/soundjs-faq (see the section entitled “I load a lot of sounds, why am I running into errors in Internet Explorer?”)
I have not experienced this problem in Edge or IE11, but I wrote a JavaScript file to run some tests, looping through 200 audio files and seeing what happens. What I found is that for IE9 and IE10 the limit is shared across ALL tabs, so you are not even guaranteed to be able to load 41 files if other tabs have audio open.
The app I am working on has a custom sound manager. Our solution is to disable preloading audio for IE9 and IE10 (just load on demand) and then, when the onended or onpause callback gets triggered, to run:
this.src = '';
This frees up the number of audio objects that IE holds open, although I should warn that it may trigger a request to the current page the user is on. When the sound manager's play method is called again, set the src and play it.
I haven't tested this code, but I wrote something similar that works. What I think you could do for your implementation, is resolve the issue by using a solution like this:
var isIE = window.navigator.userAgent.match(/MSIE (9|10)/);

var SomeAudioWrapper = function(name) {
    var src = this.determineSrcFromName(name);

    this.internal_player = new Audio();

    // If the browser is IE9 or IE10, remove the src when the
    // audio is paused or done playing. Otherwise, set the src
    // at the start.
    if (isIE) {
        this.internal_player.onended = function() {
            this.src = '';
        };
        this.internal_player.onpause = this.internal_player.onended;
    } else {
        this.internal_player.src = src;
    }

    this.play = function() {
        if (someOtherConditionsAreMet()) {
            // If the browser is IE, set the src before playing.
            if (isIE) {
                this.internal_player.src = src;
            }
            this.internal_player.play();
        }
    }
}

Stop jRecorder playing back after recording

I've been using jRecorder for a while now; it's pretty good considering how lightweight it is.
Now I've got a case where I need to prevent jRecorder from playing back the recorded audio after recording. I've tried pretty much every function within jRecorder, and the documentation is not great.
Anyone ever encountered and got round this?
Here's the code, just standard jRecorder stuff really.
$.jRecorder({
    'swf_path': '/scripts/jrecorder1.1/jRecorder.swf',
    'host': host,
    'callback_started_recording': function() {
        $('.audio-recorder').addClass('recording');
    },
    'callback_finished_recording': function() {
        $('.audio-recorder').removeClass('recording');
    },
    'callback_stopped_recording': function() {
        $.jRecorder.sendData();
        $('.audio-recorder').removeClass('recording');
    },
    'callback_error_recording': function() {
        $('.audio-recorder').removeClass('recording');
    },
    'callback_activityTime': function(time) {
    },
    'callback_activityLevel': function(level) {
    },
    'callback_finished_sending': function(response) {
    }
}, $('.audio-recorder .audio-recorder-singleton'));
After a day of trying almost everything, I finally found a solution. I thought I'd post it here in full, as there are lots of comments on the jRecorder website asking how to do this, all unanswered.
To do this, you need to do the following:
Go to https://github.com/sythoos/jRecorder and download everything in the flash-fla folder. Make sure you match the flash-fla folder structure EXACTLY on your local machine.
Once you have done that, and if you haven't got it already, you're going to need Flash. I used the free 30-day trial; you can do the same.
Open the Main.as file that you should now have in your local directory and delete line 160, as shown below:
private function recordComplete(e:Event):void
{
    //fileReference.save(recorder.output, "recording.wav");
    //finalize_recording();
    preview_recording(); // <----- DELETE THIS LINE
}
Now open the AudioRecorderCS4-1.0.fla file in Flash and export the movie (File -> Export -> Export Movie).
Once it's exported (and named so you can find it), reference/include the new SWF in your project and change the swf_path in your jRecorder parameters to match it, and voila! :)
Alternatively, you can download an old version of the jRecorder.swf file from here: jRecorder.swf without preview. That way you don't need to recompile the .fla file.
Hit the Raw button to download it. This version does not have the jSendFileToServer function, so you should comment that call out in your jRecorder.js file.
The WAV file will be submitted to the server right after the recording stops.
