How to jump to certain time offsets in HTML5 Audio elements?
They say you can simply set their currentTime property (emphasis mine):
The currentTime attribute must, on getting, return the current
playback position, expressed in seconds. On setting, if the media
element has a current media controller, then it must throw an
INVALID_STATE_ERR exception; otherwise, the user agent must seek to
the new value (which might raise an exception).
Alas, it doesn't seem to work (I need it in Chrome).
There are similar questions, but no answers.
To jump around an audio file, your server must be configured properly.
The client sends byte-range requests to seek and play certain regions of a file, so the server must respond adequately:
In order to support seeking and playing back regions of the media that
aren't yet downloaded, Gecko uses HTTP 1.1 byte-range requests to
retrieve the media from the seek target position. In addition, if you
don't serve X-Content-Duration headers, Gecko uses byte-range requests
to seek to the end of the media (assuming you serve the Content-Length
header) in order to determine the duration of the media.
Then, if the server responds to byte-range requests correctly, you can set the audio position via currentTime:
audio.currentTime = 30;
See MDN's Configuring servers for Ogg media (the same applies for other formats, actually).
Also, see Configuring web servers for HTML5 Ogg video and audio.
Works on my Chrome:
$('#audio').bind('canplay', function() {
  this.currentTime = 29; // jump to the 29th second
});
Both audio and video media accept the #t media-fragment property in the URI:
song.mp3#t=8.5
To dynamically skip to a specific point use HTMLMediaElement.currentTime:
audio.currentTime = 8.5;
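The fragment also accepts an end time, so you can play just a range. A small sketch (support for the start,end form varies by browser, so verify in your targets):
audio.src = "song.mp3#t=8.5,17"; // media fragment: start at 8.5s, stop at 17s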
A much easier solution is
var element = document.getElementById('audioPlayer');
//first make sure the audio player is playing
element.play();
//second seek to the specific time you're looking for
element.currentTime = 226;
Make sure you attempt to set the currentTime property after the audio element is ready to play. You can bind your function to the oncanplay event attribute defined in the specification.
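If you're not using jQuery, the same idea in plain DOM looks like this (a minimal sketch; the element id is assumed):
var audio = document.getElementById("audio");
audio.addEventListener("canplay", function() {
  this.currentTime = 29; // safe to seek once enough data is buffered
}, { once: true }); // run only on the first canplay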
Can you post a sample of the code that fails?
I was facing a problem where the audio's progress bar was not working even though the audio itself played fine. This code works for me; hope it helps you too.
Here, song is the audio element object.
HTML Part
<input type="range" id="seek" value="0" max=""/>
jQuery Part
$("#seek").bind("change", function() {
song.currentTime = $(this).val();
});
song.addEventListener('timeupdate',function (){
$("#seek").attr("max", song.duration);
$('#seek').val(song.currentTime);
});
Firefox also makes byte-range requests when seeking content it has not yet loaded; it is not just a Chrome issue.
Set the response header "Accept-Ranges: bytes" and return a 206 Partial Content status code to allow any client to make byte range requests.
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Configuring_servers_for_Ogg_media#Handle_HTTP_1.1_byte_range_requests_correctly
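For illustration, a minimal sketch of handling Range requests by hand in an Express route (the file name and route here are hypothetical; express.static and most production servers already do this for you):
const express = require("express");
const fs = require("fs");
const app = express();

app.get("/audio.mp3", (req, res) => {
  const file = "audio.mp3"; // hypothetical path
  const size = fs.statSync(file).size;
  const range = req.headers.range; // e.g. "bytes=131072-"

  if (!range) {
    // No Range header: send the whole file, but advertise range support
    res.set({ "Content-Type": "audio/mpeg", "Content-Length": size, "Accept-Ranges": "bytes" });
    return fs.createReadStream(file).pipe(res);
  }

  const [startStr, endStr] = range.replace(/bytes=/, "").split("-");
  const start = parseInt(startStr, 10);
  const end = endStr ? parseInt(endStr, 10) : size - 1;

  // 206 Partial Content tells the client that seeking will work
  res.status(206).set({
    "Content-Range": "bytes " + start + "-" + end + "/" + size,
    "Accept-Ranges": "bytes",
    "Content-Length": end - start + 1,
    "Content-Type": "audio/mpeg",
  });
  fs.createReadStream(file, { start: start, end: end }).pipe(res);
});

app.listen(3000);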
@katspaugh's answer is correct, but there is a workaround that does not require any additional server configuration. The idea is to fetch the audio file as a blob, transform it into a data URL, and use that as the src of the audio element.
Here is a solution for Angular's $http (a vanilla JS sketch follows below):
$http.get(audioFileURL, { responseType: 'blob' })
  .success(function(data) {
    var fr = new FileReader();
    fr.readAsDataURL(data);
    fr.onloadend = function() {
      domObjects.audio.src = fr.result;
    };
  });
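And the vanilla JS version, sketched with fetch (audioFileURL and audioElement are placeholders for your own names):
fetch(audioFileURL)
  .then(function(response) { return response.blob(); })
  .then(function(blob) {
    var fr = new FileReader();
    fr.onloadend = function() {
      audioElement.src = fr.result; // data: URL, fully seekable
    };
    fr.readAsDataURL(blob);
  });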
Cautions:
This workaround is not suitable for large files.
It will not work cross-origin unless CORS is set up properly.
Set the time position to 5 seconds:
var audio = document.getElementById("myAudio");
audio.currentTime = 5;
In order to fix video rewind and fast-forward in Chrome, just add /stream? to your HTML request. For example:
<video src="your.website.ext/{fileId}">
becomes
<video src="your.website.ext/{fileId}/stream?">
My problem was that video rewind and fast-forward didn't work in Chrome but worked well in Firefox.
Related
I want to play an audio book in my web page. The audio book is a .zip file containing multiple .mp3 files, one for each chapter of the book. The run time of all the files is several hours, and their cumulative size is 60MB. The .zip is stored server-side (Express.js).
How can I play each file in succession in the client (in an <audio> element, for instance) so that the audio book plays smoothly, as if it were one file?
Do I need to use a MediaStream object? If so, how?
-Thanks
I'd take a look at this answer on another Stack Overflow question however I have made some modifications in order to match your question:
var audioFileURLs = [];

function preloadAudio(url) {
  var audio = new Audio();
  // once this file loads, it will call loadedAudio()
  // the file will be kept by the browser as cache
  audio.addEventListener('canplaythrough', loadedAudio, false);
  audio.src = url;
}

var loaded = 0;
function loadedAudio() {
  // this will be called every time an audio file is loaded
  // we keep track of the loaded files vs the requested files
  loaded++;
  if (loaded == audioFileURLs.length) {
    // all have loaded
    init();
  }
}

var player = document.getElementById('player');
function play(index) {
  player.src = audioFileURLs[index];
  player.play();
}

function init() {
  // do your stuff here, audio has been loaded
  // for example, play all files one after the other
  var i = 0;
  // once the player ends, play the next one
  player.onended = function() {
    i++;
    if (i >= audioFileURLs.length) {
      // end of the book
      return;
    }
    play(i);
  };
  // play the first file
  play(i);
}

// call the node/express server to get a list of links we can hit
// to retrieve each audio file
fetch('/getAudioUrls/#BookNameOrIdHere#')
  .then(r => r.json())
  .then(arrayOfURLs => {
    audioFileURLs = arrayOfURLs;
    arrayOfURLs.forEach(url => preloadAudio(url));
  });
And then just have an audio element on the page with the id "player", like <audio id="player"></audio>.
With this answer, though, the arrayOfURLs array must contain URLs to an API on your server that will open the zip file and return the specified mp3 data. You may also want to take this answer as a general reference rather than a complete solution, because there is optimization to be done: you should probably only load the first audio file at first, then, five minutes or so before the current file ends, start pre-loading the next, and repeat this process for the entire book. That is all up to you, but this should hopefully put you on your feet.
You may also run into an issue with the audio element, because it will only show the length of the current audio segment, not the full length of the audiobook. I assume this zip file has the book separated by chapter, correct? If so, you could create a chapter selector that lets the user jump to a specific chapter, i.e. a specific getAudioUrls URL.
I hope this helps!
One other note: reading your comment on a potential answer below, you could combine all the audio files into one using some sort of node module (audioconcat is one I found after a quick Google search) and return that single file to the client. However, I would not personally take this route, because the entire audiobook would sit in the server's memory while it combines the files and until it returns them to the client. That could cause memory issues down the road, so I would avoid it if I could. I will admit, though, that this option could be nice because the full length of the audiobook would display in the audio element's timeline.
Perhaps the best option is to store the book's full length and the chapter lengths in a details.json file inside the zip and send that to the client in the first API call, along with the URLs to each audio file. This would enable you to build a nice UI.
The only options I can think of are to either use a JavaScript MP3 decoder (or a C decoder compiled to asm.js/wasm) with the audio APIs, or wrap the mp3 in an mp4 using something like mux.js and use Media Source Extensions for playback.
Maybe this will help you:
<audio controls="controls">
<source src="track.ogg" type="audio/ogg" />
<source src="track.mp3" type="audio/mpeg" />
Your browser does not support the audio element.
</audio>
(function() {
  var url = "http://dash.edgesuite.net/envivio/Envivio-dash2/manifest.mpd";
  var player = dashjs.MediaPlayer().create();
  player.initialize(document.querySelector("#videoPlayer"), url, true);

  var bitrates = player.getBitrateInfoListFor("video");
  console.log('My bitrate:' + bitrates.length);
})();
In the console I get:
My bitrate:0
how can I find out what quality the video has and how to change it?
Can I play a mpd file without dash.js using Media Source and xhr ?
how can I find out what quality the video has and how to change it?
You need to wait until the manifest has been loaded and the player has initialized completely, which happens asynchronously. Add an event listener like so:
player.on("streamInitialized", function () {
var bitrates = player.getBitrateInfoListFor("video");
console.log('My bitrate:' + bitrates.length);
});
Now you should get a list of the bitrates available.
To change the quality manually, use http://cdn.dashjs.org/latest/jsdoc/module-MediaPlayer.html#setQualityFor__anchor
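For example (inside the streamInitialized handler above; the index choice is illustrative):
player.setQualityFor("video", bitrates.length - 1); // pick the highest-bitrate entry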
Can I play a mpd file without dash.js using Media Source and xhr ?
Sure, but you can't just pass a manifest to MSE, so you would still need to do all the hard stuff a DASH player does, such as parsing the manifest, determining the media URLs, selecting the relevant quality, etc.
For those coming here in the future:
the code
player.setQualityFor('video', {number});
will change the quality, but due to automatic bitrate switching the quality soon returns to whatever the bandwidth can afford.
To set the quality manually and switch off auto bitrate:
let settingsp = player.getSettings();
settingsp.streaming.abr.autoSwitchBitrate = false;
Now you can use setQualityFor() to change the quality, and it remains the same throughout the video.
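In newer dash.js versions (v3+), the supported way is updateSettings; roughly (a sketch, check the docs for your version):
player.updateSettings({
  streaming: { abr: { autoSwitchBitrate: { video: false } } }
});
player.setQualityFor('video', 2); // illustrative index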
I have an array of Blobs (binary data, really -- I can express it however is most efficient. I'm using Blobs for now but maybe a Uint8Array or something would be better). Each Blob contains 1 second of audio/video data. Every second a new Blob is generated and appended to my array. So the code roughly looks like so:
var arrayOfBlobs = [];
setInterval(function() {
  arrayOfBlobs.push(nextChunk());
}, 1000);
My goal is to stream this audio/video data to an HTML5 element. I know that a Blob URL can be generated and played like so:
var src = URL.createObjectURL(arrayOfBlobs[0]);
var video = document.getElementsByTagName("video")[0];
video.src = src;
Of course this only plays the first 1 second of video. I also assume I can trivially concatenate all of the Blobs currently in my array somehow to play more than one second:
// Something like this (untested)
var concatenatedBlob = new Blob(arrayOfBlobs);
var src = ...
However this will still eventually run out of data. As Blobs are immutable, I don't know how to keep appending data as it's received.
I'm certain this should be possible because YouTube and many other video streaming services utilize Blob URLs for video playback. How do they do it?
Solution
After some significant Googling I managed to find the missing piece to the puzzle: MediaSource
Effectively the process goes like this:
Create a MediaSource
Create an object URL from the MediaSource
Set the video's src to the object URL
On the sourceopen event, create a SourceBuffer
Use SourceBuffer.appendBuffer() to add all of your chunks to the video
This way you can keep adding new bits of video without changing the object URL.
Caveats
The SourceBuffer object is very picky about codecs. These have to be declared, and must be exact, or it won't work
You can only append one blob of video data to the SourceBuffer at a time, and you can't append a second blob until the first one has finished (asynchronously) processing
If you append too much data to the SourceBuffer without calling .remove() then you'll eventually run out of RAM and the video will stop playing. I hit this limit around 1 hour on my laptop
Example Code
Depending on your setup, some of this may be unnecessary (particularly the part where we build a queue of video data before we have a SourceBuffer then slowly append our queue using updateend). If you are able to wait until the SourceBuffer has been created to start grabbing video data, your code will look much nicer.
<html>
<head>
</head>
<body>
  <video id="video"></video>
  <script>
    // As before, I'm regularly grabbing blobs of video data
    // The implementation of "nextChunk" could be various things:
    //   - reading from a MediaRecorder
    //   - reading from an XMLHttpRequest
    //   - reading from a local webcam
    //   - generating the files on the fly in JavaScript
    //   - etc
    // NOTE: appendBuffer() below expects an ArrayBuffer/ArrayBufferView,
    // so nextChunk() should yield ArrayBuffers (e.g. via blob.arrayBuffer())
    var arrayOfBlobs = [];

    setInterval(function() {
      arrayOfBlobs.push(nextChunk());
      // NEW: Try to flush our queue of video data to the video element
      appendToSourceBuffer();
    }, 1000);

    // 1. Create a `MediaSource`
    var mediaSource = new MediaSource();

    // 2. Create an object URL from the `MediaSource`
    var url = URL.createObjectURL(mediaSource);

    // 3. Set the video's `src` to the object URL
    var video = document.getElementById("video");
    video.src = url;

    // 4. On the `sourceopen` event, create a `SourceBuffer`
    var sourceBuffer = null;
    mediaSource.addEventListener("sourceopen", function() {
      // NOTE: Browsers are VERY picky about the codec being EXACTLY
      // right here. Make sure you know which codecs you're using!
      sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");

      // If we requested any video data prior to setting up the SourceBuffer,
      // we want to make sure we only append one chunk at a time
      sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
    });

    // 5. Use `SourceBuffer.appendBuffer()` to add all of your chunks to the video
    function appendToSourceBuffer() {
      if (
        mediaSource.readyState === "open" &&
        sourceBuffer &&
        sourceBuffer.updating === false &&
        arrayOfBlobs.length > 0
      ) {
        sourceBuffer.appendBuffer(arrayOfBlobs.shift());
      }

      // Limit the total buffer size to 20 minutes
      // This way we don't run out of RAM
      if (
        video.buffered.length &&
        video.buffered.end(0) - video.buffered.start(0) > 1200
      ) {
        sourceBuffer.remove(0, video.buffered.end(0) - 1200);
      }
    }
  </script>
</body>
</html>
As an added bonus this automatically gives you DVR functionality for live streams, because you're retaining 20 minutes of video data in your buffer (you can seek by simply using video.currentTime = ...)
Adding to the previous answer...
make sure to add sourceBuffer.mode = 'sequence' in the MediaSource sourceopen event handler to ensure the data is appended in the order it is received. The default value is 'segments', which buffers until the next 'expected' timeframe is loaded.
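A minimal sketch of where that goes, reusing the example above:
mediaSource.addEventListener("sourceopen", function() {
  sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");
  // append chunks in arrival order rather than by buffered timestamp
  sourceBuffer.mode = "sequence";
  sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
});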
Additionally, make sure that you are not sending any packets with data.size === 0, and make sure that no backlog ('stack') builds up by clearing the queue on the broadcasting side, unless you want to record it as one entire video, in which case just make sure the broadcast video is small enough and your internet speed is fast. The smaller the size and the lower the resolution, the more likely you can keep a realtime connection with a client, e.g. for a video call.
For iOS, the broadcast needs to be made from an iOS/macOS application and be in mp4 format. The video chunk gets saved to the app's cache and is removed once it has been sent to the server. A client can connect to the stream using either a web browser or an app on nearly any device.
I am trying to stream audio through a websocket on a node.js (express) server to a web browser. The audio is coming from an iOS device as 16-bit, mono wav files sampled at 4k (4000 samples per second).
Here's my code:
Server Code:
webSocketServer.on('connection', function connection(client) {
  client.on('message', function(message) {
    // relay each incoming audio chunk to every connected client
    webSocketServer.clients.forEach(function each(connection) {
      connection.send(message, { binary: true });
    });
  });
});
Client Code:
var audioContext = new AudioContext();
var time = 0; // next scheduled playback time

var webSocket = new WebSocket('ws://' + window.location.hostname + ':8080/');
webSocket.binaryType = 'arraybuffer';
webSocket.onmessage = function(message) {
  var arrayBuffer = message.data; // wav from server, as an ArrayBuffer
  var source = audioContext.createBufferSource();
  audioContext.decodeAudioData(arrayBuffer, function(buffer) {
    source.buffer = buffer;
    source.connect(audioContext.destination);
    source.start(time);
    time += source.buffer.duration;
  }, function() {
    console.log('error');
  });
};
decodeAudioData() appears to be working; however, the audio buffer it returns is half the length I was expecting (e.g. 4000 samples only give me 0.5 seconds of audio). I originally thought this was because the wav is 16-bit and not 32, but switching to 32 caused decodeAudioData() to trigger its error callback.
I figured this workaround could be added to the success callback:
source.playbackRate.value = 0.5 // play at half speed
time += source.buffer.duration * 2 // double duration
This gets the timing to work perfectly, but I am left with one problem: there is an audible 'click' or 'pop' between audio chunks. After spacing the chunks out by one second (time += (source.buffer.duration * 2) + 1), I was able to determine that the click happens at the very beginning of each chunk.
So my two main head-scratchers are:
1) Why is the decoded audio playing at twice the speed I expect? Is my sampling rate of 4k too low for the Web Audio API? And why can't I decode 32-bit wavs?
2) I have some experience with digital audio workstations (Ableton, Logic), and I know that clicking sounds can arise if a wave 'jumps' from a sample back down to zero or vice versa (i.e. starting/ending a sine wave mid-phase). Is that what's going on here? Is there a way to get around it? Crossfading each individual chunk seems silly. Why doesn't each chunk pick up where the last one left off?
1) The audio I was receiving was actually at 2k by mistake, but the wav header still said 4k, hence the double-speed error.
2) See the last paragraph of Chris Wilson's answer here:
Finally - this is not going to work well if the sound stream does not match the default audio device's sample rate; there are always going to be clicks, because decodeAudioData will resample to the device rate, which will not have a perfect duration. It will work, but there will likely be artifacts like clicks at the boundaries of chunks. You need a feature that's not yet spec'ed or implemented - selectable AudioContext sample rates - in order to fix this.
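Since that was written, selectable AudioContext sample rates have shipped in newer browsers; a hedged sketch (verify support and the allowed rate range in your target browsers):
// Match the context rate to the incoming stream so decodeAudioData
// doesn't resample each chunk at the device rate
var audioContext = new AudioContext({ sampleRate: 4000 });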
Brion Vibber's AudioFeeder.js works great without any clicks, but it requires raw 32-bit PCM data. Also be wary of upsampling artifacts!
Another option:
You can use the MediaSource API to overcome those glitches between audio chunks.
If you need full-fledged research on this, see: MSE for Audio
I'm trying to render an MJPEG stream in HTML5 using the img tag.
When I'm running the following, everything works great, meaning, the video starts to play until the video ends:
<img src="http://[some ip]:[port]/mjpg">
My question is how can I get the stream frame by frame.
For each frame, I want to get it, do something (ajax call to the server) and then display the frame as an image.
Thanks.
You can do this without repeatedly making HTTP requests; only one will suffice. Use the Fetch API to create a ReadableStream, get its reader, and keep reading from the stream.
Once you have the reader, keep reading chunks from the stream recursively. Look for the SOI marker (0xFF 0xD8) in the byte stream, which signals the end of the multipart header and the beginning of the JPEG frame. The header will contain the length of the JPEG in bytes; read that many bytes from the chunk (and any successive chunks) and store them in a Uint8Array. Once you've successfully read the frame, convert it into a blob, create an object URL from it, and assign it to the src property of your img element.
Keep doing this till the connection is closed.
Shameless plug. Here's a link to a working sample on github.
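In case the linked sample moves, here is a minimal sketch of the approach (the URL and element id are placeholders; it scans for the JPEG SOI/EOI markers rather than parsing the multipart Content-Length header described above):
async function streamMjpeg(url) {
  const img = document.getElementById("frame"); // hypothetical <img>
  const response = await fetch(url);
  const reader = response.body.getReader();
  let buffer = new Uint8Array(0);

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // accumulate chunks until a complete JPEG frame is present
    const merged = new Uint8Array(buffer.length + value.length);
    merged.set(buffer);
    merged.set(value, buffer.length);
    buffer = merged;

    const soi = findMarker(buffer, 0xd8, 0);                   // start of image
    const eoi = soi >= 0 ? findMarker(buffer, 0xd9, soi) : -1; // end of image
    if (soi >= 0 && eoi > soi) {
      const frame = buffer.slice(soi, eoi + 2);
      buffer = buffer.slice(eoi + 2);
      // per-frame work (e.g. your ajax call) goes here, then display it
      const oldSrc = img.src;
      img.src = URL.createObjectURL(new Blob([frame], { type: "image/jpeg" }));
      if (oldSrc.startsWith("blob:")) URL.revokeObjectURL(oldSrc);
    }
  }
}

function findMarker(bytes, second, from) {
  // find 0xFF followed by the given marker byte, starting at `from`
  for (let i = from; i + 1 < bytes.length; i++) {
    if (bytes[i] === 0xff && bytes[i + 1] === second) return i;
  }
  return -1;
}

streamMjpeg("http://[some ip]:[port]/mjpg"); // URL format from the question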
If the camera exposes raw JPEG images (not an .MJPEG stream) you'll have to reload them manually (if the extension is .MJPEG, the browser will do everything; just set the correct src). If you have .MJPEG and want the raw .JPEG, check your camera documentation: most cameras expose both the .MJPEG and raw .JPEG streams, just on different URLs.
Unfortunately you won't be able to easily get the image through ajax, but you can change the src of the image periodically.
You can use (new Date()).getTime() appended to the querystring to force the browser to reload the image, repeating each time the image loads.
If you use jQuery the code will look something like this:
camera.html
<!DOCTYPE html>
<html>
<head>
<title>ipCam</title>
</head>
<body>
<h1>ipCam</h1>
<img id="motionjpeg" src="http://user:pass#127.0.0.1:8080/" />
<script src="motionjpeg.js"></script>
<script>
//Using jQuery for simplicity
$(document).ready(function() {
motionjpeg("#motionjpeg"); // Use the function on the image
});
</script>
</body>
</html>
motionjpeg.js
function motionjpeg(id) {
var image = $(id), src;
if (!image.length) return;
src = image.attr("src");
if (src.indexOf("?") < 0) {
image.attr("src", src + "?"); // must have querystring
}
image.on("load", function() {
// this causes the load event to be called "recursively"
this.src = this.src.replace(/\?[^\n]*$/, "?") +
(new Date()).getTime(); // 'this' refers to the image
});
}
Note that my example plays the MotionJPEG on page load, but doesn't provide play/pause/stop functionality.
If your stream source can't return frames at another address (http://[some ip]:[port]/frame/XXX), then you can use an MJPEG stream parser on the server. For example, Paparazzo.js parses the stream and returns a single JPEG. Actually, it returns only the last frame without saving previous ones, but that can be changed.
This problem can't be solved in the browser with JS alone; it needs some plugins or server-side help.