I was given two MP3 files, one 4.5 MB and one 5.6 MB, and was instructed to have them play on a website I am managing. I have found a nice, clean-looking CSS-based jQuery audio player.
My question is: is this the right solution for files that big? I am not sure whether the player preloads the file or streams it (if that is the correct terminology). I don't deal much with audio players and such...
This player is from happyworm.com/jquery/jplayer/latest/demo-01.htm
Is there another approach I should take to get this to play properly? I don't want it to have to buffer, making the visitor wait, or slow the page loading, etc. I want it to play cleanly and not affect the visitor's session on the site.
The name is a bit misleading: the MP3 playback is done in a Flash component, as in all other similar players. The jQuery part is the control and customization of the player (which is very nice; I'm not saying anything against the product).
The player should be capable of playing an MP3 file while it loads. It's not going to be real streaming (because you can't skip to arbitrary positions), but it should work out all right.
Make sure you test the buffering yourself, using a big MP3 file. Remember to encode the MP3 files according to the rules (Flash is picky about sample rates; 44.1 kHz constant-bitrate files are the safe choice), because otherwise the files will act up, especially in older players.
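For reference, wiring the player up takes only a few lines. This is a sketch based on the jPlayer demos; the element ID and paths are placeholders for your own setup:

$("#jquery_jplayer_1").jPlayer({
    ready: function () {
        // Point the player at your MP3 once it has initialized
        $(this).jPlayer("setMedia", { mp3: "media/song1.mp3" });
    },
    swfPath: "js",      // folder containing the jPlayer SWF
    supplied: "mp3"
});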
I am working with React and Node. My project has a requirement to merge videos and play them in the player. Is this possible somehow, either on the React side using some canvas, or on the back-end side using some module other than ffmpeg?
I just want to show a preview, nothing else. Is that possible?
Any help would be much appreciated.
Right now what I am doing is playing the videos one by one in the player:
{videos.map((url) => <ReactPlayer key={url} url={url} />)}
Now what I want to do is draw the frames to a canvas. For that I am playing all the URLs at once, but how can I play them in series, holding back each canvas image until the previous one completes?
To achieve continuous playback in browsers across multiple input video files, there's no need for server-side processing. Static file serving of the video files (we'll call them chunks from now on) is enough.
The .appendBytes() method for a video player's playback buffer was first invented in Flash, to allow switching video streams (or different-quality chunks). It was particularly useful for live video, where the next video file doesn't exist yet when playback starts. It also allowed video chunks of different resolutions to play one after the other seamlessly, which, at the time, didn't work in any other player, including VLC (and I think ffmpeg didn't have it either).
HTML5 browsers have since added the Media Source Extensions (MSE) API, whose SourceBuffer.appendBuffer() method adds video chunks to the currently playing buffer.
It lets you preload whatever content you need, with full control over what gets played and when (you are in control of buffering and of what comes next at all times).
You can start dumping video chunks into the playback buffer at any time.
On the server side, ffmpeg cannot solve your problem the way the browser can, as you would have to handle very difficult corner cases where the video resolution, codecs, audio channels, etc. might differ between files. Also, a browser-only solution is vastly superior to remuxing or re-encoding video on the server.
MDN: https://developer.mozilla.org/en-US/docs/Web/API/SourceBuffer/appendBuffer
This will solve all your problems related to video playback of multiple source files.
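A minimal sketch of that approach, assuming the chunks are already in an MSE-compatible container (fragmented MP4 or WebM); the codec string below is a placeholder that must match how your files were encoded:

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
    const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
    for (const url of chunkUrls) {            // chunkUrls: your ordered list of video files
        const data = await (await fetch(url)).arrayBuffer();
        await new Promise((resolve) => {
            sourceBuffer.addEventListener('updateend', resolve, { once: true });
            sourceBuffer.appendBuffer(data);  // queue the next chunk for playback
        });
    }
    mediaSource.endOfStream();
});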
If you want to do this on the back end, as stated in the comment, you will likely need to include ffmpeg. There are some libraries, though, that make it simpler, like fluent-ffmpeg.
Assuming you already have the files on your Node server, you can use something like:

const ffmpeg = require('fluent-ffmpeg');

ffmpeg('/path/to/input1.avi')
    .input('/path/to/input2.avi')
    .on('error', function(err) {
        console.log('An error occurred: ' + err.message);
    })
    .on('end', function() {
        console.log('Merging finished!');
    })
    .mergeToFile('/path/to/merged.avi', '/path/to/tempDir'); // tempDir holds intermediate files
Note that this is a simple example taken directly from https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#mergetofilefilename-tmpdir-concatenate-multiple-inputs.
Make sure you read through the prerequisites since it requires ffmpeg to be installed and configured.
There are other aspects that you might need to consider, such as differing codecs and resolutions. Hopefully this gives you a good starting point though.
Pretty sure most, if not all, video manipulation libraries use ffmpeg under the hood.
Seeing as you're a React dev, you might appreciate Remotion for manipulating the videos as needed. It doesn't run in the front end, but it does have some useful features.
Everything is being done in the front end.
My goal is to be able to create an audio track in real time and play it instantly for the user. The file would be roughly 10 minutes long. However, the files are very simple: mostly silence, with a few sound clips (each clip is 2 KB) sprinkled around. So the process for generating the data (the raw bytes) is very simple: either write the 2 KB sound clip or write n zero bytes for the silence, for the full 10 minutes. But instead of generating the entire file and then playing it, I would like to stream the audio, ideally generating more and more of the file while the audio is playing. That would prevent any noticeable delay between when the user clicks play and when the audio starts. The process of creating the file can take anywhere from 20 to 500 milliseconds; different files are created based on user input.
The only problem is: I have no idea how to do this. I've read ideas about using WebSockets, but that seems like the data would come from a server, and I see no reason to bother a server with this when JavaScript can easily generate the audio data on its own.
I've been researching and experimenting with the Web Audio API and the Media Streams API for the past several hours, and I keep going in circles; I'm totally confused. I'm starting to think these APIs are meant for gathering data from a user's mic or webcam, not for being fed data directly from a readable stream.
Is what I want to do possible? Can it be achieved with something like a MediaStreamAudioSourceNode, or is there another, simpler way that I haven't noticed?
Any help on this topic would be so greatly appreciated. Examples of a simple working version would be even more appreciated. Thanks!
I'm going to follow this question, because a true streaming solution would be very nice to know about. My experience is limited to using the Web Audio API to play two sounds with a given pause between them. The data is actually generated at the server and downloaded via Thymeleaf into two JavaScript variables holding the PCM data to be played, but this data could just as easily have been generated at the client via JavaScript.
The following is not great, but could almost be workable, given that there are extensive silences. The idea: manage an ordered FIFO queue holding the variable name and a timing value for when the associated audio should play, and have a function that periodically polls the queue and loads commands into JavaScript setTimeout calls, with the delay calculated from the timing values in the queue.
For the one limited app I have, the button calls the following (where playTone is a method I wrote that plays the sound held in the JavaScript variable):
playTone(pcmData1);
setTimeout(() => playTone(pcmData2), 3500);
I have the luxury of knowing that pcmData1 is 2 seconds long and that there is a fixed pause between the two sounds, and I am not counting on significant timing accuracy. For your continuous playback tool, it would just be the setTimeout part, with the pcmData variable and the timing obtained from the scheduling FIFO queue.
Whether this is helpful and triggers a useful idea, IDK. Hopefully, someone with more experience will show us how to stream data on the fly. This is certainly something that can easily be done in Java, using its SourceDataLine class, which has useful blocking-queue aspects, but I haven't located a JavaScript equivalent yet.
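For what it's worth, one way to get closer to gapless, on-the-fly playback is to schedule successive buffers on the Web Audio clock rather than with setTimeout. A sketch, assuming mono 44.1 kHz Float32 PCM (decodedClipSamples is a stand-in for your own generated data):

const ctx = new AudioContext();
let playhead = ctx.currentTime + 0.1;    // small lead time before the first chunk

// Schedule one chunk of samples; call repeatedly as you generate more data.
function enqueue(samples, sampleRate = 44100) {
    const buffer = ctx.createBuffer(1, samples.length, sampleRate);
    buffer.copyToChannel(samples, 0);
    const src = ctx.createBufferSource();
    src.buffer = buffer;
    src.connect(ctx.destination);
    src.start(playhead);                 // sample-accurate, back-to-back playback
    playhead += buffer.duration;
}

enqueue(new Float32Array(44100));        // one second of silence
enqueue(decodedClipSamples);             // then a sound clip (your generated data)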
I've been researching this for a while, but I'm not getting a straight answer to my exact question. I'm trying to understand the process behind video players switching video quality (480p, 720p, 1080p, etc.).
To get there, I'm first asking whether this is more of a front-end thing or a back-end thing. And to illustrate the answer, does one:
A) Upload one video file to a server (S3/Google Cloud) at the highest quality, then use a video tag in HTML and add a property to specify which quality is desired?
B) Upload one video file to a server (S3/Google Cloud) at the highest quality, then use JS to control the playback quality?
C) Split highest quality uploaded file into multiple files with different streaming quality using server code, then use front-end JS to choose which quality is needed?
D) Realize that this is way more work than it seems and leave it to a professional video streaming service, such as JWPlayer?
Or am I not seeing another option that's possible without a streaming service, without actually building a streaming service?
If the answer is pretty much D, what service do you recommend?
Note: I know YouTube and Vimeo can handle this but I'm not trying to have that kind of overhead.
It is answer 'C' as noted in the comments, and maybe partly answer 'D' also.
You need a video streaming server that supports one of the popular adaptive bitrate (ABR) streaming protocols, DASH or HLS. ABR lets the client device or player download the video in chunks, e.g. 10-second chunks, and select each next chunk from the bitrate most appropriate to the current network conditions.
There are open-source streaming servers available, such as GStreamer, and licensed ones like Wowza, which you can use if you want to host the videos yourself.
For some example of ABR see this answer: https://stackoverflow.com/a/42365034/334402
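To make option C concrete: the quality levels are typically produced ahead of time by transcoding the highest-quality upload into several renditions. A rough sketch using the fluent-ffmpeg library mentioned earlier on this page (paths, sizes, and bitrates are illustrative; you would run one pass per quality level and list them all in a master playlist):

const ffmpeg = require('fluent-ffmpeg');

// Produce a single 720p HLS rendition; repeat with other sizes/bitrates
// (e.g. 480p, 1080p) for the remaining quality levels.
ffmpeg('/path/to/source.mp4')
    .size('?x720')
    .videoBitrate('2500k')
    .outputOptions([
        '-hls_time 10',              // ~10-second chunks, as described above
        '-hls_playlist_type vod',
    ])
    .output('/path/to/out/720p.m3u8')
    .on('end', () => console.log('720p rendition done'))
    .run();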
I currently need to extract snapshot(s) from an IP Camera using RTSP on a web page.
The VLC web plugin works well for playing the stream, but before I get my hands dirty with its JavaScript API, can someone tell me whether the API can take snapshot(s) of the stream, the way it's done in VLC Media Player? That capability is not mentioned on the page above.
If the answer is 'No', please give me some other way to do this.
Thanks in advance.
Dang Loi.
The VLC plugin only provides metadata properties accessible from JavaScript.
For this reason there is no way to access the bitmap/video itself, as plugins run sandboxed in the browser. The only way to obtain such data would be if the plugin itself provided a mechanism for it.
The only way to grab a frame is therefore to use a generic screen grabber (such as SnagIt), though of course without the ability to control it from JavaScript.
You could, as an option, look into the HTML5 video element to see if it can handle your video source. In that case you could grab frames, draw them to a canvas, and save the canvas content as an image.
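A sketch of that frame grab, assuming the stream plays in a <video> element and is served same-origin (otherwise the canvas is tainted and toDataURL will throw):

const video = document.querySelector('video');
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;          // match the native frame size
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0);
const snapshot = canvas.toDataURL('image/png'); // data URL you can save or display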
Another option in case the original stream format isn't supported, is to transcode it on the fly to a format supported by the browser. Here is one such transcoder.
I'm making a website just for fun, and I want to be able to cycle through many different MP3s that are stored on my computer for now but will eventually be stored on my domain. I also want to be able to seek to a different point in a song. This is just a starting point; I would eventually like a user to be able to select a folder from their machine and play that music through my site. Is this possible? If so, how should I implement it? I have seen a couple of different players that look alright, but I am not too sure how to get a player onto my site. I am very new to HTML5, JavaScript, and CSS. I also saw the HTML5 audio tag, but I don't know if that is the best way to go.
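The built-in audio element already covers playback and seeking, so it's a reasonable starting point. A sketch, with a placeholder file path:

// Create a player, start playback, and seek (the file path is a placeholder).
const audio = new Audio('music/track1.mp3');
audio.controls = true;                 // show the browser's built-in controls
document.body.appendChild(audio);
audio.play();                          // may require a user gesture in modern browsers
audio.currentTime = 30;                // seek to 30 seconds in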