I'm working on a web project where a user chooses a mobile mockup design and saves some chat conversations.
As output, the application should produce a high-quality video (1080p at least) of the saved chat, so that it looks like a real chat conversation being captured.
Right now I'm drawing the mockup and the conversation on an HTML5 canvas and recording it with the canvas.captureStream() method.
It can record a canvas up to 1280px wide, but when I tried increasing the size to reach 1080p, the canvas animations slowed down and the browser sometimes stopped responding.
I'm done googling how to optimize the canvas and everything else that might help.
It looks like canvas can no longer do the job for me, so is there any way to record the DOM and render it as video?
I was using the captureStream method of the canvas:
const stream = canvas.captureStream();
and a MediaRecorder to capture it:
let options = {mimeType: 'video/webm'};
let mediaRecorder = new MediaRecorder(stream, options);
I expect to find a way of recording the DOM as high-quality video, so that I can run the chat with JavaScript and record it to produce the desired output.
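For reference, a fuller sketch of the capture setup looks roughly like this; the frame rate, bitrate and 10-second duration below are placeholders, not values from my actual code:

const canvas = document.querySelector('canvas');
const stream = canvas.captureStream(30); // request roughly 30 fps

const options = {
  mimeType: 'video/webm',
  videoBitsPerSecond: 8000000 // ask for a higher bitrate for 1080p output
};
const mediaRecorder = new MediaRecorder(stream, options);

const chunks = [];
mediaRecorder.ondataavailable = e => chunks.push(e.data);
mediaRecorder.onstop = () => {
  const blob = new Blob(chunks, { type: 'video/webm' });
  const url = URL.createObjectURL(blob); // download or upload the finished recording
  console.log('Recording ready at', url);
};

mediaRecorder.start();
// ... run the chat animation on the canvas ...
setTimeout(() => mediaRecorder.stop(), 10000); // stop after 10 s, for example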
I need to show video from the user's camera and virtual objects created with WebGL on a web page in a single HTML element, probably either <canvas> or <video>.
I can successfully get the user's video as a stream from the navigator's media devices and show it in a <video> element.
I can successfully create virtual visual objects with WebGL and show them in a <canvas> element, using someone else's example code (from MDN).
I need to mix them in a single HTML element. How can I achieve that?
My further research shows that there is a captureStream() method on the HTMLMediaElement interface. Both <video> and <canvas> have this method. I can capture the stream from such an element and use it for something else, like attaching it to another HTML element (though probably not to a canvas element) or to a WebRTC peer connection as a source, or recording it. But this overwrites the previous stream.
Then I found that a stream, called a MediaStream, has tracks inside it, like video tracks, audio tracks, even text tracks. More can be added with the addTrack method of MediaStream, and they can be retrieved with the getTracks method. I have added the video track from my <canvas> element's stream to the <video> element's stream; however, I can still only see the original video track from the user media in the <video> element.
What am I missing to achieve that?
<html>
  <body>
    getting video stream from camera into screen
    <video autoplay></video>

    getting virtual objects into screen
    <canvas id="glcanvas" width="640" height="480"></canvas>

    <!-- webgl code that draws a rotating colored cube on the canvas's webgl context -->
    <script src="gl-matrix.js"></script>
    <script src="webgl-demo.js"></script>
    <script>
      // getting video stream from camera into screen
      const video = document.querySelector("video");
      navigator.mediaDevices.getUserMedia({video: true})
        .then(stream => {
          let canv = document.querySelector("#glcanvas");
          let canvstrm = canv.captureStream();
          // get the video track from the canvas stream and add it to the user media stream
          let canvstrmtrack = canvstrm.getTracks()[0];
          stream.addTrack(canvstrmtrack);
          video.srcObject = stream;
        });
    </script>
  </body>
</html>
Complete gist
A video element can only play a single video track at a time.
Support for this is found in the MediaCapture spec:
Since the order in the MediaStream's track set is undefined, no requirements are put on how the AudioTrackList and VideoTrackList is ordered
And in HTML:
Either zero or one video track is selected; selecting a new track while a previous one is selected will unselect the previous one.
It sounds like you expected the graphics to overlay the video, with e.g. black as transparent.
There are no good video composition tools in the web platform at the moment. About the only option is to repeatedly draw frames from the video element into a canvas, and use canvas.captureStream, but it is resource intensive and slow.
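A rough sketch of that draw-loop approach (element ids and sizes are assumed here, and the WebGL context may need preserveDrawingBuffer: true, or at least a transparent background, for its pixels to be readable when drawn on top):

const cam = document.querySelector('video');          // camera <video>
const glCanvas = document.getElementById('glcanvas'); // WebGL <canvas>

const composite = document.createElement('canvas');
composite.width = 640;
composite.height = 480;
const ctx = composite.getContext('2d');

function drawFrame() {
  // camera frame first, then the WebGL frame on top of it
  ctx.drawImage(cam, 0, 0, composite.width, composite.height);
  ctx.drawImage(glCanvas, 0, 0, composite.width, composite.height);
  requestAnimationFrame(drawFrame);
}
requestAnimationFrame(drawFrame);

// one mixed stream, usable for display, recording or WebRTC
const mixedStream = composite.captureStream(30);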
If you are merely doing this for playback (not recording or transmitting the result), then you might be able to achieve this effect much more cheaply by positioning a canvas with transparency on top of the video element using CSS. This approach also avoids cross-origin restrictions.
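For that playback-only case, the overlay could be as simple as something like this (dimensions are illustrative, and the canvas must be drawn with a transparent background so the video shows through):

<div style="position: relative; width: 640px; height: 480px;">
  <video autoplay style="position: absolute; top: 0; left: 0;"></video>
  <canvas id="glcanvas" width="640" height="480"
          style="position: absolute; top: 0; left: 0;"></canvas>
</div>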
In JavaScript, I have come across a situation where I need to play two audio files simultaneously in iOS Safari. One is the voice script, the other is the background sound (playing in a loop). I've used the following code to achieve this:
var audio = new Audio();
audio.src = "voice.mp3";
audio.type = "audio/mp3";
audio.play();
var bg_audio = new Audio();
bg_audio.src = "background.mp3";
bg_audio.type = "audio/mp3";
bg_audio.loop = true;
bg_audio.play();
However, this creates an issue in the iOS Control Center (lock screen): sometimes it shows the progress bar of the voice player, and sometimes it shows the controls for the background player. I want to show the controls for the voice player only. Is there any workaround?
P.S. I've been searching for an answer everywhere and even posted on Apple's Developer Forum, but didn't get any response.
Thanks!
In iOS Safari only one audio stream can play at a time, so there is no layering or overlapping of sounds.
You need to use the Web Audio API instead.
This article may be helpful: web audio on iOS
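A minimal sketch of that approach, assuming the same two files as above (older iOS Safari needs the prefixed webkitAudioContext and the callback form of decodeAudioData):

const ctx = new (window.AudioContext || window.webkitAudioContext)();

async function playTrack(url, loop) {
  const res = await fetch(url);
  const data = await res.arrayBuffer();
  const buffer = await ctx.decodeAudioData(data);
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = loop;
  source.connect(ctx.destination);
  source.start();
  return source;
}

// both sounds play through one AudioContext, so they can overlap on iOS
playTrack('voice.mp3', false);
playTrack('background.mp3', true);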
I'm working on a web application that lets the user create a scene using a canvas element; then, using the Web Audio API, the scene changes in real time according to the extracted frequencies (creating audio visualizations). I implemented the interactivity with the canvas using event listeners.
My question is whether there is a way to disable canvas interaction (stop the event listeners) while the song is playing (while the scene is changing), so that once the user stops it, the scene can be modified again.
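To illustrate the kind of toggle I have in mind, something roughly like this (element ids are placeholders, and this simply ignores pointer input while the audio is playing):

const sceneCanvas = document.getElementById('scene'); // assumed canvas id
const player = document.getElementById('player');     // assumed <audio> element

player.addEventListener('play', () => {
  sceneCanvas.style.pointerEvents = 'none'; // canvas ignores clicks and touches
});
const enableEditing = () => { sceneCanvas.style.pointerEvents = 'auto'; };
player.addEventListener('pause', enableEditing);
player.addEventListener('ended', enableEditing);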
Thank you all.
How could you programmatically convert a YouTube/Vimeo video into a series of animated images that each reflect 5 seconds of the video? Essentially, the goal is to deconstruct the video into silent, 5-second animated pictures -- think "moving pictures" from Harry Potter.
One option is to slice the video into 5-second video chunks, but the output should feel like animated GIFs ... that is, play instantly, be lighter than combining 150 pictures into one JavaScript slideshow (assuming 30 FPS), but have the image quality of a JPG or PNG. If this is possible with video, then it's an option we're open to exploring.
Another option is to take screen shots of the video, but that is not programmatic.
Ideas?
The output needs to get rendered in HTML5 on Mobile Safari.
You've a bit of a problem here: quality is directly related to file size. So if you create a video at 30 fps (higher than regular broadcast TV, really?), you're going to run into issues keeping it light and fast-loading.
I don't know if I'd go down the route of making actual GIFs if you're looking for a high-ish framerate, but if it's for a web project, the HTML5 video tag should be able to give you autoplaying video that integrates into the page fairly seamlessly.
What you would want to do here is take a program like HandBrake, encode the video at the highest possible compression settings (lowest quality/framerate), and slowly bring the quality up until you have something you think is the minimum you can get away with.
From there, you can look into scripting the process with those settings and something like FFmpeg. You'll probably also want to strip the video metadata to save as much file space as possible.
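For example, a segmenting pass could look roughly like this (these are standard FFmpeg options; the -crf value and file names are placeholders):

ffmpeg -i input.mp4 -an -c:v libx264 -crf 28 \
       -f segment -segment_time 5 -reset_timestamps 1 chunk_%03d.mp4

That drops the audio (-an), re-encodes at a fairly aggressive compression level, and writes silent 5-second files that could each be dropped into an autoplaying, looping video tag.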
I'm building a video for my website with HTML5. Ideally, I'd have only one silent video file, and five different audio tracks in different languages that sync up with the video.
Then I'd have a button that allows users to switch between audio tracks, even as the video is playing; and the correct audio track would come to life (without the video pausing or starting over or anything; much like a DVD audio track selection).
I can do this quite simply in Flash, but I don't want to. There has to be a way to do this in pure HTML5 or HTML5+jQuery. I'm thinking you'd play all the audio files at 0 volume, and only increase the volume of the active track... but I don't know how to even do that, let alone handle it when the user pauses or rewinds the video...
Thanks in advance!
Synchronization between audio and video is far more complex than simply starting the audio and video at the same time. Sound cards will play back at slightly different rates. (What is 44.1 kHz to me might actually be 44.095 kHz to you.)
Often, the video is synchronized to the audio stream, but the player is what handles this. By loading up multiple objects for playback, you are effectively pulling them out of sync.
The only way this is going to work is if you can find a way to control the different audio streams from the video player. I don't know if this is possible.
Instead, I propose that you encode the video multiple times, with the different audio streams. You can use FFmpeg for this, and even automate the process, depending on your workflow. Switching among streams becomes a problem, but most video players are robust enough to guess the byte offset in the file when given the bitrate.
If you only needed two languages, you could simply adjust the balance between a left and right stereo audio channel.
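As an aside, a rough Web Audio sketch of that balance trick (the node names are the standard API; the wiring is illustrative):

const ac = new AudioContext();
const source = ac.createMediaElementSource(document.querySelector('video'));
const splitter = ac.createChannelSplitter(2);
const leftGain = ac.createGain();   // language A lives in the left channel
const rightGain = ac.createGain();  // language B lives in the right channel

source.connect(splitter);
splitter.connect(leftGain, 0);
splitter.connect(rightGain, 1);
leftGain.connect(ac.destination);
rightGain.connect(ac.destination);

// select language A: keep the left channel, silence the right
leftGain.gain.value = 1;
rightGain.gain.value = 0;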
If you're willing to let all five tracks download, why not just mux them into the video? Videos are not limited to a single audio track (even AVI could do multiple audio tracks). Then syncing should be handled for you. You'd just enable/disable the audio tracks as needed.
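If the browser exposes HTMLMediaElement.audioTracks (support for it is limited), switching could look roughly like this:

const video = document.querySelector('video');

function selectAudioTrack(index) {
  // enable only the chosen track; the others are muted
  for (let i = 0; i < video.audioTracks.length; i++) {
    video.audioTracks[i].enabled = (i === index);
  }
}

selectAudioTrack(2); // e.g. switch to the third language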
It is doable with the Web Audio API.
Part of my program listens to video element events and stops or restarts audio tracks created with the Web Audio API. This gives me the ability to turn any of the tracks on and off in perfect sync.
There are some drawbacks.
There is no Web Audio API support in Internet Explorer, except for Edge.
The technique works with buffered audio only, and that's limiting. There are some problems with large files: https://bugs.chromium.org/p/chromium/issues/detail?id=71704
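A rough sketch of how that event-driven syncing can look (the AudioBuffer for the selected language is assumed to have been decoded already, e.g. with decodeAudioData):

const audioCtx = new AudioContext();
const video = document.querySelector('video');

let currentBuffer = null;  // AudioBuffer for the selected language
let currentSource = null;

function stopAudio() {
  if (currentSource) {
    currentSource.stop();
    currentSource = null;
  }
}

function startAudio() {
  stopAudio();
  currentSource = audioCtx.createBufferSource();
  currentSource.buffer = currentBuffer;
  currentSource.connect(audioCtx.destination);
  // start from the video's current position so both stay in sync
  currentSource.start(0, video.currentTime);
}

video.addEventListener('play', startAudio);
video.addEventListener('pause', stopAudio);
video.addEventListener('seeked', () => { if (!video.paused) startAudio(); });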