I've built an HTML5 video player that I load into a canvas to manipulate and then draw onto a second canvas for display. The video starts out quite slow, and the frame rate only gets worse each time it is played. All I am currently manipulating is the color value while the video is paused, but I will eventually be doing real-time manipulation throughout videos that will be posted in the future.
I used the tutorial below to learn this trick: https://www.youtube.com/watch?v=zjQzP3mOXdc
Here is the relevant code, but there may be interference coming from elsewhere, so feel free to check the source code at the link at the bottom.
var v = document.getElementById('video');
var color = "#DA7AC1";

var processes = {
    timerCallback: function () {
        if (this.v2.paused || this.v2.ended) {
            return;
        }
        // Draw the current video frame to the input canvas, process it,
        // then schedule the next pass.
        this.ctxIn.drawImage(this.v2, 0, 0, this.width, this.height);
        this.pixelScan();
        var self = this;
        setTimeout(function () {
            self.timerCallback();
        }, 0);
    },
    doLoad: function () {
        this.v2 = document.getElementById("video");
        this.cIn = document.getElementById("cIn");
        this.ctxIn = this.cIn.getContext("2d");
        this.cOut = document.getElementById("cOut");
        this.ctxOut = this.cOut.getContext("2d");
        var self = this;
        this.v2.addEventListener("playing", function () {
            // Size both canvases to the intrinsic video resolution.
            self.width = self.v2.videoWidth;
            self.height = self.v2.videoHeight;
            self.cIn.width = self.v2.videoWidth;   // was the bare global "cIn"
            self.cIn.height = self.v2.videoHeight;
            self.cOut.width = self.v2.videoWidth;  // was the bare global "cOut"
            self.cOut.height = self.v2.videoHeight;
            self.timerCallback();
        }, false);
    },
    pixelScan: function () {
        var frame = this.ctxIn.getImageData(0, 0, this.width, this.height);
        for (var i = 0; i < frame.data.length; i += 4) {
            // Standard luma weights for grayscale conversion.
            var grayscale = frame.data[i] * .3 + frame.data[i + 1] * .59 + frame.data[i + 2] * .11;
            frame.data[i] = grayscale;
            frame.data[i + 1] = grayscale;
            frame.data[i + 2] = grayscale;
        }
        this.ctxOut.putImageData(frame, 0, 0);
    }
};
http://coreytegeler.com/ethan/
Any and all help would be greatly appreciated! Thanks!
Reason 1
Try to adjust your timer, avoiding 0 as the timeout value:
setTimeout(function() {
    self.timerCallback();
}, 34);
34 ms is plenty, as video frame rates are typically never more than 30 FPS (NTSC) or 25 FPS (PAL), i.e. 1000 / 30. If you use 0, you risk stacking up your calls, which means the browser will be busy trying to empty the event queue.
If you use anything lower than 33-34 ms, you end up processing the same frame twice or more, which of course is unnecessary (your video is actually 29.97 FPS/NTSC, so you might want to keep 34 ms).
Reason 2
The video resolution is also full HD (1920x1080), which is a bit too much for canvas and JavaScript to process in real time on a typical consumer computer. Try to reduce the video size so that a normally spec'ed computer will be able to process the data.
Reason 3 (in part)
You don't need two on-screen canvases, or even an on-screen video. Try creating these tags dynamically without inserting them into the DOM. Use a single on-screen canvas and draw the result to that (you can putImageData from one canvas to another).
Reason 4 (in part)
Ideally, replace setTimeout with a requestAnimationFrame approach, as this improves synchronization and efficiency considerably. You can implement a toggle to reduce the effective rate to, for example, 30 FPS, as you don't need to process each frame twice (ref. the 30 FPS video frame rate).
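For illustration, a timestamp-based throttle achieves the same effect as a toggle. This is just a sketch, where processFrame is a hypothetical stand-in for the drawImage + pixelScan work above:
// Sketch only: a requestAnimationFrame loop throttled to ~30 FPS.
var lastTime = 0;
var frameInterval = 1000 / 30; // target interval to match ~30 FPS video

function loop(now) { // "now" is the timestamp rAF passes in
    requestAnimationFrame(loop);
    if (now - lastTime < frameInterval) return; // too early, skip this tick
    lastTime = now;
    processFrame(); // hypothetical: drawImage + pixelScan from the code above
}
requestAnimationFrame(loop);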
Update
To create these elements dynamically (ref reason 3) you can do something like this:
var canvas = document.createElement('canvas'),
    video = document.createElement('video'),
    ctx = canvas.getContext('2d');

video.preload = 'auto';
video.addEventListener('canplay', start, false);

if (video.canPlayType('video/mp4')) {
    video.src = 'videoUrl.mp4';
} else if ...etc.
Then, when the video has loaded enough data (on loadedmetadata or canplay), you set the off-screen (and on-screen) canvas elements to the size of the video:
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
Then, while the video is playing, process its buffer and copy the result to the on-screen canvas you defined before.
You don't have to have an off-screen canvas - I merely mention this because, IIRC, your original code used an "in" and an "out" canvas. You can simply use a single on-screen canvas and the off-screen video: draw the video frame to the canvas, process it, and put the processed data back. That should work fine in this case too.
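For the single-canvas variant, the per-frame work could be as small as this (a sketch, reusing the canvas, ctx and video names created above):
// Sketch: one on-screen canvas, off-screen video.
function processFrame() {
    ctx.drawImage(video, 0, 0); // draw the current video frame
    var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    // ...manipulate frame.data here (e.g. the grayscale loop)...
    ctx.putImageData(frame, 0, 0); // put the processed pixels back
}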
I ran a profile in Chrome and it points to line 46 as taking up the most CPU.
setTimeout(function() {
    self.timerCallback();
}, 0);
Perhaps increasing the timeout will stop it from lagging.
I had the same issue and tried a number of fixes. I was using Premiere Elements, which didn't export to MP4, and was using HandBrake to convert the format. I also tried FFmpeg to do the conversion, but neither worked.
What I did was switch to Kdenlive as my video editor; it exported directly to MP4, and that video worked perfectly.
So if you have this slow render issue, it is probably a problem with the video encoding. The easiest fix is to get a high-quality video editor like Premiere Pro, Final Cut, or Kdenlive. Kdenlive is free, but it has a steep learning curve and poor public documentation.
Related
I was trying to make a basic media recorder with the MediaRecorder API, which is fairly straightforward: get the stream from getDisplayMedia, then record it.
The problem: this only records at the maximum screen size, but no more. So if my screen is 1280x720, it will not record at 1920x1080.
This may seem quite obvious, but my intent is that it should record the smaller resolution inside the bigger one. For example:
Picture a red rectangle representing what my actual screen records, surrounded by a black rectangle of empty space; the entire video is now a higher resolution, 1920x1080. That is useful for YouTube, since YouTube scales down anything between 720 and 1080 resolution, which is a problem.
Anyway, I tried attaching the stream from getDisplayMedia to a video element (vid.srcObject = stream), then made a new canvas with a resolution of 1920x1080, and in the animation loop just did ctx.drawImage(vid, offsetX, offsetY). Outside the loop, where the MediaRecorder was created, I simply did newStream = myCanvas.captureStream() as per the API documentation and passed that to the MediaRecorder. The problem is that because of the huge canvas overhead everything is really slow, and the frame rate is absolutely terrible (I don't have a video example, but just test it yourself).
So: is there some way to optimize the canvas so it doesn't hurt the frame rate? (I looked into OffscreenCanvas, but I couldn't find a way to get a stream from it to use with MediaRecorder, so it didn't really help.) Is there a better way to capture and record the canvas, or a better way to record the screen inside a larger resolution, in client-side JavaScript? If not, is there some kind of real-time video encoder (ffmpeg is too slow) that could run on a server, with each canvas frame sent to it and saved there? Is there some better way to make a video recorder with any kind of JavaScript, client or server or both?
Don't know what your code looks like, but I managed to get a smooth experience with this piece of code:
(You will also find very good examples here: https://mozdevs.github.io/MediaRecorder-examples/)
<!doctype html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <script src="script.js"></script>
</head>
<body>
    <canvas id="canvas" style="background: black"></canvas>
</body>
// DISCLAIMER: The structure of this code is largely based on examples
// given here: https://mozdevs.github.io/MediaRecorder-examples/.
window.onload = function () {
    navigator.mediaDevices.getDisplayMedia({ video: true })
        .then(function (stream) {
            var video = document.createElement('video');
            // Use "video.srcObject = stream;" instead of
            // "video.src = URL.createObjectURL(stream);" to avoid errors
            // in the examples of https://mozdevs.github.io/MediaRecorder-examples/
            // credits to https://stackoverflow.com/a/53821674/5203275
            video.srcObject = stream;
            video.addEventListener('loadedmetadata', function () {
                initCanvas(video);
            });
            video.play();
        });
};

function initCanvas(video) {
    var canvas = document.getElementById('canvas');
    // Margins around the video inside the canvas.
    var xMargin = 100;
    var yMargin = 100;
    var videoWidth = video.videoWidth;
    var videoHeight = video.videoHeight;
    canvas.width = videoWidth + 2 * xMargin;
    canvas.height = videoHeight + 2 * yMargin;
    var context = canvas.getContext('2d');
    var draw = function () {
        // requestAnimationFrame(draw) will render the canvas as fast as possible;
        // if you want to limit the framerate to a particular value take a look at
        // https://stackoverflow.com/questions/19764018/controlling-fps-with-requestanimationframe
        requestAnimationFrame(draw);
        context.drawImage(video, xMargin, yMargin, videoWidth, videoHeight);
    };
    requestAnimationFrame(draw);
}
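To actually record the composited canvas (which the snippet above stops short of), you can feed its stream into a MediaRecorder. A minimal sketch, assuming the on-screen canvas from initCanvas:
// Sketch: record the on-screen canvas with MediaRecorder.
var canvas = document.getElementById('canvas'); // the same on-screen canvas
var recordedChunks = [];
var canvasStream = canvas.captureStream(30); // capture at ~30 FPS
// mimeType support varies by browser; video/webm works in Chrome/Firefox.
var recorder = new MediaRecorder(canvasStream, { mimeType: 'video/webm' });

recorder.ondataavailable = function (e) {
    if (e.data.size > 0) recordedChunks.push(e.data);
};
recorder.onstop = function () {
    var blob = new Blob(recordedChunks, { type: 'video/webm' });
    // e.g. offer a download via URL.createObjectURL(blob)
};
recorder.start();
// ...later, call recorder.stop() to finish the recording.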
Suppose you use the Web Audio API to play a pure tone:
ctx = new AudioContext();
src = ctx.createOscillator();
src.frequency.value = 261.63; // play middle C (frequency is an AudioParam)
src.connect(ctx.destination);
src.start();
But, later on you decide you want to stop the sound:
src.stop();
From this point on, src is now completely useless; if you try to start it again, you get:
src.start()
VM564:1 Uncaught DOMException: Failed to execute 'start' on 'AudioScheduledSourceNode': cannot call start more than once.
at <anonymous>:1:5
If you were making, say, a little online keyboard, you'd be constantly turning notes on and off. It seems really clunky to remove the old object from the audio node graph, create a brand-new object, and connect() it into the graph (and then discard the object later), when it would be simpler to just turn it on and off as needed.
Is there some important reason the Web Audio API does things like this? Or is there some cleaner way of restarting an audio source?
Use connect() and disconnect(). You can then change the values of any AudioNode to change the sound.
(The button is there because an AudioContext requires a user action before it will run in a snippet.)
play = () => {
    d.addEventListener('mouseover', () => src.connect(ctx.destination));
    d.addEventListener('mouseout', () => src.disconnect(ctx.destination));
    ctx = new AudioContext();
    src = ctx.createOscillator();
    src.frequency.value = 261.63; // play middle C
    src.start();
}
div {
    height: 32px;
    width: 32px;
    background-color: red
}
div:hover {
    background-color: green
}
<button onclick='play();this.disabled=true;'>play</button>
<div id='d'></div>
This is exactly how the Web Audio API works. Sound generator nodes like oscillator nodes and audio buffer source nodes are intended to be used once. Every time you want to play your oscillator, you have to create it and set it up, just as you said. I know it seems like a hassle, but you can abstract it into a play() method that handles those details for you, so you don't have to think about it every time you play an oscillator. Also, don't worry about the performance implications of creating so many nodes; the Web Audio API is intended to be used this way.
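For example, a small helper that creates and wires up a fresh oscillator on every call might look like this (a sketch, not part of any library):
var ctx = new AudioContext(); // in real pages, create/resume after a user gesture

// Sketch: one new oscillator per note, as the API intends.
function play(frequency, duration) {
    var osc = ctx.createOscillator();
    osc.frequency.value = frequency;
    osc.connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + duration); // schedule the stop up front
}

play(261.63, 0.5); // middle C for half a second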
If you just want to make music on the internet, and you're not as interested in learning the ins and outs of the web audio api, you might be interested in using a library I wrote that makes things like this easier: https://github.com/rserota/wad
I am working on a 12-voice polyphonic synthesizer with 2 oscillators per voice.
I now never stop the oscillators; instead, I disconnect them. You can do that with setTimeout: for the delay, take the longest release phase (of the 2) from the amp envelope for this set of oscillators, subtract AudioContext.currentTime, and multiply by 1000 (setTimeout works in milliseconds, Web Audio in seconds).
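A rough sketch of that idea (the names oscillators and releaseEnd are illustrative, not from any library):
// Sketch: disconnect (not stop) a voice's oscillators once the amp
// envelope's longest release phase has finished.
// releaseEnd: absolute time, in AudioContext seconds, when the release ends.
function scheduleDisconnect(oscillators, releaseEnd, audioCtx) {
    var delayMs = (releaseEnd - audioCtx.currentTime) * 1000; // seconds -> ms
    setTimeout(function () {
        oscillators.forEach(function (osc) {
            osc.disconnect(); // the oscillator keeps running, just unrouted
        });
    }, Math.max(0, delayMs));
}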
I'm creating a PDF output tool using jsPDF but need to add multiple pages, each holding a canvas image of a video frame.
I am stuck on the logic, as I can't work out the best way to queue the operations and wait on events to achieve the desired result.
To start I have a video loaded into a video tag and can get or set its seek point simply with:
video.currentTime
I also have an array of video seconds like the following:
var vidSecs = [1,9,13,25,63];
What I need to do is loop through this array, seek to each of those times in the video, create a canvas at that point, and then add each canvas to a PDF page.
I have a function that captures a canvas from a video frame, as follows:
function capture_frame(video_ctrl, width, height) {
    if (width == null) {
        width = video_ctrl.videoWidth;
    }
    if (height == null) {
        height = video_ctrl.videoHeight;
    }
    var canvas = document.createElement('canvas'); // "var" added: was an implicit global
    canvas.width = width;
    canvas.height = height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(video_ctrl, 0, 0, width, height);
    return canvas;
}
This function works fine in conjunction with the following to add an image to the PDF:
function addPdfImage(pdfObj, videoObj) {
    pdfObj.addPage();
    pdfObj.text("Image at time point X:", 10, 20);
    var vidImage = capture_frame(videoObj, null, null);
    var dataURLWidth = 0;
    var dataURLHeight = 0;
    if (videoObj.videoWidth > pdfObj.internal.pageSize.width) {
        dataURLWidth = pdfObj.internal.pageSize.width;
        dataURLHeight = (pdfObj.internal.pageSize.width / videoObj.videoWidth) * videoObj.videoHeight;
    } else {
        dataURLWidth = videoObj.videoWidth;
        dataURLHeight = videoObj.videoHeight;
    }
    pdfObj.addImage(vidImage.toDataURL('image/jpg'), 'JPEG', 10, 50, dataURLWidth, dataURLHeight);
}
My confusion is over how best to call these pieces of code while looping through the vidSecs array: setting video.currentTime requires waiting for the video's onseeked event to fire before the frame can be captured and added to the PDF.
I've tried the following, but I only get the last image, as the loop completes before the onseeked event fires and calls the frame-capture code.
for (var i = 0; i < vidSecs.length; i++) {
    video.currentTime = vidSecs[i];
    video.onseeked = function () {
        addPdfImage(jsPDF_Variable, video);
    };
}
Any thoughts much appreciated.
This is not a real answer but a comment, since I am developing a similar application and have no solution yet.
I am trying to extract video frames from a live webcam stream and save them to a canvas/context, updated every 1-5 seconds.
How to loop HTML5 webcam video + snap photo with delay and photo refresh?
I created 2 canvases to be populated by a setTimeout(5000) event, but on a test run I don't get a 5-second delay between the canvases/contexts; sometimes two contexts that should be 5 seconds apart get populated with an image at the same time.
So I am trying to implement
Draw HTML5 Video onto Canvas - Google Chrome Crash, Aw Snap
var toggle = true;

function loop() {
    toggle = !toggle;
    if (toggle) {
        if (!v.paused) requestAnimationFrame(loop);
        return;
    }
    /// draw a video frame every other rAF tick (~1/30 s)
    ctx.drawImage(v, 0, 0);
    /// loop while the video is playing
    if (!v.paused) requestAnimationFrame(loop);
}
to replace setInterval/setTimeout, so that the video and the extracted frames stay properly synced:
"Use requestAnimationFrame (rAF) instead. The 20ms makes no sense. Most video runs at 30 FPS in the US (NTSC system) and at 25 FPS in Europe (PAL system). That would be 33.3ms and 40ms respectively."
I am afraid HTML5 provides no quality support for synced, real-time live video processing via canvas/context, since HTML5 offers no proper timing: it was designed to be event-driven, not to run as real-time application code (C, C++, ...).
My 100+ search-engine queries did not turn up a single HTML5 app of the kind I intend to develop.
What worked for me was snapping a photo from the webcam video input, controlled by a button-click event.
If I am wrong, please correct me.
Two approaches:
create a new video element for every seek event (code provided by Chris West)
reuse the video element via async/await (code provided by Adrian Wong); see the sketch below
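A minimal sketch of the second approach, reusing one video element and awaiting each seeked event (built on the addPdfImage function from the question):
// Sketch: seek sequentially, waiting for "seeked" before each capture.
async function capturePdfFrames(video, vidSecs, pdf) {
    for (const t of vidSecs) {
        const seeked = new Promise(function (resolve) {
            video.addEventListener('seeked', resolve, { once: true });
        });
        video.currentTime = t;   // start the seek
        await seeked;            // wait until the frame is actually ready
        addPdfImage(pdf, video); // capture the frame and add a PDF page
    }
}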
I have a JavaScript timer.
It refreshes the img src on a 200 ms interval.
I have taken a look at the canvas object, and I am unsure whether it is recommended to use a canvas instead of the img element.
I am running tests on both and cannot see any difference in performance.
This is my code for the timer/img approach:
// live4x4 is the on-page <img>; createGuid() and reSync come from elsewhere on the page.
var timer4x4;
var cache4x4 = new Image();
var alias = 'test';
var lastUpdate = 0;

function setImageSrc4x4(src) {
    live4x4.src = src;
    timer4x4 = window.setTimeout(swapImages4x4, 200);
}

function swapImages4x4() {
    cache4x4.onload = function () {
        setImageSrc4x4(cache4x4.src);
    };
    cache4x4.onerror = function () {
        setImageSrc4x4("http://127.0.0.1/images/ERROR.jpg");
    };
    cache4x4.src = null;
    cache4x4.src = 'http://127.0.0.1/Cloud/LiveXP.ashx?id=' + createGuid() + '&Alias=' + alias + '&ReSync=' + reSync;
    reSync = 0;
}
NB: will add canvas code in a bit.
I am streaming images from my client desktop PC to my web server and trying to display as many images (FPS) as possible. Each image is a container for 4 smaller images, stitched together on the client and sent to the server.
I have Googled it, and the advice is to use canvas if you are doing pixel manipulation and animation.
But I am just doing animation.
Thanks
The canvas element was designed for drawing to / editing / interacting with images inside it. If all you do is display an image, then you don't need that, and a simple img is the semantically correct choice (with the added bonus of being compatible with more devices).
In both cases the performance will be similar (if not the same), because the only expensive operation is downloading the image.
While you won't notice much of a difference performance-wise, since you still cannot fully rely on HTML5 support yet, it is probably best to go with the img solution for now.
I am writing an HTML5 app which manipulates video on a canvas.
I am also showing a custom (self-drawn) mouse cursor over it.
To update my app I am calling setInterval to draw stuff to the canvas.
As I am manipulating video, I need very high refresh rates plus lots of processor time to draw (~50 ms per frame).
My draw function causes my on-mouse function to starve (this is acceptable to me).
But after it has finished starving, it responds to old events. It can take up to 3 frames for the mouse to catch up so that I can render the cursor in the right position, meaning you can see the cursor "crawling" after you've stopped moving the mouse.
Is there a way to give onmousemove events a higher priority than my setInterval(drawFunction)?
While in the draw function, can I "ask" whether there are pending mouse events and revoke my current call to draw?
Is there some other hack I can use? (I can draw to a back buffer in a Web Worker, but as I understand from reading up on HTML5, this is only thread abstraction [threads are not concurrent].)
You can't prioritize event handling, at least not directly.
What you might consider is having your own timer-driven code check for pending mouse events. The mouse event handler would just put a work request into a queue; your video-manipulating code could check that queue and handle operations as it sees fit. Thus the real mouse work would also be done in the timer code.
Edit: Here's what I'm thinking. You'd still have handlers for your mouse events:
var eventQueue = [];
canvasElement.onmousemove = function (evt) {
    evt = evt || event;
    eventQueue.push({ type: evt.type, x: evt.clientX, y: evt.clientY, when: new Date() });
};
Thus the mouse handlers would not do any actual work; they just record the details in a list of events and then return.
The main, timer-driven loop can then check the queue of events and decide what to do. For example, if it sees a whole string of "mousemove" events, it could compute the net change in position over all of them and update the cursor position just once.
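Put concretely, the timer-driven loop could drain the queue like this (a sketch building on the eventQueue above; cursorX/cursorY are hypothetical cursor-position variables):
// Sketch: collapse all queued mousemove events into one cursor update.
function drainMouseEvents() {
    var last = null;
    for (var i = 0; i < eventQueue.length; i++) {
        if (eventQueue[i].type === 'mousemove') last = eventQueue[i];
    }
    eventQueue.length = 0; // clear the queue in place
    if (last) {
        cursorX = last.x; // hypothetical globals read by the draw function
        cursorY = last.y;
    }
}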
You certainly can use a Web Worker to manipulate your pixels in the background. Web Workers do run concurrently: it doesn't seem to be part of the spec, but every implementation runs workers in a separate process. So your main script would remain responsible for updating your custom cursor, while you pass the canvas ImageData off to the background worker for the video processing. Note that you can't send an actual Canvas or Context to the worker, as workers have no DOM access.
There is no way to give events higher priority; they seem to be serviced on a first-come, first-served basis. Assuming you are using setTimeout, one thing that might help is to break your tasks into smaller, restartable chunks. For example, if you are processing an image like this:
for (y = 0; y < height; y++) {
    // deal with a row of pixels
}
// show image
You could do this instead:
var INTERVAL = height / 4;
for (y = old_y; y < height && y < old_y + INTERVAL; y++) {
    // deal with a row of pixels
}
if (y == height) {
    // show image
}
old_y = (y == height) ? 0 : y;
Then the other events will have a 4x greater chance (or whatever factor, depending on INTERVAL) of being dealt with.
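To make the chunks actually yield to the event loop, each chunk can reschedule itself with setTimeout. A sketch, where height, processRow and showImage are hypothetical stand-ins for the image being processed:
// Sketch: process the image in INTERVAL-row chunks, yielding between
// chunks so pending mouse events can be serviced.
var old_y = 0;
var INTERVAL = height / 4;

function processChunk() {
    var limit = Math.min(height, old_y + INTERVAL);
    for (var y = old_y; y < limit; y++) {
        processRow(y); // hypothetical per-row pixel work
    }
    old_y = (limit === height) ? 0 : limit;
    if (limit < height) {
        setTimeout(processChunk, 0); // yield, then continue
    } else {
        showImage(); // hypothetical display step
    }
}
setTimeout(processChunk, 0);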