I have a weather webpage with a webcam image that uploads every 2 minutes.
I want to change the webpage.
I want the webcam image to be the background on the webpage, but I want that image to update itself once a minute.
The reason I want it to be the background image is so I can put weather report stickers on top of it (in the foreground) in the bottom corners or at the bottom of the page centered.
Right now the webcam takes the picture and uploads it to my domain host, and my webpage then pulls it down from there. It's fairly kludged together at this point, but it works. You can see it here
Any help appreciated
Greg
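For reference, a minimal sketch of the refresh part described above, assuming the uploaded snapshot always overwrites the same file (webcam.jpg here is a placeholder): the page sets it as the body background and re-requests it once a minute with a cache-busting query string, leaving the foreground free for the weather stickers.
<script>
  // Place this at the end of <body> so document.body exists.
  // Assumes the webcam upload always overwrites the same file, e.g. webcam.jpg.
  function refreshBackground() {
    // The timestamp query string defeats caching so the newest upload is fetched.
    document.body.style.backgroundImage = "url('webcam.jpg?t=" + Date.now() + "')";
  }
  document.body.style.backgroundSize = 'cover';
  refreshBackground();
  setInterval(refreshBackground, 60 * 1000); // once a minute
</script>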
You can do something like the following and make the canvas a background. Note that the snippet will not run here, since getUserMedia has been deprecated on insecure origins for security reasons:
getUserMedia() no longer works on insecure origins. To use this feature, you should consider switching your application to a secure origin, such as HTTPS. See this for more details.
var video = document.getElementById("live");
var canvas = document.getElementById("myCanvas");
var ctx = canvas.getContext('2d');

// Ask the browser for the camera stream (legacy, prefixed API).
function start() {
  navigator.webkitGetUserMedia({
    video: true
  }, gotStream, function() {});
}

// Feed the camera stream into the hidden <video> element.
function gotStream(stream) {
  video.src = webkitURL.createObjectURL(stream);
}

// Copy the current video frame onto the canvas once a minute.
function refresh() {
  ctx.drawImage(video, 0, 0, 500, 375);
  setTimeout(refresh, 60000);
}

refresh();
start();
<video style="display:none; border:5px solid #000000" id="live" width="500" height="375" autoplay></video>
<canvas id="myCanvas" width="500" height="375" style="border:5px solid #000000"></canvas>
Related
I am developing filter effects for video, like this website. I am using JavaScript/jQuery and Fabric.js, but the effect is not being applied to the video correctly.
This is the code I tried:
$(document).ready(function() {
  canvas = new fabric.Canvas('c');
  canvas.setWidth(480);
  canvas.setHeight(360);
  var video1El = document.getElementById('video1');
  var video1 = new fabric.Image(video1El, {
    left: 0,
    top: 0
  });
  canvas.add(video1);
  video1El.load();
  $(document.body).on('click', '#play', function() {
    video1El.play();
    var filter = new fabric.Image.filters.BlendColor({
      color: 'red',
      mode: 'tint',
      alpha: 0.5
    });
    video1.filters = [filter];
  });
  fabric.util.requestAnimFrame(function render() {
    var image = canvas.item(0);
    var backend = fabric.filterBackend;
    if (backend && backend.evictCachesForKey) {
      backend.evictCachesForKey(image.cacheKey);
      backend.evictCachesForKey(image.cacheKey + '_filtered');
    }
    canvas.item(0).applyFilters();
    canvas.renderAll();
    fabric.util.requestAnimFrame(render);
  });
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/2.2.3/fabric.min.js"></script>
<button id="play">play</button>
<canvas id="c" width="300" height="300"></canvas>
<video crossorigin="anonymous" id="video1" style="display: none" class="canvas-img" width="480" height="360">
<source id="video_src1" src="http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4" type="video/mp4">
</video>
My expectation is to create video filters, for example grayscale, blur, color, RGB, shadow, and black-and-white, just like the website I linked.
I also tried the following:
ffmpeg -i input.mp4 -vf "gblur=sigma=5:steps=8" -c:v libx264 -crf 22 -c:a aac -strict -2 output.mp4
But how does this command work? Can someone give an example of a filter using the command above? Any other technique is also welcome. This site, https://www.veed.io/tools/video-filters, uses FFmpeg.
Your code is basically fine. The interesting part is
var filter = new fabric.Image.filters.BlendColor({
  color: 'red',
  mode: 'tint',
  alpha: 0.5
});
There you are using the BlendColor filter, which is one of many available filters, for instance:
BaseFilter
Brightness
Convolute
GradientTransparency
Grayscale
Invert
Mask
Noise
Pixelate
RemoveWhite
Sepia
Sepia2
Tint
See the documentation.
Any of these can be appended after fabric.Image.filters. For instance, if you want a blur effect, you can use fabric.Image.filters.Blur(). The first parameter of Blur() is an options object; the documentation lists the properties you can pass.
Example:
$(document).ready(function() {
  canvas = new fabric.Canvas('c');
  canvas.setWidth(480);
  canvas.setHeight(360);

  // Wrap the <video> element in a fabric.Image so filters can be applied to it.
  var video1El = document.getElementById('video1');
  var video1 = new fabric.Image(video1El, {
    left: 0,
    top: 0
  });
  canvas.add(video1);
  video1El.load();

  $(document.body).on('click', '#play', function() {
    video1El.play();
    var filter = new fabric.Image.filters.Blur({
      blur: 0.5
    });
    video1.filters = [filter];
  });

  // Re-apply the filter and re-render on every animation frame, evicting the
  // filter backend's caches so each new video frame actually gets processed.
  fabric.util.requestAnimFrame(function render() {
    var image = canvas.item(0);
    var backend = fabric.filterBackend;
    if (backend && backend.evictCachesForKey) {
      backend.evictCachesForKey(image.cacheKey);
      backend.evictCachesForKey(image.cacheKey + '_filtered');
    }
    canvas.item(0).applyFilters();
    canvas.renderAll();
    fabric.util.requestAnimFrame(render);
  });
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/2.2.3/fabric.min.js"></script>
<button id="play">play</button>
<canvas id="c" width="300" height="300"></canvas>
<video crossorigin="anonymous" id="video1" style="display: none" class="canvas-img" width="480" height="360">
<source id="video_src1" src="http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4" type="video/mp4">
</video>
The code above is client-side only, so if you want the user to be able to download the filtered video, you need to modify the video on the server and then send it to the client.
Here is an example of how you could blur a video using FFmpeg with the following command:
ffmpeg -i input.mp4 -vf "gblur=sigma=5:steps=8" -c:v libx264 -crf 22 -c:a aac -strict -2 output.mp4
This command applies a blur filter with a sigma value of 5 and 8 steps to the video file "input.mp4" and saves the output to "output.mp4" (see the documentation).
Follow these steps:
Open a command prompt or terminal window on your server.
Use cd <DIRECTORY> to navigate to the directory where the input video file is located.
Enter an FFmpeg command (for instance, the one above) and press Enter.
Once the command has finished running, the output video file will be saved in the same directory as the input file.
Ensure that FFmpeg is installed on your server; otherwise you can use a library (see the comments).
Once the video is processed, you may send it to the client.
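A minimal server-side sketch of that flow (Node.js with no framework; the file names, route, and port are placeholders): run the FFmpeg command from above via child_process and stream the result back to the client.
var http = require('http');
var fs = require('fs');
var execFile = require('child_process').execFile;

http.createServer(function (req, res) {
  if (req.url === '/blurred.mp4') {
    // Same gblur filter as the command above; '-y' overwrites output.mp4 if it exists.
    execFile('ffmpeg', [
      '-y', '-i', 'input.mp4',
      '-vf', 'gblur=sigma=5:steps=8',
      '-c:v', 'libx264', '-crf', '22',
      '-c:a', 'aac', '-strict', '-2',
      'output.mp4'
    ], function (err) {
      if (err) {
        res.writeHead(500);
        res.end('FFmpeg failed');
        return;
      }
      res.writeHead(200, { 'Content-Type': 'video/mp4' });
      fs.createReadStream('output.mp4').pipe(res);
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);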
It looks like you're trying to use the fabric.js library to apply filters to a video element. However, fabric.js is primarily a library for working with canvas elements, not video elements. A canvas is a static drawing surface you paint onto yourself, whereas a video element plays a continuously updating media stream, so they have different capabilities and limitations.
You cannot directly apply filters to the video element, but one way to achieve a similar effect is to use a canvas element to draw the video, apply the filters to the canvas, and then display the canvas on top of the video.
You can use the HTML5 canvas API to draw the video on the canvas and set its attributes. You can also use libraries like CamanJS and PixiJS, which both provide an easy way to apply various filters to canvas elements.
Here is an example of how to use CamanJS to apply a grayscale filter to a video:
<canvas id="canvas"></canvas>
<video id="video" src="your_video.mp4"></video>
<script>
var canvas = document.getElementById('canvas'),
video = document.getElementById('video'),
ctx = canvas.getContext('2d');
video.addEventListener('play', function(){
ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
Caman("#canvas", function () {
this.grayscale();
this.render();
});
}, false);
</script>
You can apply a different filter by replacing the filter name in this example.
Hope this helps and gives you a starting point.
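As a variation on the answer above (not part of it), here is a sketch that filters every frame rather than a single one, using the canvas 2D context's built-in filter property instead of CamanJS; it reuses the canvas/video ids from the example above, and the filter string is just an example.
var canvas = document.getElementById('canvas'),
    video = document.getElementById('video'),
    ctx = canvas.getContext('2d');

video.addEventListener('play', function () {
  (function drawFrame() {
    if (video.paused || video.ended) return;   // stop the loop when playback stops
    ctx.filter = 'grayscale(100%)';            // any CSS filter string works here
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    requestAnimationFrame(drawFrame);          // keep redrawing while the video plays
  })();
}, false);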
I have a basic webapp that uses canvas frames to render content on each webpage. On loading the page, Chrome will render a single blank canvas frame before the main content is rendered (roughly 20 ms of blank frame).
When clicking through multiple pages, the blank frame is visually distracting and results in a bad user experience.
Is there a way to pre-render the canvas frame before Chrome lays out the page? Or is there a way to have the canvas frame ready so that Chrome does not display a blank frame?
The basic code outline looks like this:
<script>
  function draw() {
    var canvas = document.getElementById("samplecanvas");
    if (canvas.getContext) {
      var ctx = canvas.getContext("2d");
      var img = new Image();
      img.onload = function() {
        ctx.drawImage(img, 0, 0, img.width, img.height,
                      0, 0, canvas.width, canvas.height);
      }
      img.src = "data:image/jpeg;base64,...........";
    } else {
      alert('canvas not supported by this browser');
    }
  }
</script>
<body onload="draw();">
  <canvas id="samplecanvas" width="800" height="800"></canvas>
</body>
Like I said, the above does work, it just results in a blank frame that is really annoying. The base64 embedded image is large (720p), but I still wouldn't expect this to be slow to render.
Any ideas?
I can't seem to get Chrome to show the true image size, either via <img> or via drawImage on a canvas context. Internet Explorer shows both of these correctly.
In Chrome, the image is shown not at the dimensions of the original image but scaled down. Serving the page from a web server seems to have something to do with it: curiously enough, when I open the HTML file locally in the browser, e.g. file:///C:/xampp/htdocs/Website_TEST_active/test1.html, the image dimensions are correct.
Attached is a stripped-down HTML and JavaScript example. I appreciate any insights.
Thanks
Sean
<!DOCTYPE html>
<html>
<body>
<p>Image to use:</p>
<img id="scream" onload="loadImage()" src="pics/cover.jpg" alt="Test">
<p>Canvas:</p>
<canvas id="myCanvas" width="854" height="480" style="border:1px solid #d3d3d3;">
Your browser does not support the HTML5 canvas tag.
</canvas>
<script>
window.onload = function() {
// Not used
}
/*
* Upon image load, draw image on canvas
*/
function loadImage(){
var c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
var img = document.getElementById("scream");
ctx.drawImage(img, 0,0, img.naturalWidth, img.naturalHeight);
console.log("Original Image W=" + img.naturalWidth +
" H=" + img.naturalHeight);
}
</script>
</body>
</html>
I found something that might be right for what you're asking.
Here's the link: https://superuser.com/a/364875
Open Developer tools (Ctrl+Shift+I or use the Settings icon at the top-right of your browser window => Tools => Developer tools) and, on the relevant page, switch to the Network tab and reload the page. In the Size column you'll see the size of everything loaded (Documents, Stylesheets, Images, Scripts, ...). You can enable a filter to help you find what you need, at the bottom-center of the Developer tools frame.
My application works fine in Chrome, but in IE/Edge the canvas doesn't show the video.
This started to happen when I used an encrypted video as the source; when I used an open, unprotected video, the canvas showed the video.
I can't find a solution, mostly because IE/Edge doesn't show errors in the developer tools console.
Does IE/Edge have some policy that doesn't allow drawing an encrypted video?
In the future I will remove the video element from the HTML, create it only in JavaScript, and write some text on the canvas as a watermark.
<canvas runat="server" id="canvas1"></canvas>
<video
id="video1"
runat="server"
class="azuremediaplayer amp-default-skin amp-big-play-centered"
controls
poster="">
</video>
<script>
  var videoElement = document.getElementById('<%=video1.ClientID%>');
  videoElement.setAttribute('webkit-playsinline', 'true');
  videoElement.width = '1280';
  videoElement.height = '720';

  var x, y, min, tempo = 0;
  var nroRender = 201;

  // Azure Media Player with DRM (Widevine/PlayReady) protection info.
  var myPlayer = amp(videoElement);
  myPlayer.src([{
    src: '<URL VIDEO>',
    protectionInfo: [{
      type: 'Widevine',
      authenticationToken: 'Bearer=<TOKEN>'
    }, {
      type: 'PlayReady',
      authenticationToken: 'Bearer=<TOKEN>'
    }]
  }]);

  var canvasElement = document.getElementById('<%=canvas1.ClientID%>');
  canvasElement.width = '1280';
  canvasElement.height = '720';
  var ctx = canvasElement.getContext('2d');

  // Copy the current frame of the player's inner <video> element onto the canvas.
  function desenha() {
    ctx.clearRect(0, 0, canvasElement.width, canvasElement.height);
    ctx.drawImage($('#video1 video')[0], 0, 0, canvasElement.width, canvasElement.height);
  }

  function loop() {
    desenha();
    setTimeout(loop, 1000 / 60);
  }

  loop();
</script>
If you have trouble understanding the problem, run it in Chrome and then in IE: in Chrome the canvas shows the video, while in IE the canvas appears black.
Full code in https://github.com/tobiasrighi/video-canvas/blob/master/WebForm1.aspx
Because the video is protected with DRM, IE/Edge block the ability to capture frames by design; it's actually not an error, and it is enforced lower down in the media pipeline. It seems Chrome's current implementation with Widevine does not block frame capture, although this may change in the near future depending on Google's design decisions.
I'm just trying to figure out how to get an image to draw on a canvas. I followed a tutorial on W3Schools, but when I tried it on my own it doesn't seem to be working. I copy and paste the code below into an HTML file, and the image never loads into the canvas. I downloaded the picture into the same directory. I've been asking around and looking online, but no one seems to know what the problem is.
I'm using an updated version of Chrome (Version 29.0.1547.76 m).
<!DOCTYPE html>
<html>
<body>
<p>Image to use:</p>
<img id="scream" src="img_the_scream.jpg" alt="The Scream" width="220" height="277">
<p>Canvas:</p>
<canvas id="myCanvas" width="250" height="300" style="border:1px solid #d3d3d3;">
Your browser does not support the HTML5 canvas tag.</canvas>
<script>
var c=document.getElementById("myCanvas");
var ctx=c.getContext("2d");
var img=document.getElementById("scream");
this.ctx.drawImage(img,10,10);
</script>
</body>
</html>
Your image probably hasn't finished loading at the point you are using drawImage:
HTML
Add an onload handler to the img tag:
<img id="scream" onload="draw()" src="...
Then the function to handle it:
var c=document.getElementById("myCanvas");
var ctx=c.getContext("2d");
var img=document.getElementById("scream");
function draw() {
ctx.drawImage(img,10,10);
}
Online demo here
Be aware that where you place the scripts in your document matters as well.
I would also recommend setting the src attribute in JavaScript. That makes it safer to handle onload (or a subscribed 'load' event via img.addEventListener('load', ...)).
You should use the following approach, which first lets the image load and then displays it:
image.onload = function() {
  // draw the loaded image onto the canvas ("pic" is the canvas element)
  pic.getContext('2d').drawImage(image, 0, 0);
}
"If you try to call drawImage() before the image has finished loading, it won't do anything (or, in older browsers, may even throw an exception). So you need to be sure to use the load event so you don't try this before the image has loaded."
example:
var img = new Image(); // Create new img element
img.addEventListener('load', function() {
// execute drawImage statements here
}, false);
img.src = 'myImage.png'; // Set source path
Source