In my app, I wrote code that lets a user take a picture with a camera connected to the computer.
Implementation:
I use a webcam: I show a video element whose source is the camera stream:
function onSuccess (stream) {
  this.video.srcObject = stream;
}

// legacy callback-based API; navigator.mediaDevices.getUserMedia is the modern replacement
navigator.getUserMedia({ video: true }, onSuccess, onFailure);
When the user clicks on a button I create a snapshot from the video:
function takeSnapshot (width, height, savePicture) {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;

  const ctx = canvas.getContext('2d');
  ctx.drawImage(this.video, 0, 0, canvas.width, canvas.height);

  canvas.toBlob(function (blob) {
    savePicture(blob);
  }, "image/jpeg");
}
Now, I need to zoom into the camera and crop the video around the center, since I want to remove the background around the photo.
I don't want to implement this using CSS since I want to save the picture cropped and zoomed too.
I want the zoom to apply both in the video view and when the snapshot is taken.
The camera has no zooming option of its own.
Is there an option to zoom and crop in the video stream?
context.drawImage() takes optional source-rectangle parameters (sx, sy, sWidth, sHeight) that let you crop the input, and destination-rectangle parameters (dx, dy, dWidth, dHeight) that let you position and scale the result.
https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/drawImage
But this is a direct way to zoom & crop with plain JS. If you need a ready-made UI for such functionality, you can combine your code with something like this: http://deepliquid.com/projects/Jcrop/demos.php?demo=thumbnail (I didn't try it, I just have it bookmarked).
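For example, here is a minimal sketch (not from the original answer) of how a centre zoom could be applied to the snapshot using the nine-argument form of drawImage; the zoomFactor parameter is a hypothetical value added for illustration:

// Sketch: crop the centre of the video and scale it up onto the canvas.
// zoomFactor is a hypothetical parameter, e.g. 2 means "2x zoom".
function takeZoomedSnapshot (width, height, zoomFactor, savePicture) {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;

  const video = this.video;
  // Source rectangle: a centred region of the video, shrunk by zoomFactor.
  const sWidth = video.videoWidth / zoomFactor;
  const sHeight = video.videoHeight / zoomFactor;
  const sx = (video.videoWidth - sWidth) / 2;
  const sy = (video.videoHeight - sHeight) / 2;

  const ctx = canvas.getContext('2d');
  // Draw only the cropped region, stretched over the whole canvas.
  ctx.drawImage(video, sx, sy, sWidth, sHeight, 0, 0, canvas.width, canvas.height);

  canvas.toBlob(function (blob) {
    savePicture(blob);
  }, "image/jpeg");
}

For the live preview, a similar approach would be to draw the same cropped region of the video onto a visible canvas on each requestAnimationFrame, instead of displaying the video element directly.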
Let me explain what I am doing. I need to draw a big image, roughly 5000px by 2688px, and I can't draw the whole image at once because it takes too long in the browser. I decided to add a scroll bar so the whole image can be viewed. The following script allows me to draw a part of the canvas:
var img = new Image();
img.onload = function () {
  canvasW = 5376;
  canvasH = 2688;
  ctx.drawImage(img, 0, 0, canvasW, canvasH);
};
img.src = "image.png";
Now imagine I want to apply effects like blur, brightness, contrast, etc. My canvas will change, so how can I redraw the modified canvas (the one with the effects, not the original)?
I checked on Stack Overflow, but the examples redraw from the original image; the drawing is not taken from a modified canvas.
I tried storing a base64 copy of my canvas and redrawing from it:
var image = new Image();
image.addEventListener('load', function () {
  ctx.clearRect(0, 0, canvasWidth, canvasHeight);
  ctx.drawImage(image, canMouseX, canMouseY, canvasW, canvasH);
});
image.src = bases64[bases64.length - 1];
It doesn't work, because the stored base64 string only contains the part of the canvas that was drawn, so only that part gets redrawn. If you want more details, clone my repo at https://github.com/wyllisMonteiro/draggingScroll/tree/master.
I want to find a solution to redraw a modified canvas.
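No answer is reproduced here; one common approach (my own sketch, not from the original post) is to keep the full-resolution image on an offscreen canvas, apply the effects there once, and then blit only the visible region to the onscreen canvas whenever the user scrolls. The 'canvas' element id and the filter string below are assumptions for illustration:

// Sketch: a full-size offscreen canvas holds the (possibly filtered) image;
// the visible canvas only ever shows a window into it.
var fullCanvas = document.createElement('canvas');
fullCanvas.width = 5376;
fullCanvas.height = 2688;
var fullCtx = fullCanvas.getContext('2d');

var img = new Image();
img.onload = function () {
  // Apply the effects once (context filter support varies by browser).
  fullCtx.filter = 'brightness(120%) blur(2px)';
  fullCtx.drawImage(img, 0, 0);
  drawVisiblePart(0); // initial view
};
img.src = "image.png";

// Redraw the onscreen canvas from the modified offscreen canvas,
// offset by the current horizontal scroll position.
function drawVisiblePart (scrollX) {
  var viewCanvas = document.getElementById('canvas'); // hypothetical id
  var viewCtx = viewCanvas.getContext('2d');
  viewCtx.clearRect(0, 0, viewCanvas.width, viewCanvas.height);
  viewCtx.drawImage(fullCanvas,
    scrollX, 0, viewCanvas.width, viewCanvas.height, // source rect
    0, 0, viewCanvas.width, viewCanvas.height);      // destination rect
}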
I have a requirement to take the canvas from a webpage and convert it to PDF in the backend so it can be saved on the server and downloaded & printed at a later time at 600 DPI.
I have followed this question, and I have a working prototype of the code: AJAX call to send the canvas to backend in Base64 and then a Java function to convert it to PDF.
However, the problem is that the quality of the image depends on the screen/browser window size the user has when clicking the button that triggers the image creation: a full-screen browser will create a higher-resolution image than a partially sized window. For example, on my PC a capture from a full-screen window and one from a window about half the screen size come out at different resolutions.
I was thinking of somehow rendering the canvas in a headless browser with a preset size, which would at least make the quality consistent across users, but I have no idea how to size the image dynamically so I can keep it at 600 DPI no matter which paper size the user chooses.
Do I have to draw the canvas shapes directly onto PDF? I know that would fulfill the DPI requirement, but is that even possible to do from an AngularJS/Java stack?
You can decide the proper size for the canvas and then modify the way it's displayed via CSS. Here, the final size is set as 2000x2000 and it will be saved as such (by clicking on it), regardless of the viewport size:
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
// Draw the ellipse
ctx.beginPath();
ctx.lineWidth = 10;
ctx.ellipse(1000, 1000, 500, 800, Math.PI / 4, 0, 2 * Math.PI);
ctx.stroke();
// Draw the ellipse's line of reflection
ctx.beginPath();
ctx.setLineDash([5, 5]);
ctx.moveTo(0, 2000);
ctx.lineTo(2000, 0);
ctx.stroke();
canvas.addEventListener('click', function (e) {
  var dataURL = canvas.toDataURL('image/png');
  var link = document.createElement("a");
  link.download = "finalsize.png";
  link.href = dataURL;
  link.click();
});
<body style="display:flex">
  <canvas id="canvas" width="2000" height="2000" style="width:100%;"></canvas>
</body>
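If the 600 DPI requirement is the driver, one way to think about it (my own sketch, not part of the original answer) is to derive the canvas pixel dimensions from the target paper size, since DPI is just pixels divided by printed inches. The paper sizes below are standard values used only for illustration:

// Sketch: choose canvas dimensions for a given paper size at a target DPI.
const DPI = 600;
const paperSizes = {
  a4:     { widthIn: 8.27, heightIn: 11.69 },
  letter: { widthIn: 8.5,  heightIn: 11 }
};

function canvasSizeFor (paper) {
  const size = paperSizes[paper];
  return {
    width: Math.round(size.widthIn * DPI),   // e.g. A4 -> 4962 px
    height: Math.round(size.heightIn * DPI)  // e.g. A4 -> 7014 px
  };
}

const canvas = document.getElementById('canvas'); // same element as above
const { width, height } = canvasSizeFor('a4');
canvas.width = width;   // drawing-buffer size, independent of the CSS size
canvas.height = height;

The drawing code then works in these pixel coordinates, while the CSS width only affects how the canvas appears on screen, exactly as in the snippet above.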
The Google Ideas homepage features an animation that distorts the appearance of some of the text and a button with a static sound effect, in order to simulate signal interference as the content transitions from one item to the next.
Here's a GIF in case they change the design:
How are they achieving this? I can see classes and styles jumping around in the dev tools, so JavaScript is definitely involved, but I can't find the relevant section of the script itself.
It's not that hard, especially with html2canvas and canvas-glitch.
Basically you just need to convert the DOM element to canvas, and then manipulate the image data to achieve the glitch effect. And with these two libs, that task becomes quite trivial.
html2canvas(node, {
  onrendered: function (canvas) {
    // hide the original node
    node.style.display = "none";
    // add the canvas node to the node's parent,
    // practically replacing the original node
    node.parentNode.appendChild(canvas);
    // get the canvas' image data
    var ctx = canvas.getContext('2d');
    var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
    // glitch it
    var parameters = { amount: 1, seed: Math.round(Math.random() * 100), iterations: 5, quality: 30 };
    glitch(imageData, parameters, function (imageData) {
      // update the canvas' image data
      ctx.putImageData(imageData, 0, 0);
    });
  }
});
Click here for a demo: https://jsfiddle.net/ArtBIT/x12gvs1v/
I have a canvas element that I'm setting the background on dynamically via code. Then I'm using the Sketch library (http://intridea.github.io/sketch.js/) to draw on the canvas. - This all works.
However, when I convert the canvas using canvas.toDataURL("image/png"), it saves the canvas drawing but not the background. I understand this is working as designed.
Is there a way to merge the two? I was toying with the idea of setting the image src to the background src after I'm done drawing and exporting that, but I'm not certain. Does anyone have any experience with this?
As you've discovered, the canvas and its background are maintained separately and toDataURL will not also capture the background.
You can combine the background with the sketch using canvas compositing.
The 'destination-over' compositing mode will let you drawImage your background behind the sketches:
context.globalCompositeOperation = "destination-over";
context.drawImage(backgroundImage, 0, 0);
Now the background pixels have been drawn behind your sketch, and both will be captured by toDataURL.
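Putting it together, a minimal sketch might look like this (the canvas id and background image path are assumptions for illustration):

// Sketch: merge a background image behind an existing sketch, then export.
var canvas = document.getElementById('sketchCanvas'); // hypothetical id
var context = canvas.getContext('2d');

var backgroundImage = new Image();
backgroundImage.onload = function () {
  // 'destination-over' draws new pixels only where the canvas is transparent,
  // i.e. behind everything already sketched.
  context.globalCompositeOperation = "destination-over";
  context.drawImage(backgroundImage, 0, 0, canvas.width, canvas.height);
  // restore the default mode for any further drawing
  context.globalCompositeOperation = "source-over";

  // now both layers are part of the export
  var dataURL = canvas.toDataURL("image/png");
};
backgroundImage.src = "background.png"; // assumed path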
Yes, you are correct. I draw the background image into the canvas along with the drawing and then download it:
// note: these sizes should normally be set on the canvas element itself
// (canvas.width / canvas.height), not on the 2d context
ctx.width = 2503;
ctx.height = 250;
ctx.globalCompositeOperation = "destination-over";

var background = new Image();
background.onload = function () {
  // wait for the image to load before drawing it behind the existing content
  ctx.drawImage(background, 0, 0);
  // create a pattern with this image, and set it to "repeat"
  var ptrn = ctx.createPattern(background, 'repeat');
  ctx.fillStyle = ptrn;
  ctx.fillRect(0, 0, ctx.width, ctx.height);
};
background.src = "http://localhost/xxxxx/xxxxx/xxxxx/xxxxx/ecg_back.png";
How are you adding the background to the canvas? Are you setting it in CSS as a background image, or are you drawing the image directly onto the canvas? I think you'll need to do the latter, as per the example here.
In my web application the user can take a photo with the camera on their mobile device. The photo is then displayed in a canvas.
<canvas id="photo" ></canvas>
Now I want to send the photo to a backend server. The best way to do this seems to be encoding the image to a base64 string.
However, I want the photo to always be the same resolution, for example 100 x 100 pixels. Otherwise the base64 string is too big.
I am currently using the second parameter of toDataURL to set the export quality of the picture (here 2%):
var base64 = document.getElementById("photo").toDataURL("image/jpeg", 0.02);
This does not seem to be a good approach, because I don't know the initial resolution of the photo. I would like to always get the photo at the same resolution. How can I achieve this?
You could use an invisible background canvas with a size of 100x100 and copy the image from the photo-canvas to it. The drawImage function allows you to specify a size for the destination rectangle. When it's not the same as the source rectangle, the content will be scaled:
// Create a temporary, invisible 100x100 canvas
var tmpCanvas = document.createElement('canvas');
tmpCanvas.width = 100;
tmpCanvas.height = 100;
var tmpContext = tmpCanvas.getContext('2d');

// copy the content of the photo-canvas to the temp canvas, rescaled
var photoCanvas = document.getElementById("photo");
tmpContext.drawImage(photoCanvas,                 // source
  0, 0, photoCanvas.width, photoCanvas.height,    // source rect
  0, 0, 100, 100                                  // destination rect
);

// get data-url of temporary canvas
var base64 = tmpCanvas.toDataURL("image/jpeg");
But keep in mind that when the source canvas isn't square, the aspect ratio won't be preserved. If you don't want that, there are different ways to deal with it, like cropping or adding a border; most of these would be implemented by choosing different source or destination rectangles in the drawImage call.
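For instance, here is a minimal sketch of the cropping variant (my assumption of how it could look), centre-cropping the larger dimension so the aspect ratio is kept; it reuses photoCanvas, tmpCanvas and tmpContext from the snippet above:

// Sketch: centre-crop the photo canvas to a square source rect,
// then scale that square down to 100x100.
var side = Math.min(photoCanvas.width, photoCanvas.height);
var sx = (photoCanvas.width - side) / 2;
var sy = (photoCanvas.height - side) / 2;

tmpContext.drawImage(photoCanvas,
  sx, sy, side, side,   // square source rect, centred
  0, 0, 100, 100        // destination rect
);

var base64 = tmpCanvas.toDataURL("image/jpeg");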