I am using this library to upload and crop images: http://andyshora.com/angular-image-cropper.html.
I have added functions for rotate left, rotate right, flip horizontal, and flip vertical to overcome mobile EXIF orientation issues.
scope.clockwise = function() {
  scope.angleInDegrees += 90;
  drawRotated(scope.angleInDegrees);
};

scope.counterclockwise = function() {
  scope.angleInDegrees -= 90;
  drawRotated(scope.angleInDegrees);
};

function drawRotated(degrees) {
  ctx.clearRect(0, 0, canvasWidth, canvasHeight);
  ctx.save();
  // rotate around the canvas centre
  ctx.translate(canvasWidth / 2, canvasHeight / 2);
  ctx.rotate(degrees * Math.PI / 180);
  // keep the cropper's point mapping as an identity for now
  transformPoint = untransformPoint = function(x, y) {
    return {
      x: x,
      y: y
    };
  };
  // centre the image on the rotation origin (offset by half width and half height)
  ctx.drawImage($img, -$img.width / 2, -$img.height / 2);
  ctx.restore();
}
After rotating, I can no longer move the rotated image. When I click on the image, it automatically displays the first uploaded image (i.e. before rotation).
Fiddle link: http://jsfiddle.net/x9f94yz5/1/ (not a running example).
Related
I am trying to draw on a rotating p5.js canvas, where the canvas element is being rotated via its transform CSS attribute. The user should be able to draw points on the canvas with the cursor, even while the canvas element is actively rotating.
The code below is what I have tried so far. However, it is not working: the points do not show up where the mouse is hovering over the rotating canvas. I'm currently testing this in the p5.js editor.
let canvas

function setup() {
  canvas = createCanvas(400, 400)
  background('#fb88f3')
  strokeWeight(10)
}

function draw() {
  canvas.style(`transform: rotate(${frameCount}deg)`)
  translate(200, 200)
  rotate(-radians(frameCount))
  point(mouseX - 200, mouseY - 200)
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.3.1/p5.min.js"></script>
I've found this related StackOverflow post where they drew on a rotating canvas as a p5.Graphics element. However, for my purposes, I'd like to rotate the actual element instead, as part of a simple p5.js painting application I am working on.
I've been stuck on this for a while and would appreciate any help! Thanks in advance!
This seems to be a bug in p5.js.
The mouse coordinates seem to depend on the size of the canvas's bounding box, and the bounding box depends on the rotation. Therefore you need to scale the offset used to calculate the center of the canvas.
Unfortunately, that's not all. None of this depends on the current angle, but on the angle that was set when the mouse was last moved. Hence the scale needs to be computed in the mouseMoved callback:
let scale;

function mouseMoved() {
  let angle_rad = radians(frameCount)
  scale = abs(sin(angle_rad)) + abs(cos(angle_rad))
}

and used when drawing the point in draw():

let center_offset = 200 * scale;
point(mouseX - center_offset, mouseY - center_offset)
Complete example:

let canvas, scale = 1 // start at 1 so the first frames work before the mouse moves

function setup() {
  canvas = createCanvas(400, 400)
  background('#fb88f3')
  strokeWeight(10)
}

function draw() {
  let angle = frameCount;
  let angle_rad = radians(angle)
  let center_offset = 200 * scale
  canvas.style(`transform: rotate(${angle}deg)`)
  translate(200, 200)
  rotate(-angle_rad)
  point(mouseX - center_offset, mouseY - center_offset)
}

function mouseMoved() {
  let angle_rad = radians(frameCount)
  scale = abs(sin(angle_rad)) + abs(cos(angle_rad))
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.3.1/p5.min.js"></script>
I'm drawing an image onto a canvas using drawImage. It's a PNG that is surrounded by transparent pixels, like this:
How can I detect when a drawing path moves through the transparent part of that image on the canvas? I want to detect whether a user is drawing in a transparent area.
I am following this tutorial and did as it shows.
var ctx = canvas.getContext('2d'),
    img = new Image();
img.onload = draw;
img.src = "http://i.stack.imgur.com/UFBxY.png";

function draw() {
  // draw original image in normal mode
  ctx.drawImage(img, 10, 10);
}
<canvas id="canvas" width="500" height="500"></canvas>
Check out the full code on GitHub.
Check out the live demo: IonCanvas.
To find out whether a pixel is transparent, get the pixel using ctx.getImageData and look at the alpha value.
Example
// assumes ctx is defined
// returns true if the pixel is fully transparent
function isTransparent(x, y) { // x, y coordinates of the pixel
  return ctx.getImageData(x, y, 1, 1).data[3] === 0; // 4th byte is alpha
}
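To tie this back to the question (detecting whether the user is drawing over a transparent area), you can call the helper from a pointer handler while the user drags. A minimal sketch, assuming the canvas element from the snippet above is available as canvas; the drawing flag is hypothetical:

var drawing = false;

canvas.addEventListener('mousedown', function () { drawing = true; });
canvas.addEventListener('mouseup', function () { drawing = false; });

canvas.addEventListener('mousemove', function (e) {
  if (!drawing) return;
  // convert page coordinates to canvas coordinates
  var rect = canvas.getBoundingClientRect();
  var x = e.clientX - rect.left;
  var y = e.clientY - rect.top;
  if (isTransparent(x, y)) {
    // the current stroke point is over a fully transparent pixel
    console.log('drawing over a transparent area at', x, y);
  }
});

Note that getImageData reads the current canvas contents, so if the user's strokes are painted onto the same canvas, do the check before drawing the stroke, or check against an offscreen copy that holds only the PNG.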
I'm creating a web app where a user can slide to an image of choice on a Bootstrap carousel and click on a canvas to place that image. The user can place the same image multiple times with multiple clicks.
I'm having difficulties with the drawing part.
I already have the click coordinates and the source of the active image in the carousel, and I have tried a function that waits to draw in case the image isn't loaded when the draw function is initially called.
var canvas = document.getElementById('canv');
var context = canvas.getContext('2d');

canvas.addEventListener('click', placeImage);

function placeImage(event) {
  var cx = event.pageX;
  var cy = event.pageY;
  // grab the image shown in the active carousel slide
  var url = document.getElementsByClassName("carousel-item active")[0].children[0].src;
  var sticker = new Image();
  sticker.src = url;
  draw(context, sticker, cx, cy);
}

function draw(context, sticker, cx, cy) {
  // if the image hasn't finished loading yet, retry shortly
  if (!sticker.complete) {
    setTimeout(function() {
      draw(context, sticker, cx, cy);
    }, 50);
    return;
  }
  context.drawImage(sticker, cx, cy);
}
I expect that when I click on the canvas, an image identical to the one in the active carousel slide (dimensions and all) will be drawn on the canvas at the coordinates where I just clicked. The actual output is nothing.
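For reference, one common pitfall with this pattern is that pageX/pageY are page coordinates rather than canvas coordinates, so the draw can land outside the visible canvas. A minimal sketch of the same idea using canvas-relative coordinates and the image's load event (the selector and context names are taken from the question; everything else is an assumption):

canvas.addEventListener('click', function (event) {
  // convert the click position from page coordinates to canvas coordinates
  var rect = canvas.getBoundingClientRect();
  var cx = event.clientX - rect.left;
  var cy = event.clientY - rect.top;

  var url = document.getElementsByClassName("carousel-item active")[0].children[0].src;
  var sticker = new Image();
  // draw once the image has actually loaded, instead of polling
  sticker.onload = function () {
    context.drawImage(sticker, cx, cy);
  };
  sticker.src = url;
});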
I'm having a problem drawing sprites on canvas for a school project. My code:
treeImage = new Image();
treeImage.src = "sprites/treeSprites.png";

function rocks() { // to create the rock
  this.x = 1920 * Math.random(); // random location on the width of the field
  this.y = ground[Math.round(this.x / 3)]; // ground is an array that stores the height of the ground
  this.draw = function() {
    ctx.save();
    ctx.translate(this.x, this.y);
    // rotate based on the slope of the ground at this position
    ctx.rotate(Math.tan((ground[Math.floor(this.x / 3)] - ground[Math.floor(this.x / 3) + 1]) / -3));
    ctx.drawImage(treeImage, 200, 50, 50, 50, -25, -50, 50, 50);
    ctx.restore();
  }
}

// every frame
len = rockArray.length;
for (var i = 0; i < len; i++) {
  rockArray[i].draw();
}
I only request a 50×50px region from the image. Immediately outside of that 50×50 region there are black lines (which shouldn't interfere, because I only request the square inside the black lines), but when I draw the rock, the black outlines are visible. (For other reasons, I can't remove the black lines.)
I'm guessing the image JavaScript stores when it loads gets blurred, so when I request that part of the image, the surrounding lines show up too, as the blur "spreads" them into the square I request.
Is there a way I can prevent this?
Use ctx.imageSmoothingEnabled = false.
This will make the image sharp instead of smoothed (blurry).
(documentation)
If you draw a vertical line at x = 5 with width = 1, the canvas actually draws the line from 4.5 to 5.5, which results in aliasing and a fuzzy line. A quick way to remedy that so it renders as a solid line is to offset the entire canvas by half a pixel before doing anything else.
ctx.translate(-0.5, -0.5);
(documentation)
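Putting both suggestions together, a minimal setup sketch (assuming ctx is the 2D context used for the sprites) might look like this:

// run once, right after creating the 2D context and before any drawing
ctx.imageSmoothingEnabled = false; // copy sprite pixels 1:1 instead of blending with neighbours
ctx.translate(-0.5, -0.5);         // align 1px-wide strokes with the pixel grid

Because the rock is rotated before it is drawn, the browser resamples the sprite, and it is that bilinear smoothing which can blend in the neighbouring black pixels; disabling it switches to nearest-neighbour sampling.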
I am trying to work on image processing with fabric.js. I am dealing with very large images, so I have to save copies of the canvases after image processing so that next time they can be shown faster with jQuery's show by id. But I want to render the images at the exact same location. I am using canvas.zoomToPoint and canvas.relativePan to zoom and pan the image, but after I zoom and pan, apply image processing, show the hidden canvas, and call hiddencanvas.zoomToPoint and hiddencanvas.relativePan on it, it doesn't render the image at the exact location where I left the old canvas. Am I making a mistake? Here's a supporting Fiddle. However, the fiddle renders an image by uploading it, and if you zoom, pan, and click on invert, the inverted image doesn't move there.
Panning code:
var panning = false;
canvas.on('mouse:up', function (e) {
  panning = false;
});

canvas.on('mouse:down', function (e) {
  panning = true;
});

canvas.on('mouse:move', function (e) {
  if (panning && e && e.e) {
    var x = e.offsetX, y = e.offsetY;
    var delta = new fabric.Point(e.e.movementX, e.e.movementY);
    canvas.relativePan(delta);
    // The statement above pans the image, but what should be saved to the server
    // in order to render the image at the exact panned location?
  }
});

whereas this is the zoom code:
canvas.zoomToPoint({ x: x, y: y }, newZoom);
I found the answer; it was a very silly mistake.
Every fabric.js canvas has a viewport transform.
So we just need to read canvas.viewportTransform, which gives us scaleX, scaleY, left, and top as [scaleX, 0, 0, scaleY, left, top].
Hope it will help someone.
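As an illustration of that idea, a minimal sketch (assuming canvas is the visible fabric.js canvas and hiddenCanvas is the processed copy; both names are hypothetical) saves the viewport transform and re-applies it with setViewportTransform:

// after zooming/panning the visible canvas, capture its viewport transform
// format: [scaleX, skewX, skewY, scaleY, translateX, translateY]
var vpt = canvas.viewportTransform.slice();

// ...later, when swapping in the processed (hidden) canvas...
hiddenCanvas.setViewportTransform(vpt);
hiddenCanvas.renderAll();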