I would like to perform a 3D rotation of an image around its center with WebGL.
Here is an example of the type of rotation I want to achieve:
My goal is not a full 360° rotation as in the image above, just a rotation from 0° up to at least 45°.
I first used glfx.js, which lets me apply a perspective transform to a 2D image.
My problem is calculating the 4 pairs of coordinates that produce the rotation effect around the center of the image for a given angle.
const img = new Image();
img.onload = async function() {
  const imageCanvas = document.createElement('canvas');
  const imageCtx = imageCanvas.getContext('2d');
  const w = this.naturalWidth;
  const h = this.naturalHeight;
  imageCanvas.width = w;
  imageCanvas.height = h;
  imageCtx.drawImage(this, 0, 0);

  const before = [
    0, 0, // Top Left
    w, 0, // Top Right
    0, h, // Bottom Left
    w, h  // Bottom Right
  ];

  // I would like to calculate the 4 pairs of coordinates of "after"
  // to make a rotation around the center of the image with a given angle.
  // const angleInDegrees = 20;
  const after = [
    0, 0,     // Top Left
    w, 50,    // Top Right
    0, h,     // Bottom Left
    w, h - 50 // Bottom Right
  ];

  let fxCanvas;
  try {
    fxCanvas = fx.canvas();
  } catch (e) {
    alert(e);
    return;
  }

  const texture = fxCanvas.texture(imageCanvas);
  fxCanvas.draw(texture).perspective(before, after).update();
  document.body.appendChild(fxCanvas);
};
img.crossOrigin = 'anonymous';
img.src = 'https://images.unsplash.com/photo-1669279768556-39aaad69a698?ixlib=rb-4.0.3&auto=format&fit=crop&w=500&q=60';
<script src="https://evanw.github.io/glfx.js/glfx.js"></script>
To put it simply: I want to calculate the (x, y) coordinates of the 4 red corners of the image after a rotation by a given angle.
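A minimal sketch of one way to compute the "after" array (my own assumption, not from the original post): rotate each corner around the vertical axis through the image center, then project it back to 2D with a simple pinhole projection. The focal length f is a made-up tuning value (f = w gives a moderate amount of perspective), and note that, unlike the hard-coded after array above, a real rotation also moves the corners horizontally.

function rotatedCorners(w, h, angleInDegrees, f = w) {
  const a = angleInDegrees * Math.PI / 180;
  const cx = w / 2, cy = h / 2;
  // same corner order as "before": TL, TR, BL, BR
  const corners = [[0, 0], [w, 0], [0, h], [w, h]];
  const after = [];
  for (const [x, y] of corners) {
    const dx = x - cx;           // distance from the vertical center axis
    const xr = dx * Math.cos(a); // rotated x, still relative to the center
    const z = dx * Math.sin(a);  // depth after rotation
    const s = f / (f + z);       // perspective divide: farther points shrink
    after.push(cx + xr * s, cy + (y - cy) * s);
  }
  return after;
}
// e.g. const after = rotatedCorners(w, h, 20);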
I am trying to put 100x100 image data onto a 1000x1000 canvas, but I am not able to do it.
let width = 1000;  // canvas width
let height = 1000; // canvas height
let img_w = 100;   // image width
let img_h = 100;   // image height

let img = new Image();
img.width = img_w;
img.height = img_h;
img.src = "./flower.jpg";

var canvas = document.getElementById('mycanvas');
var context = canvas.getContext('2d');
canvas.width = width;
canvas.height = height;

let pixels, scannedimg;
img.onload = () => {
  context.drawImage(img, 0, 0, width, height);
  scannedimg = context.getImageData(0, 0, img.width, img.height);
  pixels = scannedimg.data;
  console.log(pixels);
  redraw();
};

let row = 4 * img_w;
let col = img_h;
function redraw() {
  for (let i = 0; i < row; i += 4) {
    for (let j = 0; j < col; j++) {
      pixels[i + j * row] = 0;
      pixels[i + j * row + 1] = 0;
      pixels[i + j * row + 2] = 0;
      //pixels[i+j*400+3]=0;
    }
  }
  scannedimg.data = pixels;
  console.log(scannedimg);
  context.putImageData(scannedimg, 0, 0, 0, 0, width, height);
}
I have converted the original array into a black image array (an array of zeros), but when putting it on the canvas it is still 100x100.
How do I scale it to 1000x1000?
I don't want to iterate through 1000x1000 pixels and set them to zero;
I need a computationally efficient answer.
Unless you outsource the pixel calculations to a WebAssembly module, a JavaScript-only approach would indeed be rather slow for a large image.
Honestly, I'm not sure what you are actually doing in your code.
First, you're drawing an unknown-sized .jpg to a 1000x1000 canvas, which - unless the .jpg is also 1000x1000 - will scale and possibly distort the source image.
let width=1000;
let height=1000;
context.drawImage(img, 0, 0, width, height);
Secondly you're obtaining the pixel data of a 100x100 region from the top-left of your 1000x1000 canvas.
let img_w=100;
let img_h=100;
img.width=img_w;
img.height=img_h;
scannedimg = context.getImageData(0, 0, img.width, img.height);
Finally, in your redraw() function you're rather randomly setting some of the pixels to black and drawing the data back to the canvas at 1000x1000 (which doesn't work that way, but I will get into that later).
Let's do it a little differently. Say we have a 300x200 image. First we need to draw it to a 100x100 canvas while maintaining its aspect ratio to get the 100x100 image data.
This can be done using a dynamically created off-screen <canvas> element, as we don't need to see it.
Now the tricky part is the CanvasRenderingContext2D putImageData() method. I assume you were thinking that the last pair of parameters for the width & height would stretch the existing pixel data to fill the region specified by (x, y, width, height). Well, that's not the case. Instead we need to - again - paint the 100x100 pixel data to a same-sized off-screen canvas (or, for simplicity, re-use the existing one) and draw that to the final canvas using the drawImage() method.
Here's everything put together:
let pixelsWidth = 100;
let pixelsHeight = 100;
let finalWidth = 500;
let finalHeight = 500;

let tempCanvas = document.createElement('canvas');
let tempContext = tempCanvas.getContext('2d');
tempCanvas.width = pixelsWidth;
tempCanvas.height = pixelsHeight;

let pixelData;
let img = new Image();
img.crossOrigin = 'anonymous';
img.onload = (e) => {
  let scale = e.target.naturalWidth >= e.target.naturalHeight
    ? pixelsWidth / e.target.naturalWidth
    : pixelsHeight / e.target.naturalHeight;
  let tempWidth = e.target.naturalWidth * scale;
  let tempHeight = e.target.naturalHeight * scale;
  tempContext.drawImage(e.target, pixelsWidth / 2 - tempWidth / 2, pixelsHeight / 2 - tempHeight / 2, tempWidth, tempHeight);
  pixelData = tempContext.getImageData(0, 0, pixelsWidth, pixelsHeight);
  redraw();
};
img.src = 'https://picsum.photos/id/237/300/200';

function redraw() {
  let canvas = document.getElementById('canvas');
  let context = canvas.getContext('2d');
  canvas.width = finalWidth;
  canvas.height = finalHeight;
  tempContext.putImageData(pixelData, 0, 0);
  context.drawImage(tempCanvas, 0, 0, finalWidth, finalHeight);
}
canvas {
background: #cccccc;
}
<canvas id="canvas"></canvas>
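One optional tweak (my assumption about the desired look, not part of the answer above): if the upscaled pixels should stay sharp instead of being interpolated, disable image smoothing on the destination context before the final drawImage() call.

// inside redraw(), before drawing the buffer to the visible canvas
context.imageSmoothingEnabled = false;
context.drawImage(tempCanvas, 0, 0, finalWidth, finalHeight);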
I need to wrap an image around another image of a mug using javascript, and I found this:
Wrap an image around a cylindrical object in HTML5 / JavaScript
This helps when loading the image that has the mug handle on the left. However, when using the same function (with tweaked position values), the image has an opacity applied to it. I searched endlessly to figure out why this is happening, but found nothing :/
This is the function used to wrap the image for the mug with the right handle:
function canvas2() {
  var canvas = document.getElementById('canvas2');
  var ctx = canvas.getContext('2d');
  var productImg = new Image();

  productImg.onload = function() {
    var iw = productImg.width;
    var ih = productImg.height;
    canvas.width = iw;
    canvas.height = ih;
    ctx.drawImage(productImg, 0, 0, productImg.width, productImg.height, 0, 0, iw, ih);
    loadUpperIMage();
  };
  productImg.src = 'https://i.ibb.co/B2G8y1m/white-right-ear.jpg';

  function loadUpperIMage() {
    var img = new Image();
    img.src = 'https://i.ibb.co/BnQP0TL/my-mug-image.png';
    img.onload = function() {
      var iw = img.width;
      var ih = img.height;
      var xOffset = 48, // left padding
          yOffset = 68; // top padding
      var a = 70;       // image width
      var b = 8;        // roundness
      var scaleFactor = iw / (6 * a);

      // draw vertical slices
      for (var X = 0; X < iw; X += 1) {
        var y = (b / a) * Math.sqrt(a * a - (X - a) * (X - a)); // ellipse equation
        if (!isNaN(y)) {
          ctx.drawImage(
            img,
            X * scaleFactor, 0,   // source x, y
            iw / 0.78, ih,        // source width and height
            X + xOffset, y + yOffset,
            1, 162                // destination: 1px-wide vertical slice
          );
        }
      }
    };
  }
}
Hope someone can help with this!
Here is a fiddle with the issue https://jsfiddle.net/L20aj5xr/
It is because of the 4th argument you pass to drawImage - iw / 0.78. By dividing the image width by a value lower than one, you get a value larger than the image width. The spec for drawImage says:
When the source rectangle is outside the source image, the source rectangle must be clipped to the source image and the destination rectangle must be clipped in the same proportion.
ctx.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight);
Because the source width (sw) you are using is larger than the source image size, the destination rectangle "is clipped in the same proportion". The destination rectangle width is 1px, because you chose it as the width for each vertical line you are drawing, and after clipping its width becomes 1 * 0.78 = 0.78px. The width is now less than 1px and, to be honest, I am not exactly sure how it actually works under the hood, but my guess is that the browser still needs to draw that 1px; because the source is 0.78px, it kind of stretches the source to that 1px and adds some anti-aliasing to smooth the transition, which results in the added transparency (i.e. the browser does not have enough information for that 1px and tries to fill it up the best it can). You can play around with that by increasing sw even more and observing the increasing transparency.
To fix your issue I used the value 20 instead of 0.78 like for the first cup and it seemed to look ok.
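For reference, a hedged sketch of that fix applied to the drawImage call from the question (iw / 20 is just the value mentioned above, not a universally correct constant): keeping the source slice inside the image means no proportional clipping and no unexpected transparency.

ctx.drawImage(
  img,
  X * scaleFactor, 0,   // source x, y
  iw / 20, ih,          // source slice size, now inside the source image
  X + xOffset, y + yOffset,
  1, 162                // destination: 1px-wide vertical slice
);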
There are numerous examples out there showing how to draw things onto a canvas; however, my problem is slightly different. I want to load a photo into memory, draw a shape onto exact coordinates over the photo, THEN draw/scale the photo onto a canvas. I'm not sure where to start with this. Are there any relevant libraries out there I can use with Ionic that will allow me to do this?
Edit 1 ~ I now have this mostly working:
private properties:
  @ViewChild('mainCanvas') canvasEl: ElementRef;
  private _CANVAS: any;
  private _CONTEXT: any;

ionViewDidEnter():

  this._CANVAS = this.canvasEl.nativeElement;
  this._CONTEXT = this._CANVAS.getContext('2d');

updateCanvas():

  var img = new Image();
  const ctx = this._CONTEXT;
  const canvas = this._CANVAS;

  ctx.clearRect(0, 0, this._CANVAS.width, this._CANVAS.height);
  ctx.fillStyle = "#ff0000";

  img.onload = (() => {
    img.width = img.width;
    img.height = img.height;
    canvas.width = img.width;
    canvas.height = img.height;
    ctx.drawImage(img, 0, 0);
    ctx.lineWidth = 8;
    ctx.strokeStyle = "#FF0000";
    ctx.strokeRect(100, 100, 400, 400);
    ctx.scale(0.5, 0.5); // this does nothing
  });

  img.src = (<any>window).Ionic.WebView.convertFileSrc(path);
This draws the photo and then the rectangle onto the canvas; however, the resulting image is too large to fit on the screen, so I need to scale the canvas after all drawing is complete. I tried this with ctx.scale, but the canvas remains the same size regardless of which values I specify.
You cannot draw straight onto a photo, but what you can do is create an offscreen canvas that is the same size as the photo, draw the photo to it, and then draw your shapes on top.
The result can then be drawn to your main canvas e.g.
// Empty image for example purposes
const img = new Image(100, 100);

// Creating a canvas for example purposes
const mainCanvas = document.createElement('canvas');
const mainCtx = mainCanvas.getContext('2d');

// Create an offscreen buffer
const bufferCanvas = document.createElement('canvas');
const bufferCtx = bufferCanvas.getContext('2d');

// Scale the buffer canvas to match our image
bufferCanvas.width = img.width;
bufferCanvas.height = img.height;

if (bufferCtx && mainCtx) {
  // Draw image to canvas
  bufferCtx.drawImage(img, 0, 0);
  // Draw a rectangle in the center
  bufferCtx.fillRect(img.width / 2 - 5, img.height / 2 - 5, 10, 10);
  // Draw the buffer to the main canvas
  mainCtx.drawImage(bufferCanvas, 0, 0);
}
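To address the original sizing problem, one option (my assumption, not spelled out in the answer above) is to keep the buffer at the photo's native resolution and let the final drawImage() do the downscaling when copying to the visible canvas, for example a 50% reduction:

// shrink the visible canvas and scale the buffer into it
mainCanvas.width = bufferCanvas.width / 2;
mainCanvas.height = bufferCanvas.height / 2;
mainCtx.drawImage(bufferCanvas, 0, 0, mainCanvas.width, mainCanvas.height);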
I have a 360 degree panoramic photo that I want transformed into a cube map.
In order to get the top and bottom faces, I cropped large rectangles from the top 25% and bottom 25% of the photo. The image below represents the bottom 25% of the 360 degree panoramic photo.
and I want to transform it into something like this
using HTML5 and Javascript.
The first image was cropped from a much larger one using this code
imagePieces = [];
numColsToCut = 1;
var numRowsToCut = 1;
var widthOfOnePiece = image.width;
var heightOfOnePiece = image.height / 4;
var startHeight = 0;
var canvas = document.createElement('canvas');
canvas.width = widthOfOnePiece;
canvas.height = heightOfOnePiece;
var context = canvas.getContext('2d');
console.log(image.height / 2);
context.drawImage(image, 0, startHeight, widthOfOnePiece, heightOfOnePiece, 0, 0, canvas.width, canvas.height);
imagePieces.push(canvas.toDataURL());
document.getElementById('myImageElementInTheDom6').src = imagePieces[0];
Before placing the cropped image into the DOM element, I want to 'warp' it into the bottom image. Any tips? :)
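One possible starting point (a sketch under my own assumptions, not a drop-in solution): invert the mapping, i.e. for every pixel of a square "bottom" cube face, work out which direction it looks in and sample the cropped strip at the matching longitude and latitude. This assumes the strip is the bottom quarter of an equirectangular panorama, covering the view from straight down (the nadir) up to 45° above it; the corners of the face need more than 45° and are left transparent, and the longitude convention may need flipping or offsetting to line up with your panorama.

function stripToBottomFace(stripCanvas, faceSize) {
  const sw = stripCanvas.width, sh = stripCanvas.height;
  const src = stripCanvas.getContext('2d').getImageData(0, 0, sw, sh).data;

  const face = document.createElement('canvas');
  face.width = face.height = faceSize;
  const fctx = face.getContext('2d');
  const out = fctx.createImageData(faceSize, faceSize);

  for (let py = 0; py < faceSize; py++) {
    for (let px = 0; px < faceSize; px++) {
      // face coordinates in [-1, 1], centered on the nadir
      const u = (2 * px) / faceSize - 1;
      const v = (2 * py) / faceSize - 1;
      const theta = Math.atan2(v, u);          // longitude around the nadir
      const phi = Math.atan(Math.hypot(u, v)); // angle away from straight down
      if (phi > Math.PI / 4) continue;         // outside the cropped strip

      // strip x spans 360° of longitude; strip y runs from 45° (top row) to 0° (bottom row)
      const sx = Math.min(sw - 1, Math.floor((theta / (2 * Math.PI) + 0.5) * sw));
      const sy = Math.round((1 - (4 * phi) / Math.PI) * (sh - 1));

      const s = (sy * sw + sx) * 4;
      const d = (py * faceSize + px) * 4;
      out.data[d] = src[s];
      out.data[d + 1] = src[s + 1];
      out.data[d + 2] = src[s + 2];
      out.data[d + 3] = src[s + 3];
    }
  }
  fctx.putImageData(out, 0, 0);
  return face; // e.g. document.getElementById('myImageElementInTheDom6').src = face.toDataURL();
}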
This effect here (image below) was achieved with a couple of simple Photoshop steps: the colored parts were turned white, and the background (various shades of white/gray) was made transparent. Is it possible to achieve this with canvas?
The images inside the circles below is the final result.
The images were originally colored, like the 2nd from top image was this one:
See that circle in the middle, basically all the white was cut out in an aliased way.
Same with this zoho logo:
The 2nd from bottom was originally something like this:
Except the red R was just a Y in the middle, and instead of all the text and the green strip seen in the image here, it just had some grainy texture in shades of gray around it. Via Photoshop the Y was made transparent, and the texture and stamp were made solid, removing the 3D shadow etc.
Putting this above Yandex stamp through the Photoshop algorithm gives this (I replaced the white with black for demo/visibility purposes).
This was jagged after the Photoshop algorithm, but in the final application the image is reduced to around 80x80px, which makes it look smooth and anti-aliased. So the real final result is this, which looks very decent.
The problem is multifaceted, as there are regions which require different approaches - for example, the last image, where the main text needs to be converted to white while keeping its transparency, while the bottom bar in the same image is solid but needs the white text retained and the solid background removed.
It's doable by implementing tools to select regions and apply various operators manually - doing it automatically will be a much larger challenge than it may appear to be.
You could require the user to only upload images with an alpha channel. Then you can simply replace each non-transparent pixel with white. It becomes more of a policy issue than a technical one, in my opinion.
For example
Taking the logo:
var img = new Image();
img.crossOrigin = "";
img.onload = process;
img.src = "http://i.imgur.com/HIhnb4A.png"; // load the logo

function process() {
  var canvas = document.querySelector("canvas"), // canvas
      ctx = canvas.getContext("2d"),             // context
      w = this.width,                            // image width/height
      h = this.height,
      idata, data32, len, i, px;                 // iterator, pixel etc.

  canvas.width = w;  // set canvas size
  canvas.height = h;
  ctx.drawImage(this, 0, 0);                     // draw in image

  idata = ctx.getImageData(0, 0, w, h);          // get imagedata
  data32 = new Uint32Array(idata.data.buffer);   // use uint32 view for speed
  len = data32.length;

  for (i = 0; i < len; i++) {
    // extract alpha channel from a pixel
    px = data32[i] & 0xff000000; // little-endian: ABGR
    // any non-transparency? ie. alpha > 0
    if (px) {
      data32[i] = px | 0xffffff; // set this pixel to white, keep alpha level
    }
  }

  // done
  ctx.putImageData(idata, 0, 0);
}
body {background:gold}
<canvas></canvas>
Now the problem is easy to spot: the "#" character is just solid because there is no transparency behind it. Automating this would require first knocking out all the whites, then applying the process demoed above. That may work in this single case, but probably won't be a good approach for most images.
There will also be anti-aliasing issues, as it's not possible to know how much of the white you want to knock out without analyzing the edges around the white pixels. Another possible challenge is ICC-corrected images, where white may not be exactly white depending on the ICC profile used, browser support, and so forth.
But it's doable to some degree - taking the code above with a pre-step to knock out fully white pixels for this logo:
var img = new Image();
img.crossOrigin = "";
img.onload = process;
img.src = "http://i.imgur.com/HIhnb4A.png"; // load the logo

function process() {
  var canvas = document.querySelector("canvas"), // canvas
      ctx = canvas.getContext("2d"),             // context
      w = this.width, h = this.height,
      idata, data32, len, i, px;                 // iterator, pixel etc.

  canvas.width = w;  // set canvas size
  canvas.height = h;
  ctx.drawImage(this, 0, 0);                     // draw in image

  idata = ctx.getImageData(0, 0, w, h);          // get imagedata
  data32 = new Uint32Array(idata.data.buffer);   // use uint32 view for speed
  len = data32.length;

  for (i = 0; i < len; i++) {
    px = data32[i]; // pixel

    // is white? then knock it out
    if (px === 0xffffffff) data32[i] = px = 0;

    // extract alpha channel from a pixel
    px = px & 0xff000000; // little-endian: ABGR

    // any non-transparency? ie. alpha > 0
    if (px) {
      data32[i] = px | 0xffffff; // set this pixel to white, keep alpha level
    }
  }

  ctx.putImageData(idata, 0, 0);
}
body {background:gold}
<canvas></canvas>
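Following up on the anti-aliasing caveat above, a possible refinement (my assumption, not part of the answer): JPEG artifacts and anti-aliased edges are rarely exactly 0xffffffff, so a small tolerance on the RGB channels catches "almost white" pixels as well.

// helper for the loop above: true if the pixel is white or nearly white
// (remember the Uint32 view is little-endian ABGR)
function isNearWhite(px, tolerance = 8) {
  const r = px & 0xff,
        g = (px >> 8) & 0xff,
        b = (px >> 16) & 0xff;
  return r >= 255 - tolerance && g >= 255 - tolerance && b >= 255 - tolerance;
}
// ...and inside the loop: if (isNearWhite(px)) data32[i] = px = 0;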
Use this
private draw(base64: string) {
  // example size
  const width = 200;
  const height = 70;

  const image = new Image();
  image.onload = () => {
    const canvas = document.createElement("canvas");
    canvas.width = width;
    canvas.height = height;
    const ctx = canvas.getContext("2d");
    ctx.drawImage(image, 0, 0);

    const imageData = ctx.getImageData(0, 0, width, height);
    for (let x = 0; x < imageData.width; x++) {
      for (let y = 0; y < imageData.height; y++) {
        const offset = (y * imageData.width + x) * 4;
        const r = imageData.data[offset];
        const g = imageData.data[offset + 1];
        const b = imageData.data[offset + 2];

        // if it is pure white, change its alpha to 0
        if (r == 255 && g == 255 && b == 255) {
          imageData.data[offset + 3] = 0;
        }
      }
    }
    ctx.putImageData(imageData, 0, 0);

    // output base64
    const result = canvas.toDataURL();
  };
  image.src = base64;
}
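One possible follow-up (my assumption; the answer above never does anything with result): the same white-to-alpha pass can be wrapped in a Promise so the caller actually receives the resulting data URL, e.g. in plain JavaScript:

function whiteToTransparent(base64) {
  return new Promise((resolve, reject) => {
    const image = new Image();
    image.onload = () => {
      const canvas = document.createElement('canvas');
      canvas.width = image.naturalWidth;
      canvas.height = image.naturalHeight;
      const ctx = canvas.getContext('2d');
      ctx.drawImage(image, 0, 0);

      const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
      const d = imageData.data;
      for (let i = 0; i < d.length; i += 4) {
        // pure white becomes fully transparent
        if (d[i] === 255 && d[i + 1] === 255 && d[i + 2] === 255) d[i + 3] = 0;
      }
      ctx.putImageData(imageData, 0, 0);
      resolve(canvas.toDataURL());
    };
    image.onerror = reject;
    image.src = base64;
  });
}
// usage: whiteToTransparent(someBase64).then(url => { img.src = url; });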