Stop pixel font from being blurred when rendered - javascript

I am trying to create a little game with a retro pixel look and I am using a pixelated font. Unfortunately I am unable to render the font with edges as sharp as they are in the source font.
About my MCVE:
I am using this font as fonts/pokemon_classic.ttf (I found no way to host the font online, so no JSFiddle), but you can use any pixel font you like.
The example below renders the font like this:
How can I make this text render as sharp as it is in the source font? It should look like this (edited image):
Note: the scale of root may change during runtime to fit the screen.
A less elegant solution which would probably work is to clamp the alpha of each pixel to either 0 or 1 depending on some threshold, but I don't know how to do this (a rough sketch of that idea follows the CSS below).
JS:
PIXI.settings.SCALE_MODE = PIXI.SCALE_MODES.NEAREST;
let scale = 30;
let app = new PIXI.Application();
document.body.appendChild(app.view);
app.renderer.view.style.position = "absolute";
app.renderer.view.style.display = "block";
app.renderer.autoResize = true;
app.renderer.resize(window.innerWidth, window.innerHeight);
let root = new PIXI.Container();
app.stage.addChild(root);
root.scale.set(scale);
document.fonts.load('8pt "pokemon"').then(() => {
    let text = new PIXI.Text("Test", {fontFamily: 'pokemon', fontSize: 8, fill: 0xff1010});
    root.addChild(text);
});
CSS:
@font-face {
    font-family: 'pokemon';
    src: url("fonts/pokemon_classic.ttf");
}
* {
    padding: 0;
    margin: 0;
}
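For reference, the alpha-threshold idea mentioned in the question could be sketched as a custom PIXI.Filter fragment shader (untested; the 0.5 cutoff and the shader itself are assumptions, and premultiplied alpha may need extra care):
// Untested sketch: snap alpha to 0 or 1 around a cutoff so scaled edges stay hard.
const thresholdFrag = `
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
void main(void) {
    vec4 c = texture2D(uSampler, vTextureCoord);
    float a = step(0.5, c.a);  // hard 0/1 alpha cutoff
    gl_FragColor = vec4(c.rgb / max(c.a, 0.0001), 1.0) * a;  // un-premultiply, then re-premultiply by the snapped alpha
}`;
// Applied to the text object created above:
// text.filters = [new PIXI.Filter(undefined, thresholdFrag)];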

Things that can make text blurry:
1. SCALE_MODE
2. Sub-pixel positioning. Turn on roundPixels: in v4 you pass it when creating the new PIXI.Application; in v5 you can set it globally via PIXI.settings.ROUND_PIXELS = true;, or individually via displayObject.roundPixels = true;
3. Scaling
So you're good on #1, but #2 and #3 could be issues.
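A minimal sketch of how #1 and #2 might be set, assuming PixiJS v5 (untested):
// Untested sketch for PixiJS v5: nearest-neighbour sampling plus whole-pixel snapping.
PIXI.settings.SCALE_MODE = PIXI.SCALE_MODES.NEAREST; // #1: no bilinear blur when scaling
PIXI.settings.ROUND_PIXELS = true;                   // #2: avoid sub-pixel positioning
// #3: scaling the container still magnifies an 8px rasterization of the text;
// rendering the text at its final resolution (see the answers below) avoids that.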

I resolved it by setting resolution like this:
text.resolution = window.devicePixelRatio * scale
Changing it re-renders the text, which hurts performance, so I only apply it for the final value of scale.
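In the context of the question's code, that might look roughly like this (untested sketch):
// Untested sketch: raise the text's rasterization resolution once the final scale is known,
// instead of letting the scaled container magnify an 8px raster.
document.fonts.load('8pt "pokemon"').then(() => {
    let text = new PIXI.Text("Test", { fontFamily: 'pokemon', fontSize: 8, fill: 0xff1010 });
    text.resolution = window.devicePixelRatio * scale; // each change re-rasterizes the text
    root.addChild(text);
});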

If you don't mind having a double resolution for the whole app...
My Pixi.js text and textures were blurry, so I increased the resolution of the whole Pixi app.
Your canvas size will then double.
You can then use options.autoDensity to fit the doubled resolution into the same canvas size. https://pixijs.download/dev/docs/PIXI.Application.html
// Pixi.js API from v6.5.1
import * as PIXI from 'pixi.js';
PIXI.settings.PRECISION_FRAGMENT = PIXI.PRECISION.HIGH; // might help a bit
PIXI.settings.ROUND_PIXELS = true; // might help a bit
PIXI.settings.RESOLUTION = 2;
const app = new PIXI.Application({ width: 750, height: 400, autoDensity: true });

Related

Fabric.js seems limited to 2048 texture size despite using WebGL

I'm setting up an app that allows the user to upload an image and make minor modifications (zoom, brightness, rotate, etc). I'm running into an issue with the brightness adjustment where Fabric.js just redraws a 2,048 x 2,048 segment of the image, rather than the entire thing. This is despite the max texture size being 16,384.
The image I'm uploading is large (5,400 x 3,600), but that's on purpose. These days even cell phones have high resolution cameras, so I need to be prepared for users to upload large images.
I found that if I arbitrarily resize the image down by .5 as soon as it's loaded, then the brightness adjustment works just fine. That's not the end of the world - but I need to know how far to resize any image upon receipt. So is there a way to calculate the current texture size and then resize it down to something I can be sure Fabric.js will handle? I thought the current texture size would be the larger of the pixel width or height of the image, but something must be wrong there, because my current image would be 5,400, which is well below the max of 16,384 (and resizing it by half brings it down to 2,700, which is still above 2,048, yet this actually works).
Any ideas on how to either:
determine how far to resize the image so Fabric.js can handle it
or, get Fabric.js to use the max texture size (since it seems to be ignoring it)
Here are some code snippets:
var FabricCanvas = new fabric.Canvas('FabricCanvas');
// image file upload handler
jQuery(".FabricPhotoUpload").on("change", function( event_change ) {
    var reader = new FileReader();
    reader.onload = function ( event_onload ) {
        var imgObj = new Image();
        imgObj.src = event_onload.target.result;
        imgObj.onload = function () {
            var FabricImage = new fabric.Image(imgObj);
            FabricImage.set({ angle: 0, top: 0, left: 0 });
            fabric.filterBackend = fabric.initFilterBackend();
            if ( fabric.isWebglSupported() ) {
                fabric.textureSize = fabric.maxTextureSize;
            }
            FabricCanvas.add(FabricImage);
            FabricCanvas.renderAll();
        }
    }
    reader.readAsDataURL( event_change.target.files[0] );
    event_change.target.value = "";
});
// using jQueryUI for the brightness slider
window.FabricSliderMax_Bright = 50;
jQuery( ".FabricBrightness" ).slider({
    slide: function( event, ui ) {
        var NewBrightness = ui.value / window.FabricSliderMax_Bright;
        // for now I'm assuming FabricImage.filters[0] is the brightness filter
        if ( typeof( FabricImage.filters[0] ) == "undefined" ) {
            FabricImage.filters[0] = new fabric.Image.filters.Brightness({ brightness: NewBrightness });
        } else {
            FabricImage.filters[0].brightness = NewBrightness;
        }
        FabricImage.applyFilters();
        FabricCanvas.renderAll();
    },
    max: window.FabricSliderMax_Bright,
    min: -50,
    value: 1
});
I too am playing with maxTextureSize.
I have not yet determined whether there is a reliable way to pre-determine the hardware's limit on all platforms, but you can set it directly:
if (fabric.isWebglSupported()) fabric.textureSize = 65536;
I suspect it needs to be at least a multiple of 4 (4 bytes per pixel), but whether the increments can be smaller than 1K, I have yet to determine.
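If you'd rather query the hardware limit than hard-code a value, something like this might work (a hedged sketch; creating a throwaway WebGL context is my own idea, not part of Fabric's API):
// Hedged sketch: ask WebGL for the device's real limit instead of guessing.
var gl = document.createElement('canvas').getContext('webgl');
var maxTextureSize = gl ? gl.getParameter(gl.MAX_TEXTURE_SIZE) : 2048;
if (fabric.isWebglSupported()) fabric.textureSize = maxTextureSize;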
I've found that I need to first init the filter backend, then set the max texture size, then re-init the filter backend again. I'm not sure if this will work for everyone or not, but it fixed the issue for me:
// @ts-ignore
fabric.filterBackend = fabric.initFilterBackend(); // this will init WebGL
// @ts-ignore
fabric.textureSize = fabric.maxTextureSize; // allows for larger images
// This will re-init WebGL with the larger textureSize.
// For some reason we need this extra re-init,
// otherwise we get an error about 'RangeError out-of-bounds Uint8Array on ArrayBuffer'.
// @ts-ignore
fabric.filterBackend = fabric.initFilterBackend();
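As for the 'how far to resize' part of the question: once the effective texture size is known, the scale factor is just the ratio of the longest side to that limit. A hypothetical helper (not part of Fabric's API):
// Hypothetical helper: how much to shrink an image so its longest side fits the texture limit.
function scaleToFitTexture(width, height, maxTextureSize) {
    var longSide = Math.max(width, height);
    return longSide > maxTextureSize ? maxTextureSize / longSide : 1;
}
// e.g. downscale the uploaded image by this factor (via a temporary canvas)
// before constructing the fabric.Image, so filters never exceed the limit.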

How do I get the MIME type of an image/blob in javascript?

I'm working on a Chrome Extension in which I resize images (actually resize; not changing the browser display) that users right click on. When they right click on the image, I get access to the image's 'src'.
I can resize the images that aren't gifs fine; I'm using canvases to do this. You can see me do this here https://jsfiddle.net/cyqvacc6/6/.
img_url = 'https://i.imgur.com/SHo6Fub.jpg';
function get_image(image_url, emoji_name) {
var img_el = document.createElement('img');
img_el.onload = function () {
canvas = img_to_canvas(img_el);
emoji_sized_canvas = emoji_sized(canvas);
document.body.appendChild(emoji_sized_canvas);
};
img_el.src = image_url;
}
function img_to_canvas(img) {
canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
canvas_ctx = canvas.getContext('2d');
canvas_ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
return canvas;
}
function emoji_sized(canvas) {
var target_dim = emoji_dimensions(canvas.width, canvas.height);
var factor = 2;
var canvas_long_side = Math.max(canvas.width, canvas.height);
var target_long_side = Math.max(target_dim.width, target_dim.height);
new_canvas = document.createElement('canvas');
new_canvas_ctx = new_canvas.getContext('2d');
if ((target_long_side === canvas_long_side)) {
// Return the image.
return canvas;
} else if (target_long_side > canvas_long_side * factor) {
// Increase the size of the image and then resize the result.
new_canvas.width = canvas.width * factor;
new_canvas.height = canvas.height * factor;
new_canvas_ctx.drawImage(canvas, 0, 0, new_canvas.width, new_canvas.height);
return emoji_sized(new_canvas);
} else if (canvas_long_side > target_long_side * factor) {
// Half the size of the image and then resize the result.
var width = new_canvas.width = canvas.width / factor;
var height = new_canvas.height = canvas.height / factor;
new_canvas_ctx.drawImage(canvas, 0, 0, new_canvas.width, new_canvas.height);
return emoji_sized(new_canvas);
} else {
// Resize the image in one shot
new_canvas.width = target_dim.width;
new_canvas.height = target_dim.height;
new_canvas_ctx.drawImage(canvas, 0, 0, new_canvas.width, new_canvas.height);
return new_canvas;
}
}
function emoji_dimensions(width, height) {
const MAX_SIDE_LENGTH = 128;
// Get the larger side
long_side = Math.max(height, width);
// Determine the scale ratio
// If the image is between 95% and 100% of the target
// emoji size, don't adjust its size.
var scale;
if ((long_side >= 0.95 * MAX_SIDE_LENGTH) && (long_side <= MAX_SIDE_LENGTH))
{
scale = 1;
} else {
scale = MAX_SIDE_LENGTH / long_side;
}
return {
'height': height * scale,
'width': width * scale
};
}
Unfortunately, I'm not seeing an easy way to resize gifs using canvases. When I try the same approach on gifs, the 'resized' image is no longer a gif; it's just the first frame of the gif resized.
I think I'm going to end up sending gifs to a server to resize them, but still, in order to do this, I need to know whether the image I'm working on is animated or not, which I don't know how to do.
So, how do I determine if an image is a gif? Also, is it possible to resize these gifs from the client, i.e. javascript?
For reference, I need to reduce the gifs in terms of byte size and pixel, i.e. the gif needs to be both below 128px in both height and width and less than 64k in total byte size.
Since your question actually contains multiple questions, it's quite hard to answer, so for now I won't include code here.
First, the Canvas API can only draw the first frame of any animated image passed through an <img> element, according to the specs:
Specifically, when a CanvasImageSource object represents an animated image in an HTMLOrSVGImageElement, the user agent must use the default image of the animation (the one that the format defines is to be used when animation is not supported or is disabled), or, if there is no such image, the first frame of the animation, when rendering the image for CanvasRenderingContext2D APIs.
So you won't natively be able to render all your gif's frames on the canvas.
For this, you'll have to parse the file and extract every frame from it.
Here is an untested library that offers this functionality: libgif-js.
If you don't like libraries, you could also write a script yourself.
Edit: I tried this lib and it's awful... don't use it. Maybe you could fork it, but it's really not meant for image processing.
Once you've got the frames, you can resize them with canvas and then re-encode them all into a final gif file. gif.js, also untested at first, seems to be able to do that.
I tested it too: a little bit less awful, but it doesn't handle transparency and it needs to have the js files hosted, so no online demo... It would also probably need a fork...
And finally, to answer the title question, "How to check the MIME type of a file", check this Q/A.
Basically, the steps are to extract the first 4 bytes of your file and check them against known magic numbers. The 'image/gif' magic number is 47 49 46 38.
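A minimal sketch of that check for the GIF case (the helper name and callback style are my own; it assumes you have the image as a Blob/File, e.g. after fetching its src):
// Hypothetical helper: read the first 4 bytes of a Blob/File and compare them
// against the GIF magic number 47 49 46 38 ("GIF8").
function isGif(blob, callback) {
    var reader = new FileReader();
    reader.onload = function () {
        var bytes = new Uint8Array(reader.result);
        callback(bytes[0] === 0x47 && bytes[1] === 0x49 &&
                 bytes[2] === 0x46 && bytes[3] === 0x38);
    };
    reader.readAsArrayBuffer(blob.slice(0, 4));
}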

Clipping mask using fabricjs

I'm currently working on web app for photo editing using FabricJS and one of features I need to implement is something like Clipping masks from Photoshop.
For example, I have these assets: frame, mask and image. I need to insert the image inside the frame and clip it with the mask. The trickiest part is in the requirements:
User should be able to modify image inside frame, e.g. move, rotate, skew... Frame itself also can be moved inside canvas.
Number of layers is not limited so user can add objects under or above masked image.
Masks, frames and images is not predefined, user should be able to upload and use new assets.
My current solution is this:
Load assets
Set globalCompositeOperation of image to source-out
Set clipTo function for image.
Add assets on canvas as a group
In this solution the clipTo function keeps the image inside the rectangular area of the frame, and with the help of globalCompositeOperation I'm clipping the image to the actual mask. At first sight it works fine, but if I add a new layer above this newly added group, it will be cut off because of the globalCompositeOperation="source-out" rule. I've created a JSFiddle to show this.
So, what else could I try? I've seen some posts on StackOverflow with advice to use SVGs for the clipping mask, but if I understand correctly the SVG must contain only one path. This could be a problem because of the third requirement of my app.
Any advice in right direction will help, because right now I'm totally stuck with this problem.
You can do this by using the clipPath property of the image object you want to mask. With this you can mask any type of object. You also need to add some ctx configuration in the clipTo function of the image object.
Check this link: https://jsfiddle.net/naimsajjad/8w7hye2v/8/
(function() {
var img01URL = 'http://fabricjs.com/assets/printio.png';
var img02URL = 'http://fabricjs.com/lib/pug.jpg';
var img03URL = 'http://fabricjs.com/assets/ladybug.png';
var canvas = new fabric.Canvas('c');
canvas.backgroundColor = "red";
canvas.setHeight(500);
canvas.setWidth(500);
canvas.setZoom(1)
var circle = new fabric.Circle({radius: 40, top: 50, left: 50, fixed: true, fill: '', stroke: '1' });
canvas.add(circle);
canvas.renderAll();
fabric.Image.fromURL(img01URL, function(oImg) {
oImg.scale(.25);
oImg.left = 10;
oImg.top = 10;
oImg.clipPath = circle;
oImg.clipTo = function(ctx) {
clipObject(this,ctx)
}
canvas.add(oImg);
canvas.renderAll();
});
var bili = new fabric.Path('M85.6,606.2c-13.2,54.5-3.9,95.7,23.3,130.7c27.2,35-3.1,55.2-25.7,66.1C60.7,814,52.2,821,50.6,836.5c-1.6,15.6,19.5,76.3,29.6,86.4c10.1,10.1,32.7,31.9,47.5,54.5c14.8,22.6,34.2,7.8,34.2,7.8c14,10.9,28,0,28,0c24.9,11.7,39.7-4.7,39.7-4.7c12.4-14.8-14-30.3-14-30.3c-16.3-28.8-28.8-5.4-33.5-11.7s-8.6-7-33.5-35.8c-24.9-28.8,39.7-19.5,62.2-24.9c22.6-5.4,65.4-34.2,65.4-34.2c0,34.2,11.7,28.8,28.8,46.7c17.1,17.9,24.9,29.6,47.5,38.9c22.6,9.3,33.5,7.8,53.7,21c20.2,13.2,62.2,10.9,62.2,10.9c18.7,6.2,36.6,0,36.6,0c45.1,0,26.5-15.6,10.1-36.6c-16.3-21-49-3.1-63.8-13.2c-14.8-10.1-51.4-25.7-70-36.6c-18.7-10.9,0-30.3,0-48.2c0-17.9,14-31.9,14-31.9h72.4c0,0,56-3.9,70.8,26.5c14.8,30.3,37.3,36.6,38.1,52.9c0.8,16.3-13.2,17.9-13.2,17.9c-31.1-8.6-31.9,41.2-31.9,41.2c38.1,50.6,112-21,112-21c85.6-7.8,79.4-133.8,79.4-133.8c17.1-12.4,44.4-45.1,62.2-74.7c17.9-29.6,68.5-52.1,113.6-30.3c45.1,21.8,52.9-14.8,52.9-14.8c15.6,2.3,20.2-17.9,20.2-17.9c20.2-22.6-15.6-28-16.3-84c-0.8-56-47.5-66.1-45.1-82.5c2.3-16.3,49.8-68.5,38.1-63.8c-10.2,4.1-53,25.3-63.7,30.7c-0.4-1.4-1.1-3.4-2.5-6.6c-6.2-14-74.7,30.3-74.7,30.3s-108.5,64.2-129.6,68.9c-21,4.7-18.7-9.3-44.3-7c-25.7,2.3-38.5,4.7-154.1-44.4c-115.6-49-326,29.8-326,29.8s-168.1-267.9-28-383.4C265.8,13,78.4-83.3,32.9,168.8C-12.6,420.9,98.9,551.7,85.6,606.2z',{top: 0, left: 180, fixed: true, fill: 'white', stroke: '', scaleX: 0.2, scaleY: 0.2 });
canvas.add(bili);
canvas.renderAll();
fabric.Image.fromURL(img02URL, function(oImg) {
oImg.scale(0.5);
oImg.left = 180;
oImg.top = 0;
oImg.clipPath = bili;
oImg.clipTo = function(ctx) {
clipObject(this,ctx)
}
canvas.add(oImg);
canvas.renderAll();
});
function clipObject(thisObj,ctx)
{
if (thisObj.clipPath) {
ctx.save();
if (thisObj.clipPath.fixed) {
var retina = thisObj.canvas.getRetinaScaling();
ctx.setTransform(retina, 0, 0, retina, 0, 0);
// to handle zoom
ctx.transform.apply(ctx, thisObj.canvas.viewportTransform);
thisObj.clipPath.transform(ctx);
}
thisObj.clipPath._render(ctx);
ctx.restore();
ctx.clip();
var x = -thisObj.width / 2, y = -thisObj.height / 2, elementToDraw;
if (thisObj.isMoving === false && thisObj.resizeFilter && thisObj._needsResize()) {
thisObj._lastScaleX = thisObj.scaleX;
thisObj._lastScaleY = thisObj.scaleY;
thisObj.applyResizeFilters();
}
elementToDraw = thisObj._element;
elementToDraw && ctx.drawImage(elementToDraw,
0, 0, thisObj.width, thisObj.height,
x, y, thisObj.width, thisObj.height);
thisObj._stroke(ctx);
thisObj._renderStroke(ctx);
}
}
})();
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/3.6.3/fabric.min.js"></script>
<canvas id="c" width="400" height="400"></canvas>
Not sure what you want.
If you want the last image loaded (named img2), the one you send to the back, to not affect the layers above, do the following.
You have mask,frame,img, and img2;
Put them in the following order and with the following comp settings.
img2, source-over
img, source-over
mask, destination-out
frame, source-over
If you want something else you will have to explain it in more detail.
Personally when I provide masking to the client I give them full access to all the composite methods and allow them to work out what they need to do to achieve a desired effect. Providing a UI that allows you to change the comp setting, and layer order makes it a lot easier to sort out the sometimes confusing canvas composite rules.
I'd suggest looking at this solution.
Multiple clipping areas on Fabric.js canvas
You end up with a shape layer that is used to define the mask shape. That shape then gets applied as a clipTo to your image.
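The basic pattern from that answer looks roughly like this (a sketch of the old clipTo API; the URL and numbers are placeholders):
// Rough sketch of the clipTo approach (pre-2.x Fabric clipTo API; values are placeholders).
fabric.Image.fromURL('image.png', function (img) {
    img.clipTo = function (ctx) {
        // the path drawn here defines the visible region, in the image's local coordinates
        ctx.rect(-100, -75, 200, 150);
    };
    canvas.add(img);
    canvas.renderAll();
});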
The one limitation I can think of, though, that you might run into is when you start to rotate various shapes. I know I have it working great with a rectangle and a circle, but I ran into some issues with polygons from what I recall... This was all set up under an older version of FabricJS, however, so there may have been some improvements there that I'm not experienced with.
The other issue I ran into was drop shadows didn't render correctly when passed to a NodeJS server running FabricJS.

Why won't my sprite appear on top of my background?

I have been practicing using sprites for a game I am going to make and have watched and read a few tutorials. I thought I was close to getting my sprite to appear so I could finally start my game, but while practicing I can't get it to work. I have done 2 separate tutorials where I can get the sprite and the background to appear by themselves, but I cannot get them to work together. I have been using EaselJS too; some of the sprite animation code has been copied from tutorials.
<!DOCTYPE HTML>
<html>
<head>
<meta charset="UTF-8">
<title>sprite prac</title>
<!-- EaselJS library -->
<script src="lib/easel.js"></script>
<script>
// Initialize on start up so game runs smoothly
function init() {
canvas = document.getElementById("canvas");
stage = new Stage(canvas);
bg = new Image();
bg.src = "img/grassbg.jpg";
bg.onload = setBG;
stage.addChild(background);
imgMonsterARun = new Image();
imgMonsterARun.onload = handleImageLoad;
imgMonsterARun.onerror = handleImageError;
imgMonsterARun.src = "img/MonsterARun.png";
stage.update();
}
function handleImageLoad(e) {
startGame();
}
// Simple function for setting up the background
function setBG(event){
var bgrnd = new Bitmap(bg);
stage.addChild(bgrnd);
stage.update();
}
function startGame() {
// create a new stage and point it at our canvas:
stage = new createjs.Stage(canvas);
// grab canvas width and height for later calculations:
screen_width = canvas.width;
screen_height = canvas.height;
// create spritesheet and assign the associated data.
var spriteSheet = new createjs.SpriteSheet({
// image to use
images: [imgMonsterARun],
// width, height & registration point of each sprite
frames: {width: 64, height: 64, regX: 32, regY: 32},
animations: {
walk: [0, 9, "walk"]
}
});
// create a BitmapAnimation instance to display and play back the sprite sheet:
bmpAnimation = new createjs.BitmapAnimation(spriteSheet);
// start playing the first sequence:
bmpAnimation.gotoAndPlay("walk"); //animate
// set up a shadow. Note that shadows are ridiculously expensive. You could display hundreds
// of animated rats if you disabled the shadow.
bmpAnimation.shadow = new createjs.Shadow("#454", 0, 5, 4);
bmpAnimation.name = "monster1";
bmpAnimation.direction = 90;
bmpAnimation.vX = 4;
bmpAnimation.x = 16;
bmpAnimation.y = 32;
// have each monster start at a specific frame
bmpAnimation.currentFrame = 0;
stage.addChild(bmpAnimation);
// we want to do some work before we update the canvas,
// otherwise we could use Ticker.addListener(stage);
createjs.Ticker.addListener(window);
createjs.Ticker.useRAF = true;
createjs.Ticker.setFPS(60);
}
//called if there is an error loading the image (usually due to a 404)
function handleImageError(e) {
console.log("Error Loading Image : " + e.target.src);
}
function tick() {
// Hit testing the screen width, otherwise our sprite would disappear
if (bmpAnimation.x >= screen_width - 16) {
// We've reached the right side of our screen
// We need to walk left now to go back to our initial position
bmpAnimation.direction = -90;
}
if (bmpAnimation.x < 16) {
// We've reached the left side of our screen
// We need to walk right now
bmpAnimation.direction = 90;
}
// Moving the sprite based on the direction & the speed
if (bmpAnimation.direction == 90) {
bmpAnimation.x += bmpAnimation.vX;
}
else {
bmpAnimation.x -= bmpAnimation.vX;
}
// update the stage:
stage.update();
}
</script>
</head>
<body onload="init();">
<canvas id="canvas" width="500" height="500" style="border: thin black solid;" ></canvas>
</body>
</html>
There are a few places where you are using some really old APIs, which may or may not be supported depending on your version of EaselJS. Where did you get the easel.js script you reference?
Assuming you have a version of EaselJS that matches the APIs you are using, there are a few issues:
You add background to the stage. There is no background variable, so you are probably getting an error when you add it. You already add bgrnd in the setBG method, which should be fine. If you get an error here, then this could be your main issue (see the sketch below).
You don't need to update the stage any time you add something, just when you want the stage to "refresh". In your code, you update after setting the background, and again immediately at the end of your init(). These will fire one after the other.
Are you getting errors in your console? That would be a good place to start debugging. I would also recommend posting code if you can to show an actual demo if you continue to have issues, which will help identify what is happening.
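For points 1 and 2, a minimal sketch of what init() and setBG() could look like (untested, reusing the names from the question's code):
// Untested sketch based on the question's code: don't add an undefined "background",
// and only update the stage after the bitmap has actually been added.
function init() {
    canvas = document.getElementById("canvas");
    stage = new Stage(canvas);
    bg = new Image();
    bg.onload = setBG;
    bg.src = "img/grassbg.jpg";
    imgMonsterARun = new Image();
    imgMonsterARun.onload = handleImageLoad;
    imgMonsterARun.onerror = handleImageError;
    imgMonsterARun.src = "img/MonsterARun.png";
}
function setBG(event) {
    var bgrnd = new Bitmap(bg);
    stage.addChildAt(bgrnd, 0); // keep the background behind the sprite
    stage.update();
}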
If you have a newer version of EaselJS:
BitmapAnimation is now Sprite, and doesn't support direction. To flip Sprites, use scaleX=-1
Ticker no longer uses addListener. Instead it uses the EventDispatcher. createjs.Ticker.addEventListener("tick", tickFunction);
You can get new versions of the CreateJS libraries at http://code.createjs.com, and you can get updated examples and code on the website and GitHub.
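Putting those notes together, a rough sketch of the newer-API equivalents (EaselJS 0.7+ style, untested):
// Rough, untested sketch of the newer EaselJS APIs mentioned above.
var sprite = new createjs.Sprite(spriteSheet, "walk"); // BitmapAnimation is now Sprite
sprite.scaleX = -1;                                    // flip instead of a "direction" property
stage.addChild(sprite);
createjs.Ticker.setFPS(60);
createjs.Ticker.addEventListener("tick", tick);        // replaces Ticker.addListener(window)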

Drawing on canvas after megapix rendering is reversed

I have a page which allows you to browse in an image, then draw on it and save both the original and the annotated version. I am leveraging megapix-image.js and exif.js to help in rendering images from multiple mobile devices properly. It works great, except in certain orientations. For example, a vertical photo taken on an iPhone4s is considered orientation 6 by exif and gets flipped accordingly by megapix-image so it's rendered nicely on the canvas. For some reason, when I draw on it afterward, it seems like the drawing is reversed. Mouse and touch both behave the same way. The coordinates look right to me (meaning they match a working horizontal pic and a non-working vertical pic), as does the canvas height and width when megapix-image.js flips it. This leads me to believe it has something to do with the context, but honestly, I am not really sure. I have a JS fiddle of the part of my work that shows the behavior. Just browse in a vertically taken pic from a mobile device or take a pic in vertical format on a mobile device and use it. I think all will show this same behavior.
The final rendering is done like this:
function RenderImage(file2) {
if (typeof file2[0].files[0] != 'undefined') {
EXIF.getData(file2[0].files[0], function () {
orientation = EXIF.getTag(this, "Orientation");
var file = file2[0].files[0];
var mpImg = new MegaPixImage(file);
var resCanvas1 = document.getElementById('annoCanvas');
mpImg.render(resCanvas1, {
maxWidth: 700,
maxHeight: 700,
orientation: orientation
});
});
}
}
But the full jsfiddle is here:
http://jsfiddle.net/awebster28/Tq3qU/6/
Does anyone have any clues for me?
If you look at the lib you are using, there is a transformCoordinate function that is used to set the right transform before drawing.
And they don't save/restore the canvas context (boooo!!!), so it remains with this transform afterwards.
The solution for you is to do what the lib should do: save the context before the render and restore it after:
function RenderImage(file2) {
// ... same code ...
var mpImg = new MegaPixImage(file);
var eData = EXIF.pretty(this);
// Render resized image into canvas element.
var resCanvas1 = document.getElementById('annoCanvas');
var ctx = resCanvas1.getContext('2d');
ctx.save();
//setting the orientation flips it
mpImg.render(resCanvas1, {
maxWidth: 700,
maxHeight: 700,
orientation: orientation
});
ctx.restore();
//...
}
I ended up fixing this by adding another canvas to my html (named "annoCanvas2"). Then I updated megapix-image.js to include this function, which draws the contents of the rendered canvas onto the fresh one:
function drawTwin(sourceCanvas) {
    var id = sourceCanvas.id + "2";
    var destCanvas = document.getElementById(id);
    if (destCanvas !== null) {
        var twinCtx = destCanvas.getContext("2d");
        destCanvas.width = sourceCanvas.width;
        destCanvas.height = sourceCanvas.height;
        twinCtx.drawImage(sourceCanvas, 0, 0, sourceCanvas.width, sourceCanvas.height);
    }
}
Then, just after the image is rotated, flipped, and rendered, I render the resulting canvas to my "twin". That gave me a nice canvas with my updated image that I could then draw on and also save!
var tagName = target.tagName.toLowerCase();
if (tagName === 'img') {
    target.src = renderImageToDataURL(this.srcImage, opt, doSquash);
} else if (tagName === 'canvas') {
    renderImageToCanvas(this.srcImage, target, opt, doSquash);
    //------I added this-----------
    drawTwin(target);
}
I was glad to have it fixed so I met my deadline, but I am still not sure why I had to do this. If anyone out there can explain it, I'd love to know why.
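One likely explanation, going by the earlier answer: render() applies the EXIF orientation transform to the canvas context and never restores it, so anything drawn afterwards happens in that rotated/flipped space. Resetting the transform after rendering (an untested sketch) might have avoided the second canvas:
// Untested sketch: clear the transform left behind on the canvas before annotating.
var ctx = resCanvas1.getContext('2d');
ctx.setTransform(1, 0, 0, 1, 0, 0); // back to the identity transform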
