I'm using a canvas to resize an image client-side before uploading it to the server.
var maxWidth = 500;
var maxHeight = 500;
// Manage resizing: scale so the longer side fits within the maximum
var ratio;
if (image.width >= image.height) {
    ratio = maxWidth / image.width;
} else {
    ratio = maxHeight / image.height;
}
...
canvas.width = image.width * ratio;
canvas.height = image.height * ratio;
var resCtx = canvas.getContext('2d');
resCtx.drawImage(image, 0, 0, image.width * ratio, image.height * ratio);
This works, since the resized image is what gets saved on the server, but there are two points I would like to improve:
The image weight in KB is important to me; I want a very lightweight image. Even after resizing, the canvas image is still too heavy: I can see that the image saved on the server has a resolution of 96 DPI and 32-bit color depth, even though the original image has a resolution of 72 DPI and 24-bit color depth. Why? Can I set the resolution of the canvas image?
The resized image does not look very nice, because the resizing process introduces distortion... I've tried the custom algorithm by GameAlchemist in this post:
HTML5 Canvas Resize (Downscale) Image High Quality?
getting a very nice result, but then the resized image was heavier in KB than the original one! Is there a good algorithm for getting quality resized images while keeping them lightweight?
DPI does not matter at all with images. A 1k x 1k image contains exactly the same data whether it is tagged as 892478427 DPI or 1 DPI. DPI is arbitrary in this context, so ignore that part (it is only used as information for DTP programs so they know the size relative to the document). Images are only measured in pixels.
Canvas is RGBA-based (32 bits: 24-bit color + an 8-bit alpha channel), so its optimal export format is one that preserves this (i.e. PNG files). You can, however, export 24-bit images without the alpha channel by requesting JPEG as the export format.
Compression is basically signal processing. As in all forms of (lossy) compression, you try to remove frequencies from the signal in order to reduce the size. The more high frequencies there are, the harder the image (or sound, or any other signal-based data) is to compress.
In images, high frequency manifests as small details (thin lines and so forth) and as noise. To reduce the size of a compressed image you need to remove high frequencies. You can do this by using interpolation as a low-pass filter. A blur is a form of low-pass filter as well, and in principle they work in the same way.
So in order to reduce image size you can apply a slight blur on the image before compressing it.
The JPEG format supports a quality setting which can reduce the size as well, although this uses a different approach than blurring:
// 0.5 = quality, lower = lower quality. Range is [0.0, 1.0]
var dataUri = canvas.toDataURL('image/jpeg', 0.5);
To blur an image you can use a simple and fast method of scaling it to half the size and then back up (with smoothing enabled). This will use interpolation as a low-pass filter by averaging the pixels.
Or you can use a "real" blur algorithm such as a Gaussian blur or a box blur, but only with a small amount.
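For example, here is a minimal sketch of the half-size-and-back approach combined with the JPEG quality export (assuming canvas already holds the resized image; the temporary canvas and the 0.5 quality factor are just illustrative choices):
// Sketch: blur via down-scale + up-scale, then export as JPEG with reduced quality.
// Assumes `canvas` already contains the resized image.
var tmp = document.createElement('canvas');
tmp.width = canvas.width / 2;
tmp.height = canvas.height / 2;
var tmpCtx = tmp.getContext('2d');
tmpCtx.imageSmoothingEnabled = true;
tmpCtx.drawImage(canvas, 0, 0, tmp.width, tmp.height);   // down to half size (averages pixels)
var ctx = canvas.getContext('2d');
ctx.imageSmoothingEnabled = true;
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.drawImage(tmp, 0, 0, canvas.width, canvas.height);   // back up to full size
var dataUri = canvas.toDataURL('image/jpeg', 0.5);       // smaller file than a straight export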
Related
I have an image with height=75pixels and width=75pixels (75x75).
I want to calculate what size this image would be at different resolutions [480p (640x480), 720p (1280x720), and 1080p (1920x1080)] while still keeping the image's original aspect ratio.
For example, I know if I keep multiplying by 2 that eventually I will reach a number for each resolution like so:
75x75 * 2 = 150x150
150x150 * 2 = 300x300
300x300 * 2 = 600x600
600x600 * 2 = 1200x1200 (This resolution would be just over 480p)
1200x1200 * 2 = 2400x2400 (This resolution would be over 720p/1080p)
How can I calculate different resolutions for an image while keeping the original aspect ratio?
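For reference, a rough sketch of one way to compute this (the fitInto helper name and the choice to take the largest size that fits inside each target resolution are assumptions, not from the question):
// Hypothetical helper: scale (w, h) to the largest size that fits inside
// a target resolution without changing the aspect ratio.
function fitInto(srcWidth, srcHeight, targetWidth, targetHeight) {
    var scale = Math.min(targetWidth / srcWidth, targetHeight / srcHeight);
    return {
        width: Math.floor(srcWidth * scale),
        height: Math.floor(srcHeight * scale)
    };
}

fitInto(75, 75, 640, 480);    // { width: 480, height: 480 } for 480p
fitInto(75, 75, 1280, 720);   // { width: 720, height: 720 } for 720p
fitInto(75, 75, 1920, 1080);  // { width: 1080, height: 1080 } for 1080p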
I am trying to convert my HTML code to an image using the Html2Canvas library. In order to get a high-quality image as the result, I am using the scale property with a value of 2.
html2canvas(element, {
    scale: 2
}).then(function (canvas) {
    getCanvas = canvas;
    var imageData = getCanvas.toDataURL("image/png", 1);
    var newData = imageData.replace(/^data:image\/png/, "data:application/octet-stream");
});
Now this newData is sent to PHP using AJAX, and to save it I use the code below:
define('UPLOAD_DIR', 'assets/output_images/');
$image_parts = explode(";base64,", $_POST['ImageUri']);
$image_type_aux = explode("image/", $image_parts[0]);
$image_base64 = base64_decode($image_parts[1]);
$file_name = uniqid().'.png';
$file = UPLOAD_DIR . $file_name;
$success = file_put_contents($file, $image_base64);
But it always gives me the same image as with scale value 1, and there is no improvement in the final image.
NOTE: My div dimensions are 369 x 520, and the generated image size is always 369 x 520 whether the scale value is 1, 2, 3, etc.
Thanks in advance
This just can't work: if you scale up a bitmap image you won't get better quality, but lower quality. At best you create fake pixels that are copies of the original pixels.
For real scaling with no quality loss you have to use vector images such as SVG, which don't live in a canvas (which works with bitmaps) but in the DOM as regular elements.
Obviously you can use the optional parameters of
context.drawImage(img, sx, sy, swidth, sheight, x, y, width, height);
where width and height will be the final width and height in pixels (so no higher-quality image, just a larger size and lower quality, because the original pixels are duplicated into cloned "fake" pixels).
I hope I've understood your question correctly.
edit: note that to draw into a canvas you have to create an Image object and at least set its src (e.g. 'imagename.png'); you can also add the alt attribute and other things like an id. For more examples, see W3schools.org canvas drawImage(...).
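A minimal sketch of that (the canvas id 'myCanvas' is a placeholder and 'imagename.png' stands in for any bitmap):
// Sketch: load a bitmap and draw it into a canvas at a chosen output size.
var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');
var img = new Image();
img.onload = function () {
    // Source rectangle = whole image; destination rectangle = whole canvas.
    ctx.drawImage(img, 0, 0, img.width, img.height, 0, 0, canvas.width, canvas.height);
};
img.src = 'imagename.png';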
I have a FabricJS canvas that has to be converted to a PDF that matches the exact fractional inches for print production. Per other questions I have received, my printer is a large format printer and actually needs this/is capable to printing to the exact decimal as follows:
Width (Inches): 38.703111
Height (Inches): 25.999987
Since the requirements are a DPI of 300, I obtained the pixels by multiplying the width and height x 300 as follows:
38.703111 inches x 300 = 11610.9333 px
25.999987 inches x 300 = 7799.9961 px
With these measurements in hand, I created a FabricJS canvas that users could edit and then convert to an image (will need to figure out how to convert it to a PDF server side later/suspecting node.js with the pdfkit module).
Since it is not usable to have a 11610 x 7799px Canvas on a page, I set the size of the canvas as follows:
Width: 650px
Height: 436.6571863 (Original Height * New Width / Original Width = new height)
Here is what my Canvas looks like (HTML) and corresponding JavaScript code to render it:
<canvas id="c" width="650" height="436.6571863" style="border:1px solid"></canvas>
var canvas = new fabric.Canvas('c', {});
Users are able to edit the canvas and then convert it to an image, but this is where the problem occurs. I attempt to scale the canvas via FabricJS' toDataURL method using a multiplier of 17.86297431 (Original Width in Pixels / New Width in Pixels and Original Height in Pixels / New Height in Pixels both equal a multiplier of 17.86297431) as follows:
var dataURL = canvas.toDataURL({
format: 'png',
multiplier: 17.86297431
})
document.write('<img src="' + dataURL + '"/>');
However, once it scales, the width of my outputted image appears correct at 11610, but the height is off at 7788 when it should be 7799. This output does not show fractions, but rather just the whole pixels when I inspect the element.
My question is, how can I get my FabricJS Canvas to properly scale up to the pixels (or inches) so I can have my PDF (converted from my PNG) in the correct dimensions? Is this an issue with pixel fractions not being respected/what can I do about this?
You need to set NUM_FRACTION_DIGITS in the fabric.js file.
By default this value is 2; you can change it and set it to, for example, 9.
Say your multiplier is 9.0123456; for Fabric.js, with the default of 2 fraction digits, it becomes 9.01.
If you set NUM_FRACTION_DIGITS = 9, you will keep your multiplier as 9.0123456.
Here is the documentation link: http://fabricjs.com/docs/fabric.Object.html
That is what you need: NUM_FRACTION_DIGITS - Defines the number of fraction digits to use when serializing object values. You can use it to increase/decrease the precision of values like left, top, scaleX, scaleY, etc.
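As a rough sketch, following this answer (you can also set it at runtime instead of editing the fabric.js source; the value 9 is just an example):
// Raise the precision fabric.js uses for serialized values so the
// multiplier is not rounded down to 2 fraction digits.
fabric.Object.NUM_FRACTION_DIGITS = 9;

var dataURL = canvas.toDataURL({
    format: 'png',
    multiplier: 17.86297431
});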
You should not work with float dimensions; that is a bad idea. Why not use Math.ceil on your dimensions? This will make no visual difference and is the approach usually used.
Also, you could use the Zoomify algorithm's approach to determine your scale factor: start from the original dimensions and divide them by 2 until you get a value smaller than a maximum you consider acceptable. That way your scale factor will be a power of 2.
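A rough sketch of that halving approach (the maxSize of 1024 is an arbitrary example, and the full-size dimensions are the question's targets rounded up):
// Zoomify-style approach: halve the original dimensions until they fit
// under an acceptable maximum, so the scale factor is a power of 2.
function powerOfTwoScale(width, height, maxSize) {
    var scale = 1;
    while (width / scale > maxSize || height / scale > maxSize) {
        scale *= 2;
    }
    return scale; // display size = original / scale; export multiplier = scale
}

var scale = powerOfTwoScale(11611, 7800, 1024); // 16 in this case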
How can I upload large images to the GPU using WebGL without freezing up the browser (think of high-res skyboxes or texture atlases)?
I thought at first to seek if there's a way to make texImage2D do its thing asynchronously (uploading images to the GPU is IO-ish, right?), but I could not find any way.
I then tried using texSubImage2D to upload small chunks that fit in a 16 ms time window (I'm aiming for 60 fps). But texSubImage2D takes an offset AND width/height parameter only if you pass in an ArrayBufferView - when passing in Image objects you can only specify the offset and it will (I'm guessing) upload the whole image. I imagine painting the image to a canvas first (to get it as a buffer) is just as slow as uploading the whole thing to the GPU.
Here's a minimal example of what I mean: http://jsfiddle.net/2v63f/3/.
It takes ~130 ms to upload this image to the GPU.
Exact same code as on jsfiddle:
var canvas = document.getElementById('can');
var gl = canvas.getContext('webgl');
var image = new Image();
image.crossOrigin = "anonymous";
//image.src = 'http://i.imgur.com/9Tq28Qj.jpg?1';
image.src = 'http://i.imgur.com/G0qL97y.jpg';
image.addEventListener('load', function () {
    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    var now = performance.now();
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    console.log(performance.now() - now);
});
The referenced image appears to be 3840x1920
For a high-res skybox you can usually discard the need for an alpha channel, and then decide if some other pixel format structure can provide a justifiable trade off between quality and data-size.
The specified RGBA encoding means this will require a 29,491,200 byte data transfer post-image-decode. I attempted to test with RGB, discarding the alpha but saw a similar 111 ms transfer time.
Assuming you can pre-process the images prior to your usage, and care only about the time measured for the data transfer to the GPU, you can perform some form of lossy encoding or compression on the data to decrease the amount of bandwidth you need to transfer.
One of the more trivial encoding methods would be to halve your data size and send the data to the chip as RGB 565. This will decrease your data size to 14,745,600 bytes at the cost of color range.
var buff = new Uint16Array(image.width*image.height);
//encode image to buffer
//on texture load-time, perform the following call
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, image.width, image.height, 0, gl.RGB, gl.UNSIGNED_SHORT_5_6_5, buff);
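The encoding step itself could look roughly like this (a sketch that fills the buffer above by reading the decoded pixels back through a temporary 2D canvas; other approaches are possible):
// Sketch: read the decoded image through a 2D canvas, then pack each
// RGBA pixel into a 16-bit RGB565 value (same buff as above).
var tmp = document.createElement('canvas');
tmp.width = image.width;
tmp.height = image.height;
var ctx2d = tmp.getContext('2d');
ctx2d.drawImage(image, 0, 0);
var rgba = ctx2d.getImageData(0, 0, image.width, image.height).data;

var buff = new Uint16Array(image.width * image.height);
for (var i = 0, p = 0; i < rgba.length; i += 4, p++) {
    var r = rgba[i]     >> 3;  // keep top 5 bits of red
    var g = rgba[i + 1] >> 2;  // keep top 6 bits of green
    var b = rgba[i + 2] >> 3;  // keep top 5 bits of blue
    buff[p] = (r << 11) | (g << 5) | b;
}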
Assuming you can confirm support for the S3TC extension (https://developer.mozilla.org/en-US/docs/Web/WebGL/Using_Extensions#WEBGL_compressed_texture_s3tc), you could also store and download the texture in DXT1 format and decrease the memory transfer requirement down to 3,686,400 bytes.
It appears any form of S3TC will result in the same RGB565 color-range reduction.
var ext = (
gl.getExtension("WEBGL_compressed_texture_s3tc") ||
gl.getExtension("MOZ_WEBGL_compressed_texture_s3tc") ||
gl.getExtension("WEBKIT_WEBGL_compressed_texture_s3tc")
);
gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGB_S3TC_DXT1_EXT, image.width, image.height, 0, buff);
Most block compressions offer a decreased image quality up close, but allow for richer imagery at a distance.
Should you need high resolution up close, a lower-resolution or compressed texture could be loaded initially to serve as a poor LOD distant view, and a smaller subset of the higher resolution could be loaded as you approach the texture in question.
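A rough sketch of that progressive idea (lowResImage, tileImage, TILE_SIZE, fullWidth/fullHeight and the grid layout are placeholder assumptions; each tile would be a separate, smaller image so any single upload stays cheap):
// Sketch: reserve full-resolution storage once, then stream small pre-cut tiles
// in over several frames with texSubImage2D so no single upload blocks a frame.
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, fullWidth, fullHeight, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);   // allocate only, no pixel data yet

function uploadTile(tileImage, col, row) {
    // Each tileImage is a small Image covering one TILE_SIZE x TILE_SIZE region.
    gl.texSubImage2D(gl.TEXTURE_2D, 0, col * TILE_SIZE, row * TILE_SIZE,
                     gl.RGBA, gl.UNSIGNED_BYTE, tileImage);
}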
Within my trivial tests at http://jsfiddle.net/zy8hkby3/2/ , the texture-payload-size reductions can cause exponentially decreasing time requirements for the data-transfer, but this is most likely GPU dependent.
I am working on an image cropping tool at the moment; the crop tool comes from jCrop. What I am trying to do is make the image the user crops from smaller than the original upload. Basically, if the user uploads a landscape image I need to make the image 304px wide without altering the aspect ratio of the shot.
For instance, if the user uploads a portrait shot, I need to make the image 237px without altering the aspect ratio of the shot.
Is this possible? I have access to the original image sizes in my code, but I cannot work out how to make sure I am not altering the aspect ratio.
Yes, it's possible if the source image is already wide enough. If it's not wide enough there isn't enough data to crop and if you wanted to maintain aspect ratio, you'd need to scale the image up before the crop.
Something like this should get you started:
var cropWidth = 237;
var aspectRatio = sourceWidth / sourceHeight;
var cropHeight = cropWidth / aspectRatio;
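For example, with a hypothetical 1920x1080 landscape source:
var sourceWidth = 1920, sourceHeight = 1080;   // hypothetical source image
var aspectRatio = sourceWidth / sourceHeight;  // 1.777...
var cropWidth = 237;
var cropHeight = cropWidth / aspectRatio;      // ~133, so 237x133 keeps the 16:9 shape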
To properly maintain aspect ratio you should use GCD:
function gcd(a, b) {
return (b == 0) ? a : gcd(b, a % b);
}
You should then do something like this:
function calcRatio(objectWidth, objectHeight) {
var ratio = gcd(objectWidth, objectHeight),
xRatio = objectWidth / ratio,
yRatio = objectHeight / ratio;
return { "x": xRatio, "y": yRatio };
}
This will get you the ratio of some object. You can use this information to figure out how large the resulting image should be.
function findSize (targetHeight, xRatio, yRatio) {
var widthCalc = Math.round((xRatio * targetHeight) / yRatio);
return { "width" : widthCalc, "height" : targetHeight };
}
var test = calcRatio(1280,720),
dimensions = findSize(400, test.x, test.y);
//400 is what we want the new height of the image to be
Those are your dimensions. If you don't have any extra "space" around your images that you need to account for, then your work is done. Otherwise you need to handle a couple of cases.