Javascript (or pseudocode) to properly append two PNGs using arraybuffers/canvas - javascript

I have two images (when combined they will have a 1:1 width:height ratio). If I combine them using convert a.png b.png -append c.png on unix, it works perfectly. I'm trying to achieve this in JavaScript. I'm adding the ArrayBuffers (containing the image data) together because drawing them both in a canvas doesn't seem to produce an identical image. If I simply append each ArrayBuffer, the image ratio is 2:1; does anyone know how to properly append the array buffers, similar to what convert does?
Edit: To elaborate, simply stacking on a canvas won't work (I've tried). This could be due to low-level canvas code; I suspect it's due to how canvas joins the pixels at the boundary between the two images. It needs to be ArrayBuffers.

If, for some reason, you don't want to load the images via Image, then the only option is to manually parse and decompress the file. It's true that the browser can alter images due to ICC/gamma support. This doesn't happen in the canvas step though, but during image loading and conversion to RGBA data.
That being said, the process during getImageData()/putImageData() can also alter the pixel values due to (un-)pre-multiplying and rounding errors.
Example using canvas to merge two PNG images into a single one:
var ctx = c.getContext("2d"),                    // canvas 2d context
    img1 = new Image(),                          // create two image
    img2 = new Image(),                          // elements
    count = 2;                                   // track loading

// load images
img1.onload = img2.onload = function() {         // make sure both images are
  if (!--count) append();                        // loaded before drawing
};
img1.crossOrigin = img2.crossOrigin = "";        // needed for this demo (CORS)
img1.src = "http://i.imgur.com/hlHEhUhb.jpg";    // random images...
img2.src = "http://i.imgur.com/ynzkv40b.jpg";

// process images
function append() {
  // use width to sum the images
  c.width = img1.width + img2.width;             // total width = sum of widths
  c.height = Math.max(img1.height, img2.height); // set max height
  ctx.drawImage(img1, 0, 0);                     // draw in image 1
  ctx.drawImage(img2, img1.width, 0);            // draw in image 2 to the right
  console.log(c.toDataURL());                    // extract, send to server
}
<canvas id=c></canvas>
You cannot simply merge PNG data without decoding it first. This is because the image data chunk is compressed (deflated) and each scan-line in a PNG file starts with a byte describing the line-filter used for that line.
Simply concatenating the data would likely invalidate the deflated stream if stacked vertically, and horizontally it would break the scan-line length because of the extra filter byte that would end up in the middle of each line. The filters for each line could be different as well.
So there is no way around parsing, decompressing and decoding the source PNG files. But in order to parse a PNG file you would have to know how the file format is built up.
The PNG file format
The main file structure of a PNG file is:
Signature - 8 bytes
IHDR chunk - required (width, height, depth, mode etc.)
[PLTE chunk] - required for indexed color mode
[Misc chunks] - optional ancillary and private chunks
IDAT chunk - required, can be multiple
IEND chunk - required, last chunk (data-less)
Any other chunk can be ignored in this case unless you are using an indexed palette in which case you need to consider the PLTE chunk as well.
Chunks allow you to skip to the next chunk if the current chunk is unknown or not needed. A chunk is structured as an 8-byte header (size and name), followed by the data and then a 4-byte CRC-32 checksum (data is not required, as with the IEND chunk):
0x00 SIZE (4 bytes)
0x04 FOURCC (4 bytes)
0x08 DATA (variable, can be 0)
0x?? CRC-32 (4 bytes)
The size represents the data only. The name is an ASCII representation of the chunk name, always four bytes ("IDAT", "IEND", ...).
The CRC-32 checksum can be ignored if you don't wish to validate the data, but cannot be ignored when you produce a new PNG file, as most PNG viewers/parsers use this value; note that it covers the chunk name as well as the data.
All values are unsigned in big-endian byte order.
Reading Chunks
A typical way to read chunked data files such as PNG is to initialize a start offset at the first chunk. Then iterate through reading and moving the file cursor at the same time, checking the chunk name.
For example:
var pos = 8; // first chunk position
var dv = new DataView(arraybuffer); // use a DataView
Make some helper functions to read and move position:
function getUint32() {                  // and similar for Uint16 etc.
  var data = dv.getUint32(pos);         // use big-endian byte-order
  pos += 4;
  return data;
}

// decode chunk name to string (from pngtoy)
function getFourCC() {
  var v = getUint32(),
      c = String.fromCharCode;
  return c((v & 0xff000000) >>> 24) + c((v & 0xff0000) >>> 16) +
         c((v & 0xff00) >>> 8) + c(v & 0xff);
}
Which now allows us to work with the file buffer as intended:
// repeated for each chunk:
var size = getUint32();
var name = getFourCC();
var data, crc;

if (name === "IHDR") {                            // check chunk type
  data = new Uint8Array(dv.buffer, pos, size);    // get data section from chunk
  pos += size;                                    // move to the chunk's CRC field
  crc = getUint32();                              // read CRC-32 checksum
  // validate CRC-32 here
}
else pos += size + 4;                             // skip data and CRC
Tip: even if a chunk is skipped, it can be worth validating its data against the CRC checksum to get an early indication of file corruption.
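If you do want to validate chunks (and you will need this later when writing the file), a CRC-32 routine is required. Here is a minimal table-driven sketch using the standard PNG/zlib polynomial; note that the checksum is calculated over the chunk name bytes plus the chunk data, not the length field:
// build the CRC-32 lookup table once
var crcTable = (function () {
  var table = new Uint32Array(256);
  for (var n = 0; n < 256; n++) {
    var c = n;
    for (var k = 0; k < 8; k++) c = (c & 1) ? (0xEDB88320 ^ (c >>> 1)) : (c >>> 1);
    table[n] = c >>> 0;
  }
  return table;
})();

// bytes: a Uint8Array covering the chunk name + chunk data
function crc32(bytes) {
  var crc = 0xFFFFFFFF;
  for (var i = 0; i < bytes.length; i++) {
    crc = crcTable[(crc ^ bytes[i]) & 0xFF] ^ (crc >>> 8);
  }
  return (crc ^ 0xFFFFFFFF) >>> 0;
}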
The IDAT chunk always contains deflated data since this is the only valid storage form in the format specification, and has to be inflated first. For this process I would recommend (as always) the Pako implementation of the zlib library.
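A minimal sketch of that approach, assuming idatChunks is an array of Uint8Array views (one per IDAT chunk, in file order) collected while scanning the chunks:
var inflator = new pako.Inflate();
idatChunks.forEach(function (chunk, i) {
  inflator.push(chunk, i === idatChunks.length - 1);  // pass true for the last chunk
});
if (inflator.err) throw new Error(inflator.msg);
var rawFiltered = inflator.result;  // Uint8Array: filter byte + scan-line data per line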
Reading process
The read process for each input image then becomes (using a DataView is required):
Check the magic header/signature. These are 8 bytes which should always be the sequence 0x89504E47 0x0D0A1A0A (big-endian); a quick check is sketched after these steps.
If OK, the first chunk (IHDR) will be found at position 8 in the file. You need to parse the content of this header to find the width, height, bit depth (1, 2, 4, 8 or 16) and color type (RGB, RGBA, greyscale, palette etc.), as well as whether the image is interlaced or not.
When this data has been obtained you can scan for the IDAT chunks. Note the plural: there is usually just a single IDAT chunk, but it is perfectly valid to have several. When you reach the IEND chunk there is no more data. A valid PNG file will not have any other chunks between consecutive IDAT chunks when the bitmap is split across several of them.
Pass the data through inflate to decompress it (tip: using the Inflate instance instead of the static function lets you push each IDAT chunk's data separately and decompress them into a single buffer, as sketched above).
Now you will have the raw bitmap data, but each scan-line is still filtered and needs to be de-filtered next.
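A quick signature check with the DataView set up earlier could look like this:
// verify the 8-byte PNG signature using two big-endian 32-bit reads
if (dv.getUint32(0) !== 0x89504E47 || dv.getUint32(4) !== 0x0D0A1A0A) {
  throw new Error("Not a PNG file");
}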
We still cannot merge the files, as we need to decode each scan-line using its filter byte. There are five different line-filters in PNG, from filter 0 (no filtering) up to the more complex filter 4 (Paeth).
In addition, the image can be interlaced (Adam7), which requires a different approach since the data is stored progressively.
When you have decoded each scan-line (and de-interlaced if needed) you will have a raw bitmap unaffected by ICC/gamma from the browser.
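As an illustration, here is a sketch of reversing the line filters for a non-interlaced 8-bit image, following the filter definitions in the PNG specification (filtered is the inflated IDAT data, bpp is bytes per pixel, e.g. 3 for RGB and 4 for RGBA):
function unfilter(filtered, width, height, bpp) {
  var bpl = width * bpp;                       // bytes per scan-line (without filter byte)
  var raw = new Uint8Array(bpl * height);
  var srcPos = 0;

  for (var y = 0; y < height; y++) {
    var filter = filtered[srcPos++];           // first byte of each line is the filter type
    var rowStart = y * bpl;
    var prevStart = rowStart - bpl;            // previous (already reconstructed) line

    for (var x = 0; x < bpl; x++) {
      var cur = filtered[srcPos++];
      var left = x >= bpp ? raw[rowStart + x - bpp] : 0;
      var up = y > 0 ? raw[prevStart + x] : 0;
      var upLeft = (y > 0 && x >= bpp) ? raw[prevStart + x - bpp] : 0;
      var val;

      switch (filter) {
        case 0: val = cur; break;                            // None
        case 1: val = cur + left; break;                     // Sub
        case 2: val = cur + up; break;                       // Up
        case 3: val = cur + ((left + up) >> 1); break;       // Average
        case 4: val = cur + paeth(left, up, upLeft); break;  // Paeth
        default: throw new Error("Unknown filter: " + filter);
      }
      raw[rowStart + x] = val & 0xff;
    }
  }
  return raw;
}

// Paeth predictor as defined by the PNG specification
function paeth(a, b, c) {
  var p = a + b - c,
      pa = Math.abs(p - a),
      pb = Math.abs(p - b),
      pc = Math.abs(p - c);
  if (pa <= pb && pa <= pc) return a;
  if (pb <= pc) return b;
  return c;
}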
An extra step needs to be taken to check that both images are of the same kind (e.g. RGB, RGBA etc.). If not, one of them has to be converted to the other format, usually by "upgrading" the one with less information/quality. If they share the same format and depth you should be good to go.
If the sizes differ in a way that would leave a gap in the final result, padding may be needed to fill the empty pixels, depending on the format and whether or not you want things such as transparency there.
Now you can merge the two bitmaps horizontally or vertically.
Merging two bitmaps
You mentioned you wanted to merge the bitmaps horizontally (a sketch follows these steps):
Set up a new buffer the size of image 1 width + image 2 width times the size of a single pixel (3 for RGB, 4 for RGBA etc.)
Define height as the maximum of the two heights
Determine if you need/want to use padding/zero-fill (height 1 !== height 2)
Set up a main loop over the new buffer, then alternate between image 1 and 2 per scan-line, so that the first scan-line of each image is copied side by side into a single line of the new buffer.
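A minimal sketch of that horizontal merge, assuming both decoded bitmaps are 8-bit RGBA and represented as plain objects with data (Uint8Array), width and height (those names are just for illustration):
function mergeHorizontally(img1, img2) {
  var bpp = 4;                                       // RGBA = 4 bytes per pixel
  var width = img1.width + img2.width;
  var height = Math.max(img1.height, img2.height);
  var out = new Uint8Array(width * height * bpp);    // zero-filled = transparent padding

  for (var y = 0; y < height; y++) {
    var dst = y * width * bpp;
    if (y < img1.height) {                           // scan-line from image 1 on the left
      out.set(img1.data.subarray(y * img1.width * bpp, (y + 1) * img1.width * bpp), dst);
    }
    if (y < img2.height) {                           // scan-line from image 2 on the right
      out.set(img2.data.subarray(y * img2.width * bpp, (y + 1) * img2.width * bpp),
              dst + img1.width * bpp);
    }
  }
  return {data: out, width: width, height: height};
}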
Writing process
The reverse process to save out the image again is then:
Set signature
Add IHDR chunk and update with new sizes, format, depth
Add IDAT chunk
Encode each scanline (you can use filter 0 for simplicity, but it will add to size)
Deflate the data using zlib and add. Update size of the chunk using the compressed size
Calculate CRC-32 checksums
Add IEND chunk
My strategy would be to build the file in parts, using a plain Array to hold each typed-array part: the signature and each chunk with its data. Then pass the array to a Blob, which will concatenate the parts into a single binary buffer.
For example:
var arr = [];
arr.push(taSig); // ta* = typed array
arr.push(taIHDR);
arr.push(taIDAT);
arr.push(taIEND);
Then pass in the array to a Blob:
var blob = new Blob(arr, {type: "image/png"});
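For the individual chunks themselves, a sketch of building one chunk with its length, name, data and CRC (re-using the crc32 helper sketched earlier) could look like this:
// name: 4-character string ("IHDR", "IDAT", ...), data: Uint8Array (may be empty)
function makeChunk(name, data) {
  var out = new Uint8Array(12 + data.length);
  var dv = new DataView(out.buffer);
  dv.setUint32(0, data.length);                                 // SIZE covers the data only
  for (var i = 0; i < 4; i++) out[4 + i] = name.charCodeAt(i);  // FOURCC
  out.set(data, 8);                                             // DATA
  dv.setUint32(8 + data.length,
               crc32(out.subarray(4, 8 + data.length)));        // CRC over name + data
  return out;
}
The taIHDR, taIDAT and taIEND parts above could then be produced with calls like makeChunk("IEND", new Uint8Array(0)).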
The complete PNG file format specification can be found here.
I recommend checking out my pngtoy (PNG parser and decoder, MIT license) for details. It does similar steps to those described above to obtain a raw decoded bitmap.

Related

Uint clamped array to data url

I want to get part of a canvas in the base64 data URL format, which will be sent to my API.
For this I am using the ctx.getImageData() method, which returns a Uint8ClampedArray.
Here is the code:
const data = ctx.getImageData(mousex, mousey, mousex1, mousey1);
const clamped = data.data;
I have tried many btoa methods but they either return a broken string full of A or a broken string with a lot of /.
You can use canvas.toDataURL(type, encoderOptions) for this purpose.
First extract the part of your source image. You don't need getImageData() for that. Instead create a second canvas with part_canvas = document.createElement("canvas"); this canvas doesn't have to be visible on the page.
Assign its .width and .height to the size of the part you want to extract
Get a 2d context with part_ctx = part_canvas.getContext("2d")
Then part_ctx.drawImage(source_image, part_x, part_y, part_width, part_height, 0, 0, part_width, part_height); This will take a rectangular area of the source image and put it in the invisible canvas.
Finally you can do part_canvas.toDataURL() and you have the data URL
Didn't test. But I think this should work.
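Putting those steps together, a rough sketch (also untested; source_image and the part_* variables are placeholders for your own values):
var part_canvas = document.createElement("canvas");    // off-screen canvas
part_canvas.width = part_width;
part_canvas.height = part_height;
var part_ctx = part_canvas.getContext("2d");
// copy the selected rectangle from the source into the off-screen canvas
part_ctx.drawImage(source_image, part_x, part_y, part_width, part_height,
                   0, 0, part_width, part_height);
var dataUrl = part_canvas.toDataURL("image/png");      // base64 data URL to send to the API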

How do you convert a 32bit RGB png with encoded height values to 16 bit png using browser side javascript?

I am trying to create a 16bit heightmap png from a 32bit Rgb encoded height value png.
According to mapbox I can decode their png pixel values to height values in meters with this formula: height = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1)
I am new to understanding this type of information and have tried to figure it out myself but I need some help.
I found this library called image-js; the docs for it are here. Note I am using this in the browser, not with node.
I think it has the abilities that I need. However, I am really not sure how to get the data from the original image and create a new 16bit png with the new height values calculated from the above formula.
If you look in the docs under Image, I can set many properties, such as:
bitDepth, which I can set to 16
getPixelsArray(), a function that will give me an array of pixel values in the form of r,g,b. I think I can then run each pixel through the above formula to get the height. After that, how do I turn this height data back into a 16-bit grayscale heightmap png?
Thanks!
Example images.
32bit Rgb encoded height value png
16 bit heightmap png. You have to zoom in to see the image changes
Update:
Thanks to traktor for the suggestion. I wanted to add some information to my question.
I don't necessarily need to use image-js. My main goal is to get a 16-bit heightmap from a 32-bit RGB png, so any other browser-side JavaScript method would be fine. I am also using the browser-side createWritable() stream to write the file to disk.
I have a method taken from this nice module https://github.com/colkassad/terrain-rgb-height, however I cannot get the browser-side version of pngjs to work correctly. I think there is a difference between browser-side stream readers/writers and node readers/writers that makes it not work.
Thanks!
Looking through the source code for the image-js package on GitHub turns up that the options object used in calls to the library is verified in the source file kind.js, which enumerates the supported option properties in the assignment statement:
const { components, alpha, bitDepth, colorModel } = definition;
However, many options are defaulted based on the kind option property, which itself defaults to "RGBA".
Knowing more about option properties allows using them (I couldn't find options documentation outside of the code).
I would suggest
Create an Image-js image from the 32bit encoded height png. Omit a kind property to use the default 32bit PNG pixel model of 3 color channels plus one alpha channel.
Convert the image data (held as a typed array in the image object's data property) to a single dimensional array of height values using the MapBox conversion algorithm. The layout of the (image.data) typed array appears to be the same as that used for ImageData
Create a new image-js image using
new Image(width, height, decodedHeightArray, {kind: "GREY", bitDepth: 16})
where decodedHeightArray is the array prepared in the previous step.
Just adding my code to what traktor's answer provided
Thank you very much traktor
This worked perfectly. For anyone who wants the code
I am using image-js to encode and decode the image
let file_handleSixteen = await dir_handle.getFileHandle(sixteen_file_name, {create: true});
let writableSixteen = await file_handleSixteen.createWritable();

async function convert16(arrayBuff) {
  let image = await Image.load(arrayBuff);
  let width = image.width;
  let height = image.height;
  let decodedHeightArray = [];
  let pixelsArray = image.getPixelsArray();
  for (const pixel of pixelsArray) {
    let r = pixel[0];
    let g = pixel[1];
    let b = pixel[2];
    let pixelHeight = getHeightFromRgb(r, g, b);
    decodedHeightArray.push(pixelHeight);
  }
  let newImage = new Image(width, height, decodedHeightArray, {kind: "GREY", bitDepth: 16});
  return newImage.toBlob();
}

function getHeightFromRgb(r, g, b) {
  return -10000 + ((r * 256 * 256 + g * 256 + b) * 0.1);
}

const file = await file_handleRgb.getFile();
let imageBuffer = await file.arrayBuffer();
let convertedArray = await convert16(imageBuffer);
await writableSixteen.write(convertedArray);
await writableSixteen.close();
I am also using the browser streams API to write the file to disk. Note that the file stream's writable.write() only accepts certain values, one of them being a Blob, which is why I converted it to a Blob before passing it to the write method.

display image by using uint16array data

I am doing a DICOM project using the XTK library. Now I need to create a list of thumbnails from the input DICOM files (the output images could be PNG or JPG).
During the rendering process, XTK provides an array of pixel data in a Uint16Array for each DICOM file. But I have no idea how to convert that pixel data onto a canvas.
I searched for some related articles or questions but found nothing applicable.
Nowadays, ImageData has a constructor, and the result can be put on a 2d context easily.
What it expects as arguments is a Uint8ClampedArray, a width and a height.
So from an Uint16Array representing rgba pixel data, you'd just have to do
var data = your_uint16array;
var u8 = new Uint8ClampedArray(data.buffer);
var img = new ImageData(u8, width, height);
ctx.putImageData(img, 0,0);
But according to your screenshot, what you have is actually an Array of Uint16Array, so you will probably have to first merge all these Uint16Arrays in a single one.
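For instance, a sketch of concatenating such an array of Uint16Array parts into a single one (slices here is a placeholder for the array XTK gives you):
var total = slices.reduce(function (sum, a) { return sum + a.length; }, 0);
var merged = new Uint16Array(total);
var offset = 0;
slices.forEach(function (a) {
  merged.set(a, offset);   // copy each part at its running offset
  offset += a.length;
});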
Also note that a Uint16Array is a weird view over RGBA pixel data; a Uint32Array would be more conventional (#ffff vs #ffffffff).

How to draw on an HTML5 Canvas, pixel-by-pixel

Suppose that I have a 900x900 HTML5 Canvas element.
I have a function called computeRow that accepts, as a parameter, the number of a row on the grid and returns an array of 900 numbers. Each number represents a number between 0 and 200. There is an array called colors that contains an array of strings like rgb(0,20,20), for example.
Basically, what I'm saying is that I have a function that tells pixel-by-pixel, what color each pixel in a given row on the canvas is supposed to be. Running this function many times, I can compute a color for every pixel on the canvas.
The process of running computeRow 900 times takes about 0.5 seconds.
However, the drawing of the image takes much longer than that.
What I've done is I've written a function called drawRow that takes an array of 900 numbers as the input and draws them on the canvas. drawRow takes a lot longer to run than computeRow! How can I fix this?
drawRow is dead simple. It looks like this:
function drawRow(rowNumber, result /* array */) {
  var plot, context, columnNumber, color;
  plot = document.getElementById('plot');
  context = plot.getContext('2d');
  // Iterate over the results for each column in the row, coloring a single pixel on
  // the canvas the correct color for each one.
  for (columnNumber = 0; columnNumber < width; columnNumber++) {
    color = colors[result[columnNumber]];
    context.fillStyle = color;
    context.fillRect(columnNumber, rowNumber, 1, 1);
  }
}
I'm not sure exactly what you are trying to do, so I apologize if I am wrong.
If you are trying to write a color to each pixel on the canvas, this is how you would do it:
var ctx = document.getElementById('plot').getContext('2d');
var imgdata = ctx.getImageData(0, 0, 640, 480);
var imgdatalen = imgdata.data.length;

for (var i = 0; i < imgdatalen / 4; i++) {  // iterate over every pixel in the canvas
  imgdata.data[4 * i] = 255;     // RED (0-255)
  imgdata.data[4 * i + 1] = 0;   // GREEN (0-255)
  imgdata.data[4 * i + 2] = 0;   // BLUE (0-255)
  imgdata.data[4 * i + 3] = 255; // ALPHA (0-255)
}
ctx.putImageData(imgdata, 0, 0);
This is a lot faster than drawing a rectangle for every pixel. The only thing you would need to do is separate your color into rgba() values.
If you read the color values as strings from an array for each pixel it does not really matter what technique you use as the bottleneck would be that part right there.
For each pixel the cost is split on (roughly) these steps:
Look up array (really a node/linked list in JavaScript)
Get string
Pass string to fillStyle
Parse string (internally) into color value
Ready to draw a single pixel
These are very costly operations performance-wise. To get it more efficient you need to convert that color array into something else than an array with strings ahead of the drawing operations.
You can do this several ways:
If the array comes from a server try to format the array as a blob / typed array instead before sending it. This way you can copy the content of the returned array almost as-is to the canvas' pixel buffer.
Use a web worker to parse the array and pass it back as a transferable object which you then copy into the canvas' buffer. This can be copied directly to the canvas - or do it the other way around: transfer the pixel buffer to the worker, fill it there and return it.
Sort the array by color values and update the colors in color groups. This way you can use fillStyle, or calculate the color into a Uint32 value which you copy to the canvas using a Uint32 buffer view. This does not work well if the colors are very spread out, but works ok if the colors represent a small palette.
If you're stuck with the format of the colors then the second option is what I would recommend primarily depending on the size. It makes your code asynchronous so this is an aspect you need to deal with as well (ie. callbacks when operations are done).
You can of course just parse the array on the same thread and find a way to camouflage it a bit for the user in case it creates a noticeable delay (900x900 shouldn't be that big of a deal even for a slower computer).
If you convert the array, convert it into unsigned 32-bit values and store the result in a typed array. This way you can iterate over the canvas pixel buffer using Uint32 values instead, which is much faster than a byte-per-byte approach.
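As a sketch of that idea, assuming colors holds strings like "rgb(0,20,20)", result is the row array from computeRow, width is the 900-pixel row width from the question, and the platform is little-endian (which is the case for virtually all browsers):
// pre-parse the color strings into 32-bit pixel values once, outside the drawing loop
var ctx = document.getElementById('plot').getContext('2d');
var palette32 = new Uint32Array(colors.length);
colors.forEach(function (str, i) {
  var m = str.match(/(\d+)\s*,\s*(\d+)\s*,\s*(\d+)/);
  // little-endian ABGR packing matches the RGBA byte order of ImageData
  palette32[i] = (255 << 24) | (+m[3] << 16) | (+m[2] << 8) | +m[1];
});

// per row: write whole 32-bit pixels directly into an ImageData buffer
function drawRowFast(rowNumber, result) {
  var imgData = ctx.createImageData(width, 1);
  var buf32 = new Uint32Array(imgData.data.buffer);
  for (var x = 0; x < width; x++) buf32[x] = palette32[result[x]];
  ctx.putImageData(imgData, 0, rowNumber);
}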
fillRect is meant to be used for just that - filling an area with a single color, not pixel by pixel. If you do pixel by pixel, it is bound to be slower as you are CPU bound. You can check it by observing the CPU load in these cases. The code will become more performant if
A separate image is created with the required image data filled in. You can use a worker thread to fill this image in the background. An example of using worker threads is available in the blog post at http://gpupowered.org/node/11
Then, blit the image into the 2d context you want using context.drawImage(image, dx, dy).

Fast way to make RGB array into RGBA array in Javascript

An emulator I am working with internally stores a 1-dimensional framebuffer of RGB values. However, HTML5 canvas uses RGBA values when calling putImageData. In order to display the framebuffer, I currently loop through the RGB array and create a new RGBA array, in a manner similar to this.
This seems suboptimal. There has been much written on performing canvas draws quickly, but I'm still lost on how to improve my application performance. Is there any way to more quickly translate this RGB array to an RGBA array? The alpha channel will always be fully opaque. Also, is there any way to interface with a canvas so that it takes an array of RGB, not RGBA, values?
There's no way to use plain RGB, but the loop in that code could be optimised somewhat by removing repeated calculations, array dereferences, etc.
In general you shouldn't use ctx.getImageData to obtain the destination buffer - you don't normally care what values are already there and should use ctx.createImageData instead. If at all possible, re-use the same raw buffer for every frame.
However, since you want to preset the alpha values to 0xff (they default to 0x00) and only need to do so once, it seems most efficient to just fill the canvas and then fetch the raw values with getImageData:
ctx.fillStyle = '#ffffff'; // implicit alpha of 1
ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
dest = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height).data;
and then for each frame you can just leave the alpha byte untouched:
var n = 4 * w * h;
var s = 0, d = 0;
while (d < n) {
  dest[d++] = src[s++];
  dest[d++] = src[s++];
  dest[d++] = src[s++];
  d++; // skip the alpha byte
}
You could also experiment with "loop unrolling" (i.e. repeating that four line block multiple times within the while loop) although results will vary across browsers.
Since it's very likely that your total number of pixels will be a multiple of four, just repeat the block another three times, and then the while condition will only be evaluated once for every four pixel copies.
Both ctx.createImageData and ctx.getImageData will create a buffer; the latter (get) will be slower since it also has to copy the buffer.
This jsperf: http://jsperf.com/drawing-pixels-to-data confirms that we have around a 33% slowdown on Chrome, and a 16x slowdown on Firefox (FF seems to byte-copy while Chrome copies with 32- or 64-bit moves).
I'll just recall that you can handle typed arrays of different types, and even create a view on the buffer (image.data.buffer). So this may allow you to write the bytes 4 by 4.
var dest = ctx.createImageData(width, height);
var dest32 = new Int32Array(dest.data.buffer);
var i = 0, j = 0, last = 3 * width * height;

while (i < last) {
  // pack four bytes into a single 32-bit write (R in the high byte); on a
  // little-endian platform (most browsers) you would instead pack as
  // (255 << 24) | (b << 16) | (g << 8) | r to match ImageData's RGBA byte order
  dest32[j] = (src[i] << 24) | (src[i + 1] << 16) | (src[i + 2] << 8) | 255;
  i += 3;
  j++;
}
You will see in this jsperf test I made that it is faster to write using 32-bit integers: http://jsperf.com/rgb-to-rgba-conversion-with-typed-arrays
Notice that there is a big issue in those tests: since the test is awful in terms of garbage creation, accuracy is so-so. Still, after many launches, we see around a 50% gain on write-4 vs write-1.
Edit: it might be worth seeing whether reading the source with a DataView wouldn't speed things up, but the input array has to be a buffer (or have a buffer property, like a Uint8Array). (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays/DataView)
Do not hesitate to update the fiddle with such an attempt.
Edit 2: I don't understand; I re-ran the test and now write-4 is slower, and then faster again.
Anyway, you are better off keeping the dest32 buffer around rather than creating a new one each time; since this test measures the Int32Array creation, it does not really correspond to your use case.
