I am doing a DICOM project using the XTK library. I now need to create a list of thumbnails from input DICOM files (the output images could be PNG or JPG).
During the rendering process, XTK provides the pixel data of each DICOM file as a Uint16Array, but I have no idea how to convert that pixel data into a canvas.
I searched for related articles and questions but found nothing usable.
Nowadays, ImageData has a constructor, so you can put one on a 2d context easily.
What it expects as arguments is a Uint8ClampedArray, a width and a height.
So from a Uint16Array representing RGBA pixel data, you'd just have to do:
var data = your_uint16array;                 // raw RGBA bytes viewed 16 bits at a time
var u8 = new Uint8ClampedArray(data.buffer); // reinterpret the same buffer as bytes
var img = new ImageData(u8, width, height);
ctx.putImageData(img, 0, 0);
But according to your screenshot, what you actually have is an Array of Uint16Arrays, so you will probably have to merge all of them into a single one first.
Also note that a Uint16Array is an odd view over RGBA pixel data; a Uint32Array is more conventional (#ffff covers only half a pixel, while #ffffffff is one full RGBA pixel).
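For the merging step, a minimal sketch (where chunks stands in for your array of Uint16Array slices) could be:

```javascript
// Concatenate an array of Uint16Arrays, then view the combined
// buffer as bytes suitable for the ImageData constructor.
function mergeToClamped(chunks) {
  var total = chunks.reduce(function (sum, a) { return sum + a.length; }, 0);
  var merged = new Uint16Array(total);
  var offset = 0;
  chunks.forEach(function (a) {
    merged.set(a, offset); // copy each slice into place
    offset += a.length;
  });
  // same underlying bytes, reinterpreted as 8-bit clamped values
  return new Uint8ClampedArray(merged.buffer);
}
```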
I want to get part of a canvas as a base64 data URL, which will be sent to my API.
For this I am using the ctx.getImageData() method, which returns a Uint8ClampedArray.
Here is the code:
const data = ctx.getImageData(mousex, mousey, mousex1, mousey1);
const clamped = data.data;
I have tried several btoa approaches, but they either return a broken string full of A characters or a broken string with a lot of / characters.
You can use canvas.toDataURL(type, encoderOptions) for this purpose.
First extract the part of your source image. You don't need getImageData() for that. Instead, create a second canvas with part_canvas = document.createElement("canvas"); this canvas doesn't have to be visible on the page.
Set its .width and .height to the size of the part you want to extract.
Then get a context with part_ctx = part_canvas.getContext("2d").
Then call part_ctx.drawImage(source_image, part_x, part_y, part_width, part_height, 0, 0, part_width, part_height); This will take a rectangular area of the source image and put it in the invisible canvas.
Finally, call part_canvas.toDataURL() and you have the data URL.
I didn't test this, but I think it should work.
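Putting those steps together, an untested sketch (the function name and parameters are placeholders, not from the question):

```javascript
// Copy a rectangle of sourceCanvas into an offscreen canvas, then
// export just that part as a base64 data URL.
function extractPartAsDataURL(sourceCanvas, partX, partY, partWidth, partHeight) {
  var partCanvas = document.createElement("canvas"); // never added to the page
  partCanvas.width = partWidth;
  partCanvas.height = partHeight;
  var partCtx = partCanvas.getContext("2d");
  // source rectangle first, then destination rectangle
  partCtx.drawImage(sourceCanvas, partX, partY, partWidth, partHeight,
                    0, 0, partWidth, partHeight);
  return partCanvas.toDataURL("image/png");
}
```

Note that drawImage accepts a canvas as its source, so no getImageData()/btoa round-trip is needed.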
I'm trying to use the Cornerstone DICOM parser (https://github.com/cornerstonejs/dicomParser) to extract the pixel data for a CT scan using JavaScript and an HTML5 canvas.
I need to be able to reduce the size of the resulting pixel data.
I used the Hermite resize algorithm:
https://github.com/viliusle/Hermite-resize
Please see my following proof of concept, which shows an axial slice of the upper abdomen:
http://castlemountain.dk/dicomParser/index2.html
As you can see, two types of compressed images are generated (image 2 and image 3), and it is apparent that image 2 is coarser than image 3 (especially when looking at the ribs). I use the same compression algorithm in both cases, but when the input is the pixel data read back from drawing the uncompressed image (image 1), the quality is better (image 3), and I can't figure out why. Is this HTML5 canvas related?
The example is as follows:
First, I extract the CT pixel data for the 512x512 px CT image, which is subsequently converted to 0-255 greyscale values with a CT window/level (method convertToGreylevel).
This array of greyscale pixel values is then used to generate an imageData object (imgData) for the HTML5 canvas, which is shown as an uncompressed image (image 1):
ctx.putImageData(imgData,0,0);
Next, the function
pixelData2 = resample_hermite(pixelData[slices], 512, 512,
Math.round(512/compression2), Math.round(512/compression2));
is used to generate compressed pixel data (256x256 px), which is used to generate an imageData object that is shown (image 2):
ctx.putImageData(imgData2,512,0);
Afterwards,
var imgInput = ctx.getImageData(0, 0, 512, 512);
is used to get the imageData from the first 512x512 image drawn to the canvas.
The pixel data from the canvas is extracted and used to generate an array of pixel data, which is compressed with:
var outputData = resample_hermite(inputData, 512, 512, Math.round(512/compression2), Math.round(512/compression2));
The resulting array of pixel data (256x256 px) is used to generate an HTML5 image object, which is shown (image 3):
ctx.putImageData(img2, 512, Math.round(512/compression2));
Image 2 is definitely coarser (see the white ribs) than the compressed image 3.
Can anyone point me in the direction of why that is?
Best regards
It looks like you are using an old version of the Hermite-resize lib; your resample_hermite has operations involving "0.5" that the original lib does not have.
After fixing the resample_hermite function (basically removing the 0.5 terms, though I recommend using the latest code from GitHub), the results are much better.
I have two images (which, when combined, will have a 1:1 width:height ratio). If I combine them using convert a.png b.png -append c.png on Unix, it works perfectly. I'm trying to achieve this in JavaScript. I'm appending the ArrayBuffers (containing the image data) together, because drawing them both in a canvas doesn't seem to produce an identical image. If I simply concatenate the ArrayBuffers, the image ratio is 2:1; does anyone know how to properly append the ArrayBuffers, similar to what convert does?
Edit: To elaborate, simply stacking on a canvas won't work (I've tried). This could be due to low-level canvas code; I suspect it's due to how the canvas joins the pixels at the boundary between the two images. It needs to be ArrayBuffers.
If you for some reason don't want to load the images via Image, then the only option is to manually parse and decompress the files. It's true that the browser can alter images due to ICC/gamma handling. This doesn't happen in the canvas step, though, but during image loading and conversion to RGBA data.
That being said, the process during getImageData()/putImageData() can also alter the pixel values due to (un-)premultiplying and rounding errors.
Example using canvas to merge two PNG images into a single one:
var ctx = c.getContext("2d"), // canvas 2d context
img1 = new Image, // create two image
img2 = new Image, // elements
count = 2; // Track for loader
// load images
img1.onload = img2.onload = function() { // make sure images are
if (!--count) append(); // loaded first
};
img1.crossOrigin = img2.crossOrigin = ""; // need this for this demo
img1.src = "http://i.imgur.com/hlHEhUhb.jpg"; // random images...
img2.src = "http://i.imgur.com/ynzkv40b.jpg";
// process images
function append() {
// sum the widths to get the total canvas width
c.width = img1.width + img2.width;             // set total width
c.height = Math.max(img1.height, img2.height); // use max height
ctx.drawImage(img1, 0, 0);                     // draw in image 1
ctx.drawImage(img2, img1.width, 0);            // draw image 2 to the right
console.log(c.toDataURL()); // extract, send to server
}
<canvas id=c></canvas>
You cannot simply merge PNG data without decoding it first. This is because the image data chunk is compressed (deflated), and each scan-line in a PNG file starts with a byte describing the line filter being used.
Simply concatenating the files could invalidate the deflated data in the vertical case, and in the horizontal case it would invalidate the scan-line data versus its length due to the extra filter byte that would be introduced. The filters could differ from line to line as well.
So there is no way around parsing, decompressing and decoding the source PNG files. But in order to parse a PNG file, you have to know how the file format is built up.
The PNG file format
The main file structure of a PNG file is:
Signature: 8 bytes
IHDR chunk: required (width, height, depth, mode etc.)
[PLTE chunk]: required for indexed color mode
[Misc chunks]: optional ancillary and private chunks
IDAT chunk: required, can be multiple
IEND chunk: required, last chunk (data-less)
Any other chunk can be ignored in this case, unless you are using an indexed palette, in which case you need to consider the PLTE chunk as well.
Chunks allow you to skip to the next chunk when the current chunk is unknown or not needed. A chunk starts with an 8-byte header, followed by the data and then a 4-byte CRC-32 checksum (the data section can be empty, as with the IEND chunk):
0x00 SIZE (4 bytes)
0x04 FOURCC (4 bytes)
0x08 DATA (variable, can be 0)
0x?? CRC-32 (4 bytes)
The size represents the data only. The name is an ASCII representation of the chunk type, always four bytes ("IDAT", "IEND", ...).
The CRC-32 checksum can be ignored if you don't wish to validate the data, but it cannot be skipped when you produce a new PNG file: most PNG viewers/parsers check this value, and it covers the chunk name as well as the data.
All values are unsigned and in big-endian byte order.
Reading Chunks
A typical way to read chunked data files such as PNG is to initialize a start offset at the first chunk, then iterate through the file, reading and moving the file cursor at the same time while checking the chunk names.
For example:
var pos = 8; // first chunk position
var dv = new DataView(arraybuffer); // use a DataView
Make some helper functions to read and move position:
function getUint32() {            // similar helpers for Uint16 etc.
    var data = dv.getUint32(pos); // big-endian byte order (DataView default)
    pos += 4;
    return data;
}
// decode chunk name to string (from pngtoy)
function getFourCC() {
var v = getUint32(),
c = String.fromCharCode;
return c((v & 0xff000000)>>>24) + c((v & 0xff0000)>>>16) +
c((v & 0xff00)>>>8) + c((v & 0xff)>>>0)
}
Which now allows us to work with the file buffer as intended:
// repeated actions:
var size = getUint32();
var name = getFourCC();
var data, crc;
if (name === "IHDR") { // check chunk type
data = new Uint8Array(dv.buffer, pos, size); // get data section from chunk
pos += size; // next chunk or the end
crc = getUint32(); // read CRC-32 checksum
// validate CRC-32 here
}
else pos += size + 4; // skip data and crc
Tip: even when a chunk is skipped, it can be worth validating its data against the CRC checksum to catch file corruption early.
The IDAT chunk always contains deflated data, as this is the only valid storage form in the format specification, so it has to be inflated first. For this process I would recommend (as always) the Pako implementation of the zlib library.
Reading process
The read process for each input image then becomes (using a DataView is recommended):
Check the magic header/signature. These are 8 bytes which should always be the sequence 0x89504E47 0x0D0A1A0A (big-endian).
If OK, the first chunk (IHDR) will be found at position 8 in the file. You need to parse the content of this header to find the width, height and bit depth (16, 8, 4, 1), the color type (RGB, RGBA, greyscale, bitmap etc.), and whether the image is interlaced or not.
When this data has been obtained you can scan the IDAT chunks. Note the plural: there is usually just a single IDAT chunk, but it is perfectly valid to have several. When you reach the IEND chunk there is no more data. A valid PNG file will not have any other chunks between the IDAT chunks when the bitmap is split into several of them.
Pass the data through inflate to decompress it (tip: using an Inflate instance instead of the static function lets you push each separate IDAT chunk's data and decompress everything to a single buffer).
Now you will have a raw but still filtered PNG bitmap.
We still cannot merge the files, as we first need to decode each scan-line using its filter byte. There are five different line filters in PNG, where 0 means no filtering and 4 is the more complex Paeth filter.
In addition, the image can be interlaced (Adam7), which requires a different approach due to being progressive.
When you have decoded each scan-line (and de-interlaced if needed), you will have a raw bitmap unaffected by the browser's ICC/gamma handling.
An extra step needs to be taken to check that both images are of the same kind (e.g. RGB, RGBA, etc.). If not, one has to be converted to the other format, usually by "upgrading" the one with less information/quality. If they have the same format and depth, you should be good to go.
If the sizes differ in a way that would leave a gap in the final result, padding may be needed to fill the empty pixels, depending on the format and on whether you want things such as transparency.
Now you can merge the two bitmaps horizontally or vertically.
Merging two bitmaps
You mentioned you wanted to merge the bitmaps horizontally:
Set up a new buffer of size (image 1 width + image 2 width) times the height times the size of a single pixel (3 bytes for RGB, 4 for RGBA etc.)
Define the height as the maximum of the two heights
Determine whether you need/want to use padding/zero-fill (when height 1 !== height 2)
Set up a main loop over the new buffer, then alternate between image 1 and image 2 per scan-line, so that the first scan-line of each source is copied side by side into a single scan-line of the new buffer.
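A hedged sketch of those steps (img1, img2 and bpp are hypothetical stand-ins for your decoded bitmaps and bytes-per-pixel, not names from the post):

```javascript
// Merge two raw, unfiltered bitmaps of the same pixel format side by side.
// Each image is {data: Uint8Array, width, height}; bpp is bytes per pixel.
function mergeHorizontal(img1, img2, bpp) {
  var outW = img1.width + img2.width;
  var outH = Math.max(img1.height, img2.height);
  var out = new Uint8Array(outW * outH * bpp); // zero-filled, so padding is free
  for (var y = 0; y < outH; y++) {
    if (y < img1.height) // copy image 1's scan-line into the left half
      out.set(img1.data.subarray(y * img1.width * bpp, (y + 1) * img1.width * bpp),
              y * outW * bpp);
    if (y < img2.height) // copy image 2's scan-line into the right half
      out.set(img2.data.subarray(y * img2.width * bpp, (y + 1) * img2.width * bpp),
              y * outW * bpp + img1.width * bpp);
  }
  return { data: out, width: outW, height: outH };
}
```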
Writing process
The reverse process, to save the image out again, is then:
Write the signature
Add the IHDR chunk and update it with the new size, format and depth
Add the IDAT chunk
Encode each scan-line (you can use filter 0 for simplicity, but it will add to the size)
Deflate the data using zlib and add it; update the size of the chunk using the compressed size
Calculate the CRC-32 checksums
Add the IEND chunk
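The CRC-32 step is often the only unfamiliar part; PNG uses the standard zlib CRC-32 polynomial, computed over the chunk name followed by the chunk data. A self-contained sketch:

```javascript
// Standard CRC-32 (polynomial 0xEDB88320), as used by PNG chunks.
var CRC_TABLE = (function () {
  var table = new Uint32Array(256);
  for (var n = 0; n < 256; n++) {
    var c = n;
    for (var k = 0; k < 8; k++) {
      c = (c & 1) ? (0xEDB88320 ^ (c >>> 1)) : (c >>> 1);
    }
    table[n] = c;
  }
  return table;
})();

// bytes: a Uint8Array covering the chunk name followed by the chunk data
function crc32(bytes) {
  var crc = 0xFFFFFFFF;
  for (var i = 0; i < bytes.length; i++) {
    crc = CRC_TABLE[(crc ^ bytes[i]) & 0xFF] ^ (crc >>> 8);
  }
  return (crc ^ 0xFFFFFFFF) >>> 0; // force an unsigned result
}
```

For example, the checksum of the data-less IEND chunk is crc32 over the four bytes "IEND", which is why every PNG file ends with the same four CRC bytes.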
My strategy would be to build the file in parts, using a plain Array to hold each typed-array part (the signature and each chunk + data), then pass the array to a Blob, which will concatenate the parts into a single binary buffer.
For example:
var arr = [];
arr.push(taSig); // ta* = typed array
arr.push(taIHDR);
arr.push(taIDAT);
arr.push(taIEND);
Then pass in the array to a Blob:
var blob = new Blob(arr, {type: "image/png"});
The complete PNG file format specification can be found here.
I recommend checking out my pngtoy (PNG parser and decoder, MIT licensed) for details. It does similar steps to those described above to obtain a raw decoded bitmap.
I'd like to dynamically downsize some images on my canvas using createjs, and then store the smaller images to be displayed when zooming out of the canvas for performance reasons. Right now, I'm using the following code:
var bitmap = createjs.Bitmap('somefile.png');
// wait for bitmap to load (using preload.js etc.)
var oc = document.createElement('canvas');
var octx = oc.getContext('2d');
oc.width = bitmap.image.width*0.5;
oc.height = bitmap.image.height*0.5;
octx.drawImage(bitmap.image, 0, 0, oc.width, oc.height);
var dataUrl = oc.toDataURL('image/png'); // very expensive
var smallBitmap = new createjs.Bitmap(dataUrl);
This works, but:
The toDataURL operation is very expensive when converting to image/png and too slow to use in practice (and I can't convert to the faster image/jpeg due to the insufficient quality of the output at every setting I tried).
Surely there must be a way to downsize the image without having to resort to separate canvas code and then manually converting to draw onto the createjs Bitmap object?
I've also tried:
octx.drawImage(bitmap.image, 0, 0, oc.width, oc.height);
var smallBitmap = new createjs.Bitmap(oc);
But although very fast, this doesn't seem to actually work (and in any case I'm having to create a separate canvas element every time to facilitate this.)
I'm wondering if there is a way that I can use drawImage to draw a downsampled version of the bitmap into a createjs Bitmap instance directly without having to go via a separate canvas object or do a conversion to string?
If I understand correctly, internally this is how the createjs cache property works (i.e. uses drawImage internally to write into the DisplayObject) but I'm unable to figure out how use it myself.
You have tagged this post with createjs and easeljs, but your examples show plain Canvas context usage for scaling.
You can use the scale parameter on Bitmap.cache() to get the result you want, then reuse the cacheCanvas as necessary.
// This will create a half-size cache (50%)
// But scale it back up for you when it displays on the stage
var bmp = new createjs.Bitmap(img);
bmp.cache(0, 0, img.width, img.height, 0.5);
// Pull out the generated cache and use it in a new Bitmap
// This will display at the new scaled size.
var bmp2 = new createjs.Bitmap(bmp.cacheCanvas);
// Un-cache the first one to reset it if you want
bmp.uncache();
Here is a fiddle to see it in action: http://jsfiddle.net/lannymcnie/ofdsyn7g/
Note that caching just uses another canvas with a drawImage call to scale the image down. I would definitely stay away from toDataURL, as it is not performant at all.
Suppose that I have a 900x900 HTML5 Canvas element.
I have a function called computeRow that accepts, as a parameter, the number of a row on the grid and returns an array of 900 numbers, each between 0 and 200. There is an array called colors that contains strings like rgb(0,20,20), for example.
Basically, what I'm saying is that I have a function that tells pixel-by-pixel, what color each pixel in a given row on the canvas is supposed to be. Running this function many times, I can compute a color for every pixel on the canvas.
The process of running computeRow 900 times takes about 0.5 seconds.
However, the drawing of the image takes much longer than that.
What I've done is write a function called drawRow that takes an array of 900 numbers as input and draws them on the canvas. drawRow takes a lot longer to run than computeRow! How can I fix this?
drawRow is dead simple. It looks like this:
function drawRow(rowNumber, result /* array */) {
    var plot, context, columnNumber, color;
    plot = document.getElementById('plot');
    context = plot.getContext('2d');
    // Iterate over the results for each column in the row, coloring a single
    // pixel on the canvas the correct color for each one.
    for (columnNumber = 0; columnNumber < width; columnNumber++) {
        color = colors[result[columnNumber]];
        context.fillStyle = color;
        context.fillRect(columnNumber, rowNumber, 1, 1);
    }
}
I'm not sure exactly what you are trying to do, so I apologize if I am wrong.
If you are trying to write a color to each pixel on the canvas, this is how you would do it:
var ctx = document.getElementById('plot').getContext('2d');
var imgdata = ctx.getImageData(0, 0, 640, 480);
var imgdatalen = imgdata.data.length;
for (var i = 0; i < imgdatalen / 4; i++) { // iterate over every pixel in the canvas
    imgdata.data[4 * i]     = 255; // RED   (0-255)
    imgdata.data[4 * i + 1] = 0;   // GREEN (0-255)
    imgdata.data[4 * i + 2] = 0;   // BLUE  (0-255)
    imgdata.data[4 * i + 3] = 255; // ALPHA (0-255)
}
ctx.putImageData(imgdata, 0, 0);
This is a lot faster than drawing a rectangle for every pixel. The only thing you would need to do is separate your colors into their r, g, b and a values.
If you read the color values as strings from an array for each pixel, it does not really matter what technique you use, as the bottleneck will be that part right there.
For each pixel the cost is split across (roughly) these steps:
Look up the array entry
Get the string
Pass the string to fillStyle
Parse the string (internally) into a color value
Draw the single pixel
These are costly operations performance-wise. To make this more efficient, you need to convert that color array into something other than an array of strings ahead of the drawing operations.
You can do this several ways:
If the array comes from a server, try to format it as a blob / typed array before sending it. This way you can copy the content of the returned array almost as-is to the canvas's pixel buffer.
Use a web worker to parse the array and pass it back as a transferable object, which you then copy into the canvas's buffer. Or do it the other way around: transfer the pixel buffer to the worker, fill it there, and return it.
Sort the array by color value and draw in color groups. This way you can set fillStyle once per group, or calculate each color into a Uint32 value which you copy to the canvas using a Uint32 buffer view. This does not work well if the colors are very spread out, but works OK if they represent a small palette.
If you're stuck with the format of the colors, then the second option is what I would recommend, primarily depending on the size. It makes your code asynchronous, so this is an aspect you need to deal with as well (i.e. callbacks when operations are done).
You can of course just parse the array on the same thread and find a way to camouflage the delay a bit for the user, in case it is noticeable (900x900 shouldn't be that big of a deal even for a slower computer).
If you convert the array, convert it into unsigned 32-bit values and store the result in a typed array. This way you can iterate over your canvas pixel buffer using Uint32 values, which is much faster than a byte-per-byte approach.
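A small sketch of that Uint32 approach (assuming a little-endian platform, which covers virtually all consumer hardware):

```javascript
// Fill a pixel buffer with one 32-bit write per pixel instead of four byte writes.
var width = 4, height = 1;
var buf = new ArrayBuffer(width * height * 4);
var u32 = new Uint32Array(buf);      // write view: one slot per pixel
var u8 = new Uint8ClampedArray(buf); // read view: what ImageData expects
// On little-endian systems the byte layout inside the uint32 is 0xAABBGGRR.
var red = (255 << 24) | (0 << 16) | (0 << 8) | 255; // alpha, blue, green, red
for (var i = 0; i < u32.length; i++) u32[i] = red;
// u8 can now back an ImageData via new ImageData(u8, width, height)
```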
fillRect is meant to be used for just that: filling an area with a single color, not working pixel by pixel. If you go pixel by pixel, it is bound to be slower, as you are CPU bound. You can verify this by observing the CPU load in both cases. The code will become more performant if:
A separate image is created with the required image data filled in. You can use a worker thread to fill this image in the background. An example of using worker threads is available in the blog post at http://gpupowered.org/node/11
Then, blit that image into the 2d context you want using context.drawImage(image, dx, dy).
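A hedged sketch of that final blit (the helper name and the temporary-canvas approach are my own, not from the post):

```javascript
// Draw a filled ImageData into a target context in one composited call.
// "imgData" would typically be produced (or transferred back) by the worker.
function blit(ctx, imgData, dx, dy) {
  var tmp = document.createElement("canvas"); // offscreen staging canvas
  tmp.width = imgData.width;
  tmp.height = imgData.height;
  tmp.getContext("2d").putImageData(imgData, 0, 0);
  ctx.drawImage(tmp, dx, dy); // single drawImage into the visible context
}
```

Unlike calling putImageData directly on the target, drawImage respects the current transform and compositing settings, which is usually what you want when zooming or panning.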