I want to get part of a canvas as a base64 data URL, which will be sent to my API.
For this I am using the ctx.getImageData() method, which returns a Uint8ClampedArray.
Here is the code:
const data = ctx.getImageData(mousex, mousey, mousex1, mousey1);
const clamped = data.data;
I have tried many btoa approaches, but they either return a broken string full of A's or a broken string with a lot of /'s.
You can use canvas.toDataURL(type, encoderOptions) for this purpose.
First extract the part of your source image. You don't need getImageData() for that. Instead, create a second canvas with part_canvas = document.createElement("canvas"); this canvas doesn't have to be visible on the page.
Set its .width and .height to the size of the part you want to extract.
Get a 2D context on that canvas with part_ctx = part_canvas.getContext("2d").
Then part_ctx.drawImage(source_image, part_x, part_y, part_width, part_height, 0, 0, part_width, part_height); will take a rectangular area of the source image and put it in the invisible canvas.
Finally you can do part_canvas.toDataURL() and you have the data URL.
Didn't test. But I think this should work.
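Putting the steps together, here is an untested sketch; source_image, part_x, part_y, part_width and part_height stand in for your own image and crop rectangle:
const part_canvas = document.createElement("canvas"); // offscreen, never added to the DOM
part_canvas.width = part_width;
part_canvas.height = part_height;
const part_ctx = part_canvas.getContext("2d");
// Copy the source rectangle into the top-left corner of the offscreen canvas.
part_ctx.drawImage(source_image,
    part_x, part_y, part_width, part_height, // source rectangle
    0, 0, part_width, part_height);          // destination rectangle
const dataUrl = part_canvas.toDataURL("image/png"); // base64 data URL to send to the API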
In JavaScript, I'm successfully converting an RGB video frame from my webcam to a tensor using TensorFlow.js's function tf.browser.fromPixels(). Now I'd like to select only a part of this tensor according to values I've previously obtained, specifically a rectangle from the video frame with coordinates [x1,y1,x2,y2], but I'm struggling to do so using the TFJS function tf.stridedSlice(), because I can't figure out how the function's parameters work.
For example, the video frame tensor has shape [480,640,3], and I'd like to cut out a rectangle with shape [270,202,3], of which I know the upper-left (x1,y1) and bottom-right (x2,y2) coordinates. How can I achieve this in some manner like:
tensorImg = tf.browser.fromPixels(videoFrame);
tensorCropped = tf.stridedSlice(tensorImg,[x1,y1],[x1+x2,y1+y2]); ???
Thanks.
This should work:
const cropBox = [[0.15, 0.15, 0.85, 0.85]]; // top,left,bottom,right in range 0..1 (not in pixel range)
const outputSize = [200, 200]; // how large we want output to be
const resize = tf.image.cropAndResize(inputTensor, cropBox, [0], outputSize);
Note that cropBox is an array of arrays, and the third parameter ([0]) says which box entry applies to which image in the input batch.
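To tie it back to the pixel coordinates in the question, here is an untested sketch; it assumes x1,y1,x2,y2 are pixel coordinates and videoFrame is the webcam frame (note cropAndResize wants a 4D float batch and normalized box coordinates):
const tensorImg = tf.browser.fromPixels(videoFrame); // shape [480, 640, 3]
const [frameH, frameW] = tensorImg.shape;
const batched = tensorImg.toFloat().expandDims(0);   // shape [1, 480, 640, 3]
const cropBox = [[y1 / frameH, x1 / frameW, y2 / frameH, x2 / frameW]]; // normalized [top, left, bottom, right]
const outputSize = [y2 - y1, x2 - x1];               // keep the crop at its original size
const cropped = tf.image.cropAndResize(batched, cropBox, [0], outputSize);
const tensorCropped = cropped.squeeze();             // back to [height, width, 3]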
I am trying to create a 16-bit heightmap PNG from a 32-bit RGB-encoded height value PNG.
According to Mapbox, I can decode their PNG pixel values to height values in meters with this formula: height = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1)
I am new to understanding this type of information and have tried to figure it out myself, but I need some help.
I found a library called image-js, whose docs are here. Note I am using this in the browser, not with Node.
I think it has the abilities that I need. However, I am really not sure how to get the data from the original image and create a new 16-bit PNG with the new height values calculated from the above formula.
Looking in the docs under Image, I can set many properties, such as bitDepth, which I can set to 16.
getPixelsArray() will give me an array of pixel values in the form of [r, g, b]. I think I can then run each pixel through the above formula to get the height. After that, how do I turn this height data back into a 16-bit grayscale heightmap PNG?
Thanks!
Example images.
32-bit RGB-encoded height value PNG
16-bit heightmap PNG. You have to zoom in to see the changes.
Update:
Thanks to traktor for the suggestion. I wanted to add some information to my question.
I don't necessarily need to use image-js. My main goal is to get a 16-bit heightmap from a 32-bit RGB PNG, so any other browser-side JavaScript method would be fine. I am also using the browser-side createWritable() stream to write the file to disk.
I have a method adapted from this nice module https://github.com/colkassad/terrain-rgb-height, however I cannot get the browser-side version of pngjs to work correctly. I think there is a difference between browser-side stream readers/writers and Node readers/writers that makes it not work.
Thanks!
Looking through the source code of the image-js package on GitHub turns up that the options object used in calls to the library is verified in the source file kind.js, which enumerates the supported options properties in the assignment statement:
const { components, alpha, bitDepth, colorModel } = definition;
However, many options are defaulted based on the kind option property, which itself defaults to "RGBA".
Knowing more about the option properties allows using them (I couldn't find documentation of the options outside of the code).
I would suggest:
Create an image-js image from the 32-bit encoded height PNG. Omit the kind property to use the default 32-bit PNG pixel model of 3 color channels plus one alpha channel.
Convert the image data (held as a typed array in the image object's data property) to a single-dimensional array of height values using the Mapbox conversion algorithm. The layout of the image.data typed array appears to be the same as that used for ImageData.
Create a new image-js image using
new Image(width, height, decodedHeightArray, {kind: "GREY", bitDepth: 16})
where decodedHeightArray is the array prepared in the previous step.
Just adding my code to what traktor's answer provided.
Thank you very much traktor, this worked perfectly. For anyone who wants the code:
I am using image-js to encode and decode the image.
import { Image } from 'image-js'; // image-js loaded as an ES module in the browser

let file_handleSixteen = await dir_handle.getFileHandle(sixteen_file_name, {create: true});
let writableSixteen = await file_handleSixteen.createWritable();

async function convert16(arrayBuff) {
    let image = await Image.load(arrayBuff);
    let width = image.width;
    let height = image.height;
    let decodedHeightArray = [];
    let pixelsArray = image.getPixelsArray();
    for (const pixel of pixelsArray) {
        // Each pixel is [r, g, b]; decode it to a height in meters.
        let r = pixel[0];
        let g = pixel[1];
        let b = pixel[2];
        let heightValue = getHeightFromRgb(r, g, b);
        decodedHeightArray.push(heightValue);
    }
    // Build a 16-bit grayscale image from the decoded height values.
    let newImage = new Image(width, height, decodedHeightArray, {kind: "GREY", bitDepth: 16});
    return newImage.toBlob();
}

function getHeightFromRgb(r, g, b) {
    // Mapbox terrain-RGB decoding formula.
    return -10000 + ((r * 256 * 256 + g * 256 + b) * 0.1);
}

const file = await file_handleRgb.getFile();
let imageBuffer = await file.arrayBuffer();
let convertedArray = await convert16(imageBuffer);
await writableSixteen.write(convertedArray);
await writableSixteen.close();
I am also using the browser Streams API to write the file to disk. Note that the file stream's writable.write() only accepts certain value types, one of them being a Blob, which is why I converted the image to a Blob before passing it to the write method.
I am doing a DICOM project using the XTK library. Now I need to create a list of thumbnails from the input DICOM files (the output images could be PNG or JPG).
During the rendering process, XTK provides an array of pixel data as a Uint16Array for each DICOM file. But I have no idea how to convert that pixel data onto a canvas.
I searched for related articles and questions but found nothing usable.
Nowadays, ImageData has a constructor that you can call directly, and the result can be put on a 2D context easily.
What it expects as arguments is a Uint8ClampedArray, a width, and a height.
So from a Uint16Array representing RGBA pixel data, you'd just have to do:
var data = your_uint16array;
var u8 = new Uint8ClampedArray(data.buffer); // reinterpret the same bytes as uint8
var img = new ImageData(u8, width, height);
ctx.putImageData(img, 0, 0);
But according to your screenshot, what you actually have is an Array of Uint16Arrays, so you will probably have to merge all of these Uint16Arrays into a single one first.
Also note that a Uint16Array is a weird view over RGBA pixel data, since each element covers only two of the four channels; a Uint32Array is more conventional, with one element per pixel (0xFFFF vs 0xFFFFFFFF).
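The merge step could look something like this untested sketch, where chunks is the Array of Uint16Arrays and width/height describe the final image:
// Concatenate all chunks into one contiguous Uint16Array.
const totalLength = chunks.reduce((sum, c) => sum + c.length, 0);
const merged = new Uint16Array(totalLength);
let offset = 0;
for (const chunk of chunks) {
    merged.set(chunk, offset);
    offset += chunk.length;
}
// Reinterpret the merged buffer as bytes and draw it.
const u8 = new Uint8ClampedArray(merged.buffer);
ctx.putImageData(new ImageData(u8, width, height), 0, 0);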
Suppose that I have a 900x900 HTML5 Canvas element.
I have a function called computeRow that accepts, as a parameter, the number of a row on the grid and returns an array of 900 numbers, each between 0 and 200. There is an array called colors that contains strings like rgb(0,20,20), for example.
Basically, what I'm saying is that I have a function that tells pixel-by-pixel, what color each pixel in a given row on the canvas is supposed to be. Running this function many times, I can compute a color for every pixel on the canvas.
The process of running computeRow 900 times takes about 0.5 seconds.
However, drawing the image takes much longer than that.
What I've done is write a function called drawRow that takes an array of 900 numbers as input and draws them on the canvas. drawRow takes far longer to run than computeRow! How can I fix this?
drawRow is dead simple. It looks like this:
function drawRow(rowNumber, result /* array */) {
var plot, context, columnNumber, color;
plot = document.getElementById('plot');
context = plot.getContext('2d');
// Iterate over the results for each column in the row, coloring a single pixel on
// the canvas the correct color for each one.
for(columnNumber = 0; columnNumber < width; columnNumber++) {
color = colors[result[columnNumber]];
context.fillStyle = color;
context.fillRect(columnNumber, rowNumber, 1, 1);
}
}
I'm not sure exactly what you are trying to do, so I apologize if I am wrong.
If you are trying to write a color to each pixel on the canvas, this is how you would do it:
var ctx = document.getElementById('plot').getContext('2d');
var imgdata = ctx.getImageData(0,0, 640, 480);
var imgdatalen = imgdata.data.length;
for(var i=0;i<imgdatalen/4;i++){ //iterate over every pixel in the canvas
imgdata.data[4*i] = 255; // RED (0-255)
imgdata.data[4*i+1] = 0; // GREEN (0-255)
imgdata.data[4*i+2] = 0; // BLUE (0-255)
imgdata.data[4*i+3] = 255; // ALPHA (0-255)
}
ctx.putImageData(imgdata,0,0);
This is a lot faster than drawing a rectangle for every pixel. The only thing you would need to do is separate your colors into r/g/b/a values.
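For example, a small helper along these lines (assuming the strings always look like "rgb(r,g,b)") could convert the strings up front so the pixel loop never has to touch them:
function parseRgb(str) {
    // "rgb(0,20,20)" -> [0, 20, 20]
    return str.match(/\d+/g).map(Number);
}
const parsedColors = colors.map(parseRgb); // do this once, before the drawing loop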
If you read the color values as strings from an array for each pixel, it does not really matter what technique you use, as the bottleneck is that part right there.
For each pixel the cost is split across (roughly) these steps:
Look up the array entry
Get the string
Pass the string to fillStyle
Parse the string (internally) into a color value
Draw the single pixel
These are very costly operations performance-wise. To make this more efficient you need to convert that color array into something other than an array of strings ahead of the drawing operations.
You can do this several ways:
If the array comes from a server, try to format the data as a blob / typed array before sending it. This way you can copy the content of the returned array almost as-is to the canvas's pixel buffer.
Use a web worker to parse the array and pass it back as a transferable object, which you then copy into the canvas's buffer - or do it the other way around: transfer the pixel buffer to the worker, fill it there, and return it.
Sort the array by color value and update the pixels in color groups. This way you can use fillStyle per group, or calculate each color as a Uint32 value which you copy to the canvas using a Uint32 buffer view. This does not work well if the colors are very spread out, but works fine if they represent a small palette.
If you're stuck with the format of the colors, then the second option is what I would primarily recommend, depending on the size. It makes your code asynchronous, so this is an aspect you need to deal with as well (i.e. callbacks when operations are done).
You can of course just parse the array on the same thread and find a way to camouflage the delay a bit for the user in case it is noticeable (900x900 shouldn't be that big of a deal even for a slower computer).
If you convert the array, convert it into unsigned 32-bit values and store the result in a typed array. This way you can fill the canvas pixel buffer using Uint32 writes, which is much faster than a byte-per-byte approach.
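As a rough sketch of that idea (untested, and assuming the "rgb(r,g,b)" strings have already been parsed into numeric [r, g, b] triples as parsedColors):
const ctx = document.getElementById('plot').getContext('2d');
const img = ctx.createImageData(900, 900);
const buf32 = new Uint32Array(img.data.buffer); // 32-bit view over the pixel bytes
// Pack each palette entry once; this assumes a little-endian machine (ABGR byte order).
const palette = new Uint32Array(parsedColors.length);
parsedColors.forEach(([r, g, b], i) => {
    palette[i] = (255 << 24) | (b << 16) | (g << 8) | r;
});
for (let row = 0; row < 900; row++) {
    const result = computeRow(row);
    for (let col = 0; col < 900; col++) {
        buf32[row * 900 + col] = palette[result[col]]; // one write per pixel, no string parsing
    }
}
ctx.putImageData(img, 0, 0);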
fillRect is meant to be used for just that - filling an area with a single color, not plotting pixel by pixel. If you use it pixel by pixel, it is bound to be slower, as you are CPU bound; you can verify this by observing the CPU load in these cases. The code will become more performant if:
A separate image is created with the required image data filled in. You can use a worker thread to fill this image in the background. An example of using worker threads is available in the blog post at http://gpupowered.org/node/11
Then, blit the image into the 2d context you want using context.drawImage(image, dx, dy), as in the sketch below.
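An untested sketch of that two-step approach, with the worker part elided:
// 1. Fill a separate offscreen canvas once.
const off = document.createElement('canvas');
off.width = off.height = 900;
const offCtx = off.getContext('2d');
const img = offCtx.createImageData(900, 900);
// ... fill img.data here, e.g. in a worker as the linked post shows ...
offCtx.putImageData(img, 0, 0);
// 2. Blit it into the visible context in a single call.
document.getElementById('plot').getContext('2d').drawImage(off, 0, 0);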
I want to know if I've understood this correctly.
I loop over my map and load sprites onto it.
I decided to store the pixel information in an array so that when I click with my mouse, I can check whether the click is within a pixel array's range and get the id related to it (effectively being pixel-accurate in detecting which object was clicked).
This is my thinking process:
I draw the sprite:
ctx.drawImage(castle[id], abposx, abposy - (imgheight/2));
myImageData[sdata[i][j][1]] =
ctx.getImageData(abposx, abposy, castle[id].width, castle[id].height);
Then, somehow, on left click, check if the mouse x and mouse y are within the range of the arrays and return the value of myImageData?
Or have I misunderstood what getImageData is about?
getImageData gives you all of the pixel data for an image. Basically, you only need getImageData if you are doing some sort of pixel manipulation with the image, like changing its hue/color or applying a filter, or if you need specific data, such as the r/g/b or alpha values. To check for pixel-perfect collisions you can do something like the following:
var imageData = ctx.getImageData(x, y, 1, 1);
if(imageData.data[3] !== 0){
// you have a collision!
}
imageData.data holds the pixel's channel values: indexes 0-2 are the color values r/g/b, and index 3 is the alpha value. So we assume that if the alpha is 0, it must be a transparent portion. Also note that in the example and fiddle I am grabbing the data from the canvas itself, so if there were a non-transparent image behind it, it would count as not transparent. The best way to handle many overlapping images is to keep a copy of each image by itself offscreen somewhere and translate the click coordinates into that image's local position; a sketch of this follows below. Here's a good MDN article explaining getImageData as well.
Live Demo
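That offscreen-copy approach could look something like this untested sketch, assuming each sprite object keeps its own offscreen canvas plus its on-screen position:
function spriteHit(sprite, mouseX, mouseY) {
    // Translate canvas coordinates into the sprite's local space.
    const localX = mouseX - sprite.x;
    const localY = mouseY - sprite.y;
    if (localX < 0 || localY < 0 ||
        localX >= sprite.canvas.width || localY >= sprite.canvas.height) {
        return false; // outside the sprite's bounding box
    }
    // Sample one pixel from the sprite's own offscreen canvas.
    const pixel = sprite.canvas.getContext('2d')
        .getImageData(localX, localY, 1, 1).data;
    return pixel[3] !== 0; // opaque pixel means a pixel-perfect hit
}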