Getting font metrics in JavaScript?

I'm currently working on a JavaScript project that uses the HTML5 canvas as a rendering target. In order for my code to play nicely with the (rigidly specified) interfaces I've been provided, I need to be able to take a font and extract the ascent and descent heights of that font. This will allow clients to more accurately position the text. I'm aware that I can change where the text draws by setting the textBaseline attribute, but I actually need the numeric values describing these different heights. Is there a simple way to do this? If not, is there a (hopefully lightweight) library that can handle it for me? Most of the proposed solutions I've found are heavyweight font rendering libraries, which seems like massive overkill for this problem.

This article on HTML5 Typographic Metrics discusses the problem and a partial solution using CSS, although the integer rounding of offsetTop etc. is a problem (you can potentially use getBoundingClientRect() to get proper floating-point values in some browsers).

The short answer is that there is no built-in way and you are sort of on your own.
Because of this, most people simply estimate the metrics. Some people pre-calculate all the values, which isn't too much work provided you are using only a few fonts.
The third way is to create a canvas in memory, draw some letters (say, a Q and an O), and programmatically determine the ascent and descent using per-pixel collision. It's a pain and can be slow depending on the number of fonts and how accurate you want to be, but it is the most accurate way of going about it if you do not want to pre-compute the values.
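Below is a minimal sketch of that per-pixel approach, assuming the font fits inside the offscreen canvas dimensions used here; the function name and sample letters are illustrative, not any standard API.

function measureFontHeights(font, sample) {
  // Letters with both an ascender and a descender give the full vertical extent.
  sample = sample || 'Qq';
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');
  ctx.font = font;
  canvas.width = Math.max(1, Math.ceil(ctx.measureText(sample).width));
  canvas.height = 200;                 // assumed tall enough for the font size in question
  ctx.font = font;                     // resizing the canvas resets the context state
  ctx.textBaseline = 'alphabetic';
  var baseline = 100;
  ctx.fillText(sample, 0, baseline);

  var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  var top = -1, bottom = -1;
  // Scan rows for the first and last row containing any non-transparent pixel.
  for (var y = 0; y < canvas.height; y++) {
    for (var x = 0; x < canvas.width; x++) {
      if (data[(y * canvas.width + x) * 4 + 3] !== 0) {  // alpha channel
        if (top === -1) top = y;
        bottom = y;
        break;                          // this row is inked; move on to the next row
      }
    }
  }
  return { ascent: baseline - top, descent: bottom - baseline };
}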

Just as a reference here:
The width of the text can be measured with:
ctx.measureText(txt).width
http://www.w3schools.com/tags/canvas_measuretext.asp

Related

Dynamic pattern in javascript

I am curious where to start in making something similar to HERE, as I cannot find any information about it. It may be fairly simple, and I'm sorry if it is.
What I am hoping to replicate is the colour grid that is generated based on the colours and sizes of the lines. I am looking to replicate the functionality of the application whereby the user selects a line, changes its width, and the image is then recalculated. I have been looking around but cannot find information about how to replicate it. I may be searching for the wrong thing, as JavaScript is not my strongest language.
I know of a roundabout way to do it with svg but where would I start for javascript/jquery?
Well, SVG would involve JavaScript as well, wouldn't it? You're just looking for different ways to display an image. None of them is native to JavaScript, which is just a programming language; you'd have to consider which API to use:
There's nothing wrong with SVG! It even seems to be the easiest solution, maybe wrapping the DOM code in some nice drawing library.
It has been demonstrated that such a thing is possible with CSS3 background patterns, although I would consider this rather unusable.
Use the <canvas> element! This would be the most genuine HTML5 approach, and even though the API is rather simple, there are some mighty libraries built on it.

Finding a polygonal approximation of a Closed Path

I'd like to be able to find the best fitting polygonal approximation of a closed path (could be any path as they're being pulled out of images) but am having issues with how to approach coding an algorithm to find it.
I can think of a naive approach: every x pixels along the path, choose the best-fit line for those pixels, then brute-force different starting offsets and lengths and find the combination that minimizes the least-squares error with the minimum number of lines.
There's got to be something more elegant. Anyone know of anything? Also (cringe), this is going to be implemented in JavaScript unless I get really desperate, so nice libraries that do things for you are pretty much ruled out (OpenCV has a polygon fitter, for instance).
D3.js has some adaptive resampling code that you might be able to use. There's also an illustrated description of the algorithm used (Visvalingam's algorithm).
The Ramer–Douglas–Peucker algorithm seems appropriate here, and is simple to implement.
Note that the acceptable error is an input to this algorithm, so if you have a target number of lines you can binary-search using the error parameter to hit the target.
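For reference, here is a rough sketch of Ramer–Douglas–Peucker in JavaScript for an open run of {x, y} points; the names are illustrative, and for a closed path you would typically split it at two well-separated points and simplify each half.

function rdp(points, epsilon) {
  if (points.length < 3) return points.slice();
  var first = points[0], last = points[points.length - 1];
  var maxDist = 0, index = 0;
  // Find the point farthest from the line joining the two endpoints.
  for (var i = 1; i < points.length - 1; i++) {
    var d = perpendicularDistance(points[i], first, last);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  // Everything lies within tolerance: the single segment first-last is a good enough fit.
  if (maxDist <= epsilon) return [first, last];
  // Otherwise recurse on the two halves and join them, dropping the duplicated split point.
  var left = rdp(points.slice(0, index + 1), epsilon);
  var right = rdp(points.slice(index), epsilon);
  return left.slice(0, -1).concat(right);
}

function perpendicularDistance(p, a, b) {
  var dx = b.x - a.x, dy = b.y - a.y;
  var len = Math.sqrt(dx * dx + dy * dy);
  if (len === 0) return Math.sqrt((p.x - a.x) * (p.x - a.x) + (p.y - a.y) * (p.y - a.y));
  // Distance from p to the infinite line through a and b.
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}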

Javascript library for resampling an array?

I'm trying to visualize some data on an HTML canvas and I'm facing an issue similar to this one. That is, the size of my data doesn't exactly match the size of my canvas.
In one instance I'd like to plot a 1024 point signal on a canvas that's 100px wide. (E.g., an audio waveform.)
In another instance I'd like to show a 1024 by 5000 point matrix on a canvas that's 100 px high by 500 px wide. (E.g., an audio spectrogram.)
In both cases, I'll need to resample my data so that it fits on the canvas. Does anyone know of a library/toolkit/function in Javascript that can do this?
** EDIT **
I'm aware that there are many techniques I could use here. One possibility is to simply discard or duplicate data points. This would do in a pinch, but discarding/duplication is known to produce results that tend to look "jagged" or "blocky" (see here and here). I'd prefer to use a slightly more sophisticated algorithm that outputs smoother images such as Lanczos, bilinear or bicubic resampling. Any of these would meet my needs.
My question isn't about which algorithm to use, though, it's about whether any of them have been implemented in open-source javascript libraries. Surprisingly, I haven't been able to find much in JS. Coding my own resampling function is obviously an option, but I wanted to check with the SO community first to make sure I wasn't re-inventing the wheel.
(This answer gives a code listing that's very close to what I want, except that it operates directly on the canvas objects rather than the data arrays, and it forces the aspect ratios of the input and output to be the same. If nothing else is available, I can definitely work with this, but I was hoping for a solution that's a bit more general and flexible, along the lines of Matlab's resample.)
Use the canvas scale:
ctx.scale(xscale, yscale);
You can determine the scaling by calculating the ratio between your canvas and the data:
ctx.scale(canvas_x / data_x, canvas_y / data_y);
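As a hedged illustration of that idea, the sketch below lets the canvas transform do the squeezing at draw time instead of resampling the array yourself; the 'plot' canvas id, dataWidth and the signal array are assumptions for the example.

var canvas = document.getElementById('plot');   // hypothetical canvas element
var ctx = canvas.getContext('2d');
var dataWidth = 1024;                           // e.g. a 1024-sample signal

ctx.save();
ctx.scale(canvas.width / dataWidth, 1);         // squeeze 1024 samples into the canvas width
ctx.beginPath();
for (var x = 0; x < dataWidth; x++) {
  // signal is assumed to hold values in [-1, 1]; map them onto the canvas height
  var y = (1 - signal[x]) * canvas.height / 2;
  if (x === 0) ctx.moveTo(x, y); else ctx.lineTo(x, y);
}
ctx.restore();                                  // restore before stroking so the line width isn't scaled
ctx.stroke();

Note that this just lets the browser rasterize the overdrawn polyline; it is not a true Lanczos/bilinear resample of the data, but it is often good enough for plotting.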

Web worker and scaling images

I need to scale images in array form in a Web Worker. If I were outside a web worker I could use a canvas and drawImage to copy certain parts of an image or scale it.
It looks like I can't use a canvas in a web worker, so what can I do? Is there any pure JavaScript library that can help me?
Thanks a lot in advance.
Scaling can be done in various ways, but they all boil down to either removing or creating pixels in the image. Since images are essentially matrices (representable as arrays) of pixel values, you can look at scaling up an image as enlarging that array and filling in the blanks, and scaling down an image as shrinking the array by leaving values out.
That being said, it is typically not that difficult to write your own scale function in JavaScript that works on arrays. Since I understand that you already have the images in the form of a JavaScript array, you can pass that array in a message to the Web Worker, scale it with your scale function and send the scaled array back to the main thread.
In terms of representation I would advise you to use a Uint8ClampedArray, which was designed for RGBA-encoded (colour with alpha channel) images and is more efficient than a normal JavaScript array. You can also easily send Uint8ClampedArray objects in messages to your Web Worker, so that won't be a problem. Another benefit is that a Uint8ClampedArray is what the ImageData datatype of the Canvas API uses (having replaced CanvasPixelArray). This means that it is quite easy to draw your scaled image back on a canvas (if that is what you wanted), simply by getting the current ImageData of the canvas's 2D context using ctx.getImageData() and changing its data attribute to your scaled Uint8ClampedArray object.
By the way, if you don't have your images as arrays yet you can use the same method. First draw the image on the canvas and then use the data attribute of the current ImageData object to retrieve the image in a Uint8ClampedArray.
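For instance, a minimal sketch of that step (done on the main thread, since the canvas is only available there); img is assumed to be an already loaded Image:

var canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
var ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);
// pixels is a Uint8ClampedArray of RGBA values, ready to post to the worker
var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;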
Regarding methods of scaling an image up, there are basically two components you need to implement. The first is to distribute the known pixels (i.e. the pixels from the image you are scaling) over the larger new array you have created. An obvious way is to divide all the pixels evenly over the space. For example, if you are making an image twice as wide, you simply skip a position after each pixel, leaving blanks in between.
The second component is to fill in those blanks, which can be slightly less straightforward. There are, however, several methods that are fairly easy. (On the other hand, if you have some knowledge of computer vision or image processing you might want to look at more advanced methods.) An easy and somewhat obvious one is to interpolate each unknown pixel position from its nearest neighbour (i.e. the closest known pixel) by duplicating that pixel's colour. This typically produces the effect of bigger pixels (larger blocks of the same colour) when you scale the image up too much. Instead of duplicating the colour of the closest pixel, you can also take the average of several known pixels nearby, possibly weighted so that closer pixels count more in the average than pixels farther away. Other methods include blurring the image using Gaussians. If you want to find out which method is best for your application, look at some pages about image interpolation. Of course, remember that scaling up always means filling in detail that isn't really there, which will always look bad if you do it too much.
As far as scaling down is concerned, you typically just remove pixels by transferring only a selection of them from the current array to the smaller array. For example, to make an image half as wide, you roughly iterate through the current array in steps of 2 (this depends a bit on the dimensions of the image, even or odd, and on the representation you are using). There are methods that do this better by removing the pixels that can most easily be spared, but I don't know enough about them.
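To make the nearest-neighbour idea concrete, here is a minimal sketch that works on RGBA data in a Uint8ClampedArray and covers both enlarging and shrinking; it is not tuned for speed and the function name is just illustrative. You could run exactly this inside the worker and post the result back.

function nearestNeighbourResize(src, srcW, srcH, dstW, dstH) {
  var dst = new Uint8ClampedArray(dstW * dstH * 4);
  for (var y = 0; y < dstH; y++) {
    // Map each destination row/column back to the closest source row/column.
    var sy = Math.min(srcH - 1, Math.floor(y * srcH / dstH));
    for (var x = 0; x < dstW; x++) {
      var sx = Math.min(srcW - 1, Math.floor(x * srcW / dstW));
      var si = (sy * srcW + sx) * 4;
      var di = (y * dstW + x) * 4;
      dst[di]     = src[si];       // R
      dst[di + 1] = src[si + 1];   // G
      dst[di + 2] = src[si + 2];   // B
      dst[di + 3] = src[si + 3];   // A
    }
  }
  return dst;
}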
By the way, all of this is practically unrelated to web workers. You would do it in exactly the same way if you wanted to scale images in JavaScript on the main thread. Or in any other language for that matter. Web Workers are however a very nice way to do these calculations on a separate thread instead of on the UI thread, which means that the website itself does not seem unresponsive. However, like you said, everything that involves the canvas element needs to be done on the main thread, but scaling arrays can be done anywhere.
Also, I'm sure there are JavaScript libraries that can do this for you and depending on their methods you can also load them in your Web Worker using importScripts. But I would say that in this case it might just be easier and a lot more fun to try to write it yourself and make it tailor-made for your purpose.
And depending on how advanced your programming skills are and the speed at which you need to scale, you can always try to do this on the GPU instead of the CPU using WebGL, but that seems like overkill in this case. You could also try to chop your image into several pieces and scale the separate parts on several Web Workers, making it multi-threaded, although it is certainly not trivial to combine the parts afterwards. Perhaps multi-threading makes more sense when you have a lot of images that need to be scaled on the client side.
It all really depends on your application, the images and your own skills and desires.
Anyway, I hope that roughly answers your question.
I feel some specifics on mslatour's answer are needed, since I just spent 6 hours trying to figure out how to "…simply… change its data attribute to your scaled Uint8ClampedArray object". To do this:
① Send your array back from the web-worker. Use the form:
self.postMessage(bufferToReturn, [bufferToReturn]);
to pass your buffer to and from the web worker without making a copy of it, if you don't want to. (It's faster this way.) (There is some MDN documentation, but I can't link to it as I'm out of rep. Sorry.) Anyway, you can also put the first bufferToReturn inside lists or maps, like this:
self.postMessage({buffer:bufferToReturn, width:500, height:500}, [bufferToReturn]);
You use something like
webWorker.addEventListener('message', function(event) {your code here})
to listen for a posted message. (In this case, the messages being posted come from the web worker and the listener lives in your normal JS code. It works the same the other way around; just switch the 'self' and 'webWorker' variables.)
② In your browser-side Javascript (as opposed to worker-side), you can use imageData.data.set() to "simply" change the data attribute and put it back in the canvas.
var imageData = context2d.createImageData(width, height);
imageData.data.set(new Uint8ClampedArray(bufferToReturn));
context2d.putImageData(imageData, x_offset, y_offset);
I would like to thank hacks.mozilla.org for alerting me to the existence of the data.set() method.
p.s. I don't know of any libraries to help with this… yet. Sorry.
I have yet to test it out myself, but there is a pure JS library that might be of use here:
https://github.com/taisel/JS-Image-Resizer

Approach comparison: EaselJS vs Multiple Canvases vs Hidden Canvas for interactiveness

1.) I found a canvas API called EaselJS; it does an amazing job of creating a display list for each element you draw. The elements essentially become individually recognizable objects on the canvas (on one single canvas).
2.) Then I saw a tutorial on http://simonsarris.com/ that does drag and drop; it makes use of a hidden-canvas concept for selection.
3.) And the third approach, a working one, is http://www.lucidchart.com/, which is exactly what I'm trying to achieve: it basically puts every single shape on a different canvas and positions them individually. There is a huge number of canvases.
The question is: what is the easiest way to achieve an interactive network diagram as seen on http://www.lucidchart.com/?
A side question is: is it better to get text input by positioning an input field over the canvas, or by using multiple canvases (one for rendering text) as in LucidChart?
I'm the person who made the tutorials in 2. There's a lot going on here, so I'll try to explain a bit.
I use a hidden canvas for selection simply because it is easy to learn and will work for ANY kind of object (text, complex paths, rectangles, semi-transparent images). In the real diagramming library that I am writing, I don't do anything of the sort; instead I use a lot of math to determine selection. The hidden-canvas method is fine for fewer than 1000 objects, but eventually performance starts to suffer.
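For readers who haven't seen it, the hidden-canvas method boils down to something like the sketch below; the objects array, the drawShape method and the canvas-relative click coordinates are assumptions about your own code, and in practice you also have to watch out for antialiased edges returning blended colours.

function pickObject(hiddenCtx, objects, clickX, clickY) {
  hiddenCtx.clearRect(0, 0, hiddenCtx.canvas.width, hiddenCtx.canvas.height);
  objects.forEach(function (obj, i) {
    // Encode the object's index in a unique flat colour (supports up to 2^24 objects).
    hiddenCtx.fillStyle = 'rgb(' + ((i >> 16) & 255) + ',' +
                                    ((i >> 8) & 255) + ',' + (i & 255) + ')';
    obj.drawShape(hiddenCtx);            // assumed: fills the object's outline with fillStyle
  });
  // Read back the single pixel under the click (clickX/clickY are canvas-relative).
  var px = hiddenCtx.getImageData(clickX, clickY, 1, 1).data;
  if (px[3] === 0) return null;          // clicked empty space
  return objects[(px[0] << 16) | (px[1] << 8) | px[2]];
}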
Lucidchart actually uses more than one canvas per object. And it doesn't just keep them in memory; they are all there in the DOM. This is an organizational choice on their part, and a pretty weird one in my opinion. SVG might have made their work a lot easier if that's how they are going to do things, as it seems they are doing a lot of footwork just to emulate how SVG works. There aren't too many good reasons to have so many canvases in the DOM if you can avoid it.
It seems to me that the advantage of doing it their way is that if they have 10,000 objects, a click only has to test the one (small) canvas that was clicked, instead of the entire scene. So they did it to make their selection code a little shorter. I'd much rather have only one canvas in the DOM; their way seems needlessly messy. The point of canvas is to have a fast rendering surface instead of a thousand divs representing objects, but they just made a thousand canvases.
Anyway, to answer your question, the "easiest" way to achieve interactive network diagrams like lucidchart is to either use a library or use SVG (or an SVG library). Unfortunately there aren't too many yet. Getting all the functionality yourself in Canvas is hard but certainly doable, and will afford you better performance than SVG, especially if you plan on having more than 5,000 objects in your diagrams. Starting with EaselJS for now isn't too bad of an idea, though you'll probably find yourself modifying more and more of it as you get deeper into your project.
I am making one such interactive canvas diagramming library for Northwoods Software, but it won't be done for a few more months.
To answer the question that is sort of in your title: the fastest method of doing interactivity such as hit-testing is using math. Any high-performance canvas library with the features to support a lot of different types of objects will end up implementing functions like getNearestIntersectionPoint, getIntersectionsOnRect, pathContainsPoint, and so on.
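As a small, hedged illustration of what such math-based tests look like (the function names here are generic, not from any particular library):

function rectContainsPoint(r, p) {
  return p.x >= r.x && p.x <= r.x + r.width &&
         p.y >= r.y && p.y <= r.y + r.height;
}

function segmentDistance(p, a, b) {
  // Project p onto the segment a-b and clamp to its ends.
  var dx = b.x - a.x, dy = b.y - a.y;
  var t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / (dx * dx + dy * dy || 1);
  t = Math.max(0, Math.min(1, t));
  var cx = a.x + t * dx, cy = a.y + t * dy;
  return Math.sqrt((p.x - cx) * (p.x - cx) + (p.y - cy) * (p.y - cy));
}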
As for your side question, it is my opinion that creating a text field on top of the canvas when the user wants to change text, and destroying it when the user is done entering text, is the most intuitive-feeling way to get text input. Of course you need to make sure the field is positioned correctly over the text you are editing, and that the font and font size are the same, for a consistent feel.
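A rough sketch of that overlay approach, assuming textX/textY are canvas-local coordinates of the text being edited and commitText is your own callback that redraws the canvas text:

function beginTextEdit(canvas, textX, textY, currentText, font, commitText) {
  var rect = canvas.getBoundingClientRect();
  var input = document.createElement('input');
  input.type = 'text';
  input.value = currentText;
  input.style.position = 'absolute';
  input.style.left = (rect.left + window.scrollX + textX) + 'px';
  input.style.top = (rect.top + window.scrollY + textY) + 'px';
  input.style.font = font;               // match the canvas font for a consistent feel
  document.body.appendChild(input);
  input.focus();
  input.addEventListener('blur', function () {
    commitText(input.value);             // caller redraws the text on the canvas
    document.body.removeChild(input);
  });
}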
Best of luck with your project. Let me know how it goes.
Use SVG (and maybe a library such as Raphael)!
Then any element can receive mouse events.
