If I draw the character "二" with my finger, the Chinese version of Google Keyboard accurately recognizes it as 二. I've seen this work in multiple websites and apps, and I have always wondered how they detect it so well.
One theory comes to mind: capture the pixels of the drawn input from start to finish, then compare them pixel by pixel against the original character to see how much the two overlap. If the overlap exceeds, say, 70% of the pixels, output that character.
Pixel overlap would be a very inaccurate method. Imagine you draw your first line a bit too low but your second one perfectly: 三 would be recognised with 67% coverage over 二 with 50%. If both of your lines were just a bit off, you'd get a space instead.
One way to do it is to recognise and classify individual strokes based on the direction and bends in each stroke (not from the image, but from the pointer data collected as the user draws the stroke), then look up the sequence of strokes in a character database. You'd need a database of character strokes. I am almost certain that jisho.org, which I linked in the comments, and many other similar sites employ this method.
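To make that concrete, here is a minimal, hedged sketch of the idea in JavaScript. The 8-direction alphabet, the jitter threshold, and the tiny strokeDb lookup table are all illustrative assumptions, not how any particular site implements it:

```javascript
// Reduce one stroke (a list of pointer samples) to a coarse direction string.
// Screen coordinates: y grows downward, so positive dy points south.
function classifyStroke(points) {
  const labels = ['E', 'SE', 'S', 'SW', 'W', 'NW', 'N', 'NE'];
  const directions = [];
  for (let i = 1; i < points.length; i++) {
    const dx = points[i].x - points[i - 1].x;
    const dy = points[i].y - points[i - 1].y;
    if (Math.hypot(dx, dy) < 3) continue; // skip jitter (threshold is a guess)
    // Quantize the segment angle into 8 compass sectors
    const sector = Math.round(Math.atan2(dy, dx) / (Math.PI / 4)) & 7;
    const label = labels[sector];
    if (directions[directions.length - 1] !== label) directions.push(label);
  }
  return directions.join('-'); // a clean horizontal stroke yields "E"
}

// Hypothetical database mapping stroke sequences to characters.
const strokeDb = { 'E|E': '二', 'E|E|E': '三' };

function lookupCharacter(strokes) { // strokes: array of point arrays
  return strokeDb[strokes.map(classifyStroke).join('|')];
}
```

A real system would of course use fuzzy matching over the stroke sequence rather than an exact dictionary hit, so that slightly bent or out-of-order strokes still find the right character.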
Another good way would be to use a neural network to recognise the bitmap of the input image; this requires many training examples for every character. Unlike the previous method, this one can recognise even cursive-style input (i.e. with strokes flowing into each other), as long as the training set was exhaustive enough.
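For the bitmap route, a sketch using TensorFlow.js might look like the following. The model URL, the 64x64 input size, and the labels array are hypothetical; you would have to train and host such a model yourself:

```javascript
// A rough sketch only: the model URL, 64x64 input size, and `labels`
// array are hypothetical placeholders.
import * as tf from '@tensorflow/tfjs';

async function recognizeCharacter(canvas, labels) {
  const model = await tf.loadLayersModel('/models/hanzi/model.json'); // hypothetical
  const input = tf.tidy(() =>
    tf.browser.fromPixels(canvas, 1) // grayscale bitmap of the drawing
      .resizeBilinear([64, 64])      // match the model's expected input
      .toFloat()
      .div(255)                      // normalize to [0, 1]
      .expandDims(0));               // add a batch dimension
  const scores = model.predict(input);
  const best = (await scores.argMax(-1).data())[0];
  input.dispose();
  scores.dispose();
  return labels[best]; // e.g. '二'
}
```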
There are many more specific techniques one can find by searching for "Handwritten Kanji Recognition" or "Handwritten Chinese Character Recognition" (and the Japanese/Chinese equivalents, 手書き漢字認識 and 手写汉字识别) on Google Scholar.
I am working with the Google Maps JavaScript API. I have 793 points with some associated information in a MySQL table; this morning I added 303 points to reach that total. There are now three data "sources" by which I am color-coding the markers. Before I added the new markers this morning, there were only two data sources and all 490 points were apparently being mapped. (This apparently still holds if I select only the points matching the first two sources.)
I have confirmed that XML output exists from the PHP data-fetching sub-page for all 793 points, including the corresponding source information. The problem is that the points from the first data source no longer appear.
If I select random points and limit them to 500, all three colors appear in approximately the right proportions; but when I remove the limit, no markers from the first data source appear at all.
Weirder still, the first source's points seem to fall off when I limit the results without randomizing: the proportions of colors bias toward the second and third sources until no points from the first source remain.
I cannot find evidence of a limit on the number of markers that can be displayed using the JavaScript API (I do know I am approaching a reasonable marker display density in some neighborhoods, but that is not my concern yet).
Similar question: how to raise a marker limit in Google Maps? (Answers suggest there is no API limit)
FYI, I have not included any code because the code seems to be working correctly, except for the roll-off effect, which appears to be attributable to the API. If you think an error in my code may be producing the described behavior, I will edit my question to include additional information. Thank you.
Garbage in, garbage out: the data was bad.
Markers of each color showed up when the data was randomly selected because randomization changes the data order. Without randomization, redundant points covered the first set of points, making it appear as though there were no first-source points at all. With randomization, the redundant points randomly covered either the first or the third color rather than always the first.
I wonder: if I had posted the (working) code, would anyone have helped me figure out faster that the code wasn't the problem?
The MySQL query I used to determine the problem:

```sql
SELECT PropertyID, FormattedAddress, Latitude, Longitude, Source
FROM MapData
GROUP BY Source;
```
This produced three results, one from each source, at which point the redundancy was obvious. I will go back to the data compilation phase and correct it.
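For what it's worth, a small client-side guard would have surfaced the redundancy earlier. A minimal sketch, assuming the XML rows have already been parsed into plain objects:

```javascript
// Hypothetical row shape, parsed from the XML the PHP sub-page returns:
// { PropertyID, FormattedAddress, Latitude, Longitude, Source }
function dedupeByCoordinate(rows) {
  const seen = new Map();
  for (const row of rows) {
    const key = row.Latitude + ',' + row.Longitude;
    if (seen.has(key)) {
      console.warn('Duplicate point:', seen.get(key).PropertyID, '/', row.PropertyID);
    } else {
      seen.set(key, row);
    }
  }
  return [...seen.values()]; // create one google.maps.Marker per survivor
}
```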
I am developing a product configurator in ASP.NET C#, probably similar to http://www.vistaprint.in/vp/ns/easypath/studio.aspx?template=446855~s1_BJD_BJX&ag=True&xnav=previews&xnid=image_112&rd=1

I am stuck on these three questions.
1. Users will upload a clear image with simple (no gradients) text on it. Can I change this text on the image into embroidery-stitched text? The text can be a straight line, curved, or any other shape.
2. If that is not possible with text on an image, can I simply convert plain text into embroidery-stitched text (again straight, curved, or any other shape)?
3. Can this be done using some jQuery, JavaScript, or C# plugin? If so, please suggest one.

As I am new to product configurators, I have no idea where to start and could use some help.
Method 1
One way I would go about this is to have a different image for each letter a-z and A-Z, and for each digit 0-9, all with transparent backgrounds.
When the user is done typing, I would send an AJAX request to the server with the user's input, and the response would be an image of the text itself. (jQuery could be used for this purpose.)
On the server side, for each letter in the user's input I would fetch the appropriate image, e.g. the image for "a".
Using something like this approach: Combine two Images into one new Image, I would create the full text image and then send it back to the client.
On the client side you would know roughly where to put the new image on the canvas (for example, it has to be centred vertically and horizontally).
And finally, if you want to curve, manipulate, etc. the text, you can also use standard C# tools (see MSDN) and this SO answer (here).
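To tie Method 1 together, here is a hedged sketch of the client side; the /render-text endpoint is a placeholder for whatever server page stitches the per-letter images together:

```javascript
// '/render-text' is a hypothetical server page that returns the
// composed letters as one PNG with a transparent background.
function drawComposedText(canvas, userInput) {
  const ctx = canvas.getContext('2d');
  const img = new Image();
  img.onload = function () {
    // Centre the composed image on the canvas, as noted above.
    const x = (canvas.width - img.width) / 2;
    const y = (canvas.height - img.height) / 2;
    ctx.drawImage(img, x, y);
  };
  img.src = '/render-text?text=' + encodeURIComponent(userInput);
}
```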
Method 2
Another way is to create (or use) a custom font (an appropriate one can be found here and here) and render the image using it. Please check this SO question. If you need more serious text manipulation, this method is probably the more appropriate one.
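As a rough illustration of Method 2 in the browser (the font URL, family name, and canvas id are placeholders; a real stitched-look font would come from one of the links above):

```javascript
// 'Stitched' and the font URL are placeholders for a real embroidery font.
const stitched = new FontFace('Stitched', 'url(/fonts/stitched.woff2)');
stitched.load().then(function (font) {
  document.fonts.add(font);
  const canvas = document.getElementById('preview'); // hypothetical canvas
  const ctx = canvas.getContext('2d');
  ctx.font = '48px Stitched';
  ctx.fillStyle = '#22406e';
  ctx.fillText('Your text here', 20, 80);
});
```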
What I'm creating
I'm creating a JavaScript image-manipulation app (using Fabric.js and the HTML5 canvas) that builds/re-renders the canvas and lets the user place and drag objects onto it. After placing the objects, it can export the manipulated image as a data URL.
The feature that I would like to add
I'm trying to implement a reversed radial tilt-shift effect, so that certain spots in the picture can be "blurred out". What's the fastest way to achieve this with Fabric.js or plain HTML5 canvas?
This is an example of what I want to achieve
What I've tried
- Adding a Circle shape, trying to apply a Blur/Convolute filter to it, and then lowering the opacity. This didn't work in my case; I could only change the opacity attribute.
- Adding an image (from a URL) and trying the same thing as in the first point. This didn't work either; I could only add a Blur/Convolute filter, but not change the opacity.
Maybe I can finish this demo later.

As you can see, I set your image as the background. Then I loaded the same image into an image element, which I used to create a pattern for the circle.

Now, when the circle moves, I shift the pattern's offset so that the pattern lines up with the background; it looks as if you are seeing the background image, but you are actually looking at the pattern.

Now imagine using StackBlur or Fabric.js filters to blur the pattern image (just once), and you should get the effect you desire. Some trick to compensate for the scaling effect is required.

I will finish the demo later. Hope this helps.

EDIT: I have serious CORS issues here; I do not know how to make a snippet without using local images. Here is a working demo made with Fabric.js:
http://www.deltalink.it/andreab/fabric/blur.html
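For anyone who wants to skip Fabric.js entirely, the same effect can be sketched with the plain 2D canvas by clipping to the circle and repainting the image through a blur filter. ctx.filter needs a reasonably recent browser, and the 8px blur radius is an arbitrary choice:

```javascript
// img: a loaded Image; (cx, cy, radius): the spot to blur out.
function blurSpot(ctx, img, cx, cy, radius) {
  ctx.drawImage(img, 0, 0);           // sharp base image
  ctx.save();
  ctx.beginPath();
  ctx.arc(cx, cy, radius, 0, Math.PI * 2);
  ctx.clip();                         // repaint only inside the circle
  ctx.filter = 'blur(8px)';           // built-in canvas blur
  ctx.drawImage(img, 0, 0);           // blurred copy, clipped to the spot
  ctx.restore();                      // drop the filter and the clip
}
```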
I am trying to implement a system to identify/detect words of handwritten text in an image. Ultimately I need to recognize the words in the text, but I suspect that is impossible, since the images are barely readable even for me. For now, what I need is to separate out the words: the system only has to detect that there is a word, so that when the user selects an area, it selects exactly one word in the image.

My question is: is this doable using JavaScript?

Here is a sample image.
JS + canvas and a basic implementation of the Viola-Jones (face-recognition) technique would be one way. But with a manuscript like that? I think you'll get really bad results.
You first need to detect the global horizontal inclination. (By finding the inclination you can simultaneously retrieve the line height.)
Create a 100% horizontal grid runner like:

0000000000...
1111111111...
0000000000...

where 0 checks for light areas and 1 for dark areas. Let it run over your image-selection data from top to bottom, and at all inclinations (e.g. ±15° max).
A positive match occurs when the striped grid returns a contrast density above the threshold, matching its raster.

If the runner returns no match, increase its size and let it run again.

You need to account for mistakes, so store every possible positive match. After you're done with all sizes and inclinations, pick the one that produced the most matches.
Now you'll have the general horizontal inclination and the line height.
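Here is a simplified sketch of the horizontal runner; it is the plain projection-profile variant of the idea, with inclination handling omitted and thresholds that are assumptions to tune:

```javascript
// Count dark pixels per row of an ImageData object. Bands of high
// density are text lines; the gaps between bands give the line height.
function rowDensity(imageData, darkThreshold = 128) {
  const { data, width, height } = imageData;
  const density = new Array(height).fill(0);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      const lum = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
      if (lum < darkThreshold) density[y]++;
    }
  }
  return density; // scan for runs above, say, 5% of the width
}
```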
Next you need to determine the vertical letter inclination; at the same time you can retrieve the blank spaces.

Same technique: let a vertical runner run line by line (you now know the line height):
0101010
0101010
0101010
0101010
0101010
starting from the far left to the far right. No match? Change the degree and run it again.

Keep the run that collected the most matches: that gives you the letter inclination.

Finally, let it run over the same line of text and collect all the information about the light gaps between the dark areas; the wide gaps are your word boundaries.
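And a sketch of the gap collection for a single line, again ignoring inclination; the 5%-of-width cutoff for a word break is an assumption:

```javascript
// density: dark-pixel count per column within one detected line strip.
// Runs of empty columns at least minRun wide are treated as word breaks.
function findGaps(density, width, minRun = Math.round(width * 0.05)) {
  const gaps = [];
  let start = -1;
  for (let x = 0; x <= density.length; x++) {
    const blank = x < density.length && density[x] === 0;
    if (blank && start < 0) start = x;
    if (!blank && start >= 0) {
      if (x - start >= minRun) gaps.push([start, x]);
      start = -1;
    }
  }
  return gaps; // [startColumn, endColumn] pairs
}
```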
Alex Fink and I have a non-linear written language, UNLWS. Maintaining its grammar document in Google Docs is growing excessively cumbersome, however, because of how much time it takes to load each image, and because the images created for embedding in Google Docs are not really combinable and exportable in a good way.
UNLWS is primarily composed of a variety of glyphs — which are basically small pieces of vector art with specified binding points (the little blue circles in the glossary) — and which all interconnect with each other at those binding points like a graph (as in trees, not bars) using a variety of methods (mainly simple lines).
Some glyphs have internal structure as well, e.g. variable line or arc length, spline curvature, distance of segments, etc., and some have bindings that aren't drawn with lines (e.g. articles).
In general, so long as the glyphs are connected properly and follow some rules about how to connect (e.g. avoid crossings, make smooth or straight lines where possible, relax the graph), the result will be licit.
However, there are also non-graph components, such as cartouches that surround or divide portions of the text, and some cases where certain glyphs must be near each other and placed with specific orientation or distance with respect to each other.
In some cases, we'd also want to hint the graphics package about how to do the layout more aesthetically, e.g. for poetry (note the cartouches in the center and sides) or stories (note the black-dashed dividing lines of three varieties). For some simpler examples, see our scratchpad.
Ideally, we would like to be able to compose this programmatically using a reasonable graphics library (possibly d3.js, though I'm open to other suggestions), e.g. being able to:
define reusable glyphs (with variables)
let us make custom DSL(s) to specify both the form of glyphs (building up from lower-level vector "phonetics") and how they interact (for syntax)
tell the package what glyphs to connect at what binding points and how, and have it figure out how to do the necessary graph relaxation etc, with at least a reasonable-looking result
optionally hint the package on how to do the layout
add English text in various places with alignment / snapping (there is none in UNLWS itself, but it's useful for documentation)
export to a standard vector format (e.g. for printing posters or shirts), and the same to stdout via some command-line tool (e.g. for embedding in LaTeX)
What would be a good way to do this?
I've deliberately phrased this in an open-ended way, as I don't know what an appropriate approach or package set would be (or if JS / D3 is even appropriate in the first place). I would imagine that parts of the problem (e.g. graph relaxation) have been addressed by existing packages, but I'm not at all familiar with them.
(FWIW: Both of us are coders. I've used Ruby very extensively and d3.js a little bit. Alex is a mathematician, and therefore uses LaTeX and Asymptote a lot, but hasn't used JS much. We're not tied to any particular option, including Javascript.)
The force layout in d3 is probably the best place to start: that's the tool for automatic graph relaxation.
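A minimal sketch with d3 v4+ (the node and link names are illustrative): treat each binding point as a node and each connection as a link, and let the simulation relax the graph:

```javascript
// Nodes are glyph binding points; links are the connections between them.
const nodes = [{ id: 'glyphA.bp1' }, { id: 'glyphB.bp1' }]; // illustrative
const links = [{ source: 'glyphA.bp1', target: 'glyphB.bp1' }];

const simulation = d3.forceSimulation(nodes)
  .force('link', d3.forceLink(links).id(d => d.id).distance(60))
  .force('charge', d3.forceManyBody().strength(-80)) // nodes repel
  .force('center', d3.forceCenter(400, 300));        // keep it on screen

simulation.on('tick', () => {
  // Redraw each glyph at its binding point's (d.x, d.y) here,
  // e.g. by updating SVG path transforms with d3 selections.
});
```

The constraints you describe that go beyond plain graph relaxation (cartouches, fixed relative orientations, smoothness of connecting lines) would need custom forces or a post-processing pass on top of this.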