Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am trying to implement a system to identify/detect the words of handwritten text in an image. I need to recognize the words in the text, but I feel that is impossible since the images are not readable even for me. For now, all I need is to separate out the words: the system only has to figure out that there is a word, so that when the user selects an area, it selects just a single word in the image.
My question is: is this doable using JavaScript?
Here is a sample Image.
JS+Canvas and a basic implementation of the Viola-Jones face-recognition technique.
With a manuscript like that? I think you'll get really bad results.
You first need to detect the global horizontal inclination. (By finding the inclination you can simultaneously retrieve the line height.)
Create a 100% horizontal grid runner like:
0000000000...
1111111111...
0000000000...
where 0 checks for light areas and 1 for dark areas. Let it run over your image-selection data from top to bottom, and at all inclinations (i.e. ±15° max).
A positive match is when your striped grid returns a contrast density above the threshold, matching its raster.
If the runner returns no match, increase its size and let it run again.
You need to account for mistakes, so store every possible positive match. After you're done with all sizes and inclinations, just pick the one that produced the most matches.
Now you'll have the general horizontal inclination and the line height.
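The steps above can be sketched as a toy implementation. This is a rough, hypothetical sketch only: it assumes the selection has already been binarised into a 2D array of 0 (light) / 1 (dark) pixels, and it searches a coarse grid of skews, line heights, and vertical positions.

```javascript
// Slide a three-stripe mask (light/dark/light) over the image at a given
// skew and score how well the samples match the expected pattern.
function stripeScore(img, top, stripeHeight, skewPerCol) {
  const h = img.length, w = img[0].length;
  let hits = 0, total = 0;
  for (let x = 0; x < w; x++) {
    const shift = Math.round(x * skewPerCol); // the skew tilts the stripes
    for (let s = 0; s < 3; s++) {             // stripes 0 and 2 expect light, 1 dark
      const y = top + shift + s * stripeHeight;
      total++;
      if (y >= 0 && y < h && img[y][x] === (s === 1 ? 1 : 0)) hits++;
    }
  }
  return hits / total;
}

// Try every skew/height/position combination and keep the best match,
// which yields the general inclination and the line height at once.
function findLines(img) {
  let best = { score: 0, skew: 0, height: 0, top: 0 };
  for (let skew = -0.25; skew <= 0.26; skew += 0.05) { // ≈ ±14° as px per column
    for (let height = 4; height <= 40; height += 2) {
      for (let top = 0; top < img.length; top += 2) {
        const score = stripeScore(img, top, height, skew);
        if (score > best.score) best = { score, skew, height, top };
      }
    }
  }
  return best;
}
```

A real version would run on thresholded canvas `ImageData` and use finer search steps, but the store-all-candidates-and-pick-the-best logic is the same as described above.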
Now you need to define the vertical letter inclination. At the same time you can retrieve the blank spaces.
Same technique. Let a vertical runner run line by line (you know the line height):
0101010
0101010
0101010
0101010
0101010
starting from the far left to the far right. No match? Change the degree. Let it run again.
Retrieve the run that collected the most matches. You have the letter inclination.
Let it run over the same line of text and collect all the information about the light gaps between the dark areas.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
If I draw the character "二" with my fingers, Google Keyboard can accurately recognize it on its Chinese keyboard as 二. I've seen this work in multiple websites and apps, and I have always wondered how they detect it so well.
One theory comes to mind: retrieve the pixels between the beginning and end points of the drawn input, then compare pixel by pixel to see how much the original character and the user's drawing overlap each other. If the overlap rate exceeds 70% of the pixels, output the matching character.
Pixel overlap would be a very inaccurate method. Imagine you draw your first line a bit too low but your second perfectly: 三 would be recognised at 67% coverage over 二 at 50%. Imagine both of your lines were just a bit off; you'd get a space instead.
One way to do it is to recognise and classify individual strokes based on the direction and bends in the stroke (not from the image, but from mouse data collected as the user paints the stroke), then look up the sequence of strokes in a character database. You'd need a database with character strokes. I am almost certain that jisho.org that I linked in comments, and many other similar sites, employ this method.
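The stroke-based approach can be sketched roughly as follows. Everything here is a toy: each stroke is reduced to just its dominant direction (a real system would also track bends along the stroke), and the "database" holds three made-up entries instead of thousands of characters.

```javascript
// Classify one stroke (a list of {x, y} points from mouse/touch events)
// by its dominant direction: R(ight), L(eft), D(own), or U(p).
function strokeDirection(points) {
  const dx = points[points.length - 1].x - points[0].x;
  const dy = points[points.length - 1].y - points[0].y;
  return Math.abs(dx) >= Math.abs(dy)
    ? (dx >= 0 ? 'R' : 'L')  // mostly horizontal
    : (dy >= 0 ? 'D' : 'U'); // mostly vertical (screen y grows downward)
}

// Tiny made-up lookup table: stroke-direction sequence -> character.
const STROKE_DB = {
  'R,R': '二',   // two horizontal strokes
  'R,R,R': '三', // three horizontal strokes
  'R,D': '十',   // horizontal stroke, then vertical stroke
};

// Map each stroke to its direction and look the sequence up.
function recognise(strokes) {
  return STROKE_DB[strokes.map(strokeDirection).join(',')] || null;
}
```

Because the lookup works on stroke order and direction rather than pixels, drawing a line slightly too low or too short does not change the result, which is exactly why this beats the overlap idea.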
Another good way would be to use a neural network to recognise the bitmap of the input image; this requires many training examples for every character. Unlike the previous method, it can recognise even cursive-style input (i.e. with strokes flowing into each other), as long as the training was exhaustive enough.
There are many more specific techniques one can find by looking for "Handwritten Kanji Recognition" or "Handwritten Chinese Character Recognition" (and Japanese/Chinese equivalents, like 手書き漢字認識 or 手写汉字识别) through Google Scholar.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
A while ago I had created a small system that would allow one to select various images to use in a forum signature that were all designed to fit together (see example image below). This is currently done by having a series of images that get referenced by their names, which folder they're in, and suffixes in the image names.
I would like to create a system where one could modify these all they wanted. I tried looking up a few different ways to do it, but I was unsuccessful in finding any way to do what I'm aiming for here.
The original images are made in Photoshop and separated into individual layers based on the type of banner. Ideally I'd love to make a system that would allow one to modify the colours (RGB, a slider, something like that), change the icon either from a set of presets or by uploading their own, and modify the text on the images.
After all is said and done, I'd like the assembled image pieces to be downloadable rather than stored server-side. I'd also like to do this without having to export every possible variation, since that's already a nuisance with the current approach.
TL;DR:
Is there any way a user could modify a set of parameters to change colours, icons, or text, then download the result as a PNG? The language doesn't matter; I'm willing to learn, I just want to know the right direction.
Here's a download of the current code for anyone interested.
https://dl.dropboxusercontent.com/u/90098446/website.zip
Example Image (ignore the white lines):
Seems like each banner has four layers: banner, icon, text, and optional tears.
Save the individual layers and assemble them via JavaScript on the front end. Arranging the parts as sprite sheets may make this more convenient, both for editing in Photoshop and for assembling programmatically.
When the user wants to download the results, send a description of the assembly to the back-end, have the back-end assemble the parts into an image, and offer the image for download.
Rather than manually assembling each possibility in Photoshop, you instead let the system assemble it on demand.
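A minimal front-end sketch of that assembly could look like the following. All folder and file names here are assumptions, and since the question prefers not to store results server-side, this version exports the PNG client-side via `toDataURL` instead of round-tripping through the back-end.

```javascript
// Pure helper: pick and order the layer files (bottom-to-top) for a given
// configuration. The folder layout and names are hypothetical.
function layerSources(opts) {
  const srcs = [
    `banners/${opts.banner}.png`,
    `icons/${opts.icon}.png`,
    `text/${opts.text}.png`,
  ];
  if (opts.tears) srcs.push('overlays/tears.png'); // optional top layer
  return srcs;
}

// Promise wrapper around image loading.
function loadImage(src) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = reject;
    img.src = src;
  });
}

// Stack the layers onto a canvas in order and hand back a PNG data URL,
// which can be offered for download without storing anything on the server.
async function assembleBanner(canvas, opts) {
  const ctx = canvas.getContext('2d');
  const layers = await Promise.all(layerSources(opts).map(loadImage));
  layers.forEach(img => ctx.drawImage(img, 0, 0));
  return canvas.toDataURL('image/png');
}
```

Because the configuration is just the `opts` object, the same object can also be posted to a back-end renderer, as the answer above suggests, if server-side assembly is preferred.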
I have actually done something like this. You need a canvas in your HTML and either a finite set of possible images or some JavaScript functions to draw what the user wants. Those images/functions are composed on the canvas so the user can preview the picture, along with a set of controls to change things.
This SO post shows how you can get a PNG from a canvas. I suggest a save button where the user finalizes the picture; this sends the picture to the server, where it is stored. The download feature then serves that picture.
I did this differently: I used the canvas as a preview, and when the user finalized the settings, they were sent to the server as JSON, where the final picture was put together.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I am developing a product configurator in ASP.NET C#, probably something like http://www.vistaprint.in/vp/ns/easypath/studio.aspx?template=446855~s1_BJD_BJX&ag=True&xnav=previews&xnid=image_112&rd=1
I am stuck on these 3 questions.
Users will upload a clear image with simple (no gradients) text on it. I would like to change this text on the image into embroidery-stitched text. The text can be straight, curved, or any other shape.
If that is not possible with text on an image, can I simply convert plain text into embroidery-stitched text? Again, the text can be straight, curved, or any other shape.
Can this be done using some jQuery, JavaScript, or C# plugins? If yes, please suggest some.
As I am new to product configurators, I have no idea where to start and could use some help.
Method 1
One way I would go about this is to have a different image for each letter a-z and A-Z, and for each digit 0-9, all with a transparent background.
When the user is done typing, I would send an AJAX request to the server with the user's input, and the response would be an image containing the text itself. (jQuery could be used for this purpose.)
On the server side, for each letter in the user's input I would fetch the appropriate image; for example, "a" would map to its letter image. (It is better for the letters to have a transparent background.)
Using something like this: Combine two Images into one new Image, I would create the full text and then send it back to the client.
On the client side, you would know roughly where to put the new image on the canvas (for example, it has to be centred vertically and horizontally).
And finally, if you want to curve or otherwise manipulate the text, you can also use standard C# tools (MSDN), as well as this SO answer: here.
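The client side of Method 1 might look like the sketch below. The `/render-text` endpoint is an assumption standing in for the server-side image combiner; the centring helper carries the placement logic mentioned above.

```javascript
// Pure helper: where to place an image so it sits centred on the canvas.
function centredPosition(canvasW, canvasH, imgW, imgH) {
  return {
    x: Math.round((canvasW - imgW) / 2),
    y: Math.round((canvasH - imgH) / 2),
  };
}

// Ask the server to assemble the typed text from the per-letter images,
// then draw the returned image centred on the canvas.
// The "/render-text" endpoint is hypothetical.
async function showEmbroideredText(canvas, text) {
  const res = await fetch('/render-text?text=' + encodeURIComponent(text));
  const img = await createImageBitmap(await res.blob());
  const { x, y } = centredPosition(canvas.width, canvas.height, img.width, img.height);
  canvas.getContext('2d').drawImage(img, x, y);
}
```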
Method 2
Another way is to create (or use) a custom font (appropriate ones can be found here and here) and render the image using it. Please check this SO question. If you need more serious text manipulation, this method is probably more appropriate.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
What I'm creating
I'm creating a JavaScript image-manipulation app (using Fabric.js and the HTML5 canvas) that builds/re-renders the canvas and places/drags objects onto it. After placing the objects, it can export the data URL of the manipulated image.
The feature that I would like to add
I'm trying to implement a reversed radial tilt-shift effect so that certain spots in the picture can be "blurred out". What's the fastest way to achieve this through Fabric.js, or just with the plain HTML5 canvas?
This is an example of what I want to achieve
What I've tried
Adding a Circle shape and trying to add a Blur/Convolute effect to it, then lowering the opacity => This didn't work in my case; I could only change the opacity attribute.
Adding an image (from URL) and trying the same thing as the first point => This didn't work in my case; I could only add a Blur/Convolute filter, but not change the opacity.
Maybe I can finish this demo later.
As you can see, I put your image as the background. Then I loaded the same image into an image element that I used to create a pattern for the circle.
Now, when the circle moves, I move the pattern's offset so it looks like the image is the same as the background, while you are actually looking at the pattern.
Now imagine using StackBlur or the Fabric.js filters to blur the pattern image (just once), and you should get the effect you desire. Some trick to compensate for the scaling effect is required.
I will finish the demo later. I hope this helps.
EDIT: I have serious CORS issues here; I do not know how to make a snippet that does not use local images.
Here is a working demo made with Fabric.js:
http://www.deltalink.it/andreab/fabric/blur.html
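The same idea can be sketched with the plain canvas API, which the question also allows. This is a rough sketch, not the demo's code: it assumes a sharp and a pre-blurred copy of the image have already been loaded, and it names a helper for the pattern-offset trick described above.

```javascript
// Draw the sharp image, clip a circle, and draw the pre-blurred copy with
// the same origin, so the blurred patch stays aligned with the background.
function drawBlurSpot(ctx, sharpImg, blurredImg, cx, cy, radius) {
  ctx.drawImage(sharpImg, 0, 0);
  ctx.save();
  ctx.beginPath();
  ctx.arc(cx, cy, radius, 0, Math.PI * 2);
  ctx.clip();                      // restrict drawing to the circle
  ctx.drawImage(blurredImg, 0, 0); // blurred copy shows only inside it
  ctx.restore();
}

// The pattern trick from the answer above: as the circle moves, shift the
// pattern in the opposite direction so it lines up with the background.
function patternOffset(circleLeft, circleTop) {
  return { offsetX: -circleLeft, offsetY: -circleTop };
}
```

Blurring the copy once up front (e.g. with StackBlur) keeps dragging the circle cheap, since each frame is just two `drawImage` calls and a clip.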
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I want to show a simple custom floor-plan in my Android app, something similar to the image below:
The target is to colour the area I am currently in (I get this information in my app):
It would be perfect if each of these areas were also clickable (to set an OnClickListener or similar) and if I could zoom in/out as with images.
The main restriction is I want everything local, that is, no connection to an internet service to import the image.
I have tried loading an html which contains an svg floor-plan, but I don't know how to change the colour of an area from Android when I move from one area to another.
How can I approach this?
You can create a simple HTML page within a WebView, attach the SVG, and add JavaScript so that tapping on elements in the SVG calls a JavaScript function. You can then send the events to the native Java code in your Android app if your app is native, or just handle them in the HTML.
I believe this article is especially for your question Interfacing with SVG
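The JavaScript side inside such a WebView page could look like this sketch. The `.room` class, the room ids, and the `Android` bridge object are all assumptions; the bridge would be registered on the Java side with `addJavascriptInterface` and `@JavascriptInterface`-annotated methods.

```javascript
// Recolour the SVG area for the room the user is currently in.
// Assumes each room is an SVG element with class "room" and a unique id.
function highlightRoom(id) {
  document.querySelectorAll('.room').forEach(el => { el.style.fill = '#ffffff'; });
  const room = document.getElementById(id);
  if (room) room.style.fill = '#3f51b5'; // colour the current area
}

// Forward taps on rooms to native code through the hypothetical bridge
// object named "Android" injected by the app.
function installTapHandler() {
  document.addEventListener('click', (e) => {
    const room = e.target.closest('.room');
    if (room && window.Android) window.Android.onRoomTapped(room.id);
  });
}
```

The native side can then call `webView.evaluateJavascript("highlightRoom('room-3')", null)` whenever the app detects a move to a new area, keeping everything local with no network access.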
Your floor plan looks a lot like a grid layout! Why are you doing so much work to achieve something very simple? What you need to do is dynamically create a GridLayout and then put other views in it.
This is a very open-ended question. It's also not simple.
You're going to have to do a lot of low-level coding to achieve what you outlined. In any case, you'll have to find a way to turn that SVG into something that can be interacted with. The limiting factor is that you're using SVG: it's a vector image format, not a data format describing your rooms. Unless you parse the SVG into something more usable for a UI, or can get this data in a different format (some form of parsed JSON?), you're just showing an image.
Now, assuming you do find a way to get this data in a more accessible form, you could go down the path of a custom View and override its draw() method. Then you could draw your rooms and positions where you want, add a touch listener, add a drawing state machine, etc. As an alternative to View.draw(), you could use a GLSurfaceView and draw with hardware acceleration.