There is a relatively simple game with the following rules:
There is a safe which needs to be unlocked.
The code to the safe is a 4-digit number with no repeated digits (1234, 4867, 1092, etc.; a code like 1231 isn't possible in this game).
The game gives 5 attempts to guess the right code.
Let's say I start a new game and on the first try I test a code like 0123.
The game responds with 2-1. The 2 means that 0123 contains 2 correct digits which I need to use in the final unlock code; the 1 means that one of those 2 digits is already in the correct position.
After this I have 4 more steps of exactly the same kind, where I try different codes based on the previously tested digits and the game's responses.
The goal is to reach the final code, let's say 9135 (based on the previous 0123 try), for which the game's response is 4-4 (4 right digits, all 4 in place). The earlier that happens, the better.
I know that this can be solved using combinatorics just by excluding some combinations, but I don't know how to choose the most promising combination for the next try, and I hope AI can do it better.
I'm a frontend developer and an absolute beginner in AI. I don't really understand how complex the code to solve this problem would be or what effort it requires. I would really appreciate it if you could explain and share some links/code examples (the language doesn't matter, but JS or Python would be good) of similar solved tasks, so I can solve my problem based on them.
Feel free to tell me if my explanation wasn't clear; I will try simpler words then :)
Thanks!
Your game sounds similar to Mastermind, only with numbers instead of colored pegs.
Googling "Mastermind AI" leads to e.g. this implementation using a genetic algorithm to solve Mastermind, which you could probably look at for inspiration.
While @AKX is correct that this is a variant of Mastermind, a genetic algorithm might not be the first place to look, as it is probably more complex than necessary compared to simpler approaches.
Donald Knuth is famous (among many other things) for working out a solution to the game. There is a good overview of this approach on the Puzzling Stack Exchange site, and if you look at the other answers on that question, there is also a discussion of how to code the solution.
In your case, the simple approach is to write a function that iterates from 0000 to 9999. These are all the potential answers. As you iterate through the numbers, you want to remove (1) all numbers with duplicate digits and (2) all numbers that are inconsistent with the guesses so far. The remaining numbers can be put in an array or list of potential answers. From these, you can just guess any number and then continue the process.
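For concreteness, here is a minimal JavaScript sketch of that filtering idea; the helper names (score, allCandidates, filterCandidates) are my own, and it simply guesses any remaining candidate rather than trying to be clever.

// score() returns the game's feedback for a guess against a candidate code:
// `right` counts digits of the guess that appear anywhere in the code,
// `inPlace` counts digits that are in the correct position.
function score(guess, code) {
  let right = 0, inPlace = 0;
  for (let i = 0; i < 4; i++) {
    if (code.includes(guess[i])) right++;
    if (code[i] === guess[i]) inPlace++;
  }
  return [right, inPlace];
}

// All 4-digit codes with no repeated digits (leading zeros allowed, as in 0123).
function allCandidates() {
  const candidates = [];
  for (let n = 0; n < 10000; n++) {
    const s = String(n).padStart(4, "0");
    if (new Set(s).size === 4) candidates.push(s);
  }
  return candidates;
}

// Keep only candidates consistent with every (guess, feedback) pair so far.
function filterCandidates(candidates, guesses) {
  return candidates.filter(code =>
    guesses.every(({ guess, right, inPlace }) => {
      const [r, p] = score(guess, code);
      return r === right && p === inPlace;
    })
  );
}

// Example: after guessing 0123 and getting the response 2-1:
const remaining = filterCandidates(allCandidates(), [
  { guess: "0123", right: 2, inPlace: 1 },
]);
console.log(remaining.length); // how many codes are still possible
console.log(remaining[0]);     // any of them is a reasonable next guess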
A more complicated approach would be to make the next guess using an algorithm similar to ID3 to try to find the guess that maximizes the information gain you get from the response. But, given how much information you get from each guess, this is unlikely to be needed.
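If you do want to experiment with that, here is a rough, unoptimised sketch of picking the guess whose feedback distribution has the highest entropy; the function names are invented, and for simplicity it only considers guesses from the remaining candidates (a full Knuth-style search would also consider guesses outside that set).

// Feedback as a "right-inPlace" string, matching the game's 2-1 style responses.
function feedback(guess, code) {
  let right = 0, inPlace = 0;
  for (let i = 0; i < 4; i++) {
    if (code.includes(guess[i])) right++;
    if (code[i] === guess[i]) inPlace++;
  }
  return right + "-" + inPlace;
}

// Pick the candidate whose response distribution over the remaining
// candidates carries the most expected information (highest entropy).
function bestGuess(remaining) {
  let best = null, bestEntropy = -1;
  for (const guess of remaining) {
    // Count how many candidates would produce each possible response.
    const buckets = new Map();
    for (const code of remaining) {
      const key = feedback(guess, code);
      buckets.set(key, (buckets.get(key) || 0) + 1);
    }
    // Shannon entropy of the response distribution.
    let entropy = 0;
    for (const count of buckets.values()) {
      const p = count / remaining.length;
      entropy -= p * Math.log2(p);
    }
    if (entropy > bestEntropy) {
      bestEntropy = entropy;
      best = guess;
    }
  }
  return best;
}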
I'm trying to implement a divide and conquer algorithm to find the closest pair of points in a randomly-generated set of points using JavaScript. This algorithm should be running in O(n log n) time, but it is taking considerably longer to run than a simple brute force algorithm, which should be O(n^2).
I've created two jsfiddles that time the algorithms for an array of 16000 points:
Divide and Conquer
Brute Force
My hypothesis is that the divide and conquer is so slow because JavaScript arrays are actually hash tables. Is it possible to significantly speed up the algorithm in JavaScript? If so, what would be the best way to go about doing this?
At a glance, your merge function is allocating way too much memory (roughly O(n^2) in total). I made a hacky way to measure this here. The basic idea is that I just have a counter in the global scope and add the size of the array generated by merge each time it is called. Hopefully this is enough info for you to fix it; if you encounter any more problems I'd be happy to provide a few additional pointers.
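For illustration, the instrumentation can be as simple as the following; the merge shown here is a generic placeholder that assumes points with an x property, not your actual code.

// Global counter that accumulates the size of every array merge returns.
let totalAllocated = 0;

function merge(left, right) {
  const result = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i].x <= right[j].x ? left[i++] : right[j++]);
  }
  while (i < left.length) result.push(left[i++]);
  while (j < right.length) result.push(right[j++]);

  totalAllocated += result.length; // track how much memory merge produces
  return result;
}

// For a healthy divide-and-conquer run, totalAllocated should grow roughly
// like n * log2(n); if it grows like n^2, merge is being fed (or producing)
// far larger arrays than it should.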
Also, just by playing with the numbers it's pretty easy to rule out this being a hash-table issue* - an algorithm slowed down by hash-table lookups would not exhibit a faster growth rate than O(n log n); it would just start slow and grow along the same curve. If you try a range of input sizes, though, it should become apparent that it's growing faster than that, suggesting a different issue.
*The internal implementation of JavaScript arrays is a bit more complicated than them just being objects, but for the point I'm trying to make it doesn't really matter.
I'm building a website that will collect various news feeds, and I would like the texts to be compared for similarity. What I need is some sort of news-text similarity algorithm.
I know that PHP has the similar_text function, but I'm not sure how good it is, and in any case I need it for JavaScript.
So I'd appreciate it if anyone could point me to an example, a plugin, or any instructions on how this is possible, or at least tell me where to look and start investigating.
There's a JavaScript implementation of the Levenshtein distance metric, which is often used for text comparisons. If you want to compare whole articles or headlines, though, you might be better off looking at intersections between the sets of words that make up the texts (and the frequencies of those words) rather than just string similarity measures.
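As a rough illustration of the word-set idea (deliberately naive tokenisation, no stemming or stopword removal), a Jaccard-style overlap can be computed in a few lines:

// Split a text into a set of lowercase word tokens.
function wordSet(text) {
  return new Set(text.toLowerCase().match(/[a-z0-9']+/g) || []);
}

// Jaccard similarity: shared words divided by total distinct words.
function jaccard(textA, textB) {
  const a = wordSet(textA);
  const b = wordSet(textB);
  let common = 0;
  for (const word of a) {
    if (b.has(word)) common++;
  }
  const union = a.size + b.size - common;
  return union === 0 ? 0 : common / union;
}

console.log(jaccard(
  "Markets rally as inflation cools",
  "Stocks rally after inflation data cools"
)); // higher values mean more shared vocabulary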
The question of whether two texts are similar is a philosophical one as long as you don't specify exactly what similarity should mean. Consider the strings "house" and "mouse". Seen at the semantic level they are not very similar, but they are very similar in their "physical appearance", because only one letter differs (and in this case you could go by Levenshtein distance).
To decide about similarity you need an appropriate text representation. You could, for instance, extract and count all n-grams and compare the two resulting frequency vectors using a similarity measure such as cosine similarity. Or you could stem the words to their root forms after removing all stopwords, sum up their occurrences, and use this as input for a similarity measure.
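For example, a simplified character-trigram version of the cosine-similarity approach might look like this (no stopword removal or stemming; treat it as an illustration of the idea rather than a production-ready measure):

// Count character n-grams (trigrams by default) in a text.
function ngramCounts(text, n = 3) {
  const counts = new Map();
  const s = text.toLowerCase();
  for (let i = 0; i + n <= s.length; i++) {
    const gram = s.slice(i, i + n);
    counts.set(gram, (counts.get(gram) || 0) + 1);
  }
  return counts;
}

// Cosine similarity between the two n-gram frequency vectors.
function cosineSimilarity(textA, textB) {
  const a = ngramCounts(textA);
  const b = ngramCounts(textB);
  let dot = 0, normA = 0, normB = 0;
  for (const [gram, count] of a) {
    normA += count * count;
    if (b.has(gram)) dot += count * b.get(gram);
  }
  for (const count of b.values()) normB += count * count;
  return dot === 0 ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity("house prices rise", "mouse prices rise"));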
There are plenty of approaches and papers on this topic, e.g. this one about short texts. In any case: the higher the level of abstraction at which you want to decide whether two texts are similar, the more difficult it gets. I think your question is a non-trivial one (and hence my answer is rather abstract) ... ;-)
I am building a Sudoku game for fun, written in Javascript.
Everything works fine; the board is generated completely, with a single solution each time.
My only problem, and the thing keeping me from releasing this project to the public, is that I don't know how to grade my boards for difficulty levels. I've looked EVERYWHERE, posted on forums, etc. I don't want to write the algorithms myself; that's not the point of this project, and besides, they are too complex for me, as I am no mathematician.
The closest I came is this website that does grading via JS, but the problem is that the code is written in such a lousy, undocumented, ad-hoc manner that it cannot be borrowed...
I'll come to the point: can anyone please point me to a place that offers source code for Sudoku grading/rating?
Thanks
Update 22.6.11:
This is my Sudoku game, and I've implemented my own grading system which relies on basic human-logic solving techniques, so check it out.
I have considered this problem myself and the best I can do is to decide how difficult the puzzle is to solve by actually solving it and analyzing the game tree.
Initially:
Implement your solver using "human rules", not with algorithms unlikely to be used by human players. (An interesting problem in its own right.) Score each logical rule in your solver according to its difficulty for humans to use. Use values in the hundreds or larger so you have freedom to adjust the scores relative to each other.
Solve the puzzle. At each position:
Enumerate all new cells which can be logically deduced at the current game position.
The score of each deduction (completely solving one cell) is the score of the easiest rule that suffices to make that deduction.
EDIT: If more than one rule must be applied together, or one rule multiple times, to make a single deduction, track it as a single "compound" rule application. To score a compound, maybe use the minimum number of individual rule applications to solve a cell times the sum of the scores of each. (Considerably more mental effort is required for such deductions.) Calculating that minimum number of applications could be a CPU-intensive effort depending on your rules set. Any rule application that completely solves one or more cells should be rolled back before continuing to explore the position.
Exclude all deductions with a score higher than the minimum among all deductions. (The logic here is that the player will not perceive the harder ones, having perceived an easier one and taken it; and also, this promises to prune a lot of computation out of the decision process.)
The minimum score at the current position, divided by the number of "easiest" deductions (if many exist, finding one is easier) is the difficulty of that position. So if rule A is the easiest applicable rule with score 20 and can be applied in 4 cells, the position has score 5.
Choose one of the "easiest" deductions at random as your play and advance to the next game position. I suggest retaining only completely solved cells for the next position, passing no other state. This is wasteful of CPU of course, repeating computations already done, but the goal is to simulate human play.
The puzzle's overall difficulty is the sum of the scores of the positions in your path through the game tree.
EDIT: Alternative position score: Instead of completely excluding deductions using harder rules, calculate overall difficulty of each rule (or compound application) and choose the minimum. (The logic here is that if rule A has score 50 and rule B has score 400, and rule A can be applied in one cell but rule B can be applied in ten, then the position score is 40 because the player is more likely to spot one of the ten harder plays than the single easier one. But this would require you to compute all possibilities.)
EDIT: Alternative suggested by Briguy37: include all deductions in the position score. Score each position as 1 / (1/d1 + 1/d2 + ...), where d1, d2, etc. are the scores of the individual deductions. (This basically computes "resistance to making any deduction" at a position, given the individual "deduction resistances" d1, d2, etc. But this would require you to compute all possibilities.)
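To make these options concrete, here is a small JavaScript sketch of how the three position-score variants might be computed; the function names and data shapes are invented for illustration, not part of the original proposal.

function positionScoreEasiestOnly(deductionScores) {
  // Main proposal: minimum rule score divided by how many deductions
  // achieve that minimum (many easy deductions -> an easier position).
  const min = Math.min(...deductionScores);
  const easiestCount = deductionScores.filter(s => s === min).length;
  return min / easiestCount;
}

function positionScoreMinOverall(scoresByRule) {
  // First alternative: for each rule, divide its score by the number of
  // cells it applies to, then take the minimum over all rules.
  // scoresByRule is an array of { score, applicableCells } objects.
  return Math.min(...scoresByRule.map(r => r.score / r.applicableCells));
}

function positionScoreResistance(deductionScores) {
  // Briguy37's alternative: combine all deduction scores like parallel
  // "resistances": 1 / (1/d1 + 1/d2 + ...).
  const sumOfInverses = deductionScores.reduce((sum, d) => sum + 1 / d, 0);
  return 1 / sumOfInverses;
}

// Example from the text: rule A (score 20) applicable in 4 cells.
console.log(positionScoreEasiestOnly([20, 20, 20, 20])); // 5
// Example from the first alternative: rule A (50, 1 cell), rule B (400, 10 cells).
console.log(positionScoreMinOverall([
  { score: 50, applicableCells: 1 },
  { score: 400, applicableCells: 10 },
])); // 40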
Hopefully this scoring strategy will produce a metric for puzzles that increases as your subjective appraisal of difficulty increases. If it does not, then adjusting the scores of your rules (or your choice of heuristic from the above options) may achieve the desired correlation. Once you have achieved a consistent correlation between score and subjective experience, you should be able to judge what the numeric thresholds of "easy", "hard", etc. should be. And then you're done!
Donald Knuth studied the problem and came up with the Dancing Links algorithm for solving Sudoku, which can then be used for rating the difficulty of puzzles.
Google around; there are several implementations of the Dancing Links engine.
Perhaps you could grade the general "constrainedness" of a puzzle? Consider that a new puzzle (with only hints) might have a certain number of cells which can be determined simply by eliminating the values they cannot contain. We could say these cells are "constrained" to a smaller number of possible values than the typical cell, and the more highly constrained cells exist, the more progress one can make on the puzzle without guessing. (Here we consider the requirement for "guessing" to be what makes a puzzle hard.)
At some point, however, the player must start guessing, and, again, the constrainedness of a cell is important, because the fewer values there are to choose between for a given cell, the easier it is to find the correct value (which in turn increases the constrainedness of other cells).
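As a rough sketch of what measuring this could look like, assuming the board is a 9x9 array of numbers with 0 for empty cells (the function names are made up for illustration):

// Candidate values for one empty cell after basic row/column/box elimination.
function candidatesFor(board, row, col) {
  const used = new Set();
  for (let i = 0; i < 9; i++) {
    used.add(board[row][i]); // same row
    used.add(board[i][col]); // same column
  }
  const boxRow = Math.floor(row / 3) * 3;
  const boxCol = Math.floor(col / 3) * 3;
  for (let r = boxRow; r < boxRow + 3; r++) {
    for (let c = boxCol; c < boxCol + 3; c++) {
      used.add(board[r][c]); // same 3x3 box
    }
  }
  const candidates = [];
  for (let v = 1; v <= 9; v++) {
    if (!used.has(v)) candidates.push(v);
  }
  return candidates;
}

// One possible constrainedness score: the average number of candidates per
// empty cell (lower means more constrained, i.e. presumably easier).
function averageCandidates(board) {
  let total = 0, emptyCells = 0;
  for (let r = 0; r < 9; r++) {
    for (let c = 0; c < 9; c++) {
      if (board[r][c] === 0) {
        total += candidatesFor(board, r, c).length;
        emptyCells++;
      }
    }
  }
  return emptyCells === 0 ? 0 : total / emptyCells;
}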
Of course, I don't actually play Sudoku (I just enjoy writing games and solvers for it), so I have no idea if this is a valid metric, just thinking out loud =)
I have a simple solver that looks only for unique possibilities in rows, columns, and squares. When it has solved the few cells solvable by this method, it then picks a remaining candidate, tries it, and sees whether the simple solver then leads to either a solution or a cell empty of possibilities. In the first case the puzzle is solved; in the second, one possibility has been shown to be infeasible and is thus eliminated. In the third case, which leads to neither a final solution nor an infeasibility, no deduction can be reached.
The primary result of cycling through this procedure is to eliminate possibilities until picking a correct cell entry leads to a solution. So far this procedure has solved even the hardest puzzles without fail. It solves puzzles with multiple solutions without difficulty. If the trial candidates are picked at random, it will generate all possible solutions.
I then generate a difficulty for the puzzle based on the number of illegal candidates that must be eliminated before the simple solver can find a solution.
I know that this is like guessing, but if simple logic can eliminate a possible candidate, then one is closer to the final solution.
Mike
I've done this in the past.
The key is that you have to figure out which rules to use from a human-logic perspective. The example you provide details a number of different human-logic patterns as a list on the right-hand side.
You actually need to solve the puzzle using these rules instead of computer rules (which can solve it in milliseconds using simple pattern replacement). Every time you change the board, you can start over from the 'easiest' pattern (say, single open boxes in a cell or row) and move down the chain until you find the next logical 'rule' to use.
When scoring the Sudoku, each methodology is assigned some point value, which you add up for every field you needed to fill out. While 'single empty cell' might get a 0, 'XY Chain' might get 100. You tabulate all of the methods needed (and their frequency) and you wind up with a final weighting. There are plenty of places that list expected values for those weightings, but they are all fairly empirical. You're trying to model human logic, so feel free to come up with your own weightings or enhance the system (if you really only use XY chains, the puzzle is probably easier than if it requires more advanced mechanisms).
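To illustrate the tallying step, a minimal sketch might look like the following; the technique names and point values are placeholders rather than an established scale.

const TECHNIQUE_WEIGHTS = {
  "naked single": 0,
  "hidden single": 10,
  "naked pair": 30,
  "x-wing": 80,
  "xy-chain": 100,
};

// techniquesUsed is the list of technique names recorded by the
// human-style solver, one entry per cell it filled in.
function gradePuzzle(techniquesUsed) {
  return techniquesUsed.reduce(
    (total, name) => total + (TECHNIQUE_WEIGHTS[name] ?? 0),
    0
  );
}

// Example: a puzzle solved mostly with singles plus one XY chain.
console.log(gradePuzzle([
  "naked single", "naked single", "hidden single", "xy-chain",
])); // 110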
You may also find that even though you have a Sudoku with a unique solution, it is unsolvable through human logic.
And also note that this is all far more CPU-intensive than solving it in a standard, patterned way. Some years ago when I wrote my code, it was taking multiple seconds (I forget exactly, but maybe even up to 15) to solve some of the generated puzzles I'd created.
Assuming difficulty is directly proportional to the time it takes a user to solve the puzzle, here is an Artificially Intelligent solution that approaches the results of the ideal algorithm over time.
Randomly generate a fixed number of starting puzzle layouts, say 100.
Initially, offer a random-difficulty section that lets a user play random puzzles from the available layouts.
Keep an average random solution time for each user. I would probably make a top 10/top X leaderboard for this to generate interest in playing random puzzles.
Keep an average solution-time multiplier for each puzzle solve (if a user normally solves puzzles in 5 minutes but solves this one in 20 minutes, a 4 should be factored into the puzzle's average solution-time multiplier).
Once a puzzle has been played enough times to get a base difficulty for the puzzle, say 5 times, add that puzzle to your list of rated puzzles and add another randomly generated puzzle to your available puzzle layouts.
Note: You should keep the first puzzle in your random puzzles list so that you can get better and better statistics on it.
Once you have enough base-rated puzzles, say 50, allow users to access the "Rated Difficulty" portion of your application. The difficulty for each puzzle will be the average time multiplier for that puzzle.
Note: When users choose to play puzzles with rated difficulty, this should NOT affect the average random solution time or average solution time multiplier, unless you want to get into calculating weighted averages (otherwise if a user plays a lot of harder puzzles, their average time and time multipliers will be skewed).
Using the method above, a puzzle would be rated from 0 (already solved/no time to solve) to 1 (users will probably solve this puzzle in their average time) to 2 (users will probably take twice as long to solve this puzzle as their average time) to infinity (users will take forever to find a solution to this puzzle).
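A small sketch of the bookkeeping this implies, with invented function names and data shapes; per the note above, rated-mode solves deliberately leave the user's random-play average untouched.

// Random-mode solves update the user's average solution time.
function recordRandomSolve(user, solveTimeMinutes) {
  user.totalTime = (user.totalTime || 0) + solveTimeMinutes;
  user.solveCount = (user.solveCount || 0) + 1;
  user.averageTime = user.totalTime / user.solveCount;
}

// Any solve of a tracked puzzle contributes a multiplier to that
// puzzle's average, which becomes its rated difficulty.
function recordPuzzleSolve(user, puzzle, solveTimeMinutes) {
  // E.g. a 20-minute solve by a user averaging 5 minutes -> multiplier 4.
  const multiplier = solveTimeMinutes / user.averageTime;
  puzzle.totalMultiplier = (puzzle.totalMultiplier || 0) + multiplier;
  puzzle.solveCount = (puzzle.solveCount || 0) + 1;
  puzzle.difficulty = puzzle.totalMultiplier / puzzle.solveCount;
}

// Example with plain objects standing in for stored records:
const user = {};
const puzzle = {};
recordRandomSolve(user, 5);
recordPuzzleSolve(user, puzzle, 20);
console.log(puzzle.difficulty); // 4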
I've been going through JS tutorials for a week and a half (Lynda.com and the Head First series). Things make general sense, but JS is not nearly as easy for me as HTML/CSS. When I look at really simple, beginner code (e.g. Lynda.com's tutorial where you create a bingo card), I'm struggling to really read through the code in terms of the logic it represents. My guess is that if I don't tackle this right away, any other language I try to learn will be impossible, not to mention that I won't learn JS well, or at all.
So can anybody suggest a book/website that offers good basic instruction about algorithms? Or am I just being too impatient, and after a couple of weeks things will settle and the code will make more sense?
Here is an example of the silly basic code that still perplexes me.
function newCard() {
  if (document.getElementById) {
    for (var i = 0; i < 24; i++) {
      setSquare(i);
    }
  }
}
HTML/CSS are document-description languages, a way of representing visual structure and information; they are not programming languages as such.
JavaScript is not necessarily a simple language per se, so take it easy; you could do with an elementary introduction-to-programming book.
Try to convert what you are reading to English, line by line, in order. The syntax, the symbols and the way it is written are probably the main source of confusion as you are not used to them. People who are not used to algebra panic at the sight of it, with cries of "I will never understand that, how do you read it?" - in time you will become accustomed.
Take this simple bit of code:
1 for (var i=0; i<24; i++) {
2 setSquare(i);
3 }
Line 1: a "for-loop"
A loop is a block of code (denoted by the braces {}) that is repeated until some condition is met. In the case of a for loop, there are 3 parts (expressions) that control the loop.
The first is the initialisation (the starting conditions), in this case declaring a new variable i and setting it to 0: i = 0.
The second is the condition: it tells the loop whether to keep going and is checked every time the loop starts over. Here the condition is i < 24: keep going while the variable i is less than (<) 24.
The final part is the increment: whatever appears there happens once per iteration, at the end of each pass through the loop, before the next one begins. i++ means increment i by one, shorthand for i = i + 1.
So the loop runs multiple times: i starts at 0 and goes up by 1 each time, and once it is no longer less than 24, i.e. it reaches 24, the loop ends. So the block of code is executed 24 times, with i = 0 to 23.
Line 2: Inside the loop is a single statement, a function call to a function called setSquare; the value of i is passed to it each time.
Line 3: The closing brace of the for-loop.
So all together, this code calls the setSquare() function 24 times, with values from 0 to 23.
What setSquare() does is a mystery without seeing that code as well.
Answering your question
It seems to me you're having some problems with basic programming constructs, such as functions, loops, variable declarations, etc. - and if you don't understand those, you're bound never to understand any code at all. So my suggestion is to grab a book about programming (preferably about JavaScript, in your case). I never learned JS from a book, as I already had a programming background so the main concepts were already there, but a friend of mine liked O'Reilly's Head First JavaScript. Then, when the main concepts of the language are learned, take a look at the jQuery library.
Also, two quick notes:
HTML and CSS are not programming languages
You don't need to concern yourself with algorithms, at least for now - an algorithm is a series of complex procedures designed to solve a specific problem, and not a simple for loop used to iterate an array :)
You find learning JavaScript more difficult than the other two because it is a programming language, whereas CSS and HTML are markup/styling languages.
JavaScript isn't an easy first language to learn either. I wouldn't be too worried if you find yourself confused; programming is hard and it doesn't come naturally to everyone. Eventually, your mindset will change and things that seemed impossible at first will seem very intuitive.
That being said, there are very few good starting resources to learn JavaScript. Your best bet is to look at a book like Head First JavaScript. They will take a very slow progression through how to program (the mentality of writing algorithms to solve problems) and also introduce you to all of the features of JavaScript.
Keep your head up : ).
I would hope that you're not having trouble with the for-loop as that's fundamental to programming.
1 for (var i=0; i<24; i++) {
2 setSquare(i);
3 }
To follow up on @Orbling's detailed answer, line 2 reveals the main point of the program. Assuming that setSquare(i) means what it says, the for loop apparently has the side effect of changing the state of a square based on the current value of i. My guess is that the width of the square changes with i.
A key term I mentioned is "side effect", which means that a program affects the state of some entity outside of itself. The following for loop also has a side effect.
for (var i=0; i<24; i++) {
  print(i);
}
where I will stipulate that print(i) will render the value of i in a JS popup.