Hi All,
I really appreciate all the help I received when I asked you to critique
my PrefixMap module a few weeks ago. I think I am making good progress
in correcting the "lisp" in my Haskell programming. I'll be very
grateful to anyone who can take a glance at the attached short program
and say if any unidiomatic usages pop out. It solves sudoku puzzles.
(What pleasure do people get by doing these in their heads?!?)

Cheers, David

--------------------------------
David F. Place
mailto:[hidden email]

_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe

[attachment: sudoku.hs (4K)]
On 3/22/06, David F. Place <[hidden email]> wrote:
> Hi All,
>
> I really appreciate all the help I received when I asked you to
> critique my PrefixMap module a few weeks ago. I think I am making
> good progress in correcting the "lisp" in my Haskell programming.
> I'll be very grateful to anyone who can take a glance at the attached
> short program and say if any unidiomatic usages pop out

Try

    cellIndex r c = 3*(r `div` 3) + c `div` 3

It's much much shorter and should produce the same results.

> It solves sudoku puzzles. (What pleasure do people get by doing these
> in their heads?!?)

They are probably asking the same question: why take hours to write a
program to do it when with my mad sudoku solving skills I can solve it
in X seconds? My roommate is like this.

Cheers,
  Jared.

--
http://www.updike.org/~jared/
reverse ")-:"
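Jared's one-liner can be checked in isolation. A minimal sketch (the
name `cellIndex` and 0-based row/column indexing are taken from the
quoted suggestion; the sample grid points are mine):

```haskell
-- Jared's suggested block-index function: rows and columns 0..8 map to
-- one of the nine 3x3 blocks, numbered 0..8 left-to-right, top-to-bottom.
cellIndex :: Int -> Int -> Int
cellIndex r c = 3 * (r `div` 3) + c `div` 3

-- Sampling the centre cell of each 3x3 block should hit all nine block
-- indices in order.
main :: IO ()
main = print [cellIndex r c | r <- [1,4,7], c <- [1,4,7]]
-- → [0,1,2,3,4,5,6,7,8]
```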
In reply to this post by David Place
On Mar 22, 2006, at 2:16 PM, David F. Place wrote:
> Hi All,
>
> I really appreciate all the help I received when I asked you to
> critique my PrefixMap module a few weeks ago. I think I am making
> good progress in correcting the "lisp" in my Haskell programming.
> I'll be very grateful to anyone who can take a glance at the
> attached short program and say if any unidiomatic usages pop out.

> sudoku :: Sudoku -> IO ()
> sudoku s = ((mapM_ putStrLn) . (check s) . (take 1) . solveSudoku) s

> check puzzle [] = [showSudoku puzzle,"No solutions."]
> check puzzle [solution]
>     | solution `solves` puzzle =
>         ["Puzzle:",showSudoku puzzle,"Solution:",showSudoku solution]
>     | otherwise = ["Program Error. Incorrect Solution!"]

That '(check s) . (take 1)' bit looks a little odd to me. I would
simply have written 'check' to match like:

    check puzzle [] = <failure case>
    check puzzle (solution : _ ) = <success case>

Also, I like to put off doing IO as long as possible, so I'd probably
have 'sudoku' return a String or [String] and move the putStr into
main. It's an easy way to make your code more reusable.

Also, your parser is pretty hackish (but I suspect you knew that
already). FYI, solveSudoku has a bug; if you enter an invalid puzzle it
will return non-solutions.

> It solves sudoku puzzles. (What pleasure do people get by doing
> these in their heads?!?)

I have no idea.

Rob Dockins

Speak softly and drive a Sherman tank.
Laugh hard; it's a long way to the bank.
        -- TMBG
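Rob's pattern-matching suggestion can be sketched as a complete toy
program. Strings stand in for the attachment's `Sudoku` type, so this
is an illustration of the shape, not the original code:

```haskell
-- Sketch of Rob's suggestion: matching on the solution list directly
-- makes the 'take 1' in the caller's pipeline unnecessary, since only
-- the head of the (lazily produced) solution list is ever demanded.
check :: String -> [String] -> [String]
check puzzle []             = [puzzle, "No solutions."]
check puzzle (solution : _) = ["Puzzle:", puzzle, "Solution:", solution]

main :: IO ()
main = mapM_ putStrLn (check "<puzzle>" ["<solution>"])
```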
Robert Dockins wrote:
> On Mar 22, 2006, at 2:16 PM, David F. Place wrote:
>> Hi All,
>>
>> I really appreciate all the help I received when I asked you to
>> critique my PrefixMap module a few weeks ago. I think I am making
>> good progress in correcting the "lisp" in my Haskell programming.

The style of the code and choice of names is good.

>> I'll be very grateful to anyone who can take a glance at the attached
>> short program and say if any unidiomatic usages pop out.
>
> That '(check s) . (take 1)' bit looks a little odd to me. I would
> simply have written 'check' to match like:
>
>     check puzzle [] = <failure case>
>     check puzzle (solution : _ ) = <success case>

A simpler version of replace:

    replace :: Sudoku -> Int -> Int -> Int -> Sudoku
    replace s r c x =
        let (above, row:below) = splitAt r s
            (left, _:right)    = splitAt c row
        in above ++ ((left ++ (x:right)) : below)

And a simpler version of toList in keeping with your style:

    toList :: Set -> [Int]
    toList i = concatMap f [9,8..1]
        where f b = if testBit i b then [b] else []

(The above is also a little less prone to off-by-one errors since
testBit is the opposite of setBit.)

> Also, I like to put off doing IO as long as possible, so I'd probably
> have 'sudoku' return a String or [String] and move the putStr into
> main. It's an easy way to make your code more reusable.
>
> Also, your parser is pretty hackish (but I suspect you knew that
> already).

A minimal change to the parser that still does no sanity checking but
may be a little more robust is

    import Data.Char

    readSudoku :: [String] -> String -> Sudoku
    readSudoku ["line"] input =
        takeBy 9 $ map digitToInt $ filter isDigit $ head $ lines input
    readSudoku _ input =
        map (map digitToInt . filter isDigit) $ lines input
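The splitAt-based `replace` above runs as-is once the grid type is made
concrete. A small self-contained check, taking `Sudoku` to be a list of
rows (`[[Int]]`, which is what the surrounding code appears to assume)
and using a toy 2x2 grid:

```haskell
-- The splitAt-based replace from the message: replace s r c x puts x at
-- row r, column c of the row-list grid s (0-based indices).
replace :: [[Int]] -> Int -> Int -> Int -> [[Int]]
replace s r c x =
  let (above, row : below) = splitAt r s
      (left, _ : right)    = splitAt c row
  in above ++ (left ++ x : right) : below

main :: IO ()
main = print (replace [[1,2],[3,4]] 1 0 9)  -- set row 1, column 0 to 9
-- → [[1,2],[9,4]]
```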
In reply to this post by Jared Updike
Jared Updike wrote:
> On 3/22/06, David F. Place <[hidden email]> wrote:
> ...
>> It solves sudoku puzzles. (What pleasure do people get by doing
>> these in their heads?!?)
>
> They are probably asking the same question: why take hours to write a
> program to do it when with my mad sudoku solving skills I can solve it
> in X seconds? My roommate is like this.

I would say because they have chosen the wrong language for this
problem :-) Solving Sudoku is a typical finite domain constraint
problem. Thus, a language with constraint solving facilities like Curry
(a combination of Haskell and constraint logic programming) is much
better suited. Actually, I wrote a Sudoku solver in Curry, and the
actual code for the solver is 10 lines which is compact and well
readable (if you are familiar with Haskell); see

    http://www.informatik.uni-kiel.de/~curry/examples/CLP/sudoku.curry

Regards,

Michael
In reply to this post by haskell-2
Thanks for your helpful suggestions. I took them to heart and
incorporated many of them in a new version.

--------------------------------
David F. Place
mailto:[hidden email]

[attachment: sudoku.hs (5K)]
In reply to this post by Michael Hanus
>>> It solves sudoku puzzles. (What pleasure do people get by doing
>>> these in their heads?!?)

probably the same you get from writing programs?-) figuring out the
rules, not getting lost in complexity, making something difficult work..

>> They are probably asking the same question: why take hours to write a
>> program to do it when with my mad sudoku solving skills I can solve it
>> in X seconds? My roommate is like this.

if we just throw raw computing power at the problem (in-place array
updates, bitvectors, multiprocessors, ..), wouldn't they be justified?
but as soon as we try to take their skills and encode them in our
programs, things become more interesting (do computers really "play"
chess?-).

> I would say because they have chosen the wrong language for this
> problem :-) Solving Sudoku is a typical finite domain constraint
> problem. Thus, a language with constraint solving facilities
> like Curry (a combination of Haskell and constraint logic programming)
> is much better suited. Actually, I wrote a Sudoku solver in Curry
> and the actual code for the solver is 10 lines of code which is
> compact and well readable (if you are familiar with Haskell), see
>
> http://www.informatik.uni-kiel.de/~curry/examples/CLP/sudoku.curry

interesting. I haven't yet got round to installing Curry and trying
this, but I assume that this declarative specification, under a finite
domain constraint solver, is not just an effective implementation, but
an efficient one, right?

if yes, it is really impressive how constraint propagation has managed
to make essentially the same code that, as a mere functional logic
program, would be effective, but hardly useable, so much more efficient,
just by imposing a different evaluation strategy on it. and the
factoring into constraint generation and constraint propagation under
some strategy is nice as well.

my own Sudoku solver (yes, me too - see attached, but only after you've
written your own!-) uses simple hand-coded constraint propagation to
limit the space for exhaustive search - some puzzles are solved by
constraint propagation only, and even where guesses are used, each guess
is immediately followed by propagation, to weed out useless branches
early, and to deduce consequences of each guess before asking for the
next one. the good old game: start with generate&test, then move the
tests forward, into the generator.

I've only coded the two most useful groups of constraints (when there's
only a single number left for a position, or when there is only a
single position left for a number). there are other deductions one does
in by-hand solving, and I'm not an experienced sudoku solver myself, so
I don't even know more than a few such rules yet, but these particular
two seem sufficient to get acceptable performance even under ghci/hugs,
so why do more?-) (*)

[by acceptable, I mean that my sequence of 5 test puzzles is solved in
less than 20 seconds on a 2GHz Pentium M; no idea how that compares to
the most efficient solvers]

since I haven't factored out the constraint propagation into a general
module, the core of my code is a lot longer than the Curry version
(about 60 additional lines, though I'm sure one could reduce that;-).
the only negative point I can find about the Curry example is that it
isn't obvious what tricks the FD-constraint solver is using (ie., it
would be nice to have a concise language for expressing propagation
techniques, and then explicitly to apply a strategy to the declarative
specification, instead of the implicit, fixed strategy of the built-in
solver).

for instance, I assume that Michael's declarative specification
implicitly allows the built-in solver to use the first group of
constraints I mentioned (only a single possible number left for a
position), but does it use the second group (only a single position
left to place a number in a particular line/column/block)? my guess is
that no, it doesn't, although it wouldn't be difficult to change that -
just have the declarative specification express the dual puzzle as well
(assigning positions to numbers instead of numbers to positions). is
this correct, or is that dual reading already implied?

perhaps Haskell should have Control.Constraint.* libraries for
generalized constraint propagation (and presumably for constraint
handling rules as well, as they are so popular nowadays for specifying
Haskell's type classes)?

cheers,
claus

(*) actually, that was a bit disappointing!-( I was hoping for some fun
while trying to encode more and more "clever" rules, but not much of
that seems to be required.

[attachment: Sudoku.hs (6K)]
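The first of Claus's two constraint groups (a cell down to a single
candidate decides itself, and that number is removed from its peers)
can be sketched over a candidate map. This is an editorial
illustration, not the attached solver; the peer relation below is a toy
one, and in a real solver it would enumerate the row, column, and block
of each cell:

```haskell
import qualified Data.Map as M
import Data.List (delete)

-- Candidates per cell: cell index -> numbers still possible there.
type Cands = M.Map Int [Int]

-- Rule: for every cell already down to one candidate, delete that
-- number from the candidate lists of the cell's peers.
singles :: (Int -> [Int]) -> Cands -> Cands
singles peers m = foldr prune m [ (i, n) | (i, [n]) <- M.toList m ]
  where
    prune (i, n) acc = foldr (M.adjust (delete n)) acc (peers i)

main :: IO ()
main = print (M.toList (singles peersOf start))
  where
    start = M.fromList [(0, [5]), (1, [5,7]), (2, [3,5])]
    peersOf 0 = [1, 2]   -- toy peer relation, just for the demo
    peersOf _ = []
-- → [(0,[5]),(1,[7]),(2,[3])]
```

Iterating this to a fixed point (and interleaving it with guessing) is
the propagation loop Claus describes.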
Claus Reinke wrote:
>>>> It solves sudoku puzzles. (What pleasure do people get by doing
>>>> these in their heads?!?)
>
> probably the same you get from writing programs?-) figuring out the
> rules, not getting lost in complexity, making something difficult
> work..

From a human standpoint, there are some puzzles that are much harder
than others. This also applies to the few programs that I have seen. It
is this variety of complexity that makes it interesting.

>>> They are probably asking the same question: why take hours to write
>>> a program to do it when with my mad sudoku solving skills I can
>>> solve it in X seconds? My roommate is like this.
>
> if we just throw raw computing power at the problem (in-place array
> updates, bitvectors, multiprocessors, ..), wouldn't they be justified?
> but as soon as we try to take their skills and encode them in our
> programs, things become more interesting (do computers really "play"
> chess?-).

You can go get the 36,628 distinct minimal puzzles from
http://www.csse.uwa.edu.au/~gordon/sudokumin.php that have only 17
clues. Then you can run all of them through your program to locate the
most evil ones, and use them on your associates. :)

This also gave me a way to statistically measure whether a new
deductive step made much of a difference (or made no difference).
Histograms and gnuplot helped.

> [snip Curry language example]
>
> my own Sudoku solver (yes, me too - see attached, but only after
> you've written your own!-) uses simple hand-coded constraint
> propagation to limit the space for exhaustive search [...] the good
> old game: start with generate&test, then move the tests forward, into
> the generator.
>
> I've only coded the two most useful groups of constraints (when
> there's only a single number left for a position, or when there is
> only a single position left for a number). [...] these particular two
> seem sufficient to get acceptable performance even under ghci/hugs,
> so why do more?-) (*)

I have two versions of a solver. The first is a re-write of GDANCE by
Knuth to solve Sudoku efficiently as a binary cover problem (see
http://www-cs-faculty.stanford.edu/~knuth/programs.html ). This uses
the "Dancing Links" algorithm implemented with STRefs and is very fast.

The second uses a different encoding to look for clever deductions.
This alone solves some easy problems and comes very close before
getting stuck on most. There are a few, though, that start with 17
hints and only discover one or two more by logic before getting stuck.
These are the most evil of all.

You might be interested in the deductions described at
http://www.sudokusolver.co.uk/

> [by acceptable, I mean that my sequence of 5 test puzzles is solved in
> less than 20 seconds on a 2GHz Pentium M; no idea how that compares to
> the most efficient solvers]

I could run ~20,000 puzzles in a couple of hours. (The collection was
smaller then.)

> perhaps Haskell should have Control.Constraint.* libraries for
> generalized constraint propagation (and presumably for constraint
> handling rules as well, as they are so popular nowadays for specifying
> Haskell's type classes)?

Did you see the monad at http://haskell.org/hawiki/SudokuSolver ?
Perhaps you could generalize that.

> (*) actually, that was a bit disappointing!-( I was hoping for some
> fun while trying to encode more and more "clever" rules, but not much
> of that seems to be required.

You need more than 5 examples. The truly evil puzzles are rarer than
that. Go get the set of minimal puzzles and see how far your logic
takes you.

--
Chris
Chris wrote:
> You need more than 5 examples. The truly evil puzzles are rarer than
> that. Go get the set of minimal puzzles and see how far your logic
> takes you.

Chris elucidated some of my questions before I finished writing my
email...

Claus wrote:
> (*) actually, that was a bit disappointing!-(

How much harder is the problem of generating (hard/medium/easy)
(solvable) Sudoku puzzles? Are all puzzles solvable (that don't break
the rules at any point)? I imagine a simple way is to start with a
correctly saturated grid of numbers and then start randomly shooting
holes in it and testing if it is still solvable (either unambiguously
or ambiguously) with your Sudoku solver? A rough measure of the
difficulty of the unsolved puzzle could be how long the solver took to
solve it (number of steps) (and the number of possible solutions)? Are
puzzles with multiple solutions usually considered harder or easier?
Are these considered proper puzzles?

Is this a more interesting problem to try to solve (generating) rather
than solving puzzles? I haven't investigated it much, but I thought
about it when I was writing YASS (Yet Another Sudoku Solver) of my own.
What makes a Sudoku puzzle fiendish? Just the amount of missing
information, the amount of lookahead required?

Jared.

P.S. Another interesting problem could be trying other number
arrangements besides 9x9, e.g. hexadecimal puzzles... I wrote my solver
to handle these, but I never saw other than 9x9 puzzles to try it on
(hence the idea of generating puzzles)... Is that because most people
want puzzles to solve by hand instead of by computer?

--
http://www.updike.org/~jared/
reverse ")-:"
In reply to this post by Claus Reinke
> since I haven't factored out the constraint propagation into a
> general module, the core of my code is a lot longer than the Curry
> version (about 60 additional lines, though I'm sure one could reduce
> that;-). the only negative point I can find about the Curry example is
> that it isn't obvious what tricks the FD-constraint solver is using

Curry does not have a constraint solver of its own. It simply delegates
all constraints to the FD solver of SICStus Prolog. The all_different
constraint subsumes the rules that you describe, depending on the
consistency algorithm used. FD solvers implement general but efficient
algorithms that are much more complex than a few simple rules. See

http://www.sics.se/sicstus/docs/latest/html/sicstus/Combinatorial-Constraints.html#Combinatorial%20Constraints

for the SICStus FD all_different documentation.

> (ie., it would be nice to have a concise language for expressing
> propagation techniques, and then explicitly to apply a strategy to the
> declarative specification, instead of the implicit, fixed strategy of
> the built-in solver).

CHR is meant as a high-level language for expressing propagation. It
(currently) does not include a strategy language, though.

Cheers,

Tom

--
Tom Schrijvers

Department of Computer Science
K.U. Leuven
Celestijnenlaan 200A
B-3001 Heverlee
Belgium

tel: +32 16 327544
e-mail: [hidden email]
In reply to this post by Jared Updike
Jared Updike wrote:
> Chris wrote:
>> You need more than 5 examples. The truly evil puzzles are rarer than
>> that. Go get the set of minimal puzzles and see how far your logic
>> takes you.
>
> Chris elucidated some of my questions before I finished writing my
> email...
>
> Claus wrote:
>> (*) actually, that was a bit disappointing!-(
>
> How much harder is the problem of generating (hard/medium/easy)
> (solvable) Sudoku puzzles?

I pursued this line of attack as well. I could not generate the hardest
puzzles, though I was working without studying others' approaches.

> Are all puzzles solvable (that don't break the rules at any point)?

All well-formed problems have exactly one solution. A puzzle is always
solvable by, at worst, brute force.

> I imagine a simple way is to start with a correctly saturated grid of
> numbers and then start randomly shooting holes in it and testing if it
> is still solvable (either unambiguously or ambiguously) with your
> Sudoku solver?

That method works poorly.

> A rough measure of the difficulty of the unsolved puzzle could be how
> long the solver took to solve it (number of steps) (and the number of
> possible solutions)? Are puzzles with multiple solutions usually
> considered harder or easier? Are these considered proper puzzles?

Proper puzzles have exactly one solution; accept no substitute. A
problem is minimal (a partial order) if removing any single hint
creates additional solutions. So the goal is to build a minimal problem
with as few hints as possible. The smallest number of hints achieved to
date is 17, but there is no proof that a 16-hint puzzle is impossible.

> Is this a more interesting problem to try to solve (generating) rather
> than solving puzzles? I haven't investigated it much but I thought
> about it when I was writing YASS (Yet Another Sudoku Solver) of my
> own. What makes a Sudoku puzzle fiendish? Just the amount of missing
> information, the amount of lookahead required?

A measure of difficulty is more subjective. Different programs will
make luckier guesses on any specific problem. So I consider "how many
blanks are left when the pure logic program gets stuck" to be my
favorite difficulty metric. These were the worst from the list of 17's
(left to right, top to bottom):

    "....6..8..2.........1.......7....1.25...3..........4....42.1...3..7..6.........5."
    "...8...17.6.3....5.......2....6..4..7...2....1.........4...7.....3...8......1...."
    ".9......25..3........6.....3.6...4......81...7.........8..9......2....3.......67."
    "1...2..6.7.5................8.....1....5.3.....47........4..7..2.....5...6..1...."
    "5...8.2..74..................2.5.......6....78......4..6.7.......1...5.....3.4..."

My puzzle generator worked like this: start with an empty grid and
randomly add allowed hints until the number of solutions falls to 1.
Now try to remove hints while keeping the number of solutions exactly
1. The performance of this was erratic, but it occasionally produced
puzzles with as few as 20 to 22 hints. There were a few fairly evil
spawn of my generator:

    .2.|...|58.
    6..|9..|..1
    3..|.7.|6..
    ---+---+---
    ...|.65|...
    ...|...|1..
    ...|.32|.97
    ---+---+---
    ...|...|.38
    .59|...|...
    ..4|...|2..

    .5.|1..|...
    6..|7..|...
    4.8|.63|...
    ---+---+---
    .1.|...|98.
    .6.|...|...
    97.|.28|...
    ---+---+---
    ...|..5|..1
    ...|93.|.4.
    ...|...|2.7

    1.6|...|..5
    ...|.7.|.21
    3..|...|.8.
    ---+---+---
    ...|8..|1.6
    ...|.1.|9..
    ...|..9|.7.
    ---+---+---
    ...|4.6|...
    ..2|..3|...
    857|...|...

I like that they have two 3x3 sections which are without any hints.

> P.S. Another interesting problem could be trying other number
> arrangements besides 9x9, e.g. hexadecimal puzzles... I wrote my
> solver to handle these but I never saw other than 9x9 puzzles to try
> it on (hence the idea of generating puzzles)... Is that because most
> people want puzzles to solve by hand instead of computer?

Yes.

--
Chris
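Chris's "remove hints while keeping the number of solutions exactly 1"
step can be sketched independently of any particular solver by passing
the solution counter in as a parameter. The counter below is a
stand-in oracle purely for the demo; a real generator would call its
solver there:

```haskell
import Data.List (delete)

-- Greedy minimisation: repeatedly drop any hint whose removal keeps
-- the puzzle uniquely solvable, until no more hints can be dropped.
-- 'solutions' counts the solutions of a hint set (capped at 2 would
-- suffice in practice).
minimise :: Eq a => ([a] -> Int) -> [a] -> [a]
minimise solutions = go
  where
    go hs = case [ h | h <- hs, solutions (delete h hs) == 1 ] of
              (h : _) -> go (delete h hs)
              []      -> hs

main :: IO ()
main = print (minimise toy [1, 2, 3, 4])
  where
    -- toy oracle: uniquely solvable iff hints 1 and 2 both remain
    toy hs = if 1 `elem` hs && 2 `elem` hs then 1 else 2
-- → [1,2]
```

The result is minimal in Chris's sense: removing any remaining hint
would admit extra solutions.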
In reply to this post by Jared Updike
Hello
it seems that a sudoku solver may be a good candidate for the nofib
suite / language comparison shootout

--
Best regards,
Bulat                          mailto:[hidden email]
In reply to this post by haskell-2
On Monday, 3 April 2006 at 18:52, Chris Kuklewicz wrote:
> Claus Reinke wrote:
>>>>> It solves sudoku puzzles. (What pleasure do people get by doing
>>>>> these in their heads?!?)
>>
>> probably the same you get from writing programs?-) figuring out the
>> rules, not getting lost in complexity, making something difficult
>> work..

Exactly, and I wrote a solver with a relatively elaborate strategy last
year (it was written incrementally, so the code is horrible; I always
wanted to rewrite it properly but never got to do it, hence I will not
post it unless asked to), to have both kinds of pleasure: figure out a
strategy and teach it to my computer.

> You can go get the 36,628 distinct minimal puzzles from
> http://www.csse.uwa.edu.au/~gordon/sudokumin.php that have only 17
> clues. Then you can run all of them through your program to locate
> the most evil ones, and use them on your associates. :)

Well, I loaded them down and let my solver determine whether all of
them have a unique solution (they do); that took 76 min 14.260 sec user
time, hence roughly 0.125 secs per puzzle, so I dare say there are no
evil ones among them. (However, Alson Kemp's solver from the HaWiki
page -- which, I don't know why, is much faster than Cale Gibbard's --
took over 20 minutes to solve the second puzzle,

    0 0 0 0 0 0 0 1 0
    4 0 0 0 0 0 0 0 0
    0 2 0 0 0 0 0 0 0
    0 0 0 0 5 0 6 0 4
    0 0 8 0 0 0 3 0 0
    0 0 1 0 9 0 0 0 0
    3 0 0 4 0 0 2 0 0
    0 5 0 1 0 0 0 0 0
    0 0 0 8 0 7 0 0 0

so these puzzles may be evil after all.)

My solver needed to guess on 5,309 of the 36,628 17-hint puzzles, which
I find a bit disappointing -- the big disappointment was when neither I
nor my solver were able to solve the example from the HaWiki page
without guessing :-( -- does anybody know whether in a uniquely
solvable sudoku puzzle guessing is never necessary, i.e. by proper
reasoning ('if I put 6 here, then there must be a 3 and thus the 4 must
go there...' is what I call guessing) there is always at least one
entry determined?

[snip quoted discussion of deduction rules and the Dancing Links
solver]

> I could run ~20,000 puzzles in a couple of hours. (The collection was
> smaller then.)

As stated above, I ran the 36,628 in 76 minutes on a 1200MHz Duron. But
I must confess that my solver takes about 25 secs for the empty puzzle;
guessing is baaaaaad.

> You need more than 5 examples. The truly evil puzzles are rarer than
> that. Go get the set of minimal puzzles and see how far your logic
> takes you.

Cheers,
Daniel

--
"In My Egotistical Opinion, most people's C programs should be indented
six feet downward and covered with dirt."
    -- Blair P. Houghton
In reply to this post by Bulat Ziganshin-2
bulat.ziganshin:
> Hello
>
> it seems that sudoku solver may be a good candidate for nofib suite /
> language comparison shootout

It would also be nice to see some example sudoku solvers posted on an
'Idioms' page on haskell.org:

    http://www.haskell.org/haskellwiki/Category:Idioms

someone could just create a new page 'Sudoku' and add the phrase
[[Category:Idioms]] to the text, and it will be indexed. Seeing 4 or 5
solutions would be a useful tutorial, I'd imagine.

-- Don
In reply to this post by Daniel Fischer-4
On 2006-04-03, Daniel Fischer <[hidden email]> wrote:
> does anybody know whether in a uniquely solvable sudoku puzzle
> guessing is never necessary, i.e. by proper reasoning ('if I put 6
> here, then there must be a 3 and thus the 4 must go there...' is what
> I call guessing) there is always at least one entry determined?

No, it sometimes is (well, depending on your set of base inference
rules -- throwing all possible solutions in and doing pattern matching
technically allows no-backtracking solutions).

Most people use "eliminate all impossible numbers in a given box (that
is, that number occurs in the same row, column, or square)", combined
with "if there is only one place in this {row, column, square} a number
can be, place it."

But there are additional common patterns, such as "if there are N boxes
in a {row, column, square}, each with a subset of N numbers, then
eliminate those numbers in the other squares." For example, if two
boxes in a row both only allow 2 and 3, then 2 and 3 can be eliminated
from all the other boxes in that row. These are often worth
implementing, as they can radically reduce guessing.

Also worth doing may be chains of reasoning that can restrict a number
to be in a given row or column of a square (or square of a row or
column), which can then eliminate it from other places.

--
Aaron Denney
-><-
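Aaron's N-subset rule, in the N=2 ("naked pair") case, can be sketched
over a single unit represented as a list of candidate lists. This is an
editorial illustration only; cell positions and the rest of the grid
are elided:

```haskell
import Data.List (nub, (\\))

-- If exactly two cells in a unit admit exactly the same two
-- candidates, those two numbers can be removed from every other cell
-- in the unit (the pair must occupy those two cells).
nakedPairs :: [[Int]] -> [[Int]]
nakedPairs unit = [ if cs `elem` pairs then cs else cs \\ concat pairs
                  | cs <- unit ]
  where
    pairs = [ p | p <- nub unit
                , length p == 2
                , length (filter (== p) unit) == 2 ]

main :: IO ()
main = print (nakedPairs [[2,3], [2,3], [1,2,4], [2,3,5]])
-- → [[2,3],[2,3],[1,4],[5]]
```

The generalisation to N-subsets follows the same shape, scanning
size-N candidate sets shared by exactly N cells.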
In reply to this post by Daniel Fischer-4
Daniel Fischer wrote:
> does anybody know whether in a uniquely solvable sudoku puzzle
> guessing is never necessary, i.e. by proper reasoning ('if I put 6
> here, then there must be a 3 and thus the 4 must go there...' is what
> I call guessing) there is always at least one entry determined?

http://www.phil.uu.nl/~oostrom/cki20/02-03/japansepuzzles/ASP.pdf

"As an application, we prove the ASP-completeness (which implies
NP-completeness) of three popular puzzles: Slither Link, Cross Sum, and
Number Place."

As the size N of the puzzle increases (3x3x3, 4x4x4, 5x5x5, ...), the
problem is NP-complete. And the distinction between "logic" and "brute
force" is imprecise: complex logic looks like "hypothetical guess and
check", and the efficient dancing links algorithm by Knuth is very
smart brute force.

--
Chris
On 4/3/06, Donald Bruce Stewart <[hidden email]> wrote:
> It would also be nice to see some example sudoku solvers posted
> on an `Idioms' page on haskell.org:
> http://www.haskell.org/haskellwiki/Category:Idioms
>
> someone could just create a new page 'Sudoku' and add the phrase
> [[Category:Idioms]] to the text, and it will be indexed.

Done. Lives at http://www.haskell.org/haskellwiki/Sudoku

> Seeing 4 or 5 solutions would be a useful tutorial, I'd imagine.
>
> > it seems that sudoku solver may be a good candidate for nofib suite /
> > language comparison shootout

Anyone with killer solvers (Chris?) can add them to the wiki. Maybe the
fastest can be used in a new Shootout benchmark.

Jared.

--
http://www.updike.org/~jared/
reverse ")-:"
On Mon, 3 Apr 2006, Jared Updike wrote:

> or ambiguously) with your Sudoku solver? A rough measure of the
> difficulty of the unsolved puzzle could be how long the solver took to
> solve it (number of steps) (and the number of possible solutions)? Are
> puzzles with multiple solutions usually considered harder or easier?
> Are these considered proper puzzles?

It's an interesting test to run a Sudoku solver on an empty array. :-)
Henning Thielemann wrote:
> On Mon, 3 Apr 2006, Jared Updike wrote:
>
>> or ambiguously) with your Sudoku solver? A rough measure of the
>> difficulty of the unsolved puzzle could be how long the solver took to
>> solve it (number of steps) (and the number of possible solutions)? Are
>> puzzles with multiple solutions usually considered harder or easier?
>> Are these considered proper puzzles?
>
> It's an interesting test to run a Sudoku solver on an empty array. :-)

I am cleaning up my old (aka inexperienced) solver based on Knuth's
dancing links to put on the wiki. The code is very different from most
Haskell solutions, since it revolves around a mutable data structure
(which is not an MArray). It "solves" an empty array in 81 steps with no
backtracking, and it will produce a list of all the solutions of an
empty board quite efficiently. Cleaning up my "logic" based solver will
take longer.

--
Chris
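For readers unfamiliar with dancing links: it solves Sudoku as an exact
cover problem, where each candidate move (digit n at row r, column c)
covers exactly four constraint columns. A minimal sketch of that
encoding follows -- the names are illustrative and this is not the
solver discussed above:

```haskell
-- One candidate move: put digit n (1..9) at row r, column c (both 0..8).
-- It covers four exact-cover columns; a solution picks 81 moves that
-- together cover every column exactly once.
constraints :: Int -> Int -> Int -> [(String, Int)]
constraints r c n =
  [ ("cell", 9*r + c)          -- each cell holds exactly one digit
  , ("row",  9*r + (n-1))      -- each digit appears once per row
  , ("col",  9*c + (n-1))      -- each digit appears once per column
  , ("box",  9*box + (n-1)) ]  -- each digit appears once per 3x3 square
  where box = 3 * (r `div` 3) + c `div` 3
```

An empty board just means no columns are pre-covered, which is why the
algorithm still walks straight to a solution in 81 placements.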
> Curry does not have a constraint solver of its own. It
> simply delegates all constraints to the FD solver of SICStus Prolog.

or that of SWI Prolog (which prompted my attempt to install Curry),
which was implemented by.. hi, again!-) (*)

> The all_different constraint subsumes the rules that you describe,
> depending on the consistency algorithm used. FD solvers implement
> general but efficient algorithms that are much more complex than a few
> simple rules.

I haven't yet been able to get Curry to work on my windows machine, but
it seems to do a straightforward translation of these constraints to
Prolog + FD solver, close to the one I've attached (an easy way to "use"
external constraint solvers from Haskell ;-). The docs you pointed to
state that all_different, in the declarative view, simply translates
into mutual inequalities between the list members, and although I don't
fully understand the sources, they seem to confirm that not much more is
going on.

As I said, it is reasonable that this covers the first group of
constraints (every position in each coordinate holds a positive
single-digit number; every positive single-digit number occurs at most
once in each coordinate). What I can't quite make out, though, is how
that would cover the second group of constraints we discussed (every
number occurs at least once in each coordinate), in terms of using it
for propagation and avoiding search: if I have a line in which no
position is uniquely determined (all positions have a current domain of
size > 1), but only one of the positions (i) has a domain including the
number n, then that group of constraints allows me to assign n to i,
without guessing. Wouldn't the labelling in the FD solver generate all
the possible numbers in the domain of i, up to n, before committing to n
on i as the only option that is consistent with the constraints?
While equivalent from a declarative point of view, that would be a lot
less efficient, not to mention other propagation techniques that depend
on inspecting and modifying the current domains of more than one
position without actually instantiating any variables.

Btw, I thought it would be easy to augment the declarative spec from
which all those constraints are generated, to expose this second group
of constraints. But since the domains are implicit, I don't see how the
"assign a number to each position" and the "assign a position to each
number" constraints could communicate about their separate progress in
narrowing the search space (other than when either group uniquely
determines a number on a position, or by having the Prolog code inspect
the low-level representation of constraints).

If I compare the Prolog version generated by the attached Haskell
program with my current Haskell version, both in code interpreters, then
the Haskell version is significantly faster. Is that only because of
details, or because of more propagation?

cheers,
claus

(*) closed-world or open-world, but small-world for sure!-)

SudokuPL.hs (1K)
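The propagation at issue here -- the "second group" of constraints -- is
what Sudoku solvers usually call the hidden-single rule, and a solver
with explicit candidate domains can apply it directly, with no labelling
or search. A minimal Haskell sketch, with illustrative names (this is
not code from the attached program):

```haskell
-- Candidate digits for one position (illustrative representation).
type Cell = [Int]

-- Hidden-single rule for one unit (row, column, or square): if digit n
-- appears in the domain of exactly one position, that position must
-- hold n -- no guessing needed.
hiddenSingles :: [Cell] -> [Cell]
hiddenSingles unit = map fix unit
  where
    singles = [n | n <- [1..9], length (filter (n `elem`) unit) == 1]
    fix cs = case filter (`elem` singles) cs of
               [n] -> [n]   -- this cell is the only home for n
               _   -> cs
```

This is exactly the inference that pairwise inequalities alone do not
perform eagerly: the rule narrows a domain of size > 1 to a singleton
without enumerating the other candidates first.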