Hello! My library can help to optimize the 'case ... of ...' construction when there are multiple (at least 5 or more) variants.

Best regards,
Oleksandr Zhabenko.

_______________________________________________
Haskell mailing list
[hidden email]
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell
Have you benchmarked this library? I don't see how using a vector can be any faster than using a case.

On Sat, 21 Sep 2019, 14:00 olexandr543--- via Haskell, <[hidden email]> wrote:
In reply to this post by Haskell - Haskell mailing list
Hello! Matthew, no, I have not benchmarked this library. Theoretically it should be more efficient, because it uses binary search rather than linear search, and binary search is known to be more efficient. I used a similar technique when writing code for my project mm1: there I first used case, but then manually optimized it using case with guards and if-then-else. You can see this across the different releases of mm1. The manually optimized program, even after additional functionality was added, was a little more efficient; it should be mentioned, though, that most of the running time is spent in espeak and sox, so I think the approach should be more efficient in such situations.
I can try to do benchmarking, but it will need some time. Thank you for your advice!

Best regards,
Oleksandr Zhabenko.
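As an illustration of the binary-search dispatch under discussion (a hypothetical sketch, not the library's actual code; it assumes the key-value pairs are pre-sorted in an immutable array from the array package that ships with GHC):

```haskell
module Main where

import Data.Array (Array, bounds, listArray, (!))

-- Binary search over an array of (key, value) pairs sorted by key.
lookupBin :: Ord k => Array Int (k, v) -> k -> Maybe v
lookupBin arr key = go lo0 hi0
  where
    (lo0, hi0) = bounds arr
    go lo hi
      | lo > hi = Nothing
      | otherwise =
          let mid = (lo + hi) `div` 2
              (k, v) = arr ! mid
          in case compare key k of
               LT -> go lo (mid - 1)   -- search the left half
               GT -> go (mid + 1) hi   -- search the right half
               EQ -> Just v

-- A made-up sorted table standing in for the many 'case' branches.
pairs :: Array Int (Char, Int)
pairs = listArray (0, 4) [('a', 1), ('c', 3), ('e', 5), ('g', 7), ('i', 9)]

main :: IO ()
main = print (map (lookupBin pairs) "aei")  -- [Just 1,Just 5,Just 9]
```

Each lookup inspects O(log n) entries, versus O(n) for a left-to-right chain of comparisons.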
In reply to this post by Haskell - Haskell mailing list
Case matching is already optimized in GHC. There might be ways to improve it, but it already uses binary search and/or jump tables to improve performance when there are many branches. On Sat, Sep 21, 2019, 8:59 AM olexandr543--- via Haskell <[hidden email]> wrote:
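The claim that GHC already optimizes large case expressions can be checked on a small example by inspecting the compiler's intermediate output (the dump flags below are standard GHC options; the example itself is illustrative):

```haskell
module Main where

-- A case over a dense range of literals. GHC's pattern-match compiler
-- is free to turn this into a binary search or a jump table instead of
-- a linear chain of equality tests.
classify :: Char -> Int
classify c = case c of
  'a' -> 0
  'b' -> 1
  'c' -> 2
  'd' -> 3
  'e' -> 4
  'f' -> 5
  _   -> -1

-- Compile with: ghc -O2 -ddump-simpl (or -ddump-stg) Main.hs
-- and look at the single multi-alternative case in the output.
main :: IO ()
main = print (map classify "abz")  -- [0,1,-1]
```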
In reply to this post by Haskell - Haskell mailing list
'case ... of ...' in Haskell should be constant time: your constructors are (represented by) ints, you know in advance how many there are, so you can just jump with an offset? (I haven't actually checked this, but it doesn't seem unreasonable to me that this would be the implementation.)
=======
Georgi
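The "constructors are ints" observation can be made concrete with GHC's dataToTag# primitive, which exposes the small integer tag each constructor carries (an illustrative example; Colour is a made-up type):

```haskell
{-# LANGUAGE MagicHash #-}
module Main where

import GHC.Exts (Int (I#), dataToTag#)

-- Constructors of a sum type carry small integer tags, which is why a
-- case over them can compile to an indexed jump rather than a linear
-- chain of comparisons.
data Colour = Red | Green | Blue | Yellow | Magenta

-- Extract the constructor tag (0-based, in declaration order).
tagOf :: Colour -> Int
tagOf x = I# (dataToTag# x)

main :: IO ()
main = print (map tagOf [Red, Green, Blue, Yellow, Magenta])
```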
In reply to this post by David Feuer
At any rate, using binary search doesn’t automatically guarantee faster runtime. Constant factors can add up, and there might not even be enough elements to warrant binary search. Benchmarking on realistic data is definitely a good idea.

On Sat, Sep 21, 2019 at 11:39 AM David Feuer <[hidden email]> wrote:
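A rough way to benchmark two dispatch styles against each other, using only base's System.CPUTime (for trustworthy numbers the criterion package is the usual tool; the mapping here is made up for illustration):

```haskell
module Main where

import Control.Exception (evaluate)
import Data.List (foldl')
import qualified Data.Map.Strict as Map  -- containers, ships with GHC
import System.CPUTime (getCPUTime)

-- Dispatch via a direct case expression.
viaCase :: Char -> Int
viaCase c = case c of
  'a' -> 1
  'b' -> 2
  'c' -> 3
  'd' -> 4
  'e' -> 5
  _   -> 0

-- The same mapping through a Data.Map lookup.
table :: Map.Map Char Int
table = Map.fromList (zip "abcde" [1 ..])

viaMap :: Char -> Int
viaMap c = Map.findWithDefault 0 c table

-- Crude CPU timing of many lookups; evaluate forces the folded sum so
-- laziness does not defer the work past the second clock read.
time :: String -> (Char -> Int) -> IO ()
time label f = do
  start <- getCPUTime
  s <- evaluate (foldl' (\acc c -> acc + f c) 0
                        (concat (replicate 100000 "abcdez")))
  end <- getCPUTime
  putStrLn (label ++ ": " ++ show (end - start)
                  ++ " ps (checksum " ++ show s ++ ")")

main :: IO ()
main = do
  time "case lookup" viaCase
  time "map lookup " viaMap
```

The checksum guards against the whole computation being optimized away and confirms both versions compute the same function.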
In reply to this post by Haskell - Haskell mailing list
Yes, you can use IntMap. It brings in another dependency, and the lookup complexity is O(min(n, W)), where W is the number of bits in the key type used for the mapping. For me it is rather hard to decide whether it would be better, because it raises another problem: finding an efficient binary representation for the keys. While working on the topic, I found that hashing may be the most efficient approach (for lookups one can use e.g. Cuckoo hashing), and there is a Haskell library for it. Using it, you can write something like:

{-# LANGUAGE ScopedTypeVariables #-}
module Main where

import qualified Data.HashTable.IO as I
import Data.Maybe (fromJust, isJust)
import System.Environment (getArgs)

main :: IO ()
main = do
  args <- getArgs
  -- Association list of two-character keys to a Char value.
  let a = [([z, t], x) | x <- ['a'..'d'], z <- ['f'..'t'], t <- ['a'..'p'], [x, z, t] <= "fnf"]
  b :: I.CuckooHashTable String Char <- I.fromList a
  -- Look up a key and print the result, if any.
  let d1 u = do
        d <- I.lookup b u
        if isJust d
          then print (fromJust d)
          else putStrLn "Nothing"
  if null args then d1 "fa" else d1 (head args)

and it works. The library documentation says that the complexity of a Cuckoo hash lookup is about O(1), but with hashing there are questions of load factor and collisions. So my library, which has very generic constraints, can be used instead.

Best regards,
Oleksandr Zhabenko.
On Saturday, 21 September 2019, 21:34:38 GMT+3, Henning Thielemann <[hidden email]> wrote:
On Sat, 21 Sep 2019, olexandr543--- via Haskell wrote:

> My library that can help to optimize using 'case ... of ...'
> construction if there are multiple (more than at least 5) variants.
>
> mm2: The library that can be used for optimization of multiple
> (Ord a) => a -> b transformations

Isn't this a problem you would solve using Data.Map?
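What Henning suggests might look like this with Data.Map from the containers package (an illustrative sketch; the particular mapping is made up):

```haskell
module Main where

import qualified Data.Map.Strict as Map

-- An (Ord a) => a -> b transformation expressed as a Map instead of a
-- long case expression; the keys can be any Ord type.
vowelScore :: Map.Map Char Int
vowelScore = Map.fromList [('a', 3), ('e', 2), ('i', 2), ('o', 1), ('u', 1)]

-- O(log n) lookup with a default for keys not in the table.
score :: Char -> Int
score c = Map.findWithDefault 0 c vowelScore

main :: IO ()
main = print (map score "haskell")  -- [0,3,0,0,2,0,0]
```

Unlike a case expression, the table can also be built or extended at runtime.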
I did not check the code for the 'case ... of ...' construction in GHC; my understanding comes from explanations of how this construction works in Haskell in some teaching materials. So I definitely should do more exploratory work. Nevertheless, the type signatures and the sorting are implemented for a rather general situation and can therefore be preferable in some cases.

Best regards,
Oleksandr Zhabenko.