The Burning Bridges thread got lots done, but seemed to miss a few things, and didn't even touch on the Numeric classes. The Numeric classes should be fixed at some point, and sooner is better than later. However, it would be a large change and would go nicely with a major version bump in base. 5 is coming up soon.

Proposals, ordered from relatively controversial to insanely so (at least IMO):

1. Replace (.) and id in the Prelude with the versions from Control.Category.

This is a small change, and has close to the same effect as the Foldable/Traversable change. The key difference is that this is a much smaller change and there is little current use for the versions from Control.Category. However, historically they have seen use with the other lens-ish libraries, and AFAICT are the reason the lenses in `lens` are "backwards", or at least called so by some.

1.2. Use Semigroupoid for (.) and Ob for id instead.

Personally, I really like this idea, but I think it would be much more controversial.

2. Move Semigroup into the Prelude.

2.1. Make Monoid depend on Semigroup.

3. Do something with the Numeric classes.

This isn't so much a proposal as a request for discussion from people more experienced than me, but I still think a general idea would be useful, if people think that doing *anything* is a good idea.

3.1. Split each numeric operation into its own class.

Say no to 3.2 and yes here for no hierarchy in them/ConstraintKinds/empty classes. Pros: EDSLs, convenience. Cons: would be major breakage; would need ConstraintKinds/empty classes to have a hierarchy.

3.2. Hierarchy.

The classes are TBD; this is here for a straw poll.

_______________________________________________
Libraries mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/libraries
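[For readers following along: proposal 1 can be pictured with a minimal, hedged sketch of the Category class as it appears in base's Control.Category, with the (->) instance written out. The Prelude-hiding boilerplate is exactly the friction the proposal would remove.]

```haskell
import Prelude hiding ((.), id)

-- Control.Category generalises composition and identity from plain
-- functions to any arrow-like type.
class Category cat where
  id  :: cat a a
  (.) :: cat b c -> cat a b -> cat a c

-- Ordinary functions are the canonical instance.
instance Category (->) where
  id x = x
  g . f = \x -> g (f x)

-- With the instance above, composition reads exactly as before:
double :: Int -> Int
double = (* 2) . (+ 0)
```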
Darn, there's another Carter on this list!!! (welcome!)
These are some good points to push on, but we're *two weeks* before GHC 7.8 is tentatively due for release!
Also, #3 is too big to be included in this thread; the ones before it are worth several threads alone. I humbly ask all subsequent respondents to focus on #'s 1 and 2.
Fixing the numeric components of the Prelude will actually require some innovation in the way we can even organize/structure type classes, if we really wish to map the standard pen-and-paper algebraic structures to their computational analogues in a Prelude-friendly way. I've got many good reasons to care about improving this piece of base, including the fact that I'm spending a (professionally inadvisable) large amount of time figuring out how to improve the entire numerical computing substrate for Haskell. And I'm leaning towards figuring out the numeric prelude that needs to be *correct and good* and then pushing for a subset thereof to get into base.

This is one of those areas where "committee" doesn't matter. The design has to work. It has to be usable. And I don't think there's currently any strong "here's the right design" choice. Also, whatever new design lands in GHC's base de facto determines the next Haskell standard (ishhh).
That said, I think that after the split-base work lands, doing surgery on the default numerical classes becomes more tenable.
cheers :)

On Sun, Feb 23, 2014 at 10:13 PM, Carter Charbonneau <[hidden email]> wrote:
> The Burning Bridges thread got lots done, but seemed to miss a few things, and [...]
In reply to this post by Carter Charbonneau
NonEmpty seems to be frequently reimplemented, particularly by beginners. Including Semigroup in Prelude would save all this duplication.
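[The duplication harry describes is easy to picture. Below is a minimal, hedged sketch, modelled loosely on Data.List.NonEmpty from the semigroups package; the class and operator are primed/renamed to avoid clashing with library names.]

```haskell
-- A cut-down NonEmpty, the kind beginners keep reimplementing.
data NonEmpty a = a :| [a]
  deriving (Eq, Show)

infixr 5 :|

-- An illustrative stand-in for Semigroup (renamed to avoid clashes).
class Semigroup' a where
  (<+>) :: a -> a -> a  -- must be associative

-- Appending two non-empty lists is total and trivially non-empty,
-- which is exactly why NonEmpty is the poster child for Semigroup.
instance Semigroup' (NonEmpty a) where
  (x :| xs) <+> (y :| ys) = x :| (xs ++ y : ys)

toList :: NonEmpty a -> [a]
toList (x :| xs) = x : xs
```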
On #3: The library numeric-prelude achieves many of these goals (plus a bunch more). If the experiences of using numeric-prelude are positive, then using it, or a subset of it, as the standard numeric prelude might resolve these goals easily.
On Mon, Feb 24, 2014 at 5:13 AM, harry <[hidden email]> wrote:
> Carter Charbonneau wrote [...]
Let's not talk about this while people are buried with 7.8 release engineering, please :)

There are certainly good ideas in Henning's (amazing) work. HOWEVER, one thing people often forget is that there's more than one valid computational formalization of a given mathematical concept (which itself can often have a multitude of equivalent definitions).

And that's even ignoring the fact that the haddocks are full of readable qualified names like "class C <http://hackage.haskell.org/package/numeric-prelude-0.4.1/docs/Algebra-Algebraic.html#t:C> a => C a where" :)
On Fri, Mar 21, 2014 at 1:08 PM, Corey O'Connor <[hidden email]> wrote:
On 21.03.2014 18:22, Carter Schonwald wrote:
> lets not talk about this while people are buried with 7.8 release
> engineering, please :)
>
> there are certainly good ideas in Henning's (amazing) work,
> HOWEVER, one thing people often forget is that theres more than one
> valid computational formalization of a given mathematical concept (which
> itself can often have a multitude of equivalent definitions).
>
> and thats even ignoring the fact that the haddocks are full of readable
> Qualified names like
> "class C
> <http://hackage.haskell.org/package/numeric-prelude-0.4.1/docs/Algebra-Algebraic.html#t:C> a
> => C a where" :)

I tried three times to make Haddock show qualifications, but it is really frustrating. Last time I tried I despaired of GHC data structures. Somewhere the qualification must have been recognized by GHC, but it is thrown away early and hard to restore.
On 21 March 2014 20:10, Henning Thielemann <[hidden email]> wrote:
> I tried three times to make Haddock show qualifications, but it is
> really frustrating. Last time I tried I despaired of GHC data
> structures. Somewhere the qualification must have been recognized by
> GHC but it is thrown away early and hard to restore.

I actually implemented qualified names in haddock a long time ago (http://projects.haskell.org/pipermail/haddock/2010-August/000649.html), exactly because of numeric-prelude.

Here is a screenshot:
http://projects.haskell.org/pipermail/haddock/attachments/20100828/e91f52de/attachment-0001.png

However, I don't think that patch would still work, but it might give you some hints how to do it.
On 22.03.2014 08:22, Tobias Brandt wrote:
> On 21 March 2014 20:10, Henning Thielemann <[hidden email]> wrote:
>> I tried three times to make Haddock show qualifications, but it is
>> really frustrating. Last time I tried I despaired of GHC data
>> structures. Somewhere the qualification must have been recognized by
>> GHC but it is thrown away early and hard to restore.
>
> I actually implemented qualified names in haddock a long time ago
> (http://projects.haskell.org/pipermail/haddock/2010-August/000649.html)
> exactly because of numeric-prelude.
> Here is a screenshot:
> http://projects.haskell.org/pipermail/haddock/attachments/20100828/e91f52de/attachment-0001.png
>
> However, I don't think that patch would still work, but it might give
> you some hints how to do it.

I know, but as far as I remember it was not completely satisfying. I preferred abbreviated qualifications, e.g. Field.C instead of Algebra.Field.C, and there were differences between imports from modules of the same package and imports from external modules.

http://trac.haskell.org/haddock/ticket/22
On 22 March 2014 08:28, Henning Thielemann <[hidden email]> wrote:
> I know, but as far as I remember it was not completely satisfying. I
> preferred abbreviated qualifications, e.g. Field.C instead of
> Algebra.Field.C, and there were differences between imports from modules
> of the same package and imports from external modules.

There are four modes of qualification in my patch:

* None: as now
* Full: everything is fully qualified
* Local: only imported names are fully qualified, like in the screenshot
* Relative: like local, but prefixes in the same hierarchy are stripped. Algebra.Field.C would become Field.C when shown in the documentation for Algebra.VectorSpace.

The last one probably comes closest to what you want. Preserving the original qualification (as written in the source code) would probably be perfect, but that's already thrown away when we get to it in haddock.
In reply to this post by Corey O'Connor
On Fri, Mar 21, 2014 at 1:08 PM, Corey O'Connor <[hidden email]> wrote:
> On #3: The library numeric-prelude achieves many of these goals (plus a
> bunch more). If the experiences of using numeric-prelude are positive then
> using this or a subset of this as the standard numeric prelude might resolve
> these goals easily.
>
> http://hackage.haskell.org/package/numeric-prelude

One of my major complaints against numeric-prelude is the same as my major complaint against every other such project I've seen put forward: they completely ignore semirings and related structures. Semirings are utterly ubiquitous, and this insistence that every notion of addition comes equipped with subtraction is ridiculous.

In my work I deal with semirings and semimodules on a daily basis, whereas rings/modules show up far less often, let alone fields/vector spaces. When not dealing with semirings, the other structures I work with are similarly general (e.g., semigroups, quasigroups, ...). But this entire area of algebra is completely overlooked by those libraries which start at abelian groups and then run headlong for normed Euclidean vector spaces.

My main complaint against the Num class is that it already assumes too much structure. So developing the hierarchy even further up than Num does little to help me.

--
Live well,
~wren
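[wren's point can be made concrete with a hedged sketch. The class and method names below are illustrative, not from any published library: a semiring needs only addition and multiplication with identities, and genuinely useful instances exist that admit no subtraction at all.]

```haskell
-- A hypothetical Semiring class: two monoids, with multiplication
-- distributing over (commutative) addition, and no negation required.
class Semiring a where
  zero  :: a
  one   :: a
  (|+|) :: a -> a -> a
  (|*|) :: a -> a -> a

-- Ordinary numbers qualify...
instance Semiring Int where
  zero = 0; one = 1; (|+|) = (+); (|*|) = (*)

-- ...but so does the tropical (min, +) semiring used in shortest-path
-- algorithms, which has no sensible notion of subtraction.
data Tropical = Finite Int | Infinity
  deriving (Eq, Show)

instance Semiring Tropical where
  zero = Infinity
  one  = Finite 0
  Infinity |+| y        = y
  x        |+| Infinity = x
  Finite a |+| Finite b = Finite (min a b)
  Infinity |*| _        = Infinity
  _        |*| Infinity = Infinity
  Finite a |*| Finite b = Finite (a + b)
```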
agreed, hence my earlier remarks :)

On Sat, Mar 22, 2014 at 5:09 PM, wren romano <[hidden email]> wrote:
> [...]
In reply to this post by wren romano
My complaint with numeric-prelude is that it doesn't (and arguably can't) fix the problems that for me make Haskell borderline usable for actual engineering work involving actual "normal" numbers, and genuinely somewhat unusable for teaching. Off the top of my head:

* The lack of implicit conversions (except for the weird defaulting of literals), which means that I am constantly writing `fromIntegral` and `realToFrac` in places where there is only one reasonable choice of type conversion, and occasionally having things just malfunction because I didn't quite understand what these conversion functions would give me as a result.

Prelude> 3 + 3.5
6.5
Prelude> let x = 3
Prelude> x + 3.5

<interactive>:4:5:
    No instance for (Fractional Integer) arising from the literal `3.5'
    Possible fix: add an instance declaration for (Fractional Integer)
    In the second argument of `(+)', namely `3.5'
    In the expression: x + 3.5
    In an equation for `it': it = x + 3.5
Prelude>

I mean, seriously? We expect newbies to just roll with this kind of thing?

Even worse, the same sort of thing happens when trying to add a `Data.Word.Word` to an `Integer`. This is a totally safe conversion if you just let the result be `Integer`.

* The inability of Haskell to handle unary negation sanely, which means that I and newbies alike are constantly having to figure things out and parenthesize. From my observations of students, this is a huge barrier to Haskell adoption: people who can't write 3 + -5 just give up on a language. (I love the current error message here, "cannot mix `+' [infixl 6] and prefix `-' [infixl 6] in the same infix expression", which is about as self-diagnosing of a language failing as any error message I've ever seen.)

* The multiplicity of exponentiation functions, one of which looks exactly like C's XOR operator, which I've watched trip up newbies a bunch of times. (Indeed, NumericPrelude seems to have added more of these, including the IMHO poorly-named (^-), which has nothing to do with numeric negation as far as I can tell. See "unary negation" above.)

* The incredible awkwardness of hex/octal/binary input handling, which requires digging a function with an odd and awkward return convention (`readHex`) out of an oddly-chosen module (or rolling my own) in order to read a hex value. (Output would be just as bad except for `Text.Printf` as a safety hatch.) Lord knows what you're supposed to do if your number might have a C-style base specifier on the front, other than the obvious ugly brute-force thing?

* Defaulting numeric types with "-Wall" on producing scary warnings.

Prelude> 3 + 3

<interactive>:2:3: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Num a0) arising from a use of `+'
    In the expression: 3 + 3
    In an equation for `it': it = 3 + 3

<interactive>:2:3: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Num a0) arising from a use of `+' at <interactive>:2:3
      (Show a0) arising from a use of `print' at <interactive>:2:1-5
    In the expression: 3 + 3
    In an equation for `it': it = 3 + 3
6

and similarly for 3.0 + 3.0. If you can't even write simple addition without turning off or ignoring warnings, well, I dunno. Something.

Oh, and try to get rid of those warnings. The only ways I know are `3 + 3 :: Integer` or `(3 :: Integer) + 3`, both of which make code read like a bad joke.

Of course, if you write everything to take specific integral or floating types rather than `Integral` or `RealFloat` or `Num`, this problem mostly goes away. So everyone does, turning potentially general code into needlessly over-specific code.

Not sure I'm done, but running out of steam. But yeah, while I'm fine with fancy algebraic stuff getting fixed, I'd also like to see simple grade-school-style arithmetic work sanely. That would let me teach Haskell more easily, as well as letting me write better, clearer, more correct Haskell for that majority of my real-world problems that involve grade-school numbers.

--Bart
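[Bart's unary-negation point, as a tiny checkable sketch of what one actually has to write today, using only the standard Prelude.]

```haskell
-- "3 + -5" is rejected by the grammar, so Haskell programmers write
-- one of these instead:
viaParens :: Int
viaParens = 3 + (-5)        -- parenthesised negative literal

viaSubtract :: Int
viaSubtract = subtract 5 3  -- the Prelude's section-friendly helper

-- both evaluate to -2
```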
On 23 March 2014 11:58, Bart Massey <[hidden email]> wrote:
> My complaint with numeric-prelude is that it doesn't (and arguably
> can't) fix the problems that for me make Haskell borderline usable for
> actual engineering work involving actual "normal" numbers, and
> genuinely somewhat unusable for teaching. Off the top of my head:
>
> * The lack of implicit conversions (except for the weird defaulting of
> literals), which means that I am constantly writing `fromIntegral` and
> `realToFrac` in places where there is only one reasonable choice of
> type conversion, and occasionally having things just malfunction
> because I didn't quite understand what these conversion functions
> would give me as a result.
>
> Prelude> 3 + 3.5
> 6.5
> Prelude> let x = 3
> Prelude> x + 3.5
>
> <interactive>:4:5:
>     No instance for (Fractional Integer) arising from the literal `3.5'
>     Possible fix: add an instance declaration for (Fractional Integer)
>     In the second argument of `(+)', namely `3.5'
>     In the expression: x + 3.5
>     In an equation for `it': it = x + 3.5
>
> I mean, seriously? We expect newbies to just roll with this kind of thing?

Isn't this more of a ghci issue than a Haskell issue? I actually think it's good behaviour that - in actual code, not defaulted numerals in ghci - we need to explicitly convert between types rather than have a so-far-Integer-value automagically convert to Double.

> Even worse, the same sort of thing happens when trying to add a
> `Data.Word.Word` to an `Integer`. This is a totally safe conversion if
> you just let the result be `Integer`.

What would the type of such an operation be? I think we'd need some kind of new typeclass to denote the "base value", which would make the actual type signature be much more hairy.

> * The inability of Haskell to handle unary negation sanely, which
> means that I and newbies alike are constantly having to figure things
> out and parenthesize. [...]

This is arguably the fault of mathematics for overloading the - operator :p

> * The multiplicity of exponentiation functions [...]
>
> * The incredible awkwardness of hex/octal/binary input handling [...]
>
> * Defaulting numeric types with "-Wall" on producing scary warnings.
> [...]
>
> Oh, and try to get rid of those warnings. The only ways I know are `3
> + 3 :: Integer` or `(3 :: Integer) + 3`, both of which make code read
> like a bad joke.

So above you didn't like how ghci defaulted to types too early, now you're complaining that it's _not_ defaulting? Or just that it gives you a warning that it's doing the defaulting?

> Of course, if you write everything to take specific integral or
> floating types rather than `Integral` or `RealFloat` or `Num`, this
> problem mostly goes away. So everyone does, turning potentially
> general code into needlessly over-specific code.
>
> Not sure I'm done, but running out of steam. [...]
>
> --Bart

--
Ivan Lazar Miljenovic
[hidden email]
http://IvanMiljenovic.wordpress.com
In reply to this post by Barton C Massey
On Sat, Mar 22, 2014 at 8:58 PM, Bart Massey <[hidden email]> wrote:
> * The lack of implicit conversions (except for the weird defaulting of
> [...]

Actually, we don't expect them to roll with it any more.

>>> let x = 3
>>> x + 3.5
6.5

ghci turns on NoMonomorphismRestriction on newer GHCs.

To me that issue is largely independent of changes to the numeric hierarchy, though I confess I'd largely tuned out this thread and am just now skimming backwards a bit.

That said, implicit conversions are something where personally I feel Haskell does take the right approach. It is the only language I have access to where I can reason about the semantics of (+) sanely and extend the set of types.

> Even worse, the same sort of thing happens when trying to add a
> [...]

The problem arises when you allow for users to extend the set of numeric types like Haskell does. We have a richer menagerie of exotic numerical types than any other language, explicitly because of our adherence to a stricter discipline and moving the coercion down to the literal rather than up to every function application.

Because of that, type inference works better. It can flow both forward and backwards through (+), whereas the approach you advocate is strictly less powerful. You have to give up overloading of numeric literals, and in essence this gives up on the flexibility of the numerical tower to handle open sets of new numerical types.

As someone who works with compensated arithmetic, automatic differentiation, arbitrary precision floating point, interval arithmetic, Taylor models, and all sorts of other numerical types in Haskell, basically you're almost asking me to give up all the things that work in this language to go back to a Scheme-style fixed numerical tower.

> * The inability of Haskell to handle unary negation sanely, which
> [...]

That is probably fixable by getting creative in the language grammar. I note it particularly because our Haskell-like language Ermine here at work gets it right. ;)

> * The multiplicity of exponentiation functions, one of which looks
> [...]

It is unfortunate, but there really is a distinction being made.

> * The incredible awkwardness of hex/octal/binary input handling, which
> [...]

A lot of people these days turn to lens for that:

>>> :m + Numeric.Lens Control.Lens
>>> "7b" ^? hex
Just 123
>>> hex # 123
"7b"

> * Defaulting numeric types with "-Wall" on producing scary warnings.
> [...]

This is no longer a problem at ghci due to NMR:

>>> 3 + 3
6

You can of course use

>>> :set -Wall -fno-warn-type-defaults

instead of -Wall for cases like

>>> 3 == 3

where the type doesn't get picked.

-Edward
On Sun, Mar 23, 2014 at 12:11 AM, Edward Kmett <[hidden email]> wrote:
> On Sat, Mar 22, 2014 at 8:58 PM, Bart Massey <[hidden email]> wrote:
>>>> let x = 3
>>>> x + 3.5
> 6.5
>
> ghci turns on NoMonomorphismRestriction on newer GHCs.

Nice. AFAICT newer means 7.8, which I haven't tried yet. Major improvement.

>> Even worse, the same sort of thing happens when trying to add a
>> `Data.Word.Word` to an `Integer`. This is a totally safe conversion if
>> you just let the result be `Integer`.
>
> Because of that type inference works better. It can flow both forward and
> backwards through (+), whereas the approach you advocate is strictly less
> powerful. You have to give up overloading of numeric literals, and in
> essence this gives up on the flexibility of the numerical tower to handle
> open sets of new numerical types.

You obviously know far more about this than me, but I'm not seeing it? AFAICT all I am asking for is numeric subtyping using the normal typeclass mechanism, but with some kind of preference rules that get the subtyping right in "normal" cases? I can certainly agree that I don't want to go toward C's morass of "widening to unsigned" (???) or explicitly-typed numeric literals. I just want a set of type rules that agrees with grade-school mathematics most of the time. I'm sure I'm missing something, and it really is that hard, but if so it makes me sad.

>> * The inability of Haskell to handle unary negation sanely, [...]
>
> That is probably fixable by getting creative in the language grammar. I note
> it particularly because our Haskell-like language Ermine here at work gets
> it right. ;)

Almost every PL I've seen with infix arithmetic gets it right. It's trivial for any operator-precedence parser, and not too hard for other common kinds. In general it would be nice if Haskell allowed arbitrary prefix and postfix unary operators, but I'd settle for a special case for unary minus.

>> * The multiplicity of exponentiation functions, [...]
>
> It is unfortunate, but there really is a distinction being made.

I get that. I even get that static-typing exponentiation is hard. (You should see how we did it in Nickle (http://nickle.org) -- not because it's good but because it calls out a lot of the problems.) What I don't get is why the names seem so terrible to me, nor why the typechecker can't do more to help reduce the number of needed operators, ideally to one. It might mean extra conversion operators around exponentiation once in a while, I guess?

>> * The incredible awkwardness of hex/octal/binary input handling, [...]
>
> A lot of people these days turn to lens for that:
>
>>>> :m + Numeric.Lens Control.Lens
>>>> "7b" ^? hex
> Just 123
>>>> hex # 123
> "7b"

Nice. Once this really becomes standard, I guess we'll be somewhere.

>> * Defaulting numeric types with "-Wall" on producing scary warnings.
>
> This is no longer a problem at ghci due to NMR:
>
>>>> 3 + 3
> 6

Nice. I'm not sure what to make of something like this (with ghci -Wall -XNoMonomorphismRestriction):

>> let discard :: Integral a => a -> (); discard _ = ()
>> discard 3

<interactive>:5:1: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Integral a0) arising from a use of `discard' at <interactive>:5:1-7
      (Num a0) arising from the literal `3' at <interactive>:5:9
    In the expression: discard 3
    In an equation for `it': it = discard 3

<interactive>:5:1: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Integral a0) arising from a use of `discard' at <interactive>:5:1-7
      (Num a0) arising from the literal `3' at <interactive>:5:9
    In the expression: discard 3
    In an equation for `it': it = discard 3
()

I get the feeling I just shouldn't worry about it: there's not much to be done, as this is just a limitation of the static type system and not really Haskell's fault. (Although I wonder why the conversion is warned about twice? :-)

> You can of course use
>
>>>> :set -Wall -fno-warn-type-defaults
>
> instead of -Wall for cases like
>
>>>> 3 == 3
>
> where the type doesn't get picked.

Of course. I have no idea what warnings I might be turning off that would actually be useful?

Anyway, thanks much for the response! --Bart
On Sun, Mar 23, 2014 at 4:50 AM, Bart Massey <[hidden email]> wrote:
Let's consider what directions type information flows through (+).

class Foo a where
  (+) :: a -> a -> a

Given that kind of class you get

(+) :: Foo a => a -> a -> a

Given the result type of an expression, you can know the types of both of its arguments. If either argument is determined, you can know the type of the other argument and the result as well. Given any one of the arguments or the result type, you know the types of the other two.

Consider now the kind of constraint you'd need for widening.

class Foo a b c | a b -> c where
  (+) :: a -> b -> c

Now, given both a and b you can know the type of c. But given just the type of c, you know nothing about the type of either of its arguments. a + b :: Int used to tell you that a and b both had to be Int, but now you'd get nothing! Worse, given the type of one argument, you don't get to know the type of the other argument or the result.

We went from getting 2 other types from 1 in all directions to getting 1 type from 2 others, in only 1 of 3 cases. The very cases you complained about, where you needed defaulting, now happen virtually everywhere!

I do think there were at least two genuinely bad ideas in the design of (^), (^^) and (**) originally. Notably the choice in (^) and (^^) to overload on the power's Integral type. More often than not this simply leads to an ambiguous choice for simple cases like x^2. Even if that had been monomorphized, and MPTCs had existed when they were defined, AND we wanted to use fundeps in the numerical tower, you'd still need at least two operators. You can't overload cleanly between (^) and (**) because instance selection would overlap. Both of their right sides unify:

(^)  :: ... => a -> Int -> a
(**) :: ... => a -> a -> a

The warnings that that flag turns off are just the ones, like your discard and the 3 == 3, where it had to turn to defaulting for an under-determined type.

-Edward
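[Edward's widening class can be made compilable. A hedged sketch follows: the class and instances are invented for illustration, using MultiParamTypeClasses and FunctionalDependencies; note how both arguments must be annotated, since the result type alone pins down nothing.]

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- The fundep says: the argument types determine the result type.
-- Nothing flows the other way, which is exactly the complaint above.
class Add a b c | a b -> c where
  add :: a -> b -> c

instance Add Int Int Int where
  add = (+)

-- The "safe widening" Bart asked for: Word plus Integer is Integer.
instance Add Word Integer Integer where
  add w n = toInteger w + n

-- Both arguments carry annotations; with Num-style (+) the annotation
-- on the result would have been enough.
widened :: Integer
widened = add (7 :: Word) (35 :: Integer)
```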
In reply to this post by Barton C Massey
On Sun, Mar 23, 2014 at 01:50:43AM -0700, Bart Massey wrote:
> >>>> :m + Numeric.Lens Control.Lens
> >>>> "7b" ^? hex
> > Just 123
> >>>> hex # 123
> > "7b"
>
> Nice. Once this really becomes standard, I guess we'll be somewhere.

Unless there's some way I'm unaware of to statically verify that the string indeed represents a valid hex encoding, then this is still not a complete solution.

Tom
In reply to this post by Barton C Massey
On 23.03.2014 09:50, Bart Massey wrote:
>>> * The multiplicity of exponentiation functions, one of which looks
>>> exactly like C's XOR operator, which I've watched trip up newbies a
>>> bunch of times. (Indeed, NumericPrelude seems to have added more of
>>> these, including the IMHO poorly-named (^-) which has nothing to do
>>> with numeric negation as far as I can tell. See "unary negation"
>>> above.)
>>
>> It is unfortunate, but there really is a distinction being made.
>
> I get that. I even get that static-typing exponentiation is hard. [...]
> What I don't get is why the names seem so terrible to me, nor why the
> typechecker can't do more to help reduce the number of needed operators,
> ideally to one. It might mean extra conversion operators around
> exponentiation once in a while, I guess?

I think the power functions of Haskell are the best we can do, and mathematics is to blame for having only one notation for different power functions.

http://www.haskell.org/haskellwiki/Power_function

I like to compare it to division. In school we first learnt natural numbers and that division cannot always be performed with natural numbers. Instead we have division with remainder. In contrast to that, we can always divide rational numbers (except division by zero). In Haskell this is nicely captured by two different functions: div and (/). The same way, I find it sensible to distinguish power functions.

I found the infix operator names (^^) and (**) not very intuitive and defined (^-) and (^/) in NumericPrelude, in order to show that the first one allows negative exponents and the second one allows fractional exponents. Unfortunately the first one looks like a power function with negated exponent, and I have had no better idea for an identifier so far.
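[Henning's div-versus-(/) analogy carries over directly. A small sketch of the three standard power functions and where each one stops working; the signatures in the comments are those of the Prelude.]

```haskell
-- (^)  :: (Num a, Integral b)        => a -> b -> a  -- exponent >= 0
-- (^^) :: (Fractional a, Integral b) => a -> b -> a  -- any integer exponent
-- (**) :: Floating a                 => a -> a -> a  -- any real exponent
intPow :: Integer
intPow = 2 ^ (10 :: Int)     -- Integer has no (/), but (^) never needs one

negPow :: Double
negPow = 2 ^^ (-1 :: Int)    -- needs Fractional for the reciprocal: 0.5

realPow :: Double
realPow = 2 ** 0.5           -- needs Floating: sqrt 2
```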
In reply to this post by Tom Ellis
On 23.03.2014 19:16, Tom Ellis wrote:
> On Sun, Mar 23, 2014 at 01:50:43AM -0700, Bart Massey wrote:
>> [...]
>> Nice. Once this really becomes standard, I guess we'll be somewhere.
>
> Unless there's some way I'm unaware of to statically verify that the string
> indeed represents a valid hex encoding, then this is still not a complete
> solution.

Since the original complaint was about parsing, there must be a way to fail. The first line ("7b" ^? hex) seems to allow failure.

For literal hex input we have 0x7b.

However, I don't see a problem with readHex:

Prelude> case Numeric.readHex "7b" of [(n,"")] -> print n; _ -> putStrLn "could not parse hex number"
123
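[For completeness, the readHex convention above is easy to wrap once into a total function. A sketch; parseHex is a made-up name, not a library function.]

```haskell
import Numeric (readHex)

-- Accept only when the whole input is consumed; readHex's
-- [(value, rest)] return convention is hidden behind a Maybe.
parseHex :: String -> Maybe Integer
parseHex s = case readHex s of
  [(n, "")] -> Just n
  _         -> Nothing
```

Literal hex input in source code is already covered by 0x7b; this covers runtime input.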
On Sun, Mar 23, 2014 at 07:30:07PM +0100, Henning Thielemann wrote:
> On 23.03.2014 19:16, Tom Ellis wrote:
>> Unless there's some way I'm unaware of to statically verify that the string
>> indeed represents a valid hex encoding, then this is still not a complete
>> solution.
>
> Since the original complaint was about parsing, there must be a way
> to fail. The first line ("7b" ^? hex) seems to allow failure.
>
> For literal hex input we have 0x7b.

Oh, I wasn't reading carefully enough!