Origins of '$'


Origins of '$'

George Pollard
Hello Haskell-Café:

This is a little bit random, but I was just wondering if anyone knew
where the $ low-precedence parenthesis-eliminating application operator
originated. The Haskell Report doesn't mention anything, and I can't
search for "$" on Google. So... who thought it up? Does it originate in
an earlier language, or is it uniquely Haskellish? :)

- George


Re: Origins of '$'

Don Stewart
porges:
> Hello Haskell-Café:
>
> This is a little bit random, but I was just wondering if anyone knew
> where the $ low-precedence parenthesis-eliminating application operator
> originated. The Haskell Report doesn't mention anything, and I can't
> search for "$" on Google. So... who thought it up? Does it originate in
> an earlier language, or is it uniquely Haskellish? :)
>

If in doubt, you can search the early archives.

   http://www.cse.unsw.edu.au/~dons/haskell-1990-2006/threads.html 

Early on there is a discussion about using $ for module import
qualifiers:

    http://www.cse.unsw.edu.au/~dons/haskell-1990-2006/msg00411.html
    use ' or $ for module qualifiers.  The former would require

But then by 1991 we start to see things take shape,

    http://www.cse.unsw.edu.au/~dons/haskell-1990-2006/msg00443.html
    haskell report version 1.1: beta-to-beta2
    Will Partain

    11 Jun 91 20:41
    - ($$)          :: (a -> b) -> a -> b
    - f $$ a            =  f a

Where '$$' was removed from the draft 1.1 report.

Then in the following thread we start to see the emergence of the
low-fixity $ that we know today. This is the first reference to it that
I can find in the list:

    http://www.cse.unsw.edu.au/~dons/haskell-1990-2006/msg00647.html
    syntax change
    Paul Hudak

    Sun, 1 Dec 1991 21:16:00 +0000
    About the fixity of $

    | The problem is that V1.1 does not allow things like:
    |
    | f x $ \y->
    | g y $
    | ...
    |
    | where the fixity of $ is defined as:
    |
    | infixr 0 $

Which suggests that $ was already in the 1.0 report going to SIGPLAN.
Perhaps Paul or Simon could shed light on it? Anyone have the 1.0 report
lying around to check whether it appeared there earlier?

Paul reiterates this in Aug 1992,

    http://www.cse.unsw.edu.au/~dons/haskell-1990-2006/msg00889.html

    Of course, if you really don't like the parens, you can always write
    your example as:

      f $ x!i

    where ($) is defined in the prelude as:

      infixr 0 $
      f $ x = f x
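
To see what that low fixity buys in practice, here is a small sketch in
today's Haskell (just an illustration of the definition above, not code
from the 1992 thread):

    import Control.Monad (forM_)

    main :: IO ()
    main = do
      -- with explicit parentheses
      print (sum (map (+ 1) [1 .. 10]))
      -- the same expression, flattened by the right-associative, low-fixity $
      print $ sum $ map (+ 1) [1 .. 10]
      -- and the lambda-chaining style from Paul's example
      forM_ [1 .. 3] $ \i ->
        putStrLn $ "line " ++ show i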

-- Don

Re: Origins of '$'

Hans Aberg
In reply to this post by George Pollard
On 7 Dec 2008, at 04:30, George Pollard wrote:

> This is a little bit random, but I was just wondering if anyone knew
> where the $ low-precedence parenthesis-eliminating application operator
> originated. The Haskell Report doesn't mention anything, and I can't
> search for "$" on Google. So... who thought it up? Does it originate in
> an earlier language, or is it uniquely Haskellish? :)

As for the operator itself, it appears in Alonzo Church, "The Calculi
of Lambda-Conversion", where it is written as exponentiation, like
x^f, i.e., typographically as x with f as a superscript.

One can define operators
   a ^ b := b(a)          -- Application in inverse.
   (a * b)(x) := b(a(x))  -- Function composition in inverse.
   (a + b)(x) := a(x) * b(x)
   O(x) := I              -- Constant function returning identity.
   I(x) := x              -- Identity.
and use them to define the lambda calculus (the first four suffice;
Church reverses the order of "*").

Then on Church's natural number functionals, these are just the  
expected natural number operations.

   Hans



Re: Origins of '$'

Luke Palmer
On Sun, Dec 7, 2008 at 3:05 AM, Hans Aberg <[hidden email]> wrote:
> One can define operators
>  a ^ b := b(a)          -- Application in inverse.
>  (a * b)(x) := b(a(x))  -- Function composition in inverse.
>  (a + b)(x) := a(x) * b(x)
>  O(x) := I              -- Constant function returning identity.
>  I(x) := x              -- Identity.
> and use them to define the lambda calculus (the first four suffice;
> Church reverses the order of "*").

The simple elegance of writing this encoding just increased my already infinite love of Haskell by another cardinality.

a .^ b = b a
(a .* b) x = b (a x)
(a .+ b) x = a x .* b x
o x = i
i x = x

toNat x = x (+1) 0
fromNat n = foldr (.) id . replicate n
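
As a quick sanity check (a GHCi session with the definitions above in
scope), the encoded operators really do act like arithmetic:

    ghci> toNat (fromNat 3)
    3
    ghci> toNat (fromNat 2 .+ fromNat 3)   -- (.+) is addition
    5
    ghci> toNat (fromNat 2 .* fromNat 3)   -- (.*) is multiplication
    6
    ghci> toNat (fromNat 2 .^ fromNat 3)   -- (.^) is exponentiation, 2^3
    8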

Luke


Re: Origins of '$'

Hans Aberg
On 7 Dec 2008, at 11:34, Luke Palmer wrote:

> On Sun, Dec 7, 2008 at 3:05 AM, Hans Aberg <[hidden email]> wrote:
> One can define operators
>  a ^ b := b(a)          -- Application in inverse.
>  (a * b)(x) := b(a(x))  -- Function composition in inverse.
>  (a + b)(x) := a(x) * b(x)
>  O(x) := I              -- Constant function returning identity.
>  I(x) := x              -- Identity.
> and use them to define the lambda calculus (the first four suffice;
> Church reverses the order of "*").
>
> The simple elegance of writing this encoding just increased my  
> already infinite love of Haskell by another cardinality.
>
> a .^ b = b a
> (a .* b) x = b (a x)
> (a .+ b) x = a x .* b x
> o x = i
> i x = x
>
> toNat x = x (+1) 0
> fromNat n = foldr (.) id . replicate n

I have some more notes on this that you might translate, if possible  
(see below).

If one implements integers this way, the time complexity of the
operators will be high, but that is because n is in effect represented
as 1+...+1. Representing numbers in a positional notation system, built
from these same operators, would fix that, though there is a lot of
other overhead.

   Hans


Associativity: a*(b*c) = (a*b)*c, a+(b+c) = (a+b)+c

RHS Relations:
   a^O = I, a^I = a
   a^(b * c) = (a^b)^c
   a^(b + c) = a^b * a^c
   a*(b + c) = a*b + a*c

LHS Relations:
   I * a = a, O + a = a, O * a = I ^ a
   c functor (i.e., c(a*b) = c(a)*c(b), c(I) = I) =>
     (a*b)^c = a^c * b^c
     (a+b)*c = a*c + b*c
     I^c = I

If n is in Natural and f: A -> A is an endo-function, define
   f^n := I,              if n = 0
          f * ... * f,    if n >= 1
          |-n times-|
The natural number functionals, corresponding to Church's number
functionals, are then defined by
   \bar n(f) := f^n
If S(x) := x + 1 (regular integer addition), then
   \bar n(S)(0) = n

Also write (following Hancock)
   log_x b := \lambda x b
Then
   log_x I = O
   log_x x = I
   log_x(a * b) = log_x a + log_x b
   log_x(a ^ b) = (log_x a) * b,    x not free in b.
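
Translated into Luke's Haskell rendering above (my own quick check,
taking a = 2, b = 3, c = 4), two of the RHS relations come out as:

    -- a^(b*c) = (a^b)^c : both components are 2^12 = 4096
    checkPow = ( toNat (fromNat 2 .^ (fromNat 3 .* fromNat 4))
               , toNat ((fromNat 2 .^ fromNat 3) .^ fromNat 4) )

    -- a^(b+c) = a^b * a^c : both components are 2^7 = 128
    checkSum = ( toNat (fromNat 2 .^ (fromNat 3 .+ fromNat 4))
               , toNat ((fromNat 2 .^ fromNat 3) .* (fromNat 2 .^ fromNat 4)) )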



Re: Origins of '$'

Mark P. Jones
In reply to this post by Don Stewart
Don Stewart wrote:
> Which suggests that $ was already in the 1.0 report going to SIGPLAN.
> Perhaps Paul or Simon could shed light on it? Anyone have the 1.0 report
> lying around to check if it was in earlier?

As far as Haskell is concerned, the first "report"-ed occurrence
of the $ operator was in the Haskell 1.2 report dated March 1992.
I don't see any mention of the $ operator in either the 1.0 or
the 1.1 reports (April 1990 and August 1991, respectively).

The 1.0 report did define the following operator, which is a
variant of $:

   let     ::  a -> (a -> b) -> b
   let x k  =  k x

This was exported from the prelude, but its definition actually
appeared in the PreludeIO section of the report, hinting at the
main motivation for its introduction in support of continuation
based I/O.  (Monadic I/O didn't officially arrive until the 1.3
report in May 1996.)
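
For readers coming from today's Prelude: that reversed-application
operator survives as (&) in Data.Function (base 4.8 / GHC 7.10 and
later), and it is just flip ($). A small sketch (the name applyTo is
mine, since `let' can no longer be reused):

    import Data.Function ((&))

    -- the Haskell 1.0 "let" operator, under a different name
    applyTo :: a -> (a -> b) -> b
    applyTo x k = k x

    demo :: (Int, Int, Int)
    demo = ( applyTo 3 (+ 1)   -- the 1.0 style
           , 3 & (+ 1)         -- today's (&)
           , (+ 1) $ 3 )       -- plain $, with the arguments flipped
                               -- all three are 4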

But the "let" operator was quickly promoted to a keyword in
Haskell 1.1 with the introduction of let expressions, replacing
the "where expressions" that Haskell 1.0 had provided for local
definitions.  With the move to 1.1, "where" became part of the
syntax for definition right hand sides, able to scope across
multiple guards and no longer part of the expression syntax.

A little history can sometimes be fun.

All the best,
Mark

Re: Origins of '$'

Jonathan Cast
On Mon, 2008-12-08 at 09:58 -0800, Mark P. Jones wrote:

> Don Stewart wrote:
> > Which suggests that $ was already in the 1.0 report going to SIGPLAN.
> > Perhaps Paul or Simon could shed light on it? Anyone have the 1.0 report
> > lying around to check if it was in earlier?
>
> As far as Haskell is concerned, the first "report"-ed occurrence
> of the $ operator was in the Haskell 1.2 report dated March 1992.
> I don't see any mention of the $ operator in either the 1.0 or
> the 1.1 reports (April 1990 and August 1991, respectively).
>
> The 1.0 report did define the following operator, which is a
> variant of $:
>
>    let     ::  a -> (a -> b) -> b
>    let x k  =  k x
>
> This was exported from the prelude, but its definition actually
> appeared in the PreludeIO section of the report, hinting at the
> main motivation for its introduction in support of continuation
> based I/O.  (Monadic I/O didn't officially arrive until the 1.3
> report in May 1996.)

Not officially, but `let' as above is in fact the unit of the Cont
monad.  And (>>>) from the PreludeIO section of the same report is the
(>>=) for the Cont monad.

So monadic I/O was there, it just seems that no-one noticed (or I
haven't seen this explicitly pointed out)...
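
Spelled out in modern notation (only a sketch of the shape -- I haven't
checked the exact type the 1.0 report gave (>>>)): with continuations
represented as (a -> r) -> r, the 1.0 `let' is the unit and bind is the
usual CPS plumbing.

    -- A bare-bones continuation type, without the newtype wrapper that
    -- Control.Monad.Cont adds.
    type Cont r a = (a -> r) -> r

    unit :: a -> Cont r a        -- Haskell 1.0's `let' (Cont's return)
    unit x = \k -> k x

    bind :: Cont r a -> (a -> Cont r b) -> Cont r b   -- the shape of (>>>)
    bind m f = \k -> m (\a -> f a k)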

jcc



Re: Origins of '$'

Dan Piponi
In reply to this post by Hans Aberg
On Sun, Dec 7, 2008 at 2:05 AM, Hans Aberg <[hidden email]> wrote:
> As for the operator itself, it appears in Alonzo Church, "The Calculi of
> Lambda-Conversion", where it is written as exponentiation, like x^f

That's reminiscent of the notation in Lambek and Scott where (roughly
speaking) the function converting an element of an object A^B to an
arrow B->A (something Haskellers don't normally have to think about)
is written as a superscript integral sign. Presumably this comes from
the same source. Both $ and the integral sign are forms of the letter
's'. Don't know why 's' would be chosen though.
--
Dan

Re: Origins of '$'

Hans Aberg
On 8 Dec 2008, at 19:36, Dan Piponi wrote:

> On Sun, Dec 7, 2008 at 2:05 AM, Hans Aberg <[hidden email]> wrote:
>> As for the operator itself, it appears in Alonzo Church, "The Calculi
>> of Lambda-Conversion", where it is written as exponentiation, like x^f
>
> That's reminiscent of the notation in Lambek and Scott where (roughly
> speaking) the function converting an element of an object A^B to an
> arrow B->A (something Haskellers don't normally have to think about)
> is written as a superscript integral sign. Presumably this comes from
> the same source. Both $ and the integral sign are forms of the letter
> 's'. Don't know why 's' would be chosen though.

In set theory, and sometimes in category theory, A^B is just another
notation for Hom(B, A), and the latter might be given the alternate
notation B -> A. The reason is that for finite sets, computing
cardinalities results in the usual power function on natural numbers -
same as Church, then.

And the integral sign comes from Leibniz: a stylized "S" standing
for summation. Also, it is common to let "s" or sigma stand for a  
section, that is, if given functions
   s: A -> B
   pi: B -> A
such that the composition
   pi o s: A -> B -> A
is the identity on A, then s is called a section and pi a projection  
(as in differential geometry).
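
A tiny concrete instance of that terminology (my example, not Hans's):
pairing with a unit value is a section of the projection fst.

    sec :: a -> (a, ())      -- s  : A -> B,  with B = (A, ())
    sec x = (x, ())

    proj :: (a, ()) -> a     -- pi : B -> A
    proj = fst

    -- proj . sec is the identity on A, so sec is a section of proj
    check :: Bool
    check = map (proj . sec) [1, 2, 3 :: Int] == [1, 2, 3]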

   Hans



Re: Origins of '$'

Nathan Bloomfield

> In set theory, and sometimes in category theory, A^B is just another
> notation for Hom(B, A), and the latter might be given the alternate
> notation B -> A. The reason is that for finite sets, computing
> cardinalities results in the usual power function on natural numbers -
> same as Church, then.
>
>  Hans

Slightly off topic, but the A^B notation for hom-sets also makes the natural isomorphism we call currying expressible as A^(BxC) = (A^B)^C.
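
In Haskell the two directions of that isomorphism are just the
Prelude's curry and uncurry (type variables renamed to match A, B, C):

    -- curry   :: ((b, c) -> a) -> (b -> c -> a)   -- A^(BxC) -> (A^B)^C
    -- uncurry :: (b -> c -> a) -> ((b, c) -> a)   -- (A^B)^C -> A^(BxC)

    example :: Int
    example = curry fst 1 2 + uncurry (+) (3, 4)   -- 1 + 7 = 8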

Nathan Bloomfield


Re: Origins of '$'

Joachim Breitner
Hi,

On Monday, 08 Dec 2008, at 15:59 -0600, Nathan Bloomfield wrote:

> Slightly off topic, but the A^B notation for hom-sets also makes the
> natural isomorphism we call currying expressible as A^(BxC) = (A^B)^C.

So A^(B+C) = A^B × A^C ?

Oh, right, I guess that’s actually true:

uncurry either                :: (a -> c, b -> c)  -> (Either a b -> c)
(\f -> (f . Left, f . Right)) :: (Either a b -> c) -> (a -> c, b -> c)

It’s always nice to see that I haven’t learned the elementary power
calculation rules for nothing :-)

Greetings,
Joachim

PS:
For those who prefer Control.Arrow to points:
(.Left) &&& (.Right) :: (Either a b -> c) -> (a -> c, b -> c)
(found by trial and error :-))

--
Joachim "nomeata" Breitner
  mail: [hidden email] | ICQ# 74513189 | GPG-Key: 4743206C
  JID: [hidden email] | http://www.joachim-breitner.de/
  Debian Developer: [hidden email]


Re: Origins of '$'

Dan Piponi
2008/12/8 Joachim Breitner <[hidden email]>:
> So A^(B+C) = A^B × A^C ?

That's part of the basis for Hinze's paper on memoization:
http://www.informatik.uni-bonn.de/~ralf/publications/WGP00b.ps.gz

> It's always nice to see that I haven't learned the elementary power
> calculation rules for nothing :-)

More generally, all of Tarski's "high school algebra" axioms carry
over to types. You can see the axioms here:
http://math.bu.edu/people/kayeats/papers/saga_paper4.ps That proves
type theory is child's play :-)
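
For instance, the distributive law a*(b+c) = a*b + a*c reads as a type
isomorphism with explicit witnesses (a sketch; that the two functions
are mutually inverse is an easy check by cases):

    distrib :: (a, Either b c) -> Either (a, b) (a, c)
    distrib (a, Left b)  = Left (a, b)
    distrib (a, Right c) = Right (a, c)

    factor :: Either (a, b) (a, c) -> (a, Either b c)
    factor (Left (a, b))  = (a, Left b)
    factor (Right (a, c)) = (a, Right c)

    -- c^(a*b) = (c^a)^b is curry/uncurry, and c^(a+b) = c^a * c^b is the
    -- pair of functions from Joachim's message above.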
--
Dan

Re: Origins of '$'

Arnar Birgisson
On Mon, Dec 8, 2008 at 23:10, Dan Piponi <[hidden email]> wrote:
> More generally, all of Tarski's "high school algebra" axioms carry
> over to types. You can see the axioms here:
> http://math.bu.edu/people/kayeats/papers/saga_paper4.ps That proves
> type theory is child's play :-)

Ah, the power of a well chosen notation :)

As for the original question, this message:

    http://www.cse.unsw.edu.au/~dons/haskell-1990-2006/msg00675.html

suggests that the application operator was suggested by Kent Karlsson,
though I couldn't find the actual suggestion.

cheers,
Arnar

Re: Origins of '$'

Hans Aberg
In reply to this post by Joachim Breitner-2
On 8 Dec 2008, at 23:15, Joachim Breitner wrote:

> On Monday, 08 Dec 2008, at 15:59 -0600, Nathan Bloomfield wrote:
>
>> Slightly off topic, but the A^B notation for hom-sets also makes the
>> natural isomorphism we call currying expressible as A^(BxC) = (A^B)^C.
>
> So A^(B+C) = A^B × A^C ?
>
> Oh, right, I guess that’s actually true:...

I posted some of those relations for lambda-calculus two days ago
(this thread).

It is not so very off-topic, because one can reverse the process, take
those operators and some relations, and show that they contain the
lambda-calculus (or so I remember; I don't recall the details). One
problem, though, is that these operators are not so intuitive for
practical purposes.

   Hans



Re: Origins of '$'

Derek Elkins
In reply to this post by Joachim Breitner
On Mon, 2008-12-08 at 23:15 +0100, Joachim Breitner wrote:

> Hi,
>
> On Monday, 08 Dec 2008, at 15:59 -0600, Nathan Bloomfield wrote:
>
> > Slightly off topic, but the A^B notation for hom-sets also makes the
> > natural isomorphism we call currying expressible as A^(BxC) = (A^B)^C.
>
> So A^(B+C) = A^B × A^C ?
>
> Oh, right, I guess that’s actually true:
>
> uncurry either                :: (a -> c, b -> c)  -> (Either a b -> c)
> (\f -> (f . Left, f . Right)) :: (Either a b -> c) -> (a -> c, b -> c)
>
> It’s always nice to see that I haven’t learned the elementary power
> calculation rules for nothing :-)

I want to point out a quick categorical way of proving this (and almost
all the other "arithmetic" laws follow similarly.)  This is just
continuity of right adjoints.  The interesting thing is the adjunction,
one that is commonly neglected in discussions of Cartesian closed
categories.

A^- is a functor C^{op} -> C and it is adjoint to itself on the right,
i.e. (A^-)^{op} -| A^-.  As an isomorphism of hom functors that is
Hom(=,A^-) ~ Hom(-,A^=), or in Haskell notation there is an isomorphism
flip :: (a -> b -> c) -> (b -> a -> c) (it is its own inverse).  This
is induced by the swap operation on pairs, and is why discussions of
enriched categories usually talk about -symmetric- monoidal closed
categories.  Symmetric monoidal closed categories validate all the
arithmetic laws as well.

So B+C in C is BxC in the opposite category, and so A^- (a right
adjoint, hence continuous) takes that product to the product A^B x A^C.

And that's not all: every adjunction gives rise to a monad, here
A^(A^-), or in Haskell notation (b -> a) -> a, and if you work out the
details this is indeed the continuation monad.
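
Working out those details (my sketch): the unit and multiplication of
the A^(A^-) monad are the familiar continuation operations.

    -- The monad X |-> A^(A^X); in Haskell, a continuation type with
    -- answer type a (Control.Monad.Cont wraps this in a newtype).
    type K a x = (x -> a) -> a

    etaK :: x -> K a x            -- the unit: flipped application again
    etaK x = \k -> k x

    muK :: K a (K a x) -> K a x   -- the multiplication (join)
    muK mm = \k -> mm (\m -> m k)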


Re: Origins of '$'

Richard A. O'Keefe

On 15 Dec 2008, at 12:52 pm, Derek Elkins wrote:
> I want to point out a quick categorical way of proving this (and almost
> all the other "arithmetic" laws follow similarly.)  This is just
> continuity of right adjoints.  The interesting thing is the adjunction,
> one that is commonly neglected in discussions of Cartesian closed
> categories.

December buds swell.
Categories unlimit
Haskell I once knew.
