Are bottoms ever natural?

Are bottoms ever natural?

(IIIT) Siddharth Bhat
I've been thinking about the issue of purity and speculation lately, and from what little I have read, it looks like the presence of bottom hiding inside a lazy value is a huge issue. 

How "natural" is it for bottoms to exist? If one were to change Haskell and declare that any haskell value can be speculated upon, what ramifications does this have?

Is it totally broken? Is it "correct" but makes programming unpleasant? What's the catch? 

Thanks,
Siddharth 
--
Sending this from my phone, please excuse any typos!

Re: Are bottoms ever natural?

Brandon Allbery
Define "natural".

You might want to look into the concept of Turing completeness. One could define a subset of Haskell in which bottoms cannot occur... but it turns out there are a lot of useful things you can't do in such a language. (In strict languages, these often are expressed as infinite loops of one kind or another. Note also that any dependency on external input is an infinite loop from the perspective of the language, since it can only be broken by the external entity providing the input.)
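A minimal sketch of the kind of bottom the question is about (the names here are made up for illustration): under lazy evaluation the error below is never forced and the program prints 1, but an implementation that speculated on both components of the pair would hit the bottom.

    -- A bottom hiding inside a lazy value.
    pair :: (Int, Int)
    pair = (1, error "boom")   -- the second component is bottom

    main :: IO ()
    main = print (fst pair)    -- prints 1; the bottom is never demanded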

--
brandon s allbery kf8nh                               sine nomine associates
[hidden email]                                  [hidden email]
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net

Re: Are bottoms ever natural?

(IIIT) Siddharth Bhat

Is that really true, though?
Usually when you have an infinite loop, you have progress of some sort. Infinite loops with no side effects can be removed from the program according to the C standard, for example. So, in general, we should allow programmers to express termination / progress, right? At that point, no computation ever "bottoms out"?

Shouldn't a hypothetical purely functional programming language control this better (by, e.g., forcing totality)? It seems like we lose much of the benefit of purity by muddying the waters with divergence.

From an optimising-compiler perspective, Haskell occupies a weird lose-lose space: you lose out on traditional compiler techniques that work on strict code, but you also don't get the awesome things you could do with "pure" computation, because bottom lurks everywhere.

What neat optimisations can be done on Haskell that can't be done in a traditional imperative language? I genuinely want to know.
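For concreteness, here is one classic example of the kind of transformation purity is usually credited with enabling, sketched as a GHC rewrite rule in the spirit of (though not necessarily identical to) the fusion rules shipped with base; the surrounding definitions are illustrative only:

    {-# RULES
      "map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
      #-}

    -- Purity makes the rewrite meaning-preserving: neither traversal can
    -- perform effects, so two passes collapse into one. (This particular
    -- equation also remains valid in the presence of bottoms.)
    doublePlusOne :: [Int] -> [Int]
    doublePlusOne = map (+ 1) . map (* 2)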

What are your thoughts on this?

Cheers
Siddharth


--
Sending this from my phone, please excuse any typos!

Re: Are bottoms ever natural?

(IIIT) Siddharth Bhat

Also, I'm sorry if the tone of the email is hostile, I don't mean it that way :) I just want to start a discussion about compiler and language design around lazy languages that permit bottom as an inhabitant.


--
Sending this from my phone, please excuse any typos!

Re: Are bottoms ever natural?

Brandon Allbery
In reply to this post by (IIIT) Siddharth Bhat
I'm tempted to refer you to the "BlooP and FlooP and GlooP" chapter in _Gödel, Escher, Bach: an Eternal Golden Braid_.

How exactly do you prove to the compiler that you have made sufficient progress? It's not enough to assume that doing any particular operation constitutes progress. And even in cases where you can provide such a proof, it usually requires dependent types to express because "progress" depends on some value. Consider that any given action may appear to have no side effects even in C unless the entire program is considered as a single unit. And also consider that Haskell expressions usually do not have "side effects" in this sense at all, unless you are in IO... and now you have to formalize what an IO "side effect" is in order to prove that you have actually accomplished something.
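A hedged illustration of the "proving progress" problem, in plain Haskell (so nothing here is actually machine-checked): structural recursion peels one constructor per call and is the kind of thing a totality checker accepts mechanically, whereas value-dependent recursion needs a termination argument that may not exist at all.

    -- Structural recursion: progress is syntactic, one constructor per call.
    len :: [a] -> Int
    len []       = 0
    len (_ : xs) = 1 + len xs

    -- Value-dependent recursion: "progress" depends on arithmetic facts.
    -- Whether this terminates for every positive input is the Collatz
    -- conjecture, so no compiler can be expected to verify it.
    collatzSteps :: Integer -> Integer
    collatzSteps 1 = 0
    collatzSteps n
      | even n    = 1 + collatzSteps (n `div` 2)
      | otherwise = 1 + collatzSteps (3 * n + 1)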

--
brandon s allbery kf8nh                               sine nomine associates
[hidden email]                                  [hidden email]
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net

Re: Are bottoms ever natural?

Theodore Lief Gannon
Extremely relevant blog post (it even references BlooP/FlooP) about how distinguishing data from codata addresses this issue:
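A rough Haskell-flavoured sketch of the data/codata distinction (Haskell itself does not separate the two; the types below are purely illustrative): recursion that consumes finite data terminates, while corecursion that produces codata is merely "productive", each step yielding another constructor, so infinite structures stop being hidden bottoms and become legitimate values.

    -- Codata-style streams: always another element, never "done".
    data Stream a = Cons a (Stream a)

    ones :: Stream Int
    ones = Cons 1 ones            -- productive: guarded by a constructor

    -- Data-style recursion on the Int argument keeps consumption finite.
    takeS :: Int -> Stream a -> [a]
    takeS n _ | n <= 0  = []
    takeS n (Cons x xs) = x : takeS (n - 1) xs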

Re: Are bottoms ever natural?

Alexey Muranov
In reply to this post by (IIIT) Siddharth Bhat
Siddharth,

how would you deal with functions that terminate for some
arguments/inputs but do not terminate for others?

Alexey.


Re: Are bottoms ever natural?

Baa
Pure functions can return "undefined" for some argument values. Such a
function is partially defined: its domain has "gaps". In mathematics this
can be encoded so as to avoid "undefined" (bottom), for example

  x = {-100..100, 105, 107..200}

That is impossible in Haskell, but IMHO it is possible in F*, thanks to
dependent types and refinement types ;)

IMHO this is the reason to have bottom: the possibility of returning
"undefined" with a simple, correct type in the signature (a hidden bottom).
If you have a way to encode argument domains, there is no need for bottom
as a way for pure functions to "signal" gaps at run time :) That is one of
the reasons for bottom to exist, as far as I understand. Maybe there are
other reasons :)
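
Even without F*-style refinement types, the gap can be made visible in a plain Haskell signature instead of hiding behind a bottom. A small sketch (the function names are made up for illustration):

    -- The gap at zero is part of the type, not a hidden bottom.
    safeDiv :: Integer -> Integer -> Maybe Integer
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    -- The partial version: same shape of signature, but the gap is an
    -- invisible bottom waiting to be forced.
    unsafeDiv :: Integer -> Integer -> Integer
    unsafeDiv _ 0 = error "division by zero"
    unsafeDiv x y = x `div` y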

===
Best regards, Paul


Re: Are bottoms ever natural?

jpaugh
In the strictest sense, the math will bottom in exactly the same scenarios that Haskell will, but a human is unlikely to make that mistake because the result is nonsensical. If the set x you've provided is the domain of a given function g, then it doesn't make sense to reason about g(106): since 106 isn't in the domain, g(106) cannot exist in the codomain. Yet, if you're trying to graph the function on a plane, you have to deal with it somehow, because 106 does exist on the plane.

Getting back to the original topic, it is impossible to avoid divergence in general, even in mathematics, which is in no way restrained by a compiler (although it may be limited by the operating environment ;). Languages (or maths) which eschew divergence are not very powerful.
Consider that arithmetic diverges, because division is partial. But we humans often learn to deal with bottoms implicitly, by eliminating them from consideration in the first place. That might be analogous to refactoring the program to remove all offending cases and recompiling.


--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Re: Are bottoms ever natural?

Siddharth Bhat

So, I have two opinions to share on this:
Regarding the plane example, it is wrong to attempt to graph it on a plane, precisely because the domain is not the plane. 

I am confused by your assertion that it is impossible to avoid divergence in mathematics: we do not define division as a *partial* function. Rather we define it as a *total* function on a *restricted domain*.

So, what one would need is the ability to create these restricted domains, which certain languages have as far as I am aware.

At that point, now that we track our domains and codomains correctly, all should work out?
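
As a small sketch, "total function on a restricted domain" is expressible even in today's Haskell (the names below are made up for illustration): restrict the domain with a type whose only means of construction checks the invariant, and division becomes total on that type.

    -- A restricted domain: integers known to be non-zero.
    newtype NonZero = NonZero Integer

    nonZero :: Integer -> Maybe NonZero   -- the only way in checks the invariant
    nonZero 0 = Nothing
    nonZero n = Just (NonZero n)

    divide :: Integer -> NonZero -> Integer   -- total on its restricted domain
    divide x (NonZero y) = x `div` y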

I would still like an answer about the interesting transformations that are enabled by purity and laziness and are not encumbered by the presence of bottoms.

Cheers,
Siddharth


--
Sending this from my phone, please excuse any typos!

Re: Are bottoms ever natural?

Brandon Allbery
Programs interface with the real world if they are to be useful, and the real world contains bottoms (including literal ones, ultimately: black holes). Mathematics is our cognitive tool, not the real world, and can reject bottoms at least up to a point.

--
brandon s allbery kf8nh                               sine nomine associates
[hidden email]                                  [hidden email]
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net

Re: Are bottoms ever natural?

Alexey Muranov
In reply to this post by (IIIT) Siddharth Bhat
Siddharth,

are you aware that the question of whether a given element belongs to the
domain of a given computable function is algorithmically undecidable?
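
A hedged Haskell illustration of why that matters for the "just track the domain" plan (the name is made up): the domain of the function below, i.e. the set of predicates on which it halts, is exactly the set of predicates that have a witness, and membership in that set is undecidable in general, so no annotation language can draw the boundary precisely.

    -- Halts exactly when some natural number satisfies p; its "domain"
    -- is therefore undecidable in general.
    firstWitness :: (Integer -> Bool) -> Integer
    firstWitness p = head (filter p [0 ..])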

Alexey.

Re: Are bottoms ever natural?

Evan Laforge
In reply to this post by Siddharth Bhat
Dependently typed languages usually have a totality checker and a notion of codata. Idris seems most likely to have exploited that for optimizations, but I haven't seen anything about interesting transformations. It's strict, of course, but presumably if you're total you could play with that.


Re: Are bottoms ever natural?

Baa
In reply to this post by (IIIT) Siddharth Bhat
Also I want to point out the interesting and powerful language Shen:

     http://shenlanguage.org/learn-shen/index.html

which supports lazy evaluation, dependent types and many other things:
you can construct your own rules at the type-system level. I'm not deeply
familiar with Shen, but I suppose it is possible to program without
"bottom" in Shen. It supports Haskell too. By default, partial functions
are "tracked" and can be traced, but with dependent types this can be
avoided, as I understand it...

So, such languages exist ;)

===
Best regards, Paul


Re: Are bottoms ever natural?

Joachim Durchholz
In reply to this post by Siddharth Bhat
On 19.12.2017 at 17:51, Siddharth Bhat wrote:
> So, I have two opinions to share on this:
> Regarding the plane example, it is wrong to attempt to graph it on a
> plane, precisely because the domain is not the plane.

People graph partial functions all the time, e.g. the cotangent:
https://upload.wikimedia.org/wikipedia/commons/b/bf/Cotangent.svg
They simply do not graph the places where the function is undefined.

> I am confused by your assertion that it is impossible to avoid
> divergence in mathematics: we do not define division as a *partial*
> function. Rather we define it as a *total* function on a *restricted
> domain*.

Which amounts to the same thing in practice.
You still need to check that your denominator is non-zero in a division.
Whether you do that by a type check or a runtime check amounts to
roughly the same amount of work for the programmer.

> So, what one would need is the ability to create these restricted
> domains, which certain languages have as far as I am aware.

Yes, but type systems are usually clunkier than assertions.
You can take assertions apart if they are connected by AND, OR, etc.
Type systems usually cannot do that, and those that can are no longer
decidable, which is a no-go for most language designers.

> At that point, now that we track our domains and codomains correctly,
> all should work out?

Not much better than what we have right now, I fear.

> I would still like an answer to interesting transformations that are
> enabled by having purity and laziness and are not encumbered by the
> presence of bottoms.
If you have laziness, you do not know whether a computation will terminate
before you try it, and no amount of type checking will change that, unless
your type language is powerful enough to express the difference between a
terminating and a nonterminating computation. In that case it is the
compiler, rather than the runtime, that needs to deal with potentially-bottom
values, which is a slightly different point on the trade-off spectrum, but
still just a trade-off.

Regards,
Jo