

On 23 Dec 2011, at 16:16, MigMit wrote:
>
> On 23 Dec 2011, at 02:11, Conor McBride wrote:
>
>>> So... you are developing a programming language with all
>>> calculations being automatically lifted to a monad? What if we
>>> want to do calculations with monadic values themselves, like, for
>>> example, store a few monadic calculations in a list (without
>>> joining all their effects as the sequence function does)?
>>
>> The plan is to make a clearer distinction between "being" and
>> "doing" by
>> splitting types clearly into an effect part and a value part, in a
>> sort
>> of a Levy-style call-by-push-value way. The notation
>>
>> [<list of effects>]<value type>
>>
>> is a computation type whose inhabitants might *do* some of the
>> effects in
>> order to produce a value which *is* of the given value type.
>
> Oh, so it's not an arbitrary monad, but a single one. That's a bit
> disappointing.
The list of effects is arbitrary, and localizable, by means of
defining handlers.
So it's not a single monad.
It's probably still disappointing.
Cheers
Conor
_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe
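Conor's "arbitrary, localizable list of effects plus handlers" story can be loosely imitated in today's Haskell with a free monad whose effects are discharged by handlers. A minimal sketch, assuming invented names (`Free`, `Choose`, `handleAll`, `coins`); this is only an illustration, not the actual design being discussed:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A free monad over an effect signature f: computations that may
-- *do* operations of f in order to produce a value which *is* an a.
data Free f a = Pure a | Op (f (Free f a)) deriving Functor

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g <*> x = fmap g x
  Op m   <*> x = Op (fmap (<*> x) m)

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Op m   >>= k = Op (fmap (>>= k) m)

-- One effect signature: binary nondeterministic choice.
data Choose k = Choose k k deriving Functor

choose :: Free Choose Bool
choose = Op (Choose (Pure True) (Pure False))

-- A handler localizes the effect by interpreting it; here, into a list.
handleAll :: Free Choose a -> [a]
handleAll (Pure a)          = [a]
handleAll (Op (Choose l r)) = handleAll l ++ handleAll r

-- Ordinary monadic notation, but the effect is discharged by a handler,
-- not fixed once and for all by a single ambient monad.
coins :: Free Choose (Bool, Bool)
coins = do x <- choose; y <- choose; return (x, y)
```

Different handlers for the same computation give different monadic readings, which is the sense in which the effect list is not "a single monad".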


I'd like to make special syntax for folds, so that the fold is built into the type definition. Maybe it can be some special braces, or just fold(..). That way we can use the same construct in place of foldr, maybe, either and so on, and don't have to define
them by hand.
Inside special fold-braces one can write functions (separated with | as inside a type declaration) that have appropriate types. The constructors define the order of the functions inside the fold-braces.
A handful of examples:

data List a = Nil | Cons a (List a)

length :: List a -> Int
length = fold( 0 | const (+1) )

(++) :: List a -> List a -> List a
a ++ b = fold( b | Cons ) a

head :: List a -> a
head = fold( undefined | const )

data Maybe a = Nothing | Just a

fromJust :: Maybe a -> a
fromJust = fold( undefined | id )

data Nat = Zero | Succ Nat

add :: Nat -> Nat -> Nat
add a = fold( a | Succ )

mul :: Nat -> Nat -> Nat
mul a = fold( Zero | add a )
Maybe something similar for unfolds, but I have no syntax for them here.
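For comparison, each fold(..) above is an ordinary catamorphism that can already be defined by hand in today's Haskell. A sketch, using names invented here (`foldNat`, `natToInt`, `lengthL`, `appendL`) to avoid clashing with the Prelude:

```haskell
-- The explicit catamorphism for Nat: fold( z | s ) becomes foldNat z s.
data Nat = Zero | Succ Nat

foldNat :: r -> (r -> r) -> Nat -> r
foldNat z _ Zero     = z
foldNat z s (Succ n) = s (foldNat z s n)

-- add a = fold( a | Succ ), mul a = fold( Zero | add a )
add :: Nat -> Nat -> Nat
add a = foldNat a Succ

mul :: Nat -> Nat -> Nat
mul a = foldNat Zero (add a)

natToInt :: Nat -> Int
natToInt = foldNat 0 (+1)

-- The list cases via the Prelude's foldr:
-- length = fold( 0 | const (+1) )
lengthL :: [a] -> Int
lengthL = foldr (const (+1)) 0

-- a ++ b = fold( b | Cons ) a
appendL :: [a] -> [a] -> [a]
appendL a b = foldr (:) b a
```

The proposed syntax would merely generate these folds from the data declaration instead of requiring the manual definitions.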

I'd like to invent some type system that allows me to say that (.), (>>>) and (>=>) are the same thing, just as id and pure are.
In place of the Monad class I'd like to have a special case of the Category class. We can express return and (>>=) with id and (.) at Monad's typing:
return = id
ma >>= mf = (mf . const ma) ()
where id and (.) are
class Kleisli m where
  id :: a -> m a
  (.) :: (b -> m c) -> (a -> m b) -> (a -> m c)
I'd like to parametrise it over m so Kleisli
can become a special case of Category.
And we ?can? define ($) in terms of id, (.), const and (),
($) :: Category cat => cat a b -> ?dom cat a? -> ?cod cat b?
f $ a = (f . const a) ()
so (=<<) becomes just ($)
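Putting the pieces above together, here is a minimal runnable sketch of the Kleisli class in today's Haskell. The names `ret`, `bind` and `($$)` are invented here to avoid Prelude clashes, and the Maybe instance is just one illustrative choice:

```haskell
import Prelude hiding (id, (.))

-- The class from the post, with the monad m made explicit.
class Kleisli m where
  id  :: a -> m a
  (.) :: (b -> m c) -> (a -> m b) -> (a -> m c)

instance Kleisli Maybe where
  id = Just
  (g . f) a = maybe Nothing g (f a)

-- return and (>>=) recovered exactly as in the post:
ret :: Kleisli m => a -> m a
ret = id

bind :: Kleisli m => m a -> (a -> m b) -> m b
bind ma mf = (mf . const ma) ()

-- ($) defined via id, (.), const and (), so (=<<) is just application:
($$) :: Kleisli m => (a -> m b) -> m a -> m b
f $$ ma = (f . const ma) ()
```

The trick in `bind` is that `const ma :: () -> m a` is a Kleisli arrow out of (), so ordinary Kleisli composition followed by application to () recovers monadic bind.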
Anton


On 2011-12-23 13:46, Conor McBride wrote:
>
>>> The plan is to make a clearer distinction between "being" and "doing" by
>>> splitting types clearly into an effect part and a value part, in a sort
>>> of a Levy-style call-by-push-value way. The notation
>>>
>>> [<list of effects>]<value type>
>>>
>>> is a computation type whose inhabitants might *do* some of the
>>> effects in
>>> order to produce a value which *is* of the given value type.
>
> The list of effects is arbitrary, and localizable, by means of defining
> handlers.
> So it's not a single monad.
>
> It's probably still disappointing.
On the contrary!
> Haskell doesn't draw a clear line in types between the effect part
> and the value part, or support easy fluidity of shifting roles
> between the two. Rather we have two modes: (1) the
> implicit partiality mode, where the value part is the whole of
> the type and the notation is applicative;
> (2) the explicit side-effect mode, where the type is an
> effect operator applied to the value type and the notation
> is imperative.
I was drawn to call-by-push-value a few years ago while attempting to
create a language which would support both call-by-value and
call-by-name. I haven't had the skill to express what I have felt to be
the shortcoming of Haskell, but I believe you've put your finger right
on it.
> it's an attempt to rerationalise techniques that emerged
> from Haskell programming.
Exactly.
Haskell has grown a wealth of features/libraries/techniques for
combining monads, yet the fundamental monad, evaluation, has a separate
place in the language.
 Scott Turner


On Wed, Dec 21, 2011 at 8:39 PM, MigMit <[hidden email]> wrote:
On 22 Dec 2011, at 06:25, Alexander Solla wrote:
> Denotational semantics is unrealistic.
And so are imaginary numbers. But they are damn useful for electrical circuit calculations, so who cares?
Not a fair comparison. Quaternions are not particularly useful for electrical circuits, because it is unrealistic to apply a four-dimensional construct to two-dimensional phase spaces. In the same way, denotational semantics adds features which do not apply to a theory of finite computation.
> The /defining/ feature of a bottom is that it doesn't have an interpretation.
What do you mean by "interpretation"?
You know, the basic notion of a function which maps syntax to concrete values.
> They should all be treated alike, and be treated differently from every other Haskell value.
But they ARE very similar to other values. They can be members of otherwise meaningful structures, and you can do calculations with these structures. "fst (1, __)" is a good and meaningful calculation.
Mere syntax. What is fst doing? It computes "forall (x,y) -> x". Using the language of model theory, we can say that Haskell computes an interpretation "forall (x,y) = x". It is __ that lacks an interpretation, not fst. We can see that by trying
"fst (__, 1)", which reduces to __ by the proof rule for fst. __ lacks an interpretation, so the runtime either loops forever or throws an error. (Notice that error throwing is only done with compiler magic: if you define your own undefined = undefined, using it will loop. GHC is specially trained to look for Prelude.undefined to throw the "undefined" error.)
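The contrast between the two positions of the pair can be checked directly in GHC; the small module below is my own illustration of the point, not part of the original message:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- fst never forces the second component, so a bottom there is harmless:
harmless :: Int
harmless = fst (1, undefined)

main :: IO ()
main = do
  print harmless  -- the bottom is never touched
  -- A bottom in the projected position propagates, per the proof rule for fst:
  r <- try (evaluate (fst (undefined, 1 :: Int))) :: IO (Either SomeException Int)
  putStrLn (case r of
    Left _  -> "fst (undefined, 1) is bottom"
    Right _ -> "unexpected")
```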
> Every other Haskell value /does/ have an interpretation.
So, (__) is bad, but (1, __) is good?
I did not introduce "good" and "bad" into this discussion. I have merely said (in more words) that I want my hypothetical perfect language to prefer OPERATIONAL (model) SEMANTICS for a typed PARACONSISTENT LOGIC over the DENOTATIONAL SEMANTICS which the official documentation sometimes dips into.
Using examples from denotational semantics is not going to change my mind about the applicability of one or the other. It is clear that denotational semantics is a Platonic model of constructive computation.
You know, my scientific advisor used to say "math is about calling different things by the same name; philosophy is about calling the same thing by different names". It seems to me that philosophy is something you're doing now, whereas programming is all about math.
Then you are mistaken. I am talking about choosing the appropriate mathematical model of computation to accurately, clearly, and simply describe the language's semantics.


On Sat, Dec 24, 2011 at 8:20 PM, Alexander Solla <[hidden email]> wrote:
On Wed, Dec 21, 2011 at 8:39 PM, MigMit <[hidden email]> wrote:
On 22 Dec 2011, at 06:25, Alexander Solla wrote:
> Denotational semantics is unrealistic.
And so are imaginary numbers. But they are damn useful for electrical circuit calculations, so who cares?
Not a fair comparison. Quaternions are not particularly useful for electrical circuits, because it is unrealistic to apply a four-dimensional construct to two-dimensional phase spaces. In the same way, denotational semantics adds features which do not apply to a theory of finite computation.
> The /defining/ feature of a bottom is that it doesn't have an interpretation.
What do you mean by "interpretation"?
You know, the basic notion of a function which maps syntax to concrete values.
> They should all be treated alike, and be treated differently from every other Haskell value.
But they ARE very similar to other values. They can be members of otherwise meaningful structures, and you can do calculations with these structures. "fst (1, __)" is a good and meaningful calculation.
Mere syntax. What is fst doing? It computes "forall (x,y) -> x". Using the language of model theory, we can say that Haskell computes an interpretation "forall (x,y) = x". It is __ that lacks an interpretation, not fst. We can see that by trying
"fst (__, 1)", which reduces to __ by the proof rule for fst. __ lacks an interpretation, so the runtime either loops forever or throws an error. (Notice that error throwing is only done with compiler magic: if you define your own undefined = undefined, using it will loop. GHC is specially trained to look for Prelude.undefined to throw the "undefined" error.)
> Every other Haskell value /does/ have an interpretation.
So, (__) is bad, but (1, __) is good?
I did not introduce "good" and "bad" into this discussion. I have merely said (in more words) that I want my hypothetical perfect language to prefer OPERATIONAL (model) SEMANTICS for a typed PARACONSISTENT LOGIC over the DENOTATIONAL SEMANTICS which the official documentation sometimes dips into.
Using examples from denotational semantics is not going to change my mind about the applicability of one or the other. It is clear that denotational semantics is a Platonic model of constructive computation.
You make it sound as though platonism is somehow bad. Before we get into that, let's look at a capsule of our history: Kronecker and Cantor disagree on whether sets exist.
K: God created the natural numbers; all the rest is the work of man.
C: All manner of infinite sets exist (presumably in the mind of God?)
A generation later and continuing, Hilbert and Brouwer disagree on what constitutes a proof.
A little later Goedel sets out to demolish Hilbert's formalist agenda. The (interpretations of) Hilbert's philosophical position are particularly curious: for Brouwer, Hilbert is wrong because he is a platonist, believing in completed infinities.
For Goedel, Hilbert is wrong because he is not a platonist: he is looking for mechanizable proofs. Talk of two stools!!
Turing tries to demolish Goedel. His real intention is AI, not the Turing machine. He does not succeed: simple questions turn out to be undecidable/non-computable. However, a 'side-effect' (ahem) of his attempts via the TM is ... the computer.
That Goedel was a (mostly closet) platonist: http://guidetoreality.blogspot.com/2006/12/gdelsplatonism.html
Am I arguing that platonism is good? Dunno... I am only pointing out that the tension between platonism and its opponents (formalist, empiricist, constructivist etc) is evidently a strong driver for progress
BTW a more detailed look at the historical arguments would show that the disagreements were more violent than anything on this list :) Hopefully on this list we will be cordial and wish each other a merry Christmas!


Sent from iPad
In the same way, denotational semantics adds features which do not apply to a theory of finite computation.
And why exactly should we limit ourselves to some theory you happen to like?
> The /defining/ feature of a bottom is that it doesn't have an interpretation.
What do you mean by "interpretation"?
You know, the basic notion of a function which maps syntax to concrete values.
But (__) IS a concrete value.
But they ARE very similar to other values. They can be members of otherwise meaningful structures, and you can do calculations with these structures. "fst (1, __)" is a good and meaningful calculation.
Mere syntax.
So what?
> Every other Haskell value /does/ have an interpretation.
So, (__) is bad, but (1, __) is good?
I did not introduce "good" and "bad" into this discussion. I have merely said (in more words) that I want my hypothetical perfect language to prefer OPERATIONAL (model) SEMANTICS for a typed PARACONSISTENT LOGIC over the DENOTATIONAL SEMANTICS which the official documentation sometimes dips into.
Well, that's a different story. But it seems to me that the term "Haskell-like" won't apply to that kind of language. Also, it seems to me (though I don't have any kind of proof) that denotational semantics is much simpler.
It is clear that denotational semantics is a Platonic model of constructive computation.
Could you please stop offending abstract notions?
Then you are mistaken. I am talking about choosing the appropriate mathematical model of computation to accurately, clearly, and simply describe the language's semantics.
Well, domain theory does exactly that for Haskell.


On 26 Dec 2011, at 16:11, AUGER Cédric wrote:
>> There is
>> http://www.stixfonts.org/
>> For typesetting with Xe[La]TeX or Lua[La]TeX, use XITS (in the
>> TeXLive package).
>>
>>> (And then we'll have to deal with folks trying to use the letter,
>>> because everyone knows the Roman alphabet is the only one that
>>> matters and of *course* Greek letters are symbol characters....
>>> Pfeh.)
>>
>> This is the big problem right now: how to enter these symbols
>> efficiently.
>
> Under Xorg, "XCompose" might be your friend! I have a whole bunch of
> them for Coq programming.
>
> Having something like:
>
> 8<
> # ~/.XCompose contents:
>
> # the Compose file of the xorg distribution to have a lot
> # of useful symbols such as "☭" ^^
> include "/usr/local/lib/X11/locale/en_US.UTF-8/Compose"
> # the Compose file that could be included in Haskell distributions
> include "/usr/local/share/haskell/Compose"
> # other personal notations
> >8
> # /usr/local/share/haskell/Compose contents:
>
> # maybe to be downloaded via a Hackage cabal package?
> # of course "<Multi_key> <H>" should be replaced by a
> # user specified combo
> <Multi_key> <H> <colon> <colon> : "∷"
> <Multi_key> <H> <greater> <greater> <greater> : "⋙"
> <Multi_key> <H> <less> <less> <less> : "⋘"
> <Multi_key> <H> <less> <minus> <less> : "↢" U2919
> <Multi_key> <H> <greater> <minus> <greater> : "↣" U291A
> <Multi_key> <H> <minus> <less> <less> : U291B
> <Multi_key> <H> <greater> <greater> <minus> : U291C
> <Multi_key> <H> <a> <l> <l> : "∀"
> <Multi_key> <H> <e> <x> : "∃"
> <Multi_key> <H> <a> <n> <d> : "∧"
> <Multi_key> <H> <o> <r> : "∨"
> <Multi_key> <H> <a> <b> <s> <t> : "λ"
> <Multi_key> <H> <c> <o> <m> <p> : "∘"
> <Multi_key> <H> <minus> <greater> : "→"
> 8<
>
> But if you are under Windows, or Mac OS, I cannot tell (as well as I
> cannot tell if you are under a POSIX system not running xorg, such as
> the tty1..ttyn consoles)
On OS X one can make one's own key maps, like with the program at the link below, but it is very time-consuming.
Hans
http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=ukelele


On 26 Dec 2011, at 19:29, AUGER Cédric wrote:
> Le Mon, 26 Dec 2011 18:20:55 +0100,
> Hans Aberg < [hidden email]> a écrit :
>
>> On 26 Dec 2011, at 16:11, AUGER Cédric wrote:
>>
>>> Under Xorg, "XCompose" might be your friend! I have a whole bunch of
>>> them for Coq programming.
>>>
>>> Having something like:
>>>
>>> 8<
>>> # ~/.XCompose contents:
>>>
>>> # the Compose file of the xorg distribution to have a lot
>>> # of useful symbols such as "☭" ^^
>>> include "/usr/local/lib/X11/locale/en_US.UTF-8/Compose"
>>> # the Compose file that could be included in Haskell distributions
>>> include "/usr/local/share/haskell/Compose"
>>> # other personal notations
>>> >8
>>> # /usr/local/share/haskell/Compose contents:
>>>
>>> # maybe to be downloaded via a Hackage cabal package?
>>> # of course "<Multi_key> <H>" should be replaced by a
>>> # user specified combo
>>> <Multi_key> <H> <colon> <colon> : "∷"
>>> <Multi_key> <H> <greater> <greater> <greater> : "⋙"
>>> <Multi_key> <H> <less> <less> <less> : "⋘"
>>> <Multi_key> <H> <less> <minus> <less> : "↢" U2919
>>> <Multi_key> <H> <greater> <minus> <greater> : "↣" U291A
>>> <Multi_key> <H> <minus> <less> <less> : U291B
>>> <Multi_key> <H> <greater> <greater> <minus> : U291C
>>> <Multi_key> <H> <a> <l> <l> : "∀"
>>> <Multi_key> <H> <e> <x> : "∃"
>>> <Multi_key> <H> <a> <n> <d> : "∧"
>>> <Multi_key> <H> <o> <r> : "∨"
>>> <Multi_key> <H> <a> <b> <s> <t> : "λ"
>>> <Multi_key> <H> <c> <o> <m> <p> : "∘"
>>> <Multi_key> <H> <minus> <greater> : "→"
>>> 8<
>>>
>>> But if you are under Windows, or Mac OS, I cannot tell (as well as I
>>> cannot tell if you are under a POSIX system not running xorg, such
>>> as the tty1..ttyn consoles)
>>
>> On OS X one can make one's own key maps, like with the program at the
>> link below, but it is very time-consuming.
>> http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=ukelele
>>
> I have heard of ukelele (but I didn't remember the name as I am not
> a Mac OS user); I have heard it was a rather dirty solution, but that
> should work.
I think so, too.
> I guess that having a Haskell program (or one in any scripting
> language, by the way) patching the xml file (and able to "unpatch" it
> if we want to uninstall), made available via Hackage, could be a
> solution (it would be time consuming for the patch writer, but then
> any Mac OS user would benefit from it).
>
> Of course it would only work for the basic Haskell notations, for
> custom ones, I guess it would be interesting to have a program which
> reads a "XCompose like" file (which is quite easy to edit) and generate
> the xml file for ukelele.
>
> Note that the rest of the Haskell cafe mailing list won't be able to
> read my response since it seems that I cannot post on the list (I
> subscribed 2 years ago and never posted; I was waiting to learn Haskell
> and then read the mailing list, but I learnt Haskell only about one
> month ago).
If one had a way to replace ASCII multicharacter symbols, that might be easier.
Hans


On Mon, Dec 26, 2011 at 12:20, Hans Aberg <[hidden email]> wrote:
On 26 Dec 2011, at 16:11, AUGER Cédric wrote: > But if you are under Windows, or Mac OS, I cannot tell (as well as I
> cannot tell if you are under a POSIX system not running xorg, such as
> the tty1..ttyn consoles)
On OS X one can make one's own key maps, like with the program at the link below, but it is very time-consuming.
System Preferences > Personal > Language & Text > Text > Use symbol and text substitution
brandon s allbery                                 [hidden email]
wandering unix systems administrator (available)  (412) 475-9364 vm/sms


On 26 Dec 2011, at 23:03, Brandon Allbery wrote:
> > But if you are under Windows, or Mac OS, I cannot tell (as well as I
> > cannot tell if you are under a POSIX system not running xorg, such as
> > the tty1..ttyn consoles)
>
> On OS X one can make one's own key maps, like with the program at the link below, but it is very time-consuming.
>
> System Preferences > Personal > Language & Text > Text > Use symbol and text substitution
Cool. I have a vague memory of seeing it, but not paying much attention to it.
One can turn it on in Xcode 4.2 via Edit > Format > Substitutions > Show Substitutions and clicking Text Replacement, or by selecting it directly in the Substitutions menu. This popup window allows one to apply it to a text selection. (And similarly in other programs, like Mail.)
For example, I set one entry so that typing x |-> a becomes x ↦ a, the TeX \mapsto, in Unicode RIGHTWARDS ARROW FROM BAR U+21A6.
It might be tedious to make a lot of entries, though, but something to start with.
Hans


Quoth Hans Aberg,
...
> For example, I set one entry so that typing x |-> a becomes x ↦ a, the
> TeX \mapsto, in Unicode RIGHTWARDS ARROW FROM BAR U+21A6.
>
> It might be tedious to make a lot of entries, though, but something to
> start with.
Something to finish me with, too. I wouldn't be able to do much
in a programming world that used glyphs like that. My vision
isn't perfect, but I think it's within a fairly normal range,
and it isn't good enough to decode a lot of hieroglyphics at
normal font size at reading speed.
The ASCII limit of 100 or so normal characters may be a barrier
to expression, but it's a boost to comprehension.
Donn


On 27 Dec 2011, at 01:02, Donn Cave wrote:
> Quoth Hans Aberg,
> ...
>> For example, I set one entry so that typing x |-> a becomes x ↦ a, the
>> TeX \mapsto, in Unicode ↦ RIGHTWARDS ARROW FROM BAR U+21A6.
>>
>> It might be tedious to make a lot of entries, though, but something to
>> start with.
>
> Something to finish me with, too. I wouldn't be able to do much
> in a programming world that used glyphs like that.
The symbol I chose is the one normally used in math. You can see it here:
https://en.wikipedia.org/wiki/Function_(mathematics)
> My vision
> isn't perfect, but I think it's within a fairly normal range,
> and it isn't good enough to decode a lot of hieroglyphics at
> normal font size at reading speed.
Higher-resolution displays might help; larger ones are expected to come next year.
> The ASCII limit of 100 or so normal characters may be a barrier
> to expression, but it's a boost to comprehension.
Common combinations tend to evolve into symbols. The ASCII combinations are just makeshift.
Hans


2011/12/24 MigMit <[hidden email]>
Sent from iPad
In the same way, denotational semantics adds features which do not apply to a theory of finite computation.
And why exactly should we limit ourselves to some theory you happen to like?
Because the question was about MY IDEAL.
I have spoken at length why my ideal is preferable to the current state of affairs. You still continue to misunderstand my point, and respond with red herrings.
> The /defining/ feature of a bottom is that it doesn't have an interpretation.
What do you mean by "interpretation"?
You know, the basic notion of a function which maps syntax to concrete values.
But (__) IS a concrete value.
Um, perhaps in denotational semantics. But even in that case, it is not a HASKELL value.
You seem to be mixing up syntax and semantics.
But they ARE very similar to other values. They can be members of otherwise meaningful structures, and you can do calculations with these structures. "fst (1, __)" is a good and meaningful calculation.
Mere syntax.
So what?
So we give meaning to syntax through our semantics. That is what this whole conversation is about. I am proposing we give Haskell bottoms semantics that bring them in line with the bottoms from various theories, including lattice theory, the theory of sets, and the theory of logic, as opposed to using denotational semantics' bottom semantics, which is unrealistic for a variety of reasons. Haskell bottoms can't be compared, due to Rice's theorem. Haskell bottoms cannot be given an interpretation as a Haskell value.
What happens to referential transparency when distinct things are all defined by the same equation?
... = let x = x in x
undefined, seq, unsafeCoerce, and many other "primitives" are defined using that equation. (See GHC.Prim.) The Haskell definition for these distinct things /does nothing/. It loops. The semantics we get for them (an error message if we use undefined, a causal side-effect if we use seq, type coercion if we use unsafeCoerce) is done /magically/ by the compiler. As far as Haskell, as a language, is concerned, all of these are bottom, and they are all /equal/, because of referential transparency/substitutionality.
Oops.
So Haskell, as a logic, is telling us that all of these "distinct" bottoms are not so distinct. And that any interpretation function providing semantics should map them all to the same value in the model.
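To see concretely that the distinguishing behaviour lives in the compiler rather than in the shared `let x = x in x` definition, here is a small illustration of my own: const never touches its bottom argument, while seq forces it.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Lazy application simply ignores an unused bottom argument:
lazyOK :: Int
lazyOK = const 1 undefined

main :: IO ()
main = do
  print lazyOK
  -- seq's magic forces the bottom: `seq undefined 1` is itself bottom,
  -- which the looping stub in GHC.Prim does not by itself explain.
  r <- try (evaluate (seq (undefined :: Int) 1)) :: IO (Either SomeException Int)
  putStrLn (case r of
    Left _  -> "seq undefined 1 is bottom"
    Right _ -> "unexpected")
```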
> Every other Haskell value /does/ have an interpretation.
So, (__) is bad, but (1, __) is good?
I did not introduce "good" and "bad" into this discussion. I have merely said (in more words) that I want my hypothetical perfect language to prefer OPERATIONAL (model) SEMANTICS for a typed PARACONSISTENT LOGIC over the DENOTATIONAL SEMANTICS which the official documentation sometimes dips into.
Well, that's a different story.
No, it's the same story that I've been telling.
But it seems to me that the term "Haskelllike" won't apply to that kind of language. Also, it seems to me (though I don't have any kind of proof) that denotational semantics is something that is much simpler.
Haskell is already a paraconsistent logic. How is giving Haskell operational and interpretive semantics not "Haskell-like"? Its denotational semantics is a platonic completion of the logical semantics.
It is clear that denotational semantics is a Platonic model of constructive computation.
Could you please stop offending abstract notions?
What? Platonic does not mean "bad". But it does mean that the theory is "too big" to be appropriate in this case.


And why exactly should we limit ourselves to some theory you happen to like?
Because the question was about MY IDEAL.
You're right. I'm confusing two different threads. My apologies.
But (__) IS a concrete value.
Um, perhaps in denotational semantics. But even in that case, it is not a HASKELL value.
Could you please clarify the meaning of "HASKELL value"? Does it mean something that can be represented in memory, fully evaluated? If so, infinite lists like [1..] aren't HASKELL values either, which is rather limiting. Or is it something else?
You seem to be mixing up syntax and semantics.
Unlikely. But, in case I'm wrong, I'd like to know what makes you think that I put any attention on syntax...
But they ARE very similar to other values. They can be members of otherwise meaningful structures, and you can do calculations with these structures. "fst (1, __)" is a good and meaningful calculation.
Mere syntax.
...except for this. See, I didn't want to argue about definitions here, so I didn't ask you why you think of that as a syntactic issue. That was a mistake. So: what is the reason for you to say that "fst (1, __)" is "mere syntax"?
I am proposing we give Haskell bottoms semantics that bring them in line with the bottoms from various theories including lattice theory, the theory of sets, the theory of logic, as opposed to using denotational semantics' bottom semantics, which is unrealistic for a variety of reasons.
Erm... but denotational semantics IS based on lattice theory (which, in turn, is based on the theory of sets). So how on Earth can they be "opposed" to each other?
Haskell bottoms can't be compared, due to Rice's theorem. Haskell bottoms cannot be given an interpretation as a Haskell value.
Well, lots of things can't be compared. Functions, for example. Do you mean that functions like "id" are not what you call a "Haskell value"?
What happens to referential transparency when distinct things are all defined by the same equation? ... = let x = x in x undefined, seq, unsafeCoerce, and many other "primitives" are defined using that equation. (See GHC.Prim) The Haskell definition for these distinct things /does nothing/. It loops. The semantics we get for them (an error message if we use undefined, a causal side-effect if we use seq, type coercion if we use unsafeCoerce) is done /magically/ by the compiler. As far as Haskell, as a language, is concerned, all of these are bottom, and they are all /equal/, because of referential transparency/substitutionality.
No. The mere fact that Prim.hs contains some stub doesn't mean anything. In fact, the Prim.hs header states: " It is not code to actually be used. Its only purpose is to be consumed by haddock."
"seq" has a very clear semantics (and yes, I do mean denotational semantics here) which has nothing to do with the Prim.hs "definition". Indeed.
Well, that's a different story.
No, it's the same story that I've been telling.
Yes, and again, I'm sorry for that.
It is clear that denotational semantics is a Platonic model of constructive computation.
Could you please stop offending abstract notions?
What? Platonic does not mean "bad".
The way I see it, philosophical notions are all offensive.


Le Mon, 26 Dec 2011 19:30:20 -0800,
Alexander Solla < [hidden email]> a écrit :
> So we give meaning to syntax through our semantics. That is what this
> whole conversation is all about. I am proposing we give Haskell
> bottoms semantics that bring it in line with the bottoms from various
> theories including lattice theory, the theory of sets, the theory of
> logic, as opposed to using denotational semantics' bottom semantic,
> which is unrealistic for a variety of reasons. Haskell bottoms can't
> be compared, due to Rice's theorem. Haskell bottoms cannot be given
> an interpretation as a Haskell value.
There is no problem with Rice's theorem: you can perfectly well "compare"
bottoms; all you cannot do is make this "compare"
function both "correct" and "complete" in Haskell.
By correct, I mean "(⊥==⊥)≠False", and by complete, I mean "(⊥==⊥)≠⊥".
But incompleteness is very common in Haskell, and correctness is the
only "real worry" when computing (comparing two non-terminating
values and expecting the comparison to terminate is a mistake on
the programmer's part).
On purely theoretical side (so not in Haskell), there is no problem
either. For instance, the halting problem can be turned into a
mathematical function φ (i.e., φ ∈ ℘(TM×{true,false}) satisfying the
criteria of a total functional relation); the only thing is that this
set is not "computable".
> as opposed to using denotational semantics' bottom semantic,
> which is unrealistic for a variety of reasons.
Indeed, it is realistic, since in real world, things are not always
expected to terminate, so we have to take this into account into the
semantics. Don't forget that semantics is something "upper" Haskell
evaluation, it is in the purely mathematical world and can rely on non
computable things as far as they are consistent with the theory.
That is you cannot define a complete and correct function in pure
Haskell which takes a String which can be interpreted as a Haskell
function, and return a String which can be interpreted as the semantics
of the Haskell function.
(Of course the following function would be correct:
semantics :: String -> String
semantics s = semantics s
and this one would be complete:
semantics :: String -> String
semantics s = []
and there are many possible functions which are complete and correct
for many inputs, and many possible functions which are correct and
terminate for many inputs.)
> Haskell bottoms cannot be given an interpretation as a Haskell
> value.
As a lot of others said, Haskell bottoms ARE values, so they have an
interpretation as a Haskell value (itself).
If a whole community doesn't agree with you, then either you are right
and know it, and should probably run away from this community or publish
a well-argued book (you won't overturn in two lines what the community
has believed for years); or it is the other case (you are right but not
sure, or simply you are wrong), and then don't try to impose your point
of view; rather, ask questions about the points you don't understand.
To me, your current behaviour can be considered trolling.
Note that I don't say the semantics you have in mind is "wrong"; I only
say that the semantics of ⊥ as described in the community's papers is
not "wrong". Two different "formal" semantics can exist which give
the same "observational" semantics.
_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe


On Mon, Dec 19, 2011 at 7:10 PM, Alexander Solla < [hidden email]> wrote:
> * Documentation that discourages thinking about bottom as a 'value'. It's
> not a value, and that is what defines it.
The fact that bottom is a value in Haskell is the fundamental thing that
differentiates Haskell from other languages and the source of its power. The
fact that f _|_ /= _|_ potentially _is_ what it means to be a lazy language.
It is what allows us to implement loops and conditionals as normal functions
rather than requiring special and limited language contexts. But more
importantly, you need to think about _|_ as a value in order to prove
the correctness of many algorithms; it is the base case of your inductive
reasoning. A Haskell programmer who cannot think of _|_ in that way will
plateau, as very useful idioms such as tying the knot, infinite data
structures, etc. are just too complicated to think about in operational
semantics when they get more interesting than an infinite list. Not treating
_|_ as a value would be a huge disservice to anyone learning the language.
Sure, it may seem a little strange coming from the imperative world to think
of it as a value, but it is far from the oddest concept in Haskell; after all,
_functions_ are values in Haskell and people seem to eventually figure that
out.
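John's point that f _|_ /= _|_, and that conditionals can be ordinary functions, can be sketched in a few lines (a minimal illustration of my own, not from the original post):

```haskell
-- `const` never inspects its second argument, so applying it to
-- bottom still yields a value: f _|_ /= _|_ for f = const 3.
three :: Int
three = const 3 undefined

-- A conditional as a plain function: laziness means only the chosen
-- branch is ever evaluated, so the other branch may safely be bottom.
if' :: Bool -> a -> a -> a
if' True  t _ = t
if' False _ e = e

chosen :: Int
chosen = if' True 42 undefined  -- the undefined branch is never forced

main :: IO ()
main = print (three, chosen)  -- prints (3,42)
```

In a strict language, both definitions would diverge the moment `undefined` was passed as an argument; this non-strictness is exactly what lets user code define its own control structures.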
John


John Meacham:
> The fact that bottom is a value in Haskell is the fundamental thing that
> differentiates Haskell from other languages and the source of its power. The
> fact that f _|_ /= _|_ potentially _is_ what it means to be a lazy language.
> Not treating
> _|_ as a value would be a huge disservice to anyone learning the language.
> Sure, it may seem a little strange coming from the imperative world to think
> of it as a value, but it is far from the oddest concept in Haskell; after all,
> _functions_ are values in Haskell and people seem to eventually figure that
> out.
Well...
Personally I hate thinking about bottom as a "value". I don't do this. I
NEVER teach that. And I am a "lazy guy"; almost all my Haskell programs
are strongly based on laziness.
I'll tell you what I teach, and you might throw some tomatoes...
"The fundamental thing that differentiates Haskell from other languages
and the source of its power" -- if I might cite you -- is that we don't see
the difference between an object and the process which creates it. (The
difference demands that we speak about call-by-need, etc...)
The bottom, like sin (2*pi) or "Text", may be seen as a process. Anyway, a
lazy list IS a process /par excellence/. The _|_ entity is a process
which refuses to give you a value (or does it in a deadly way). Your
program manipulates processes. A process in some computational context
must do something -- or not. The bottom never does anything useful.
All this is probably a question of language, of terminology, of
preconceptions (of all that which, for God knows what reasons, Americans
call "just semantics"), but I will not forget the day when I thought as
you do, and I had to explain to 2nd-year students what it means: a
value which doesn't have a value...
Thank you.
Jerzy Karczmarczuk


Jerzy Karczmarczuk < [hidden email]> writes:
> and the source of its power" -- if I might cite you -- is that we don't see
> the difference between an object and the process which creates it.
Interestingly, according to Wikipedia's article on "type system":
A type system associates a type with each computed value.
but later cites Pierce:
a tractable syntactic framework for classifying phrases according to
the kinds of values they compute
While the former might be said to avoid _|_ by defining it not to be a
"value" that is computed, the latter clearly must include it, as the
computation of a "phrase" might not terminate (as long as the
language is Turing-complete, of course).
Anyway, I think non-lazy languages also have bottom inhabiting their
types; it's just that since it leads more immediately to failure, it's
not usually taken into account.
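A two-line illustration of the contrast (my own sketch, not Ketil's): a lazy constructor tolerates a bottom component until it is actually demanded, whereas strict evaluation surfaces the bottom immediately:

```haskell
-- Lazy pairing: the bottom component is never demanded, so fst succeeds.
lazyOk :: Int
lazyOk = fst (1, undefined)

-- Forcing the component first, as a strict language implicitly would,
-- hits the bottom before fst ever runs:
--   undefined `seq` fst (1, undefined)   -- diverges / throws

main :: IO ()
main = print lazyOk  -- prints 1
```

So the "bottom in every type" is just as present in a strict language; it merely cannot hide inside data structures the way it can under lazy evaluation.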
k

If I haven't seen further, it is by standing in the footprints of giants


Jerzy Karczmarczuk wrote:
> Well...
> Personally I hate thinking about bottom as "value". I don't do this. I
> NEVER teach that. And, I am a "lazy guy", almost all my Haskell programs
> are strongly based on laziness.
>
> I'll tell you what I teach, and you might throw some tomatoes...
> "The fundamental thing that differentiates Haskell from other languages
> and the source of its power" -- if I might cite you -- is that we don't see
> the difference between an object and the process which creates it. (The
> difference demands that we speak about call-by-need, etc...)
> The bottom, like sin (2*pi) or "Text", may be seen as a process. Anyway, a
> lazy list IS a process /par excellence/. The _|_ entity is a process
> which refuses to give you a value (or does it in a deadly way). Your
> program manipulates processes. A process in some computational context
> must do something -- or not. The bottom never does anything useful.
While it's ultimately an issue of nomenclature, applying the term
"value" to _|_ is a good idea, because it allows us to answer questions
like the following:
Question: What is (the denotation of, the value of)
x = and $ take 5 $ cycle [True,False]
where cycle xs = fix (xs++)
Answer:
x = False
If you treat _|_ as a value, this answer can be obtained by
straightforward algebraic manipulation. If you treat _|_ as an
operational construct, you will have to perform graph reduction to see
the answer, but then you have to worry about the *order* in which you
perform your reduction steps.
It's not wrong to perform graph reduction, and any student should do it
at one point in their lives, but the restriction to operational
semantics would miss an important abstraction that is part of the
Haskell spirit.
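For the record, the example above can be run directly; here is a self-contained version (using Data.Function.fix, with cycle renamed to avoid clashing with the Prelude):

```haskell
import Data.Function (fix)

-- cycle defined via fix, as in the post: xs ++ xs ++ xs ++ ...
cycle' :: [a] -> [a]
cycle' xs = fix (xs ++)

x :: Bool
x = and $ take 5 $ cycle' [True, False]
-- take 5 yields [True,False,True,False,True]; `and` meets the first
-- False and returns False without ever forcing the infinite tail.

main :: IO ()
main = print x  -- prints False
```

The operational story (graph reduction terminating despite the infinite list) and the denotational one (algebraic manipulation treating _|_ as a value) agree on the answer, which is exactly the point.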
Best regards,
Heinrich Apfelmus

http://apfelmus.nfshost.com

