Re: Detecting Cycles in Datastructures


Hudak, Paul
This is a very late response to an old thread...

Tom Hawkins wrote:
 > In a pure language, is it possible to detect cycles in recursive
 > data structures?  For example, is it possible to determine that
 > "cyclic" has a loop? ...
 >
 > data Expr = Constant Int | Addition Expr Expr
 >
 > cyclic :: Expr
 > cyclic = Addition (Constant 1) cyclic
 >
 > Or phrased differently, is it possible to make "Expr" an instance of
 > "Eq" such that cyclic == cyclic is smart enough to avoid a recursive
 > descent?
 >
 > -Tom

---

Perhaps it's obvious, but one way to do this is to make the cycle
*implicit* via some kind of a fixed-point operator, which is a trick I
used recently in a DSL application.  For example, you could define:

 > data Expr = Const Int
 >           | Add Expr Expr
 >           | Loop                      -- not exported
 >      deriving (Eq, Show)

The purpose of Loop is to "mark" the point where a cycle exists.
Instead of working with values of type Expr, you work with values of
type Expr -> Expr, or Fix Expr:

 > type Fix a = a -> a

For example:

 > fe1,fe2 :: Fix Expr
 > fe1 e = Add (Const 1) (Const 1)  -- non-recursive
 > fe2 e = Add (Const 1) e          -- recursive

You can always get the value of an Expr from a Fix Expr whenever you
need it, by computing the fixed point:

 > fix f = x where x = f x

 > e1,e2 :: Expr
 > e1 = fix fe1          -- finite
 > e2 = fix fe2          -- infinite

Note that e2 is equivalent to your "cyclic".  But now we can also test
for equality:

 > instance Eq (Fix Expr) where    -- an instance on a type synonym: not
 >                                 -- Haskell 98; GHC accepts it with
 >                                 -- TypeSynonymInstances/FlexibleInstances
 >   fe1 == fe2  =  fe1 Loop == fe2 Loop

For example, fe2 == fe2 returns "True", whereas e2 == e2
(i.e. your cyclic == cyclic) diverges.
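
For concreteness, here is how that looks at a GHCi prompt (a sketch;
assuming the definitions above are loaded, with the instance on the
type synonym accepted as noted in the comment):

   *Main> fe1 == fe2     -- compares fe1 Loop with fe2 Loop
   False                 -- Add (Const 1) (Const 1) vs. Add (Const 1) Loop
   *Main> fe2 == fe2
   True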

Of course, note that, although fe1 == fe2 implies that fix fe1 == fix
fe2, the reverse is not true, since there will always be loops of
varying degrees of unwinding that are semantically equivalent, but not
"syntactically", or structurally, equivalent.  Thus this definition of
equality is only approximate, but still, it's useful.
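
To see this concretely, consider a once-unwound variant of fe2 (a
sketch; fe2' is a hypothetical name, not part of the original module):

 > fe2' :: Fix Expr
 > fe2' e = Add (Const 1) (Add (Const 1) e)   -- fe2 unwound one step

Both fix fe2 and fix fe2' denote the same infinite expression, yet
fe2 == fe2' is False, since fe2 Loop is Add (Const 1) Loop while
fe2' Loop is Add (Const 1) (Add (Const 1) Loop).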

If you want to have multiple loops, or a loop not at the top level,
then you need to add some kind of a loop constructor to Expr.  I've
sketched that idea below, and pointed out a couple of other useful
ideas, such as showing a loop, and mapping a function over a loop
without unwinding it.

I hope this helps,

   -Paul

----------------------------------------------------------------------

 > module Cyclic where

Loop now needs an ID because there may be more than one of them.

 > data Expr = Const Int
 >           | Add Expr Expr
 >           | Rec (Fix Expr)            -- implicit loop
 >           | Loop ID                   -- not exported
 >
 > type Fix a = a -> a
 > type ID    = Int

To check equality of and show Exprs, we need to supply unique IDs to
each recursive loop, which we do via a simple counter.

 > instance Eq Expr where
 >   e1 == e2  =
 >     let eq (Const x) (Const y)       n  =  x == y
 >         eq (Loop i1) (Loop i2)       n  =  i1 == i2
 >         eq (Add e1 e2) (Add e1' e2') n  =  eq e1 e1' n && eq e2 e2' n
 >         eq (Rec fe1) (Rec fe2)       n  =  eq (fe1 (Loop n))
 >                                               (fe2 (Loop n)) (n+1)
 >         eq _ _                       n  =  False
 >     in  eq e1 e2 0
 >
 > instance Show Expr where
 >   show e  =
 >     let show' (Const x)   n =  "(Const " ++ show x ++ ")"
 >         show' (Add e1 e2) n =  "(Add " ++ show' e1 n ++ " "
 >                                        ++ show' e2 n ++ ")"
 >         show' (Loop i)    n =  "(Loop " ++ show i ++ ")"
 >         show' (Rec fe)    n =  "(Rec " ++ show n ++ " "
 >                                        ++ show' (fe (Loop n)) (n+1)
 >                                        ++ ")"
 >     in  show' e 0
 >

Unwinding (i.e. computing fixpoints):

Note: unwind should never see a Loop constructor.

 > unwind :: Expr -> Expr
 > unwind (Add e1 e2) = Add (unwind e1) (unwind e2)
 > unwind (Rec fe)    = x where x = unwind (fe x)
 > unwind e           = e

The 2nd equation above is analogous to:
fix f = x where x = f x
And these two equations could also be written as:
fix f = f (fix f)
unwind (Rec fe) = unwind (fe (Rec fe))
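
For instance, unwinding e2 = Rec fe2 (defined in the examples below)
proceeds like this (a sketch, by equational reasoning with the second
formulation):

   unwind (Rec fe2)
     = unwind (fe2 (Rec fe2))
     = unwind (Add (Const 1) (Rec fe2))
     = Add (Const 1) (unwind (Rec fe2))
     = Add (Const 1) (Add (Const 1) (unwind (Rec fe2)))
     = ...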

Examples:

 > fe1,fe2 :: Fix Expr
 > fe1 e = Add (Const 1) (Const 1)  -- non-recursive
 > fe2 e = Add (Const 1) e          -- recursive
 >
 > e1,e2,e3,e4 :: Expr
 > e1 = Rec fe1                                 -- no real loop
 > e2 = Rec fe2                                 -- top-level loop
 > e3 = Add e2 (Const 0)                        -- lower-level loop
 > e4 = Rec (\e1-> Add (Const 1)
 >                     (Rec (\e2-> Add e1 e2))) -- nested loop
 >
 > b1,b2 :: Bool
 > b1 = e3==e3           -- returns True
 > b2 = e3==e2           -- returns False
 >
 > e1u,e2u,e3u,e4u :: Expr
 > e1u = unwind e1       -- finite
 > e2u = unwind e2       -- infinite
 > e3u = unwind e3       -- infinite
 > e4u = unwind e4       -- infinite

Now suppose we define a function, say mapE, that applies a function to
the leaves (in this case the Const Int's) of an Expr.  For example:

mapE succ (Add (Const 1) (Const 2)) => Add (Const 2) (Const 3)

Then if we define something like:

cyclic1 = Add (Const 1) cyclic1

and do "mapE succ cyclic", we'd like to get:

cyclic2 = Add (Const 2) cyclic2

However, most implementations will unwind the loop -- i.e. we don't
get the sharing implied by the explicit loop.  By using Rec to express
loops implicitly, we can get around this as follows:

 > mapE :: (Int->Int) -> Expr -> Expr
 > mapE f e = mapE' f e 0 where
 >   mapE' f (Const i)   n = Const (f i)
 >   mapE' f (Add e1 e2) n = Add (mapE' f e1 n) (mapE' f e2 n)
 >   mapE' f (Rec fe)    n = Rec (absLoop n (mapE' f (fe (Loop n)) (n+1)))
 >   mapE' f (Loop i)    n = Loop i

absLoop n e turns e :: Expr into a Fix Expr by "abstracting out" Loop n:

 > absLoop :: Int -> Expr -> Fix Expr
 > absLoop n e = \e' ->
 >    let abs (Loop n') | n==n' = e'
 >        abs (Add e1 e2)       = Add (abs e1) (abs e2)
 >        abs e                 = e
 >    in abs e

Examples:

 > e5 = mapE succ e2
 > e6 = Rec (\e -> Add (Const 2) e)
 > b3 = e5==e6   -- returns True!
 >
 > e7 = mapE succ e4
 > e8 = Rec (\e1-> Add (Const 2)
 >                     (Rec (\e2-> Add e1 e2)))
 > b4 = e7==e8   -- returns True!
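
As a side note, the Show instance renders the loops without unwinding
them, e.g. (a sketch, following the definition of show' above):

   show e2  ==>  "(Rec 0 (Add (Const 1) (Loop 0)))"
   show e4  ==>  "(Rec 0 (Add (Const 1) (Rec 1 (Add (Loop 0) (Loop 1)))))"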


Re: Detecting Cycles in Datastructures

Henning Thielemann

On Fri, 18 Nov 2005, Paul Hudak wrote:

> For example:
>
>> fe1,fe2 :: Fix Expr
>> fe1 e = Add (Const 1) (Const 1)  -- non-recursive
>> fe2 e = Add (Const 1) e          -- recursive

Do you mean

fe1 _ = Add (Const 1) Loop

?

Re: Detecting Cycles in Datastructures

Gregory Woodhouse
In reply to this post by Hudak, Paul


--- Paul Hudak <[hidden email]> wrote:

> This is a very late response to an old thread...
>
> Tom Hawkins wrote:
>  > In a pure language, is it possible to detect cycles in recursive
>  > data structures?  For example, is it possible to determine that
>  > "cyclic" has a loop? ...
>  >
>  > data Expr = Constant Int | Addition Expr Expr
>  >
>  > cyclic :: Expr
>  > cyclic = Addition (Constant 1) cyclic
>  >
>  > Or phrased differently, is it possible to make "Expr" an instance of
>  > "Eq" such that cyclic == cyclic is smart enough to avoid a recursive
>  > descent?
>  >

I'm a little confused. It's one thing to demonstrate that a (possibly
infinite) digraph contains a loop, but quite another to show that it
does not. Carrying around a copy of the subgraph consisting of nodes
actually visited is workable in a pure language, but there's no
guarantee of termination.

>  > -Tom
>
> ---
>
> Perhaps it's obvious, but one way to do this is to make the cycle
> *implicit* via some kind of a fixed-point operator, which is a trick
> I
> used recently in a DSL application.  


So, you imagine f to be a function that (non-deterministically) follows
an edge in a digraph. The idea being that G is a (possibly infinite)
digraph and F is a subgraph. The "function" f would
non-deterministically select a vertex of F, follow an edge in G (again
chosen non-deterministically), add the (possibly new) edge to F, and
return the resulting graph. Then, you are asking if f has a fixpoint in
this broader context?

> For example, you could define:
>
>  > data Expr = Const Int
>  >           | Add Expr Expr
>  >           | Loop                      -- not exported
>  >      deriving (Eq, Show)
>
> The purpose of Loop is to "mark" the point where a cycle exists.
> Instead of working with values of type Expr, you work with values of
> type Expr -> Expr, or Fix Expr:
>
>  > type Fix a = a -> a
>
> For example:
>
>  > fe1,fe2 :: Fix Expr
>  > fe1 e = Add (Const 1) (Const 1)  -- non-recursive
>  > fe2 e = Add (Const 1) e          -- recursive
>
> You can always get the value of an Expr from a Fix Expr whenever you
> need it, by computing the fixed point:
>
>  > fix f = x where x = f x

If it can be computed. Maybe I'm off-track here, but it seems to me
that when you introduce laziness into the equation, you lose any
guarantee of there even being a fixpoint, much less one that can be
computed.

>
>  > e1,e2 :: Expr
>  > e1 = fix fe1          -- finite
>  > e2 = fix fe2          -- infinite
>
> Note that e2 is equivalent to your "cyclic".  But now we can also
> test
> for equality:
>
>  > instance Eq (Fix Expr) where
>  >   fe1 == fe2  =  fe1 Loop == fe2 Loop
>
> For example, fe2 == fe2 returns "True", whereas e2 == e2
> (i.e. your cyclic == cyclic) diverges.
>
> Of course, note that, although fe1 == fe2 implies that fix fe1 == fix
> fe2, the reverse is not true, since there will always be loops of
> varying degrees of unwinding that are semantically equivalent, but
> not
> "syntactically", or structurally, equivalent.  Thus this definition
> of
> equality is only approximate, but still, it's useful.

I'm lost. Are you saying bottom is bottom?

>
> If you want to have multiple loops, or a loop not at the top level,
> then you need to add some kind of a loop constructor to Expr.  I've
> sketched that idea below, and pointed out a couple of other useful
> ideas, such as showing a loop, and mapping a function over a loop
> without unwinding it.
>
> I hope this helps,
>
>    -Paul
>




Re: Detecting Cycles in Datastructures

Hudak, Paul
In reply to this post by Henning Thielemann
Henning Thielemann wrote:
> On Fri, 18 Nov 2005, Paul Hudak wrote:
>> For example:
>>> fe1,fe2 :: Fix Expr
>>> fe1 e = Add (Const 1) (Const 1)  -- non-recursive
>>> fe2 e = Add (Const 1) e          -- recursive
>
> Do you mean
> fe1 _ = Add (Const 1) Loop
> ?

No, I really meant it as written.  I included this example just to point
out that an expression doesn't have to be infinite to be represented as
the fixpoint of a function.  Note that the least fixpoint of fe1 is "Add
(Const 1) (Const 1)".

Loop shouldn't ever be used by the user -- that's why I added the
comment that it was "not exported".  It's just there to "open" the
function for inspection.  In this sense, the trick is analogous to the
use of higher-order abstract syntax -- i.e. using the meta-language
(Haskell) to represent functions in the object language (expressions).
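
For readers who haven't seen higher-order abstract syntax, the idea is
to represent object-language binding by a meta-language function, just
as Rec wraps a Haskell function.  A minimal sketch, with hypothetical
names:

 > data Term = Lit Int
 >           | App Term Term
 >           | Lam (Term -> Term)   -- object-level binder as a Haskell function
 >
 > identity :: Term
 > identity = Lam (\x -> x)         -- the object-language term \x.x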

   -Paul

Re: Detecting Cycles in Datastructures

Hudak, Paul
In reply to this post by Gregory Woodhouse
Greg Woodhouse wrote:
 > --- Paul Hudak <[hidden email]> wrote:
 >>Tom Hawkins wrote:
 >> > In a pure language, is it possible to detect cycles in recursive
 >> > data structures?  For example, is it possible to determine that
 >> > "cyclic" has a loop? ...
 >> >
 >> > data Expr = Constant Int | Addition Expr Expr
 >> >
 >> > cyclic :: Expr
 >> > cyclic = Addition (Constant 1) cyclic
 >> >
 >> > Or phrased differently, is it possible to make "Expr" an instance of
 >> > "Eq" such that cyclic == cyclic is smart enough to avoid a
 >> > recursive descent?
 >
 > I'm a little confused. It's one thing to demonstrate that a (possibly
 > infinite) digraph contains a loop, but quite another to show that it
 > does not. Carrying around a copy of the subgraph consisting of nodes
 > actually visited is workable in a pure language, but there's no
 > guarantee of termination.

The question that was originally posted assumed that the graph was
represented using direct recursion in Haskell, in which case you
cannot detect a cycle, as previous replies pointed out.  Of course, if
you represented the graph in some more concrete form -- such as an
explicit list of nodes and edges -- then you could detect the cycle
using a standard graph algorithm, at the expense of losing the
elegance of the Haskell recursion.  My post was just pointing out that
there is a third possibility, one that retains most of the elegance
and abstractness of Haskell, yet still allows detecting cycles.

 >>Perhaps it's obvious, but one way to do this is to make the cycle
 >>*implicit* via some kind of a fixed-point operator, which is a trick
 >>I used recently in a DSL application.
 >
 > So, you imagine f to be a function that (non-deterministically) follows
 > an edge in a digraph. The idea being that G is a (possibly infinite)
 > digraph and F is a subgraph. The "function" f would
 > non-deterministically select a vertex of F, follow an edge in G (again
 > chosen non-deterministically), add the (possibly new) edge to F, and
 > return the resulting graph. Then, you are asking if f has a fixpoint in
 > this broader context?

I don't understand this... and therefore it's probably not what I
imagine!  I'm saying simply that a cyclic graph can be represented as
the fixed point of a function.

 >>For example, you could define:
 >>
 >> > data Expr = Const Int
 >> >           | Add Expr Expr
 >> >           | Loop                      -- not exported
 >> >      deriving (Eq, Show)
 >>
 >>The purpose of Loop is to "mark" the point where a cycle exists.
 >>Instead of working with values of type Expr, you work with values of
 >>type Expr -> Expr, or Fix Expr:
 >>
 >> > type Fix a = a -> a
 >>
 >>For example:
 >>
 >> > fe1,fe2 :: Fix Expr
 >> > fe1 e = Add (Const 1) (Const 1)  -- non-recursive
 >> > fe2 e = Add (Const 1) e          -- recursive
 >>
 >>You can always get the value of an Expr from a Fix Expr whenever you
 >>need it, by computing the fixed point:
 >>
 >> > fix f = x where x = f x
 >
 > If it can be computed. Maybe I'm off-track here, but it seems to me
 > that when you introduce laziness into the equation, you lose any
 > guarantee of there even being a fixpoint, much less one that can be
 > computed.

Actually it's the other way around -- laziness is what makes this work!
The least fixpoint of fe2 in a strict language, for example, is bottom.

 >> > e1,e2 :: Expr
 >> > e1 = fix fe1          -- finite
 >> > e2 = fix fe2          -- infinite
 >>
 >>Note that e2 is equivalent to your "cyclic".  But now we can also
 >>test for equality:
 >>
 >> > instance Eq (Fix Expr) where
 >> >   fe1 == fe2  =  fe1 Loop == fe2 Loop
 >>
 >>For example, fe2 == fe2 returns "True", whereas e2 == e2
 >>(i.e. your cyclic == cyclic) diverges.
 >>
 >>Of course, note that, although fe1 == fe2 implies that fix fe1 == fix
 >>fe2, the reverse is not true, since there will always be loops of
 >>varying degrees of unwinding that are semantically equivalent, but
 >>not "syntactically", or structurally, equivalent.  Thus this definition
 >>of equality is only approximate, but still, it's useful.
 >
 > I'm lost. Are you saying bottom is bottom?

I suspect from your other post that you haven't seen the standard
trick of encoding infinite data structures as fixpoints.  Suppose you
have a lambda calculus term for cons, as well as for the numeral 1.
Then the infinite list of ones is just:

Y (\ones. cons 1 ones)

where Y (aka the "paradoxical combinator" or "fixed point combinator")
is defined as:

\f. (\x. f (x x)) (\x. f (x x))

To see this, try unfolding the above term a few iterations using
normal-order reduction.  Note that the term has no recursion in it
whatsoever.
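
Spelling out the first couple of steps (a sketch; write F for
(\ones. cons 1 ones) and A for (\x. F (x x))):

   Y F
     = (\x. F (x x)) (\x. F (x x))    -- i.e. A A
     = F (A A)
     = cons 1 (A A)
     = cons 1 (F (A A))
     = cons 1 (cons 1 (A A))
     = ...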

Now, my earlier point above is that:

Y (\ones. cons 1 ones)

unwinds to the same term as:

Y (\ones. cons 1 (cons 1 ones))

yet the two terms (\ones. cons 1 ones) and
(\ones. cons 1 (cons 1 ones)) are not the same (in fact they're not
lambda convertible, either).

   -Paul

Re: Detecting Cycles in Datastructures

Gregory Woodhouse


--- Paul Hudak <[hidden email]> wrote:

>
> I suspect from your other post that you haven't seen the standard
> trick of encoding infinite data structures as fixpoints.  Suppose you
> have a lambda calculus term for cons, as well as for the numeral 1.
> Then the infinite list of ones is just:
>

Not in quite those terms. Intuitively, I think of the infinite data
structure here as being a kind of a "completion" of the space of
ordinary (finite) graphs to a space in which the given function actually
does have a fixed point.

> Y (\ones. cons 1 ones)
>
> where Y (aka the "paradoxical combinator" or "fixed point
> combinator")
> is defined as:
>
> \f. (\x. f (x x)) (\x. f (x x))
>

Now, this I have seen, but it frankly strikes me as a "formal
trick". I've never felt that this definition of Y gave me much insight
into the computation you describe below.

> To see this, try unfolding the above term a few iterations using
> normal-order reduction.  Note that the term has no recursion in it
> whatsoever.
>
> Now, my earlier point above is that:
>
> Y (\ones. cons 1 ones)
>
> unwinds to the same term as:
>
> Y (\ones. cons 1 (cons 1 ones))
>
> yet the two terms (\ones. cons 1 ones) and
> (\ones. cons 1 (cons 1 ones)) are not the same (in fact they're not
> lambda convertible, either).
>
>    -Paul

And this is what leaves me scratching my head. If you leave off the
"ones", then you get a sequence of finite lists [1], [1, 1], ... that
seems to approximate the infinite list, but I find it difficult to
imagine how you might pluck a formula like \i. 1 (the ith term) out of
the air.

Re: Detecting Cycles in Datastructures

Hudak, Paul
Greg Woodhouse wrote:

> --- Paul Hudak <[hidden email]> wrote:
>>Y (\ones. cons 1 ones)
>>
>>where Y (aka the "paradoxical combinator" or "fixed point
>>combinator") is defined as:
>>
>>\f. (\x. f (x x)) (\x. f (x x))
>
> Now, this I have seen, but it frankly strikes me as a "formal
> trick". I've never felt that this definition of Y gave me much insight
> into the computation you describe below.
>
>>To see this, try unfolding the above term a few iterations using
>>normal-order reduction.  Note that the term has no recursion in it
>>whatsoever.
>>
>>Now, my earlier point above is that:
>>Y (\ones. cons 1 ones)
>>unwinds to the same term as:
>>Y (\ones. cons 1 (cons 1 ones))
>>
>>yet the two terms (\ones. cons 1 ones) and
>>(\ones. cons 1 (cons 1 ones)) are not the same (in fact they're not
>>lambda convertible, either).
>>-Paul
>
> And this is what leaves me scratching my head. If you leave off the
> "ones", then you get a sequence of finite lists [1], [1, 1], ... that
> seems to approximate the infinite list, but I find it difficult to
> imagine how you might pluck a formula like \i. 1 (the ith term) out of
> the air.

The important property of Y is this:

Y f = f (Y f)

In this way you can see it as "unwinding" the function, one step at a
time.  If we define f as (\ones. cons 1 ones), then here is one step of
unwinding:

Y f ==> f (Y f) ==> cons 1 (Y f)

If you do this again you get:

cons 1 (cons 1 (Y f))

and so on.  As you can see, continuing this process yields the infinite
list of ones.

Now note that if we define g as (\ones. cons 1 (cons 1 ones)), we get:

Y g ==> g (Y g) ==> cons 1 (cons 1 (Y g))

which obviously also yields the infinite list of ones.  Yet f is not
equal to g.
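
The same point can be made directly in Haskell (a sketch, with fix
playing the role of Y):

 > fix :: (a -> a) -> a
 > fix f = f (fix f)
 >
 > f, g :: [Int] -> [Int]
 > f ones = 1 : ones         -- cons 1 ones
 > g ones = 1 : 1 : ones     -- cons 1 (cons 1 ones)
 >
 > -- take 5 (fix f) and take 5 (fix g) both yield [1,1,1,1,1],
 > -- yet f and g are plainly different functions.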

Does this help?

   -Paul


Re: Detecting Cycles in Datastructures

Gregory Woodhouse


--- Paul Hudak <[hidden email]> wrote:

>
> The important property of Y is this:
>
> Y f = f (Y f)

Right. This is just a formal statement of the property that f fixes Y
f. I'm with you so far.
>
> In this way you can see it as "unwinding" the function, one step at a
> time.  If we define f as (\ones. cons 1 ones), then here is one step
> of unwinding:
>
> Y f ==> f (Y f) ==> cons 1 (Y f)
>

I'm with you here, too, except that we are *assuming* that Y has the
stated property. If it does, it's absolutely clear that the above
should hold.

> If you do this again you get:
>
> cons 1 (cons 1 (Y f))
>
> and so on.  As you can see, continuing this process yields the
> infinite list of ones.

Right.

>
> Now note that if we define g as (\ones. cons 1 (cons 1 ones)), we
> get:
>
> Y g ==> g (Y g) ==> cons 1 (cons 1 (Y g))

Now, this is where things get a little mysterious. Where did g come
from?

I understand that the "x x" in the formal definition of Y is what makes
everything work (or, at least I think I do). When I try to think this
all through, I wind up with something like this:

First of all, I'm after a "list" (named ones) with the property that
ones = cons 1 ones. How can I express the existence of such a thing as
a fixed point? Well, if it does exist, it must be a fixed point of f =
\x. cons 1 x, because

f ones ==> (\x. cons 1 x) ones ==> cons 1 ones ==? ones

Now, if I write

Y = \y. (\x. y (x x)) (\x. y (x x))

then,

Y f ==> cons 1 ((\x. cons 1 (x x)) (\x. cons 1 (x x)))

and I'm left to ponder what that might mean. Formally, I can see that
this pulls an extra cons 1 out to the left, and so we're led to your g

>
> which obviously also yields the infinite list of ones.  Yet f is not
> equal to g.

To me, all this really seems to be saying is that for any n, there's a
lambda expression explicitly involving [1, 1, ..., 1] (n times) that is
a "solution" to the above problem, which I interpret (perhaps
incorrectly) to mean that *if* there were a "real" list that were a
solution to the above recurrence, then for any n, there would be an
initial sublist consisting of just 1's.

Now, I know that lambda expressions are not lists, though lists can be
encoded as lambda expressions. For finite lists, this all seems simple
enough, but in this case we seem to have a lambda expression that is
not a (finite) list, but that can nevertheless be "approximated" in
some sense by finite lists.

But I can't really put my finger on just what that sense might be.
>
> Does this help?

Yes, it does. Very much so.
>
>    -Paul
>

Re: Detecting Cycles in Datastructures

Lennart Augustsson
Greg Woodhouse wrote:

>
> --- Paul Hudak <[hidden email]> wrote:
>
>
>>The important property of Y is this:
>>
>>Y f = f (Y f)
>
>
> Right. This is just a formal statement of the property that f fixes Y
> f. I'm with you so far.
>
>>In this way you can see it as "unwinding" the function, one step at a
>>time.  If we define f as (\ones. cons 1 ones), then here is one step
>>of unwinding:
>>
>>Y f ==> f (Y f) ==> cons 1 (Y f)
>>
>
>
> I'm with you here, too, except that we are *assuming* that Y has the
> stated property. If it does, it's absolutely clear that the above
> should hold.

You can easily verify that
        Y f = f (Y f)

LHS =
Y f =
(\f. (\x. f (x x)) (\x. f (x x))) f =
(\x. f (x x)) (\x. f (x x)) =
f ((\x. f (x x)) (\x. f (x x)))

RHS =
f (Y f) =
f ((\f. (\x. f (x x)) (\x. f (x x))) f) =
f ((\x. f (x x)) (\x. f (x x)))

So (\f. (\x. f (x x)) (\x. f (x x))) is a fixpoint combinator
(one of infinitely many).

        -- Lennart

Re: Infinite lists and lambda calculus (was: Detecting Cycles in Datastructures)

Gregory Woodhouse


--- Lennart Augustsson <[hidden email]> wrote:

I guess I'm not doing a very good job of expressing myself. I see that
if you define Y as you do, then the various functions you list have the
property that Y f = f (Y f).

I don't want to draw out a boring discussion that is basically about my
own qualms here, especially since I haven't really been able to
articulate what it is that bothers me.

Perhaps the issue is that the manipulations below are purely syntactic,
and though they work, it is not at all clear how to make contact with
other notions of computability. Perhaps it is that, while

Y = (\f. (\x. f (x x)) (\x. f (x x)))

is a perfectly sensible definition, it's still just a recipe for a
computation. Maybe I'm thinking too hard, but it reminds me of the mu
operator. Primitive recursive functions are nice and constructive, but
minimization is basically a "search", there's no guarantee it will
work. If I write

g = mu x (f(x) - x)

then I've basically said "look at every x and stop when you find a
fixed point". Likewise, given a candidate for f, it's easy to verify
that Y f = f (Y f), as you've shown, but can the f be found without
some kind of "search"? Since there are recursive functions that aren't
primitive recursive, the answer has to be "no".

Finally, you've exhibited a sequence of fixed points, and in this case
it's intuitively clear how these correspond to something we might call
an infinite list. But is there an interpretation that makes this
precise? The notation

ones = cons 1 ones
ones = cons 1 (cons 1 ones)
...

is suggestive, but only suggestive (at least as far as I can see). Is
there a model in which [1, 1 ...] is a real "thing" that is somehow
"approached" by the finite lists?

>
> You can easily verify that
> Y f = f (Y f)
>
> LHS =
> Y f =
> (\f. (\x. f (x x)) (\x. f (x x))) f =
> (\x. f (x x)) (\x. f (x x)) =
> f ((\x. f (x x)) (\x. f (x x)))
>
> RHS =
> f (Y f) =
> f ((\f. (\x. f (x x)) (\x. f (x x))) f) =
> f ((\x. f (x x)) (\x. f (x x)))
>
> So (\f. (\x. f (x x)) (\x. f (x x))) is a fixpoint combinator
> (one of infinitely many).
>
> -- Lennart
>




Re: Infinite lists and lambda calculus

Lennart Augustsson
Greg Woodhouse wrote:
> Perhaps the issue is that the manipulations below are purely syntactic,

But all the computation rules of the lambda calculus are "syntactic"
in that sense.  When you can prove things by symbol pushing it's
the easiest way.

But as Paul Hudak mentioned, there are definitions that are equal, but
you need something stronger to prove it, like fixpoint induction.

> and though they work, it is not at all clear how to make contact with
> other notions of computability. Perhaps it is that
>
> Y = (\f. (\x. f (x x)) (\x. f (x x)))
>
> is a perfectly sensible definition, it's still just a recipe for a
> computation. Maybe I'm thinking too hard, but it reminds me of the mu
> operator. Primitive recursive functions are nice and constructive, but
> minimization is basically a "search", there's no guarantee it will
> work. If I write
>
> g = mu x (f(x) - x)
>
> then I've basically said "look at every x and stop when you find a
> fixed point". Likewise, given a candidate for f, it's easy to verify
> that Y f = f (Y f), as you've shown, but can the f be found without
> some kind of "search"?
There is no search with this definition (as you can see), it's very
constructive.

It computes the fixpoint, which you can also define as
                  oo
        fix f = lub f^i(_|_)
                 i=0
where f^i is f iterated i times.  Is that a definition
of fixpoint that makes you happier?

> Since there are recursive functions that aren't
> primitive recursive, the answer has to be "no".

Where did primitive recursion come from?  Y is used
to express general recursion.

>
> Finally, you've exhibited a sequence of fixed points, and in this case
> it's intuitively clear how these correspond to something we might call
> an infinite list. But is there an interpetration that makes this
> precise? The notation
>
> ones = cons 1 ones
> ones = cons 1 (cons 1 ones)
> ...

The list ones is the least upper bound in the domain of lazy (integer)
lists of the following chain
_|_, cons 1 _|_, cons 1 (cons 1 _|_), ...
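
One way to make that chain tangible in Haskell (a sketch; approx is a
hypothetical helper that cuts a list off after n elements, with
undefined standing in for _|_, and ones the infinite list of ones):

 > approx :: Int -> [a] -> [a]
 > approx 0 _      = undefined           -- _|_
 > approx n (x:xs) = x : approx (n-1) xs
 >
 > -- approx 0 ones, approx 1 ones, approx 2 ones, ...  is the chain
 > -- _|_,  1 : _|_,  1 : 1 : _|_, ...  and ones is its least upper
 > -- bound: take i (approx n ones) == take i ones  whenever i <= n.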

How much domain theory have you studied?  Maybe that's where you can
find the connection you're missing?

        -- Lennart

Re: Infinite lists and lambda calculus

Gregory Woodhouse


--- Lennart Augustsson <[hidden email]> wrote:

>
> It computes the fixpoint, which you can also define as
>                   oo
>         fix f = lub f^i(_|_)
>                  i=0
> where f^i is f iterated i times.  Is that a definition
> of fixpoint that makes you happier?

Believe it or not, yes. My background is in mathematics, so something
looking more like a fixed point of a continuous map seems a lot more
intuitive.
>
>
> How much domain theory have you studied?  Maybe that's where you can
> find the connection you're missing?
>

None. But it seems clear that this is a deficit in my education I need
to address.

> -- Lennart
>




Re: Detecting Cycles in Datastructures

jerzy.karczmarczuk
In reply to this post by Gregory Woodhouse
My simple-mindedness and naïveté begin to bother me. I begin to get lost,
and I don't know anymore what the problem is...

Greg Woodhouse wrote:

> --- Paul Hudak <[hidden email]> wrote:
...
>> The important property of Y is this:
>>
>> Y f = f (Y f)
>
> Right. This is just a formal statement of the property that f fixes Y
> f. I'm with you so far.

No, not exactly. This *may be* the *definition* of Y in a typed system
(Haskell). On the other hand, you cannot write Y as a self-applying lambda
construct there, for obvious (typing) reasons, and still you can sleep well...

...

>> Now note that if we define g as (\ones. cons 1 (cons 1 ones)), we
>> get:
>>
>> Y g ==> g (Y g) ==> cons 1 (cons 1 (Y g))
>
> Now, this is where things get a little mysterious. Where did g come
> from?
>
> I understand that the "x x" in the formal definition of Y is what makes
> everything work (or, at least I think I do).

Not exactly. Let's do it in Haskell, without this (x x) stuff...
My `g` means something different, though.

fix f = f (fix f)   -- Here you have your Y. No typeless lambda.

g f n = n : f n    -- This is a generic *non-recursive* `repeat`
ones = fix g 1     -- Guess what.


Now (take 30 ones) works perfectly well. 'ones' is a piece of co-data, or a
co-recursive stream as many people, e.g., David Turner, would say. It has
no finite form. Yet, we live happily with it, provided we have a normal
reduction scheme (laziness).
Why do you want (in a further posting) to have a "real thing" which for you
means a finite list? Are you sure that everybody needs the LEAST fixed
point? The co-streams give you something different...


Frankly, I don't know what the issue is...

Jerzy Karczmarczuk



Re: Infinite lists and lambda calculus (was: Detecting Cycles in Datastructures)

Cale Gibbard
In reply to this post by Gregory Woodhouse
On 18/11/05, Greg Woodhouse <[hidden email]> wrote:

>
>
> --- Lennart Augustsson <[hidden email]> wrote:
>
> I guess I'm not doing a very good job of expressing myself. I see that
> if you define Y as you do, then the various functions you list have the
> property that Y f = f (Y f).
>
> I don't want to draw out a boring discussion that is basically about my
> own qualms here, especially since I haven't really been able to
> articulate what it is that bothers me.
>
> Perhaps the issue is that the manipulations below are purely syntactic,
> and though they work, it is not at all clear how to make contact with
> other notions of computability. Perhaps it is that
>
> Y = (\f. (\x. f (x x)) (\x. f (x x)))
>
In a sense, the real definition of Y is Y f = f (Y f); this lambda
term just happens to have that property, but such functions aren't
rare. One fun one is:

Y = (L L L L L L L L L L L L L L L L L L L L L L L L L L L L)
where
L = λabcdefghijklmnopqstuvwxyzr. (r (t h i s i s a f i x e d p o i n t
c o m b i n a t o r))

> is a perfectly sensible definition, it's still just a recipe for a
> computation.


Re: Detecting Cycles in Datastructures

Gregory Woodhouse
In reply to this post by jerzy.karczmarczuk


--- [hidden email] wrote:

>
> fix f = f (fix f)   -- Here you have your Y. No typeless lambda.
>
> g f n = n : f n    -- This is a generic *non-recursive* `repeat`
> ones = fix g 1     -- Guess what.
>
>

Very nice! I honestly would not have expected this to work. Simple
cases of lazy evaluation are one thing, but this is impressive.

Incidentally, I only(!) have access to Hugs here in the office, but
defining

fix f         = f (fix f)
g f n         = n : f n
h f           = 1 : map (+1) f

I see that the number of additional reductions required for each
successive term of fix h quickly levels off at a constant.

Main> take 1 (fix h)
[1]
(65 reductions, 95 cells)
Main> take 2 (fix h)
[1,2]
(96 reductions, 138 cells)
Main> take 3 (fix h)
[1,2,3]
(123 reductions, 178 cells)
Main> take 4 (fix h)
[1,2,3,4]
(150 reductions, 218 cells)
Main> take 5 (fix h)
[1,2,3,4,5]
(177 reductions, 258 cells)
Main> take 6 (fix h)
[1,2,3,4,5,6]
(204 reductions, 298 cells)

(In case it's not obvious, I'm a complete Haskell newbie, and find this
rather impressive.)

Re: Detecting Cycles in Datastructures

Ben Franksen-2
On Saturday 19 November 2005 02:16, Greg Woodhouse wrote:
> --- [hidden email] wrote:
> > fix f = f (fix f)   -- Here you have your Y. No typeless lambda.
> >
> > g f n = n : f n    -- This is a generic *non-recursive* `repeat`
> > ones = fix g 1     -- Guess what.
>
> Very nice! I honestly would not have expected this to work. Simple
> cases of lazy evaluation are one thing, but this is impressive.

[You should read some of his papers, for instance "the most unreliable
technique in the world to compute pi". I was ROTFL when I saw the title
and reading it was an eye-opener and fun too.]

Ben

Re: Infinite lists and lambda calculus

Hudak, Paul
In reply to this post by Cale Gibbard
Cale Gibbard wrote:
>>Y = (\f. (\x. f (x x)) (\x. f (x x)))
>
> In a sense, the real definition of Y is Y f = f (Y f); this lambda
> term just happens to have that property, but such functions aren't
> rare.

Actually no, the "real" definition is the top one, because the other one
isn't even a valid lambda term, since it's recursive!  There is no such
thing as recursion in the pure lambda calculus.

Of course, there are many valid definitions of Y, as you point out, both
recursive and non-recursive.  But I do believe that the first one above
is the most concise non-recursive definition.

   -Paul

Re: Infinite lists and lambda calculus

Cale Gibbard
On 18/11/05, Paul Hudak <[hidden email]> wrote:

> Cale Gibbard wrote:
> >>Y = (\f. (\x. f (x x)) (\x. f (x x)))
> >
> > In a sense, the real definition of Y is Y f = f (Y f); this lambda
> > term just happens to have that property, but such functions aren't
> > rare.
>
> Actually no, the "real" definition is the top one, because the other one
> isn't even a valid lambda term, since it's recursive!  There is no such
> thing as recursion in the pure lambda calculus.

I meant in the sense that Y f = f (Y f) is an axiomatic definition of
Y. We're satisfied with any concrete definition of Y which meets that
criterion. Of course it's not actually a lambda term, but it is an
equation which a lambda term Y can satisfy. Similarly in, say, set
theory, we don't say what sets *are* so much as we say what it is that
they satisfy, and there are many models of set theory which meet those
axioms. Y f = f (Y f) is really the only important property of Y, and
any model of it will do as a concrete definition.

 - Cale
>
> Of course, there are many valid definitions of Y, as you point out, both
> recursive and non-recursive.  But I do believe that the first one above
> is the most concise non-recursive definition.
>
>    -Paul
>

Re: Detecting Cycles in Datastructures

Gracjan Polak
In reply to this post by Ben Franksen-2
2005/11/19, Benjamin Franksen

> [You should read some of his papers, for instance "the most unreliable
> techique in the world to compute pi". I was ROTFL when I saw the title
> and reading it was an eye-opener and fun too.]
>

Care to post a link? Seems interesting, but unknown to google :)

--
Gracjan

Re: Detecting Cycles in Datastructures

Jacob Nelson

http://users.info.unicaen.fr/~karczma/arpap/

On Sat, 19 Nov 2005, Gracjan Polak wrote:

> 2005/11/19, Benjamin Franksen
>
>> [You should read some of his papers, for instance "the most unreliable
>> techique in the world to compute pi". I was ROTFL when I saw the title
>> and reading it was an eye-opener and fun too.]
>>
>
> Care to post a link? Seems interesting, but unknown to google :)
>
> --
> Gracjan