Thinking more about this I/O demand-analysis question. I don't think it's possible for the compiler to decide for itself what to fuzz (that is, where to deliberately weaken the strictness information), but a user might know very well. Given
m >>= f
there are really two sensible approaches:
1. Ensure that m is executed even if the action produced by f diverges.
2. Don't care whether m is executed if the compound action ultimately diverges.
Our current I/O hack mixes these two in an unprincipled way.
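To make the difference concrete, here is a sketch (the names demo and the use of timeout are my own, standing in for "observe a diverging compound action from outside"): m writes to a reference, and the action produced by f loops forever. Approach 1 insists that the write is visible anyway; approach 2 would in principle permit the compiler to drop it, since the compound action never delivers a result.

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (forever)
import Data.IORef
import System.Timeout (timeout)

-- 'm' is the writeIORef; the action produced by 'f' diverges.
-- Under approach 1 the write must still happen; under approach 2
-- dropping it would be allowed, because the compound action
-- (write >>= loop) never finishes.
demo :: IO Int
demo = do
  ref <- newIORef (0 :: Int)
  _ <- timeout 50000 (writeIORef ref 1 >>= \_ -> forever (threadDelay 1000))
  readIORef ref
```

In practice GHC today executes the write (sequencing via the state token), so demo yields 1; the question is only what the semantics should promise.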
The second approach, not caring, is certainly the most natural for GHC's implementation of I/O: we're defining a function from the real world to a new real world paired with a value; the function is partial, and we don't care about the details of the partiality. This is fairly clearly the right way to handle strict ST: if we don't get a result at the end, we don't get anything useful, and we don't care which actions get dropped.
The first approach seems to offer a better way to explain I/O and evaluation to users: it ensures that evaluation is performed only to the extent necessary for execution. The I/O hack partially supports this when it triggers. But it doesn't always trigger, and it's not entirely clear whether we can make it do so.
Still no real conclusions here; just exploring the problem more.