Why doesn't Haskell need Trampolining?
As a Scala developer learning the IO monad, and with it the technicalities of trampolining, which is necessary for recursion where tail-call optimization is not possible, I wonder how Haskell seems to avoid it natively.
I get that Haskell is a lazy language; however, I would appreciate it if someone could elaborate a bit further.
For instance, why doesn't ForeverM overflow the stack in Scala? Well, for the trampoline I can answer that: I can find the actual code that does it in libraries and blog posts, and I even implemented a basic trampoline myself to learn.
How does it happen in Haskell? Is there a way to unpack the laziness a bit, give some pointers, and maybe point to documentation that would help in understanding it better?
sealed trait IO[A] {
  .....
  def flatMap[B](f: A => IO[B]): IO[B] =
    FlatMap[A, B](this, f) // we do not interpret the `flatMap` here, just return it as a value
  def map[B](f: A => B): IO[B] =
    flatMap[B](f andThen (Return(_)))
}
case class Return[A](a: A) extends IO[A]
case class Suspend[A](resume: () => A) extends IO[A]
case class FlatMap[A, B](sub: IO[A], k: A => IO[B]) extends IO[B]
......
@annotation.tailrec
def run[A](io: IO[A]): A = io match {
  case Return(a)  => a
  case Suspend(r) => r()
  case FlatMap(x, f) => x match {
    case Return(a)     => run(f(a))
    case Suspend(r)    => run(f(r()))
    case FlatMap(y, g) => run(y flatMap (a => g(a) flatMap f))
  }
}
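To make the question concrete, here is a minimal, self-contained sketch of the interpreter above in action (the `countdown` helper and the `TrampolineDemo` wrapper object are mine, added for illustration): a deeply nested chain of `flatMap`s allocates constructors on the heap, and the `@tailrec` `run` loop unwinds them in constant stack space.

```scala
// Self-contained reproduction of the IO ADT from the question, plus a
// hypothetical `countdown` to drive it a million steps deep.
object TrampolineDemo {
  sealed trait IO[A] {
    def flatMap[B](f: A => IO[B]): IO[B] = FlatMap(this, f) // just build a value
    def map[B](f: A => B): IO[B] = flatMap(a => Return(f(a)))
  }
  case class Return[A](a: A) extends IO[A]
  case class Suspend[A](resume: () => A) extends IO[A]
  case class FlatMap[A, B](sub: IO[A], k: A => IO[B]) extends IO[B]

  // The interpreter: a single tail-recursive loop, i.e. the trampoline.
  @annotation.tailrec
  def run[A](io: IO[A]): A = io match {
    case Return(a)  => a
    case Suspend(r) => r()
    case FlatMap(x, f) => x match {
      case Return(a)     => run(f(a))
      case Suspend(r)    => run(f(r()))
      case FlatMap(y, g) => run(y.flatMap(a => g(a).flatMap(f)))
    }
  }

  // Each step suspends the next inside a lambda, so building the program
  // is shallow and running it never grows the JVM call stack.
  def countdown(n: Int): IO[Int] =
    if (n <= 0) Return(0)
    else Suspend(() => n).flatMap(_ => countdown(n - 1))
}
```

Running `TrampolineDemo.run(TrampolineDemo.countdown(1000000))` returns `0` with no `StackOverflowError`, because every "recursive call" is reified as a heap-allocated `FlatMap` node that the single `run` loop consumes.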
"we do not interpret the `flatMap` here, just return it as a value" is basically what happens in Haskell all the time, without being explicit about it – everything is just a lazy thunk, and it's getting evaluated by one huge trampoline that executes the whole program. – Bergi Apr 22 at 2:16
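Bergi's point can be sketched in Scala terms. This is an illustrative analogy only, not GHC's actual evaluation machinery (`Value`, `Thunk`, `whnf`, and `count` are all names I made up): if every value is a suspended computation, then "running the program" is just one loop that repeatedly forces thunks, so no user-level recursion ever touches the call stack.

```scala
// Analogy for Haskell's evaluation model (NOT the real STG machine):
// values are either computed or suspended, and a single loop forces them.
object ThunkDemo {
  sealed trait Value[A]
  case class Done[A](a: A) extends Value[A]            // evaluated value
  case class Thunk[A](force: () => Value[A]) extends Value[A] // suspension

  // One driving loop that evaluates to weak head normal form:
  // this plays the role of Haskell's built-in "trampoline".
  @annotation.tailrec
  def whnf[A](v: Value[A]): A = v match {
    case Done(a)  => a
    case Thunk(f) => whnf(f())
  }

  // A "lazy" countdown: each step suspends the next inside a thunk,
  // so the structure is built and consumed in constant stack space.
  def count(n: Long): Value[Long] =
    if (n <= 0) Done(0L) else Thunk(() => count(n - 1))
}
```

In this reading, Haskell does not need an explicit trampoline library because the runtime's evaluation loop already plays that role for every thunk in the program.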