I am writing a program which uses RWS for tracking mutable state and producing a log. My goal is to define a computation that evaluates some action, inspects the resulting state and, depending on it, prepends something to the Writer's log. Minimal example:
import Control.Monad     (forM_)  -- used in the variant further down
import Control.Monad.Fix (mfix)
import Control.Monad.RWS

type M = RWS () String [Int]
run a = evalRWS a () []
prepend s = tell (foldMap show s)
act :: M ()
act = do
  tell "a"
  modify (1:)
  tell "b"

comp :: M [Int]
comp = mfix $ \s -> prepend s >> act >> get
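Running it gives back the result of get paired with the accumulated log:

-- in GHCi, with the definitions above in scope:
run comp          -- evaluates to ([1],"1ab")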
Here I use MonadFix to alter the past by writing to the log before act has ever happened. And it works flawlessly, producing the log "1ab". However, if I instead traverse over the state inside M, it hangs:
prepend s = forM_ s (tell . show)
This behavior is very strange to me: I don't get why this computation diverges. It is even harder to justify because prepend in the second variant does not alter the state at all. Why doesn't this program converge? Is there anything I can do to fix (inb4 "hehe fix") it?

I know that I can work around it using the State part of RWS, but for some reason I would like to avoid that.
This happens because forM_ is not lazy. This requirement is explicitly called out in the mfix documentation: the function has to be lazy in its argument for the fixpoint to converge. But forM_ does need to destructure its parameter in order to iterate over it. It can still stay lazy with respect to every element of the list, but not with respect to the list itself.

When you run this recursive-ish computation of yours, it takes three steps (i.e. three monadic binds): prepend, then act, and then get, and as a result you essentially get a value looking like this (sketched below as plain Haskell; the actual thunk structure is more involved, but the shape is what matters):
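-- my sketch of the (result, log) pair that evalRWS hands back;
-- s names the final state, shared between the result and the log
let s = 1 : []                      -- the final state, as returned by get
in  ( s                             -- the overall result of comp
    , foldMap show s ++ "a" ++ "b"  -- the log; foldMap show s starts out as a thunk
    )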
Here the foldMap show s piece is not yet evaluated - i.e. it's a thunk pointing at s, which is the final state of the same computation. It is possible to reference the state and incorporate it into the foldMap show s expression before the state is even evaluated, because Haskell is lazy. This is laziness at work.

However, if you switch prepend to the forM_ version, you no longer have three steps (three monadic binds) in your computation. Now you have one step for each element of the resulting state list, which means that in order to even construct the computation (i.e. define its steps, aka binds) you need to examine its own resulting state.
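To make that concrete: forM_ is mapM_ with its arguments flipped, and mapM_ builds one bind per list element (roughly the standard definitions from base):

-- forM_ xs f = mapM_ f xs
-- mapM_ f    = foldr (\x k -> f x >> k) (return ())
--
-- so, for a list:
-- forM_ []     f  =  return ()
-- forM_ [x, y] f  =  f x >> (f y >> return ())

Merely counting those (>>)s forces the spine of the list, and that list is the final state the fixpoint is still in the middle of producing, hence the hang.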