When are you supposed to η-reduce?

Wikipedia lists the following algorithm for normalizing a lambda calculus term $t$:

  • If $t$ is not in head normal form, β-reduce the topmost β-redex to obtain $t'$. Then normalize $t'$ to obtain $t''$. $t''$ is the normal form of $t$.
  • If $t$ is in head normal form, normalize each of its subterms to obtain $t'$. $t'$ is the normal form of $t$.

The implicit base case is when $t$ is in head normal form but has no subterms, in which case it is already in normal form. If $t$ has a normal form, the algorithm above is guaranteed to find it.
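For concreteness, here is a minimal sketch of the quoted procedure in Haskell, using de Bruijn indices to avoid the bookkeeping of capture-avoiding substitution (the representation and helper names are my own, not from the Wikipedia article):

```haskell
-- De Bruijn-indexed lambda terms: Var 0 is the nearest enclosing binder.
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- Shift the free variables of a term: add d to every index >= cutoff c.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- Substitute s for index j in a term.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- Contract the beta-redex (\. b) a.
betaStep :: Term -> Term -> Term
betaStep b a = shift (-1) 0 (subst 0 (shift 1 0 a) b)

-- Contract topmost beta-redexes only; never reduce under a binder or
-- inside an argument.
whnf :: Term -> Term
whnf (App f a) = case whnf f of
  Lam b -> whnf (betaStep b a)
  f'    -> App f' a
whnf t = t

-- The quoted algorithm: reduce the topmost redex until the head is
-- stable, then recursively normalize every subterm.
normalize :: Term -> Term
normalize t = case whnf t of
  Lam b   -> Lam (normalize b)
  App f a -> App (normalize f) (normalize a)
  v       -> v
```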

The problem I have is that the algorithm never seems to perform η-reduction.

For example, the algorithm normalizes $\lambda y.\, \lambda x.\, y\, x$ to itself: it is already in head normal form, and its subterm $y\, x$ is also in head normal form. It should normalize to $\lambda y.\, y$, though.
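In the sketch above the example reads `Lam (Lam (App (Var 1) (Var 0)))`, and `normalize` is a fixpoint on it. The η-step that the algorithm never takes would look like this (again a sketch, with helper names of my own choosing):

```haskell
-- \y. \x. y x: `normalize` from the sketch above returns this unchanged.
example :: Term
example = Lam (Lam (App (Var 1) (Var 0)))

-- One eta step: \x. f x  ~>  f, provided x (index 0) is not free in f.
etaStep :: Term -> Maybe Term
etaStep (Lam (App f (Var 0)))
  | not (freeIn 0 f) = Just (shift (-1) 0 f)
etaStep _ = Nothing

-- Does index j occur free in the term?
freeIn :: Int -> Term -> Bool
freeIn j (Var k)   = j == k
freeIn j (Lam b)   = freeIn (j + 1) b
freeIn j (App f a) = freeIn j f || freeIn j a
```

Here `etaStep (Lam (App (Var 1) (Var 0)))` yields `Just (Var 0)`, so the body of `example` contracts and the whole term becomes `Lam (Var 0)`, i.e. $\lambda y.\, y$; but nothing in `normalize` ever invokes such a step.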

It seems clear that η-reduction steps have to be added at some point. The problem is that I do not know where to put them while keeping the property that the algorithm is guaranteed to find the normal form.
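My best guess, sketched below reusing the definitions above, is to attempt one η-step each time a lambda body has been fully normalized; what I cannot tell is whether this placement preserves the guarantee.

```haskell
-- Guess: after normalizing under a binder, try a single eta step there.
-- In a beta-normal body of shape App f (Var 0), f cannot be a lambda,
-- so one attempt per binder seems enough and no new beta-redex appears.
normalizeBetaEta :: Term -> Term
normalizeBetaEta t = case whnf t of
  Lam b   -> let l = Lam (normalizeBetaEta b)
             in maybe l id (etaStep l)
  App f a -> App (normalizeBetaEta f) (normalizeBetaEta a)
  v       -> v
```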

When and where should one η-reduce to achieve this?