## Lazy Evaluation

### Lazy and Eager

Lazy Evaluation is a technique in which expressions, and the functions in which they're found, are evaluated only when the value they are supposed to return is needed. This is in contrast to Eager Evaluation, where expressions are evaluated immediately as they are encountered at run time.

Let's walk through a quick example to show how both methods work. Say we have a simple function that adds two numbers together:
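The original snippet isn't shown here, but a minimal eager version might look like this (the name `add` is illustrative):

```lisp
(defun add (a b)
  "Eagerly add two numbers together."
  (+ a b))
```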

Now we'll create a list and populate it with the values returned by this function:
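A sketch of what that might look like, reusing the hypothetical `add` from above:

```lisp
(defparameter *sums*
  (list (add 1 2) (add 2 3) (add 3 4)))

*sums* ; => (3 5 7)
```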

As you can see, the list is populated with the results of the function. We first called the function and passed it the values 1 and 2, then 2 and 3, and so on. The values were calculated immediately and put into the list.

Now let's see what happens when we use lazy evaluation. First we define a function and make it use lazy evaluation:
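One way to sketch this, assuming a `lazy` macro of the kind described later in the article (it is not part of the ANSI standard):

```lisp
(defun lazy-add (a b)
  ;; returns an unevaluated thunk, not a number
  (lazy (+ a b)))
```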

Then we try to populate the list with the results from our lazy function:
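Using the hypothetical `lazy-add` from above (the exact printed representation of a closure varies by implementation):

```lisp
(defparameter *lazy-sums*
  (list (lazy-add 1 2) (lazy-add 2 3) (lazy-add 3 4)))

*lazy-sums* ; => (#<FUNCTION ...> #<FUNCTION ...> #<FUNCTION ...>)
```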

The list elements are referencing the function objects and not the results! What's going on here?

In this case the function has not been evaluated, and will remain in this state until we tell it explicitly to perform the calculation, like so:
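A sketch, assuming a `force` function that runs a deferred thunk (defined later in the article):

```lisp
(mapcar #'force *lazy-sums*) ; => (3 5 7)
```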

To get the values of the previous list, we just call force on all the elements.

### So What's the Point?

Ok, this seems like a cool party trick... but how is it relevant to software development in general? Are there any instances where we wouldn't want the computer to evaluate all functions immediately, deferring them until we specifically tell it to?

#### Case Study #1: Control Structures in Programming Languages

Control structures are the common forms (if, or, and so on) found in almost all programming languages. These can only be implemented properly with lazy evaluation, since the flow of the program branches between two (or more) possible paths, and not all of them should be evaluated.
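A simple illustration of the branching behavior:

```lisp
(if t
    (format t "the true branch runs~%")
    (format t "the false branch runs~%"))
;; only "the true branch runs" is printed
```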

As you can see, only the first expression was evaluated, because the if form prevents the immediate execution of the second. By contrast, if we place both expressions in a form that executes them unconditionally:
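For example, with progn, which evaluates all of its body forms in order:

```lisp
(progn
  (format t "first expression~%")
  (format t "second expression~%"))
;; both lines are printed
```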

This delay in evaluation is also what enables a form like or to exist. or works by checking the expressions passed to it one by one until one of them returns t, or any value other than nil (or False, in languages like Python). The expressions are not all evaluated up front because there is no need: only one has to return a true value for or itself to return true.

Let's see this in action with SBCL:
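In SBCL the expansion looks roughly like this (the gensym name will differ from run to run):

```lisp
(macroexpand '(or a b))
;; => (LET ((#:G1 A))
;;      (IF #:G1
;;          #:G1
;;          (OR B)))
```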

Here, with macroexpand, we can see exactly how the interpreter/compiler treats the or form. In this case, or is a macro that expands into an if form. As we showed above, if is implemented with lazy evaluation, so by association or is too. If the first expression does not evaluate to true, recursion takes place: or is called again with the remaining values as arguments.

A better example would be to pass undefined functions. If the undefined function were evaluated, an error would occur:
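For instance (the function name is hypothetical; any unbound function will do):

```lisp
(undefined-function-call)
;; signals an error: the function UNDEFINED-FUNCTION-CALL is undefined
```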

Yet, as we shall soon see, or will not evaluate the undefined function as long as an expression placed before it in the argument list evaluates to t:
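A sketch of this short-circuiting behavior:

```lisp
(or (evenp 2) (undefined-function-call))
;; => T -- the undefined function is never evaluated
```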

If we place the undefined function before a valid function:
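Then the error is signaled before the valid expression is ever reached:

```lisp
(or (undefined-function-call) (evenp 2))
;; signals an undefined-function error; (evenp 2) is never evaluated
```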

#### Case Study #2: Costly Operations (and Large Data Sets)

Let's say that you have a program that shows all possible permutations of 4 sets of elements, but in reality only has to calculate the permutations of one set: the one chosen by the user. Without Lazy Evaluation$$^\text{TM}$$, the permutations of all four sets would be calculated when only one is needed.

Here is the cost in terms of resources and time of one such calculation, assuming that the set has 9 elements:
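A set of 9 elements has $$9! = 362{,}880$$ permutations. Timing one such call (using the author's own permute function; actual figures depend on your machine, so none are reproduced here):

```lisp
(time (permute '(1 2 3 4 5 6 7 8 9)))
;; 362,880 result lists must be built for a 9-element set
```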

So if you have code that looks like this....
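A hypothetical sketch of the eager version, with illustrative set names:

```lisp
;; every set is permuted up front, eagerly
(defparameter *all-permutations*
  (list (permute *set-1*)
        (permute *set-2*)
        (permute *set-3*)
        (permute *set-4*)))
```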

... where all the sets are passed to permute before the user chooses one, you can just imagine what will happen to your system resources. The fact is, it's all pretty useless because we're interested in only one result.

$$\ast$$Actually the above code won't even run in my copy of SBCL, undoubtedly because the permute function, which I coded, is woefully inefficient, but this just shows the consequences of running expensive operations.

How about if we apply lazy evaluation to the problem? Then the computer would only calculate the permutations once the value is referenced or called, therefore performing only one such calculation.
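A hypothetical sketch of the lazy version, again with illustrative set names:

```lisp
;; each permutation is deferred until explicitly forced
(defparameter *lazy-permutations*
  (list (lazy (permute *set-1*))
        (lazy (permute *set-2*))
        (lazy (permute *set-3*))
        (lazy (permute *set-4*))))

;; later, when the user picks (say) the second set:
(force (second *lazy-permutations*))
```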

We just make the permute calls lazy with the lazy macro (this is not part of the ANSI standard, so you'll have to roll your own) and retrieve the results with the force function.

So here the functions are not evaluated until the user chooses which permutation he wants; the call to force then tells the computer to perform the calculation. The code runs without problems and only one evaluation is done, conserving resources (and in this case, preventing a crash).

This specific case actually occurs quite often in game AI code, where the AI has to generate a tree of all possible moves and rate them accordingly so it can decide how best to beat the player. Generating a list of all possible moves is incredibly expensive, resource-wise, and also quite wasteful, as the majority of the moves calculated will never be made in the game.

Take a chess opening move. The player can move any of his eight pawns either one or two squares forward. Leaving the other pieces aside for now, this is a total of 16 possible moves. For each of these 16 possibilities, the AI then has to build another tree of possible responses. Say it only moves its pawns as well: that's 16 replies for EACH of the player's 16 moves, or $$16^2 = 256$$ positions after a single exchange, and $$16^3 = 4096$$ after the player's next pawn move. This quickly gets out of hand, and we're only talking about pawn movements for the first couple of turns! The clincher, though, is that only one move will actually be made, so all the other calculated possibilities are thrown away as irrelevant!

The way around this is lazy evaluation, where the specific possibilities are calculated only once the player has made a specific move.

### Implementing Lazy Functions

Several languages have laziness built in: Haskell is lazy by default, and Clojure provides lazy sequences. Common Lisp, however, does not, so if you want to take advantage of lazy data structures or lazy functions you have to roll your own implementation. Fortunately, this is easy to do.

When you come down to it, a lazy data structure or function doesn't hide the result of a computation; it doesn't perform the computation at all. With this in mind, you can implement laziness by wrapping the expressions you want to defer in functions and assigning those functions to variables. This is similar to what happens when you pass a function to another function as one of its arguments: the passed function is not called, but merely saved in an argument variable by the other function. To evaluate it, you have to use funcall, and only then does the interpreter call the function.
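A sketch of such an implementation, following the `lazy`/`force` pair from Land of Lisp:

```lisp
(defmacro lazy (&body body)
  "Wrap BODY in a memoizing thunk; evaluation is deferred until FORCE."
  (let ((forced (gensym))
        (value (gensym)))
    `(let ((,forced nil)
           (,value nil))
       (lambda ()
         (unless ,forced
           (setf ,value (progn ,@body))
           (setf ,forced t))
         ,value))))

(defun force (lazy-value)
  "Call the thunk, running the deferred computation (or returning its cached result)."
  (funcall lazy-value))
```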

$$\ast$$ Implementation taken from Conrad Barski's Land of Lisp.

So here you can see one possible implementation of the lazy paradigm. The macro lazy takes a series of expressions, usually the innards of a function, and wraps them in a lambda, stashing the eventual result in gensym'd variables. To evaluate the expressions, the returned closure must be called with funcall, which is exactly what force does.

This way any recursion present in the function is not performed, giving you the ability to create and manage infinite data sets.
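For example, an infinite sequence of integers might be sketched like this (the structure is inferred from the text's description of *numbers*; the details are illustrative):

```lisp
(defun numbers (&optional (n 1))
  "An 'infinite' list: a number consed onto a deferred recursive call."
  (cons n (lazy (numbers (1+ n)))))

(defparameter *numbers* (numbers))

(car *numbers*)                ; => 1
(car (force (cdr *numbers*)))  ; => 2
```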

As you can see, *numbers* holds a list with a number and the un-evaluated recursion that adds 1 to that number. If this were implemented without the call to lazy when doing the recursion, it would loop infinitely until the stack was exhausted or the program was terminated by the user.

This way, you can create what amounts to an infinite list without exhausting the stack, because the functions don't recurse until funcall is called on them. In effect, lazy is a stop sign for the function and funcall is the green light.

### Consequences

Lazy Evaluation is not without its downsides, however, the biggest being a performance hit: the computer now has to store a reference to the computation, and often the result once the computation is done (this caching is called memoization, and is incredibly useful when used properly, but wasteful when not needed). As such, lazy evaluation should only be used when truly needed, as it can slow down code and create bottlenecks.