Chapter 1.2 (Procedures and the Processes They Generate)
1.2.1 Linear recursion and iteration
We now have all the elements we need to create procedures, but we lack the experience to apply them well. We don't yet know the common patterns, and we are unable to predict the consequences of the procedures we write. The ability to visualize the consequences of the actions under consideration is crucial to becoming an expert programmer, just as it is in any synthetic, creative activity.
In this section we will examine some common “shapes” of processes, generated by simple procedures.
Given what we have already seen of recursive and iterative processes in Lectures 3 and 4, we must be careful not to confuse the notion of a recursive process with that of a recursive procedure:
When we describe a procedure as recursive, we are referring to the syntactic fact that the procedure definition refers (either directly or indirectly) to the procedure itself. But when we describe a process as following a pattern that is, say, linearly recursive, we are speaking about how the process evolves, not about the syntax of how a procedure is written.
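To make the distinction concrete, here is a sketch adapted from the chapter's factorial discussion: a recursive procedure that nonetheless generates an iterative process.

(define (factorial n)
  (define (fact-iter product counter)
    (if (> counter n)
        product
        (fact-iter (* counter product) (+ counter 1))))
  (fact-iter 1 1))

fact-iter refers to itself, so it is a recursive procedure; but all state lives in product and counter and nothing is deferred, so the process it generates is iterative.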
Exercise 1.9
Each of the following two procedures defines a method for adding two positive integers in terms of the procedures inc, which increments its argument by 1, and dec, which decrements its argument by 1.
(define (+ a b)
  (if (= a 0) b (inc (+ (dec a) b))))

(define (+ a b)
  (if (= a 0) b (+ (dec a) (inc b))))

Using the substitution model, illustrate the process generated by each procedure in evaluating (+ 4 5). Are these processes iterative or recursive?
Evaluating the first one:
(+ 4 5)
(inc (+ (dec 4) 5))
(inc (+ 3 5))
(inc (inc (+ (dec 3) 5)))
(inc (inc (+ 2 5)))
(inc (inc (inc (+ (dec 2) 5))))
(inc (inc (inc (+ 1 5))))
(inc (inc (inc (inc (+ (dec 1) 5)))))
(inc (inc (inc (inc (+ 0 5)))))
(inc (inc (inc (inc 5))))
9
This is clearly a recursive process, with inc as a deferred operation.
Evaluating the second one:
(+ 4 5)
(+ (dec 4) (inc 5))
(+ 3 6)
(+ (dec 3) (inc 6))
(+ 2 7)
(+ (dec 2) (inc 7))
(+ 1 8)
(+ (dec 1) (inc 8))
(+ 0 9)
9
This is an iterative process: there are no deferred operations, it runs in constant space, and the state is captured entirely by the two arguments, so the computation could stop at any line and resume from just those numbers (evaluating (+ 2 7) on its own yields 9).
Exercise 1.10
The following procedure computes a mathematical function called Ackermann’s function.
(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1) (A x (- y 1))))))
What are the values of the following expressions?
(A 1 10)
(A 2 4)
(A 3 3)
(A 1 10)
(A (- 1 1) (A 1 (- 10 1)))
(A 0 (A 1 9))
(A 0 (A (- 1 1) (A 1 (- 9 1))))
(A 0 (A 0 (A 1 8)))
....
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 1))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 2)))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 4))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 8)))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 16))))))
(A 0 (A 0 (A 0 (A 0 (A 0 32)))))
(A 0 (A 0 (A 0 (A 0 64))))
(A 0 (A 0 (A 0 128)))
(A 0 (A 0 256))
(A 0 512)
1024
In similar style, both (A 2 4) and (A 3 3) evaluate to 65536.
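Sketching why, using the fact (shown below) that (A 1 n) computes $2^n$:

(A 2 4)
(A 1 (A 2 3))
(A 1 (A 1 (A 2 2)))
(A 1 (A 1 (A 1 (A 2 1))))
(A 1 (A 1 (A 1 2)))
(A 1 (A 1 4))
(A 1 16)
65536

And (A 3 3) = (A 2 (A 3 2)) = (A 2 (A 2 (A 3 1))) = (A 2 (A 2 2)) = (A 2 4) = 65536 as well.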
Consider the following procedures, where A is the procedure defined above:
(define (f n) (A 0 n))
(define (g n) (A 1 n))
(define (h n) (A 2 n))
(define (k n) (* 5 n n))
Give concise mathematical definitions for the functions computed by the procedures f, g, and h for positive integer values of $n$.
f(n) returns $2n$, since (A 0 n) = (* 2 n).
g(n) returns $2^n$, since (A 1 n) = (A 0 (A 1 (- n 1))), i.e. $g(n) = 2 \cdot g(n-1)$ with $g(1) = 2$.
h(n) returns $2^{2^{\cdots^{2}}}$ with $n$ twos (iterated exponentiation), since (A 2 n) = (A 1 (A 2 (- n 1))), i.e. $h(n) = 2^{h(n-1)}$ with $h(1) = 2$.
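As a quick sanity check (the expected values follow from the definitions above):

(f 10) ; 20
(g 4)  ; 16 = 2^4
(h 3)  ; 16 = 2^(2^2)
(h 4)  ; 65536 = 2^16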
1.2.2 Tree Recursion
Another common pattern of computation is tree recursion. Consider, for example, calculating the Fibonacci numbers, which can be defined as follows:
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))
Since every call to fib (for n > 1) makes two recursive calls, the calls grow into a tree:
[Figure: the tree of recursive calls generated by (fib 5)]
While this procedure is a good example of tree recursion, it is a terrible way to compute fib(n), since it performs a lot of duplicate computation. In fact, the number of times fib(0) or fib(1) is evaluated is exactly fib(n + 1): in this case ((fib 5)) there are 8 such leaf calls, and 8 is the 6th Fibonacci number. Since fib(n) itself grows exponentially with n, the number of steps grows exponentially with the input.
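To verify the leaf-call claim, here is a small counting sketch; leaf-count is a hypothetical helper, not something from the text:

(define (leaf-count n)
  ;; number of (fib 0) and (fib 1) evaluations performed by (fib n)
  (if (< n 2)
      1
      (+ (leaf-count (- n 1)) (leaf-count (- n 2)))))

(leaf-count 5) ; 8 = fib(6)

Since leaf-count satisfies the same recurrence as fib but starts from leaf-count(0) = leaf-count(1) = 1, it equals fib(n + 1) exactly.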
Fibonacci can also be calculated using an iterative algorithm. Since each number is the sum of the previous two, we can keep the intermediate state in two variables, counting down the number of additions still to perform:
(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))
Using Substitution:
(fib 5)
(fib-iter 1 0 5)
(fib-iter 1 1 4)
(fib-iter 2 1 3)
(fib-iter 3 2 2)
(fib-iter 5 3 1)
(fib-iter 8 5 0)
5
This is a linear iterative process: the number of steps grows linearly with n, and the space is constant. (Note that the final call returns b, which is 5 = fib(5).)
One should not conclude from this that tree-recursive processes are useless. When we consider processes that operate on hierarchically structured data rather than numbers, we will find that tree recursion is a natural and powerful tool.
Secondly, the recursive implementation is much more straightforward, making it easier to reason about.
Example: Counting Change So let's introduce a problem that is a bit more difficult. At first glance it is also a tree-recursive problem, but let's see if we can also come up with an iterative algorithm.
How many different ways can we make change of $1.00, given half-dollars, quarters, dimes, nickels, and pennies? More generally, can we write a procedure to compute the number of ways to change any given amount of money?
This problem is relatively simple to solve recursively.
The number of ways to change amount $a$ using $n$ kinds of coins equals
- the number of ways to change amount $a$ using all but the first kind of coin, plus
- the number of ways to change amount $a - d$ using all $n$ kinds of coins, where $d$ is the denomination of the first kind of coin.
Let's first apply this reduction rule to a simpler problem:
The number of ways to change 10 using (5 1), written 10(5 1):
10(5 1) =
5(5 1) + 10(1) =
(0(5 1) + 5(1)) + (9(1) + 10(_)) =
(1 + 5(1)) + (9(1) + 0) =
1 + (4(1) + 5(_)) + (8(1) + 9(_)) =
1 + (4(1) + 0) + (8(1) + 0) =
... (each k(1) keeps reducing until 0(1) = 1) ...
1 + 1 + 1 = 3
We can learn some things from this case:
- If the amount reaches 0, that counts as 1 way to make change (going from 5(5 1) to 0(5 1) means we used one 5-cent coin to complete the change).
- If the amount drops below 0, we could not make exact change, so it counts as 0.
- If the number of coin kinds $n$ reaches 0, we have run out of denominations and can't make change anymore, so that counts as 0 as well.
Applying this to the more difficult problem (at least for the first few steps):
The number of ways to change 100 using (50, 25, 10, 5, 1) or 100(50 25 10 5 1) =
50(50 25 10 5 1) + 100(25 10 5 1) =
(0(50 25 10 5 1) + 50(25 10 5 1)) +
(75(25 10 5 1) + 100(10 5 1)) =
1 + 50(25 10 5 1) + 75(25 10 5 1) + 100(10 5 1)
... etc
Having these rules, we can construct the recursive algorithm, as we have our base case(s):
(define (make-change amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else ...)))
Now we need to implement the else clause. This is the part we explained above:
The number of ways to change amount $a$ using $n$ kinds of coins equals
- the number of ways to change amount $a$ using all but the first kind of coin, plus
- the number of ways to change amount $a - d$ using all $n$ kinds of coins, where $d$ is the denomination of the first kind of coin.
(else
 (+ (make-change amount (- kinds-of-coins 1))
    (make-change (- amount kinds-of-coins[0]) kinds-of-coins)))
This is still pseudocode, since kinds-of-coins[0] is not actual Lisp, but it signals that we need a way to get the denomination of the first kind of coin. This is usually done via a helper procedure, something like:
(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10)
        ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))
As you can see, this helper depends on the problem statement, so it should probably live outside the main procedure, keeping the latter relatively general:
(define (count-change amount) (cc amount 5))

(define (cc amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (cc amount
                     (- kinds-of-coins 1))
                 (cc (- amount
                        (first-denomination kinds-of-coins))
                     kinds-of-coins)))))
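As a sanity check, the procedure reproduces the answer the SICP text reports for changing a dollar:

(count-change 100)
292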
This shows that the recursive process is often relatively simple to reason about and to construct, while the corresponding iterative process can be quite difficult to construct and reason about.
Exercise 1.11
A function f is defined by the rule that
$f(n) = n$ if $n < 3$, and $f(n) = f(n-1) + 2f(n-2) + 3f(n-3)$ if $n \ge 3$.
Write a procedure that computes f by means of a recursive process. Write a procedure that computes f by means of an iterative process.
Recursive: this follows directly from the definition.
(define (f n)
  (if (< n 3)
      n
      (+ (f (- n 1))
         (* 2 (f (- n 2)))
         (* 3 (f (- n 3))))))
Iterative
Let's see if we can find a pattern by making a table of the first few values:
$$f(0) = 0$$
$$f(1) = 1$$
$$f(2) = 2$$
$$f(3) = f(2) + 2f(1) + 3f(0) = 2 + 2 + 0 = 4$$
$$f(4) = f(3) + 2f(2) + 3f(1) = 4 + 4 + 3 = 11$$
(define (f n)
  (define (iter a b c counter)
    (if (= counter 0)
        a
        (iter b c (+ c (* 2 b) (* 3 a)) (- counter 1))))
  (iter 0 1 2 n))
Let's use substitution to reason about this:
(f 2)
(iter 0 1 2 2)
(iter 1 2 (+ 2 (* 2 1) (* 3 0)) 1)
(iter 1 2 4 1)
(iter 2 4 (whatever) 0)
2
(f 3)
(iter 0 1 2 3)
(iter 1 2 (+ 2 (* 2 1) (* 3 0)) 2)
(iter 1 2 4 2)
(iter 2 4 (+ 4 (* 2 2) (* 3 1)) 1)
(iter 2 4 11 1)
(iter 4 11 (+ 11 (* 2 4) (* 3 2)) 0)
(iter 4 11 25 0)
4
Alright, it seems to work, and the first three non-recursive values $f(0), f(1), f(2) = 0, 1, 2$ are indeed the key: the state (a, b, c) starts as (f(0), f(1), f(2)), and each iteration shifts the window to the next three consecutive values, computing f(k+3) from f(k), f(k+1), f(k+2). After n steps, a holds f(n).
Exercise 1.12
Write a procedure that computes elements of Pascal’s triangle by means of a recursive process.
With Pascal's triangle being:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
So take $i$ to be the row number and $j$ to be the position within the row, both counted from 0:
(define (pascal i j)
  (if (or (= j 0) (= i j))
      1
      (+ (pascal (- i 1) (- j 1)) (pascal (- i 1) j))))
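For example, the middle element of row 4 (the row 1 4 6 4 1):

(pascal 4 2)
6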
Exercise 1.13 Too mathematical for my taste…
1.2.3 Orders of growth
So it seems processes can differ greatly in the amount of computational resources they require to solve a given problem. One convenient way to describe these differences is the notion of order of growth, using big-$O$ notation. We have seen, for example:
- Linear Growth -> $O(n)$
- Constant Growth -> $O(1)$
- Exponential Growth -> $O(2^n)$
This classification is relatively crude: for example, a process requiring $n^2$ steps, a process requiring $1000n^2$ steps, and a process requiring $3n^2 + 10n + 17$ steps all have $O(n^2)$ order of growth. But it is a useful indication of how we expect a solution to behave as the size of the problem changes.
Let's now examine two algorithms whose order of growth is logarithmic, $O(\log n)$.
Exercise 1.15 Given
(define (cube x) (* x x x))
(define (p x) (- (* 3 x) (* 4 (cube x))))
(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))
a. How many times is the procedure p applied when
(sine 12.15)is evaluated?
(sine 12.15)
(p (sine (/ 12.15 3.0)))
(p (sine 4.05))
(p (p (sine (/ 4.05 3.0))))
(p (p (sine 1.35)))
(p (p (p (sine (/ 1.35 3.0)))))
(p (p (p (sine 0.45))))
(p (p (p (p (sine (/ 0.45 3.0))))))
(p (p (p (p (sine 0.15)))))
(p (p (p (p (p (sine (/ 0.15 3.0)))))))
(p (p (p (p (p (sine 0.05))))))
(p (p (p (p (p 0.05)))))
5 times
What is the order of growth in space and number of steps (as a function of a) used by the process generated by the sine procedure when (sine a) is evaluated?
The growth in both the number of steps and the space is $O(\log a)$: each call divides the angle by 3 and adds one deferred application of p, so the number of deferred operations grows logarithmically with the angle rather than linearly.
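As a worked check: the number of applications of p is the smallest $k$ such that $a / 3^k \le 0.1$, i.e. $k = \lceil \log_3(a / 0.1) \rceil$. For $a = 12.15$: $12.15 / 0.1 = 121.5$, and $3^4 = 81 < 121.5 \le 243 = 3^5$, so $k = 5$, matching the trace above.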
1.2.4 Exponentiation
Consider the problem of computing the exponential of a given number, $b^n$. One way to do this is via the recursive definition $b^n = b \cdot b^{n-1}$, with $b^0 = 1$:
(define (exp b n)
  (if (= n 0)
      1
      (* b (exp b (- n 1)))))
with space complexity $\Theta(n)$ and time complexity $\Theta(n)$ as well.
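A quick substitution trace of (exp 2 4) makes the $\Theta(n)$ space visible as a growing chain of deferred multiplications:

(exp 2 4)
(* 2 (exp 2 3))
(* 2 (* 2 (exp 2 2)))
(* 2 (* 2 (* 2 (exp 2 1))))
(* 2 (* 2 (* 2 (* 2 (exp 2 0)))))
(* 2 (* 2 (* 2 (* 2 1))))
16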
We can make this iterative:
(define (exp b n)
  (define (exp-iter b counter product)
    (if (= counter 0)
        product
        (exp-iter b (- counter 1) (* b product))))
  (exp-iter b n 1))
with time complexity $\Theta(n)$ but space complexity $\Theta(1)$.
But it is also possible to make this logarithmic, by noting that $b^n = (b^{n/2})^2$ if $n$ is even, and $b^n = b \cdot b^{n-1}$ if $n$ is odd:
(define (fast-exp b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-exp b (/ n 2))))
        (else (* b (fast-exp b (- n 1))))))
This way the problem size is at least halved every two steps, leading to both a space and time complexity of $\Theta(\log n)$.
The difference between $\Theta(\log n)$ growth and $\Theta(n)$ growth becomes striking as $n$ becomes large. For example, fast-exp for $n = 1000$ requires only 14 multiplications.
Exercise 1.16
Design a procedure that evolves an iterative exponentiation process that uses successive squaring and a logarithmic number of steps, as does fast-exp.
(define (fast-exp b n)
  ;; invariant: a * b^n stays equal to the original b^n
  (define (fast-exp-iter a b n)
    (cond ((= n 0) a)
          ((even? n) (fast-exp-iter a (square b) (/ n 2)))
          (else (fast-exp-iter (* a b) b (- n 1)))))
  (fast-exp-iter 1 b n))
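Tracing (fast-exp 2 10) with this version shows the invariant $a \cdot b^n$ staying equal to $2^{10} = 1024$ at every step:

(fast-exp 2 10)
(fast-exp-iter 1 2 10)
(fast-exp-iter 1 4 5)
(fast-exp-iter 4 4 4)
(fast-exp-iter 4 16 2)
(fast-exp-iter 4 256 1)
(fast-exp-iter 1024 256 0)
1024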
Exercise 1.17 (and apparently 1.18 as well XD)
Design a multiplication procedure analogous to fast-exp that uses a logarithmic number of steps, using addition together with the operations double, which doubles an integer, and halve, which divides an (even) integer by 2.
$a * b = a + a + \dots + a$ ($b$ times)
Recursive
(define (* a b)
  (if (= b 0)
      0
      (+ a (* a (- b 1)))))
Iterative
(define (* a b)
  (define (iter a b product)
    (if (= b 0)
        product
        (iter a (- b 1) (+ product a))))
  (iter a b 0))
Iterative logarithmic (assuming even?, double, and halve exist)
(define (* a b)
  (define (iter a b product)
    ;; invariant: (a * b) + product is constant
    (cond ((= b 0) product)
          ((even? b) (iter (double a) (halve b) product))
          (else (iter a (- b 1) (+ product a)))))
  (iter a b 0))
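Tracing (* 7 6) with this version (double and halve taken as given by the exercise) shows the invariant $a \cdot b + product = 42$ at every step:

(* 7 6)
(iter 7 6 0)
(iter 14 3 0)
(iter 14 2 14)
(iter 28 1 14)
(iter 28 0 42)
42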
Exercise 1.19 Again quite mathy…. pfff
1.2.5 Greatest Common Divisors
The greatest common divisor of two integers $a$ and $b$ is the largest integer that divides both $a$ and $b$ with no remainder. There is a very famous and simple algorithm to calculate it.
If $r$ is the remainder of $a / b$, then the common divisors of $a$ and $b$ are the same as the common divisors of $b$ and $r$.
In essence that means the following:
GCD(a,b) = GCD(b,r)
and thus
GCD(206,40) = GCD(40,6)
= GCD(6,4)
= GCD(4,2)
= GCD(2,0)
= 2
This is known as Euclid's algorithm and can be implemented as follows:
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))
which is an iterative process (no deferred operations) with $\Theta(\log n)$ complexity.
Exercise 1.20
The process that a procedure generates is of course dependent on the rules used by the interpreter. As an example, consider the iterative gcd procedure given above. Suppose we were to interpret this procedure using normal-order evaluation, as discussed in Section 1.1.5. (the normal-order-evaluation rule for if is described in Exercise 1.5.) Using the substitution method (for normal order), illustrate the process generated in evaluating
(gcd 206 40) and indicate the remainder operations that are actually performed. How many remainder operations are actually performed in the normal-order evaluation of (gcd 206 40)? In the applicative-order evaluation?
Applicative-order evaluation (evaluate arguments, then apply):
(gcd 206 40)
(gcd 40 (remainder 206 40))
(gcd 40 6)
(gcd 6 (remainder 40 6))
(gcd 6 4)
(gcd 4 (remainder 6 4))
(gcd 4 2)
(gcd 2 (remainder 4 2))
(gcd 2 0)
2
4 remainder operations
Normal-order evaluation (expand until only primitives remain):
(gcd 206 40) =
(gcd 40 (remainder 206 40)) =
(gcd (remainder 206 40) (remainder 40 (remainder 206 40))) =
(gcd (remainder 40 (remainder 206 40)) (remainder (remainder 206 40) (remainder 40 (remainder 206 40))))
etc...
Because if evaluates its predicate even under normal order, each (= b 0) test forces the accumulated remainder expression. Counting them all, normal-order evaluation performs 18 remainder operations (14 in the predicates plus 4 to reduce the final answer), versus the 4 above for applicative order.
1.2.6 Example: Testing for primality
This section describes two methods for checking the primality of an integer $n$: one with $\Theta(\sqrt n)$ complexity, and another with $\Theta(\log n)$ complexity.
One way to test if a number is a prime is to find the number’s divisors. The following procedure finds the smallest integral divisor $> 1$ of a given number $n$.
(define (smallest-divisor n) (find-divisor n 2))

(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))

(define (divides? a b) (= (remainder b a) 0))
Using this, we can test whether a number is prime:
(define (prime? n)
(= n (smallest-divisor n)))
The end test for find-divisor is based on the fact that if $n$ is not prime, it must have a divisor less than or equal to $\sqrt n$. This means that the algorithm need only test divisors between 1 and $\sqrt n$. Consequently, the number of steps required to identify $n$ as prime will have order of growth $\Theta(\sqrt n)$.
Exercise 1.21
Use the smallest-divisor procedure to find the smallest divisor of each of the following numbers: 199, 1999, 19999
(smallest-divisor 199) -> 199
(smallest-divisor 1999) -> 1999
(smallest-divisor 19999) -> 7