Procedures as Returned Values


    The above examples demonstrate how the ability to pass procedures as arguments significantly enhances the expressive power of our programming language. We can achieve even more expressive power by creating procedures whose returned values are themselves procedures.

    We can illustrate this idea by looking again at the fixed-point example described at the end of section [*]. We formulated a new version of the square-root procedure as a fixed-point search, starting with the observation that $\sqrt{x}$ is a fixed-point of the function $y \mapsto x/y$. Then we used average damping to make the approximations converge. Average damping is a useful general technique in itself. Namely, given a function f, we consider the function whose value at x is equal to the average of x and f(x).
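
    The procedures in this section assume the average and fixed-point procedures from that earlier discussion. A minimal sketch of those helpers, along the lines of the tolerance-based search described there, might look like this:

    (define tolerance 0.00001)

    (define (average x y) (/ (+ x y) 2))

    (define (fixed-point f first-guess)
      (define (close-enough? v1 v2)
        (< (abs (- v1 v2)) tolerance))
      (define (try guess)
        (let ((next (f guess)))
          (if (close-enough? next guess)
              next              ; successive guesses agree within tolerance
              (try next))))     ; otherwise keep iterating from the new guess
      (try first-guess))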

    We can express the idea of average damping by means of the following procedure:

    (define (average-damp f)
      (lambda (x) (average x (f x))))
    
    Average-damp is a procedure that takes as its argument a procedure f and returns as its value a procedure (produced by the lambda) that, when applied to a number x, produces the average of x and (f x). For example, applying average-damp to the square procedure produces a procedure whose value at some number x is the average of x and $x^2$. Applying this resulting procedure to 10 returns the average of 10 and 100, or 55: [*]

    ((average-damp square) 10)
    55
    

    Using average-damp, we can reformulate the square-root procedure as follows:

    (define (sqrt x)
      (fixed-point (average-damp (lambda (y) (/ x y)))
                   1.0))
    
    Notice how this formulation makes explicit the three ideas in the method: fixed-point search, average damping, and the function $y \mapsto x/y$. It is instructive to compare this formulation of the square-root method with the original version given in section [*]. Bear in mind that these procedures express the same process, and notice how much clearer the idea becomes when we express the process in terms of these abstractions. In general, there are many ways to formulate a process as a procedure. Experienced programmers know how to choose procedural formulations that are particularly perspicuous, and where useful elements of the process are exposed as separate entities that can be reused in other applications. As a simple example of reuse, notice that the cube root of x is a fixed point of the function $y\mapsto x/y^2$, so we can immediately generalize our square-root procedure to one that extracts cube roots:[*]

    (define (cube-root x)
      (fixed-point (average-damp (lambda (y) (/ x (square y))))
                   1.0))
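
    As a quick check (using the same square and average procedures assumed above), note that 3 is indeed a fixed point of the average-damped function used for the cube root of 27: applying the damped function to 3 simply returns 3.

    ((average-damp (lambda (y) (/ 27 (square y)))) 3)
    3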
    

    Newton's method

    When we first introduced the square-root procedure, in section [*], we mentioned that this was a special case of Newton's method. If $x\mapsto g(x)$ is a differentiable function, then a solution of the equation g(x)=0 is a fixed point of the function $x\mapsto f(x)$ where

    \begin{displaymath}f(x) = x - \frac{g(x)}{Dg(x)}\end{displaymath}

    and Dg(x) is the derivative of g evaluated at x. Newton's method is the use of the fixed-point method we saw above to approximate a solution of the equation by finding a fixed point of the function f. [*] For many functions g and for sufficiently good initial guesses for x, Newton's method converges very rapidly to a solution of g(x)=0. [*]
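
    As a concrete check of this formula, take $g(y) = y^2 - x$ for a fixed x. Then Dg(y) = 2y, and the function whose fixed point Newton's method seeks is

    \begin{displaymath}f(y) = y - \frac{y^2 - x}{2y} = \frac{1}{2}\left(y + \frac{x}{y}\right)\end{displaymath}

    which is precisely the average-damped version of $y \mapsto x/y$ used in the square-root procedure above.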

    In order to implement Newton's method as a procedure, we must first express the idea of derivative. Note that ``derivative,'' like average damping, is something that transforms a function into another function. For instance, the derivative of the function $x\mapsto x^3$ is the function $x \mapsto 3x^2$. In general, if g is a function and dx is a small number, then the derivative Dg of g is the function whose value at any number x is given (in the limit of small dx) by

    \begin{displaymath}Dg(x) = \frac {g(x+dx) - g(x)}{dx} \end{displaymath}

    Thus, we can express the idea of derivative (taking dx to be, say, 0.00001) as the procedure

    (define (deriv g)
      (lambda (x)
        (/ (- (g (+ x dx)) (g x))
           dx)))
    
    along with the definition

    (define dx 0.00001)
    

    Like average-damp, deriv is a procedure that takes a procedure as argument and returns a procedure as value. For example, to approximate the derivative of $x\mapsto x^3$ at 5 (whose exact value is 75) we can evaluate

    (define (cube x) (* x x x))
    
    ((deriv cube) 5)
    75.00014999664018
    

    With the aid of deriv, we can express Newton's method as a fixed-point process:

    (define (newton-transform g)
      (lambda (x)
        (- x (/ (g x) ((deriv g) x)))))
    
    (define (newtons-method g guess)
      (fixed-point (newton-transform g) guess))
    
    The newton-transform procedure expresses the formula at the beginning of this section, and newtons-method is readily defined in terms of this. It takes as arguments a procedure that computes the function for which we want to find a zero, together with an initial guess. For instance, to find the square root of x, we can use Newton's method to find a zero of the function $y\mapsto y^2-x$ starting with an initial guess of 1. [*] This provides yet another form of the square-root procedure:

    (define (sqrt x)
      (newtons-method (lambda (y) (- (square y) x))
                      1.0))
    

    Abstractions and first-class procedures

    We've seen two ways to express the square-root computation as an instance of a more general method, once as a fixed-point search and once using Newton's method. Since Newton's method was itself expressed as a fixed-point process, we actually saw two ways to compute square roots as fixed points. Each method begins with a function and finds a fixed point of some transformation of the function. We can express this general idea itself as a procedure:

    (define (fixed-point-of-transform g transform guess)
      (fixed-point (transform g) guess))
    
    This very general procedure takes as its arguments a procedure g that computes some function, a procedure that transforms g, and an initial guess. The returned result is a fixed point of the transformed function.

    Using this abstraction, we can recast the first square-root computation from this section (where we look for a fixed point of the average-damped version of $y \mapsto x/y$) as an instance of this general method:

    (define (sqrt x)
      (fixed-point-of-transform (lambda (y) (/ x y))
                                average-damp
                                1.0))
    
    Similarly, we can express the second square-root computation from this section (an instance of Newton's method that finds a fixed point of the Newton transform of $y\mapsto y^2-x$) as

    (define (sqrt x)
      (fixed-point-of-transform (lambda (y) (- (square y) x))
                                newton-transform
                                1.0))
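
    The same abstraction also covers the cube-root computation given earlier in this section; for instance, it can be recast as

    (define (cube-root x)
      (fixed-point-of-transform (lambda (y) (/ x (square y)))
                                average-damp
                                1.0))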
    

    We began section [*] with the observation that compound procedures are a crucial abstraction mechanism, because they permit us to express general methods of computing as explicit elements in our programming language. Now we've seen how higher-order procedures permit us to manipulate these general methods to create further abstractions.

    As programmers, we should be alert to opportunities to identify the underlying abstractions in our programs and to build upon them and generalize them to create more powerful abstractions. This is not to say that one should always write programs in the most abstract way possible; expert programmers know how to choose the level of abstraction appropriate to their task. But it is important to be able to think in terms of these abstractions, so that we can be ready to apply them in new contexts. The significance of higher-order procedures is that they enable us to represent these abstractions explicitly as elements in our programming language, so that they can be handled just like other computational elements.

    In general, programming languages impose restrictions on the ways in which computational elements can be manipulated. Elements with the fewest restrictions are said to have first-class status. Some of the ``rights and privileges'' of first-class elements are: [*]

    They may be named by variables.
    They may be passed as arguments to procedures.
    They may be returned as the results of procedures.
    They may be included in data structures.

    Lisp, unlike other common programming languages, awards procedures full first-class status. This poses challenges for efficient implementation, but the resulting gain in expressive power is enormous. [*]

    Exercise. Define a procedure cubic that can be used together with the newtons-method procedure in expressions of the form

    (newtons-method (cubic a b c) 1)
    
    to approximate zeros of the cubic $x^3 + ax^2 + bx + c$.
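
    One possible sketch of such a procedure, a direct translation of the polynomial into a lambda, is:

    (define (cubic a b c)
      (lambda (x)
        (+ (* x x x)      ; x^3
           (* a x x)      ; ax^2
           (* b x)        ; bx
           c)))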

    Exercise. Define a procedure double that takes a procedure of one argument as argument and returns a procedure that applies the original procedure twice. For example, if inc is a procedure that adds 1 to its argument, then (double inc) should be a procedure that adds 2. What value is returned by

    (((double (double double)) inc) 5)
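
    A possible sketch: double simply composes its argument procedure with itself.

    (define (double f)
      (lambda (x) (f (f x))))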
    

    Exercise. Let f and g be two one-argument functions. The composition f after g is defined to be the function $x\mapsto f(g(x))$. Define a procedure compose that implements composition. For example, if inc is a procedure that adds 1 to its argument,

    ((compose square inc) 6)
    49
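
    A minimal sketch of compose might be:

    (define (compose f g)
      (lambda (x) (f (g x))))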
    

    Exercise. If f is a numerical function and n is a positive integer, then we can form the nth repeated application of f, which is defined to be the function whose value at x is $f(f(\ldots(f(x))\ldots))$. For example, if f is the function $x \mapsto x+1$, then the nth repeated application of f is the function $x \mapsto x+n$. If f is the operation of squaring a number, then the nth repeated application of f is the function that raises its argument to the $2^n$th power. Write a procedure that takes as inputs a procedure that computes f and a positive integer n and returns the procedure that computes the nth repeated application of f. Your procedure should be able to be used as follows:

    ((repeated square 2) 5)
    625
    
    Hint: You may find it convenient to use compose from exercise [*].  
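
    Following the hint, one possible sketch builds the repeated application out of compose from the previous exercise:

    (define (repeated f n)
      (if (= n 1)
          f
          (compose f (repeated f (- n 1)))))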

    Exercise. The idea of smoothing a function is an important concept in signal processing. If f is a function and dx is some small number, then the smoothed version of f is the function whose value at a point x is the average of f(x-dx), f(x), and f(x+dx). Write a procedure smooth that takes as input a procedure that computes f and returns a procedure that computes the smoothed f. It is sometimes valuable to repeatedly smooth a function (that is, smooth the smoothed function, and so on) to obtain the n-fold smoothed function. Show how to generate the n-fold smoothed function of any given function using smooth and repeated from exercise [*].
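
    A possible sketch, reusing the dx defined above for deriv and the repeated procedure from the previous exercise (the name n-fold-smooth is just illustrative):

    (define (smooth f)
      (lambda (x)
        (/ (+ (f (- x dx)) (f x) (f (+ x dx)))
           3)))

    (define (n-fold-smooth f n)
      ((repeated smooth n) f))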

    Exercise. We saw in section [*] that attempting to compute square roots by naively finding a fixed point of $y \mapsto x/y$ does not converge, and that this can be fixed by average damping. The same method works for finding cube roots as fixed points of the average-damped $y\mapsto x/y^2$. Unfortunately, the process does not work for fourth roots--a single average damp is not enough to make a fixed-point search for $y\mapsto x/y^3$ converge. On the other hand, if we average damp twice (i.e., use the average damp of the average damp of $y\mapsto x/y^3$) the fixed-point search does converge. Do some experiments to determine how many average damps are required to compute nth roots as a fixed-point search based upon repeated average damping of $y\mapsto x/y^{n-1}$. Use this to implement a simple procedure for computing nth roots using fixed-point, average-damp, and the repeated procedure of exercise [*]. Assume that any arithmetic operations you need are available as primitives.
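
    Experiments of this kind typically indicate that $\lfloor \log_2 n \rfloor$ average damps are enough. Under that assumption, one possible sketch (the helper name log2-floor is illustrative, and expt is assumed available as a primitive) is:

    (define (log2-floor n)      ; number of times n can be halved before reaching 1
      (if (< n 2)
          0
          (+ 1 (log2-floor (quotient n 2)))))

    (define (nth-root x n)
      (fixed-point ((repeated average-damp (log2-floor n))
                    (lambda (y) (/ x (expt y (- n 1)))))
                   1.0))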

    Exercise. Several of the numerical methods described in this chapter are instances of an extremely general computational strategy known as iterative improvement. Iterative improvement says that, to compute something, we start with an initial guess for the answer, test if the guess is good enough, and otherwise improve the guess and continue the process using the improved guess as the new guess. Write a procedure iterative-improve that takes two procedures as arguments: a method for telling whether a guess is good enough and a method for improving a guess. Iterative-improve should return as its value a procedure that takes a guess as argument and keeps improving the guess until it is good enough. Rewrite the sqrt procedure of section [*] and the fixed-point procedure of section [*] in terms of iterative-improve.
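
    A possible sketch of iterative-improve, together with a version of sqrt expressed in terms of it (taking 0.001 as the good-enough tolerance and reusing the square and average procedures assumed earlier):

    (define (iterative-improve good-enough? improve)
      (define (iter guess)
        (if (good-enough? guess)
            guess
            (iter (improve guess))))
      iter)

    (define (sqrt x)
      ((iterative-improve
        (lambda (guess) (< (abs (- (square guess) x)) 0.001))
        (lambda (guess) (average guess (/ x guess))))
       1.0))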
