Introduction to Computing
Explorations in Language, Logic, and Machines

David Evans

University of Virginia

For the latest version of this book and supplementary materials, visit:

http://computingbook.org

Version: August 19, 2011

Attribution-Noncommercial-Share Alike 3.0 United States License

Contents

1 Computing
  1.1 Processes, Procedures, and Computers
  1.2 Measuring Computing Power
    1.2.1 Information
    1.2.2 Representing Data
    1.2.3 Growth of Computing Power
  1.3 Science, Engineering, and the Liberal Arts
  1.4 Summary and Roadmap

Part I: Defining Procedures

2 Language
  2.1 Surface Forms and Meanings
  2.2 Language Construction
  2.3 Recursive Transition Networks
  2.4 Replacement Grammars
  2.5 Summary

3 Programming
  3.1 Problems with Natural Languages
  3.2 Programming Languages
  3.3 Scheme
  3.4 Expressions
    3.4.1 Primitives
    3.4.2 Application Expressions
  3.5 Definitions
  3.6 Procedures
    3.6.1 Making Procedures
    3.6.2 Substitution Model of Evaluation
  3.7 Decisions
  3.8 Evaluation Rules
  3.9 Summary

4 Problems and Procedures
  4.1 Solving Problems
  4.2 Composing Procedures
    4.2.1 Procedures as Inputs and Outputs
  4.3 Recursive Problem Solving
  4.4 Evaluating Recursive Applications
  4.5 Developing Complex Programs
    4.5.1 Printing
    4.5.2 Tracing
  4.6 Summary

5 Data
  5.1 Types
  5.2 Pairs
    5.2.1 Making Pairs
    5.2.2 Triples to Octuples
  5.3 Lists
  5.4 List Procedures
    5.4.1 Procedures that Examine Lists
    5.4.2 Generic Accumulators
    5.4.3 Procedures that Construct Lists
  5.5 Lists of Lists
  5.6 Data Abstraction
  5.7 Summary of Part I

Part II: Analyzing Procedures

6 Machines
  6.1 History of Computing Machines
  6.2 Mechanizing Logic
    6.2.1 Implementing Logic
    6.2.2 Composing Operations
    6.2.3 Arithmetic
  6.3 Modeling Computing
    6.3.1 Turing Machines
  6.4 Summary

7 Cost
  7.1 Empirical Measurements
  7.2 Orders of Growth
    7.2.1 Big O
    7.2.2 Omega
    7.2.3 Theta
  7.3 Analyzing Procedures
    7.3.1 Input Size
    7.3.2 Running Time
    7.3.3 Worst Case Input
  7.4 Growth Rates
    7.4.1 No Growth: Constant Time
    7.4.2 Linear Growth
    7.4.3 Quadratic Growth
    7.4.4 Exponential Growth
    7.4.5 Faster than Exponential Growth
    7.4.6 Non-terminating Procedures
  7.5 Summary

8 Sorting and Searching
  8.1 Sorting
    8.1.1 Best-First Sort
    8.1.2 Insertion Sort
    8.1.3 Quicker Sorting
    8.1.4 Binary Trees
    8.1.5 Quicksort
  8.2 Searching
    8.2.1 Unstructured Search
    8.2.2 Binary Search
    8.2.3 Indexed Search
  8.3 Summary

Part III: Improving Expressiveness

9 Mutation
  9.1 Assignment
  9.2 Impact of Mutation
    9.2.1 Names, Places, Frames, and Environments
    9.2.2 Evaluation Rules with State
  9.3 Mutable Pairs and Lists
  9.4 Imperative Programming
    9.4.1 List Mutators
    9.4.2 Imperative Control Structures
  9.5 Summary

10 Objects
  10.1 Packaging Procedures and State
    10.1.1 Encapsulation
    10.1.2 Messages
    10.1.3 Object Terminology
  10.2 Inheritance
    10.2.1 Implementing Subclasses
    10.2.2 Overriding Methods
  10.3 Object-Oriented Programming
  10.4 Summary

11 Interpreters
  11.1 Python
    11.1.1 Python Programs
    11.1.2 Data Types
    11.1.3 Applications and Invocations
    11.1.4 Control Statements
  11.2 Parser
  11.3 Evaluator
    11.3.1 Primitives
    11.3.2 If Expressions
    11.3.3 Definitions and Names
    11.3.4 Procedures
    11.3.5 Application
    11.3.6 Finishing the Interpreter
  11.4 Lazy Evaluation
    11.4.1 Lazy Interpreter
    11.4.2 Lazy Programming
  11.5 Summary

Part IV: The Limits of Computing

12 Computability
  12.1 Mechanizing Reasoning
    12.1.1 Gödel’s Incompleteness Theorem
  12.2 The Halting Problem
  12.3 Universality
  12.4 Proving Non-Computability
  12.5 Summary

Indexes
  Index
  People

List of Explorations
1.1 Guessing Numbers
1.2 Twenty Questions
2.1 Power of Language Systems
4.1 Square Roots
4.2 Recipes for π
4.3 Recursive Definitions and Games
5.1 Pascal’s Triangle
5.2 Pegboard Puzzle
7.1 Multiplying Like Rabbits
8.1 Searching the Web
12.1 Virus Detection
12.2 Busy Beavers

List of Figures
1.1 Using three bits to distinguish eight possible values.
2.1 Simple recursive transition network.
2.2 RTN with a cycle.
2.3 Recursive transition network with subnetworks.
2.4 Alternate Noun subnetwork.
2.5 RTN generating “Alice runs”.
2.6 System power relationships.
2.7 Converting the Number productions to an RTN.
2.8 Converting the MoreDigits productions to an RTN.
2.9 Converting the Digit productions to an RTN.
3.1 Running a Scheme program.
4.1 A procedure maps inputs to an output.
4.2 Composition.
4.3 Circular Composition.
4.4 Recursive Composition.
4.5 Cornering the Queen.
5.1 Pegboard Puzzle.
6.1 Computing and with wine.
6.2 Computing logical or and not with wine.
6.3 Computing and3 by composing two and functions.
6.4 Turing Machine model.
6.5 Rules for checking balanced parentheses Turing Machine.
6.6 Checking parentheses Turing Machine.
7.1 Evaluation of fibo procedure.
7.2 Visualization of the sets O( f ), Ω( f ), and Θ( f ).
7.3 Orders of Growth.
8.1 Unbalanced trees.
9.1 Sample environments.
9.2 Environment created to evaluate (bigger 3 4).
9.3 Environment after evaluating (define inc (make-adder 1)).
9.4 Environment for evaluating the body of (inc 149).
9.5 Mutable pair created by evaluating (set-mcdr! pair pair).
9.6 MutableList created by evaluating (mlist 1 2 3).
10.1 Environment produced by evaluating: …
10.2 Inheritance hierarchy.
10.3 Counter class hierarchy.
12.1 Incomplete and inconsistent axiomatic systems.
12.2 Universal Turing Machine.
12.3 Two-state Busy Beaver Machine.

Image Credits
Most of the images in the book, including the tiles on the cover, were generated by the author.
Some of the tile images on the cover are Flickr Creative Commons licensed images from: ell brown, Johnson Cameraface, cogdogblog, Cyberslayer, dmealiffe, Dunechaser, MichaelFitz, Wolfie Fox, glingl, jurvetson, KayVee.INC, michaeldbeavers, and Oneras.
The Van Gogh Starry Night image from Section 1.2.2 is from the Google Art
Project. The Apollo Guidance Computer image in Section 1.2.3 was released by
NASA and is in the public domain. The traffic light in Section 2.1 is from iStockPhoto, and the rotary traffic signal is from the Wikimedia Commons. The picture of Grace Hopper in Chapter 3 is from the Computer History Museum. The playing card images in Chapter 4 are from iStockPhoto. The images of Gauss,
Heron, and Grace Hopper’s bug are in the public domain. The Dilbert comic in
Chapter 4 is licensed from United Feature Syndicate, Inc. The Pascal’s triangle image in Excursion 5.1 is from Wikipedia and is in the public domain. The image of Ada Lovelace in Chapter 6 is from the Wikimedia Commons, of a painting by
Margaret Carpenter. The odometer image in Chapter 7 is from iStockPhoto, as is the image of the frustrated student. The Python snake charmer in Section 11.1 is from iStockPhoto. The Dynabook images at the end of Chapter 10 are from Alan
Kay’s paper. The xkcd comic at the end of Chapter 11 is used under the creative commons license generously provided by Randall Munroe.

Preface
This book started from the premise that Computer Science should be taught as a liberal art, not an industrial skill. I had the privilege of taking 6.001 from Gerry
Sussman when I was a first year student at MIT, and that course awakened me to the power and beauty of computing, and inspired me to pursue a career as a teacher and researcher in Computer Science. When I arrived as a new faculty member at the University of Virginia in 1999, I was distraught to discover that the introductory computing courses focused on teaching industrial skills, and with so much of the course time devoted to explaining the technical complexities of using bloated industrial languages like C++ and Java, there was very little, if any, time left to get across the core intellectual ideas that are the essence of computing and the reason everyone should learn it.
With the help of a University Teaching Fellowship and National Science Foundation grants, I developed a new introductory computer science course, targeted especially to students in the College of Arts & Sciences. This course was first offered in Spring 2002, with the help of an extraordinary group of Assistant
Coaches. Because of some unreasonable assumptions in the first assignment, half the students quickly dropped the course, but a small, intrepid, group of pioneering students persisted, and it is thanks to their efforts that this book exists.
That course, and the next several offerings, used Abelson & Sussman’s outstanding Structure and Interpretation of Computer Programs (SICP) textbook along with Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid.

Spring 2002 CS200 Pioneer Graduates
Back row, from left: Portman Wills (Assistant Coach), Spencer Stockdale, Shawn O’Hargan,
Jeff Taylor, Jacques Fournier, Katie Winstanley, Russell O’Reagan, Victor Clay Yount.
Front: Grace Deng, Rachel Dada, Jon Erdman (Assistant Coach).

I am not alone in thinking SICP is perhaps the greatest textbook ever written in any field, so it was with much trepidation that I endeavored to develop a new textbook. I hope the resulting book captures the spirit and fun of computing exemplified by SICP, but is better suited to an introductory course for students with no previous background, while covering many topics not included in SICP such as languages, complexity analysis, objects, and computability. Although this book is designed around a one semester introductory course, it should also be suitable for self-study students and for people with substantial programming experience but without similar computer science knowledge.

I am indebted to many people who helped develop this course and book. Westley Weimer was the first person to teach using something resembling this book, and his thorough and insightful feedback led to improvements throughout. Greg
Humphreys, Paul Reynolds, and Mark Sherriff have also taught versions of this course, and contributed to its development. I am thankful to all of the Assistant Coaches over the years, especially Sarah Bergkuist (2004), Andrew Connors
(2004), Rachel Dada (2003), Paul DiOrio (2009), Kinga Dobolyi (2007), Jon Erdman (2002), Ethan Fast (2009), David Faulkner (2005), Jacques Fournier (2003),
Richard Hsu (2007), Rachel Lathbury (2009), Michael Lew (2009), Stephen Liang
(2002), Dan Marcus (2007), Rachel Rater (2009), Spencer Stockdale (2003), Dan
Upton (2005), Portman Wills (2002), Katie Winstanley (2003 and 2004), and Rebecca Zapfel (2009). William Aiello, Anna Chefter, Chris Frost, Jonathan Grier,
Thad Hughes, Alan Kay, Tim Koogle, Jerry McGann, Gary McGraw, Radhika Nagpal, Shawn O’Hargan, Mike Peck, and Judith Shatin also made important contributions to the class and book.
My deepest thanks are to my wife, Nora, who is a constant source of inspiration, support, and wonder.
Finally, my thanks to all past, present, and future students who use this book, without whom it would have no purpose.
Happy Computing!
David Evans
Charlottesville, Virginia
August 2011

Class photos: Spring 2003, Spring 2004, Spring 2005

1 Computing
In their capacity as a tool, computers will be but a ripple on the surface of our culture. In their capacity as intellectual challenge, they are without precedent in the cultural history of mankind.
Edsger Dijkstra, 1972 Turing Award Lecture

The first million years of hominid history produced tools to amplify, and later mechanize, our physical abilities to enable us to move faster, reach higher, and hit harder. We have developed tools that amplify physical force by the trillions and increase the speeds at which we can travel by the thousands.
Tools that amplify intellectual abilities are much rarer. While some animals have developed tools to amplify their physical abilities, only humans have developed tools to substantially amplify our intellectual abilities and it is those advances that have enabled humans to dominate the planet. The first key intellect amplifier was language. Language provided the ability to transmit our thoughts to others, as well as to use our own minds more effectively. The next key intellect amplifier was writing, which enabled the storage and transmission of thoughts over time and distance.
Computing is the ultimate mental amplifier—computers can mechanize any intellectual activity we can imagine. Automatic computing radically changes how humans solve problems, and even the kinds of problems we can imagine solving. Computing has changed the world more than any other invention of the past hundred years, and has come to pervade nearly all human endeavors. Yet, we are just at the beginning of the computing revolution; today’s computing offers just a glimpse of the potential impact of computing.
There are two reasons why everyone should study computing:
1. Nearly all of the most exciting and important technologies, arts, and sciences of today and tomorrow are driven by computing.
2. Understanding computing illuminates deep insights and questions into the nature of our minds, our culture, and our universe.
Anyone who has submitted a query to Google, watched Toy Story, had LASIK eye surgery, used a smartphone, seen a Cirque Du Soleil show, shopped with a credit card, or microwaved a pizza should be convinced of the first reason. None of these would be possible without the tremendous advances in computing over the past half century.
Although this book will touch on some exciting applications of computing, our primary focus is on the second reason, which may seem more surprising.

It may be true that you have to be able to read in order to fill out forms at the DMV, but that’s not why we teach children to read. We teach them to read for the higher purpose of allowing them access to beautiful and meaningful ideas.
Paul Lockhart, Lockhart’s Lament


Computing changes how we think about problems and how we understand the world. The goal of this book is to teach you that new way of thinking.

1.1 Processes, Procedures, and Computers

Computer science is the study of information processes. A process is a sequence of steps. Each step changes the state of the world in some small way, and the result of all the steps produces some goal state. For example, baking a cake, mailing a letter, and planting a tree are all processes. Because they involve physical things like sugar and dirt, however, they are not pure information processes.
Computer science focuses on processes that involve abstract information rather than physical things.
The boundaries between the physical world and pure information processes, however, are often fuzzy. Real computers operate in the physical world: they obtain input through physical means (e.g., a user pressing a key on a keyboard that produces an electrical impulse), and produce physical outputs (e.g., an image displayed on a screen). By focusing on abstract information, instead of the physical ways of representing and manipulating information, we simplify computation to its essence to better enable understanding and reasoning.

A mathematician is a machine for turning coffee into theorems.
Attributed to Paul Erdős

A procedure is a description of a process. A simple process can be described just by listing the steps. The list of steps is the procedure; the act of following them is the process. A procedure that can be followed without any thought is called a mechanical procedure. An algorithm is a mechanical procedure that is guaranteed to eventually finish.
For example, here is a procedure for making coffee, adapted from the actual directions that come with a major coffeemaker:
1. Lift and open the coffeemaker lid.
2. Place a basket-type filter into the filter basket.
3. Add the desired amount of coffee and shake to level the coffee.
4. Fill the decanter with cold, fresh water to the desired capacity.
5. Pour the water into the water reservoir.
6. Close the lid.
7. Place the empty decanter on the warming plate.
8. Press the ON button.

Describing processes by just listing steps like this has many limitations. First, natural languages are very imprecise and ambiguous. Following the steps correctly requires knowing lots of unstated assumptions. For example, step three assumes the operator understands the difference between coffee grounds and finished coffee, and can infer that this use of “coffee” refers to coffee grounds since the end goal of this process is to make drinkable coffee. Other steps assume the coffeemaker is plugged in and sitting on a flat surface.
One could, of course, add lots more details to our procedure and make the language more precise than this. Even when a lot of effort is put into writing precisely and clearly, however, natural languages such as English are inherently ambiguous. This is why the United States tax code is 3.4 million words long, but lawyers can still spend years arguing over what it really means.
Another problem with this way of describing a procedure is that the size of the description is proportional to the number of steps in the process. This is fine for simple processes that can be executed by humans in a reasonable amount of time, but the processes we want to execute on computers involve trillions of steps. This means we need more efficient ways to describe them than just listing each step one-by-one.
To program computers, we need tools that allow us to describe processes precisely and succinctly. Since the procedures are carried out by a machine, every step needs to be described; we cannot rely on the operator having “common sense” (for example, to know how to fill the coffeemaker with water without explaining that water comes from a faucet, and how to turn the faucet on). Instead, we need mechanical procedures that can be followed without any thinking.
A computer is a machine that can:
1. Accept input. Input could be entered by a human typing at a keyboard, received over a network, or provided automatically by sensors attached to the computer.
2. Execute a mechanical procedure, that is, a procedure where each step can be executed without any thought.
3. Produce output. Output could be data displayed to a human, but it could also be anything that effects the world outside the computer such as electrical signals that control how a device operates.
Computers exist in a wide range of forms, and thousands of computers are hidden in devices we use every day but don’t think of as computers, such as cars, phones, TVs, microwave ovens, and access cards. Our primary focus is on universal computers, which are computers that can perform all possible mechanical computations on discrete inputs except for practical limits on space and time. The next section explains what discrete inputs means; Chapters 6 and 12 explore more deeply what it means for a computer to be universal.


A computer terminal is not some clunky old television with a typewriter in front of it. It is an interface where the mind and body can connect with the universe and move bits of it about.
Douglas Adams

1.2 Measuring Computing Power

For physical machines, we can compare the power of different machines by measuring the amount of mechanical work they can perform within a given amount of time. This power can be captured with units like horsepower and watt. Physical power is not a very useful measure of computing power, though, since the amount of computing achieved for the same amount of energy varies greatly. Energy is consumed when a computer operates, but consuming energy is not the purpose of using a computer.
Two properties that measure the power of a computing machine are:
1. How much information can it process?
2. How fast can it process it?
We defer considering the second property until Part II, but consider the first question here.

1.2.1 Information

Informally, we use information to mean knowledge. But to understand information quantitatively, as something we can measure, we need a more precise way to think about information.


The way computer scientists measure information is based on how what is known changes as a result of obtaining the information. The primary unit of information is a bit. One bit of information halves the amount of uncertainty. It is equivalent to answering a “yes” or “no” question, where either answer is equally likely beforehand. Before learning the answer, there were two possibilities; after learning the answer, there is one.
We call a question with two possible answers a binary question. Since a bit can have two possible values, we often represent the values as 0 and 1.
For example, suppose we perform a fair coin toss but do not reveal the result.
Half of the time, the coin will land “heads”, and the other half of the time the coin will land “tails”. Without knowing any more information, our chances of guessing the correct answer are 1/2. One bit of information would be enough to convey either “heads” or “tails”; we can use 0 to represent “heads” and 1 to represent “tails”. So, the amount of information in a coin toss is one bit.
Similarly, one bit can distinguish between the values 0 and 1:

      Is it 1?
      /      \
    No        Yes
     |         |
     0         1

Example 1.1: Dice
How many bits of information are there in the outcome of tossing a six-sided die? There are six equally likely possible outcomes, so without any more information we have a one in six chance of guessing the correct value. One bit is not enough to identify the actual number, since one bit can only distinguish between two values. We could use five binary questions like this:

No
No
No
No
No
1

2?

3?

Yes

4?

Yes

5?

Yes

6?

Yes

Yes
6

5

4

3

2

This is quite inefficient, though, since we need up to five questions to identify the value (and on average, expect to need 3 1/3 questions). Can we identify the value with fewer than 5 questions?
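The 3 1/3 average can be checked directly. With the linear chain of questions, identifying each outcome costs between one and five questions; a minimal sketch (Python, used here purely for illustration):

```python
# Questions used by the linear strategy ("Is it 2?", "Is it 3?", ...):
# asking stops at the first "yes"; after five "no" answers the value
# must be 1, so both 6 and 1 cost five questions.
questions_needed = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 1: 5}

average = sum(questions_needed.values()) / 6
print(average)  # 3.33..., i.e. 3 1/3 questions on average
```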


Our goal is to identify questions where the “yes” and “no” answers are equally likely—that way, each answer provides the most information possible. This is not the case if we start with, “Is the value 6?”, since that answer is expected to be
“yes” only one time in six. Instead, we should start with a question like, “Is the value at least 4?”. Here, we expect the answer to be “yes” one half of the time, and the “yes” and “no” answers are equally likely. If the answer is “yes”, we know the result is 4, 5, or 6. With two more bits, we can distinguish between these three values (note that two bits is actually enough to distinguish among four different values, so some information is wasted here). Similarly, if the answer to the first question is no, we know the result is 1, 2, or 3. We need two more bits to distinguish which of the three values it is. Thus, with three bits, we can distinguish all six possible outcomes.

    >= 4?
      No  -> 3?
               No  -> 2?   (No -> 1,  Yes -> 2)
               Yes -> 3
      Yes -> 6?
               No  -> 5?   (No -> 4,  Yes -> 5)
               Yes -> 6

Three bits can convey more information than just six possible outcomes, however. In the binary question tree, there are some questions where the answer is not equally likely to be “yes” and “no” (for example, we expect the answer to
“Is the value 3?” to be “yes” only one out of three times). Hence, we are not obtaining a full bit of information with each question.
Each bit doubles the number of possibilities we can distinguish, so with three bits we can distinguish between 2 ∗ 2 ∗ 2 = 8 possibilities. In general, with n bits, we can distinguish between 2^n possibilities. Conversely, distinguishing among k possible values requires log_2 k bits. The logarithm is defined such that if a = b^c then log_b a = c. Since each bit has two possibilities, we use the logarithm base 2 to determine the number of bits needed to distinguish among a set of distinct possibilities. For our six-sided die, log_2 6 ≈ 2.58, so we need approximately 2.58 binary questions. But questions are discrete: we can’t ask 0.58 of a question, so we need to use three binary questions.
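The relationship between outcomes and bits can be checked directly. The sketch below is illustrative only (the book itself uses Scheme in later chapters); the function name `bits_needed` is our own:

```python
import math

def bits_needed(k):
    """Number of binary questions needed to distinguish among k equally likely values."""
    # log2(k) bits of information; questions are discrete, so round up.
    return math.ceil(math.log2(k))

# A coin toss needs 1 bit; a six-sided die needs 3 binary questions.
print(bits_needed(2), bits_needed(6), bits_needed(8))  # 1 3 3
```

Note that `bits_needed(6)` and `bits_needed(8)` are both 3: the die wastes some of the information its three questions could carry, as the text observes.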

Trees. Figure 1.1 depicts a structure of binary questions for distinguishing among eight values. We call this structure a binary tree. We will see many useful applications of tree-like structures in this book.
Computer scientists draw trees upside down. The root is the top of the tree, and the leaves are the numbers at the bottom (0, 1, 2, . . ., 7). There is a unique path from the root of the tree to each leaf. Thus, we can describe each of the eight



1.2. Measuring Computing Power

possible values using the answers to the questions down the tree. For example, if the answers are “No”, “No”, and “No”, we reach the leaf 0; if the answers are
“Yes”, “No”, “Yes”, we reach the leaf 5. Since there are no more than two possible answers for each node, we call this a binary tree.
We can describe any non-negative integer using bits in this way, by just adding additional levels to the tree. For example, if we wanted to distinguish between
16 possible numbers, we would add a new question, “Is it >= 8?”, to the top of the tree. If the answer is “No”, we use the tree in Figure 1.1 to distinguish numbers between 0 and 7. If the answer is “Yes”, we use a tree similar to the one in Figure 1.1, but add 8 to each of the numbers in the questions and the leaves. The depth of a tree is the length of the longest path from the root to any leaf. The example tree has depth three. A binary tree of depth d can distinguish up to 2^d different values.

    >= 4?
      No  -> >= 2?
               No  -> 1?   (No -> 0,  Yes -> 1)
               Yes -> 3?   (No -> 2,  Yes -> 3)
      Yes -> >= 6?
               No  -> 5?   (No -> 4,  Yes -> 5)
               Yes -> 7?   (No -> 6,  Yes -> 7)

Figure 1.1. Using three bits to distinguish eight possible values.
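The answers along a path in Figure 1.1 are exactly the binary digits of the leaf value (“Yes” as 1, “No” as 0, most significant digit first). A small sketch of this correspondence, with names of our own choosing:

```python
def tree_answers(value, depth=3):
    """Answers to the Figure 1.1 questions that lead to the given leaf."""
    assert 0 <= value < 2 ** depth
    bits = format(value, "0{}b".format(depth))  # e.g. 5 -> "101"
    return ["Yes" if b == "1" else "No" for b in bits]

print(tree_answers(0))  # ['No', 'No', 'No']
print(tree_answers(5))  # ['Yes', 'No', 'Yes']
```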

Units of Information. One byte is defined as eight bits. Hence, one byte of information corresponds to eight binary questions, and can distinguish among
2^8 (256) different values. For larger amounts of information, we use metric prefixes, but instead of scaling by factors of 1000 they scale by factors of 2^10 (1024).
Hence, one kilobyte is 1024 bytes; one megabyte is 2^20 (approximately one million) bytes; one gigabyte is 2^30 (approximately one billion) bytes; and one terabyte is 2^40 (approximately one trillion) bytes.
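The unit sizes follow mechanically from the factor-of-2^10 scaling; a quick sketch (variable names are ours):

```python
kilobyte = 2 ** 10   # 1024 bytes
megabyte = 2 ** 20   # 1 048 576 bytes, approximately one million
gigabyte = 2 ** 30   # approximately one billion bytes
terabyte = 2 ** 40   # approximately one trillion bytes

print(kilobyte, megabyte, gigabyte, terabyte)
```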
Exercise 1.1. Draw a binary tree with the minimum possible depth to:
a. Distinguish among the numbers 0, 1, 2, . . . , 15.
b. Distinguish among the 12 months of the year.


Exercise 1.2. How many bits are needed:
a. To uniquely identify any currently living human?
b. To uniquely identify any human who ever lived?
c. To identify any location on Earth within one square centimeter?
d. To uniquely identify any atom in the observable universe?
Exercise 1.3. The examples all use binary questions for which there are two possible answers. Suppose instead of basing our decisions on bits, we based them on trits, where one trit can distinguish among three equally likely values. For each trit, we can ask a ternary question (a question with three possible answers).
a. How many trits are needed to distinguish among eight possible values? (A convincing answer would show a ternary tree with the questions and answers for each node, and argue why it is not possible to distinguish all the values with a tree of lesser depth.)
b. [ ] Devise a general formula for converting between bits and trits. How many trits does it require to describe b bits of information?
Exploration 1.1: Guessing Numbers
The guess-a-number game starts with one player (the chooser) picking a number between 1 and 100 (inclusive) and secretly writing it down. The other player
(the guesser) attempts to guess the number. After each guess, the chooser responds with “correct” (the guesser guessed the number and the game is over),
“higher” (the actual number is higher than the guess), or “lower” (the actual number is lower than the guess).
a. Explain why the guesser can receive slightly more than one bit of information for each response.
b. Assuming the chooser picks the number randomly (that is, all values between
1 and 100 are equally likely), what are the best first guesses? Explain why these guesses are better than any other guess. (Hint: there are two equally good first guesses.)
c. What is the maximum number of guesses the second player should need to always find the number?
d. What is the average number of guesses needed (assuming the chooser picks the number randomly as before)?
e. [ ] Suppose instead of picking randomly, the chooser picks the number with the goal of maximizing the number of guesses the second player will need.
What number should she pick?
f. [ ] How should the guesser adjust her strategy if she knows the chooser is picking adversarially?
g. [ ] What are the best strategies for both players in the adversarial guess-anumber game where chooser’s goal is to pick a starting number that maximizes the number of guesses the guesser needs, and the guesser’s goal is to guess the number using as few guesses as possible.
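One way to explore part c is to simulate a guesser who always halves the remaining range. This sketch is our own (it does not answer the starred parts, which ask about adversarial play):

```python
def guesses_needed(secret, lo=1, hi=100):
    """Count guesses a midpoint-guessing strategy makes to find secret."""
    count = 0
    while True:
        count += 1
        guess = (lo + hi) // 2
        if guess == secret:
            return count
        elif secret > guess:   # chooser says "higher"
            lo = guess + 1
        else:                  # chooser says "lower"
            hi = guess - 1

worst = max(guesses_needed(n) for n in range(1, 101))
print(worst)  # 7
```

Halving the range of 100 values never needs more than seven guesses, consistent with the observation that each response can carry slightly more than one bit.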


Exploration 1.2: Twenty Questions
The two-player game twenty questions starts with the first player (the answerer) thinking of an object, and declaring if the object is an animal, vegetable, or mineral (meant to include all non-living things). After this, the second player (the questioner), asks binary questions to try and guess the object the first player thought of. The first player answers each question “yes” or “no”. The website http://www.20q.net/ offers a web-based twenty questions game where a human acts as the answerer and the computer as the questioner. The game is also sold as a $10 stand-alone toy (shown in the picture).
(Image: the 20Q handheld game, from ThinkGeek)

a. How many different objects can be distinguished by a perfect questioner for the standard twenty questions game?
b. What does it mean for the questioner to play perfectly?
c. Try playing the 20Q game at http://www.20q.net. Did it guess your item?
d. Instead of just “yes” and “no”, the 20Q game offers four different answers: “Yes”, “No”, “Sometimes”, and “Unknown”. (The website version of the game also has “Probably”, “Irrelevant”, and “Doubtful”.) If all four answers were equally likely (and meaningful), how many items could be distinguished in 20 questions?
e. For an Animal, the first question 20Q sometimes asks is “Does it jump?” (20Q randomly selects from a few different first questions). Is this a good first question?
f. [ ] How many items do you think 20Q has data for?
g. [ ] Speculate on how 20Q could build up its database.

1.2.2 Representing Data

We can use sequences of bits to represent many kinds of data. All we need to do is think of the right binary questions for which the bits give answers that allow us to represent each possible value. Next, we provide examples showing how bits can be used to represent numbers, text, and pictures.

“There are only 10 types of people in the world: those who understand binary, and those who don’t.” (Infamous T-Shirt)

Numbers. In the previous section, we identified a number using a tree where each node asks a binary question and the branches correspond to the “Yes” and
“No” answers. A more compact way of writing down our decisions following the tree is to use 0 to encode a “No” answer, and 1 to encode a “Yes” answer and describe a path to a leaf by a sequence of 0s and 1s—the “No”, “No”, “No” path to
0 is encoded as 000, and the “Yes”, “No”, “Yes” path to 5 is encoded as 101. This is known as the binary number system. Whereas the decimal number system uses ten as its base (there are ten decimal digits, and the positional values increase as powers of ten), the binary system uses two as its base (there are two binary digits, and the positional values increase as powers of two).
For example, the binary number 10010110 represents the decimal value 150:
Binary:           1     0     0     1     0     1     1     0
Value:          2^7   2^6   2^5   2^4   2^3   2^2   2^1   2^0
Decimal Value:  128    64    32    16     8     4     2     1
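The positional values can be summed mechanically. A sketch, cross-checked against Python's built-in base-2 conversion (the function name is ours):

```python
def binary_to_decimal(bits):
    """Sum each digit times its positional value (a power of two)."""
    total = 0
    for digit in bits:
        total = total * 2 + int(digit)
    return total

print(binary_to_decimal("10010110"))  # 150
print(int("10010110", 2))             # 150, built-in cross-check
print(format(150, "b"))               # '10010110'
```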


As in the decimal number system, the value of each binary digit depends on its position. By using more bits, we can represent larger numbers. With enough bits, we can represent any natural number this way. The more bits we have, the larger the set of possible numbers we can represent. As we saw with the binary decision trees, n bits can be used to represent 2^n different numbers.
Discrete Values. We can use a finite sequence of bits to describe any value that is selected from a countable set of possible values. A set is countable if there is a way to assign a unique natural number to each element of the set. All finite sets are countable. Some, but not all, infinite sets are countable. For example, there appear to be more integers than there are natural numbers, since for each natural number, n, there are two corresponding integers, n and −n. But the integers are in fact countable: we can enumerate them as 0, 1, −1, 2, −2, 3, −3, 4, −4, . . ., assigning a unique natural number to each integer in turn.
Other sets, such as the real numbers, are uncountable. Georg Cantor proved this using a technique known as diagonalization. Suppose the real numbers are enumerable. This means we could list all the real numbers in order, so we could assign a unique integer to each number. For example, considering just the real numbers between 0 and 1, our enumeration might be:
1        .00000000000000 . . .
2        .25000000000000 . . .
3        .33333333333333 . . .
4        .66666666666666 . . .
· · ·
57236    .141592653589793 . . .
· · ·

Cantor proved by contradiction that there is no way to enumerate all the real numbers. The trick is to produce a new real number that is not part of the enumeration. We can do this by constructing a number whose first digit is different from the first digit of the first number, whose second digit is different from the second digit of the second number, etc. For the example enumeration above, we might choose .1468 . . ..
The kth digit of the constructed number is different from the kth digit of the number k in the enumeration. Since the constructed number differs in at least one digit from every enumerated number, it does not match any of the enumerated numbers exactly. Thus, there is a real number that is not included in the enumeration list, and it is impossible to enumerate all the real numbers.1
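Cantor's construction can be mimicked on any finite list of digit strings. This sketch is our own; following the footnote's suggestion, it never produces a 0 or a 9, sidestepping the .1999… = .2000… issue:

```python
def diagonal(rows):
    """Return a digit string differing from rows[k] in digit k, avoiding 0 and 9."""
    return "".join("4" if row[k] != "4" else "5" for k, row in enumerate(rows))

rows = ["0000", "2500", "3333", "6666"]   # digits after the decimal point
d = diagonal(rows)
print(d)  # '4444'
# The constructed number differs from every row in at least one digit.
print(all(d[k] != rows[k][k] for k in range(len(rows))))  # True
```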
Digital computers2 operate on inputs that are discrete values. Continuous values, such as real numbers, can only be approximated by computers. Next, we consider how two types of data, text and images, can be represented by computers. The first type, text, is discrete and can be represented exactly; images are continuous, and can only be represented approximately.

1 Alert readers should be worried that this isn’t quite correct since the resulting number may be a different way to represent the same real number (for example, .1999999999999 . . . = .20000000000 . . . even though they differ in each digit). This technical problem can be fixed by placing some restrictions on how the modified digits are chosen to avoid infinite repetitions.
2 This is, indeed, part of the definition of a digital computer. An analog computer operates on continuous values. In Chapter 6, we explain more of the inner workings of a computer and why nearly all computers today are digital. We use computer to mean a digital computer in this book.

The property that there are more real numbers than natural numbers has important implications for what can and cannot be computed, which we return to in Chapter 12.
Text. The set of all possible sequences of characters is countable. One way to see this is to observe that we could give each possible text fragment a unique number, and then use that number to identify the item. For example we could enumerate all texts alphabetically by length (here, we limit the characters to lowercase letters): a, b, c, . . ., z, aa, ab, . . ., az, ba, . . ., zz, aaa, . . .
We have seen that we can represent all the natural numbers with a sequence of bits, so once we have a mapping between each item in the set and a unique natural number, we can represent all of the items in the set. For the representation to be useful, though, we usually need a way to construct the corresponding number for any item directly.
So, instead of enumerating a mapping between all possible character sequences and the natural numbers, we need a process for converting any text to a unique number that represents that text. Suppose we limit our text to characters in the standard English alphabet. If we include lower-case letters (26), upper-case letters (26), and punctuation (space, comma, period, newline, semi-colon), we have 57 different symbols to represent. We can assign a unique number to each symbol, and encode the corresponding number with six bits (this leaves seven values unused since six bits can distinguish 64 values). For example, we could encode using the mapping shown in Table 1.1. The first bit answers the question: “Is it an uppercase letter after F or a special character?”. When the first bit is 0, the second bit answers the question: “Is it after p?”.

a    000000        A    011010        space    110100
b    000001        B    011011        ,        110101
c    000010        C    011100        .        110110
d    000011        ···                newline  110111
···                F    011111        ;        111000
p    001111        G    100000        unused   111001
q    010000        ···                ···
···                Y    110010        unused   111110
z    011001        Z    110011        unused   111111

Table 1.1. Encoding characters using bits.
This is one way to encode the alphabet, but not the one typically used by computers.
One commonly used encoding known as ASCII (the American Standard Code for Information Interchange) uses seven bits so that 128 different symbols can be encoded. The extra symbols are used to encode more special characters.

Once we have a way of mapping each individual letter to a fixed-length bit sequence, we could write down any sequence of letters by just concatenating the bits encoding each letter. So, the text CS is encoded as 011100 101100. We could write down text of length n that is written in the 57-symbol alphabet using this encoding using 6n bits. To convert the number back into text, just invert the mapping by replacing each group of six bits with the corresponding letter.
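The Table 1.1 mapping can be sketched directly. The symbol order below follows the table; the helper names are ours:

```python
symbols = (
    [chr(c) for c in range(ord("a"), ord("z") + 1)] +   # codes 0-25
    [chr(c) for c in range(ord("A"), ord("Z") + 1)] +   # codes 26-51
    [" ", ",", ".", "\n", ";"]                          # codes 52-56
)

def encode(text):
    """Concatenate the six-bit code of each character."""
    return "".join(format(symbols.index(ch), "06b") for ch in text)

def decode(bits):
    """Invert the mapping, six bits at a time."""
    return "".join(symbols[int(bits[i:i + 6], 2)] for i in range(0, len(bits), 6))

print(encode("CS"))          # '011100101100'
print(decode(encode("CS")))  # 'CS'
```

The result for "CS" matches the encoding given in the text.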
Rich Data. We can also use bit sequences to represent complex data like pictures, movies, and audio recordings. First, consider a simple black and white picture:

Since the picture is divided into discrete squares known as pixels, we could encode this as a sequence of bits by using one bit to encode the color of each pixel
(for example, using 1 to represent black, and 0 to represent white). This image is 16x16, so it has 256 pixels in total. We could represent the image using a sequence of 256 bits (starting from the top left corner):
0000011111100000
0000100000010000
0011000000001100
0010000000000100
···
0000011111100000
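The bit rows shown (the middle rows are elided in the text) can be rendered back to a picture to check the encoding; a sketch with display characters of our own choosing:

```python
def render(rows):
    """Draw each 1 as '#' (black) and each 0 as '.' (white)."""
    return "\n".join(row.replace("1", "#").replace("0", ".") for row in rows)

rows = ["0000011111100000",
        "0000100000010000",
        "0011000000001100",
        "0010000000000100"]  # remaining rows of the 16x16 image omitted

print(render(rows))
```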
What about complex pictures that are not divided into discrete squares or a fixed number of colors, like Van Gogh’s Starry Night?

Different wavelengths of electromagnetic radiation have different colors. For example, light with wavelengths between 625 and 730 nanometers appears red.
But, each wavelength of light has a slightly different color; for example, light with wavelength 650 nanometers would be a different color (albeit imperceptible to humans) from light of wavelength 650.0000001 nanometers. There are arguably infinitely many different colors, corresponding to different wavelengths of visible light.3 Since the colors are continuous and not discrete, there is no way to map each color to a unique, finite bit sequence.
3 Whether there are actually infinitely many different colors comes down to the question of whether the space-time of the universe is continuous or discrete. Certainly in our common perception it seems to be continuous—we can imagine dividing any length into two shorter lengths. In reality, this may not be the case at extremely tiny scales. It is not known if time can continue to be subdivided below 10^−40 of a second.



On the other hand, the human eye and brain have limits. We cannot actually perceive infinitely many different colors; at some point the wavelengths are close enough that we cannot distinguish them. Ability to distinguish colors varies, but most humans can perceive only a few million different colors. The set of colors that can be distinguished by a typical human is finite; any finite set is countable, so we can map each distinguishable color to a unique bit sequence.
A common way to represent color is to break it into its three primary components (red, green, and blue), and record the intensity of each component. The more bits available to represent a color, the more different colors that can be represented. Thus, we can represent a picture by recording the approximate color at each point. If space in the universe is continuous, there are infinitely many points.
But, as with color, once the points get smaller than a certain size they are imperceptible. We can approximate the picture by dividing the canvas into small regions and sampling the average color of each region. The smaller the sample regions, the more bits we will have and the more detail that will be visible in the image. With enough bits to represent color, and enough sample points, we can represent any image as a sequence of bits.
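A common concrete choice, which we assume here for illustration, is eight bits per primary component, giving 24 bits per pixel and about 16.7 million representable colors. A sketch of packing and unpacking such a color (names ours):

```python
def pack_rgb(red, green, blue):
    """Pack three 8-bit intensities into one 24-bit value."""
    assert all(0 <= c <= 255 for c in (red, green, blue))
    return (red << 16) | (green << 8) | blue

def unpack_rgb(color):
    """Recover the three component intensities."""
    return (color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF

print(hex(pack_rgb(255, 0, 0)))          # 0xff0000, full-intensity red
print(unpack_rgb(pack_rgb(18, 52, 86)))  # (18, 52, 86)
print(2 ** 24)                           # 16777216 representable colors
```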
Summary. We can use sequences of bits to represent any natural number exactly, and hence, represent any member of a countable set using a sequence of bits. The more bits we use, the more different values that can be represented; with n bits we can represent 2^n different values.
We can also use sequences of bits to represent rich data like images, audio, and video. Since the world we are trying to represent is continuous there are infinitely many possible values, and we cannot represent these objects exactly with any finite sequence of bits. However, since human perception is limited, with enough bits we can represent any of these adequately well. Finding ways to represent data that are both efficient and easy to manipulate and interpret is a constant challenge in computing. Manipulating sequences of bits is awkward, so we need ways of thinking about bit-level representations of data at higher levels of abstraction. Chapter 5 focuses on ways to manage complex data.

1.2.3 Growth of Computing Power

The number of bits a computer can store gives an upper limit on the amount of information it can process. Looking at the number of bits different computers can store over time gives us a rough indication of how computing power has increased. Here, we consider two machines: the Apollo Guidance Computer and a modern laptop.

(Image: AGC User Interface)

The Apollo Guidance Computer was developed in the early 1960s to control the flight systems of the Apollo spacecraft. It might be considered the first personal computer, since it was designed to be used in real-time by a single operator (an astronaut in the Apollo capsule). Most earlier computers required a full room, and were far too expensive to be devoted to a single user; instead, they processed jobs submitted by many users in turn. Since the Apollo Guidance Computer was designed to fit in the Apollo capsule, it needed to be small and light.
Its volume was about a cubic foot and it weighed 70 pounds. The AGC was the first computer built using integrated circuits, miniature electronic circuits that can perform simple logical operations such as performing the logical and


of two values. The AGC used about 4000 integrated circuits, each one being able to perform a single logical operation and costing $1000. The AGC consumed a significant fraction of all integrated circuits produced in the mid-1960s, and the project spurred the growth of the integrated circuit industry.
The AGC had 552 960 bits of memory (of which only 61 440 bits were modifiable, the rest were fixed). The smallest USB flash memory you can buy today (from SanDisk in December 2008) is the 1 gigabyte Cruzer for $9.99; 1 gigabyte (GB) is 2^30 bytes or approximately 8.6 billion bits, about 140 000 times the amount of memory in the AGC (and all of the Cruzer memory is modifiable). A typical low-end laptop today has 2 gigabytes of RAM (fast memory close to the processor that loses its state when the machine is turned off) and 250 gigabytes of hard disk memory (slow memory that persists when the machine is turned off); for under $600 today we get a computer with over 4 million times the amount of memory the AGC had.
Improving by a factor of 4 million corresponds to doubling just over 22 times.
The amount of computing power approximately doubled every two years between the AGC in the early 1960s and a modern laptop today (2009). This property of exponential improvement in computing power is known as Moore’s Law.
Gordon Moore, a co-founder of Intel, observed in 1965 that the number of components that can be built in integrated circuits for the same cost was approximately doubling every year (revisions to Moore’s observation have put the doubling rate at approximately 18 months instead of one year). This progress has been driven by the growth of the computing industry, increasing the resources available for designing integrated circuits. Another driver is that today’s technology is used to design the next technology generation. Improvement in computing power has followed this exponential growth remarkably closely over the past 40 years, although there is no law that this growth must continue forever.
Although our comparison between the AGC and a modern laptop shows an impressive factor of 4 million improvement, it is much slower than Moore’s law would suggest. Instead of 22 doublings in power since 1963, there should have been 30 doublings (using the 18 month doubling rate). This would produce an improvement of one billion times instead of just 4 million. The reason is our comparison is very unequal relative to cost: the AGC was the world’s most expensive small computer of its time, reflecting many millions of dollars of government funding. Computing power available for similar funding today is well over a billion times more powerful than the AGC.
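The doubling counts in this comparison can be checked with logarithms (a sketch; the factors are the text's approximations):

```python
import math

# A factor of 4 million corresponds to about 22 doublings, since 2^22 ~ 4.2 million.
print(round(math.log2(4_000_000)))      # 22
# A factor of one billion corresponds to about 30 doublings.
print(round(math.log2(1_000_000_000)))  # 30
print(2 ** 22)                          # 4194304
```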

1.3 Science, Engineering, and the Liberal Arts

Much ink and many bits have been spent debating whether computer science is an art, an engineering discipline, or a science. The confusion stems from the nature of computing as a new field that does not fit well into existing silos. In fact, computer science fits into all three kingdoms, and it is useful to approach computing from all three perspectives.
Science. Traditional science is about understanding nature through observation. The goal of science is to develop general and predictive theories that allow us to understand aspects of nature deeply enough to make accurate quantitative predications. For example, Newton’s law of universal gravitation makes predictions about how masses will move. The more general a theory is the better. A key,

“Moore’s law is a violation of Murphy’s law. Everything gets better and better.” (Gordon Moore)


as yet unachieved, goal of science is to find a universal law that can describe all physical behavior at scales from the smallest subparticle to the entire universe, and all the bosons, muons, dark matter, black holes, and galaxies in between.
Science deals with real things (like bowling balls, planets, and electrons) and attempts to make progress toward theories that predict increasingly precisely how these real things will behave in different situations.
Computer science focuses on artificial things like numbers, graphs, functions, and lists. Instead of dealing with physical things in the real world, computer science concerns abstract things in a virtual world. The numbers we use in computations often represent properties of physical things in the real world, and with enough bits we can model real things with arbitrary precision. But, since our focus is on abstract, artificial things rather than physical things, computer science is not a traditional natural science but a more abstract field like mathematics.
Like mathematics, computing is an essential tool for modern science, but when we study computing on artificial things it is not a natural science itself.
In a deeper sense, computing pervades all of nature. A long term goal of computer science is to develop theories that explain how nature computes. One example of computing in nature comes from biology. Complex life exists because nature can perform sophisticated computing. People sometimes describe DNA as a “blueprint”, but it is really much better thought of as a program. Whereas a blueprint describes what a building should be when it is finished, giving the dimensions of walls and how they fit together, the DNA of an organism encodes a process for growing that organism. A human genome is not a blueprint that describes the body plan of a human, it is a program that turns a single cell into a complex human given the appropriate environment. The process of evolution
(which itself is an information process) produces new programs, and hence new species, through the process of natural selection on mutated DNA sequences.
Understanding how both these processes work is one of the most interesting and important open scientific questions, and it involves deep questions in computer science, as well as biology, chemistry, and physics.

“Scientists study the world as it is; engineers create the world that never has been.” (Theodore von Kármán)

The questions we consider in this book focus on the question of what can and cannot be computed. This is both a theoretical question (what can be computed by a given theoretical model of a computer) and a pragmatic one (what can be computed by physical machines we can build today, as well as by anything possible in our universe).
Engineering. Engineering is about making useful things. Engineering is often distinguished from crafts in that engineers use scientific principles to create their designs, and focus on designing under practical constraints. As William
Wulf and George Fisher put it:4
Whereas science is analytic in that it strives to understand nature, or what is, engineering is synthetic in that it strives to create. Our own favorite description of what engineers do is “design under constraint”. Engineering is creativity constrained by nature, by cost, by concerns of safety, environmental impact, ergonomics, reliability, manufacturability, maintainability–the whole long list of such “ilities”. To be sure, the realities of nature is one of the constraint sets we work under, but it is far from the only one, it is seldom the hardest one, and almost never the limiting one.

4 William Wulf and George Fisher, A Makeover for Engineering Education, Issues in Science and Technology, Spring 2002 (http://www.issues.org/18.3/p wulf.html).
Computer scientists do not typically face the natural constraints faced by civil and mechanical engineers—computer programs are massless and not exposed to the weather, so programmers do not face the kinds of physical constraints like gravity that impose limits on bridge designers. As we saw from the Apollo Guidance Computer comparison, practical constraints on computing power change rapidly — the one billion times improvement in computing power is unlike any change in physical materials5 . Although we may need to worry about manufacturability and maintainability of storage media (such as the disk we use to store a program), our focus as computer scientists is on the abstract bits themselves, not how they are stored.
Computer scientists, however, do face many constraints. A primary constraint is the capacity of the human mind—there is a limit to how much information a human can keep in mind at one time. As computing systems get more complex, there is no way for a human to understand the entire system at once. To build complex systems, we need techniques for managing complexity. The primary tool computer scientists use to manage complexity is abstraction. Abstraction is a way of giving a name to something in a way that allows us to hide unnecessary details. By using carefully designed abstractions, we can construct complex systems with reliable properties while limiting the amount of information a human designer needs to keep in mind at any one time.
Liberal Arts. The notion of the liberal arts emerged during the middle ages to distinguish education for the purpose of expanding the intellects of free people from the illiberal arts such as medicine and carpentry that were pursued for economic purposes. The liberal arts were intended for people who did not need to learn an art to make a living, but instead had the luxury to pursue purely intellectual activities for their own sake. The traditional seven liberal arts started with the Trivium (three roads), focused on language:6
• Grammar — “the art of inventing symbols and combining them to express thought”
• Rhetoric — “the art of communicating thought from one mind to another, the adaptation of language to circumstance”
• Logic — “the art of thinking”
The Trivium was followed by the Quadrivium, focused on numbers:





Arithmetic — “theory of number”
Geometry — “theory of space”
Music — “application of the theory of number”
Astronomy — “application of the theory of space”

All of these have strong connections to computer science, and we will touch on each of them to some degree in this book.
Language is essential to computing since we use the tools of language to describe information processes. The next chapter discusses the structure of language, and throughout this book we consider how to efficiently use and combine symbols to express meanings. Rhetoric encompasses communicating thoughts between minds. In computing, we are not typically communicating directly between minds, but we see many forms of communication between entities: interfaces between components of a program, as well as protocols used to enable multiple computing systems to communicate (for example, the HTTP protocol defines how a web browser and web server interact), and communication between computer programs and human users. The primary tool for understanding what computer programs mean, and hence, for constructing programs with particular meanings, is logic. Hence, the traditional trivium liberal arts of language and logic permeate computer science.

5 For example, the highest strength density material available today, carbon nanotubes, are perhaps 300 times stronger than the best material available 50 years ago.
6 The quotes defining each liberal art are from Miriam Joseph (edited by Marguerite McGlinn), The Trivium: The Liberal Arts of Logic, Grammar, and Rhetoric, Paul Dry Books, 2002.

“I must study politics and war that my sons may have liberty to study mathematics and philosophy. My sons ought to study mathematics and philosophy, geography, natural history, naval architecture, navigation, commerce, and agriculture, in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry, and porcelain.” (John Adams, 1780)
The connections between computing and the quadrivium arts are also pervasive. We have already seen how computers use sequences of bits to represent numbers. Chapter 6 examines how machines can perform basic arithmetic operations. Geometry is essential for computer graphics, and graph theory is also important for computer networking. The harmonic structures in music have strong connections to the recursive definitions introduced in Chapter 4 and recurring throughout this book.7 Unlike the other six liberal arts, astronomy is not directly connected to computing, but computing is an essential tool for doing modern astronomy.
Although learning about computing qualifies as an illiberal art (that is, it can have substantial economic benefits for those who learn it well), computer science also covers at least six of the traditional seven liberal arts.

1.4 Summary and Roadmap

Computer scientists think about problems differently. When confronted with a problem, a computer scientist does not just attempt to solve it. Instead, computer scientists think about a problem as a mapping between its inputs and desired outputs, develop a systematic sequence of steps for solving the problem for any possible input, and consider how the number of steps required to solve the problem scales as the input size increases.
The rest of this book presents a whirlwind introduction to computer science.
We do not cover any topics in great depth, but rather provide a broad picture of what computer science is, how to think like a computer scientist, and how to solve problems.
Part I: Defining Procedures. Part I focuses on how to define procedures that perform desired computations. The nature of the computer forces solutions to be expressed precisely in a language the computer can interpret. This means a computer scientist needs to understand how languages work and exactly what phrases in a language mean. Natural languages like English are too complex and inexact for this, so we need to invent and use new languages that are simpler, more structured, and less ambiguously defined than natural languages. Chapter 2 focuses on language, and during the course of this book we will use language to precisely describe processes and how languages are interpreted.
The computer frees humans from having to actually carry out the steps needed to solve the problem. Without complaint, boredom, or rebellion, it dutifully executes the exact steps the program specifies. And it executes them at a remarkable rate — billions of simple steps in each second on a typical laptop. This changes not just the time it takes to solve a problem, but qualitatively changes the kinds of problems we can solve, and the kinds of solutions worth considering. Problems like sequencing the human genome, simulating the global climate, and making a photomosaic not only could not have been solved without computing, but perhaps could not have even been envisioned. Chapter 3 introduces programming, and Chapter 4 develops some techniques for constructing programs that solve problems. To represent more interesting problems, we need ways to manage more complex data. Chapter 5 concludes Part I by exploring ways to represent data and define procedures that operate on complex data.

7 See Douglas Hofstadter's Gödel, Escher, Bach for lots of interesting examples of connections between computing and music.
Part II: Analyzing Procedures. Part II considers the problem of estimating the cost required to execute a procedure. This requires understanding how machines can compute (Chapter 6), and mathematical tools for reasoning about how cost grows with the size of the inputs to a procedure (Chapter 7). Chapter 8 provides some extended examples that apply these techniques.
Part III: Improving Expressiveness. The techniques from Part I and II are sufficient for describing all computations. Our goal, however, is to be able to define concise, elegant, and efficient procedures for performing desired computations.
Part III presents techniques that enable more expressive procedures.
Part IV: The Limits of Computing. We hope that by the end of Part III, readers will feel confident that they could program a computer to do just about anything. In Part IV, we consider the question of what can and cannot be done by a mechanical computer. A large class of interesting problems cannot be solved by any computer, even with unlimited time and space.
Themes. Much of the book will revolve around three very powerful ideas that are prevalent throughout computing:
Recursive definitions. A recursive definition defines a thing in terms of smaller instances of itself. A simple example is defining your ancestors as (1) your parents, and (2) the ancestors of your ancestors. Recursive definitions can define an infinitely large set with a small description. They also provide a powerful technique for solving problems by breaking a problem into solving a simple instance of the problem and showing how to solve a larger instance of the problem by using a solution to a smaller instance. We use recursive definitions to define infinite languages in Chapter 2, to solve problems in Chapter 4, and to build complex data structures in Chapter 5. In later chapters, we see how language interpreters themselves can be defined recursively.
Universality. Computers are distinguished from other machines in that their behavior can be changed by a program. Procedures themselves can be described using just bits, so we can write procedures that process procedures as inputs and that generate procedures as outputs. Considering procedures as data is both a powerful problem solving tool, and a useful way of thinking about the power and fundamental limits of computing. We introduce the use of procedures as inputs and outputs in Chapter 4, and see how generated procedures can be packaged with state to model objects in Chapter 10. One of the most fundamental results in computing is that any machine that can perform a few simple operations is powerful enough to perform any computation, and in this deep sense, all mechanical computers are equivalent. We introduce a model of computation in Chapter 6, and reason about the limits of computation in Chapter 12.
Abstraction. Abstraction is a way of hiding details by giving things names. We use abstraction to manage complexity. Good abstractions hide unnecessary details so they can be used to build complex systems without needing to understand all the details of the abstraction at once. We introduce procedural abstraction in Chapter 4, data abstraction in Chapter 5, abstraction using objects in Chapter 10, and many other examples of abstraction throughout this book.
Throughout this book, these three themes will recur recursively, universally, and abstractly as we explore the art and science of how to instruct computing machines to perform useful tasks, reason about the resources needed to execute a particular procedure, and understand the fundamental and practical limits on what computers can do.

2 Language
Belittle! What an expression! It may be an elegant one in Virginia, and even perfectly intelligible; but for our part, all we can do is to guess at its meaning. For shame, Mr. Jefferson!
European Magazine and London Review, 1787
(reviewing Thomas Jefferson’s Notes on the State of Virginia)

The most powerful tool we have for communication is language. This is true whether we are considering communication between two humans, between a human programmer and a computer, or between a network of computers. In computing, we use language to describe procedures and use machines to turn descriptions of procedures into executing processes. This chapter is about what language is, how language works, and ways to define languages.

2.1

Surface Forms and Meanings

A language is a set of surface forms and meanings, and a mapping between the surface forms and their associated meanings. In the earliest human languages, the surface forms were sounds, but surface forms can be anything that can be perceived by the communicating parties such as drum beats, hand gestures, or pictures.

A natural language is a language spoken by humans, such as English or Swahili. Natural languages are very complex since they have evolved over many thousands of years of individual and cultural interaction. We focus on designed languages that are created by humans for a specific purpose such as for expressing procedures to be executed by computers.

We focus on languages where the surface forms are text. In a textual language, the surface forms are linear sequences of characters. A string is a sequence of zero or more characters. Each character is a symbol drawn from a finite set known as an alphabet. For English, the alphabet is the set { a, b, c, . . . , z} (for the full language, capital letters, numerals, and punctuation symbols are also needed). A simple communication system can be described using a table of surface forms and their associated meanings. For example, this table describes a communication system between traffic lights and drivers:
Surface Form    Meaning
Green           Go
Yellow          Caution
Red             Stop

Communication systems involving humans are notoriously imprecise and subjective. A driver and a police officer may disagree on the actual meaning of the
Yellow symbol, and may even disagree on which symbol is being transmitted by the traffic light at a particular time. Communication systems for computers demand precision: we want to know what our programs will do, so it is important that every step they make is understood precisely and unambiguously.
The method of defining a communication system by listing a table of <Symbol, Meaning> pairs can work adequately only for trivial communication systems. The number of possible meanings that can be expressed is limited by the number of entries in the table. It is impossible to express any new meaning since all meanings must already be listed in the table!
Languages and Infinity. A useful language must be able to express infinitely many different meanings. Hence, there must be a way to generate new surface forms and guess their meanings (see Exercise 2.1). No finite representation, such as a printed table, can contain all the surface forms and meanings in an infinite language. One way to generate infinitely large sets is to use repeating patterns. For example, most humans would interpret the notation: “1, 2, 3, . . . ” as the set of all natural numbers. We interpret the “. . . ” as meaning keep doing the same thing for ever. In this case, it means keep adding one to the preceding number. Thus, with only a few numbers and symbols we can describe a set containing infinitely many numbers. As discussed in Section 1.2.1, the language of the natural numbers is enough to encode all meanings in any countable set.
But, finding a sensible mapping between most meanings and numbers is nearly impossible. The surface forms do not correspond closely enough to the ideas we want to express to be a useful language.

2.2 Language Construction

To define more expressive infinite languages, we need a richer system for constructing new surface forms and associated meanings. We need ways to describe languages that allow us to define an infinitely large set of surface forms and meanings with a compact notation. The approach we use is to define a language by defining a set of rules that produce exactly the set of surface forms in the language.
Components of Language. A language is composed of:
• primitives — the smallest units of meaning.
• means of combination — rules for building new language elements by combining simpler ones.
The primitives are the smallest meaningful units (in natural languages these are known as morphemes). A primitive cannot be broken into smaller parts whose meanings can be combined to produce the meaning of the unit. The means of combination are rules for building words from primitives, and for building phrases and sentences from words.
Since we have rules for producing new words, not all words are primitives. For example, we can create a new word by adding anti- in front of an existing word.


The meaning of the new word can be inferred as “against the meaning of the original word”. Rules like this one mean anyone can invent a new word, and use it in communication in ways that will probably be understood by listeners who have never heard the word before.
For example, the verb freeze means to pass from a liquid state to a solid state; antifreeze is a substance designed to prevent freezing. English speakers who know the meaning of freeze and anti- could roughly guess the meaning of antifreeze even if they have never heard the word before.1
Primitives are the smallest units of meaning, not based on the surface forms.
Both anti and freeze are primitive; they cannot be broken into smaller parts with meaning. We can break anti- into two syllables, or four letters, but those sub-components do not have meanings that could be combined to produce the meaning of the primitive.
Means of Abstraction. In addition to primitives and means of combination, powerful languages have an additional type of component that enables economic communication: means of abstraction.
Means of abstraction allow us to give a simple name to a complex entity. In English, the means of abstraction are pronouns like "she", "it", and "they". The meaning of a pronoun depends on the context in which it is used. It abstracts a complex meaning with a simple word. For example, the it in the previous sentence abstracts "the meaning of a pronoun", but the it in the sentence before that one abstracts "a pronoun".
In natural languages, there are a limited number of means of abstraction. English, in particular, has a very limited set of pronouns for abstracting people.
It has she and he for abstracting a female or male person, respectively, but no gender-neutral pronouns for abstracting a person of either sex. The interpretation of what a pronoun abstracts in natural languages is often confusing. For example, it is unclear what the it in this sentence refers to. Languages for programming computers need means of abstraction that are both powerful and unambiguous.
Exercise 2.1. According to the Guinness Book of World Records, the longest word in the English language is floccinaucinihilipilification, meaning “The act or habit of describing or regarding something as worthless”. This word was reputedly invented by a non-hippopotomonstrosesquipedaliophobic student at Eton who combined four words in his Latin textbook. Prove Guinness wrong by identifying a longer English word. An English speaker (familiar with floccinaucinihilipilification and the morphemes you use) should be able to deduce the meaning of your word.
Exercise 2.2. Merriam-Webster’s word for the year for 2006 was truthiness, a word invented and popularized by Stephen Colbert. Its definition is, “truth that comes from the gut, not books”. Identify the morphemes that are used to build truthiness, and explain, based on its composition, what truthiness should mean.
1 Guessing that it is a verb meaning to pass from the solid to liquid state would also be reasonable.
This shows how imprecise and ambiguous natural languages are; for programming computers, we need the meanings of constructs to be clearly determined.


Exercise 2.3. According to the Oxford English Dictionary, Thomas Jefferson is the first person to use more than 60 words in the dictionary. Jeffersonian words include: (a) authentication, (b) belittle, (c) indecipherable, (d) inheritability,
(e) odometer, (f) sanction, (g) vomit-grass, and (h) shag. For each Jeffersonian word, guess its derivation and explain whether or not its meaning could be inferred from its components.
Dictionaries are but the depositories of words already legitimated by usage. Society is the workshop in which new ones are elaborated. When an individual uses a new word, if ill formed, it is rejected; if well formed, adopted, and after due time, laid up in the depository of dictionaries.
Thomas Jefferson, letter to John Adams, 1820

Exercise 2.4. Embiggening your vocabulary with anticromulent words ecdysiasts can grok.
a. Invent a new English word by combining common morphemes.
b. Get someone else to use the word you invented.
c. [ ] Convince Merriam-Webster to add your word to their dictionary.

2.3 Recursive Transition Networks

This section describes a more powerful technique for defining languages. The surface forms of a textual language are a (typically infinite) set of strings. To define a language, we need to define a system that produces all strings in the language and no other strings. (The problem of associating meanings with those strings is more difficult; we consider it in later chapters.)
A recursive transition network (RTN) is defined by a graph of nodes and edges.
The edges are labeled with output symbols—these are the primitives in the language. The nodes and edge structure provides the means of combination.
One of the nodes is designated the start node (indicated by an arrow pointing into that node). One or more of the nodes may be designated as final nodes
(indicated by an inner circle). A string is in the language if there exists some path from the start node to a final node in the graph where the output symbols along the path edges produce the string.
Figure 2.1 shows a simple RTN with three nodes and four edges that can produce four different sentences. Starting at the node marked Noun, there are two possible edges to follow. Each edge outputs a different symbol, and leads to the node marked Verb. From that node there are two output edges, each leading to the final node marked S. Since there are no edges out of S, this ends the string. Hence, the RTN can produce four strings corresponding to the four different paths from the start to final node: “Alice jumps”, “Alice runs”, “Bob jumps”, and “Bob runs”.
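The four-sentence network is small enough to explore directly in code. The sketch below is ours, not the book's (the book does not introduce a programming language until Chapter 3); it represents the RTN of Figure 2.1 as a Python dictionary and enumerates the output string for every path from the start node to the final node:

```python
# Sketch (not from the book): the RTN of Figure 2.1 as a dictionary mapping
# each node to its outgoing (output symbol, destination) edges.
rtn = {
    "Noun": [("Alice", "Verb"), ("Bob", "Verb")],
    "Verb": [("jumps", "S"), ("runs", "S")],
    "S": [],                     # final node: no outgoing edges
}

def strings(node, prefix=()):
    """Yield the string produced by every path from node to the final node S."""
    if node == "S":
        yield " ".join(prefix)
    for symbol, dest in rtn[node]:
        yield from strings(dest, prefix + (symbol,))

print(sorted(strings("Noun")))
# → ['Alice jumps', 'Alice runs', 'Bob jumps', 'Bob runs']
```

Note that this enumeration terminates only because the graph has no cycles; as the next paragraphs show, adding a cycle makes the set of paths infinite.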
Recursive transition networks are more efficient than listing the strings in a language, since the number of possible strings increases with the number of possible paths through the graph. For example, adding one more edge from Noun to Verb with label "Colleen" adds two new strings to the language.

Figure 2.1. Simple recursive transition network. (Edges labeled "Alice" and "Bob" lead from Noun to Verb; edges labeled "jumps" and "runs" lead from Verb to the final node S.)

The expressive power of recursive transition networks increases dramatically once we add edges that form cycles in the graph. This is where the recursive in the name comes from. Once a graph has a cycle, there are infinitely many possible paths through the graph!
Consider what happens when we add the single "and" edge to the previous network to produce the network shown in Figure 2.2.

Figure 2.2. RTN with a cycle. (The same network as Figure 2.1, with one added edge labeled "and" from the final node S back to Noun.)
Now, we can produce infinitely many different strings! We can follow the “and” edge back to the Noun node to produce strings like “Alice runs and Bob jumps and Alice jumps” with as many conjuncts as we want.
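Once the cycle exists we can no longer enumerate all strings, but we can still enumerate every string up to a chosen length. This Python sketch is ours, not the book's; the edge from the final node S back to Noun follows the description above:

```python
# Sketch (not from the book): the RTN of Figure 2.2. Because of the "and"
# cycle, we bound the enumeration by a maximum number of output words.
rtn = {
    "Noun": [("Alice", "Verb"), ("Bob", "Verb")],
    "Verb": [("jumps", "S"), ("runs", "S")],
    "S": [("and", "Noun")],      # the cycle: S is final but has an outgoing edge
}

def strings_up_to(node, limit, prefix=()):
    results = []
    if node == "S":                      # final node: we may stop here...
        results.append(" ".join(prefix))
    if len(prefix) < limit:              # ...or keep following edges, up to the bound
        for symbol, dest in rtn[node]:
            results.extend(strings_up_to(dest, limit, prefix + (symbol,)))
    return results

print(len(strings_up_to("Noun", 5)))
# → 20: the four 2-word strings plus sixteen 5-word strings like
#   "Alice runs and Bob jumps"
```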
Exercise 2.5. Draw a recursive transition network that defines the language of the whole numbers: 0, 1, 2, . . .
Exercise 2.6. How many different strings can be produced by the RTN below?

[RTN diagram: nodes Noun, Verb, Adverb, and final node S; edge labels include "Alice", "Bob", "jumps", "runs", "eats", "quickly", and "slowly".]
S slowly Exercise 2.7. Recursive transition networks.
a. How many nodes are needed for a recursive transition network that can produce exactly 8 strings?
b. How many edges are needed for a recursive transition network that can produce exactly 8 strings?
c. [ ] Given a whole number n, how many edges are needed for a recursive transition network that can produce exactly n strings?
Subnetworks. In the RTNs we have seen so far, the labels on the output edges are direct outputs known as terminals: following an edge just produces the symbol on that edge. We can make more expressive RTNs by allowing edge labels to also name subnetworks. A subnetwork is identified by the name of its starting node. When an edge labeled with a subnetwork is followed, the network traversal jumps to the subnetwork node. Then, it can follow any path from that node to a final node. Upon reaching a final node, the network traversal jumps back to complete the edge.
For example, consider the network shown in Figure 2.3. It describes the same language as the RTN in Figure 2.1, but uses subnetworks for Noun and Verb. To produce a string, we start in the Sentence node. The only edge out from Sentence is labeled Noun. To follow the edge, we jump to the Noun node, which is a separate subnetwork. Now, we can follow any path from Noun to a final node (in this case, outputting either "Alice" or "Bob" on the path toward EndNoun).

Figure 2.3. Recursive transition network with subnetworks. (The Sentence network follows a Noun edge to S1 and then a Verb edge to the final node EndSentence; the Noun subnetwork outputs "Alice" or "Bob" on the way to EndNoun, and the Verb subnetwork outputs "jumps" or "runs" on the way to EndVerb.)
Suppose we replace the Noun subnetwork with the more interesting version shown in Figure 2.4. This subnetwork includes an edge from Noun to N1 labeled Noun. Following this edge involves following a path through the Noun subnetwork. Starting from Noun, we can generate complex phrases like "Alice and Bob" or "Alice and Bob and Alice" (find the two different paths through the network that generate this phrase).

Figure 2.4. Alternate Noun subnetwork. (Its nodes include Noun, N1, N2, and EndNoun; its edge labels include "Alice", "Bob", "and", and the subnetwork name Noun.)
To keep track of paths through RTNs without subnetworks, a single marker suffices. We can start with the marker on the start node, and move it along the path through each node to the final node. Keeping track of paths on an RTN with subnetworks is more complicated. We need to keep track of where we are in the current network, and also where to continue to when a final node of the current subnetwork is reached. Since we can enter subnetworks within subnetworks, we need a way to keep track of arbitrarily many jump points.

A stack is a useful way to keep track of the subnetworks. We can think of a stack like a stack of trays in a cafeteria. At any point in time, only the top tray on the stack can be reached. We can pop the top tray off the stack, after which the next tray is now on top. We can push a new tray on top of the stack, which makes the old top of the stack now one below the top.
We use a stack of nodes to keep track of the subnetworks as they are entered.
The top of the stack represents the next node to process. At each step, we pop the node off the stack and follow a transition from that node.
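Push and pop map directly onto a Python list; this is an illustration of the tray analogy, not code from the book:

```python
# A Python list can serve as the stack: append pushes onto the top,
# and pop removes and returns the top item.
stack = []
stack.append("Sentence")    # push the starting node
stack.append("Noun")        # push: Noun is now on top
top = stack.pop()           # pop returns "Noun", leaving Sentence on top
print(top, stack)           # → Noun ['Sentence']
```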

Figure 2.5. RTN generating "Alice runs". (The figure tabulates each step of the traversal: the node popped, the stack contents, and the output produced, as the Sentence, Noun, and Verb networks are traversed.)
Using a stack, we can derive a path through an RTN using this procedure:
1. Initially, push the starting node on the stack.
2. If the stack is empty, stop. Otherwise, pop a node, N, off the stack.
3. If the popped node, N, is a final node, return to step 2.²
4. Select an edge from the RTN that starts from node N. Use D to denote the destination of that edge, and s to denote the output symbol on the edge.
5. Push D on the stack.
6. If s is a subnetwork, push the node s on the stack. Otherwise, output s, which is a terminal.
7. Go back to step 2.

Consider generating the string “Alice runs” using the RTN in Figure 2.3. We start following step 1 by pushing Sentence on the stack. In step 2, we pop the stack, so the current node, N, is Sentence. Since Sentence is not a final node, we do nothing for step 3. In step 4, we follow an edge starting from Sentence. There is only one edge to choose and it leads to the node labeled S1. In step 5, we push
S1 on the stack. The edge we followed is labeled with the node Noun, so we push Noun on the stack. The stack now contains two items: [Noun, S1]. Since
Noun is on top, this means we will first traverse the Noun subnetwork, and then continue from S1.
As directed by step 7, we go back to step 2 and continue by popping the top node, Noun, off the stack. It is not a final node, so we continue to step 4, and select the edge labeled “Alice” from Noun to EndNoun. We push EndNoun on the stack, which now contains: [EndNoun, S1]. The label on the edge is the terminal, “Alice”, so we output “Alice” following step 6. We continue in the same manner, following the steps in the procedure as we keep track of a path through the network. The full processing steps are shown in Figure 2.5.
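The seven-step procedure and the walkthrough above can be sketched in Python. This is our sketch, not the book's; node names follow Figure 2.3, and the choose parameter stands in for the edge selection in step 4:

```python
import random

# The networks of Figure 2.3: each node maps to its outgoing
# (label, destination) edges. Labels that name subnetworks are handled
# in step 6; all other labels are terminals.
edges = {
    "Sentence": [("Noun", "S1")],
    "S1": [("Verb", "EndSentence")],
    "Noun": [("Alice", "EndNoun"), ("Bob", "EndNoun")],
    "Verb": [("jumps", "EndVerb"), ("runs", "EndVerb")],
}
final_nodes = {"EndSentence", "EndNoun", "EndVerb"}
subnetworks = {"Noun", "Verb"}

def generate(choose=random.choice):
    output = []
    stack = ["Sentence"]                    # step 1: push the starting node
    while stack:                            # step 2: stop when the stack is empty
        node = stack.pop()                  # step 2: pop a node, N
        if node in final_nodes:
            continue                        # step 3: final node, return to step 2
        symbol, dest = choose(edges[node])  # step 4: select an edge from N
        stack.append(dest)                  # step 5: push the destination, D
        if symbol in subnetworks:
            stack.append(symbol)            # step 6: enter the subnetwork...
        else:
            output.append(symbol)           # ...or output the terminal
    return " ".join(output)                 # step 7 is the while loop repeating

print(generate())                           # e.g. "Alice runs"
```

Calling generate with a deterministic choose, such as lambda es: es[0], always produces "Alice jumps"; printing the stack inside the loop reproduces the sequence of states shown in Figure 2.5.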
Exercise 2.8. Show the sequence of stacks used in generating the string “Alice and Bob and Alice runs” using the network in Figure 2.3 with the alternate Noun subnetwork from Figure 2.4.
2 For simplicity, this procedure assumes we always stop when a final node is reached. RTNs can have edges out of final nodes (as in Figure 2.2) where it is possible to either stop or continue from a final node.


Exercise 2.9. Identify a string that cannot be produced using the RTN from
Figure 2.3 with the alternate Noun subnetwork from Figure 2.4 without the stack growing to contain five elements.
Exercise 2.10. The procedure given for traversing RTNs assumes that a subnetwork path always stops when a final node is reached. Hence, it cannot follow all possible paths for an RTN where there are edges out of a final node. Describe a procedure that can follow all possible paths, even for RTNs that include edges from final nodes.

2.4 Replacement Grammars

Another way to define a language is to use a grammar. This is the most common way languages are defined by computer scientists today, and the way we will use for the rest of this book.

A grammar is a set of rules for generating all strings in the language. We use the Backus-Naur Form (BNF) notation to define a grammar. BNF grammars are exactly as powerful as recursive transition networks (Exploration 2.1 explains what this means and why it is the case), but easier to write down.
BNF was invented by John Backus in the late 1950s. Backus led efforts at IBM to define and implement Fortran, the first widely used programming language.
Fortran enabled computer programs to be written in a language more like familiar algebraic formulas than low-level machine instructions, enabling programs to be written more quickly. In defining the Fortran language, Backus and his team used ad hoc English descriptions to define the language. These ad hoc descriptions were often misinterpreted, motivating the need for a more precise way of defining a language.
Rules in a Backus-Naur Form grammar have the form:

nonterminal ::⇒ replacement

I flunked out every year. I never studied. I hated studying. I was just goofing around. It had the delightful consequence that every year I went to summer school in
New Hampshire where I spent the summer sailing and having a nice time.
John Backus

The left side of a rule is always a single symbol, known as a nonterminal since it can never appear in the final generated string. The right side of a rule contains one or more symbols. These symbols may include nonterminals, which will be replaced using replacement rules before generating the final string. They may also be terminals, which are output symbols that never appear as the left side of a rule. When we describe grammars, we use italics to represent nonterminal symbols, and bold to represent terminal symbols. The terminals are the primitives in the language; the grammar rules are its means of combination.
We can generate a string in the language described by a replacement grammar by starting from a designated start symbol (e.g., sentence), and at each step selecting a nonterminal in the working string, and replacing it with the right side of a replacement rule whose left side matches the nonterminal. Wherever we find a nonterminal on the left side of a rule, we can replace it with what appears on the right side of any rule where that nonterminal matches the left side. A string is generated once there are no nonterminals remaining.
Here is an example BNF grammar (that describes the same language as the RTN in Figure 2.1):

1. Sentence ::⇒ Noun Verb
2. Noun ::⇒ Alice
3. Noun ::⇒ Bob
4. Verb ::⇒ jumps
5. Verb ::⇒ runs

Starting from Sentence, the grammar can generate four sentences: “Alice jumps”,
“Alice runs”, “Bob jumps”, and “Bob runs”.
A derivation shows how a grammar generates a given string. Here is the derivation of "Alice runs":

Sentence ::⇒ Noun Verb     using Rule 1
         ::⇒ Alice Verb    replacing Noun using Rule 2
         ::⇒ Alice runs    replacing Verb using Rule 5

We can represent a grammar derivation as a tree, where the root of the tree is the starting nonterminal (Sentence in this case), and the leaves of the tree are the terminals that form the derived sentence. Such a tree is known as a parse tree. Here is the parse tree for the derivation of “Alice runs”:
Sentence
├── Noun
│   └── Alice
└── Verb
    └── runs

BNF grammars can be more compact than just listing strings in the language since a grammar can have many replacements for each nonterminal. For example, adding the rule, Noun ::⇒ Colleen, to the grammar adds two new strings
(“Colleen runs” and “Colleen jumps”) to the language.
Recursive Grammars. The real power of BNF as a compact notation for describing languages, though, comes once we start adding recursive rules to our grammar. A grammar is recursive if the grammar contains a nonterminal that can produce a production that contains itself.
Suppose we add the rule,
Sentence ::⇒ Sentence and Sentence to our example grammar. Now, how many sentences can we generate?
Infinitely many! This grammar describes the same language as the RTN in Figure 2.2. It can generate “Alice runs and Bob jumps” and “Alice runs and Bob jumps and Alice runs” and sentences with any number of repetitions of “Alice runs”. This is very powerful: by using recursive rules a compact grammar can be used to define a language containing infinitely many strings.
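Replacement is mechanical enough to automate. This Python sketch is ours, not the book's; it derives a string by recursively choosing a replacement for each nonterminal, using the example grammar extended with the recursive Sentence rule:

```python
import random

# Grammar rules: each nonterminal maps to its alternative replacements.
# Symbols with no left-side rule are terminals.
grammar = {
    "Sentence": [["Noun", "Verb"], ["Sentence", "and", "Sentence"]],
    "Noun": [["Alice"], ["Bob"]],
    "Verb": [["jumps"], ["runs"]],
}

def derive(symbol, choose=random.choice):
    """Return the list of terminals derived from symbol."""
    if symbol not in grammar:               # terminal: output it directly
        return [symbol]
    result = []
    for s in choose(grammar[symbol]):       # pick one replacement rule
        result.extend(derive(s, choose))
    return result

# Always choosing the first alternative gives the shortest derivation:
print(" ".join(derive("Sentence", lambda rules: rules[0])))   # → Alice jumps
```

With random.choice as the selector, the recursive Sentence rule can produce arbitrarily long (and arbitrarily deeply nested) derivations, so a practical generator would bias the choice toward the non-recursive rules.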



Example 2.1: Whole Numbers
This grammar defines the language of the whole numbers (0, 1, . . .) with leading zeros allowed:
Number ::⇒ Digit MoreDigits
MoreDigits ::⇒
MoreDigits ::⇒ Number
Digit ::⇒ 0
Digit ::⇒ 1
Digit ::⇒ 2
Digit ::⇒ 3
Digit ::⇒ 4
Digit ::⇒ 5
Digit ::⇒ 6
Digit ::⇒ 7
Digit ::⇒ 8
Digit ::⇒ 9

Here is the parse tree for a derivation of 37 from Number:
Number
├── Digit
│   └── 3
└── MoreDigits
    └── Number
        ├── Digit
        │   └── 7
        └── MoreDigits
            └── (empty)
Circular vs. Recursive Definitions. The second rule means we can replace MoreDigits with nothing. This is sometimes written as ε to make it clear that the replacement is empty: MoreDigits ::⇒ ε.
This is a very important rule in the grammar—without it no strings could be generated; with it infinitely many strings can be generated. The key is that we can only produce a string when all nonterminals in the string have been replaced with terminals. Without the MoreDigits ::⇒ ε rule, the only rule we would have with MoreDigits on the left side is the third rule: MoreDigits ::⇒ Number.
The only rule we have with Number on the left side is the first rule, which replaces Number with Digit MoreDigits. Every time we follow this rule, we replace MoreDigits with Digit MoreDigits. We can produce as many Digits as we want, but without the MoreDigits ::⇒ ε rule we can never stop.


This is the difference between a circular definition, and a recursive definition. Without the stopping rule, MoreDigits would be defined in a circular way. There is no way to start with MoreDigits and generate a production that does not contain MoreDigits (or a nonterminal that eventually must produce MoreDigits). With the MoreDigits ::⇒ ε rule, however, we have a way to produce something terminal from MoreDigits. This is known as a base case — a rule that turns an otherwise circular definition into a meaningful, recursive definition.
Condensed Notation. It is common to have many grammar rules with the same left side nonterminal. For example, the whole numbers grammar has ten rules with Digit on the left side to produce the ten terminal digits. Each of these is an alternative rule that can be used when the production string contains the nonterminal Digit. A compact notation for these types of rules is to use the vertical bar (|) to separate alternative replacements. For example, we could write the ten Digit rules compactly as:
Digit ::⇒ 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Exercise 2.11. Suppose we replaced the first rule (Number ::⇒ Digit MoreDigits) in the whole numbers grammar with: Number ::⇒ MoreDigits Digit.
a. How does this change the parse tree for the derivation of 37? Draw the parse tree that results from the new grammar.
b. Does this change the language? Either show some string that is in the language defined by the modified grammar but not in the original language (or vice versa), or argue that both grammars generate the same strings.
Exercise 2.12. The grammar for whole numbers we defined allows strings with non-standard leading zeros such as “000” and “00005”. Devise a grammar that produces all whole numbers (including “0”), but no strings with unnecessary leading zeros.
Exercise 2.13. Define a BNF grammar that describes the language of decimal numbers (the language should include 3.14159, 0.423, and 1120 but not 1.2.3).
Exercise 2.14. The BNF grammar below (extracted from Paul Mockapetris, Domain Names - Implementation and Specification, IETF RFC 1035) describes the language of domain names on the Internet.
Domain ::⇒ SubDomainList
SubDomainList ::⇒ Label | SubDomainList . Label
Label ::⇒ Letter MoreLetters
MoreLetters ::⇒ LetterHyphens LetterDigit | ε
LetterHyphens ::⇒ LDHyphen | LDHyphen LetterHyphens | ε
LDHyphen ::⇒ LetterDigit | −
LetterDigit ::⇒ Letter | Digit
Letter ::⇒ A | B | ... | Z | a | b | ... | z
Digit ::⇒ 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

a. Show a derivation for www.virginia.edu in the grammar.
b. According to the grammar, which of the following are valid domain names:
(1) tj, (2) a.-b.c, (3) a-a.b-b.c-c, (4) a.g.r.e.a.t.d.o.m.a.i.n-.
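To experiment with this grammar, it can be encoded as a regular expression (our own sketch, not part of the book): a Label is a Letter followed optionally by any run of letters, digits, and hyphens that ends in a letter or digit, and a Domain is one or more Labels joined by dots.

```python
import re

# Label ::= Letter MoreLetters, where a nonempty MoreLetters is any run of
# letters/digits/hyphens ending in a letter or digit.
LABEL = r"[A-Za-z](?:[A-Za-z0-9-]*[A-Za-z0-9])?"
DOMAIN = re.compile(r"{0}(?:\.{0})*\Z".format(LABEL))

def is_domain(s):
    """Return True if s is derivable from Domain in the grammar above."""
    return DOMAIN.match(s) is not None

print(is_domain("www.virginia.edu"))   # True: each label is all letters
```

You can use is_domain to check candidate strings while working through the exercise.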
Exploration 2.1: Power of Language Systems
Section 2.4 claimed that recursive transition networks and BNF grammars are equally powerful. What does it mean to say two systems are equally powerful?
A language description mechanism is used to define a set of strings comprising a language. Hence, the power of a language description mechanism is determined by the set of languages it can define.
One approach to measuring the power of a language description mechanism would be to count the number of languages that it can define. Even the simplest mechanisms can define infinitely many languages, however, so just counting the number of languages does not distinguish well between the different language description mechanisms. Both RTNs and BNFs can describe infinitely many different languages. We can always add a new edge to an RTN to increase the number of strings in the language, or add a new replacement rule to a BNF that replaces a nonterminal with a new terminal symbol.
Instead, we need to consider the set of languages that each mechanism can define. A system A is more powerful than another system B if we can use A to define every language that can be defined by B, and there is some language L that can be defined using A that cannot be defined using B. This matches our intuitive interpretation of more powerful: A is more powerful than B if it can do everything B can do and more.
The diagrams in Figure 2.6 show three possible scenarios. In the leftmost picture, the set of languages that can be defined by B is a proper subset of the set of languages that can be defined by A. Hence, A is more powerful than B. In the center picture, the sets are equal. This means every language that can be defined by A can also be defined by B, and every language that can be defined by B can also be defined by A, and the systems are equally powerful. In the rightmost picture, there are some elements of A that are not elements of B, but there are also some elements of B that are not elements of A. This means we cannot say either one is more powerful; A can do some things B cannot do, and B can do some things A cannot do.

Figure 2.6. System power relationships. (Left: the languages definable by B are a proper subset of those definable by A, so A is more powerful than B. Center: the two sets are equal, so A is as powerful as B. Right: the sets partially overlap, so A and B are not comparable.)
To determine the relationship between RTNs and BNFs we need to understand if there are languages that can be defined by a BNF that cannot be defined by an RTN and if there are languages that can be defined by an RTN that cannot be defined by a BNF. We will show only the first part of the proof here, and leave the second part as an exercise.

For the first part, we prove that there are no languages that can be defined by a BNF that cannot be defined by an RTN. Equivalently, every language that can be defined by a BNF grammar has a corresponding RTN. Since there are infinitely many languages that can be defined by BNF grammars, we cannot prove this by enumerating each language and showing its corresponding RTN. Instead, we use a proof technique commonly used in computer science: proof by construction. We show an algorithm that, given any BNF grammar, constructs an RTN that defines the same language as the input BNF grammar.
Our strategy is to construct a subnetwork corresponding to each nonterminal.
For each rule where the nonterminal is on the left side, the right hand side is converted to a path through that node’s subnetwork.


Before presenting the general construction algorithm, we illustrate the approach with the example BNF grammar from Example 2.1:
Number ::⇒ Digit MoreDigits
MoreDigits ::⇒
MoreDigits ::⇒ Number
Digit ::⇒ 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
The grammar has three nonterminals: Number, Digit, and MoreDigits. For each nonterminal, we construct a subnetwork by first creating two nodes corresponding to the start and end of the subnetwork for the nonterminal. We make StartNumber the start node for the RTN since Number is the starting nonterminal for the grammar.
Next, we need to add edges to the RTN corresponding to the production rules in the grammar. The first rule indicates that Number can be replaced by Digit
MoreDigits. To make the corresponding RTN, we need to introduce an intermediate node since each RTN edge can only contain one label. We need to traverse two edges, with labels StartDigit and StartMoreDigits between the StartNumber and EndNumber nodes. The resulting partial RTN is shown in Figure 2.7.
Figure 2.7. Converting the Number productions to an RTN. (The figure shows an edge labeled StartDigit from StartNumber to a new intermediate node X0, and an edge labeled StartMoreDigits from X0 to EndNumber.)
For the MoreDigits nonterminal there are two productions. The first means
MoreDigits can be replaced with nothing. In an RTN, we cannot have edges with unlabeled outputs. So, the equivalent of outputting nothing is to turn StartMoreDigits into a final node. The second production replaces MoreDigits with
Number. We do this in the RTN by adding an edge between StartMoreDigits and
EndMoreDigits labeled with Number, as shown in Figure 2.8.
Figure 2.8. Converting the MoreDigits productions to an RTN. (The figure shows an edge labeled Number from StartMoreDigits to EndMoreDigits.)
Finally, we convert the ten Digit productions. For each rule, we add an edge between StartDigit and EndDigit labeled with the digit terminal, as shown in Figure 2.9.

Figure 2.9. Converting the Digit productions to an RTN. (The figure shows ten edges from StartDigit to EndDigit, labeled 0, 1, ..., 9.)

This example illustrates that it is possible to convert a particular grammar to an RTN. For a general proof, we present an algorithm that can be used to do the same conversion for any BNF:

1. For each nonterminal X in the grammar, construct two nodes, StartX and EndX, where EndX is a final node. Make the node StartS the start node of the RTN, where S is the start nonterminal of the grammar.
2. For each rule in the grammar, add a corresponding path through the RTN. All BNF rules have the form X ::⇒ replacement where X is a nonterminal in the grammar and replacement is a sequence of zero or more terminals and nonterminals: [ R0, R1, . . . , Rn ].
(a) If the replacement is empty, make StartX a final node.
(b) If the replacement has just one element, R0, add an edge from StartX to EndX with edge label R0.
(c) Otherwise:
   i. Add an edge from StartX to a new node labeled Xi,0 (where i identifies the grammar rule), with edge label R0.
   ii. For each remaining element Rj in the replacement add an edge from Xi,j−1 to a new node labeled Xi,j with edge label Rj. (For example, for element R1, a new node Xi,1 is added, and an edge from Xi,0 to Xi,1 with edge label R1.)
   iii. Add an edge from Xi,n−1 to EndX with edge label Rn.
Following this procedure, we can convert any BNF grammar into an RTN that defines the same language. Hence, we have proved that RTNs are at least as powerful as BNF grammars.
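The construction can be sketched in Python (an illustrative encoding of ours, not the book's code), with a grammar given as a dictionary mapping each nonterminal to its list of replacement sequences:

```python
def bnf_to_rtn(grammar, start):
    """Convert BNF rules {nonterminal: [replacement, ...]} into RTN edges.

    Returns (edges, final_nodes, start_node), where edges is a set of
    (from_node, label, to_node) triples.
    """
    edges, finals = set(), set()
    for x, rules in grammar.items():
        finals.add("End" + x)                     # step 1: EndX is final
        for i, repl in enumerate(rules):
            if not repl:                          # rule (a): empty replacement
                finals.add("Start" + x)
            elif len(repl) == 1:                  # rule (b): single element
                edges.add(("Start" + x, repl[0], "End" + x))
            else:                                 # rule (c): chain new nodes
                prev = "Start" + x
                for j, r in enumerate(repl[:-1]):
                    node = f"{x}{i},{j}"          # node named Xi,j
                    edges.add((prev, r, node))
                    prev = node
                edges.add((prev, repl[-1], "End" + x))
    return edges, finals, "Start" + start

# The Example 2.1 grammar:
number = {
    "Number": [["Digit", "MoreDigits"]],
    "MoreDigits": [[], ["Number"]],
    "Digit": [[d] for d in "0123456789"],
}
edges, finals, start = bnf_to_rtn(number, "Number")
```

Running this on the whole numbers grammar reproduces Figures 2.7 through 2.9: StartNumber connects through an intermediate node to EndNumber, StartMoreDigits becomes a final node with a Number-labeled edge to EndMoreDigits, and StartDigit gains ten digit-labeled edges to EndDigit.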
To complete the proof that BNF grammars and RTNs are equally powerful ways of defining languages, we also need to show that a BNF can define every language that can be defined using an RTN. This part of the proof can be done using a similar strategy in reverse: by showing a procedure that can be used to construct a BNF equivalent to any input RTN. We leave the details as an exercise for especially ambitious readers.
Exercise 2.15. Produce an RTN that defines the same languages as the BNF grammar from Exercise 2.14.
Exercise 2.16. [ ] Prove that BNF grammars are as powerful as RTNs by devising a procedure that can construct a BNF grammar that defines the same language as any input RTN.

2.5 Summary

Languages define a set of surface forms and associated meanings. Since a useful language must be able to express infinitely many things, we need tools for defining infinite sets of surface forms using compact and precise notations. The tool we will use for the remainder of this book is the BNF replacement grammar, which precisely defines a language using replacement rules. This system can describe infinite languages with small representations because of the power of recursive rules. In the next chapter, we introduce the Scheme programming language that we will use to describe procedures.


3 Programming
The Analytical Engine has no pretensions whatever to originate any thing. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.
Augusta Ada Countess of Lovelace, in Notes on the Analytical Engine, 1843

What distinguishes a computer from other machines is its programmability.
Without a program, a computer is an overpriced door stopper. With the right program, though, a computer can be a tool for communicating across the continent, discovering a new molecule that can cure cancer, composing a symphony, or managing the logistics of a retail empire.
Programming is the act of writing instructions that make the computer do something useful. It is an intensely creative activity, involving aspects of art, engineering, and science. Good programs are written to be executed efficiently by computers, but also to be read and understood by humans. The best programs are delightful in ways similar to the best architecture, elegant in both form and function. The ideal programmer would have the vision of Isaac Newton, the intellect of
Albert Einstein, the creativity of Miles Davis, the aesthetic sense of Maya Lin, the wisdom of Benjamin Franklin, the literary talent of William Shakespeare, the oratorical skills of Martin Luther King, the audacity of John Roebling, and the self-confidence of Grace Hopper.
Fortunately, it is not necessary to possess all of those rare qualities to be a good programmer! Indeed, anyone who is able to master the intellectual challenge of learning a language (which, presumably, anyone who has gotten this far has done at least for English) can become a good programmer. Since programming is a new way of thinking, many people find it challenging and even frustrating at first. Because the computer does exactly what it is told, a small mistake in a program may prevent it from working as intended. With a bit of patience and persistence, however, the tedious parts of programming become easier, and you will be able to focus your energies on the fun and creative problem solving parts.
In the previous chapter, we explored the components of language and mechanisms for defining languages. In this chapter, we explain why natural languages are not a satisfactory way for defining procedures and introduce a language for programming computers and how it can be used to define procedures.

3.1 Problems with Natural Languages

Natural languages, such as English, work adequately (most, but certainly not all, of the time) for human-human communication, but are not well-suited for human-computer or computer-computer communication. Why can’t we use natural languages to program computers?
Next, we survey several of the reasons for this. We use specifics from English, although all natural languages suffer from these problems to varying degrees.
Complexity. Although English may seem simple to you now, it took many years of intense effort (most of it subconscious) for you to learn it. Despite using it for most of their waking hours for many years, native English speakers know a small fraction of the entire language. The Oxford English Dictionary contains 615,000 words, of which a typical native English speaker knows about 40,000.
Ambiguity. Not only do natural languages have huge numbers of words, most words have many different meanings. Understanding the intended meaning of an utterance requires knowing the context, and sometimes pure guesswork.
For example, what does it mean to be paid biweekly? According to the American Heritage Dictionary,1 biweekly has two definitions:
1. Happening every two weeks.
2. Happening twice a week; semiweekly.
Merriam-Webster’s Dictionary2 takes the opposite approach:
1. occurring twice a week
2. occurring every two weeks : fortnightly
So, depending on which definition is intended, someone who is paid biweekly could either be paid once or four times every two weeks! The behavior of a payroll management program better not depend on how biweekly is interpreted.
Even if we can agree on the definition of every word, the meaning of a sentence is often ambiguous. This particularly difficult example is taken from the instructions with a shipment of ballistic missiles from the British Admiralty:3
It is necessary for technical reasons that these warheads be stored upside down, that is, with the top at the bottom and the bottom at the top. In order that there be no doubt as to which is the bottom and which is the top, for storage purposes, it will be seen that the bottom of each warhead has been labeled ’TOP’.
Irregularity. Because natural languages evolve over time as different cultures interact and speakers misspeak and listeners mishear, natural languages end up a morass of irregularity. Nearly all grammar rules have exceptions. For example,
English has a rule that we can make a word plural by appending an s. The new word means “more than one of the original word’s meaning”. This rule works for most words: word → words, language → languages, person → persons.4

1 American Heritage, Dictionary of the English Language (Fourth Edition), Houghton Mifflin Company, 2007 (http://www.answers.com/biweekly).
2 Merriam-Webster Online, Merriam-Webster, 2008 (http://www.merriam-webster.com/dictionary/biweekly).
3 Carl C. Gaither and Alma E. Cavazos-Gaither, Practically Speaking: A Dictionary of Quotations on Engineering, Technology and Architecture, Taylor & Francis, 1998.
It does not work for all words, however. The plural of goose is geese (and gooses is not an English word), the plural of deer is deer (and deers is not an English word), and the plural of beer is controversial (and may depend on whether you speak American English or Canadian English).
These irregularities can be charming for a natural language, but they are a constant source of difficulty for non-native speakers attempting to learn a language.
There is no sure way to predict when the rule can be applied, and it is necessary to memorize each of the irregular forms.
Uneconomic. It requires a lot of space to express a complex idea in a natural language. Many superfluous words are needed for grammatical correctness, even though they do not contribute to the desired meaning. Since natural languages evolved for everyday communication, they are not well suited to describing the precise steps and decisions needed in a computer program.
As an example, consider a procedure for finding the maximum of two numbers.
In English, we could describe it like this:

To find the maximum of two numbers, compare them. If the first number is greater than the second number, the maximum is the first number. Otherwise, the maximum is the second number.

“I have made this letter longer than usual, only because I have not had the time to make it shorter.” (Blaise Pascal, 1657)
Perhaps shorter descriptions are possible, but any much shorter description probably assumes the reader already knows a lot. By contrast, we can express the same steps in the Scheme programming language in a very concise way (don’t worry if this doesn’t make sense yet—it should by the end of this chapter):
(define (bigger a b) (if (> a b) a b))
Limited means of abstraction. Natural languages provide small, fixed sets of pronouns to use as means of abstraction, and the rules for binding pronouns to meanings are often unclear. Since programming often involves using simple names to refer to complex things, we need more powerful means of abstraction than natural languages provide.

3.2 Programming Languages

For programming computers, we want simple, unambiguous, regular, and economical languages with powerful means of abstraction. A programming language is a language that is designed to be read and written by humans to create programs that can be executed by computers.
Programming languages come in many flavors. It is difficult to simultaneously satisfy all desired properties since simplicity is often at odds with economy. Every feature that is added to a language to increase its expressiveness incurs a cost in reducing simplicity and regularity. For the first two parts of this book, we use the Scheme programming language which was designed primarily for simplicity. For the later parts of the book, we use the Python programming language, which provides more expressiveness but at the cost of some added complexity.
4 Or is it people? What is the singular of people? What about peeps? Can you only have one peep?

Another reason there are many different programming languages is that they are at different levels of abstraction. Some languages provide programmers with detailed control over machine resources, such as selecting a particular location in memory where a value is stored. Other languages hide most of the details of the machine operation from the programmer, allowing them to focus on higher-level actions.
Ultimately, we want a program the computer can execute. This means at the lowest level we need languages the computer can understand directly. At this level, the program is just a sequence of bits encoding machine instructions.
Code at this level is not easy for humans to understand or write, but it is easy for a processor to execute quickly. The machine code encodes instructions that direct the processor to take simple actions like moving data from one place to another, performing simple arithmetic, and jumping around to find the next instruction to execute.
For example, the bit sequence 1110101111111110 encodes an instruction in the
Intel x86 instruction set (used on most PCs) that instructs the processor to jump backwards two locations. Since the instruction itself requires two locations of space, jumping back two locations actually jumps back to the beginning of this instruction. Hence, the processor gets stuck running forever without making any progress.
Grace Hopper. Image courtesy Computer History Museum (1952).

The computer’s processor is designed to execute very simple instructions like jumping, adding two small numbers, or comparing two values. This means each instruction can be executed very quickly. A typical modern processor can execute billions of instructions in a second.5
Until the early 1950s, all programming was done at the level of simple instructions. The problem with instructions at this level is that they are not easy for humans to write and understand, and you need many simple instructions before you have a useful program.


A compiler is a computer program that generates other programs. It translates an input program written in a high-level language that is easier for humans to create into a program in a machine-level language that can be executed by the computer. Admiral Grace Hopper developed the first compilers in the 1950s.


An alternative to a compiler is an interpreter. An interpreter is a tool that translates between a higher-level language and a lower-level language, but where a compiler translates an entire program at once and produces a machine language program that can be executed directly, an interpreter interprets the program a small piece at a time while it is running. This has the advantage that we do not have to run a separate tool to compile a program before running it; we can simply enter our program into the interpreter and run it right away. This makes it easy to make small changes to a program and try it again, and to observe the state of our program as it is running.

“Nobody believed that I had a running compiler and nobody would touch it. They told me computers could only do arithmetic.” (Grace Hopper)

One disadvantage of using an interpreter instead of a compiler is that because the translation is happening while the program is running, the program executes slower than a compiled program. Another advantage of compilers over interpreters is that since the compiler translates the entire program it can also analyze the program for consistency and detect certain types of programming mistakes automatically instead of encountering them when the program is running (or worse, not detecting them at all and producing unintended results). This is especially important when writing critical programs such as flight control software — we want to detect as many problems as possible in the flight control software before the plane is flying!

5 A “2GHz processor” executes 2 billion cycles per second. This does not map directly to the number of instructions it can execute in a second, though, since some instructions take several cycles to execute.
Since we are more concerned with interactive exploration than with performance and detecting errors early, we use an interpreter instead of a compiler.

3.3 Scheme

The programming system we use for the first part of this book is depicted in
Figure 3.1. The input to our programming system is a program written in a programming language named Scheme. A Scheme interpreter interprets a Scheme program and executes it on the machine processor.
Scheme was developed at MIT in the 1970s by Guy Steele and Gerald Sussman, based on the LISP programming language that was developed by John McCarthy in the 1950s. Although many large systems have been built using Scheme, it is not widely used in industry. It is, however, a great language for learning about computing and programming. The primary advantage of using Scheme to learn about computing is its simplicity and elegance. The language is simple enough that this chapter covers nearly the entire language (we defer describing a few aspects until Chapter 9), and by the end of this book you will know enough to implement your own Scheme interpreter. By contrast, some programming languages that are widely used in industrial programming such as C++ and Java require thousands of pages to describe, and even the world’s experts in those languages do not agree on exactly what all programs mean.

Figure 3.1. Running a Scheme program. (A Scheme program, for example (define (bigger a b) (if (> a b) a b)) followed by (bigger 3 4), is input to the Interpreter (DrRacket), which executes it on the Processor.)


Although almost everything we describe should work in all Scheme interpreters, for the examples in this book we assume the DrRacket programming environment which is freely available from http://racket-lang.org/. DrRacket includes interpreters for many different languages, so you must select the desired language using the Language menu. The selected language defines the grammar and evaluation rules that will be used to interpret your program. For all the examples in this book, we use a version of the Scheme language named Pretty Big.

3.4 Expressions

A Scheme program is composed of expressions and definitions (we cover definitions in Section 3.5). An expression is a syntactic element that has a value.
The act of determining the value associated with an expression is called evaluation. A Scheme interpreter, such as the one provided in DrRacket, is a machine for evaluating Scheme expressions. If you enter an expression into a Scheme interpreter, the interpreter evaluates the expression and displays its value.
Expressions may be primitives. Scheme also provides means of combination for producing complex expressions from simple expressions. The next subsections describe primitive expressions and application expressions. Section 3.6 describes expressions for making procedures and Section 3.7 describes expressions that can be used to make decisions.

3.4.1 Primitives

An expression can be replaced with a primitive:
Expression ::⇒ PrimitiveExpression
As with natural languages, primitives are the smallest units of meaning. Hence, the value of a primitive is its pre-defined meaning.
Scheme provides many different primitives. Three useful types of primitives are described next: numbers, Booleans, and primitive procedures.
Numbers. Numbers represent numerical values. Scheme provides all the kinds of numbers you are familiar with including whole numbers, negative numbers, decimals, and rational numbers.
Example numbers include:
150   0   −12
3.14159   3/4   999999999999999999999

Numbers evaluate to their value. For example, the value of the primitive expression 1120 is 1120.
Booleans. Booleans represent truth values. There are two primitives for representing true and false:
PrimitiveExpression ::⇒ true | false
The meaning of true is true, and the meaning of false is false. In the DrRacket interpreter, #t and #f are used to represent the primitive truth values. So, the value true appears as #t in the interactions window.

Symbol | Description | Inputs | Output
+ | add | zero or more numbers | sum of the input numbers (0 if there are no inputs)
∗ | multiply | zero or more numbers | product of the input numbers (1 if there are no inputs)
− | subtract | two numbers | the value of the first number minus the value of the second number
/ | divide | two numbers | the value of the first number divided by the value of the second number
zero? | is zero? | one number | true if the input value is 0, otherwise false
= | is equal to? | two numbers | true if the input values have the same value, otherwise false
< | is less than? | two numbers | true if the first input value has lesser value than the second input value, otherwise false
> | is greater than? | two numbers | true if the first input value has greater value than the second input value, otherwise false
>= | is greater than or equal to? | two numbers | true if the first input value is not less than the second input value, otherwise false
Table 3.1. Selected Scheme Primitive Procedures.
All of these primitive procedures operate on numbers. The first four are the basic arithmetic operators; the rest are comparison procedures. Some of these procedures are defined for more inputs than just the ones shown here (e.g., the subtract procedure also works on one number, producing its negation).

Primitive Procedures. Scheme provides primitive procedures corresponding to many common functions. Mathematically, a function is a mapping from inputs to outputs. For each valid input to the function, there is exactly one associated output. For example, + is a procedure that takes zero or more inputs, each of which must be a number. Its output is the sum of the values of the inputs. Table
3.1 describes some primitive procedures for performing arithmetic and comparisons on numbers.

3.4.2 Application Expressions

Most of the actual work done by a Scheme program is done by application expressions that apply procedures to operands. The expression (+ 1 2) is an ApplicationExpression, consisting of three subexpressions. Although this example is simple enough that you can probably guess that it evaluates to 3, we will show in detail how it is evaluated by breaking it down into its subexpressions using the grammar rules. The same process will allow us to understand how any expression is evaluated.


The grammar rule for application is:

Expression ::⇒ ApplicationExpression
ApplicationExpression ::⇒ (Expression MoreExpressions)
MoreExpressions ::⇒ ε | Expression MoreExpressions

This rule produces a list of one or more expressions surrounded by parentheses. The value of the first expression should be a procedure; the remaining expressions are the inputs to the procedure, known as operands. Another name for operands is arguments.
Here is a parse tree for the expression (+ 1 2):

Expression
└── ApplicationExpression
    ├── (
    ├── Expression
    │   └── PrimitiveExpression
    │       └── +
    ├── MoreExpressions
    │   ├── Expression
    │   │   └── PrimitiveExpression
    │   │       └── 1
    │   └── MoreExpressions
    │       ├── Expression
    │       │   └── PrimitiveExpression
    │       │       └── 2
    │       └── MoreExpressions
    │           └── ε
    └── )
Following the grammar rules, we replace Expression with ApplicationExpression at the top of the parse tree. Then, we replace ApplicationExpression with (Expression MoreExpressions). The Expression term is replaced by PrimitiveExpression, and finally, the primitive addition procedure +. This is the first subexpression of the application, so it is the procedure to be applied. The MoreExpressions term produces the two operand expressions: 1 and 2, both of which are primitives that evaluate to their own values. The application expression is evaluated by applying the value of the first expression (the primitive procedure +) to the inputs given by the values of the other expressions. Following the meaning of the primitive procedure, (+ 1 2) evaluates to 3 as expected.
The Expression nonterminals in the application expression can be replaced with anything that appears on the right side of an expression rule, including an ApplicationExpression.
We can build up complex expressions like (+ (∗ 10 10) (+ 25 25)). Its parse tree is:

Expression
└── ApplicationExpression
    ├── (
    ├── Expression
    │   └── PrimitiveExpression
    │       └── +
    ├── MoreExpressions
    │   ├── Expression
    │   │   └── ApplicationExpression
    │   │       └── (∗ 10 10)
    │   └── MoreExpressions
    │       ├── Expression
    │       │   └── ApplicationExpression
    │       │       └── (+ 25 25)
    │       └── MoreExpressions
    │           └── ε
    └── )
This tree is similar to the previous tree, except instead of the subexpressions of the first application expression being simple primitive expressions, they are now application expressions. (Instead of showing the complete parse tree for the nested application expressions, we use triangles.)
To evaluate the outer application expression, we need to evaluate all the subexpressions.
The first subexpression, +, evaluates to the primitive procedure. The second subexpression, (∗ 10 10), evaluates to 100, and the third expression, (+ 25 25), evaluates to 50. Now, we can evaluate the original expression using the values for its three component subexpressions: (+ 100 50) evaluates to 150.
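This evaluation process can be modeled in a few lines of Python (a sketch of ours — the book's programs are in Scheme — representing application expressions as nested lists):

```python
from functools import reduce
import operator

# A few primitive procedures from Table 3.1, as Python functions.
PRIMITIVES = {
    "+": lambda *args: sum(args),                      # zero or more inputs
    "*": lambda *args: reduce(operator.mul, args, 1),  # zero or more inputs
    "-": operator.sub,
    ">": operator.gt,
}

def evaluate(expr):
    """Evaluate a primitive or application expression.

    A number evaluates to its value; a name evaluates to the primitive
    procedure it is bound to; a list is an application: evaluate every
    subexpression, then apply the first value to the remaining values.
    """
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return PRIMITIVES[expr]
    proc, *operands = [evaluate(e) for e in expr]
    return proc(*operands)

print(evaluate(["+", ["*", 10, 10], ["+", 25, 25]]))   # prints 150
```

Note how the recursion mirrors the parse tree: the nested applications are evaluated first, and their values become the operands of the outer application.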
Exercise 3.1. Draw a parse tree for the Scheme expression (+ 100 (∗ 5 (+ 5 5))) and show how it is evaluated.
Exercise 3.2. Predict how each of the following Scheme expressions is evaluated. After making your prediction, try evaluating the expression in DrRacket. If the result is different from your prediction, explain why the Scheme interpreter evaluates the expression as it does.
a. 1120
b. (+ 1120)
c. (+ (+ 10 20) (∗ 2 0))
d. (= (+ 10 20) (∗ 15 (+ 5 5)))
e. +
f. (+ + <)

3.5

Definitions

Scheme provides a simple, yet powerful, mechanism for abstraction. A definition introduces a new name and gives it a value:
Definition ::⇒ (define Name Expression)
After a definition, the Name in the definition is now associated with the value of the expression in the definition. A definition is not an expression since it does not evaluate to a value.
A name can be any sequence of letters, digits, and special characters (such as
−, >, ?, and !) that starts with a letter or special character. Examples of valid names include a, Ada, Augusta-Ada, gold49, !yuck, and yikes!%@#. We don’t recommend using some of these names in your programs, however! A good programmer will pick names that are easy to read, pronounce, and remember, and that are not easily confused with other names.
After a name has been bound to a value by a definition, that name may be used in an expression:
Expression ::⇒ NameExpression
NameExpression ::⇒ Name
The value of a NameExpression is the value associated with the Name. (Alert readers should be worried that we need a more precise definition of the meaning of definitions to know what it means for a value to be associated with a name.
This informal notion will serve us well for now, but we will need a more precise explanation of the meaning of a definition in Chapter 9.)
Below we define speed-of-light to be the speed of light in meters per second, define seconds-per-hour to be the number of seconds in an hour, and use them to calculate the speed of light in kilometers per hour:

> (define speed-of-light 299792458)
> speed-of-light
299792458

> (define seconds-per-hour (∗ 60 60))
> (/ (∗ speed-of-light seconds-per-hour) 1000)
1079252848 4/5

3.6 Procedures

In Chapter 1 we defined a procedure as a description of a process. Scheme provides a way to define procedures that take inputs, carry out a sequence of actions, and produce an output. Section 3.4.1 introduced some of Scheme’s primitive procedures. To construct complex programs, however, we need to be able to create our own procedures.
Procedures are similar to mathematical functions in that they provide a mapping between inputs and outputs, but they differ from mathematical functions in two important ways:
State. In addition to producing an output, a procedure may access and modify state. This means that even when the same procedure is applied to the same inputs, the output produced may vary. Because mathematical functions do not have external state, when the same function is applied to the same inputs it always produces the same result. State makes procedures much harder to reason about. We will ignore this issue until Chapter 9, and focus until then only on procedures that do not involve any state.
Resources. Unlike an ideal mathematical function, which provides an instantaneous and free mapping between inputs and outputs, a procedure requires resources to execute before the output is produced. The most important resources are space (memory) and time. A procedure may need space to keep track of intermediate results while it is executing. Each step of a procedure requires some time to execute. Predicting how long a procedure will take to execute and finding the fastest procedure possible for solving some problem are core problems in computer science. We consider this throughout this book, and in particular in Chapter 7.
For the rest of this chapter, we view procedures as idealized mathematical functions: we consider only procedures that involve no state and do not worry about the resources required to execute our procedures.

3.6.1 Making Procedures

Scheme provides a general mechanism for making a procedure:
Expression ::⇒ ProcedureExpression
ProcedureExpression ::⇒ (lambda (Parameters) Expression)
Parameters ::⇒ | Name Parameters
Evaluating a ProcedureExpression produces a procedure that takes as inputs the
Parameters following the lambda. The lambda special form means “make a procedure”. The body of the resulting procedure is the Expression, which is not evaluated until the procedure is applied.
A ProcedureExpression can replace an Expression. This means anywhere an Expression is used we can create a new procedure. This is very powerful since it means we can use procedures as inputs to other procedures and create procedures that return new procedures as their output!
Here are some example procedures:
(lambda (x) (∗ x x))
Procedure that takes one input, and produces the square of the input value as its output.

(lambda (a b) (+ a b))
Procedure that takes two inputs, and produces the sum of the input values as its output.
(lambda () 0)
Procedure that takes no inputs, and produces 0 as its output. The result of every application of this procedure is 0.

(lambda (a) (lambda (b) (+ a b)))
Procedure that takes one input (a), and produces as its output a procedure that takes one input and produces the sum of a and that input as its output.
This is an example of a higher-order procedure. Higher-order procedures produce procedures as their output or take procedures as their arguments.
This can be confusing, but is also very powerful.

3.6.2 Substitution Model of Evaluation

For a procedure to be useful, we need to apply it. In Section 3.4.2, we saw the syntax and evaluation rule for an ApplicationExpression when the procedure to be applied is a primitive procedure. The syntax for applying a constructed procedure is identical to the syntax for applying a primitive procedure:
Expression ::⇒ ApplicationExpression
ApplicationExpression ::⇒ (Expression MoreExpressions)
MoreExpressions ::⇒ | Expression MoreExpressions
To understand how constructed procedures are evaluated, we need a new evaluation rule. In this case, the first Expression evaluates to a procedure that was created using a ProcedureExpression, so the ApplicationExpression becomes:
ApplicationExpression ::⇒
((lambda (Parameters) Expression) MoreExpressions)
(The underlined part is the replacement for the ProcedureExpression.)
To evaluate the application, first evaluate the MoreExpressions in the application expression. These expressions are known as the operands of the application. The resulting values are the inputs to the procedure. There must be exactly one expression in the MoreExpressions corresponding to each name in the parameters list. Next, associate the names in the Parameters list with the corresponding operand values. Finally, evaluate the expression that is the body of the procedure. Whenever any parameter name is used inside the body expression, the name evaluates to the value of the corresponding input that is associated with that name.
Example 3.1: Square
Consider evaluating the following expression:
((lambda (x) (∗ x x)) 2)
It is an ApplicationExpression where the first subexpression is the ProcedureExpression, (lambda (x) (∗ x x)). To evaluate the application, we evaluate all the subexpressions and apply the value of the first subexpression to the values of


the remaining subexpressions. The first subexpression evaluates to a procedure that takes one parameter named x and has the expression body (∗ x x). There is one operand expression, the primitive 2, that evaluates to 2.
To evaluate the application we bind the first parameter, x, to the value of the first operand, 2, and evaluate the procedure body, (∗ x x). After substituting the parameter values, we have (∗ 2 2). This is an application of the primitive multiplication procedure. Evaluating the application results in the value 4.
The procedure in our example, (lambda (x) (∗ x x)), is a procedure that takes a number as input and as output produces the square of that number. We can use the definition mechanism (from Section 3.5) to give this procedure a name so we can reuse it:
(define square (lambda (x) (∗ x x)))
This defines the name square as the procedure. After this, we can apply square to any number:

> (square 2)
4

> (square 1/4)
1/16

> (square (square 2))
16

Example 3.2: Make adder
The expression
((lambda (a)
(lambda (b) (+ a b)))
3)
evaluates to a procedure that adds 3 to its input. Applying that procedure to 4,
(((lambda (a) (lambda (b) (+ a b))) 3)
4)
evaluates to 7. By using define, we can give these procedures sensible names:
(define make-adder
(lambda (a)
(lambda (b) (+ a b))))
Then, (define add-three (make-adder 3)) defines add-three as a procedure that takes one parameter and outputs the value of that parameter plus 3.
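Assuming make-adder and add-three are defined as above, a sample interaction is:

> (add-three 4)
7

> ((make-adder 5) 1)
6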

Abbreviated Procedure Definitions. Since we commonly define new procedures, Scheme provides a condensed notation for defining a procedure6 :
6 The condensed notation also includes a begin expression, which is a special form. We will not need the begin expression until we start dealing with procedures that have side effects. We describe the begin special form in Chapter 9.

Definition ::⇒ (define (Name Parameters) Expression)

This incorporates the lambda invisibly into the definition, but means exactly the same thing. For example,
(define square (lambda (x) (∗ x x)))
can be written equivalently as:
(define (square x) (∗ x x))
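Similarly, the make-adder procedure from Example 3.2,
(define make-adder (lambda (a) (lambda (b) (+ a b))))
can be written equivalently as:
(define (make-adder a) (lambda (b) (+ a b)))
Note that only the outer lambda is absorbed into the condensed definition; the inner lambda expression remains the body of the procedure.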
Exercise 3.5. Define a procedure, cube, that takes one number as input and produces as output the cube of that number.
Exercise 3.6. Define a procedure, compute-cost, that takes as input two numbers, where the first represents the price of an item, and the second represents the sales tax rate. The output should be the total cost, which is computed as the price of the item plus the sales tax on the item, which is its price times the sales tax rate. For example, (compute-cost 13 0.05) should evaluate to 13.65.

3.7 Decisions

To make more useful procedures, we need the actions taken to depend on the input values. For example, we may want a procedure that takes two numbers as inputs and evaluates to the greater of the two inputs. To define such a procedure we need a way of making a decision. The IfExpression expression provides a way of using the result of one expression to select which of two possible expressions to evaluate:
Expression ::⇒ IfExpression
IfExpression ::⇒ (if ExpressionPredicate ExpressionConsequent ExpressionAlternate)
The IfExpression replacement has three Expression terms. For clarity, we give each of them names as denoted by the Predicate, Consequent, and Alternate subscripts. To evaluate an IfExpression, first evaluate the predicate expression, ExpressionPredicate. If it evaluates to any non-false value, the value of the IfExpression is the value of ExpressionConsequent, the consequent expression, and the alternate expression is not evaluated at all. If the predicate expression evaluates to false, the value of the IfExpression is the value of ExpressionAlternate, the alternate expression, and the consequent expression is not evaluated at all.
The predicate expression determines which of the two following expressions is evaluated to produce the value of the IfExpression. If the value of the predicate is anything other than false, the consequent expression is used. For example, if the predicate evaluates to true, to a number, or to a procedure, the consequent expression is evaluated.
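For example, following this rule, the expression
(if 0 1 2)
evaluates to 1: the number 0 is a non-false value, so the consequent expression is selected.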

The if expression is a special form. This means that although it looks syntactically identical to an application (that is, it could be an application of a procedure named if), it is not evaluated as a normal application would be. Instead, we have


a special evaluation rule for if expressions. The reason a special evaluation rule is needed is because we do not want all the subexpressions to be evaluated. With the normal application rule, all the subexpressions are evaluated first, and then the procedure resulting from the first subexpression is applied to the values resulting from the others. With the if special form evaluation rule, the predicate expression is always evaluated first and only one of the following subexpressions is evaluated depending on the result of evaluating the predicate expression.
This means an if expression can evaluate to a value even if evaluating one of its subexpressions would produce an error. For example,
(if (> 3 4) (∗ + +) 7)
evaluates to 7 even though evaluating the subexpression (∗ + +) would produce an error. Because of the special evaluation rule for if expressions, the consequent expression is never evaluated.
Example 3.3: Bigger
Now that we have procedures, decisions, and definitions, we can understand the bigger procedure from the beginning of the chapter. The definition,
(define (bigger a b) (if (> a b) a b))
is a condensed procedure definition. It is equivalent to:
(define bigger (lambda (a b) (if (> a b) a b)))
This defines the name bigger as the value of evaluating the procedure expression
(lambda (a b) (if (> a b) a b)). This is a procedure that takes two inputs, named a and b. Its body is an if expression with predicate expression (> a b). The predicate expression compares the value that is bound to the first parameter, a, with the value that is bound to the second parameter, b, and evaluates to true if the value of the first parameter is greater, and false otherwise. According to the evaluation rule for an if expression, when the predicate evaluates to any nonfalse value (in this case, true), the value of the if expression is the value of the consequent expression, a. When the predicate evaluates to false, the value of the if expression is the value of the alternate expression, b. Hence, our bigger procedure takes two numbers as inputs and produces as output the greater of the two inputs.

Exercise 3.7. Follow the evaluation rules to evaluate the Scheme expression:
(bigger 3 4) where bigger is the procedure defined above. (It is very tedious to follow all of the steps (that’s why we normally rely on computers to do it!), but worth doing once to make sure you understand the evaluation rules.)


Exercise 3.8. Define a procedure, xor, that implements the logical exclusive-or operation. The xor function takes two inputs, and outputs true if exactly one of those inputs has a true value. Otherwise, it outputs false. For example, (xor true true) should evaluate to false and (xor (< 3 5) (= 8 8)) should also evaluate to false, since both inputs are true.
Exercise 3.9. Define a procedure, absvalue, that takes a number as input and produces the absolute value of that number as its output. For example, (absvalue 3) should evaluate to 3 and (absvalue −150) should evaluate to 150.
Exercise 3.10. Define a procedure, bigger-magnitude, that takes two inputs, and outputs the value of the input with the greater magnitude (that is, absolute distance from zero). For example, (bigger-magnitude 5 −7) should evaluate to −7, and (bigger-magnitude 9 −3) should evaluate to 9.
Exercise 3.11. Define a procedure, biggest, that takes three inputs, and produces as output the maximum value of the three inputs. For example, (biggest 5 7 3) should evaluate to 7. Find at least two different ways to define biggest, one using bigger, and one without using it.

3.8 Evaluation Rules

Here we summarize the grammar rules and evaluation rules. Since each grammar rule has an associated evaluation rule, we can determine the meaning of any grammatical Scheme fragment by combining the evaluation rules corresponding to the grammar rules followed to derive that fragment.
Program ::⇒ | ProgramElement Program
ProgramElement ::⇒ Expression | Definition

A program is a sequence of expressions and definitions.
Definition ::⇒ (define Name Expression)

A definition evaluates the expression, and associates the value of the expression with the name.
Definition ::⇒ (define (Name Parameters) Expression)

Abbreviation for
(define Name (lambda (Parameters) Expression))
Expression ::⇒ PrimitiveExpression | NameExpression
| ApplicationExpression
| ProcedureExpression | IfExpression

The value of the expression is the value of the replacement expression.

PrimitiveExpression ::⇒ Number | true | false | primitive procedure

Evaluation Rule 1: Primitives. A primitive expression evaluates to its pre-defined value.

NameExpression ::⇒ Name

Evaluation Rule 2: Names. A name evaluates to the value associated with that name.
ApplicationExpression ::⇒ (Expression MoreExpressions)

Evaluation Rule 3: Application. To evaluate an application expression:
a. Evaluate all the subexpressions;
b. Then, apply the value of the first subexpression to the values of the remaining subexpressions.
MoreExpressions ::⇒ | Expression MoreExpressions
ProcedureExpression ::⇒ (lambda (Parameters) Expression)
Parameters ::⇒ | Name Parameters

Evaluation Rule 4: Lambda. Lambda expressions evaluate to a procedure that takes the given parameters and has the expression as its body.
IfExpression ::⇒ (if ExpressionPredicate ExpressionConsequent ExpressionAlternate)

Evaluation Rule 5: If. To evaluate an if expression, (a) evaluate the predicate expression; then, (b) if the value of the predicate expression is a false value then the value of the if expression is the value of the alternate expression; otherwise, the value of the if expression is the value of the consequent expression.
The evaluation rule for an application (Rule 3b) uses apply to perform the application. Apply is defined by the two application rules:
Application Rule 1: Primitives.
To apply a primitive procedure, just do it.
Application Rule 2: Constructed Procedures.
To apply a constructed procedure, evaluate the body of the procedure with each parameter name bound to the corresponding input expression value.
Application Rule 2 uses the evaluation rules to evaluate the expression. Thus, the evaluation rules are defined using the application rules, which are defined using the evaluation rules! This appears to be a circular definition, but as with the grammar examples, it has a base case. Some expressions evaluate without using the application rules (e.g., primitive expressions, name expressions), and some applications can be performed without using the evaluation rules (when the procedure to apply is a primitive). Hence, the process of evaluating an expression will sometimes finish and when it does we end with the value of the expression.7
7 This does not guarantee that evaluation always finishes, however! The next chapter includes some examples where evaluation never finishes.
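For example, consider evaluating ((lambda (x) (+ x 1)) 3) using these rules. Evaluation Rule 3 says to evaluate all the subexpressions: by Evaluation Rule 4, the lambda expression evaluates to a procedure with parameter x and body (+ x 1); by Evaluation Rule 1, the primitive 3 evaluates to 3. Application Rule 2 then evaluates the body with x bound to 3. The body is itself an application expression, so Evaluation Rule 3 applies again, and Application Rule 1 applies the primitive procedure + to 3 and 1, producing 4.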

3.9 Summary

At this point, we have covered enough of Scheme to write useful programs (even if the programs we have seen so far seem rather dull). In fact (as we show in
Chapter 12), we have covered enough to express every possible computation!
We just need to combine these constructs in more complex ways to perform more interesting computations. The next chapter (and much of the rest of this book), focuses on ways to combine the constructs for making procedures, making decisions, and applying procedures in more powerful ways.

4 Problems and Procedures
A great discovery solves a great problem, but there is a grain of discovery in the solution of any problem. Your problem may be modest, but if it challenges your curiosity and brings into play your inventive faculties, and if you solve it by your own means, you may experience the tension and enjoy the triumph of discovery.
George Pólya, How to Solve It

Computers are tools for performing computations to solve problems. In this chapter, we consider what it means to solve a problem and explore some strategies for constructing procedures that solve problems.

4.1 Solving Problems

Traditionally, a problem is an obstacle to overcome or some question to answer.
Once the question is answered or the obstacle circumvented, the problem is solved and we can declare victory and move on to the next one.
When we talk about writing programs to solve problems, though, we have a larger goal. We don’t just want to solve one instance of a problem, we want an algorithm that can solve all instances of a problem. A problem is defined by its inputs and the desired property of the output. Recall from Chapter 1 that a procedure is a precise description of a process, and that a procedure which is guaranteed to always finish is called an algorithm. The name algorithm is a Latinization of the name of the Persian mathematician and scientist, Muhammad ibn Mūsā al-Khwārizmī, who published a book in 825 on calculation with Hindu numerals. Although the name algorithm was adopted after al-Khwārizmī’s book, algorithms go back much further than that. The ancient Babylonians had algorithms for finding square roots more than 3500 years ago (see Exploration 4.1).
For example, we don’t just want to find the best route between New York and
Washington, we want an algorithm that takes as inputs the map, start location, and end location, and outputs the best route. There are infinitely many possible inputs that each specify different instances of the problem; a general solution to the problem is a procedure that finds the best route for all possible inputs.1
To define a procedure that can solve a problem, we need to define a procedure that takes inputs describing the problem instance and produces a different information process depending on the actual values of its inputs. A procedure
1 Actually, finding a general algorithm that does this without needing to essentially try all possible routes is a challenging and interesting problem, for which no efficient solution is known. Finding one (or proving no fast algorithm exists) would resolve the most important open problem in computer science!


takes zero or more inputs, and produces one output or no outputs2 , as shown in
Figure 4.1.

Figure 4.1. A procedure maps inputs to an output.

Our goal in solving a problem is to devise a procedure that takes inputs that define a problem instance, and produces as output the solution to that problem instance. The procedure should be an algorithm — this means every application of the procedure must eventually finish evaluating and produce an output value.
There is no magic wand for solving problems. But, most problem solving involves breaking problems you do not yet know how to solve into simpler and simpler problems until you find problems simple enough that you already know how to solve them. The creative challenge is to find the simpler subproblems that can be combined to solve the original problem. This approach of solving problems by breaking them into simpler parts is known as divide-and-conquer. The following sections describe two key forms of divide-and-conquer problem solving: composition and recursive problem solving. We will use these same problem-solving techniques in different forms throughout this book.

4.2 Composing Procedures

One way to divide a problem is to split it into steps where the output of the first step is the input to the second step, and the output of the second step is the solution to the problem. Each step can be defined by one procedure, and the two procedures can be combined to create one procedure that solves the problem.
Figure 4.2 shows a composition of two functions, f and g. The output of f is used as the input to g.

Figure 4.2. Composition.

We can express this composition with the Scheme expression (g (f x)) where x is the input. The written order appears to be reversed from the picture in Figure 4.2. This is because we apply a procedure to the values of its subexpressions:
2 Although procedures can produce more than one output, we limit our discussion here to procedures that produce no more than one output. In the next chapter, we introduce ways to construct complex data, so any number of output values can be packaged into a single output.


the values of the inner subexpressions must be computed first, and then used as the inputs to the outer applications. So, the inner subexpression (f x) is evaluated first since the evaluation rule for the outer application expression is to first evaluate all the subexpressions.
To define a procedure that implements the composed procedure we make x a parameter:
(define fog (lambda (x) (g (f x))))
This defines fog as a procedure that takes one input and produces as output the composition of f and g applied to the input parameter. This works for any two procedures that both take a single input parameter.
We can compose the square and cube procedures from Chapter 3:
(define sixth-power (lambda (x) (cube (square x))))
Then, (sixth-power 2) evaluates to 64.

4.2.1 Procedures as Inputs and Outputs

All the procedure inputs and outputs we have seen so far have been numbers.
The subexpressions of an application can be any expression including a procedure. A higher-order procedure is a procedure that takes other procedures as inputs or that produces a procedure as its output. Higher-order procedures give us the ability to write procedures that behave differently based on the procedures that are passed in as inputs.
We can create a generic composition procedure by making f and g parameters:
(define fog (lambda (f g x) (g (f x))))
The fog procedure takes three parameters. The first two are both procedures that take one input. The third parameter is a value that can be the input to the first procedure.
For example, (fog square cube 2) evaluates to 64, and (fog (lambda (x) (+ x 1)) square 2) evaluates to 9. In the second example, the first parameter is the procedure produced by the lambda expression (lambda (x) (+ x 1)). This procedure takes a number as input and produces as output that number plus one. We use a definition to name this procedure inc (short for increment):
(define inc (lambda (x) (+ x 1)))
A more useful composition procedure would separate the input value, x, from the composition. The fcompose procedure takes two procedures as inputs and produces as output a procedure that is their composition:3
(define fcompose
  (lambda (f g) (lambda (x) (g (f x)))))
The body of the fcompose procedure is a lambda expression that makes a procedure. Hence, the result of applying fcompose to two procedures is not a simple value, but a procedure. The resulting procedure can then be applied to a value.
3 We name our composition procedure fcompose to avoid collision with the built-in compose procedure that behaves similarly.


Here are some examples using fcompose:
> (fcompose inc inc)
#<procedure>

> ((fcompose inc inc) 1)
3

> ((fcompose inc square) 2)
9

> ((fcompose square inc) 2)
5

Exercise 4.1. For each expression, give the value to which the expression evaluates. Assume fcompose and inc are defined as above.
a. ((fcompose square square) 3)
b. (fcompose (lambda (x) (∗ x 2)) (lambda (x) (/ x 2)))
c. ((fcompose (lambda (x) (∗ x 2)) (lambda (x) (/ x 2))) 1120)
d. ((fcompose (fcompose inc inc) inc) 2)
Exercise 4.2. Suppose we define self-compose as a procedure that composes a procedure with itself:
(define (self-compose f) (fcompose f f))
Explain how (((fcompose self-compose self-compose) inc) 1) is evaluated.
Exercise 4.3. Define a procedure fcompose3 that takes three procedures as input, and produces as output a procedure that is the composition of the three input procedures. For example, ((fcompose3 abs inc square) −5) should evaluate to 36. Define fcompose3 two different ways: once without using fcompose, and once using fcompose.
Exercise 4.4. The fcompose procedure only works when both input procedures take one input. Define a f2compose procedure that composes two procedures where the first procedure takes two inputs, and the second procedure takes one input. For example, ((f2compose + abs) 3 −5) should evaluate to 2.

4.3 Recursive Problem Solving

In the previous section, we used functional composition to break a problem into two procedures that can be composed to produce the desired output. A particularly useful variation on this is when we can break a problem into a smaller version of the original problem.
The goal is to be able to feed the output of one application of the procedure back into the same procedure as its input for the next application, as shown in
Figure 4.3.
Here’s a corresponding Scheme procedure:
(define f (lambda (n) (f n)))

Figure 4.3. Circular Composition.
Of course, this doesn’t work very well!4 Every application of f results in another application of f to evaluate. This never stops — no output is ever produced and the interpreter will keep evaluating applications of f until it is stopped or runs out of memory.
We need a way to make progress and eventually stop, instead of going around in circles. To make progress, each subsequent application should have a smaller input. Then, the applications stop when the input to the procedure is simple enough that the output is already known. The stopping condition is called the base case, similarly to the grammar rules in Section 2.4. In our grammar examples, the base case involved replacing the nonterminal with nothing (e.g.,
MoreDigits ::⇒ ) or with a terminal (e.g., Noun ::⇒ Alice). In recursive procedures, the base case will provide a solution for some input for which the problem is so simple we already know the answer. When the input is a number, this is often (but not necessarily) when the input is 0 or 1.


To define a recursive procedure, we use an if expression to test if the input matches the base case input. If it does, the consequent expression is the known answer for the base case. Otherwise, the recursive case applies the procedure again but with a smaller input. That application needs to make progress towards reaching the base case. This means, the input has to change in a way that gets closer to the base case input. If the base case is for 0, and the original input is a positive number, one way to get closer to the base case input is to subtract 1 from the input value with each recursive application.
This evaluation spiral is depicted in Figure 4.4. With each subsequent recursive call, the input gets smaller, eventually reaching the base case. For the base case application, a result is returned to the previous application. This is passed back up the spiral to produce the final output. Keeping track of where we are in a recursive evaluation is similar to keeping track of the subnetworks in an RTN traversal. The evaluator needs to keep track of where to return after each recursive evaluation completes, similarly to how we needed to keep track of the stack of subnetworks to know how to proceed in an RTN traversal.
Here is the corresponding procedure:
(define g
(lambda (n)
(if (= n 0) 1 (g (− n 1)))))
Unlike the earlier circular f procedure, if we apply g to any non-negative integer it will eventually produce an output. For example, consider evaluating (g 2).
4 Curious readers should try entering this definition into a Scheme interpreter and evaluating (f 0). If you get tired of waiting for an output, in DrRacket you can click the Stop button in the upper right corner to interrupt the evaluation.


Figure 4.4. Recursive Composition.
When we evaluate the first application, the value of the parameter n is 2, so the predicate expression (= n 0) evaluates to false and the value of the procedure body is the value of the alternate expression, (g (− n 1)). The subexpression, (− n 1), evaluates to 1, so the result is the result of applying g to 1. As with the previous application, this leads to the application, (g (− n 1)), but this time the value of n is 1, so (− n 1) evaluates to 0. The next application leads to the application, (g 0). This time, the predicate expression evaluates to true and we have reached the base case. The consequent expression is just 1, so no further applications of g are performed and this is the result of the application (g 0). This is returned as the result of the (g 1) application in the previous recursive call, and then as the output of the original (g 2) application.
We can think of the recursive evaluation as winding until the base case is reached, and then unwinding the outputs back to the original application. For this procedure, the output is not very interesting: no matter what positive number we apply g to, the eventual result is 1. To solve interesting problems with recursive procedures, we need to accumulate results as the recursive applications wind or unwind. Examples 4.1 and 4.2 illustrate recursive procedures that accumulate the result during the unwinding process. Example 4.3 illustrates a recursive procedure that accumulates the result during the winding process.
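The winding and unwinding for g can be checked directly in the interpreter; this sketch just restates the definition above and applies it:

```scheme
; The g definition from above: winds down to the base case, then unwinds.
(define g
  (lambda (n)
    (if (= n 0) 1 (g (- n 1)))))

(g 2)   ; => 1, after winding through (g 1) and (g 0)
```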
Example 4.1: Factorial
How many different arrangements are there of a deck of 52 playing cards?
The top card in the deck can be any of the 52 cards, so there are 52 possible choices for the top card. The second card can be any of the cards except for the card that is the top card, so there are 51 possible choices for the second card.
The third card can be any of the 50 remaining cards, and so on, until the last card for which there is only one choice remaining.
52 ∗ 51 ∗ 50 ∗ · · · ∗ 2 ∗ 1

This is known as the factorial function (denoted in mathematics using the exclamation point, e.g., 52!). It can be defined recursively:

0! = 1
n! = n ∗ (n − 1)! for all n > 0
The mathematical definition of factorial is recursive, so it is natural that we can define a recursive procedure that computes factorials:
(define (factorial n)
(if (= n 0)
1

(∗ n (factorial (− n 1)))))

Chapter 4. Problems and Procedures


Evaluating (factorial 52) produces the number of arrangements of a 52-card deck: a sixty-eight digit number starting with an 8.
The factorial procedure has structure very similar to our earlier definition of the useless recursive g procedure. The only difference is the alternative expression for the if expression: in g we used (g (− n 1)); in factorial we added the outer application of ∗: (∗ n (factorial (− n 1))). Instead of just evaluating to the result of the recursive application, we are now combining the output of the recursive evaluation with the input n using a multiplication application.
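A few quick checks of factorial in the interpreter (the definition is repeated so the sketch is self-contained):

```scheme
; The factorial definition from above, applied to a few inputs.
(define (factorial n)
  (if (= n 0)
      1
      (* n (factorial (- n 1)))))

(factorial 0)   ; => 1, the base case
(factorial 5)   ; => 120, since 5 * 4 * 3 * 2 * 1 = 120
```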

Exercise 4.5. How many different ways are there of choosing an unordered 5-card hand from a 52-card deck?
This is an instance of the “n choose k” problem (also known as the binomial coefficient): how many different ways are there to choose a set of k items from n items. There are n ways to choose the first item, n − 1 ways to choose the second, . . ., and n − k + 1 ways to choose the kth item. But, since the order does not matter, some of these ways are equivalent. The number of possible ways to order the k items is k!, so we can compute the number of ways to choose k items from a set of n items as:

(n ∗ (n − 1) ∗ · · · ∗ (n − k + 1)) / k! = n! / ((n − k)! k!)
a. Define a procedure choose that takes two inputs, n (the size of the item set) and k (the number of items to choose), and outputs the number of possible ways to choose k items from n.
b. Compute the number of possible 5-card hands that can be dealt from a 52-card deck.
c. [ ] Compute the likelihood of being dealt a flush (5 cards all of the same suit).
In a standard 52-card deck, there are 13 cards of each of the four suits. Hint: divide the number of possible flush hands by the number of possible hands.
Exercise 4.6. Reputedly, when Karl Gauss was in elementary school his teacher assigned the class the task of summing the integers from 1 to 100 (e.g., 1 + 2 + 3 + · · · + 100) to keep them busy. Being the (future) “Prince of Mathematics”, Gauss developed the formula for calculating this sum, which is now known as the Gauss sum. Had he been a computer scientist, however, and had access to a Scheme interpreter in the late 1700s, he might have instead defined a recursive procedure to solve the problem. Define a recursive procedure, gauss-sum, that takes a number n as its input parameter, and evaluates to the sum of the integers from 1 to n as its output. For example, (gauss-sum 100) should evaluate to 5050.
Exercise 4.7. [ ] Define a higher-order procedure, accumulate, that can be used to make both gauss-sum (from Exercise 4.6) and factorial. The accumulate procedure should take as its input the function used for accumulation (e.g., ∗ for factorial, + for gauss-sum). With your accumulate procedure, ((accumulate +) 100) should evaluate to 5050 and ((accumulate ∗) 3) should evaluate to 6. We assume the result of the base case is 1 (although a more general procedure could take that as a parameter).
Hint: since your procedure should produce a procedure as its output, it could start like this:
(define (accumulate f )
(lambda (n)
(if (= n 1) 1
...
Example 4.2: Find Maximum
Consider the problem of defining a procedure that takes as its input a procedure, a low value, and a high value, and outputs the maximum value the input procedure produces when applied to an integer value between the low value and high value input. We name the inputs f , low, and high. To find the maximum, the find-maximum procedure should evaluate the input procedure f at every integer value between the low and high, and output the greatest value found.
Here are a few examples:

> (find-maximum (lambda (x) x) 1 20)
20

> (find-maximum (lambda (x) (− 10 x)) 1 20)
9

> (find-maximum (lambda (x) (∗ x (− 10 x))) 1 20)
25

To define the procedure, think about how to combine results from simpler problems to find the result. For the base case, we need a case so simple we already know the answer. Consider the case when low and high are equal. Then, there is only one value to use, and we know the value of the maximum is (f low). So, the base case is (if (= low high) (f low) . . . ).
How do we make progress towards the base case? Suppose the value of high is equal to the value of low plus 1. Then, the maximum value is either the value of (f low) or the value of (f (+ low 1)). We could select it using the bigger procedure (from Example 3.3): (bigger (f low) (f (+ low 1))). We can extend this to the case where high is equal to low plus 2:

(bigger (f low) (bigger (f (+ low 1)) (f (+ low 2))))

The second operand for the outer bigger evaluation is the maximum value of the input procedure between the low value plus one and the high value input. If we name the procedure we are defining find-maximum, then this second operand is the result of (find-maximum f (+ low 1) high). This works whether high is equal to (+ low 1), or (+ low 2), or any other value greater than low.
Putting things together, we have our recursive definition of find-maximum:
(define (find-maximum f low high)
(if (= low high)
(f low)
(bigger (f low) (find-maximum f (+ low 1) high))))
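Since find-maximum uses bigger from Example 3.3, a self-contained sketch needs a definition for it; here we assume bigger simply returns the larger of its two inputs:

```scheme
; Assumed from Example 3.3: bigger returns the larger of its two inputs.
(define (bigger a b) (if (> a b) a b))

(define (find-maximum f low high)
  (if (= low high)
      (f low)
      (bigger (f low) (find-maximum f (+ low 1) high))))

(find-maximum (lambda (x) (* x (- 10 x))) 1 20)   ; => 25, reached at x = 5
```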
Exercise 4.8. To find the maximum of a function that takes a real number as its input, we need to evaluate at all numbers in the range, not just the integers. There are infinitely many numbers between any two numbers, however, so this is impossible. We can approximate this by evaluating the function at many numbers in the range.
Define a procedure find-maximum-epsilon that takes as input a function f , a low range value low, a high range value high, and an increment epsilon, and produces as output the maximum value of f in the range between low and high at interval epsilon. As the value of epsilon decreases, find-maximum-epsilon should evaluate to a value that approaches the actual maximum value.
For example,
(find-maximum-epsilon (lambda (x) (∗ x (− 5.5 x))) 1 10 1) evaluates to 7.5. And,
(find-maximum-epsilon (lambda (x) (∗ x (− 5.5 x))) 1 10 0.01) evaluates to 7.5625.

Exercise 4.9. [ ] The find-maximum procedure we defined evaluates to the maximum value of the input function in the range, but does not provide the input value that produces that maximum output value. Define a procedure that finds the input in the range that produces the maximum output value.
For example, (find-maximum-input inc 1 10) should evaluate to 10 and (find-maximum-input (lambda (x) (∗ x (− 5.5 x))) 1 10) should evaluate to 3.
Exercise 4.10. [ ] Define a find-area procedure that takes as input a function f , a low range value low, a high range value high, and an increment epsilon, and produces as output an estimate for the area under the curve produced by the function f between low and high using the epsilon value to determine how many regions to evaluate.
Example 4.3: Euclid’s Algorithm
In Book 7 of the Elements, Euclid describes an algorithm for finding the greatest common divisor of two non-zero integers. The greatest common divisor is the greatest integer that divides both of the input numbers without leaving any remainder. For example, the greatest common divisor of 150 and 200 is 50 since
(/ 150 50) evaluates to 3 and (/ 200 50) evaluates to 4, and there is no number greater than 50 that can evenly divide both 150 and 200.
The modulo primitive procedure takes two integers as its inputs and evaluates to the remainder when the first input is divided by the second input. For example,
(modulo 6 3) evaluates to 0 and (modulo 7 3) evaluates to 1.
Euclid’s algorithm stems from two properties of integers:
1. If (modulo a b) evaluates to 0 then b is the greatest common divisor of a and b.
2. If (modulo a b) evaluates to a non-zero integer r, the greatest common divisor of a and b is the greatest common divisor of b and r.
We can define a recursive procedure for finding the greatest common divisor


closely following Euclid’s algorithm5:
(define (gcd-euclid a b)
(if (= (modulo a b) 0) b (gcd-euclid b (modulo a b))))
The structure of the definition is similar to the factorial definition: the procedure body is an if expression and the predicate tests for the base case. For the gcd-euclid procedure, the base case corresponds to the first property above. It occurs when b divides a evenly, and the consequent expression is b. The alternate expression, (gcd-euclid b (modulo a b)), is the recursive application.
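For example, applying gcd-euclid (the definition is repeated so the sketch is self-contained):

```scheme
(define (gcd-euclid a b)
  (if (= (modulo a b) 0) b (gcd-euclid b (modulo a b))))

; (modulo 150 200) is 150, so this becomes (gcd-euclid 200 150), then
; (gcd-euclid 150 50), where (modulo 150 50) is 0 and the base case produces 50.
(gcd-euclid 150 200)   ; => 50
(gcd-euclid 6 9)       ; => 3
```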


The gcd-euclid procedure differs from the factorial definition in that there is no outer application expression in the recursive call. We do not need to combine the result of the recursive application with some other value as was done in the factorial definition; the result of the recursive application is the final result. Unlike the factorial and find-maximum examples, the gcd-euclid procedure produces the result in the base case, and no further computation is necessary to produce the final result. When no further evaluation is necessary to get from the result of the recursive application to the final result, a recursive definition is said to be tail recursive. Tail recursive procedures have the advantage that they can be evaluated without needing to keep track of the stack of previous recursive calls. Since the final call produces the final result, there is no need for the interpreter to unwind the recursive calls to produce the answer.
Exercise 4.11. Show the structure of the gcd-euclid applications in evaluating
(gcd-euclid 6 9).
Exercise 4.12. [ ] Provide a convincing argument why the evaluation of (gcd-euclid a b) will always finish when the inputs are both positive integers.
Exercise 4.13. Provide an alternate definition of factorial that is tail recursive.
To be tail recursive, the expression containing the recursive application cannot be part of another application expression. (Hint: define a factorial-helper procedure that takes an extra parameter, and then define factorial as (define (factorial n) (factorial-helper n 1)).)
Exercise 4.14. Provide a tail recursive definition of find-maximum.
Exercise 4.15. [ ] Provide a convincing argument why it is possible to transform any recursive procedure into an equivalent procedure that is tail recursive.
Exploration 4.1: Square Roots
One of the earliest known algorithms is a method for computing square roots. It is known as Heron’s method after the Greek mathematician Heron of Alexandria, who lived in the first century AD and described the method, although it was also known to the Babylonians many centuries earlier. Isaac Newton developed a more general method for estimating functions based on their derivatives, known as Newton’s method, of which Heron’s method is a specialization.

5 DrRacket provides a built-in procedure gcd that computes the greatest common divisor. We name our procedure gcd-euclid to avoid a clash with the built-in procedure.
Square root is a mathematical function that takes a number, a, as input and outputs a value x such that x² = a. For many numbers (including 2), the square root is irrational, so the best we can hope for is a good approximation. We define a procedure find-sqrt that takes the target number as input and outputs an approximation for its square root.
Heron’s method works by starting with an arbitrary guess, g0. Then, with each iteration, compute a new guess (gn is the nth guess) that is a function of the previous guess (gn−1) and the target number (a):

gn = (gn−1 + a / gn−1) / 2

As n increases, gn gets closer and closer to the square root of a.
The definition is recursive since we compute gn as a function of gn−1 , so we can define a recursive procedure that computes Heron’s method. First, we define a procedure for computing the next guess from the previous guess and the target:
(define (heron-next-guess a g) (/ (+ g (/ a g)) 2))
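For example, starting from a guess of 1 for the square root of 2, the guesses are exact fractions that improve quickly:

```scheme
(define (heron-next-guess a g) (/ (+ g (/ a g)) 2))

(heron-next-guess 2 1)     ; => 3/2
(heron-next-guess 2 3/2)   ; => 17/12; squaring 17/12 gives 289/144, already close to 2
```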
Next, we define a recursive procedure to compute the nth guess using Heron’s method. It takes three inputs: the target number, a, the number of guesses to make, n, and the value of the first guess, g.
(define (heron-method a n g )
(if (= n 0) g (heron-method a (− n 1) (heron-next-guess a g ))))
To start, we need a value for the first guess. The choice doesn’t really matter; the method works with any starting guess (but will reach a closer estimate quicker if the starting guess is good). We will use 1 as our starting guess. So, we can define a find-sqrt procedure that takes two inputs, the target number and the number of guesses to make, and outputs an approximation of the square root of the target number.
(define (find-sqrt a guesses)
(heron-method a guesses 1))
Heron’s method converges to a good estimate very quickly:
> (square (find-sqrt 2 0))
1

> (square (find-sqrt 2 1))
2 1/4

> (square (find-sqrt 2 2))
2 1/144

> (square (find-sqrt 2 4))
2 1/221682772224

> (exact->inexact (find-sqrt 2 5))
1.4142135623730951



The actual square root of 2 is 1.414213562373095048 . . ., so our estimate is correct to 16 digits after only five guesses.
Users of square roots don’t really care about the method used to find the square root (or how many guesses are used). Instead, what is important to a square root user is how close the estimate is to the actual value. Can we change our find-sqrt procedure so that instead of taking the number of guesses to make as its second input it takes a minimum tolerance value?
Since we don’t know the actual square root value (otherwise, of course, we could just return that), we need to measure tolerance as how close the square of the approximation is to the target number. Hence, we can stop when the square of the guess is close enough to the target value.
(define (close-enough? a tolerance g)
(<= (abs (− a (square g))) tolerance))

Then, we can keep improving the guess until it is close enough, and define find-sqrt-approx to start from a guess of 1 as before:

(define (heron-method-tolerance a tolerance g)
(if (close-enough? a tolerance g)
g
(heron-method-tolerance a tolerance (heron-next-guess a g))))

(define (find-sqrt-approx a tolerance)
(heron-method-tolerance a tolerance 1))

For example:

> (exact->inexact (square (find-sqrt-approx 2 0.01)))
2.0069444444444446

> (exact->inexact (square (find-sqrt-approx 2 0.0000001)))
2.000000000004511

a. How accurate is the built-in sqrt procedure?
b. Can you produce more accurate square roots than the built-in sqrt procedure?
c. Why doesn’t the built-in procedure do better?

4.4 Evaluating Recursive Applications

Evaluating an application of a recursive procedure follows the evaluation rules just like any other expression evaluation. It may be confusing, however, to see that this works because of the apparent circularity of the procedure definition.
Here, we show in detail the evaluation steps for evaluating (factorial 2). The evaluation and application rules refer to the rules summary in Section 3.8. We first show the complete evaluation following the substitution model evaluation rules in full gory detail, and later review a subset showing the most revealing steps.
Stepping through even a fairly simple evaluation using the evaluation rules is quite tedious, and not something humans should do very often (that’s why we have computers!), but it is instructive to do once to understand exactly how an expression is evaluated.
The evaluation rule for an application expression does not specify the order in which the subexpressions are evaluated. A Scheme interpreter is free to evaluate them in any order. Here, we choose to evaluate the subexpressions in the order that is most readable. The value produced by an evaluation does not depend on the order in which the subexpressions are evaluated.6
In the evaluation steps, we use typewriter font for uninterpreted Scheme expressions and sans-serif font to show values. So, 2 represents the Scheme expression that evaluates to the number 2.
1. (factorial 2) [Evaluation Rule 3(a): Application subexpressions]
2. (factorial 2) [Evaluation Rule 2: Name]
3. ((lambda (n) (if (= n 0) 1 (* n (factorial (- n 1))))) 2) [Evaluation Rule 4: Lambda]
4. ((lambda (n) (if (= n 0) 1 (* n (factorial (- n 1))))) 2) [Evaluation Rule 1: Primitive]
5. ((lambda (n) (if (= n 0) 1 (* n (factorial (- n 1))))) 2) [Evaluation Rule 3(b): Application, Application Rule 2]
6. (if (= 2 0) 1 (* 2 (factorial (- 2 1)))) [Evaluation Rule 5(a): If predicate]
7. (if (= 2 0) 1 (* 2 (factorial (- 2 1)))) [Evaluation Rule 3(a): Application subexpressions]
8. (if (= 2 0) 1 (* 2 (factorial (- 2 1)))) [Evaluation Rule 1: Primitive]
9. (if (= 2 0) 1 (* 2 (factorial (- 2 1)))) [Evaluation Rule 3(b): Application, Application Rule 1]
10. (if false 1 (* 2 (factorial (- 2 1)))) [Evaluation Rule 5(b): If alternate]
11. (* 2 (factorial (- 2 1))) [Evaluation Rule 3(a): Application subexpressions]
12. (* 2 (factorial (- 2 1))) [Evaluation Rule 1: Primitive]
13. (* 2 (factorial (- 2 1))) [Evaluation Rule 3(a): Application subexpressions]
14. (* 2 (factorial (- 2 1))) [Evaluation Rule 3(a): Application subexpressions]
15. (* 2 (factorial (- 2 1))) [Evaluation Rule 1: Primitive]
16. (* 2 (factorial (- 2 1))) [Evaluation Rule 3(b): Application, Application Rule 1]
17. (* 2 (factorial 1)) [Continue Evaluation Rule 3(a); Evaluation Rule 2: Name]
18. (* 2 ((lambda (n) (if (= n 0) 1 (* n (factorial (- n 1))))) 1)) [Evaluation Rule 4: Lambda]
19. (* 2 ((lambda (n) (if (= n 0) 1 (* n (factorial (- n 1))))) 1)) [Evaluation Rule 3(b): Application, Application Rule 2]
20. (* 2 (if (= 1 0) 1 (* 1 (factorial (- 1 1))))) [Evaluation Rule 5(a): If predicate]
21. (* 2 (if (= 1 0) 1 (* 1 (factorial (- 1 1))))) [Evaluation Rule 3(a): Application subexpressions]
22. (* 2 (if (= 1 0) 1 (* 1 (factorial (- 1 1))))) [Evaluation Rule 1: Primitives]
23. (* 2 (if (= 1 0) 1 (* 1 (factorial (- 1 1))))) [Evaluation Rule 3(b): Application Rule 1]
24. (* 2 (if false 1 (* 1 (factorial (- 1 1))))) [Evaluation Rule 5(b): If alternate]
25. (* 2 (* 1 (factorial (- 1 1)))) [Evaluation Rule 3(a): Application]
26. (* 2 (* 1 (factorial (- 1 1)))) [Evaluation Rule 1: Primitives]
27. (* 2 (* 1 (factorial (- 1 1)))) [Evaluation Rule 3(a): Application]
28. (* 2 (* 1 (factorial (- 1 1)))) [Evaluation Rule 3(a): Application]
29. (* 2 (* 1 (factorial (- 1 1)))) [Evaluation Rule 1: Primitives]
30. (* 2 (* 1 (factorial (- 1 1)))) [Evaluation Rule 3(b): Application, Application Rule 1]
31. (* 2 (* 1 (factorial 0))) [Evaluation Rule 2: Name]
32. (* 2 (* 1 ((lambda (n) (if (= n 0) 1 (* n (factorial (- n 1))))) 0))) [Evaluation Rule 4: Lambda]
33. (* 2 (* 1 ((lambda (n) (if (= n 0) 1 (* n (factorial (- n 1))))) 0))) [Evaluation Rule 3(b), Application Rule 2]
34. (* 2 (* 1 (if (= 0 0) 1 (* 0 (factorial (- 0 1)))))) [Evaluation Rule 5(a): If predicate]
35. (* 2 (* 1 (if (= 0 0) 1 (* 0 (factorial (- 0 1)))))) [Evaluation Rule 3(a): Application subexpressions]
36. (* 2 (* 1 (if (= 0 0) 1 (* 0 (factorial (- 0 1)))))) [Evaluation Rule 1: Primitives]
37. (* 2 (* 1 (if (= 0 0) 1 (* 0 (factorial (- 0 1)))))) [Evaluation Rule 3(b): Application, Application Rule 1]
38. (* 2 (* 1 (if true 1 (* 0 (factorial (- 0 1)))))) [Evaluation Rule 5(b): If consequent]
39. (* 2 (* 1 1)) [Evaluation Rule 1: Primitives]
40. (* 2 (* 1 1)) [Evaluation Rule 3(b): Application, Application Rule 1]
41. (* 2 1) [Evaluation Rule 3(b): Application, Application Rule 1]
42. 2 [Evaluation finished, no unevaluated expressions remain.]

6 This is only true for the subset of Scheme we have defined so far. Once we introduce side effects and mutation, it is no longer the case, and expressions can produce different results depending on the order in which they are evaluated.

The key to evaluating recursive procedure applications is the special evaluation rule for if. If the if expression were evaluated like a regular application, all subexpressions would be evaluated, and the alternative expression containing the recursive call would never finish evaluating! Since the evaluation rule for if evaluates the predicate expression first and does not evaluate the alternative expression when the predicate expression is true, the circularity in the definition ends when the predicate expression evaluates to true. In the example, this is the base case where (= n 0) evaluates to true and, instead of producing another recursive call, the if expression evaluates to 1.
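We can observe this directly with a small sketch (not from the text): the alternate expression below would run forever if evaluated, but the if never evaluates it when the predicate is true:

```scheme
; run-forever loops forever if applied; the if below never applies it.
(define (run-forever) (run-forever))

(if (= 0 0) 1 (run-forever))   ; => 1; the alternate expression is never evaluated
```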
The Evaluation Stack. The structure of the evaluation is clearer from just the most revealing steps:
1. (factorial 2)
17. (* 2 (factorial 1))
31. (* 2 (* 1 (factorial 0)))
40. (* 2 (* 1 1))
41. (* 2 1)
42. 2

Step 1 starts evaluating (factorial 2). The result is found in Step 42. To evaluate (factorial 2), we follow the evaluation rules, eventually reaching the body expression of the if expression in the factorial definition in Step 17. Evaluating this expression requires evaluating the (factorial 1) subexpression. At Step 17, the first evaluation is in progress, but to complete it we need the value resulting from the second recursive application.
Evaluating the second application results in the body expression, (∗ 1 (factorial 0)), shown for Step 31. At this point, the evaluation of (factorial 2) is stuck in Evaluation Rule 3, waiting for the value of the (factorial 1) subexpression. The evaluation of the (factorial 1) application leads to the (factorial 0) subexpression, which must be evaluated before the (factorial 1) evaluation can complete.
In Step 40, the (factorial 0) subexpression evaluation has completed and produced the value 1. Now, the (factorial 1) evaluation can complete, producing 1 as shown in Step 41. Once the (factorial 1) evaluation completes, all the subexpressions needed to evaluate the expression in Step 17 are now evaluated, and the evaluation completes in Step 42.
Each recursive application can be tracked using a stack, similarly to processing RTN subnetworks (Section 2.3). A stack has the property that the first item pushed on the stack will be the last item removed: all the items pushed on top of this one must be removed before this item can be removed. For application evaluations, the elements on the stack are expressions to evaluate. To finish evaluating the first expression, all of its component subexpressions must be evaluated. Hence, the first application evaluation started is the last one to finish.

Exercise 4.16. This exercise tests your understanding of the (factorial 2) evaluation.
a. In step 5, the second part of the application evaluation rule, Rule 3(b), is used.
In which step does this evaluation rule complete?
b. In step 11, the first part of the application evaluation rule, Rule 3(a), is used.
In which step is the following use of Rule 3(b) started?
c. In step 25, the first part of the application evaluation rule, Rule 3(a), is used.
In which step is the following use of Rule 3(b) started?
d. To evaluate (factorial 3), how many times would Evaluation Rule 2 be used to evaluate the name factorial?
e. [ ] To evaluate (factorial n) for any positive integer n, how many times would
Evaluation Rule 2 be used to evaluate the name factorial?
Exercise 4.17. For which input values n will an evaluation of (factorial n) eventually reach a value? For values where the evaluation is guaranteed to finish, make a convincing argument why it must finish. For values where the evaluation would not finish, explain why.

4.5 Developing Complex Programs

To develop and use more complex procedures it will be useful to learn some helpful techniques for understanding what is going on when procedures are evaluated. It is very rare for a first version of a program to be completely correct, even for an expert programmer. Wise programmers build programs incrementally, by writing and testing small components one at a time.
The process of fixing broken programs is known as debugging. The key to debugging effectively is to be systematic and thoughtful. It is a good idea to take notes to keep track of what you have learned and what you have tried. Thoughtless debugging can be very frustrating, and is unlikely to lead to a correct program.
A good strategy for debugging is to:


1. Ensure you understand the intended behavior of your procedure. Think of a few representative inputs, and what the expected output should be.
2. Do experiments to observe the actual behavior of your procedure. Try your program on simple inputs first. What is the relationship between the actual outputs and the desired outputs? Does it work correctly for some inputs but not others?
3. Make changes to your procedure and retest it. If you are not sure what to do, make changes in small steps and carefully observe the impact of each change.

First actual bug, Grace Hopper’s notebook, 1947.

For more complex programs, follow this strategy at the level of sub-components.
For example, you can try debugging at the level of one expression before trying the whole procedure. Break your program into several procedures so you can test and debug each procedure independently. The smaller the unit you test at one time, the easier it is to understand and fix problems.
DrRacket provides many useful and powerful features to aid debugging, but the most important tool for debugging is using your brain to think carefully about what your program should be doing and how its observed behavior differs from the desired behavior. Next, we describe two simple ways to observe program behavior.

4.5.1 Printing

One useful procedure built-in to DrRacket is the display procedure. It takes one input, and produces no output. Instead of producing an output, it prints out the value of the input (it will appear in purple in the Interactions window). We can use display to observe what a procedure is doing as it is evaluated.
For example, if we add a (display n) expression at the beginning of our factorial procedure we can see all the intermediate calls. To make each printed value appear on a separate line, we use the newline procedure. The newline procedure prints a new line; it takes no inputs and produces no output.
(define (factorial n)
(display "Enter factorial: ") (display n) (newline)
(if (= n 0) 1 (∗ n (factorial (− n 1)))))
Evaluating (factorial 2) produces:
Enter factorial: 2
Enter factorial: 1
Enter factorial: 0
2

The built-in printf procedure makes it easier to print out many values at once.
It takes one or more inputs. The first input is a string (a sequence of characters enclosed in double quotes). The string can include special ~a markers that print out values of objects inside the string. Each ~a marker is matched with a corresponding input, and the value of that input is printed in place of the ~a in the string. Another special marker, ~n, prints out a new line inside the string.
Using printf , we can define our factorial procedure with printing as:


(define (factorial n)
(printf "Enter factorial: ~a~n" n)
(if (= n 0) 1 (∗ n (factorial (− n 1)))))
The display, printf, and newline procedures do not produce output values. Instead, they are applied to produce side effects. A side effect is something that changes the state of a computation. In this case, the side effect is printing in the Interactions window. Side effects make reasoning about what programs do much more complicated since the order in which events happen now matters.
We will mostly avoid using procedures with side effects until Chapter 9, but printing procedures are so useful that we introduce them here.

4.5.2 Tracing

DrRacket provides a more automated way to observe applications of procedures.
We can use tracing to observe the start of a procedure evaluation (including the procedure inputs) and the completion of the evaluation (including the output).
To use tracing, it is necessary to first load the tracing library by evaluating this expression: (require racket/trace)
This defines the trace procedure that takes one input, a constructed procedure
(trace does not work for primitive procedures). After evaluating (trace proc), the interpreter will print out the procedure name and its inputs at the beginning of every application of proc and the value of the output at the end of the application evaluation. If there are other applications before the first application finishes evaluating, these will be printed indented so it is possible to match up the beginning and end of each application evaluation. For example (the trace outputs are shown in typewriter font),

> (trace factorial)
> (factorial 2)
(factorial 2)
|(factorial 1)
| (factorial 0)
| 1
|1
2
2

The trace shows that (factorial 2) is evaluated first; within its evaluation, (factorial 1) and then (factorial 0) are evaluated. The outputs of each of these applications is lined up vertically below the application entry trace.
Exploration 4.2: Recipes for π
The value π is defined as the ratio between the circumference of a circle and its diameter. One way to calculate the approximate value of π is the Gregory-Leibniz series (which was actually discovered by the Indian mathematician Mādhava in the 14th century):

π = 4/1 − 4/3 + 4/5 − 4/7 + 4/9 − · · ·


This summation converges to π. The more terms that are included, the closer the computed value will be to the actual value of π.
a. [ ] Define a procedure compute-pi that takes as input n, the number of terms to include and outputs an approximation of π computed using the first n terms of the Gregory-Leibniz series. (compute-pi 1) should evaluate to 4 and
(compute-pi 2) should evaluate to 2 2/3. For higher terms, use the built-in procedure exact->inexact to see the decimal value. For example,
(exact->inexact (compute-pi 10000)) evaluates (after a long wait!) to 3.1414926535900434.
The Gregory-Leibniz series is fairly simple, but it takes an awful long time to converge to a good approximation for π — only one digit is correct after 10 terms, and after summing 10000 terms only the first four digits are correct.
Mādhava discovered another series for computing the value of π that converges much more quickly:

π = √12 ∗ (1 − 1/(3 ∗ 3) + 1/(5 ∗ 3²) − 1/(7 ∗ 3³) + 1/(9 ∗ 3⁴) − . . .)
M¯ dhava computed the first 21 terms of this series, finding an approximation of a π that is correct for the first 12 digits: 3.14159265359.
b. [ ] Define a procedure cherry-pi that takes as input n, the number of terms to include, and outputs an approximation of π computed using the first n terms of the Mādhava series. (Continue reading for hints.)
To define faster-pi, first define two helper functions: faster-pi-helper, which takes one input, n, and computes the sum of the first n terms in the series without the √12 factor, and faster-pi-term, which takes one input n and computes the value of the nth term in the series (without alternating the adding and subtracting).
(faster-pi-term 1) should evaluate to 1 and (faster-pi-term 2) should evaluate to
1/9. Then, define faster-pi as:
(define (faster-pi terms) (∗ (sqrt 12) (faster-pi-helper terms)))
This uses the built-in sqrt procedure that takes one input and produces as output an approximation of its square root. The accuracy of the sqrt procedure7 limits the number of digits of π that can be correctly computed using this method (see Exploration 4.1 for ways to compute a more accurate approximation for the square root of 12). You should be able to get a few more correct digits than Mādhava was able to get without a computer 600 years ago, but to get more digits you would need a more accurate sqrt procedure or another method for computing π.
The built-in expt procedure takes two inputs, a and b, and produces ab as its output. You could also define your own procedure to compute ab for any integer inputs a and b.
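Putting the hints together, one possible sketch (using the helper names suggested above; yours may differ) is:

```scheme
;; faster-pi-term: the nth term without its sign, 1/((2n-1) * 3^(n-1)).
;; (faster-pi-term 1) => 1 and (faster-pi-term 2) => 1/9, as required.
(define (faster-pi-term n)
  (/ 1 (* (- (* 2 n) 1) (expt 3 (- n 1)))))

;; faster-pi-helper: sum of the first n terms, alternating the signs.
(define (faster-pi-helper n)
  (if (= n 0)
      0
      (+ (faster-pi-helper (- n 1))
         (if (odd? n) (faster-pi-term n) (- (faster-pi-term n))))))

(define (faster-pi terms) (* (sqrt 12) (faster-pi-helper terms)))
```

Evaluating (faster-pi 21) reproduces Mādhava's 12-digit approximation, up to the accuracy of sqrt.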
c. [ ] Find a procedure for computing enough digits of π to find the Feynman point where there are six consecutive 9 digits. This point is named for Richard Feynman, who quipped that he wanted to memorize π to that point so he could recite it as “. . . nine, nine, nine, nine, nine, nine, and so on”.
7 To test its accuracy, try evaluating (square (sqrt 12)).

Chapter 4. Problems and Procedures


Exploration 4.3: Recursive Definitions and Games
Many games can be analyzed by thinking recursively. For this exploration, we consider how to develop a winning strategy for some two-player games. In all the games, we assume player 1 moves first, and the two players take turns until the game ends. The game ends when the player whose turn it is cannot move; the other player wins. A strategy is a winning strategy if it provides a way to always select a move that wins the game, regardless of what the other player does.
One approach for developing a winning strategy is to work backwards from the winning position. This position corresponds to the base case in a recursive definition. If the game reaches a winning position for player 1, then player 1 wins.
Moving back one move, if the game reaches a position where it is player 2’s move, but all possible moves lead to a winning position for player 1, then player 1 is guaranteed to win. Continuing backwards, if the game reaches a position where it is player 1’s move, and there is a move that leads to a position where all possible moves for player 2 lead to a winning position for player 1, then player 1 is guaranteed to win.
The first game we will consider is called Nim. Variants on Nim have been played widely over many centuries, but no one is quite sure where the name comes from. We’ll start with a simple variation on the game that was called Thai 21 when it was used as an Immunity Challenge on Survivor.
In this version of Nim, the game starts with a pile of 21 stones. On each turn, a player removes one, two, or three stones. The player who removes the last stone wins, since the other player cannot make a valid move on the following turn.
a. What should the player who moves first do to ensure she can always win the game? (Hint: start with the base case, and work backwards. Think about a game starting with 5 stones first, before trying 21.)
b. Suppose instead of being able to take 1 to 3 stones with each turn, you can take 1 to n stones where n is some number greater than or equal to 1. For what values of n should the first player always win (when the game starts with 21 stones)?
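The backwards reasoning can also be automated. This sketch (winning-position? and can-win? are names invented here, not from the text) decides whether a pile of s stones is a win for the player about to move, when each turn removes 1 to maxtake stones:

```scheme
;; A position is losing when no stones remain (the player to move loses).
;; It is winning when some legal move leaves the opponent in a losing
;; position.  can-win? tries removing k, k+1, ..., maxtake stones.
(define (winning-position? s maxtake)
  (can-win? s maxtake 1))

(define (can-win? s maxtake k)
  (if (or (> k maxtake) (> k s))
      #f
      (if (not (winning-position? (- s k) maxtake))
          #t
          (can-win? s maxtake (+ k 1)))))
```

(winning-position? 21 3) evaluates to #t, consistent with part a: starting from 21 stones with moves of 1 to 3 stones, the first player can always win.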
A standard Nim game starts with three heaps. At each turn, a player removes any number of stones from any one heap (but may not remove stones from more than one heap). We can describe the state of a 3-heap game of Nim using three numbers, representing the number of stones in each heap. For example, the
Thai 21 game starts with the state (21 0 0) (one heap with 21 stones, and two empty heaps).8
c. What should the first player do to win if the starting state is (2 1 0)?
d. Which player should win if the starting state is (2 2 2)?
e. [ ] Which player should win if the starting state is (5 6 7)?
f. [ ] Describe a strategy for always winning a winnable game of Nim starting from any position.9
8 With the standard Nim rules, this would not be an interesting game since the first player can simply win by removing all 21 stones from the first heap.
9 If you get stuck, you’ll find many resources about Nim on the Internet; but, you’ll get a lot more out of this if you solve it yourself.


The final game we consider is the “Corner the Queen” game invented by Rufus Isaacs.10 The game is played using a single Queen on an arbitrarily large chessboard, as shown in Figure 4.5.

Figure 4.5. Cornering the Queen.
On each turn, a player moves the Queen one or more squares in either the left, down, or diagonally down-left direction (unlike a standard chess Queen, in this game the Queen may not move right, up, or up-right). As with the other games, the last player to make a legal move wins. For this game, once the Queen reaches the bottom left square there are no moves possible, so the player who moves the Queen onto that square wins the game. We name the squares using the numbers on the sides of the chessboard, with the column number first. So, the Queen in the picture is on square (4 7).
g. Identify all the starting squares for which the first player to move can win right away. (Your answer should generalize to any size square chessboard.)
h. Suppose the Queen is on square (2 1) and it is your move. Explain why there is no way you can avoid losing the game.
i. Given the shown starting position (with the Queen at (4 7)), would you rather be the first or second player?
j. [ ] Describe a strategy for winning the game (when possible). Explain from which starting positions it is not possible to win (assuming the other player always makes the right move).
k. [ ] Define a variant of Nim that is essentially the same as the “Corner the
Queen” game. (This game is known as “Wythoff’s Nim”.)
Developing winning strategies for these types of games is similar to defining a recursive procedure that solves a problem. We need to identify a base case from which it is obvious how to win, and a way to make progress from a large input towards that base case.
10 Described in Martin Gardner, Penrose Tiles to Trapdoor Ciphers. . . And the Return of Dr Matrix, The Mathematical Association of America, 1997.

4.6 Summary

By breaking problems down into simpler problems we can develop solutions to complex problems. Many problems can be solved by combining instances of the same problem on simpler inputs. When we define a procedure to solve a problem this way, it needs to have a predicate expression to determine when the base case has been reached, a consequent expression that provides the value for the base case, and an alternate expression that defines the solution to the given input as an expression using a solution to a smaller input.
Our general recursive problem solving strategy is:
1. Be optimistic! Assume you can solve it.
2. Think of the simplest version of the problem, something you can already solve. This is the base case.
3. Consider how you would solve a big version of the problem by using the result for a slightly smaller version of the problem. This is the recursive case.
4. Combine the base case and the recursive case to solve the problem.
For problems involving numbers, the base case is often when the input value is zero. The problem size is usually reduced by subtracting 1 from one of the inputs. In the next chapter, we introduce more complex data structures. For problems involving complex data, the same strategy will work but with different base cases and ways to shrink the problem size.
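As a small illustration of the strategy on a numeric problem, consider summing the integers from 0 to n (gauss-sum is a hypothetical name used only here):

```scheme
;; gauss-sum: sum of the integers from 0 to n, following the strategy:
;; step 2: the base case n = 0 evaluates to 0;
;; step 3: the recursive case adds n to the solution for the smaller
;;         input n - 1.
(define (gauss-sum n)
  (if (= n 0)
      0
      (+ n (gauss-sum (- n 1)))))
```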

I’d rather be an optimist and a fool than a pessimist and right.
Albert Einstein


5 Data

From a bit to a few hundred megabytes, from a microsecond to half an hour of computing confronts us with the completely baffling ratio of 10⁹! . . . By evoking the need for deep conceptual hierarchies, the automatic computer confronts us with a radically new intellectual challenge that has no precedent in our history.
Edsger Dijkstra

For all the programs so far, we have been limited to simple data such as numbers and Booleans. We call this scalar data since it has no structure. As we saw in
Chapter 1, we can represent all discrete data using just (enormously large) whole numbers. For example, we could represent the text of a book using only one
(very large!) number, and manipulate the characters in the book by changing the value of that number. But, it would be very difficult to design and understand computations that use numbers to represent complex data.


We need more complex data structures to better model structured data. We want to represent data in ways that allow us to think about the problem we are trying to solve, rather than the details of how data is represented and manipulated.
This chapter covers techniques for building data structures and for defining procedures that manipulate structured data, and introduces data abstraction as a tool for managing program complexity.

5.1 Types

All data in a program has an associated type. Internally, all data is stored just as a sequence of bits, so the type of the data is important to understand what it means. We have seen several different types of data already: Numbers, Booleans, and Procedures (we use initial capital letters to signify a datatype).
A datatype defines a set (often infinite) of possible values. The Boolean datatype contains the two Boolean values, true and false. The Number type includes the infinite set of all whole numbers (it also includes negative numbers and rational numbers). We think of the set of possible Numbers as infinite, even though on any particular computer there is some limit to the amount of memory available, and hence, some largest number that can be represented. On any real computer, the number of possible values of any data type is always finite. But, we can imagine a computer large enough to represent any given number.
The type of a value determines what can be done with it. For example, a Number can be used as one of the inputs to the primitive procedures +, ∗, and =. A
Boolean can be used as the first subexpression of an if expression and as the


input to the not procedure (not can also take a Number as its input, but for all Number value inputs the output is false), but cannot be used as the input to
+, ∗, or =.1
A Procedure can be the first subexpression in an application expression. There are infinitely many different types of Procedures, since the type of a Procedure depends on its input and output types. For example, recall the bigger procedure from Chapter 3:
(define (bigger a b) (if (> a b) a b))
It takes two Numbers as input and produces a Number as output. We denote this type as:
Number × Number → Number
The inputs to the procedure are shown on the left side of the arrow. The type of each input is shown in order, separated by the × symbol.2 The output type is given on the right side of the arrow.
From its definition, it is clear that the bigger procedure takes two inputs from its parameter list. How do we know the inputs must be Numbers and the output is a Number?
The body of the bigger procedure is an if expression with the predicate expression (> a b). This applies the > primitive procedure to the two inputs. The type of the > procedure is Number × Number → Boolean. So, for the predicate expression to be valid, its inputs must both be Numbers. This means the input values to bigger must both be Numbers. We know the output of the bigger procedure will be a Number by analyzing the consequent and alternate subexpressions: each evaluates to one of the input values, which must be a Number.
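The type analysis predicts what happens at the interpreter: applying bigger to two Numbers produces a Number, while applying it to values of the wrong type produces an error. A quick check (the error case is left as a comment, since evaluating it halts the program):

```scheme
(define (bigger a b) (if (> a b) a b))

(bigger 3 4)        ; => 4, as the Number x Number -> Number type predicts
;; (bigger #t #f)   ; would produce an error: > expects Numbers as inputs
```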
Starting with the primitive Boolean, Number, and Procedure types, we can build arbitrarily complex datatypes. This chapter introduces mechanisms for building complex datatypes by combining the primitive datatypes.
Exercise 5.1. Describe the type of each of these expressions.
a. 17
b. (lambda (a) (> a 0))
c. ((lambda (a) (> a 0)) 3)
d. (lambda (a) (lambda (b) (> a b)))
e. (lambda (a) a)
1 The primitive procedure equal? is a more general comparison procedure that can take as inputs any two values, so could be used to compare Boolean values. For example, (equal? false false) evaluates to true and (equal? true 3) is a valid expression that evaluates to false.
2 The notation using × to separate input types makes sense if you think about the number of different inputs to a procedure. For example, consider a procedure that takes two Boolean values as inputs, so its type is Boolean × Boolean → Value. Each Boolean input can be one of two possible values. If we combined both inputs into one input, there would be 2 × 2 different values needed to represent all possible inputs.


Exercise 5.2. Define or identify a procedure that has the given type.
a. Number × Number → Boolean
b. Number → Number
c. (Number → Number) × (Number → Number)
→ (Number → Number)
d. Number → (Number → (Number → Number))

5.2 Pairs

The simplest structured data construct is a Pair. We draw a Pair as two boxes, each containing a value. We call each box of a Pair a cell. Here is a Pair where the first cell has the value 37 and the second cell has the value 42:

[ 37 | 42 ]

Scheme provides built-in procedures for constructing a Pair, and for extracting each cell from a Pair:

cons: Value × Value → Pair
Evaluates to a Pair whose first cell is the first input and second cell is the second input. The inputs can be of any type.

car: Pair → Value
Evaluates to the first cell of the input, which must be a Pair.

cdr: Pair → Value
Evaluates to the second cell of the input, which must be a Pair.
These rather unfortunate names come from the original LISP implementation on the IBM 704. The name cons is short for “construct”. The name car is short for
“Contents of the Address part of the Register” and the name cdr (pronounced
“could-er”) is short for “Contents of the Decrement part of the Register”. The designers of the original LISP implementation picked the names because of how pairs could be implemented on the IBM 704 using a single register to store both parts of a pair, but it is a mistake to name things after details of their implementation (see Section 5.6). Unfortunately, the names stuck.
We can construct the Pair shown above by evaluating (cons 37 42). DrRacket displays a Pair by printing the value of each cell separated by a dot: (37 . 42). The interactions below show example uses of cons, car, and cdr.

> (define mypair (cons 37 42))
> (car mypair)
37

> (cdr mypair)
42

The values in the cells of a Pair can be any type, including other Pairs. For example, this definition defines a Pair where each cell of the Pair is itself a Pair:
(define doublepair (cons (cons 1 2) (cons 3 4)))


We can use the car and cdr procedures to access components of the doublepair structure: (car doublepair) evaluates to the Pair (1 . 2), and (cdr doublepair) evaluates to the Pair (3 . 4).
We can compose multiple car and cdr applications to extract components from nested pairs:

> (cdr (car doublepair))
2

> (car (cdr doublepair))
3

> ((fcompose cdr cdr) doublepair) ; fcompose from Section 4.2.1
4

> (car (car (car doublepair)))
car: expects argument of type <pair>; given 1
The last expression produces an error when it is evaluated since car is applied to the scalar value 1. The car and cdr procedures can only be applied to an input that is a Pair. Hence, an error results when we attempt to apply car to a scalar value. This is an important property of data: the type of data (e.g., a
Pair) defines how it can be used (e.g., passed as the input to car and cdr). Every procedure expects a certain type of inputs, and typically produces an error when it is applied to values of the wrong type.
We can draw the value of doublepair by nesting Pairs within cells:

2

1

3

4

Drawing Pairs within Pairs within Pairs can get quite difficult, however. For instance, try drawing (cons 1 (cons 2 (cons 3 (cons 4 5)))) this way.
Instead, we use arrows to point to the contents of cells that are not simple values. This is the structure of doublepair shown using arrows:

[ ∙ | ∙ ]
  |   └──▶ [ 3 | 4 ]
  └──▶ [ 1 | 2 ]

Using arrows to point to cell contents allows us to draw arbitrarily complicated data structures such as (cons 1 (cons 2 (cons 3 (cons 4 5)))), keeping the cells reasonable sizes:

[ 1 | ∙ ]──▶[ 2 | ∙ ]──▶[ 3 | ∙ ]──▶[ 4 | 5 ]


Exercise 5.3. Suppose the following definition has been executed:
(define tpair
(cons (cons (cons 1 2) (cons 3 4))
5))
Draw the structure defined by tpair, and give the value of each of the following expressions.
a. (cdr tpair)
b. (car (car (car tpair)))
c. (cdr (cdr (car tpair)))
d. (car (cdr (cdr tpair)))
Exercise 5.4. Write expressions that extract each of the four elements from fstruct defined by (define fstruct (cons 1 (cons 2 (cons 3 4)))).
Exercise 5.5. Give an expression that produces the structure shown below.

5.2.1 Making Pairs

Although Scheme provides the built-in procedures cons, car, and cdr for creating Pairs and accessing their cells, there is nothing magical about these procedures. We can define procedures with the same behavior ourselves using the subset of Scheme introduced in Chapter 3.
Here is one way to define the pair procedures (we prepend an s to the names to avoid confusion with the built-in procedures):
(define (scons a b) (lambda (w) (if w a b)))
(define (scar pair) (pair true))
(define (scdr pair) (pair false))
The scons procedure takes the two parts of the Pair as inputs, and produces as output a procedure. The output procedure takes one input, a selector that determines which of the two cells of the Pair to output. If the selector is true, the value of the if expression is the value of the first cell; if the selector is false, it is the value of the second cell. The scar and scdr procedures apply a procedure constructed by scons to either true (to select the first cell in scar) or false (to select the second cell in scdr).
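Following the evaluation rules (as Exercise 5.6 asks you to do by hand), (scar (scons 1 2)) applies the procedure produced by scons to the selector true, so the if expression evaluates to the first cell. Here is the behavior, written with #t and #f (Scheme's standard spellings of true and false):

```scheme
;; Pair implementation using only procedures, as in the text:
(define (scons a b) (lambda (w) (if w a b)))
(define (scar pair) (pair #t))   ; select the first cell
(define (scdr pair) (pair #f))   ; select the second cell

;; (scar (scons 1 2))
;; => ((lambda (w) (if w 1 2)) #t)  => (if #t 1 2)  => 1
(scar (scons 1 2))  ; => 1
(scdr (scons 1 2))  ; => 2
```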


Exercise 5.6. Convince yourself the definitions of scons, scar, and scdr above work as expected by following the evaluation rules to evaluate
(scar (scons 1 2))
Exercise 5.7. Show the corresponding definitions of tcar and tcdr that provide the pair selection behavior for a pair created using tcons defined as:
(define (tcons a b) (lambda (w) (if w b a)))

5.2.2 Triples to Octuples

Pairs are useful for representing data that is composed of two parts such as a calendar date (composed of a number and month), or a playing card (composed of a rank and suit). But, what if we want to represent data composed of more than two parts such as a date (composed of a number, month, and year) or a poker hand consisting of five playing cards? For more complex data structures, we need data structures that have more than two components.
A triple has three components. Here is one way to define a triple datatype:
(define (make-triple a b c)
(lambda (w) (if (= w 0) a (if (= w 1) b c))))
(define (triple-first t) (t 0))
(define (triple-second t) (t 1))
(define (triple-third t) (t 2))
Since a triple has three components we need three different selector values.
Another way to make a triple would be to combine two Pairs. We do this by making a Pair whose second cell is itself a Pair:
(define (make-triple a b c) (cons a (cons b c)))
(define (triple-first t) (car t))
(define (triple-second t) (car (cdr t)))
(define (triple-third t) (cdr (cdr t)))
Similarly, we can define a quadruple as a Pair whose second cell is a triple:
(define (make-quad a b c d) (cons a (make-triple b c d)))
(define (quad-first q) (car q))
(define (quad-second q) (triple-first (cdr q)))
(define (quad-third q) (triple-second (cdr q)))
(define (quad-fourth q) (triple-third (cdr q)))
We could continue in this manner defining increasingly large tuples.
A triple is a Pair whose second cell is a Pair.
A quadruple is a Pair whose second cell is a triple.
A quintuple is a Pair whose second cell is a quadruple.
···
An n + 1-uple is a Pair whose second cell is an n-uple.
Building from the simple Pair, we can construct tuples containing any number of components.
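For instance, continuing the pattern one more step (a partial sketch toward Exercise 5.8; the quint- names are invented here), a quintuple is a Pair whose second cell is a quadruple, and its elements are reached by walking down the chain of cdrs:

```scheme
;; Each n-uple is a Pair whose second cell is an (n-1)-uple.
(define (make-triple a b c) (cons a (cons b c)))
(define (make-quad a b c d) (cons a (make-triple b c d)))
(define (make-quint a b c d e) (cons a (make-quad b c d e)))

(define (quint-first q) (car q))
(define (quint-third q) (car (cdr (cdr q))))       ; two cdrs, then car
(define (quint-fifth q) (cdr (cdr (cdr (cdr q))))) ; the final cell is a scalar
```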


Exercise 5.8. Define a procedure that constructs a quintuple and procedures for selecting the five elements of a quintuple.
Exercise 5.9. Another way of thinking of a triple is as a Pair where the first cell is a Pair and the second cell is a scalar. Provide definitions of make-triple, triple-first, triple-second, and triple-third for this construct.

5.3 Lists

In the previous section, we saw how to construct arbitrarily large tuples from
Pairs. This way of managing data is not very satisfying since it requires defining different procedures for constructing and accessing elements of every length tuple. For many applications, we want to be able to manage data of any length such as all the items in a web store, or all the bids on a given item. Since the number of components in these objects can change, it would be very painful to need to define a new tuple type every time an item is added. We need a data type that can hold any number of items.
This definition almost provides what we need:
An any-uple is a Pair whose second cell is an any-uple.
This seems to allow an any-uple to contain any number of elements. The problem is we have no stopping point. With only the definition above, there is no way to construct an any-uple without already having one.
The situation is similar to defining MoreDigits as zero or more digits in Chapter 2, defining MoreExpressions in the Scheme grammar in Chapter 3 as zero or more Expressions, and recursive composition in Chapter 4.
Recall the grammar rules for MoreExpressions:
MoreExpressions ::⇒ Expression MoreExpressions
MoreExpressions ::⇒
The rule for constructing an any-uple is analogous to the first MoreExpression replacement rule. To allow an any-uple to be constructed, we also need a construction rule similar to the second rule, where MoreExpression can be replaced with nothing. Since it is hard to type and read nothing in a program, Scheme has a name for this value: null.


DrRacket will print out the value of null as (). It is also known as the empty list, since it represents the List containing no elements. The built-in procedure null? takes one input parameter and evaluates to true if and only if the value of that parameter is null.
Using null, we can now define a List:
A List is either (1) null or (2) a Pair whose second cell is a List.
Symbolically, we define a List as:
List ::⇒ null
List ::⇒ (cons Value List )


These two rules define a List as a data structure that can contain any number of elements. Starting from null, we can create Lists of any length:

null evaluates to a List containing no elements.
(cons 1 null) evaluates to a List containing one element.
(cons 1 (cons 2 null)) evaluates to a List containing two elements.
(cons 1 (cons 2 (cons 3 null))) evaluates to a 3-element List.






Scheme provides a convenient procedure, list, for constructing a List. The list procedure takes zero or more inputs, and evaluates to a List containing those inputs in order. The following expressions are equivalent to the corresponding expressions above: (list), (list 1), (list 1 2), and (list 1 2 3).
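The equivalence between the list shorthand and nested cons applications can be checked directly with the built-in equal? procedure (null is written '() in standard Scheme):

```scheme
;; list is shorthand for nested cons applications ending in null:
(equal? (list 1 2 3)
        (cons 1 (cons 2 (cons 3 '()))))   ; => #t
```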
Lists are just a collection of Pairs, so we can draw a List using the same box and arrow notation we used to draw structures created with Pairs. Here is the structure resulting from (list 1 2 3):

[ 1 | ∙ ]──▶[ 2 | ∙ ]──▶[ 3 | / ]

There are three Pairs in the List, the second cell of each Pair is a List. For the third Pair, the second cell is the List null, which we draw as a slash through the final cell in the diagram.
Table 5.1 summarizes some of the built-in procedures for manipulating Pairs and Lists.
Exercise 5.10. For each of the following expressions, explain whether or not the expression evaluates to a List. Check your answers with a Scheme interpreter by using the list? procedure.
a. null
b. (cons 1 2)
c. (cons null null)
d. (cons (cons (cons 1 2) 3) null)
e. (cdr (cons 1 (cons 2 (cons null null))))
f. (cons (list 1 2 3) 4)
Procedure   Type                           Output
cons        Value × Value → Pair           a Pair consisting of the two inputs
car         Pair → Value                   the first cell of the input Pair
cdr         Pair → Value                   the second cell of the input Pair
list        zero or more Values → List     a List containing the inputs
null?       Value → Boolean                true if the input is null, otherwise false
pair?       Value → Boolean                true if the input is a Pair, otherwise false
list?       Value → Boolean                true if the input is a List, otherwise false

Table 5.1. Selected Built-In Scheme Procedures for Lists and Pairs.

5.4 List Procedures

Since the List data structure is defined recursively, it is natural to define recursive procedures to examine and manipulate lists. Whereas recursive procedures on Number inputs usually use 0 as the base case, for lists the most common base case is null. With numbers, we make progress by subtracting 1; with lists, we make progress by using cdr to reduce the length of the input List by one element for each recursive application. This means we often break problems involving Lists into figuring out what to do with the first element of the List and the result of applying the recursive procedure to the rest of the List.
We can specialize our general recursive problem solving strategy from Chapter 4 for procedures involving lists:
1. Be very optimistic! Since lists themselves are recursive data structures, most problems involving lists can be solved with recursive procedures.
2. Think of the simplest version of the problem, something you can already solve. This is the base case. For lists, this is usually the empty list.
3. Consider how you would solve a big version of the problem by using the result for a slightly smaller version of the problem. This is the recursive case. For lists, the smaller version of the problem is usually the rest (cdr) of the List.
4. Combine the base case and the recursive case to solve the problem.
Next we consider procedures that examine lists by walking through their elements and producing a scalar value. Section 5.4.2 generalizes these procedures.
In Section 5.4.3, we explore procedures that output lists.

5.4.1 Procedures that Examine Lists

All of the example procedures in this section take a single List as input and produce a scalar value that depends on the elements of the List as output. These procedures have base cases where the List is empty, and recursive cases that apply the recursive procedure to the cdr of the input List.
Example 5.1: Length
How many elements are in a given List?3 Our standard recursive problem solving technique is to “Think of the simplest version of the problem, something you can already solve.” For this procedure, the simplest version of the problem is when the input is the empty list, null. We know the length of the empty list is
0. So, the base case test is (null? p) and the output for the base case is 0.
For the recursive case, we need to consider the structure of all lists other than null. Recall from our definition that a List is either null or (cons Value List ). The base case handles the null list; the recursive case must handle a List that is a Pair of an element and a List. The length of this List is one more than the length of the List that is the cdr of the Pair.
3 Scheme provides a built-in procedure length that takes a List as its input and outputs the number of elements in the List. Here, we will define our own list-length procedure that does this (without using the built-in length procedure). As with many other examples and exercises in this chapter, it is instructive to define our own versions of some of the built-in list procedures.


(define (list-length p)
  (if (null? p)
      0
      (+ 1 (list-length (cdr p)))))
Here are a few example applications of our list-length procedure:
> (list-length null)
0

> (list-length (cons 0 null))
1

> (list-length (list 1 2 3 4))
4

Example 5.2: List Sums and Products
First, we define a procedure that takes a List of numbers as input and produces as output the sum of the numbers in the input List. As usual, the base case is when the input is null: the sum of an empty list is 0. For the recursive case, we need to add the value of the first number in the List to the sum of the rest of the numbers in the List.
(define (list-sum p)
(if (null? p) 0 (+ (car p) (list-sum (cdr p)))))
We can define list-product similarly, using ∗ in place of +. The base case result cannot be 0, though, since then the final result would always be 0 (any number multiplied by 0 is 0). Instead, we follow the mathematical convention that the product of the empty list is 1.
(define (list-product p)
(if (null? p) 1 (∗ (car p) (list-product (cdr p)))))
Exercise 5.11. Define a procedure is-list? that takes one input and outputs true if the input is a List, and false otherwise. Your procedure should behave identically to the built-in list? procedure, but you should not use list? in your definition.
Exercise 5.12. Define a procedure list-max that takes a List of non-negative numbers as its input and produces as its result the value of the greatest element in the List (or 0 if there are no elements in the input List). For example, (list-max
(list 1 1 2 0)) should evaluate to 2.

5.4.2

Generic Accumulators

The list-length, list-sum, and list-product procedures all have very similar structures. The base case is when the input is the empty list, and the recursive case involves doing something with the first element of the List and recursively calling the procedure with the rest of the List:
(define (Recursive-Procedure p)
(if (null? p)
Base-Case-Result
(Accumulator-Function (car p) (Recursive-Procedure (cdr p)))))
We can define a generic accumulator procedure for lists by making the base case result and accumulator function inputs:


(define (list-accumulate f base p)
(if (null? p) base (f (car p) (list-accumulate f base (cdr p)))))
We can use list-accumulate to define list-sum and list-product:
(define (list-sum p) (list-accumulate + 0 p))
(define (list-product p) (list-accumulate ∗ 1 p))
Defining the list-length procedure is a bit less natural. The recursive case in the original list-length procedure is (+ 1 (list-length (cdr p))); it does not use the value of the first element of the List. But, list-accumulate is defined to take a procedure that takes two inputs—the first input is the first element of the List; the second input is the result of applying list-accumulate to the rest of the List.
We should follow our usual strategy: be optimistic! As in our recursive definitions, assume optimistically that the value of the second input is the length of the rest of the List. Hence, we need to pass in a procedure that takes two inputs, ignores the first input, and outputs one more than the value of the second input:
(define (list-length p)
(list-accumulate (lambda (el length-rest) (+ 1 length-rest)) 0 p))
Exercise 5.13. Use list-accumulate to define list-max (from Exercise 5.12).
Exercise 5.14. [ ] Use list-accumulate to define is-list? (from Exercise 5.11).
Example 5.3: Accessing List Elements
The built-in car procedure provides a way to get the first element of a list, but what if we want to get the third element? We can do this by taking the cdr twice to eliminate the first two elements, and then using car to get the third:
(car (cdr (cdr p)))
We want a more general procedure that can access any selected list element. It takes two inputs: a List, and an index Number that identifies the element. If we start counting from 1 (it is often more natural to start from 0), then the base case is when the index is 1 and the output should be the first element of the List:
(if (= n 1) (car p) . . .)
For the recursive case, we make progress by eliminating the first element of the list. We also need to adjust the index: since we have removed the first element of the list, the index should be reduced by one. For example, instead of wanting the third element of the original list, we now want the second element of the cdr of the original list.
(define (list-get-element p n)
  (if (= n 1)
      (car p)
      (list-get-element (cdr p) (- n 1))))
What happens if we apply list-get-element to an index that is larger than the size of the input List (for example, (list-get-element (list 1 2) 3))?


5.4. List Procedures

The first recursive call is (list-get-element (list 2) 2). The second recursive call is (list-get-element (list) 1). At this point, n is 1, so the base case is reached and (car p) is evaluated. But, p is the empty list (which is not a Pair), so an error results.
A better version of list-get-element would provide a meaningful error message when the requested element is out of range. We do this by adding an if expression that tests if the input List is null:
(define (list-get-element p n)
  (if (null? p)
      (error "Index out of range")
      (if (= n 1) (car p) (list-get-element (cdr p) (- n 1)))))
The built-in procedure error takes a String as input. The String datatype is a sequence of characters; we can create a String by surrounding characters with double quotes, as in the example. The error procedure terminates program execution with a message that displays the input value.

Checking explicitly for invalid inputs is known as defensive programming . Programming defensively helps avoid tricky to debug errors and makes it easier to understand what went wrong if there is an error.
Exercise 5.15. Define a procedure list-last-element that takes as input a List and outputs the last element of the input List. If the input List is empty, list-last-element should produce an error.
Exercise 5.16. Define a procedure list-ordered? that takes two inputs, a test procedure and a List. It outputs true if all the elements of the List are ordered according to the test procedure. For example, (list-ordered? < (list 1 2 3)) evaluates to true, and (list-ordered? < (list 1 2 3 2)) evaluates to false. Hint: think about what the output should be for the empty list.

5.4.3 Procedures that Construct Lists

The procedures in this section take values (including Lists) as input, and produce a new List as output. As before, the empty list is typically the base case.
Since we are producing a List as output, the result for the base case is also usually null. The recursive case will use cons to construct a List combining the first element with the result of the recursive application on the rest of the List.
Example 5.4: Mapping
One common task for manipulating a List is to produce a new List that is the result of applying some procedure to every element in the input List.
For the base case, applying any procedure to every element of the empty list produces the empty list. For the recursive case, we use cons to construct a List.
The first element is the result of applying the mapping procedure to the first element of the input List. The rest of the output List is the result of recursively mapping the rest of the input List.
Here is a procedure that constructs a List that contains the square of every element of the input List:

(define (list-square p)
  (if (null? p) null
      (cons (square (car p))
            (list-square (cdr p)))))

We generalize this by making the procedure which is applied to each element an input. The procedure list-map takes a procedure as its first input and a List as its second input. It outputs a List whose elements are the results of applying the input procedure to each element of the input List.4
(define (list-map f p)
  (if (null? p) null
      (cons (f (car p))
            (list-map f (cdr p)))))
We can use list-map to define square-all:
(define (square-all p) (list-map square p))
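For example, assuming the square procedure from earlier chapters (defined as (define (square x) (* x x))):

```scheme
(square-all (list 1 2 3))
; evaluates to (1 4 9)
```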
Exercise 5.17. Define a procedure list-increment that takes as input a List of numbers, and produces as output a List containing each element in the input List incremented by one. For example, (list-increment (list 1 2 3)) evaluates to (2 3 4).

Exercise 5.18. Use list-map and list-sum to define list-length:
(define (list-length p) (list-sum (list-map ______ p)))

Example 5.5: Filtering
Consider defining a procedure that takes as input a List of numbers, and evaluates to a List of all the non-negative numbers in the input. For example, (list-filter-negative (list 1 -3 -4 5 -2 0)) evaluates to (1 5 0).
First, consider the base case when the input is the empty list. If we filter the negative numbers from the empty list, the result is an empty list. So, for the base case, the result should be null.
In the recursive case, we need to determine whether or not the first element should be included in the output. If it should be included, we construct a new List consisting of the first element followed by the result of filtering the remaining elements in the List. If it should not be included, we skip the first element and the result is the result of filtering the remaining elements in the List.
(define (list-filter-negative p)
  (if (null? p) null
      (if (>= (car p) 0)
          (cons (car p) (list-filter-negative (cdr p)))
          (list-filter-negative (cdr p)))))
4 Scheme provides a built-in map procedure. It behaves like this one when passed a procedure and a single List as inputs, but can also work on more than one List input at a time.

Similarly to list-map, we can generalize our filter by making the test procedure an input, so we can use any predicate to determine which elements to include in the output List.5
(define (list-filter test p)
  (if (null? p) null
      (if (test (car p))
          (cons (car p) (list-filter test (cdr p)))
          (list-filter test (cdr p)))))
Using the list-filter procedure, we can define list-filter-negative as:
(define (list-filter-negative p) (list-filter (lambda (x) (>= x 0)) p))
We could also define the list-filter procedure using the list-accumulate procedure from Section 5.4.2:
(define (list-filter test p)
  (list-accumulate
   (lambda (el rest) (if (test el) (cons el rest) rest)) null p))
Exercise 5.19. Define a procedure list-filter-even that takes as input a List of numbers and produces as output a List consisting of all the even elements of the input List.
Exercise 5.20. Define a procedure list-remove that takes two inputs: a test procedure and a List. As output, it produces a List that is a copy of the input List with all of the elements for which the test procedure evaluates to true removed. For example, (list-remove (lambda (x) (= x 0)) (list 0 1 2 3)) should evaluate to the List (1 2 3).
Exercise 5.21. [ ] Define a procedure list-unique-elements that takes as input a List and produces as output a List containing the unique elements of the input List. The output List should contain the elements in the same order as the input List, but should only contain the first appearance of each value in the input List.
Example 5.6: Append
The list-append procedure takes as input two lists and produces as output a List consisting of the elements of the first List followed by the elements of the second List.6 For the base case, when the first List is empty, the result of appending the lists should just be the second List. When the first List is non-empty, we can produce the result by cons-ing the first element of the first List with the result of appending the rest of the first List and the second List.
(define (list-append p q)
  (if (null? p) q
      (cons (car p) (list-append (cdr p) q))))

5 Scheme provides a built-in function filter that behaves like our list-filter procedure.
6 There is a built-in procedure append that does this. The built-in append takes any number of Lists as inputs, and appends them all into one List.

Example 5.7: Reverse

The list-reverse procedure takes a List as input and produces as output a List containing the elements of the input List in reverse order.7 For example, (list-reverse (list 1 2 3)) evaluates to the List (3 2 1). As usual, we consider the base case where the input List is null first. The reverse of the empty list is the empty list. To reverse a non-empty List, we should put the first element of the List at the end of the result of reversing the rest of the List.
The tricky part is putting the first element at the end, since cons only puts elements at the beginning of a List. We can use the list-append procedure defined in the previous example to put a List at the end of another List. To make this work, we need to turn the element at the front of the List into a List containing just that element. We do this using (list (car p)).
(define (list-reverse p)
  (if (null? p) null
      (list-append (list-reverse (cdr p)) (list (car p)))))
Exercise 5.22. Define the list-reverse procedure using list-accumulate.
Example 5.8: Intsto
For our final example, we define the intsto procedure that constructs a List containing the whole numbers between 1 and the input parameter value. For example, (intsto 5) evaluates to the List (1 2 3 4 5).
This example combines ideas from the previous chapter on creating recursive definitions for problems involving numbers, and from this chapter on lists. Since the input parameter is not a List, the base case is not the usual list base case when the input is null. Instead, we use the input value 0 as the base case. The result for input 0 is the empty list. For higher values, the output is the result of putting the input value at the end of the List of numbers up to the input value minus one.
A first attempt that doesn’t quite work is:
(define (revintsto n)
  (if (= n 0) null
      (cons n (revintsto (- n 1)))))
The problem with this solution is that it is cons-ing the higher number to the front of the result, instead of at the end. Hence, it produces the List of numbers in descending order: (revintsto 5) evaluates to (5 4 3 2 1).
One solution is to reverse the result by composing list-reverse with revintsto:
(define (intsto n) (list-reverse (revintsto n)))
Equivalently, we can use the fcompose procedure from Section 4.2:
(define intsto (fcompose list-reverse revintsto))
Alternatively, we could use list-append to put the high number directly at the end of the List. Since the second operand to list-append must be a List, we use (list n) to make a singleton List containing the value as we did for list-reverse.
7 The built-in procedure reverse does this.


(define (intsto n)
  (if (= n 0) null
      (list-append (intsto (- n 1)) (list n))))
Although all of these procedures are functionally equivalent (for all valid inputs, each function produces exactly the same output), the amount of computing work (and hence the time they take to execute) varies across the implementations. We consider the problem of estimating the running-times of different procedures in Part II.
Exercise 5.23. Define factorial using intsto.

5.5 Lists of Lists

The elements of a List can be any datatype, including, of course, other Lists. In defining procedures that operate on Lists of Lists, we often use more than one recursive call when we need to go inside the inner Lists.
Example 5.9: Summing Nested Lists
Consider the problem of summing all the numbers in a List of Lists. For example, (nested-list-sum (list (list 1 2 3) (list 4 5 6))) should evaluate to 21. We can define nested-list-sum using list-sum on each List.
(define (nested-list-sum p)
  (if (null? p) 0
      (+ (list-sum (car p))
         (nested-list-sum (cdr p)))))
This works when we know the input is a List of Lists. But, what if the input can contain arbitrarily deeply nested Lists?
To handle this, we need to recursively sum the inner Lists. Each element in our deep List is either a List or a Number. If it is a List, we should add the value of the sum of all elements in the List to the result for the rest of the List. If it is a Number, we should just add the value of the Number to the result for the rest of the List. So, our procedure involves two recursive calls: one for the first element in the List when it is a List, and the other for the rest of the List.
(define (deep-list-sum p)
  (if (null? p) 0
      (+ (if (list? (car p))
             (deep-list-sum (car p))
             (car p))
         (deep-list-sum (cdr p)))))
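For example, deep-list-sum handles arbitrarily nested inputs:

```scheme
(deep-list-sum (list 1 (list 2 3) (list (list 4 5) 6)))
; evaluates to 21
```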
Example 5.10: Flattening Lists
Another way to compute the deep list sum would be to first flatten the List, and then use the list-sum procedure.
Flattening a nested list takes a List of Lists and evaluates to a List containing the elements of the inner Lists. We can define list-flatten by using list-append to append all the inner Lists together.

(define (list-flatten p)
  (if (null? p) null
      (list-append (car p) (list-flatten (cdr p)))))

This flattens a List of Lists into a single List. To completely flatten a deeply nested List, we use multiple recursive calls as we did with deep-list-sum:
(define (deep-list-flatten p)
  (if (null? p) null
      (list-append (if (list? (car p))
                       (deep-list-flatten (car p))
                       (list (car p)))
                   (deep-list-flatten (cdr p)))))
Now we can define deep-list-sum as:
(define deep-list-sum (fcompose deep-list-flatten list-sum))
Exercise 5.24. [ ] Define a procedure deep-list-map that behaves similarly to list-map but on deeply nested lists. It should take two parameters, a mapping procedure, and a List (that may contain deeply nested Lists as elements), and output a List with the same structure as the input List with each value mapped using the mapping procedure.
Exercise 5.25. [ ] Define a procedure deep-list-filter that behaves similarly to list-filter but on deeply nested lists.
Exploration 5.1: Pascal’s Triangle
Pascal’s Triangle (named for Blaise Pascal, although known to many others before him) is shown below:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
···

Each number in the triangle is the sum of the two numbers immediately above and to the left and right of it. The numbers in Pascal's Triangle are the coefficients in a binomial expansion. The numbers of the nth row (where the rows are numbered starting from 0) are the coefficients of the binomial expansion of (x + y)^n. For example, (x + y)^2 = x^2 + 2xy + y^2, so the coefficients are 1 2 1, matching the third row in the triangle; from the fifth row, (x + y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4. The values in the triangle also match the number of ways to choose k elements from a set of size n (see Exercise 4.5): the kth number (counting from 0) on the nth row of the triangle gives the number of ways to choose k elements from a set of size n. For example, the third number (k = 2) on the fifth (n = 4) row is 6, so there are 6 ways to choose 2 items from a set of size 4.
The goal of this exploration is to define a procedure, pascals-triangle, to produce Pascal's Triangle. The input to your procedure should be the number of rows; the output should be a list, where each element of the list is a list of the numbers on that row of Pascal's Triangle. For example, (pascals-triangle 0) should produce ((1)) (a list containing one element which is a list containing the number 1), and (pascals-triangle 4) should produce ((1) (1 1) (1 2 1) (1 3 3 1) (1 4 6 4 1)).
Ambitious readers should attempt to define pascals-triangle themselves; the sub-parts below provide some hints for one way to define it.
a. First, define a procedure expand-row that expands one row in the triangle. It takes a List of numbers as input, and as output produces a List with one more element than the input list. The first number in the output List should be the first number in the input List; the last number in the output List should be the last number in the input List. Every other number in the output List is the sum of two numbers in the input List. The nth number in the output List is the sum of the (n − 1)th and nth numbers in the input List. For example, (expand-row (list 1)) evaluates to (1 1); (expand-row (list 1 1)) evaluates to (1 2 1); and (expand-row (list 1 4 6 4 1)) evaluates to (1 5 10 10 5 1). This is trickier than the recursive list procedures we have seen so far since the base case is not the empty list. It also needs to deal with the first element specially. To define expand-row, it will be helpful to divide it into two procedures, one that deals with the first element of the list, and one that produces the rest of the list:
(define (expand-row p) (cons (car p) (expand-row-rest p)))
b. Define a procedure pascals-triangle-row that takes one input, n, and outputs the nth row of Pascal’s Triangle. For example, (pascals-triangle-row 0) evaluates to (1) and (pascals-triangle-row 3) produces (1 3 3 1).
c. Finally, define pascals-triangle with the behavior described above.

5.6 Data Abstraction

The mechanisms we have for constructing and manipulating complex data structures are valuable because they enable us to think about programs closer to the level of the problem we are solving than the low level of how data is stored and manipulated in the computer. Our goal is to hide unnecessary details about how data is represented so we can focus on the important aspects of what the data means and what we need to do with it to solve our problem. The technique of hiding how data is represented from how it is used is known as data abstraction.
The datatypes we have seen so far are not very abstract. We have datatypes for representing Pairs, triples, and Lists, but we want datatypes for representing objects closer to the level of the problem we want to solve. A good data abstraction is abstract enough to be used without worrying about details like which cell of the Pair contains which datum and how to access the different elements of a List.
Instead, we want to define procedures with meaningful names that manipulate the relevant parts of our data.
The rest of this section is an extended example that illustrates how to solve problems by first identifying the objects we need to model the problem, and then implementing data abstractions that represent those objects. Once the appropriate data abstractions are designed and implemented, the solution to the problem often follows readily. This example also uses many of the list procedures defined earlier in this chapter.
Exploration 5.2: Pegboard Puzzle
For this exploration, we develop a program to solve the infamous pegboard puzzle, often found tormenting unsuspecting diners at pancake restaurants. The standard puzzle is a one-player game played on a triangular board with fifteen holes with pegs in all of the holes except one.
The goal is to remove all but one of the pegs by jumping pegs over one another.
A peg may jump over an adjacent peg only when there is a free hole on the other side of the peg. The jumped peg is removed. The game ends when there are no possible moves. If there is only one peg remaining, the player wins (according to the Cracker Barrel version of the game, “Leave only one—you’re genius”). If more than one peg remains, the player loses (“Leave four or more’n you’re just plain ‘eg-no-ra-moose’.”).

Figure 5.1. Pegboard Puzzle.
The blue peg can jump the red peg as shown, removing the red peg. The resulting position is a winning position.

Our goal is to develop a program that finds a winning solution to the pegboard game from any winnable starting position. We use a brute force approach: try all possible moves until we find one that works. Brute force solutions only work on small-size problems. Because they have to try all possibilities they are often too slow for solving large problems, even on the most powerful computers imaginable.8
The first thing to think about to solve a complex problem is what datatypes we need. We want datatypes that represent the things we need to model in our problem solution. For the pegboard game, we need to model the board with its pegs. We also need to model actions in the game like a move (jumping over a peg). The important thing about a datatype is what you can do with it. To design our board datatype we need to think about what we want to do with a board. In the physical pegboard game, the board holds the pegs. The important property we need to observe about the board is which holes on the board contain pegs.
For this, we need a way of identifying board positions. We define a datatype
8 The generalized pegboard puzzle is an example of a class of problems known as NP-Complete. This means it is not known whether or not any solution exists that is substantially better than the brute force solution, but it would be extraordinarily surprising (and of momentous significance!) to find one.


for representing positions first, then a datatype for representing moves, and a datatype for representing the board. Finally, we use these datatypes to define a procedure that finds a winning solution.
Position. We identify the board positions using row and column numbers:
(1 1)
(2 1) (2 2)
(3 1) (3 2) (3 3)
(4 1) (4 2) (4 3) (4 4)
(5 1) (5 2) (5 3) (5 4) (5 5)
A position has a row and a column, so we could just use a Pair to represent a position. This would work, but we prefer to have a more abstract datatype so we can think about a position’s row and column, rather than thinking that a position is a Pair and using the car and cdr procedures to extract the row and column from the position.
Our Position datatype should provide at least these operations:
make-position: Number × Number → Position
  Creates a Position representing the row and column given by the input numbers.
position-get-row: Position → Number
  Outputs the row number of the input Position.
position-get-column: Position → Number
  Outputs the column number of the input Position.


Since the Position needs to keep track of two numbers, a natural way to implement the Position datatype is to use a Pair. A more defensive implementation of the Position datatype uses a tagged list. With a tagged list, the first element of the list is a tag denoting the datatype it represents. All operations check that the tag is correct before proceeding. We can use any type to encode the list tag, but it is most convenient to use the built-in Symbol type. A Symbol is a quote (') followed by a sequence of characters. The important operation we can do with a Symbol is test whether it is an exact match for another symbol using the eq? procedure. We define the tagged list datatype, tlist, using the list-get-element procedure from Example 5.3:
(define (make-tlist tag p) (cons tag p))
(define (tlist-get-tag p) (car p))
(define (tlist-get-element tag p n)
  (if (eq? (tlist-get-tag p) tag)
      (list-get-element (cdr p) n)
      (error (format "Bad tag: ~a (expected ~a)"
                     (tlist-get-tag p) tag))))
The format procedure is a built-in procedure similar to the printf procedure described in Section 4.5.1. Instead of printing as a side effect, format produces a String. For example, (format "list: ~a number: ~a." (list 1 2 3) 123) evaluates to the String "list: (1 2 3) number: 123.".


This is an example of defensive programming. Using our tagged lists, if we accidentally attempt to use a value that is not a Position as a position, we will get a clear error message instead of a hard-to-debug error (or worse, an unnoticed incorrect result).
Using the tagged list, we define the Position datatype as:
(define (make-position row col) (make-tlist 'Position (list row col)))
(define (position-get-row posn) (tlist-get-element 'Position posn 1))
(define (position-get-column posn) (tlist-get-element 'Position posn 2))
Here are some example interactions with our Position datatype:

> (define pos (make-position 2 1))
> pos
(Position 2 1)
> (position-get-row pos)
2
> (position-get-row (list 1 2))
Bad tag: 1 (expected Position)

The last application produces an error since its input is not a Position.

Move. A move involves three positions: where the jumping peg starts, the position of the peg that is jumped and removed, and the landing position. One possibility would be to represent a move as a list of the three positions. A better option is to observe that once any two of the positions are known, the third position is determined. For example, if we know the starting position and the landing position, we know the jumped peg is at the position between them. Hence, we could represent a jump using just the starting and landing positions.
Another possibility is to represent a jump by storing the starting Position and the direction. This is also enough to determine the jumped and landing positions.
This approach avoids the difficulty of calculating jumped positions. To do it, we first design a Direction datatype for representing the possible move directions.
Directions have two components: the change in the column (we use 1 for right and −1 for left), and the change in the row (1 for down and −1 for up).
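To illustrate the first alternative, the jumped Position could be computed as the midpoint of the starting and landing Positions. The procedure position-between below is a hypothetical sketch, not part of the datatypes developed in this section; it uses the Position operations defined above:

```scheme
; Hypothetical sketch: compute the jumped Position as the midpoint of
; the starting and landing Positions of a jump.
(define (position-between start landing)
  (make-position
   (/ (+ (position-get-row start) (position-get-row landing)) 2)
   (/ (+ (position-get-column start) (position-get-column landing)) 2)))
```

For example, (position-between (make-position 5 1) (make-position 3 3)) evaluates to the Position (4 2).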
We implement the Direction datatype using a tagged list similarly to how we defined Position:
(define (make-direction right down)
  (make-tlist 'Direction (list right down)))
(define (direction-get-horizontal dir) (tlist-get-element 'Direction dir 1))
(define (direction-get-vertical dir) (tlist-get-element 'Direction dir 2))
The Move datatype is defined using the starting position and the jump direction:
(define (make-move start direction)
  (make-tlist 'Move (list start direction)))
(define (move-get-start move) (tlist-get-element 'Move move 1))
(define (move-get-direction move) (tlist-get-element 'Move move 2))
We also define procedures for getting the jumped and landing positions of a move. The jumped position is the result of moving one step in the move direction from the starting position. So, it will be useful to define a procedure that takes a Position and a Direction as input, and outputs a Position that is one step in the input Direction from the input Position.


(define (direction-step pos dir)
  (make-position
   (+ (position-get-row pos) (direction-get-vertical dir))
   (+ (position-get-column pos) (direction-get-horizontal dir))))
Using direction-step, we can implement procedures to get the jumped and landing positions.
(define (move-get-jumped move)
  (direction-step (move-get-start move) (move-get-direction move)))
(define (move-get-landing move)
  (direction-step (move-get-jumped move) (move-get-direction move)))
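For example, using the datatypes defined above, a move starting at position (5 1) in the up-and-right direction determines its jumped and landing positions:

```scheme
; A move starting at row 5, column 1, jumping up (down = -1) and
; right (right = 1):
(define example-move
  (make-move (make-position 5 1) (make-direction 1 -1)))
; (move-get-jumped example-move) evaluates to the Position (4 2)
; (move-get-landing example-move) evaluates to the Position (3 3)
```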
Board. The board datatype represents the current state of the board. It keeps track of which holes in the board contain pegs, and provides operations that model adding and removing pegs from the board:
make-board: Number → Board
  Outputs a board full of pegs with the input number of rows. (The standard physical board has 5 rows, but our datatype supports any number of rows.)
board-rows: Board → Number
  Outputs the number of rows in the input board.
board-valid-position?: Board × Position → Boolean
  Outputs true if the input Position corresponds to a position on the Board; otherwise, false.
board-is-winning?: Board → Boolean
  Outputs true if the Board represents a winning position (exactly one peg); otherwise, false.
board-contains-peg?: Board × Position → Boolean
  Outputs true if the hole at the input Position contains a peg; otherwise, false.
board-add-peg: Board × Position → Board
  Outputs a Board containing all the pegs of the input Board and one additional peg at the input Position. If the input Board already has a peg at the input Position, produces an error.
board-remove-peg: Board × Position → Board
  Outputs a Board containing all the pegs of the input Board except for the peg at the input Position. If the input Board does not have a peg at the input Position, produces an error.
The procedures for adding and removing pegs change the state of the board to reflect moves in the game, but nothing we have seen so far provides a means for changing the state of an existing object.9 So, instead of defining these operations to change the state of the board, they actually create a new board that differs from the input board by the one added or removed peg. These procedures take a Board and Position as inputs, and produce a Board as output.
There are lots of different ways we could represent the Board. One possibility is to keep a List of the Positions of the pegs on the board. Another possibility is to
keep a List of the Positions of the empty holes on the board. Yet another possibility is to keep a List of Lists, where each List corresponds to one row on the board. The elements in each of the Lists are Booleans representing whether or not there is a peg at that position. The good thing about data abstraction is we could pick any of these representations and change it to a different representation later (for example, if we needed a more efficient board implementation). As long as the procedures for implementing the Board are updated to work with the new representation, all the code that uses the board abstraction should continue to work correctly without any changes.
9 We will introduce mechanisms for changing state in Chapter 9. Allowing state to change breaks the substitution model of evaluation.
We choose the third option and represent a Board using a List of Lists where each element of the inner lists is a Boolean indicating whether or not the corresponding position contains a peg. So, make-board evaluates to a List of Lists, where each element of the List contains the row number of elements and all the inner elements are true (the initial board is completely full of pegs). First, we define a procedure make-list-of-constants that takes two inputs, a Number, n, and a Value, val. The output is a List of length n where each element has the value val.
(define (make-list-of-constants n val)
  (if (= n 0) null (cons val (make-list-of-constants (- n 1) val))))
To make the initial board, we use make-list-of-constants to make each row of the board. As usual, a recursive problem solving strategy works well: the simplest board is a board with zero rows (represented as the empty list); for each larger board, we add a row with the right number of elements.
The tricky part is putting the rows in order. This is similar to the problem we faced with intsto, and a similar solution using list-append works here:
(define (make-board rows)
  (if (= rows 0) null
      (list-append (make-board (- rows 1))
                   (list (make-list-of-constants rows true)))))
Evaluating (make-board 3) produces ((true) (true true) (true true true)).
The board-rows procedure takes a Board as input and outputs the number of rows on the board.
(define (board-rows board) (length board))
The board-valid-position? procedure indicates whether a Position is on the board. A position is valid if its row number is between 1 and the number of rows on the board, and its column number is between 1 and its row number.
(define (board-valid-position? board pos)
  (and (>= (position-get-row pos) 1) (>= (position-get-column pos) 1)
       (<= (position-get-row pos) (board-rows board))
       (<= (position-get-column pos) (position-get-row pos))))

> (time (car (list-append (intsto 1000) (intsto 100))))
cpu time: 609 real time: 609 gc time: 0
1
The two expressions evaluated are identical, but the reported time varies. Even on the same computer, the time needed to evaluate the same expression varies.
Many properties unrelated to our expression (such as where things happen to be stored in memory) impact the actual time needed for any particular evaluation. Hence, it is dangerous to draw conclusions about which procedure is faster based on a few timings.

There’s no sense in being precise when you don’t even know what you’re talking about. John von Neumann

Another limitation of this way of measuring cost is it only works if we wait for the evaluation to complete. If we try an evaluation and it has not finished after an hour, say, we have no idea if the actual time to finish the evaluation is sixty-one minutes or a quintillion years. We could wait another minute, but if it still hasn’t finished we don’t know if the execution time is sixty-two minutes or a quintillion years. The techniques we develop allow us to predict the time an evaluation needs without waiting for it to execute.
Finally, measuring the time of a particular application of a procedure does not provide much insight into how long it will take to apply the procedure to different inputs. We would like to understand how the evaluation time scales with the size of the inputs so we can understand which inputs the procedure can sensibly be applied to, and can choose the best procedure to use for different situations.
The next section introduces mathematical tools that are helpful for capturing how cost scales with input size.
Exercise 7.1. Suppose you are defining a procedure that needs to append two lists, one short list, short, and one very long list, long, but the order of elements in the resulting list does not matter. Is it better to use (list-append short long) or (list-append long short)? (A good answer will involve both experimental results and an analytical explanation.)


Chapter 7. Cost
Exploration 7.1: Multiplying Like Rabbits

Filius Bonacci was an Italian monk and mathematician in the 12th century. He published a book, Liber Abbaci, on how to calculate with decimal numbers that introduced Hindu-Arabic numbers to Europe (replacing Roman numerals) along with many of the algorithms for doing arithmetic we learn in elementary school.
It also included the problem for which Fibonacci numbers are named:2
A pair of newly-born male and female rabbits are put in a field. Rabbits mate at the age of one month and after that procreate every month, so the female rabbit produces a new pair of rabbits at the end of its second month.
Assume rabbits never die and that each female rabbit produces one new pair (one male, one female) every month from her second month on. How many pairs will there be in one year?
We can define a function that gives the number of pairs of rabbits at the beginning of the nth month as:

Fibonacci(n) =
    1                                        if n = 1
    1                                        if n = 2
    Fibonacci(n − 1) + Fibonacci(n − 2)      if n > 2
The third case follows from Bonacci’s assumptions: all the rabbits alive at the beginning of the previous month are still alive (the Fibonacci (n − 1) term), and all the rabbits that are at least two months old reproduce (the Fibonacci (n − 2) term). The sequence produced is known as the Fibonacci sequence:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, . . .
After the first two 1s, each number in the sequence is the sum of the previous two numbers. Fibonacci numbers occur frequently in nature, such as the arrangement of florets in the sunflower (34 spirals in one direction and 55 in the other) or the number of petals in common plants (typically 1, 2, 3, 5, 8, 13, 21, or
34), hence the rarity of the four-leaf clover.
Translating the definition of the Fibonacci function into a Scheme procedure is straightforward; we combine the two base cases using the or special form:
(define (fibo n)
(if (or (= n 1) (= n 2)) 1
(+ (fibo (− n 1)) (fibo (− n 2)))))
Applying fibo to small inputs works fine:

> (time (fibo 10))
cpu time: 0 real time: 0 gc time: 0
55
> (time (fibo 30))
cpu time: 2156 real time: 2187 gc time: 0
832040
2 Although the sequence is named for Bonacci, it was probably not invented by him. The sequence was already known to Indian mathematicians with whom Bonacci studied.


7.1. Empirical Measurements

But when we try to determine the number of rabbits in five years by computing
(fibo 60), our interpreter just hangs without producing a value.
The fibo procedure is defined in a way that guarantees it eventually completes when applied to a positive whole number: each recursive call reduces the input by 1 or 2, so both recursive calls get closer to the base cases. Hence, we always make progress and must eventually reach a base case, unwind the recursive applications, and produce a value. To understand why the evaluation of
(fibo 60) did not finish in our interpreter, we need to consider how much work is required to evaluate the expression.
To evaluate (fibo 60), the interpreter follows the if expressions to the recursive case, where it needs to evaluate (+ (fibo 59) (fibo 58)). To evaluate (fibo 59), it needs to evaluate (fibo 58) again and also evaluate (fibo 57). To evaluate (fibo 58)
(which needs to be done twice), it needs to evaluate (fibo 57) and (fibo 56). So, there is one evaluation of (fibo 60), one evaluation of (fibo 59), two evaluations of (fibo 58), and three evaluations of (fibo 57).
The number of leaf applications of the fibo procedure (the evaluations of (fibo 1) or (fibo 2)) is itself the Fibonacci sequence! To understand why, consider the evaluation tree for (fibo 5) shown in Figure 7.1. The only direct number values are the 1 values that result from evaluations of either (fibo 1) or (fibo 2). Hence, the number of 1 values must equal the value of the final result, which just sums all these numbers. For (fibo 5), there are 5 leaf applications, and 4 more inner applications, for 9 (= 2 · Fibonacci(5) − 1) total recursive applications. In general, evaluating (fibo n) requires 2 · Fibonacci(n) − 1 applications of fibo, so evaluating (fibo 60) requires more than three trillion applications of fibo!
(fibo 5)
├─ (fibo 4)
│  ├─ (fibo 3)
│  │  ├─ (fibo 2) → 1
│  │  └─ (fibo 1) → 1
│  └─ (fibo 2) → 1
└─ (fibo 3)
   ├─ (fibo 2) → 1
   └─ (fibo 1) → 1

Figure 7.1. Evaluation of fibo procedure.
Although our fibo definition is correct, it is ridiculously inefficient and only finishes for input numbers below about 40. It involves a tremendous amount of duplicated work: for the (fibo 60) example, there are two evaluations of (fibo 58) and over a trillion evaluations of (fibo 1) and (fibo 2).
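The duplication and the application counts can be checked mechanically. A sketch in Python (the book's code is Scheme; the Python names are mine) that returns both the Fibonacci value and the number of recursive applications:

```python
def fibo_counting(n):
    # Returns (Fibonacci(n), number of applications of the procedure).
    if n == 1 or n == 2:
        return 1, 1
    value1, count1 = fibo_counting(n - 1)
    value2, count2 = fibo_counting(n - 2)
    # One application at this node, plus all applications in the subtrees.
    return value1 + value2, 1 + count1 + count2
```

The leaf count equals the result itself, and the total application count works out to 2 · Fibonacci(n) − 1, which grows at the same ruinous rate as the result.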
We can avoid this duplicated effort by building up to the answer starting from the base cases. This is more like the way a human would determine the numbers in the Fibonacci sequence: we find the next number by adding the previous two numbers, and stop once we have reached the number we want.
The fast-fibo procedure computes the nth Fibonacci number, but avoids the duplicate effort by computing the results building up from the first two Fibonacci numbers, instead of working backwards.
(define (fast-fibo n)
  (define (fibo-iter a b left)
    (if (<= left 0) b (fibo-iter b (+ a b) (− left 1))))
  (fibo-iter 1 1 (− n 2)))

The value of Fibonacci(n + 2) is more than double the value of Fibonacci(n), since Fibonacci(n + 2) = Fibonacci(n + 1) + Fibonacci(n) and Fibonacci(n + 1) > Fibonacci(n). The rate of increase is multiplicative, and must be at least a factor of √2 ≈ 1.414 (since increasing the input by one twice more than doubles the value). (In fact, the rate of increase is a factor of φ = (1 + √5)/2 ≈ 1.618, also known as the “golden ratio”. This is a rather remarkable result, but explaining why is beyond the scope of this book.)
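The build-up strategy of fast-fibo, transliterated into Python as a quick cross-check of the values quoted in this section (fast_fibo is my naming, mirroring the Scheme procedure):

```python
def fast_fibo(n):
    # Start from the first two Fibonacci numbers and slide forward,
    # so each number in the sequence is computed exactly once.
    a, b = 1, 1  # Fibonacci(1) and Fibonacci(2)
    for _ in range(n - 2):
        a, b = b, a + b
    return b if n >= 2 else a

# Consecutive ratios approach the golden ratio:
ratio = fast_fibo(31) / fast_fibo(30)  # ≈ φ ≈ 1.618
```

The loop runs n − 2 times, so the work grows linearly with n instead of exploding like the doubly recursive version.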
This is much faster than the growth rate of n, which increases by one when we increase n by one. So, n is in the set O(Fibonacci (n)), but Fibonacci (n) is not in the set O(n).
Some of the example functions are plotted in Figure 7.3. The O notation reveals the asymptotic behavior of functions. The functions plotted are the same in both graphs, but the scale of the horizontal axis is different. In the first graph, the rightmost value of n² is greatest; for higher input values, the value of Fibonacci(n) is greatest. In the second graph, the values of Fibonacci(n) for input values up to 20 are so large that the other functions appear as nearly flat lines on the graph.

[Figure 7.3. Orders of Growth. Plots of Fibonacci(n), n², and 3n; the left graph shows inputs up to n = 10, the right graph up to n = 20.]
Definition of O. The function g is a member of the set O( f ) if and only if there exist positive constants c and n0 such that, for all values n ≥ n0, g(n) ≤ cf(n).
We can show g is in O( f ) using the definition of O( f ) by choosing positive constants for the values of c and n0, and showing that the property g(n) ≤ cf(n) holds for all values n ≥ n0. To show g is not in O( f ), we need to explain how, for any choices of c and n0, we can find values of n that are greater than n0 such that g(n) ≤ cf(n) does not hold.
Example 7.1: O Examples
We now show the claimed properties are true using the formal definition.

n − 7 is in O(n + 12)
Choose c = 1 and n0 = 1. Then, we need to show n − 7 ≤ 1(n + 12) for all values n ≥ 1. This is true, since n − 7 < n + 12 for all values n.

n + 12 is in O(n − 7)
Choose c = 2 and n0 = 26. Then, we need to show n + 12 ≤ 2(n − 7) for all values n ≥ 26. The inequality simplifies to n + 12 ≤ 2n − 14, which simplifies to 26 ≤ n. This is trivially true for all values n ≥ 26.

2n is in O(3n)
Choose c = 1 and n0 = 1. Then, 2n ≤ 3n for all values n ≥ 1.

3n is in O(2n)
Choose c = 2 and n0 = 1. Then, 3n ≤ 2(2n) simplifies to 3n ≤ 4n, which is true for all values n ≥ 1.

n is in O(n²)
Choose c = 1 and n0 = 1. Then n ≤ n² for all values n ≥ 1.

n² is not in O(n)
We need to show that no matter what values are chosen for c and n0, there are values of n ≥ n0 such that the inequality n² ≤ cn does not hold. For any value of c, we can make n² > cn by choosing n greater than both n0 and c.

n is in O(Fibonacci(n))
Choose c = 1 and n0 = 5. Then n ≤ Fibonacci(n) for all values n ≥ n0.

Fibonacci(n) is not in O(n − 2)
No matter what values are chosen for c and n0, there are values of n ≥ n0 such that Fibonacci(n) > c(n − 2). Since c(n − 2) < cn, it is enough to find values of n with Fibonacci(n) > cn. We know Fibonacci(12) = 144, and, from the discussion above, that Fibonacci(n + 2) > 2 · Fibonacci(n). This means, for n > 12, we know Fibonacci(n) > n². So, no matter what value is chosen for c, we can choose n = c. Then, we need to show Fibonacci(n) > n(n). The right side simplifies to n². For n > 12, we know Fibonacci(n) > n². Hence, we can always choose an n that contradicts the Fibonacci(n) ≤ cn inequality by choosing an n that is greater than n0, 12, and c.
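The constant choices in these proofs can be spot-checked numerically over finite ranges. This is a sanity check, not a proof (an O statement quantifies over all n ≥ n0); sketched in Python:

```python
def fib(n):
    # Fibonacci with the book's indexing: fib(1) = fib(2) = 1.
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

checks = [
    all(n - 7 <= 1 * (n + 12) for n in range(1, 500)),   # c = 1, n0 = 1
    all(n + 12 <= 2 * (n - 7) for n in range(26, 500)),  # c = 2, n0 = 26
    all(3 * n <= 2 * (2 * n) for n in range(1, 500)),    # c = 2, n0 = 1
    all(n <= fib(n) for n in range(5, 60)),              # c = 1, n0 = 5
    all(fib(n) > n * n for n in range(13, 60)),          # used in the last proof
]
```

Every entry of checks comes out True, matching the constants chosen above.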


For all of the examples where g is in O( f ), there are many acceptable choices for c and n0 . For the given c values, we can always use a higher n0 value than the selected value. It only matters that there is some finite, positive constant we can choose for n0 , such that the required inequality, g(n) ≤ c f (n) holds for all values n ≥ n0 . Hence, our proofs work equally well with higher values for n0 than we selected. Similarly, we could always choose higher c values with the same n0 values. The key is just to pick any appropriate values for c and n0 , and show the inequality holds for all values n ≥ n0 .
Proving that a function is not in O( f ) is usually tougher. The key to these proofs is that the value of n that invalidates the inequality is selected after the values of c and n0 are chosen. One way to think of this is as a game between two adversaries. The first player picks c and n0 , and the second player picks n. To show the property that g is not in O( f ), we need to show that no matter what values the first player picks for c and n0 , the second player can always find a value n that is greater than n0 such that g(n) > c f (n).
Exercise 7.2. For each of the g functions below, answer whether or not g is in the set O(n). Your answer should include a proof. If g is in O(n) you should identify values of c and n0 that can be selected to make the necessary inequality hold.
If g is not in O(n) you should argue convincingly that no matter what values are chosen for c and n0 there are values of n ≥ n0 such that the inequality in the definition of O does not hold.
a. g(n) = n + 5
b. g(n) = .01n
c. g(n) = 150n + √n
d. g(n) = n^1.5
e. g(n) = n!
Exercise 7.3. [ ] Given f is some function in O(h), and g is some function not in O(h), which of the following must always be true:
a. For all positive integers m, f(m) ≤ g(m).
b. For some positive integer m, f(m) < g(m).
c. For some positive integer m0, and all positive integers m > m0, f(m) < g(m).

7.2.2 Omega

The set Ω( f ) (omega) is the set of functions that grow no slower than f grows. So, a function g is in Ω( f ) if g grows as fast as f or faster. Contrast this with O( f ), the set of all functions that grow no faster than f grows. In Figure 7.2, Ω( f ) is the set of all functions outside the darker circle.
The formal definition of Ω( f ) is nearly identical to the definition of O( f ): the only difference is the ≤ comparison is changed to ≥.
Definition of Ω( f ). The function g is a member of the set Ω( f ) if and only if there exist positive constants c and n0 such that, for all values n ≥ n0, g(n) ≥ cf(n).


7.2. Orders of Growth

Example 7.2: Ω Examples
We repeat selected examples from the previous section with Ω instead of O. The strategy is similar: we show g is in Ω( f ) using the definition of Ω( f ) by choosing positive constants for the values of c and n0, and showing that the property g(n) ≥ cf(n) holds for all values n ≥ n0. To show g is not in Ω( f ), we need to explain how, for any choices of c and n0, we can find a choice for n ≥ n0 such that g(n) < cf(n).

n − 7 is in Ω(n + 12)
Choose c = 1/2 and n0 = 26. Then, we need to show n − 7 ≥ (1/2)(n + 12) for all values n ≥ 26. This is true, since the inequality simplifies to n/2 ≥ 13, which holds for all values n ≥ 26.

2n is in Ω(3n)
Choose c = 1/3 and n0 = 1. Then, 2n ≥ (1/3)(3n) simplifies to n ≥ 0, which holds for all values n ≥ 1.

n is not in Ω(n²)
Whatever values are chosen for c and n0, we can choose n ≥ n0 such that n ≥ cn² does not hold. Choose n > 1/c (note that c must be less than 1 for the inequality to hold for any positive n, so if c is not less than 1 we can just choose n ≥ 2). Then, the right side of the inequality, cn², will be greater than n, and the needed inequality n ≥ cn² does not hold.

n is not in Ω(Fibonacci(n))
No matter what values are chosen for c and n0, we can choose n ≥ n0 such that n ≥ c · Fibonacci(n) does not hold. The value of Fibonacci(n) more than doubles every time n is increased by 2 (see Section 7.2.1), but the value of n only increases by 2. Hence, if we keep increasing n, eventually c · Fibonacci(n) > n for any choice of c.
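As with the O examples, the Ω constants and the doubling property can be spot-checked over a finite range (again a sanity check, not a proof); in Python:

```python
def fib(n):
    # Fibonacci with the book's indexing: fib(1) = fib(2) = 1.
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

checks = [
    all(n - 7 >= (n + 12) / 2 for n in range(26, 500)),      # c = 1/2, n0 = 26
    all(2 * n >= (1 / 3) * (3 * n) for n in range(1, 500)),  # c = 1/3, n0 = 1
    # Fibonacci more than doubles with each increase of n by 2 (for n >= 2):
    all(fib(n + 2) > 2 * fib(n) for n in range(2, 50)),
]
```

The doubling check is the fact used in the final argument: an exponentially growing function eventually overtakes cn for every constant c.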

Exercise 7.4. Repeat Exercise 7.2 using Ω instead of O.
Exercise 7.5. For each part, identify a function g that satisfies the property.
a. g is in O(n2 ) but not in Ω(n2 ).
b. g is not in O(n2 ) but is in Ω(n2 ).
c. g is in both O(n2 ) and Ω(n2 ).

7.2.3 Theta

The set Θ( f ) (theta) is the set of functions that grow at the same rate as f. It is the intersection of the sets O( f ) and Ω( f ). Hence, a function g is in Θ( f ) if and only if g is in O( f ) and g is in Ω( f ). In Figure 7.2, Θ( f ) is the ring between the outer and inner circles.
An alternate definition combines the inequalities for O and Ω:
Definition of Θ( f ). The function g is a member of the set Θ( f ) if and only if there exist positive constants c1, c2, and n0 such that, for all values n ≥ n0, c1 f(n) ≥ g(n) ≥ c2 f(n).


If g(n) is in Θ( f (n)), then the sets Θ( f (n)) and Θ( g(n)) are identical: if g(n) ∈ Θ( f (n)) then g and f grow at the same rate.
Example 7.3: Θ Examples
Determining membership in Θ( f ) is simple once we know membership in O( f ) and Ω( f ).

n − 7 is in Θ(n + 12)
Since n − 7 is in O(n + 12) and n − 7 is in Ω(n + 12), we know n − 7 is in Θ(n + 12). Intuitively, n − 7 increases at the same rate as n + 12, since adding one to n adds one to both function outputs. We can also show this using the definition of Θ( f ): choose c1 = 1, c2 = 1/2, and n0 = 38.

2n is in Θ(3n)
2n is in O(3n) and in Ω(3n). Choose c1 = 1, c2 = 1/3, and n0 = 1.

n is not in Θ(n²)
n is not in Ω(n²). Intuitively, n grows slower than n² since increasing n by one always increases the value of the first function, n, by one, but increases the value of n² by 2n + 1, a value that increases as n increases.

n² is not in Θ(n)
n² is not in O(n).

n − 2 is not in Θ(Fibonacci(n))
n − 2 is not in Ω(Fibonacci(n)): the argument in Example 7.2 showing n is not in Ω(Fibonacci(n)) applies equally to n − 2.

Fibonacci(n) is not in Θ(n)
Fibonacci(n) is not in O(n): Example 7.1 showed Fibonacci(n) is not in O(n − 2), and the same argument works with cn in place of c(n − 2).
Properties of O, Ω, and Θ. Because O, Ω, and Θ are concerned with the asymptotic properties of functions, that is, how they grow as inputs approach infinity, many functions that are different when the actual output values matter generate identical sets with the O, Ω, and Θ functions. For example, we saw n − 7 is in
Θ(n + 12) and n + 12 is in Θ(n − 7). In fact, every function that is in Θ(n − 7) is also in Θ(n + 12).
More generally, if we could prove g is in Θ( an + k) where a is a positive constant and k is any constant, then g is also in Θ(n). Thus, the set Θ( an + k) is equivalent to the set Θ(n).
We prove Θ(an + k) ≡ Θ(n) using the definition of Θ. To prove the sets are equivalent, we need to show inclusion in both directions.

Θ(n) ⊆ Θ(an + k): For any function g, if g is in Θ(n) then g is in Θ(an + k). Since g is in Θ(n) there exist positive constants c1, c2, and n0 such that c1n ≥ g(n) ≥ c2n. To show g is also in Θ(an + k) we find d1, d2, and m0 such that d1(an + k) ≥ g(n) ≥ d2(an + k) for all n ≥ m0. Simplifying the inequalities, we need (ad1)n + kd1 ≥ g(n) ≥ (ad2)n + kd2. Ignoring the constants for now, we can pick d1 = c1/a and d2 = c2/a. Since g is in Θ(n), we know

(a · (c1/a))n ≥ g(n) ≥ (a · (c2/a))n

is satisfied. As for the constants, as n increases they become insignificant. Adding one to d1 and d2 adds an to the first term and k to the second term; hence, as n grows, an becomes greater than k.
Θ(an + k) ⊆ Θ(n): For any function g, if g is in Θ(an + k) then g is in Θ(n). Since g is in Θ(an + k) there exist positive constants c1, c2, and n0 such that c1(an + k) ≥ g(n) ≥ c2(an + k). Simplifying the inequalities, we have (ac1)n + kc1 ≥ g(n) ≥ (ac2)n + kc2 or, for some different positive constants b1 = ac1 and b2 = ac2 and constants k1 = kc1 and k2 = kc2, b1n + k1 ≥ g(n) ≥ b2n + k2. To show g is also in Θ(n), we find d1, d2, and m0 such that d1n ≥ g(n) ≥ d2n for all n ≥ m0. If it were not for the constants, we already have this with d1 = b1 and d2 = b2. As before, the constants become inconsequential as n increases.

This property also holds for the O and Ω operators since our proof for Θ also proved the property for the O and Ω inequalities.
This result can be generalized to any polynomial: the set Θ(a0 + a1n + a2n² + ... + aknᵏ) is equivalent to Θ(nᵏ). Because we are concerned with the asymptotic growth, only the highest power term of the polynomial matters once n gets big enough.

Exercise 7.6. Repeat Exercise 7.2 using Θ instead of O.
Exercise 7.7. Show that Θ(n2 − n) is equivalent to Θ(n2 ).
Exercise 7.8. [ ] Is Θ(n2 ) equivalent to Θ(n2.1 )? Either prove they are identical, or prove they are different.
Exercise 7.9. [ ] Is Θ(2n ) equivalent to Θ(3n )? Either prove they are identical, or prove they are different.

7.3 Analyzing Procedures

By considering the asymptotic growth of functions, rather than their actual outputs, the O, Ω, and Θ operators allow us to hide constants and factors that change depending on the speed of our processor, how data is arranged in memory, and the specifics of how our interpreter is implemented. Instead, we can consider the essential properties of how the running time of the procedures increases with the size of the input.
This section explains how to measure input sizes and running times. To understand the growth rate of a procedure’s running time, we need a function that maps the size of the inputs to the procedure to the amount of time it takes to evaluate the application. First we consider how to measure the input size; then, we consider how to measure the running time. In Section 7.3.3 we consider which input of a given size should be used to reason about the cost of applying a procedure. Section 7.4 provides examples of procedures with different growth rates. The growth rate of a procedure’s running time gives us an understanding of how the running time increases as the size of the input increases.

7.3.1 Input Size

Procedure inputs may be many different types: Numbers, Lists of Numbers,
Lists of Lists, Procedures, etc. Our goal is to characterize the input size with a single number that does not depend on the types of the input.
We use the Turing machine to model a computer, so the way to measure the size of the input is the number of characters needed to write the input on the tape.
The characters can be from any fixed-size alphabet, such as the ten decimal digits, or the letters of the alphabet. The number of different symbols in the tape


alphabet does not matter for our analysis since we are concerned with orders of growth, not absolute values. Within the O, Ω, and Θ operators, a constant factor does not matter (e.g., Θ(n) ≡ Θ(17n + 523)). This means it doesn’t matter whether we use an alphabet with two symbols or an alphabet with 256 symbols.
With two symbols the input may be 8 times as long as it is with a 256-symbol alphabet, but the constant factor does not matter inside the asymptotic operator.
Thus, we measure the size of the input as the number of symbols required to write the input on a Turing Machine input tape. To figure out the input size of a given type, we need to think about how many symbols it would require to write down inputs of that type.
Booleans. There are only two Boolean values: true and false. Hence, the length of a Boolean input is fixed.
Numbers. Using the decimal number system (that is, 10 tape symbols), we can write a number of magnitude n using log10 n digits. Using the binary number system (that is, 2 tape symbols), we can write it using log2 n bits. Within the asymptotic operators, the base of the logarithm does not matter (as long as it is a constant) since it changes the result by a constant factor. We can see this from the argument above — changing the number of symbols in the input alphabet changes the input length by a constant factor which has no impact within the asymptotic operators.
Lists. If the input is a List, the size of the input is related to the number of elements in the list. If each element is a constant size (for example, a list of numbers where each number is between 0 and 100), the size of the input list is some constant multiple of the number of elements in the list. Hence, the size of an input that is a list of n elements is cn for some constant c. Since Θ(cn) = Θ(n), the size of a List input is Θ(n) where n is the number of elements in the List. If
List elements can vary in size, then we need to account for that in the input size.
For example, suppose the input is a List of Lists, where there are n elements in each inner List, and there are n List elements in the main List. Then, there are n2 total elements and the input size is in Θ(n2 ).
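For Number inputs, the logarithmic relationship between magnitude and written length is easy to observe concretely; a quick Python sketch (using the length of the printed decimal representation as the symbol count):

```python
def decimal_digits(n):
    # Symbols needed to write a positive integer on the tape in base 10.
    return len(str(n))

# Multiplying the magnitude by 10 adds just one symbol: the written
# length grows with the logarithm of the magnitude, not the magnitude.
lengths = [decimal_digits(10 ** k) for k in range(6)]  # → [1, 2, 3, 4, 5, 6]
```

A million is a thousand times bigger than a thousand, but takes only three more symbols to write down.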

7.3.2 Running Time

We want a measure of the running time of a procedure that satisfies two properties: (1) it should be robust to ephemeral properties of a particular execution or computer, and (2) it should provide insights into how long it takes to evaluate the procedure on a wide range of inputs.
To estimate the running time of an evaluation, we use the number of steps required to perform the evaluation. The actual number of steps depends on the details of how much work can be done on each step. For any particular processor, both the time it takes to perform a step and the amount of work that can be done in one step vary. When we analyze procedures, however, we usually don’t want to deal with these details. Instead, what we care about is how the running time changes as the input size increases. This means we can count anything we want as a “step” as long as each step is approximately the same size and the time a step requires does not depend on the size of the input.
The clearest and simplest definition of a step is to use one Turing Machine step.
We have a precise definition of exactly what a Turing Machine can do in one step:


Time makes more converts than reason. Thomas Paine


it can read the symbol in the current square, write a symbol into that square, transition its internal state number, and move one square to the left or right.
Counting Turing Machine steps is very precise, but difficult because we do not usually start with a Turing Machine description of a procedure and creating one is tedious.
Instead, we usually reason directly from a Scheme procedure (or any precise description of a procedure) using larger steps. As long as we can claim that whatever we consider a step could be simulated using a constant number of steps on a Turing Machine, our larger steps will produce the same answer within the asymptotic operators. One possibility is to count the number of times an evaluation rule is used in an evaluation of an application of the procedure. The amount of work in each evaluation rule may vary slightly (for example, the evaluation rule for an if expression seems more complex than the rule for a primitive) but does not depend on the input size.
Hence, it is reasonable to assume all the evaluation rules take constant time.
This does not include any additional evaluation rules that are needed to apply one rule. For example, the evaluation rule for application expressions includes evaluating every subexpression. Evaluating an application constitutes one work unit for the application rule itself, plus all the work required to evaluate the subexpressions. In cases where the bigger steps are unclear, we can always return to our precise definition of a step as one step of a Turing Machine.

7.3.3 Worst Case Input

A procedure may have different running times for inputs of the same size.
For example, consider this procedure that takes a List as input and outputs the first positive number in the list:
(define (list-first-pos p)
(if (null? p) (error "No positive element found")
(if (> (car p) 0) (car p) (list-first-pos (cdr p)))))
If the first element in the input list is positive, evaluating the application of list-first-pos requires very little work. It is not necessary to consider any other elements in the list if the first element is positive. On the other hand, if none of the elements are positive, the procedure needs to test each element in the list until it reaches the end of the list (where the base case reports an error).

In our analyses we usually consider the worst case input. For a given size, the worst case input is the input for which evaluating the procedure takes the most work. By focusing on the worst case input, we know the maximum running time for the procedure. Without knowing something about the possible inputs to the procedure, it is safest to be pessimistic about the input and not assume any properties that are not known (such as that the first number in the list is positive for the list-first-pos example).
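The gap between best- and worst-case inputs is concrete if we count how many elements are examined. A Python transliteration of list-first-pos with an added counter (the names here are my own):

```python
def first_pos_counting(lst):
    # Returns (first positive element, number of elements examined).
    examined = 0
    for x in lst:
        examined += 1
        if x > 0:
            return x, examined
    # Mirrors the (error ...) base case when no element is positive.
    raise ValueError("No positive element found")

# Best case: the first element is positive, so one element is examined.
# Worst case: only the last element is positive (or none is), so all
# n elements are examined.
```

For a list of length n, the worst case examines all n elements; a typical analysis reports that worst-case count.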
In some cases, we also consider the average case input. Since most procedures can take infinitely many inputs, this requires understanding the distribution of possible inputs to determine an “average” input. This is often necessary when we are analyzing the running time of a procedure that uses another helper procedure. If we use the worst-case running time for the helper procedure, we will grossly overestimate the running time of the main procedure. Instead, since


we know how the main procedure uses the helper procedure, we can more precisely estimate the actual running time by considering the actual inputs. We see an example of this in the analysis of how the + procedure is used by list-length in Section 7.4.2.

7.4 Growth Rates

Since our goal is to understand how the running time of an application of a procedure is related to the size of the input, we want to devise a function that takes as input a number that represents the size of the input and outputs the maximum number of steps required to complete the evaluation on an input of that size. Symbolically, we can think of this function as:
Max-Steps_Proc : Number → Number

where Proc is the name of the procedure we are analyzing. Because the output represents the maximum number of steps required, we need to consider the worst-case input of the given size.
Because of all the issues with counting steps exactly, and the uncertainty about how much work can be done in one step on a particular machine, we cannot usually determine the exact function for Max-Steps Proc . Instead, we characterize the running time of a procedure with a set of functions denoted by an asymptotic operator. Inside the O, Ω, and Θ operators, the actual time needed for each step does not matter since the constant factors are hidden by the operator; what matters is how the number of steps required grows as the size of the input grows.
Hence, we will characterize the running time of a procedure using a set of functions produced by one of the asymptotic operators. The Θ operator provides the most information. Since Θ( f ) is the intersection of O( f ) (no faster than) and
Ω( f ) (no slower than), knowing that the running time of a procedure is in Θ( f ) for some function f provides much more information than just knowing it is in
O( f ) or just knowing that it is in Ω( f ). Hence, our goal is to characterize the running time of a procedure using the set of functions defined by Θ( f ) of some function f .
The rest of this section provides examples of procedures with different growth rates, from slowest (no growth) through increasingly rapid growth rates. The growth classes described are important classes that are commonly encountered when analyzing procedures, but these are only examples of growth classes. Between each pair of classes described here, there are an unlimited number of different growth classes.

7.4.1 No Growth: Constant Time

If the running time of a procedure does not increase when the size of the input increases, the procedure must be able to produce its output by looking at only a constant number of symbols in the input. Procedures whose running time does not increase with the size of the input are known as constant time procedures.
Their running time is in O(1) — it does not grow at all. By convention, we use
O(1) instead of Θ(1) to describe constant time. Since there is no way to grow slower than not growing at all, O(1) and Θ(1) are equivalent.



We cannot do much in constant time, since we cannot even examine the whole input. A constant time procedure must be able to produce its output by examining only a fixed-size part of the input. Recall that the input size measures the number of squares needed to represent the input. No matter how long the input is, a constant time procedure can look at no more than some fixed number of squares on the tape, so cannot even read the whole input.
An example of a constant time procedure is the built-in procedure car. When car is applied to a non-empty list, it evaluates to the first element of that list.
No matter how long the input list is, all the car procedure needs to do is extract the first component of the list. So, the running time of car is in O(1).4 Other built-in procedures that involve lists and pairs that have running times in O(1) include cons, cdr, null?, and pair?. None of these procedures need to examine more than the first pair of the list.

7.4.2 Linear Growth

When the running time of a procedure increases by a constant amount when the size of the input grows by one, the running time of the procedure grows linearly with the input size. If the input size is n, the running time is in Θ(n). If a procedure has running time in Θ(n), doubling the size of the input will approximately double the execution time.
An example of a procedure that has linear growth is the elementary school addition algorithm from Section 6.2.3. To add two d-digit numbers, we need to perform a constant amount of work for each digit. The number of steps required grows linearly with the size of the numbers (recall from Section 7.3.1 that the size of a number is the number of input symbols needed to represent the number).
Many procedures that take a List as input have linear time growth. A procedure that does something that takes constant time with every element in the input List has running time that grows linearly with the size of the input, since adding one element to the list increases the number of steps by a constant amount.
Next, we analyze three list procedures, all of which have running times that scale linearly with the size of their input.
Example 7.4: Append
Consider the list-append procedure (from Example 5.6):
(define (list-append p q)
  (if (null? p) q (cons (car p) (list-append (cdr p) q))))
Since list-append takes two inputs, we need to be careful about how we refer to the input size. We use n_p to represent the number of elements in the first input, and n_q to represent the number of elements in the second input. So, our goal is to define a function Max-Steps_list-append(n_p, n_q) that captures how the maximum number of steps required to evaluate an application of list-append scales with the size of its input.
4 Since we are speculating based on what car does, not examining how a particular Scheme interpreter actually implements it, we cannot say definitively that its running time is in O(1). It would be rather shocking, however, for an implementation to implement car in a way such that its running time is not in O(1). The implementation of scar in Section 5.2.1 is constant time: regardless of the input size, evaluating an application of it involves evaluating a single application expression, and then evaluating an if expression.


Chapter 7. Cost

To analyze the running time of list-append, we examine its body, which is an if expression. The predicate expression applies the null? procedure, which is constant time since the effort required to determine if a list is null does not depend on the length of the list. When the predicate expression evaluates to true, the consequent expression is just q, which can also be evaluated in constant time.
Next, we consider the alternate expression. It includes a recursive application of list-append. Hence, the running time of the alternate expression is the time required to evaluate the recursive application plus the time required to evaluate everything else in the expression. The other expressions to evaluate are applications of cons, car, and cdr, all of which are constant time procedures.
So, we can define the total running time recursively as:
Max-Steps_list-append(n_p, n_q) = C + Max-Steps_list-append(n_p − 1, n_q)

where C is some constant that reflects the time for all the operations besides the recursive call. Note that the value of n_q does not matter, so we simplify this to:

Max-Steps_list-append(n_p) = C + Max-Steps_list-append(n_p − 1).
This does not yet provide a useful characterization of the running time of listappend though, since it is a circular definition. To make it a recursive definition, we need a base case. The base case for the running time definition is the same as the base case for the procedure: when the input is null. For the base case, the running time is constant:
Max-Steps_list-append(0) = C_0

where C_0 is some constant.
To better characterize the running time of list-append, we want a closed form solution. For a given input of size n, Max-Steps(n) is C + C + C + . . . + C + C_0, where there are n of the C terms in the sum (one for each recursive call, since each call reduces n_p by one until reaching the base case). This simplifies to nC + C_0. We do not know what the values of C and C_0 are, but within the asymptotic notations the constant values do not matter. The important property is that the running time scales linearly with the size of its input.
Thus, the running time of list-append is in Θ(n_p) where n_p is the number of elements in the first input.
Usually, we do not need to reason at quite this low a level. Instead, to analyze the running time of a recursive procedure it is enough to determine the amount of work involved in each recursive call (excluding the recursive application itself) and multiply this by the number of recursive calls. For this example, there are n_p recursive calls since each call reduces the length of the p input by one until the base case is reached. Each call involves only constant-time procedures (other than the recursive application), so the amount of work involved in each call is constant. Hence, the running time is in Θ(n_p). Equivalently, the running time for the list-append procedure scales linearly with the length of the first input list.
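The call-counting argument above can be checked directly. Here is a sketch in Python (for easy instrumentation; the book's examples are in Scheme, and the function names and counter mechanism here are our own) that mirrors list-append and tallies the number of applications:

```python
def list_append(p, q, counter):
    # Python analogue of list-append; counter[0] tallies applications.
    counter[0] += 1
    if not p:                 # (null? p) -> the result is q
        return q
    # (cons (car p) (list-append (cdr p) q))
    return [p[0]] + list_append(p[1:], q, counter)

def append_calls(np, nq):
    counter = [0]
    list_append(list(range(np)), list(range(nq)), counter)
    return counter[0]
```

As predicted, append_calls(5, 100) is 6 and append_calls(10, 100) is 11: the number of applications grows linearly with n_p and is independent of n_q.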
Example 7.5: Length
Consider the list-length procedure from Example 5.1:
(define (list-length p) (if (null? p) 0 (+ 1 (list-length (cdr p)))))


This procedure makes one recursive application of list-length for each element in the input p. If the input has n elements, there will be n + 1 total applications of list-length to evaluate (one for each element, and one for the null). So, the total work is in Θ(n · work for each recursive application).
To determine the running time, we need to determine how much work is involved in each application. Evaluating an application of list-length involves evaluating its body, which is an if expression. To evaluate the if expression, the predicate expression, (null? p), must be evaluated first. This requires constant time since the null? procedure has constant running time (see Section 7.4.1). The consequent expression is the primitive expression, 0, which can be evaluated in constant time. The alternate expression, (+ 1 (list-length (cdr p))), includes the recursive call. Since there are n + 1 total applications of list-length to evaluate, the total running time is n + 1 times the work required for each application (other than the recursive application itself).
The remaining work is evaluating (cdr p) and evaluating the + application. The cdr procedure is constant time. Analyzing the running time of the + procedure application is more complicated.
Cost of Addition. Since + is a built-in procedure, we need to think about how it might be implemented. Following the elementary school addition algorithm
(from Section 6.2.3), we know we can add any two numbers by walking down the digits. The work required for each digit is constant; we just need to compute the corresponding result and carry bits using a simple formula or lookup table.
The number of digits to add is the maximum number of digits in the two input numbers. Thus, if there are b digits to add, the total work is in Θ(b). In the worst case, we need to look at all the digits in both numbers. In general, we cannot do asymptotically better than this, since adding two arbitrary numbers might require looking at all the digits in both numbers.
But, in the list-length procedure the + is used in a very limited way: one of the inputs is always 1. We might be able to add 1 to a number without looking at all the digits in the number. Recall the addition algorithm: we start at the rightmost
(least significant) digit, add that digit, and continue with the carry. If one of the input numbers is 1, then once the carry is zero we know none of the more significant digits will need to change. In the worst case, adding one requires changing every digit in the other input. For example, (+ 99999 1) is 100000. In the best case (when the last digit is below 9), adding one requires only examining and changing one digit.
Figuring out the average case is more difficult, but necessary to get a good estimate of the running time of list-length. We assume the numbers are represented in binary, so instead of decimal digits we are counting bits (this is both simpler, and closer to how numbers are actually represented in the computer). Approximately half the time, the least significant bit is a 0, so we only need to examine one bit. When the last bit is not a 0, we need to examine the second least significant bit (the second bit from the right): if it is a 0 we are done; if it is a 1, we need to continue.
We always need to examine one bit, the least significant bit. Half the time we also need to examine the second least significant bit. Of those times, half the time we need to continue and examine the next least significant bit. This continues through the whole number. Thus, the expected number of bits we need to examine is

1 + 1/2 (1 + 1/2 (1 + 1/2 (1 + 1/2 (1 + . . .))))

where the number of terms is the number of bits in the input number, b. Simplifying the equation, we get:
1 + 1/2 + 1/4 + 1/8 + 1/16 + . . . + 1/2^b

No matter how large b gets, this value is always less than 2. So, on average, the number of bits to examine to add 1 is constant: it does not depend on the length of the input number.
This result generalizes to addition where one of the inputs is any constant value.
Adding any constant C to a number n is equivalent to adding one C times. Since adding one is a constant time procedure, adding one C times can also be done in constant time for any constant C.
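The expected-bits argument can be checked by simulation. Here is a sketch in Python (an illustration of our own; the book's code is in Scheme) that counts the bits examined when incrementing — every trailing 1 bit plus the first 0 bit — and averages over all b-bit values:

```python
def bits_examined(x):
    # Bits examined when adding 1 to x in binary: each trailing 1 bit
    # flips to 0 and must be examined, plus the first 0 bit.
    count = 1
    while x & 1:
        x >>= 1
        count += 1
    return count

def average_bits(b):
    # Average over all b-bit values 0 .. 2^b - 1.
    n = 2 ** b
    return sum(bits_examined(x) for x in range(n)) / n
```

For example, average_bits(3) is 1.875, and as b grows the average approaches but never reaches 2, matching the geometric series bound.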
Excluding the recursive application, the list-length application involves applications of two constant time procedures: cdr and adding one using +. Hence, the total time needed to evaluate one application of list-length, excluding the recursive application, is constant.
There are n + 1 total applications of list-length to evaluate, so the total running time is c(n + 1) where c is the amount of time needed for each application.
The set Θ(c(n + 1)) is identical to the set Θ(n), so the running time for the length procedure is in Θ(n) where n is the length of the input list.
Example 7.6: Accessing List Elements
Consider the list-get-element procedure from Example 5.3:
(define (list-get-element p n)
  (if (= n 1)
      (car p)
      (list-get-element (cdr p) (- n 1))))
The procedure takes two inputs, a List and a Number selecting the element of the list to get. Since there are two inputs, we need to think carefully about the input size. We can use variables to represent the size of each input, for example s_p and s_n for the size of p and n respectively. In this case, however, only the size of the first input really matters.
The procedure body is an if expression. The predicate uses the built-in = procedure to compare n to 1. The worst case running time of the = procedure is linear in the size of the input: it potentially needs to look at all bits in the input numbers to determine if they are equal. Similarly to +, however, if one of the inputs is a constant, the comparison can be done in constant time. To compare a number of any size to 1, it is enough to look at a few bits. If the least significant bit of the input number is not a 1, we know the result is false. If it is a 1, we need to examine a few other bits of the input number to determine if its value is different from 1 (the exact number of bits depends on the details of how numbers are represented). So, the = comparison can be done in constant time.


If the predicate is true, the base case applies the car procedure, which has constant running time. The alternate expression involves the recursive call, as well as evaluating (cdr p), which requires constant time, and (- n 1). The - procedure is similar to +: for arbitrary inputs, its worst case running time is linear in the input size, but when one of the inputs is a constant the running time is constant. This follows from a similar argument to the one we used for the + procedure (Exercise 7.13 asks for a more detailed analysis of the running time of subtraction). So, the work required for each recursive call is constant.
The number of recursive calls is determined by the value of n and the number of elements in the list p. In the best case, when n is 1, there are no recursive calls and the running time is constant since the procedure only needs to examine the first element. Each recursive call reduces the value passed in as n by 1, so the number of recursive calls scales linearly with n (the actual number is n − 1 since the base case is when n equals 1). But, there is a limit on the value of n for which this is true. If the value passed in as n exceeds the number of elements in p, the procedure will produce an error when it attempts to evaluate (cdr p) for the empty list. This happens after s_p recursive calls, where s_p is the number of elements in p. Hence, the running time of list-get-element does not grow with the length of the input passed as n; after the value of n exceeds the number of elements in p it does not matter how much bigger it gets, the running time does not continue to increase.
Thus, the worst case running time of list-get-element grows linearly with the length of the input list. Equivalently, the running time of list-get-element is in Θ(s_p) where s_p is the number of elements in the input list.
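The claim that the work tracks n (bounded by the list length) can be checked with a short instrumented sketch, here in Python (names and counter are our own illustration):

```python
def list_get_element(p, n, counter):
    # Python analogue of list-get-element, counting applications.
    counter[0] += 1
    if n == 1:
        return p[0]
    return list_get_element(p[1:], n - 1, counter)

def get_element_calls(length, n):
    counter = [0]
    list_get_element(list(range(length)), n, counter)
    return counter[0]
```

For instance, get_element_calls(10, 7) and get_element_calls(1000, 7) are both 7: the number of applications depends on n, not on how long the list is beyond that point, and at most s_p applications occur before the list is exhausted.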
Exercise 7.10. Explain why the list-map procedure from Section 5.4.1 has running time that is linear in the size of its List input. Assume the procedure input has constant running time.
Exercise 7.11. Consider the list-sum procedure (from Example 5.2):
(define (list-sum p) (if (null? p) 0 (+ (car p) (list-sum (cdr p)))))
What assumptions are needed about the elements in the list for the running time to be linear in the number of elements in the input list?
Exercise 7.12. For the decimal six-digit odometer (shown in the picture on page 142), we measure the amount of work to add one as the total number of digit wheel turns required. For example, going from 000000 to 000001 requires one work unit, but going from 000099 to 000100 requires three work units.
a. What are the worst case inputs?
b. What are the best case inputs?
c. [ ] On average, how many work units are required for each mile? Assume over the lifetime of the odometer, the car travels 1,000,000 miles.
d. Lever voting machines were used by the majority of American voters in the 1960s, although they are not widely used today. Most lever machines used a three-digit odometer to tally votes. Explain why candidates ended up with 99 votes on a machine far more often than 98 or 100 on these machines.


Exercise 7.13. [ ] The analysis of list-get-element argued, by comparison to +, that the - procedure has constant running time when one of the inputs is a constant. Develop a more convincing argument why this is true by analyzing the worst case and average case inputs for -.
Exercise 7.14. [ ] Our analysis of the work required to add one to a number argued that it could be done in constant time. Test experimentally if the DrRacket
+ procedure actually satisfies this property. Note that one + application is too quick to measure well using the time procedure, so you will need to design a procedure that applies + many times without doing much other work.

7.4.3 Quadratic Growth

If the running time of a procedure scales as the square of the size of the input, the procedure's running time grows quadratically. Doubling the size of the input approximately quadruples the running time. The running time is in Θ(n^2) where n is the size of the input.
A procedure that takes a list as input has running time that grows quadratically if it goes through all elements in the list once for every element in the list. For example, we can compare every element in a list of length n with every other element using n(n − 1) comparisons. This simplifies to n^2 − n, but Θ(n^2 − n) is equivalent to Θ(n^2) since as n increases only the highest power term matters (see Exercise 7.7).
Example 7.7: Reverse
Consider the list-reverse procedure defined in Section 5.4.2:
(define (list-reverse p)
  (if (null? p) null (list-append (list-reverse (cdr p)) (list (car p)))))
To determine the running time of list-reverse, we need to know how many recursive calls there are and how much work is involved in each recursive call. Each recursive application passes in (cdr p) as the input, so reduces the length of the input list by one. Hence, applying list-reverse to an input list with n elements involves n recursive calls.
The work for each recursive application, excluding the recursive call itself, is applying list-append. The first input to list-append is the output of the recursive call. As we argued in Example 7.4, the running time of list-append is in Θ(n p ) where n p is the number of elements in its first input. So, to determine the running time we need to know the length of the first input list to list-append. For the first call, (cdr p) is the parameter, with length n − 1; for the second call, there will be n − 2 elements; and so forth, until the final call where (cdr p) has 0 elements.
The total number of elements in all of these calls is:

(n − 1) + (n − 2) + . . . + 1 + 0.
The average number of elements in each call is approximately n/2. Within the asymptotic operators the constant factor of 1/2 does not matter, so the average running time for each recursive application is in Θ(n).
There are n recursive applications, so the total running time of list-reverse is n times the average running time of each recursive application: n · Θ(n) = Θ(n^2).
Thus, the running time is quadratic in the size of the input list.
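The total append work, (n − 1) + (n − 2) + . . . + 0 = n(n − 1)/2, can be confirmed with a counting sketch, here in Python (our own illustration; the book's code is in Scheme):

```python
def list_reverse(p, counter):
    # Python analogue of list-reverse. counter[0] tallies the cost of each
    # list-append application, measured as the length of its first input.
    if not p:
        return []
    rest = list_reverse(p[1:], counter)
    counter[0] += len(rest)
    return rest + [p[0]]

def reverse_work(n):
    counter = [0]
    list_reverse(list(range(n)), counter)
    return counter[0]
```

For example, reverse_work(10) is 45 = 10 · 9 / 2 and reverse_work(20) is 190: doubling the input size roughly quadruples the work.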
Example 7.8: Multiplication
Consider the problem of multiplying two numbers. The elementary school long multiplication algorithm works by multiplying each digit in b by each digit in a, aligning the intermediate results in the right places, and summing the results:

                        a_{n-1}   . . .   a_1       a_0
   ×                    b_{n-1}   . . .   b_1       b_0
   ------------------------------------------------------------
                        a_{n-1}b_0        . . .   a_1 b_0   a_0 b_0
                  a_{n-1}b_1        . . .   a_1 b_1   a_0 b_1
        ...
   a_{n-1}b_{n-1}   . . .   a_1 b_{n-1}   a_0 b_{n-1}
 + ------------------------------------------------------------
   r_{2n-1}   r_{2n-2}   . . .   r_3   r_2   r_1   r_0

If both input numbers have n digits, there are n^2 digit multiplications, each of which can be done in constant time. The intermediate results will be n rows, each containing n digits. So, the total number of digits to add is n^2: 1 digit in the ones place, 2 digits in the tens place, . . ., n digits in the 10^{n-1}s place, . . ., 2 digits in the 10^{2n-3}s place, and 1 digit in the 10^{2n-2}s place. Each digit addition requires constant work, so the total work for all the digit additions is in Θ(n^2). Adding the work for both the digit multiplications and the digit additions, the total running time for the elementary school multiplication algorithm is quadratic in the number of input digits, Θ(n^2) where n is the number of digits in the inputs.
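The n^2 count of digit multiplications can be verified with a small sketch, here in Python (digit-list encoding and names are our own illustration):

```python
def long_multiply(a, b):
    # Elementary school long multiplication; a and b are lists of digits,
    # least significant digit first. Returns (product, digit_mults) where
    # digit_mults counts single-digit multiplications performed.
    digit_mults = 0
    product = 0
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            digit_mults += 1
            product += da * db * 10 ** (i + j)
    return product, digit_mults
```

For example, multiplying the 3-digit numbers 123 and 456 (digit lists [3, 2, 1] and [6, 5, 4]) performs 9 digit multiplications; two 4-digit numbers require 16.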
This is not the fastest known algorithm for multiplying two numbers, although it was the best algorithm known until 1960. In 1960, Anatolii Karatsuba discovered a multiplication algorithm with running time in Θ(n^{log_2 3}). Since log_2 3 < 1.585 this is an improvement over the Θ(n^2) elementary school algorithm. In 2007, Martin Fürer discovered an even faster algorithm for multiplication.5 It is not yet known if this is the fastest possible multiplication algorithm, or if faster ones exist.
Exercise 7.15. [ ] Analyze the running time of the elementary school long division algorithm.
Exercise 7.16. [ ] Define a Scheme procedure that multiplies two multi-digit numbers (without using the built-in ∗ procedure except to multiply single-digit numbers). Strive for your procedure to have running time in Θ(n) where n is the total number of digits in the input numbers.
Exercise 7.17. [ ] Devise an asymptotically faster general multiplication algorithm than Fürer's, or prove that no faster algorithm exists.
5 Martin Fürer, Faster Integer Multiplication, ACM Symposium on Theory of Computing, 2007.

7.4.4 Exponential Growth

If the running time of a procedure is multiplied by a constant factor each time the size of the input increases by one, the procedure's running time grows exponentially. The growth rate of a function whose output is multiplied by w when the input size, n, increases by one is w^n. Exponential growth is very fast—it is not feasible to evaluate applications of an exponential time procedure on large inputs.
For a surprisingly large number of interesting problems, the best known algorithm has exponential running time. Examples of problems like this include finding the best route between two locations on a map (the problem mentioned at the beginning of Chapter 4), the pegboard puzzle (Exploration 5.2), solving generalized versions of most other games such as Sudoku and Minesweeper, and finding the factors of a number. Whether or not it is possible to design faster algorithms that solve these problems is the most important open problem in computer science.
Example 7.9: Factoring
A simple way to find a factor of a given input number is to exhaustively try all possible numbers below the input number to find the first one that divides the number evenly. The find-factor procedure takes one number as input and outputs the lowest factor of that number (other than 1):
(define (find-factor n)
  (define (find-factor-helper v)
    (if (= (modulo n v) 0) v (find-factor-helper (+ 1 v))))
  (find-factor-helper 2))
The find-factor-helper procedure takes one input, the current guess v; the number to factor, n, is the parameter of the enclosing find-factor definition. Since all numbers are divisible by themselves, the modulo test will eventually be true for any positive input number, so the maximum number of recursive calls is n, the magnitude of the input to find-factor. The magnitude of n is exponential in its size, so the number of recursive calls is in Θ(2^b) where b is the number of bits in the input. This means even if the amount of work required for each recursive call were constant, the running time of the find-factor procedure would still be exponential in the size of its input.
The actual work for each recursive call is not constant, though, since it involves an application of modulo. The modulo built-in procedure takes two inputs and outputs the remainder when the first input is divided by the second input. Hence, its output is 0 if n is divisible by v. Computing a remainder, in the worst case, at least involves examining every bit in the input number, so scales at least linearly in the size of its input.6 This means the running time of find-factor is in Ω(2^b): it grows at least as fast as 2^b.
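The growth of the trial count with the magnitude of the input can be observed directly. Here is a sketch in Python (an iterative transcription for illustration; names are our own) that counts the modulo tests:

```python
def find_factor(n):
    # Returns the smallest factor of n (other than 1) and the number of
    # modulo tests performed; mirrors find-factor/find-factor-helper.
    v, tests = 2, 0
    while True:
        tests += 1
        if n % v == 0:
            return v, tests
        v += 1
```

For example, find_factor(91) returns (7, 6), while for a prime such as 101 the loop runs all the way up: find_factor(101) returns (101, 100). The number of tests grows with the magnitude n, which is exponential in the number of bits b.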
There are lots of ways we could produce a faster procedure for finding factors: stopping once the square root of the input number is reached since we know there is no need to check the rest of the numbers, skipping even numbers after 2 since if a number is divisible by any even number it is also divisible by 2, or using advanced sieve methods. These techniques can improve the running time by constant factors, but there is no known factoring algorithm that runs in faster than exponential time. The security of the widely used RSA encryption algorithm depends on factoring being hard. If someone finds a fast factoring algorithm it would put the codes used to secure Internet commerce at risk.7

6 In fact, computing the remainder requires performing division, which is quadratic in the size of the input.
Example 7.10: Power Set

The power set of a set S is the set of all subsets of S. For example, the power set of {1, 2, 3} is {{}, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}.
The number of elements in the power set of S is 2^|S| (where |S| is the number of elements in the set S).
Here is a procedure that takes a list as input, and produces as output the power set of the elements of the list:
(define (list-powerset s)
  (if (null? s) (list null)
      (list-append (list-map (lambda (t) (cons (car s) t))
                             (list-powerset (cdr s)))
                   (list-powerset (cdr s)))))
The list-powerset procedure produces a List of Lists. Hence, for the base case, instead of just producing null, it produces a list containing a single element, null. In the recursive case, we can produce the power set by appending the list of all the subsets that include the first element, with the list of all the subsets that do not include the first element. For example, the power set of {1, 2, 3} is found by finding the power set of {2, 3}, which is {{}, {2}, {3}, {2, 3}}, and taking the union of that set with the set produced by inserting 1 into each of those subsets.
An application of list-powerset involves applying list-append, and two recursive applications of (list-powerset (cdr s)). Increasing the size of the input list by one doubles the total number of applications of list-powerset since we need to evaluate (list-powerset (cdr s)) twice. The number of applications of list-powerset is 2^n where n is the length of the input list.8
The body of list-powerset is an if expression. The predicate applies the constant-time procedure, null?. The consequent expression, (list null), is also constant time. The alternate expression is an application of list-append. From Example 7.4, we know the running time of list-append is Θ(n_p) where n_p is the number of elements in its first input. The first input is the result of applying list-map to a procedure and the List produced by (list-powerset (cdr s)). The length of the list output by list-map is the same as the length of its input, so we need to determine the length of (list-powerset (cdr s)).
We use n_s to represent the number of elements in s. The length of the input list to map is the number of elements in the power set of a size n_s − 1 set: 2^{n_s−1}. But, for each application, the value of n_s is different. Since we are trying to determine the total running time, we can do this by thinking about the total length of all the input lists to list-map over all of the list-powerset applications. If the input is a list of length n, the total list length is 2^{n−1} + 2^{n−2} + . . . + 2^1 + 2^0, which is equal to 2^n − 1. So,
7 The movie Sneakers is a fictional account of what would happen if someone finds a faster than exponential time factoring algorithm.
8 Observant readers will note that it is not really necessary to perform this evaluation twice since we could do it once and reuse the result. Even with this change, though, the running time would still be in Θ(2^n).


the running time for all the list-map applications is in Θ(2^n).
The analysis of the list-append applications is similar. The length of the first input to list-append is the length of the result of the list-powerset application, so the total length of all the inputs to list-append is 2^n.
Other than the applications of list-map and list-append, the rest of each list-powerset application requires constant time. So, the running time required for the 2^n applications is in Θ(2^n). The total running time for list-powerset is the sum of the running times for the list-powerset applications, in Θ(2^n); the list-map applications, in Θ(2^n); and the list-append applications, in Θ(2^n). Hence, the total running time is in Θ(2^n).
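The exponential output size is easy to confirm. Here is a direct transcription of list-powerset into Python for illustration (lists stand in for sets; the encoding is our own):

```python
def list_powerset(s):
    # Power set of the elements of s, as a list of lists. Mirrors
    # list-powerset: prepend the first element to every subset of the
    # rest, then append the subsets that omit the first element.
    if not s:
        return [[]]
    rest = list_powerset(s[1:])
    return [[s[0]] + t for t in rest] + rest
```

As expected, len(list_powerset([1, 2, 3])) is 8, and each additional input element doubles the size of the output.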
In this case, we know there can be no faster than exponential procedure that solves the same problem, since the size of the output is exponential in the size of the input. Since the most work a Turing Machine can do in one step is write one square, the size of the output provides a lower bound on the running time of the Turing Machine. The size of the power set is 2^n where n is the size of the input set. Hence, the fastest possible procedure for this problem has at least exponential running time.

7.4.5 Faster than Exponential Growth

We have already seen an example of a procedure that grows faster than exponentially in the size of the input: the fibo procedure at the beginning of this chapter! Evaluating an application of fibo involves Θ(φ^n) recursive applications where n is the magnitude of the input parameter. The size of a numeric input is the number of bits needed to express it, so the value n can be as high as 2^b − 1 where b is the number of bits. Hence, the running time of the fibo procedure is in Θ(φ^(2^b)) where b is the size of the input. This is why we are still waiting for (fibo 60) to finish evaluating.
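The Θ(φ^n) application count can be observed by instrumenting a naive Fibonacci, sketched here in Python for illustration (base cases and names are our own; the book's fibo is in Scheme):

```python
def fibo(n, counter):
    # Naive recursive Fibonacci; counter[0] tallies applications.
    counter[0] += 1
    if n <= 1:
        return n
    return fibo(n - 1, counter) + fibo(n - 2, counter)

def fibo_calls(n):
    counter = [0]
    fibo(n, counter)
    return counter[0]
```

The ratio fibo_calls(n + 1) / fibo_calls(n) approaches φ ≈ 1.618 as n grows, so each increase of n by one multiplies the work by roughly φ.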

7.4.6 Non-terminating Procedures

All of the procedures so far in this section are algorithms: they may be slow, but they are guaranteed to eventually finish if one can wait long enough. Some procedures never terminate. For example,

(define (run-forever) (run-forever))

defines a procedure that never finishes. Its body calls itself, never making any progress toward a base case. The running time of this procedure is effectively infinite since it never finishes.

7.5 Summary

Because the speed of computers varies and the exact time required for a particular application depends on many details, the most important property to understand is how the work required scales with the size of the input. The asymptotic operators provide a convenient way of understanding the cost of evaluating procedure applications.
Procedures that can produce their output by touching only a fixed amount of the input have constant running times. Procedures whose running times increase by a fixed amount


when the input size increases by one have linear (in Θ(n)) running times. Procedures whose running time quadruples when the input size doubles have quadratic (in Θ(n^2)) running times. Procedures whose running time doubles when the input size increases by one have exponential (in Θ(2^n)) running times. Procedures with exponential running time can only be evaluated for small inputs.
Asymptotic analysis, however, must be interpreted cautiously. For large enough inputs, a procedure with running time in Θ(n) is always faster than a procedure with running time in Θ(n^2). But, for an input of a particular size, the Θ(n^2) procedure may be faster. Without knowing the constants that are hidden by the asymptotic operators, there is no way to accurately predict the actual running time on a given input.
Exercise 7.18. Analyze the asymptotic running time of the list-sum procedure
(from Example 5.2):
(define (list-sum p)
  (if (null? p) 0 (+ (car p) (list-sum (cdr p)))))
You may assume all of the elements in the list have values below some constant
(but explain why this assumption is useful in your analysis).
Exercise 7.19. Analyze the asymptotic running time of the factorial procedure
(from Example 4.1):
(define (factorial n) (if (= n 0) 1 (* n (factorial (- n 1)))))
Be careful to describe the running time in terms of the size (not the magnitude) of the input.
Exercise 7.20. Consider the intsto problem (from Example 5.8).
a. [ ] Analyze the asymptotic running time of this intsto procedure:
(define (revintsto n)
  (if (= n 0) null (cons n (revintsto (- n 1)))))
(define (intsto n) (list-reverse (revintsto n)))
b. [ ] Analyze the asymptotic running time of this intsto procedure:
(define (intsto n)
  (if (= n 0) null (list-append (intsto (- n 1)) (list n))))
c. Which version is better?
d. [ ] Is there an asymptotically faster intsto procedure?


Exercise 7.21. Analyze the running time of the board-replace-peg procedure
(from Exploration 5.2):
(define (row-replace-peg pegs col val)
  (if (= col 1) (cons val (cdr pegs))
      (cons (car pegs) (row-replace-peg (cdr pegs) (- col 1) val))))
(define (board-replace-peg board row col val)
  (if (= row 1) (cons (row-replace-peg (car board) col val) (cdr board))
      (cons (car board) (board-replace-peg (cdr board) (- row 1) col val))))
Exercise 7.22. Analyze the running time of the deep-list-flatten procedure from
Section 5.5:
(define (deep-list-flatten p)
  (if (null? p) null
      (list-append (if (list? (car p))
                       (deep-list-flatten (car p))
                       (list (car p)))
                   (deep-list-flatten (cdr p)))))
Exercise 7.23. [ ] Find and correct at least one error in the Orders of Growth section of the Wikipedia page on Analysis of Algorithms (http://en.wikipedia.org/wiki/Analysis_of_algorithms). This is rated as [ ] now (July 2011), since the current entry contains many fairly obvious errors. Hopefully it will soon become a [ ] challenge, and perhaps, eventually will become impossible!


8

Sorting and Searching
If you keep proving stuff that others have done, getting confidence, increasing the complexities of your solutions—for the fun of it—then one day you’ll turn around and discover that nobody actually did that one!
And that’s the way to become a computer scientist.
Richard Feynman, Lectures on Computation

This chapter presents two extended examples that use the programming techniques from Chapters 2–5 and analysis ideas from Chapters 6–7 to solve some interesting and important problems: sorting and searching. These examples involve some quite challenging problems and incorporate many of the ideas we have seen up to this point in the book. Once you understand them, you are well on your way to thinking like a computer scientist!

8.1

Sorting

The sorting problem takes two inputs: a list of elements and a comparison procedure. It outputs a list containing the same elements as the input list, ordered according to the comparison procedure. For example, if we sort a list of numbers using < as the comparison procedure, the output is the list of numbers sorted in order from least to greatest.
Sorting is one of the most widely studied problems in computing, and many different sorting algorithms have been proposed. Try to develop a sorting procedure yourself before continuing further. It may be illuminating to try sorting some items by hand and think carefully about how you do it and how much work it is. For example, take a shuffled deck of cards and arrange them in sorted order by ranks. Or, try arranging all the students in your class in order by birthday.
Next, we present and analyze three different sorting procedures.

8.1.1

Best-First Sort

A simple sorting strategy is to find the best element in the list and put that at the front. The best element is an element for which the comparison procedure evaluates to true when applied to that element and every other element. For example, if the comparison function is <, the best element is the smallest element in the list.

The maximum number of nodes, TreeNodes(d), in a binary tree of depth d is given by:

TreeNodes(d) = 1, for d = 0
TreeNodes(d) = TreeNodes(d − 1) + 2 × TreeLeaves(d − 1), for d > 0

where TreeLeaves(d) is the maximum number of leaves in a tree of depth d.
A tree of depth zero has one node. Increasing the depth of a tree by one means we can add two nodes for each leaf node in the tree, so the total number of nodes in the new tree is the sum of the number of nodes in the original tree and twice the number of leaves in the original tree. The maximum number of leaves in a tree of depth d is 2^d since each level doubles the number of leaves. Hence, the second equation simplifies to

TreeNodes(d − 1) + 2 × 2^(d−1) = TreeNodes(d − 1) + 2^d.

The value of TreeNodes(d − 1) is 2^(d−1) + 2^(d−2) + . . . + 1 = 2^d − 1. Adding 2^d and 2^d − 1 gives 2^(d+1) − 1 as the maximum number of nodes in a tree of depth d.
Hence, a well-balanced tree containing n nodes has depth approximately log₂ n. For example, a well-balanced tree containing about one million nodes has depth approximately 20, since 2^20 is just over one million.
A tree is well-balanced if the left and right subtrees of all nodes in the tree contain nearly the same number of elements.
Procedures that are analogous to the list-first-half, list-second-half, and list-append procedures that had linear running times for the standard list representation can all be implemented with constant running times for the tree representation. For example, tree-left is analogous to list-first-half and make-tree is analogous to list-append.
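The tree constructor and selectors used below (make-tree, tree-element, tree-left, and tree-right) can be implemented in several ways; one minimal sketch, assuming a tree is represented as a three-element list (this particular representation is our assumption for illustration), is:

```scheme
;; A tree is either null (the empty tree) or a three-element list:
;; the left subtree, the element at the top node, and the right subtree.
(define (make-tree left element right)
  (list left element right))
(define (tree-left tree) (car tree))              ; the left subtree
(define (tree-element tree) (car (cdr tree)))     ; the top element
(define (tree-right tree) (car (cdr (cdr tree)))) ; the right subtree
```

Each of these applies cons, car, and cdr only a fixed number of times, so all of them have constant running time, which is what the analysis below relies on.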
The tree-insert-one procedure inserts an element in a sorted binary tree:
(define (tree-insert-one cf el tree)
  (if (null? tree)
      (make-tree null el null)
      (if (cf el (tree-element tree))
          (make-tree (tree-insert-one cf el (tree-left tree))
                     (tree-element tree)
                     (tree-right tree))
          (make-tree (tree-left tree)
                     (tree-element tree)
                     (tree-insert-one cf el (tree-right tree))))))
When the input tree is null, the new element is the top element of a new tree whose left and right subtrees are null. Otherwise, the procedure compares the element to insert with the element at the top node of the tree. If the comparison evaluates to true, the new element belongs in the left subtree. The result is a tree where the left tree is the result of inserting this element in the old left subtree, and the element and right subtree are the same as they were in the original tree.
For the alternate case, the element is inserted in the right subtree, and the left subtree is unchanged.
In addition to the recursive call, tree-insert-one only applies constant time procedures. If the tree is well-balanced, each recursive application halves the size of the input tree so there are approximately log2 n recursive calls. Hence, the running time to insert an element in a well-balanced tree using tree-insert-one is in Θ(log n).
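For example, a small check of the behavior just described (the particular numbers are arbitrary):

```scheme
(define t (tree-insert-one < 7 null))   ; a tree with 7 at the top node
(define t2 (tree-insert-one < 3 t))     ; 3 < 7, so 3 goes in the left subtree

(tree-element t2)               ; evaluates to 7
(tree-element (tree-left t2))   ; evaluates to 3
(tree-right t2)                 ; evaluates to null
```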


Using tree-insert-one, we define list-to-sorted-tree, a procedure that takes a comparison function and a list as its inputs, and outputs a sorted binary tree containing the elements in the input list. It inserts each element of the list in turn into the sorted tree:
(define (list-to-sorted-tree cf p)
  (if (null? p)
      null
      (tree-insert-one cf (car p) (list-to-sorted-tree cf (cdr p)))))
Assuming well-balanced trees as above (we revisit this assumption later), the expected running time of list-to-sorted-tree is in Θ(n log n) where n is the size of the input list. There are n recursive applications of list-to-sorted-tree since each application uses cdr to reduce the size of the input list by one. Each application involves an application of tree-insert-one (as well as only constant time procedures), so the expected running time of each application is in Θ(log n). Hence, the total running time for list-to-sorted-tree is in Θ(n log n).
To use our list-to-sorted-tree procedure to perform sorting we need to extract a list of the elements in the tree in the correct order. The leftmost element in the tree should be the first element in the list. Starting from the top node, all elements in its left subtree should appear before the top element, and all the elements in its right subtree should follow it. The tree-extract-elements procedure does this:
(define (tree-extract-elements tree)
  (if (null? tree)
      null
      (list-append (tree-extract-elements (tree-left tree))
                   (cons (tree-element tree)
                         (tree-extract-elements (tree-right tree))))))
The total number of applications of tree-extract-elements is between n (the number of elements in the tree) and 3n since there can be up to two null trees for each leaf element (it could never actually be 3n, but for our asymptotic analysis it is enough to know it is always less than some constant multiple of n). For each application, the body applies list-append where the first parameter is the elements extracted from the left subtree. The end result of all the list-append applications is the output list, containing the n elements in the input tree.
Hence, the total size of all the appended lists is at most n, and the running time for all the list-append applications is in Θ(n). Since this is the total time for all the list-append applications, not the time for each application of tree-extract-elements, the total running time for tree-extract-elements is the time for the recursive applications, in Θ(n), plus the time for the list-append applications, in Θ(n), which is in Θ(n).
Putting things together, we define list-sort-tree:
(define (list-sort-tree cf p)
  (tree-extract-elements (list-to-sorted-tree cf p)))
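For example, a quick sanity check (the particular numbers are arbitrary):

```scheme
(list-sort-tree < (list 5 1 12))   ; evaluates to (1 5 12)
(list-sort-tree > (list 5 1 12))   ; evaluates to (12 5 1)
```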
The total running time for list-sort-tree is the running time of the list-to-sortedtree application plus the running time of the tree-extract-elements application.
The running time of list-to-sorted-tree is in Θ(n log n) where n is the number of elements in the input list (in this case, the number of elements in p), and the running time of tree-extract-elements is in Θ(n) where n is the number of elements in its input list (which is the result of the list-to-sorted-tree application, a list containing n elements where n is the number of elements in p).
Only the fastest-growing term contributes to the total asymptotic running time, so the expected total running time for an application of list-sort-tree to a list containing n elements is in Θ(n log n). This is substantially better than the previous sorting algorithms, which had running times in Θ(n^2), since logarithms grow far slower than their input. For example, if n is one million, n^2 is over 50,000 times bigger than n log₂ n; if n is one billion, n^2 is over 33 million times bigger than n log₂ n since log₂ 1000000000 is just under 30.
There is no sorting procedure based only on comparing elements that has expected running time better than Θ(n log n), so no such algorithm is asymptotically faster than list-sort-tree (in fact, it can be proven that no asymptotically faster comparison-based sorting procedure exists). There are, however, sorting procedures that may have advantages, such as how they use memory, which may provide better absolute performance in some situations.
Unbalanced Trees. Our analysis assumes the left and right halves of the tree passed to tree-insert-one have approximately the same number of elements.
If the input list is in random order, this assumption is likely to be valid: each element we insert is equally likely to go into the left or right half, so the halves contain approximately the same number of elements all the way down the tree.
But, if the input list is not in random order this may not be the case.
For example, suppose the input list is already in sorted order. Then, each element that is inserted will be the rightmost node in the tree when it is inserted.
For the previous example, this produces the unbalanced tree shown in Figure 8.1.
This tree contains the same six elements as the earlier example, but because it is not well-balanced the number of branches that must be traversed to reach the deepest element is 5 instead of 2. Similarly, if the input list is in reverse sorted order, we will have an unbalanced tree where only the left branches are used.
In these pathological situations, the tree effectively becomes a list. The number of recursive applications of tree-insert-one needed to insert a new element will not be in Θ(log n), but rather will be in Θ(n). Hence, the worst case running time for list-sort-tree is in Θ(n^2) since the worst case time for tree-insert-one is in Θ(n) and there are Θ(n) applications of tree-insert-one. The list-sort-tree procedure has expected running time in Θ(n log n) for randomly distributed inputs, but has worst case running time in Θ(n^2).

Figure 8.1. Unbalanced trees. (The figure shows the tree produced by inserting the elements 1, 5, 6, 7, 12, and 17 in sorted order: each new element becomes the rightmost node, so only right branches are used.)
Exercise 8.7. Define a procedure binary-tree-size that takes as input a binary tree and outputs the number of elements in the tree. Analyze the running time of your procedure.
Exercise 8.8. [ ] Define a procedure binary-tree-depth that takes as input a binary tree and outputs the depth of the tree. The running time of your procedure should not grow faster than linearly with the number of nodes in the tree.
Exercise 8.9. [ ] Define a procedure binary-tree-balance that takes as input a sorted binary tree and the comparison function, and outputs a sorted binary tree containing the same elements as the input tree but in a well-balanced tree.
The depth of the output tree should be no higher than log₂ n + 1, where n is the number of elements in the input tree.
My first task was to implement a library subroutine for a new fast method of internal sorting just invented by Shell. . . My boss and tutor, Pat Shackleton, was very pleased with my completed program. I then said timidly that I thought I had invented a sorting method that would usually run faster than Shell sort, without taking much extra store. He bet me sixpence that I had not. Although my method was very difficult to explain, he finally agreed that I had won my bet.
Sir Tony Hoare, The Emperor's Old Clothes, 1980 Turing Award Lecture. (Shell sort is a Θ(n^2) sorting algorithm, somewhat similar to insertion sort.)

8.1.5

Quicksort

Although building and extracting elements from trees allows us to sort with expected time in Θ(n log n), the constant time required to build all those trees and extract the elements from the final tree is high.
In fact, we can use the same approach to sort without needing to build trees.
Instead, we keep the two sides of the tree as separate lists, and sort them recursively. The key is to divide the list into halves by value, instead of by position.
The values in the first half of the list are all less than the values in the second half of the list, so the lists can be sorted separately.
The list-quicksort procedure uses list-filter (from Example 5.5) to divide the input list into sublists containing elements below and above the comparison element, and then recursively sorts those sublists:
(define (list-quicksort cf p)
  (if (null? p)
      null
      (list-append
        (list-quicksort cf
          (list-filter (lambda (el) (cf el (car p))) (cdr p)))
        (cons (car p)
              (list-quicksort cf
                (list-filter (lambda (el) (not (cf el (car p)))) (cdr p)))))))
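list-quicksort relies on list-filter from Example 5.5, which keeps only the elements that satisfy a test procedure. A sketch consistent with how it is used here (the definition in Example 5.5 may differ in detail):

```scheme
;; Output a list of the elements of p for which (test element) is true.
(define (list-filter test p)
  (if (null? p)
      null
      (if (test (car p))
          (cons (car p) (list-filter test (cdr p)))
          (list-filter test (cdr p)))))

(list-quicksort < (list 5 1 12 6))   ; evaluates to (1 5 6 12)
```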
This is the famous quicksort algorithm that was invented by Sir C. A. R. (Tony)
Hoare while he was an exchange student at Moscow State University in 1959. He was there to study probability theory, but also got a job working on a project to translate Russian into English. The translation depended on looking up words in a dictionary. Since the dictionary was stored on a magnetic tape which could be read in order faster than if it was necessary to jump around, the translation


could be done more quickly if the words to translate were sorted alphabetically.
Hoare invented the quicksort algorithm for this purpose and it remains the most widely used sorting algorithm.
As with list-sort-tree, the expected running time for a randomly arranged list is in Θ(n log n) and the worst case running time is in Θ(n^2). In the expected cases, each recursive call halves the size of the input list (since if the list is randomly arranged we expect about half of the list elements to be below the value of the first element), so there are approximately log n levels of recursive calls.
Each call involves an application of list-filter, which has running time in Θ(m) where m is the length of the input list. At each call depth d, the total length of the inputs to all the calls to list-filter is n since the original list is subdivided into 2^d sublists, which together include all of the elements in the original list. Hence, the total running time is in Θ(n log n) in the expected cases where the input list is randomly arranged. As with list-sort-tree, if the input list is not randomly arranged it is possible that all elements end up in the same partition. Hence, the worst case running time of list-quicksort is still in Θ(n^2).
Exercise 8.10. Estimate the time it would take to sort a list of one million elements using list-quicksort.
Exercise 8.11. Both the list-quicksort and list-sort-tree procedures have expected running times in Θ(n log n). Experimentally compare their actual running times.
Exercise 8.12. What is the best case input for list-quicksort? Analyze the asymptotic running time for list-quicksort on best case inputs.
Exercise 8.13. [ ] Instead of using binary trees, we could use ternary trees.
A node in a ternary tree has two elements, a left element and a right element, where the left element must be before the right element according to the comparison function. Each node has three subtrees: left, containing elements before the left element; middle, containing elements between the left and right elements; and right, containing elements after the right element. Is it possible to sort faster using ternary trees?

8.2

Searching

In a broad sense, nearly all problems can be thought of as search problems. If we can define the space of possible solutions, we can search that space to find a correct solution. For example, to solve the pegboard puzzle (Exploration 5.2) we enumerate all possible sequences of moves and search that space to find a winning sequence. For most interesting problems, however, the search space is far too large to search through all possible solutions.
This section explores a few specific types of search problems. First, we consider the simple problem of finding an element in a list that satisfies some property.
Then, we consider searching for an item in sorted data. Finally, we consider the more specific problem of efficiently searching for documents (such as web

There are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature. Sir Tony Hoare, The
Emperor’s Old Clothes
(1980 Turing Award
Lecture)

168

8.2. Searching

pages) that contain some target word.

8.2.1

Unstructured Search

Finding an item that satisfies an arbitrary property in unstructured data requires testing each element in turn until one that satisfies the property is found. Since we have no more information about the property or data, there is no way to more quickly find a satisfying element.
The list-search procedure takes as input a matching function and a list, and outputs the first element in the list that satisfies the matching function or false if there is no satisfying element:1
(define (list-search ef p)
  (if (null? p)
      false ; not found
      (if (ef (car p)) (car p) (list-search ef (cdr p)))))
For example,
(list-search (lambda (el) (= 12 el)) (intsto 10)) ⇒ false
(list-search (lambda (el) (= 12 el)) (intsto 15)) ⇒ 12
(list-search (lambda (el) (> el 12)) (intsto 15)) ⇒ 13
Assuming the matching function has constant running time, the worst case running time of list-search is linear in the size of the input list. The worst case is when there is no satisfying element in the list. If the input list has length n, there are n recursive calls to list-search, each of which involves only constant time procedures. Without imposing more structure on the input and comparison function, there is no more efficient search procedure. In the worst case, we always need to test every element in the input list before concluding that there is no element that satisfies the matching function.

8.2.2

Binary Search

If the data to search is structured, it may be possible to find an element that satisfies some property without examining all elements. Suppose the input data is a sorted binary tree, as introduced in Section 8.1.4. Then, with a single comparison we can determine if the element we are searching for would be in the left or right subtree. Instead of eliminating just one element with each application of the matching function as was the case with list-search, with a sorted binary tree a single application of the comparison function is enough to exclude approximately half the elements.
The binary-tree-search procedure takes a sorted binary tree and two procedures as its inputs. The first procedure determines when a satisfying element has been found (we call this the ef procedure, suggesting equality). The second procedure, cf , determines whether to search the left or right subtree. Since cf is used to traverse the tree, the input tree must be sorted by cf .
1. If the input list contains false as an element, a false result from list-search is ambiguous: we cannot tell whether it means no element in the list satisfies the property, or an element whose value is false satisfies the property. An alternative would be to produce an error if no satisfying element is found, but this is more awkward when list-search is used by other procedures.


(define (binary-tree-search ef cf tree) ; requires: tree is sorted by cf
  (if (null? tree)
      false
      (if (ef (tree-element tree))
          (tree-element tree)
          (if (cf (tree-element tree))
              (binary-tree-search ef cf (tree-left tree))
              (binary-tree-search ef cf (tree-right tree))))))
For example, we can search for a number in a sorted binary tree using = as the equality function and < as the comparison function:
(define (binary-tree-number-search tree target)
  (binary-tree-search (lambda (el) (= target el))
                      (lambda (el) (< target el))
                      tree))

To analyze the running time of binary-tree-search, we need to determine the number of recursive calls. Like our analysis of list-sort-tree, we assume the input tree is well-balanced. If not, all the elements could be in the right branch, for example, and binary-tree-search becomes like list-search in the pathological case. If the tree is well-balanced, each recursive call approximately halves the number of elements in the input tree since it passes in either the left or the right subtree.
Hence, the number of calls needed to reach a null tree is in Θ(log n) where n is the number of elements in the input tree. This is the depth of the tree: binarytree-search traverses one path from the root through the tree until either reaching an element that satisfies the ef function, or reaching a null node.
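Continuing the example, binary-tree-number-search can be checked using list-to-sorted-tree from Section 8.1.4 to build the tree (the particular numbers are arbitrary):

```scheme
(define tree (list-to-sorted-tree < (list 7 1 12 5 17 6)))

(binary-tree-number-search tree 12)  ; evaluates to 12
(binary-tree-number-search tree 4)   ; evaluates to false
```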
Assuming the procedures passed as ef and cf have constant running time, the work for each call is constant except for the recursive call. Hence, the total running time for binary-tree-search is in Θ(log n) where n is the number of elements in the input tree. This is a huge improvement over linear searching: with linear search, doubling the number of elements in the input doubles the search time; with binary search, doubling the input size only increases the search time by a constant.

8.2.3

Indexed Search

The limitation of binary search is that we can only use it when the input data is already sorted. What if we want to search a collection of documents, such as finding all web pages that contain a given word? The web visible to search engines contains billions of web pages, most of which contain hundreds or thousands of words. A linear search over such a vast corpus would be infeasible: supposing each word can be tested in 1 millisecond, the time to search 1 trillion words would be over 30 years!
Providing useful searches over large data sets like web documents requires finding a way to structure the data so it is not necessary to examine all documents to perform a search. One way to do this is to build an index that provides a mapping from words to the documents that contain them. Then, we can build the index once, store it in a sorted binary tree, and use it to perform all the searches.
Once the index is built, the work required to perform one search is just the time it takes to look up the target word in the index. If the index is stored as a sorted binary tree, this is logarithmic in the number of distinct words.
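To make the idea concrete, here is one way an index entry and the lookup might be sketched. The entry representation and the names index-search, make-index-entry, entry-word, and entry-documents are our illustrative assumptions, not definitions from the text; the lookup reuses binary-tree-search with the standard string comparison procedures.

```scheme
;; An index entry pairs a word with the list of documents containing it.
(define (make-index-entry word documents) (cons word documents))
(define (entry-word entry) (car entry))
(define (entry-documents entry) (cdr entry))

;; Look up word in an index stored as a binary tree sorted by entry word.
;; Outputs the matching entry, or false if the word is not in the index.
(define (index-search index word)
  (binary-tree-search
   (lambda (entry) (string=? word (entry-word entry)))
   (lambda (entry) (string<? word (entry-word entry)))
   index))
```

Since the tree is sorted by word, each comparison eliminates about half of the remaining entries, so a lookup takes time logarithmic in the number of distinct words, as claimed above.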


Strings. We use the built-in String datatype to represent documents and target words. A String is similar to a List, but specialized for representing sequences of characters. A convenient way to make a String is to just use double quotes around a sequence of characters. For example, "abcd" evaluates to a String containing four characters.
The String datatype provides procedures for matching, ordering, and converting between Strings and Lists of characters:

string=? : String × String → Boolean
Outputs true if the input Strings have exactly the same sequence of characters, otherwise false.
string

Similar Documents