Discrete Math for Computer Science Students

Ken Bogart Dept. of Mathematics Dartmouth College

Scot Drysdale Dept. of Computer Science Dartmouth College

Cliff Stein Dept. of Industrial Engineering and Operations Research Columbia University


© Kenneth P. Bogart, Scot Drysdale, and Cliff Stein, 2004

Contents

1 Counting
  1.1 Basic Counting
      The Sum Principle
      Abstraction
      Summing Consecutive Integers
      The Product Principle
      Two element subsets
      Important Concepts, Formulas, and Theorems
      Problems
  1.2 Counting Lists, Permutations, and Subsets
      Using the Sum and Product Principles
      Lists and functions
      The Bijection Principle
      k-element permutations of a set
      Counting subsets of a set
      Important Concepts, Formulas, and Theorems
      Problems
  1.3 Binomial Coefficients
      Pascal's Triangle
      A proof using the Sum Principle
      The Binomial Theorem
      Labeling and trinomial coefficients
      Important Concepts, Formulas, and Theorems
      Problems
  1.4 Equivalence Relations and Counting (Optional)
      The Symmetry Principle
      Equivalence Relations
      The Quotient Principle
      Equivalence class counting
      Multisets
      The bookcase arrangement problem
      The number of k-element multisets of an n-element set
      Using the quotient principle to explain a quotient
      Important Concepts, Formulas, and Theorems
      Problems

2 Cryptography and Number Theory
  2.1 Cryptography and Modular Arithmetic
      Introduction to Cryptography
      Private Key Cryptography
      Public-key Cryptosystems
      Arithmetic modulo n
      Cryptography using multiplication mod n
      Important Concepts, Formulas, and Theorems
      Problems
  2.2 Inverses and GCDs
      Solutions to Equations and Inverses mod n
      Inverses mod n
      Converting Modular Equations to Normal Equations
      Greatest Common Divisors (GCD)
      Euclid's Division Theorem
      The GCD Algorithm
      Extended GCD algorithm
      Computing Inverses
      Important Concepts, Formulas, and Theorems
      Problems
  2.3 The RSA Cryptosystem
      Exponentiation mod n
      The Rules of Exponents
      Fermat's Little Theorem
      The RSA Cryptosystem
      The Chinese Remainder Theorem
      Important Concepts, Formulas, and Theorems
      Problems
  2.4 Details of the RSA Cryptosystem
      Practical Aspects of Exponentiation mod n
      How long does it take to use the RSA Algorithm?
      How hard is factoring?
      Finding large primes
      Important Concepts, Formulas, and Theorems
      Problems

3 Reflections on Logic and Proof
  3.1 Equivalence and Implication
      Equivalence of statements
      Truth tables
      DeMorgan's Laws
      Implication
      Important Concepts, Formulas, and Theorems
      Problems
  3.2 Variables and Quantifiers
      Variables and universes
      Quantifiers
      Standard notation for quantification
      Statements about variables
      Rewriting statements to encompass larger universes
      Proving quantified statements true or false
      Negation of quantified statements
      Implicit quantification
      Proof of quantified statements
      Important Concepts, Formulas, and Theorems
      Problems
  3.3 Inference
      Direct Inference (Modus Ponens) and Proofs
      Rules of inference for direct proofs
      Contrapositive rule of inference
      Proof by contradiction
      Important Concepts, Formulas, and Theorems
      Problems

4 Induction, Recursion, and Recurrences
  4.1 Mathematical Induction
      Smallest Counter-Examples
      The Principle of Mathematical Induction
      Strong Induction
      Induction in general
      Important Concepts, Formulas, and Theorems
      Problems
  4.2 Recursion, Recurrences and Induction
      Recursion
      First order linear recurrences
      Iterating a recurrence
      Geometric series
      First order linear recurrences
      Important Concepts, Formulas, and Theorems
      Problems
  4.3 Growth Rates of Solutions to Recurrences
      Divide and Conquer Algorithms
      Recursion Trees
      Three Different Behaviors
      Important Concepts, Formulas, and Theorems
      Problems
  4.4 The Master Theorem
      Master Theorem
      Solving More General Kinds of Recurrences
      More realistic recurrences (Optional)
      Recurrences for general n (Optional)
      Appendix: Proofs of Theorems (Optional)
      Important Concepts, Formulas, and Theorems
      Problems
  4.5 More general kinds of recurrences
      Recurrence Inequalities
      A Wrinkle with Induction
      Further Wrinkles in Induction Proofs
      Dealing with Functions Other Than n^c
      Important Concepts, Formulas, and Theorems
      Problems
  4.6 Recurrences and Selection
      The idea of selection
      A recursive selection algorithm
      Selection without knowing the median in advance
      An algorithm to find an element in the middle half
      An analysis of the revised selection algorithm
      Uneven Divisions
      Important Concepts, Formulas, and Theorems
      Problems

5 Probability
  5.1 Introduction to Probability
      Why do we study probability?
      Some examples of probability computations
      Complementary probabilities
      Probability and hashing
      The Uniform Probability Distribution
      Important Concepts, Formulas, and Theorems
      Problems
  5.2 Unions and Intersections
      The probability of a union of events
      Principle of inclusion and exclusion for probability
      The principle of inclusion and exclusion for counting
      Important Concepts, Formulas, and Theorems
      Problems
  5.3 Conditional Probability and Independence
      Conditional Probability
      Independence
      Independent Trials Processes
      Tree diagrams
      Important Concepts, Formulas, and Theorems
      Problems
  5.4 Random Variables
      What are Random Variables?
      Binomial Probabilities
      Expected Value
      Expected Values of Sums and Numerical Multiples
      The Number of Trials until the First Success
      Important Concepts, Formulas, and Theorems
      Problems
  5.5 Probability Calculations in Hashing
      Expected Number of Items per Location
      Expected Number of Empty Locations
      Expected Number of Collisions
      Expected maximum number of elements in a slot of a hash table (Optional)
      Important Concepts, Formulas, and Theorems
      Problems
  5.6 Conditional Expectations, Recurrences and Algorithms
      When Running Times Depend on more than Size of Inputs
      Conditional Expected Values
      Randomized algorithms
      A more exact analysis of RandomSelect
      Important Concepts, Formulas, and Theorems
      Problems
  5.7 Probability Distributions and Variance
      Distributions of random variables
      Variance
      Important Concepts, Formulas, and Theorems
      Problems

6 Graphs
  6.1 Graphs
      The degree of a vertex
      Connectivity
      Cycles
      Trees
      Other Properties of Trees
      Important Concepts, Formulas, and Theorems
      Problems
  6.2 Spanning Trees and Rooted Trees
      Spanning Trees
      Breadth First Search
      Rooted Trees
      Important Concepts, Formulas, and Theorems
      Problems
  6.3 Eulerian and Hamiltonian Paths and Tours
      Eulerian Tours and Trails
      Hamiltonian Paths and Cycles
      NP-Complete Problems
      Important Concepts, Formulas, and Theorems
      Problems
  6.4 Matching Theory
      The idea of a matching
      Making matchings bigger
      Matching in Bipartite Graphs
      Searching for Augmenting Paths in Bipartite Graphs
      The Augmentation-Cover algorithm
      Good Algorithms
      Important Concepts, Formulas, and Theorems
      Problems
  6.5 Coloring and planarity
      The idea of coloring
      Interval Graphs
      Planarity
      The Faces of a Planar Drawing
      The Five Color Theorem
      Important Concepts, Formulas, and Theorems
      Problems

Chapter 1

Counting
1.1 Basic Counting

The Sum Principle
We begin with an example that illustrates a fundamental principle.

Exercise 1.1-1 The loop below is part of an implementation of selection sort, which sorts a list of items chosen from an ordered set (numbers, alphabet characters, words, etc.) into non-decreasing order.

(1)  for i = 1 to n − 1
(2)     for j = i + 1 to n
(3)        if (A[i] > A[j])
(4)           exchange A[i] and A[j]

How many times is the comparison A[i] > A[j] made in Line 3?

In Exercise 1.1-1, the segment of code from lines 2 through 4 is executed n − 1 times, once for each value of i between 1 and n − 1 inclusive. The first time, it makes n − 1 comparisons. The second time, it makes n − 2 comparisons. The ith time, it makes n − i comparisons. Thus the total number of comparisons is

   (n − 1) + (n − 2) + · · · + 1.                                        (1.1)

This formula is not as important as the reasoning that led us to it. In order to put the reasoning into a broadly applicable format, we will describe what we were doing in the language of sets. Think about the set S containing all comparisons the algorithm in Exercise 1.1-1 makes. We divided set S into n − 1 pieces (i.e. smaller sets), the set S1 of comparisons made when i = 1, the set S2 of comparisons made when i = 2, and so on through the set Sn−1 of comparisons made when i = n − 1. We were able to figure out the number of comparisons in each of these pieces by observation, and added together the sizes of all the pieces in order to get the size of the set of all comparisons.
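The count above is easy to confirm by instrumenting the loop. Below is a direct Python transcription of the pseudocode in Exercise 1.1-1 (Python and the function name are our own choices for illustration, not part of the text), with a counter incremented each time the comparison in Line 3 runs.

```python
def count_comparisons(A):
    """Run the loop from Exercise 1.1-1 on a copy of A and count
    how many times the comparison A[i] > A[j] in Line 3 executes."""
    A = list(A)
    n = len(A)
    count = 0
    for i in range(1, n):              # i = 1 to n - 1 (1-based, as in the text)
        for j in range(i + 1, n + 1):  # j = i + 1 to n
            count += 1                 # one execution of the Line 3 comparison
            if A[i - 1] > A[j - 1]:    # shift 1-based indices to Python's 0-based
                A[i - 1], A[j - 1] = A[j - 1], A[i - 1]
    return count

# For n = 6, the loop makes 5 + 4 + 3 + 2 + 1 = 15 comparisons.
print(count_comparisons([6, 2, 4, 1, 5, 3]))  # 15
```

Note that the count depends only on n and never on the contents of A, which is why it can be found by a pure counting argument.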


In order to describe a general version of the process we used, we introduce some set-theoretic terminology. Two sets are called disjoint when they have no elements in common. Each of the sets Si we described above is disjoint from each of the others, because the comparisons we make for one value of i are different from those we make with another value of i. We say the set of sets {S1, . . . , Sm} (above, m was n − 1) is a family of mutually disjoint sets, meaning that it is a family (set) of sets, any two of which are disjoint. With this language, we can state a general principle that explains what we were doing without making any specific reference to the problem we were solving.

Principle 1.1 (Sum Principle) The size of a union of a family of mutually disjoint finite sets is the sum of the sizes of the sets.

Thus we were, in effect, using the sum principle to solve Exercise 1.1-1. We can describe the sum principle using an algebraic notation. Let |S| denote the size of the set S. For example, |{a, b, c}| = 3 and |{a, b, a}| = 2.[1] Using this notation, we can state the sum principle as: if S1, S2, . . . , Sm are disjoint sets, then

   |S1 ∪ S2 ∪ · · · ∪ Sm| = |S1| + |S2| + · · · + |Sm|.                  (1.2)

To write this without the "dots" that indicate left-out material, we write

   |⋃_{i=1}^{m} Si| = Σ_{i=1}^{m} |Si|.

When we can write a set S as a union of disjoint sets S1 , S2 , . . . , Sk we say that we have partitioned S into the sets S1 , S2 , . . . , Sk , and we say that the sets S1 , S2 , . . . , Sk form a partition of S. Thus {{1}, {3, 5}, {2, 4}} is a partition of the set {1, 2, 3, 4, 5} and the set {1, 2, 3, 4, 5} can be partitioned into the sets {1}, {3, 5}, {2, 4}. It is clumsy to say we are partitioning a set into sets, so instead we call the sets Si into which we partition a set S the blocks of the partition. Thus the sets {1}, {3, 5}, {2, 4} are the blocks of a partition of {1, 2, 3, 4, 5}. In this language, we can restate the sum principle as follows. Principle 1.2 (Sum Principle) If a finite set S has been partitioned into blocks, then the size of S is the sum of the sizes of the blocks.
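The sum principle can be checked mechanically on the partition example above. The short Python sketch below (our own illustration, not part of the text) verifies that the blocks are mutually disjoint and that the size of their union equals the sum of their sizes.

```python
# The partition of {1, 2, 3, 4, 5} used in the text.
blocks = [{1}, {3, 5}, {2, 4}]

# Any two distinct blocks are disjoint.
assert all(b1.isdisjoint(b2)
           for i, b1 in enumerate(blocks)
           for b2 in blocks[i + 1:])

# Sum principle: |S1 ∪ S2 ∪ S3| = |S1| + |S2| + |S3|.
union = set().union(*blocks)
assert len(union) == sum(len(b) for b in blocks)
print(sorted(union), len(union))  # [1, 2, 3, 4, 5] 5
```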

Abstraction
The process of figuring out a general principle that explains why a certain computation makes sense is an example of the mathematical process of abstraction. We won't try to give a precise definition of abstraction but rather point out examples of the process as we proceed. In a course in set theory, we would further abstract our work and derive the sum principle from the axioms of set theory. In a course in discrete mathematics, this level of abstraction is unnecessary, so we will simply use the sum principle as the basis of computations when it is convenient to do so. If our goal were only to solve this one exercise, then our abstraction would have been almost a mindless exercise that complicated what was an "obvious" solution to Exercise 1.1-1. However, the sum principle will prove to be useful in a wide variety of problems. Thus we observe the value of abstraction: when you can recognize the abstract elements of a problem, then abstraction often helps you solve subsequent problems as well.

[1] It may look strange to have |{a, b, a}| = 2, but an element either is or is not in a set. It cannot be in a set multiple times. (This situation leads to the idea of multisets that will be introduced later on in this section.) We gave this example to emphasize that the notation {a, b, a} means the same thing as {a, b}. Why would someone even contemplate the notation {a, b, a}? Suppose we wrote S = {x | x is the first letter of Ann, Bob, or Alice}. Explicitly following this description of S would lead us to first write down {a, b, a} and then realize that it equals {a, b}.

Summing Consecutive Integers
Returning to the problem in Exercise 1.1-1, it would be nice to find a simpler form for the sum given in Equation 1.1. We may also write this sum as

   Σ_{i=1}^{n−1} (n − i).

Now, if we don't like to deal with summing the values of (n − i), we can observe that the values we are summing are n − 1, n − 2, . . . , 1, so we may write that

   Σ_{i=1}^{n−1} (n − i) = Σ_{i=1}^{n−1} i.

A clever trick, usually attributed to Gauss, gives us a shorter formula for this sum. We write the sum, write it again in reverse order below, and add column by column:

       1   +    2    + · · · +  n − 2  +  n − 1
     n − 1 +  n − 2  + · · · +    2    +    1
   ─────────────────────────────────────────────
       n   +    n    + · · · +    n    +    n

The sum below the horizontal line has n − 1 terms, each equal to n, and thus it is n(n − 1). It is the sum of the two sums above the line, and since these sums are equal (being identical except for being in reverse order), the sum below the line must be twice either sum above, so either of the sums above must be n(n − 1)/2. In other words, we may write

   Σ_{i=1}^{n−1} (n − i) = Σ_{i=1}^{n−1} i = n(n − 1)/2.

Lovely as it is, this trick by itself gives us little real mathematical skill; learning how to think about problems so that we can discover answers ourselves is much more useful. After we analyze Exercise 1.1-2 and abstract the process we are using there, we will be able to come back to this problem at the end of this section and see a way that we could have discovered this formula for ourselves without any tricks.
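The closed form and the pairing idea behind it are easy to check numerically. The Python sketch below (our own illustration, not part of the text) verifies the formula for many values of n and makes the column-by-column pairing concrete for n = 5.

```python
# Check 1 + 2 + ... + (n - 1) = n(n - 1)/2 for many values of n.
for n in range(1, 100):
    assert sum(range(1, n)) == n * (n - 1) // 2

# Gauss's pairing for n = 5: the forward and reversed sums line up
# into n - 1 columns, each column totaling n.
n = 5
forward = list(range(1, n))           # [1, 2, 3, 4]
backward = list(range(n - 1, 0, -1))  # [4, 3, 2, 1]
columns = [a + b for a, b in zip(forward, backward)]
print(columns)  # [5, 5, 5, 5], i.e. (n - 1) copies of n, totaling n(n - 1)
```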

The Product Principle
Exercise 1.1-2 The loop below is part of a program which computes the product of two matrices. (You don’t need to know what the product of two matrices is to answer this question.)

(1)  for i = 1 to r
(2)     for j = 1 to m
(3)        S = 0
(4)        for k = 1 to n
(5)           S = S + A[i, k] ∗ B[k, j]
(6)        C[i, j] = S

How many multiplications (expressed in terms of r, m, and n) does this code carry out in line 5?

Exercise 1.1-3 Consider the following longer piece of pseudocode that sorts a list of numbers and then counts "big gaps" in the list (for this problem, a big gap in the list is a place where a number in the list is more than twice the previous number):

(1)  for i = 1 to n − 1
(2)     minval = A[i]
(3)     minindex = i
(4)     for j = i to n
(5)        if (A[j] < minval)
(6)           minval = A[j]
(7)           minindex = j
(8)     exchange A[i] and A[minindex]
(9)
(10) for i = 2 to n
(11)    if (A[i] > 2 ∗ A[i − 1])
(12)       bigjump = bigjump + 1

How many comparisons does the above code make in lines 5 and 11?

In Exercise 1.1-2, the program segment in lines 4 through 5, which we call the "inner loop," takes exactly n steps, and thus makes n multiplications, regardless of what the variables i and j are. The program segment in lines 2 through 5 repeats the inner loop exactly m times, regardless of what i is. Thus this program segment makes n multiplications m times, so it makes nm multiplications.

Why did we add in Exercise 1.1-1, but multiply here? We can answer this question using the abstract point of view we adopted in discussing Exercise 1.1-1. Our algorithm performs a certain set of multiplications. For any given i, the set of multiplications performed in lines 2 through 5 can be divided into the set S1 of multiplications performed when j = 1, the set S2 of multiplications performed when j = 2, and, in general, the set Sj of multiplications performed for any given j value. Each set Sj consists of those multiplications the inner loop carries out for a particular value of j, and there are exactly n multiplications in this set. Let Ti be the set of multiplications that our program segment carries out for a certain i value. The set Ti is the union of the sets Sj; restating this as an equation, we get

   Ti = ⋃_{j=1}^{m} Sj.


Then, by the sum principle, the size of the set Ti is the sum of the sizes of the sets Sj, and a sum of m numbers, each equal to n, is mn. Stated as an equation,

   |Ti| = |⋃_{j=1}^{m} Sj| = Σ_{j=1}^{m} |Sj| = Σ_{j=1}^{m} n = mn.      (1.3)

Thus we are multiplying because multiplication is repeated addition! From our solution we can extract a second principle that simply shortcuts the use of the sum principle.

Principle 1.3 (Product Principle) The size of a union of m disjoint sets, each of size n, is mn.

We now complete our discussion of Exercise 1.1-2. Lines 2 through 5 are executed once for each value of i from 1 to r. Each time those lines are executed, they are executed with a different i value, so the set of multiplications in one execution is disjoint from the set of multiplications in any other execution. Thus the set of all multiplications our program carries out is a union of r disjoint sets Ti of mn multiplications each. Then, by the product principle, the set of all multiplications has size rmn, so our program carries out rmn multiplications.

Exercise 1.1-3 demonstrates that thinking about whether the sum or product principle is appropriate for a problem can help to decompose the problem into easily-solvable pieces. If you can decompose the problem into smaller pieces and solve the smaller pieces, then you either add or multiply solutions to solve the larger problem. In this exercise, it is clear that the number of comparisons in the program fragment is the sum of the number of comparisons in the first loop in lines 1 through 8 and the number of comparisons in the second loop in lines 10 through 12 (what two disjoint sets are we talking about here?). Further, the first loop makes n(n + 1)/2 − 1 comparisons,[2] and the second loop makes n − 1 comparisons, so the fragment makes n(n + 1)/2 − 1 + n − 1 = n(n + 1)/2 + n − 2 comparisons.
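The product-principle count for Exercise 1.1-2 can likewise be confirmed by instrumentation. The Python sketch below (our own illustration; it only counts loop passes, since the matrix entries are irrelevant to the count) tallies the executions of the multiplication in line 5.

```python
def count_multiplications(r, m, n):
    """Count executions of the multiplication in line 5 of Exercise 1.1-2."""
    count = 0
    for i in range(r):          # line 1: i = 1 to r
        for j in range(m):      # line 2: j = 1 to m
            for k in range(n):  # line 4: k = 1 to n
                count += 1      # line 5 performs exactly one multiplication
    return count

# r disjoint sets T_i, each a union of m disjoint sets S_j of size n: rmn.
print(count_multiplications(2, 3, 4))  # 24
```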

Two element subsets
Often, there are several ways to solve a problem. We originally solved Exercise 1.1-1 by using the sum principle, but it is also possible to solve it using the product principle. Solving a problem two ways not only increases our confidence that we have found the correct solution, but it also allows us to make new connections and can yield valuable insight.

Consider the set of comparisons made by the entire execution of the code in this exercise. When i = 1, j takes on every value from 2 to n. When i = 2, j takes on every value from 3 to n. Thus, for each two numbers i and j, we compare A[i] and A[j] exactly once in our loop. (The order in which we compare them depends on whether i or j is smaller.) Thus the number of comparisons we make is the same as the number of two element subsets of the set {1, 2, . . . , n}.[3] In how many ways can we choose two elements from this set? If we choose a first and second element, there are n ways to choose a first element, and for each choice of the first element, there are n − 1 ways to choose a second element. Thus the set of all such choices is the union of n sets
²To see why this is true, ask yourself first where the n(n + 1)/2 comes from, and then why we subtracted one.
³The relationship between the set of comparisons and the set of two-element subsets of {1, 2, . . . , n} is an example of a bijection, an idea which will be examined more in Section 1.2.


CHAPTER 1. COUNTING

of size n − 1, one set for each first element. Thus it might appear that, by the product principle, there are n(n − 1) ways to choose two elements from our set. However, what we have chosen is an ordered pair, namely a pair of elements in which one comes first and the other comes second. For example, we could choose 2 first and 5 second to get the ordered pair (2, 5), or we could choose 5 first and 2 second to get the ordered pair (5, 2). Since each pair of distinct elements of {1, 2, . . . , n} can be ordered in two ways, we get twice as many ordered pairs as two-element sets. Thus, since the number of ordered pairs is n(n − 1), the number of two-element subsets of {1, 2, . . . , n} is n(n − 1)/2. Therefore the answer to Exercise 1.1-1 is n(n − 1)/2. This number comes up so often that it has its own name and notation. We call this number "n choose 2" and denote it by $\binom{n}{2}$. To summarize, $\binom{n}{2}$ stands for the number of two-element subsets of an n-element set and equals n(n − 1)/2. Since one answer to Exercise 1.1-1 is 1 + 2 + · · · + (n − 1) and a second answer to Exercise 1.1-1 is $\binom{n}{2}$, this shows that
$$1 + 2 + \cdots + (n - 1) = \binom{n}{2} = \frac{n(n-1)}{2}.$$
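As a quick sanity check of both answers to Exercise 1.1-1, the Python sketch below counts the comparisons made by a double loop of the kind described above (the function name `comparisons` is ours, not the book's) and checks the count against both 1 + 2 + · · · + (n − 1) and n(n − 1)/2.

```python
from itertools import combinations

def comparisons(n):
    """Count the A[i]-to-A[j] comparisons made by the double loop of
    Exercise 1.1-1: one comparison for each pair i < j."""
    count = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            count += 1
    return count

n = 7
assert comparisons(n) == n * (n - 1) // 2            # n choose 2
assert comparisons(n) == sum(range(1, n))            # 1 + 2 + ... + (n - 1)
# and it equals the number of two-element subsets of {1, ..., n}:
assert comparisons(n) == len(list(combinations(range(1, n + 1), 2)))
```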

Important Concepts, Formulas, and Theorems
1. Set. A set is a collection of objects. In a set order is not important. Thus the set {A, B, C} is the same as the set {A, C, B}. An element either is or is not in a set; it cannot be in a set more than once, even if we have a description of a set which names that element more than once.

2. Disjoint. Two sets are called disjoint when they have no elements in common.

3. Mutually disjoint sets. A set of sets {S1 , . . . , Sn } is a family of mutually disjoint sets if each two of the sets Si are disjoint.

4. Size of a set. Given a set S, the size of S, denoted |S|, is the number of distinct elements in S.

5. Sum Principle. The size of a union of a family of mutually disjoint sets is the sum of the sizes of the sets. In other words, if S1 , S2 , . . . , Sn are disjoint sets, then |S1 ∪ S2 ∪ · · · ∪ Sn | = |S1 | + |S2 | + · · · + |Sn |. To write this without the "dots" that indicate left-out material, we write
$$\left|\bigcup_{i=1}^{n} S_i\right| = \sum_{i=1}^{n} |S_i|.$$

6. Partition of a set. A partition of a set S is a set of mutually disjoint subsets (sometimes called blocks) of S whose union is S.

7. Sum of first n − 1 numbers.
$$\sum_{i=1}^{n} (n - i) = \sum_{i=1}^{n-1} i = \frac{n(n-1)}{2}.$$

8. Product Principle. The size of a union of m disjoint sets, each of size n, is mn.

9. Two element subsets. $\binom{n}{2}$ stands for the number of two-element subsets of an n-element set and equals n(n − 1)/2. $\binom{n}{2}$ is read as "n choose 2."

1.1. BASIC COUNTING


Problems
1. The segment of code below is part of a program that uses insertion sort to sort a list A.

   for i = 2 to n
       j = i
       while j ≥ 2 and A[j] < A[j − 1]
           exchange A[j] and A[j − 1]
           j−−

   What is the maximum number of times (considering all lists of n items you could be asked to sort) the program makes the comparison A[j] < A[j − 1]? Describe as succinctly as you can those lists that require this number of comparisons.

2. Five schools are going to send their baseball teams to a tournament, in which each team must play each other team exactly once. How many games are required?

3. Use notation similar to that in Equations 1.2 and 1.3 to rewrite the solution to Exercise 1.1-3 more algebraically.

4. In how many ways can you draw a first card and then a second card from a deck of 52 cards?

5. In how many ways can you draw two cards from a deck of 52 cards?

6. In how many ways may you draw a first, second, and third card from a deck of 52 cards?

7. In how many ways may a ten-person club select a president and a secretary-treasurer from among its members?

8. In how many ways may a ten-person club select a two-person executive committee from among its members?

9. In how many ways may a ten-person club select a president and a two-person executive advisory board from among its members (assuming that the president is not on the advisory board)?

10. By using the formula for $\binom{n}{2}$, it is straightforward to show that
$$n\binom{n-1}{2} = \binom{n}{2}(n - 2).$$
However, this proof just uses blind substitution and simplification. Find a more conceptual explanation of why this formula is true.

11. If M is an m-element set and N is an n-element set, how many ordered pairs are there whose first member is in M and whose second member is in N ?

12. In the local ice cream shop, there are 10 different flavors. How many different two-scoop cones are there? (Following your mother's rule that it all goes to the same stomach, a cone with a vanilla scoop on top of a chocolate scoop is considered the same as a cone with a chocolate scoop on top of a vanilla scoop.)


13. Now suppose that you decide to disagree with your mother in Problem 12 and say that the order of the scoops does matter. How many different possible two-scoop cones are there?

14. Suppose that on day 1 you receive 1 penny, and, for i > 1, on day i you receive twice as many pennies as you did on day i − 1. How many pennies will you have on day 20? How many will you have on day n? Did you use the sum or product principle?

15. The "Pile High Deli" offers a "simple sandwich" consisting of your choice of one of five different kinds of bread with your choice of butter or mayonnaise or no spread, one of three different kinds of meat, and one of three different kinds of cheese, with the meat and cheese "piled high" on the bread. In how many ways may you choose a simple sandwich?

16. Do you see any unnecessary steps in the pseudocode of Exercise 1.1-3?


1.2 Counting Lists, Permutations, and Subsets.

Using the Sum and Product Principles
Exercise 1.2-1 A password for a certain computer system is supposed to be between 4 and 8 characters long and composed of lower- and/or upper-case letters. How many passwords are possible? What counting principles did you use? Estimate the percentage of the possible passwords that have exactly four characters.

A good way to attack a counting problem is to ask if we could use either the sum principle or the product principle to simplify or completely solve it. Here that question might lead us to think about the fact that a password can have 4, 5, 6, 7, or 8 characters. The set of all passwords is the union of those with 4, 5, 6, 7, and 8 letters, so the sum principle might help us. To write the problem algebraically, let $P_i$ be the set of i-letter passwords and P be the set of all possible passwords. Clearly, $P = P_4 \cup P_5 \cup P_6 \cup P_7 \cup P_8$. The $P_i$ are mutually disjoint, and thus we can apply the sum principle to obtain
$$|P| = \sum_{i=4}^{8} |P_i|.$$

We still need to compute $|P_i|$. For an i-letter password, there are 52 choices for the first letter, 52 choices for the second, and so on. Thus by the product principle, $|P_i|$, the number of passwords with i letters, is $52^i$. Therefore the total number of passwords is
$$52^4 + 52^5 + 52^6 + 52^7 + 52^8.$$
Of these, $52^4$ have four letters, so the percentage with four letters is
$$100 \cdot \frac{52^4}{52^4 + 52^5 + 52^6 + 52^7 + 52^8}.$$
Although this is a nasty formula to evaluate by hand, we can get a quite good estimate as follows. Notice that $52^8$ is 52 times as big as $52^7$, and even more dramatically larger than any other term in the sum in the denominator. Thus the ratio is just a bit less than
$$100 \cdot \frac{52^4}{52^8},$$
which is $100/52^4$, or approximately .000014. Thus to five decimal places, only .00001% of the passwords have four letters. It is therefore much easier to guess a password that we know has four letters than it is to guess one that has between 4 and 8 letters: roughly 7 million times easier!

In our solution to Exercise 1.2-1, we casually referred to the use of the product principle in computing the number of passwords with i letters. We didn't write any set as a union of sets of equal size. We could have, but it would have been clumsy and repetitive. For this reason we will state a second version of the product principle that we can derive from the version for unions of sets by using the idea of mathematical induction that we study in Chapter 4. Version 2 of the product principle states:


Principle 1.4 (Product Principle, Version 2) If a set S of lists of length m has the properties that

1. There are $i_1$ different first elements of lists in S, and

2. For each j > 1 and each choice of the first j − 1 elements of a list in S there are $i_j$ choices of elements in position j of those lists,

then there are $i_1 i_2 \cdots i_m = \prod_{k=1}^{m} i_k$ lists in S.

Let's apply this version of the product principle to compute the number of m-letter passwords. Since an m-letter password is just a list of m letters, and since there are 52 different first elements of the password and 52 choices for each other position of the password, we have that $i_1 = 52, i_2 = 52, \ldots, i_m = 52$. Thus, this version of the product principle tells us immediately that the number of passwords of length m is $i_1 i_2 \cdots i_m = 52^m$. In our statement of version 2 of the Product Principle, we have introduced a new notation, the use of $\Pi$ to stand for product. This notation is called the product notation, and it is used just like summation notation. In particular, $\prod_{k=1}^{m} i_k$ is read as "the product from k = 1 to m of $i_k$." Thus $\prod_{k=1}^{m} i_k$ means the same thing as $i_1 \cdot i_2 \cdots i_m$.
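The password counts above are easy to check numerically. This Python sketch (variable names are ours) uses `math.prod` as the programmatic analogue of the product notation Π and the built-in `sum` for the sum principle:

```python
import math

# |P_i| = 52**i by the product principle; math.prod computes the
# product notation prod of i copies of 52 directly.
sizes = {i: math.prod([52] * i) for i in range(4, 9)}
assert sizes[4] == 52 ** 4

# Sum principle: passwords of lengths 4 through 8 form disjoint sets.
total = sum(sizes.values())
assert total == 52 ** 4 + 52 ** 5 + 52 ** 6 + 52 ** 7 + 52 ** 8

# Percentage of passwords with exactly four letters: a bit less than
# 100 / 52**4, as the estimate in the text argues.
pct_four = 100 * sizes[4] / total
assert pct_four < 100 / 52 ** 4
```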

Lists and functions
We have left a term undefined in our discussion of version 2 of the product principle, namely the word "list." A list of 3 things chosen from a set T consists of a first member t1 of T , a second member t2 of T , and a third member t3 of T . If we rewrite the list in a different order, we get a different list. A list of k things chosen from T consists of a first member of T through a kth member of T . We can use the word "function," which you probably recall from algebra or calculus, to be more precise. Recall that a function from a set S (called the domain of the function) to a set T (called the range of the function) is a relationship between the elements of S and the elements of T that relates exactly one element of T to each element of S. We use a letter like f to stand for a function and use f (x) to stand for the one and only one element of T that the function relates to the element x of S. You are probably used to thinking of functions in terms of formulas like $f(x) = x^2$. We need to use formulas like this in algebra and calculus because the functions that you study in algebra and calculus have infinite sets of numbers as their domains and ranges. In discrete mathematics, functions often have finite sets as their domains and ranges, and so it is possible to describe a function by saying exactly what it is. For example f (1) = Sam, f (2) = Mary, f (3) = Sarah is a function that describes a list of three people. This suggests a precise definition of a list of k elements from a set T : A list of k elements from a set T is a function from {1, 2, . . . , k} to T .

Exercise 1.2-2 Write down all the functions from the two-element set {1, 2} to the two-element set {a, b}.

Exercise 1.2-3 How many functions are there from a two-element set to a three-element set?

Exercise 1.2-4 How many functions are there from a three-element set to a two-element set?


In Exercise 1.2-2 one thing that is difficult is to choose a notation for writing the functions down. We will use f1 , f2 , etc., to stand for the various functions we find. To describe a function fi from {1, 2} to {a, b} we have to specify fi (1) and fi (2). We can write

f1 (1) = a   f1 (2) = b
f2 (1) = b   f2 (2) = a
f3 (1) = a   f3 (2) = a
f4 (1) = b   f4 (2) = b

We have simply written down the functions as they occurred to us. How do we know we have all of them? The set of all functions from {1, 2} to {a, b} is the union of the functions fi that have fi (1) = a and those that have fi (1) = b. The set of functions with fi (1) = a has two elements, one for each choice of fi (2). Therefore by the product principle the set of all functions from {1, 2} to {a, b} has size 2 · 2 = 4. To compute the number of functions from a two-element set (say {1, 2}) to a three-element set, we can again think of using fi to stand for a typical function. Then the set of all functions is the union of three sets, one for each choice of fi (1). Each of these sets has three elements, one for each choice of fi (2). Thus by the product principle we have 3 · 3 = 9 functions from a two-element set to a three-element set. To compute the number of functions from a three-element set (say {1, 2, 3}) to a two-element set, we observe that the set of functions is a union of four sets, one for each choice of fi (1) and fi (2) (as we saw in our solution to Exercise 1.2-2). But each of these sets has two functions in it, one for each choice of fi (3). Then by the product principle, we have 4 · 2 = 8 functions from a three-element set to a two-element set.

A function f is called one-to-one or an injection if whenever x ≠ y, f (x) ≠ f (y). Notice that the two functions f1 and f2 we gave in our solution of Exercise 1.2-2 are one-to-one, but f3 and f4 are not. A function f is called onto or a surjection if every element y in the range is f (x) for some x in the domain. Notice that the functions f1 and f2 in our solution of Exercise 1.2-2 are onto functions but f3 and f4 are not.

Exercise 1.2-5 Using two-element sets or three-element sets as domains and ranges, find an example of a one-to-one function that is not onto.

Exercise 1.2-6 Using two-element sets or three-element sets as domains and ranges, find an example of an onto function that is not one-to-one.
Notice that the function given by f (1) = c, f (2) = a is an example of a function from {1, 2} to {a, b, c} that is one-to-one but not onto. Also, notice that the function given by f (1) = a, f (2) = b, f (3) = a is an example of a function from {1, 2, 3} to {a, b} that is onto but not one-to-one.
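Because the domains and ranges here are finite, we can enumerate every function by brute force. The Python sketch below (the helper names `functions`, `one_to_one`, and `onto` are ours, not the book's) represents a function as a dict and reproduces the counts 4, 9, and 8 worked out above, along with the one-to-one-but-not-onto example:

```python
from itertools import product

def functions(domain, codomain):
    """All functions from domain to codomain, each represented as a
    dict mapping every domain element to one codomain element."""
    return [dict(zip(domain, values))
            for values in product(codomain, repeat=len(domain))]

def one_to_one(f):
    return len(set(f.values())) == len(f)   # distinct inputs, distinct outputs

def onto(f, codomain):
    return set(f.values()) == set(codomain)

# The counts worked out in the text:
assert len(functions([1, 2], ['a', 'b'])) == 4        # 2 * 2
assert len(functions([1, 2], ['a', 'b', 'c'])) == 9   # 3 * 3
assert len(functions([1, 2, 3], ['a', 'b'])) == 8     # 4 * 2

# The example following Exercises 1.2-5 and 1.2-6:
f = {1: 'c', 2: 'a'}
assert one_to_one(f) and not onto(f, ['a', 'b', 'c'])
```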


The Bijection Principle
Exercise 1.2-7 The loop below is part of a program to determine the number of triangles formed by n points in the plane.

(1) trianglecount = 0
(2) for i = 1 to n
(3)     for j = i + 1 to n
(4)         for k = j + 1 to n
(5)             if points i, j, and k are not collinear
(6)                 trianglecount = trianglecount + 1

How many times does the above code check three points to see if they are collinear in line 5?

In Exercise 1.2-7, we have a loop embedded in a loop that is embedded in another loop. Because the second loop, starting in line 3, begins with j = i + 1 and j increases up to n, and because the third loop, starting in line 4, begins with k = j + 1 and increases up to n, our code examines each triple of values i, j, k with i < j < k exactly once. For example, if n is 4, then the triples (i, j, k) used by the algorithm, in order, are (1, 2, 3), (1, 2, 4), (1, 3, 4), and (2, 3, 4). Thus one way in which we might have solved Exercise 1.2-7 would be to compute the number of such triples, which we will call increasing triples. As with the case of two-element subsets earlier, the number of such triples is the number of three-element subsets of an n-element set. This is the second time that we have proposed counting the elements of one set (in this case the set of increasing triples chosen from an n-element set) by saying that it is equal to the number of elements of some other set (in this case the set of three-element subsets of an n-element set). When are we justified in making such an assertion that two sets have the same size? There is another fundamental principle that abstracts our concept of what it means for two sets to have the same size. Intuitively two sets have the same size if we can match up their elements in such a way that each element of one set corresponds to exactly one element of the other set. This description carries with it some of the same words that appeared in the definitions of functions, one-to-one, and onto. Thus it should be no surprise that one-to-one and onto functions are part of our abstract principle.

Principle 1.5 (Bijection Principle) Two sets have the same size if and only if there is a one-to-one function from one set onto the other.

Our principle is called the bijection principle because a one-to-one and onto function is called a bijection.
Another name for a bijection is a one-to-one correspondence. A bijection from a set to itself is called a permutation of that set. What is the bijection that is behind our assertion that the number of increasing triples equals the number of three-element subsets? We define the function f to be the one that takes the increasing triple (i, j, k) to the subset {i, j, k}. Since the three elements of an increasing triple are different, the subset is a three element set, so we have a function from increasing triples to three element sets. Two different triples can’t be the same set in two different orders, so different triples have to be associated with different sets. Thus f is one-to-one. Each set of three integers can be listed in increasing order, so it is the image under f of an increasing triple. Therefore f is onto. Thus we have a one-to-one correspondence, or bijection, between the set of increasing triples and the set of three element sets.
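The bijection between increasing triples and three-element subsets can be checked by enumeration for small n. In this Python sketch (the helper name `increasing_triples` is ours), the map f sends a triple (i, j, k) to the set {i, j, k}, and the assertions confirm that f is one-to-one and onto:

```python
from itertools import combinations

def increasing_triples(n):
    """The triples (i, j, k) with i < j < k, in the order the nested
    loops of Exercise 1.2-7 visit them."""
    return [(i, j, k)
            for i in range(1, n + 1)
            for j in range(i + 1, n + 1)
            for k in range(j + 1, n + 1)]

n = 6
triples = increasing_triples(n)
images = {frozenset(t) for t in triples}   # f((i, j, k)) = {i, j, k}

# f is one-to-one: distinct triples give distinct three-element sets ...
assert len(images) == len(triples)
# ... and onto: every three-element subset of {1, ..., n} is hit.
assert images == {frozenset(s) for s in combinations(range(1, n + 1), 3)}
```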


k-element permutations of a set
Since counting increasing triples is equivalent to counting three-element subsets, we can count increasing triples by counting three-element subsets instead. We use a method similar to the one we used to compute the number of two-element subsets of a set. Recall that the first step was to compute the number of ordered pairs of distinct elements we could choose from the set {1, 2, . . . , n}. So we now ask in how many ways may we choose an ordered triple of distinct elements from {1, 2, . . . , n}, or more generally, in how many ways may we choose a list of k distinct elements from {1, 2, . . . , n}. A list of k distinct elements chosen from a set N is called a k-element permutation of N.⁴

How many 3-element permutations of {1, 2, . . . , n} can we make? Recall that a k-element permutation is a list of k distinct elements. There are n choices for the first number in the list. For each way of choosing the first element, there are n − 1 choices for the second. For each choice of the first two elements, there are n − 2 ways to choose a third (distinct) number, so by version 2 of the product principle, there are n(n − 1)(n − 2) ways to choose the list of numbers. For example, if n is 4, the three-element permutations of {1, 2, 3, 4} are

L = {123, 124, 132, 134, 142, 143, 213, 214, 231, 234, 241, 243, 312, 314, 321, 324, 341, 342, 412, 413, 421, 423, 431, 432}.   (1.4)

There are indeed 4 · 3 · 2 = 24 lists in this set. Notice that we have listed the lists in the order that they would appear in a dictionary (assuming we treated numbers as we treat letters). This ordering of lists is called the lexicographic ordering.

A general pattern is emerging. To compute the number of k-element permutations of the set {1, 2, . . . , n}, we recall that they are lists and note that we have n choices for the first element of the list, and regardless of which choice we make, we have n − 1 choices for the second element of the list, and more generally, given the first i − 1 elements of a list we have n − (i − 1) = n − i + 1 choices for the ith element of the list. Thus by version 2 of the product principle, we have n(n − 1) · · · (n − k + 1) (which is the product of the first k terms of n!) ways to choose a k-element permutation of {1, 2, . . . , n}. There is a very handy notation for this product, first suggested by Don Knuth. We use $n^{\underline{k}}$ to stand for $n(n-1)\cdots(n-k+1) = \prod_{i=0}^{k-1}(n-i)$, and call it the kth falling factorial power of n. We can summarize our observations in a theorem.

Theorem 1.1 The number of k-element permutations of an n-element set is
$$n^{\underline{k}} = \prod_{i=0}^{k-1}(n-i) = n(n-1)\cdots(n-k+1) = \frac{n!}{(n-k)!}.$$
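Theorem 1.1 is easy to verify for small cases with Python's `itertools.permutations`, which generates exactly the k-element permutations in lexicographic order (the helper name `falling_factorial` is ours for the product n(n − 1) · · · (n − k + 1)):

```python
from itertools import permutations
from math import prod

def falling_factorial(n, k):
    """The kth falling factorial power: n(n-1)...(n-k+1)."""
    return prod(n - i for i in range(k))

n, k = 5, 3
perms = list(permutations(range(1, n + 1), k))

# itertools.permutations yields each k-element permutation exactly
# once, so its count must match Theorem 1.1.
assert len(perms) == falling_factorial(n, k) == 5 * 4 * 3

# With a sorted input, the output is in lexicographic order, matching
# the pattern of Equation (1.4).
assert perms[0] == (1, 2, 3) and perms[1] == (1, 2, 4)
```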

Counting subsets of a set
We now return to the question of counting the number of three-element subsets of the set {1, 2, . . . , n}. We use $\binom{n}{3}$, which we read as "n choose 3," to stand for the number of three-element subsets of
⁴In particular, a k-element permutation of {1, 2, . . . , k} is a list of k distinct elements of {1, 2, . . . , k}, which, by our definition of a list, is a function from {1, 2, . . . , k} to {1, 2, . . . , k}. This function must be one-to-one since the elements of the list are distinct. Since there are k distinct elements of the list, every element of {1, 2, . . . , k} appears in the list, so the function is onto. Therefore it is a bijection. Thus our definition of a permutation of a set is consistent with our definition of a k-element permutation in the case where the set is {1, 2, . . . , k}.


{1, 2, . . . , n}, or more generally of any n-element set. We have just carried out the first step of computing $\binom{n}{3}$ by counting the number of three-element permutations of {1, 2, . . . , n}.

Exercise 1.2-8 Let L be the set of all three-element permutations of {1, 2, 3, 4}, as in Equation 1.4. How many of the lists (permutations) in L are lists of the 3-element set {1, 3, 4}? What are these lists?

We see that this set appears in L as 6 different lists: 134, 143, 314, 341, 413, and 431. In general, given three different numbers with which to create a list, there are three ways to choose the first number in the list, given the first there are two ways to choose the second, and given the first two there is only one way to choose the third element of the list. Thus by version 2 of the product principle once again, there are 3 · 2 · 1 = 6 ways to make the list.

Since there are n(n − 1)(n − 2) three-element permutations of an n-element set, and each three-element subset appears in exactly 6 of these lists, the number of three-element permutations is six times the number of three-element subsets. That is, $n(n-1)(n-2) = \binom{n}{3} \cdot 6$. Whenever we see that one number that counts something is the product of two other numbers that count something, we should expect that there is an argument using the product principle that explains why. Thus we should be able to see how to break the set of all 3-element permutations of {1, 2, . . . , n} into either 6 disjoint sets of size $\binom{n}{3}$ or into $\binom{n}{3}$ subsets of size six. Since we argued that each three-element subset corresponds to six lists, we have described how to get a set of six lists from one three-element set. Two different subsets could never give us the same lists, so our sets of three-element lists are disjoint. In other words, we have divided the set of all three-element permutations into $\binom{n}{3}$ mutually disjoint sets of size six. In this way the product principle does explain why $n(n-1)(n-2) = \binom{n}{3} \cdot 6$.
By division we get that we have
$$\binom{n}{3} = \frac{n(n-1)(n-2)}{6}$$
three-element subsets of {1, 2, . . . , n}. For n = 4, the number is 4 · 3 · 2/6 = 4. These sets are {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, and {2, 3, 4}. It is straightforward to verify that each of these sets appears 6 times in L, as 6 different lists.

Essentially the same argument gives us the number of k-element subsets of {1, 2, . . . , n}. We denote this number by $\binom{n}{k}$, and read it as "n choose k." Here is the argument: the set of all k-element permutations of {1, 2, . . . , n} can be partitioned into $\binom{n}{k}$ disjoint blocks,⁵ each block consisting of all k-element permutations of a k-element subset of {1, 2, . . . , n}. But the number of k-element permutations of a k-element set is k!, either by version 2 of the product principle or by Theorem 1.1. Thus by version 1 of the product principle we get the equation
$$n^{\underline{k}} = \binom{n}{k} k!.$$
Division by k! gives us our next theorem.

Theorem 1.2 For integers n and k with 0 ≤ k ≤ n, the number of k-element subsets of an n-element set is
$$\binom{n}{k} = \frac{n^{\underline{k}}}{k!} = \frac{n!}{k!(n-k)!}.$$

⁵Here we are using the language introduced for partitions of sets in Section 1.1.


Proof: The proof is given above, except in the case that k is 0; however, the only subset of our n-element set of size zero is the empty set, so we have exactly one such subset. This is exactly what the formula gives us as well. (Note that the cases k = 0 and k = n both use the fact that 0! = 1.⁶) The equality in the theorem comes from the definition of $n^{\underline{k}}$.

Another notation for the numbers $\binom{n}{k}$ is C(n, k). Thus we have that
$$C(n, k) = \binom{n}{k} = \frac{n!}{k!(n-k)!}. \qquad (1.5)$$

These numbers are called binomial coefficients for reasons that will become clear later.
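Theorem 1.2 and Equation (1.5) can be sanity-checked mechanically for small n and k. The Python sketch below (our own construction, not the book's) groups the k-element permutations by their underlying set, confirming the partition into $\binom{n}{k}$ blocks of size k! that drives the proof; `math.comb` plays the role of C(n, k):

```python
from itertools import permutations
from math import comb, factorial

n, k = 6, 3
blocks = {}
for p in permutations(range(1, n + 1), k):
    # group each k-element permutation with the others that list
    # the same underlying k-element subset
    blocks.setdefault(frozenset(p), []).append(p)

# One block per k-element subset, each block of size k!.
assert len(blocks) == comb(n, k)
assert all(len(b) == factorial(k) for b in blocks.values())

# Hence n(n-1)(n-2) = C(n, 3) * 3!, and Equation (1.5) holds.
assert comb(n, k) * factorial(k) == n * (n - 1) * (n - 2)
assert comb(n, k) == factorial(n) // (factorial(k) * factorial(n - k))
```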

Important Concepts, Formulas, and Theorems
1. List. A list of k items chosen from a set X is a function from {1, 2, . . . , k} to X.

2. Lists versus sets. In a list, the order in which elements appear in the list matters, and an element may appear more than once. In a set, the order in which we write down the elements of the set does not matter, and an element can appear at most once.

3. Product Principle, Version 2. If a set S of lists of length m has the properties that (a) there are $i_1$ different first elements of lists in S, and (b) for each j > 1 and each choice of the first j − 1 elements of a list in S there are $i_j$ choices of elements in position j of those lists, then there are $i_1 i_2 \cdots i_m$ lists in S.

4. Product Notation. We use the Greek letter $\Pi$ to stand for product just as we use the Greek letter $\Sigma$ to stand for sum. This notation is called the product notation, and it is used just like summation notation. In particular, $\prod_{k=1}^{m} i_k$ is read as "the product from k = 1 to m of $i_k$." Thus $\prod_{k=1}^{m} i_k$ means the same thing as $i_1 \cdot i_2 \cdots i_m$.

5. Function. A function f from a set S to a set T is a relationship between S and T that relates exactly one element of T to each element of S. We write f (x) for the one and only one element of T that the function f relates to the element x of S. The same element of T may be related to different members of S.

6. Onto, Surjection. A function f from a set S to a set T is onto if for each element y ∈ T , there is at least one x ∈ S such that f (x) = y. An onto function is also called a surjection.

7. One-to-one, Injection. A function f from a set S to a set T is one-to-one if, for each x ∈ S and y ∈ S with x ≠ y, f (x) ≠ f (y). A one-to-one function is also called an injection.

8. Bijection, One-to-one correspondence. A function from a set S to a set T is a bijection if it is both one-to-one and onto. A bijection is sometimes called a one-to-one correspondence.

9. Permutation. A one-to-one function from a set S to S is called a permutation of S.
⁶There are many reasons why 0! is defined to be one; making the formula for $\binom{n}{k}$ work out is one of them.


10. k-element permutation. A k-element permutation of a set S is a list of k distinct elements of S.

11. k-element subsets. n choose k. Binomial Coefficients. For integers n and k with 0 ≤ k ≤ n, the number of k-element subsets of an n-element set is $\frac{n!}{k!(n-k)!}$. The number of k-element subsets of an n-element set is usually denoted by $\binom{n}{k}$ or C(n, k), both of which are read as "n choose k." These numbers are called binomial coefficients.

12. The number of k-element permutations of an n-element set is $n^{\underline{k}} = n(n-1)\cdots(n-k+1) = n!/(n-k)!$.

13. When we have a formula to count something and the formula expresses the result as a product, it is useful to try to understand whether and how we could use the product principle to prove the formula.

Problems
1. The "Pile High Deli" offers a "simple sandwich" consisting of your choice of one of five different kinds of bread with your choice of butter or mayonnaise or no spread, one of three different kinds of meat, and one of three different kinds of cheese, with the meat and cheese "piled high" on the bread. In how many ways may you choose a simple sandwich?

2. In how many ways can we pass out k distinct pieces of fruit to n children (with no restriction on how many pieces of fruit a child may get)?

3. Write down all the functions from the three-element set {1, 2, 3} to the set {a, b}. Indicate which functions, if any, are one-to-one. Indicate which functions, if any, are onto.

4. Write down all the functions from the two-element set {1, 2} to the three-element set {a, b, c}. Indicate which functions, if any, are one-to-one. Indicate which functions, if any, are onto.

5. There are more functions from the real numbers to the real numbers than most of us can imagine. However, in discrete mathematics we often work with functions from a finite set S with s elements to a finite set T with t elements. Then there are only a finite number of functions from S to T . How many functions are there from S to T in this case?

6. Assuming k ≤ n, in how many ways can we pass out k distinct pieces of fruit to n children if each child may get at most one? What is the number if k > n? Assume for both questions that we pass out all the fruit.

7. Assuming k ≤ n, in how many ways can we pass out k identical pieces of fruit to n children if each child may get at most one? What is the number if k > n? Assume for both questions that we pass out all the fruit.

8. What is the number of five-digit (base ten) numbers? What is the number of five-digit numbers that have no two consecutive digits equal? What is the number that have at least one pair of consecutive digits equal?


9. We are making a list of participants in a panel discussion on allowing alcohol on campus. They will be sitting behind a table in the order in which we list them. There will be four administrators and four students. In how many ways may we list them if the administrators must sit together in a group and the students must sit together in a group? In how many ways may we list them if we must alternate students and administrators?

10. (This problem is for students who are working on the relationship between k-element permutations and k-element subsets.) Write down all three-element permutations of the five-element set {1, 2, 3, 4, 5} in lexicographic order. Underline those that correspond to the set {1, 3, 5}. Draw a rectangle around those that correspond to the set {2, 4, 5}. How many three-element permutations of {1, 2, 3, 4, 5} correspond to a given 3-element set? How many three-element subsets does the set {1, 2, 3, 4, 5} have?

11. In how many ways may a class of twenty students choose a group of three students from among themselves to go to the professor and explain that the three-hour labs are actually taking ten hours?

12. We are choosing participants for a panel discussion on allowing alcohol on campus. We have to choose four administrators from a group of ten administrators and four students from a group of twenty students. In how many ways may we do this?

13. We are making a list of participants in a panel discussion on allowing alcohol on campus. They will be sitting behind a table in the order in which we list them. There will be four administrators chosen from a group of ten administrators and four students chosen from a group of twenty students. In how many ways may we choose and list them if the administrators must sit together in a group and the students must sit together in a group? In how many ways may we choose and list them if we must alternate students and administrators?

14.
In the local ice cream shop, you may get a sundae with two scoops of ice cream from 10 flavors (in accordance with your mother's rules from Problem 12 in Section 1.1, the way the scoops sit in the dish does not matter), any one of three flavors of topping, and any (or all or none) of whipped cream, nuts, and a cherry. How many different sundaes are possible?

15. In the local ice cream shop, you may get a three-way sundae with three of the ten flavors of ice cream, any one of three flavors of topping, and any (or all or none) of whipped cream, nuts, and a cherry. How many different sundaes are possible (in accordance with your mother's rules from Problem 12 in Section 1.1, the way the scoops sit in the dish does not matter)?

16. A tennis club has 2n members. We want to pair up the members by twos for singles matches. In how many ways may we pair up all the members of the club? Suppose that in addition to specifying who plays whom, for each pairing we say who serves first. Now in how many ways may we specify our pairs?

17. A basketball team has 12 players. However, only five players play at any given time during a game. In how many ways may the coach choose the five players? To be more realistic, the five players playing a game normally consist of two guards, two forwards, and one center. If there are five guards, four forwards, and three centers on the team, in how many ways can the coach choose two guards, two forwards, and one center? What if one of the centers is equally skilled at playing forward?


CHAPTER 1. COUNTING

18. Explain why a function from an n-element set to an n-element set is one-to-one if and only if it is onto.

19. The function g is called an inverse to the function f if the domain of g is the range of f, if g(f(x)) = x for every x in the domain of f, and if f(g(y)) = y for each y in the range of f.
(a) Explain why a function is a bijection if and only if it has an inverse function.
(b) Explain why a function that has an inverse function has only one inverse function.


1.3 Binomial Coefficients

In this section, we will explore various properties of binomial coefficients. Remember that we defined the quantity $\binom{n}{k}$ to be the number of k-element subsets of an n-element set.
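Before studying the properties, the definition itself can be checked directly for small cases. This is a quick sketch (not from the book) that enumerates the k-element subsets of a 5-element set and compares the count with Python's built-in `math.comb`:

```python
# Sanity check of the definition: C(n, k) is the number of k-element
# subsets of an n-element set, here enumerated explicitly for n = 5.

from itertools import combinations
from math import comb

n = 5
for k in range(n + 1):
    num_subsets = len(list(combinations(range(n), k)))
    print(k, num_subsets, comb(n, k))   # the two counts agree for every k
```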

Pascal’s Triangle
Table 1.1 contains the values of the binomial coefficients $\binom{n}{k}$ for n = 0 to 6 and all relevant values of k. The table begins with a 1 for n = 0 and k = 0, because the empty set, the set with no elements, has exactly one 0-element subset, namely itself. We have not put any value into the table for a value of k larger than n, because we haven't directly said what we mean by the binomial coefficient $\binom{n}{k}$ in that case. However, since there are no subsets of an n-element set that have size larger than n, it is natural to say that $\binom{n}{k}$ is zero when k > n. Therefore we define $\binom{n}{k}$ to be zero7 when k > n. Thus we could fill in the empty places in the table with zeros. The table is easier to read if we don't fill in the empty spaces, so we just remember that they are zero.

Table 1.1: A table of binomial coefficients

    n\k   0   1   2   3   4   5   6
    0     1
    1     1   1
    2     1   2   1
    3     1   3   3   1
    4     1   4   6   4   1
    5     1   5  10  10   5   1
    6     1   6  15  20  15   6   1

Exercise 1.3-1 What general properties of binomial coefficients do you see in Table 1.1?

Exercise 1.3-2 What is the next row of the table of binomial coefficients?

Several properties of binomial coefficients are apparent in Table 1.1. Each row begins with a 1, because $\binom{n}{0}$ is always 1. This is the case because there is just one subset of an n-element set with 0 elements, namely the empty set. Similarly, each row ends with a 1, because an n-element set S has just one n-element subset, namely S itself. Each row increases at first, and then decreases. Further, the second half of each row is the reverse of the first half. The array of numbers called Pascal's Triangle emphasizes that symmetry by rearranging the rows of the table so that they line up at their centers. We show this array in Table 1.2. When we write down Pascal's Triangle, we leave out the values of n and k.

You may know a method for creating Pascal's Triangle that does not involve computing binomial coefficients, but rather creates each row from the row above. Each entry in Table 1.2, except for the ones, is the sum of the entry directly above it to the left and the entry directly

7 If you are thinking "But we did define $\binom{n}{k}$ to be zero when k > n by saying that it is the number of k-element subsets of an n-element set, so of course it is zero," then good for you.

Table 1.2: Pascal's Triangle

                1
              1   1
            1   2   1
          1   3   3   1
        1   4   6   4   1
      1   5  10  10   5   1
    1   6  15  20  15   6   1

above it to the right. We call this the Pascal Relationship, and it gives another way to compute binomial coefficients without doing the multiplying and dividing in Equation 1.5. If we wish to compute many binomial coefficients, the Pascal Relationship often yields a more efficient way to do so. Once the coefficients in a row have been computed, the coefficients in the next row can be computed using only one addition per entry.

We now verify that the two methods for computing Pascal's Triangle always yield the same result. In order to do so, we need an algebraic statement of the Pascal Relationship. In Table 1.1, each entry is the sum of the one above it and the one above it and to the left. In algebraic terms, then, the Pascal Relationship says

$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}, \qquad (1.6)$$

whenever n > 0 and 0 < k < n. It is possible to give a purely algebraic (and rather dreary) proof of this formula by plugging our earlier formula for binomial coefficients into all three terms and verifying that we get an equality. A guiding principle of discrete mathematics is that when we have a formula that relates the numbers of elements of several sets, we should find an explanation that involves a relationship among the sets.
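The Pascal Relationship can be turned directly into a short program: each new row is built from the previous one with one addition per interior entry. This is a minimal sketch (the function name `pascal_rows` is ours, not the book's):

```python
# Compute rows 0..n_max of Pascal's Triangle using only the Pascal
# Relationship (Equation 1.6): no multiplication or division is needed.

def pascal_rows(n_max):
    """Return rows 0 through n_max of Pascal's Triangle as lists."""
    rows = [[1]]
    for n in range(1, n_max + 1):
        prev = rows[-1]
        # Interior entries come from C(n-1, k-1) + C(n-1, k);
        # the two ends of every row are 1.
        row = [1] + [prev[k - 1] + prev[k] for k in range(1, n)] + [1]
        rows.append(row)
    return rows

for row in pascal_rows(6):
    print(row)
```

Comparing the output against Table 1.1 row by row confirms that the addition method and the subset-counting definition agree.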

A proof using the Sum Principle
From Theorem 1.2 and Equation 1.5, we know that the expression $\binom{n}{k}$ is the number of k-element subsets of an n-element set. Each of the three terms in Equation 1.6 therefore represents the number of subsets of a particular size chosen from an appropriately sized set. In particular, the three terms are the number of k-element subsets of an n-element set, the number of (k−1)-element subsets of an (n−1)-element set, and the number of k-element subsets of an (n−1)-element set. We should, therefore, be able to explain the relationship among these three quantities using the sum principle. This explanation will provide a proof, just as valid a proof as an algebraic derivation. Often, a proof using the sum principle will be less tedious, and will yield more insight into the problem at hand. Before giving such a proof in Theorem 1.3 below, we work out a special case. Suppose n = 5, k = 2. Equation 1.6 says that

$$\binom{5}{2} = \binom{4}{1} + \binom{4}{2}. \qquad (1.7)$$


Because the numbers are small, it is simple to verify this by using the formula for binomial coefficients, but let us instead consider subsets of a 5-element set. Equation 1.7 says that the number of 2-element subsets of a 5-element set is equal to the number of 1-element subsets of a 4-element set plus the number of 2-element subsets of a 4-element set. But to apply the sum principle, we would need to say something stronger. To apply the sum principle, we should be able to partition the set of 2-element subsets of a 5-element set into 2 disjoint sets, one of which has the same size as the number of 1-element subsets of a 4-element set and one of which has the same size as the number of 2-element subsets of a 4-element set. Such a partition provides a proof of Equation 1.7. Consider now the set S = {A, B, C, D, E}. The set of two-element subsets is

S1 = {{A, B}, {A, C}, {A, D}, {A, E}, {B, C}, {B, D}, {B, E}, {C, D}, {C, E}, {D, E}}.

We now partition S1 into 2 blocks, S2 and S3. S2 contains all sets in S1 that do contain the element E, while S3 contains all sets in S1 that do not contain the element E. Thus,

S2 = {{A, E}, {B, E}, {C, E}, {D, E}}

and

S3 = {{A, B}, {A, C}, {A, D}, {B, C}, {B, D}, {C, D}}.

Each set in S2 must contain E and thus contains one other element from S. Since there are 4 other elements in S that we can choose along with E, we have $|S_2| = \binom{4}{1}$. Each set in S3 contains 2 elements from the set {A, B, C, D}. There are $\binom{4}{2}$ ways to choose such a two-element subset of {A, B, C, D}. But S1 = S2 ∪ S3 and S2 and S3 are disjoint, and so, by the sum principle, Equation 1.7 must hold. We now give a proof for general n and k.

Theorem 1.3 If n and k are integers with n > 0 and 0 < k < n, then

$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}.$$

Proof: The formula says that the number of k-element subsets of an n-element set is the sum of two numbers. As in our example, we will apply the sum principle. To apply it, we need to represent the set of k-element subsets of an n-element set as a union of two other disjoint sets. Suppose our n-element set is $S = \{x_1, x_2, \ldots, x_n\}$. Then we wish to take $S_1$, say, to be the $\binom{n}{k}$-element set of all k-element subsets of S and partition it into two disjoint sets of k-element subsets, $S_2$ and $S_3$, where the sizes of $S_2$ and $S_3$ are $\binom{n-1}{k-1}$ and $\binom{n-1}{k}$ respectively. We can do this as follows. Note that $\binom{n-1}{k}$ stands for the number of k-element subsets of the first n−1 elements $x_1, x_2, \ldots, x_{n-1}$ of S. Thus we can let $S_3$ be the set of k-element subsets of S that don't contain $x_n$. Then the only possibility for $S_2$ is the set of k-element subsets of S that do contain $x_n$. How can we see that the number of elements of this set $S_2$ is $\binom{n-1}{k-1}$? By observing that removing $x_n$ from each of the elements of $S_2$ gives a (k−1)-element subset of $S' = \{x_1, x_2, \ldots, x_{n-1}\}$. Further, each (k−1)-element subset of $S'$ arises in this way from one and only one k-element subset of S containing $x_n$. Thus the number of elements of $S_2$ is the number of (k−1)-element subsets of $S'$, which is $\binom{n-1}{k-1}$. Since $S_2$ and $S_3$ are two disjoint sets whose union is $S_1$, the sum principle shows that the number of elements of $S_1$ is $\binom{n-1}{k-1} + \binom{n-1}{k}$.

Notice that in our proof, we used a bijection that we did not explicitly describe. Namely, there is a bijection f between $S_2$ (the k-element sets of S that contain $x_n$) and the (k−1)-element subsets of $S'$. For any subset K in $S_2$, we let f(K) be the set we obtain by removing $x_n$ from K. It is immediate that this is a bijection, and so the bijection principle tells us that the size of $S_2$ is the number of (k−1)-element subsets of $S'$.
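The partition in the proof can be checked by brute force for the worked example n = 5, k = 2. This sketch (our own, not the book's) splits the 2-element subsets of {1, ..., 5} according to whether they contain the last element:

```python
# Verify the partition behind Theorem 1.3: the k-subsets of {1,..,n}
# split into those containing n (playing the role of S2) and those
# that do not (playing the role of S3).

from itertools import combinations

def pascal_partition(n, k):
    """Partition the k-element subsets of {1,..,n} by membership of n."""
    subsets = list(combinations(range(1, n + 1), k))
    with_last = [s for s in subsets if n in s]        # S2
    without_last = [s for s in subsets if n not in s]  # S3
    return with_last, without_last

s2, s3 = pascal_partition(5, 2)
# |S2| = C(4,1) = 4 and |S3| = C(4,2) = 6, so C(5,2) = 4 + 6 = 10.
print(len(s2), len(s3), len(s2) + len(s3))
```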

The Binomial Theorem
Exercise 1.3-3 What is $(x+y)^3$? What is $(x+1)^4$? What is $(2+y)^4$? What is $(x+y)^4$?

The number of k-element subsets of an n-element set is called a binomial coefficient because of the role that these numbers play in the algebraic expansion of a binomial x+y. The Binomial Theorem states that

Theorem 1.4 (Binomial Theorem) For any integer n ≥ 0

$$(x+y)^n = \binom{n}{0}x^n + \binom{n}{1}x^{n-1}y + \binom{n}{2}x^{n-2}y^2 + \cdots + \binom{n}{n-1}xy^{n-1} + \binom{n}{n}y^n, \qquad (1.8)$$

or in summation notation,

$$(x+y)^n = \sum_{i=0}^{n} \binom{n}{i} x^{n-i} y^i.$$

Unfortunately, when most people first see this theorem, they do not have the tools to see easily why it is true. Armed with our new methodology of using subsets to prove algebraic identities, we can give a proof of this theorem. Let us begin by considering the example $(x+y)^3$, which by the Binomial Theorem is

$$(x+y)^3 = \binom{3}{0}x^3 + \binom{3}{1}x^2y + \binom{3}{2}xy^2 + \binom{3}{3}y^3 \qquad (1.9)$$
$$= x^3 + 3x^2y + 3xy^2 + y^3. \qquad (1.10)$$

Suppose that we did not know the Binomial Theorem but still wanted to compute $(x+y)^3$. Then we would write out (x+y)(x+y)(x+y) and perform the multiplication. Probably we would multiply the first two terms, obtaining $x^2 + 2xy + y^2$, and then multiply this expression by x+y. Notice that by applying the distributive laws you get

$$(x+y)(x+y) = (x+y)x + (x+y)y = xx + xy + yx + yy. \qquad (1.11)$$

We could use the commutative law to put this into the usual form, but let us hold off for a moment so we can see a pattern evolve. To compute $(x+y)^3$, we can multiply the expression on the right-hand side of Equation 1.11 by x+y using the distributive laws to get

$$(xx + xy + yx + yy)(x+y) = (xx + xy + yx + yy)x + (xx + xy + yx + yy)y \qquad (1.12)$$
$$= xxx + xyx + yxx + yyx + xxy + xyy + yxy + yyy. \qquad (1.13)$$


Each of these 8 terms that we got from the distributive law may be thought of as a product of terms, one from the first binomial, one from the second binomial, and one from the third binomial. Multiplication is commutative, so many of these products are the same. In fact, we have one xxx or $x^3$ product, three products with two x's and one y, or $x^2y$, three products with one x and two y's, or $xy^2$, and one product which becomes $y^3$. Now look at Equation 1.9, which summarizes this process. There is $\binom{3}{0} = 1$ way to choose a product with 3 x's and 0 y's, there are $\binom{3}{1} = 3$ ways to choose a product with 2 x's and 1 y, etc. Thus we can understand the binomial theorem as counting the subsets of our binomial factors from which we choose a y-term to get a product with k y's in multiplying a string of n binomials.

Essentially the same explanation gives us a proof of the binomial theorem. Note that when we multiplied out three factors of (x+y) using the distributive law but not collecting like terms, we had a sum of eight products. Each factor of (x+y) doubles the number of summands. Thus when we apply the distributive law as many times as possible (without applying the commutative law and collecting like terms) to a product of n binomials all equal to (x+y), we get $2^n$ summands. Each summand is a product of a length-n list of x's and y's. In each list, the ith entry comes from the ith binomial factor. A list that becomes $x^{n-k}y^k$ when we use the commutative law will have a y in k of its places and an x in the remaining places. The number of lists that have a y in k places is thus the number of ways to select k binomial factors to contribute a y to our list. But the number of ways to select k binomial factors from n binomial factors is simply $\binom{n}{k}$, and so that is the coefficient of $x^{n-k}y^k$. This proves the binomial theorem.
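The counting argument above can be simulated directly: enumerate all $2^n$ lists of x's and y's, group them by the number of y's, and compare with the binomial coefficients. A brute-force sketch (ours, for illustration):

```python
# Simulate multiplying n copies of (x + y) without collecting like terms:
# each of the 2**n length-n lists of 'x'/'y' is one summand, and the lists
# with exactly k y's should number C(n, k).

from itertools import product
from math import comb

def count_by_y(n):
    """Count the 2**n length-n 'x'/'y' lists by their number of 'y's."""
    counts = [0] * (n + 1)
    for word in product("xy", repeat=n):
        counts[word.count("y")] += 1
    return counts

n = 4
print(count_by_y(n))                      # coefficients of x^(n-k) y^k
print([comb(n, k) for k in range(n + 1)])  # the binomial coefficients
```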
Applying the Binomial Theorem to the remaining questions in Exercise 1.3-3 gives us

$$(x+1)^4 = x^4 + 4x^3 + 6x^2 + 4x + 1,$$
$$(2+y)^4 = 16 + 32y + 24y^2 + 8y^3 + y^4,$$

and

$$(x+y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4.$$

Labeling and trinomial coefficients
Exercise 1.3-4 Suppose that I have k labels of one kind and n−k labels of another. In how many different ways may I apply these labels to n objects?

Exercise 1.3-5 Show that if we have $k_1$ labels of one kind, $k_2$ labels of a second kind, and $k_3 = n - k_1 - k_2$ labels of a third kind, then there are $\frac{n!}{k_1!k_2!k_3!}$ ways to apply these labels to n objects.

Exercise 1.3-6 What is the coefficient of $x^{k_1}y^{k_2}z^{k_3}$ in $(x+y+z)^n$?

Exercise 1.3-4 and Exercise 1.3-5 can be thought of as immediate applications of binomial coefficients. For Exercise 1.3-4, there are $\binom{n}{k}$ ways to choose the k objects that get the first label, and the other objects get the second label, so the answer is $\binom{n}{k}$. For Exercise 1.3-5, there are $\binom{n}{k_1}$ ways to choose the $k_1$ objects that get the first kind of label, and then there are $\binom{n-k_1}{k_2}$ ways to choose the objects that get the second kind of label. After that, the remaining $k_3 = n - k_1 - k_2$ objects get the third kind of label. The total number of labellings is thus, by the product principle, the product of the two binomial coefficients, which simplifies as follows.

$$\binom{n}{k_1}\binom{n-k_1}{k_2} = \frac{n!}{k_1!(n-k_1)!} \cdot \frac{(n-k_1)!}{k_2!(n-k_1-k_2)!} = \frac{n!}{k_1!k_2!(n-k_1-k_2)!} = \frac{n!}{k_1!k_2!k_3!}.$$

A more elegant approach to Exercise 1.3-4, Exercise 1.3-5, and other related problems appears in the next section. Exercise 1.3-6 shows how Exercise 1.3-5 applies to computing powers of trinomials. In expanding $(x+y+z)^n$, we think of writing down n copies of the trinomial x+y+z side by side, and applying the distributive laws until we have a sum of terms, each of which is a product of x's, y's, and z's. How many such terms do we have with $k_1$ x's, $k_2$ y's, and $k_3$ z's? Imagine choosing x from some number $k_1$ of the copies of the trinomial, choosing y from some number $k_2$, and z from the remaining $k_3$ copies, multiplying all the chosen terms together, and adding up over all ways of picking the $k_i$'s and making our choices. Choosing x from a copy of the trinomial "labels" that copy with x, and the same for y and z, so the number of choices that yield $x^{k_1}y^{k_2}z^{k_3}$ is the number of ways to label n objects with $k_1$ labels of one kind, $k_2$ labels of a second kind, and $k_3$ labels of a third. Notice that this requires that $k_3 = n - k_1 - k_2$. By analogy with our notation for a binomial coefficient, we define the trinomial coefficient $\binom{n}{k_1,k_2,k_3}$ to be $\frac{n!}{k_1!k_2!k_3!}$ if $k_1 + k_2 + k_3 = n$ and 0 otherwise. Then $\binom{n}{k_1,k_2,k_3}$ is the coefficient of $x^{k_1}y^{k_2}z^{k_3}$ in $(x+y+z)^n$. This is sometimes called the trinomial theorem.
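The labeling count of Exercise 1.3-5 can also be verified by enumeration. This brute-force sketch (our own) lists every way to assign three kinds of labels to n objects and keeps only those with the required multiplicities:

```python
# Check that the number of ways to apply k1 labels of one kind, k2 of a
# second, and k3 = n - k1 - k2 of a third to n objects is n!/(k1! k2! k3!).

from itertools import product
from math import factorial

def count_labelings(n, k1, k2):
    """Return (enumerated count, formula value) for labels 'a','b','c'."""
    k3 = n - k1 - k2
    enumerated = 0
    for labels in product("abc", repeat=n):
        # keep label sequences with exactly k1 a's and k2 b's
        if labels.count("a") == k1 and labels.count("b") == k2:
            enumerated += 1
    formula = factorial(n) // (factorial(k1) * factorial(k2) * factorial(k3))
    return enumerated, formula

print(count_labelings(6, 2, 2))   # both counts agree
```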

Important Concepts, Formulas, and Theorems
1. Pascal Relationship. The Pascal Relationship says that
$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k},$$
whenever n > 0 and 0 < k < n.

2. Pascal's Triangle. Pascal's Triangle is the triangular array of numbers we get by putting ones in row n and column 0 and in row n and column n of a table for every positive integer n and then filling the remainder of the table by letting the number in row n and column j be the sum of the numbers in row n−1 and columns j−1 and j whenever 0 < j < n.

3. Binomial Theorem. The Binomial Theorem states that for any integer n ≥ 0,
$$(x+y)^n = x^n + \binom{n}{1}x^{n-1}y + \binom{n}{2}x^{n-2}y^2 + \cdots + \binom{n}{n-1}xy^{n-1} + \binom{n}{n}y^n,$$
or in summation notation,
$$(x+y)^n = \sum_{i=0}^{n} \binom{n}{i} x^{n-i} y^i.$$

4. Labeling. The number of ways to apply k labels of one kind and n−k labels of another kind to n objects is $\binom{n}{k}$.

5. Trinomial coefficient. We define the trinomial coefficient $\binom{n}{k_1,k_2,k_3}$ to be $\frac{n!}{k_1!k_2!k_3!}$ if $k_1 + k_2 + k_3 = n$ and 0 otherwise.

6. Trinomial Theorem. The coefficient of $x^iy^jz^k$ in $(x+y+z)^n$ is $\binom{n}{i,j,k}$.


Problems
1. Find $\binom{12}{3}$ and $\binom{12}{9}$. What can you say in general about $\binom{n}{k}$ and $\binom{n}{n-k}$?

2. Find the row of the Pascal triangle that corresponds to n = 8.

3. Find the following:
a. $(x+1)^5$
b. $(x+y)^5$
c. $(x+2)^5$
d. $(x-1)^5$

4. Carefully explain the proof of the binomial theorem for $(x+y)^4$. That is, explain what each of the binomial coefficients in the theorem stands for and what powers of x and y are associated with them in this case.

5. If I have ten distinct chairs to paint, in how many ways may I paint three of them green, three of them blue, and four of them red? What does this have to do with labellings?

6. When $n_1, n_2, \ldots, n_k$ are nonnegative integers that add to n, the number $\frac{n!}{n_1!n_2!\cdots n_k!}$ is called a multinomial coefficient and is denoted by $\binom{n}{n_1,n_2,\ldots,n_k}$. A polynomial of the form $x_1 + x_2 + \cdots + x_k$ is called a multinomial. Explain the relationship between powers of a multinomial and multinomial coefficients. This relationship is called the Multinomial Theorem.

7. Give a bijection that proves your statement about $\binom{n}{k}$ and $\binom{n}{n-k}$ in Problem 1 of this section.

8. In a Cartesian coordinate system, how many paths are there from the origin to the point with integer coordinates (m, n) if the paths are built up of exactly m+n horizontal and vertical line segments each of length one?

9. What is the formula we get for the binomial theorem if, instead of analyzing the number of ways to choose k distinct y's, we analyze the number of ways to choose k distinct x's?

10. Explain the difference between choosing four disjoint three-element sets from a twelve-element set and labelling a twelve-element set with three labels of type 1, three labels of type 2, three labels of type 3, and three labels of type 4. What is the number of ways of choosing three disjoint four-element subsets from a twelve-element set? What is the number of ways of choosing four disjoint three-element subsets from a twelve-element set?

11. A 20-member club must have a President, Vice President, Secretary, and Treasurer as well as a three-person nominations committee. If the officers must be different people, and if no officer may be on the nominating committee, in how many ways could the officers and nominating committee be chosen? Answer the same question if officers may be on the nominating committee.

12. Prove Equation 1.6 by plugging in the formula for $\binom{n}{k}$.

13. Give two proofs that $\binom{n}{k} = \binom{n}{n-k}$.

14. Give at least two proofs that $\binom{n}{k}\binom{k}{j} = \binom{n}{j}\binom{n-j}{k-j}$.

15. Give at least two proofs that $\binom{n}{k}\binom{n-k}{j} = \binom{n}{j}\binom{n-j}{k}$.

16. You need not compute all of rows 7, 8, and 9 of Pascal's triangle to use it to compute $\binom{9}{6}$. Figure out which entries of Pascal's triangle not given in Table 1.2 you actually need, and compute them to get $\binom{9}{6}$.

17. Explain why
$$\sum_{i=0}^{n} (-1)^i \binom{n}{i} = 0.$$

18. Apply calculus and the binomial theorem to $(1+x)^n$ to show that
$$\binom{n}{1} + 2\binom{n}{2} + 3\binom{n}{3} + \cdots = n2^{n-1}.$$

19. True or False: $\binom{n}{k} = \binom{n-2}{k-2} + \binom{n-2}{k-1} + \binom{n-2}{k}$. If true, give a proof. If false, give a value of n and k that shows the statement is false, find an analogous true statement, and prove it.


1.4 Equivalence Relations and Counting (Optional)

The Symmetry Principle
Consider again the example from Section 1.2 in which we wanted to count the number of 3-element subsets of a 4-element set. To do so, we first formed all possible lists of k = 3 distinct elements chosen from an n = 4 element set. (See Equation 1.4.) The number of lists of k distinct elements is $n^{\underline{k}} = n!/(n-k)!$. We then observed that two lists are equivalent as sets if one can be obtained by rearranging (or "permuting") the other. This process divides the lists up into classes, called equivalence classes, each of size k!. Returning to our example in Section 1.2, we noted that one such equivalence class was {134, 143, 314, 341, 413, 431}. The other three are {234, 243, 324, 342, 423, 432}, {132, 123, 312, 321, 213, 231}, and {124, 142, 214, 241, 412, 421}. The product principle told us that if q is the number of such equivalence classes, if each equivalence class has k! elements, and the entire set of lists has n!/(n−k)! elements, then we must have that qk! = n!/(n−k)!. Dividing, we solve for q and get an expression for the number of k-element subsets of an n-element set. In fact, this is how we proved Theorem 1.2.

A principle that helps in learning and understanding mathematics is that if we have a mathematical result that shows a certain symmetry, it often helps our understanding to find a proof that reflects this symmetry. We call this the Symmetry Principle.

Principle 1.6 (Symmetry Principle) If a formula has a symmetry (e.g., interchanging two variables doesn't change the result), then a proof that explains this symmetry is likely to give us additional insight into the formula.

The proof above does not account for the symmetry of the k! term and the (n−k)! term in the expression $\frac{n!}{k!(n-k)!}$. This symmetry arises because choosing a k-element subset is equivalent to choosing the (n−k)-element subset of elements we don't want.
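The symmetry just described comes from a concrete bijection: send each k-element subset to its complement. A small sketch (our own) checks that complementation pairs the k-subsets with the (n−k)-subsets one for one:

```python
# The complement map is a bijection from the k-element subsets of an
# n-element set to its (n-k)-element subsets, which explains the symmetry
# C(n, k) = C(n, n-k).

from itertools import combinations

def complement_bijection(n, k):
    """Return the k-subsets of {0,..,n-1} and their complements."""
    full = set(range(n))
    k_subsets = [frozenset(c) for c in combinations(range(n), k)]
    images = {frozenset(full - s) for s in k_subsets}  # no repeats occur
    return k_subsets, images

subs, images = complement_bijection(5, 3)
print(len(subs), len(images))   # both sides have the same size
```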
In Exercise 1.3-4, we saw that the binomial coefficient $\binom{n}{k}$ also counts the number of ways to label n objects, say with the labels "in" and "out," so that we have k "ins" and therefore n−k "outs." For each labelling, the k objects that get the label "in" are in our subset. This explains the symmetry in our formula, but it doesn't prove the formula. Here is a new proof that the number of labellings is n!/k!(n−k)! that explains the symmetry.

Suppose we have q ways to assign k blue and n−k red labels to n elements. From each labelling, we can create a number of lists, using the convention of listing the k blue elements first and the remaining n−k red elements last. For example, suppose we are considering the number of ways to label 3 elements blue (and 2 red) from a five-element set {A, B, C, D, E}. Consider the particular labelling in which A, B, and D are labelled blue and C and E are labelled red. Which lists correspond to this labelling? They are

ABDCE  ABDEC  ADBCE  ADBEC
BADCE  BADEC  BDACE  BDAEC
DABCE  DABEC  DBACE  DBAEC

that is, all lists in which A, B, and D precede C and E. Since there are 3! ways to arrange A, B, and D, and 2! ways to arrange C and E, by the product principle there are 3!2! = 12 lists in which A, B, and D precede C and E. For each of the q ways to construct a labelling, we could find a similar set of 12 lists that are associated with that labelling. Since every possible list of 5 elements will appear exactly once via this process, and since there are 5! = 120 five-element lists overall, we must have by the product principle that

$$q \cdot 12 = 120, \qquad (1.14)$$

or that q = 10. This agrees with our previous calculation of $\binom{5}{3} = 10$ for the number of ways to label 5 items so that 3 are blue and 2 are red.

Generalizing, we let q be the number of ways to label n objects with k blue labels and n−k red labels. To create the lists associated with a labelling, we list the blue elements first and then the red elements. We can mix the k blue elements among themselves, and we can mix the n−k red elements among themselves, giving us k!(n−k)! lists consisting of first the elements with a blue label followed by the elements with a red label. Since we can choose to label any k elements blue, each of our lists of n distinct elements arises from some labelling in this way. Each such list arises from only one labelling, because two different labellings will have a different first k elements in any list that corresponds to the labelling. Each such list arises only once from a given labelling, because two different lists that correspond to the same labelling differ by a permutation of the first k places or the last n−k places or both. Therefore, by the product principle, qk!(n−k)! is the number of lists we can form with n distinct objects, and this must equal n!. This gives us qk!(n−k)! = n!, and division gives us our original formula for q. Recall that our proof of the formula we had in Exercise 1.3-5 did not explain why the product of three factorials appeared in the denominator; it simply proved the formula was correct. With this idea in hand, we could now explain why the product in the denominator of the formula in Exercise 1.3-5 for the number of labellings with three labels is what it is, and could generalize this formula to four or more labels.
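The argument above can be simulated for n = 5, k = 3: group the 5! lists of {A, ..., E} by the set of their first 3 elements (the "blue" elements) and check that every class has 3!(5−3)! = 12 lists, so there are 120/12 = 10 classes. A brute-force sketch, with names of our own choosing:

```python
# Group all lists of n distinct elements by the set of their first k
# entries; every group should have k!(n-k)! lists, and the number of
# groups is then n!/(k!(n-k)!).

from itertools import permutations

def classes_by_first_k(elements, k):
    """Map each k-element 'blue' set to the lists that start with it."""
    classes = {}
    for p in permutations(elements):
        classes.setdefault(frozenset(p[:k]), []).append(p)
    return classes

classes = classes_by_first_k("ABCDE", 3)
sizes = {len(lists) for lists in classes.values()}
print(len(classes), sizes)   # 10 classes, each containing 12 lists
```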

Equivalence Relations
The process above divided the set of all n! lists of n distinct elements into classes (another word for sets) of lists. In each class, all the lists are mutually equivalent with respect to labelling with two labels. More precisely, two lists of the n objects are equivalent for defining labellings if we get one from the other by mixing the first k elements among themselves and mixing the last n−k elements among themselves. Relating objects we want to count to sets of lists (so that each object corresponds to a set of equivalent lists) is a technique we can use to solve a wide variety of counting problems. (This is another example of abstraction.)


A relationship that divides a set up into mutually exclusive classes is called an equivalence relation.8 Thus, if $S = S_1 \cup S_2 \cup \cdots \cup S_m$ and $S_i \cap S_j = \emptyset$ for all i and j with $i \neq j$, then the relationship that says any two elements $x \in S$ and $y \in S$ are equivalent if and only if they lie in the same set $S_i$ is an equivalence relation. The sets $S_i$ are called equivalence classes, and, as we noted in Section 1.1, the family $S_1, S_2, \ldots, S_m$ is called a partition of S. One partition of the set S = {a, b, c, d, e, f, g} is {a, c}, {d, g}, {b, e, f}. This partition corresponds to the following (boring) equivalence relation: a and c are equivalent, d and g are equivalent, and b, e, and f are equivalent. A slightly less boring equivalence relation is that two letters are equivalent if, typographically, their tops and bottoms are at the same height. This gives the partition {a, c, e}, {b, d}, {f}, {g}.

Exercise 1.4-1 On the set of integers between 0 and 12 inclusive, define two integers to be related if they have the same remainder on division by 3. Which numbers are related to 0? To 1? To 2? To 3? To 4? Is this relationship an equivalence relation?

The Quotient Principle
In Exercise 1.4-1 the equivalence classes had two different sizes. In the examples of counting labellings and subsets that we have seen so far, all the equivalence classes had the same size. This was very important. The principle we have been using to count subsets and labellings is given in the following theorem. We will call this principle the Quotient Principle. Theorem 1.5 (Quotient Principle) If an equivalence relation on a p-element set S has q classes each of size r, then q = p/r. Proof: By the product principle, p = qr, and so q = p/r.

Another statement of the quotient principle that uses the idea of a partition is

Principle 1.7 (Quotient Principle) If we can partition a set of size p into q blocks of size r, then q = p/r.

Returning to our example of 3 blue and 2 red labels, the set of lists has size p = 5! = 120 and each equivalence class has size r = 12, so by Theorem 1.5 the number of labellings is
$$q = \frac{p}{r} = \frac{120}{12} = 10.$$
8 The usual mathematical approach to equivalence relations, which we shall discuss in the exercises, is different from the one given here. Typically, one sees an equivalence relation defined as a reflexive (everything is related to itself), symmetric (if x is related to y, then y is related to x), and transitive (if x is related to y and y is related to z, then x is related to z) relationship on a set X. Examples of such relationships are equality (on any set), similarity (on a set of triangles), and having the same birthday as (on a set of people). The two approaches are equivalent, and we haven't found a need for the details of the other approach in what we are doing in this course.


Equivalence class counting
We now give several examples of the use of Theorem 1.5.

Exercise 1.4-2 When four people sit down at a round table to play cards, two lists of their four names are equivalent as seating charts if each person has the same person to the right in both lists.9 (The person to the right of the person in position 4 of the list is the person in position 1.) We will use Theorem 1.5 to count the number of possible ways to seat the players. We will take our set S to be the set of all 4-element permutations of the four people, i.e., the set of all lists of the four people.
(a) How many lists are equivalent to a given one?
(b) What are the lists equivalent to ABCD?
(c) Is the relationship of equivalence an equivalence relation?
(d) Use the Quotient Principle to compute the number of equivalence classes, and hence, the number of possible ways to seat the players.

Exercise 1.4-3 We wish to count the number of ways to attach n distinct beads to the corners of a regular n-gon (or string them on a necklace). We say that two lists of the n beads are equivalent if each bead is adjacent to exactly the same beads in both lists. (The first bead in the list is considered to be adjacent to the last.)
• How does this exercise differ from the previous exercise?
• How many lists are in an equivalence class?
• How many equivalence classes are there?

In Exercise 1.4-2, suppose we have named the places at the table north, east, south, and west. Given a list, we get an equivalent one in two steps. First, we observe that we have four choices of people to sit in the north position. Then there is one person who can sit to this person's right, one who can be next on the right, and one who can be the following one on the right, all determined by the original list. Thus there are exactly four lists equivalent to a given one, including that given one. The lists equivalent to ABCD are ABCD, BCDA, CDAB, and DABC.
This shows that two lists are equivalent if and only if we can get one from the other by moving everyone the same number of places to the right around the table (or we can get one from the other moving everyone the same number of places to the left around the table). From this we can see we have an equivalence relation, because each list is in one of these sets of four equivalent lists, and if two lists are equivalent, they are right or left shifts of each other, and we’ve just observed that all right and left shifts of a given list are in the same class. This means our relationship divides the set of all lists of the four names into equivalence classes each of size four. There are a total of 4! = 24 lists of four distinct names, and so by Theorem 1.5 we have 4!/4 = 3! = 6 seating arrangements. Exercise 1.4-3 is similar in many ways to Exercise 1.4-2, but there is one significant difference. We can visualize the problem as one of dividing lists of n distinct beads up into equivalence classes,
9. Think of the four places at the table as being called north, east, south, and west, or numbered 1-4. Then we get a list by starting with the person in the north position (position 1), then the person in the east position (position 2), and so on clockwise.

1.4. EQUIVALENCE RELATIONS AND COUNTING (OPTIONAL)


but now two lists are equivalent if each bead is adjacent to exactly the same beads in both of them. Suppose we number the vertices of our polygon as 1 through n clockwise. Given a list, we can count the equivalent lists as follows. We have n choices for which bead to put in position 1. Then either of the two beads adjacent to it10 in the given list can go in position 2. But now, only one bead can go in position 3, because the other bead adjacent to position 2 is already in position 1. We can continue in this way to fill in the rest of the list.

For example, with n = 4, the lists ABCD, ADCB, BCDA, BADC, CDAB, CBAD, DABC, and DCBA are all equivalent. Notice the first, third, fifth and seventh lists are obtained by shifting the beads around the polygon, as are the second, fourth, sixth and eighth (though in the opposite direction). Also note that the eighth list is the reversal of the first, the third is the reversal of the second, and so on. Rotating a necklace in space corresponds to shifting the letters in the list. Flipping a necklace over in space corresponds to reversing the order of a list. There will always be 2n lists we can get by shifting and reversing shifts of a list. The lists equivalent to a given one consist of everything we can get from the given list by rotations and reversals.

Thus the relationship of every bead being adjacent to the same beads divides the set of lists of beads into disjoint sets. These sets, which have size 2n, are the equivalence classes of our equivalence relation. Since there are n! lists, Theorem 1.5 says there are

n!/(2n) = (n − 1)!/2

bead arrangements.

Multisets
Sometimes when we think about choosing elements from a set, we want to be able to choose an element more than once. For example, the set of letters of the word "roof" is {f, o, r}. However it is often more useful to think of the multiset of letters, which in this case is {{f, o, o, r}}. We use the double brackets to distinguish a multiset from a set. We can specify a multiset chosen from a set S by saying how many times each of its elements occurs. If S is the set of English letters, the "multiplicity" function for roof is given by m(f) = 1, m(o) = 2, m(r) = 1, and m(letter) = 0 for every other letter. In a multiset, order is not important; that is, the multiset {{r, o, f, o}} is equivalent to the multiset {{f, o, o, r}}. We know that this is the case because they each have the same multiplicity function. We would like to say that the size of {{f, o, o, r}} is 4, so we define the size of a multiset to be the sum of the multiplicities of its elements.

Exercise 1.4-4 Explain how placing k identical books onto the n shelves of a bookcase can be thought of as giving us a k-element multiset of the shelves of the bookcase. Explain how distributing k identical apples to n children can be thought of as giving us a k-element multiset of the children.

In Exercise 1.4-4 we can think of the multiplicity of a bookshelf as the number of books it gets and the multiplicity of a child as the number of apples the child gets. In fact, this idea of distribution of identical objects to distinct recipients gives a great mental model for a multiset chosen from a set S. Namely, to determine a k-element multiset chosen from a set S, we "distribute" k identical objects to the elements of S and the number of objects an element x gets is the multiplicity of x.
10. Remember, the first and last bead are considered adjacent, so they have two beads adjacent to them.


CHAPTER 1. COUNTING

Notice that it makes no sense to ask for the number of multisets we may choose from a set with n elements, because {{A}}, {{A, A}}, {{A, A, A}}, and so on are infinitely many multisets chosen from the set {A}. However it does make sense to ask for the number of k-element multisets we can choose from an n-element set. What strategy could we employ to figure out this number?

To count k-element subsets, we first counted k-element permutations, and then divided by the number of different permutations of the same set. Here we need an analog of permutations that allows repeats. A natural idea is to consider lists with repeats. After all, one way to describe a multiset is to list it, and there could be many different orders for listing a multiset. However the two-element multiset {{A, A}} can be listed in just one way, while the two-element multiset {{A, B}} can be listed in two ways.

When we counted k-element subsets of an n-element set by using the quotient principle, it was essential that each k-element subset corresponded to the same number (namely k!) of permutations (lists), because we were using the reasoning behind the quotient principle to do our counting. So if we hope to use similar reasoning, we can't apply it to lists with repeats, because different k-element multisets can correspond to different numbers of lists.

Suppose, however, we could count the number of ways to arrange k distinct books on the n shelves of a bookcase. We can still think of the multiplicity of a shelf as being the number of books on it. However, many different arrangements of distinct books will give us the same multiplicity function. In fact, any way of mixing the books up among themselves that does not change the number of books on each shelf will give us the same multiplicities. But the number of ways to mix the books up among themselves is the number of permutations of the books, namely k!.
Thus it looks like we have an equivalence relation on the arrangements of distinct books on a bookshelf such that

1. Each equivalence class has k! elements, and
2. There is a bijection between the equivalence classes and k-element multisets of the n shelves.

Thus if we can compute the number of ways to arrange k distinct books on the n shelves of a bookcase, we should be able to apply the quotient principle to compute the number of k-element multisets of an n-element set.

The bookcase arrangement problem.
Exercise 1.4-5 We have k books to arrange on the n shelves of a bookcase. The order in which the books appear on a shelf matters, and each shelf can hold all the books. We will assume that as the books are placed on the shelves they are moved as far to the left as they will go, so that all that matters is the order in which the books appear and not the actual places where the books sit. When book i is placed on a shelf, it can go between two books already there or to the left or right of all the books on that shelf.

(a) Since the books are distinct, we may think of a first, second, third, etc. book. In how many ways may we place the first book on the shelves?
(b) Once the first book has been placed, in how many ways may the second book be placed?
(c) Once the first two books have been placed, in how many ways may the third book be placed?

(d) Once i − 1 books have been placed, book i can be placed on any of the shelves to the left of any of the books already there, but there are some additional ways in which it may be placed. In how many ways in total may book i be placed?
(e) In how many ways may k distinct books be placed on n shelves in accordance with the constraints above?

Exercise 1.4-6 How many k-element multisets can we choose from an n-element set?


In Exercise 1.4-5 there are n places where the first book can go, namely on the left side of any shelf. Then the next book can go in any of the n places on the far left side of any shelf, or it can go to the right of book one. Thus there are n + 1 places where book 2 can go. At first, placing book three appears to be more complicated, because we could create two different patterns by placing the first two books. However book 3 could go to the far left of any shelf or to the immediate right of any of the books already there. (Notice that if book 2 and book 1 are on shelf 7 in that order, putting book 3 to the immediate right of book 2 means putting it between book 2 and book 1.) Thus in any case, there are n + 2 ways to place book 3. Similarly, once i − 1 books have been placed, there are n + i − 1 places where we can place book i. It can go at the far left of any of the n shelves or to the immediate right of any of the i − 1 books that we have already placed. Thus the number of ways to place k distinct books is

n(n + 1)(n + 2) · · · (n + k − 1) = ∏_{i=1}^{k} (n + i − 1) = ∏_{j=0}^{k−1} (n + j) = (n + k − 1)!/(n − 1)!.   (1.15)

The specific product that arose in Equation 1.15 is called a rising factorial power. It has a notation (also introduced by Don Knuth) analogous to that for the falling factorial notation. Namely, we write

n^{\overline{k}} = n(n + 1) · · · (n + k − 1) = ∏_{i=1}^{k} (n + i − 1).

This is the product of k successive numbers beginning with n.

The number of k-element multisets of an n-element set.
We can apply the formula of Exercise 1.4-5 to solve Exercise 1.4-6. We define two bookcase arrangements of k books on n shelves to be equivalent if we get one from the other by permuting the books among themselves. Thus if two arrangements put the same number of books on each shelf they are put into the same class by this relationship. On the other hand, if two arrangements put a different number of books on at least one shelf, they are not equivalent, and therefore they are put into different classes by this relationship. Thus the classes into which this relationship divides the arrangements are disjoint and partition the set of all arrangements. Each class has k! arrangements in it. The set of all arrangements has n^{\overline{k}} arrangements in it. This leads to the following theorem.

Theorem 1.6 The number of k-element multisets chosen from an n-element set is

n^{\overline{k}}/k! = \binom{n+k-1}{k}.



Proof: The relationship on bookcase arrangements that two arrangements are equivalent if and only if we get one from the other by permuting the books is an equivalence relation. The set of all arrangements has n^{\overline{k}} elements, and the number of elements in an equivalence class is k!. By the quotient principle, the number of equivalence classes is n^{\overline{k}}/k!. There is a bijection between equivalence classes of bookcase arrangements with k books and multisets with k elements. The second equality follows from the definition of binomial coefficients.

The number of k-element multisets chosen from an n-element set is sometimes called the number of combinations with repetitions of n elements taken k at a time. The right-hand side of the formula is a binomial coefficient, so it is natural to ask whether there is a way to interpret choosing a k-element multiset from an n-element set as choosing a k-element subset of some different (n + k − 1)-element set. This illustrates an important principle. When we have a quantity that turns out to be equal to a binomial coefficient, it helps our understanding to interpret it as counting the number of ways to choose a subset of an appropriate size from a set of an appropriate size. We explore this idea for multisets in Problem 8 in this section.

Using the quotient principle to explain a quotient
Since the last expression in Equation 1.15 is a quotient of two factorials, it is natural to ask whether it is counting equivalence classes of an equivalence relation. If so, the set on which the relation is defined has size (n + k − 1)!. Thus it might be all lists or permutations of n + k − 1 distinct objects. The size of an equivalence class is (n − 1)!, and so what makes two lists equivalent might be permuting n − 1 of the objects among themselves. Said differently, the quotient principle suggests that we look for an explanation of the formula involving lists of n + k − 1 objects, of which n − 1 are identical, so that the remaining k elements are distinct. Can we find such an interpretation?

Exercise 1.4-7 In how many ways may we arrange k distinct books and n − 1 identical blocks of wood in a straight line?

Exercise 1.4-8 How does Exercise 1.4-7 relate to arranging books on the shelves of a bookcase?

In Exercise 1.4-7, if we tape numbers to the wood so that the pieces of wood are distinguishable, there are (n + k − 1)! arrangements of the books and wood. But since the pieces of wood are actually indistinguishable, (n − 1)! of these arrangements are equivalent. Thus by the quotient principle there are (n + k − 1)!/(n − 1)! arrangements. Such an arrangement allows us to put the books on the shelves as follows: put all the books before the first piece of wood on shelf 1, all the books between the first and second on shelf 2, and so on until you put all the books after the last piece of wood on shelf n.

Important Concepts, Formulas, and Theorems
1. Symmetry Principle. If we have a mathematical result that shows a certain symmetry, it often helps our understanding to find a proof that reflects this symmetry.

2. Partition. Given a set S of items, a partition of S consists of m sets S1, S2, . . . , Sm, sometimes called blocks, so that S1 ∪ S2 ∪ · · · ∪ Sm = S and for each i and j with i ≠ j, Si ∩ Sj = ∅.



3. Equivalence relation. Equivalence class. A relationship that partitions a set up into mutually exclusive classes is called an equivalence relation. Thus if S = S1 ∪ S2 ∪ . . . ∪ Sm is a partition of S, the relationship that says any two elements x ∈ S and y ∈ S are equivalent if and only if they lie in the same set Si is an equivalence relation. The sets Si are called equivalence classes.

4. Quotient principle. The quotient principle says that if we can partition a set of p objects up into q classes of size r, then q = p/r. Equivalently, if an equivalence relation on a set of size p has q equivalence classes of size r, then q = p/r. The quotient principle is frequently used for counting the number of equivalence classes of an equivalence relation. When we have a quantity that is a quotient of two others, it is often helpful to our understanding to find a way to use the quotient principle to explain why we have this quotient.

5. Multiset. A multiset is similar to a set except that each item can appear multiple times. We can specify a multiset chosen from a set S by saying how many times each of its elements occurs.

6. Choosing k-element multisets. The number of k-element multisets that can be chosen from an n-element set is

(n + k − 1)!/(k!(n − 1)!) = \binom{n+k-1}{k}.

This is sometimes called the formula for "combinations with repetitions."

7. Interpreting binomial coefficients. When we have a quantity that turns out to be a binomial coefficient (or some other formula we recognize) it is often helpful to our understanding to try to interpret the quantity as the result of choosing a subset of a set (or doing whatever the formula that we recognize counts).

Problems
1. In how many ways may n people be seated around a round table? (Remember, two seating arrangements around a round table are equivalent if everyone is in the same position relative to everyone else in both arrangements.)

2. In how many ways may we embroider n circles of different colors in a row (lengthwise, equally spaced, and centered halfway between the top and bottom edges) on a scarf (as follows)?

[Figure: a scarf with a row of equally spaced circles embroidered across it]

3. Use binomial coefficients to determine in how many ways three identical red apples and two identical golden apples may be lined up in a line. Use equivalence class counting (in particular, the quotient principle) to determine the same number.


4. Use multisets to determine the number of ways to pass out k identical apples to n children.

5. In how many ways may n men and n women be seated around a table alternating gender? (Use equivalence class counting!!)

6. In how many ways may we pass out k identical apples to n children if each child must get at least one apple?

7. In how many ways may we place k distinct books on n shelves of a bookcase (all books pushed to the left as far as possible) if there must be at least one book on each shelf?

8. The formula for the number of multisets is (n + k − 1)! divided by a product of two other factorials. We seek an explanation using the quotient principle of why this counts multisets. The formula for the number of multisets is also a binomial coefficient, so it should have an interpretation involving choosing k items from n + k − 1 items. The parts of the problem that follow lead us to these explanations.

(a) In how many ways may we place k red checkers and n − 1 black checkers in a row?
(b) How can we relate the number of ways of placing k red checkers and n − 1 black checkers in a row to the number of k-element multisets of an n-element set, say the set {1, 2, . . . , n} to be specific?
(c) How can we relate the choice of k items out of n + k − 1 items to the placement of red and black checkers as in the previous parts of this problem?

9. How many solutions to the equation x1 + x2 + · · · + xn = k are there with each xi ≥ 0?

10. How many solutions to the equation x1 + x2 + · · · + xn = k are there with each xi > 0?

11. In how many ways may n red checkers and n + 1 black checkers be arranged in a circle? (This number is a famous number called a Catalan number.)

12. A standard notation for the number of partitions of an n-element set into k classes is S(n, k). S(0, 0) is 1, because technically the empty family of subsets of the empty set is a partition of the empty set, and S(n, 0) is 0 for n > 0, because there are no partitions of a nonempty set into no parts. S(1, 1) is 1.

(a) Explain why S(n, n) is 1 for all n > 0. Explain why S(n, 1) is 1 for all n > 0.
(b) Explain why, for 1 < k < n, S(n, k) = S(n − 1, k − 1) + kS(n − 1, k).
(c) Make a table like our first table of binomial coefficients that shows the values of S(n, k) for values of n and k ranging from 1 to 6.

13. You are given a square, which can be rotated 90 degrees at a time (i.e. the square has four orientations). You are also given two red checkers and two black checkers, and you will place each checker on one corner of the square. How many lists of four letters, two of which are R and two of which are B, are there? Once you choose a starting place on the square, each list represents placing checkers on the square in clockwise order. Consider two lists to be equivalent if they represent the same arrangement of checkers at the corners of the square, that is, if one arrangement can be rotated to create the other one. Write down the equivalence classes of this equivalence relation. Why can't we apply Theorem 1.5 to compute the number of equivalence classes?



14. The terms "reflexive", "symmetric" and "transitive" were defined in Footnote 2. Which of these properties is satisfied by the relationship of "greater than"? Which of these properties is satisfied by the relationship of "is a brother of"? Which of these properties is satisfied by "is a sibling of"? (You are not considered to be your own brother or your own sibling.) How about the relationship "is either a sibling of or is"?

(a) Explain why an equivalence relation (as we have defined it) is a reflexive, symmetric, and transitive relationship.
(b) Suppose we have a reflexive, symmetric, and transitive relationship defined on a set S. For each x in S, let Sx = {y | y is related to x}. Show that two such sets Sx and Sy are either disjoint or identical. Explain why this means that our relationship is an equivalence relation (as defined in this section of the notes, not as defined in the footnote).
(c) Parts (a) and (b) of this problem prove that a relationship is an equivalence relation if and only if it is symmetric, reflexive, and transitive. Explain why. (A short answer is most appropriate here.)

15. Consider the following C++ function to compute

int pascal(int n, int k) {
    if (n < k) {
        cout

For what values of m is m^2 > m a true statement and for what values is it a false statement? Since we have not specified a universe, your answer will depend on what universe you choose to use.

If you used the universe of positive integers, the statement is true for every value of m but 1; if you used the real numbers, the statement is true for every value of m except for those in the closed interval [0, 1]. There are really two points to make here. First, a statement about a variable can often be interpreted as a statement about more than one universe, and so to make it unambiguous, the universe must be clearly stated. Second, a statement about a variable can be true for some values of a variable and false for others.
4. Note that to declare a variable x as an integer in, say, a C program does not mean the same thing as saying that x is an integer. In a C program, an integer may really be a 32-bit integer, and so it is limited to values between −2^31 and 2^31 − 1. Similarly a real has some fixed precision, and hence a real variable y may not be able to take on a value of, say, 10^−985.

3.2. VARIABLES AND QUANTIFIERS


Quantifiers
In contrast, the statement

For every integer m, m^2 > m.   (3.1)

is false; we do not need to qualify our answer by saying that it is true some of the time and false at other times. To determine whether Statement 3.1 is true or false, we could substitute various values for m into the simpler statement m^2 > m, and decide, for each of these values, whether the statement m^2 > m is true or false. Doing so, we see that the statement m^2 > m is true for values such as m = −3 or m = 9, but false for m = 0 or m = 1. Thus it is not the case that for every integer m, m^2 > m, so Statement 3.1 is false. It is false as a statement because it is an assertion that the simpler statement m^2 > m holds for each integer value of m we substitute in.

A phrase like "for every integer m" that converts a symbolic statement about potentially any member of our universe into a statement about the universe instead is called a quantifier. A quantifier that asserts a statement about a variable is true for every value of the variable in its universe is called a universal quantifier. The previous example illustrates a very important point. If a statement asserts something for every value of a variable, then to show the statement is false, we need only give one value of the variable for which the assertion is untrue.

Another example of a quantifier is the phrase "There is an integer m" in the sentence

There is an integer m such that m^2 > m.

This statement is also about the universe of integers, and as such it is true—there are plenty of integers m we can substitute into the symbolic statement m^2 > m to make it true. This is an example of an "existential quantifier." An existential quantifier asserts that a certain element of our universe exists. A second important point similar to the one we made above is: To show that a statement with an existential quantifier is true, we need only exhibit one value of the variable being quantified that makes the statement true.
As the more complex statement

For every pair of positive integers m and n, there are nonnegative integers q and r with 0 ≤ r < n such that m = qn + r,

shows, statements of mathematical interest abound with quantifiers. Recall the following definition of the "big-O" notation you have probably used in earlier computer science courses:

Definition 3.2 We say that f(x) = O(g(x)) if there are positive numbers c and n0 such that f(x) ≤ cg(x) for every x > n0.

Exercise 3.2-2 Quantification is present in our everyday language as well. The sentences "Every child wants a pony" and "No child wants a toothache" are two different examples of quantified sentences. Give ten examples of everyday sentences that use quantifiers, but use different words to indicate the quantification.


CHAPTER 3. REFLECTIONS ON LOGIC AND PROOF

Exercise 3.2-3 Convert the sentence “No child wants a toothache” into a sentence of the form “It is not the case that...” Find an existential quantifier in your sentence. Exercise 3.2-4 What would you have to do to show that a statement about one variable with an existential quantifier is false? Correspondingly, what would you have to do to show that a statement about one variable with a universal quantifier is true?

As Exercise 3.2-2 points out, English has many different ways to express quantifiers. For example, the sentences "All hammers are tools", "Each sandwich is delicious", "No one in their right mind would do that", "Somebody loves me", and "Yes Virginia, there is a Santa Claus" all contain quantifiers. For Exercise 3.2-3, we can say "It is not the case that there is a child who wants a toothache." Our quantifier is the phrase "there is."

To show that a statement about one variable with an existential quantifier is false, we have to show that every element of the universe makes the statement (such as m^2 > m) false. Thus to show that the statement "There is an x in [0, 1] with x^2 > x" is false, we have to show that every x in the interval makes the statement x^2 > x false. Similarly, to show that a statement with a universal quantifier is true, we have to show that the statement being quantified is true for every member of our universe. We will give more details about how to show a statement about a variable is true or false for every member of our universe later in this section. Mathematical statements of theorems, lemmas, and corollaries often have quantifiers. For example in Lemma 2.5 the phrase "for any" is a quantifier, and in Corollary 2.6 the phrase "there is" is a quantifier.

Standard notation for quantification
Each of the many variants of language that describe quantification describe one of two situations: A quantified statement about a variable x asserts either

• that the statement is true for all x in the universe, or
• that there exists an x in the universe that makes the statement true.

All quantified statements have one of these two forms. We use the standard shorthand of ∀ for the phrase "for all" and the standard shorthand of ∃ for the phrase "there exists." We also adopt the convention that we parenthesize the expression that is subject to the quantification. For example, using Z to stand for the universe of all integers, we write ∀n ∈ Z (n^2 ≥ n) as a shorthand for the statement "For all integers n, n^2 ≥ n." It is perhaps more natural to read the notation as "For all n in Z, n^2 ≥ n," which is how we recommend reading the symbolism. We similarly use ∃n ∈ Z (n^2 > n) to stand for "There exists an n in Z such that n^2 > n." Notice that in order to cast our symbolic form of an existence statement into grammatical English we have included the supplementary word "an" and the supplementary phrase "such that." People often leave out the "an" as they



read an existence statement, but rarely leave out the "such that." Such supplementary language is not needed with ∀.

As another example, we rewrite the definition of the "Big Oh" notation with these symbols. We use the letter R to stand for the universe of real numbers, and the symbol R+ to stand for the universe of positive real numbers.

f = O(g) means that ∃c ∈ R+ (∃n0 ∈ R+ (∀x ∈ R(x > n0 ⇒ f(x) ≤ cg(x))))

We would read this literally as

f is big Oh of g means that there exists a c in R+ such that there exists an n0 in R+ such that for all x in R, if x > n0, then f(x) ≤ cg(x).

Clearly this has the same meaning (when we translate it into more idiomatic English) as

f is big Oh of g means that there exist a c in R+ and an n0 in R+ such that for all real numbers x > n0, f(x) ≤ cg(x).

This statement is identical to the definition of "big Oh" that we gave earlier in Definition 3.2, except for more precision as to what c and n0 actually are.

Exercise 3.2-5 How would you rewrite Euclid's division theorem, Theorem 2.12, using the shorthand notation we have introduced for quantifiers? Use Z+ to stand for the positive integers and N to stand for the nonnegative integers.

We can rewrite Euclid’s division theorem as ∀m ∈ N (∀n ∈ Z + (∃q ∈ N (∃r ∈ N ((r < n) ∧ (m = qn + r))))).

Statements about variables
To talk about statements about variables, we need a notation to use for such statements. For example, we can use p(n) to stand for the statement n^2 > n. Now, we can say that p(4) and p(−3) are true, while p(1) and p(.5) are false. In effect we are introducing variables that stand for statements about (other) variables! We typically use symbols like p(n), q(x), etc. to stand for statements about a variable n or x. Then the statement "For all x in U, p(x)" can be written as ∀x ∈ U (p(x)) and the statement "There exists an n in U such that q(n)" can be written as ∃n ∈ U (q(n)). Sometimes we have statements about more than one variable; for example, our definition of "big Oh" notation had the form ∃c(∃n0 (∀x(p(c, n0, x)))), where p(c, n0, x) is (x > n0 ⇒ f(x) ≤ cg(x)). (We have left out mention of the universes for our variables here to emphasize the form of the statement.)

Exercise 3.2-6 Rewrite Euclid's division theorem, using the notation above for statements about variables. Leave out the references to universes so that you can see clearly the order in which the quantifiers occur.

The form of Euclid's division theorem is ∀m(∀n(∃q(∃r(p(m, n, q, r))))).



Rewriting statements to encompass larger universes
It is sometimes useful to rewrite a quantified statement so that the universe is larger, and the statement itself serves to limit the scope of the universe.

Exercise 3.2-7 Let R stand for the real numbers and R+ stand for the positive real numbers. Consider the following two statements:

a) ∀x ∈ R+ (x > 1)
b) ∃x ∈ R+ (x > 1)

Rewrite these statements so that the universe is all the real numbers, but the statements say the same thing in everyday English that they did before.

For Exercise 3.2-7, there are potentially many ways to rewrite the statements. Two particularly simple ways are ∀x ∈ R(x > 0 ⇒ x > 1) and ∃x ∈ R(x > 0 ∧ x > 1). Notice that we translated one of these statements with "implies" and one with "and." We can state this rule as a general theorem:

Theorem 3.2 Let U1 be a universe, and let U2 be another universe with U1 ⊆ U2. Suppose that q(x) is a statement such that

U1 = {x | q(x) is true}.   (3.2)

Then if p(x) is a statement about U2, it may also be interpreted as a statement about U1, and

(a) ∀x ∈ U1 (p(x)) is equivalent to ∀x ∈ U2 (q(x) ⇒ p(x)).
(b) ∃x ∈ U1 (p(x)) is equivalent to ∃x ∈ U2 (q(x) ∧ p(x)).

Proof: By Equation 3.2 the statement q(x) must be true for all x ∈ U1 and false for all x in U2 but not U1. To prove part (a) we must show that ∀x ∈ U1 (p(x)) is true in exactly the same cases as the statement ∀x ∈ U2 (q(x) ⇒ p(x)). For this purpose, suppose first that ∀x ∈ U1 (p(x)) is true. Then p(x) is true for all x in U1. Therefore, by the truth table for "implies" and our remark about Equation 3.2, the statement ∀x ∈ U2 (q(x) ⇒ p(x)) is true. Now suppose ∀x ∈ U1 (p(x)) is false. Then there exists an x in U1 such that p(x) is false. Then by the truth table for "implies," the statement ∀x ∈ U2 (q(x) ⇒ p(x)) is false. Thus the statement ∀x ∈ U1 (p(x)) is true if and only if the statement ∀x ∈ U2 (q(x) ⇒ p(x)) is true. Therefore the two statements are true in exactly the same cases. Part (a) of the theorem follows.

Similarly, for Part (b), we observe that if ∃x ∈ U1 (p(x)) is true, then for some x′ ∈ U1, p(x′) is true. For that x′, q(x′) is also true, and hence p(x′) ∧ q(x′) is true, so that ∃x ∈ U2 (q(x) ∧ p(x)) is true as well. On the other hand, if ∃x ∈ U1 (p(x)) is false, then no x ∈ U1 has p(x) true. Therefore by the truth table for "and," q(x) ∧ p(x) won't be true for any x ∈ U2 either. Thus the two statements in Part (b) are true in exactly the same cases and so are equivalent.



Proving quantified statements true or false
Exercise 3.2-8 Let R stand for the real numbers and R+ stand for the positive real numbers. For each of the following statements, say whether it is true or false and why.

a) ∀x ∈ R+ (x > 1)
b) ∃x ∈ R+ (x > 1)
c) ∀x ∈ R(∃y ∈ R(y > x))
d) ∀x ∈ R(∀y ∈ R(y > x))
e) ∃x ∈ R(x ≥ 0 ∧ ∀y ∈ R+ (y > x))

In Exercise 3.2-8, since 0.5 is not greater than 1, statement (a) is false. However, since 2 > 1, statement (b) is true. Statement (c) says that for each real number x there is a real number y bigger than x, which we know is true. Statement (d) says that every y in R is larger than every x in R, and so it is false. Statement (e) says that there is a nonnegative number x such that every positive y is larger than x, which is true because x = 0 fills the bill. We can summarize what we know about the meaning of quantified statements as follows.

Principle 3.2 (The meaning of quantified statements)

• The statement ∃x ∈ U (p(x)) is true if there is at least one value of x in U for which the statement p(x) is true.
• The statement ∃x ∈ U (p(x)) is false if there is no x ∈ U for which p(x) is true.
• The statement ∀x ∈ U (p(x)) is true if p(x) is true for each value of x in U.
• The statement ∀x ∈ U (p(x)) is false if p(x) is false for at least one value of x in U.
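Principle 3.2 has a direct computational analogue: over a finite universe, Python's built-in `any` is the existential quantifier and `all` is the universal one. Of course a statement about all of R cannot be checked exhaustively; the finite sample below is our own illustration of the principle, not a proof of statements (a) and (b):

```python
# any() plays the role of "there exists"; all() plays the role of "for all".
# The sample universe here is a hypothetical finite stand-in for R+.
sample_R_plus = [0.5, 1.0, 2.0, 3.5]

# Statement (a), "for all x in R+, x > 1", fails on this sample:
# 0.5 is a counterexample, just as in the text.
assert not all(x > 1 for x in sample_R_plus)

# Statement (b), "there exists x in R+ with x > 1", holds:
# the witness 2 is in the sample.
assert any(x > 1 for x in sample_R_plus)
```

Note that a single counterexample settles a “for all” statement and a single witness settles a “there exists” statement, which is exactly the asymmetry in Principle 3.2.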

Negation of quantified statements
An interesting connection between ∀ and ∃ arises from the negation of statements.

Exercise 3.2-9 What does the statement “It is not the case that for all integers n, n² > 0” mean?

From our knowledge of English we see that since the statement ¬∀n ∈ Z(n² > 0) asserts that it is not the case that, for all integers n, we have n² > 0, there must be some integer n for which n² > 0 fails. In other words, it says there is some integer n such that n² ≤ 0. Thus the negation of our “for all” statement is a “there exists” statement. We can make this idea more precise by recalling the notion of equivalence of statements. We have said that two symbolic statements are equivalent if they are true in exactly the same cases. By considering the case when p(x) is true for all x ∈ U (we call this case “always true”) and the case when p(x) is false for at least one x ∈ U (we call this case “not always true”), we can analyze the equivalence. The theorem that follows, which formalizes the example above in which p(x) was the statement x² > 0, is proved by dividing these cases into two possibilities.


CHAPTER 3. REFLECTIONS ON LOGIC AND PROOF

Theorem 3.3 The statements ¬∀x ∈ U (p(x)) and ∃x ∈ U (¬p(x)) are equivalent.

Proof: Consider the following table, which we have set up much like a truth table, except that the relevant cases are not determined by whether p(x) is true or false, but by whether p(x) is true for all x in the universe U or not.

p(x)             ¬p(x)             ∀x ∈ U (p(x))   ¬∀x ∈ U (p(x))   ∃x ∈ U (¬p(x))
always true      always false      true            false            false
not always true  not always false  false           true             true

Since the last two columns are identical, the theorem holds.

Corollary 3.4 The statements ¬∃x ∈ U (q(x)) and ∀x ∈ U (¬q(x)) are equivalent.

Proof: Since the two statements in Theorem 3.3 are equivalent, their negations are also equivalent. We then substitute ¬q(x) for p(x) to prove the corollary.

Put another way, to negate a quantified statement, you switch the quantifier and “push” the negation inside. To deal with the negation of more complicated statements, we simply take them one quantifier at a time. Recall Definition 3.2, the definition of big Oh notation: f(x) = O(g(x)) if

∃c ∈ R+ (∃n0 ∈ R+ (∀x ∈ R(x > n0 ⇒ f(x) ≤ cg(x)))).

What does it mean to say that f(x) is not O(g(x))? First we can write f(x) ≠ O(g(x)) if

¬∃c ∈ R+ (∃n0 ∈ R+ (∀x ∈ R(x > n0 ⇒ f(x) ≤ cg(x)))).

After one application of Corollary 3.4 we get f(x) ≠ O(g(x)) if

∀c ∈ R+ (¬∃n0 ∈ R+ (∀x ∈ R(x > n0 ⇒ f(x) ≤ cg(x)))).

After another application of Corollary 3.4 we obtain f(x) ≠ O(g(x)) if

∀c ∈ R+ (∀n0 ∈ R+ (¬∀x ∈ R(x > n0 ⇒ f(x) ≤ cg(x)))).

Now we apply Theorem 3.3 and obtain f(x) ≠ O(g(x)) if

∀c ∈ R+ (∀n0 ∈ R+ (∃x ∈ R(¬(x > n0 ⇒ f(x) ≤ cg(x))))).

Now ¬(p ⇒ q) is equivalent to p ∧ ¬q, so we can write f(x) ≠ O(g(x)) if

∀c ∈ R+ (∀n0 ∈ R+ (∃x ∈ R((x > n0) ∧ (f(x) > cg(x))))).

Thus f(x) is not O(g(x)) if for every c in R+ and every n0 in R+, there is an x in R such that x > n0 and f(x) > cg(x). In our next exercise, we use the “Big Theta” notation defined as follows:

Definition 3.3 f(x) = Θ(g(x)) means that f(x) = O(g(x)) and g(x) = O(f(x)).

Exercise 3.2-10 Express ¬(f(x) = Θ(g(x))) in terms similar to those we used to describe f(x) = O(g(x)).

Exercise 3.2-11 Suppose the universe for a statement p(x) is the integers from 1 to 10. Express the statement ∀x(p(x)) without any quantifiers. Express the negation in terms of ¬p without any quantifiers. Discuss how negation of “for all” and “there exists” statements corresponds to DeMorgan’s Law.
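Before working the exercises, it may help to see the negated big-O condition in action. The sketch below (ours, with f(x) = x² and g(x) = x chosen for illustration) produces, for any given c and n0, an explicit witness x with x > n0 and f(x) > c·g(x), which is exactly what the condition demands:

```python
# For f(x) = x^2 and g(x) = x, f is not O(g): given any c and n0,
# we can name a witness x > n0 with x*x > c*x.
def witness(c, n0):
    # Any x larger than both c and n0 works, since x*x > c*x whenever x > c.
    return max(c, n0) + 1

# Spot-check the witness against a few (c, n0) pairs.
for c in [1, 10, 1000]:
    for n0 in [1, 5, 10**6]:
        x = witness(c, n0)
        assert x > n0 and x * x > c * x
```

The program cannot, of course, range over every c and n0; it is the one-line algebraic fact inside `witness` (x > c implies x² > cx) that carries the universal claim.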


By DeMorgan’s law, ¬(f = Θ(g)) means ¬(f = O(g)) ∨ ¬(g = O(f)). Thus ¬(f = Θ(g)) means that either for every c and n0 in R+ there is an x in R with x > n0 and f(x) > cg(x), or for every c and n0 in R+ there is an x in R with x > n0 and g(x) > cf(x) (or both).

For Exercise 3.2-11 we see that ∀x(p(x)) is simply

p(1) ∧ p(2) ∧ p(3) ∧ p(4) ∧ p(5) ∧ p(6) ∧ p(7) ∧ p(8) ∧ p(9) ∧ p(10).

By DeMorgan’s law the negation of this statement is

¬p(1) ∨ ¬p(2) ∨ ¬p(3) ∨ ¬p(4) ∨ ¬p(5) ∨ ¬p(6) ∨ ¬p(7) ∨ ¬p(8) ∨ ¬p(9) ∨ ¬p(10).

Thus the relationship that negation gives between “for all” and “there exists” statements is the extension of DeMorgan’s law from a finite number of statements to potentially infinitely many statements about a potentially infinite universe.
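The Exercise 3.2-11 expansion can be checked directly. In the sketch below (our illustration; p(x) is taken to be “x is prime,” but any statement would do), the quantified statement, its explicit ten-term conjunction, and the DeMorgan negation all agree:

```python
# Over the universe {1,...,10}, compare the quantified statement with its
# explicit and/or expansion. Here p(x) is "x is prime" (trial division).
def p(x):
    return x > 1 and all(x % d for d in range(2, x))

forall_p  = all(p(x) for x in range(1, 11))
expansion = p(1) and p(2) and p(3) and p(4) and p(5) and \
            p(6) and p(7) and p(8) and p(9) and p(10)
assert forall_p == expansion          # forall is the 10-term conjunction

neg_expansion = (not p(1)) or (not p(2)) or (not p(3)) or (not p(4)) or \
                (not p(5)) or (not p(6)) or (not p(7)) or (not p(8)) or \
                (not p(9)) or (not p(10))
assert (not forall_p) == neg_expansion  # DeMorgan: not-forall is the 10-term
                                        # disjunction of negations, i.e. "exists not"
```

Replacing `all` by `any` and ∧ by ∨ checks the dual statement of Corollary 3.4 in the same way.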

Implicit quantification
Exercise 3.2-12 Are there any quantifiers in the statement “The sum of even integers is even?” It is an elementary fact about numbers that the sum of even integers is even. Another way to say this is that if m and n are even, then m + n is even. If p(n) stands for the statement “n is even,” then this last sentence translates to p(m) ∧ p(n) ⇒ p(m + n). From the logical form of the statement, we see that our variables are free, so we could substitute various integers in for m and n to see whether the statement is true. But in Exercise 3.2-12, we said we were stating a more general fact about the integers. What we meant to say is that for every pair of integers m and n, if m and n are even, then m + n is even. In symbols, using p(k) for “k is even,” we have ∀m ∈ Z(∀n ∈ Z(p(m) ∧ p(n) ⇒ p(m + n))). This way of representing the statement captures the meaning we originally intended. This is one of the reasons that mathematical statements and their proofs sometimes seem confusing—just as in English, sentences in mathematics have to be interpreted in context. Since mathematics has to be written in some natural language, and since context is used to remove ambiguity in natural language, so must context be used to remove ambiguity from mathematical statements made in natural language. In fact, we frequently rely on context in writing mathematical statements with implicit quantifiers because, in context, it makes the statements easier to read. For example, in Lemma 2.8 we said

The equation


a ·n x = 1 has a solution in Zn if and only if there exist integers x and y such that ax + ny = 1. In context it was clear that the a we were talking about was an arbitrary member of Zn. It would simply have made the statement read more clumsily if we had said

For every a ∈ Zn, the equation a ·n x = 1 has a solution in Zn if and only if there exist integers x and y such that ax + ny = 1.

On the other hand, we were making a transition from talking about Zn to talking about the integers, so it was important for us to include the quantified statement “there exist integers x and y such that ax + ny = 1.” More recently, in Theorem 3.3, we also did not feel it was necessary to say “For all universes U and for all statements p about U” at the beginning of the theorem. We felt the theorem would be easier to read if we kept those quantifiers implicit and let the reader (not necessarily consciously) infer them from context.

Proof of quantified statements
We said that “the sum of even integers is even” is an elementary fact about numbers. How do we know it is a fact? One answer is that we know it because our teachers told us so. (And presumably they knew it because their teachers told them so.) But someone had to figure it out in the first place, and so we ask how we would prove this statement. A mathematician asked to give a proof that the sum of even numbers is even might write

If m and n are even, then m = 2i and n = 2j so that m + n = 2i + 2j = 2(i + j) and thus m + n is even.

Because mathematicians think and write in natural language, they will often rely on context to remove ambiguities. For example, there are no quantifiers in the proof above. However the sentence, while technically incomplete as a proof, captures the essence of why the sum of two even numbers is even. A typical complete (but more formal and wordy than usual) proof might go like this.

Let m and n be integers. Suppose m and n are even. If m and n are even, then by definition there are integers i and j such that m = 2i and n = 2j. Thus there are integers i and j such that m = 2i and n = 2j. Then m + n = 2i + 2j = 2(i + j), so by definition m + n is an even integer. We have shown that if m and n are even, then m + n is even. Therefore for every m and n, if m and n are even integers, then so is m + n.
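A proof is not a computation, but the algebraic heart of this one is easy to exercise by machine. The brute-force check below (our companion to the proof, over a sample range of even integers) confirms that the proof's witness h = i + j really does certify that m + n is even:

```python
# For every pair of even integers in a sample range, recover the i and j
# from the definition of "even" and verify the proof's algebra.
for m in range(-20, 21, 2):          # the even integers from -20 to 20
    for n in range(-20, 21, 2):
        i, j = m // 2, n // 2        # m = 2i and n = 2j, exactly
        assert m + n == 2 * (i + j)  # the witness h = i + j works
        assert (m + n) % 2 == 0      # so m + n is even
```

Such a check can never replace the proof (no finite range covers all integers), but it is a good way to catch a wrong conjecture before attempting a proof.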


We began our proof by assuming that m and n are integers. This gives us symbolic notation for talking about two integers. We then appealed to the definition of an even integer, namely that an integer h is even if there is another integer k so that h = 2k. (Note the use of a quantifier in the definition.) Then we used algebra to show that m + n is also two times another number. Since this is the definition of m + n being even, we concluded that m + n is even. This allowed us to say that if m and n are even, then m + n is even. Then we asserted that for every pair of integers m and n, if m and n are even, then m + n is even. There are a number of principles of proof illustrated here. The next section will be devoted to a discussion of principles we use in constructing proofs. For now, let us conclude with a remark about the limitations of logic. How did we know that we wanted to write the symbolic equation m + n = 2i + 2j = 2(i + j)? It was not logic that told us to do this, but intuition and experience.

Important Concepts, Formulas, and Theorems
1. Varies over. We use the phrase varies over to describe the set of values a variable may take on.

2. Universe. We call the set of possible values for a variable the universe of that variable.

3. Free variables. Variables that are not constrained in any way whatever are called free variables.

4. Quantifier. A phrase that converts a symbolic statement about potentially any member of our universe into a statement about the universe instead is called a quantifier. There are two types of quantifiers:

• Universal quantifier. A quantifier that asserts a statement about a variable is true for every value of the variable in its universe is called a universal quantifier.
• Existential quantifier. A quantifier that asserts a statement about a variable is true for at least one value of the variable in its universe is called an existential quantifier.

5. Larger universes. Let U1 be a universe, and let U2 be another universe with U1 ⊆ U2. Suppose that q(x) is a statement such that U1 = {x | q(x) is true}. Then if p(x) is a statement about U2, it may also be interpreted as a statement about U1, and

(a) ∀x ∈ U1 (p(x)) is equivalent to ∀x ∈ U2 (q(x) ⇒ p(x)).
(b) ∃x ∈ U1 (p(x)) is equivalent to ∃x ∈ U2 (q(x) ∧ p(x)).

6. Proving quantified statements true or false.

• The statement ∃x ∈ U (p(x)) is true if there is at least one value of x in U for which the statement p(x) is true.

• The statement ∃x ∈ U (p(x)) is false if there is no x ∈ U for which p(x) is true.
• The statement ∀x ∈ U (p(x)) is true if p(x) is true for each value of x in U.
• The statement ∀x ∈ U (p(x)) is false if p(x) is false for at least one value of x in U.

7. Negation of quantified statements. To negate a quantified statement, you switch the quantifier and push the negation inside.

• The statements ¬∀x ∈ U (p(x)) and ∃x ∈ U (¬p(x)) are equivalent.
• The statements ¬∃x ∈ U (p(x)) and ∀x ∈ U (¬p(x)) are equivalent.

8. Big Oh. We say that f(x) = O(g(x)) if there are positive numbers c and n0 such that f(x) ≤ cg(x) for every x > n0.

9. Big Theta. f(x) = Θ(g(x)) means that f(x) = O(g(x)) and g(x) = O(f(x)).

10. Some notation for sets of numbers. We use R to stand for the real numbers, R+ to stand for the positive real numbers, Z to stand for the integers (positive, negative, and zero), Z+ to stand for the positive integers, and N to stand for the nonnegative integers.

Problems
1. For what positive integers x is the statement (x − 2)² + 1 ≤ 2 true? For what integers is it true? For what real numbers is it true? If we expand the universe for which we are considering a statement about a variable, does this always increase the size of the statement’s truth set?

2. Is the statement “There is an integer greater than 2 such that (x − 2)² + 1 ≤ 2” true or false? How do you know?

3. Write the statement that the square of every real number is greater than or equal to zero as a quantified statement about the universe of real numbers. You may use R to stand for the universe of real numbers.

4. The definition of a prime number is that it is an integer greater than 1 whose only positive integer factors are itself and 1. Find two ways to write this definition so that all quantifiers are explicit. (It may be convenient to introduce a variable to stand for the number and perhaps a variable or some variables for its factors.)

5. Write down the definition of a greatest common divisor of m and n in such a way that all quantifiers are explicit and expressed explicitly as “for all” or “there exists.” Write down Euclid’s extended greatest common divisor theorem that relates the greatest common divisor of m and n algebraically to m and n. Again make sure all quantifiers are explicit and expressed explicitly as “for all” or “there exists.”

6. What is the form of the definition of a greatest common divisor, using s(x, y, z) to be the statement x = yz and t(x, y) to be the statement x < y? (You need not include references to the universes for the variables.)

7. Which of the following statements (in which Z+ stands for the positive integers and Z stands for all integers) are true and which are false, and why?

(a) ∀z ∈ Z+ (z² + 6z + 10 > 20).
(b) ∀z ∈ Z(z² − z ≥ 0).
(c) ∃z ∈ Z+ (z − z² > 0).
(d) ∃z ∈ Z(z² − z = 6).


8. Are there any (implicit) quantifiers in the statement “The product of odd integers is odd”? If so, what are they?

9. Rewrite the statement “The product of odd integers is odd,” with all quantifiers (including any in the definition of odd integers) explicitly stated as “for all” or “there exist.”

10. Rewrite the following statement without any negations. It is not the case that there exists an integer n such that n > 0 and for all integers m > n, for every polynomial equation p(x) = 0 of degree m there are no real numbers for solutions.

11. Consider the following slight modification of Theorem 3.2. For each part below, either prove that it is true or give a counterexample. Let U1 be a universe, and let U2 be another universe with U1 ⊆ U2. Suppose that q(x) is a statement such that U1 = {x | q(x) is true}.
(a) ∀x ∈ U1 (p(x)) is equivalent to ∀x ∈ U2 (q(x) ∧ p(x)).
(b) ∃x ∈ U1 (p(x)) is equivalent to ∃x ∈ U2 (q(x) ⇒ p(x)).

12. Let p(x) stand for “x is a prime,” q(x) for “x is even,” and r(x, y) stand for “x = y.” Write down the statement “There is one and only one even prime,” using these three symbolic statements and appropriate logical notation. (Use the set of integers for your universe.)

13. Each expression below represents a statement about the integers. Using p(x) for “x is prime,” q(x, y) for “x = y²,” r(x, y) for “x ≤ y,” s(x, y, z) for “z = xy,” and t(x, y) for “x = y,” determine which expressions represent true statements and which represent false statements.
(a) ∀x ∈ Z(∃y ∈ Z(q(x, y) ∨ p(x)))
(b) ∀x ∈ Z(∀y ∈ Z(s(x, x, y) ⇔ q(x, y)))
(c) ∀y ∈ Z(∃x ∈ Z(q(y, x)))
(d) ∃z ∈ Z(∃x ∈ Z(∃y ∈ Z(p(x) ∧ p(y) ∧ ¬t(x, y))))

14. Find a reason why (∃x ∈ U (p(x))) ∧ (∃y ∈ U (q(y))) is not equivalent to ∃z ∈ U (p(z) ∨ q(z)). Are the statements (∃x ∈ U (p(x))) ∨ (∃y ∈ U (q(y))) and ∃z ∈ U (p(z) ∨ q(z)) equivalent?

15. Give an example (in English) of a statement that has the form ∀x ∈ U (∃y ∈ V (p(x, y))).
(The statement can be a mathematical statement or a statement about “everyday life,” or whatever you prefer.) Now write in English the statement using the same p(x, y) but of the form ∃y ∈ V (∀x ∈ U (p(x, y))). Comment on whether “for all” and “there exist” commute.


3.3 Inference

Direct Inference (Modus Ponens) and Proofs
We concluded our last section with a proof that the sum of two even numbers is even. That proof contained several crucial ingredients. First, we introduced symbols for members of the universe of integers. In other words, rather than saying “suppose we have two integers,” we introduced symbols for the two members of our universe we assumed we had. How did we know to use algebraic symbols? There are many possible answers to this question, but in this case our intuition was probably based on thinking about what an even number is, and realizing that the definition itself is essentially symbolic. (You may argue that an even number is just twice another number, and you would be right. Apparently no symbols are in that definition. But they really are there; they are the phrases “even number” and “another number.” Since we all know algebra is easier with symbolic variables rather than words, we should recognize that it makes sense to use algebraic notation.) Thus this decision was based on experience, not logic.

Next we assumed the two integers were even. We then used the definition of even numbers, and, as our previous parenthetic comment suggests, it was natural to use the definition symbolically. The definition tells us that if m is an even number, then there exists another integer i such that m = 2i. We combined this with the assumption that m is even to conclude that in fact there does exist an integer i such that m = 2i. This is an example of using the principle of direct inference (called modus ponens in Latin).

Principle 3.3 (Direct inference) From p and p ⇒ q we may conclude q.

This common-sense principle is a cornerstone of logical arguments. But why is it true? In Table 3.5 we take another look at the truth table for implication.

Table 3.5: Another look at implication

p   q   p ⇒ q
T   T     T
T   F     F
F   T     T
F   F     T

The only line which has a T in both the p column and the p ⇒ q column is the first line. In this line q is true also, and we therefore conclude that if p and p ⇒ q hold then q must hold also. While this may seem like a somewhat “inside out” application of the truth table, it is simply a different way of using a truth table. There are quite a few rules (called rules of inference) like the principle of direct inference that people commonly use in proofs without explicitly stating them. Before beginning a formal study of rules of inference, we complete our analysis of which rules we used in the proof that the sum of two even integers is even. After concluding that m = 2i and n = 2j, we next used algebra to show that because m = 2i and n = 2j, there exists a k such that m + n = 2k (our k was i + j). Next we used the definition of even number again to say that m + n was even. We then used a rule of inference which says


Principle 3.4 (Conditional Proof) If, by assuming p, we may prove q, then the statement p ⇒ q is true.

Using this principle, we reached the conclusion that if m and n are even integers, then m + n is an even integer. In order to conclude that this statement is true for all integers m and n, we used another rule of inference, one of the more difficult to describe. We originally introduced the variables m and n. We used only well-known consequences of the fact that they were in the universe of integers in our proof. Thus we felt justified in asserting that what we concluded about m and n is true for any pair of integers. We might say that we were treating m and n as generic members of our universe. Thus our rule of inference says

Principle 3.5 (Universal Generalization) If we can prove a statement about x by assuming x is a member of our universe, then we can conclude the statement is true for every member of our universe.

Perhaps the reason this rule is hard to put into words is that it is not simply a description of a truth table, but is a principle that we use in order to prove universally quantified statements.
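Unlike universal generalization, the principle of direct inference is exactly a truth-table fact, so it can be checked exhaustively. The short Python sketch below (our illustration of the “inside out” reading of Table 3.5) confirms that in every row where both p and p ⇒ q are true, q is true as well:

```python
from itertools import product

def implies(p, q):
    # The truth table for p => q: false only when p is true and q is false.
    return (not p) or q

# Modus ponens: whenever p and (p => q) both hold, q must hold.
for p, q in product([True, False], repeat=2):
    if p and implies(p, q):
        # Only the row p = T, q = T reaches this point, and there q is true.
        assert q
```

The loop visits all four rows of the truth table, so this is a complete verification of the principle rather than a sample.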

Rules of inference for direct proofs
We have seen the ingredients of a typical proof. What do we mean by a proof in general? A proof of a statement is a convincing argument that the statement is true. To be more precise about it, we can agree that a direct proof consists of a sequence of statements, each of which is either a hypothesis[5], a generally accepted fact, or the result of one of the following rules of inference for compound statements.

Rules of Inference for Direct Proofs

1) From an example x that does not satisfy p(x), we may conclude ¬p(x).
2) From p(x) and q(x), we may conclude p(x) ∧ q(x).
3) From either p(x) or q(x), we may conclude p(x) ∨ q(x).
4) From either q(x) or ¬p(x), we may conclude p(x) ⇒ q(x).
5) From p(x) ⇒ q(x) and q(x) ⇒ p(x), we may conclude p(x) ⇔ q(x).
6) From p(x) and p(x) ⇒ q(x), we may conclude q(x).
7) From p(x) ⇒ q(x) and q(x) ⇒ r(x), we may conclude p(x) ⇒ r(x).
8) If we can derive q(x) from the hypothesis that x satisfies p(x), then we may conclude p(x) ⇒ q(x).
9) If we can derive p(x) from the hypothesis that x is a (generic) member of our universe U, we may conclude ∀x ∈ U (p(x)).
[5] If we are proving an implication s ⇒ t, we call s a hypothesis. If we make assumptions by saying “Let . . . ,” “Suppose . . . ,” or something similar before we give the statement to be proved, then these assumptions are hypotheses as well.


10) From an example of an x ∈ U satisfying p(x), we may conclude ∃x ∈ U (p(x)).

The first rule is a statement of the principle of the excluded middle as it applies to statements about variables. The next four rules are in effect a description of the truth tables for “and,” “or,” “implies,” and “if and only if.” Rule 5 says what we must do in order to write a proof of an “if and only if” statement. Rule 6, exemplified in our earlier discussion, is the principle of direct inference, and describes one row of the truth table for p ⇒ q. Rule 7 is the transitive law, one we could derive by truth table analysis. Rule 8, the principle of conditional proof, which is also exemplified earlier, may be regarded as yet another description of one row of the truth table of p ⇒ q. Rule 9 is the principle of universal generalization, discussed and exemplified earlier. Rule 10 specifies what we mean by the truth of an existentially quantified statement, according to Principle 3.2.

Although some of our rules of inference are redundant, they are useful. For example, we could have written a portion of our proof that the sum of even numbers is even as follows without using Rule 8.

“Let m and n be integers. If m is even, then there is a k with m = 2k. If n is even, then there is a j with n = 2j. Thus if m is even and n is even, there are a k and j such that m + n = 2k + 2j = 2(k + j). Thus if m is even and n is even, there is an integer h = k + j such that m + n = 2h. Thus if m is even and n is even, m + n is even.”

This kind of argument could always be used to circumvent the use of Rule 8, so Rule 8 is not required as a rule of inference, but because it permits us to avoid such unnecessarily complicated “silliness” in our proofs, we choose to include it. Rule 7, the transitive law, has a similar role.

Exercise 3.3-1 Prove that if m is even, then m² is even. Explain which steps of the proof use one of the rules of inference above.
For Exercise 3.3-1, we can mimic the proof that the sum of even integers is even.

Let m be an integer. Suppose that m is even. If m is even, then there is a k with m = 2k. Thus, there is a k such that m² = 4k². Therefore, there is an integer h = 2k² such that m² = 2h. Thus if m is even, m² is even. Therefore, for all integers m, if m is even, then m² is even.

In our first sentence we are setting things up to use Rule 9. In the second sentence we are simply stating an implicit hypothesis. In the next two sentences we use Rule 6, the principle of direct inference. When we said “Therefore, there is an integer h = 2k² such that m² = 2h,” we were simply stating an algebraic fact. In our next sentence we used Rule 8. Finally, we used Rule 9. You might have written the proof in a different way and used different rules of inference.
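As with the sum of even numbers, the algebra in this proof can be spot-checked by machine. The sketch below (our companion to the Exercise 3.3-1 proof, over a sample range) recomputes the witness h = 2k² for each even m:

```python
# For each even m in a sample range, recover k from m = 2k and check that
# h = 2*k*k is an integer with m*m == 2*h, i.e. that m^2 is even.
for m in range(-50, 51, 2):
    k = m // 2             # exact, since m is even
    h = 2 * k * k          # the witness from the proof: m^2 = 4k^2 = 2(2k^2)
    assert m * m == 2 * h
```

The check covers only finitely many m; the universal claim rests on the identity (2k)² = 2(2k²), which is what Rule 9 lets us assert for a generic integer.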

Contrapositive rule of inference.
Exercise 3.3-2 Show that “p implies q” is equivalent to “¬q implies ¬p.”

Exercise 3.3-3 Is “p implies q” equivalent to “q implies p”?


To do Exercise 3.3-2, we construct the double truth table in Table 3.6. Since the columns under p ⇒ q and under ¬q ⇒ ¬p are exactly the same, we know the two statements are equivalent. This exercise tells us that if we know that ¬q ⇒ ¬p, then we can conclude that p ⇒ q. This is

Table 3.6: A double truth table for p ⇒ q and ¬q ⇒ ¬p.

p   q   p ⇒ q   ¬p   ¬q   ¬q ⇒ ¬p
T   T     T      F    F      T
T   F     F      F    T      F
F   T     T      T    F      T
F   F     T      T    T      T

called the principle of proof by contraposition.

Principle 3.6 (Proof by Contraposition) The statement p ⇒ q and the statement ¬q ⇒ ¬p are equivalent, and so a proof of one is a proof of the other.

The statement ¬q ⇒ ¬p is called the contrapositive of the statement p ⇒ q. The following example demonstrates the utility of the principle of proof by contraposition.

Lemma 3.5 If n is a positive integer with n² > 100, then n > 10.

Proof: Suppose n is not greater than 10. (Now we use the rule of algebra for inequalities which says that if x ≤ y and c ≥ 0, then cx ≤ cy.) Then since 1 ≤ n ≤ 10, n · n ≤ n · 10 ≤ 10 · 10 = 100. Thus n² is not greater than 100. Therefore, if n is not greater than 10, n² is not greater than 100. Then, by the principle of proof by contraposition, if n² > 100, n must be greater than 10.

We adopt Principle 3.6 as a rule of inference, called the contrapositive rule of inference.

11) From ¬q(x) ⇒ ¬p(x) we may conclude p(x) ⇒ q(x).

In our proof of the Chinese Remainder Theorem, Theorem 2.24, we wanted to prove, for a certain function f, that if x and y were different integers between 0 and mn − 1, then f(x) ≠ f(y). To prove this we assumed that in fact f(x) = f(y) and proved that x and y were not different integers between 0 and mn − 1. Had we known the principle of contrapositive inference, we could have concluded then and there that f was one-to-one. Instead, we used the more common principle of proof by contradiction, the major topic of the remainder of this section, to complete our proof. If you look back at the proof, you will see that we might have been able to shorten it by a sentence by using contrapositive inference.

For Exercise 3.3-3, a quick look at the double truth table for p ⇒ q and q ⇒ p in Table 3.7 demonstrates that these two statements are not equivalent. The statement q ⇒ p is called the converse of p ⇒ q. Notice that p ⇔ q is true exactly when p ⇒ q and its converse are true.
It is surprising how often people, even professional mathematicians, absent-mindedly try to prove the converse of a statement when they mean to prove the statement itself. Try not to join this crowd!

Table 3.7: A double truth table for p ⇒ q and q ⇒ p.

p   q   p ⇒ q   q ⇒ p
T   T     T       T
T   F     F       T
F   T     T       F
F   F     T       T
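Tables 3.6 and 3.7 can be rebuilt in a few lines of code. The sketch below (our illustration) walks every row of the double truth tables, confirming that the contrapositive agrees with p ⇒ q everywhere while the converse disagrees in at least one row:

```python
from itertools import product

def implies(p, q):
    # Truth table for p => q: false only when p is true and q is false.
    return (not p) or q

rows = list(product([True, False], repeat=2))   # all four (p, q) rows

# Table 3.6: p => q matches its contrapositive not-q => not-p in every row.
assert all(implies(p, q) == implies(not q, not p) for p, q in rows)

# Table 3.7: p => q does NOT match its converse q => p in every row
# (they differ when exactly one of p, q is true).
assert not all(implies(p, q) == implies(q, p) for p, q in rows)
```

Because the universe of truth assignments is finite, this check is an exhaustive verification, unlike the numeric spot-checks earlier.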

Proof by contradiction
Proof by contrapositive inference is an example of what we call indirect proof. We have actually seen another example of indirect proof, the principle of proof by contradiction. In our proof of Corollary 2.6 we introduced the principle of proof by contradiction, Principle 2.1. We were trying to prove the statement

Suppose there is a b in Zn such that the equation a ·n x = b does not have a solution. Then a does not have a multiplicative inverse in Zn.

We assumed that the hypothesis that a ·n x = b does not have a solution was true. We also assumed that the conclusion that a does not have a multiplicative inverse was false. We showed that these two assumptions together led to a contradiction. Then, using the principle of the excluded middle, Principle 3.1 (without saying so), we concluded that if the hypothesis is in fact true, then the only possibility was that the conclusion is true as well.

We used the principle again later in our proof of Euclid’s Division Theorem. Recall that in that proof we began by assuming that the theorem was false. We then chose, among the pairs of integers (m, n) for which there are no q and r with m = qn + r and 0 ≤ r < n, a pair with the smallest possible m. We then made some computations by which we proved that in this case there are a q and r with 0 ≤ r < n such that m = qn + r. Thus we started out by assuming the theorem was false, and from that assumption we drew a contradiction to the assumption. Since all our reasoning, except for the assumption that the theorem was false, used accepted rules of inference, the only source of that contradiction was our assumption. Thus, by the principle of the excluded middle, our assumption had to be incorrect. We adopt the principle of proof by contradiction (also called the principle of reduction to absurdity) as our last rule of inference.

12) If from assuming p(x) and ¬q(x), we can derive both r(x) and ¬r(x) for some statement r(x), then we may conclude p(x) ⇒ q(x).
There can be many variations of proof by contradiction. For example, we may assume p is true and q is false, and from this derive the contradiction that p is false, as in the following example.

Prove that if x² + x − 2 = 0, then x ≠ 0.

Proof: Suppose that x² + x − 2 = 0. Assume that x = 0. Then x² + x − 2 = 0 + 0 − 2 = −2. This contradicts x² + x − 2 = 0. Thus (by the principle of proof by contradiction), if x² + x − 2 = 0, then x ≠ 0.

Here the statement r was identical to p, namely x² + x − 2 = 0.


On the other hand, we may instead assume p is true and q is false, and derive a contradiction of a known fact, as in the following example.

Prove that if x² + x − 2 = 0, then x ≠ 0.

Proof: Suppose that x² + x − 2 = 0. Assume that x = 0. Then x² + x − 2 = 0 + 0 − 2 = −2. Thus 0 = −2, a contradiction. Thus (by the principle of proof by contradiction), if x² + x − 2 = 0, then x ≠ 0.

Here the statement r is the known fact that 0 ≠ −2.

Sometimes the statement r that appears in the principle of proof by contradiction is simply a statement that arises naturally as we are trying to construct our proof, as in the following example.

Prove that if x² + x − 2 = 0, then x ≠ 0.

Proof: Suppose that x² + x − 2 = 0. Then x² + x = 2. Assume that x = 0. Then x² + x = 0 + 0 = 0. But this is a contradiction (to our observation that x² + x = 2). Thus (by the principle of proof by contradiction), if x² + x − 2 = 0, then x ≠ 0.

Here the statement r is “x² + x = 2.”

Finally, if proof by contradiction seems to you not to be much different from proof by contraposition, you are right, as the example that follows shows.

Prove that if x² + x − 2 = 0, then x ≠ 0.

Proof: Assume that x = 0. Then x² + x − 2 = 0 + 0 − 2 = −2, so that x² + x − 2 ≠ 0. Thus (by the principle of proof by contraposition), if x² + x − 2 = 0, then x ≠ 0.

Any proof that uses one of the indirect methods of inference is called an indirect proof. The last four examples illustrate the rich possibilities that indirect proof provides us. Of course they also illustrate why indirect proof can be confusing. There is no set formula that we use in writing a proof by contradiction, so there is no rule we can memorize in order to formulate indirect proofs. Instead, we have to ask ourselves whether assuming the opposite of what we are trying to prove gives us insight into why the assumption makes no sense. If it does, we have the basis of an indirect proof, and the way in which we choose to write it is a matter of personal choice.
Exercise 3.3-4 Without extracting square roots, prove that if n is a positive integer such that n² < 9, then n < 3. You may use rules of algebra for dealing with inequalities.

Exercise 3.3-5 Prove that √5 is not rational.

To prove the statement in Exercise 3.3-4, we assume, for purposes of contradiction, that n ≥ 3. Squaring both sides of this inequality, we obtain n² ≥ 9,


CHAPTER 3. REFLECTIONS ON LOGIC AND PROOF

which contradicts our hypothesis that n² < 9. Therefore, by the principle of proof by contradiction, n < 3.

To prove the statement in Exercise 3.3-5, we assume, for the purpose of contradiction, that √5 is rational. This means that it can be expressed as the fraction m/n, where m and n are integers. Squaring both sides of the equation m/n = √5, we obtain

m²/n² = 5,

or m² = 5n². Now m² must have an even number of prime factors (counting each prime factor as many times as it occurs), as must n². But 5n² has an odd number of prime factors. Thus a product of an even number of prime factors is equal to a product of an odd number of prime factors, which is a contradiction, since each positive integer may be expressed uniquely as a product of (positive) prime numbers. Thus by the principle of proof by contradiction, √5 is not rational.
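The factor-counting argument can be illustrated numerically. The Python sketch below is our own illustration (the bound LIMIT and the helper name prime_factor_count are choices of the sketch, not anything from the text): it confirms both that m² = 5n² has no small integer solutions, and that a square always has an even number of prime factors while 5 times a square has an odd number.

```python
LIMIT = 200  # arbitrary search bound, for illustration only

# If sqrt(5) = m/n, then m*m == 5*n*n; no such pair exists below the bound:
solutions = [(m, n)
             for n in range(1, LIMIT)
             for m in range(1, LIMIT)
             if m * m == 5 * n * n]
print(solutions)  # []

def prime_factor_count(k):
    """Number of prime factors of k, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= k:
        while k % d == 0:
            count += 1
            k //= d
        d += 1
    if k > 1:
        count += 1
    return count

# Squares have an even count; 5 times a square has an odd count:
for n in range(1, 50):
    assert prime_factor_count(n * n) % 2 == 0
    assert prime_factor_count(5 * n * n) % 2 == 1
```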

Important Concepts, Formulas, and Theorems
1. Principle of direct inference or modus ponens. From p and p ⇒ q we may conclude q.

2. Principle of conditional proof. If, by assuming p, we may prove q, then the statement p ⇒ q is true.

3. Principle of universal generalization. If we can prove a statement about x by assuming x is a member of our universe, then we can conclude it is true for every member of our universe.

4. Rules of Inference. Twelve rules of inference appear in this chapter. They are
1) From an example x that does not satisfy p(x), we may conclude ¬p(x).
2) From p(x) and q(x), we may conclude p(x) ∧ q(x).
3) From either p(x) or q(x), we may conclude p(x) ∨ q(x).
4) From either q(x) or ¬p(x), we may conclude p(x) ⇒ q(x).
5) From p(x) ⇒ q(x) and q(x) ⇒ p(x), we may conclude p(x) ⇔ q(x).
6) From p(x) and p(x) ⇒ q(x), we may conclude q(x).
7) From p(x) ⇒ q(x) and q(x) ⇒ r(x), we may conclude p(x) ⇒ r(x).
8) If we can derive q(x) from the hypothesis that x satisfies p(x), then we may conclude p(x) ⇒ q(x).
9) If we can derive p(x) from the hypothesis that x is a (generic) member of our universe U, we may conclude ∀x ∈ U (p(x)).
10) From an example of an x ∈ U satisfying p(x), we may conclude ∃x ∈ U (p(x)).
11) From ¬q(x) ⇒ ¬p(x), we may conclude p(x) ⇒ q(x).
12) If from assuming p(x) and ¬q(x) we can derive both r(x) and ¬r(x) for some statement r, then we may conclude p(x) ⇒ q(x).


5. Contrapositive of p ⇒ q. The contrapositive of the statement p ⇒ q is the statement ¬q ⇒ ¬p.

6. Converse of p ⇒ q. The converse of the statement p ⇒ q is the statement q ⇒ p.

7. Contrapositive rule of inference. From ¬q ⇒ ¬p we may conclude p ⇒ q.

8. Principle of proof by contradiction. If from assuming p and ¬q we can derive both r and ¬r for some statement r, then we may conclude p ⇒ q.
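Because these connectives are truth-functional, rules such as the contrapositive rule and the principle of proof by contradiction can be checked mechanically with a truth table. A small Python sketch (our own illustration; encoding ⇒ as a Boolean function is an assumption of the sketch):

```python
from itertools import product

def implies(p, q):
    """Truth-functional p => q, i.e. (not p) or q."""
    return (not p) or q

for p, q in product([False, True], repeat=2):
    # Contrapositive rule: (not q => not p) is equivalent to (p => q).
    assert implies(not q, not p) == implies(p, q)

    # Proof by contradiction: deriving both r and not-r from (p and not q)
    # means (p and not q) is false, which is exactly p => q.
    assert (not (p and not q)) == implies(p, q)
```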

Problems
1. Write down the converse and contrapositive of each of these statements.
(a) If the hose is 60 feet long, then the hose will reach the tomatoes.
(b) George goes for a walk only if Mary goes for a walk.
(c) Pamela recites a poem if Andre asks for a poem.

2. Construct a proof that if m is odd, then m² is odd.

3. Construct a proof that for all integers m and n, if m is even and n is odd, then m + n is odd.

4. What do we really mean when we say “prove that if m is odd and n is odd, then m + n is even”? Prove this more precise statement.

5. Prove that for all integers m and n, if m is odd and n is odd, then m · n is odd.

6. Is the statement p ⇒ q equivalent to the statement ¬p ⇒ ¬q?

7. Construct a contrapositive proof that for all real numbers x, if x² − 2x ≠ −1, then x ≠ 1.

8. Construct a proof by contradiction that for all real numbers x, if x² − 2x ≠ −1, then x ≠ 1.

9. Prove that if x³ > 8, then x > 2.

10. Prove that √3 is irrational.

11. Construct a proof that if m is an integer such that m² is even, then m is even.

12. Prove or disprove the following statement. “For every positive integer n, if n is prime, then 12 and n³ − n² + n have a common factor.”

13. Prove or disprove the following statement. “For all integers b, c, and d, if x is a rational number such that x² + bx + c = d, then x is an integer.” (Hints: Are all the quantifiers given explicitly? It is OK to use the quadratic formula.)

14. Prove that there is no largest prime number.

15. Prove that if f(x), g(x), and h(x) are functions from R⁺ to R⁺ such that f(x) = O(g(x)) and g(x) = O(h(x)), then f(x) = O(h(x)).


Chapter 4

Induction, Recursion, and Recurrences
4.1 Mathematical Induction

Smallest Counter-Examples
In Section 3.3, we saw one way of proving statements about infinite universes: we considered a “generic” member of the universe and derived the desired statement about that generic member. When our universe is the universe of integers, or is in a one-to-one correspondence with the integers, there is a second technique we can use.

Recall our proof of Euclid’s Division Theorem (Theorem 2.12), which says that for each pair (m, n) of positive integers, there are nonnegative integers q and r such that m = nq + r and 0 ≤ r < n. For the purpose of a proof by contradiction, we assumed that the statement was false. Then we said the following. “Among all pairs (m, n) that make it false, choose the smallest m that makes it false. We cannot have m < n because then the statement would be true with q = 0 and r = m, and we cannot have m = n because then the statement is true with q = 1 and r = 0. This means m − n is a positive number smaller than m. We assumed that m was the smallest value that made the theorem false, and so the theorem must be true for the pair (m − n, n). Therefore, there must exist a q′ and r′ such that m − n = q′n + r′, with 0 ≤ r′ < n. Thus m = (q′ + 1)n + r′. Now, by setting q = q′ + 1 and r = r′, we can satisfy the theorem for the pair (m, n), contradicting the assumption that the statement is false. Thus the only possibility is that the statement is true.”

Focus on the sentences “This means m − n is a positive number smaller than m. We assumed that m was the smallest value that made the theorem false, and so the theorem must be true for the pair (m − n, n). Therefore, there must exist a q′ and r′ such that m − n = q′n + r′, with 0 ≤ r′ < n. Thus m = (q′ + 1)n + r′.” To analyze these sentences, let p(m, n) denote the statement “there are nonnegative integers q and r with 0 ≤ r < n such that m = nq + r.” The quoted sentences


we focused on provide a proof that p(m − n, n) ⇒ p(m, n). This implication is the crux of the proof. Let us give an analysis of the proof that shows the pivotal role of this implication.

• We assumed a counter-example with a smallest m existed.

• Then, using the fact that p(m′, n) had to be true for every m′ smaller than m, we chose m′ = m − n, and observed that p(m′, n) had to be true.

• Then we used the implication p(m − n, n) ⇒ p(m, n) to conclude the truth of p(m, n).

• But we had assumed that p(m, n) was false, so this is the assumption we contradicted in the proof by contradiction.

Exercise 4.1-1 In Chapter 1 we learned Gauss’s trick for showing that for all positive integers n,

1 + 2 + 3 + 4 + ⋯ + n = n(n + 1)/2.   (4.1)

Use the technique of asserting that if there is a counter-example, there is a smallest counter-example and deriving a contradiction to prove that the sum is n(n + 1)/2. What implication did you have to prove in the process?

Exercise 4.1-2 For what values of n ≥ 0 do you think 2ⁿ⁺¹ ≥ n² + 2? Use the technique of asserting there is a smallest counter-example and deriving a contradiction to prove you are right. What implication did you have to prove in the process?

Exercise 4.1-3 For what values of n ≥ 0 do you think 2ⁿ⁺¹ ≥ n² + 3? Is it possible to use the technique of asserting there is a smallest counter-example and deriving a contradiction to prove you are right? If so, do so and describe the implication you had to prove in the process. If not, why not?

Exercise 4.1-4 Would it make sense to say that if there is a counter-example there is a largest counter-example and try to base a proof on this? Why or why not?

In Exercise 4.1-1, suppose the formula for the sum is false. Then there must be a smallest n such that the formula does not hold for the sum of the first n positive integers. Thus for any positive integer i smaller than n,

1 + 2 + ⋯ + i = i(i + 1)/2.   (4.2)
Because 1 = 1 · 2/2, Equation 4.1 holds when n = 1, and therefore the smallest counter-example is not when n = 1. So n > 1, and n − 1 is one of the positive integers i for which the formula holds. Substituting n − 1 for i in Equation 4.2 gives us

1 + 2 + ⋯ + (n − 1) = (n − 1)n/2.

Adding n to both sides gives

1 + 2 + ⋯ + (n − 1) + n = (n − 1)n/2 + n = (n² − n + 2n)/2 = n(n + 1)/2.


Thus n is not a counter-example after all, and therefore there is no counter-example to the formula. Thus the formula holds for all positive integers n. Note that the crucial step was proving that p(n − 1) ⇒ p(n), where p(n) is the formula

1 + 2 + ⋯ + n = n(n + 1)/2.
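Both the formula and the implication p(n − 1) ⇒ p(n) can be checked numerically. The following Python sketch is an illustration only (the function name gauss_sum and the sample value n = 57 are our own choices); the smallest-counter-example argument above is the actual proof:

```python
def gauss_sum(n):
    """Sum 1 + 2 + ... + n computed directly."""
    return sum(range(1, n + 1))

# The formula holds for many small cases:
for n in range(1, 1000):
    assert gauss_sum(n) == n * (n + 1) // 2

# The crucial implication: if the formula holds for n - 1,
# adding n to both sides yields the formula for n.
n = 57  # any n > 1 works here
assert gauss_sum(n - 1) + n == (n - 1) * n // 2 + n == n * (n + 1) // 2
```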

In Exercise 4.1-2, let p(n) be the statement that 2ⁿ⁺¹ ≥ n² + 2. Some experimenting with small values of n leads us to believe this statement is true for all nonnegative integers. Thus we want to prove p(n) is true for all nonnegative integers n. To do so, we assume that the statement that “p(n) is true for all nonnegative integers n” is false. When a “for all” statement is false, there must be some n for which it is false. Therefore, there is some smallest nonnegative integer n such that 2ⁿ⁺¹ ≥ n² + 2 is false. Assume now that n has this value. This means that for all nonnegative integers i with i < n, 2ⁱ⁺¹ ≥ i² + 2. Since we know from our experimentation that n ≠ 0, we know n − 1 is a nonnegative integer less than n, so using n − 1 in place of i, we get 2⁽ⁿ⁻¹⁾⁺¹ ≥ (n − 1)² + 2, or

2ⁿ ≥ n² − 2n + 1 + 2 = n² − 2n + 3.   (4.3)

From this we want to draw a contradiction, presumably a contradiction to the assumption that 2ⁿ⁺¹ ≥ n² + 2 is false. To get the contradiction, we want to convert the left-hand side of Equation 4.3 to 2ⁿ⁺¹. For this purpose, we multiply both sides by 2, giving

2ⁿ⁺¹ = 2 · 2ⁿ ≥ 2n² − 4n + 6.

You may have gotten this far and wondered “What next?” Since we want to obtain a contradiction, we want to convert the right-hand side into something like n² + 2. More precisely, we will convert the right-hand side into n² + 2 plus an additional term. If we can show that the additional term is nonnegative, the proof will be complete. Thus we write

2ⁿ⁺¹ ≥ 2n² − 4n + 6 = (n² + 2) + (n² − 4n + 4) = n² + 2 + (n − 2)² ≥ n² + 2,   (4.4)

since (n − 2)² ≥ 0. This is a contradiction, so there must not have been a smallest counter-example, and thus there must be no counter-example. Therefore 2ⁿ⁺¹ ≥ n² + 2 for all nonnegative integers n.

What implication did we prove above? Let p(n) stand for 2ⁿ⁺¹ ≥ n² + 2. Then in Equations 4.3 and 4.4 we proved that p(n − 1) ⇒ p(n). Notice that at one point in our proof we had to note that we had considered the case with n = 0 already.
Although we have given a proof by smallest counter-example, it is natural to ask whether it would make more sense to try to prove the statement directly. Would it make more sense to forget about the contradiction now that we


have p(n − 1) ⇒ p(n) in hand, and just observe that p(0) and p(n − 1) ⇒ p(n) imply p(1), that p(1) and p(n − 1) ⇒ p(n) imply p(2), and so on, so that we have p(k) for every k? We will address this question shortly.

Now let’s consider Exercise 4.1-3. Notice that 2ⁿ⁺¹ > n² + 3 fails for n = 0 and 1, but 2ⁿ⁺¹ > n² + 3 for any larger n we look at. Let us try to prove that 2ⁿ⁺¹ > n² + 3 for n ≥ 2. We now let p′(n) be the statement 2ⁿ⁺¹ > n² + 3. We can easily prove p′(2), since 8 = 2³ > 2² + 3 = 7. Now suppose that among the integers larger than 2 there is a counter-example m to p′(n). That is, suppose that there is an m such that m > 2 and p′(m) is false. Then there is a smallest such m, so that for k between 2 and m − 1, p′(k) is true. If you look back at your proof that p(n − 1) ⇒ p(n), you will see that, when n ≥ 2, essentially the same proof applies to p′ as well. That is, with very similar computations we can show that p′(n − 1) ⇒ p′(n), so long as n ≥ 2. Thus since p′(m − 1) is true, our implication tells us that p′(m) is also true. This is a contradiction to our assumption that p′(m) is false. Therefore p′(m) is true. Again, we could conclude from p′(2) and p′(2) ⇒ p′(3) that p′(3) is true, and similarly for p′(4), and so on. The implication we had to prove was p′(n − 1) ⇒ p′(n).

For Exercise 4.1-4, if we have a counter-example to a statement p(n) about an integer n, this means that there is an m such that p(m) is false. To find a smallest counter-example we would need to examine p(0), p(1), . . . , perhaps all the way up to p(m), in order to find a smallest counter-example, that is, a smallest number k such that p(k) is false. Since this involves only a finite number of cases, it makes sense to assert that there is a smallest counter-example.
But, in answer to Exercise 4.1-4, it does not make sense to assert that there is a largest counter-example, because there are infinitely many cases n that we would have to check in hopes of finding a largest one, and thus we might never find one. Even if we found one, we wouldn’t be able to figure out that we had a largest counter-example just by checking larger and larger values of n, because we would never run out of values of n to check. Sometimes there is a largest counter-example, as in Exercise 4.1-3. To prove this, though, we didn’t check all cases. Instead, based on our intuition, we guessed that the largest counter-example was n = 1. Then we proved that we were right by showing that among numbers greater than or equal to two, there is no smallest counter-example. Sometimes there is no largest counter-example n to a statement p(n); for example, n² < n is false for all integers n, and therefore there is no largest counter-example.
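The behavior discussed in Exercises 4.1-2 and 4.1-3 is easy to observe numerically. The Python sketch below is an illustration only (the search bound 60 is an arbitrary choice): it confirms that 2ⁿ⁺¹ ≥ n² + 2 holds for every small n, while the strict inequality 2ⁿ⁺¹ > n² + 3 fails exactly at n = 0 and n = 1, so n = 1 is indeed the largest counter-example for that statement.

```python
# Exercise 4.1-2: 2^(n+1) >= n^2 + 2 appears to hold for every nonnegative n:
for n in range(0, 60):
    assert 2 ** (n + 1) >= n ** 2 + 2

# Exercise 4.1-3: 2^(n+1) > n^2 + 3 fails exactly at n = 0 and n = 1
# in this range:
failures = [n for n in range(0, 60) if not (2 ** (n + 1) > n ** 2 + 3)]
print(failures)  # [0, 1]
```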

The Principle of Mathematical Induction
It may seem clear that repeatedly using the implication p(n − 1) ⇒ p(n) will prove p(n) for all n (or all n ≥ 2). That observation is the central idea of the Principle of Mathematical Induction, which we are about to introduce. In a theoretical discussion of how one constructs the integers from first principles, the principle of mathematical induction (or the equivalent principle that every set of nonnegative integers has a smallest element, thus letting us use the “smallest counter-example” technique) is one of the first principles we assume. The principle of mathematical induction is usually described in two forms. The one we have talked about so far is called the “weak form.” It applies to statements about integers n.

The Weak Principle of Mathematical Induction. If the statement p(b) is true, and the statement p(n − 1) ⇒ p(n) is true for all n > b, then p(n) is true for all integers n ≥ b.

Suppose, for example, we wish to give a direct inductive proof that 2ⁿ⁺¹ > n² + 3 for n ≥ 2. We would proceed as follows. (The material in square brackets is not part of the proof; it is a

running commentary on what is going on in the proof.)

We shall prove by induction that 2ⁿ⁺¹ > n² + 3 for n ≥ 2. First, 2²⁺¹ = 2³ = 8, while 2² + 3 = 7. [We just proved p(2). We will now proceed to prove p(n − 1) ⇒ p(n).] Suppose now that n > 2 and that 2ⁿ > (n − 1)² + 3. [We just made the hypothesis of p(n − 1) in order to use Rule 8 of our rules of inference.] Now multiply both sides of this inequality by 2, giving us

2ⁿ⁺¹ > 2(n² − 2n + 1) + 6 = n² + 3 + n² − 4n + 4 + 1 = n² + 3 + (n − 2)² + 1.

Since (n − 2)² + 1 is positive for n > 2, this proves 2ⁿ⁺¹ > n² + 3. [We just showed that from the hypothesis of p(n − 1) we can derive p(n). Now we can apply Rule 8 to assert that p(n − 1) ⇒ p(n).] Therefore 2ⁿ > (n − 1)² + 3 ⇒ 2ⁿ⁺¹ > n² + 3. Therefore by the principle of mathematical induction, 2ⁿ⁺¹ > n² + 3 for n ≥ 2.


In the proof we just gave, the sentence “First, 2²⁺¹ = 2³ = 8, while 2² + 3 = 7” is called the base case. It consisted of proving that p(b) is true, where in this case b is 2 and p(n) is 2ⁿ⁺¹ > n² + 3. The sentence “Suppose now that n > 2 and that 2ⁿ > (n − 1)² + 3” is called the inductive hypothesis. This is the assumption that p(n − 1) is true. In inductive proofs, we always make such a hypothesis¹ in order to prove the implication p(n − 1) ⇒ p(n). The proof of the implication is called the inductive step of the proof. The final sentence of the proof is called the inductive conclusion.

Exercise 4.1-5 Use mathematical induction to show that 1 + 3 + 5 + ⋯ + (2k − 1) = k² for each positive integer k.

Exercise 4.1-6 For what values of n is 2ⁿ > n²? Use mathematical induction to show that your answer is correct.

For Exercise 4.1-5, we note that the formula holds when k = 1. Assume inductively that the formula holds when k = n − 1, so that 1 + 3 + ⋯ + (2n − 3) = (n − 1)². Adding 2n − 1 to both sides of this equation gives

1 + 3 + ⋯ + (2n − 3) + (2n − 1) = n² − 2n + 1 + 2n − 1 = n².   (4.5)

Thus the formula holds when k = n, and so by the principle of mathematical induction, the formula holds for all positive integers k.
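As with the earlier formulas, small cases are easy to confirm numerically; this is evidence, not a proof, since the induction above is the proof. A minimal Python sketch, with our own helper name odd_sum:

```python
def odd_sum(k):
    """Sum of the first k odd numbers, 1 + 3 + ... + (2k - 1)."""
    return sum(2 * i - 1 for i in range(1, k + 1))

# The sum of the first k odd numbers is k^2:
for k in range(1, 500):
    assert odd_sum(k) == k ** 2
```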
¹ At times, it might be more convenient to assume that p(n) is true and use this assumption to prove that p(n + 1) is true. This proves the implication p(n) ⇒ p(n + 1), which lets us reason in the same way.

Notice that in our discussion of Exercise 4.1-5 we nowhere mentioned a statement p(n). In fact, p(n) is the statement we get by substituting n for k in the formula, and in Equation 4.5 we were proving p(n − 1) ⇒ p(n). Next notice that we did not explicitly say we were going to give a proof by induction; instead we told the reader when we were making the inductive hypothesis by saying “Assume inductively that . . . .” This convention makes the prose flow nicely but still tells the reader that he or she is reading a proof by induction. Notice also how the notation in the statement of the exercise helped us write the proof. If we state what we are trying to prove in terms of a variable other than n, say k, then we can assume that our desired statement holds when this variable (k) is n − 1 and then prove that the statement holds when k = n. Without this notational device, we have to either mention our statement p(n) explicitly, or avoid any discussion of substituting values into the formula we are trying to prove. Our proof above that 2ⁿ⁺¹ > n² + 3 demonstrates this last approach to writing an inductive proof in plain English. This is usually the “slickest” way of writing an inductive proof, but it is often the hardest to master. We will use this approach first for the next exercise.

For Exercise 4.1-6 we note that 2¹ = 2 > 1² = 1, but then the inequality fails for n = 2, 3, 4. However, 2⁵ = 32 > 25 = 5². Now we assume inductively that for n > 5 we have 2ⁿ⁻¹ > (n − 1)². Multiplying by 2 gives us

2ⁿ > 2(n² − 2n + 1) = n² + n² − 4n + 2 > n² + n² − n · n = n²,

since n > 5 implies that −4n > −n · n. (We also used the fact that n² + n² − 4n + 2 > n² + n² − 4n.) Thus by the principle of mathematical induction, 2ⁿ > n² for all n ≥ 5.

Alternatively, we could write the following. Let p(n) denote the inequality 2ⁿ > n². Then p(5) is true because 2⁵ = 32 > 25 = 5². Assume that n > 5 and p(n − 1) is true. This gives us 2ⁿ⁻¹ > (n − 1)².
Multiplying by 2 gives

2ⁿ > 2(n² − 2n + 1) = n² + n² − 4n + 2 > n² + n² − n · n = n²,

since n > 5 implies that −4n > −n · n. Therefore p(n − 1) ⇒ p(n). Thus by the principle of mathematical induction, 2ⁿ > n² for all n ≥ 5.

Notice how the “slick” method simply assumes that the reader knows we are doing a proof by induction from our “Assume inductively . . . ,” and mentally supplies the appropriate p(n) and observes that we have proved p(n − 1) ⇒ p(n) at the right moment.

Here is a slight variation of the technique of changing variables. To prove that 2ⁿ > n² when n ≥ 5, we observe that the inequality holds when n = 5, since 2⁵ = 32 > 25 = 5². Assume inductively that the inequality holds when n = k, so that 2ᵏ > k². Now when k ≥ 5, multiplying both sides of this inequality by 2 yields

2ᵏ⁺¹ > 2k² = k² + k² ≥ k² + 5k > k² + 2k + 1 = (k + 1)²,


since k ≥ 5 implies that k² ≥ 5k and 5k = 2k + 3k > 2k + 1. Thus by the principle of mathematical induction, 2ⁿ > n² for all n ≥ 5.

This last variation of the proof illustrates two ideas. First, there is no need to save the name n for the variable we use in applying mathematical induction. We used k as our “inductive variable” in this case. Second, as suggested in a footnote earlier, there is no need to restrict ourselves to proving the implication p(n − 1) ⇒ p(n). In this case, we proved the implication p(k) ⇒ p(k + 1). Clearly these two implications are equivalent as n ranges over all integers larger than b and as k ranges over all integers larger than or equal to b.
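All three versions of the proof establish the same fact, which a short numeric check corroborates for small n (the check is an illustration, not a proof; the bound 50 is an arbitrary choice of the sketch):

```python
# 2^n > n^2 holds at n = 1, fails for n = 2, 3, 4, and holds for n >= 5:
holds = {n: 2 ** n > n ** 2 for n in range(1, 50)}

assert holds[1] is True
assert all(holds[n] is False for n in (2, 3, 4))
assert all(holds[n] is True for n in range(5, 50))
```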

Strong Induction
In our proof of Euclid’s division theorem we had a statement of the form p(m, n) and, assuming that it was false, we chose a smallest m such that p(m, n) is false for some n. This meant we could assume that p(m′, n) is true for all m′ < m, and we needed this assumption, because we ended up showing that p(m − n, n) ⇒ p(m, n) in order to get our contradiction. This situation differs from the examples we used to introduce mathematical induction, for in those we used an implication of the form p(n − 1) ⇒ p(n). The essence of our method in proving Euclid’s division theorem is that we have a statement q(k) we want to prove. We suppose it is false, so that there must be a smallest k for which q(k) is false. This means we may assume q(k′) is true for all k′ in the universe of q with k′ < k. We then use this assumption to derive a proof of q(k), thus generating our contradiction. Again, we can avoid the step of generating a contradiction in the following way. Suppose first we have a proof of q(0). Suppose also that we have a proof that q(0) ∧ q(1) ∧ q(2) ∧ . . . ∧ q(k − 1) ⇒ q(k) for all k larger than 0. Then from q(0) we can prove q(1), from q(0) ∧ q(1) we can prove q(2), from q(0) ∧ q(1) ∧ q(2) we can prove q(3), and so on, giving us a proof of q(n) for any n we desire. This is another form of the mathematical induction principle. We use it when, as in Euclid’s division theorem, we can get an implication of the form q(k′) ⇒ q(k) for some k′ < k or when we can get an implication of the form q(0) ∧ q(1) ∧ q(2) ∧ . . . ∧ q(k − 1) ⇒ q(k). (As is the case in Euclid’s division theorem, we often don’t really know what the k′ is, so in these cases the first kind of situation is really just a special case of the second. Thus, we do not treat the first of the two implications separately.) We have described the method of proof known as the Strong Principle of Mathematical Induction.

The Strong Principle of Mathematical Induction.
If the statement p(b) is true, and the statement p(b) ∧ p(b + 1) ∧ . . . ∧ p(n − 1) ⇒ p(n) is true for all n > b, then p(n) is true for all integers n ≥ b.

Exercise 4.1-7 Prove that every positive integer is either a power of a prime number or the product of powers of prime numbers.

In Exercise 4.1-7 we can observe that 1 is a power of a prime number; for example, 1 = 2⁰. Suppose now we know that every number less than n is a power of a prime number or a product of powers of prime numbers. Then if n is not a prime number, it is a product of two smaller


numbers, each of which is, by our supposition, a power of a prime number or a product of powers of prime numbers. Therefore n is a power of a prime number or a product of powers of prime numbers. Thus, by the strong principle of mathematical induction, every positive integer is a power of a prime number or a product of powers of prime numbers.

Note that there was no explicit mention of an implication of the form

p(b) ∧ p(b + 1) ∧ . . . ∧ p(n − 1) ⇒ p(n).

This is common with inductive proofs. Note also that we did not explicitly identify the base case or the inductive hypothesis in our proof. This is common too. Readers of inductive proofs are expected to recognize when the base case is being given and when an implication of the form p(n − 1) ⇒ p(n) or p(b) ∧ p(b + 1) ∧ · · · ∧ p(n − 1) ⇒ p(n) is being proved.

Mathematical induction is used frequently in discrete math and computer science. Many quantities that we are interested in measuring, such as running time, space, or output of a program, typically are restricted to positive integers, and thus mathematical induction is a natural way to prove facts about these quantities. We will use it frequently throughout this book. We typically will not distinguish between strong and weak induction; we just think of them both as induction. (In Problems 14 and 15 at the end of the section you will be asked to derive each version of the principle from the other.)
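The strong-induction argument is essentially a recursive algorithm: to factor n, either n is prime, or n splits as a product of two smaller numbers that we factor recursively. The Python sketch below makes this recursion explicit (the helper names smallest_divisor and prime_factorization are our own, not from the text):

```python
def smallest_divisor(n):
    """Smallest d >= 2 dividing n; returns n itself when n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def prime_factorization(n):
    """Prime factors of n >= 2 as a sorted list, mirroring the
    strong-induction proof: split off a factor, recurse on what is left."""
    d = smallest_divisor(n)
    if d == n:  # n is prime: the base-like case of the induction
        return [n]
    return [d] + prime_factorization(n // d)  # recurse on a smaller number

print(prime_factorization(360))  # [2, 2, 2, 3, 3, 5]
```

Because each recursive call is on a strictly smaller integer, the recursion terminates for exactly the same reason the strong induction goes through: every number less than n is already handled.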

Induction in general
To summarize what we have said so far, a typical proof by mathematical induction showing that a statement p(n) is true for all integers n ≥ b consists of three steps.

1. First we show that p(b) is true. This is called “establishing a base case.”

2. Then we show either that for all n > b, p(n − 1) ⇒ p(n), or that for all n > b, p(b) ∧ p(b + 1) ∧ . . . ∧ p(n − 1) ⇒ p(n). For this purpose, we make either the inductive hypothesis of p(n − 1) or the inductive hypothesis p(b) ∧ p(b + 1) ∧ . . . ∧ p(n − 1). Then we derive p(n) to complete the proof of the implication we desire, either p(n − 1) ⇒ p(n) or p(b) ∧ p(b + 1) ∧ . . . ∧ p(n − 1) ⇒ p(n).

Instead we could

2′. show either that for all n ≥ b, p(n) ⇒ p(n + 1), or that for all n ≥ b, p(b) ∧ p(b + 1) ∧ · · · ∧ p(n) ⇒ p(n + 1). For this purpose, we make either the inductive hypothesis of p(n) or the inductive hypothesis p(b) ∧ p(b + 1) ∧ . . . ∧ p(n). Then we derive p(n + 1) to complete the proof of the implication we desire, either p(n) ⇒ p(n + 1) or p(b) ∧ p(b + 1) ∧ . . . ∧ p(n) ⇒ p(n + 1).

3. Finally, we conclude on the basis of the principle of mathematical induction that p(n) is true for all integers n greater than or equal to b.


The second step is the core of an inductive proof. This is usually where we need the most insight into what we are trying to prove. In light of our discussion of Exercise 4.1-6, it should be clear that step 2′ is simply a variation on the theme of writing an inductive proof. It is important to realize that induction arises in some circumstances that do not fit the “pat” typical description we gave above. These circumstances seem to arise often in computer science. However, inductive proofs always involve three things. First we always need a base case or cases. Second, we need to show an implication that demonstrates that p(n) is true given that p(n′) is true for some set of n′ < n, or possibly we may need to show a set of such implications. Finally, we reach a conclusion on the basis of the first two steps. For example, consider the problem of proving the following statement:

∑ᵢ₌₀ⁿ ⌊i/2⌋ = n²/4 if n is even, and (n² − 1)/4 if n is odd.   (4.6)
In order to prove this, one must show that p(0) is true, that p(1) is true, that p(n − 2) ⇒ p(n) if n is odd, and that p(n − 2) ⇒ p(n) if n is even. Putting all these together, we see that our formulas hold for all n ≥ 0. We can view this as either two proofs by induction, one for even and one for odd numbers, or one proof in which we have two base cases and two methods of deriving results from previous ones. This second view is more profitable, because it expands our view of what induction means, and makes it easier to find inductive proofs. In particular we could find situations where we have just one implication to prove but several base cases to check to cover all cases, or just one base case, but several different implications to prove to cover all cases.

Logically speaking, we could rework the example above so that it fits the pattern of strong induction. For example, when we prove a second base case, then we have just proved that the first base case implies it, because a true statement implies a true statement. Writing a description of mathematical induction that covers all kinds of base cases and implications one might want to consider in practice would simply give students one more unnecessary thing to memorize, so we shall not do so. However, in the mathematics literature and especially in the computer science literature, inductive proofs are written with multiple base cases and multiple implications with no effort to reduce them to one of the standard forms of mathematical induction. So long as it is possible to “cover” all the cases under consideration with such a proof, it can be rewritten as a standard inductive proof. Since readers of such proofs are expected to know this is possible, and since it adds unnecessary verbiage to a proof to do so, this is almost always left out.
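Both branches of the parity-split formula are easy to confirm numerically. The Python sketch below (an illustration only; it assumes the summand is ⌊i/2⌋, and the helper name half_floor_sum and the bound 200 are our own choices) checks both parities:

```python
def half_floor_sum(n):
    """Sum of floor(i/2) for i = 0, 1, ..., n."""
    return sum(i // 2 for i in range(n + 1))

for n in range(0, 200):
    if n % 2 == 0:
        assert half_floor_sum(n) == n ** 2 // 4       # even case: n^2 / 4
    else:
        assert half_floor_sum(n) == (n ** 2 - 1) // 4  # odd case: (n^2 - 1) / 4
```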

Important Concepts, Formulas, and Theorems
1. Weak Principle of Mathematical Induction. The weak principle of mathematical induction states that if the statement p(b) is true, and the statement p(n − 1) ⇒ p(n) is true for all n > b, then p(n) is true for all integers n ≥ b.

2. Strong Principle of Mathematical Induction. The strong principle of mathematical induction states that if the statement p(b) is true, and the statement p(b) ∧ p(b + 1) ∧ . . . ∧ p(n − 1) ⇒ p(n) is true for all n > b, then p(n) is true for all integers n ≥ b.


3. Base Case. Every proof by mathematical induction, strong or weak, begins with a base case which establishes the result being proved for at least one value of the variable on which we are inducting. This base case should prove the result for the smallest value of the variable for which we are asserting the result. In a proof with multiple base cases, the base cases should cover all values of the variable which are not covered by the inductive step of the proof.

4. Inductive Hypothesis. Every proof by induction includes an inductive hypothesis in which we assume the result p(n) we are trying to prove is true when n = k − 1 or when n < k (or in which we assume an equivalent statement).

5. Inductive Step. Every proof by induction includes an inductive step in which we prove the implication that p(k − 1) ⇒ p(k) or the implication that p(b) ∧ p(b + 1) ∧ · · · ∧ p(k − 1) ⇒ p(k), or some equivalent implication.

6. Inductive Conclusion. A proof by mathematical induction should include, at least implicitly, a concluding statement of the form “Thus by the principle of mathematical induction . . . ,” which asserts that by the principle of mathematical induction the result p(n) which we are trying to prove is true for all values of n including and beyond the base case(s).

Problems
1. This exercise explores ways to prove that 2/3 + 2/9 + ⋯ + 2/3ⁿ = 1 − (1/3)ⁿ for all positive integers n.

(a) First, try proving the formula by contradiction. Thus you assume that there is some integer n that makes the formula false. Then there must be some smallest n that makes the formula false. Can this smallest n be 1? What do we know about 2/3 + 2/9 + ⋯ + 2/3ⁱ when i is a positive integer smaller than this smallest n? Is n − 1 a positive integer for this smallest n? What do we know about 2/3 + 2/9 + ⋯ + 2/3ⁿ⁻¹ for this smallest n? Write this as an equation and add 2/3ⁿ to both sides and simplify the right side. What does this say about our assumption that the formula is false? What can you conclude about the truth of the formula? If p(k) is the statement 2/3 + 2/9 + ⋯ + 2/3ᵏ = 1 − (1/3)ᵏ, what implication did we prove in the process of deriving our contradiction?

(b) What is the base step in a proof by mathematical induction that 2/3 + 2/9 + ⋯ + 2/3ⁿ = 1 − (1/3)ⁿ for all positive integers n? What would you assume as an inductive hypothesis? What would you prove in the inductive step of a proof of this formula by induction? Prove it. What does the principle of mathematical induction allow you to conclude? If p(k) is the statement 2/3 + 2/9 + ⋯ + 2/3ᵏ = 1 − (1/3)ᵏ, what implication did we prove in the process of doing our proof by induction?

2. Use contradiction to prove that 1 · 2 + 2 · 3 + ⋯ + n(n + 1) = n(n + 1)(n + 2)/3.

3. Use induction to prove that 1 · 2 + 2 · 3 + ⋯ + n(n + 1) = n(n + 1)(n + 2)/3.

4. Prove that 1³ + 2³ + 3³ + ⋯ + n³ = n²(n + 1)²/4.

5. Write a careful proof of Euclid’s division theorem using strong induction.

6. Prove that ∑_{i=j}^{n} (i choose j) = (n+1 choose j+1). As well as the inductive proof that we are expecting, there is a nice "story" proof of this formula. It is well worth trying to figure it out.

7. Prove that every number greater than 7 is a sum of a nonnegative integer multiple of 3 and a nonnegative integer multiple of 5.

8. The usual definition of exponents in an advanced mathematics course (or an intermediate computer science course) is that a^0 = 1 and a^(n+1) = a^n · a. Explain why this defines a^n for all nonnegative integers n. Prove the rule of exponents a^(m+n) = a^m a^n from this definition.

9. Our arguments in favor of the sum principle were quite intuitive. In fact the sum principle for n sets follows from the sum principle for two sets. Use induction to prove the sum principle for a union of n sets from the sum principle for a union of two sets.

10. We have proved that every positive integer is a power of a prime number or a product of powers of prime numbers. Show that this factorization is unique in the following sense: if you have two factorizations of a positive integer, both factorizations use exactly the same primes, and each prime occurs to the same power in both factorizations. For this purpose, it is helpful to know that if a prime divides a product of integers, then it divides one of the integers in the product. (Another way to say this is that if a prime is a factor of a product of integers, then it is a factor of one of the integers in the product.)

11. Prove that 1^4 + 2^4 + · · · + n^4 = O(n^5 − n^4).

12. Find the error in the following "proof" that all positive integers n are equal. Let p(n) be the statement that all numbers in an n-element set of positive integers are equal. Then p(1) is true. Now assume p(n − 1) is true, and let N be the set of the first n integers. Let N′ be the set of the first n − 1 integers, and let N′′ be the set of the last n − 1 integers. Then by p(n − 1) all members of N′ are equal and all members of N′′ are equal. Thus the first n − 1 elements of N are equal and the last n − 1 elements of N are equal, and so all elements of N are equal. Thus all positive integers are equal.

13. Prove by induction that the number of subsets of an n-element set is 2^n.

14. Prove that the Strong Principle of Mathematical Induction implies the Weak Principle of Mathematical Induction.

15. Prove that the Weak Principle of Mathematical Induction implies the Strong Principle of Mathematical Induction.

16. Prove (4.6).


CHAPTER 4. INDUCTION, RECURSION, AND RECURRENCES

4.2

Recursion, Recurrences and Induction

Recursion
Exercise 4.2-1 Describe the uses you have made of recursion in writing programs. Include as many as you can.

Exercise 4.2-2 Recall that in the Towers of Hanoi problem we have three pegs numbered 1, 2, and 3, and on one peg we have a stack of n disks, each smaller in diameter than the one below it, as in Figure 4.1. An allowable move consists of removing a disk from one peg and sliding it onto another peg so that it is not above another disk of smaller size. We are to determine how many allowable moves are needed to move the disks from one peg to another. Describe the strategy you have used or would use in a recursive program to solve this problem.

Figure 4.1: The Towers of Hanoi

For the Tower of Hanoi problem, to solve the problem with no disks you do nothing. To solve the problem of moving all disks to peg 2, we do the following:

1. (Recursively) solve the problem of moving n − 1 disks from peg 1 to peg 3,

2. move disk n to peg 2,

3. (Recursively) solve the problem of moving the n − 1 disks on peg 3 to peg 2.

Thus if M(n) is the number of moves needed to move n disks from peg i to peg j, we have

M(n) = 2M(n − 1) + 1.

This is an example of a recurrence equation or recurrence. A recurrence equation for a function defined on the set of integers greater than or equal to some number b is one that tells us how to compute the nth value of a function from the (n − 1)st value or some or all the values preceding n. To completely specify a function on the basis of a recurrence, we have to give enough information about the function to get started. This information is called the initial condition (or the initial conditions) (which we also call the base case) for the recurrence. In this case we have said that M(0) = 0. Using this, we get from the recurrence that M(1) = 1, M(2) = 3, M(3) = 7, M(4) = 15, M(5) = 31, and we are led to guess that M(n) = 2^n − 1. Formally, we write our recurrence and initial condition together as

M(n) = { 0                if n = 0
       { 2M(n − 1) + 1    otherwise     (4.7)
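The three-step strategy above translates directly into a short program. Here is a minimal Python sketch (the function names are ours, not the text's) that records the moves and also computes M(n) straight from the recurrence, so the two counts can be compared:

```python
def hanoi_moves(n, source, target, spare, moves):
    """Recursively move n disks from source to target, recording each move."""
    if n == 0:
        return  # the problem with no disks requires doing nothing
    hanoi_moves(n - 1, source, spare, target, moves)  # step 1: n-1 disks out of the way
    moves.append((source, target))                    # step 2: move disk n
    hanoi_moves(n - 1, spare, target, source, moves)  # step 3: n-1 disks on top of it

def M(n):
    """Move count computed directly from the recurrence M(n) = 2M(n-1) + 1."""
    return 0 if n == 0 else 2 * M(n - 1) + 1

moves = []
hanoi_moves(5, 1, 2, 3, moves)
# len(moves) agrees with M(5), and both agree with the guess 2^n - 1
```

Running this for several n confirms that the recorded move count, the recurrence, and the guess 2^n − 1 all agree.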

4.2. RECURSION, RECURRENCES AND INDUCTION

129

Now we give an inductive proof that our guess is correct. The base case is trivial, as we have defined M(0) = 0, and 0 = 2^0 − 1. For the inductive step, we assume that n > 0 and M(n − 1) = 2^(n−1) − 1. From the recurrence, M(n) = 2M(n − 1) + 1. But, by the inductive hypothesis, M(n − 1) = 2^(n−1) − 1, so we get that:

M(n) = 2M(n − 1) + 1          (4.8)
     = 2(2^(n−1) − 1) + 1     (4.9)
     = 2^n − 1.               (4.10)

Thus by the principle of mathematical induction, M(n) = 2^n − 1 for all nonnegative integers n.

The ease with which we solved this recurrence and proved our solution correct is no accident. Recursion, recurrences, and induction are all intimately related. The relationship between recursion and recurrences is reasonably transparent, as recurrences give a natural way of analyzing recursive algorithms. Recursion and recurrences are abstractions that allow you to specify the solution to an instance of a problem of size n as some function of solutions to smaller instances. Induction also falls naturally into this paradigm. Here, you are deriving a statement p(n) from statements p(n′) for n′ < n. Thus we really have three variations on the same theme. We also observe, more concretely, that the mathematical correctness of solutions to recurrences is naturally proved via induction. In fact, the correctness of recurrences in describing the number of steps needed to solve a recursive problem is also naturally proved by induction. The recurrence or recursive structure of the problem makes it straightforward to set up the induction proof.

First order linear recurrences
Exercise 4.2-3 The empty set (∅) is a set with no elements. How many subsets does it have? How many subsets does the one-element set {1} have? How many subsets does the two-element set {1, 2} have? How many of these contain 2? How many subsets does {1, 2, 3} have? How many contain 3? Give a recurrence for the number S(n) of subsets of an n-element set, and prove by induction that your recurrence is correct.

Exercise 4.2-4 When someone is paying off a loan with initial amount A and monthly payment M at an interest rate of p percent, the total amount T(n) of the loan after n months is computed by adding p/12 percent to the amount due after n − 1 months and then subtracting the monthly payment M. Convert this description into a recurrence for the amount owed after n months.

Exercise 4.2-5 Given the recurrence T(n) = rT(n − 1) + a, where r and a are constants, find a recurrence that expresses T(n) in terms of T(n − 2) instead of T(n − 1). Now find a recurrence that expresses T(n) in terms of T(n − 3) instead of T(n − 2) or T(n − 1). Now find a recurrence that expresses T(n) in terms of T(n − 4) rather than T(n − 1), T(n − 2), or T(n − 3). Based on your work so far, find a general formula for the solution to the recurrence T(n) = rT(n − 1) + a, with T(0) = b, and where r and a are constants.


If we construct small examples for Exercise 4.2-3, we see that ∅ has only 1 subset, {1} has 2 subsets, {1, 2} has 4 subsets, and {1, 2, 3} has 8 subsets. This gives us a good guess as to what the general formula is, but in order to prove it we will need to think recursively. Consider the subsets of {1, 2, 3}:

∅      {1}      {2}      {1, 2}
{3}    {1, 3}   {2, 3}   {1, 2, 3}

The first four subsets do not contain 3, and the second four do. Further, the first four subsets are exactly the subsets of {1, 2}, while the second four are the four subsets of {1, 2} with 3 added into each one. This suggests that the recurrence for the number of subsets of an n-element set (which we may assume is {1, 2, . . . , n}) is

S(n) = { 2S(n − 1)    if n ≥ 1
       { 1            if n = 0     (4.11)
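The recursive idea behind this recurrence — every subset of {1, . . . , n} either omits n, or is a subset omitting n with n adjoined — can be sketched in Python (the function name is ours):

```python
def subsets(n):
    """Return all subsets of {1, 2, ..., n}, built recursively as in the text."""
    if n == 0:
        return [set()]                       # the empty set has exactly one subset: itself
    smaller = subsets(n - 1)                 # the subsets not containing n
    with_n = [s | {n} for s in smaller]      # adjoin n to each to get those that do
    return smaller + with_n
```

Counting the results for small n reproduces the values 1, 2, 4, 8, . . . observed above.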

To prove this recurrence is correct, we note that the subsets of an n-element set can be partitioned by whether they contain element n or not. The subsets of {1, 2, . . . , n} containing element n can be constructed by adjoining the element n to the subsets not containing element n. So the number of subsets containing element n is the same as the number of subsets not containing element n. The number of subsets not containing element n is just the number of subsets of an (n − 1)-element set. Therefore each block of our partition has size equal to the number of subsets of an (n − 1)-element set. Thus, by the sum principle, the number of subsets of {1, 2, . . . , n} is twice the number of subsets of {1, 2, . . . , n − 1}. This proves that S(n) = 2S(n − 1) if n > 0. We already observed that ∅ has exactly one subset, namely itself, so we have proved the correctness of Recurrence 4.11.

For Exercise 4.2-4 we can algebraically describe what the problem said in words by

T(n) = (1 + .01p/12) · T(n − 1) − M,

with T(0) = A. Note that we add .01p/12 times the principal to the amount due each month, because p/12 percent of a number is .01p/12 times the number.
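The loan description of Exercise 4.2-4 converts directly into a short computation. A minimal sketch, using hypothetical figures (a 1000-dollar loan at 12 percent annual interest with a 100-dollar monthly payment):

```python
def amount_owed(n, A, M, p):
    """Balance after n months: add p/12 percent interest, then subtract payment M."""
    if n == 0:
        return A                                        # initial condition T(0) = A
    return (1 + 0.01 * p / 12) * amount_owed(n - 1, A, M, p) - M

# Hypothetical example: 1000 dollars at 12 percent, paying 100 dollars per month.
balance_after_year = amount_owed(12, 1000, 100, 12)
```

Each month multiplies the previous balance by 1 + .01p/12 and subtracts M, exactly as the recurrence states.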

Iterating a recurrence
Turning to Exercise 4.2-5, we can substitute the right-hand side of the equation T(n − 1) = rT(n − 2) + a for T(n − 1) in our recurrence, and then substitute the similar equations for T(n − 2) and T(n − 3), to write

T(n) = r(rT(n − 2) + a) + a
     = r^2 T(n − 2) + ra + a
     = r^2 (rT(n − 3) + a) + ra + a
     = r^3 T(n − 3) + r^2 a + ra + a
     = r^3 (rT(n − 4) + a) + r^2 a + ra + a
     = r^4 T(n − 4) + r^3 a + r^2 a + ra + a

From this, we can guess that

T(n) = r^n T(0) + a ∑_{i=0}^{n−1} r^i
     = r^n b + a ∑_{i=0}^{n−1} r^i.     (4.12)

The method we used to guess the solution is called iterating the recurrence, because we repeatedly use the recurrence with smaller and smaller values in place of n. We could instead have written

T(0) = b
T(1) = rT(0) + a = rb + a
T(2) = rT(1) + a = r(rb + a) + a = r^2 b + ra + a
T(3) = rT(2) + a = r^3 b + r^2 a + ra + a

This leads us to the same guess, so why have we introduced two methods? Having different approaches to solving a problem often yields insights we would not get with just one approach. For example, when we study recursion trees, we will see how to visualize the process of iterating certain kinds of recurrences in order to simplify the algebra involved in solving them.

Geometric series
You may recognize the sum ∑_{i=0}^{n−1} r^i in Equation 4.12. It is called a finite geometric series with common ratio r. The sum ∑_{i=0}^{n−1} a r^i is called a finite geometric series with common ratio r and initial value a. Recall from algebra the factorizations

(1 − x)(1 + x) = 1 − x^2
(1 − x)(1 + x + x^2) = 1 − x^3
(1 − x)(1 + x + x^2 + x^3) = 1 − x^4

These factorizations are easy to verify, and they suggest that (1 − r)(1 + r + r^2 + · · · + r^(n−1)) = 1 − r^n, or

∑_{i=0}^{n−1} r^i = (1 − r^n)/(1 − r).     (4.13)

In fact this formula is true, and it lets us rewrite the formula we got for T(n) in a very nice form.

Theorem 4.1 If T(n) = rT(n − 1) + a, T(0) = b, and r ≠ 1, then

T(n) = r^n b + a · (1 − r^n)/(1 − r)     (4.14)

for all nonnegative integers n.


Proof: We will prove our formula by induction. Notice that the formula gives T(0) = r^0 b + a · (1 − r^0)/(1 − r), which is b, so the formula is true when n = 0. Now assume that n > 0 and

T(n − 1) = r^(n−1) b + a · (1 − r^(n−1))/(1 − r).

Then we have

T(n) = rT(n − 1) + a
     = r(r^(n−1) b + a · (1 − r^(n−1))/(1 − r)) + a
     = r^n b + (ar − ar^n)/(1 − r) + a
     = r^n b + (ar − ar^n + a − ar)/(1 − r)
     = r^n b + a · (1 − r^n)/(1 − r).

Therefore, by the principle of mathematical induction, our formula holds for all nonnegative integers n.

We did not prove Equation 4.13. However, it is easy to use Theorem 4.1 to prove it.

Corollary 4.2 The formula for the sum of a geometric series with r ≠ 1 is

∑_{i=0}^{n−1} r^i = (1 − r^n)/(1 − r).     (4.15)

Proof: Define T(n) = ∑_{i=0}^{n−1} r^i. Then T(n) = rT(n − 1) + 1, and since T(0) is a sum with no terms, T(0) = 0. Applying Theorem 4.1 with b = 0 and a = 1 gives us T(n) = (1 − r^n)/(1 − r).

Often, when we see a geometric series, we will only be concerned with expressing the sum in big-O notation. In this case, we can show that the sum of a geometric series is at most the largest term times a constant factor, where the constant factor depends on r, but not on n.

Lemma 4.3 Let r be a quantity whose value is independent of n and not equal to 1. Let t(n) be the largest term of the geometric series ∑_{i=0}^{n−1} r^i. Then the value of the geometric series is O(t(n)).

Proof: It is straightforward to see that we may limit ourselves to proving the lemma for r > 0. We consider two cases, depending on whether r > 1 or r < 1. If r > 1, then

∑_{i=0}^{n−1} r^i = (r^n − 1)/(r − 1)
                 ≤ r^n/(r − 1)
                 = r^(n−1) · r/(r − 1)
                 = O(r^(n−1)).

On the other hand, if r < 1, then the largest term is r^0 = 1, and the sum has value

(1 − r^n)/(1 − r) < 1/(1 − r).

Thus the sum is O(1), and since t(n) = 1, the sum is O(t(n)).
In fact, when r is nonnegative, an even stronger statement is true. Recall that we said, for two functions f and g from the real numbers to the real numbers, that f = Θ(g) if f = O(g) and g = O(f).

Theorem 4.4 Let r be a nonnegative quantity whose value is independent of n and not equal to 1. Let t(n) be the largest term of the geometric series ∑_{i=0}^{n−1} r^i. Then the value of the geometric series is Θ(t(n)).

Proof: By Lemma 4.3, we need only show that t(n) = O(∑_{i=0}^{n−1} r^i). Since all the r^i are nonnegative, the sum ∑_{i=0}^{n−1} r^i is at least as large as any of its summands. But t(n) is one of these summands, so t(n) = O(∑_{i=0}^{n−1} r^i).

Note from the proof that t(n) and the constant in the big-O upper bound depend on r. We will use this theorem in subsequent sections.
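Both the closed form of Equation 4.13 and the big-Θ bound of Theorem 4.4 are easy to check numerically. A small sketch (the function names are ours):

```python
def geometric_sum(r, n):
    """Sum of the finite geometric series r^0 + r^1 + ... + r^(n-1), term by term."""
    return sum(r ** i for i in range(n))

def closed_form(r, n):
    """The closed form (1 - r^n) / (1 - r) from Equation 4.13, valid for r != 1."""
    return (1 - r ** n) / (1 - r)

def largest_term(r, n):
    """t(n): the largest term of the series (r^(n-1) when r > 1, else r^0 = 1)."""
    return r ** (n - 1) if r > 1 else 1.0
```

For several values of r and n, the term-by-term sum matches the closed form, and the ratio of the sum to its largest term stays bounded by a constant depending only on r, as Theorem 4.4 asserts.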

First order linear recurrences
A recurrence of the form T(n) = f(n)T(n − 1) + g(n) is called a first order linear recurrence. When f(n) is a constant, say r, the general solution is almost as easy to write down as in the case we already figured out. Iterating the recurrence gives us

T(n) = rT(n − 1) + g(n)
     = r(rT(n − 2) + g(n − 1)) + g(n)
     = r^2 T(n − 2) + rg(n − 1) + g(n)
     = r^2 (rT(n − 3) + g(n − 2)) + rg(n − 1) + g(n)
     = r^3 T(n − 3) + r^2 g(n − 2) + rg(n − 1) + g(n)
     = r^3 (rT(n − 4) + g(n − 3)) + r^2 g(n − 2) + rg(n − 1) + g(n)
     = r^4 T(n − 4) + r^3 g(n − 3) + r^2 g(n − 2) + rg(n − 1) + g(n)
     .
     .
     .
     = r^n T(0) + ∑_{i=0}^{n−1} r^i g(n − i)

This suggests our next theorem.


Theorem 4.5 For any positive constants a and r, and any function g defined on the nonnegative integers, the solution to the first order linear recurrence

T(n) = { rT(n − 1) + g(n)    if n > 0
       { a                   if n = 0     (4.16)

is

T(n) = r^n a + ∑_{i=1}^{n} r^(n−i) g(i).

Proof: Let's prove this by induction. Since the sum ∑_{i=1}^{n} r^(n−i) g(i) in Equation 4.16 has no terms when n = 0, the formula gives T(0) = a and so is valid when n = 0. We now assume that n is positive and T(n − 1) = r^(n−1) a + ∑_{i=1}^{n−1} r^((n−1)−i) g(i). Using the definition of the recurrence and the inductive hypothesis we get that

T(n) = rT(n − 1) + g(n)
     = r(r^(n−1) a + ∑_{i=1}^{n−1} r^((n−1)−i) g(i)) + g(n)
     = r^n a + ∑_{i=1}^{n−1} r^((n−1)+1−i) g(i) + g(n)
     = r^n a + ∑_{i=1}^{n−1} r^(n−i) g(i) + g(n)
     = r^n a + ∑_{i=1}^{n} r^(n−i) g(i).

Therefore, by the principle of mathematical induction, the solution to

T(n) = { rT(n − 1) + g(n)    if n > 0
       { a                   if n = 0

is given by Equation 4.16 for all nonnegative integers n.

The formula in Theorem 4.5 is a little less easy to use than that in Theorem 4.1 because it gives us a sum to compute. Fortunately, for a number of commonly occurring functions g, the sum ∑_{i=1}^{n} r^(n−i) g(i) is reasonable to compute.

Exercise 4.2-6 Solve the recurrence T(n) = 4T(n − 1) + 2^n with T(0) = 6.

Exercise 4.2-7 Solve the recurrence T(n) = 3T(n − 1) + n with T(0) = 10.

For Exercise 4.2-6, using Equation 4.16, we can write

T(n) = 6 · 4^n + ∑_{i=1}^{n} 4^(n−i) · 2^i
     = 6 · 4^n + 4^n ∑_{i=1}^{n} 4^(−i) · 2^i

4.2. RECURSION, RECURRENCES AND INDUCTION n 135 i = 6 · 4n + 4n i=1 1 2

= 6 · 4n + 4n ·

1 n−1 1 · 2 i=0 2

i

1 = 6 · 4n + (1 − ( )n ) · 4n 2 = 7 · 4n − 2n For Exercise 4.2-7 we begin in the same way and face a bit of a surprise. Using Equation 4.16, we write n T (n) = 10 · 3n + i=1 3n−i · i n = 10 · 3n + 3n i=1 n

i3−i i 1 3 i = 10 · 3n + 3n i=1 .

(4.17)

Now we are faced with a sum that you may not recognize, a sum that has the form

∑_{i=1}^{n} i x^i = x ∑_{i=1}^{n} i x^(i−1),

with x = 1/3. However, by writing it in this form, we can use calculus to recognize it as x times a derivative. In particular, using the fact that 0 · x^0 = 0, we can write

∑_{i=1}^{n} i x^i = x ∑_{i=0}^{n} i x^(i−1) = x · d/dx [ ∑_{i=0}^{n} x^i ] = x · d/dx [ (1 − x^(n+1))/(1 − x) ].

But using the formula for the derivative of a quotient from calculus, we may write

x · d/dx [ (1 − x^(n+1))/(1 − x) ] = x · [ (1 − x)(−(n + 1)x^n) − (1 − x^(n+1))(−1) ] / (1 − x)^2
                                  = [ n x^(n+2) − (n + 1)x^(n+1) + x ] / (1 − x)^2.

Connecting our first and last equations, we get

∑_{i=1}^{n} i x^i = [ n x^(n+2) − (n + 1)x^(n+1) + x ] / (1 − x)^2.     (4.18)
Substituting in x = 1/3 and simplifying gives us

∑_{i=1}^{n} i (1/3)^i = −((n + 1)/2) (1/3)^n − (3/4)(1/3)^(n+1) + 3/4.

Substituting this into Equation 4.17 gives us

T(n) = 10 · 3^n + 3^n ( −((n + 1)/2)(1/3)^n − (3/4)(1/3)^(n+1) + 3/4 )
     = 10 · 3^n − (n + 1)/2 − 1/4 + 3^(n+1)/4
     = (43/4) · 3^n − (n + 1)/2 − 1/4.
The sum that arises in this exercise occurs so often that we give its formula as a theorem.

Theorem 4.6 For any real number x ≠ 1,

∑_{i=1}^{n} i x^i = [ n x^(n+2) − (n + 1)x^(n+1) + x ] / (1 − x)^2.     (4.19)

Proof: Given before the statement of the theorem.
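Equation 4.19 can also be sanity-checked numerically for a few values of x and n. A small sketch (the function names are ours):

```python
def weighted_geometric_sum(x, n):
    """Compute sum_{i=1}^{n} i * x^i directly, term by term."""
    return sum(i * x ** i for i in range(1, n + 1))

def formula_4_19(x, n):
    """Right-hand side of Equation 4.19, valid for x != 1."""
    return (n * x ** (n + 2) - (n + 1) * x ** (n + 1) + x) / (1 - x) ** 2
```

The two agree (up to floating-point error) for positive, negative, and fractional x, as long as x ≠ 1.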

Important Concepts, Formulas, and Theorems
1. Recurrence Equation or Recurrence. A recurrence equation is one that tells us how to compute the nth term of a sequence from the (n − 1)st term or some or all the preceding terms.

2. Initial Condition. To completely specify a function on the basis of a recurrence, we have to give enough information about the function to get started. This information is called the initial condition (or the initial conditions) for the recurrence.

3. First Order Linear Recurrence. A recurrence T(n) = f(n)T(n − 1) + g(n) is called a first order linear recurrence.

4. Constant Coefficient Recurrence. A recurrence in which T(n) is expressed in terms of a sum of constant multiples of T(k) for certain values k < n (and perhaps another function of n) is called a constant coefficient recurrence.

5. Solution to a First Order Constant Coefficient Linear Recurrence. If T(n) = rT(n − 1) + a, T(0) = b, and r ≠ 1, then T(n) = r^n b + a · (1 − r^n)/(1 − r) for all nonnegative integers n.

6. Finite Geometric Series. A finite geometric series with common ratio r is a sum of the form ∑_{i=0}^{n−1} r^i. The formula for the sum of a geometric series with r ≠ 1 is ∑_{i=0}^{n−1} r^i = (1 − r^n)/(1 − r).

7. Big-Θ Bounds on the Sum of a Geometric Series. Let r be a nonnegative quantity whose value is independent of n and not equal to 1. Let t(n) be the largest term of the geometric series ∑_{i=0}^{n−1} r^i. Then the value of the geometric series is Θ(t(n)).

8. Solution to a First Order Linear Recurrence. For any positive constants a and r, and any function g defined on the nonnegative integers, the solution to the first order linear recurrence

   T(n) = { rT(n − 1) + g(n)    if n > 0
          { a                   if n = 0

   is T(n) = r^n a + ∑_{i=1}^{n} r^(n−i) g(i).

9. Iterating a Recurrence. We say we are iterating a recurrence when we guess its solution by using the equation that expresses T(n) in terms of T(k) for k smaller than n to re-express T(n) in terms of T(k) for k smaller than n − 1, then for k smaller than n − 2, and so on until we can guess the formula for the sum.

10. An Important Sum. For any real number x ≠ 1, ∑_{i=1}^{n} i x^i = [ n x^(n+2) − (n + 1)x^(n+1) + x ] / (1 − x)^2.

Problems
1. Prove Equation 4.15 directly by induction.

2. Prove Equation 4.18 directly by induction.

3. Solve the recurrence M(n) = 2M(n − 1) + 2, with a base case of M(1) = 1. How does it differ from the solution to Recurrence 4.7?

4. Solve the recurrence M(n) = 3M(n − 1) + 1, with a base case of M(1) = 1. How does it differ from the solution to Recurrence 4.7?

5. Solve the recurrence M(n) = M(n − 1) + 2, with a base case of M(1) = 1. How does it differ from the solution to Recurrence 4.7?

6. There are m functions from a one-element set to the set {1, 2, . . . , m}. How many functions are there from a two-element set to {1, 2, . . . , m}? From a three-element set? Give a recurrence for the number T(n) of functions from an n-element set to {1, 2, . . . , m}. Solve the recurrence.

7. Solve the recurrence that you derived in Exercise 4.2-4.

8. At the end of each year, a state fish hatchery puts 2000 fish into a lake. The number of fish in the lake at the beginning of the year doubles due to reproduction by the end of the year. Give a recurrence for the number of fish in the lake after n years and solve the recurrence.

9. Consider the recurrence T(n) = 3T(n − 1) + 1 with the initial condition that T(0) = 2. We know that we could write the solution down from Theorem 4.1. Instead of using the theorem, try to guess the solution from the first four values of T(n) and then try to guess the solution by iterating the recurrence four times.


10. What sort of big-Θ bound can we give on the value of a geometric series 1 + r + r^2 + · · · + r^n with common ratio r = 1?

11. Solve the recurrence T(n) = 2T(n − 1) + n2^n with the initial condition that T(0) = 1.

12. Solve the recurrence T(n) = 2T(n − 1) + n^3 2^n with the initial condition that T(0) = 2.

13. Solve the recurrence T(n) = 2T(n − 1) + 3^n with T(0) = 1.

14. Solve the recurrence T(n) = rT(n − 1) + r^n with T(0) = 1.

15. Solve the recurrence T(n) = rT(n − 1) + r^(2n) with T(0) = 1.

16. Solve the recurrence T(n) = rT(n − 1) + s^n with T(0) = 1.

17. Solve the recurrence T(n) = rT(n − 1) + n with T(0) = 1.

18. The Fibonacci numbers are defined by the recurrence

    T(n) = { T(n − 1) + T(n − 2)    if n > 1
           { 1                      if n = 0 or n = 1

    (a) Write down the first ten Fibonacci numbers.

    (b) Show that ((1 + √5)/2)^n and ((1 − √5)/2)^n are solutions to the equation F(n) = F(n − 1) + F(n − 2).

    (c) Why is c1((1 + √5)/2)^n + c2((1 − √5)/2)^n a solution to the equation F(n) = F(n − 1) + F(n − 2) for any real numbers c1 and c2?

    (d) Find constants c1 and c2 such that the Fibonacci numbers are given by

        F(n) = c1((1 + √5)/2)^n + c2((1 − √5)/2)^n.

4.3

Growth Rates of Solutions to Recurrences

Divide and Conquer Algorithms
One of the most basic and powerful algorithmic techniques is divide and conquer. Consider, for example, the binary search algorithm, which we will describe in the context of guessing a number between 1 and 100. Suppose someone picks a number between 1 and 100, and allows you to ask questions of the form "Is the number greater than k?" where k is an integer you choose. Your goal is to ask as few questions as possible to figure out the number. Your first question should be "Is the number greater than 50?" Why is this? Well, after asking if the number is bigger than 50, you have learned either that the number is between one and 50, or that the number is between 51 and 100. In either case you have reduced your problem to one in which the range is only half as big. Thus you have divided the problem up into a problem that is only half as big, and you can now (recursively) conquer this remaining problem. (If you ask any other question, the size of one of the possible ranges of values you could end up with would be more than half the size of the original problem.) If you continue in this fashion, always cutting the problem size in half, you will reduce the problem size down to one fairly quickly, and then you will know what the number is. Of course it would be easier to cut the problem size exactly in half each time if we started with a number in the range from one to 128, but the question doesn't sound quite so plausible then. Thus to analyze the problem we will assume someone asks you to figure out a number between 0 and n, where n is a power of 2.

Exercise 4.3-1 Let T(n) be the number of questions in binary search on the range of numbers between 1 and n. Assuming that n is a power of 2, give a recurrence for T(n).

For Exercise 4.3-1 we get:

T(n) = { T(n/2) + 1    if n ≥ 2
       { 1             if n = 1     (4.20)
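The question-halving strategy is easy to simulate. A minimal Python sketch (the function name and counting convention are ours) that plays the guessing game on the range 1..n and counts the questions asked:

```python
def guess_number(secret, n):
    """Find secret in 1..n using questions of the form "is the number greater than k?".
    Returns the number found and the count of questions asked."""
    low, high = 1, n
    questions = 0
    while low < high:
        mid = (low + high) // 2     # always split the remaining range in half
        questions += 1
        if secret > mid:            # answer to "is the number greater than mid?" is yes
            low = mid + 1
        else:                       # answer is no
            high = mid
    return low, questions
```

Note that this loop stops asking once a single candidate remains, so for n a power of 2 it asks log2(n) questions; the recurrence above also charges one question for the base case n = 1, which is a choice of accounting convention rather than a different strategy.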

That is, the number of guesses to carry out binary search on n items is equal to 1 step (the guess) plus the time to solve binary search on the remaining n/2 items.

What we are really interested in is how much time it takes to use binary search in a computer program that looks for an item in an ordered list. While the number of questions gives us a feel for the amount of time, processing each question may take several steps in our computer program. The exact amount of time these steps take might depend on some factors we have little control over, such as where portions of the list are stored. Also, we may have to deal with lists whose length is not a power of two. Thus a more realistic description of the time needed would be

T(n) ≤ { T(⌈n/2⌉) + C1    if n ≥ 2
       { C2               if n = 1,     (4.21)

where C1 and C2 are constants. Note that ⌈x⌉ stands for the smallest integer larger than or equal to x, while ⌊x⌋ stands for the largest integer less than or equal to x. It turns out that the solutions to (4.20) and (4.21) are roughly the same, in a sense that will hopefully become clear later. (This is almost always


the case.) For now, let us not worry about floors and ceilings and the distinction between things that take 1 unit of time and things that take no more than some constant amount of time.

Let's turn to another example of a divide and conquer algorithm, mergesort. In this algorithm, you wish to sort a list of n items. Let us assume that the data is stored in an array A in positions 1 through n. Mergesort can be described as follows:

MergeSort(A,low,high)
    if (low == high)
        return
    else
        mid = (low + high)/2
        MergeSort(A,low,mid)
        MergeSort(A,mid+1,high)
        Merge the sorted lists from the previous two steps

More details on mergesort can be found in almost any algorithms textbook. Suffice it to say that the base case (low = high) takes one step, while the other case executes 1 step, makes two recursive calls on problems of size n/2, and then executes the Merge instruction, which can be done in n steps. Thus we obtain the following recurrence for the running time of mergesort:

T(n) = { 2T(n/2) + n    if n > 1
       { 1              if n = 1     (4.22)
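A runnable Python translation of the pseudocode, instrumented with the cost accounting the recurrence uses (one unit for the base case, n units to merge n items; this counting convention is ours), might look like:

```python
def merge_sort(a):
    """Sort list a; return (sorted list, work), where work follows Recurrence 4.22."""
    if len(a) <= 1:
        return list(a), 1                     # base case: T(1) = 1
    mid = len(a) // 2
    left, work_left = merge_sort(a[:mid])     # recursively sort each half
    right, work_right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, work_left + work_right + len(a)   # merging n items costs n units
```

For n a power of 2, the work this returns equals the solution of the recurrence computed later in this section.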

Recurrences such as this one can be understood via the idea of a recursion tree, which we introduce below. This concept allows us to analyze recurrences that arise in divide-and-conquer algorithms, and those that arise in other recursive situations, such as the Towers of Hanoi, as well. A recursion tree for a recurrence is a visual and conceptual representation of the process of iterating the recurrence.

Recursion Trees
We will introduce the idea of a recursion tree via several examples. It is helpful to have an "algorithmic" interpretation of a recurrence. For example, (ignoring for a moment the base case) we can interpret the recurrence

T(n) = 2T(n/2) + n     (4.23)

as "in order to solve a problem of size n we must solve 2 problems of size n/2 and do n units of additional work." Similarly, we can interpret T(n) = T(n/4) + n^2 as "in order to solve a problem of size n we must solve one problem of size n/4 and do n^2 units of additional work." We can also interpret the recurrence T(n) = 3T(n − 1) + n

Figure 4.2: The initial stage of drawing a recursion tree diagram. (Problem size n at the root, doing n units of work, split into two subproblems of size n/2.)
as "in order to solve a problem of size n, we must solve 3 subproblems of size n − 1 and do n additional units of work."

In Figure 4.2 we draw the beginning of the recursion tree diagram for (4.23). For now, assume n is a power of 2. A recursion tree diagram has three parts: a left, a middle, and a right. On the left, we keep track of the problem size, in the middle we draw the tree, and on the right we keep track of the work done. We draw the diagram in levels, each level of the diagram representing a level of recursion. Equivalently, each level of the diagram represents a level of iteration of the recurrence. So to begin the recursion tree for (4.23), we show, in level 0 on the left, that we have a problem of size n. Then by drawing a root vertex with two edges leaving it, we show in the middle that we are splitting our problem into 2 problems. We note on the right that we do n units of work in addition to whatever is done on the two new problems we created. In the next level, we draw two vertices in the middle representing the two problems into which we split our main problem and show on the left that each of these problems has size n/2.

You can see how the recurrence is reflected in levels 0 and 1 of the recursion tree. The top vertex of the tree represents T(n), and on the next level we have two problems of size n/2, representing the recursive term 2T(n/2) of our recurrence. Then after we solve these two problems we return to level 0 of the tree and do n additional units of work for the nonrecursive term of the recurrence. Now we continue to draw the tree in the same manner. Filling in the rest of level one and adding a few more levels, we get Figure 4.3.

Let us summarize what the diagram tells us so far. At level zero (the top level), n units of work are done. We see that at each succeeding level, we halve the problem size and double the number of subproblems.
We also see that at level 1, each of the two subproblems requires n/2 units of additional work, and so a total of n units of additional work are done. Similarly, level 2 has 4 subproblems of size n/4, and so 4(n/4) = n units of additional work are done. Notice that to compute the total work done on a level we multiply the number of subproblems by the amount of additional work per subproblem. To see how iteration of the recurrence is reflected in the diagram, we iterate the recurrence once, getting

T(n) = 2T(n/2) + n
     = 2(2T(n/4) + n/2) + n
     = 4T(n/4) + n + n = 4T(n/4) + 2n

Figure 4.3: Four levels of a recursion tree diagram.

Problem Size    Work
n               n
n/2             n/2 + n/2 = n
n/4             n/4 + n/4 + n/4 + n/4 = n
n/8             8(n/8) = n

If we examine levels 0, 1, and 2 of the diagram, we see that at level 2 we have four vertices which represent four problems, each of size n/4 This corresponds to the recursive term that we obtained after iterating the recurrence. However after we solve these problems we return to level 1 where we twice do n/2 additional units of work and to level 0 where we do another n additional units of work. In this way each time we add a level to the tree we are showing the result of one more iteration of the recurrence. We now have enough information to be able to describe the recursion tree diagram in general. To do this, we need to determine, for each level, three things: • the number of subproblems, • the size of each subproblem, • the total work done at that level. We also need to figure out how many levels there are in the recursion tree. We see that for this problem, at level i, we have 2i subproblems of size n/2i . Further, since a problem of size 2i requires 2i units of additional work, there are (2i )[n/(2i )] = n units of work done per level. To figure out how many levels there are in the tree, we just notice that at each level the problem size is cut in half, and the tree stops when the problem size is 1. Therefore there are log2 n + 1 levels of the tree, since we start with the top level and cut the problem size in half log2 n times.2 We can thus visualize the whole tree in Figure 4.4. The computation of the work done at the bottom level is different from the other levels. In the other levels, the work is described by the recursive equation of the recurrence; in this case the amount of work is the n in T (n) = 2T (n/2) + n. At the bottom level, the work comes from the base case. Thus we must compute the number of problems of size 1 (assuming that one is the base case), and then multiply this value by T (1) = 1. In our recursion tree in Figure 4.4, the number of nodes at the bottom level is 2log2 n = n. Since T (1) = 1, we do n units of work at
²To simplify notation, for the remainder of the book, if we omit the base of a logarithm, it should be assumed to be base 2.

4.3. GROWTH RATES OF SOLUTIONS TO RECURRENCES

Figure 4.4: A finished recursion tree diagram.

Problem Size    Work
n               n
n/2             n/2 + n/2 = n
n/4             n/4 + n/4 + n/4 + n/4 = n
n/8             8(n/8) = n
...             ...
1               n(1) = n
(log n + 1 levels)

the bottom level of the tree. Had we chosen to say that T(1) was some constant other than 1, this would not have been the case. We emphasize that the correct value always comes from the base case; it is just a coincidence that it sometimes also comes from the recursive equation of the recurrence. The bottom level of the tree represents the final stage of iterating the recurrence. We have seen that at this level we have n problems each requiring work T(1) = 1, giving us total work n at that level. After we solve the problems represented by the bottom level, we have to do all the additional work from all the earlier levels. For this reason, we sum the work done at all the levels of the tree to get the total work done. Iteration of the recurrence shows us that the solution to the recurrence is the sum of all the work done at all the levels of the recursion tree. The important thing is that we now know how much work is done at each level. Once we know this, we can sum the total amount of work done over all the levels, giving us the solution to our recurrence. In this case, there are log2 n + 1 levels, and at each level the amount of work we do is n units. Thus we conclude that the total amount of work done to solve the problem described by recurrence (4.23) is n(log2 n + 1). The total work done throughout the tree is the solution to our recurrence, because the tree simply models the process of iterating the recurrence. Thus the solution to recurrence (4.22) is T(n) = n(log n + 1). Since one unit of time will vary from computer to computer, and since some kinds of work might take longer than other kinds, we are usually interested in the big-Θ behavior of T(n). For example, we can consider a recurrence that is identical to (4.22), except that T(1) = a for some constant a. In this case, T(n) = an + n log n, because an units of work are done at the bottom level and n additional units of work are done at each of the remaining log n levels.
It is still true that T(n) = Θ(n log n), because the different base case did not change the solution to the recurrence by more than a constant factor.³ Although recursion trees can give us the exact solutions (such as T(n) = an + n log n above) to recurrences, our interest in the big-Θ behavior of solutions will usually lead us to use a recursion tree to determine the big-Θ or even, in complicated cases, just the big-O behavior of the actual solution to the recurrence. In Problem 10 we explore whether
³More precisely, n log n < an + n log n < (a + 1)n log n for any a > 0.
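The level-by-level accounting for T(n) = 2T(n/2) + n is easy to check by machine. Here is a short sketch (the helper name `tree_total` is ours) that adds up the work at every level of the recursion tree, including the base-case work at the bottom level, and compares the total with the closed form n(log2 n + 1):

```python
# Sum the work in the recursion tree for T(n) = 2T(n/2) + n, T(1) = 1,
# level by level: level i has 2^i problems of size n/2^i, for n units of work.
def tree_total(n):
    total = 0
    size = n
    problems = 1
    while size > 1:
        total += problems * size      # additional work at this level
        problems *= 2
        size //= 2
    total += problems * 1             # bottom level: n problems, T(1) = 1 each
    return total

# For n a power of 2 this matches the closed form n(log2 n + 1).
for k in range(7):
    n = 2 ** k
    assert tree_total(n) == n * (k + 1)
```

Note that the bottom level contributes n units only because T(1) = 1; with a base case T(1) = a, the last addition would contribute an units instead.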


CHAPTER 4. INDUCTION, RECURSION, AND RECURRENCES

the value of T(1) actually influences the big-Θ behavior of the solution to a recurrence. Let's look at one more recurrence:

    T(n) = { T(n/2) + n   if n > 1
           { 1            if n = 1          (4.24)

Again, assume n is a power of two. We can interpret this as follows: to solve a problem of size n, we must solve one problem of size n/2 and do n units of additional work. We draw the tree for this problem in Figure 4.5 and see that the problem sizes are the same as in the previous tree. The remainder, however, is different. The number of subproblems does not double; rather,

Figure 4.5: A recursion tree diagram for Recurrence 4.24.

Problem Size    Work
n               n
n/2             n/2
n/4             n/4
n/8             n/8
...             ...
1               1
(log n + 1 levels)

it remains at one on each level. Consequently the amount of work halves at each level. Note that there are still log n + 1 levels, as the number of levels is determined by how the problem size is changing, not by how many subproblems there are. So on level i, we have 1 problem of size n/2^i, for total work of n/2^i units. We now wish to compute how much work is done in solving a problem that gives this recurrence. Note that the additional work done is different on each level, so we have that the total amount of work is

    n + n/2 + n/4 + · · · + 2 + 1 = n(1 + 1/2 + 1/4 + · · · + (1/2)^(log2 n)),
which is n times a geometric series. By Theorem 4.4, the value of a geometric series in which the largest term is one is Θ(1). This implies that the work done is described by T(n) = Θ(n). We emphasize that there is exactly one solution to recurrence (4.24); it is the one we get by using the recurrence to compute T(2) from T(1), then to compute T(4) from T(2), and so on. What we have done here is show that T(n) = Θ(n). In fact, for the kinds of recurrences we have been examining, once we know T(1) we can compute T(n) for any relevant n by repeatedly using the recurrence, so there is no question that solutions do exist and can, in principle, be computed for any value of n. In most applications, we are not interested in the exact form of the solution, but in a big-O upper bound or big-Θ bound on the solution.
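The geometric decrease is easy to confirm by iterating the recurrence directly; a minimal sketch (function name ours), using the fact that for n a power of 2 the sum n + n/2 + · · · + 2 + 1 telescopes to 2n − 1:

```python
# Iterate T(n) = T(n/2) + n with T(1) = 1, for n a power of 2.
def T(n):
    return 1 if n == 1 else T(n // 2) + n

# The total work n + n/2 + ... + 2 + 1 is exactly 2n - 1, which is Theta(n).
for k in range(11):
    n = 2 ** k
    assert T(n) == 2 * n - 1
```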

Exercise 4.3-2 Find a big-Θ bound for the solution to the recurrence

    T(n) = { 3T(n/3) + n   if n ≥ 3
           { 1             if n < 3

using a recursion tree. Assume that n is a power of 3.

Exercise 4.3-3 Solve the recurrence

    T(n) = { 4T(n/2) + n   if n ≥ 2
           { 1             if n = 1

using a recursion tree. Assume that n is a power of 2. Convert your solution to a big-Θ statement about the behavior of the solution.

Exercise 4.3-4 Can you give a general big-Θ bound for solutions to recurrences of the form T(n) = aT(n/2) + n when n is a power of 2? You may have different answers for different values of a.

The recurrence in Exercise 4.3-2 is similar to the mergesort recurrence. One difference is that at each step we divide into 3 problems of size n/3. Thus we get the picture in Figure 4.6.

Figure 4.6: The recursion tree diagram for the recurrence in Exercise 4.3-2.

Problem Size    Work
n               n
n/3             n/3 + n/3 + n/3 = n
n/9             9(n/9) = n
...             ...
1               n(1) = n
(log3 n + 1 levels)

Another difference is that the number of levels, instead of being log2 n + 1, is now log3 n + 1, so

the total work is still Θ(n log n) units. (Note that logb n = Θ(log2 n) for any b > 1.) Now let's look at the recursion tree for Exercise 4.3-3. Here we have 4 children of size n/2, and we get Figure 4.7. Let's look carefully at this tree. Just as in the mergesort tree there are log2 n + 1 levels. However, in this tree, each node has 4 children. Thus level 0 has 1 node, level 1 has 4 nodes, level 2 has 16 nodes, and in general level i has 4^i nodes. On level i each node corresponds to a problem of size n/2^i and hence requires n/2^i units of additional work. Thus the total work on level i is 4^i(n/2^i) = 2^i·n units. This formula applies on level log2 n (the bottom


Figure 4.7: The recursion tree for Exercise 4.3-3.

Problem Size    Work
n               n
n/2             n/2 + n/2 + n/2 + n/2 = 2n
n/4             16(n/4) = 4n
...             ...
1               n^2(1) = n^2
(log n + 1 levels)

level) as well, since there are n^2 = 2^(log2 n)·n nodes, each requiring T(1) = 1 unit of work. Summing over the levels, we get

    Σ_{i=0}^{log2 n} 2^i·n = n Σ_{i=0}^{log2 n} 2^i.

There are many ways to simplify that expression; for example, from our formula for the sum of a geometric series we get

    T(n) = n Σ_{i=0}^{log2 n} 2^i = n · (1 − 2^((log2 n)+1))/(1 − 2) = n · (1 − 2n)/(−1) = 2n^2 − n = Θ(n^2).

More simply, by Theorem 4.4 we have that T(n) = n·Θ(2^(log n)) = Θ(n^2).
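The exact solution 2n² − n can be double-checked by iterating the recurrence of Exercise 4.3-3 directly; a brief sketch (names ours):

```python
# Iterate T(n) = 4T(n/2) + n with T(1) = 1, for n a power of 2.
def T(n):
    return 1 if n == 1 else 4 * T(n // 2) + n

# Check the closed form 2n^2 - n obtained from the geometric series above.
for k in range(9):
    n = 2 ** k
    assert T(n) == 2 * n * n - n
```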

Three Different Behaviors
Now let’s compare the recursion tree diagrams for the recurrences T (n) = 2T (n/2) + n, T (n) = T (n/2) + n and T (n) = 4T (n/2) + n. Note that all three trees have depth 1 + log2 n, as this is determined by the size of the subproblems relative to the parent problem, and in each case, the size of each subproblem is 1/2 the size of of the parent problem. The trees differ, however, in the amount of work done per level. In the first case, the amount of work on each level is the same. In the second case, the amount of work done on a level decreases as you go down the tree, with the most work being at the top level. In fact, it decreases geometrically, so by Theorem 4.4 the


total work done is bounded above and below by a constant times the work done at the root node. In the third case, the number of nodes per level is growing at a faster rate than the problem size is decreasing, and the level with the largest amount of work is the bottom one. Again we have a geometric series, and so by Theorem 4.4 the total work is bounded above and below by a constant times the amount of work done at the last level. If you understand these three cases and the differences among them, you now understand the great majority of the recursion trees that arise in algorithms. So to answer Exercise 4.3-4, which asks for a general big-Θ bound for the solutions to recurrences of the form T(n) = aT(n/2) + n, we can conclude the following:

Lemma 4.7 Suppose that we have a recurrence of the form T(n) = aT(n/2) + n, where a is a positive integer and T(1) is nonnegative. Then we have the following big-Θ bounds on the solution:
1. If a < 2 then T(n) = Θ(n).
2. If a = 2 then T(n) = Θ(n log n).
3. If a > 2 then T(n) = Θ(n^(log2 a)).

Proof: Cases 1 and 2 follow immediately from our observations above. We can verify case 3 as follows. At each level i we have a^i nodes, each corresponding to a problem of size n/2^i. Thus at level i the total amount of work is a^i(n/2^i) = n(a/2)^i units. Summing over the log2 n levels, we get
    a^(log2 n)·T(1) + n Σ_{i=0}^{(log2 n)−1} (a/2)^i.

The sum given by the summation sign is a geometric series, so, since a/2 ≠ 1, the sum will be big-Θ of the largest term (see Theorem 4.4). Since a > 2, the largest term in this case is clearly the last one, and applying rules of exponents and logarithms, we get that n times the largest term is

    n(a/2)^((log2 n)−1) = (2/a)·n·(a/2)^(log2 n)
                        = (2/a)·n·(a^(log2 n)/2^(log2 n))
                        = (2/a)·n·(a^(log2 n)/n)
                        = (2/a)·a^(log2 n)
                        = (2/a)·2^((log2 a)(log2 n))
                        = (2/a)·n^(log2 a).          (4.25)

Thus T(1)·a^(log2 n) = T(1)·n^(log2 a). Since 2/a and T(1) are both nonnegative, the total work done is Θ(n^(log2 a)).

In fact Lemma 4.7 holds for all positive real numbers a; we can iterate the recurrence to see this. Since a recursion tree diagram is a way to visualize iterating the recurrence when a is an integer, iteration is the natural thing to try when a is not an integer. Notice that in the last two equalities of the computation we made in Equation 4.25, we showed that a^(log n) = n^(log a). This is a useful and, perhaps, surprising fact, so we state it (in slightly more generality) as a corollary to the proof.

Corollary 4.8 For any base b, we have a^(logb n) = n^(logb a).
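Corollary 4.8 is easy to spot-check numerically with floating-point arithmetic; a quick sketch:

```python
import math

# Check that a^(log_b n) == n^(log_b a) for several choices of a, b, and n.
for a in (2, 3, 4.5, 7):
    for b in (2, 3, 10):
        for n in (4, 100, 1024):
            lhs = a ** math.log(n, b)
            rhs = n ** math.log(a, b)
            assert math.isclose(lhs, rhs, rel_tol=1e-9)
```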


Important Concepts, Formulas, and Theorems
1. Divide and Conquer Algorithm. A divide and conquer algorithm is one that solves a problem by dividing it into problems that are smaller but otherwise of the same type as the original one, recursively solves these problems, and then assembles the solution of these so-called subproblems into a solution of the original one. Not all problems can be solved by such a strategy, but a great many problems of interest in computer science can.

2. Mergesort. In mergesort we sort a list of items that have some underlying order by dividing the list in half, sorting the first half (by recursively using mergesort), sorting the second half (by recursively using mergesort), and then merging the two sorted lists. For a list of length one, mergesort returns the same list.

3. Recursion Tree. A recursion tree diagram for a recurrence of the form T(n) = aT(n/b) + g(n) has three parts: a left, a middle, and a right. On the left, we keep track of the problem size, in the middle we draw the tree, and on the right we keep track of the work done. We draw the diagram in levels, each level of the diagram representing a level of recursion. The tree has a vertex representing the initial problem and one representing each subproblem we have to solve. Each non-leaf vertex has a children. The vertices are divided into levels corresponding to (sub-)problems of the same size; to the left of a level of vertices we write the size of the problems the vertices correspond to; to the right of the vertices on a given level we write the total amount of work done at that level by an algorithm whose work is described by the recurrence, not including the work done by any recursive calls from that level.

4. The Base Level of a Recursion Tree. The amount of work done on the lowest level in a recursion tree is the number of nodes times the value given by the initial condition; it is not determined by attempting to make a computation of "additional work" done at the lowest level.

5. Bases for Logarithms. We use log n as an alternate notation for log2 n. A fundamental fact about logarithms is that logb n = Θ(log2 n) for any real number b > 1.

6. An Important Fact About Logarithms. For any b > 0, a^(logb n) = n^(logb a).

7. Three Behaviors of Solutions. The solution to a recurrence of the form T(n) = aT(n/2) + n behaves in one of the following ways:
(a) if a < 2 then T(n) = Θ(n);
(b) if a = 2 then T(n) = Θ(n log n);
(c) if a > 2 then T(n) = Θ(n^(log2 a)).

Problems
1. Draw recursion trees and find big-Θ bounds on the solutions to the following recurrences. For all of these, assume that T(1) = 1 and n is a power of the appropriate integer.
(a) T(n) = 8T(n/2) + n
(b) T(n) = 8T(n/2) + n^3

(c) T(n) = 3T(n/2) + n
(d) T(n) = T(n/4) + 1
(e) T(n) = 3T(n/3) + n^2


2. Draw recursion trees and find exact solutions to the following recurrences. For all of these, assume that T(1) = 1 and n is a power of the appropriate integer.
(a) T(n) = 8T(n/2) + n
(b) T(n) = 8T(n/2) + n^3
(c) T(n) = 3T(n/2) + n
(d) T(n) = T(n/4) + 1
(e) T(n) = 3T(n/3) + n^2

3. Find the exact solution to Recurrence 4.24.

4. Show that logb n = Θ(log2 n), for any constant b > 1.

5. Prove Corollary 4.8 by showing that a^(logb n) = n^(logb a) for any b > 0.

6. Recursion trees will still work, even if the problems do not break up geometrically, or even if the work per level is not n^c units. Draw recursion trees and find the best big-O bounds you can for solutions to the following recurrences. For all of these, assume that T(1) = 1.
(a) T(n) = T(n − 1) + n
(b) T(n) = 2T(n − 1) + n
(c) T(n) = T(√n) + 1 (You may assume n has the form n = 2^(2^i).)
(d) T(n) = 2T(n/2) + n log n (You may assume n is a power of 2.)

7. In each case in the previous problem, is the big-O bound you found a big-Θ bound?

8. If S(n) = aS(n − 1) + g(n) and g(n) < c^n with 0 ≤ c < a, how fast does S(n) grow (in big-Θ terms)?

9. If S(n) = aS(n − 1) + g(n) and g(n) = c^n with 0 < a ≤ c, how fast does S(n) grow in big-Θ terms?

10. Given a recurrence of the form T(n) = aT(n/b) + g(n) with T(1) = c > 0 and g(n) > 0 for all n, and a recurrence of the form S(n) = aS(n/b) + g(n) with S(1) = 0 (and the same a, b, and g(n)), is there any difference in the big-Θ behavior of the solutions to the two recurrences? What does this say about the influence of the initial condition on the big-Θ behavior of such recurrences?


4.4 The Master Theorem

Master Theorem

In the last section, we saw three different kinds of behavior for recurrences of the form

    T(n) = { aT(n/2) + n   if n > 1
           { d             if n = 1.

These behaviors depended upon whether a < 2, a = 2, or a > 2. Remember that a was the number of subproblems into which our problem was divided. Dividing by 2 cut our problem size in half each time, and the n term said that after we completed our recursive work, we had n additional units of work to do for a problem of size n. There is no reason that the amount of additional work required by each subproblem needs to be the size of the subproblem. In many applications it will be something else, and so in Theorem 4.9 we consider a more general case. Similarly, the sizes of the subproblems don't have to be 1/2 the size of the parent problem. We then get the following theorem, our first version of a theorem called the Master Theorem. (Later on we will develop some stronger forms of this theorem.)

Theorem 4.9 Let a be an integer greater than or equal to 1 and let b be a real number greater than 1. Let c be a positive real number and d a nonnegative real number. Given a recurrence of the form

    T(n) = { aT(n/b) + n^c   if n > 1
           { d               if n = 1

in which n is restricted to be a power of b,
1. if logb a < c, then T(n) = Θ(n^c),
2. if logb a = c, then T(n) = Θ(n^c log n),
3. if logb a > c, then T(n) = Θ(n^(logb a)).

Proof: In this proof, we will set d = 1, so that the work done at the bottom level of the tree is the same as if we divided the problem one more time and used the recurrence to compute the additional work. As in Footnote 3 in the previous section, it is straightforward to show that we get the same big-Θ bound if d is positive. It is only a little more work to show that we get the same big-Θ bound if d is zero. Let's think about the recursion tree for this recurrence. There will be 1 + logb n levels. At each level, the number of subproblems will be multiplied by a, and so the number of subproblems at level i will be a^i. Each subproblem at level i is a problem of size n/b^i. A subproblem of size n/b^i requires (n/b^i)^c additional work, and since there are a^i problems on level i, the total number of units of work on level i is

    a^i(n/b^i)^c = n^c(a^i/b^ci) = n^c(a/b^c)^i.          (4.26)

Recall from Lemma 4.7 that the different cases for c = 1 were when the work per level was decreasing, constant, or increasing. The same analysis applies here. From our formula for work


on level i, we see that the work per level is decreasing, constant, or increasing exactly when (a/b^c)^i is decreasing, constant, or increasing, respectively. These three cases depend on whether a/b^c is less than one, equal to one, or greater than one, respectively. Now observe that

    a/b^c = 1  ⇔  a = b^c  ⇔  logb a = c logb b  ⇔  logb a = c.

This shows us where the three cases in the statement of the theorem come from. Now we need to show the bound on T(n) in the different cases. In the following paragraphs, we will use the facts (whose proof is a straightforward application of the definition of logarithms and rules of exponents) that for any x, y and z, each greater than 1, x^(logy z) = z^(logy x) (see Corollary 4.8, Problem 5 at the end of the previous section, and Problem 3 at the end of this section) and that logx y = Θ(log2 y) (see Problem 4 at the end of the previous section). In general, the total work done is computed by summing the expression for the work per level given in Equation 4.26 over all the levels, giving

    Σ_{i=0}^{logb n} n^c(a/b^c)^i = n^c Σ_{i=0}^{logb n} (a/b^c)^i.

In Case 1 (part 1 in the statement of the theorem), this is n^c times a geometric series with a ratio of less than 1. Theorem 4.4 tells us that

    n^c Σ_{i=0}^{logb n} (a/b^c)^i = Θ(n^c).

Exercise 4.4-1 Prove Case 2 (part 2 of the statement) of the Master Theorem.

Exercise 4.4-2 Prove Case 3 (part 3 of the statement) of the Master Theorem.

In Case 2 we have that a/b^c = 1, and so

    n^c Σ_{i=0}^{logb n} (a/b^c)^i = n^c Σ_{i=0}^{logb n} 1^i = n^c(1 + logb n) = Θ(n^c log n).

In Case 3, we have that a/b^c > 1. So in the series

    n^c Σ_{i=0}^{logb n} (a/b^c)^i,

the largest term is the last one, so by Theorem 4.4, the sum is Θ(n^c(a/b^c)^(logb n)). But

    n^c(a/b^c)^(logb n) = n^c · a^(logb n)/(b^c)^(logb n)
                        = n^c · n^(logb a)/b^(c·logb n)
                        = n^c · n^(logb a)/n^c
                        = n^(logb a).

Thus the solution is Θ(n^(logb a)). We note that we may assume that a is a real number with a > 1 and give a somewhat similar proof (replacing the recursion tree with an iteration of the recurrence), but we do not give the details here.
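The three cases of Theorem 4.9 can be packaged as a tiny classifier and checked against brute-force iteration of the recurrence. In the sketch below (all names ours) we report only the exponent e such that T(n) = Θ(n^e), ignoring the extra log n factor in the boundary case, and we estimate that exponent empirically from the ratio of T at two consecutive powers of b:

```python
import math

def master_exponent(a, b, c):
    # Theorem 4.9: T(n) = Theta(n^c) if log_b a < c, Theta(n^c log n) if
    # log_b a == c, and Theta(n^(log_b a)) if log_b a > c.  In every case
    # the polynomial exponent is max(c, log_b a).
    return max(c, math.log(a, b))

def T(a, b, c, n):
    # Iterate T(n) = a T(n/b) + n^c with T(1) = 1, for n a power of b.
    return 1 if n == 1 else a * T(a, b, c, n // b) + n ** c

def estimated_exponent(a, b, c, k=16):
    n1, n2 = b ** k, b ** (k + 1)
    return math.log(T(a, b, c, n2) / T(a, b, c, n1), b)

for a, b, c in [(1, 2, 1), (2, 2, 1), (4, 2, 1), (3, 3, 2), (8, 2, 2)]:
    assert abs(estimated_exponent(a, b, c) - master_exponent(a, b, c)) < 0.15
```

The tolerance 0.15 is a generous choice of ours; in the case logb a = c the extra log n factor makes the empirical exponent slightly overshoot at any finite n.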

Solving More General Kinds of Recurrences
Exercise 4.4-3 What can you say about the big-Θ behavior of the solution to

    T(n) = { 2T(n/3) + 4n^(3/2)   if n > 1
           { d                    if n = 1,

where n can be any nonnegative power of three?

Exercise 4.4-4 If f(n) = n√(n + 1), what can you say about the big-Θ behavior of solutions to

    S(n) = { 2S(n/3) + f(n)   if n > 1
           { d                if n = 1,

where n can be any nonnegative power of three?

For Exercise 4.4-3, the work done at each level of the tree except for the bottom level will be four times the work done by the recurrence

    T′(n) = { 2T′(n/3) + n^(3/2)   if n > 1
            { d                    if n = 1.

Thus the work done by T will be no more than four times the work done by T′, but will be larger than the work done by T′. Therefore T(n) = Θ(T′(n)). Thus by the master theorem, since log3 2 < 1 < 3/2, we have that T(n) = Θ(n^(3/2)).

For Exercise 4.4-4, since n√(n + 1) > n√n = n^(3/2), we have that S(n) is at least as big as the solution to the recurrence

    T′(n) = { 2T′(n/3) + n^(3/2)   if n > 1
            { d                    if n = 1,

where n can be any nonnegative power of three. But the solution to the recurrence for S will be no more than the solution to the recurrence in Exercise 4.4-3 for T, because n√(n + 1) ≤ 4n^(3/2) for n ≥ 0. Since T(n) = Θ(T′(n)), then S(n) = Θ(T′(n)) as well.

Extending the Master Theorem


As Exercise 4.4-3 and Exercise 4.4-4 suggest, there is a whole range of interesting recurrences that do not fit the master theorem but are closely related to recurrences that do. These recurrences have the same kind of behavior predicted by our original version of the Master Theorem, but the original version of the Master Theorem does not apply to them, just as it does not apply to the recurrences of Exercise 4.4-3 and Exercise 4.4-4. We now state a second version of the Master Theorem that covers these cases. A still stronger version of the theorem may be found in Introduction to Algorithms by Cormen et al., but the version here captures much of the interesting behavior of recurrences that arise from the analysis of algorithms.

Theorem 4.10 Let a and b be positive real numbers with a ≥ 1 and b > 1. Let T(n) be defined for powers n of b by

    T(n) = { aT(n/b) + f(n)   if n > 1
           { d                if n = 1.

Then
1. if f(n) = Θ(n^c) where logb a < c, then T(n) = Θ(n^c) = Θ(f(n));
2. if f(n) = Θ(n^c) where logb a = c, then T(n) = Θ(n^(logb a) logb n);
3. if f(n) = Θ(n^c) where logb a > c, then T(n) = Θ(n^(logb a)).

Proof: We construct a recursion tree or iterate the recurrence. Since we have assumed that f(n) = Θ(n^c), there are constants c1 and c2, independent of the level, so that the work at each level is between c1·n^c(a/b^c)^i and c2·n^c(a/b^c)^i, so from this point on the proof is largely a translation of the original proof.

Exercise 4.4-5 What does the Master Theorem tell us about the solutions to the recurrence

    T(n) = { 3T(n/2) + n√(n + 1)   if n > 1
           { 1                     if n = 1?

As we saw in our solution to Exercise 4.4-4, x√(x + 1) = Θ(x^(3/2)). Since 2^(3/2) = √(2^3) = √8 < 3, we have that log2 3 > 3/2. Then by conclusion 3 of version 2 of the Master Theorem, T(n) = Θ(n^(log2 3)).

The remainder of this section is devoted to carefully analyzing divide and conquer recurrences in which n is not a power of b and T(n/b) is replaced by T(⌈n/b⌉). While the details are somewhat technical, the end result is that the big-Θ behavior of such recurrences is the same as the corresponding recurrences for functions defined on powers of b. In particular, the following theorem is a consequence of what we prove.
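This conclusion can be sanity-checked numerically. The sketch below (the names and the particular bounds 1 and 25 are our own choices) iterates the recurrence of Exercise 4.4-5 on powers of 2 and watches the ratio T(n)/n^(log2 3), which a Θ(n^(log2 3)) bound says must stay between positive constants:

```python
import math

# T(n) = 3T(n/2) + n*sqrt(n + 1) with T(1) = 1, for n a power of 2.
def T(n):
    return 1.0 if n == 1 else 3 * T(n // 2) + n * math.sqrt(n + 1)

e = math.log(3, 2)  # the exponent predicted by the Master Theorem
ratios = [T(2 ** k) / (2 ** k) ** e for k in range(16)]

# Theta(n^(log2 3)) means the ratio is bounded above and below.
assert all(1.0 <= r <= 25.0 for r in ratios)
```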


Theorem 4.11 Let a and b be positive real numbers with a ≥ 1 and b ≥ 2. Let T(n) satisfy the recurrence

    T(n) = { aT(⌈n/b⌉) + f(n)   if n > 1
           { d                  if n = 1.

Then
1. if f(n) = Θ(n^c) where logb a < c, then T(n) = Θ(n^c) = Θ(f(n));
2. if f(n) = Θ(n^c) where logb a = c, then T(n) = Θ(n^(logb a) logb n);
3. if f(n) = Θ(n^c) where logb a > c, then T(n) = Θ(n^(logb a)).

(The condition that b ≥ 2 can be changed to b > 1 with an appropriate change in the base case of the recurrence, but the base case will then depend on b.) The reader should be able to skip over the remainder of this section without loss of continuity.

More realistic recurrences (Optional)
So far, we have considered divide and conquer recurrences for functions T(n) defined on integers n which are powers of b. In order to consider a more realistic recurrence in the master theorem, namely

    T(n) = { aT(⌈n/b⌉) + n^c   if n > 1
           { d                 if n = 1,

or

    T(n) = { aT(⌊n/b⌋) + n^c   if n > 1
           { d                 if n = 1,

or even

    T(n) = { a′T(⌈n/b⌉) + (a − a′)T(⌊n/b⌋) + n^c   if n > 1
           { d                                     if n = 1,

it turns out to be easiest to first extend the domain for our recurrences to a much bigger set than the nonnegative integers, either the real or rational numbers, and then to work backwards. For example, we can write a recurrence of the form

    t(x) = { f(x)t(x/b) + g(x)   if x ≥ b
           { k(x)                if 1 ≤ x < b

for two (known) functions f and g defined on the real [or rational] numbers greater than 1 and one (known) function k defined on the real [or rational] numbers x with 1 ≤ x < b. Then so long as b > 1 it is possible to prove that there is a unique function t defined on the real [or rational] numbers greater than or equal to 1 that satisfies the recurrence. We use the lower case t in this situation as a signal that we are considering a recurrence whose domain is the real or rational numbers greater than or equal to 1.

Exercise 4.4-6 How would we compute t(x) in the recurrence

    t(x) = { 3t(x/2) + x^2   if x ≥ 2
           { 5x              if 1 ≤ x < 2

if x were 7? How would we show that there is one and only one function t that satisfies the recurrence?

Exercise 4.4-7 Is it the case that there is one and only one solution to the recurrence

    T(n) = { f(n)T(⌈n/b⌉) + g(n)   if n > 1
           { k                     if n = 1

when f and g are (known) functions defined on the positive integers, and k and b are (known) constants with b an integer larger than or equal to 2? To compute t(7) in Exercise 4.4-6 we need to know t(7/2). To compute t(7/2), we need to know t(7/4). Since 1 < 7/4 < 2, we know that t(7/4) = 35/4. Then we may write

    t(7/2) = 3·(35/4) + 49/4 = 154/4 = 77/2.

Next we may write

    t(7) = 3t(7/2) + 7^2 = 3·(77/2) + 49 = 329/2.

Clearly we can compute t(x) in this way for any x, though we are unlikely to enjoy the arithmetic. On the other hand suppose all we need to do is to show that there is a unique value of t(x) determined by the recurrence, for all real numbers x ≥ 1. If 1 ≤ x < 2, then t(x) = 5x, which uniquely determines t(x). Given a number x ≥ 2, there is a smallest integer i such that x/2^i < 2, and for this i, we have 1 ≤ x/2^i. We can now prove by induction on i that t(x) is uniquely determined by the recurrence relation. In Exercise 4.4-7 there is one and only one solution. Why? Clearly T(1) is determined by the recurrence. Now assume inductively that n > 1 and that T(m) is uniquely determined for positive integers m < n. We know that n ≥ 2, so that ⌈n/2⌉ ≤ n − 1. Since b ≥ 2, we know that ⌈n/2⌉ ≥ ⌈n/b⌉, so that ⌈n/b⌉ ≤ n − 1. Therefore ⌈n/b⌉ < n, so that we know by the inductive hypothesis that T(⌈n/b⌉) is uniquely determined by the recurrence. Then by the recurrence,

    T(n) = f(n)T(⌈n/b⌉) + g(n),

which uniquely determines T (n). Thus by the principle of mathematical induction, T (n) is determined for all positive integers n. For every kind of recurrence we have dealt with, there is similarly one and only one solution. Because we know solutions exist, we don’t find formulas for solutions to demonstrate that solutions exist, but rather to help us understand properties of the solutions. In this section and the last section, for example, we were interested in how fast the solutions grew as n grew large. This is why we were finding Big-O and Big-Θ bounds for our solutions.
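The arithmetic in the computation of t(7) above can be verified with exact rational arithmetic; a short sketch:

```python
from fractions import Fraction

# t(x) = 3 t(x/2) + x^2 if x >= 2, and t(x) = 5x if 1 <= x < 2.
def t(x):
    x = Fraction(x)
    if x < 2:
        return 5 * x
    return 3 * t(x / 2) + x * x

assert t(Fraction(7, 4)) == Fraction(35, 4)
assert t(Fraction(7, 2)) == Fraction(77, 2)
assert t(7) == Fraction(329, 2)
```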

Recurrences for general n (Optional)
We will now show how recurrences for arbitrary real numbers relate to recurrences involving floors and ceilings. We begin by showing that the conclusions of the Master Theorem apply to recurrences for arbitrary real numbers when we replace the real numbers by “nearby” powers of b.


Theorem 4.12 Let a and b be positive real numbers with b > 1 and c and d be real numbers. Let t(x) be the solution to the recurrence

    t(x) = { at(x/b) + x^c   if x ≥ b
           { d               if 1 ≤ x < b.

Let T(n) be the solution to the recurrence

    T(n) = { aT(n/b) + n^c   if n > 1
           { d               if n = 1,

defined for n a nonnegative integer power of b. Let m(x) be the largest integer power of b less than or equal to x. Then t(x) = Θ(T(m(x))).

Proof: If we iterate (or, in the case that a is an integer, draw recursion trees for) the two recurrences, we can see that the results of the iterations are nearly identical. This means the solutions to the recurrences have the same big-Θ behavior. See the Appendix to this section for details.

Removing Floors and Ceilings (Optional)

We have also pointed out that a more realistic Master Theorem would apply to recurrences of the form T(n) = aT(⌈n/b⌉) + n^c, or T(n) = aT(⌊n/b⌋) + n^c, or even T(n) = a′T(⌈n/b⌉) + (a − a′)T(⌊n/b⌋) + n^c. For example, if we are applying mergesort to an array of size 101, we really break it into pieces, of size 50 and 51. Thus the recurrence we want is not really T(n) = 2T(n/2) + n, but rather T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n. We can show, however, that one can essentially "ignore" the floors and ceilings in typical divide-and-conquer recurrences. If we remove the floors and ceilings from a recurrence relation, we convert it from a recurrence relation defined on the integers to one defined on the rational numbers. However we have already seen that such recurrences are not difficult to handle. The theorem below says that in recurrences covered by the master theorem, if we remove ceilings, our recurrences still have the same big-Θ bounds on their solutions. A similar proof shows that we may remove floors and still get the same big-Θ bounds. Without too much more work we can see that we can remove floors and ceilings simultaneously without changing the big-Θ bounds on our solutions. Since we may remove either floors or ceilings, that means that we may deal with recurrences of the form T(n) = a′T(⌈n/b⌉) + (a − a′)T(⌊n/b⌋) + n^c. The condition that b ≥ 2 can be replaced by b > 1, but the base case for the recurrence will depend on b.
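The claim can be tested directly on the mergesort-style recurrence. The sketch below (the memoized helper is ours) computes T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n with T(1) = 1, checks that on powers of 2 it agrees with the ceiling-free solution n log2 n + n, and checks that for all other n it stays within constant-factor bounds of n log2 n:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(ceil(n/2)) + T(floor(n/2)) + n, with T(1) = 1.
    if n == 1:
        return 1
    return T((n + 1) // 2) + T(n // 2) + n

# On powers of 2 the floors and ceilings change nothing: T(n) = n log2 n + n.
for k in range(11):
    n = 2 ** k
    assert T(n) == n * k + n

# For other n (e.g. 101, which splits into pieces of size 51 and 50),
# the solution is still sandwiched by constant multiples of n log2 n.
for n in range(2, 200):
    assert n * math.log2(n) <= T(n) <= 2 * n * (math.log2(n) + 1)
```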
Theorem 4.13 Let a and b be positive real numbers with b ≥ 2 and let c and d be real numbers. Let T(n) be the function defined on the integers by the recurrence

    T(n) = { aT(⌈n/b⌉) + n^c   if n > 1
           { d                 if n = 1,

and let t(x) be the function on the real numbers defined by the recurrence

    t(x) = { at(x/b) + x^c   if x ≥ b
           { d               if 1 ≤ x < b.

Then T(n) = Θ(t(n)). The same statement applies with ceilings replaced by floors.


Proof: As in the previous theorem, we can consider iterating the two recurrences. It is straightforward (though dealing with the notation is difficult) to show that for a given value of n, the iteration for computing T(n) has at most two more levels than the iteration for computing t(n). The work per level also has the same big-Θ bounds at each level, and the work for the two additional levels of the iteration for T(n) has the same big-Θ bounds as the work at the bottom level of the recursion tree for t(n). We give the details in the appendix at the end of this section.

Theorem 4.12 and Theorem 4.13 tell us that the big-Θ behavior of solutions to our more realistic recurrences

    T(n) = { aT(⌈n/b⌉) + n^c   if n > 1
           { d                 if n = 1

is determined by their big-Θ behavior on powers of the base b.

Floors and ceilings in the stronger version of the Master Theorem (Optional)

In our first version of the master theorem, we showed that we could ignore ceilings and assume our variables were powers of b. In fact we can ignore them in circumstances where the function telling us the "work" done at each level of our recursion tree is Θ(x^c) for some positive real number c. This lets us apply the second version of the master theorem to recurrences of the form T(n) = aT(⌈n/b⌉) + f(n).

Theorem 4.14 Theorems 4.12 and 4.13 apply to recurrences in which the x^c or n^c term is replaced by f(x) or f(n) for a function f with f(x) = Θ(x^c).

Proof: We iterate the recurrences or construct recursion trees in the same way as in the proofs of the original theorems, and find that the condition f(x) = Θ(x^c) gives us enough information to again bound the solution above and below with multiples of the solution of the recurrence with x^c. The details are similar to those in the original proofs.

Appendix: Proofs of Theorems (Optional)
For convenience, we repeat the statements of the earlier theorems whose proofs we merely outlined.

Theorem 4.12 Let a and b be positive real numbers with b > 1, and let c and d be real numbers. Let t(x) be the solution to the recurrence

t(x) = at(x/b) + x^c if x ≥ b, and t(x) = d if 1 ≤ x < b.

Let T(n) be the solution to the recurrence

T(n) = aT(n/b) + n^c if n > 1, and T(n) = d if n = 1,

defined when n is a nonnegative integer power of b. Let m(x) be the largest integer power of b less than or equal to x. Then t(x) = Θ(T(m(x))).


CHAPTER 4. INDUCTION, RECURSION, AND RECURRENCES

Proof: By iterating each recurrence 4 times (or using a four-level recursion tree in the case that a is an integer), we see that

t(x) = a^4 t(x/b^4) + (a/b^c)^3 x^c + (a/b^c)^2 x^c + (a/b^c) x^c + x^c

and

T(n) = a^4 T(n/b^4) + (a/b^c)^3 n^c + (a/b^c)^2 n^c + (a/b^c) n^c + n^c.

Thus, continuing until we have a solution, in both cases we get a solution that starts with a raised to an exponent that we will denote as either e(x) or e(n) when we want to distinguish between them, and e when it is unnecessary to distinguish. The solution for t will be a^e times t(x/b^e) plus x^c times a geometric series Σ_{i=0}^{e−1} (a/b^c)^i. The solution for T will be a^e times d plus n^c times a geometric series Σ_{i=0}^{e−1} (a/b^c)^i. In both cases t(x/b^e) (or T(n/b^e)) will be d. In both cases the geometric series will be Θ(1), Θ(e), or Θ((a/b^c)^e), depending on whether a/b^c is less than 1, equal to 1, or greater than 1.

Clearly e(n) = log_b n. Since we must divide x by b an integer number of times greater than log_b x − 1 in order to get a value in the range from 1 to b, e(x) = ⌈log_b x⌉. Thus, if m is the largest integer power of b less than or equal to x, then 0 ≤ e(x) − e(m) < 1. Let us use r to stand for the real number a/b^c. Then we have r^0 ≤ r^{e(x)−e(m)} < r, or r^{e(m)} ≤ r^{e(x)} ≤ r · r^{e(m)}. Thus we have r^{e(x)} = Θ(r^{e(m)}). Finally, m^c ≤ x^c ≤ b^c m^c, and so x^c = Θ(m^c). Therefore, every term of t(x) is Θ of the corresponding term of T(m). Further, there are only a fixed number of different constants involved in our Big-Θ bounds. Therefore, since t(x) is composed of sums and products of these terms, t(x) = Θ(T(m)).

Theorem 4.13 Let a and b be positive real numbers with b ≥ 2, and let c and d be real numbers. Let T(n) be the function defined on the integers by the recurrence

T(n) = aT(⌈n/b⌉) + n^c if n > 1, and T(n) = d if n = 1,

and let t(x) be the function on the real numbers defined by the recurrence

t(x) = at(x/b) + x^c if x ≥ b, and t(x) = d if 1 ≤ x < b.

Then T(n) = Θ(t(n)).

Proof: As in the previous proof, we can iterate both recurrences. Let us compare the results of iterating the recurrence for t(n) and the recurrence for T(n) the same number of times. Note that

⌈n/b⌉ < n/b + 1,
⌈⌈n/b⌉/b⌉ < n/b² + 1/b + 1,
⌈⌈⌈n/b⌉/b⌉/b⌉ < n/b³ + 1/b² + 1/b + 1.

This suggests that if we define n_0 = n and n_i = ⌈n_{i−1}/b⌉, then, using the fact that b ≥ 2, it is straightforward to prove by induction, or with the formula for the sum of a geometric series,


that n_i < n/b^i + 2. The number n_i is the argument of T in the ith iteration of the recurrence for T. We have just seen that it differs from the argument of t in the ith iteration of t by at most 2. In particular, we might have to iterate the recurrence for T twice more than we iterate the recurrence for t to reach the base case. When we iterate the recurrence for t, we get the same solution we got in the previous theorem, with n substituted for x. When we iterate the recurrence for T, we get, for some integer j, that

T(n) = a^j d + Σ_{i=0}^{j−1} a^i n_i^c,

with n/b^i ≤ n_i ≤ n/b^i + 2. But, so long as n/b^i ≥ 2, we have n/b^i + 2 ≤ n/b^{i−1}. Since the number of iterations of T is at most two more than the number of iterations of t, and since the number of iterations of t is ⌈log_b n⌉, we have that j is at most ⌈log_b n⌉ + 2. Therefore, all but perhaps the last three values of n_i are less than or equal to n/b^{i−1}, and these last three values are at most b², b, and 1. Putting all these bounds together and using n_0 = n gives us

Σ_{i=0}^{j−1} a^i (n/b^i)^c ≤ Σ_{i=0}^{j−1} a^i n_i^c ≤ n^c + Σ_{i=1}^{j−4} a^i (n/b^{i−1})^c + a^{j−2}(b²)^c + a^{j−1}b^c + a^j·1^c,

or

Σ_{i=0}^{j−1} a^i (n/b^i)^c ≤ Σ_{i=0}^{j−1} a^i n_i^c ≤ n^c + b^c Σ_{i=1}^{j−4} a^i (n/b^i)^c + a^{j−2}(b^j/b^{j−2})^c + a^{j−1}(b^j/b^{j−1})^c + a^j(b^j/b^j)^c.

As we shall see momentarily, these last three "extra" terms and the b^c in front of the summation sign do not change the Big-Θ behavior of the right-hand side. As in the proof of the Master Theorem, the Big-Θ behavior of the left-hand side depends on whether a/b^c is less than 1, in which case it is Θ(n^c); equal to 1, in which case it is Θ(n^c log_b n); or greater than 1, in which case it is Θ(n^{log_b a}). But this is exactly the Big-Θ behavior of the right-hand side, because n < b^j < nb², so that b^j = Θ(n), which means that (b^j/b^i)^c = Θ((n/b^i)^c), and the b^c in front of the summation sign does not change its Big-Θ behavior. Adding a^j d to the middle term of the inequality to get T(n) does not change this behavior, and this modified middle term is exactly T(n). Since the left- and right-hand sides have the same Big-Θ behavior as t(n), we have T(n) = Θ(t(n)).
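The conclusion of Theorem 4.13 is easy to sanity-check numerically; the check below is ours and is no substitute for the proof. For a = 3, b = 2, c = 1 we have log₂ 3 > 1, so the theorem predicts T(n) = Θ(n^{log₂ 3}) even with the ceiling in the recurrence. The constant bounds 0.9 and 27 in the assertion are our own generous choices.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 3*T(ceil(n/2)) + n for n > 1, with T(1) = 1."""
    if n == 1:
        return 1
    return 3 * T((n + 1) // 2) + n   # (n + 1) // 2 is ceil(n/2) for integer n

# Theorem 4.13 predicts T(n) = Theta(n^(log_2 3)) for all n, not just powers of 2.
ratios = [T(n) / n ** math.log2(3) for n in range(2, 2000)]
assert 0.9 < min(ratios) and max(ratios) < 27   # ratio stays within constant bounds
```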

Important Concepts, Formulas, and Theorems
1. Master Theorem, simplified version. The simplified version of the Master Theorem states: Let a be an integer greater than or equal to 1 and b be a real number greater than 1. Let c be a positive real number and d a nonnegative real number. Given a recurrence of the form

T(n) = aT(n/b) + n^c if n > 1, and T(n) = d if n = 1,

then for n a power of b,


(a) if log_b a < c, then T(n) = Θ(n^c);
(b) if log_b a = c, then T(n) = Θ(n^c log n);
(c) if log_b a > c, then T(n) = Θ(n^{log_b a}).

2. Properties of Logarithms. For any x, y, and z, each greater than 1, x^{log_y z} = z^{log_y x}. Also, log_x y = Θ(log₂ y).

3. Master Theorem, More General Version. Let a and b be positive real numbers with a ≥ 1 and b ≥ 2. Let T(n) be defined for powers n of b by

T(n) = aT(n/b) + f(n) if n > 1, and T(n) = d if n = 1.

Then

(a) if f(n) = Θ(n^c), where log_b a < c, then T(n) = Θ(n^c) = Θ(f(n));
(b) if f(n) = Θ(n^c), where log_b a = c, then T(n) = Θ(n^{log_b a} log_b n);
(c) if f(n) = Θ(n^c), where log_b a > c, then T(n) = Θ(n^{log_b a}).

A similar result with a base case that depends on b holds when 1 < b < 2.

4. Important Recurrences have Unique Solutions. (Optional.) The recurrence

T(n) = f(n)T(⌈n/b⌉) + g(n) if n > 1, and T(n) = k if n = 1

has a unique solution when f and g are (known) functions defined on the positive integers, and k and b are (known) constants with b an integer larger than 2.

5. Recurrences Defined on the Positive Real Numbers and Recurrences Defined on the Positive Integers. (Optional.) Let a and b be positive real numbers with b > 1, and let c and d be real numbers. Let t(x) be the solution to the recurrence

t(x) = at(x/b) + x^c if x ≥ b, and t(x) = d if 1 ≤ x < b.

Let T(n) be the solution to the recurrence

T(n) = aT(n/b) + n^c if n > 1, and T(n) = d if n = 1,

where n is a nonnegative integer power of b. Let m(x) be the largest integer power of b less than or equal to x. Then t(x) = Θ(T(m(x))).

6. Removing Floors and Ceilings from Recurrences. (Optional.) Let a and b be positive real numbers with b ≥ 2, and let c and d be real numbers. Let T(n) be the function defined on the integers by the recurrence

T(n) = aT(⌈n/b⌉) + n^c if n > 1, and T(n) = d if n = 1,

and let t(x) be the function on the real numbers defined by the recurrence

t(x) = at(x/b) + x^c if x ≥ b, and t(x) = d if 1 ≤ x < b.


Then T(n) = Θ(t(n)). The same statement applies with ceilings replaced by floors.

7. Extending 5 and 6. (Optional.) In the theorems summarized in 5 and 6, the n^c or x^c term may be replaced by a function f with f(x) = Θ(x^c).

8. Solutions to Realistic Recurrences. The theorems summarized in 5, 6, and 7 tell us that the Big-Θ behavior of solutions to our more realistic recurrences

T(n) = aT(⌈n/b⌉) + f(n) if n > 1, and T(n) = d if n = 1,

where f(n) = Θ(n^c), is determined by their Big-Θ behavior on powers of the base b and with f(n) = n^c.

Problems
1. Use the Master Theorem to give Big-Θ bounds on the solutions to the following recurrences. For all of these, assume that T(1) = 1 and n is a power of the appropriate integer.

(a) T(n) = 8T(n/2) + n
(b) T(n) = 8T(n/2) + n³
(c) T(n) = 3T(n/2) + n
(d) T(n) = T(n/4) + 1
(e) T(n) = 3T(n/3) + n²

2. Extend the proof of the Master Theorem, Theorem 4.9, to the case T(1) = d.

3. Show that for any x, y, and z, each greater than 1, x^{log_y z} = z^{log_y x}.

4. (Optional) Show that for each real number x ≥ 0 there is one and only one value of t(x) given by the recurrence

t(x) = 7x t(x − 1) + 1 if x ≥ 1, and t(x) = 1 if 0 ≤ x < 1.

5. (Optional) Show that for each real number x ≥ 1 there is one and only one value of t(x) given by the recurrence

t(x) = 3x t(x/2) + x² if x ≥ 2, and t(x) = 1 if 1 ≤ x < 2.


6. (Optional) How many solutions are there to the recurrence

T(n) = f(n)T(⌈n/b⌉) + g(n) if n > 1, and T(n) = k if n = 1

if b < 2? If b = 10/9, by what would we have to replace the condition that T(n) = k if n = 1 in order to get a unique solution?

7. Give a Big-Θ bound on the solution to the recurrence

T(n) = 3T(⌈n/2⌉) + √(n + 3) if n > 1, and T(n) = d if n = 1.

8. Give a Big-Θ bound on the solution to the recurrence

T(n) = 3T(⌈n/2⌉) + √(n³ + 3) if n > 1, and T(n) = d if n = 1.

9. Give a Big-Θ bound on the solution to the recurrence

T(n) = 3T(⌈n/2⌉) + √(n⁴ + 3) if n > 1, and T(n) = d if n = 1.

10. Give a Big-Θ bound on the solution to the recurrence

T(n) = 2T(⌈n/2⌉) + √(n² + 3) if n > 1, and T(n) = d if n = 1.

11. (Optional) Explain why Theorem 4.11 is a consequence of Theorem 4.12 and Theorem 4.13.


4.5 More general kinds of recurrences

Recurrence Inequalities
The recurrences we have been working with are really idealized versions of what we know about the problems we are working on. For example, in merge-sort on a list of n items, we say we divide the list into two parts of equal size, sort each part, and then merge the two sorted parts. The time it takes to do this is the time it takes to divide the list into two parts plus the time it takes to sort each part, plus the time it takes to merge the two sorted lists. We don’t specify how we are dividing the list, or how we are doing the merging. (We assume the sorting is done by applying the same method to the smaller lists, unless they have size 1, in which case we do nothing.) What we do know is that any sensible way of dividing the list into two parts takes no more than some constant multiple of n time units (and might take no more than constant time if we do it by leaving the list in place and manipulating pointers) and that any sensible algorithm for merging two lists will take no more than some (other) constant multiple of n time units. Thus we know that if T (n) is the amount of time it takes to apply merge sort to n data items, then there is a constant c (the sum of the two constant multiples we mentioned) such that T (n) ≤ 2T (n/2) + cn. (4.27)
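To make the constants concrete, here is one standard way the merge sort just described might be implemented (our sketch, not code from the text). The two recursive calls and the linear-time split and merge are exactly what Recurrence 4.27 records.

```python
def merge_sort(items):
    """Sort a list; the structure mirrors T(n) <= 2T(n/2) + cn."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # T(n/2)
    right = merge_sort(items[mid:])   # T(n/2)
    return merge(left, right)         # the cn term

def merge(left, right):
    """Merge two sorted lists in time proportional to the total length."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
```

For instance, merge_sort([3, 1, 8, 6, 4, 11, 7]) returns [1, 3, 4, 6, 7, 8, 11].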

Thus real-world problems often lead us to recurrence inequalities rather than recurrence equations. These are inequalities stating that T(n) is less than or equal to some expression involving values of T(m) for m < n. (We could also include inequalities with a greater-than-or-equal-to sign, but they do not arise in the applications we are studying.) A solution to a recurrence inequality is a function T that satisfies the inequality. For simplicity we will expand what we mean by the word recurrence to include either recurrence inequalities or recurrence equations. In Recurrence 4.27 we are implicitly assuming that T is defined only on positive integer values and, since we said we divided the list into two equal parts each time, our analysis only makes sense if we assume that n is a power of 2. Note that there are actually infinitely many solutions to Recurrence 4.27. (For example, for any c′ < c, the unique solution to

T(n) = 2T(n/2) + c′n if n ≥ 2, and T(n) = k if n = 1, (4.28)

satisfies Inequality 4.27 for any constant k.) The idea that Recurrence 4.27 has infinitely many solutions, while Recurrence 4.28 has exactly one solution, is analogous to the idea that x − 3 ≤ 0 has infinitely many solutions while x − 3 = 0 has one solution. Later in this section we shall see how to show that all the solutions to Recurrence 4.27 satisfy T(n) = O(n log₂ n). In other words, no matter how we sensibly implement merge sort, we have an O(n log₂ n) time bound on how long the merge sort process takes.

Exercise 4.5-1 Carefully prove by induction that for any function T defined on the nonnegative powers of 2, if T(n) ≤ 2T(n/2) + cn for some constant c, then T(n) = O(n log n).


A Wrinkle with Induction
We can analyze recurrence inequalities via a recursion tree. The process is virtually identical to our previous use of recursion trees. We must, however, keep in mind that on each level, we are really computing an upper bound on the work done on that level. We can also use a variant of the method we used a few sections ago, guessing an upper bound and verifying by induction. We use this method for the recurrence in Exercise 4.5-1. Here we wish to show that T(n) = O(n log n). From the definition of Big-O, we can see that we wish to show that T(n) ≤ kn log n for some positive constant k (so long as n is larger than some value n₀). We are going to do something you may find rather curious. We will consider the possibility that we have a value of k for which the inequality holds. Then, in analyzing the consequences of this possibility, we will discover assumptions that we need to make about k in order for such a k to exist. What we will really be doing is experimenting to see how we will need to choose k to make an inductive proof work.

We are given that T(n) ≤ 2T(n/2) + cn for all positive integers n that are powers of 2. We want to prove there is another positive real number k > 0 and an n₀ > 0 such that for n > n₀, T(n) ≤ kn log n. We cannot expect to have the inequality T(n) ≤ kn log n hold for n = 1, because log 1 = 0. To have T(2) ≤ k · 2 log 2 = 2k, we must choose k ≥ T(2)/2. This is the first assumption we must make about k. Our inductive hypothesis will be that if n is a power of 2 and m is a power of 2 with 2 ≤ m < n, then T(m) ≤ km log m. Now n/2 < n, and since n is a power of 2 greater than 2, we have n/2 ≥ 2, so (n/2) log(n/2) ≥ 2. By the inductive hypothesis, T(n/2) ≤ k(n/2) log(n/2). But then

T(n) ≤ 2T(n/2) + cn (4.29)
     ≤ 2k(n/2) log(n/2) + cn (4.30)
     = kn log(n/2) + cn
     = kn log n − kn log 2 + cn (4.31)
     = kn log n − kn + cn. (4.32)

Recall that we are trying to show that T(n) ≤ kn log n. But that is not quite what Line 4.32 gives us. This shows that we need to make another assumption about k, namely that −kn + cn ≤ 0, or k ≥ c. Then, if both our assumptions about k are satisfied, we will have T(n) ≤ kn log n, and we can conclude by the principle of mathematical induction that for all n > 1 (so our n₀ is 2), T(n) ≤ kn log n, so that T(n) = O(n log n).

A full inductive proof that T(n) = O(n log n) is actually embedded in the discussion above, but since it might not appear to everyone to be a proof, we summarize our observations below in a more traditional-looking proof. You should be aware, however, that some authors and teachers prefer to write their proofs in a style that shows why we make the choices about k that we do, so you should learn how to read discussions like the one above as proofs.

We want to show that if T(n) ≤ 2T(n/2) + cn, then T(n) = O(n log n). We are given a real number c > 0 such that T(n) ≤ 2T(n/2) + cn for all n > 1. Choose k to be larger than or equal to T(2)/2 and larger than or equal to c. Then

T(2) ≤ k · 2 log 2


because k ≥ T(2)/2 and log 2 = 1. Now assume that n > 2 and assume that for m with 2 ≤ m < n, we have T(m) ≤ km log m. Since n is a power of 2, we have n ≥ 4, so that n/2 is an m with 2 ≤ m < n. Thus, by the inductive hypothesis,

T(n/2) ≤ k(n/2) log(n/2).

Then by the recurrence,

T(n) ≤ 2k(n/2) log(n/2) + cn
     = kn(log n − 1) + cn
     = kn log n + cn − kn
     ≤ kn log n,

since k ≥ c. Thus, by the principle of mathematical induction, T(n) ≤ kn log n for all n > 2, and therefore T(n) = O(n log n).

There are three things to note about this proof. First, without the preceding discussion, the choice of k seems arbitrary. Second, without the preceding discussion, the implicit choice of 2 for the n₀ in the Big-O statement also seems arbitrary. Third, the constant k is chosen in terms of the previous constant c. Since c was given to us by the recurrence, it may be used in choosing the constant we use to prove a Big-O statement about solutions to the recurrence. If you compare the formal proof we just gave with the informal discussion that preceded it, you will find that each step of the formal proof corresponds to something we said in the informal discussion. Since the informal discussion explained why we were making the choices we did, it is natural that some people prefer the informal explanation to the formal proof.
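The choice k = max(T(2)/2, c) can also be checked numerically against the worst case the inequality allows, namely T(n) = 2T(n/2) + cn exactly. The sketch below is ours; the particular constants c = 5 and d = 7 are arbitrary.

```python
import math

def T(n, c=5, d=7):
    """Worst case permitted by the inequality: T(n) = 2*T(n/2) + c*n, T(1) = d."""
    return d if n == 1 else 2 * T(n // 2, c, d) + c * n

c, d = 5, 7
k = max(T(2, c, d) / 2, c)   # the two assumptions made about k in the proof
for e in range(1, 15):       # powers of 2 from 2 up to 2^14
    n = 2 ** e
    assert T(n, c, d) <= k * n * math.log2(n)
```

With these constants k = 12, and the bound holds with equality at n = 2, which is why the base case forces the assumption k ≥ T(2)/2.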

Further Wrinkles in Induction Proofs
Exercise 4.5-2 Suppose that c is a real number greater than zero. Show by induction that any solution T(n) to the recurrence T(n) ≤ T(n/3) + cn, with n restricted to integer powers of 3, has T(n) = O(n).

Exercise 4.5-3 Suppose that c is a real number greater than zero. Show by induction that any solution T(n) to the recurrence T(n) ≤ 4T(n/2) + cn, with n restricted to integer powers of 2, has T(n) = O(n²).

In Exercise 4.5-2 we are given a constant c such that T(n) ≤ T(n/3) + cn if n > 1. Since we want to show that T(n) = O(n), we want to find two more constants n₀ and k such that T(n) ≤ kn whenever n > n₀. We will choose n₀ = 1 here. (This was not an arbitrary choice; it is based on observing that T(1) ≤ kn is not an impossible condition to satisfy when n = 1.) In order to have T(n) ≤ kn for


n = 1, we must assume k ≥ T(1). Now, assuming inductively that T(m) ≤ km when 1 ≤ m < n, we can write

T(n) ≤ T(n/3) + cn
     ≤ k(n/3) + cn
     = kn + (c − 2k/3)n.

Thus, as long as c − 2k/3 ≤ 0, i.e. k ≥ (3/2)c, we may conclude by mathematical induction that T(n) ≤ kn for all n ≥ 1. Again, the elements of an inductive proof are in the preceding discussion, and again you should try to learn how to read the argument we just finished as a valid inductive proof. However, we will now present something that looks more like an inductive proof.

We choose k to be the maximum of T(1) and 3c/2, and we choose n₀ = 1. To prove by induction that T(n) ≤ kn, we begin by observing that T(1) ≤ k · 1. Next we assume that n > 1 and assume inductively that for m with 1 ≤ m < n we have T(m) ≤ km. Now we may write

T(n) ≤ T(n/3) + cn
     ≤ kn/3 + cn
     = kn + (c − 2k/3)n
     ≤ kn,

because we chose k to be at least as large as 3c/2, making c − 2k/3 negative or zero. Thus, by the principle of mathematical induction, we have T(n) ≤ kn for all n ≥ 1, and so T(n) = O(n).

Now let's analyze Exercise 4.5-3. We won't dot all the i's and cross all the t's here because there is only one major difference between this exercise and the previous one. We wish to prove that there are an n₀ and a k such that T(n) ≤ kn² for n > n₀. Assuming that we have chosen n₀ and k so that the base case holds, we can bound T(n) inductively by assuming that T(m) ≤ km² for m < n and reasoning as follows:

T(n) ≤ 4T(n/2) + cn
     ≤ 4(k(n/2)²) + cn
     = 4(kn²/4) + cn

= kn² + cn.

To proceed as before, we would like to choose a value of k so that cn ≤ 0. But we see that we have a problem, because both c and n are always positive! What went wrong? We have a statement that we know is true, and we have a proof method (induction) that worked nicely for similar problems. The usual way to describe the problem we are facing is that, while the statement is true, it is too weak to be proved by induction. To have a chance of making the inductive proof work, we will have to make an inductive hypothesis that puts some sort of negative quantity, say a term like −kn, into the last line of our display above. Let's see if we can prove something that is actually stronger than we were originally trying to prove, namely that for some positive constants k1 and k2, T(n) ≤ k1 n² − k2 n. Now, proceeding as before, we get

T(n) ≤ 4T(n/2) + cn

≤ 4(k1 (n/2)² − k2 (n/2)) + cn
= 4(k1 n²/4) − 2k2 n + cn
= k1 n² − 2k2 n + cn
= k1 n² − k2 n + (c − k2)n.

Now we have to make (c − k2)n ≤ 0 for the last line to be at most k1 n² − k2 n, and so we just choose k2 ≥ c (and greater than whatever we need in order to make a base case work). Since T(n) ≤ k1 n² − k2 n for some constants k1 and k2, we have T(n) = O(n²).

At first glance, this approach seems paradoxical: why is it easier to prove a stronger statement than a weaker one? This phenomenon happens often in induction: a stronger statement is often easier to prove than a weaker one. Think carefully about an inductive proof where you have assumed that a bound holds for values smaller than n and you are trying to prove a statement for n. You use the bound you have assumed for smaller values to help prove the bound for n. Thus if the bound you used for smaller values is actually weak, then that is hindering you in proving the bound for n. In other words, when you want to prove something about p(n), you are using p(1) ∧ . . . ∧ p(n − 1). If these are stronger, they will be of greater help in proving p(n). In the case above, the statements p(1), . . . , p(n − 1) were too weak, and thus we were not able to prove p(n). By using a stronger p(1), . . . , p(n − 1), however, we were able to prove a stronger p(n), one that implied the original p(n) we wanted. When we give an induction proof in this way, we say that we are using a stronger inductive hypothesis.
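For this particular recurrence the stronger statement is not just provable but exact: iterating T(n) = 4T(n/2) + cn with T(1) = d on powers of 2 gives T(n) = d·n² + cn(n − 1) = (d + c)n² − cn, so k1 = d + c and k2 = c work. The check below is our sketch; the constants c = 3 and d = 1 are arbitrary.

```python
def T(n, c=3, d=1):
    """T(n) = 4*T(n/2) + c*n for n > 1, with T(1) = d (n a power of 2)."""
    return d if n == 1 else 4 * T(n // 2, c, d) + c * n

c, d = 3, 1
for e in range(0, 12):
    n = 2 ** e
    # The stronger inductive hypothesis holds with equality: k1 = d + c, k2 = c.
    assert T(n, c, d) == (d + c) * n * n - c * n
```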

Dealing with Functions Other Than n^c
Our statement of the Master Theorem involved a recursive term plus an added term that was Θ(n^c). Sometimes algorithmic problems lead us to consider other kinds of functions. The most common example is when the added function involves logarithms. For example, consider the recurrence

T(n) = 2T(n/2) + n log n if n > 1, and T(n) = 1 if n = 1,

where n is a power of 2. Just as before, we can draw a recursion tree; the whole methodology works, but our sums may be a little more complicated. The tree for this recurrence is shown in Figure 4.8. This is similar to the tree for T(n) = 2T(n/2) + n, except that the work on level i is n log(n/2^i), and, for the bottom level, it is n, the number of subproblems, times 1. Thus if we sum the work per level we get



Σ_{i=0}^{log n − 1} n log(n/2^i) + n = n ( ( Σ_{i=0}^{log n − 1} log(n/2^i) ) + 1 )

= n ( ( Σ_{i=0}^{log n − 1} (log n − log 2^i) ) + 1 )


Figure 4.8: The recursion tree for T (n) = 2T (n/2) + n log n if n > 1 and T (1) = 1.
Problem Size    Work
n               n log n
n/2             n/2 log(n/2) + n/2 log(n/2) = n log(n/2)
n/4             4(n/4 log(n/4)) = n log(n/4)
n/8             8(n/8 log(n/8)) = n log(n/8)
...             ...
2               (n/2)(2 log 2) = n
1               n(1) = n

(The tree has log n + 1 levels.)



= n ( ( Σ_{i=0}^{log n − 1} log n − Σ_{i=0}^{log n − 1} i ) + 1 )

= n ( (log n)(log n) − (log n)(log n − 1)/2 + 1 )

= O(n log² n).

A bit of mental arithmetic in the second-to-last line of our equations shows that the log² n term will not cancel out, so our solution is in fact Θ(n log² n).

Exercise 4.5-4 Find the best Big-O bound you can on the solution to the recurrence

T(n) = T(n/2) + n log n if n > 1, and T(n) = 1 if n = 1,

assuming n is a power of 2. Is this bound a Big-Θ bound?

The tree for this recurrence is in Figure 4.9. Notice that the work done at the bottom nodes of the tree is determined by the statement T(1) = 1 in our recurrence; it is not 1 log 1. Summing the work, we get
 

1 + Σ_{i=0}^{log n − 1} (n/2^i) log(n/2^i) = 1 + n Σ_{i=0}^{log n − 1} (1/2^i)(log n − log 2^i)

= 1 + n Σ_{i=0}^{log n − 1} (1/2)^i (log n − i)

≤ 1 + n log n Σ_{i=0}^{log n − 1} (1/2)^i

 


Figure 4.9: The recursion tree for the recurrence T (n) = T (n/2) + n log n if n > 1 and T (1) = 1.
Problem Size    Work
n               n log n
n/2             (n/2) log(n/2)
n/4             (n/4) log(n/4)
n/8             (n/8) log(n/8)
...             ...
2               2 log 2
1               1

(The tree has log n levels below the root.)

≤ 1 + 2n log n = O(n log n).

Note that the largest term in the sum in our second line of equations is log n, and none of the terms in the sum are negative. This means that n times the sum is at least n log n. Therefore, we have T(n) = Θ(n log n).

Removing Ceilings and Using Powers of b. (Optional)

We showed that in our versions of the Master Theorem, we could ignore ceilings and assume our variables were powers of b. It might appear that the two theorems we used do not apply to the more general functions we have studied in this section any more than the Master Theorem does. However, they actually depend only on properties of the powers n^c, not on the three different kinds of cases, so it turns out we can extend them. Notice that (xb)^c = b^c x^c, and this proportionality holds for all values of x with constant of proportionality b^c. Putting this just a bit less precisely, we can write (xb)^c = O(x^c). This suggests that we might be able to obtain Big-Θ bounds on T(n) when T satisfies a recurrence of the form T(n) = aT(n/b) + f(n) with f(nb) = Θ(f(n)), and we might be able to obtain Big-O bounds on T when T satisfies a recurrence of the form T(n) ≤ aT(n/b) + f(n) with f(nb) = O(f(n)). But are these conditions satisfied by any functions of practical interest? Yes. For example, if f(x) = log x, then f(bx) = log b + log x = Θ(log x).

Exercise 4.5-5 Show that if f(x) = x² log x, then f(bx) = Θ(f(x)).
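Claims like the one in Exercise 4.5-5 are easy to probe numerically before proving them; the sketch below and its cutoff constants are ours. For f(x) = x² log x the ratio f(2x)/f(x) stays bounded, while for f(x) = 3^x, taken up in the next exercise, it grows without bound.

```python
import math

def ratios(f, b, xs):
    """Sample the ratio f(b*x)/f(x) at the points xs."""
    return [f(b * x) / f(x) for x in xs]

poly_log = ratios(lambda x: x * x * math.log2(x), 2, [2.0 ** e for e in range(1, 30)])
exponential = ratios(lambda x: 3.0 ** x, 2, [float(x) for x in range(1, 30)])

# f(x) = x^2 log x: the ratio is 4(1 + 1/log2(x)), bounded, so f(2x) = Theta(f(x)).
assert 4 < min(poly_log) and max(poly_log) <= 8
# f(x) = 3^x: the ratio is 3^x itself, unbounded, so f(2x) is not O(f(x)).
assert exponential[5] > 700 and exponential[-1] > 1e13
```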


Exercise 4.5-6 If f(x) = 3^x and b = 2, is f(bx) = Θ(f(x))? Is f(bx) = O(f(x))?

For Exercise 4.5-5, if f(x) = x² log x, then f(bx) = (bx)² log(bx) = b²x²(log b + log x) = Θ(x² log x). However, if f(x) = 3^x, then f(2x) = 3^{2x} = (3^x)² = 3^x · 3^x, and there is no way that this can be less than or equal to a constant multiple of 3^x, so it is neither Θ(3^x) nor O(3^x).

Our exercises suggest that the kinds of functions satisfying the condition f(bx) = O(f(x)) include at least some of the kinds of functions of x which arise in the study of algorithms. They certainly include the power functions, and thus polynomial functions and root functions, or functions bounded by such functions.

There was one other property of the power functions n^c that we used implicitly in our discussions of removing floors and ceilings and assuming our variables were powers of b: namely, if x > y (and c ≥ 0), then x^c ≥ y^c. A function f from the real numbers to the real numbers is called (weakly) increasing if whenever x > y, then f(x) ≥ f(y). Functions like f(x) = log x and f(x) = x log x are increasing functions. On the other hand, the function defined by

f(x) = x if x is a power of b, and f(x) = x² otherwise,

is not increasing, even though it does satisfy the condition f(bx) = Θ(f(x)).

Theorem 4.15 Theorems 4.12 and 4.13 apply to recurrences in which the x^c term is replaced by an increasing function f for which f(bx) = Θ(f(x)).

Proof: We iterate the recurrences in the same way as in the proofs of the original theorems, and find that the condition f(bx) = Θ(f(x)), applied to an increasing function, gives us enough information to again bound the solution to one kind of recurrence above and below with a multiple of the solution of the other kind. The details are similar to those in the original proofs, so we omit them.

In fact, there are versions of Theorems 4.12 and 4.13 for recurrence inequalities also. The proofs involve a similar analysis of iterated recurrences or recursion trees, and so we omit them.

Theorem 4.16 Let a and b be positive real numbers with b > 2, and let f : R⁺ → R⁺ be an increasing function such that f(bx) = O(f(x)). Then every solution t(x) to the recurrence

t(x) ≤ at(x/b) + f(x) if x ≥ b, and t(x) ≤ c if 1 ≤ x < b,

where a, b, and c are constants, satisfies t(x) = O(h(x)) if and only if every solution T(n) to the recurrence

T(n) ≤ aT(n/b) + f(n) if n > 1, and T(n) ≤ d if n = 1,

where n is restricted to powers of b, satisfies T(n) = O(h(n)).


Theorem 4.17 Let a and b be positive real numbers with b ≥ 2, and let f : R⁺ → R⁺ be an increasing function such that f(bx) = O(f(x)). Then every solution T(n) to the recurrence

T(n) ≤ aT(⌈n/b⌉) + f(n) if n > 1, and T(n) ≤ d if n = 1,

satisfies T(n) = O(h(n)) if and only if every solution t(x) to the recurrence

t(x) ≤ at(x/b) + f(x) if x ≥ b, and t(x) ≤ d if 1 ≤ x < b,

satisfies t(x) = O(h(x)).

Important Concepts, Formulas, and Theorems
1. Recurrence Inequality. Recurrence inequalities are inequalities stating that T(n) is less than or equal to some expression involving values of T(m) for m < n. A solution to a recurrence inequality is a function T that satisfies the inequality.

2. Recursion Trees for Recurrence Inequalities. We can analyze recurrence inequalities via a recursion tree. The process is virtually identical to our previous use of recursion trees. We must, however, keep in mind that on each level, we are really computing an upper bound on the work done on that level.

3. Discovering Necessary Assumptions for an Inductive Proof. If we are trying to prove that there is a value k such that an inequality of the form f(n) ≤ kg(n) (or some other statement involving the parameter k) is true, we may start an inductive proof without knowing a value for k and determine conditions on k from the assumptions we need to make in order for the inductive proof to work. When written properly, such an explanation is actually a valid proof.

4. Making a Stronger Inductive Hypothesis. If we are trying to prove by induction a statement of the form p(n) ⇒ q(n), and we have a statement s(n) such that s(n) ⇒ q(n), it is sometimes useful to try to prove the statement p(n) ⇒ s(n). This process is known as proving a stronger statement or making a stronger inductive hypothesis. It sometimes works because it gives us an inductive hypothesis which suffices to prove the stronger statement, even though our original statement q(n) did not give an inductive hypothesis sufficient to prove the original statement. However, we must be careful in our choice of s(n), because we have to be able to succeed in proving p(n) ⇒ s(n).

5. When the Master Theorem Does Not Apply. To deal with recurrences of the form

T(n) = aT(⌈n/b⌉) + f(n) if n > 1, and T(n) = d if n = 1,

where f(n) is not Θ(n^c), recursion trees and iterating the recurrence are appropriate tools even though the Master Theorem does not apply. The same holds for recurrence inequalities.

6. Increasing Function. (Optional.) A function f : R → R is said to be (weakly) increasing if whenever x > y, f(x) ≥ f(y).


7. Removing Floors and Ceilings when the Master Theorem Does Not Apply. (Optional.) To deal with Big-Θ bounds for recurrences of the form

T(n) = aT(⌈n/b⌉) + f(n) if n > 1, and T(n) = d if n = 1,

where f(n) is not Θ(n^c), we may remove floors and ceilings and replace n by powers of b if f is increasing and satisfies the condition f(nb) = Θ(f(n)). To deal with Big-O bounds for a similar recurrence inequality, we may remove floors and ceilings if f is increasing and satisfies the condition f(nb) = O(f(n)).

Problems
1. (a) Find the best Big-O upper bound you can to any solution to the recurrence

T(n) = 4T(n/2) + n log n if n > 1, and T(n) = 1 if n = 1.

(b) Assuming that you were able to guess the result you got in part (a), prove by induction that your answer is correct.

2. Is the Big-O upper bound in the previous problem actually a Big-Θ bound?

3. Show by induction that

T(n) = 8T(n/2) + n log n if n > 1, and T(n) = d if n = 1

has T (n) = O(n3 ) for any solution T (n). 4. Is the big-O upper bound in the previous problem actually a big-Θ bound? 5. Show by induction that any solution to a recurrence of the form T (n) ≤ 2T (n/3) + c log3 n is O(n log3 n). What happens if you replace 2 by 3 (explain why)? Would it make a difference if we used a different base for the logarithm (only an intuitive explanation is needed here)? 6. What happens if you replace the 2 in Problem 5 by 4? (Hint: one way to attack this is with recursion trees.) 7. Is the big-O upper bound in Problem 5 actually a big Θ bound? 8. (Optional) Give an example (different from any in the text) of a function for which f (bx) = O(f (x)). Give an example (different from any in the text) of a function for which f (bx) is not O(f (x)). 9. Give the best big O upper bound you can for the solution to the recurrence T (n) = 2T (n/3− 3) + n, and then prove by induction that your upper bound is correct.

4.5. MORE GENERAL KINDS OF RECURRENCES


10. Find the best big-O upper bound you can on any solution to the recurrence defined on nonnegative integers by T(n) ≤ 2T(⌈n/2⌉ + 1) + cn. Prove by induction that your answer is correct.


4.6 Recurrences and Selection

The idea of selection
One common problem that arises in algorithms is that of selection. In this problem you are given n distinct data items from some set which has an underlying order. That is, given any two items a and b, you can determine whether a < b. (Integers satisfy this property, but colors do not.) Given these n items, and some value i, 1 ≤ i ≤ n, you wish to find the ith smallest item in the set. For example, in the set

    {3, 1, 8, 6, 4, 11, 7},    (4.33)

the first smallest (i = 1) is 1, the third smallest (i = 3) is 4, and the seventh smallest (i = n = 7) is 11. An important special case is that of finding the median, which is the case of i = ⌈n/2⌉. Another important special case is finding percentiles; for example, the 90th percentile is the case i = ⌈.9n⌉. As this suggests, i is frequently given as some fraction of n.

Exercise 4.6-1 How do you find the minimum (i = 1) or maximum (i = n) in a set? What is the running time? How do you find the second smallest element? Does this approach extend to finding the ith smallest? What is the running time?

Exercise 4.6-2 Give the fastest algorithm you can to find the median (i = ⌈n/2⌉).

In Exercise 4.6-1, the simple O(n) algorithm of going through the list and keeping track of the minimum value seen so far will suffice to find the minimum. Similarly, if we want to find the second smallest, we can go through the list once, find the smallest, remove it, and then find the smallest in the new list. This also takes O(n + n − 1) = O(n) time. If we extend this to finding the ith smallest, the algorithm will take O(in) time. Thus for finding the median, this method takes O(n^2) time. A better idea for finding the median is to first sort the items and then take the item in position ⌈n/2⌉. Since we can sort in O(n log n) time, this algorithm will take O(n log n) time.
Thus if i = O(log n) we might want to run the algorithm of the previous paragraph, and otherwise run this algorithm.4 All these approaches, when applied to the median, take at least some multiple of (n log n) units of time.5 The best sorting algorithms take O(n log n) time also, and one can prove every comparison-based sorting algorithm takes Ω(n log n) time. This raises the natural question of whether it is possible to do selection any faster than sorting. In other words, is the problem of finding the median element, or of finding the ith smallest element of a set, significantly easier than the problem of ordering (sorting) the whole set?
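The two naive approaches just discussed can be sketched in Python (our illustration, not code from the text): repeated minimum-finding takes O(in) time, while sorting first takes O(n log n) time.

```python
def select_by_repeated_min(items, i):
    """Find the ith smallest (i = 1 is the minimum) by finding and
    removing the minimum i - 1 times: O(in) time in all."""
    remaining = list(items)
    for _ in range(i - 1):
        remaining.remove(min(remaining))  # each pass takes O(n) time
    return min(remaining)

def select_by_sorting(items, i):
    """Find the ith smallest by sorting first: O(n log n) time."""
    return sorted(items)[i - 1]

data = [3, 1, 8, 6, 4, 11, 7]           # the set in (4.33)
print(select_by_repeated_min(data, 3))  # 4, the third smallest
print(select_by_sorting(data, 7))       # 11, the seventh smallest
```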

A recursive selection algorithm
Suppose for a minute that we magically knew how to find the median in O(n) time. That is, we have a routine MagicMedian that, given a set A as input, returns the median. We could then use this in a divide and conquer algorithm for Select as follows:
4 We also note that the running time can be improved to O(n + i log n) by first creating a heap, which takes O(n) time, and then performing a Delete-Min operation i times.

5 An alternate notation for f(x) = O(g(x)) is g(x) = Ω(f(x)). Notice the change in roles of f and g. In this notation, we say that all of these algorithms take Ω(n log n) time.

Select(A, i, n) (selects the ith smallest element in set A, where n = |A|)
(1)  if (n = 1)
(2)      return the one item in A
(3)  else
(4)      p = MagicMedian(A)
(5)      Let H be the set of elements greater than p
(6)      Let L be the set of elements less than or equal to p
(7)      if (i ≤ |L|)
(8)          Return Select(L, i, |L|)
(9)      else
(10)         Return Select(H, i − |L|, |H|)

By H we do not mean the elements that come after p in the list, but the elements of the list which are larger than p in the underlying ordering of our set. This algorithm is based on the following simple observation. If we could divide the set A up into a "lower half" (L) and an "upper half" (H), then we know in which of these two sets the ith smallest element of A will be. Namely, if i ≤ ⌈n/2⌉, it will be in L, and otherwise it will be in H. Thus, we can recursively look in one or the other set. We can easily partition the data into two sets by making two passes; in the first we copy the numbers less than or equal to p into L, and in the second we copy the numbers larger than p into H.6 The only additional detail is that if we look in H, then instead of looking for the ith smallest, we look for the (i − ⌈n/2⌉)th smallest, as H is formed by removing the ⌈n/2⌉ smallest elements from A.

For example, if the input is the set given in (4.33) and p is 6, the set L would be {3, 1, 6, 4} and H would be {8, 11, 7}. If i were 2, we would recurse on the set L with i = 2. On the other hand, if i were 6, we would recurse on the set H with i = 6 − 4 = 2. Observe that the second smallest element in H is 8, as is the sixth smallest element in the original set.

We can express the running time of Select by the following recurrence:

    T(n) ≤ T(n/2) + cn.    (4.34)

From the master theorem, we know any function which satisfies this recurrence has T (n) = O(n). So we can conclude that if we already know how to find the median in linear time, we can design a divide and conquer algorithm that will solve the selection problem in linear time. However, this is nothing to write home about (yet)!
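A Python sketch of Select (ours, not the text's) makes the recursive structure concrete. Since we do not yet have a linear-time MagicMedian, the median here is faked by sorting, so this version actually runs in O(n log n) time overall; it only illustrates the recursion.

```python
def select(A, i):
    """Return the ith smallest element of A (distinct items, i >= 1)."""
    if len(A) == 1:
        return A[0]
    p = sorted(A)[(len(A) - 1) // 2]   # stand-in for MagicMedian(A)
    L = [x for x in A if x <= p]       # the "lower half"
    H = [x for x in A if x > p]        # the "upper half"
    if i <= len(L):
        return select(L, i)            # answer is among the small elements
    return select(H, i - len(L))       # shift i past the |L| smallest

print(select([3, 1, 8, 6, 4, 11, 7], 6))  # 8, as in the example above
```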

Selection without knowing the median in advance
Sometimes a knowledge of solving recurrences can help us design algorithms. What kinds of recurrences do we know about that have solutions T(n) with T(n) = O(n)? In particular, consider recurrences of the form T(n) ≤ T(n/b) + cn, and ask when they have solutions with T(n) = O(n). Using the master theorem, we see that as long as log_b 1 < 1 (and since log_b 1 = 0
6 We can do this more efficiently, and "in place", using the partition algorithm of quicksort.


for any b, then any b allowed by the master theorem works; that is, any b > 1 will work), all solutions to this recurrence will have T(n) = O(n). (Note that b does not have to be an integer.) If we let b′ = 1/b, we can say equivalently that as long as we can solve a problem of size n by solving (recursively) a problem of size b′n, for some b′ < 1, and also doing O(n) additional work, our algorithm will run in O(n) time. Interpreting this in the selection problem, it says that as long as we can, in O(n) time, choose p to ensure that both L and H have size at most b′n, we will have a linear-time algorithm. (You might ask, "What about actually dividing our set into L and H; doesn't that take some time too?" The answer is yes, it does, but we already know we can do the division into H and L in time O(n), so if we can find p in time O(n) also, then we can do both these things in time O(n).)

In particular, suppose that, in O(n) time, we can choose p to ensure that both L and H have size at most (3/4)n. Then the running time is described by the recurrence T(n) = T(3n/4) + O(n), and we will be able to solve the selection problem in linear time. To see why (3/4)n is relevant, suppose instead of the "black box" MagicMedian, we have a much weaker magic black box, one which only guarantees that it will return some number in the middle half of our set in time O(n). That is, it will return a number that is guaranteed to be somewhere between the (n/4)th smallest number and the (3n/4)th smallest number. If we use the number given by this magic box to divide our set into H and L, then neither will have size more than 3n/4.
We will call this black box a MagicMiddle box, and can use it in the following algorithm:

Select1(A, i, n) (selects the ith smallest element in set A, where n = |A|)
(1)  if (n = 1)
(2)      return the one item in A
(3)  else
(4)      p = MagicMiddle(A)
(5)      Let H be the set of elements greater than p
(6)      Let L be the set of elements less than or equal to p
(7)      if (i ≤ |L|)
(8)          Return Select1(L, i, |L|)
(9)      else
(10)         Return Select1(H, i − |L|, |H|)

The algorithm Select1 is similar to Select. The only difference is that p is now only guaranteed to be in the middle half. Now, when we recurse, we decide whether to recurse on L or H based on whether i is less than or equal to |L|. The element p is called a partition element, because it is used to partition our set A into the two sets L and H.

This is progress, as we now don't need to assume that we can find the median in order to have a linear-time algorithm; we only need to assume that we can find one number in the middle half of the set. This problem seems simpler than the original problem, and in fact it is. Thus our knowledge of which recurrences have solutions which are O(n) led us toward a more plausible algorithm.

It takes a clever algorithm to find an item in the middle half of our set. We now describe such an algorithm in which we first choose a subset of the numbers and then recursively find the median of that subset.


An algorithm to find an element in the middle half
More precisely, consider the following algorithm, in which we assume that |A| is a multiple of 5. (The condition that n < 60 in line 2 is a technical condition that will be justified later.)

MagicMiddle(A)
(1)  Let n = |A|
(2)  if (n < 60)
(3)      use sorting to return the median of A
(4)  else
(5)      Break A into k = n/5 groups of size 5, G1, . . . , Gk
(6)      for i = 1 to k
(7)          find mi, the median of Gi (by sorting)
(8)      Let M = {m1, . . . , mk}
(9)      return Select1(M, ⌈k/2⌉, k)

In this algorithm, we break A into n/5 sets of size 5, and then find the median of each set. We then (using Select1 recursively) find the median of medians and return this as our p.

Lemma 4.18 The value returned by MagicMiddle(A) is in the middle half of A.

Proof: Consider arranging the elements as follows. List each set of 5 vertically in sorted order, with the smallest element on top. Then line up all n/5 of these lists, ordered by their medians, smallest on the left. We get the picture in Figure 4.10.

Figure 4.10: Dividing a set into n/5 parts of size 5, finding the median of each part and the median of the medians.

In this picture, the medians

are in white, the median of medians is cross-hatched, and we have put in all the inequalities that we know from the ordering information that we have. Now, consider how many items are less than or equal to the median of medians. Every smaller median is clearly less than the median


of medians and, in its 5-element set, the elements smaller than the median are also smaller than the median of medians. Now in Figure 4.11 we circle a set of elements that is guaranteed to be smaller than the median of medians.

Figure 4.11: The circled elements are less than the median of the medians.

In one fewer than half the columns (or, in the case of an odd number of columns as in Figure 4.11, one half fewer than half the columns), we have circled 3 elements, and in one column we have circled 2 elements. Therefore, we have circled at least7

    ((1/2)(n/5) − 1) · 3 + 2 = 3n/10 − 1

elements. So far we have assumed n is an exact multiple of 5, but we will be using this idea in circumstances when it is not. If it is not an exact multiple of 5, we will have ⌈n/5⌉ columns (in particular, more than n/5 columns), but in one of them we might have only one element. It is possible that this column is one of the ones we counted on for 3 elements, so our estimate could be two elements too large.8 Thus we have circled at least

    3n/10 − 1 − 2 = 3n/10 − 3

elements. It is a straightforward argument with inequalities that as long as n ≥ 60, this quantity is at least n/4. So if at least n/4 items are guaranteed to be less than the median of medians, then at most 3n/4 items can be greater than it, and hence |H| ≤ 3n/4.

A set of elements that is guaranteed to be larger than the median of medians is circled in Figure 4.12. We can make the same argument about the number of larger elements circled when the number of columns is odd; when the number of columns is even, a similar argument shows that we circle even more elements. By the same argument as we used with |H|, this shows that the size of L is at most 3n/4.

7 We say "at least" because our argument applies exactly when n is even, but underestimates the number of circled elements when n is odd.
8 A bit less than 2, because we have more than n/5 columns.

Figure 4.12: The circled elements are greater than the median of the medians.


Note that we don't actually identify all the elements that are guaranteed to be, say, less than the median of medians; we are just guaranteed that the proper number of them exists.

Since we only have the guarantee that MagicMiddle gives us an element in the middle half of the set if the set has at least sixty elements, we modify Select1 to start out by checking whether n < 60, and to sort the set to find the element in position i if n < 60. Since 60 is a constant, sorting and finding the desired element takes at most a constant amount of time.
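The complete algorithm, MagicMiddle together with the modified Select1, can be rendered in Python as follows (our translation of the pseudocode, not code from the text; the group size of 5 and the cutoff of 60 follow the discussion above):

```python
def magic_middle(A):
    """Return an element guaranteed to lie in the middle half of A."""
    if len(A) < 60:
        return sorted(A)[(len(A) - 1) // 2]
    groups = [A[j:j + 5] for j in range(0, len(A), 5)]       # ceil(n/5) groups
    medians = [sorted(g)[(len(g) - 1) // 2] for g in groups]  # median of each
    return select1(medians, (len(medians) + 1) // 2)          # median of medians

def select1(A, i):
    """Return the ith smallest of the distinct items in A in linear time."""
    if len(A) < 60:
        return sorted(A)[i - 1]       # constant time, since |A| < 60
    p = magic_middle(A)               # partition element in the middle half
    L = [x for x in A if x <= p]
    H = [x for x in A if x > p]
    if i <= len(L):
        return select1(L, i)
    return select1(H, i - len(L))

print(select1(list(range(200, 0, -1)), 100))  # 100, the median of 1..200
```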

An analysis of the revised selection algorithm
Exercise 4.6-3 Let T(n) be the running time of the modified Select1 on n items. How can you express the running time of MagicMiddle in terms of T(n)?

Exercise 4.6-4 What is a recurrence for the running time of Select1? (Hint: how could Exercise 4.6-3 help you?)

Exercise 4.6-5 Can you prove by induction that each solution to the recurrence for Select1 is O(n)?

For Exercise 4.6-3, we have the following steps.

• The first step of MagicMiddle is to divide the items into sets of five; this takes O(n) time.

• We then have to find the median of each five-element set. (We can find this median by any straightforward method we choose and still take only a constant amount of time per set; we don't use recursion here.) There are n/5 sets and we spend no more than some constant time per set, so the total time is O(n).

• Next we recursively call Select1 to find the median of medians; this takes T(n/5) time.

• Finally, we partition A into those elements less than or equal to the "magic middle" and those that are not, which takes O(n) time.


Thus the total running time is T(n/5) + O(n), which implies that for some n0 there is a constant c0 > 0 such that, for all n > n0, the running time is no more than T(n/5) + c0n. Even if n0 > 60, there are only finitely many cases between 60 and n0, so there is a constant c such that, for n ≥ 60, the running time of MagicMiddle is no more than T(n/5) + cn.

We now get a recurrence for the running time of Select1. Note that for n ≥ 60, Select1 has to call MagicMiddle and then recurse on either L or H, each of which has size at most 3n/4. For n < 60, note that it takes no more than some constant amount d of time to find the median by sorting. Therefore we get the following recurrence for the running time of Select1:

    T(n) ≤ T(3n/4) + T(n/5) + cn  if n ≥ 60
    T(n) ≤ d                      if n < 60.    (4.35)

This answers Exercise 4.6-4. As Exercise 4.6-5 requests, we can now verify by induction that T(n) = O(n). What we want to prove is that there is a constant k such that T(n) ≤ kn. What the recurrence tells us is that there are constants c and d such that T(n) ≤ T(3n/4) + T(n/5) + cn if n ≥ 60, and otherwise T(n) ≤ d. For the base case we have T(n) ≤ d ≤ dn for n < 60, so we choose k to be at least d and then T(n) ≤ kn for n < 60. We now assume that n ≥ 60 and that T(m) ≤ km for values m < n, and get

    T(n) ≤ T(3n/4) + T(n/5) + cn
         ≤ 3kn/4 + kn/5 + cn
         = (19/20)kn + cn
         = kn + (c − k/20)n.

As long as k ≥ 20c, this is at most kn; so we simply choose k this big, and by the principle of mathematical induction we have T(n) ≤ kn for all positive integers n.
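We can spot-check the induction numerically. The sketch below (ours, not the text's) iterates recurrence (4.35) with c = d = 1, inserting floors so the arguments are integers, and confirms that T(n) ≤ kn for k = 20 = 20c, in line with the proof.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Recurrence (4.35) with c = d = 1; floors make the arguments integers.
    if n < 60:
        return 1
    return T(3 * n // 4) + T(n // 5) + n

k = 20  # the proof needs k >= d and k >= 20c; here c = d = 1
assert all(T(n) <= k * n for n in range(1, 3000))
print(T(2048), "<=", k * 2048)
```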

Uneven Divisions
The kind of recurrence we found for the running time of Select1 is actually an instance of a more general class, which we will now explore.

Exercise 4.6-6 We already know that when g(n) = O(n), every solution of T(n) = T(n/2) + g(n) satisfies T(n) = O(n). Use the master theorem to find a big-O bound on the solution of T(n) = T(cn) + g(n) for any constant c < 1, assuming that g(n) = O(n).

Exercise 4.6-7 Use the master theorem to find big-O bounds for all solutions of T(n) = 2T(cn) + g(n) for any constant c < 1/2, assuming that g(n) = O(n).

Exercise 4.6-8 Suppose g(n) = O(n) and you have a recurrence of the form T(n) = T(an) + T(bn) + g(n) for some constants a and b. What conditions on a and b guarantee that all solutions to this recurrence have T(n) = O(n)?

Using the master theorem for Exercise 4.6-6, we get T(n) = O(n), since log_{1/c} 1 = 0 < 1. We also get T(n) = O(n) for Exercise 4.6-7, since log_{1/c} 2 < 1 for c < 1/2. You might now guess that as


long as a + b < 1, any solution to the recurrence T(n) ≤ T(an) + T(bn) + cn has T(n) = O(n). We will now see why this is the case.

First, let's return to the recurrence we had, T(n) = T(3n/4) + T(n/5) + g(n), where g(n) = O(n), and let's try to draw a recursion tree. This recurrence doesn't quite fit our model for recursion trees, as the two subproblems have unequal size (thus we can't even write down the problem size on the left), but we will try to draw a recursion tree anyway and see what happens.

Figure 4.13: Attempting a recursion tree for T(n) = T(3n/4) + T(n/5) + g(n).

As we draw
levels one and two, we see that at level one, we have (3/4 + 1/5)n work. At level two we have ((3/4)^2 + 2(3/4)(1/5) + (1/5)^2)n work. Were we to work out the third level, we would see that we have ((3/4)^3 + 3(3/4)^2(1/5) + 3(3/4)(1/5)^2 + (1/5)^3)n. Thus we can see a pattern emerging. At level one we have (3/4 + 1/5)n work. At level 2 we have, by the binomial theorem, (3/4 + 1/5)^2 n work. At level 3 we have, by the binomial theorem, (3/4 + 1/5)^3 n work. And, similarly, at level i of the tree, the amount of work is

    (3/4 + 1/5)^i n = (19/20)^i n.

Thus, summing over all the levels, the total amount of work is at most

    Σ_{i=0}^{O(log n)} (19/20)^i n ≤ (1/(1 − 19/20)) n = 20n.

We have actually ignored one detail here. In contrast to a recursion tree in which all subproblems at a level have equal size, the "bottom" of this tree is more complicated. Different branches of the tree will reach problems of size 1 and terminate at different levels. For example, the branch that follows all 3/4's will bottom out after log_{4/3} n levels, while the one that follows all 1/5's will bottom out after log_5 n levels. However, the analysis above overestimates the work; that is, it assumes that nothing bottoms out until everything bottoms out, i.e. at log_{20/19} n levels. In fact, the upper bound we gave on the sum "assumes" that the recurrence never bottoms out.

We see here something general happening. It seems as if, to understand a recurrence of the form T(n) = T(an) + T(bn) + g(n), with g(n) = O(n), we can study the simpler recurrence T(n) = T((a + b)n) + g(n) instead. This simplifies things (in particular, it lets us use the Master Theorem) and allows us to analyze a larger class of recurrences. Turning to the median algorithm, it tells us that the important thing that happened there was that the sizes of the two recursive calls, namely 3n/4 and n/5, summed to less than n. As long as that is the case for an


algorithm with two recursive calls and an O(n) additional work term, whose recurrence has the form T (n) = T (an) + T (bn) + g(n), with g(n) = O(n), the algorithm will work in O(n) time.
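A quick numerical experiment (ours, not the text's) supports this: iterating T(n) = T(⌊an⌋) + T(⌊bn⌋) + n with a = 3/4 and b = 1/5 shows T(n)/n staying bounded, near the 1/(1 − (a + b)) = 20 predicted by the recursion-tree sum.

```python
from functools import lru_cache

def growth_ratio(a, b, N):
    """Return T(N)/N for T(n) = T(floor(a*n)) + T(floor(b*n)) + n,
    with T(n) = 1 for n <= 1."""
    @lru_cache(maxsize=None)
    def T(n):
        if n <= 1:
            return 1
        return T(int(a * n)) + T(int(b * n)) + n
    return T(N) / N

for N in (1000, 10000, 100000):
    print(N, round(growth_ratio(3 / 4, 1 / 5, N), 2))  # ratio stays below 20
```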

Important Concepts, Formulas, and Theorems
1. Median. The median of a set (with an underlying order) of n elements is the element that would be in position ⌈n/2⌉ if the set were sorted into a list in order.

2. Percentile. The pth percentile of a set (with an underlying order) is the element that would be in position ⌈(p/100)n⌉ if the set were sorted into a list in order.

3. Selection. Given an n-element set with some underlying order, the problem of selection of the ith smallest element is that of finding the element that would be in the ith position if the set were sorted into a list in order. Note that often i is expressed as a fraction of n.

4. Partition Element. A partition element in an algorithm is an element of a set (with an underlying order) which is used to divide the set into two parts: those that come before or are equal to the element (in the underlying order), and the remaining elements. Notice that the set as given to the algorithm is not necessarily (in fact not usually) given in the underlying order.

5. Linear Time Algorithms. If the running time of an algorithm satisfies a recurrence of the form T(n) ≤ T(an) + cn with 0 ≤ a < 1, or a recurrence of the form T(n) ≤ T(an) + T(bn) + cn with a and b nonnegative and a + b < 1, then T(n) = O(n).

6. Finding a Good Partition Element. If a set (with an underlying order) has sixty or more elements, then the procedure of breaking the set into pieces of size 5 (plus one leftover piece if necessary), finding the median of each piece, and then finding the median of the medians gives an element guaranteed to be in the middle half of the set.

7. Selection Algorithm. The selection algorithm that runs in linear time sorts a set of size less than sixty to find the element in the ith position; otherwise
• it recursively uses the median of medians of five to find a partition element,
• it uses that partition element to divide the set into two pieces, and
• it then looks for the appropriate element in the appropriate piece recursively.

Problems
1. In the MagicMiddle algorithm, suppose we broke our data up into ⌈n/3⌉ sets of size 3. What would the running time of Select1 be?

2. In the MagicMiddle algorithm, suppose we broke our data up into ⌈n/7⌉ sets of size 7. What would the running time of Select1 be?

3. Let

       T(n) = T(n/3) + T(n/2) + n  if n ≥ 6
       T(n) = 1                    otherwise,

and let

       S(n) = S(5n/6) + n  if n ≥ 6
       S(n) = 1            otherwise.

Draw recursion trees for T and S. What are the big-O bounds we get on solutions to the recurrences? Use the recursion trees to argue that, for all n, T(n) ≤ S(n).

4. Find a (big-O) upper bound (the best you know how to get) on solutions to the recurrence T(n) = T(n/3) + T(n/6) + T(n/4) + n.

5. Find a (big-O) upper bound (the best you know how to get) on solutions to the recurrence T(n) = T(n/4) + T(n/2) + n^2.

6. Note that we have chosen the median of an n-element set to be the element in position ⌈n/2⌉. We have also chosen to put the median of the medians into the set L of algorithm Select1. Show that this lets us prove that T(n) ≤ T(3n/4) + T(n/5) + cn for n ≥ 40 rather than n ≥ 60. (You will need to analyze the case where ⌈n/5⌉ is even and the case where it is odd separately.) Is 40 the least value possible?


Chapter 5

Probability
5.1 Introduction to Probability

Why do we study probability?
You have likely studied hashing as a way to store data (or keys to find data) in a way that makes it possible to access that data quickly. Recall that we have a table in which we want to store keys, and we compute a function h of our key to tell us which location (also known as a "slot" or a "bucket") in the table to use for the key. Such a function is chosen with the hope that it will tell us to put different keys in different places, but with the realization that it might not. If the function tells us to put two keys in the same place, we might put them into a linked list that starts at the appropriate place in the table, or we might have some strategy for putting them into some other place in the table itself.

If we have a table with a hundred places and fifty keys to put in those places, there is no reason in advance why all fifty of those keys couldn't be assigned (hashed) to the same place in the table. However, someone who is experienced with using hash functions and looking at the results will tell you you'd never see this in a million years. On the other hand, that same person would also tell you that you'd never see all the keys hash into different locations in a million years either. In fact, it is far less likely that all fifty keys would hash into one place than that all fifty keys would hash into different places, but both events are quite unlikely. Being able to understand just how likely or unlikely such events are is our reason for taking up the study of probability.

In order to assign probabilities to events, we need to have a clear picture of what these events are. Thus we present a model of the kinds of situations in which it is reasonable to assign probabilities, and then recast our questions about probabilities into questions about this model. We use the phrase sample space to refer to the set of possible outcomes of a process. For now, we will deal with processes that have finite sample spaces.
The process might be a game of cards, a sequence of hashes into a hash table, a sequence of tests on a number to see if it fails to be a prime, a roll of a die, a series of coin flips, a laboratory experiment, a survey, or any of many other possibilities. A set of elements in a sample space is called an event. For example, if a professor starts each class with a three-question true-false quiz, the sample space of all possible patterns of correct answers is

    {TTT, TTF, TFT, FTT, TFF, FTF, FFT, FFF}.


The event of the first two answers being true is {TTT, TTF}. In order to compute probabilities, we assign a probability weight p(x) to each element of the sample space so that the weight represents what we believe to be the relative likelihood of that outcome. There are two rules we must follow in assigning weights. First, the weights must be nonnegative numbers, and second, the sum of the weights of all the elements in a sample space must be one. We define the probability P(E) of the event E to be the sum of the weights of the elements of E. Algebraically we can write

    P(E) = Σ_{x: x∈E} p(x).    (5.1)

We read this as "P(E) equals the sum, over all x such that x is in E, of p(x)." Notice that a probability function P on a sample space S satisfies the rules1

1. P(A) ≥ 0 for any A ⊆ S.
2. P(S) = 1.
3. P(A ∪ B) = P(A) + P(B) for any two disjoint events A and B.

The first two rules reflect our rules for assigning weights above. We say that two events A and B are disjoint if A ∩ B = ∅. The third rule follows directly from the definition of disjoint and our definition of the probability of an event. A function P satisfying these rules is called a probability distribution or a probability measure.

In the case of the professor's three-question quiz, it is natural to expect each sequence of trues and falses to be equally likely. (A professor who showed any pattern of preferences would end up rewarding a student who observed this pattern and used it in educated guessing.) Thus it is natural to assign equal weight 1/8 to each of the eight elements of our quiz sample space. Then the probability of an event E, which we denote by P(E), is the sum of the weights of its elements. Thus the probability of the event "the first answer is T" is 1/8 + 1/8 + 1/8 + 1/8 = 1/2. The event "there is exactly one True" is {TFF, FTF, FFT}, so P(there is exactly one True) is 3/8.
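These computations are easy to mirror in a short Python sketch (ours, not the text's): enumerate the eight-outcome sample space, give each outcome weight 1/8, and sum weights over an event.

```python
from itertools import product

# The 8 equally likely answer patterns for a three-question quiz.
sample_space = list(product("TF", repeat=3))
p = {outcome: 1 / 8 for outcome in sample_space}

def P(event):
    """Probability of an event: the sum of its outcomes' weights."""
    return sum(p[x] for x in event)

first_answer_T = [x for x in sample_space if x[0] == "T"]
exactly_one_T = [x for x in sample_space if x.count("T") == 1]
print(P(first_answer_T))  # 0.5
print(P(exactly_one_T))   # 0.375, i.e. 3/8
```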

Some examples of probability computations
Exercise 5.1-1 Try flipping a coin five times. Did you get at least one head? Repeat five coin flips a few more times! What is the probability of getting at least one head in five flips of a coin? What is the probability of no heads?

Exercise 5.1-2 Find a good sample space for rolling two dice. What weights are appropriate for the members of your sample space? What is the probability of getting a total of 6 or 7 on the two dice? Assume the dice are of different colors. What is the probability of getting less than 3 on the red one and more than 3 on the green one?

Exercise 5.1-3 Suppose you hash a list of n keys into a hash table with 20 locations. What is an appropriate sample space, and what is an appropriate weight function? (Assume the keys and the hash function are not in any special relationship to the
1 These rules are often called "the axioms of probability." For a finite sample space, we could show that if we started with these axioms, our definition of probability in terms of the weights of individual elements of S is the only definition possible. That is, for any other definition, the probabilities we would compute would still be the same if we take p(x) = P({x}).
number 20.) If n is three, what is the probability that all three keys hash to different locations? If you hash ten keys into the table, what is the probability that at least two keys have hashed to the same location? We say two keys collide if they hash to the same location. How big does n have to be to ensure that the probability is at least one half that there has been at least one collision?


In Exercise 5.1-1 a good sample space is the set of all 5-tuples of Hs and Ts. There are 32 elements in the sample space, and no element has any reason to be more likely than any other, so a natural weight to use is 1/32 for each element of the sample space. Then the event of at least one head is the set of all elements but TTTTT. Since there are 31 elements in this set, its probability is 31/32. This suggests that you should have observed at least one head pretty often!

Complementary probabilities
The probability of no heads is the weight of the set {TTTTT}, which is 1/32. Notice that the probabilities of the event of "no heads" and the opposite event of "at least one head" add to one. This observation suggests a theorem. The complement of an event E in a sample space S, denoted by S − E, is the set of all outcomes in S but not in E. The theorem tells us how to compute the probability of the complement of an event from the probability of the event.

Theorem 5.1 If two events E and F are complementary, that is, they have nothing in common (E ∩ F = ∅) and their union is the whole sample space (E ∪ F = S), then P(E) = 1 − P(F).

Proof: The sum of the probabilities of all the elements of the sample space is one, and since we can break this sum into the sum of the probabilities of the elements of E plus the sum of the probabilities of the elements of F, we have P(E) + P(F) = 1, which gives us P(E) = 1 − P(F).

For Exercise 5.1-2 a good sample space would be pairs of numbers (a, b) with 1 ≤ a, b ≤ 6. By the product principle,2 the size of this sample space is 6 · 6 = 36. Thus a natural weight for each ordered pair is 1/36. How do we compute the probability of getting a sum of six or seven? There are 5 ways to roll a six and 6 ways to roll a seven, so our event has eleven elements, each of weight 1/36. Thus the probability of our event is 11/36.

For the question about the red and green dice, there are two ways for the red one to turn up less than 3, and three ways for the green one to turn up more than 3. Thus, the event of getting less than 3 on the red one and greater than 3 on the green one is a set of size 2 · 3 = 6 by the product principle. Since each element of the event has weight 1/36, the event has probability 6/36, or 1/6.
²From Section 1.1.


CHAPTER 5. PROBABILITY

Probability and hashing
In Exercise 5.1-3 an appropriate sample space is the set of n-tuples of numbers between 1 and 20. The first entry in an n-tuple is the position our first key hashes to, the second entry is the position our second key hashes to, and so on. Thus each n-tuple represents a possible hash function, and each hash function, applied to our keys, would give us one n-tuple. The size of the sample space is 20^n (why?), so an appropriate weight for an n-tuple is 1/20^n. To compute the probability of a collision, we will first compute the probability that all keys hash to different locations and then apply Theorem 5.1, which tells us to subtract this probability from 1 to get the probability of a collision.

To compute the probability that all keys hash to different locations we consider the event that all keys hash to different locations. This is the set of n-tuples in which all the entries are different. (In the terminology of functions, these n-tuples correspond to one-to-one hash functions.) There are 20 choices for the first entry of an n-tuple in our event. Since the second entry has to be different, there are 19 choices for the second entry of this n-tuple. Similarly there are 18 choices for the third entry (it has to be different from the first two), 17 for the fourth, and in general 20 − i + 1 possibilities for the ith entry of the n-tuple. Thus we have 20 · 19 · 18 ··· (20 − n + 1) = 20^{\underline{n}} elements in our event.³ Since each element of this event has weight 1/20^n, the probability that all the keys hash to different locations is

(20 · 19 · 18 ··· (20 − n + 1)) / 20^n = 20^{\underline{n}} / 20^n.

In particular, if n is 3 the probability is (20 · 19 · 18)/20^3 = .855. We show the values of this function for n between 1 and 20 in Table 5.1. Note how quickly the probability of getting a collision grows. As you can see, with n = 10 the probability that there have been no collisions is about .065, so the probability of at least one collision is .935.
If n = 5 this number is about .58, and if n = 6 this number is about .43. By Theorem 5.1 the probability of a collision is one minus the probability that all the keys hash to different locations. Thus if we hash six items into our table, the probability of a collision is more than 1/2. Our first intuition might well have been that we would need to hash ten items into our table to have probability 1/2 of a collision. This example shows the importance of supplementing intuition with careful computation! The technique of computing the probability of an event of interest by first computing the probability of its complementary event and then subtracting from 1 is very useful. You will see many opportunities to use it, perhaps because about half the time it is easier to compute directly the probability that an event doesn’t occur than the probability that it does. We stated Theorem 5.1 as a theorem to emphasize the importance of this technique.
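The entries of Table 5.1 can be reproduced by multiplying the per-key empty-slot probabilities (20 − i)/20. A short Python sketch (the function name is ours, not from the text):

```python
def p_no_collision(n, slots=20):
    """Probability that n keys hash to all-different slots:
    (20 * 19 * ... * (20 - n + 1)) / 20**n, computed as a running product."""
    p = 1.0
    for i in range(n):
        p *= (slots - i) / slots
    return p

# Reproduce a couple of entries of Table 5.1.
assert abs(p_no_collision(3) - 0.855) < 1e-8
assert abs(p_no_collision(10) - 0.065472908) < 1e-8

# Smallest n for which a collision is more likely than not.
n = 1
while 1 - p_no_collision(n) <= 0.5:
    n += 1
print(n)  # 6, as computed in the text
```

This confirms the perhaps surprising conclusion above: hashing just six keys into twenty slots already makes a collision more likely than not.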

³Using the notation for falling factorial powers that we introduced in Section 1.2.

Table 5.1: The probabilities that all elements of a set hash to different entries of a hash table of size 20.

 n    Prob of empty slot   Prob of no collisions
 1    1                    1
 2    0.95                 0.95
 3    0.9                  0.855
 4    0.85                 0.72675
 5    0.8                  0.5814
 6    0.75                 0.43605
 7    0.7                  0.305235
 8    0.65                 0.19840275
 9    0.6                  0.11904165
10    0.55                 0.065472908
11    0.5                  0.032736454
12    0.45                 0.014731404
13    0.4                  0.005892562
14    0.35                 0.002062397
15    0.3                  0.000618719
16    0.25                 0.00015468
17    0.2                  3.09359E-05
18    0.15                 4.64039E-06
19    0.1                  4.64039E-07
20    0.05                 2.3202E-08

The Uniform Probability Distribution

In all three of our exercises it was appropriate to assign the same weight to all members of our sample space. We say P is the uniform probability measure or uniform probability distribution when we assign the same probability to all members of our sample space. The computations in the exercises suggest another useful theorem.

Theorem 5.2 Suppose P is the uniform probability measure defined on a sample space S. Then for any event E, P(E) = |E|/|S|, the size of E divided by the size of S.

Proof: Let S = {x1, x2, ..., x|S|}. Since P is the uniform probability measure, there must be some value p such that for each xi ∈ S, P(xi) = p. Combining this fact with the second and third probability rules, we obtain

1 = P(S) = P(x1 ∪ x2 ∪ ··· ∪ x|S|) = P(x1) + P(x2) + ··· + P(x|S|) = p|S|.

Equivalently,

p = 1/|S|.    (5.2)

E is a subset of S with |E| elements, and therefore

P(E) = Σ_{xi ∈ E} P(xi) = |E|p.    (5.3)

Combining equations 5.2 and 5.3 gives that

P(E) = |E|p = |E|(1/|S|) = |E|/|S|.

Exercise 5.1-4 What is the probability of an odd number of heads in three tosses of a coin? Use Theorem 5.2.

Using a sample space similar to that of the first example (with "T" and "F" replaced by "H" and "T"), we see there are three sequences with one H and there is one sequence with three H's. Thus we have four sequences in the event of "an odd number of heads come up." There are eight sequences in the sample space, so the probability is 4/8 = 1/2.

It is comforting that we got one half because of a symmetry inherent in this problem. In flipping coins, heads and tails are equally likely. Further, if we are flipping 3 coins, an odd number of heads implies an even number of tails. Therefore, the probability of an odd number of heads, an even number of heads, an odd number of tails, and an even number of tails must all be the same. Applying Theorem 5.1, we see that the probability must be 1/2.

A word of caution is appropriate here. Theorem 5.2 applies only to probabilities that come from the equiprobable weighting function. The next example shows that it does not apply in general.

Exercise 5.1-5 A sample space consists of the numbers 0, 1, 2, and 3. We assign weight 1/8 to 0, 3/8 to 1, 3/8 to 2, and 1/8 to 3. What is the probability that an element of the sample space is positive? Show that this is not the result we would obtain by using the formula of Theorem 5.2.

The event "x is positive" is the set E = {1, 2, 3}. The probability of E is

P(E) = P(1) + P(2) + P(3) = 3/8 + 3/8 + 1/8 = 7/8.

However, |E|/|S| = 3/4.

The previous exercise may seem to be "cooked up" in an unusual way just to prove a point. In fact that sample space and that probability measure could easily arise in studying something as simple as coin flipping.

Exercise 5.1-6 Use the set {0, 1, 2, 3} as a sample space for the process of flipping a coin three times and counting the number of heads. Determine the appropriate probability weights P(0), P(1), P(2), and P(3).

There is one way to get the outcome 0, namely tails on each flip. There are, however, three ways to get 1 head and three ways to get two heads. Thus P(1) and P(2) should each be three times P(0). There is one way to get the outcome 3, heads on each flip. Thus P(3) should equal P(0). In equations this gives P(1) = 3P(0), P(2) = 3P(0), and P(3) = P(0). We also have the equation saying all the weights add to one, P(0) + P(1) + P(2) + P(3) = 1. There is one and only one solution to these equations, namely P(0) = 1/8, P(1) = 3/8, P(2) = 3/8, and P(3) = 1/8. Do you notice a relationship between P(x) and the binomial coefficient (3 choose x) here? Can you predict the probabilities of 0, 1, 2, 3, and 4 heads in four flips of a coin? Together, the last two exercises demonstrate that we must be careful not to apply Theorem 5.2 unless we are using the uniform probability measure.
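Both the uniform and the non-uniform computations above can be checked mechanically. A Python sketch (our illustration, not the book's):

```python
from fractions import Fraction
from itertools import product

# Uniform case (Exercise 5.1-4): three coin flips, event "odd number of heads".
space = list(product("HT", repeat=3))
event = [t for t in space if t.count("H") % 2 == 1]
# Theorem 5.2 applies because all 8 outcomes have equal weight.
assert Fraction(len(event), len(space)) == Fraction(1, 2)

# Non-uniform case (Exercise 5.1-5): weights 1/8, 3/8, 3/8, 1/8 on 0, 1, 2, 3.
weights = {0: Fraction(1, 8), 1: Fraction(3, 8), 2: Fraction(3, 8), 3: Fraction(1, 8)}
p_positive = sum(weights[x] for x in weights if x > 0)
assert p_positive == Fraction(7, 8)    # the correct probability
assert p_positive != Fraction(3, 4)    # |E|/|S| = 3/4 would be wrong here
```

The last two assertions are exactly the cautionary point of Exercise 5.1-5: the formula P(E) = |E|/|S| fails once the weights are unequal.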


Important Concepts, Formulas, and Theorems
1. Sample Space. We use the phrase sample space to refer to the set of possible outcomes of a process.

2. Event. A set of elements in a sample space is called an event.

3. Probability. In order to compute probabilities we assign a weight to each element of the sample space so that the weight represents what we believe to be the relative likelihood of that outcome. There are two rules we must follow in assigning weights. First, the weights must be nonnegative numbers, and second, the sum of the weights of all the elements in a sample space must be one. We define the probability P(E) of the event E to be the sum of the weights of the elements of E.

4. The Axioms of Probability. Three rules that a probability measure on a finite sample space must satisfy could actually be used to define what we mean by probability.
   (a) P(A) ≥ 0 for any A ⊆ S.
   (b) P(S) = 1.
   (c) P(A ∪ B) = P(A) + P(B) for any two disjoint events A and B.

5. Probability Distribution. A function which assigns a probability to each member of a sample space is called a (discrete) probability distribution.

6. Complement. The complement of an event E in a sample space S, denoted by S − E, is the set of all outcomes in S but not in E.

7. The Probabilities of Complementary Events. If two events E and F are complementary, that is, they have nothing in common (E ∩ F = ∅), and their union is the whole sample space (E ∪ F = S), then P(E) = 1 − P(F).

8. Collision, Collide (in Hashing). We say two keys collide if they hash to the same location.

9. Uniform Probability Distribution. We say P is the uniform probability measure or uniform probability distribution when we assign the same probability to all members of our sample space.

10. Computing Probabilities with the Uniform Distribution. Suppose P is the uniform probability measure defined on a sample space S. Then for any event E, P(E) = |E|/|S|, the size of E divided by the size of S. This does not apply to general probability distributions.


Problems
1. What is the probability of exactly three heads when you flip a coin five times? What is the probability of three or more heads when you flip a coin five times?

2. When we roll two dice, what is the probability of getting a sum of 4 or less on the tops?

3. If we hash 3 keys into a hash table with ten slots, what is the probability that all three keys hash to different slots? How big does n have to be so that if we hash n keys to a hash table with 10 slots, the probability is at least a half that some slot has at least two keys hash to it? How many keys do we need to have probability at least two thirds that some slot has at least two keys hash to it?

4. What is the probability of an odd sum when we roll three dice?

5. Suppose we use the numbers 2 through 12 as our sample space for rolling two dice and adding the numbers on top. What would we get for the probability of a sum of 2, 3, or 4 if we used the equiprobable measure on this sample space? Would this make sense?

6. Two pennies, a nickel and a dime are placed in a cup, and a first coin and a second coin are drawn.
   (a) Assuming we are sampling without replacement (that is, we don't replace the first coin before taking the second), write down the sample space of all ordered pairs of letters P, N, and D that represent the outcomes. What would you say are the appropriate weights for the elements of the sample space?
   (b) What is the probability of getting eleven cents?

7. Why is the probability of five heads in ten flips of a coin equal to 63/256?

8. Using 5-element sets as a sample space, determine the probability that a "hand" of 5 cards chosen from an ordinary deck of 52 cards will consist of cards of the same suit.

9. Using 5-element permutations as a sample space, determine the probability that a "hand" of 5 cards chosen from an ordinary deck of 52 cards will have all the cards from the same suit.

10. How many five-card hands chosen from a standard deck of playing cards consist of five cards in a row (such as the nine of diamonds, the ten of clubs, jack of clubs, queen of hearts, and king of spades)? Such a hand is called a straight. What is the probability that a five-card hand is a straight? Explore whether you get the same answer by using five-element sets as your model of hands or five-element permutations as your model of hands.

11. A student taking a ten-question, true-false diagnostic test knows none of the answers and must guess at each one. Compute the probability that the student gets a score of 80 or higher. What is the probability that the grade is 70 or lower?

12. A die is made of a cube with a square painted on one side, a circle on two sides, and a triangle on three sides. If the die is rolled twice, what is the probability that the two shapes we see on top are the same?


13. Are the following two events equally likely? Event 1 consists of drawing an ace and a king when you draw two cards from among the thirteen spades in a deck of cards, and event 2 consists of drawing an ace and a king when you draw two cards from the whole deck.

14. There is a retired professor who used to love to go into a probability class of thirty or more students and announce, "I will give even money odds that there are two people in this classroom with the same birthday." With thirty students in the room, what is the probability that all have different birthdays? What is the minimum number of students that must be in the room so that the professor has at least probability one half of winning the bet? What is the probability that he wins his bet if there are 50 students in the room? Does this probability make sense to you? (There is no wrong answer to that question!) Explain why or why not.


5.2 Unions and Intersections

The probability of a union of events
Exercise 5.2-1 If you roll two dice, what is the probability of an even sum or a sum of 8 or more?

Exercise 5.2-2 In Exercise 5.2-1, let E be the event "even sum" and let F be the event "8 or more." We found the probability of the union of the events E and F. Why isn't it the case that P(E ∪ F) = P(E) + P(F)? What weights appear twice in the sum P(E) + P(F)? Find a formula for P(E ∪ F) in terms of the probabilities of E, F, and E ∩ F. Apply this formula to Exercise 5.2-1. What is the value of expressing one probability in terms of three?

Exercise 5.2-3 What is P(E ∪ F ∪ G) in terms of probabilities of the events E, F, and G and their intersections?

In the sum P(E) + P(F) the weights of elements of E ∩ F each appear twice, while the weights of all other elements of E ∪ F each appear once. We can see this by looking at a diagram called a Venn diagram, as in Figure 5.1. In a Venn diagram, the rectangle represents the sample space, and the circles represent the events.

Figure 5.1: A Venn diagram for two events.

If we were to shade both E and F, we would wind up shading the region E ∩ F twice. In Figure 5.2, we represent that by putting numbers in the regions, representing how many times they are shaded. This illustrates why the sum P(E) + P(F) includes the probability weight of each element of E ∩ F twice. Thus to get a sum that includes the probability weight of each element of E ∪ F exactly once, we have to subtract the weight of E ∩ F from the sum P(E) + P(F). This is why

P(E ∪ F) = P(E) + P(F) − P(E ∩ F).    (5.4)

We can now apply this to Exercise 5.2-1 by noting that the probability of an even sum is 1/2, while the probability of a sum of 8 or more is

5/36 + 4/36 + 3/36 + 2/36 + 1/36 = 15/36.

Figure 5.2: If we shade each of E and F once, then we shade E ∩ F twice.

From a similar sum, the probability of an even sum of 8 or more is 9/36, so the probability of a sum that is even or is 8 or more is

1/2 + 15/36 − 9/36 = 2/3.

(In this case our computation merely illustrates the formula; with less work one could add the probability of an even sum to the probability of a sum of 9 or 11.) In many cases, however, probabilities of individual events and their intersections are more straightforward to compute than probabilities of unions (we will see such examples later in this section), and in such cases our formula is quite useful.

Now let's consider the case for three events and draw a Venn diagram and fill in the numbers for shading all of E, F, and G. So as not to crowd the figure, we use EF to label the region corresponding to E ∩ F, and similarly label the other regions. Doing so we get Figure 5.3.

Figure 5.3: The number of ways the intersections are shaded when we shade E, F, and G.

Thus we have to figure out a way to subtract from P(E) + P(F) + P(G) the weights of elements in the regions labeled EF, FG, and EG once, and the weight of elements in the region labeled EFG twice. If we subtract out the weights of elements of each of E ∩ F, F ∩ G, and E ∩ G, this does more than we wanted to do, as we subtract the weights of elements in EF, FG, and EG once


but the weights of elements of EFG three times, leaving us with Figure 5.4.

Figure 5.4: The result of removing the weights of each intersection of two sets.

We then see that all that is left to do is to add the weights of elements in E ∩ F ∩ G back into our sum. Thus we have that

P(E ∪ F ∪ G) = P(E) + P(F) + P(G) − P(E ∩ F) − P(E ∩ G) − P(F ∩ G) + P(E ∩ F ∩ G).
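These two formulas are easy to verify numerically on the two-dice sample space. In the sketch below (ours, not the book's), E and F are the events from Exercise 5.2-1, and the third event G ("doubles") is our own addition chosen just to exercise the three-event formula:

```python
from fractions import Fraction
from itertools import product

space = list(product(range(1, 7), repeat=2))
w = Fraction(1, len(space))
P = lambda ev: w * len(ev)  # uniform measure: probability = weight * size

E = {p for p in space if sum(p) % 2 == 0}   # even sum
F = {p for p in space if sum(p) >= 8}       # sum of 8 or more
G = {p for p in space if p[0] == p[1]}      # doubles (our extra event)

# Two events: P(E u F) = P(E) + P(F) - P(E n F), and the value from the text.
assert P(E | F) == P(E) + P(F) - P(E & F) == Fraction(2, 3)

# Three events: inclusion-exclusion with all pairwise and triple intersections.
lhs = P(E | F | G)
rhs = (P(E) + P(F) + P(G)
       - P(E & F) - P(E & G) - P(F & G)
       + P(E & F & G))
assert lhs == rhs
```

Python's set operators `|` and `&` play the roles of ∪ and ∩, which keeps the check close to the formula on the page.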

Principle of inclusion and exclusion for probability
From the last two exercises, it is natural to guess the formula

P(⋃_{i=1}^{n} Ei) = Σ_{i=1}^{n} P(Ei) − Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} P(Ei ∩ Ej) + Σ_{i=1}^{n−2} Σ_{j=i+1}^{n−1} Σ_{k=j+1}^{n} P(Ei ∩ Ej ∩ Ek) − ···    (5.5)

All the sum signs in this notation suggest that we need some new notation to describe sums. We are now going to make a (hopefully small) leap of abstraction in our notation and introduce notation capable of compactly describing the sum described in the previous paragraph. This notation is an extension of the one we introduced in Equation 5.1. We use

Σ_{i1,i2,...,ik : 1 ≤ i1 < i2 < ··· < ik ≤ n} P(Ei1 ∩ Ei2 ∩ ··· ∩ Eik)

to stand for the sum, over all increasing sequences of indices i1 < i2 < ··· < ik between 1 and n, of the probabilities of the k-fold intersections of our events.

Now suppose inductively that Euler's formula holds for planar drawings with fewer cycles, and that our graph G has more than 0 cycles. Choose an edge which is between two faces, so it is part of a cycle. Deleting that edge joins the two faces it was on together, so the new graph has f′ = f − 1 faces. The new graph has the same number of vertices and one less edge. It also has fewer cycles than G, so we have v − (e − 1) + (f − 1) = 2 by the inductive hypothesis, and this gives us v − e + f = 2.

For Exercise 6.5-8 let's define an edge-face pair to be an edge and a face such that the edge borders the face. Then we said that the number of such pairs is at least 3f in a simple graph. Since each edge is in either one or two faces, the number of edge-face pairs is also no more than 2e. This gives us

3f ≤ # of edge-face pairs ≤ 2e,

or 3f ≤ 2e, so that f ≤ (2/3)e in a planar drawing of a graph. We can combine this with Theorem 6.25 to get

2 = v − e + f ≤ v − e + (2/3)e = v − e/3,

which we can rewrite as e ≤ 3v − 6 in a planar graph.

Corollary 6.26 In a simple planar graph, e ≤ 3v − 6.

Proof: Given above.

In our discussion of Exercise 6.5-5 we said that we would see a simple proof that the circuit layout problem was impossible. Notice that the question in that exercise was really the question of whether the complete graph on 5 vertices, K5 , is planar. If it were, the inequality e ≤ 3v − 6 would give us 10 ≤ 3 · 5 − 6 = 9, which is impossible, so K5 can’t be planar. The inequality of Corollary 6.26 is not strong enough to solve Exercise 6.5-6. This exercise is really asking whether the so-called “complete bipartite graph on two parts of size 3,” denoted by K3,3 , is planar. In order to show that it isn’t, we need to refine the inequality of Corollary 6.26 to take into account the fact that in a simple bipartite graph there are no cycles of size 3, so there are no faces that are bordered by just 3 edges. You are asked to do that in Problem 13. Exercise 6.5-9 Prove or give a counter-example: Every planar graph has at least one vertex of degree 5 or less. Exercise 6.5-10 Prove that every planar graph has a proper coloring with six colors. In Exercise 6.5-9 suppose that G is a planar graph in which each vertex has degree six or more. Then the sum of the degrees of the vertices is at least 6v, and also is twice the number of edges. Thus 2e ≥ 6v, or e ≥ 3v, contrary to e ≤ 3v − 6. This gives us yet another corollary to Euler’s formula. Corollary 6.27 Every planar graph has a vertex of degree 5 or less. Proof: Given above.
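The edge counts in these non-planarity arguments are simple enough to script. This sketch (ours, not from the text) checks K5 against Corollary 6.26 and K3,3 against the triangle-free bound e ≤ 2v − 4 that Problem 14 asks for:

```python
def complete_graph_counts(n):
    # Kn has n vertices and n(n-1)/2 edges.
    return n, n * (n - 1) // 2

def complete_bipartite_counts(m, n):
    # Km,n has m+n vertices and m*n edges.
    return m + n, m * n

# K5 fails the planarity condition e <= 3v - 6 of Corollary 6.26.
v, e = complete_graph_counts(5)
assert e > 3 * v - 6          # 10 > 9, so K5 is not planar

# K3,3 satisfies e <= 3v - 6, so Corollary 6.26 alone cannot rule it out,
# but it fails the sharper triangle-free bound e <= 2v - 4.
v, e = complete_bipartite_counts(3, 3)
assert e <= 3 * v - 6         # 9 <= 12
assert e > 2 * v - 4          # 9 > 8, so K3,3 is not planar
```

Note that these inequalities are only necessary conditions for planarity; satisfying them does not prove a graph is planar.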

6.5. COLORING AND PLANARITY


The Five Color Theorem
We are now in a position to give a proof of the five color theorem, essentially Heawood’s proof, which was based on his analysis of an incorrect proof given by Kempe to the four color theorem about ten years earlier in 1879. First we observe that in Exercise 6.5-10 we can use straightforward induction to show that any planar graph on n vertices can be properly colored in six colors. As a base step, the theorem is clearly true if the graph has six or fewer vertices. So now assume n > 6 and suppose that a graph with fewer than n vertices can be properly colored with six colors. Let x be a vertex of degree 5 or less. Deleting x gives us a planar graph on n − 1 vertices, so by the inductive hypothesis it can be properly colored with six colors. However only five or fewer of those colors can appear on vertices which were originally neighbors of x, because x had degree 5 or less. Thus we can replace x in the colored graph and there is at least one color not used on its neighbors. We use such a color on x and we have a proper coloring of G. Therefore, by the principle of mathematical induction, every planar graph on n ≥ 1 vertices has a proper coloring with six colors. To prove the five color theorem, we make a similar start. However, it is possible that after deleting x and using an inductive hypothesis to say that the resulting graph has a proper coloring with 5 colors, when we want to restore x into the graph, five distinct colors are already used on its neighbors. This is where the proof will become interesting. Theorem 6.28 A planar graph G has a proper coloring with at most 5 colors. Proof: We may assume that every face except perhaps the outside face of our drawing is a triangle for two reasons. First, if we have a planar drawing with a face that is not a triangle, we can draw in additional edges going through that face until it has been divided into triangles, and the graph will remain planar. 
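The six-color induction just described is effectively an algorithm: repeatedly delete a vertex of small degree, then color the vertices back in reverse order. Here is a Python sketch of that idea (names are ours; it assumes the input graph is planar, so a vertex of degree at most 5 always exists while peeling):

```python
def six_color(adj):
    """Properly color a planar graph with colors 0..5, following the inductive
    argument: repeatedly delete a vertex of minimum degree (at most 5 in a
    planar graph, by Corollary 6.27), then color vertices in reverse order.
    adj: dict mapping each vertex to the set of its neighbors."""
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))
        order.append(v)
        for u in remaining[v]:
            remaining[u].discard(v)
        del remaining[v]
    color = {}
    for v in reversed(order):  # reinsert; at most 5 neighbors are already colored
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(6) if c not in used)
    return color

# A planar example: the octahedron (every vertex has degree 4).
octahedron = {0: {1, 2, 3, 4}, 1: {0, 2, 4, 5}, 2: {0, 1, 3, 5},
              3: {0, 2, 4, 5}, 4: {0, 1, 3, 5}, 5: {1, 2, 3, 4}}
coloring = six_color(octahedron)
assert all(coloring[u] != coloring[v] for u in octahedron for v in octahedron[u])
```

When each vertex is reinserted, at most five of its neighbors are colored, so one of the six colors is always free, exactly as in the induction.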
Second, if we can prove the theorem for graphs whose faces are all triangles, then we can obtain graphs with non-triangular faces by removing edges from graphs with triangular faces, and a proper coloring remains proper if we remove an edge from our graph. Although this appears to muddy the argument at this point, at a crucial point it makes it possible to give an argument that is clearer than it would otherwise be.

Our proof is by induction on the number of vertices of the graph. If G has five or fewer vertices then it is clearly properly colorable with five or fewer colors. Suppose G has n vertices and suppose inductively that every planar graph with fewer than n vertices is properly colorable with five colors. G has a vertex x of degree 5 or less. Let G′ be the graph obtained by deleting x from G. By the inductive hypothesis, G′ has a coloring with five or fewer colors. Fix such a coloring. Now if x has degree four or less, or if x has degree 5 but is adjacent to vertices colored with just four colors in G′, then we may replace x to get back G and we have a color available to use on x to get a proper coloring of G. Thus we may assume that x has degree 5, and that in G′ five different colors appear on the vertices that are neighbors of x in G. Color all the vertices of G other than x as in G′. Let the five vertices adjacent to x be a, b, c, d, e in clockwise order, and assume they are colored with colors 1, 2, 3, 4, and 5. Further, by our assumption that all faces are triangles, we have that {a, b}, {b, c}, {c, d}, {d, e}, and {e, a} are all edges, so that we have a pentagonal cycle surrounding x. Consider the graph G1,3 which has the same vertex set as G but has only the edges of G whose endpoints are colored 1 and 3. (Some possibilities are shown in Figure 6.34. We show only edges connecting vertices colored 1 and 3, as well as dashed lines for the edges from x to its neighbors


and the edges between successive neighbors. There may be many more vertices and edges in G.)

Figure 6.34: Some possibilities for the graph G1,3 .

The graph G1,3 will have a number of connected components. If a and c are not in the same component, then we may exchange the colors on the vertices of the component containing a without affecting the color on c. In this way we obtain a coloring of G with only four colors, 3, 2, 3, 4, 5, on the vertices a, b, c, d, e. We may then use the fifth color (in this case 1) on vertex x, and we have properly colored G with five colors. Otherwise, as in the second part of Figure 6.34, since a and c are in the same component of G1,3, there is a path from a to c consisting entirely of vertices colored 1 and 3. Now temporarily color x with a new color, say color 6. Then in G we have a cycle C of vertices colored 1, 3, and 6. This cycle has an inside and an outside. Part of the graph can be on the inside of C, and part can be on the outside. In Figure 6.35 we show two cases for how the cycle could occur, one in which vertex b is inside the cycle C and one in which it is outside C.

Figure 6.35: Possible cycles in the graph G1,3.

(Notice also that in both cases, we have more than one choice for the cycle because there are two ways in which we could use the quadrilateral at the bottom of the figure.) In G we also have the cycle with vertex sequence a, b, c, d, e which is colored with five different colors. This cycle and the cycle C can intersect only in the vertices a and c. Thus these two cycles divide the plane into four regions: the one inside both cycles, the one outside both cycles, and the two regions inside one cycle but not the other. If b is inside C, then the area inside both cycles is bounded by the cycle a{a, b}b{b, c}c{c, x}x{x, a}a. Therefore e and d are not inside the cycle


C. If one of d and e is inside C, then both are (because the edge between them cannot cross the cycle) and the boundary of the region inside both cycles is a{a, e}e{e, d}d{d, c}c{c, x}x{x, a}a. In this case b cannot be inside C. Therefore one of b and d is inside the cycle C and one is outside it. Therefore if we look at the graph G2,4 with the same vertex set as G and just the edges connecting vertices colored 2 and 4, the connected component containing b and the connected component containing d must be different, because otherwise a path of vertices colored 2 and 4 would have to cross the cycle C colored with colors 1, 3, and 6. Therefore in G we may exchange the colors 2 and 4 in the component containing d, and we now have only colors 1, 2, 3, and 5 used on vertices a, b, c, d, and e. Therefore we may use this coloring as the coloring for the vertices of G different from x, and we may change the color on x from 6 to 4, and we have a proper five-coloring of G. Therefore, by the principle of mathematical induction, every finite planar graph has a proper coloring with 5 colors.

Kempe's argument that seemed to prove the four color theorem was similar to this, though where we had five distinct colors on the neighbors of x and sought to remove one of them, he had four distinct colors on the five neighbors of x and sought to remove one of them. He had a more complicated argument involving two cycles in place of our cycle C, and he missed one of the ways in which these two cycles can interact.
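The color-exchange step used twice in this proof, swapping two colors on one connected component of the two-colored subgraph, is known as a Kempe chain argument. It can be sketched in a few lines of Python (our illustration; the names are hypothetical):

```python
def kempe_swap(color, adj, start, c1, c2):
    """Swap colors c1 and c2 on the connected component of `start` in the
    subgraph induced by vertices colored c1 or c2 (a Kempe chain).
    `color` maps vertices to colors; `adj` maps vertices to neighbor sets."""
    if color[start] not in (c1, c2):
        return
    component, stack = {start}, [start]
    while stack:  # depth-first search restricted to vertices colored c1 or c2
        v = stack.pop()
        for u in adj[v]:
            if u not in component and color[u] in (c1, c2):
                component.add(u)
                stack.append(u)
    for v in component:
        color[v] = c2 if color[v] == c1 else c1

# A path a-b-c colored 1, 2, 1: vertex b blocks the {1,3}-chain, so swapping
# colors 1 and 3 starting at a recolors only a, and c keeps its color.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
color = {"a": 1, "b": 2, "c": 1}
kempe_swap(color, adj, "a", 1, 3)
assert color == {"a": 3, "b": 2, "c": 1}
```

Swapping the two colors within a single component always preserves a proper coloring, which is exactly why the exchange step in the proof is safe.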

Important Concepts, Formulas, and Theorems
1. Graph Coloring. An assignment of labels to the vertices of a graph, that is, a function from the vertices to some set, is called a coloring of the graph. The set of possible labels (the range of the coloring function) is often referred to as a set of colors.

2. Proper Coloring. A coloring of a graph is called a proper coloring if it assigns different colors to adjacent vertices.

3. Intersection Graph. We call a graph an intersection graph if its vertices correspond to sets and it has an edge between two vertices if and only if the corresponding sets intersect.

4. Chromatic Number. The chromatic number of a graph G, traditionally denoted χ(G), is the minimum number of colors needed to properly color G.

5. Complete Subgraphs and Chromatic Numbers. If a graph G contains a subgraph that is a complete graph on n vertices, then the chromatic number of G is at least n.

6. Interval Graph. An intersection graph of a set of intervals of real numbers is called an interval graph. The assignment of intervals to the vertices is called an interval representation.

7. Chromatic Number of an Interval Graph. In an interval graph G, the chromatic number is the size of the largest complete subgraph.

8. Algorithm to Compute the Chromatic Number and a Proper Coloring of an Interval Graph. An interval graph G may be properly colored using χ(G) consecutive integers as colors by listing the intervals of a representation in order of their left endpoints and going through the list, assigning the smallest color not used on an earlier adjacent interval to each interval in the list.

9. Planar Graph and Planar Drawing. A graph is called planar if it has a drawing in the plane such that edges do not meet except at their endpoints. Such a drawing is called a planar drawing of the graph.


10. Face of a Planar Drawing. A geometrically connected subset of the plane with the vertices and edges of a planar graph taken away is called a face of the drawing if it is not a proper subset of any other connected subset of the plane with the drawing removed.

11. Cut Edge. An edge whose removal from a graph increases the number of connected components is called a cut edge of the graph. A cut edge of a planar graph lies on only one face of a planar drawing.

12. Euler's Formula. Euler's formula states that in a planar drawing of a graph with v vertices, e edges, and f faces, v − e + f = 2. As a consequence, in a simple planar graph, e ≤ 3v − 6.
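The greedy sweep of item 8 can be sketched directly. A hypothetical Python version (names are ours), written quadratically for clarity rather than efficiency:

```python
def color_intervals(intervals):
    """Properly color an interval graph with chi(G) colors by sweeping the
    intervals in order of left endpoint and giving each one the smallest
    color not used on an earlier interval that overlaps it (item 8 above).
    intervals: list of (left, right) pairs; returns a list of colors 1, 2, ..."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    colors = [0] * len(intervals)
    for i in order:
        li, ri = intervals[i]
        # Colors already used on intervals that overlap interval i.
        used = {colors[j] for j in order
                if colors[j] and j != i
                and intervals[j][0] <= ri and li <= intervals[j][1]}
        c = 1
        while c in used:
            c += 1
        colors[i] = c
    return colors

# Three mutually overlapping intervals need 3 colors; the fourth interval is
# disjoint from the first two, so color 1 can be reused on it.
cols = color_intervals([(0, 4), (1, 5), (3, 8), (6, 9)])
assert cols == [1, 2, 3, 1]
```

The number of colors used equals the largest number of intervals sharing a common point, which is the largest complete subgraph, so the coloring is optimal as item 7 states.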

Problems
1. What is the minimum number of colors needed to properly color a path on n vertices if n > 1?

2. What is the minimum number of colors needed to properly color a bipartite graph with parts X and Y?

3. If a graph has chromatic number two, is it bipartite? Why or why not?

4. Prove that the chromatic number of a graph G is the maximum of the chromatic numbers of its components.

5. A wheel on n vertices consists of a cycle on n − 1 vertices together with one more vertex, normally drawn inside the cycle, which is connected to every vertex of the cycle. What is the chromatic number of a wheel on 5 vertices? What is the chromatic number of a wheel on an odd number of vertices?

6. A wheel on n vertices consists of a cycle on n − 1 vertices together with one more vertex, normally drawn inside the cycle, which is connected to every vertex of the cycle. What is the chromatic number of a wheel on 6 vertices? What is the chromatic number of a wheel on an even number of vertices?

7. The usual symbol for the maximum degree of any vertex in a graph is ∆. Show that the chromatic number of a graph is no more than ∆ + 1. (In fact Brooks proved that if G is not complete or an odd cycle, then χ(G) ≤ ∆. Though there are now many proofs of this fact, none are easy!)

8. Can an interval graph contain a cycle with four vertices and no other edges between vertices of the cycle?

9. The Petersen graph is in Figure 6.36. What is its chromatic number?

10. Let G consist of a five-cycle and a complete graph on four vertices, with all vertices of the five-cycle joined to all vertices of the complete graph. What is the chromatic number of G?

11. In how many ways can we properly color a tree on n vertices with t colors?

12. In how many ways may we properly color a complete graph on n vertices with t colors?

Figure 6.36: The Petersen Graph.


13. Show that in a simple planar graph with no triangles, e ≤ 2v − 4.

14. Show that in a simple bipartite planar graph, e ≤ 2v − 4, and use that fact to prove that K3,3 is not planar.

15. Show that in a planar graph with no triangles there is a vertex of degree three or less.

16. Show that if a planar graph has fewer than twelve vertices, then it has at least one vertex of degree four or less.

17. The Petersen graph is shown in Figure 6.36. What is the size of the smallest cycle in the Petersen graph? Is the Petersen graph planar?

18. Prove the following theorem of Welsh and Powell: If a graph G has degree sequence d1 ≥ d2 ≥ · · · ≥ dn, then χ(G) ≤ 1 + maxi[min(di, i − 1)]. (That is, the maximum over all i of the minimum of di and i − 1.)

19. What upper bounds do Problem 18, Problem 7, and the Brooks bound mentioned in Problem 7 give you for the chromatic number of the graph in Problem 10? Which comes closest to the right value? How close?
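For experimenting with Problem 19, the bounds of Problems 7 and 18 can be computed directly from a degree sequence. In the Python sketch below (an illustration, not part of the text), the degree sequence is worked out by hand for the graph of Problem 10: each complete-graph vertex has degree 3 + 5 = 8, and each cycle vertex has degree 2 + 4 = 6.

```python
def welsh_powell_bound(degrees):
    """Welsh-Powell bound (Problem 18): 1 + max over i of min(d_i, i - 1),
    where d_1 >= d_2 >= ... >= d_n is the degree sequence."""
    d = sorted(degrees, reverse=True)
    # enumerate is 0-based, so its index already equals i - 1.
    return 1 + max(min(di, i) for i, di in enumerate(d))

# Degree sequence for the graph of Problem 10 (a five-cycle fully
# joined to K4), derived by hand as described above.
degrees = [8, 8, 8, 8, 6, 6, 6, 6, 6]
print(welsh_powell_bound(degrees))  # 7
print(1 + max(degrees))             # the Delta + 1 bound of Problem 7: 9
```

Comparing the two printed values against the true chromatic number (which Problem 10 asks you to find) answers Problem 19 for this graph.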



Index

k-element permutation of a set, 13 n choose k, 6 Zn, 48 abstraction, 2 absurdity reduction to, 112 addition mod n, 48 additive identity, 45 adjacency list, 278 adjacent in a graph, 263, 272 Adleman, 70 adversary, 39, 42, 48 algorithm non-deterministic, 294, 296 divide and conquer, 139, 148 polynomial time, 294 randomized, 79, 237, 247 alternating cycle for a matching, 302 alternating path, 309 alternating path for a matching, 302 ancestor, 282, 284 and (in logic), 85, 86, 92 associative law, 48 augmentation-cover algorithm, 307 augmenting path, 309 augmenting path for a matching, 304 axioms of probability, 186 base case for a recurrence, 128 base case in proof by induction, 121, 124, 125 Berge’s Theorem (for matchings), 304 Berge’s Theorem for matchings, 309 Bernoulli trials expected number of successes, 222, 224 variance and standard deviation, 258, 259 Bernoulli trials process, 216, 224 bijection, 12 Bijection Principle, 12 binary tree, 282–284 full, 282, 284 binomial coefficient, 14–15, 18–25 binomial probabilities, 216, 224 Binomial Theorem, 21, 23 bipartite graph, 298, 300, 309 block of a partition, 2, 33 bookcase problem, 32 Boole’s inequality, 232 breadth first number, 279 breadth first search, 279, 283 Caesar cipher, 40, 48 ceilings removing from recurrences, 156, 170, 172 removing from recurrences, 160 child, 282, 284 Chinese Remainder Theorem, 71, 73 cipher Caesar, 40, 48 ciphertext, 40, 48 Circuit Eulerian, 288, 296 closed path in a graph, 269, 273 codebook, 42 coefficient binomial, 15, 18, 21 multinomial, 24 trinomial, 23 collision in hashing, 187, 191 collisions in hashing, 228, 234 expected number of, 228, 234 coloring proper, 312, 322 coloring of a graph, 312, 322 combinations with repetitions, 33, 34 commutative law, 48 complement, 187, 191

complementary events, 187, 191 complementary probability, 189 complete bipartite graph, 298 complete graph, 265, 273 component connected, 268, 273 conclusion (of an implication), 90 conditional connective, 90, 93 conditional expected value, 239, 247 conditional probability, 205, 212 conditional proof principle of, 114 conditional statements, 90 connected geometrically, 317 connected component of a graph, 268, 273 connected graph, 267, 273 connective conditional, 90, 93 logical, 86, 93 connectivity relation, 268 constant coefficient recurrence, 136 contradiction, 94 proof by, 52, 112, 115 contraposition proof by, 111 contrapositive, 111 contrapositive (of an implication), 115 converse (of an implication), 111, 115 correspondence one-to-one, 12 counterexample smallest, 56 counting, 1–37 coupon collector’s problem, 230 cryptography, 39, 47 private key, 40, 48 public key, 42, 48 RSA, 68, 70, 72 cut edge, 318, 323 cut-vertex, 298 cycle, 269 Hamiltonian, 291, 296 cycle in a graph, 269, 273 decision problem, 294, 296 degree, 265, 273 DeMorgan’s Laws, 88, 93

derangement, 199 derangement problem, 199 descendant, 282, 284 diagram Venn, 194, 195, 201 digital signature, 81 direct inference, 108, 114 direct proof, 109 disjoint, 2, 6 mutually, 6 distribution probability, 186, 189, 191 distribution function, 219, 224, 251 distributive law, 48 distributive law (and over or), 88 divide and conquer algorithm, 139, 148 divide and conquer recurrence, 150 division in Zn, 53 domain of a function, 10 drawing planar of a graph, 316, 322 edge in a graph multiple, 265 edge of a graph, 263, 272 empty slots in hashing, 228, 234 encrypted, 39 encryption RSA, 70 equations in Zn solution of, 61, 62 solutions to, 51 equivalence classes, 28, 34 equivalence relation, 27, 34 equivalent (in logic), 93 equivalent statements, 88, 101 Euclid’s Division Theorem, 48, 56 Euclid’s extended greatest common divisor algorithm, 58, 59, 61 Euclid’s greatest common divisor algorithm, 57, 61 Euler’s Formula, 318 Euler’s constant, 230, 234 Euler, Leonhard, 287 Eulerian Circuit, 288, 296 Eulerian Graph, 289, 296 Eulerian Tour, 288, 296

Eulerian Trail, 288, 296 event, 185, 191 events complementary, 187, 191 independent, 205, 212 excluded middle principle of, 92, 93 exclusive or, 86 exclusive or (in logic), 85, 86, 92 existential quantifier, 97, 105 expectation, 219, 224 additivity of, 221, 224 conditional, 239, 247 linearity of, 221, 224 expected number of trials until first success, 223, 225 expected running time, 237, 247 expected value, 219, 224 conditional, 239, 247 number of successes in Bernoulli trials, 222, 224 exponentiation in Zn, 65, 72 exponentiation mod n, 65 practical aspects, 75 extended greatest common divisor algorithm, 58, 59, 61 external vertex, 282, 284 face of a planar drawing, 317, 323 factorial falling, 13 factoring numbers difficulty of, 77 falling factorial, 13 family of sets, 2 Fermat’s Little Theorem, 67, 72 Fermat’s Little Theorem for integers, 68, 72 first order linear constant coefficient recurrence solution to, 136 first order linear recurrence, 133, 136 solution to, 137 floors removing from recurrences, 156, 160, 170, 172 forest, 274 fractions in Zn, 49 free variable, 96, 105 full binary tree, 282, 284 function, 10 one-to-one, 11 hash, 187 increasing, 170 inverse, 17 one-way, 68, 70 onto, 11


gcd, 55 generating function, 217, 224 geometric series bounds on the sum, 136 finite, 131, 136 geometrically connected, 317 graph, 263, 272 bipartite, 298, 300, 309 coloring, 312, 322 complete, 265, 273 complete bipartite, 298 connected, 267, 273 Graph Eulerian, 289, 296 graph Hamiltonian, 291, 296 hypercube, 297 interval, 314, 322 interval representation, 314, 322 neighborhood, 301, 309 planar, 316, 322 planar drawing, 316, 322 face of, 317, 323 weighted, 286 graph decision problem, 294, 296 greatest common divisor, 55, 60–62 greatest common divisor algorithm, 57, 61 extended, 58, 59, 61 Hall’s condition (for a matching), 307 Hall’s Theorem for matchings, 308, 309 Hamilton, William Rowan, 291 Hamiltonian Cycle, 291, 296 Hamiltonian graph, 291, 296 Hamiltonian Path, 291, 296 Hanoi, towers of, 128 harmonic number, 230, 234 hash function, 187 hash table, 186 hashing collision, 187, 191 collisions, 228, 234 empty slots, 228, 234 expected maximum number of keys per slot, 233, 234 expected number of collisions, 228, 234 expected number of hashes until all slots occupied, 230, 234 expected number of items per slot, 227, 234 hatcheck problem, 199 histogram, 251 hypercube graph, 297 hypothesis (of an implication), 90 identity additive, 45 multiplicative, 45 if (in logic), 93 if and only if (in logic), 93 if . . . then (in logic), 93 implies, 93 implies (in logic), 93 incident in a graph, 263, 272 increasing function, 170 independent events, 205, 212 independent random variables, 255, 258 product of, 255, 258 variance of sum, 256, 259 independent set (in a graph), 300, 309 indirect proof, 112, 113 induced subgraph, 269 induction, 117–125, 164–167 base case, 121, 125 inductive conclusion, 121, 126 inductive hypothesis, 121, 126 inductive step, 121, 126 strong, 123, 125 stronger inductive hypothesis, 167 weak, 120, 125 inductive conclusion in proof by induction, 121, 126 inductive hypothesis in proof by induction, 121, 126 inductive step in proof by induction, 121, 126

inference direct, 108, 114 rule of, 115 rules of, 109, 111, 112, 114 initial condition for a recurrence, 128 initial condition for recurrence, 136 injection, 11 insertion sort, 237, 238, 247 integers mod n, 48 internal vertex, 282, 284 interval graph, 314, 322 interval representation of a graph, 314, 322 inverse multiplicative in Zn, 51, 53, 55, 60–62 in Zn, computing, 61 in Zp, p prime, 60, 62 inverse function, 17 iteration of a recurrence, 131, 137, 141 key private, 48 for RSA, 68 public, 42, 48 for RSA, 68 secret, 42 König-Egerváry Theorem, 310 Königsberg Bridge Problem, 287 labeling with two labels, 23 law associative, 48 commutative, 48 distributive, 48 leaf, 282, 284 length of a path in a graph, 265 lexicographic order, 13, 87 linear congruential random number generator, 50 list, 10 logarithms important properties of, 147, 149, 151, 159, 161 logic, 83–115 logical connective, 86, 93 loop in a graph, 265 Master Theorem, 150, 153, 159, 160 matching, 299, 309

alternating cycle, 302 alternating path, 302 augmenting path, 304 Hall’s condition for, 307 increasing size, 304 maximum, 300, 309 mathematical induction, 117–125, 164–167 base case, 121, 125 inductive conclusion, 121, 126 inductive hypothesis, 121, 126 inductive step, 121, 126 strong, 123, 125 stronger inductive hypothesis, 167 weak, 120, 125 maximum matching, 300, 309 measure probability, 186, 189, 191 median, 174, 181 mergesort, 140, 148 Miller-Rabin primality testing algorithm, 79 minimum spanning tree, 286 mod n using in a calculation, 48 modus ponens, 108, 114 multinomial, 24 multinomial coefficient, 24 Multinomial Theorem, 24 multiple edges, 265 multiple edges in a graph, 265 multiplication mod n, 48 multiplicative identity, 45 multiplicative inverse in Zn, 51, 53, 55, 60–62 computing, 61 multiplicative inverse in Zp, p prime, 60, 62 multiset, 30, 34 size of, 30 mutually disjoint sets, 2, 6 negation, 85, 92 neighbor in a graph, 301, 309 neighborhood, 301, 309 non-deterministic algorithm, 296 non-deterministic graph algorithm, 294 not (in logic), 85, 86, 92 NP, problem class, 295 NP-complete, 295, 296 NP-complete Problems, 294 number theory, 40–81 one-to-one function, 11 one-way function, 68, 70 only if (in logic), 93 onto function, 11 or exclusive (in logic), 85 or (in logic), 85, 86, 92 exclusive, 86 order lexicographic, 13 ordered pair, 6 overflow, 50 P, problem class, 294, 296 pair ordered, 6 parent, 282, 284 part of a bipartite graph, 300, 309 partition, 28 blocks of, 2 partition element, 176, 182, 242 partition of a set, 2, 6, 33 Pascal Relationship, 18, 23 Pascal’s Triangle, 18, 23 path, 269 alternating, 309 augmenting, 309 Hamiltonian, 291, 296 path in a graph, 265, 273 closed, 269, 273 length of, 265 simple, 265, 273 percentile, 174, 181 permutation, 12 k-element, 13 permutation of Zp, 67, 72 Pi notation, 32, 34 plaintext, 40, 48 planar drawing, 316, 322 planar drawing face of, 317, 323 planar graph, 316, 322 polynomial time graph algorithm, 294 power falling factorial, 13 rising factorial, 32 primality testing, 216 deterministic polynomial time, 78 difficulty of, 78


randomized algorithm, 79 Principle Symmetry, 33 Bijection, 12 Product, 5, 6 Version 2, 10 Quotient, 28 principle quotient, 34 Principle Sum, 2, 6 Principle Symmetry, 26 Principle of conditional proof, 114 Principle of Inclusion and Exclusion for counting, 201, 202 principle of inclusion and exclusion for probability, 197 Principle of proof by contradiction, 52, 115 principle of the excluded middle, 92, 93 Principle of universal generalization, 114 private key, 48 for RSA, 68 private key cryptography, 40, 48 probability, 186, 191 axioms of, 186 Bernoulli trials, 216, 224 Probability Bernoulli trials variance and standard deviation, 258, 259 probability binomial, 216, 224 complementary, 189 complementary events, 187, 191 conditional, 205, 212 distribution, 186, 189, 191 binomial, 216, 224 event, 185, 191 independence, 205, 212 independent random variables variance of sum, 256, 259 measure, 186, 189, 191 random variable, 215, 223 distribution function, 219, 224, 251 expectation, 219, 224 expected value, 219, 224 independent, 255, 258 numerical multiple of, 221, 224

standard deviation, 257, 259 variance, 254, 258 random variables product of, 255, 258 sum of, 220, 224 sample space, 185, 190 uniform, 189, 191 union of events, 194, 196, 197, 201 weight, 186, 191 product notation, 32, 34 Product Principle, 5, 6 Version 2, 10 proof direct, 109 indirect, 112, 113 proof by contradiction, 52, 112, 115 proof by contraposition, 111 proof by smallest counterexample, 56 proper coloring, 312, 322 pseudoprime, 79 public key, 42, 48 for RSA, 68 public key cryptography, 42, 48 quantified statements truth or falsity, 101, 105 quantifier, 97, 105 existential, 97, 105 universal, 97, 105 quicksort, 243 quotient principle, 28, 34 random number, 50 random number generator, 237 random variable, 215, 223 distribution function, 219, 224, 251 expectation, 219, 224 expected value, 219, 224 independence, 255, 258 numerical multiple of, 221, 224 standard deviation, 257, 259 variance, 254, 258 random variables independent variance of sum, 256, 259 product of, 255, 258 sum of, 220, 224 randomized algorithm, 79, 237, 247

randomized selection algorithm, 242, 247 range of a function, 10 recurrence iterating, 131, 137 recurrence, 128, 136 base case for, 128 constant coefficient, 136 divide and conquer, 150 first order linear, 133, 136 solution to, 137 first order linear constant coefficient solution to, 136 initial condition, 128, 136 iteration of, 141 recurrence equation, 128, 136 recurrence inequality, 163 solution to, 163 recurrences on the positive real numbers, 154, 160 recursion tree, 141, 148, 150, 167 reduction to absurdity, 112 register assignment problem, 314 relation equivalence, 27 relatively prime, 55, 60, 61 removing floors and ceilings from recurrences, 160, 172 removing floors and ceilings in recurrences, 156, 170 rising factorial, 32 Rivest, 70 root, 281, 284 rooted tree, 281, 284 RSA Cryptosystem, 68 RSA cryptosystem, 70, 72 security of, 77 time needed to use it, 76 RSA encryption, 70 rule of inference, 115 rules of exponents in Zn, 65, 72 rules of inference, 109, 111, 112, 114 sample space, 185, 190 saturate (by matching edges), 300, 309 secret key, 42 selection algorithm, 174, 182 randomized, 242, 247 recursive, 182

running time, 180 set, 6 k-element permutation of, 13 partition of, 2, 6, 33 permutation of, 12 size of, 2, 6 sets disjoint, 2 mutually disjoint, 2, 6 Shamir, 70 signature digital, 81 simple path, 265, 273 size of a multiset, 30 size of a set, 2, 6 solution of equations in Zn, 61 solution to a recurrence inequality, 163 solutions of equations in Zn, 62 solutions to equations in Zn, 51 spanning tree, 276, 283 minimum, 286 standard deviation, 257, 259 statement conditional, 90 contrapositive, 111 converse, 111 statements equivalent, 88 Stirling Numbers of the second kind, 203 Stirling’s formula, 230 stronger induction hypothesis, 167 subgraph, 269 induced, 269 subtree of a graph, 276 success expected number of trials until, 223, 225 Sum Principle, 2, 6 surjection, 11 Symmetry Principle, 26, 33 table hash, 186 tautology, 94 Theorem Binomial, 21, 23 Multinomial, 24 Trinomial, 23 Tour Eulerian, 288, 296 towers of Hanoi problem, 128 Trail Eulerian, 288, 296 tree, 269, 273 binary, 282, 284 recursion, 148, 150, 167 rooted, 281, 284 spanning, 276, 283 minimum, 286 tree recursion, 141 trinomial coefficient, 23 Trinomial Theorem, 23 truth values, 86 uniform probability, 189, 191 union probability of, 194, 196, 197, 201 universal generalization Principle of, 114 universal quantifier, 97, 105 universe for a statement, 96, 105 variable free, 96, 105 variance, 254, 258 Venn diagram, 194, 195, 201 vertex external, 282, 284 internal, 282, 284 vertex cover, 301, 309 vertex of a graph, 263, 272 weight probability, 186, 191 weighted graph, 286 weights for a graph, 286 wheel, 323 xor (in logic), 86, 92


Similar Documents

Free Essay

Mth221Syllabus

... | | |Discrete Math for Information Technology | Copyright © 2010 by University of Phoenix. All rights reserved. Course Description Discrete (as opposed to continuous) mathematics is of direct importance to the fields of Computer Science and Information Technology. This branch of mathematics includes studying areas such as set theory, logic, relations, graph theory, and analysis of algorithms. This course is intended to provide students with an understanding of these areas and their use in the field of Information Technology. Policies Faculty and students/learners will be held responsible for understanding and adhering to all policies contained within the following two documents: • University policies: You must be logged into the student website to view this document. • Instructor policies: This document is posted in the Course Materials forum. University policies are subject to change. Be sure to read the policies at the beginning of each class. Policies may be slightly different depending on the modality in which you attend class. If you have recently changed modalities, read the policies governing your current class modality. Course Materials Grimaldi, R. P. (2004). Discrete and combinatorial mathematics: An applied introduction. (5th ed.). Boston, MA: Pearson Addison Wesley. Article...

Words: 1891 - Pages: 8

Premium Essay

Self-Study Report

...ABET Self-Study Report for the COMPUTER ENGINEERING PROGRAM at QASSIM PRIVATE COLLEGES BURIDAH, SAUDI ARABIA First of June 2015 Table of Contents Introduction 3 Requirements and Preparation 3 Supplemental Materials 4 Submission and Distribution of Self-Study Report 4 Confidentiality 5 Template 5 BACKGROUND INFORMATION 7 GENERAL CRITERIA 9 CRITERION 1. STUDENTS 9 CRITERION 2. PROGRAM EDUCATIONAL OBJECTIVES 11 CRITERION 3. STUDENT OUTCOMES 12 CRITERION 4. CONTINUOUS IMPROVEMENT 13 CRITERION 5. CURRICULUM 15 CRITERION 6. FACULTY 17 CRITERION 7. FACILITIES 20 CRITERION 8. INSTITUTIONAL SUPPORT 22 PROGRAM CRITERIA 23 Appendix A – Course Syllabi 24 Appendix B – Faculty Vitae 25 Appendix C – Equipment 26 Appendix D – Institutional Summary 27 Signature...

Words: 10169 - Pages: 41

Free Essay

Ga 411

...edu/student_affairs The Office of Student Affairs student-affairs@usg.edu The high school curriculum is the cornerstone of the University System of Georgia (USG) admissions policy. This document reflects the sdfdsfdsfsdfds unit requirements in each of the academic subject areas. Students should pursue a challenging and rigorous high school minimum USG curriculum to be best prepared for a successful college experience and should consult with their high school counselor to determine appropriate coursework. The following high school requirements must be met by all freshmen applicants and transfer applicants with less than 30 transferable semester hours. Students should contact their college or university of interest to learn about any additional institution-specific admission requirements that may apply. Carnegie Unit Requirements 16 Carnegie Units should be completed by students graduating high school prior to 2012. 17 Carnegie Units should be completed by students graduating high school in 2012 or later. Carnegie Unit Requirement In Specific Subject Areas 4 Carnegie units of college preparatory English Literature (American, English, World) integrated with grammar, usage and advanced composition skills 4 Carnegie units of college preparatory mathematics Mathematics I, II, III and a fourth unit of mathematics from the approved list, or equivalent courses* or Algebra I and II, geometry and a fourth year of advanced math, or equivalent courses* ...

Words: 3458 - Pages: 14

Free Essay

Discrete Math

...Course Design Guide MTH/221 Version 1 1 Course Design Guide College of Information Systems & Technology MTH/221 Version 1 Discrete Math for Information Technology Copyright © 2010 by University of Phoenix. All rights reserved. Course Description Discrete (as opposed to continuous) mathematics is of direct importance to the fields of Computer Science and Information Technology. This branch of mathematics includes studying areas such as set theory, logic, relations, graph theory, and analysis of algorithms. This course is intended to provide students with an understanding of these areas and their use in the field of Information Technology. Policies Faculty and students/learners will be held responsible for understanding and adhering to all policies contained within the following two documents:   University policies: You must be logged into the student website to view this document. Instructor policies: This document is posted in the Course Materials forum. University policies are subject to change. Be sure to read the policies at the beginning of each class. Policies may be slightly different depending on the modality in which you attend class. If you have recently changed modalities, read the policies governing your current class modality. Course Materials Grimaldi, R. P. (2004). Discrete and combinatorial mathematics: An applied introduction. (5th ed.). Boston, MA: Pearson Addison Wesley. Article References Albert, I. Thakar, J., Li, S., Zhang, R., & Albert, R...

Words: 1711 - Pages: 7

Premium Essay

My Interest In Mathematics

...challenging problems in science and engineering or related fields by using numerical computation have reached to a new level. Computation is today considered as a very important tool needed for the advancement of scientific knowledge and engineering practice along with theory and experiment. In the modern world all sorts of calculations are done by sophisticated computer systems. Every company and research farms from small-scale to large-scale are getting more and more reliant on mathematical principles these days. Numerical simulation has enabled the study of complex systems and natural phenomena that would be too expensive or sometimes impossible, to study directly by experimentation. As a matter of fact, engineers and scientists now require solid knowledge of computer science and applied mathematics in order to get optimized output from a system. To make things easier in this matter, Scientific Computing is a discipline that conglomerates Mathematics, Computer Science and Engineering in a single degree program and utilizes mathematical models in computer simulations to solve complex problems for not only in science laboratories but also in business and engineering firms. I have always been fascinated by the application of mathematics and computer science in the real world problems. That is why...

Words: 842 - Pages: 4

Premium Essay

Damsel

...document. Note: Program map information located in the faculty sections of this document are relevant to students beginning their studies in 2014-2015, students commencing their UOIT studies during a different academic year should consult their faculty to ensure they are following the correct program map. i Message from President Tim McTiernan I am delighted to welcome you to the University of Ontario Institute of Technology (UOIT), one of Canada’s most modern and dynamic university communities. We are a university that lives by three words: challenge, innovate and connect. You have chosen a university known for how it helps students meet the challenges of the future. We have created a leading-edge, technology-enriched learning environment. We have invested in state-of-the-art research and teaching facilities. We have developed industry-ready programs that align with the university’s visionary research portfolio. UOIT is known for its innovative approaches to learning. In many cases, our undergraduate and graduate students are working alongside their professors on research projects and gaining valuable hands-on learning, which we believe is integral in preparing you to lead and succeed. I encourage you to take advantage of these opportunities to become the best you can be. We also invite our students to connect to the campus and the neighbouring communities. UOIT students enjoy a stimulating campus life experience that includes a wide variety of clubs, cultural and community...

Words: 195394 - Pages: 782

Premium Essay

Senior Systrem Engineer

...College of Information Systems & Technology Bachelor of Science in Information Technology with a Concentration in Information Management The Bachelor of Science in Information Technology (BSIT) degree program is focused on the acquisition of theory and the application of technical competencies associated with the information technology profession. The courses prepare students with fundamental knowledge in core technologies, such as systems analysis and design; programming; database design; network architecture and administration; web technologies; and application development, implementation, and maintenance. This undergraduate degree program includes 45 credits in the required course of study and 15 credits in the concentration. Some courses have prerequisites. In addition, students must satisfy general education and elective requirements to meet the 120-credit minimum, including a minimum of 48 upper-division credits required for completion of the degree. At the time of enrollment, students must choose a concentration. The Information Management concentration is designed to provide coverage of the collection, architecture, modeling, retrieval and management of data for meaningful presentation to the organization. This concentration prepares students to develop, deploy, manage, and integrate data and information systems to support the organization. Note: The diploma awarded for this program will read: Bachelor of Science in Information Technology and will not reflect the concentration...

Words: 1892 - Pages: 8

Premium Essay

Philosophy of Education

...Personal Philosophy of Education I would not be considered your typical college student in search of an education degree. I am a 31 year old male, married, with two children, and working on my second career. My previous life consisted of working in the coal mines till I was injured. My injury, however, is considered a blessing in disguise. My injury has leaded me to the world of education. I have seen first hand the difference an educator can make in the life of a child; the child was my own son. My eldest son, diagnosed with Asperger’s Syndrome, was unable to communicate. He had the opportunity to be enrolled in the early intervention program in Raleigh County. The first individual with the challenge of assisting my child was not able to fulfill her roles and think “outside of the box” to reach him. My wife and I promptly searched for the appropriate educator for him. My family was blessed when we found “Ms. Mitzi”. In the matter of weeks our son was able to tell his mommy he loved her. This impacted my life significantly and I wish to be able to pass on what was given to my child and my family. I chose education as my career path because I hope to be able to make a small difference in a child’s life. Time and time again I have seen children being educated poorly and/or not having appropriate role models in their life. I feel that an educator must not only be able to convey to the student the classroom material, but also be a counselor, coach, mentor, and a parent. Failing...

Words: 1255 - Pages: 6

Premium Essay

Business Ethics

...16/01/2014Thursday | 17/01/2014Friday | 18/01/2014Saturday | 20/01/2014Monday | 21/01/2014Tuesday | 22/01/2014Wednesday | II Sem -B.A Programmes HEPEPS | English | Languages/Ad. English | | Political Science | Principles of Macro Economics | Contemporary IndiaSociology | PSEngPSEcoJPEng | English | Languages/Ad .English | | BasicPsychologicalProcesses -II | Principles of Macro Economics British Literature | Foundations of SociologyJournalism | CEP | English | Languages/Ad .English | | BasicPsychologicalProcesses -II | British Literature | Software applicationFor print media & the web | TCE(Theatre Studies) | | | | Introduction toMusic & Dance –II | | | PEP | English | Languages/Add.English | | Basic PsychologicalProcess –II | British Literature | Dynamics of DanceMusic & Theatre | II Sem -B.Sc Programmes CME | English-- | 9:30 to 11:30 amLang/Ad .English | | Computer Science Data Structures & operating system | Electronics | Differential Calculus | | | 2:30 to 4:30 pmIntegral Calculus | | | | | EMSCMS | English | 9:30 to 11:30 amLang/Ad .English | Statistics ( 9:30 to 11:30 am)(Examination will be held in separate room for Stats; check the notice board) | Computer ScienceOperating Systems & Data Structures using C | Principles of MacroEconomics | Differential Calculus | | | 2:30 to 4:30 pmIntegral Calculus | | | | | PMEPCM | English...

Words: 2645 - Pages: 11

Free Essay

Math

...MATH 55 SOLUTION SET—SOLUTION SET #5 Note. Any typos or errors in this solution set should be reported to the GSI at isammis@math.berkeley.edu 4.1.8. How many different three-letter initials with none of the letters repeated can people have. Solution. One has 26 choices for the first initial, 25 for the second, and 24 for the third, for a total of (26)(25)(24) possible initials. 4.1.18. How many positive integers less than 1000 (a) are divisible by 7? (b) are divisible by 7 but not by 11? (c) are divisible by both 7 and 11? (d) are divisible by either 7 or 11? (e) are divisible by exactly one of 7 or 11? (f ) are divisible by neither 7 nor 11? (g) have distinct digits? (h) have distinct digits and are even? Solution. (a) Every 7th number is divisible by 7. Since 1000 = (7)(142) + 6, there are 142 multiples of seven less than 1000. (b) Every 77th number is divisible by 77. Since 1000 = (77)(12) + 76, there are 12 multiples of 77 less than 1000. We don’t want to count these, so there are 142 − 12 = 130 multiples of 7 but not 11 less than 1000. (c) We just figured this out to get (b)—there are 12. (d) Since 1000 = (11)(90) + 10, there are 90 multiples of 11 less than 1000. Now, if we add the 142 multiples of 7 to this, we get 232, but in doing this we’ve counted each multiple of 77 twice. We can correct for this by subtracting off the 12 items that we’ve counted twice. Thus, there are 232-12=220 positive integers less than 1000 divisible by 7 or 11. (e) If we want to exclude the multiples...

Words: 3772 - Pages: 16

Free Essay

Customer Satisfaction

...Transforming Lives Communities The Nation …One Student at a Time Disclaimer Academic programmes, requirements, courses, tuition, and fee schedules listed in this catalogue are subject to change at any time at the discretion of the Management and Board of Trustees of the College of Science, Technology and Applied Arts of Trinidad and Tobago (COSTAATT). The COSTAATT Catalogue is the authoritative source for information on the College’s policies, programmes and services. Programme information in this catalogue is effective from September 2010. Students who commenced studies at the College prior to this date, are to be guided by programme requirements as stipulated by the relevant department. Updates on the schedule of classes and changes in academic policies, degree requirements, fees, new course offerings, and other information will be issued by the Office of the Registrar. Students are advised to consult with their departmental academic advisors at least once per semester, regarding their course of study. The policies, rules and regulations of the College are informed by the laws of the Republic of Trinidad and Tobago. iii Table of Contents PG 9 PG 9 PG 10 PG 11 PG 11 PG 12 PG 12 PG 13 PG 14 PG 14 PG 14 PG 14 PG 15 PG 17 PG 18 PG 20 PG 20 PG 20 PG 21 PG 22 PG 22 PG 22 PG 23 PG 23 PG 23 PG 23 PG 24 PG 24 PG 24 PG 24 PG 25 PG 25 PG 25 PG 26 PG 26 PG 26 PG 26 PG 26 PG 26 PG 27 PG 27 PG 27 PG 27 PG 27 PG 27 PG 28 PG 28 PG 28 PG 28 PG 28 PG 33 PG 37 Vision Mission President’s...

Words: 108220 - Pages: 433

Premium Essay

No Paper to Upload

...REGENT UNIVERSITY COLLEGE OF ARTS & SCIENCES UNDERGRADUATE CATALOG 2013-2014 (Fall 2013-Summer 2014). Regent University, 1000 Regent University Drive, Virginia Beach, VA 23464-9800, 800.373.5504, admissions@regent.edu, www.regent.edu

PREFACE. Regional Accreditation: Regent University is accredited by the Southern Association of Colleges and Schools Commission on Colleges to award associate, baccalaureate, master's, and doctorate degrees. Contact the Commission on Colleges at 1866 Southern Lane, Decatur, Georgia 30033-4097 or call 404-679-4500 for questions about the accreditation of Regent University.

National and State Accreditation: Regent University's undergraduate school is accredited or certified by the following bodies: the Council for Higher Education Accreditation (CHEA) (www.chea.org/) and the Teacher Education Accreditation Council (TEAC). The Regent University School of Education's educational leadership and teacher preparation programs and the College of Arts & Sciences interdisciplinary studies program, which are designed to prepare competent, caring, and qualified professional educators, are accredited by the Teacher Education Accreditation Council for a period of seven years, from January 9, 2009 to January 9, 2016. This accreditation certifies that the educational leadership, teacher preparation and interdisciplinary studies programs have provided evidence that they adhere to TEAC's quality principles. Teacher Education Accreditation Council, One Dupont Circle, Suite...

Words: 74326 - Pages: 298

Free Essay

Od Glossary

...360 Degree Feedback: An evaluation method that provides each employee the opportunity to receive performance feedback from his or her supervisor and four to eight peers, reporting staff members, co-workers and customers.

ABE (Adult Basic Education): Adult Basic Education.

Accreditation: Certification by a duly recognized body of the facilities, capability, objectivity, competence, and integrity of an agency, service or operational group or individual to provide the specific service(s) or operation(s) needed. Recognition given to a person or organization meeting certain standards.

Achievement: Performance as determined by some type of assessment or testing.

Action Plan: A specific method or process to achieve the results called for by one or more objectives. May be a simpler version of a project plan.

Action planning and processes: Deciding who is going to do what, by when and in what order for the organization to reach its strategic goals. The design and implementation of action planning depends on the nature and needs of the organization. An action plan includes a schedule with deadlines for significant actions.

Action Projects: A specific planned process and steps for completing one or more strategic goals and objectives, including ownership of the project. The Action Projects are the annual goals and challenges currently being addressed by San Juan College.

Active listening: A way of listening that...

Words: 9791 - Pages: 40

Premium Essay

Hello

...Engineering: An Introduction for High School. Annapurna Ganesh, Chell Roberts, Dale Baker, Darryl Morrell, Janel White-Taylor, Stephen Krause, Tirupalavanam G. Ganesh. Say Thanks to the Authors: click http://www.ck12.org/saythanks (no sign-in required).

To access a customizable version of this book, as well as other interactive content, visit www.ck12.org. CK-12 Foundation is a non-profit organization with a mission to reduce the cost of textbook materials for the K-12 market both in the U.S. and worldwide. Using an open-content, web-based collaborative model termed the FlexBook®, CK-12 intends to pioneer the generation and distribution of high-quality educational content that will serve both as core text as well as provide an adaptive environment for learning, powered through the FlexBook Platform®. Copyright © 2011 CK-12 Foundation, www.ck12.org. The names "CK-12" and "CK12" and associated logos and the terms "FlexBook®" and "FlexBook Platform®" (collectively "CK-12 Marks") are trademarks and service marks of CK-12 Foundation and are protected by federal, state and international laws. Any form of reproduction of this book in any format or medium, in whole or in sections, must include the referral attribution link http://www.ck12.org/saythanks (placed in a visible location) in addition to the following terms. Except as otherwise noted, all CK-12 Content (including CK-12 Curriculum Material) is made available to Users in accordance with the Creative Commons...

Words: 61128 - Pages: 245

Free Essay

Many Many Sops

...at IITB tend to write long SOPs (1.5 to 2 pages) whereas IITM guys (who get better schools!) usually write 0.75 page (max. 1 page). Basically, their SOPs are much more direct and to-the-point than ours. [5] REMEMBER THAT A SOP *MUST* BE ORIGINAL.

Statement of Purpose. I am applying to Stanford University for admission to the Ph.D. program in Computer Science. I am interested in Theoretical Computer Science, particularly in the Design and Analysis of Approximation Algorithms, Combinatorics and Complexity Theory. My interest in Mathematics goes back to the time I was at school. This interest has only grown through my years in school and high school, as I have learnt more and more about the subject. Having represented India at the International Mathematical Olympiads on two occasions, I have been exposed to elements of Discrete Mathematics, particularly Combinatorics and Graph Theory, outside the regular school curriculum at an early stage. The intensive training programs we were put through for the Olympiads have given me a lot of confidence in dealing with abstract mathematical problems. My exposure to Computer Science began after I entered the Indian Institute of Technology (IIT), Bombay. The excellent facilities,...

Words: 20877 - Pages: 84