Advanced linear algebra
M. Anthony, M. Harvey
MT2118, 2790118

2011

Undergraduate study in
Economics, Management,
Finance and the Social Sciences
This is an extract from a subject guide for an undergraduate course offered as part of the
University of London International Programmes in Economics, Management, Finance and the Social Sciences. Materials for these programmes are developed by academics at the
London School of Economics and Political Science (LSE).
For more information, see: www.londoninternational.ac.uk

This guide was prepared for the University of London International Programmes by:
Professor M. Anthony, BSc, MA, PhD and Dr M. Harvey, BSc, MSc, PhD, Department of
Mathematics, The London School of Economics and Political Science.
This is one of a series of subject guides published by the University. We regret that due to pressure of work the authors are unable to enter into any correspondence relating to, or arising from, the guide. If you have any comments on this subject guide, favourable or unfavourable, please use the form at the back of this guide.

University of London International Programmes
Publications Office
Stewart House
32 Russell Square
London WC1B 5DN
United Kingdom
Website: www.londoninternational.ac.uk
Published by: University of London
© University of London 2006
Reprinted with minor revisions 2011
The University of London asserts copyright over all material in this subject guide except where otherwise indicated. All rights reserved. No part of this work may be reproduced in any form, or by any means, without permission in writing from the publisher.
We make every effort to contact copyright holders. If you think we have inadvertently used your copyright material, please let us know.

Contents

Preface ..........................................................................................................v
Introduction ................................................................................................. 1
1.1 This course ........................................................................................................ 1
1.2 Reading............................................................................................................. 2
1.3 Online study resources ....................................................................................... 3
1.4 Using the guide ................................................................................................. 4
1.5 Examination ...................................................................................................... 4
1.6 The use of calculators ........................................................................................ 5
2 Matrices, vectors, and systems of linear equations ................................. 7
2.1 Introduction ...................................................................................................... 7
2.2 Vectors and matrices .......................................................................................... 7
2.2.1 Row operations....................................................................................... 10
2.2.2 Determinants. ......................................................................................... 11
2.2.3 Invertible matrices .................................................................................. 13
2.3 Rank, range, null space, and linear equations ................................................... 15
2.3.1 The rank of a matrix ................................................................................ 15
2.3.2 Rank and systems of linear equations...................................................... 16
2.3.3 General solution of a linear system in vector notation .............................. 19
2.3.4 Null space .............................................................................................. 20
2.3.5 Range .................................................................................................... 21
2.4 Learning outcomes ......................................................................................... .22
2.5 Sample examination questions ......................................................................... 23
2.6 Answers to or comments on selected activities ................................................. 24
2.7 Sketch answers to or comments on selected questions ..................................... 26
3 Vector spaces, basis, dimension .............................................................. 29
3.1 Introduction .................................................................................................... 29
3.2 Vector spaces .................................................................................................. 29
3.2.1 Definition of a vector space..................................................................... 29
3.2.2 Examples ................................................................................................ 31
3.3 Subspaces........................................................................................................ 32
3.3.1 An alternative characterisation of a subspace .......................................... 35
3.4 Subspaces connected with matrices ................................................................. 35
3.4.1 Null space .............................................................................................. 35
3.4.2 Range .................................................................................................... 36
3.5 Linear span...................................................................................................... 36
3.5.1 Lines and Planes in ℝ3 ............................................................................ 37
3.5.2 Row space and column space ................................................................. 38
3.6 Linear independence. ....................................................................................... 38
3.7 Testing linear independence in ℝn .................................................................... 40
3.8 Bases and dimension ....................................................................................... 43
3.8.1 Coordinates ............................................................................................ 45
3.8.2 Dimensions............................................................................................. 46
3.8.3 Dimension and bases of subspaces ......................................................... 47
3.9 Finding a basis for a linear span in ℝn .............................................................. 48
3.10 Basis and dimension of range and null space ................................................. 49
3.11 Learning outcomes ........................................................................................ 52
3.12 Sample examination questions ....................................................................... 52
3.13 Answers to or comments on selected activities ............................................... 54
3.14 Sketch answers to or comments on selected questions ................................... 58


4 Linear transformations, change of basis ................................................ 65
4.1 Introduction .................................................................................................... 65
4.2 Linear transformations ..................................................................................... 65
4.3 Examples ......................................................................................................... 66
4.4 Linear transformations and matrices................................................................. 67
4.4.1 Rotation in ℝ2 ........................................................................................ 68
4.4.2 Identity and zero linear transformations .................................................. 70
4.4.3 Composition and combinations of linear transformations ......................... 70
4.4.4 Inverse linear transformations ................................................................. 70
4.4.5 Linear transformations from V to W......................................................... 71
4.5 Range and null space ...................................................................................... 72
4.6 Rank and nullity .............................................................................................. 73
4.7 Coordinate change .......................................................................................... 75
4.8 Change of basis and similarity ......................................................................... 77
4.9 Learning outcomes .......................................................................................... 79
4.10 Sample examination questions ....................................................................... 79
4.11 Sketch answers to or comments on selected activities ..................................... 80
4.12 Sketch answers to or comments on selected questions ................................... 81
5 Diagonalisation ...................................................................................... 85
5.1 Introduction .................................................................................................... 85
5.2 Eigenvalues and eigenvectors .......................................................................... 85
5.2.1 Definitions .............................................................................................. 85
5.2.2 Finding eigenvalues and eigenvectors...................................................... 86
5.2.3 Eigenspaces............................................................................................ 89
5.3 Diagonalisation of a square matrix ................................................................... 90
5.3.1 Diagonalisation, when can it fail? ........................................................... 93
5.4 Learning outcomes .......................................................................................... 97
5.5 Sample examination questions ......................................................................... 97
5.6 Sketch answers to or comments on selected activities ....................................... 98
5.7 Sketch answers to or comments on selected questions ..................................... 99
6 Inner products, orthogonality, orthogonal diagonalisation ................. 101
6.1 Introduction .................................................................................................. 101
6.2 The inner product of two real n-vectors .......................................................... 101
6.2.1 Geometric Interpretation in ℝ2 and ℝ3 .................................................. 102
6.3 Inner products more generally ........................................................................ 105
6.3.1 Norms in a vector space ........................................................................ 106
6.3.2 The Cauchy-Schwarz inequality ............................................................. 107
6.3.3 Generalised geometry ........................................................................... 108
6.3.4 Orthogonal vectors ............................................................................... 109
6.3.5 Orthogonality and linear independence ................................................. 109
6.4 Orthogonal matrices and orthonormal sets ..................................................... 110
6.5 Gram-Schmidt orthonormalisation process ..................................................... 111
6.6 Orthogonal diagonalisation of symmetric matrices.......................................... 113
6.7 Learning outcomes ........................................................................................ 118
6.8 Sample examination questions ....................................................................... 119
6.9 Sketch answers to or comments on selected activities ..................................... 120
6.10 Sketch answers to or comments on selected questions ................................. 122
7 Applications of diagonalisation............................................................ 127
7.1 Introduction .................................................................................................. 127
7.2 Powers of matrices ........................................................................................ 128
7.3 Systems of difference equations ..................................................................... 129
7.3.1 Difference equations: revision ............................................................... 129
7.3.2 Systems of difference equations ............................................................ 130
7.3.3 Solving by change of variable ................................................................ 130


7.3.4 Solving using matrix powers ................................................................. 132
7.3.5 Markov Chains ..................................................................................... 134
7.4 Linear systems of differential equations .......................................................... 138
7.5 Quadratic forms – Applications of orthogonal diagonalisation ........................ 140
7.5.1 Information on quadratic forms ............................................................. 141
7.5.2 Quadratic forms in ℝ2 – conic sections .................................................. 145
7.6 Learning outcomes ........................................................................................ 148
7.7 Sample examination questions ....................................................................... 148
7.8 Sketch answers to or comments on selected activities ..................................... 150
7.9 Sketch answers to or comments on selected questions ................................... 151
8 Direct sums and projections ................................................................. 155
8.1 Introduction .................................................................................................. 155
8.2 The direct sum of two subspaces .................................................................... 155
8.2.1 The sum of two subspaces .................................................................... 155
8.2.2 Direct sums .......................................................................................... 156
8.3 Orthogonal complements............................................................................... 158
8.3.1 The orthogonal complement of a subspace ............................................ 158
8.3.2 Orthogonal complements of null spaces and ranges .............................. 160
8.4 Projections .................................................................................................... 162
8.4.1 The definition of a projection................................................................. 162
8.4.2 An example .......................................................................................... 163
8.4.3 Orthogonal projections ......................................................................... 163
8.5 Characterising projections and orthogonal projections .................................... 164
8.5.1 Projections are idempotents .................................................................. 164
8.6 Orthogonal projection on to the range of a matrix .......................................... 166
8.7 Minimising the distance to a subspace ........................................................... 167
8.8 Fitting functions to data: least squares approximation .................................... 168
8.8.1 A linear algebra view............................................................................. 169
8.8.2 An example .......................................................................................... 169
8.9 Learning outcomes ........................................................................................ 171
8.10 Sample examination questions ..................................................................... 172
8.11 Sketch answers to or comments on selected activities ................................... 172
8.12 Sketch answers to or comments on selected questions ................................. 173
9 Complex matrices, vector spaces ......................................................... 175
9.1 Introduction .................................................................................................. 175
9.2 Complex numbers.......................................................................................... 175
9.2.1 Complex numbers ................................................................................. 176
9.2.2 Roots of polynomials ............................................................................ 178
9.2.3 The complex plane ................................................................................ 179
9.2.4 Polar form of z ...................................................................................... 179
9.2.5 Exponential form of z............................................................................ 181
9.2.6 Complex solutions of differential and difference equations .................... 183
9.3 Complex vector spaces .................................................................................. 187
9.4 Complex matrices .......................................................................................... 188
9.5 Complex inner product spaces ....................................................................... 190
9.5.1 The inner product on ℂn ........................................................................ 190
9.5.2 Complex inner product in general.......................................................... 191
9.5.3 Orthogonal vectors ............................................................................... 193
9.6 The adjoint. Hermitian and unitary matrices .................................................... 195
9.6.1 The adjoint ................................................................................................. 195
9.6.2 Hermitian matrices................................................................................ 197
9.6.3 Unitary matrices ................................................................................... 198
9.7 Unitary diagonalisation. Normal matrices ....................................................... 200
9.8 Spectral decomposition .................................................................................. 203
9.9 Learning outcomes ........................................................................................ 207


9.10 Sample examination questions ..................................................................... 208
9.11 Sketch answers to or comments on selected activities ................................... 210
9.12 Sketch answers to or comments on selected questions ................................. 213
10 Sample examination paper................................................................. 219
10.1 The format ................................................................................................... 219
10.2 A Sample examination paper ....................................................................... 219


Preface
This subject guide is not a course text. It sets out the logical sequence in which to study the topics in the course. Where coverage in the main texts is weak, it provides some additional background material. Further reading is essential.
We are grateful to Keith Martin and James Ward for their careful reading of a draft of the guide, and for their many helpful comments.


Chapter 1
Introduction
In this very brief introduction, we aim to give you an idea of the nature of this course and to advise on how best to approach it. We give general information about the contents and use of this subject guide, and on recommended reading and how to use the textbooks.

1.1 This course
Relationship to previous mathematics courses
If you are taking this half course as part of a BSc degree, courses which must be passed before this half course may be attempted are 05A
Mathematics 1 and 05B Mathematics 2 or 174 Calculus. Any references in the text to these courses for prerequisite material will apply equally to whatever prerequisite you have taken.
In 05A Mathematics 1 and 05B Mathematics 2 you will have learned about matrices, systems of equations, eigenvalues and eigenvectors, diagonalisation of 2 × 2 matrices and solving systems of linear equations.
In 118 Advanced linear algebra we explore these topics more deeply: we will see new techniques and methods, and we will explore the theoretical ideas behind these. So, for this course, you will not simply have to solve problems: you will have to be able to reason more abstractly, and be able to prove things. In this course, we need to work with precise definitions, and you will need to know these. Not only will you need to know these, but you will have to understand them, and be able (through the use of them) to demonstrate that you understand them. Simply learning the definitions without understanding what they mean is not going to be adequate. I hope that these words of warning don’t discourage you, but I think it’s important to make it clear that this is a course at a higher level than those prerequisite courses.
Please note: 118 Advanced linear algebra will not be offered under the New Regulations. It will be replaced by 175 Further linear algebra.
This new course will be available from September 2012.

Aims
This half course is designed to:
• enable you to acquire further skills in the techniques of linear algebra, as well as understanding of the principles underlying the subject
• prepare you for further courses in mathematics and/or related disciplines (e.g. economics, actuarial science).

Learning outcomes
At the end of this course, and having completed the Essential reading and activities, you should have:
• used the concepts, terminology, methods and conventions covered in this course to solve the mathematical problems in this subject
• the ability to demonstrate an understanding of the underlying principles of the subject

• the ability to solve unseen mathematical problems involving an understanding of the concepts and applications of these methods.

Topics covered
Descriptions of topics to be covered appear in the relevant chapters.
However, it is useful to give a brief overview at this stage.
Diagonalisation of 2 by 2 matrices was covered in 05B Mathematics 2. Here, the techniques are extended to larger matrices, but more importantly the theoretical underpinning is given. This involves the important ideas of vector spaces, linear independence, basis, dimension and linear transformations. We then explore eigenvalues, diagonalisation, and the applications of these techniques to solving systems of difference and differential equations (extending methods from 05B Mathematics 2), and to finding powers of matrices. Then, we study inner products, orthogonality, and quadratic forms. Next, we turn our attention to the ideas of direct sums, projections and least squares, topics that have useful applications in many areas such as statistics and econometrics. Finally, we look at complex matrices and complex vector spaces, extending and generalising the theory developed in the previous chapters.
Throughout, the emphasis is on the theory as much as on the methods.
That is to say, the aim in this course is not only to provide you with useful techniques and methods of linear algebra, but to enable you to understand why these techniques (and those you encountered in your earlier mathematical study) work.
Not all chapters of the guide are the same length. It should not be thought that you should spend the same amount of time on each chapter. We will not try to specify how much relative time should be spent on each: that will vary from person to person and we do not want to be prescriptive.
As a very rough guide (bearing in mind that this must vary from individual to individual), we would suggest that the percentages of time spent on each chapter are something along the lines suggested in the following table.
(This should not be taken as any indication about the composition of the examination.)

Chapter    %
2          10
3          17
4          10
5          10
6          10
7          13
8          13
9          17

1.2 Reading
There are many books that would be useful for this course, and the course does not follow any particular one.
Most topics in this course are covered in great detail by a large number of books. For that reason, we have resisted the temptation to specify Essential reading in each chapter of the guide. What is true, however, is that textbook reading is essential. Textbooks will provide more in-depth explanations than you will find in this guide (which is explicitly not meant to be a textbook), and they will also provide many more examples to study, and many more exercises to work through.
The following books are the ones we have referred to in this guide:
Anton, H. Elementary Linear Algebra. (John Wiley, 2005) [ISBN 9780471449037]. (There are many editions and variants of this book, such as the ‘Applications version’. Any one is equally useful; you will not need more than one of them. You can find the relevant sections in any edition by using the index.)
Lay, David C. Linear Algebra and its Applications. (Pearson Education, 2011) fourth edition [ISBN 9780321385178].
Ostaszewski, A. Advanced Mathematical Methods. (Cambridge: Cambridge University Press, 1991) [ISBN 9780521289641].
Simon, C.P. and Blume, L. Mathematics for Economists. (W.W. Norton and Company, 1994) [ISBN 9780393957334].
(For revision of some basic topics) Anthony, M. and Biggs, N. Mathematics for Economics and Finance: Methods and Modelling. (Cambridge: Cambridge University Press, 1996) [ISBN 9780521551137].

You should be able to find relevant reading on a topic by finding it in the index. In several places we direct you to one of the above textbooks for a proof of a statement or theorem, as corroboration of the result or to satisfy your desire to know why the statement or theorem is true. Such proofs will not be examined. As the editions of the textbooks vary, we have not given explicit section references in the margin.
Detailed reading references in this subject guide refer to the editions of the set textbooks listed above. New editions of one or more of these textbooks may have been published by the time you study this course. You can use a more recent edition of any of the books; use the detailed chapter and section headings and the index to identify relevant readings. Also check the virtual learning environment (VLE) regularly for updated guidance on readings.

1.3 Online study resources
In addition to the subject guide and the reading, it is crucial that you take advantage of the study resources that are available online for this course, including the VLE and the Online Library.
You can access the VLE, the Online Library and your University of London email account via the Student Portal at: http://my.londoninternational.ac.uk
You should have received your login details for the Student Portal with your official offer, which was emailed to the address that you gave on your application form. You have probably already logged in to the
Student Portal in order to register! As soon as you registered, you will automatically have been granted access to the VLE, Online Library and your fully functional University of London email account.
If you forget your login details at any point, please email uolia.support@london.ac.uk quoting your student number.

The VLE
The VLE, which complements this subject guide, has been designed to enhance your learning experience, providing additional support and a sense of community. It forms an important part of your study experience with the University of London and you should access it regularly.
The VLE provides a range of resources for EMFSS courses:
• Self-testing activities: Doing these allows you to test your own understanding of subject material.


• Electronic study materials: The printed materials that you receive from the University of London are available to download, including updated reading lists and references.
• Past examination papers and Examiners’ commentaries: These provide advice on how each examination question might best be answered.
• A student discussion forum: This is an open space for you to discuss interests and experiences, seek support from your peers, work collaboratively to solve problems and discuss subject material.
• Videos: There are recorded academic introductions to the subject, interviews and debates and, for some courses, audio-visual tutorials and conclusions.
• Recorded lectures: For some courses, where appropriate, the sessions from previous years’ Study Weekends have been recorded and made available.
• Study skills: Expert advice on preparing for examinations and developing your digital literacy skills.
• Feedback forms.
Some of these resources are available for certain courses only, but we are expanding our provision all the time and you should check the VLE regularly for updates.

Making use of the Online Library
The Online Library contains a huge array of journal articles and other resources to help you read widely and extensively.
To access the majority of resources via the Online Library you will either need to use your University of London Student Portal login details, or you will be required to register and use an Athens login: http://tinyurl.com/ollathens
The easiest way to locate relevant content and journal articles in the Online
Library is to use the Summon search engine.
If you are having trouble finding an article listed in a reading list, try removing any punctuation from the title, such as single quotation marks, question marks and colons.
For further advice, please see the online help pages: www.external.shl.lon.ac.uk/summon/about.php

1.4 Using the guide
We have already mentioned that this guide is not a textbook. It is important that you read textbooks in conjunction with the guide and that you try problems from them. The Learning activities, and the sample questions at the end of the chapters, in this guide are a very useful resource. You should try them all once you think you have mastered a particular chapter. Do really try them: don’t just simply read the solutions where provided. Make a serious attempt before consulting the solutions.
Note that the solutions are often just sketch solutions, to indicate to you how to answer the questions, but in the examination, you must show all your calculations. It is vital that you develop and enhance your problem-solving skills and the only way to do this is to try lots of examples.

Finally, we often use the symbol □ to denote the end of a proof, where we have finished explaining why a particular result is true. This is just to make it clear where the proof ends and the following text begins.


1.5 Examination
Important: Please note that subject guides may be used for several years.
Because of this we strongly advise you to always check both the current
Regulations, for relevant information about the examination, and the VLE where you should be advised of any forthcoming changes. You should also carefully check the rubric/instructions on the paper you actually sit and follow those instructions.
A Sample examination paper is given as an appendix to this guide. There are no optional topics in this course: you should do them all.
Please do not imagine for a moment that the questions in a real examination will necessarily be very similar to these sample questions.
An examination is designed (by definition) to test you. You will get examination questions that are unlike any questions in this guide. The whole point of examining is to see whether you can apply knowledge in both familiar and unfamiliar settings. The Examiners (nice people though they are) have an obligation to surprise you! For this reason, it is important that you try as many examples as possible: from the guide and from the textbooks. This is not so that you can cover any possible type of question the Examiners can think of! It’s so that you get used to confronting unfamiliar questions, grappling with them, and finally coming up with the solution.
Do not panic if you cannot completely solve an examination question.
There are many marks to be awarded for using the correct approach or method. Remember, it is important to check the VLE for:
• up-to-date information on examination and assessment arrangements for this course
• where available, past examination papers and Examiners’ commentaries for this course which give advice on how each question might best be answered.

1.6 The use of calculators
You will not be permitted to use calculators of any type in the examination. This is not something that you should panic about: the examiners are interested in assessing that you understand the key concepts, ideas, methods and techniques, and will set questions which do not require the use of a calculator.


Chapter 2
Matrices, vectors, and systems of linear equations

Contents
2.1 Introduction .............................................................................................. 7
2.2 Vectors and matrices ................................................................................. 7
2.3 Rank, range, null space, and linear equations ........................................... 15
2.4 Learning outcomes .................................................................................. 22
2.5 Sample examination questions ................................................................. 23
2.6 Answers to or comments on selected activities ......................................... 24
2.7 Sketch answers to or comments on selected questions .............................. 26

Reading
There is no specific essential reading for this chapter. It is essential that you do some reading, but the topics discussed in this chapter are adequately covered in so many texts on ‘linear algebra’ that it would be artificial and unnecessarily limiting to specify precise passages from precise texts. The list below gives examples of relevant reading. (For full publication details, see Chapter 1.)
Anton, H. Elementary Linear Algebra. Chapter 1 and Chapter 2.
Lay, David C. Linear Algebra and its Applications. Chapter 1, Chapter 2 and Chapter 3.
Simon, C.P. and Blume, L. Mathematics for Economists. Chapter 7, sections 7.1–7.4; Chapter 9, sections 9.1–9.2.
Anthony, M. and Biggs, N. Mathematics for Economics and Finance. Chapters 15–18, 20 (for revision).

2.1 Introduction
In this chapter we first very briefly revise some basics about vectors and matrices, which will be familiar from 05B Mathematics 2. We then explore some new topics, in particular the rank of a matrix, and the null space and range of a matrix.

2.2 Vectors and matrices
An n-vector v is a list of n numbers, written either as a row-vector
(v1 , v2 , . . . , vn ),


or a column-vector
\[
\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}.
\]

The numbers v1 , v2 , and so on are known as the components, entries or coordinates of v. The zero vector, denoted 0, is the vector with all of its entries equal to 0.
The set Rn denotes the set of all vectors of length n, and we usually think of these as column vectors.
We can define addition of two n-vectors by the rule
\[
\begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix}
+ \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}
= \begin{pmatrix} w_1 + v_1 \\ w_2 + v_2 \\ \vdots \\ w_n + v_n \end{pmatrix}.
\]
Also, we can multiply a vector by any single number α (usually called a scalar in this context) by the following rule:
\[
\alpha \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}
= \begin{pmatrix} \alpha v_1 \\ \alpha v_2 \\ \vdots \\ \alpha v_n \end{pmatrix}.
\]

For vectors v1 , v2 , . . . , vk and numbers α1 , α2 , . . . , αk , the vector α1 v1 + · · · + αk vk is known as a linear combination of the vectors v1 , . . . , vk .
A matrix is an array of numbers
\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}.
\]
We denote this array by the single letter A, or by (aij), and we say that A has m rows and n columns, or that it is an m × n matrix. We also say that A is a matrix of size m × n. If m = n, the matrix is said to be square. The number aij is known as the (i, j)th entry of A. The row vector (ai1, ai2, . . . , ain) is row i of A, or the ith row of A, and the column vector
\[
\begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{pmatrix}
\]
is column j of A, or the jth column of A.
It is useful to think of row and column vectors as matrices. For example, we may think of the row vector (1, 2, 4) as being the same as the 1 × 3 matrix ( 1 2 4 ). (Indeed, the only visible difference is that the vector has commas and the matrix does not, and this is merely a notational difference.)


Recall that if A and B are two matrices of the same size then we define A + B to be the matrix whose elements are the sums of the corresponding elements in A and B. Formally, the (i, j)th entry of the matrix A + B is aij + bij where aij and bij are the (i, j)th entries of A and B, respectively. Also, if c is a number, we define cA to be the matrix whose elements are c times those of A; that is, cA has (i, j)th entry caij . If A and B are matrices such that the number (say n) of columns of A is equal to the number of rows of B, we define the product C = AB to be the matrix whose elements are cij = ai1 b1j + ai2 b2j + · · · + ain bnj .
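If you have Python available, the short sketch below (using NumPy, which is not part of this course and not needed for the examination) illustrates these definitions; the matrices used are arbitrary illustrative values, not taken from the guide.

```python
import numpy as np

# Two matrices of the same size (arbitrary illustrative values).
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[0, 1, 0],
              [2, 0, 1]])

print(A + B)      # (i,j)th entry is a_ij + b_ij
print(3 * A)      # (i,j)th entry is 3*a_ij

# For AB, the number of columns of A must equal the number of rows of B.
# B.T is 3 x 2, so A @ B.T is defined and is 2 x 2.
C = A @ B.T
print(C)

# The (i,j)th entry of the product is a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj.
i, j = 0, 1
print(C[i, j] == sum(A[i, k] * (B.T)[k, j] for k in range(A.shape[1])))
```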
The transpose of a matrix
\[
A = (a_{ij}) = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}
\]
is the matrix
\[
A^T = (a_{ji}) = \begin{pmatrix}
a_{11} & a_{21} & \cdots & a_{m1} \\
a_{12} & a_{22} & \cdots & a_{m2} \\
\vdots & \vdots & \ddots & \vdots \\
a_{1n} & a_{2n} & \cdots & a_{mn}
\end{pmatrix}
\]
that is obtained from A by interchanging rows and columns.
Example: If
\[
A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 1 & 5 & 3 \end{pmatrix},
\]
then
\[
A^T = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}, \qquad B^T = \begin{pmatrix} 1 \\ 5 \\ 3 \end{pmatrix}.
\]

The transpose has the following properties (see Anton, or Simon and Blume):
\[
(A^T)^T = A, \qquad (A + B)^T = A^T + B^T, \qquad (AB)^T = B^T A^T.
\]

The first two properties follow immediately from the definition. The last one can be stated as: The transpose of the product of two matrices is the product of the transposes in the reverse order.

Learning activity 2.1
Check these. For the last property, if A is an m × n matrix and B is n × p, look at the dimensions of the matrices (AB)T , AT , B T and
B T AT to understand why the multiplication of the transposes must be in the reverse order.
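A numerical spot-check of these properties is given below (optional; it uses NumPy, which is not required for this course, with arbitrary sizes m = 3, n = 4, p = 2).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 3, 4, 2                        # arbitrary sizes: A is m x n, B is n x p
A  = rng.integers(-5, 5, size=(m, n))
A2 = rng.integers(-5, 5, size=(m, n))    # a second m x n matrix, for the sum property
B  = rng.integers(-5, 5, size=(n, p))

print(np.array_equal(A.T.T, A))                  # (A^T)^T = A
print(np.array_equal((A + A2).T, A.T + A2.T))    # (A + B)^T = A^T + B^T
print(np.array_equal((A @ B).T, B.T @ A.T))      # (AB)^T = B^T A^T
print((A @ B).T.shape, (B.T @ A.T).shape)        # both are p x m, as the activity suggests
```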


We will often write a column vector in the text as the transpose of a row vector, either
\[
v = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
= (\, x_1 \; x_2 \; \cdots \; x_n \,)^T,
\quad \text{or} \quad
v = (x_1, x_2, \cdots, x_n)^T.
\]

2.2.1 Row operations

Recall from 05B Mathematics 2 (for it is not going to be reiterated here in all its gory detail!) that there are three types of elementary row operation one can perform on a matrix. These are:
• multiply a row by a non-zero constant
• add a multiple of one row to another
• interchange (swap) two rows.
In 05B Mathematics 2, row operations were used to solve systems of linear equations by reducing the augmented matrix of the system to echelon form. (Yes, we are afraid you do have to remember this!) For example, the following form is an echelon matrix:


\[
\begin{pmatrix}
1 & * & * & * & \cdots & * & * \\
0 & 0 & 1 & * & \cdots & * & * \\
0 & 0 & 0 & 1 & \cdots & * & * \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 0 & 0
\end{pmatrix},
\]
where the ∗ entries denote any numbers. The ‘1’s which have been indicated are called leading ones.
A matrix in echelon form has the following properties:
(1) every non-zero row begins with a leading one
(2) a leading one in a lower row is further to the right
(3) zero rows are at the bottom of the matrix.
Once the augmented matrix is in echelon form, you can either solve the resulting equations using back substitution, or continue with row operations to further reduce the matrix to reduced echelon form.
A matrix is in reduced echelon form if, in addition to echelon form, it has the property that
(4) every column with a leading one has zeros elsewhere.
The example above has reduced echelon form,
\[
\begin{pmatrix}
1 & * & 0 & 0 & \cdots & * & * \\
0 & 0 & 1 & 0 & \cdots & * & * \\
0 & 0 & 0 & 1 & \cdots & * & * \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 0 & 0
\end{pmatrix},
\]
where the ∗ entries denote any numbers and the indicated ‘1’s are leading ones.
To achieve this using row operations, reduce the augmented matrix to echelon form. Then beginning with the last row add suitable multiples to each row above to get zeros above the leading ones.
Example: Consider the system of equations
\[
\begin{aligned}
x_1 + x_2 + x_3 &= 3 \\
2x_1 + x_2 + x_3 &= 4 \\
x_1 - x_2 + 2x_3 &= 5.
\end{aligned}
\]

We solve this system by reducing the augmented matrix using elementary row operations. (Here, Ri indicates the ith row and the row operations are as indicated.)


\[
(A|b) = \left(\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 2 & 1 & 1 & 4 \\ 1 & -1 & 2 & 5 \end{array}\right)
\xrightarrow[R_3 - R_1]{R_2 - 2R_1}
\left(\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & -1 & -1 & -2 \\ 0 & -2 & 1 & 2 \end{array}\right)
\xrightarrow{(-1)R_2}
\left(\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & 1 & 1 & 2 \\ 0 & -2 & 1 & 2 \end{array}\right)
\]
\[
\xrightarrow{R_3 + 2R_2}
\left(\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & 1 & 1 & 2 \\ 0 & 0 & 3 & 6 \end{array}\right)
\xrightarrow{\frac{1}{3}R_3}
\left(\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & 1 & 1 & 2 \\ 0 & 0 & 1 & 2 \end{array}\right).
\]
This matrix is in echelon form; we continue to reduced echelon form:
\[
\xrightarrow[R_2 - R_3]{R_1 - R_3}
\left(\begin{array}{ccc|c} 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 2 \end{array}\right)
\xrightarrow{R_1 - R_2}
\left(\begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 2 \end{array}\right).
\]
The solution can now be read directly from the matrix. This system of equations is equivalent to
\[
x_1 = 1, \qquad x_2 = 0, \qquad x_3 = 2.
\]

The system has a unique solution.
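If you wish, you can cross-check this kind of hand reduction with SymPy (a Python library; this is optional and only a sketch), which computes the reduced echelon form directly:

```python
from sympy import Matrix

# Augmented matrix (A|b) of the system above.
Ab = Matrix([[1,  1, 1, 3],
             [2,  1, 1, 4],
             [1, -1, 2, 5]])

R, pivots = Ab.rref()
print(R)        # Matrix([[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 2]])
print(pivots)   # (0, 1, 2): leading ones in the first three columns
```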

2.2.2 Determinants

The determinant of a square matrix A is a particular number associated with A, written det A or |A|. When A is a 2 × 2 matrix, the determinant is given by the formula
\[
\det \begin{pmatrix} a & b \\ c & d \end{pmatrix}
= \begin{vmatrix} a & b \\ c & d \end{vmatrix}
= ad - bc.
\]
For example,
\[
\det \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix} = 1 \times 3 - 2 \times 2 = -1.
\]

To extend this definition to n × n matrices, we need two definitions.
Definition 2.1 The (i, j) minor of A, denoted by Mij , is the determinant of the (n − 1) × (n − 1) matrix obtained by removing the ith row and jth column of A.
Definition 2.2 The (i, j) cofactor of a matrix A is
Cij = (−1)i+j Mij .
Example: Let
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 1 & 1 \\ -1 & 3 & 0 \end{pmatrix}.
\]
Then the minor M23 and the cofactor C23 are
\[
M_{23} = \begin{vmatrix} 1 & 2 \\ -1 & 3 \end{vmatrix} = 5
\quad \text{and} \quad
C_{23} = (-1)^{2+3} M_{23} = -\begin{vmatrix} 1 & 2 \\ -1 & 3 \end{vmatrix} = -5.
\]

There is a simple way to associate the cofactor Cij with the entry aij of the matrix. Locate the entry aij and cross out the row and the column containing aij. Then evaluate the determinant of the (n − 1) × (n − 1) matrix which remains. This is the minor, Mij. Then give it a “+” or “−” sign according to the position of aij on the following pattern:
\[
\begin{pmatrix}
+ & - & + & - & \cdots \\
- & + & - & + & \cdots \\
+ & - & + & - & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\]
If A is an n × n matrix, the determinant of A is given by
\[
|A| = \begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{vmatrix}
\stackrel{\text{def}}{=} a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n}.
\]
(Here, \(X \stackrel{\text{def}}{=} Y\) means that X is defined to be equal to Y.) This is called the cofactor expansion of det(A) by row one. It is a recursive definition, meaning that the determinant of a 3 × 3 matrix is defined as a sum of 2 × 2 determinants, and the determinant of an n × n matrix is given in terms of (n − 1) × (n − 1) determinants.
Example: We calculate the determinant of the matrix A above:
\[
|A| = 1C_{11} + 2C_{12} + 3C_{13}
= 1\begin{vmatrix} 1 & 1 \\ 3 & 0 \end{vmatrix}
- 2\begin{vmatrix} 4 & 1 \\ -1 & 0 \end{vmatrix}
+ 3\begin{vmatrix} 4 & 1 \\ -1 & 3 \end{vmatrix}
= 1(-3) - 2(1) + 3(13) = 34.
\]


For very large matrices this method can involve too many computations, and it is preferable to use row operations to reduce the matrix to an upper triangular matrix as shown in 05B Mathematics 2.
However, it turns out that the cofactor expansion is much more useful than it first appears, due to the following theorem (for a proof, see Anton).

Theorem 2.1 If A is an n × n matrix, then the determinant of A can be computed by multiplying the entries of any row (or column) by their cofactors and summing the resulting products:
\[
|A| = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in} \qquad \text{(cofactor expansion by row } i\text{)}
\]
\[
|A| = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj} \qquad \text{(cofactor expansion by column } j\text{).}
\]

This allows you to choose any row or any column of a matrix to find its determinant using a cofactor expansion.
Example: In the example above, instead of using the cofactor expansion by row one as shown, we can choose to evaluate the determinant of the matrix A by row three or column three, which will involve fewer calculations since a33 = 0. To check the result |A| = 34, we will evaluate the determinant again using column three. Remember the correct cofactor signs,
\[
|A| = \begin{vmatrix} 1 & 2 & 3 \\ 4 & 1 & 1 \\ -1 & 3 & 0 \end{vmatrix}
= 3\begin{vmatrix} 4 & 1 \\ -1 & 3 \end{vmatrix}
- 1\begin{vmatrix} 1 & 2 \\ -1 & 3 \end{vmatrix}
+ 0 = 3(13) - (5) = 34.
\]
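The cofactor expansion translates directly into a short recursive program. The sketch below (plain Python, optional) expands along the first row exactly as in the definition and reproduces |A| = 34 for the matrix above; it is not an efficient method for large matrices. You can also use it to check your answers to the learning activity that follows.

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_1j: delete the first row and the (j+1)th column.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        cofactor = (-1) ** j * det(minor)   # C_1j = (-1)^(1+j) * M_1j
        total += A[0][j] * cofactor
    return total

A = [[1, 2, 3],
     [4, 1, 1],
     [-1, 3, 0]]
print(det(A))   # 34, agreeing with both expansions above
```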

Learning activity 2.2
Calculate the determinant of each of the following matrices. For A, use repeated cofactor expansions along appropriate rows; for B use row operations, or a combination of both methods.
\[
A = \begin{pmatrix} 7 & 5 & 2 & 3 \\ 2 & 0 & 0 & 0 \\ 23 & 57 & 1 & -1 \\ 11 & 2 & 0 & 0 \end{pmatrix},
\qquad
B = \begin{pmatrix} 1 & -4 & 3 & 2 \\ 2 & -7 & 5 & 1 \\ 1 & 2 & 6 & 0 \\ 2 & -10 & 14 & 4 \end{pmatrix}.
\]

2.2.3 Invertible matrices

In 05B Mathematics 2 you learned two methods to calculate the inverse of a 3 × 3 matrix – either using determinants or using row operations.
Both of these methods generalise to n × n matrices. Let A be an n × n matrix.
Inverse using determinants: If |A| ≠ 0, then A−1 = (1/|A|) adj(A), where adj(A) is the transpose of the matrix of cofactors, and is known as the adjoint matrix of A or the adjugate matrix of A (according to which textbook you are reading; see Anton):
\[
A^{-1} = \frac{1}{|A|}\begin{pmatrix}
C_{11} & C_{21} & \cdots & C_{n1} \\
C_{12} & C_{22} & \cdots & C_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
C_{1n} & C_{2n} & \cdots & C_{nn}
\end{pmatrix}.
\]
Note that the (i, j) entry of adj(A) is the cofactor Cji .
Inverse using row operations: Form the matrix (A I) by placing the n × n identity matrix next to the matrix A. If (A I) can be reduced to
(I B) using row operations, then A is invertible and A−1 = B. If A cannot be reduced to the identity matrix, then A is not invertible.
You should also recall the fact that the system of equations Ax = b has a unique solution for any b ∈ Rn if and only if A is invertible. In particular, the system Ax = 0 has only the solution x = 0 (known as the trivial solution) if and only if A is invertible. We collect these results in the following theorem.
Theorem 2.2 If A is an n × n matrix, then the following statements are equivalent (meaning if any one of these statements is true for A, then all the statements are true).
A−1 exists.
Ax = b has a unique solution for any b ∈ Rn .
Ax = 0 has only the trivial solution, x = 0.
The reduced echelon form of A is I.
|A| ≠ 0.

Learning activity 2.3
Review the linear algebra section of 05B Mathematics 2. In particular, note how the above statements are connected there, and why one implies the other.

Learning activity 2.4
If A is the matrix
\[
A = \begin{pmatrix} 5 & 4 & -2 \\ -2 & -4 & 3 \\ 3 & 1 & 0 \end{pmatrix},
\]

show that the system of equations Ax = b has a unique solution for any b ∈ R3 .
Find A−1 .
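Once you have attempted Learning activity 2.4 by hand, an optional SymPy check of your answers might look like this:

```python
from sympy import Matrix

A = Matrix([[ 5,  4, -2],
            [-2, -4,  3],
            [ 3,  1,  0]])

print(A.det())   # the determinant is non-zero, so by Theorem 2.2 the system
                 # Ax = b has a unique solution for every b
print(A.inv())   # compare with the inverse you found by hand
```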


2.3 Rank, range, null space, and linear equations

2.3.1 The rank of a matrix
There are several ways of defining the rank of a matrix, and we shall meet some other (more sophisticated) ways later. All are equivalent.
We begin with the following definition.
Definition 2.3 (Rank of a matrix) The rank, rank(A), of a matrix A is the number of non-zero rows in an echelon matrix obtained from A by elementary row operations.
By a non-zero row, we simply mean one that contains entries other than 0. Since every non-zero row of the echelon form begins with a leading one, this is equivalent to the following definition.
Definition 2.4 The rank, rank(A), of a matrix A is the number of leading ones in an echelon matrix obtained from A by elementary row operations.

Example: Consider the matrix
\[
A = \begin{pmatrix} 1 & 2 & 1 & 1 \\ 2 & 2 & 0 & 2 \\ 3 & 4 & 1 & 3 \end{pmatrix}.
\]
Reducing this to echelon form using elementary row operations, we have:
\[
\begin{pmatrix} 1 & 2 & 1 & 1 \\ 2 & 2 & 0 & 2 \\ 3 & 4 & 1 & 3 \end{pmatrix}
\to
\begin{pmatrix} 1 & 2 & 1 & 1 \\ 0 & -2 & -2 & 0 \\ 0 & -2 & -2 & 0 \end{pmatrix}
\to
\begin{pmatrix} 1 & 2 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & -2 & -2 & 0 \end{pmatrix}
\to
\begin{pmatrix} 1 & 2 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]
This last matrix is in echelon form and has two non-zero rows (and two leading ones), so the matrix A has rank 2.
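An optional computational cross-check of this rank calculation (again using SymPy):

```python
from sympy import Matrix

A = Matrix([[1, 2, 1, 1],
            [2, 2, 0, 2],
            [3, 4, 1, 3]])

print(A.rank())      # 2
R, pivots = A.rref()
print(R)             # reduced echelon form: two non-zero rows
print(pivots)        # (0, 1): the leading ones are in the first two columns
```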

Learning activity 2.5
Prove that the matrix
\[
A = \begin{pmatrix} 3 & 1 & 1 & 3 \\ 1 & -1 & -1 & 1 \\ 1 & 2 & 2 & 1 \end{pmatrix}
\]
has rank 2.

Generally, if A is an m × n matrix, then the number of non-zero rows (the number of leading ones) in a row echelon form of A can certainly be no more than the total number of rows, m. Furthermore, since the leading ones must be in different columns, the number of leading ones in the echelon form can be no more than the total number, n, of columns. Thus we have:
Theorem 2.3 For an m × n matrix A, rank(A) ≤ min{m, n}, where min{m, n} denotes the smaller of the two integers m and n.
If a square matrix A of size n × n has rank n, then its reduced echelon form has a leading one in every row and a leading one in every column.
Since every column with a leading one has zeros elsewhere, it follows that the reduced echelon form of A must be I, the n × n identity matrix. We therefore have one more equivalent statement to add to our theorem:
Theorem 2.4 If A is an n × n matrix, then the following statements are equivalent.
A−1 exists.
Ax = b has a unique solution for any b ∈ Rn .
Ax = 0 has only the trivial solution, x = 0.
The reduced echelon form of A is I.
|A| ≠ 0.
The rank of A is n.

2.3.2 Rank and systems of linear equations
Recall that to solve a system of linear equations, one forms the augmented matrix and reduces it to echelon form by using elementary row operations.
Example: Consider the system of equations
\[
\begin{aligned}
x_1 + 2x_2 + x_3 &= 1 \\
2x_1 + 2x_2 &= 2 \\
3x_1 + 4x_2 + x_3 &= 2.
\end{aligned}
\]

Using row operations to reduce the augmented matrix to echelon form, we obtain
\[
\left(\begin{array}{ccc|c} 1 & 2 & 1 & 1 \\ 2 & 2 & 0 & 2 \\ 3 & 4 & 1 & 2 \end{array}\right)
\to
\left(\begin{array}{ccc|c} 1 & 2 & 1 & 1 \\ 0 & -2 & -2 & 0 \\ 0 & -2 & -2 & -1 \end{array}\right)
\to
\left(\begin{array}{ccc|c} 1 & 2 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & -2 & -2 & -1 \end{array}\right)
\to
\left(\begin{array}{ccc|c} 1 & 2 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{array}\right).
\]
Thus the original system of equations is equivalent to the system
\[
\begin{aligned}
x_1 + 2x_2 + x_3 &= 1 \\
x_2 + x_3 &= 0 \\
0x_1 + 0x_2 + 0x_3 &= -1.
\end{aligned}
\]

But this system has no solutions, since there are no values of x1 , x2 , x3 that satisfy the last equation. It reduces to the false statement
‘0 = −1’, whatever values we give the unknowns. We deduce, therefore, that the original system has no solutions. Notice that in this case there is no reason to reduce the matrix further.
If, as in the example just given, the echelon form of an augmented matrix has a row of the kind (0 0 . . . 0 a), with a ≠ 0, then the original system is equivalent to one in which there is an equation
0x1 + 0x2 + · · · + 0xn = a (a ≠ 0).
Clearly this equation cannot be satisfied by any values of the xi s, and we say that such a system is inconsistent.
If there is no row of this kind, the system has at least one solution, and we say that it is consistent.
The number of non-zero rows in the echelon form of the augmented matrix is known as the rank of the system. Thus the rank of the system is the rank of the augmented matrix. Note, however, that if the system is consistent then there can be no leading one in the last column of the reduced augmented matrix, for that would mean there was a row of the form (0 0 . . . 0 1). Thus, in fact, the rank of a consistent system
Ax = b is precisely the same as the rank of the matrix A.
Suppose we have a consistent system, and suppose that the rank r is strictly less than n, the number of unknowns. Then the system in echelon form (and hence the original one) does not provide enough information to specify the values of x1 , x2 , . . . , xn uniquely.
Example: Suppose we are given a system for which the augmented matrix reduces to the echelon form
\[
\left(\begin{array}{cccccc|c}
1 & 3 & -2 & 0 & 2 & 0 & 0 \\
0 & 0 & 1 & 2 & 0 & 3 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 & 5 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right).
\]
Here the rank (number of non-zero rows) is r = 3 which is strictly less than the number of unknowns, n = 6.
Continuing to reduced echelon form, we obtain the matrix


\[
\left(\begin{array}{cccccc|c}
1 & 3 & 0 & 4 & 2 & 0 & -28 \\
0 & 0 & 1 & 2 & 0 & 0 & -14 \\
0 & 0 & 0 & 0 & 0 & 1 & 5 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right).
\]

Learning activity 2.6
Verify this. What are the additional two row operations which need to be carried out?


The corresponding system is
\[
\begin{aligned}
x_1 + 3x_2 + 4x_4 + 2x_5 &= -28 \\
x_3 + 2x_4 &= -14 \\
x_6 &= 5.
\end{aligned}
\]

The variables x1 , x3 and x6 correspond to the columns with the leading ones and are called leading variables. The other variables are called non-leading variables.
Solving for the leading variables x1, x3 and x6 in terms of x2, x4 and x5 we get
\[
x_6 = 5, \qquad x_3 = -14 - 2x_4, \qquad x_1 = -28 - 3x_2 - 4x_4 - 2x_5.
\]

The form of these equations tells us that we can assign any values to x2 , x4 and x5 , and then the leading variables will be determined.
Explicitly, if we give x2 , x4 , x5 the arbitrary values s, t, u, where s, t, u represent any real numbers, the solution is given by x1 = −28−3s−4t−2u, x2 = s, x3 = −14−2t, x4 = t, x5 = u, x6 = 5.
Observe that there are infinitely many solutions, because the so-called
‘free unknowns’ x2 , x4 , x5 can take any values s, t, u ∈ R.
Generally, we can describe what happens when the echelon form has r < n non-zero rows (0 0 . . . 0 1 ∗ ∗ . . . ∗). If the leading 1 is in the kth column it is the coefficient of the unknown xk . So if the rank is r and the leading 1s occur in columns c1 , c2 , . . . , cr then the general solution to the system can be expressed in a form where the unknowns xc1 , xc2 , . . . , xcr (the leading variables) are given in terms of the other n − r unknowns (the non-leading variables), and those n − r unknowns are free to take any values. In the preceding example, we have n = 6 and r = 3, and the 3 unknowns x1 , x3 , x6 can be expressed in terms of the 6 − 3 = 3 free unknowns x2 , x4 , x5 .
In the case r = n, where the number of non-zero rows r in the echelon form is equal to the number of unknowns n, there is only one solution to the system — for there is a leading one in every column since the leading 1s move one step to the right as we go down the rows. In this case there is a unique solution obtained from the reduced echelon form.
In fact, this can be thought of as a special case of the more general one discussed above: since r = n there are n − r = 0 free unknowns, and the solution is therefore unique.
We can now summarise our conclusions concerning a general linear system. If the echelon form has a row (0 0 . . . 0 a), with a ≠ 0, the original system is inconsistent; it has no solutions.
If the echelon form has no rows of the above type it is consistent, and the general solution involves n − r free unknowns, where r is the rank. When r < n there are infinitely many solutions, but when r = n there are no free unknowns and so there is a unique solution.
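These three possibilities (inconsistent, unique solution, infinitely many solutions) can also be explored computationally. The optional SymPy sketch below revisits the coefficient matrix used earlier in this section: with right-hand side b the system is inconsistent, while with b2 it is consistent of rank 2 < n = 3 and so has one free unknown.

```python
from sympy import Matrix, linsolve, symbols

x1, x2, x3 = symbols('x1 x2 x3')

A = Matrix([[1, 2, 1],
            [2, 2, 0],
            [3, 4, 1]])
b = Matrix([1, 2, 2])    # the inconsistent system from the example above

print(A.rank(), Matrix.hstack(A, b).rank())   # 2 and 3: the ranks differ, so no solutions
print(linsolve((A, b), x1, x2, x3))           # EmptySet

b2 = Matrix([1, 2, 3])   # a consistent right-hand side: rank 2 < 3 unknowns
print(linsolve((A, b2), x1, x2, x3))          # a one-parameter family of solutions
```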


2.3.3 General solution of a linear system in vector notation
Consider the system solved above. We found that the general solution in terms of the three free unknowns, or parameters, s, t, u is x1 = −28−3s−4t−2u, x2 = s, x3 = −14−2t, x4 = t, x5 = u, x6 = 5.
If we write x as a column vector,
\[
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{pmatrix},
\]
then
\[
x = \begin{pmatrix} -28 - 3s - 4t - 2u \\ s \\ -14 - 2t \\ t \\ u \\ 5 \end{pmatrix}
= \begin{pmatrix} -28 \\ 0 \\ -14 \\ 0 \\ 0 \\ 5 \end{pmatrix}
+ \begin{pmatrix} -3s \\ s \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
+ \begin{pmatrix} -4t \\ 0 \\ -2t \\ t \\ 0 \\ 0 \end{pmatrix}
+ \begin{pmatrix} -2u \\ 0 \\ 0 \\ 0 \\ u \\ 0 \end{pmatrix}.
\]
That is, the general solution is
\[
x = v + su_1 + tu_2 + uu_3, \qquad s, t, u \in \mathbb{R},
\]
where
\[
v = \begin{pmatrix} -28 \\ 0 \\ -14 \\ 0 \\ 0 \\ 5 \end{pmatrix}, \quad
u_1 = \begin{pmatrix} -3 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad
u_2 = \begin{pmatrix} -4 \\ 0 \\ -2 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad
u_3 = \begin{pmatrix} -2 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}.
\]
Applying the same method generally to a consistent system of rank r with n unknowns, we can express the general solution of a consistent system Ax = b in the form x = v + s1 u1 + s2 u2 + · · · + sn−r un−r .
Note that, if we put all the si s equal to 0, we get a solution x = v, which means that Av = b, so v is a particular solution of the system.
Putting s1 = 1 and the remaining si s equal to zero, we get a solution x = v + u1 , which means that A(v + u1 ) = b. Thus b = A(v + u1 ) = Av + Au1 = b + Au1 .
Comparing the first and last expressions, we see that Au1 is the zero vector 0. Clearly, the same equation holds for u2 , . . . , un−r . So we have proved the following.
If A is an m × n matrix of rank r, the general solution of Ax = b is the sum of:
• a particular solution v of the system Ax = b and
• a linear combination s1 u1 + s2 u2 + · · · + sn−r un−r of solutions u1 , u2 , . . . , un−r of the system Ax = 0.
If A has rank n, then Ax = 0 only has the solution x = 0, and so
Ax = b has a unique solution: v + 0 = v.
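You can verify this structure concretely for the example above. Since only the echelon form of that augmented matrix is given (and row operations do not change the solution set), the optional SymPy sketch below checks v and the ui against the echelon-form system:

```python
from sympy import Matrix, zeros

# Echelon form of the augmented matrix from the example above,
# split into the coefficient part and the right-hand side.
A = Matrix([[1, 3, -2, 0, 2, 0],
            [0, 0,  1, 2, 0, 3],
            [0, 0,  0, 0, 0, 1],
            [0, 0,  0, 0, 0, 0]])
b = Matrix([0, 1, 5, 0])

v  = Matrix([-28, 0, -14, 0, 0, 5])
u1 = Matrix([-3, 1, 0, 0, 0, 0])
u2 = Matrix([-4, 0, -2, 1, 0, 0])
u3 = Matrix([-2, 0, 0, 0, 1, 0])

print(A * v == b)                                      # True: v is a particular solution
print(all(A * u == zeros(4, 1) for u in (u1, u2, u3)))  # True: each u_i solves Ax = 0
```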

Learning activity 2.7
Solve the following system of equations Ax = b by reducing the augmented matrix to reduced echelon form:
\[
\begin{aligned}
x_1 - x_2 + x_3 + x_4 + 2x_5 &= 4 \\
-x_1 + x_2 + x_4 - x_5 &= -3 \\
x_1 - x_2 + 2x_3 + 3x_4 + 4x_5 &= 7.
\end{aligned}
\]
Show that your solution can be written in the form p + su1 + tu2 where Ap = b, Au1 = 0 and Au2 = 0.

2.3.4 Null space
It is clear from what we have just seen that the general solution to a consistent linear system involves solutions to the system Ax = 0. This set of solutions is given a special name: the null space or kernel of a matrix A. This null space, denoted N (A), is the set of all solutions x to Ax = 0, where 0 is the zero vector. That is,
Definition 2.5 (Null space) For an m × n matrix A, the null space of
A is the subset
N (A) = {x ∈ Rn : Ax = 0} of Rn , where 0 = (0, 0, . . . , 0)T is the zero vector of Rm .
Note that the linear system Ax = 0 is always consistent since, for example, 0 is a solution. It follows from the discussion above that the general solution to Ax = 0 involves n − r free unknowns, where r is the rank of A (and that, if r = n, then there is just one solution, namely x = 0).
Example: Referring back to the example on page 16, consider the system of equations Ax = b given by
x1 + 2x2 + x3 = 1
2x1 + 2x2 = 2
3x1 + 4x2 + x3 = 2.

We showed that this system is inconsistent. Now consider instead the system of equations Ax = 0. Using the same row operations to reduce


the matrix A to echelon form as we did previously for the augmented matrix, and continuing to reduced row echelon form, we obtain








A = ( 1 2 1 )      ( 1  2  1 )      ( 1 1 1 → wait )
Setting the non-leading variable x3 = t, we find that the null space of A consists of all vectors
x = t(1, −1, 1)T,   t ∈ R.
The system of equations Ax = 0 has infinitely many solutions.
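As a quick computational check of this example (our own sketch, using Python's sympy library, which is not part of the guide), the reduced echelon form and a basis of the null space can be obtained directly:

    from sympy import Matrix

    A = Matrix([[1, 2, 1],
                [2, 2, 0],
                [3, 4, 1]])
    print(A.rref())        # reduced form with rows (1 0 -1), (0 1 1), (0 0 0); pivots (0, 1)
    print(A.nullspace())   # a single basis vector proportional to (1, -1, 1)^T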
We now formalise the connection between the solution set of a general consistent linear system, and the null space of the matrix determining the system.
Theorem 2.5 Suppose that A is an m × n matrix, that b ∈ Rm, and that the system Ax = b is consistent. Suppose that x0 is any solution of Ax = b. Then the set of all solutions of Ax = b consists precisely of the vectors x0 + z for z ∈ N (A); that is,
{x : Ax = b} = {x0 + z : z ∈ N (A)}.
Proof To show the two sets are equal, we show that each is a subset of the other. Suppose, first, that x is any solution of Ax = b. Because x0 is also a solution, we have
A(x − x0 ) = Ax − Ax0 = b − b = 0, so the vector z = x − x0 is a solution of the system Az = 0; in other words, z ∈ N (A). But then x = x0 + (x − x0 ) = x0 + z where z ∈ N (A). This shows that
{x : Ax = b} ⊆ {x0 + z : z ∈ N (A)}.
Conversely, if z ∈ N (A) then
A(x0 + z) = Ax0 + Az = b + 0 = b, so x0 + z ∈ {x : Ax = b}. This shows that
{x0 + z : z ∈ N (A)} ⊆ {x : Ax = b}.
So the two sets are equal, as required.

2.3.5 Range
The range of a matrix A is defined as follows.
Definition 2.6 (Range of a matrix) Suppose that A is an m × n matrix. Then the range of A, denoted by R(A), is the subset
R(A) = {Ax : x ∈ Rn } of Rm . That is, the range is the set of all vectors y ∈ Rm of the form y = Ax for some x ∈ Rn .


Suppose that the columns of A are c1 , c2 , . . . , cn . Then we may write
A = (c1 c2 . . . cn ). If x = (α1 , α2 , . . . , αn )T ∈ Rn , then the product
Ax is equal to α1 c1 + α2 c2 + · · · + αn cn .

Learning activity 2.8
Convince yourself of this last statement. Write out each side using ci = (c1i , c2i , . . . , cmi )T to show that
Ax = α1 c1 + α2 c2 + · · · + αn cn .
This is a very important result which will be used many times in this course, so make sure you understand how it works.

So, R(A), as the set of all such products, is the set of all linear combinations of the columns of A. For this reason R(A) is also called the column space of A. (More on this in the next chapter.)

Example: Suppose that
A = (  1 2 )
    ( −1 3 )
    (  2 1 ).
Then for x = (α1, α2)T,
Ax = (α1 + 2α2, −α1 + 3α2, 2α1 + α2)T,
so
R(A) = { (α1 + 2α2, −α1 + 3α2, 2α1 + α2)T : α1, α2 ∈ R }.
This may also be written as
R(A) = { α1c1 + α2c2 : α1, α2 ∈ R },
where c1 = (1, −1, 2)T and c2 = (2, 3, 1)T are the columns of A.
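The fact that Ax is exactly the linear combination α1c1 + α2c2 of the columns can also be verified numerically. The sketch below is our own illustration (using Python's numpy library, not part of the guide) with the matrix A of this example and an arbitrary choice of α1, α2.

    import numpy as np

    A = np.array([[1., 2.],
                  [-1., 3.],
                  [2., 1.]])
    c1, c2 = A[:, 0], A[:, 1]
    alpha1, alpha2 = 2.0, -1.0
    x = np.array([alpha1, alpha2])
    print(A @ x)                                       # [ 0. -5.  3.]
    print(alpha1 * c1 + alpha2 * c2)                   # the same vector
    print(np.allclose(A @ x, alpha1 * c1 + alpha2 * c2))  # True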

2.4 Learning outcomes

At the end of this chapter and the relevant reading, you should be able to:
• solve systems of linear equations and calculate determinants (all of this being revision)
• compute the rank of a matrix and understand how the rank of a matrix relates to the solution set of a corresponding system of linear equations
• determine the solution to a linear system using vector notation
• explain what is meant by the range and null space of a matrix and be able to determine these for a given matrix.

2.5 Sample examination questions
The following are typical exam questions, or parts of such questions.
Question 2.1 Find the general solution of the following system of linear equations. Express your answer in the form x = v + su, where v and u are in R3 .
3x1 + x2 + x3 = 3
x1 − x2 − x3 = 1
x1 + 2x2 + 2x3 = 1.

Question 2.2 Write down the augmented matrix for the following system of equations, and use it to solve the system by reducing the augmented matrix to reduced echelon form. Express your answer in vector form.
x1 + x2 + x3 + x5 = 1
3x1 + 3x2 + 6x3 + 3x4 + 9x5 = 6
2x1 + 2x2 + 4x3 + x4 + 6x5 = 5.

Question 2.3 Find the rank of the matrix


C = ( 3 2 1 3 )
    ( 2 1 1 0 )
    ( 6 2 4 6 ).
(a) If C is the augmented matrix of a system of equations Ax = b,
C = (A|b), find all possible solutions.
(b) If C is the coefficient matrix of a system of homogeneous equations, Cx = 0, find all possible solutions.
Question 2.4 Find the rank of the matrix
A = ( 1 0  1 0 2 )
    ( 2 1  1 1 3 )
    ( 1 3 −1 2 2 )
    ( 0 3 −2 2 0 ).
Determine N(A), the null space of A, and R(A), the range of A.
Question 2.5 Consider the system of linear equations Ax = b, where λ and µ are constants,
A = ( 1  2 0 )
    ( 5  1 λ )
    ( 1 −1 1 ),   x = (x, y, z)T,   b = (2, 7, µ)T.
Compute the determinant of A, |A|. Determine for which values of λ and µ this system has:


(a) a unique solution
(b) no solutions
(c) infinitely many solutions.
In case (a), use Cramer’s rule to find the value of z in terms of λ and µ. In case (c), solve the system using row operations and express the solution in vector form, x = p + tv.

2.6 Answers to or comments on selected activities

Activity 2.2 The answers are |A| = 20 and |B| = −150.
For |A|, expand first by row 2 and then by row 3 of the resulting 3 × 3 determinant:

         | 5  2  3 |            | 2  3 |
|A| = −2 | 57 1 −1 |  =  −2 (2) | 1 −1 |  =  −2(2)(−5) = 20.
         | 2  0  0 |

For |B|, using row operations to create zeros in the first column and then expanding by cofactors, the calculation reduces to |B| = 6(−25) = −150.

Activity 2.4 By Theorem 2.2, there are a number of ways of proving this. For example, we can check that the determinant (expanding by the first row) is
|A| = 5(−3) − 4(−9) − 2(10) = 1 ≠ 0 and the Theorem now applies, since having a nonzero determinant is equivalent to there being a unique solution to the system Ax = b for any b.
The inverse matrix is
A−1 = ( −3 −2   4 )
      (  9  6 −11 )
      ( 10  7 −12 ).

You should already know that you have the correct answer, as you should have checked that AA−1 = I !
Activity 2.5 Using row operations, we reduce the matrix as follows:

( 3  1  1 3 )    ( 1 −1 −1 1 )    ( 1 −1 −1 1 )    ( 1 −1 −1 1 )    ( 1 −1 −1 1 )
( 1 −1 −1 1 ) →  ( 3  1  1 3 ) →  ( 0  4  4 0 ) →  ( 0  1  1 0 ) →  ( 0  1  1 0 ).
( 1  2  2 1 )    ( 1  2  2 1 )    ( 0  3  3 0 )    ( 0  1  1 0 )    ( 0  0  0 0 )

Since there are two non-zero rows in this echelon form, rank(A) = 2.
Activity 2.6 The next row operation is R2 − 3R3 followed by
R1 + 2R2 .
Activity 2.7 We reduce the augmented matrix to reduced echelon form:

(A|b) = (  1 −1 1 1  2 |  4 )
        ( −1  1 0 1 −1 | −3 )
        (  1 −1 2 3  4 |  7 )

R2 + R1 and R3 − R1 give

        ( 1 −1 1 1 2 | 4 )
        ( 0  0 1 2 1 | 1 )
        ( 0  0 1 2 2 | 3 ),

then R3 − R2 gives

        ( 1 −1 1 1 2 | 4 )
        ( 0  0 1 2 1 | 1 )
        ( 0  0 0 0 1 | 2 ),

and finally R2 − R3, R1 − 2R3 and R1 − R2 give

        ( 1 −1 0 −1 0 |  1 )
        ( 0  0 1  2 0 | −1 )
        ( 0  0 0  0 1 |  2 ).

The leading variables are x1, x3 and x5. Set the non-leading variables to arbitrary constants, x2 = s and x4 = t, and solve for the leading variables in terms of these parameters, starting with the bottom row:

x5 = 2,   x4 = t,   x3 = −1 − 2t,   x2 = s,   x1 = 1 + s + t.

Hence

x = (1 + s + t, s, −1 − 2t, t, 2)T = (1, 0, −1, 0, 2)T + s(1, 1, 0, 0, 0)T + t(1, 0, −2, 1, 0)T = p + su1 + tu2,

where s, t ∈ R. Note that p is a particular solution and that u1 and u2 are solutions of the system Ax = 0, for

Ap = (4, −3, 7)T = b,   Au1 = 0,   Au2 = 0.



2.7 Sketch answers to or comments on selected questions

Question 2.1 The general solution takes the form v + su where v = (1, 0, 0)T and u = (0, −1, 1)T.
Question 2.2 The reduced echelon form of the augmented matrix is
( 1 1 0 0 −1 | −1 )
( 0 0 1 0  2 |  2 )
( 0 0 0 1  0 | −1 ).
x2 and x5 are non-leading variables. Set x2 = r and x5 = s. Using the equations, solve for the other variables in terms of these:
x = (x1, x2, x3, x4, x5)T = (−1 − r + s, r, 2 − 2s, −1, s)T = (−1, 0, 2, −1, 0)T + r(−1, 1, 0, 0, 0)T + s(1, 0, −2, 0, 1)T,   r, s ∈ R.
Question 2.3 The rank of C is 3. (a) If C is the augmented matrix of a system Ax = b then the system is inconsistent. There is no solution (but if there had been a solution x it would have been a vector in R3). (b) The general solution of Cx = 0 is
x = (x1, x2, x3, x4)T = (−t, t, t, 0)T = t(−1, 1, 1, 0)T,   t ∈ R.
These vectors are in R4.
Question 2.4 Using row operations, the matrix A reduces to echelon form
( 1 0  1  0  2 )
( 0 1 −1  1 −1 )
( 0 0  1 −1  3 )
( 0 0  0  0  0 ).
There are three non-zero rows (three leading ones), so the rank of A is 3.
To find N(A) we need to solve Ax = 0, which is a system of 4 equations in 5 unknowns. Call them x1, x2, . . . , x5. Continuing to reduced echelon form,
( 1 0 0  1 −1 )
( 0 1 0  0  2 )
( 0 0 1 −1  3 )
( 0 0 0  0  0 ).
The leading variables are x1, x2 and x3. Set the non-leading variables x4 = s and x5 = t. Then the solution is
x = (−s + t, −2t, s − 3t, s, t)T = s(−1, 0, 1, 1, 0)T + t(1, −2, −3, 0, 1)T,   s, t ∈ R.
So the null space consists of all vectors of the form x = sv1 + tv2, where v1 = (−1, 0, 1, 1, 0)T and v2 = (1, −2, −3, 0, 1)T. It is a subset of R5.
The range of A can be described as the set of all linear combinations of the columns of A,
R(A) = { α1c1 + α2c2 + α3c3 + α4c4 + α5c5 : αi ∈ R },
where
c1 = (1, 2, 1, 0)T,  c2 = (0, 1, 3, 3)T,  c3 = (1, 1, −1, −2)T,  c4 = (0, 1, 2, 2)T,  c5 = (2, 3, 2, 0)T.
This is a subset of R4. We will find a better way to describe this set when we look at the column space of a matrix in the next chapter.
Question 2.5 |A| = 3λ − 9. (a) If |A| ≠ 0, that is, if λ ≠ 3, then the system will have a unique solution. In this case, using Cramer's rule,4 z = (3 − 3µ)/(λ − 3).
4 See the 05B Mathematics 2 guide to review this.
To answer (b) and (c), reduce the augmented matrix with λ = 3 to echelon form:
(A|b) = ( 1  2 0 | 2 )     ( 1  2 0 |  2    )     ( 1  2 0 |  2    )
        ( 5  1 3 | 7 )  →  ( 0 −9 3 | −3    )  →  ( 0 −3 1 | −1    ).
        ( 1 −1 1 | µ )     ( 0 −3 1 | µ − 2 )     ( 0  0 0 | µ − 1 )
So if λ = 3, this system will be inconsistent if µ ≠ 1, which answers (b).
If λ = 3 and µ = 1 we have (c) infinitely many solutions. Setting µ = 1 and continuing to reduced echelon form,
(A|b) → . . . → ( 1 0  2/3 | 4/3 )
                ( 0 1 −1/3 | 1/3 )
                ( 0 0   0  |  0  ).
The solution can now be read from the matrix. Setting the non-leading variable z = t,
x = (x, y, z)T = (4/3 − (2/3)t, 1/3 + (1/3)t, t)T = (4/3, 1/3, 0)T + t(−2/3, 1/3, 1)T = p + tv,   t ∈ R.
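The determinant and the Cramer's rule value of z can be checked symbolically. The sketch below is our own illustration using Python's sympy library (not part of the guide); the symbol names lam and mu are our choice.

    from sympy import Matrix, symbols, simplify

    lam, mu = symbols('lam mu')
    A = Matrix([[1, 2, 0],
                [5, 1, lam],
                [1, -1, 1]])
    b = Matrix([2, 7, mu])
    print(simplify(A.det()))             # 3*lam - 9
    Az = A.copy()
    Az[:, 2] = b                         # replace the third column by b (Cramer's rule for z)
    print(simplify(Az.det() / A.det()))  # equals (3 - 3*mu)/(lam - 3), as in the answer above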


Chapter 3
Vector spaces, basis, dimension

Contents
3.1  Introduction ........................................... 29
3.2  Vector spaces .......................................... 29
3.3  Subspaces .............................................. 32
3.4  Subspaces connected with matrices ...................... 35
3.5  Linear span ............................................ 36
3.6  Linear independence .................................... 38
3.7  Testing linear independence in Rn ...................... 40
3.8  Bases and dimension .................................... 43
3.9  Finding a basis for a linear span in Rn ................ 48
3.10 Basis and dimension of range and null space ............ 49
3.11 Learning outcomes ...................................... 52
3.12 Sample examination questions ........................... 52
3.13 Answers to or comments on selected activities .......... 54
3.14 Sketch answers to or comments on selected questions .... 58

Reading
There is no specific essential reading for this chapter. It is essential that you do some reading, but the topics discussed in this chapter are adequately covered in so many texts on 'linear algebra' that it would be artificial and unnecessarily limiting to specify precise passages from precise texts. The list below gives examples of relevant reading. (For full publication details, see Chapter 1.)
Anton, H. Elementary Linear Algebra. Chapters 3, 4 and 5.
Lay, David C. Linear Algebra and its Applications. Chapter 4, sections 4.1–4.6.
Ostaszewski, A. Advanced Mathematical Methods. Chapter 1.
Simon, C.P. and Blume, L. Mathematics for Economists. Chapter 11, and Chapter 27.

3.1 Introduction
In this chapter we study the important theoretical concept of a vector space.

3.2 Vector spaces

3.2.1 Definition of a vector space
We know that vectors of Rn can be added together and that they can be ‘scaled’ by real numbers. That is, for every x, y ∈ Rn and every α ∈ R, it makes sense to talk about x + y and αx. Furthermore, these operations of addition and multiplication by a scalar (that is, multiplication by a real number) behave and interact ‘sensibly’, in that,


for example, α(x + y) = αx + αy, α(βx) = (αβ)x, x + y = y + x, and so on.
But it is not only vectors in Rn that can be added and multiplied by scalars. There are other sets of objects for which this is possible.
Consider the set V of all functions from R to R. Then any two of these functions can be added: given f, g ∈ V we simply define the function f + g by
(f + g)(x) = f (x) + g(x).
Also, for any α ∈ R, the function αf is given by
(αf )(x) = α(f (x)).
These operations of addition and scalar multiplication are sometimes said to be pointwise addition and pointwise scalar multiplication. This might seem a bit abstract, but think about what the functions x + x2 and 2x represent: the former is the function x plus the function x2 , and the latter is the function x multiplied by the scalar 2. So this is just a different way of looking at something you are already familiar with. It turns out that V and its rules for addition and multiplication by a scalar satisfy the same key properties as does the set of vectors in
Rn with its addition and scalar multiplication. We refer to a set with an addition and scalar multiplication which behave appropriately as a vector space. We now give the formal definition of a vector space.
Definition 3.1 (Vector space) A (real) vector space V is a non-empty set equipped with an addition operation and a scalar multiplication operation such that for all α, β ∈ R and all u, v, w ∈ V ,
1. u + v ∈ V (closure under addition)
2. u + v = v + u (the commutative law for addition)
3. u + (v + w) = (u + v) + w (the associative law for addition)
4. there is a single member 0 of V, called the zero vector, such that for all v ∈ V, v + 0 = v
5. for every v ∈ V there is an element w ∈ V (usually written as −v), called the negative of v, such that v + w = 0
6. αv ∈ V (closure under scalar multiplication)
7. α(u + v) = αu + αv (distributive law)
8. (α + β)v = αv + βv (distributive law)
9. α(βv) = (αβ)v (associative law for scalar multiplication)
10. 1v = v.
Other properties follow from those listed in the definition. For instance, we can see that 0x = 0 for all x, as follows:
0x = (0 + 0)x = 0x + 0x,


so, adding the negative −0x of 0x to each side,
0 = 0x+(−0x) = (0x+0x)+(−0x) = 0x+(0x+(−0x)) = 0x+0 = 0x.
(A bit sneaky, but just remember the result: 0x = 0.)

Learning activity 3.1
Prove that (−1)x = −x, the negative of the vector x, using a similar argument with 0 = 1 + (−1).

(Note that this definition says nothing at all about ‘multiplying’ together two vectors: the only operations with which the definition is concerned are addition and scalar multiplication.)
A vector space as we have defined it is called a real vector space, to emphasise that the ‘scalars’ α, β and so on are real numbers rather than complex numbers. There is a notion of complex vector space, where the scalars are complex numbers. We shall look at complex vector spaces in a later chapter. Until then, all scalars will be real numbers.

3.2.2 Examples
Example: The set Rn is a vector space with the usual way of adding and scalar multiplying vectors.
Example: The set V of functions from R to R with pointwise addition and scalar multiplication (described earlier in this section) is a vector space. Note that the zero vector in this space is the function that maps every real number to 0—that is, the identically-zero function.
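Pointwise addition and scalar multiplication of functions can be made concrete in a few lines of Python. This sketch is our own illustration of the idea only (the guide itself does not use programming):

    def add_functions(f, g):
        return lambda x: f(x) + g(x)      # (f + g)(x) = f(x) + g(x)

    def scale_function(alpha, f):
        return lambda x: alpha * f(x)     # (alpha f)(x) = alpha * f(x)

    f = lambda x: x           # the function x
    g = lambda x: x**2        # the function x^2
    h = add_functions(f, g)   # the function x + x^2
    k = scale_function(2, f)  # the function 2x
    print(h(3), k(3))         # 12 6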

Learning activity 3.2
Show that all 10 properties of a vector space are satisfied. In particular, if the function f is a vector in this space, what is the vector −f ?

Similarly, if S is any set then the set V of functions f : S → R is a vector space. However, if S is any proper subset of the real numbers (meaning that S ⊂ R, but S ≠ R) and S ≠ {0}, then the set V of functions f : R → S is not a vector space. Why not? Well, let f be any function other than the identically-zero function. Then for some a ∈ R, f(a) ≠ 0. Now, for some λ ∈ R, the real number λf(a) will not belong to S. (Since S is not the whole of R, there is b ∈ R such that b ∉ S. Since f(a) ≠ 0, we can take λ = b/f(a), so that λf(a) = b.)
This all shows that the function λf is not in V, since it is not a function from R to S.


Example: The set of m × n matrices with real entries is a vector space, with the usual addition and scalar multiplication of matrices.
The ‘zero vector’ in this vector space is the zero m × n matrix which has all entries equal to 0.
Example: Let V be the set of all 3-vectors with third entry equal to 0, that is,
V = { (x, y, 0)T : x, y ∈ R }.
Then V is a vector space with the usual addition and scalar multiplication. To verify this, we need only check that V is closed under addition and scalar multiplication. The associative, commutative and distributive laws (properties 2, 3, 7, 8, 9, 10) will hold for vectors in V because they hold for all vectors in R3 (and all linear combinations of vectors in V are in V). Furthermore, if we can show that V is closed under scalar multiplication, then for any particular v ∈ V, 0v = 0 ∈ V and (−1)v = −v ∈ V. So we simply need to check that V ≠ ∅ (V is non-empty), that if u, v ∈ V then u + v ∈ V, and if α ∈ R and v ∈ V then αv ∈ V. Each of these is easy to check.

Learning activity 3.3
Verify that for u, v ∈ V and α ∈ R, u + v ∈ V and αv ∈ V .

3.3 Subspaces
The last example above is informative. Arguing as we did there, if V is a vector space and W ⊆ V is non-empty and closed under scalar multiplication and addition, then W too is a vector space (and we do not need to verify that all the other properties hold). The formal definition of a subspace is as follows.
Definition 3.2 (Subspace) A subspace W of a vector space V is a non-empty subset of V that is itself a vector space (under the same operations of addition and scalar multiplication as V ).
The discussion given justifies the following important result.
Theorem 3.1 Suppose V is a vector space. Then a non-empty subset
W of V is a subspace if and only if:
• for all u, v ∈ W, u + v ∈ W (W is closed under addition), and
• for all v ∈ W and α ∈ R, αv ∈ W (W is closed under scalar multiplication).


Example: In R2, the lines y = 2x and y = 2x + 1 can be defined as the sets of vectors
S = { (x, y)T : y = 2x, x ∈ R },   U = { (x, y)T : y = 2x + 1, x ∈ R }.
Each vector in one of the sets is the position vector of a point on that line. We will show that the set S is a subspace of R2, and that the set U is not a subspace of R2.
If v = (1, 2)T and p = (0, 1)T, these sets can equally well be expressed as
S = { x : x = tv, t ∈ R },   U = { x : x = p + tv, t ∈ R }.

Learning activity 3.4
Show that the two descriptions of S describe the same set of vectors.

To show S is a subspace, we need to show it is closed under addition and closed under scalar multiplication using any vectors in S and any scalar in R. Working with the second set of definitions, let u, w be any vectors in S and let α ∈ R. Then
u = s(1, 2)T and w = t(1, 2)T for some s, t ∈ R.
• closure under addition: u + w = s(1, 2)T + t(1, 2)T = (s + t)(1, 2)T ∈ S (since s + t ∈ R).
• closure under scalar multiplication: αu = α(s(1, 2)T) = (αs)(1, 2)T ∈ S (since αs ∈ R).
This shows that S is a subspace of R2.
To show U is not a subspace, any one of the three following statements (counterexamples) will suffice.
1. 0 ∉ U.
2. U is not closed under addition: (0, 1)T ∈ U and (1, 3)T ∈ U, but (0, 1)T + (1, 3)T = (1, 4)T ∉ U, since 4 ≠ 2(1) + 1.
3. U is not closed under scalar multiplication: (0, 1)T ∈ U and 2 ∈ R, but 2(0, 1)T = (0, 2)T ∉ U.


Learning activity 3.5
Show that 0 ∉ U. Explain why this suffices to show that U is not a subspace.

The line y = 2x + 1 is an example of an affine set, a translation of a subspace.

Learning activity 3.6
Let v be any non-zero vector in a vector space V. Show that the set S = {αv : α ∈ R} is a subspace of V. The set S defines a line through the origin in V.

If V is a vector space, the sets V and {0} are subspaces of V . The set
{0} is not empty, it contains one vector, namely the zero vector. It is a subspace because 0 + 0 = 0 and α0 = 0 for any α ∈ R.
Given any subset S of a vector space V , how do you decide if it is a subspace? First check that 0 ∈ S. Then using some vectors in the subset, see if adding them and scalar multiplying them will give you another vector in S. To prove that S is a subspace, you will need to verify that it is closed under addition and closed under scalar multiplication for any vectors in S, so you will need to use letters to represent general vectors, or components of general vectors, in the set.
That is, using letters show that the sum u + v and the scalar product αu of vectors in S also satisfy the definition of a vector in S.
To prove a set S is not a subspace you only need to find one counterexample, one or two particular vectors (use numbers) for which the sum or the scalar product does not satisfy the definition of S.
Note that if 0 is not in the set, it cannot be a subspace.
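A quick numerical spot-check of the example above can make this procedure concrete. The following sketch is our own illustration (using Python's numpy library, not part of the guide): it tests closure for a few particular vectors of S and of U; of course, such checks suggest but never replace a general proof.

    import numpy as np

    v = np.array([1., 2.])
    p = np.array([0., 1.])

    def in_S(x):
        return np.isclose(x[1], 2 * x[0])          # y = 2x

    def in_U(x):
        return np.isclose(x[1], 2 * x[0] + 1)      # y = 2x + 1

    u1, u2 = 3 * v, -2 * v                         # two vectors in S
    print(in_S(u1 + u2), in_S(5.0 * u1))           # True True: closure holds for these vectors
    w1, w2 = p + 1 * v, p + 2 * v                  # two vectors in U
    print(in_U(w1 + w2), in_U(2.0 * w1))           # False False: U is not closed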

Learning activity 3.7
Write down a general vector (using letters) and a particular vector (using numbers) for each of the following subsets. Show that one of the sets is a subspace of R3 and the other is not.
S1 = { (x, x^2, 0)T : x ∈ R },   S2 = { (x, 2x, 0)T : x ∈ R }.


3.3.1 An alternative characterisation of a subspace
We have seen that a subspace is a non-empty subset W of a vector space that is closed under addition and scalar multiplication, meaning that if u, v ∈ W and α ∈ R, then both u + v and αv are in W . Now, it is fairly easy to see that the following equivalent property characterises when W will be a subspace:
Theorem 3.2 A non-empty subset W of a vector space is a subspace if and only if for all u, v ∈ W and all α, β ∈ R, we have αu + βv ∈ W .
That is, W is a subspace if it is non-empty and closed under linear combination.

3.4 Subspaces connected with matrices

3.4.1 Null space
Suppose that A is an m × n matrix. Then the null space N (A), the set of solutions to the homogeneous linear system Ax = 0, is a subspace of Rn .
Theorem 3.3 For any m × n matrix A, N (A) is a subspace of Rn .
Proof To prove this we have to verify that N(A) ≠ ∅, and that if u, v ∈ N(A) and α ∈ R, then u + v ∈ N(A) and αu ∈ N(A). Since A0 = 0, 0 ∈ N(A) and hence N(A) ≠ ∅. Suppose u, v ∈ N(A).
Then to show u + v ∈ N (A) and αu ∈ N (A), we must show that u + v and αu are solutions of Ax = 0. We have
A(u + v) = Au + Av = 0 + 0 = 0 and A(αu) = α(Au) = α0 = 0, so we have shown what we needed.


The null space is the set of solutions to the homogeneous linear system. If we instead consider the set of solutions S to a general system Ax = b, S is not a subspace of Rn if b ≠ 0 (that is, if the system is not homogeneous). This is because 0 does not belong to S.
However, as we saw in the previous chapter (theorem 2.5), there is a relationship between S and N (A): if x0 is any solution of Ax = b then
S = {x0 + z : z ∈ N (A)}, which we may write as x0 + N (A). S is an affine set, a translation of the subspace N (A).
Generally, if W is a subspace of a vector space V and x ∈ V then the set x + W defined by x + W = {x + w : w ∈ W } is called an affine subset of V . An affine subset is not generally a subspace (although every subspace is an affine subset, as we can see by taking x = 0).


3.4.2 Range
Recall that the range of an m × n matrix is
R(A) = {Ax : x ∈ Rn }.
Theorem 3.4 For any m × n matrix A, R(A) is a subspace of Rm .
Proof The set R(A) is non-empty as A0 = 0 ∈ R(A). We need to show that if u, v ∈ R(A) then u + v ∈ R(A) and, for any α ∈ R, αv ∈ R(A). So suppose u, v ∈ R(A). Then for some y1 , y2 ∈ Rn , u = Ay1 , v = Ay2 . We need to show that u + v = Ay for some y.
Well,
u + v = Ay1 + Ay2 = A(y1 + y2 ), so we may take y = y1 + y2 to see that, indeed, u + v ∈ R(A). Next, αv = α(Ay1 ) = A(αy1 ), so αv = Ay for some y (namely y = αy1 ) and hence αv ∈ R(A).

3.5 Linear span
Recall that by a linear combination of vectors v1 , v2 , . . . , vk we mean a vector of the form v = α1 v1 + α2 v2 + · · · + αk vk .
If we add together two vectors of this form, we get another linear combination of the vectors v1 , v2 , . . . , vk . The same is true of any scalar multiple of v.

Learning activity 3.8
Show this; show that if v = α1 v1 + α2 v2 + · · · + αk vk and w = β1 v1 + β2 v2 + · · · + βk vk then v + w and sv, s ∈ R, are also linear combinations of the vectors v1 , v2 , . . . , vk .

The set of all linear combinations of a given set of vectors of a vector space V forms a subspace, and we give it a special name.
Definition 3.3 (Linear span) Suppose that V is a vector space and that v1 , v2 , . . . , vk ∈ V . The linear span of X = {v1 , . . . , vk } is the set of all linear combinations of the vectors v1 , . . . , vk , denoted by
Lin{v1 , v2 , . . . , vk } or Lin(X) . That is,
Lin{v1 , v2 , . . . , vk } = {α1 v1 + · · · + αk vk : α1 , α2 , . . . , αk ∈ R}.


Theorem 3.5 If X = {v1 , . . . , vk } is a set of vectors of a vector space
V , then Lin(X) is a subspace of V . It is the smallest subspace containing the vectors v1 , v2 , . . . , vk .
If you have carefully carried out the learning activity 3.8 above, then you have shown that Lin(X) is a subspace of V (why?). Any vector space which contains the vectors v1 , v2 , . . . , vk must also contain all linear combinations of these vectors, so it must contain Lin(X). That is, Lin(X) is the smallest subspace of V containing v1 , v2 , . . . , vk .
This subspace is also known as the subspace spanned by the set
X = {v1 , . . . , vk }, or, simply, as the span, of {v1 , v2 , . . . , vk }.
Different texts use different notations. For example, Anton uses span{v1 , v2 , . . . , vk } and Simon and Blume use L[v1 , v2 , . . . , vn ].
Notation is important, but it is nothing to get anxious about: just always make it clear what you mean by your notation: use words as well as symbols!

3.5.1 Lines and Planes in R3
In R3 a plane is defined as the set of all vectors x whose components satisfy a single Cartesian equation, ax + by + cz = d.1 If d = 0, the plane contains the origin, and the position vectors of points on the plane form a subspace of R3. This subspace is the linear span of two vectors. To see this, let's look at a specific example.
1 See, for example, Anton.

Example: Let S be the set given by
S = { (x, y, z)T : 3x − 2y + z = 0 }.
Then for x ∈ S,
x = (x, y, z)T = (x, y, −3x + 2y)T = x(1, 0, −3)T + y(0, 1, 2)T.
That is, x = xv1 + yv2 where x, y can be any real numbers and v1, v2 are the vectors given above. Since S is the linear span of two vectors, it is a subspace of R3. Of course, you can show directly that S is a subspace by showing it is closed under addition and scalar multiplication. If d ≠ 0 then the plane is not a subspace. It is an affine set, a translation of a linear space.

Learning activity 3.9
Show this in general, that the set
S = { (x, y, z)T : ax + by + cz = d }
is a subspace if d = 0 and is not a subspace if d ≠ 0.

Conversely, if v1 , v2 ∈ R3 are two non-parallel vectors, then the set
K = Lin{v1 , v2 } is a plane, and we can obtain its Cartesian equation.
Consider the previous example.
Example: The plane is the set of all linear combinations of v1 and v2, that is, all vectors x such that
x = (x, y, z)T = s(1, 0, −3)T + t(0, 1, 2)T,   s, t ∈ R.
This yields three equations in the two unknowns, s and t:
x = s,   y = t,   z = −3s + 2t.
Eliminating s and t from these equations yields a single Cartesian equation between the variables x, y, z:
z = −3x + 2y,  or  3x − 2y + z = 0.
What is the set Lin{v} of a single non-zero vector v? We have already seen that this defines a line through the origin in R3. (See Learning activity 3.6 in section 3.3.)

3.5.2 Row space and column space
In the previous chapter we observed that the range R(A) of an m × n matrix A is equal to the set of all linear combinations of its columns.
(See 2.3.5.) In other words, R(A) is the span of the columns of A and is often called the column space of A and denoted by CS(A).
It is also possible to consider the row space RS(A) of a matrix: this is the span of the rows of A. If A is an m × n matrix the row space will be a subspace of Rn and the column space will be a subspace of Rm .

3.6 Linear independence

Linear independence is a central idea in the theory of vector spaces. If {v1, v2, . . . , vk} is a set of vectors in a vector space V, then the vector equation α1v1 + α2v2 + · · · + αkvk = 0 always has the trivial solution, α1 = α2 = · · · = αk = 0.
We say that vectors x1, x2, . . . , xk in Rn are linearly dependent (LD) if there are numbers α1, α2, . . . , αk, not all zero, such that α1x1 + α2x2 + · · · + αkxk = 0.


In this case the left-hand side is termed a non-trivial linear combination. The vectors are linearly independent (LI) if they are not linearly dependent; that is, if no non-trivial linear combination of them is the zero vector or, equivalently, whenever α1 x1 + α2 x2 + · · · + αk xk = 0, then, necessarily, α1 = α2 = · · · = αk = 0. We have been talking about Rn , but the same definitions can be used for any vector space V .
We state them formally now.
Definition 3.4 (Linear independence) Let V be a vector space and v1 , . . . , vk ∈ V . Then v1 , v2 , . . . , vk form a linearly independent set or are linearly independent if and only if α1 v1 + α2 v2 + · · · + αk vk = 0

=⇒

α1 = α2 = · · · = αk = 0 :

that is, if and only if no non-trivial linear combination of v1 , v2 , . . . , vk equals the zero vector.
Definition 3.5 (Linear dependence) Let V be a vector space and v1 , v2 , . . . , vk ∈ V . Then v1 , v2 , . . . , vk form a linearly dependent set or are linearly dependent if and only if there are real numbers α1 , α2 , . . . , αk , not all zero, such that α1 v1 + α2 v2 + · · · + αk vk = 0, that is, if and only if some non-trivial linear combination of the vectors is the zero vector.
Example: In R3, the following vectors are linearly dependent:
v1 = (1, 2, 3)T,  v2 = (2, 1, 5)T,  v3 = (4, 5, 11)T.
This is because
2v1 + v2 − v3 = 0.
Note that this can also be written as v3 = 2v1 + v2.
This example illustrates the following general result. Try to prove it yourself before looking at the proof.
Theorem 3.6 The set {v1 , v2 , . . . , vk } ⊆ V is linearly dependent if and only if some vector vi is a linear combination of the other vectors.
Proof Since this is an 'if and only if' statement, we must prove it both ways. If {v1, v2, . . . , vk} is linearly dependent, the equation α1v1 + α2v2 + · · · + αkvk = 0 has a solution with some αi ≠ 0. Then we can solve for the vector vi:
vi = −(α1/αi)v1 − (α2/αi)v2 − · · · − (αi−1/αi)vi−1 − (αi+1/αi)vi+1 − · · · − (αk/αi)vk,
which expresses vi as a linear combination of the other vectors in the set.


If vi is a linear combination of the other vectors, say, vi = β1v1 + · · · + βi−1vi−1 + βi+1vi+1 + · · · + βkvk, then β1v1 + · · · + βi−1vi−1 − vi + βi+1vi+1 + · · · + βkvk = 0 is a non-trivial linear combination of the vectors that is equal to the zero vector, since the coefficient of vi is −1 ≠ 0. Therefore, the vectors are linearly dependent.
It follows from this theorem that a set of two vectors is linearly dependent if and only if one vector is a scalar multiple of the other.
Example: The vectors v1 = (1, 2, 3)T and v2 = (2, 1, 5)T in the example above are linearly independent, since one is not a scalar multiple of the other.

Learning activity 3.10
Show that, for any vector v in a vector space V , the set of vectors
{v, 0} is linearly dependent.

3.7 Testing linear independence in Rn

Given k vectors v1, . . . , vk ∈ Rn, the vector equation α1v1 + α2v2 + · · · + αkvk = 0 is a homogeneous system of n linear equations in k unknowns. This can be written in matrix form, Ax = 0, where A is the n × k matrix A = (v1 v2 · · · vk) with the vectors v1, v2, . . . , vk as its columns, and x is the vector (or k × 1 matrix) x = (α1, α2, . . . , αk)T.
Recall (learning activity, page 22) that the matrix product Ax is exactly the linear combination α1v1 + α2v2 + · · · + αkvk.
Then the question of whether or not a set of vectors in Rn is linearly independent can be answered by looking at the solutions of the homogeneous system Ax = 0.
Theorem 3.7 The vectors v1 , v2 , . . . , vk are linearly dependent if and only if the linear system Ax = 0 has a solution other than x = 0, where A is the matrix A = (v1 v2 · · · vk ). Equivalently, the vectors are linearly independent precisely when the only solution to the system is x = 0.


If the vectors are linearly dependent, then any solution x ≠ 0 of the system Ax = 0 will directly give a linear combination of the vectors that equals the zero vector.

Learning activity 3.11
Show that the vectors
v1 = (1, 2)T,  v2 = (1, −1)T,  v3 = (2, −5)T
are linearly dependent by solving Ax = 0. Use your solution to give a non-trivial linear combination of the vectors that equals the zero vector.

Now, we know from our experience of solving linear systems with row operations that the system Ax = 0 will have precisely the one solution x = 0 if and only if we obtain from the n × k matrix A an echelon matrix in which there are k leading ones. That is, if and only if rank(A) = k. (Think about this!) Thus, we have the following result.
Theorem 3.8 Suppose that v1 , . . . , vk ∈ Rn . Then the set
{v1 , . . . , vk } is linearly independent if and only if the matrix
(v1 v2 · · · vk ) has rank k.
But the rank is always at most the number of rows, so we certainly need to have k ≤ n. Also, there is a set of n linearly independent vectors in
Rn . In fact, there are infinitely many such sets, but an obvious one is
{e1 , e2 , . . . , en } , where ei is the vector with every entry equal to 0 except for the ith entry, which is 1. Thus, we have the following result.
Theorem 3.9 The maximum size of a linearly independent set of vectors in Rn is n.
So any set of more than n vectors in Rn is linearly dependent. On the other hand, it should not be imagined that any set of n or fewer is linearly independent: that isn’t true.
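In practice, the rank test of Theorem 3.8 is easy to carry out numerically. The sketch below is our own illustration (using Python's numpy library, not part of the guide), applied to the dependent example v1, v2, v3 from section 3.6.

    import numpy as np

    v1 = np.array([1., 2., 3.])
    v2 = np.array([2., 1., 5.])
    v3 = np.array([4., 5., 11.])       # recall v3 = 2*v1 + v2
    A = np.column_stack([v1, v2, v3])
    print(np.linalg.matrix_rank(A))    # 2 < 3, so {v1, v2, v3} is linearly dependent
    print(np.linalg.matrix_rank(np.column_stack([v1, v2])))   # 2, so {v1, v2} is independent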
Example: In R4, which of the following sets of vectors are linearly independent?
L1 = { (1, 0, −1, 0)T, (1, 2, 9, 2)T, (2, 1, 3, 1)T, (0, 0, 1, 0)T, (2, 5, 9, 1)T },
L2 = { (1, 0, −1, 0)T, (1, 2, 9, 2)T },
L3 = { (1, 0, −1, 0)T, (1, 2, 9, 2)T, (2, 1, 3, 1)T },
L4 = { (1, 0, −1, 0)T, (1, 2, 9, 2)T, (2, 1, 3, 1)T, (0, 0, 1, 0)T }.
Try this yourself before reading the answers.
The set L1 is linearly dependent because it consists of five vectors in R4. The set L2 is linearly independent because neither vector is a scalar multiple of the other. To see that the set L3 is linearly dependent, write the vectors as the columns of a matrix A and reduce A to echelon form to find that the rank of A is 2. This means that there is a non-trivial linear combination of the vectors which is equal to 0, or equivalently, that one of the vectors is a linear combination of the other two. The last set, L4, contains the set L3 and is therefore also linearly dependent, since it is still true that one of the vectors is a linear combination of the others.

Learning activity 3.12
For the set L3 above, find the solution of the corresponding homogeneous system Ax = 0 where A is the matrix whose columns are the vectors of L3 . Use the solution to write down a non-trivial linear combination of the vectors that is equal to the zero vector. Express one of the vectors as a linear combination of the other two.

There is an important property of linearly independent sets of vectors which holds for any vector space V .
Theorem 3.10 If x1, x2, . . . , xm are linearly independent in V and
c1x1 + c2x2 + · · · + cmxm = c1′x1 + c2′x2 + · · · + cm′xm,
then
c1 = c1′,  c2 = c2′,  . . . ,  cm = cm′.

Learning activity 3.13
Prove this. Use the fact that c1x1 + c2x2 + · · · + cmxm = c1′x1 + c2′x2 + · · · + cm′xm if and only if
(c1 − c1′)x1 + (c2 − c2′)x2 + · · · + (cm − cm′)xm = 0.


What does this theorem say about x = c1 x1 + c2 x2 + · · · + cm xm ?
(Think about this before you continue reading.)
It says that if a vector x can be expressed as a linear combination of a set of linearly independent vectors, then this can be done in only one way. The linear combination is unique.

3.8 Bases and dimension
The following result about Rn is very important in the theory of vector spaces. It says that a linearly independent set of n vectors in Rn spans
Rn .
Theorem 3.11 If x1 , x2 , . . . , xn are linearly independent vectors in
Rn , then for any x in Rn , x can be written as a unique linear combination of x1 , . . . , xn .
Proof Because x1 , . . . , xn are linearly independent, the n × n matrix
A = (x1 x2 . . . xn ) has rank(A) = n. In other words, A reduces to an echelon matrix with exactly n leading ones. Suppose now that x is any vector in Rn and consider the system Az = x. By Theorem 2.4 this system has a unique solution. But let’s spell it out. Because A has rank n, it can be reduced (by row operations) to an echelon matrix with n leading ones, one in each row and one in each column. The augmented matrix (Ax) can therefore be reduced to a matrix (Es) where E is an n × n echelon matrix with n leading ones. Solving by back-substitution, or continuing to reduced echelon form, we can find a solution to Az = x. This shows that any vector x can be expressed in the form

 α1  α2  x = Az = (x1 x2 . . . xn )  .  ,
 . 
.
αn where we have written z as (α1 , α2 , . . . , αn )T . Expanding this matrix product, we have that any x ∈ Rn can be expressed as a linear combination x = α1 x1 + α2 x2 + · · · + αn xn , as required. This linear combination is unique since the vectors are linearly independent (or, because there is a leading one in every column of the echelon matrix, so there are no free variables).
2
It follows from this theorem that if we have n linearly independent vectors in Rn , then any vector in Rn can be expressed in exactly one way as a linear combination of the n vectors. We say that the n vectors form a basis of Rn . The formal definition of a (finite) basis for a vector space is as follows.
Definition 3.6 ((Finite) Basis) Let V be a vector space. Then the subset B = {v1 , v2 , . . . , vn } of V is said to be a basis for (or of) V if:


B is a linearly independent set of vectors, and
V = Lin(B).
An alternative characterisation of a basis can be given: B is a basis of
V if every vector in V can be expressed in exactly one way as a linear combination of the vectors in B. The set B spans V if and only if a linear combination exists, and B is linearly independent if and only if any linear combination is unique. We have therefore shown
Theorem 3.12 B = {v1 , v2 , . . . , vn } is a basis of V if and only if any v ∈ V is a unique linear combination of v1 , v2 , . . . , vn .
Example: The vector space Rn has the basis {e1 , e2 , . . . , en } where ei is (as earlier) the vector with every entry equal to 0 except for the ith entry, which is 1. It’s clear that the vectors are linearly independent, and there are n of them, so we know straight away that they form a basis. In fact, it’s easy to see that they span the whole of Rn , since for any x = (x1 , x2 , . . . , xn )T ∈ Rn , x = x1 e1 + x2 e2 + · · · + xn en .
The basis {e1 , e2 , . . . , en } is called the standard basis of Rn .

Learning activity 3.14
Prove that the vectors in the standard basis of Rn are linearly independent and that they span the whole of Rn .

Example: Find a basis of the subspace of R3 given by
W = { (x, y, z)T : x + y − 3z = 0 }.
Solution. If x = (x, y, z)T is any vector in W, then its components must satisfy y = −x + 3z, and we can express x as
x = (x, −x + 3z, z)T = x(1, −1, 0)T + z(0, 3, 1)T = xv + zw,   x, z ∈ R.
This shows that the set {v, w} spans W. The set is linearly independent. Why? Because of the positions of the zeros and ones, if αv + βw = 0 then necessarily α = 0 and β = 0.
Example: The set
S = { (1, 2)T, (1, −1)T }
is a basis of R2. To show this we have to show it spans R2 and is linearly independent, or equivalently, that any vector b ∈ R2 is a unique linear combination of these two vectors. Writing the vectors as the columns of a matrix A, we find that |A| ≠ 0, so this is true. (Theorem 2.4, page 16.)

Learning activity 3.15
Which of these sets is a basis of R3?
U = { (−1, 0, 1)T, (1, 2, 3)T, (−1, 2, 5)T },   W = { (−1, 0, 1)T, (1, 2, 3)T, (1, 2, 5)T }.
Show that one of these sets is a basis of R3 and that one of these sets spans a plane in R3. Find a basis for this plane. Then find a Cartesian equation for the plane.

3.8.1 Coordinates
What is the importance of a basis? If S = {v1, v2, . . . , vn} is a basis of Rn, then any vector v ∈ Rn can be expressed uniquely as v = α1v1 + α2v2 + · · · + αnvn. The real numbers α1, α2, . . . , αn are the coordinates of v with respect to the basis S.
Definition 3.7 (Coordinates) If S = {v1, v2, . . . , vn} is a basis of a vector space V and v = α1v1 + α2v2 + · · · + αnvn, then the real numbers α1, α2, . . . , αn are the coordinates of v with respect to the basis S. We use the notation
[v]S = (α1, α2, . . . , αn)T
to denote the coordinate vector of v in the basis S.
Example: The sets B = {e1, e2} = { (1, 0)T, (0, 1)T } and S = {v1, v2} = { (1, 2)T, (1, −1)T } are each a basis of R2. The coordinates of the vector v = (2, −5)T in each basis are given by the coordinate vectors
[v]B = (2, −5)T   and   [v]S = (−1, 3)T.
In the standard basis, the coordinates of v are precisely the components of the vector v. In the basis S, the components of v are given by writing
v = −1 (1, 2)T + 3 (1, −1)T = (2, −5)T.


Learning activity 3.16
For the example above, sketch the vector v on graph paper and show it as the sum of the vectors given by each of the linear combinations: v = 2e1 − 5e2 and v = −1v1 + 3v2 .
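Coordinates with respect to a basis of Rn can also be computed by solving the linear system whose coefficient matrix has the basis vectors as columns. The sketch below is our own illustration (using Python's numpy library, not part of the guide), applied to the example above.

    import numpy as np

    v1 = np.array([1., 2.])
    v2 = np.array([1., -1.])
    v = np.array([2., -5.])
    P = np.column_stack([v1, v2])      # matrix whose columns are the basis vectors
    coords = np.linalg.solve(P, v)     # solves P [v]_S = v
    print(coords)                      # [-1.  3.], i.e. v = -1*v1 + 3*v2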

3.8.2 Dimensions
A fundamental result is that if a vector space V has a finite basis, then all bases of V are of the same size.2
2 For a proof, see Anton.
3 For a proof, see Anton.
Theorem 3.13 Suppose that the vector space V has a finite basis consisting of d vectors. Then any basis of V consists of exactly d vectors. This enables us, finally, to define exactly what we mean by the dimension of a vector space V .
Definition 3.8 (Dimension) The number d of vectors in a finite basis of a vector space V is the dimension of V , and is denoted dim(V ).
The vector space V = {0} is defined to have dimension 0.
A vector space which has a finite basis is said to be finite-dimensional.
Not all vector spaces are finite-dimensional. (For example, the vector space of real functions with pointwise addition and scalar multiplication has no finite basis. Such a vector space is said to be infinite dimensional.) Example: We already know Rn has a basis of size n. (For example, the standard basis consists of n vectors.) So Rn has dimension n
(which is reassuring, since it is often referred to as n-dimensional
Euclidean space).
If we know the dimension of a vector space V , then we know how many vectors we need for a basis. If we have the correct number of vectors for a basis and we know either that the vectors span V , or that they are linearly independent, then we can conclude that both must be true and they form a basis, as is shown in the following theorem. That is, we do not need to show both. 3
Theorem 3.14 Let V be a finite-dimensional vector space of dimension d. Then:
• d is the largest size of a linearly independent set of vectors in V. Furthermore, any set of d linearly independent vectors is necessarily a basis of V;
• d is the smallest size of a spanning set of vectors for V. Furthermore, any finite set of d vectors that spans V is necessarily a basis.
Example: We know that the plane W in R3 ,
 

 x

W =  y  : x + y − 3z = 0

 z has dimension 2, because we found a basis for it consisting of two vectors. If we choose any set of 2 linearly independent vectors in W , then that set will be a basis of W . For example, the vectors v1 = (1, 2, 1)T and v2 = (3, 0, 1)T are linearly independent (why?), so by the theorem, S = {v1 , v2 } is a basis of W .

3.8.3 Dimension and bases of subspaces
Suppose that W is a subspace of the finite-dimensional vector space V .
Any set of linearly independent vectors in W is also a linearly independent set in V .

Learning activity 3.17
Prove this last statement.

Now, the dimension of W is the largest size of a linearly independent set of vectors in W , so there is a set of dim(W ) linearly independent vectors in V . But then this means that dim(W ) ≤ dim(V ), since the largest possible size of a linearly independent set in V is dim(V ). There is another important relationship between bases of W and V : this is that any basis of W can be extended to one of V . The following result states this precisely. 4

4 For a proof, see Anton.

Theorem 3.15 Suppose that V is a finite-dimensional vector space and that W is a subspace of V . Then dim(W ) ≤ dim(V ).
Furthermore, if {w1 , w2 , . . . , wr } is a basis of W then there are s = dim(V ) − dim(W ) vectors v1 , v2 , . . . , vs ∈ V such that
{w1 , w2 , . . . , wr , v1 , v2 , . . . , vs } is a basis of V . (In the case W = V , the basis of W is already a basis of V .) That is, we can obtain a basis of the whole space V by adding vectors of V to any basis of W .
Example: The plane W in R3 , W = {x : x + y − 3z = 0} has a basis consisting of the vectors v1 = (1, 2, 1)T and v2 = (3, 0, 1)T . If v3 is any vector which is not in this plane, for example, v3 = (1, 0, 0)T , then the set S = {v1 , v2 , v3 } is a basis of R3 .


3.9 Finding a basis for a linear span in Rn
Suppose we are given k vectors x1 , x2 . . . , xk in Rn , and we want to find a basis for the linear span Lin{x1 , . . . , xk }. The point is that the k vectors themselves might not form a linearly independent set (and hence not a basis).
A useful technique is to form a matrix with the xiT as rows, and to perform row operations until the resulting matrix is in echelon form.
Then a basis of the linear span is given by the transposed non-zero rows of the echelon matrix (which, it should be noted, will not generally be among the initial given vectors). The reason this works is that: (i) row operations are such that at any stage in the resulting procedure, the row space of the matrix is equal to the row space of the original matrix, which is precisely the linear span of the original set of vectors, and (ii) the non-zero rows of an echelon matrix are linearly independent (which is clear, since each has a one in a position where the vectors below it all have zero).
Example: We find a basis for the subspace of R5 spanned by the vectors
x1 = (1, −1, 2, −1, −1)T,  x2 = (2, 1, −2, −2, −2)T,  x3 = (−1, 2, −4, 1, 1)T,  x4 = (3, 0, 0, −3, −3)T.
Write the vectors as the columns of a matrix A and then take AT (effectively writing each column vector as a row):
AT = (  1 −1  2 −1 −1 )
     (  2  1 −2 −2 −2 )
     ( −1  2 −4  1  1 )
     (  3  0  0 −3 −3 ).

Reducing this to echelon form by elementary row operations,

(  1 −1  2 −1 −1 )     ( 1 −1  2 −1 −1 )     ( 1 −1  2 −1 −1 )     ( 1 −1  2 −1 −1 )
(  2  1 −2 −2 −2 )  →  ( 0  3 −6  0  0 )  →  ( 0  1 −2  0  0 )  →  ( 0  1 −2  0  0 ).
( −1  2 −4  1  1 )     ( 0  1 −2  0  0 )     ( 0  1 −2  0  0 )     ( 0  0  0  0  0 )
(  3  0  0 −3 −3 )     ( 0  3 −6  0  0 )     ( 0  1 −2  0  0 )     ( 0  0  0  0  0 )

The echelon matrix at the end of this tells us that a basis for Lin{x1, x2, x3, x4} is formed from the first two rows, transposed, of the echelon matrix, that is,
{ (1, −1, 2, −1, −1)T, (0, 1, −2, 0, 0)T }.


If we want to find a basis that consists of a subset of the original vectors, then we need to take those vectors that ‘correspond’ to the final non-zero rows in the echelon matrix. By this, we mean the rows of the original matrix that have ended up as non-zero rows in the echelon matrix. For instance, in the example just given, the first and second rows of the original matrix correspond to the non-zero rows of the echelon matrix, so a basis of the span is {x1 , x2 }. On the other hand, if we interchange rows, the correspondence won’t be so obvious.
A better method to obtain such a basis is given in the next section, using the matrix A whose columns are the vectors x1 , x2 . . . , xk .
Then, as we have seen, Lin{x1 , . . . , xk } = R(A). That is,
Lin{x1 , . . . , xk } is the range or column space of the matrix A.
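Both approaches can be carried out mechanically. The sketch below is our own illustration (using Python's sympy library, not part of the guide), applied to the vectors x1, . . . , x4 of the example above; note that the rref basis consists of transposed rows of the reduced matrix (row operations do not change the row space), while columnspace returns a basis made up of some of the original vectors.

    from sympy import Matrix

    x1 = [1, -1, 2, -1, -1]
    x2 = [2, 1, -2, -2, -2]
    x3 = [-1, 2, -4, 1, 1]
    x4 = [3, 0, 0, -3, -3]

    M = Matrix([x1, x2, x3, x4])        # the vectors as rows
    print(M.rref()[0])                  # non-zero rows give a basis of the span

    A = Matrix([x1, x2, x3, x4]).T      # the vectors as columns
    print(A.columnspace())              # a basis chosen from the original vectors
    print(len(A.columnspace()))         # 2 = dimension of the span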

3.10 Basis and dimension of range and null space
We have shown that the range and null space of an m × n matrix are subspaces of Rm and Rn respectively (section 3.4). Their dimensions are so important that they are given special names.
Definition 3.9 (Rank and nullity) The rank of a matrix A is rank(A) = dim(R(A)) and the nullity is nullity(A) = dim(N (A)).
We have, of course, already used the word ‘rank’, so it had better be the case that the usage just given coincides with the earlier one.
Fortunately it does. In fact, we have the following connection.
Theorem 3.16 Suppose that A is an m × n matrix with columns c1 , c2 , . . . , cn , and that an echelon form obtained from A has leading ones in columns i1 , i2 , . . . , ir . Then a basis for R(A) is
B = {ci1 , ci2 , . . . , cir }.
Note that the basis is formed from columns of A, not columns of the echelon matrix: the basis consists of those columns of A corresponding to the leading ones in the echelon matrix.
We will outline a proof of this theorem, so you can see how it works.5
We have already seen that a solution x = (α1 , α2 , . . . , αn ) of Ax = 0 gives a linear combination of the columns of A which is equal to the zero vector,
0 = α1 c1 + α2 c2 + . . . + αn cn .

5 See, also, Anton.

If E denotes the reduced echelon form of A, and if c1′, c2′, . . . , cn′ denote the columns of E, then exactly the same relationship holds:
0 = α1c1′ + α2c2′ + · · · + αncn′.


In fact, we use E to obtain the solution x = (α1 , α2 , . . . , αn ). So the linear dependence relations are the same for the columns of both matrices, which means that the linearly independent columns of A correspond precisely to the linearly independent columns of E. Which columns of E are linearly independent? The columns which contain the leading ones.

Learning activity 3.18
If E is a matrix in reduced echelon form, prove that the set of columns with the leading ones is a basis of the column space of E.

We have already seen that a matrix A and its reduced echelon form have the same row space, and that the non-zero rows form a basis of this row space. So the dimension of the row space of A, RS(A), and the dimension of the column space of A, R(A), are each equal to the number of leading ones in an echelon form of A, that is both are equal to rank(A).
Example: Let A be the matrix
A = ( 1  1 2 1 )
    ( 2  0 1 1 )
    ( 9 −1 3 4 ).
The reduced echelon form of the matrix is (verify this!)
E = ( 1 0 1/2 1/2 )
    ( 0 1 3/2 1/2 )
    ( 0 0  0   0  ).
The leading ones in this echelon matrix are in the first and second columns, so a basis for R(A) can be obtained by taking the first and second columns of A. (Note: 'columns of A', not of the echelon matrix!) Therefore a basis for R(A) is
{ (1, 2, 9)T, (1, 0, −1)T }.
A basis of the row space of A consists of the two non-zero rows of the reduced matrix, or the first two rows of the original matrix,
{ (1, 0, 1/2, 1/2)T, (0, 1, 3/2, 1/2)T }   or   { (1, 1, 2, 1)T, (2, 0, 1, 1)T }.
Note that the column space is a 2-dimensional subspace of R3 (a plane) and the row space is a 2-dimensional subspace of R4. The columns of A and E satisfy the same linear dependence relations, which can be easily read from the reduced echelon form of the matrix:
c3 = (1/2)c1 + (3/2)c2,   c4 = (1/2)c1 + (1/2)c2.
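These computations can be reproduced directly. The sketch below is our own illustration (using Python's sympy library, not part of the guide), applied to the matrix A of this example.

    from sympy import Matrix

    A = Matrix([[1, 1, 2, 1],
                [2, 0, 1, 1],
                [9, -1, 3, 4]])
    E, pivots = A.rref()
    print(E)                 # rows (1, 0, 1/2, 1/2), (0, 1, 3/2, 1/2), (0, 0, 0, 0)
    print(pivots)            # (0, 1): leading ones in the first and second columns
    print(A.columnspace())   # basis of R(A): the first and second columns of A
    print(A.rank())          # 2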


Learning activity 3.19
Check that the columns of A satisfy these same linear dependence relations.

There is a very important relationship between the rank and nullity of a matrix. We have already seen some indication of it in our considerations of linear systems. Recall that if an m × n matrix A has rank r then the general solution to the (consistent) system Ax = 0 involves n − r 'free parameters'. Specifically (noting that 0 is a particular solution, and using a characterisation obtained earlier in the previous chapter), the general solution takes the form x = s1u1 + s2u2 + · · · + sn−run−r, where u1, u2, . . . , un−r are themselves solutions of the system Ax = 0.
But the set of solutions of Ax = 0 is precisely the null space N (A).
Thus, the null space is spanned by the n − r vectors u1 , . . . , un−r , and so its dimension is at most n − r. In fact, it turns out that its dimension is precisely n − r. That is, nullity(A) = n − rank(A).
To see this, we need to show that the vectors u1 , . . . , un−r are linearly independent. Because of the way in which these vectors arise (look at the example we worked through), it will be the case that for each of them, there is some position where that vector will have an entry equal to 1 and the entry in that same position of all the other vectors will be
0. From this we can see that no non-trivial linear combination of them can be the zero vector, so they are linearly independent. We have therefore proved the following central result.
Theorem 3.17 (Rank-nullity theorem) For an m × n matrix A, rank(A) + nullity(A) = n.
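The rank-nullity theorem is easy to verify numerically on any particular matrix. The sketch below is our own illustration (using Python's sympy library and a small hypothetical matrix of our own choosing, not one from the guide):

    from sympy import Matrix

    B = Matrix([[1, 2, 0, 1],
                [0, 1, 1, 1],
                [1, 3, 1, 2]])            # 3 x 4 matrix; third row = first row + second row
    rank = B.rank()
    nullity = len(B.nullspace())
    print(rank, nullity, rank + nullity)  # 2 2 4, and 4 = n, the number of columns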

Learning activity 3.20
Find a basis of the null space of the matrix
A = ( 1  1 2 1 )
    ( 2  0 1 1 )
    ( 9 −1 3 4 )
from the previous example. Verify the rank-nullity theorem for this matrix.


3.11 Learning outcomes

At the end of this chapter and the relevant reading, you should be able to:
• explain what is meant by a vector space and a subspace
• prove that a given set is a vector space, or a subspace of a given vector space
• explain what is meant by the linear span of a set of vectors
• explain what is meant by linear independence and linear dependence
• determine whether a given set of vectors is linearly independent or linearly dependent, and in the latter case, find a non-trivial linear combination of the vectors which equals the zero vector
• explain what is meant by a basis, and by the dimension of a finite-dimensional vector space
• explain what is meant by coordinates with respect to a basis and be able to use this definition
• find a basis for a linear span
• find a basis for the null space, range and row space of a matrix from its reduced row echelon form
• explain how rank and nullity are defined, and the relationship between them (the rank-nullity theorem).

3.12

Sample examination questions
The following are typical exam questions, or parts of questions.
Question 3.1 Let
    S = { (x1 , x2 )T : x2 = 3x1 } .

Prove that S is a subspace of R2 (where the operations of addition and scalar multiplication are the usual ones).
Question 3.2 Let V be the vector space of all functions from R → R with pointwise addition and scalar multiplication. Let n be a fixed positive integer and let W be the set of all real polynomial functions of degree at most n; that is, W consists of all functions of the form f (x) = a0 + a1 x + a2 x2 + · · · + an xn , where a0 , a1 , . . . , an ∈ R.
Prove that W is a subspace of V , under the usual pointwise addition and scalar multiplication for real functions. Show that W is finite-dimensional by presenting a finite set of functions which spans W .


Question 3.3 Show that the set S1 spans R3 , but any vector v ∈ R3 can be written as a linear combination of the vectors in S1 in infinitely many ways. Show that S2 and S3 do not span R3 .

    S1 = { (1, 2, 3)T , (1, 0, −1)T , (0, 1, 1)T , (1, 1, 0)T } ,      S2 = { (1, 0, −1)T , (2, 1, 3)T , (1, 2, 9)T } ,

    S3 = { (1, 1, 1)T , (2, 0, 1)T } .
Question 3.4 Show that the following three vectors are linearly independent:

    (2, 1, −1)T ,    (3, 4, 6)T ,    (−2, 3, 2)T .

Express the vector v = (4, 7, −3)T as a linear combination of these three vectors.
Question 3.5 Let

    x1 = (2, 3, 5)T ,    x2 = (1, 1, 2)T ,    v = (a, b, c)T .
  a v = b. c Find a vector x3 such that {x1 , x2 , x3 } is a linearly independent set of vectors. Find a condition that a, b, c must satisfy for the set of vectors
{x1 , x2 , v} to be linearly dependent.
Question 3.6 Show that the following vectors are linearly dependent by finding a non-trivial linear combination of the vectors that equals the zero vector.

    (1, 2, 1, 2)T ,    (0, −1, 3, 4)T ,    (4, −11, 5, −1)T ,    (9, 2, 1, −3)T .
Question 3.7 Let A be any matrix. Let v1 and v2 be two non-zero vectors and suppose that Av1 = 2v1 and Av2 = 5v2 . Prove that
{v1 , v2 } is linearly independent.
(Hint: Assume α1 v1 + α2 v2 = 0; multiply this equation through by A to get a second equation for v1 and v2 ; then solve the two equations simultaneously.) Can you generalize this result?
Question 3.8 Let B be the basis {v1 , v2 , v3 }, where v1 = (0, 1, 0)T , v2 = (−4/5, 0, 3/5)T , v3 = (3/5, 0, 4/5)T , and let x = (1, −1/5, 7/5)T . Find the coordinates of x with respect to B.


Question 3.9 For each of the sets Si of vectors given below, find a basis of the vector space Lin(Si ) and state its dimension.
    S1 = { (1, 2)T , (2, 3)T } ,      S2 = { (1, −1)T , (0, 0)T , (2, −2)T , (−3, 3)T } ,

    S3 = { (1, 0, −1)T , (2, 1, 3)T , (1, 2, 9)T } ,      S4 = { (1, 2, 1, 3)T , (2, 0, −1, 2)T , (4, 4, 1, 8)T , (1, 1, 1, 2)T } .

Question 3.10 Which of the following sets are a basis for R3 ? (State reasons for your answers.)

    S1 = { (1, 2, 3)T , (2, 1, 0)T , (4, 1, 0)T , (7, 2, 1)T } ,      S2 = { (1, 0, 1)T , (1, −1, 1)T } ,

    S3 = { (2, 1, 1)T , (1, 2, −1)T , (3, 3, 0)T } ,      S4 = { (1, 1, 1)T , (1, −1, 0)T , (2, −3, −3)T } .
For any set which is a basis of R3 , find the coordinates of the vector w = (1, 2, −1) in that basis.
Question 3.11 Write down a basis for the yz plane in R3 .
Question 3.12 Find a basis for the null space of the matrix

    ( 1   1   1   0 )
    ( 2   1   0   1 ) .

Question 3.13 Let

    A = (  1  −2   1    1    2 )
        ( −1   3   0    2   −2 )
        (  0   1   1    3    4 ) .
        (  1   2   5   13    5 )

Find a basis for the column space of A.
Question 3.14 Find a basis of the row space, a basis of the range
(column space), and a basis of the null space of the matrix


    B = ( 1   2   1   3    0 )
        ( 0   1   1   1   −1 ) .
        ( 1   3   2   0    1 )
Find the rank of B and verify the rank-nullity theorem.
Let b = c1 + c5 , the sum of the first and last column of the matrix B.
Without solving the system, use the information you have obtained to write down a general solution of the system of equations Bx = b.

3.13

Answers to or comments on selected activities

Activity 3.1 For any x,
0 = 0x = (1 + (−1))x = 1x + (−1)x = x + (−1)x


so adding the negative −x of x to each side, and using properties 3 and 4 of the definition of a vector space,
−x = −x + 0 = −x + x + (−1)x = (−1)x which proves that −x = (−1)x.
Activity 3.2 The negative of a function f is the function −f given by
(−f )(x) = −(f (x)) for all x.
Activity 3.3 Suppose

    u = (x, y, 0)T ,    v = (x′, y′, 0)T ∈ V,

and that α ∈ R. Then

    u + v = (x + x′, y + y′, 0)T ∈ V      and      αv = (αx, αy, 0)T ∈ V.

Activity 3.5 0 is not in the set U as
    0 = (0, 0)T ≠ (0, 1)T + t (1, 2)T    for any t ∈ R,

so property 4 of the definition of a vector space is not satisfied
(definition 3.1).
Activity 3.6 Note first that S is non-empty because 0 ∈ S. Suppose that x, y ∈ S. (Why are we not using the usual symbols u and v? Can you see? It’s because v is already used in the definition of the set S.)
Suppose also that β ∈ R. Now, because x and y belong to S, there are α, α′ ∈ R such that x = αv and y = α′v. Then, x + y = αv + α′v = (α + α′)v, which is in S since it is a scalar multiple of v. Also, βx = β(αv) = (βα)v ∈ S and it follows that S is a subspace.

Activity 3.7 A general vector in S1 is of the form (x, x2 , 0)T , x ∈ R, and one particular vector, taking x = 1, is (1, 1, 0)T . To show S1 is not a subspace you need to find one counterexample, one or two particular vectors in S1 which do not satisfy the closure property, such as

    (1, 1, 0)T ∈ S1    but    (1, 1, 0)T + (1, 1, 0)T = (2, 2, 0)T ∉ S1 .

A general vector in S2 is of the form (x, 2x, 0)T , x ∈ R, and one particular vector, taking x = 1, is (1, 2, 0)T . S2 is a subspace. We show it is closed under addition and scalar multiplication using general vectors: if u = (a, 2a, 0)T , v = (b, 2b, 0)T ∈ S2 with a, b ∈ R, and α ∈ R, then

    u + v = (a + b, 2a + 2b, 0)T = (a + b, 2(a + b), 0)T ∈ S2 ,    since a + b ∈ R,

and

    αu = (αa, 2(αa), 0)T ∈ S2 ,    since αa ∈ R.
Activity 3.8 Any two such vectors will be of the form v = α1 v1 + α2 v2 + · · · + αk vk and v′ = α′1 v1 + α′2 v2 + · · · + α′k vk , and we will have v + v′ = (α1 + α′1 )v1 + (α2 + α′2 )v2 + · · · + (αk + α′k )vk , which is a linear combination of the vectors v1 , v2 , . . . , vk . Also, αv = α(α1 v1 + α2 v2 + · · · + αk vk ) = (αα1 )v1 + (αα2 )v2 + · · · + (ααk )vk , which is a linear combination of the vectors v1 , v2 , . . . , vk .
Activity 3.9 Suppose d = 0. Let u, v ∈ S and α ∈ R. Then

    u = (x, y, z)T ,    v = (x′, y′, z′)T ,

where ax + by + cz = 0 and ax′ + by′ + cz′ = 0. Consider u + v. This equals

    (X, Y, Z)T = (x + x′, y + y′, z + z′)T

and we want to show this belongs to S. Now, this is the case, because

    aX + bY + cZ = a(x + x′) + b(y + y′) + c(z + z′) = (ax + by + cz) + (ax′ + by′ + cz′) = 0 + 0 = 0,

and similarly it can be shown that, for any α ∈ R, αv ∈ S. So, in this case, S is a subspace. You can see why this argument fails when d is not 0; for, then aX + bY + cZ will equal 2d, which will not be the same as d. So we will not have u + v ∈ S. (Similarly, we could see that αv will not be in S if α ≠ 1.)
Activity 3.10 The non-trivial linear combination 0v + 1·0 (that is, 0 times v plus 1 times the zero vector) is 0.


Activity 3.11 Let

    A = ( 1   1   2 )
        ( 2  −1  −5 ) .

Then, using row operations, it can be seen that the general solution x to Ax = 0 is x = (r, −3r, r)T for r ∈ R. In particular, taking r = −1, and multiplying out the equation Ax = 0, we have that

    − (1, 2)T + 3 (1, −1)T − (2, −5)T = (0, 0)T .

Activity 3.12 The general solution to the system is

  
    x = (x, y, z)T = (−3r/2, −r/2, r)T ,    for r ∈ R.

Taking r = −1 and multiplying out the equation Ax = 0, we see that

    (3/2) (1, 0, −1)T + (1/2) (1, 2, 9)T − (2, 1, 3)T = 0,

and hence

    (2, 1, 3)T = (3/2) (1, 0, −1)T + (1/2) (1, 2, 9)T .

Activity 3.13 As noted, c1 x1 + c2 x2 + · · · + cm xm = c′1 x1 + c′2 x2 + · · · + c′m xm if and only if

    (c1 − c′1 )x1 + (c2 − c′2 )x2 + · · · + (cm − c′m )xm = 0.

But since the vectors are linearly independent, this can be true only if c1 − c′1 = 0, c2 − c′2 = 0, and so on. That is, for each i, we must have ci = c′i .
Activity 3.15 Write each set of vectors as the columns of a matrix:




    B = ( −1   1  −1 )          A = ( −1   1   1 )
        (  0   2   2 ) ,            (  0   2   2 ) .
        (  1   3   5 )              (  1   3   5 )

|A| ≠ 0, so W is a basis of R3 . |B| = 0, so U is not. Because the set U is linearly dependent, one of the vectors is a linear combination of the other two. Also, any two vectors of U are linearly independent since no one of them is a scalar multiple of any other. Therefore any two vectors from the set U are a basis of Lin(U ). Since Lin(U ) is a two-dimensional subspace of R3 , it is a plane.
Using the first two vectors in U for the basis, if x ∈ Lin(U ),

    x = (x, y, z)T = s (−1, 0, 1)T + t (1, 2, 3)T ,    s, t ∈ R.


Equating components, you obtain three equations in the two unknowns s and t. Eliminating s and t between the three equations you will obtain a single equation relating x, y, and z. Explicitly, we have x = −s + t, y = 2t, z = s + 3t, so

    t = y/2 ,    s = t − x = y/2 − x ,

and

    z = s + 3t = y/2 − x + (3/2) y ,

so we have x − 2y + z = 0. This is the Cartesian equation of the plane.

Note that a Cartesian equation could equally well have been obtained by writing the two basis vectors and the vector x as the columns of a matrix M and using the fact that |M | = 0 if and only if the columns of
M are linearly dependent. That is,
    | −1   1   x |
    |  0   2   y |  = −2x + 4y − 2z = 0.
    |  1   3   z |

Activity 3.20 A general solution of the system of equations Ax = 0 is

    x = s1 (−1/2, −3/2, 1, 0)T + s2 (−1/2, −1/2, 0, 1)T = s1 u1 + s2 u2 .
The set {u1 , u2 } is a basis of the null space of A, so dim(N (A)) = 2.
From the example, rank(A) = 2. The matrix A has n = 4 columns. rank(A) + nullity(A) = 2 + 2 = 4 = n.
Note that the basis vectors of the null space give precisely the same linear dependence relations between the column vectors as those given in the example. Since Au1 = 0 and Au2 = 0,

    Au1 = −(1/2) c1 − (3/2) c2 + c3 = 0      and      Au2 = −(1/2) c1 − (1/2) c2 + c4 = 0.

3.14

Sketch answers to or comments on selected questions

Question 3.1 You need to show that for any α ∈ R and u, v ∈ S, αu ∈ S and u + v ∈ S. Both are reasonably straightforward, and the details are omitted. Also, 0 ∈ S, so S ≠ ∅.
Question 3.2 Clearly, W ≠ ∅. We need to show that for any α ∈ R and f, g ∈ W , αf ∈ W and f + g ∈ W . Suppose that f (x) = a0 + a1 x + a2 x2 + · · · + an xn , g(x) = b0 + b1 x + b2 x2 + · · · + bn xn .
Then
(f + g)(x) = f (x) + g(x) = (a0 + b0 ) + (a1 + b1 )x + · · · + (an + bn )xn ,


so f + g is also a polynomial of degree at most n and therefore f + g ∈ W . Similarly,
(αf )(x) = α(f (x)) = (αa0 ) + (αa1 )x + · · · + (αan )xn , so αf ∈ W also. It can be seen that the set of functions
{1, x, x2 , . . . , xn } spans W , where 1 denotes the function that is identically equal to 1 (that is, the function f with f (x) = 1 for all x).
Question 3.3 To show that

    α (1, 2, 3)T + β (1, 0, −1)T + γ (0, 1, 1)T + δ (1, 1, 0)T = (x, y, z)T
has infinitely many solutions for any vector b ∈ R3 , and that S1 therefore spans R3 , one can solve the system Ax = b:
 

 

    A = ( 1   1   0   1 )
        ( 2   0   1   1 ) ,      x = ( α, β, γ, δ )T ,      b = ( x, y, z )T .
        ( 3  −1   1   0 )

If the row echelon form of the coefficient matrix A has three leading ones, then there will always be infinitely many solutions: a solution will exist since a leading one in each row means an augmented system cannot be inconsistent, and there will be infinitely many solutions since there is one free (non-leading) variable.
Row reducing A (not all steps are shown),

    A = ( 1   1   0   1 )      ( 1   1   0    1 )      ( 1   1    0     1  )
        ( 2   0   1   1 )  →   ( 0  −2   1   −1 )  →   ( 0   1  −1/2   1/2 ) .
        ( 3  −1   1   0 )      ( 0  −4   1   −3 )      ( 0   0    1     1  )

The row echelon form of the augmented matrix (A|b) has three leading ones.
Therefore, the set S1 spans R3 , and any vector b ∈ R3 can be written as a linear combination of the vectors in S1 in infinitely many ways.
S2 does not span R3 . Since
    |B| = |  1   2   1 |
          |  0   1   2 |  = 1(9 − 6) − 1(4 − 1) = 0,
          | −1   3   9 |

the REF (reduced echelon form) of B will contain a row of zeros.
There will be some b ∈ R3 for which the system is inconsistent.
For example, the vector b = ( 0, 0, 1 )T cannot be expressed as a linear combination of the vectors in S2 as the system Bx = b has no solution:

    (  1   2   1 | 0 )      ( 1   2    1 | 0 )      ( 1   2   1 | 0 )
    (  0   1   2 | 0 )  →   ( 0   1    2 | 0 )  →   ( 0   1   2 | 0 ) .
    ( −1   3   9 | 1 )      ( 0   5   10 | 1 )      ( 0   0   0 | 1 )

S3 does not span R3 . At least three vectors are required to span R3 .
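These conclusions can be confirmed mechanically. The sketch below (optional; sympy is not part of the course) builds the matrices whose columns are the vectors of S1 and S2 and checks the rank and determinant arguments used above.

    from sympy import Matrix

    A = Matrix([[1, 1, 0, 1],          # columns are the vectors of S1
                [2, 0, 1, 1],
                [3, -1, 1, 0]])
    B = Matrix([[1, 2, 1],             # columns are the vectors of S2
                [0, 1, 2],
                [-1, 3, 9]])

    print(A.rank())   # 3: three leading ones, one free variable, so S1 spans R^3
    print(B.det())    # 0: the columns of B cannot span R^3

    # The particular right-hand side b = (0, 0, 1)^T gives an inconsistent system.
    aug = B.row_join(Matrix([0, 0, 1]))
    print(aug.rref()[0])   # final row (0, 0, 0, 1), so Bx = b has no solution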


Question 3.4 Call the vectors x1 , x2 , x3 . To show linear independence, show that the matrix A = (x1 x2 x3 ) has rank 3 using row operations, or show |A| ≠ 0. To express the fourth vector x4 as a linear combination of the first 3, solve Ax = x4 . You should obtain the solution x = (217/55, −18/55, 16/11)T , so

    (4, 7, −3)T = (217/55) (2, 1, −1)T − (18/55) (3, 4, 6)T + (16/11) (−2, 3, 2)T .
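To reproduce the coefficients 217/55, −18/55 and 16/11 exactly, one can solve the square system with sympy (an optional check; x1, x2, x3 and v below are simply the vectors as displayed above):

    from sympy import Matrix

    x1 = Matrix([2, 1, -1])
    x2 = Matrix([3, 4, 6])
    x3 = Matrix([-2, 3, 2])
    v = Matrix([4, 7, -3])

    A = Matrix.hstack(x1, x2, x3)
    print(A.det())        # -55, non-zero, so x1, x2, x3 are linearly independent
    print(A.LUsolve(v))   # Matrix([[217/55], [-18/55], [16/11]])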
Question 3.5 There are many ways of solving this problem, and there are infinitely many possible x3 s. We can solve it by trying to find a vector x3 such that the matrix (x1 x2 x3 ) has rank 3.
Another approach is to answer the second part of the question first, determine what vectors v form a linearly dependent set with x1 and x2
(find the condition on a, b, c as asked) and then write down any vector whose components do not satisfy the condition.
Write the three vectors as the columns of a matrix and row reduce it,

    A = ( 1   2   a )
        ( 1   3   b ) .
        ( 2   5   c )

Notice that you can choose to order x1 and x2 so that the row reduction will be easier since it makes no difference in this question.
The vectors {x1 , x2 , v} will be linearly dependent if the row echelon form of A has a row of zeros.






    A = ( 1   2   a )   R2 − R1    ( 1   2   a      )   R3 − R2   ( 1   2   a          )
        ( 1   3   b )   R3 − 2R1   ( 0   1   b − a  )     −→      ( 0   1   b − a      )
        ( 2   5   c )      −→      ( 0   1   c − 2a )             ( 0   0   c − a − b )
The vectors will be linearly dependent if and only if the components of v satisfy a + b − c = 0. So, choose any vector for x3 which does not satisfy this equation, such as x3 = (1, 0, 0)T .
Note that this condition is the equation of a plane in R3 determined by the vectors x1 and x2 . The set {x1 , x2 , v} is linearly dependent if and only if v is the position vector of a point in this plane.
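The condition a + b − c = 0 can also be obtained from a determinant: {x1 , x2 , v} is linearly dependent exactly when the matrix with these columns is singular. A short symbolic check with sympy (optional):

    from sympy import Matrix, symbols, expand

    a, b, c = symbols('a b c')
    M = Matrix([[2, 1, a],
                [3, 1, b],          # columns are x1, x2 and v
                [5, 2, c]])

    print(expand(M.det()))          # a + b - c, so dependence holds iff a + b - c = 0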
Question 3.6 Let

    A = ( 1    0     4    9 )
        ( 2   −1   −11    2 )
        ( 1    3     5    1 ) ,
        ( 2    4    −1   −3 )

the matrix with columns equal to the given vectors. If we only needed to show that the vectors were linearly dependent, it would suffice to show, using row operations, that rank(A) < 4. But we’re asked for more: we have to find an explicit non-trivial linear combination that equals the zero vector. So we need to find a non-trivial solution of


Ax = 0. One solution is x = (5, −3, 1, −1)T . (You should use row operations to find this. The details are omitted here.) This means that

    5 (1, 2, 1, 2)T − 3 (0, −1, 3, 4)T + (4, −11, 5, −1)T − (9, 2, 1, −3)T = (0, 0, 0, 0)T .
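The non-trivial combination can be produced mechanically from the null space of A; any non-zero multiple of the solution above is equally acceptable. An optional sympy sketch:

    from sympy import Matrix

    A = Matrix([[1, 0, 4, 9],
                [2, -1, -11, 2],
                [1, 3, 5, 1],
                [2, 4, -1, -3]])

    basis = A.nullspace()
    print(len(basis))        # 1: rank(A) = 3 < 4, so the columns are dependent
    print(basis[0].T)        # Matrix([[-5, 3, -1, 1]]), a multiple of (5, -3, 1, -1)
    print((A * basis[0]).T)  # the zero vector, as required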
Question 3.7 To prove that {v1 , v2 } is linearly independent, assume that α1 and α2 are scalars such that
    (∗)        α1 v1 + α2 v2 = 0.

Then

    A(α1 v1 + α2 v2 ) = 0
    α1 Av1 + α2 Av2 = 0
    α1 (2v1 ) + α2 (5v2 ) = 0
    2α1 v1 + 5α2 v2 = 0.

Add this last equation to −2 times equation (∗) to obtain 3α2 v2 = 0.
Since v2 ≠ 0, we must have α2 = 0. Substituting back into either equation gives α1 v1 = 0, so that α1 = 0 since v1 ≠ 0. This shows that v1 , v2 are linearly independent.
Generalisation 1. The same proof works for any constants, Av1 = κv1 ,
Av2 = λv2 , provided κ ≠ λ.
Generalisation 2. It also extends to 3 (or more) non-zero vectors,
Av1 = κv1 , Av2 = λv2 , Av3 = µv3 with κ, λ, µ distinct constants
(that is, no two are equal).
Question 3.8 The coordinates are [x]B = (−1/5, 1/25, 43/25)T .
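A useful observation (ours, not made in the guide): this particular basis B is orthonormal — each vi has length 1 and the vectors are mutually orthogonal — so the coordinates are just the dot products x · vi. A quick check with sympy:

    from sympy import Matrix, Rational

    v1 = Matrix([0, 1, 0])
    v2 = Matrix([Rational(-4, 5), 0, Rational(3, 5)])
    v3 = Matrix([Rational(3, 5), 0, Rational(4, 5)])
    x = Matrix([1, Rational(-1, 5), Rational(7, 5)])

    coords = [x.dot(v) for v in (v1, v2, v3)]
    print(coords)                                             # [-1/5, 1/25, 43/25]
    print(coords[0]*v1 + coords[1]*v2 + coords[2]*v3 == x)    # True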
Question 3.9 S1 spans R2 since

    | 1   2 |
    | 2   3 |  ≠ 0.

A basis of R2 is S1 or {e1 , e2 }. R2 has dimension 2.
A basis of S2 is (1, −1)T (since all other vectors are scalar multiples of it). The set spans the line x = t (1, −1)T with Cartesian equation y = −x. This is a one-dimensional subspace of R2 .

To find a basis of S3 or S4 , write the vectors as the columns of a matrix. You can either row reduce AT , and the non-zero rows will be a basis, or, you can row reduce A and the columns of A corresponding to the columns with leading ones in the echelon form will be a basis.
Using the first method for S3 (row reduce AT ),






    ( 1   0   −1 )      ( 1   0   −1 )      ( 1   0   −1 )
    ( 2   1    3 )  →   ( 0   1    5 )  →   ( 0   1    5 ) .
    ( 1   2    9 )      ( 0   2   10 )      ( 0   0    0 )
Then a basis is given by the top two rows (as vectors), (1, 0, −1)T and (0, 1, 5)T . The set S3 spans a plane (through the origin) in R3 , and Lin(S3 ) has dimension 2.


Using the second method for S4 , write the vectors as the columns of a matrix A and reduce A to echelon form. Columns 1, 2 and 4 have leading ones, so the first, second and fourth vectors form a basis of
Lin(S4 ), which is a 3-dimensional subspace of R4 .
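Both methods can be mirrored with sympy if you want to check a hand computation (optional; the matrices below use the vectors of S3 and S4 as given above):

    from sympy import Matrix

    A3 = Matrix([[1, 2, 1],          # columns are the vectors of S3
                 [0, 1, 2],
                 [-1, 3, 9]])
    print(A3.T.rref()[0])            # two non-zero rows, so dim Lin(S3) = 2

    A4 = Matrix([[1, 2, 4, 1],       # columns are the vectors of S4
                 [2, 0, 4, 1],
                 [1, -1, 1, 1],
                 [3, 2, 8, 2]])
    print(A4.rref()[1])              # (0, 1, 3): columns 1, 2 and 4 carry the leading ones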
Question 3.10 S1 is not a basis. No set of four vectors in R3 can be linearly independent.
S2 is not a basis. Two vectors cannot span R3 : there must be at least three. S3 is not a basis. Either notice that the third vector is the sum of the first two, or reduce AT to echelon form and show it has a row of zeros, or show |A| = 0. The set is not linearly independent.
S4 is a basis. You can reduce AT to echelon form or you can compute |A| = 5 ≠ 0, which shows that Ax = b has a unique solution for all b ∈ R3 .


To find the coordinates of w = (1, 2, −1)T in the basis S4 you need to find the unique solution of

    α (1, 1, 1)T + β (1, −1, 0)T + γ (2, −3, −3)T = (1, 2, −1)T .
Reduce the augmented matrix to reduced echelon form to find α = 2, β = −3, γ = 1, so that


    [w]S4 = (2, −3, 1)T .
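The coordinates can be confirmed by solving the 3 × 3 system directly (optional sympy sketch):

    from sympy import Matrix

    P = Matrix([[1, 1, 2],
                [1, -1, -3],          # columns are the vectors of S4
                [1, 0, -3]])
    w = Matrix([1, 2, -1])

    print(P.det())        # 5, non-zero, so S4 is indeed a basis
    print(P.LUsolve(w))   # Matrix([[2], [-3], [1]]), the coordinates of w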

Question 3.11 The yz plane is the set of all vectors of the form
(0, y, z)T . So the set of vectors {e2 , e3 } is a basis of the yz plane.
Question 3.12 We omit the details. A basis for the null space is
(1, −2, 1, 0)T , (−1, 1, 0, 1)T . There are many other possible answers.
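To check a claimed basis of the null space it is enough to verify that the matrix sends each vector to 0 and that the two vectors are linearly independent (here they clearly are, since neither is a multiple of the other). sympy can also produce a basis directly — an optional sketch:

    from sympy import Matrix

    A = Matrix([[1, 1, 1, 0],
                [2, 1, 0, 1]])

    for v in (Matrix([1, -2, 1, 0]), Matrix([-1, 1, 0, 1])):
        print((A * v).T)                 # Matrix([[0, 0]]) in both cases

    print([u.T for u in A.nullspace()])  # here sympy returns the same two vectors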
Question 3.13 A basis for the column space is

    { (1, −1, 0, 1)T , (−2, 3, 1, 2)T , (2, −2, 4, 5)T } .
Details of the calculations are omitted.
Question 3.14 Reduce the matrix B to reduced echelon form,

    B = ( 1  2  1  3   0 )      ( 1  2  1   3   0 )      ( 1  2  1   3   0 )
        ( 0  1  1  1  −1 )  →   ( 0  1  1   1  −1 )  →   ( 0  1  1   1  −1 )
        ( 1  3  2  0   1 )      ( 0  1  1  −3   1 )      ( 0  0  0  −4   2 )

        ( 1  2  1  3    0   )      ( 1  2  1  0   3/2 )      ( 1  0  −1  0   5/2 )
    →   ( 0  1  1  1   −1   )  →   ( 0  1  1  0  −1/2 )  →   ( 0  1   1  0  −1/2 ) .
        ( 0  0  0  1  −1/2  )      ( 0  0  0  1  −1/2 )      ( 0  0   0  1  −1/2 )


The leading ones are in the first, second and fourth columns. For the solutions of Bx = 0, set the non-leading variables x3 = s and x5 = t.

A basis of the row space is:

    { (1, 0, −1, 0, 5/2)T , (0, 1, 1, 0, −1/2)T , (0, 0, 0, 1, −1/2)T } ⊂ R5 .

A basis of the range of B is:

    { (1, 0, 1)T , (2, 1, 3)T , (3, 1, 0)T } ⊂ R3 .

So the range of B is all of R3 .
From the reduced echelon form of B, the solution of Bx = 0 is

    ( x1 )   ( s − (5/2)t  )        (  1  )        ( −5/2 )
    ( x2 )   ( −s + (1/2)t )        ( −1  )        (  1/2 )
    ( x3 ) = (      s      )  =  s  (  1  )  +  t  (   0  )  =  s x1 + t x2 .
    ( x4 )   (   (1/2)t    )        (  0  )        (  1/2 )
    ( x5 )   (      t      )        (  0  )        (   1  )
The set {x1 , x2 } ⊂ R5 is a basis of the null space.
Therefore rank(B) = 3 and nullity(B) = 2. The number of columns is n = 5, so that

    rank(B) + nullity(B) = 3 + 2 = 5 = n.
If b = c1 + c5 , then p = (1, 0, 0, 0, 1)T is a solution of Bx = b.
(Why?) Combining this with the solution of the system Bx = 0, a general solution of Bx = b is

    x = (1, 0, 0, 0, 1)T + α (1, −1, 1, 0, 0)T + β (−5/2, 1/2, 0, 1/2, 1)T = p + α x1 + β x2 .
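The whole of this answer — the reduced echelon form, the rank and nullity, and the particular solution p — can be confirmed with a short sympy script (optional; the names are ours):

    from sympy import Matrix

    B = Matrix([[1, 2, 1, 3, 0],
                [0, 1, 1, 1, -1],
                [1, 3, 2, 0, 1]])

    R, pivots = B.rref()
    print(R)                                       # the reduced echelon form found above
    print(pivots)                                  # (0, 1, 3): leading ones in columns 1, 2, 4
    print(B.rank(), len(B.nullspace()), B.cols)    # 3 2 5, and 3 + 2 = 5

    b = B.col(0) + B.col(4)                        # b = c1 + c5
    p = Matrix([1, 0, 0, 0, 1])
    print(B * p == b)                              # True: p solves Bx = b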
