Free Essay

Proakis

In:

Submitted By amulyamallesh1
Words 53003
Pages 213
SOLUTIONS MANUAL Communication Systems Engineering
Second Edition

John G. Proakis Masoud Salehi

Prepared by Evangelos Zervas

Upper Saddle River, New Jersey 07458

Publisher: Tom Robbins Editorial Assistant: Jody McDonnell Executive Managing Editor: Vince O’Brien Managing Editor: David A. George Production Editor: Barbara A. Till Composition: PreTEX, Inc. Supplement Cover Manager: Paul Gourhan Supplement Cover Design: PM Workshop Inc. Manufacturing Buyer: Ilene Kahn

c 2002 Prentice Hall by Prentice-Hall, Inc. Upper Saddle River, New Jersey 07458

All rights reserved. No part of this book may be reproduced in any form or by any means, without permission in writing from the publisher. The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

Printed in the United States of America 10 9 8 7 6 5 4 3 2 1

ISBN
Pearson Pearson Pearson Pearson Pearson Pearson Pearson Pearson Pearson

0-13-061974-6
Education Ltd., London Education Australia Pty. Ltd., Sydney Education Singapore, Pte. Ltd. Education North Asia Ltd., Hong Kong Education Canada, Inc., Toronto Educac` de Mexico, S.A. de C.V. ıon Education—Japan, Tokyo Education Malaysia, Pte. Ltd. Education, Upper Saddle River, New Jersey

Contents

Chapter Chapter Chapter Chapter Chapter Chapter Chapter Chapter Chapter

2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283

iii

Chapter 2
Problem 2.1 1)
2 ∞ N 2

=

−∞ ∞

x(t) − i=1 N

αi φi (t) dt


=

−∞ ∞

x(t) − i=1 αi φi (t) x∗ (t) −
N ∞

N j=1


∗ αj φ∗ (t) dt j N j=1 ∞ −∞

=

−∞ N

|x(t)|2 dt − i=1 N ∗ αi αj ∞ −∞ N

αi

−∞

φi (t)x∗ (t)dt −

∗ αj

φ∗ (t)x(t)dt j

+ i=1 j=1 ∞

φi (t)φ∗ dt j
N

=

−∞

|x(t)|2 dt + i=1 |αi |2 − i=1 ∞

αi

−∞

φi (t)x∗ (t)dt −

N j=1

∗ αj

∞ −∞

φ∗ (t)x(t)dt j

Completing the square in terms of αi we obtain
2 ∞ N

=

−∞

|x(t)|2 dt − i=1 ∞ −∞

φ∗ (t)x(t)dt + i

2

N

αi − i=1 ∞ −∞

φ∗ (t)x(t)dt i

2

The first two terms are independent of α’s and the last term is always positive. Therefore the minimum is achieved for ∞ αi = φ∗ (t)x(t)dt i
−∞

which causes the last term to vanish. 2) With this choice of αi ’s
2 ∞ N

= =

−∞ ∞ −∞

|x(t)|2 dt − i=1 N

∞ −∞

φ∗ (t)x(t)dt i

2

|x(t)|2 dt − i=1 |αi |2

Problem 2.2 1) The signal x1 (t) is periodic with period T0 = 2. Thus x1,n = 1 1 Λ(t)e−jπnt dt 2 −1 −1 0 1 1 = (t + 1)e−jπnt dt + (−t + 1)e−jπnt dt 2 0 −1 0 1 j −jπnt 0 j −jπnt te e = + 2 2 e−jπnt + πn π n 2πn −1 −1 1 1 j −jπnt 1 j −jπnt 1 te e − + 2 2 e−jπnt + 2 πn π n 2πn 0 0 1 1 1 − (ejπn + e−jπn ) = 2 2 (1 − cos(πn)) π 2 n2 2π 2 n2 π n 1 2 1 2 1 2 Λ(t)e−j2π 2 t dt = n 1

1

When n = 0 then x1,0 = Thus x1 (t) =

1 2

1 −1

Λ(t)dt =

1 2

∞ 1 1 +2 (1 − cos(πn)) cos(πnt) 2 n2 2 π n=1

2) x2 (t) = 1. It follows then that x2,0 = 1 and x2,n = 0, ∀n = 0. 3) The signal is periodic with period T0 = 1. Thus x3,n = = = 1 T0
T0 0

et e−j2πnt dt =
0 1

1

e(−j2πn+1)t dt

e(−j2πn+1) − 1 1 e(−j2πn+1)t = −j2πn + 1 −j2πn + 1 0 e−1 e−1 =√ (1 + j2πn) 1 − j2πn 1 + 4π 2 n2

4) The signal cos(t) is periodic with period T1 = 2π whereas cos(2.5t) is periodic with period T2 = 0.8π. It follows then that cos(t) + cos(2.5t) is periodic with period T = 4π. The trigonometric Fourier series of the even signal cos(t) + cos(2.5t) is


cos(t) + cos(2.5t) = n=1 ∞

αn cos(2π

n t) T0

=

n αn cos( t) 2 n=1

By equating the coefficients of cos( n t) of both sides we observe that an = 0 for all n unless n = 2, 5 2 in which case a2 = a5 = 1. Hence x4,2 = x4,5 = 1 and x4,n = 0 for all other values of n. 2 5) The signal x5 (t) is periodic with period T0 = 1. For n = 0
1

x5,0 =
0

1 (−t + 1)dt = (− t2 + t) 2

1

=
0

1 2

For n = 0
1

x5,n =
0

(−t + 1)e−j2πnt dt
1

= −

1 j te−j2πnt + 2 2 e−j2πnt 2πn 4π n j = − 2πn
∞ 1 1 + sin 2πnt 2 n=1 πn

+
0

j −j2πnt e 2πn

1 0

Thus, x5 (t) =

6) The signal x6 (t) is periodic with period T0 = 2T . We can write x6 (t) as


x6 (t) = n=−∞ δ(t − n2T ) − 2

∞ n=−∞

δ(t − T − n2T )

= =

1 2T



ejπ T t −

n

n 1 (1 − e−jπn )ej2π 2T t 2T n=−∞

n=−∞ ∞

1 2T

∞ n=−∞

ejπ T (t−T )

n

However, this is the Fourier series expansion of x6 (t) and we identify x6,n as x6,n = 1 1 (1 − e−jπn ) = (1 − (−1)n ) = 2T 2T 0
1 T

n even n odd

7) The signal is periodic with period T . Thus, x7,n = = n 1 2 δ (t)e−j2π T t dt T T −2 n 1 j2πn d = (−1) e−j2π T t T dt T2 t=0 T

8) The signal x8 (t) is real even and periodic with period T0 = x8,n = 2f0 = f0 = =
1 4f0 1 − 4f 1 4f0 1 − 4f 0 0

1 2f0 .

Hence, x8,n = a8,n /2 or

cos(2πf0 t) cos(2πn2f0 t)dt cos(2πf0 (1 + 2n)t)dt + f0
1 4f0 1 − 4f 0

cos(2πf0 (1 − 2n)t)dt

1 1 1 1 4f 4f sin(2πf0 (1 − 2n)t)| 10 sin(2πf0 (1 + 2n)t)| 10 + 2π(1 + 2n) 2π(1 − 2n) 4f0 4f0 n 1 1 (−1) + π (1 + 2n) (1 − 2n)

9) The signal x9 (t) = cos(2πf0 t) + | cos(2πf0 t)| is even and periodic with period T0 = 1/f0 . It is 1 1 1 3 equal to 2 cos(2πf0 t) in the interval [− 4f0 , 4f0 ] and zero in the interval [ 4f0 , 4f0 ]. Thus x9,n = 2f0 = f0 = =
1 4f0 1 − 4f 1 4f0 1 − 4f 0 0

cos(2πf0 t) cos(2πnf0 t)dt cos(2πf0 (1 + n)t)dt + f0
1 4f0 1 − 4f 0

cos(2πf0 (1 − n)t)dt

1 1 1 1 4f 4f sin(2πf0 (1 + n)t)| 10 + sin(2πf0 (1 − n)t)| 10 2π(1 + n) 2π(1 − n) 4f0 4f0 π 1 π 1 sin( (1 + n)) + sin( (1 − n)) π(1 + n) 2 π(1 − n) 2

Thus x9,n is zero for odd values of n unless n = ±1 in which case x9,±1 = (n = 2l) then 1 1 (−1)l + x9,2l = π 1 + 2l 1 − 2l

1 2.

When n is even

3

Problem 2.3 It follows directly from the uniqueness of the decomposition of a real signal in an even and odd part. Nevertheless for a real periodic signal x(t) = The even part of x(t) is xe (t) = = x(t) + x(−t) 2 ∞ n n 1 a0 + an (cos(2π t) + cos(−2π t)) 2 T0 T0 n=1 n n t) + sin(−2π t)) T0 T0 ∞ a0 n + an cos(2π t) 2 T0 n=1 +bn (sin(2π
∞ a0 n n + an cos(2π t) + bn sin(2π t) 2 T0 T0 n=1

=

The last is true since cos(θ) is even so that cos(θ) + cos(−θ) = 2 cos θ whereas the oddness of sin(θ) provides sin(θ) + sin(−θ) = sin(θ) − sin(θ) = 0. The odd part of x(t) is xo (t) = − Problem 2.4 a) The signal is periodic with period T . Thus xn = 1 T
T 0

x(t) − x(−t) 2 ∞ bn sin(2π n=1 n t) T0

e−t e−j2π T t dt = n 1 T

T 0

e−(j2π T +1)t dt n = − = If we write xn =

T n 1 1 e−(j2π T +1)t = − e−(j2πn+T ) − 1 n T j2π T + 1 j2πn + T 0 1 T − j2πn [1 − e−T ] = 2 [1 − e−T ] j2πn + T T + 4π 2 n2

an −jbn 2

we obtain the trigonometric Fourier series expansion coefficients as 2T [1 − e−T ], T 2 + 4π 2 n2 bn = 4πn [1 − e−T ] T 2 + 4π 2 n2

an =

b) The signal is periodic with period 2T . Since the signal is odd we obtain x0 = 0. For n = 0 xn = = = = = 1 2T 1 2T 2 1 2T 2
T −T T

x(t)e−j2π 2T t dt = n 1 2T

T −T

t −j2π n t 2T dt e T

−T

te

n −jπ T t

dt
T −T

jT −jπ n t T 2 −jπ n t T + T te e πn π 2 n2 jT 2 T2

1 jT 2 jπn T2 e−jπn + 2 2 e−jπn + e − 2 2 ejπn 2T 2 πn π n πn π n j (−1)n πn 4

The trigonometric Fourier series expansion coefficients are: an = 0, bn = (−1)n+1 2 πn

c) The signal is periodic with period T . For n = 0 1 x0 = T If n = 0 then xn = = = = = 1 T 1 T
T 2 T 2

−T 2

x(t)dt =

3 2

−T 2 T 2 −T 2

x(t)e−j2π T t dt n e−j2π T t dt + n T 2

1 T

T 4

−T 4

e−j2π T t dt n T 4

j −j2π n t T e 2πn

−T 2

j −j2π n t T e + 2πn

−T 4

n n j e−jπn − ejπn + e−jπ 2 − e−jπ 2 2πn n 1 n 1 sin(π ) = sinc( ) πn 2 2 2

Note that xn = 0 for n even and x2l+1 = coefficients are: a0 = 3, , a2l = 0,

1 l π(2l+1) (−1) .

The trigonometric Fourier series expansion , bn = 0, ∀n

, a2l+1 =

2 (−1)l , π(2l + 1)

d) The signal is periodic with period T . For n = 0 x0 = If n = 0 then xn = 1 T 1 + T = 3 T2 − + = 3 T2
T 0
2T 3 T 3

1 T

T

x(t)dt =
0

2 3

x(t)e−j2π T t dt = n 1 T n 1 e−j2π T t dt + T

3 −j2π n t T dt te 0 T T n 3 (− t + 3)e−j2π T t dt 2T T 3
T 3

T 3

jT −j2π n t T 2 −j2π n t T + T te e 2πn 4π 2 n2 n jT −j2π n t T + te e−j2π T t 2 n2 2πn 4π 2T 3 T 3

0 T
2T 3

T2

j −j2π n t T e 2πn

+

3 jT −j2π n t T e T 2πn

T
2T 3

3 2πn ) − 1] [cos( 2 n2 2π 3

The trigonometric Fourier series expansion coefficients are: 4 a0 = , 3 an = 3 π 2 n2 [cos( 5 2πn ) − 1], 3 bn = 0, ∀n

e) The signal is periodic with period T . Since the signal is odd x0 = a0 = 0. For n = 0 xn = 1 T + = 1 T
T 2

−T 2

1 x(t)dt = T
T 4

T 4

−T 2

−e−j2π T t dt n T 2 T 4

−T 4

4 −j2π n t 1 T dt + te T T

e−j2π T t dt n T 4

4 T2 1 − T

jT −j2π n t T 2 −j2π n t T + T te e 2πn 4π 2 n2 jT −j2π n t T e 2πn
−T 4 −T 2

−T 4
T 2 T 4

1 + T

jT −j2π n t T e 2πn

=

2 sin( πn ) j j n 2 (−1)n − = (−1)n − sinc( ) πn πn πn 2 j πn .

For n even, sinc( n ) = 0 and xn = 2 an = 0, ∀n,

The trigonometric Fourier series expansion coefficients are:
1 − πl 2 π(2l+1) [1 2(−1)l π(2l+1) ]

bn =

+

n = 2l n = 2l + 1

f ) The signal is periodic with period T . For n = 0 x0 = For n = 0 xn = = 1 T 3 T2 3 − 2 T + = n 3 1 ( t + 2)e−j2π T t dt + T −T T 3

1 T

T 3

−T 3

x(t)dt = 1

0

T 3

0

n 3 (− t + 2)e−j2π T t dt T

jT −j2π n t T 2 −j2π n t T + T te e 2πn 4π 2 n2 jT −j2π n t T 2 −j2π n t T + T te e 2πn 4π 2 n2
0 −T 3

0 −T 3
T 3

0
T 3

2 jT −j2π n t T e T 2πn

+

2 jT −j2π n t T e T 2πn

0

3 1 1 2πn 2πn ) + ) − cos( sin( 2 n2 2 π πn 3 3 3 π 2 n2 2πn 2πn 1 1 − cos( ) + sin( ) , 2 3 πn 3

The trigonometric Fourier series expansion coefficients are: a0 = 2, an = 2 bn = 0, ∀n

Problem 2.5 1) The signal y(t) = x(t − t0 ) is periodic with period T = T0 . yn = = 1 T0 1 T0 α+T0 α α−t0 +T0 α−t0
0

x(t − t0 )e x(v)e

n −j2π T t 0

dt

n −j2π T

0

(v + t0 )dv n −j2π T v 0

= e

n −j2π T t0

1 T0

α−t0 +T0 α−t0

x(v)e

dv

= xn e

n −j2π T t0 0

6

where we used the change of variables v = t − t0 2) For y(t) to be periodic there must exist T such that y(t + mT ) = y(t). But y(t + T ) = x(t + T )ej2πf0 t ej2πf0 T so that y(t) is periodic if T = T0 (the period of x(t)) and f0 T = k for some k in Z. In this case yn = = 1 T0 1 T0 α+T0 α α+T0 α

x(t)e x(t)e

n −j2π T t j2πf0 t 0

e

dt

−j2π

(n−k) t T0

dt = xn−k

3) The signal y(t) is periodic with period T = T0 /α. yn = = 1 T 1 T0 β+T β βα+T0 βα

y(t)e−j2π T t dt = n α T0

β+ β

T0 α

x(αt)e

−j2π nα t T
0

dt

x(v)e

n −j2π T v 0

dv = xn

where we used the change of variables v = αt. 4) yn = n 1 α+T0 −j2π T t 0 dt x (t)e T0 α α+T0 n n 1 α+T0 n −j2π T t 1 −j2π T t 0 0 dt x(t)e − (−j2π )e = T0 T0 α T0 α n n 1 α+T0 n −j2π T t 0 dt = j2π = j2π x(t)e xn T0 T0 α T0

Problem 2.6 1 T0 α+T0 α

x(t)y ∗ (t)dt = =

1 T0


α+T0 α ∞



xn e n=−∞ ∗ xn ym

j2πn t T0

∞ m=−∞

∗ ym e

− j2πm t T
0

dt

n=−∞ m=−∞ ∞ ∞

1 T0

α+T0 α

e

j2π(n−m) t T0

dt

= n=−∞ m=−∞

∗ xn ym δmn =

∞ n=−∞

∗ xn yn

Problem 2.7 Using the results of Problem 2.6 we obtain 1 T0 Since the signal has finite power 1 T0 Thus,
∞ 2 n=−∞ |xn | α+T0 α α+T0 α

x(t)x∗ (t)dt =

∞ n=−∞

|xn |2

|x(t)|2 dt = K < ∞

= K < ∞. The last implies that |xn | → 0 as n → ∞. To see this write
∞ n=−∞

|xn |2 =

−M n=−∞

M

|xn |2 + n=−M |xn |2 +

∞ n=M

|xn |2

7

Each of the previous terms is positive and bounded by K. Assume that |xn |2 does not converge to zero as n goes to infinity and choose = 1. Then there exists a subsequence of xn , xnk , such that |xnk | > = 1, Then
∞ n=M

for nk > N ≥ M |xn |2 ≥ nk |xn |2 ≥

∞ n=N

|xnk |2 = ∞

This contradicts our assumption that converge to zero as n → ∞. Problem 2.8 The power content of x(t) is Px = lim 1 T →∞ T

∞ n=M

|xn

|2

is finite. Thus |xn |, and consequently xn , should

T 2

−T 2

|x(t)|2 dt =

1 T0

T0 0

|x(t)|2 dt

But |x(t)|2 is periodic with period T0 /2 = 1 so that Px = From Parseval’s theorem 1 Px = T0 α+T0 α

2 T0

T0 /2 0

|x(t)|2 dt =

2 3 t 3T0

T0 /2

=
0

1 3

a2 1 ∞ 2 |x(t)| dt = |xn | = 0 + (a + b2 ) n 4 2 n=1 n n=−∞
2 2



For the signal under consideration an = Thus, 1 3 = = But, 1 ∞ 2 1 ∞ 2 a + b 2 n=1 2 n=1 8 π4
∞ l=0 ∞ l=0

− π24n2 0

n odd n even

bn =

2 − πn 0

n odd n even

1 2 + (2l + 1)4 π 2

∞ l=0

1 (2l + 1)2

1 π2 = 2 (2l + 1) 8

and by substituting this in the previous formula we obtain
∞ l=0

1 π4 = (2l + 1)4 96

Problem 2.9 1) Since (a − b)2 ≥ 0 we have that ab ≤ with equality if a = b. Let n 1 2

a2 b2 + 2 2 n 2 βi i=1
1 2

A= i=1 2 αi

,

B=

8

Then substituting αi /A for a and βi /B for b in the previous inequality we obtain
2 2 αi βi 1 βi 1 αi + ≤ AB 2 A2 2 B 2

with equality if

αi βi

= n A B

= k or αi = kβi for all i. Summing both sides from i = 1 to n we obtain ≤ = 1 2 n i=1 2 1 αi + 2 A 2 n 2 αi i=1 n i=1 2 βi B2 n 2 βi = i=1

i=1

αi βi AB

1 2A2

1 + 2B 2

1 2 1 A + B2 = 1 2A2 2B 2
1 2 1 2

Thus, 1 AB n n n

n 2 βi i=1

αi βi ≤ 1 ⇒ i=1 i=1

αi βi ≤ i=1 2 αi

Equality holds if αi = kβi , for i = 1, . . . , n.
∗ ∗ 2) The second equation is trivial since |xi yi | = |xi ||yi |. To see this write xi and yi in polar jθxi jθyi ∗ coordinates as xi = ρxi e and yi = ρyi e . Then, |xi yi | = |ρxi ρyi ej(θxi −θyi ) | = ρxi ρyi = |xi ||yi | = ∗ |xi ||yi |. We turn now to prove the first inequality. Let zi be any complex with real and imaginary components zi,R and zi,I respectively. Then, n 2 n n 2 n 2 n 2

zi i=1 = =

zi,R i=1 n n i=1 m=1

+j i=1 zi,I

= i=1 zi,R

+ i=1 zi,I

(zi,R zm,R + zi,I zm,I )

Since (zi,R zm,I − zm,R zi,I )2 ≥ 0 we obtain
2 2 2 2 (zi,R zm,R + zi,I zm,I )2 ≤ (zi,R + zi,I )(zm,R + zm,I )

Using this inequality in the previous equation we get n 2 n n

zi i=1 = i=1 m=1 n n

(zi,R zm,R + zi,I zm,I )
2 2 2 2 (zi,R + zi,I ) 2 (zm,R + zm,I ) 2 i=1 m=1 n 2 (zi,R i=1 2
1 1 1

≤ = Thus n n m=1 2

2 + zi,I ) 2

2 2 (zm,R + zm,I ) 2

1

n

= i=1 2 2 (zi,R + zi,I ) 2

1

2

n

zi i=1 ≤ i=1 2 (zi,R

+

2 1 zi,I ) 2

n

n

or i=1 zi ≤ i=1 |zi | zi,R zi,I

∗ The inequality now follows if we substitute zi = xi yi . Equality is obtained if zi = zm = θ.

=

zm,R zm,I

= k1 or

3) From 2) we obtain n i=1 2 ∗ xi yi n

≤ i=1 |xi ||yi |

9

But |xi |, |yi | are real positive numbers so from 1) n n

|xi ||yi | ≤ i=1 i=1

|xi |

2

1 2

n

|yi | i=1 2

1 2

Combining the two inequalities we get n i=1 2 ∗ xi yi n

≤ i=1 |xi |

2

1 2

n

|yi | i=1 2

1 2

∗ ∗ From part 1) equality holds if αi = kβi or |xi | = k|yi | and from part 2) xi yi = |xi yi |ejθ . Therefore, the two conditions are |xi | = k|yi | xi − yi = θ

which imply that for all i, xi = Kyi for some complex constant K. 3) The same procedure can be used to prove the Cauchy-Schwartz inequality for integrals. An easier approach is obtained if one considers the inequality |x(t) + αy(t)| ≥ 0, Then 0 ≤ =
∞ −∞ ∞ −∞ ∞ −∞

for all α

|x(t) + αy(t)|2 dt = |x(t)|2 dt + α
∞ −∞

(x(t) + αy(t))(x∗ (t) + α∗ y ∗ (t))dt
∞ −∞

x∗ (t)y(t)dt + α∗

x(t)y ∗ (t)dt + |a|2
∞ ∗ −∞ x (t)y(t)dt

∞ −∞

|y(t)|2 dt

The inequality is true for

∞ ∗ −∞ x (t)y(t)dt

= 0. Suppose that
∞ 2 −∞ |x(t)| dt ∞ ∗ −∞ x (t)y(t)dt

= 0 and set

α=− Then, 0≤− and
∞ −∞ ∞ −∞

|x(t)| dt +
2

[

∞ 2 2 ∞ 2 −∞ |x(t)| dt] −∞ |y(t)| dt ∞ | −∞ x(t)y ∗ (t)dt|2
1 2

x(t)y (t)dt ≤



∞ −∞

|x(t)| dt
2

∞ −∞

|y(t)| dt
2

1 2

Equality holds if x(t) = −αy(t) a.e. for some complex α. Problem 2.10 1) Using the Fourier transform pair e−α|t| −→
F

α2

2α 2α = 2 2 + (2πf ) 4π

1 α2 4π 2

+ f2

and the duality property of the Fourier transform: X(f ) = F[x(t)] ⇒ x(−f ) = F[X(t)] we obtain 2α F 4π 2 With α = 2π we get the desired result F 1 = πe−2π|f | 1 + t2 10 1 α2 4π 2

+ t2

= e−α|f |

2) F[x(t)] = F[Π(t − 3) + Π(t + 3)] = sinc(f )e−j2πf 3 + sinc(f )ej2πf 3 = 2sinc(f ) cos(2π3f )

3) F[x(t)] = F[Λ(2t + 3) + Λ(3t − 2)] 2 3 = F[Λ(2(t + )) + Λ(3(t − )] 2 3 2 f jπf 3 1 f 1 sinc2 ( )e + sinc2 ( )e−j2πf 3 = 2 2 3 3

4) T (f ) = F[sinc3 (t)] = F[sinc2 (t)sinc(t)] = Λ(f ) Π(f ). But


Π(f ) Λ(f ) =

−∞

Π(θ)Λ(f − θ)dθ =

1 2

−1 2

Λ(f − θ)dθ =

f+ 1 2 f− 1 2

Λ(v)dv

For For For

3 f ≤ − =⇒ T (f ) = 0 2 1 3 − < f ≤ − =⇒ T (f ) = 2 2 1 1 − < f ≤ =⇒ T (f ) = 2 2 1 = ( v 2 + v) 2

f+ 1 2 −1 0 f− 1 2

1 (v + 1)dv = ( v 2 + v) 2 f+ 1 2 0

f+ 1 2 −1

1 3 9 = f2 + f + 2 2 8

(v + 1)dv + 1 + (− v 2 + v) 2

(−v + 1)dv = −f 2 +
1 f− 1 2

0 f− 1 2

f+ 1 2 0

3 4 9 1 3 = f2 − f + 2 2 8

For For Thus,

1 3 < f ≤ =⇒ T (f ) = 2 2 3 < f =⇒ T (f ) = 0 2

1 (−v + 1)dv = (− v 2 + v) 1 2 f− 2

1

  0   1 2  f + 3f + 9  2  2 8

T (f ) =

 1 2  f −   2  

−f 2 + 0

3 4 3 2f

+

9 8

f ≤ −3 2 −3 < f ≤ −1 2 2 −1 < f ≤ 1 2 2 3 1 2 1, then x(at) is a contracted form of x(t) whereas if a < 1, x(at) is an expanded version of x(t). This means that if we expand a signal in the time domain its frequency domain representation (Fourier transform) contracts and if we contract a signal in the time domain its frequency domain representation expands. This is exactly what one expects since contracting a signal in the time domain makes the changes in the signal more abrupt, thus, increasing its frequency content.

13

Problem 2.14 We have F[x(t) y(t)] = =
−∞ −∞



−∞

∞x(τ )y(t − τ ) dτ e−j2πf t dt
−∞

∞x(τ )

∞y(t − τ )e−j2πf (t−τ ) dt e−j2πf τ dτ

Now with the change of variable u = t − τ , we have
−∞

∞y(t − τ )e−j2πf (t−τ ) dt =

= F[y(t)] = Y (f )

−∞

∞f y(u)e−j2πf u du

and, therefore, F[x(t) y(t)] =
−∞

∞x(τ )Y (f )e−j2πf τ dτ

= X(f ) · Y (f ) Problem 2.15 We start with the Fourier transform of x(t − t0 ), F[x(t − t0 )] =
−∞

∞x(t − t0 )e−j2πf t dt

With a change of variable of u = t − t0 , we obtain F[x(t − t0 )] = = e
−∞

∞x(u)e−j2πf t0 e−j2πf u du
−∞

−j2πf t0

∞x(u)e−j2πf u du

= e−j2πf t0 F[x(t)] Problem 2.16 ∞x(t)y ∗ (t) dt = = = ∞ ∞ ∞X(f )ej2πf t df ∞X(f )ej2πf t df
−∞

−∞

−∞ −∞ −∞

−∞ −∞

−∞ −∞ −∞

∞Y (f )ej2πf t df



dt dt df

∞Y ∗ (f )e−j2πf t df

∞X(f )

∞Y ∗ (f )

∞ej2πt(f −f ) dt df

Now using properties of the impulse function.
−∞

∞ej2πt(f −f ) dt = δ(f − f )

and therefore
−∞

∞x(t)y ∗ (t) dt = =

−∞ −∞

∞X(f )

−∞

∞Y ∗ (f )δ(f − f ) df

df

∞X(f )Y ∗ (f ) df 14

where we have employed the sifting property of the impulse signal in the last step. Problem 2.17 (Convolution theorem:) F[x(t) y(t)] = F[x(t)]F[y(t)] = X(f )Y (f ) Thus sinc(t) sinc(t) = F −1 [F[sinc(t) sinc(t)]] = F −1 [F[sinc(t)] · F[sinc(t)]] = F −1 [Π(f )Π(f )] = F −1 [Π(f )] = sinc(t)

Problem 2.18 F[x(t)y(t)] = = = =
∞ −∞ ∞ −∞ ∞ −∞ ∞ −∞

x(t)y(t)e−j2πf t dt
∞ −∞

X(θ)ej2πθt dθ y(t)e−j2πf t dt
∞ −∞

X(θ)

y(t)e−j2π(f −θ)t dt dθ

X(θ)Y (f − θ)dθ = X(f ) Y (f )

Problem 2.19 1) Clearly


x1 (t + kT0 ) = n=−∞ ∞

x(t + kT0 − nT0 ) = x(t − mT0 ) = x1 (t)

∞ n=−∞

x(t − (n − k)T0 )

= m=−∞ where we used the change of variable m = n − k. 2) x1 (t) = x(t) n=−∞ ∞

δ(t − nT0 )

This is because
∞ −∞ ∞

x(τ ) n=−∞ δ(t − τ − nT0 )dτ =





n=−∞ −∞

x(τ )δ(t − τ − nT0 )dτ =

∞ n=−∞

x(t − nT0 )

3) F[x1 (t)] = F[x(t) = X(f ) 1 T0
∞ n=−∞ ∞ n=−∞

δ(t − nT0 )] = F[x(t)]F[ δ(f − n 1 )= T0 T0




δ(t − nT0 )]

n=−∞

X( n=−∞ n n )δ(f − ) T0 T0

15

Problem 2.20 1) By Parseval’s theorem
∞ −∞

sinc5 (t)dt =

∞ −∞

sinc3 (t)sinc2 (t)dt =

∞ −∞

Λ(f )T (f )df

where T (f ) = F[sinc3 (t)] = F[sinc2 (t)sinc(t)] = Π(f ) Λ(f ) But Π(f ) Λ(f ) =
∞ −∞

Π(θ)Λ(f − θ)dθ =

1 2

−1 2

Λ(f − θ)dθ =

f+ 1 2 f− 1 2

Λ(v)dv

For For For

3 f ≤ − =⇒ T (f ) = 0 2 1 3 − < f ≤ − =⇒ T (f ) = 2 2 1 1 − < f ≤ =⇒ T (f ) = 2 2 1 = ( v 2 + v) 2

f+ 1 2 −1 0 f− 1 2

1 (v + 1)dv = ( v 2 + v) 2 f+ 1 2 0

f+ 1 2 −1

1 3 9 = f2 + f + 2 2 8

(v + 1)dv + 1 + (− v 2 + v) 2

(−v + 1)dv = −f 2 +
1 f− 1 2

0 f− 1 2

f+ 1 2 0

3 4 9 1 3 = f2 − f + 2 2 8

For For Thus,

3 1 < f ≤ =⇒ T (f ) = 2 2 3 < f =⇒ T (f ) = 0 2

1 (−v + 1)dv = (− v 2 + v) 1 2 f− 2

1

  0   1 2   2f + 3f + 9  2 8

T (f ) =

 1 2  f −   2  

−f 2 + 0

3 4 3 2f

+

9 8

f ≤ −3 2 −3 < f ≤ −1 2 2 −1 < f ≤ 1 2 2 3 1 2 0.5. This shows that limt→∞ e8 f (t) ≥ limt→∞ e = ∞. 16 This shows that the signal is not energy-type. 1 T To check if the signal is power type, we obviously have limT →∞ T 0 e−2t cos2 t dt = 0. Therefore P = = 1 T →∞ T lim
T →∞ T 0 2

e2t cos2 (t) dt − 3/8

1/4 e2 T (cos(T ))2 + 1/4 e2 T cos(T ) sin(T ) + 1/8 eT lim = ∞ T

Therefore x2 (t) is neither power- nor energy-type. 3)


E x3 =

−∞

(sgn(t))2 dt =

∞ −∞

1 dt

= ∞

and hence the signal is not energy-type. To find the power Px 3
T 1 (sgn(t)))2 dt T →∞ 2T −T T 1 12 dt = lim T →∞ 2T −T 1 2T = 1 = lim T →∞ 2T

=

lim

22

4) Since x4 (t) is periodic (or almost periodic when f1 /f2 is not rational) the signal is not energy type. To see whether it is power type, we have Px 4 1 T →∞ 2T 1 = lim T →∞ 2T A2 + B 2 = 2 = lim
T −T T −T

(A cos 2πf1 t + B cos 2πf2 t)2 dt A2 cos2 2πf1 t + B 2 cos2 2πf2 t + 2AB cos 2πf1 t cos 2πf2 t dt

Problem 2.30 1) P = 1 T →∞ 2T 1 = lim T →∞ 2T = A2 lim
T −T T −T

Aej(2πf0 t+θ) A2 dt

2

dt

2) P 1 T →∞ 2T 1 = 2 = lim
T 0

12 dt

3)
T

E = =

√ lim 2K 2 t T →∞ √ = lim 2K 2 T
T →∞

T →∞ 0

lim

√ K 2 / t dt
T 0

= ∞

therefore, it is not energy-type. To find the power P =
T √ 1 K 2 / t dt T →∞ 2T −T √ 1 2K 2 T = lim T →∞ 2T = 0

lim

and hence it is not power-type either. Problem 2.31 1) x(t) = e−αt u−1 (t). The spectrum of the signal is X(f ) = GX (f ) = |X(f )|2 = Thus, RX (τ ) = F −1 [GX (f )] = 23 1 −α|τ | e 2α

1 α+j2πf

and the energy spectral density

1 α2 + 4π 2 f 2

The energy content of the signal is EX = RX (0) = 1 2α

2) x(t) = sinc(t). Clearly X(f ) = Π(f ) so that GX (f ) = |X(f )|2 = Π2 (f ) = Π(f ). The energy content of the signal is


EX =

−∞

Π(f )df =

1 2

−1 2

Π(f )df = 1

3) x(t) = ∞ n=−∞ Λ(t − 2n). The signal is periodic and thus it is not of the energy type. The power content of the signal is Px = = = 1 2 1 2 1 3
1 −1

|x(t)|2 dt =

1 2
0 −1

0 −1

(t + 1)2 dt +
0

1

(−t + 1)2 dt
1 0

1 3 t + t2 + t 3

+

1 2

1 3 t − t2 + t 3

The same result is obtain if we let SX (f ) = with x0 = 1 , x2l = 0 and x2l+1 = 2 PX = n=−∞ 2 π(2l+1) ∞ ∞ n=−∞

|xn |2 δ(f −

n ) 2

(see Problem 2.2). Then |xn |2
∞ l=0

=

8 1 + 2 4 π

8 π2 1 1 1 = = + 2 4 (2l + 1) 4 π 96 3

4) EX = lim
T 2

T →∞ − T 2

|u−1 (t)| dt = lim
2

T 2

T →∞ 0

dt = lim

T →∞

T =∞ 2

Thus, the signal is not of the energy type. PX = lim 1 T →∞ T
T 2

−T 2

|u−1 (t)|2 dt = lim

1 1T = T →∞ T 2 2

Hence, the signal is of the power type and its power content is 1 . To find the power spectral density 2 we find first the autocorrelation RX (τ ). RX (τ ) = 1 T →∞ T lim
T 2

−T 2
T

u−1 (t)u−1 (t − τ )dt

1 2 dt = lim T →∞ T τ 1 1 T = lim ( − τ ) = T →∞ T 2 2 Thus, SX (f ) = F[RX (τ )] = 1 δ(f ). 2 24

5) Clearly |X(f )|2 = π 2 sgn2 (f ) = π 2 and EX = limT →∞ t 1 xT (t) = Π( ) t T Then,

energy type for the energy content is not bounded. Consider now the signal

−T 2

T 2

π 2 dt = ∞. The signal is not of the

XT (f ) = −jπsgn(f ) T sinc(f T ) and |XT (f )|2 SX (f ) = lim = lim π 2 T T →∞ T →∞ T f −∞

sinc(vT )dv −



2

sinc(vT )dv f However, the squared term on the right side is bounded away from zero so that SX (f ) is ∞. The signal is not of the power type either. Problem 2.32 1) a) If α = γ, |Y (f )|2 = |X(f )|2 |H(f )|2 1 = 2 + 4π 2 f 2 )(β 2 + 4π 2 f 2 ) (α 1 1 1 − = β 2 − α2 α2 + 4π 2 f 2 β 2 + 4π 2 f 2 From this, RY (τ ) = If α = γ then
1 β 2 −α2 1 −α|τ | 2α e



1 −β|τ | 2β e

and Ey = Ry (0) =

1 2αβ(α+β) .

GY (f ) = |Y (f )|2 = |X(f )|2 |H(f )|2 = The energy content of the signal is EY = = = = b) H(f ) =
1 γ+j2πf

(α2

1 + 4π 2 f 2 )2

1 + 4π 2 f 2 )2 −∞ ∞ 2α 2α 1 df 2 2 + 4π 2 f 2 α2 + 4π 2 f 2 4α −∞ α ∞ ∞ 1 1 e−2α|t| dt = 2 e−2αt dt 2 2 4α −∞ 4α 0 1 −2αt ∞ 1 1 e − = 2α2 2α 4α3 0 (α2
1 . γ 2 +4π 2 f 2



=⇒ |H(f )|2 =

The energy spectral density of the output is 1 Π(f ) + 4π 2 f 2

GY (f ) = GX (f )|H(f )|2 = The energy content of the signal is EY = =
1 2

γ2

−1 2

fγ 1 1 arctan df = 2 + 4π 2 f 2 γ 2πγ 2π

1 2

−1 2

fγ 1 arctan πγ 4π

25

c) The power spectral density of the output is SY (f ) = = = n n |xn |2 |H( )|2 δ(f − ) 2 2 n=−∞
∞ 1 |x2l+1 |2 2l + 1 δ(f ) + 2 δ(f − ) 2 2 + π 2 (2l + 1)2 4γ γ 2 l=0 ∞

8 1 δ(f ) + 2 4γ 2 π

∞ l=0

2l + 1 1 δ(f − ) (2l + 1)4 (γ 2 + π 2 (2l + 1)2 ) 2

The power content of the output signal is PY = = = = n |xn |2 |H( )|2 2 n=−∞ 1 8 + 4γ 2 π 2 1 8 + 2 2 4γ π
∞ l=0 ∞

π2 π4 1 − 4 + 4 2 γ 2 (2l + 1)4 γ (γ + π 2 (2l + 1)2 ) γ (2l + 1)2
∞ γ2 l=0 π 2

π4 π2 π2 − 4+ 4 γ 2 96 8γ γ

1 + (2l + 1)2

1 π 2 2π 2 γ − 4 + 5 tanh( ) 2 3γ γ γ 2

where we have used the fact tanh( πx 4x )= 2 π
∞ l=0

1 , 2 + (2l + 1)2 x

tanh(x) =

ex − e−x ex + e−x

d) The power spectral density of the output signal is SY (f ) = SX (f )|H(f )|2 = The power content of the signal is


1 1 1 δ(f ) = 2 δ(f ) 2 + 4π 2 f 2 2γ 2γ

PY =

−∞

SY (f )df =

1 2γ 2

e) X(f ) = −jπsgn(f ) so that |X(f )|2 = π 2 for all f except f = 0 for which |X(f )|2 = 0. Thus, the energy spectral density of the output is GY (f ) = |X(f )|2 |H(f )|2 = and the energy content of the signal EY = π 2
∞ −∞

π2 γ 2 + 4π 2 f 2

f 2π 1 1 arctan( ) df = π 2 2 + 4π 2 f 2 γ 2πγ γ

∞ −∞

=

π2 2γ

2) a) h(t) = sinc(6t) =⇒ H(f ) = 1 Π( f ) The energy spectral density of the output signal is GY (f ) = 6 6 1 GX (f )|H(f )|2 and with GX (f ) = α2 +4π2 f 2 we obtain GY (f ) = 1 f 1 2 f 1 Π ( )= Π( ) α2 + 4π 2 f 2 36 6 36(α2 + 4π 2 f 2 ) 6 26

The energy content of the signal is EY = = = 1 1 3 df 36 −3 α2 + 4π 2 f 2 −∞ 1 2π 3 arctan(f ) 36(2απ) α −3 6π 1 arctan( ) 36απ α GY (f )df = f 1 36 Π( 6 )Π(f ) ∞

b) The energy spectral density is GY (f ) = output

=

1 36 Π(f )

and the energy content of the

EY (f ) = c)

1 36

1 2

−1 2 ∞

df =

1 36 1 n n Π( )δ(f − ) 36 12 2
2 π(2l+1)2

SY (f ) = SX (f )|H(f )|2 = n Since Π( 12 ) is nonzero only for n such that Problem 2.2), we obtain n 12

|xn |2

n=−∞



1 2

and x0 = 1 , x2l = 0 and x2l+1 = 2

(see

SY (f ) = = The power content of the signal is PY =

2 1 1 2l + 1 δ(f ) + |x2l+1 |2 δ(f − ) 4 · 36 36 2 l=−3

1 1 δ(f ) + 2 144 9π

1 2l + 1 ) δ(f − 4 (2l + 1) 2 l=−3

2

2 1 .2253 1 1 1 + 2 (1 + + + ) == 144 9π 81 625 144 π2

1 1 1 d) SX (f ) = 1 δ(f ), |H(f )|2 = 36 Π( f ). Hence, SY (f ) = 72 Π( f )δ(f ) = 72 δ(f ). The power content 2 6 6 ∞ 1 1 of the signal is PY = −∞ 72 δ(f )df = 72 . 1 1 1 e) y(t) = sinc(6t) t = πsinc(6t) πt . However, convolution with πt is the Hilbert transform which is known to conserve the energy of the signal provided that there are no impulses at the origin in the frequency domain (f = 0). This is the case of πsinc(6t), so that ∞

EY =

−∞

π 2 sinc2 (6t)dt = π 2

∞ −∞

π2 1 2 f Π ( )df = 36 36 36

3 −3

df =

π2 6

The energy spectral density is GY (f ) = 1 2 f 2 2 Π ( )π sgn (f ) 36 6

1 3) πt is the impulse response of the Hilbert transform filter, which is known to preserve the energy of the input signal. |H(f )|2 = sgn2 (f ) a) The energy spectral density of the output signal is

GY (f ) = GX (f )sgn2 (f ) =

GX (f ) f = 0 0 f =0

Since GX (f ) does not contain any impulses at the origin EY = EX = 27 1 2α

b) Arguing as in the previous question GY (f ) = GX (f )sgn2 (f ) = Since Π(f ) does not contain any impulses at the origin E Y = EX = 1 c) SY (f ) = SX (f )sgn2 (f ) = But, x2l = 0, x2l+1 =
1 π(2l+1) ∞ n=−∞

Π(f ) f = 0 0 f =0

|xn |2 δ(f −

n ), 2

n=0

so that
∞ l=0

SY (f ) = 2

|x2l+1 |2 δ(f −

8 n )= 2 2 π

∞ l=0

1 n δ(f − ) (2l + 1)4 2

The power content of the output signal is PY = 8 π2
∞ l=0

1 8 π2 1 = = 2 4 (2l + 1) π 96 12

1 d) SX (f ) = 2 δ(f ) and |H(f )|2 = sgn2 (f ). Thus SY (f ) = SX (f )|H(f )|2 = 0, and the power content of the signal is zero. e) The signal 1 has infinite energy and power content, and since GY (f ) = GX (f )sgn2 (f ), SY (f ) = t 1 SX (f )sgn2 (f ) the same will be true for y(t) = 1 πt . t

Problem 2.33 Note that Px =

∞ −∞

Sx (f )df = lim

1 T →∞ T

T 2

−T 2

|x(t)|2 dt

But in the interval [− T , T ], |x(t)|2 = |xT (t)|2 so that 2 2 1 Px = lim T →∞ T Using Rayleigh’s theorem Px 1 = lim T →∞ T 1 = lim T →∞ T
T 2 T 2

−T 2

|xT (t)|2 dt

−T 2 ∞ −∞

|xT (t)|2 dt = lim GxT (f )df =


1 T →∞ T lim

∞ −∞

|XT (f )|2 df

−∞ T →∞

1 Gx (f )df T T

Comparing the last with Px =

∞ −∞ Sx (f )df

we see that 1 Gx (f ) T T

Sx (f ) = lim

T →∞

28

Problem 2.34 Let y(t) be the output signal, which is the convolution of x(t), and h(t), y(t) = Using Cauchy-Schwartz inequality we obtain |y(t)| = ≤
∞ −∞ ∞ −∞
1 2

∞ −∞ h(τ )x(t − τ )dτ .

h(τ )x(t − τ )dτ |h(τ )| dτ
2 ∞ −∞
1 2

∞ −∞ 2

|x(t − τ )| dτ
2
1 2

1 2

≤ Eh

|x(t − τ )| dτ

Squaring the previous inequality and integrating from −∞ to ∞ we obtain
∞ −∞

|y(t)|2 dt ≤ Eh





−∞ −∞

|x(t − τ )|2 dτ dt

∞ ∞ But by assumption −∞ −∞ |x(t − τ )|2 dτ dt, Eh are finite, so that the energy of the output signal is finite. Consider the LTI system with impulse response h(t) = ∞ n=−∞ Π(t − 2n). The signal is periodic with period T = 2, and the power content of the signal is PH = 1 . If the input to this system is 2 the energy type signal x(t) = Π(t), then ∞

y(t) = n=−∞ Λ(t − 2n)

which is a power type signal with power content PY = 1 . 2 Problem 2.35 For no aliasing to occur we must sample at the Nyquist rate fs = 2 · 6000 samples/sec = 12000 samples/sec With a guard band of 2000 fs − 2W = 2000 =⇒ fs = 14000 The reconstruction filter should not pick-up frequencies of the images of the spectrum X(f ). The nearest image spectrum is centered at fs and occupies the frequency band [fs − W, fs + W ]. Thus the highest frequency of the reconstruction filter (= 10000) should satisfy 10000 ≤ fs − W =⇒ fs ≥ 16000 For the value fs = 16000, K should be such that K · fs = 1 =⇒ K = (16000)−1 Problem 2.36 A f ) Π( 1000 1000 Thus the bandwidth W of x(t) is 1000/2 = 500. Since we sample at fs = 2000 there is a gap between the image spectra equal to x(t) = Asinc(1000πt) =⇒ X(f ) = 2000 − 500 − W = 1000 29

The reconstruction filter should have a bandwidth W such that 500 < W < 1500. A filter that satisfy these conditions is H(f ) = Ts Π f 2W = 1 f Π 2000 2W

and the more general reconstruction filters have the form |f | < 500 arbitrary 500 < |f | < 1500 H(f ) =   0 |f | > 1500 Problem 2.37 1)


 1  2000 

xp (t) = n=−∞ x(nTs )p(t − nTs )


= p(t) n=−∞ x(nTs )δ(t − nTs )


= p(t) x(t) n=−∞ δ(t − nTs )

Thus Xp (f ) = P (f ) · F x(t) = P (f )X(f ) F = P (f )X(f )
∞ ∞ n=−∞ ∞ n=−∞ ∞ n=−∞

δ(t − nTs ) δ(t − nTs ) δ(f − n ) Ts

1 Ts

=

n 1 P (f ) X(f − ) Ts Ts n=−∞

2) In order to avoid aliasing |f | < W .

1 Ts

> 2W . Furthermore the spectrum P (f ) should be invertible for

f 3) X(f ) can be recovered using the reconstruction filter Π( 2WΠ ) with W < WΠ < case f ) X(f ) = Xp (f )Ts P −1 (f )Π( 2WΠ

1 Ts

− W . In this

Problem 2.38 1)


x1 (t) =

(−1)n x(nTs )δ(t − nTs ) = x(t)





(−1)n δ(t − nTs )


n=−∞

n=−∞

= x(t) 

δ(t − 2lTs ) −

∞ l=−∞

δ(t − Ts − 2lTs )

l=−∞

30

Thus


1 X1 (f ) = X(f )  2Ts = = = 1 2Ts 1 2Ts 1 Ts
∞ l=−∞ ∞

∞ l=−∞

δ(f −

l 1 )− 2Ts 2Ts
∞ l=−∞ ∞ l=−∞

∞ l=−∞



δ(f −

l )e−j2πf Ts  2Ts

X(f −

l 1 )− 2Ts 2Ts

X(f − X(f −

l l )e−j2π 2Ts Ts 2Ts

l 1 X(f − )− 2Ts 2Ts l=−∞


l )(−1)l 2Ts

X(f −

l=−∞

1 l − ) 2Ts Ts

2) The spectrum of x(t) occupies the frequency band [−W, W ]. Suppose that from the periodic 1 1 k spectrum X1 (f ) we isolate Xk (f ) = Ts X(f − 2Ts − Ts ), with a bandpass filter, and we use it to reconstruct x(t). Since Xk (f ) occupies the frequency band [2kW, 2(k + 1)W ], then for all k, Xk (f ) cannot cover the whole interval [−W, W ]. Thus at the output of the reconstruction filter there will exist frequency components which are not present in the input spectrum. Hence, the reconstruction filter has to be a time-varying filter. To see this in the time domain, note that the original spectrum 1 has been shifted by f = 2Ts . In order to bring the spectrum back to the origin and reconstruct x(t) the sampled signal x1 (t) has to be multiplied by e−j2π 2Ts t = e−j2πW t . However the system described by y(t) = ej2πW t x(t)
1

is a time-varying system. 3) Using a time-varying system we can reconstruct x(t) as follows. Use the bandpass filter −W 1 1 Ts Π( f2W ) to extract the component X(f − 2Ts ). Invert X(f − 2Ts ) and multiply the resultant signal by e−j2πW t . Thus x(t) = e−j2πW t F −1 Ts Π( f −W )X1 (f ) 2W

Problem 2.39 1) The linear interpolation system can be viewed as a linear filter where the sampled signal x(t) ∞ n=−∞ δ(t − nTs ) is passed through the filter with impulse response h(t) =
 t  1+ T  s  

1− 0

t Ts

−Ts ≤ f ≤ 0 0 ≤ f ≤ Ts otherwise

To see this write


x1 (t) = x(t) n=−∞ δ(t − nTs )



h(t) = n=−∞ x(nTs )h(t − nTs )

Comparing this with the interpolation formula in the interval [nTs , (n + 1)Ts ] x1 (t) = x(nTs ) + t − nTs (x((n + 1)Ts ) − x(nTs )) Ts t − nTs t − (n + 1)Ts + x((n + 1)Ts ) 1 + = x(nTs ) 1 − Ts Ts = x(nTs )h(t − nTs ) + x((n + 1)Ts )h(t − (n + 1)Ts ) 31

we observe that h(t) does not extend beyond [−Ts , Ts ] and in this interval its form should be the one described above. The power spectrum of x1 (t) is SX1 (f ) = |X1 (f )|2 where X1 (f ) = F[x1 (t)] = F h(t) x(t) = H(f ) X(f ) = sinc2 (f Ts )
∞ n=−∞ ∞ n=−∞

δ(t − nTs ) n ) Ts

1 Ts

∞ n=−∞

δ(f − n ) Ts

X(f −

2) The system function sinc2 (f Ts ) has zeros at the frequencies f such that f Ts = k, k ∈ Z − {0}

In order to recover X(f ), the bandwidth W of x(t) should be smaller than 1/Ts , so that the whole X(f ) lies inside the main lobe of sinc2 (f Ts ). This condition is automatically satisfied if we choose Ts such that to avoid aliasing (2W < 1/Ts ). In this case we can recover X(f ) from X1 (f ) using f the lowpass filter Π( 2W ). f )X1 (f ) = sinc2 (f Ts )X(f ) Π( 2W or f )X1 (f ) X(f ) = (sinc2 (f Ts ))−1 Π( 2W If Ts f 1/W , then sinc2 (f Ts ) ≈ 1 for |f | < W and X(f ) is available using X(f ) = Π( 2W )X1 (f ).

Problem 2.40 1) W = 50Hz so that Ts = 1/2W = 10−2 sec. The reconstructed signal is


x(t) = = − n=−∞ −1

x(nTs )sinc( sinc( n=−4 t − n) Ts

4 t t − n) + sinc( − n) Ts Ts n=1

With Ts = 10−2 and t = 5 · 10−3 we obtain x(.005) = −
4 1 1 sinc( + n) + sinc( − n) 2 2 n=1 n=1 4

5 7 9 3 = −[sinc( ) + sinc( ) + sinc( ) + sinc( )] 2 2 2 2 1 3 5 7 +[sinc(− ) + sinc(− ) + sinc(− ) + sinc(− )] 2 2 2 2 9 2 π 2 9π 1 sin( ) = sinc( ) − sinc( ) = sin( ) − 2 2 π 2 9π 2 16 = 9π where we have used the fact that sinc(t) is an even function. 2) Note that (see Problem 2.41)
∞ −∞

sinc(2W t − m)sinc∗ (2W t − n)dt = 32

1 δmn 2W

with δmn the Kronecker delta. Thus,
∞ −∞

|x(t)|2 dt = =

∞ −∞ ∞

x(t)x∗ (t)dt x(nTs )x∗ (mTs ) |x(nTs )|2

∞ −∞

sinc(2W t − m)sinc∗ (2W t − n)dt

n=−∞ ∞

= n=−∞ 1 2W
−1 4

Hence
∞ −∞



|x(t)|2 dt =

1  4 = 8 · 10−2 1+ 1 = 2W n=−4 W n=1

Problem 2.41 1) Using Parseval’s theorem we obtain


A = = = =

−∞ ∞ −∞ ∞

sinc(2W t − m)sinc(2W t − n)dt F[sinc(2W t − m)]F[sinc(2W t − n)]dt (

m−n 1 2 2 f ) Π ( )e−j2πf 2W df 2W 2W −∞ W m−n 1 1 δmn e−j2πf 2W df = 2 4W −W 2W

where δmn is the Kronecker’s delta. The latter implies that {sinc(2W t − m)} form an orthogonal set of√ signals. In order to generate an orthonormal set of signals we have to weight each function by 1/ 2W . 2) The bandlimited signal x(t) can be written as


x(t) = n=−∞ x(nTs )sinc(2W t − n)

where x(nTs ) are the samples taken at the Nyquist rate. This is an orthogonal expansion relation where the basis functions {sinc(2W t − m)} are weighted by x(mTs ). 3)
∞ −∞

x(t)sinc(2W t − n)dt = =





−∞ m=−∞ ∞

x(mTs )sinc(2W t − m)sinc(2W t − n)dt


x(mTs ) m=−∞ ∞

−∞

sinc(2W t − m)sinc(2W t − n)dt

= m=−∞ x(mTs )

1 1 δmn = x(nTs ) 2W 2W

33

Problem 2.42 We define a new signal y(t) = x(t + t0 ). Then y(t) is bandlimited with Y (f ) = ej2πf t0 X(f ) and ∞ the samples of y(t) at {kTs }∞ k=−∞ are equal to the samples of x(t) at {t0 + kTs }k=−∞ . Applying the sampling theorem to the reconstruction of y(t), we have


y(t) = k=−∞ ∞

y(kTs )sinc (2W (t − kTs )) x(t0 + kTs )sinc (2W (t − kTs ))

(1) (2)

= k=−∞ and, hence, x(t + t0 ) =

∞ k=−∞

x(t0 + kTs )sinc (2W (t − kTs ))

Substituting t = −t0 we obtain the following important interpolation relation.


x(0) = k=−∞ x(t0 + kTs )sinc (2W (t0 + kTs ))

Problem 2.43 We know that x(t) = xc (t) cos(2πf0 t) − xs (t) sin(2πf0 t) x(t) = xc (t) sin(2πf0 t) + xs (t) cos(2πf0 t) ˆ We can write these relations in matrix notation as x(t) x(t) ˆ = cos(2πf0 t) − sin(2πf0 t) cos(2πf0 t) sin(2πf0 t) xc (t) xs (t) =R xc (t) xs (t)

The rotation matrix R is nonsingular (det(R) = 1) and its inverse is R−1 = Thus xc (t) xs (t) and the result follows. Problem 2.44 xc (t) = Re[xl (t)]. Thus = R−1 x(t) x(t) ˆ = cos(2πf0 t) sin(2πf0 t) − sin(2πf0 t) cos(2πf0 t) x(t) x(t) ˆ cos(2πf0 t) sin(2πf0 t) − sin(2πf0 t) cos(2πf0 t)

1 xc (t) = [xl (t) + x∗ (t)] 2 Taking the Fourier transform of the previous relation we obtain 1 Xc (f ) = [Xl (f ) + Xl∗ (−f )] 2

34

Problem 2.45

x1 (t) = x(t) sin(2πf0 t) 1 1 X1 (f ) = − X(f + f0 ) + X(f − f0 ) 2j 2j

ˆ x2 (t) = x(t) X2 (f ) = −jsgn(f )X(f )

x3 (t) = x1 (t) = x(t) sin(2πf0 t) = −x(t) cos(2πf0 t) ˆ 1 1 X3 (f ) = − X(f + f0 ) − X(f − f0 ) 2 2

x4 (t) = x2 (t) sin(2πf0 t) = x(t) sin(2πf0 t) ˆ 1 ˆ 1 ˆ X4 (f ) = − X(f + f0 ) + X(f − f0 ) 2j 2j 1 1 [−jsgn(f − f0 )X(f − f0 )] = − [−jsgn(f + f0 )X(f + f0 )] + 2j 2j 1 1 = sgn(f + f0 )X(f + f0 ) − sgn(f − f0 )X(f − f0 ) 2 2

x5 (t) = x(t) sin(2πf0 t) + x(t) cos(2πf0 t) ˆ 1 1 X5 (f ) = X4 (f ) − X3 (f ) = X(f + f0 )(sgn(f + f0 ) − 1) − X(f − f0 )(sgn(f − f0 ) + 1) 2 2 x x6 (t) = [ˆ(t) sin(2πf0 t) + x(t) cos(2πf0 t)]2 cos(2πf0 t) X6 (f ) = X5 (f + f0 ) + X5 (f − f0 ) 1 1 = X(f + 2f0 )(sgn(f + 2f0 ) − 1) − X(f )(sgn(f ) + 1) 2 2 1 1 + X(f )(sgn(f ) − 1) − X(f − 2f0 )(sgn(f − 2f0 ) + 1) 2 2 1 1 = −X(f ) + X(f + 2f0 )(sgn(f + 2f0 ) − 1) − X(f − 2f0 )(sgn(f − 2f0 ) + 1) 2 2

x7 (t) = x6 (t) 2W sinc(2W t) = −x(t) f X7 (f ) = X6 (f )Π( ) = −X(f ) 2W

35

2jX1 (f )

−jX2 (f )

1) v −f0 ¡ v ¡ ¡ v ¡

¡ ¡ f0 e

2) e e e e

e

e e

e
2X4 (f )

3) e −f0 ¡ e ¡ ¡ e

2X3 (f )

4) e e e ¡ f0 ¡ e e ¡ e e e
−f0

e

¡

¡ ¡ f0 

 

5)
−f0

X5 (f ) f0

v v

v



 

−2f0

6) e e e

X6 (f )

2f0

e e

e

¡

¡ ¡

¡

¡ ¡

7) e e e

X7 (f )

¡

¡ ¡

Problem 2.46 If x(t) is even then X(f ) is a real and even function and therefore −j sgn(f )X(f ) is an imaginary and odd function. Hence, its inverse Fourier transform x(t) will be odd. If x(t) is odd then X(f ) ˆ is imaginary and odd and −j sgn(f )X(f ) is real and even and, therefore, x(t) is even. ˆ Problem 2.47 Using Rayleigh’s theorem of the Fourier transform we have Ex = and Ex = ˆ
−∞ −∞

∞|x(t)|2 dt =

−∞

∞|X(f )|2 df

∞|ˆ(t)|2 dt = x

−∞

∞| − jsgn(f )X(f )|2 df

Noting the fact that | − = 1 except for f = 0, and the fact that X(f ) does not contain any impulses at the origin we conclude that Ex = Ex . ˆ jsgn(f )|2 Problem 2.48 Here we use Parseval’s Theorem of the Fourier Transform to obtain
−∞

∞x(t)ˆ(t) dt = x

−∞

∞X(f )[−jsgn(f )X(f )]∗ df 36

= −j = 0

0 −∞

|X(f )|2 df + j

+∞ 0

|X(f )|2 df

where in the last step we have used the fact that X(f ) is Hermitian and therefore |X(f )|2 is even. Problem 2.49 We note that C(f ) = M (f ) X(f ). From the assumption on the bandwidth of m(t) and x(t) we see that C(f ) consists of two separate positive frequency and negative frequency regions that do not overlap. Let us denote these regions by C+ (f ) and C− (f ) respectively. A moment’s thought shows that C+ (f ) = M (f ) X+ (f ) and C− (f ) = M (f ) X− (f ) To find the Hilbert Transform of c(t) we note that F[ˆ(t)] = −jsgn(f )C(f ) c = −jC+ (f ) + jC− (f ) = −jM (f ) X+ (f ) + jM (f ) X− (f ) = M (f ) [−jX+ (f ) + jX− (f )] = M (f ) [−jsgn(f )X(f )] = M (f ) F[ˆ(t)] x Returning to the time domain we obtain c(t) = m(t)ˆ(t) ˆ x

Problem 2.50 It is enough to note that ˆ F[x(t)] = (−jsgn(f ))2 X(f ) ˆ and hence ˆ F[x(t)] = −X(f ) ˆ where we have used the fact that X(f ) does not contain any impulses at the origin. Problem 2.51 Using the result of Problem 2.49 and noting that the Hilbert transform of cos is sin we have x(t) cos(2πf0 t) = x(t) sin(2πf0 t) Problem 2.52 θ θ 1 1 j2πf 2f −j2πf 2f 0 + 0 δ(f + f0 )e δ(f − f0 )e 2j 2j

F[A sin(2πf0 t + θ)] = −jsgn(f )A − =

θ θ A j2πf 2f −j2πf 2f 0 − sgn(−f0 )δ(f − f0 )e 0 sgn(−f0 )δ(f + f0 )e 2 θ θ A j2πf 2f −j2πf 2f 0 + δ(f − f0 )e 0 = − δ(f + f0 )e 2 = −AF[cos(2πf0 t + θ)]

37

Thus, A sin(2πf0 t + θ) = −A cos(2πf0 t + θ) Problem 2.53 Taking the Fourier transform of ej2πf0 t we obtain F[ej2πf0 t ] = −jsgn(f )δ(f − f0 ) = −jsgn(f0 )δ(f − f0 ) Thus, ej2πf0 t = F −1 [−jsgn(f0 )δ(f − f0 )] = −jsgn(f0 )ej2πf0 t Problem 2.54 F d x(t) dt = F[x(t) δ (t)] = −jsgn(f )F[x(t) δ (t)] = −jsgn(f )j2πf X(f ) = 2πf sgn(f )X(f ) = 2π|f |X(f ) Problem 2.55 We need to prove that x (t) = (ˆ(t)) . x F[x (t)] = F[x(t) δ (t)] = −jsgn(f )F[x(t) δ (t)] = −jsgn(f )X(f )j2πf = F[ˆ(t)]j2πf = F[(ˆ(t)) ] x x x Taking the inverse Fourier transform of both sides of the previous relation we obtain, x (t) = (ˆ(t)) Problem 2.56 1 1 x(t) = sinct cos 2πf0 t =⇒ X(f ) = Π(f + f0 )) + Π(f − f0 )) 2 2 1 1 2 h(t) = sinc t sin 2πf0 t =⇒ H(f ) = − Λ(f + f0 )) + Λ(f − f0 )) 2j 2j The lowpass equivalents are Xl (f ) = 2u(f + f0 )X(f + f0 ) = Π(f ) 1 Hl (f ) = 2u(f + f0 )H(f + f0 ) = Λ(f ) j  1 −1 < f ≤ 0  2j (f + 1)  2 1 1 (−f + 1) 0 ≤ f < 1 Xl (f )Hl (f ) = Yl (f ) = 2  2j 2  0 otherwise Taking the inverse Fourier transform of Yl (f ) we can find the lowpass equivalent response of the system. Thus, yl (t) = F −1 [Yl (f )] = = 1 2j
0 −1 2

(f + 1)ej2πf t df +

1 2j

1 2

(−f + 1)ej2πf t df
0 −1 2
1

0

1 1 1 f ej2πf t + 2 2 ej2πf t 2j j2πt 4π t −

+

1 1 j2πf t e 2j j2πt

0 −1 2
1 2

2 1 1 1 1 1 j2πf t f ej2πf t + 2 2 ej2πf t e + 2j j2πt 4π t 2j j2πt 0 1 1 sin πt + 2 2 (cos πt − 1) = j − 4πt 4π t

0

38

The output of the system y(t) can now be found from y(t) = Re[yl (t)ej2πf0 t ]. Thus y(t) = Re (j[− 1 1 sin πt + 2 2 (cos πt − 1)])(cos 2πf0 t + j sin 2πf0 t) 4πt 4π t 1 1 sin πt] sin 2πf0 t = [ 2 2 (1 − cos πt) + 4π t 4πt

Problem 2.57 1) The spectrum of the output signal y(t) is the product of X(f ) and H(f ). Thus, Y (f ) = H(f )X(f ) = X(f )A(f0 )ej(θ(f0 )+(f −f0 )θ (f )|f =f0 ) y(t) is a narrowband signal centered at frequencies f = ±f0 . To obtain the lowpass equivalent signal we have to shift the spectrum (positive band) of y(t) to the right by f0 . Hence, Yl (f ) = u(f + f0 )X(f + f0 )A(f0 )ej(θ(f0 )+f θ (f )|f =f0 ) = Xl (f )A(f0 )ej(θ(f0 )+f θ (f )|f =f0 ) 2) Taking the inverse Fourier transform of the previous relation, we obtain yl (t) = F −1 Xl (f )A(f0 )ejθ(f0 ) ejf θ (f )|f =f0 = A(f0 )xl (t + 1 θ (f )|f =f0 ) 2π

With y(t) = Re[yl (t)ej2πf0 t ] and xl (t) = Vx (t)ejΘx (t) we get y(t) = Re[yl (t)ej2πf0 t ] = Re A(f0 )xl (t + = = = = where tg = − 1 θ (f )|f =f0 )ejθ(f0 ) ej2πf0 t 2π 1 1 θ (f )|f =f0 )ej2πf0 t ejΘx (t+ 2π θ (f )|f =f0 ) Re A(f0 )Vx (t + 2π 1 A(f0 )Vx (t − tg ) cos(2πf0 t + θ(f0 ) + Θx (t + θ (f )|f =f0 )) 2π θ(f0 ) 1 θ (f )|f =f0 )) ) + Θx (t + A(f0 )Vx (t − tg ) cos(2πf0 (t + 2πf0 2π 1 θ (f )|f =f0 )) A(f0 )Vx (t − tg ) cos(2πf0 (t − tp ) + Θx (t + 2π 1 θ (f )|f =f0 , 2π tp = − 1 θ(f0 ) 1 θ(f ) =− 2π f0 2π f

f =f0

3) tg can be considered as a time lag of the envelope of the signal, whereas tp is the time 1 corresponding to a phase delay of 2π θ(f00 ) . f Problem 2.58 1) We can write Hθ (f ) as follows f >0 0 f = 0 = cos θ − jsgn(f ) sin θ Hθ (f ) =   cos θ + j sin θ f < 0 Thus, hθ (t) = F −1 [Hθ (f )] = cos θδ(t) + 1 sin θ πt
  cos θ − j sin θ 

39

2) xθ (t) = x(t) hθ (t) = x(t) (cos θδ(t) + = cos θx(t) δ(t) + sin θ = cos θx(t) + sin θˆ(t) x 1 πt x(t) 1 sin θ) πt

3)
∞ −∞

|xθ (t)|2 dt =

∞ −∞

| cos θx(t) + sin θˆ(t)|2 dt x
∞ −∞

= cos2 θ

|x(t)|2 dt + sin2 θ
∞ −∞

∞ −∞

|ˆ(t)|2 dt x
∞ −∞

+ cos θ sin θ
∞ But −∞ |x(t)|2 dt = Thus, ∞ x 2 −∞ |ˆ(t)| dt

x(t)ˆ∗ (t)dt + cos θ sin θ x

x∗ (t)ˆ(t)dt x

= Ex and

∞ x∗ −∞ x(t)ˆ (t)dt

= 0 since x(t) and x(t) are orthogonal. ˆ

Exθ = Ex (cos2 θ + sin2 θ) = Ex Problem 2.59 1) z(t) = x(t) + j x(t) = m(t) cos(2πf0 t) − m(t) sin(2πf0 t) ˆ ˆ +j[m(t)cos(2πf0 t) − m(t)sin(2πf0 t) ˆ ˆ = m(t) cos(2πf0 t) − m(t) sin(2πf0 t) ˆ +jm(t) sin(2πf0 t) + j m(t) cos(2πf0 t) = (m(t) + j m(t))ej2πf0 t ˆ The lowpass equivalent signal is given by ˆ xl (t) = z(t)e−j2πf0 t = m(t) + j m(t)

2) The Fourier transform of m(t) is Λ(f ). Thus X(f ) = Λ(f + f0 ) + Λ(f − f0 ) − (−jsgn(f )Λ(f )) 2 1 1 − δ(f + f0 ) + δ(f − f0 ) 2j 2j 1 1 Λ(f + f0 ) [1 − sgn(f + f0 )] + Λ(f − f0 ) [1 + sgn(f − f0 )] 2 2

=

−f0 − 1

. .. .. . −f0

.

1 ..

.. .. d . f0

d d

f0 + 1

The bandwidth of x(t) is W = 1. 40

3) ˆ z(t) = x(t) + j x(t) = m(t) cos(2πf0 t) + m(t) sin(2πf0 t) ˆ +j[m(t)cos(2πf0 t) + m(t)sin(2πf0 t) ˆ ˆ = m(t) cos(2πf0 t) + m(t) sin(2πf0 t) ˆ +jm(t) sin(2πf0 t) − j m(t) cos(2πf0 t) = (m(t) − j m(t))ej2πf0 t ˆ The lowpass equivalent signal is given by ˆ xl (t) = z(t)e−j2πf0 t = m(t) − j m(t) The Fourier transform of x(t) is X(f ) = Λ(f + f0 ) + Λ(f − f0 ) − (jsgn(f )Λ(f )) 2 1 1 − δ(f + f0 ) + δ(f − f0 ) 2j 2j 1 1 Λ(f + f0 ) [1 + sgn(f + f0 )] + Λ(f − f0 ) [1 − sgn(f − f0 )] 2 2

=

. . . .d .. . d . . .. . . . d −f0 −f0 + 1

1

f0 − 1

. . . . . . . . . . .. . f0

.

41

Chapter 3
Problem 3.1 The modulated signal is u(t) = m(t)c(t) = Am(t) cos(2π4 × 103 t) 250 π 200 t) + 4 sin(2π t + ) cos(2π4 × 103 t) = A 2 cos(2π π π 3 200 200 3 )t) = A cos(2π(4 × 10 + )t) + A cos(2π(4 × 103 − π π π π 250 250 )t + ) − 2A sin(2π(4 × 103 − )t − ) +2A sin(2π(4 × 103 + π 3 π 3 Taking the Fourier transform of the previous relation, we obtain U (f ) = A δ(f − π 200 250 250 200 2 π 2 ) + δ(f + ) + ej 3 δ(f − ) − e−j 3 δ(f + ) π π j π j π

1 [δ(f − 4 × 103 ) + δ(f + 4 × 103 )] 2 200 200 A ) + δ(f − 4 × 103 + ) δ(f − 4 × 103 − = 2 π π π π 250 250 +2e−j 6 δ(f − 4 × 103 − ) + 2ej 6 δ(f − 4 × 103 + ) π π 200 200 ) + δ(f + 4 × 103 + ) +δ(f + 4 × 103 − π π π π 250 250 ) + 2ej 6 δ(f + 4 × 103 + ) +2e−j 6 δ(f + 4 × 103 − π π The next figure depicts the magnitude and the phase of the spectrum U (f ). |U (f )| A . . . . . . . . . . . . . . . . . . . . . .
T T T

T

. . . . . . . . . . . . . . . . . . . . . . .A/2
T T
−fc − 250 c − 200 −fc + 200 c + 250 π−f π π −f π

T fc − 250 fc − 200 π π

T fc + 200 fc + 250 π π

s ......................

π .6.

U (f ) ..... s

s . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . s

−π

To find the power content of the modulated signal we write u2 (t) as u2 (t) = A2 cos2 (2π(4 × 103 + 200 200 )t) + A2 cos2 (2π(4 × 103 − )t) π π π π 250 250 +4A2 sin2 (2π(4 × 103 + )t + ) + 4A2 sin2 (2π(4 × 103 − )t − ) π 3 π 3 +terms of cosine and sine functions in the first power
T 2

Hence, P = lim
T →∞ − T 2

u2 (t)dt =

A2 A2 4A2 4A2 + + + = 5A2 2 2 2 2 42

Problem 3.2 u(t) = m(t)c(t) = A(sinc(t) + sinc2 (t)) cos(2πfc t) Taking the Fourier transform of both sides, we obtain U (f ) = = A [Π(f ) + Λ(f )] (δ(f − fc ) + δ(f + fc )) 2 A [Π(f − fc ) + Λ(f − fc ) + Π(f + fc ) + Λ(f + fc )] 2

Π(f − fc ) = 0 for |f − fc | < 1 , whereas Λ(f − fc ) = 0 for |f − fc | < 1. Hence, the bandwidth of 2 the bandpass filter is 2. Problem 3.3 The following figure shows the modulated signals for A = 1 and f0 = 10. As it is observed both signals have the same envelope but there is a phase reversal at t = 1 for the second signal Am2 (t) cos(2πf0 t) (right plot). This discontinuity is shown clearly in the next figure where we plotted Am2 (t) cos(2πf0 t) with f0 = 3.
1 0.8 0.6 0.4 0.2 0 -0.2 -0.4 -0.6 -0.8 -1 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 1 0.8 0.6 0.4 0.2 0 -0.2 -0.4 -0.6 -0.8 -1 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2

0.8 0.6 0.4 0.2 0 -0.2 -0.4 -0.6 -0.8 -1 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2

Problem 3.4 1 y(t) = x(t) + x2 (t) 2 43

1 m2 (t) + cos2 (2πfc t) + 2m(t) cos(2πfc t) 2 1 1 1 = m(t) + cos(2πfc t) + m2 (t) + + cos(2π2fc t) + m(t) cos(2πfc t) 2 4 4 = m(t) + cos(2πfc t) + Taking the Fourier transform of the previous, we obtain 1 1 Y (f ) = M (f ) + M (f ) M (f ) + (M (f − fc ) + M (f + fc )) 2 2 1 1 1 + δ(f ) + (δ(f − fc ) + δ(f + fc )) + (δ(f − 2fc ) + δ(f + 2fc )) 4 2 8 The next figure depicts the spectrum Y (f )

1/2 1/4

1/8

-2fc
Problem 3.5

-fc

-2W

2W

fc

2fc

u(t) = m(t) · c(t) = 100(2 cos(2π2000t) + 5 cos(2π3000t)) cos(2πfc t) Thus, U (f ) = 100 5 δ(f − 2000) + δ(f + 2000) + (δ(f − 3000) + δ(f + 3000)) 2 2 [δ(f − 50000) + δ(f + 50000)] 5 5 = 50 δ(f − 52000) + δ(f − 48000) + δ(f − 53000) + δ(f − 47000) 2 2 5 5 +δ(f + 52000) + δ(f + 48000) + δ(f + 53000) + δ(f + 47000) 2 2

A plot of the spectrum of the modulated signal is given in the next figure . . . . . . . . . . . . . . . . . . . . . . . . 125
T T T T

. . . . . . . . . . . . . . . . . . . . .50. . . . . . . . . . . . . . . . . . . .
T T T

T

-53 -52

-48 -47

0

47 48

52 53

KHz

Problem 3.6 The mixed signal y(t) is given by y(t) = u(t) · xL (t) = Am(t) cos(2πfc t) cos(2πfc t + θ) A m(t) [cos(2π2fc t + θ) + cos(θ)] = 2

44

The lowpass filter will cut-off the frequencies above W , where W is the bandwidth of the message signal m(t). Thus, the output of the lowpass filter is z(t) = A m(t) cos(θ) 2
2

If the power of m(t) is PM , then the power of the output signal z(t) is Pout = PM A cos2 (θ). The 4 2 power of the modulated signal u(t) = Am(t) cos(2πfc t) is PU = A PM . Hence, 2 Pout 1 = cos2 (θ) PU 2 A plot of
Pout PU

for 0 ≤ θ ≤ π is given in the next figure.
0.5 0.45 0.4 0.35 0.3 0.25 0.2 0.15 0.1 0.05 0 0 0.5 1 1.5 2 Theta (rad) 2.5 3 3.5

Problem 3.7 1) The spectrum of u(t) is U (f ) = 20 [δ(f − fc ) + δ(f + fc )] 2 2 + [δ(f − fc − 1500) + δ(f − fc + 1500) 4 +δ(f + fc − 1500) + δ(f + fc + 1500)] 10 + [δ(f − fc − 3000) + δ(f − fc + 3000) 4 +δ(f + fc − 3000) + δ(f + fc + 3000)]

The next figure depicts the spectrum of u(t). ...................................... 10 T T
T T T T

. 1/2. . . . . . . . . . . . . . . . . . . . . . . . ... .5/2 . . . . . . . . . . . . . . . . . . . ...
T T T

T

-1030-1015-1000 -985 -970

0

970 985 1000 1015 1030

X 100 Hz 2) The square of the modulated signal is u2 (t) = 400 cos2 (2πfc t) + cos2 (2π(fc − 1500)t) + cos2 (2π(fc + 1500)t) +25 cos2 (2π(fc − 3000)t) + 25 cos2 (2π(fc + 3000)t) + terms that are multiples of cosines 45

1 If we integrate u2 (t) from − T to T , normalize the integral by T and take the limit as T → ∞, 2 2 then all the terms involving cosines tend to zero, whereas the squares of the cosines give a value of 1 400 5 2 . Hence, the power content at the frequency fc = 10 Hz is Pfc = 2 = 200, the power content at the frequency Pfc +1500 is the same as the power content at the frequency Pfc −1500 and equal to 1 25 2 , whereas Pfc +3000 = Pfc −3000 = 2 .

3) u(t) = (20 + 2 cos(2π1500t) + 10 cos(2π3000t)) cos(2πfc t) 1 1 cos(2π1500t) + cos(2π3000t)) cos(2πfc t) = 20(1 + 10 2 This is the form of a conventional AM signal with message signal m(t) = 1 1 cos(2π1500t) + cos(2π3000t) 10 2 1 1 2 = cos (2π1500t) + cos(2π1500t) − 10 2

1 1 The minimum of g(z) = z 2 + 10 z − 1 is achieved for z = − 20 and it is min(g(z)) = − 201 . Since 2 400 1 z = − 20 is in the range of cos(2π1500t), we conclude that the minimum value of m(t) is − 201 . 400 Hence, the modulation index is 201 α=− 400

4) u(t) = 20 cos(2πfc t) + cos(2π(fc − 1500)t) + cos(2π(fc − 1500)t) = 5 cos(2π(fc − 3000)t) + 5 cos(2π(fc + 3000)t) The power in the sidebands is Psidebands = 1 1 25 25 + + + = 26 2 2 2 2

The total power is Ptotal = Pcarrier + Psidebands = 200 + 26 = 226. The ratio of the sidebands power to the total power is Psidebands 26 = Ptotal 226 Problem 3.8 1) u(t) = m(t)c(t) = 100(cos(2π1000t) + 2 cos(2π2000t)) cos(2πfc t) = 100 cos(2π1000t) cos(2πfc t) + 200 cos(2π2000t) cos(2πfc t) 100 = [cos(2π(fc + 1000)t) + cos(2π(fc − 1000)t)] 2 200 [cos(2π(fc + 2000)t) + cos(2π(fc − 2000)t)] 2 Thus, the upper sideband (USB) signal is uu (t) = 50 cos(2π(fc + 1000)t) + 100 cos(2π(fc + 2000)t)

46

2) Taking the Fourier transform of both sides, we obtain Uu (f ) = 25 (δ(f − (fc + 1000)) + δ(f + (fc + 1000))) +50 (δ(f − (fc + 2000)) + δ(f + (fc + 2000))) A plot of Uu (f ) is given in the next figure. . . . . . . . . . . . . . . . . . . . . . . . .50. . . . . . . . . . . . . . . . . . . . . .
T T

. . . . . . . . . . . . . . . . . 25 . . . . . . . . . . . . . . ..
T

T

-1002 -1001 Problem 3.9 If we let x(t) = −Π

0

1001

1002

KHz

t+

Tp 4

Tp 2



t−

Tp 4

Tp 2

then using the results of Problem 2.23, we obtain


v(t) = m(t)s(t) = m(t) n=−∞ x(t − nTp )

1 = m(t) Tp where



X( n=−∞ n n j2π Tp t )e Tp

t + 4p n X( ) = F −Π Tp Tp
2

T



t−

Tp 4 n f= T p

Tp 2

= =

Tp Tp Tp Tp sinc(f ) e−j2πf 4 − ej2πf 4 2 2

n f= T

p

Tp n π sinc( )(−2j) sin(n ) 2 2 2

Hence, the Fourier transform of v(t) is V (f ) = π n n 1 ∞ sinc( )(−2j) sin(n )M (f − ) 2 n=−∞ 2 2 Tp
1 Tp ,

The bandpass filter will cut-off all the frequencies except the ones centered at Thus, the output spectrum is

that is for n = ±1.

1 1 1 1 U (f ) = sinc( )(−j)M (f − ) + sinc( )jM (f + ) 2 Tp 2 Tp 1 1 2 2 = − jM (f − ) + jM (f + ) π Tp π Tp = 4 M (f ) π 1 1 1 1 δ(f − ) − δ(f + ) 2j Tp 2j Tp

Taking the inverse Fourier transform of the previous expression, we obtain u(t) = 4 1 m(t) sin(2π t) π Tp 47

which has the form of a DSB-SC AM signal, with c(t) =

4 π

1 sin(2π Tp t) being the carrier signal.

Problem 3.10 Assume that s(t) is a periodic signal with period Tp , i.e. s(t) =


n x(t

− nTp ). Then

v(t) = m(t)s(t) = m(t) n=−∞ x(t − nTp )

= m(t) = 1 Tp

1 Tp




X( n=−∞ n n j2π Tp t )e Tp

X( n=−∞ n j2π n t )m(t)e Tp Tp

n n where X( Tp ) = F[x(t)]|f = T . The Fourier transform of v(t) is p V (f ) = =

1 F Tp 1 Tp



X( n=−∞ ∞

n j2π n t )m(t)e Tp Tp

X( n=−∞ n n )M (f − ) Tp Tp

1 The bandpass filter will cut-off all the frequency components except the ones centered at fc = ± Tp . Hence, the spectrum at the output of the BPF is

U (f ) =

1 1 1 1 1 1 X( )M (f − ) + X(− )M (f + ) Tp Tp Tp Tp Tp Tp

In the time domain the output of the BPF is given by u(t) = = =
1 1 1 1 1 j2π 1 t −j2π T t p X( )m(t)e Tp + X ∗ ( )m(t)e Tp Tp Tp Tp

1 1 j2π 1 t 1 −j2π T1p t m(t) X( )e Tp + X ∗ ( )e Tp Tp Tp 1 1 1 2Re(X( ))m(t) cos(2π t) Tp Tp Tp

As it is observed u(t) has the form a modulated DSB-SC signal. The amplitude of the modulating 1 1 1 signal is Ac = Tp 2Re(X( Tp )) and the carrier frequency fc = Tp . Problem 3.11 1) The spectrum of the modulated signal Am(t) cos(2πfc t) is V (f ) = A [M (f − fc ) + M (f + fc )] 2

The spectrum of the signal at the output of the highpass filter is U (f ) = A [M (f + fc )u−1 (−f − fc ) + M (f − fc )u−1 (f − fc )] 2

Multiplying the output of the HPF with A cos(2π(fc + W )t) results in the signal z(t) with spectrum Z(f ) = A [M (f + fc )u−1 (−f − fc ) + M (f − fc )u−1 (f − fc )] 2 A [δ(f − (fc + W )) + δ(f + fc + W )] 2 48

=

A2 (M (f + fc − fc − W )u−1 (−f + fc + W − fc ) 4 +M (f + fc − fc + W )u−1 (f + fc + W − fc ) +M (f − 2fc − W )u−1 (f − 2fc − W )

+M (f + 2fc + W )u−1 (−f − 2fc − W )) A2 (M (f − W )u−1 (−f + W ) + M (f + W )u−1 (f + W ) = 4 +M (f − 2fc − W )u−1 (f − 2fc − W ) + M (f + 2fc + W )u−1 (−f − 2fc − W )) The LPF will cut-off the double frequency components, leaving the spectrum Y (f ) = A2 [M (f − W )u−1 (−f + W ) + M (f + W )u−1 (f + W )] 4
Y(f)

The next figure depicts Y (f ) for M (f ) as shown in Fig. P-5.12.

-W

W

2) As it is observed from the spectrum Y (f ), the system shifts the positive frequency components to the negative frequency axis and the negative frequency components to the positive frequency axis. If we transmit the signal y(t) through the system, then we will get a scaled version of the original spectrum M (f ). Problem 3.12 The modulated signal can be written as u(t) = m(t) cos(2πfc t + φ) = m(t) cos(2πfc t) cos(φ) − m(t) sin(2πfc t) sin(φ) = uc (t) cos(2πfc t) − us (t) sin(2πfc t) where we identify uc (t) = m(t) cos(φ) as the in-phase component and us (t) = m(t) sin(φ) as the quadrature component. The envelope of the bandpass signal is Vu (t) = = u2 (t) + u2 (t) = c s m2 (t) = |m(t)| m2 (t) cos2 (φ) + m2 (t) sin2 (φ)

Hence, the envelope is proportional to the absolute value of the message signal. Problem 3.13 1) The modulated signal is u(t) = 100[1 + m(t)] cos(2π8 × 105 t) = 100 cos(2π8 × 105 t) + 100 sin(2π103 t) cos(2π8 × 105 t) +500 cos(2π2 × 103 t) cos(2π8 × 105 t) = 100 cos(2π8 × 105 t) + 50[sin(2π(103 + 8 × 105 )t) − sin(2π(8 × 105 − 103 )t)] +250[cos(2π(2 × 103 + 8 × 105 )t) + cos(2π(8 × 105 − 2 × 103 )t)] 49

Taking the Fourier transform of the previous expression, we obtain U (f ) = 50[δ(f − 8 × 105 ) + δ(f + 8 × 105 )] 1 1 +25 δ(f − 8 × 105 − 103 ) − δ(f + 8 × 105 + 103 ) j j 1 1 −25 δ(f − 8 × 105 + 103 ) − δ(f + 8 × 105 − 103 ) j j +125 δ(f − 8 × 105 − 2 × 103 ) + δ(f + 8 × 105 + 2 × 103 ) +125 δ(f − 8 × 105 − 2 × 103 ) + δ(f + 8 × 105 + 2 × 103 ) = 50[δ(f − 8 × 105 ) + δ(f + 8 × 105 )] +25 δ(f − 8 × 105 − 103 )e−j 2 + δ(f + 8 × 105 + 103 )ej 2 π π π

+25 δ(f − 8 × 105 + 103 )ej 2 + δ(f + 8 × 105 − 103 )e−j 2

π

+125 δ(f − 8 × 105 − 2 × 103 ) + δ(f + 8 × 105 + 2 × 103 ) +125 δ(f − 8 × 105 − 2 × 103 ) + δ(f + 8 × 105 + 2 × 103 ) |U (f )| 125 ...................... . . . . . . . . . . . . . . . . . . 50. . . . . . . . . . . . . . . . . . .. T T . . . . . . . . . . . . . . 25. . . . . . . . . . . ..
−fc

T T

T

T

T T

fc −2×103

T

fc +2×103

fc −2×103 π

T

fc

fc +2×103

r . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . r . .

U (f )

−π r . . . . . . . . . . . . . .2 . . . . . . . . . . . . . . . . . r . . .

fc −103

fc +103

2) The average power in the carrier is Pcarrier = The power in the sidebands is Psidebands = 502 502 2502 2502 + + + = 65000 2 2 2 2 1002 A2 c = = 5000 2 2

3) The message signal can be written as m(t) = sin(2π103 t) + 5 cos(2π2 × 103 t) = −10 sin(2π103 t) + sin(2π103 t) + 5 As it is seen the minimum value of m(t) is −6 and is achieved for sin(2π103 t) = −1 or t = 3 1 + 103 k, with k ∈ Z. Hence, the modulation index is α = 6. 4×103 4) The power delivered to the load is Pload = 1002 (1 + m(t))2 cos2 (2πfc t) |u(t)|2 = 50 50 50

The maximum absolute value of 1 + m(t) is 6.025 and is achieved for sin(2π103 t) =
1 arcsin( 20 ) 2π103

1 20

or t =

+

k . 103

Since 2 × 103

fc the peak power delivered to the load is approximately equal to max(Pload ) = (100 × 6.025)2 = 72.6012 50

Problem 3.14 1) u(t) = 5 cos(1800πt) + 20 cos(2000πt) + 5 cos(2200πt) 1 = 20(1 + cos(200πt)) cos(2000πt) 2 The modulating signal is m(t) = cos(2π100t) whereas the carrier signal is c(t) = 20 cos(2π1000t). 2) Since −1 ≤ cos(2π100t) ≤ 1, we immediately have that the modulation index is α = 1 . 2 3) The power of the carrier component is Pcarrier = 400 = 200, whereas the power in the sidebands 2 2 is Psidebands = 400α = 50. Hence, 2 1 50 Psidebands = = Pcarrier 200 4 Problem 3.15 1) The modulated signal is written as u(t) = 100(2 cos(2π103 t) + cos(2π3 × 103 t)) cos(2πfc t) = 200 cos(2π103 t) cos(2πfc t) + 100 cos(2π3 × 103 t) cos(2πfc t) = 100 cos(2π(fc + 103 )t) + cos(2π(fc − 103 )t) +50 cos(2π(fc + 3 × 103 )t) + cos(2π(fc − 3 × 103 )t) Taking the Fourier transform of the previous expression, we obtain U (f ) = 50 δ(f − (fc + 103 )) + δ(f + fc + 103 ) + δ(f − (fc − 103 )) + δ(f + fc − 103 )

+ 25 δ(f − (fc + 3 × 103 )) + δ(f + fc + 3 × 103 ) + δ(f − (fc − 3 × 103 )) + δ(f + fc − 3 × 103 )

The spectrum of the signal is depicted in the next figure
T T
−1003 −1001 −999

. . . . . . . . . . . . . . .50. . . . . . . . . . . . . . .
T T

T T

. . . . . . . . . . 25 . . . . . . . . ..
T T
−997 997 999 1001

1003

KHz

2) The average power in the frequencies fc + 1000 and fc − 1000 is Pfc +1000 = Pfc −1000 = 1002 = 5000 2 502 = 1250 2

The average power in the frequencies fc + 3000 and fc − 3000 is Pfc +3000 = Pfc −3000 = 51

Problem 3.16 1) The Hilbert transform of cos(2π1000t) is sin(2π1000t), whereas the Hilbert transform ofsin(2π1000t) is − cos(2π1000t). Thus m(t) = sin(2π1000t) − 2 cos(2π1000t) ˆ 2) The expression for the LSSB AM signal is ul (t) = Ac m(t) cos(2πfc t) + Ac m(t) sin(2πfc t) ˆ ˆ Substituting Ac = 100, m(t) = cos(2π1000t)+2 sin(2π1000t) and m(t) = sin(2π1000t)−2 cos(2π1000t) in the previous, we obtain ul (t) = 100 [cos(2π1000t) + 2 sin(2π1000t)] cos(2πfc t) + 100 [sin(2π1000t) − 2 cos(2π1000t)] sin(2πfc t) = 100 [cos(2π1000t) cos(2πfc t) + sin(2π1000t) sin(2πfc t)] + 200 [cos(2πfc t) sin(2π1000t) − sin(2πfc t) cos(2π1000t)] = 100 cos(2π(fc − 1000)t) − 200 sin(2π(fc − 1000)t) 3) Taking the Fourier transform of the previous expression we obtain Ul (f ) = 50 (δ(f − fc + 1000) + δ(f + fc − 1000)) + 100j (δ(f − fc + 1000) − δ(f + fc − 1000)) = (50 + 100j)δ(f − fc + 1000) + (50 − 100j)δ(f + fc − 1000) Hence, the magnitude spectrum is given by |Ul (f )| = 502 + 1002 (δ(f − fc + 1000) + δ(f + fc − 1000)) √ = 10 125 (δ(f − fc + 1000) + δ(f + fc − 1000))

Problem 3.17 The input to the upper LPF is uu (t) = cos(2πfm t) cos(2πf1 t) 1 = [cos(2π(f1 − fm )t) + cos(2π(f1 + fm )t)] 2 whereas the input to the lower LPF is ul (t) = cos(2πfm t) sin(2πf1 t) 1 = [sin(2π(f1 − fm )t) + sin(2π(f1 + fm )t)] 2 If we select f1 such that |f1 − fm | < W and f1 + fm > W , then the two lowpass filters will cut-off the frequency components outside the interval [−W, W ], so that the output of the upper and lower LPF is yu (t) = cos(2π(f1 − fm )t) yl (t) = sin(2π(f1 − fm )t) The output of the Weaver’s modulator is u(t) = cos(2π(f1 − fm )t) cos(2πf2 t) − sin(2π(f1 − fm )t) sin(2πf2 t) 52

which has the form of a SSB signal since sin(2π(f1 − fm )t) is the Hilbert transform of cos(2π(f1 − fm )t). If we write u(t) as u(t) = cos(2π(f1 + f2 − fm )t) then with f1 +f2 −fm = fc +fm we obtain an USSB signal centered at fc , whereas with f1 +f2 −fm = fc − fm we obtain the LSSB signal. In both cases the choice of fc and f1 uniquely determine f2 . Problem 3.18 The signal x(t) is m(t) + cos(2πf0 t). The spectrum of this signal is X(f ) = M (f ) + 1 (δ(f − f0 ) + 2 δ(f + f0 )) and its bandwidth equals to Wx = f0 . The signal y1 (t) after the Square Law Device is y1 (t) = x2 (t) = (m(t) + cos(2πf0 t))2 = m2 (t) + cos2 (2πf0 t) + 2m(t) cos(2πf0 t) 1 1 = m2 (t) + + cos(2π2f0 t) + 2m(t) cos(2πf0 t) 2 2 The spectrum of this signal is given by 1 1 Y1 (f ) = M (f ) M (f ) + δ(f ) + (δ(f − 2f0 ) + δ(f + 2f0 )) + M (f − f0 ) + M (f + f0 ) 2 4 and its bandwidth is W1 = 2f0 . The bandpass filter will cut-off the low-frequency components M (f ) M (f )+ 1 δ(f ) and the terms with the double frequency components 1 (δ(f −2f0 )+δ(f +2f0 )). 2 4 Thus the spectrum Y2 (f ) is given by Y2 (f ) = M (f − f0 ) + M (f + f0 ) and the bandwidth of y2 (t) is W2 = 2W . The signal y3 (t) is y3 (t) = 2m(t) cos2 (2πf0 t) = m(t) + m(t) cos(2πf0 t) with spectrum 1 Y3 (t) = M (f ) + (M (f − f0 ) + M (f + f0 )) 2 and bandwidth W3 = f0 + W . The lowpass filter will eliminate the spectral components 1 (M (f − 2 f0 ) + M (f + f0 )), so that y4 (t) = m(t) with spectrum Y4 = M (f ) and bandwidth W4 = W . The next figure depicts the spectra of the signals x(t), y1 (t), y2 (t), y3 (t) and y4 (t).

53

T 7
−f0

X(f ) . . . . . . . . . . . . . 1
7e 7 T
2

e e
W f0

−W

Y1 (f )
T
−2f0

ƒ  ƒ  ƒ
−f0 −W −f0 +W

¨

¨¨

T ¨rr

rr
2W

ƒ  ƒ  ƒ f0 −W f0 +W

1 4

......
T
2f0

−2W

Y2 (f )
ƒ  ƒ  ƒ
−f0 −W −f0 +W

ƒ  ƒ  ƒ f0 −W f0 +W

Y3 (f )
  
−f0 −W −f0 +W

ƒ  ƒ  ƒ
−W W





 f0 +W

f0 −W

Y4 (f )
ƒ  ƒ  ƒ
−W W

Problem 3.19 1) y(t) = ax(t) + bx2 (t) = a(m(t) + cos(2πf0 t)) + b(m(t) + cos(2πf0 t))2 = am(t) + bm2 (t) + a cos(2πf0 t) +b cos2 (2πf0 t) + 2bm(t) cos(2πf0 t) 2) The filter should reject the low frequency components, the terms of double frequency and pass only the signal with spectrum centered at f0 . Thus the filter should be a BPF with center frequency f0 and bandwidth W such that f0 − WM > f0 − W > 2WM where WM is the bandwidth of the 2 message signal m(t). 3) The AM output signal can be written as u(t) = a(1 + 2b m(t)) cos(2πf0 t) a

Since Am = max[|m(t)|] we conclude that the modulation index is α= 2bAm a

54

Problem 3.20 1) When USSB is employed the bandwidth of the modulated signal is the same with the bandwidth of the message signal. Hence, WUSSB = W = 104 Hz 2) When DSB is used, then the bandwidth of the transmitted signal is twice the bandwidth of the message signal. Thus, WDSB = 2W = 2 × 104 Hz 3) If conventional AM is employed, then WAM = 2W = 2 × 104 Hz

4) Using Carson’s rule, the effective bandwidth of the FM modulated signal is Bc = (2β + 1)W = 2 kf max[|m(t)|] + 1 W = 2(kf + W ) = 140000 Hz W

Problem 3.21 1) The lowpass equivalent transfer function of the system is Hl (f ) = 2u−1 (f + fc )H(f + fc ) = 2 Taking the inverse Fourier transform, we obtain hl (t) = F −1 [Hl (f )] = = 2 = = = 2 W
W 2

1 Wf

+ 1

1 2

|f | ≤ W 2 W 5) because fewer terms are involved in the calculation of the probability p(X + Y > 5). Note also that p(X + Y > 5|X = 0) = p(X + Y > 5|X = 1) = 0.

p(X + Y > 5) = p(X = 2)p(Y = 4) + p(X = 3)[p(Y = 3) + p(Y = 4)] + p(X = 4)[p(Y = 2) + p(Y = 3) + p(Y = 4)] 125 = 4096 Hence, p(X + Y ≤ 5) = 1 − p(X + Y > 5) = 1 −
125 4096

Problem 4.8 1) Since limx→∞ FX (x) = 1 and FX (x) = 1 for all x ≥ 1 we obtain K = 1. 2) The random variable is of the mixed-type since there is a discontinuity at x = 1. lim ) = 1/2 whereas lim →0 FX (1 + ) = 1 3) 1 3 1 1 P ( < X ≤ 1) = FX (1) − FX ( ) = 1 − = 2 2 4 4 73
→0 FX (1 −

4) 1 1 1 1 1 P ( < X < 1) = FX (1− ) − FX ( ) = − = 2 2 2 4 4 5) P (X > 2) = 1 − P (X ≤ 2) = 1 − FX (2) = 1 − 1 = 0 Problem 4.9 1) x < −1 ⇒ FX (x) = 0 −1 ≤ x ≤ 0 ⇒ FX (x) = 1 1 = x2 + x + 2 2 −1 0 x 1 1 (v + 1)dv + (−v + 1)dv = − x2 + x + 0 ≤ x ≤ 1 ⇒ FX (x) = 2 2 −1 0 1 ≤ x ⇒ FX (x) = 1 1 (v + 1)dv = ( v 2 + v) 2 −1 x x

2) 7 1 1 1 p(X > ) = 1 − FX ( ) = 1 − = 2 2 8 8 3) p(X > 0, X < 1 ) FX ( 1 ) − FX (0) 3 1 2 2 = = p(X > 0|X < ) = 1 1 2 7 p(X < 2 ) 1 − p(X > 2 ) 4) We find first the CDF p(X ≤ x, X > 1 ) 1 1 2 FX (x|X > ) = p(X ≤ x|X > ) = 2 2 p(X > 1 ) 2 If x ≤ If x >
1 2 1 2

then p(X ≤ x|X > 1 ) = 0 since the events E1 = {X ≤ 1 } and E1 = {X > 1 } are disjoint. 2 2 2 then p(X ≤ x|X > 1 ) = FX (x) − FX ( 1 ) so that 2 2 FX (x) − FX ( 1 ) 1 2 FX (x|X > ) = 2 1 − FX ( 1 ) 2

Differentiating this equation with respect to x we obtain 1 fX (x|X > ) = 2 fX (x) 1−FX ( 1 ) 2

x> x≤

0

1 2 1 2

5)


E[X|X > 1/2] = =

−∞

xfX (x|X > 1/2)dx 1

1 2

1 − FX (1/2)

1 2

xfX (x)dx
1
1 2

= 8 = 2 3

1 1 x(−x + 1)dx = 8(− x3 + x2 ) 3 2

74

Problem 4.10 1) The random variable X is Gaussian with zero mean and variance σ 2 = 10−8 . Thus p(X > x) = x Q( σ ) and p(X > 10−4 ) = Q p(X > 4 × 10−4 ) = Q 10−4 10−4 = Q(1) = .159 = Q(4) = 3.17 × 10−5

4 × 10−4 10−4

p(−2 × 10−4 < X ≤ 10−4 ) = 1 − Q(1) − Q(2) = .8182

2) p(X > 10−4 |X > 0) =

p(X > 10−4 , X > 0) p(X > 10−4 ) .159 = = = .318 p(X > 0) p(X > 0) .5

3) y = g(x) = xu(x). Clearly fY (y) = 0 and FY (y) = 0 for y < 0. If y > 0, then the equation y = xu(x) has a unique solution x1 = y. Hence, FY (y) = FX (y) and fY (y) = fX (y) for y > 0. FY (y) is discontinuous at y = 0 and the jump of the discontinuity equals FX (0). FY (0+ ) − FY (0− ) = FX (0) = In summary the PDF fY (y) equals 1 fY (y) = fX (y)u(y) + δ(y) 2 The general expression for finding fY (y) can not be used because g(x) is constant for some interval so that there is an uncountable number of solutions for x in this interval. 4)


1 2

E[Y ] = = =

1 y fX (y)u(y) + δ(y) dy 2 −∞ ∞ y2 1 σ √ ye− 2σ2 dy = √ 2 0 2π 2πσ

−∞ ∞

yfY (y)dy

5) y = g(x) = |x|. For a given y > 0 there are two solutions to the equation y = g(x) = |x|, that is x1,2 = ±y. Hence for y > 0 fY (y) = = fX (x2 ) fX (x1 ) + = fX (y) + fX (−y) |sgn(x1 )| |sgn(x2 )| y2 2 √ e− 2σ2 2πσ 2

For y < 0 there are no solutions to the equation y = |x| and fY (y) = 0. E[Y ] = √ 2 2πσ 2
0 ∞ y2 2σ ye− 2σ2 dy = √ 2π

75

Problem 4.11 1) y = g(x) = ax2 . Assume without loss of generality that a > 0. Then, if y < 0 the equation y = ax2 has no real solutions and fY (y) = 0. If y > 0 there are two solutions to the system, namely x1,2 = y/a. Hence, fY (y) = = = fX (x2 ) fX (x1 ) + |g (x1 )| |g (x2 )| fX ( y/a) fX (− y/a) + 2a y/a 2a y/a y 1 e− 2aσ2 √ √ 2 ay 2πσ

2) The equation y = g(x) has no solutions if y < −b. Thus FY (y) and fY (y) are zero for y < −b. If −b ≤ y ≤ b, then for a fixed y, g(x) < y if x < y; hence FY (y) = FX (y). If y > b then g(x) ≤ b < y for every x; hence FY (y) = 1. At the points y = ±b, FY (y) is discontinuous and the discontinuities equal to FY (−b+ ) − FY (−b− ) = FX (−b) and The PDF of y = g(x) is fY (y) = FX (−b)δ(y + b) + (1 − FX (b))δ(y − b) + fX (y)[u−1 (y + b) − u−1 (y − b)] y2 b 1 = Q (δ(y + b) + δ(y − b)) + √ e− 2σ2 [u−1 (y + b) − u−1 (y − b)] σ 2πσ 2 3) In the case of the hard limiter p(Y = b) = p(X < 0) = FX (0) = 1 2 1 2 FY (b+ ) − FY (b− ) = 1 − FX (b)

p(Y = a) = p(X > 0) = 1 − FX (0) = Thus FY (y) is a staircase function and

fY (y) = FX (0)δ(y − b) + (1 − FX (0))δ(y − a)

4) The random variable y = g(x) takes the values yn = xn with probability p(Y = yn ) = p(an ≤ X ≤ an+1 ) = FX (an+1 ) − FX (an ) Thus, FY (y) is a staircase function with FY (y) = 0 if y < x1 and FY (y) = 1 if y > xN . The PDF is a sequence of impulse functions, that is
N

fY (y) = i=1 N

[FX (ai+1 ) − FX (ai )] δ(y − xi ) Q i=1 =

ai σ

−Q

ai+1 σ

δ(y − xi )

76

Problem 4.12 The equation x = tan φ has a unique solution in [− π , π ], that is 2 2 φ1 = arctan x Furthermore x (φ) = Thus, 1 fΦ (φ1 ) = |x (φ1 )| π(1 + x2 ) We observe that fX (x) is the Cauchy density. Since fX (x) is even we immediately get E[X] = 0. However, the variance is fX (x) =
2 σX

sin φ cos φ

=

sin2 φ 1 =1+ = 1 + x2 cos2 φ cos2 φ

= E[X 2 ] − (E[X])2 1 ∞ x2 = dx = ∞ π −∞ 1 + x2

Problem 4.13 1)


E[Y ] =
0

yfY (y)dy ≥




yfY (y)dy α ≥ α Thus p(Y ≥ α) ≤ E[Y ]/α.

α

yfY (y)dy = αp(Y ≥ α)

2) Clearly p(|X − E[X]| > ) = p((X − E[X])2 > question we obtain p(|X − E[X]| > ) = p((X − E[X])2 >

2 ).

Thus using the results of the previous E[(X − E[X])2 ]
2

2

)≤

=

σ2
2

Problem 4.14 The characteristic function of the binomial distribution is n ψX (v) = k=0 n

ejvk n k

n k

pk (1 − p)n−k

= k=0 (pejv )k (1 − p)n−k = (pejv + (1 − p))n

Thus 1 d 1 = n(pejv + (1 − p))n−1 pjejv (pejv + (1 − p))n j dv j v=0 n−1 = n(p + 1 − p) p = np 2 d (2) E[X 2 ] = mX = (−1) 2 (pejv + (1 − p))n dv v=0 d jv n−1 jv = (−1) n(pe + (1 − p) pje dv v=0 E[X] = mX =
(1) v=0

=

n(n − 1)(pejv + (1 − p))n−2 p2 e2jv + n(pejv + (1 − p))n−1 pejv v=0 = n(n − 1)(p + 1 − p)p2 + n(p + 1 − p)p = n(n − 1)p2 + np 77

Hence the variance of the binomial distribution is σ 2 = E[X 2 ] − (E[X])2 = n(n − 1)p2 + np − n2 p2 = np(1 − p) Problem 4.15 The characteristic function of the Poisson distribution is


ψX (v) = k=0 e

jvk λ

k

k!

e

−k

=

(ejv−1 λ)k k! k=0



But

∞ ak k=0 k!

= ea so that ψX (v) = eλ(e E[X] = mX = E[X 2 ] = mX =
(2) (1)

jv−1 )

. Hence

1 d 1 jv−1 ) ψX (v) = eλ(e jλejv =λ j dv j v=0 v=0 d2 d jv−1 ) jv = (−1) 2 ψX (v) = (−1) λeλ(e e j dv dv v=0 ejv + λe λ(ejv−1 )

v=0

λ2 e

λ(ejv−1 )

ejv v=0 = λ2 + λ

Hence the variance of the Poisson distribution is σ 2 = E[X 2 ] − (E[X])2 = λ2 + λ − λ2 = λ Problem 4.16 For n odd, xn is odd and since the zero-mean Gaussian PDF is even their product is odd. Since the integral of an odd function over the interval [−∞, ∞] is zero, we obtain E[X n ] = 0 for n even. ∞ Let In = −∞ xn exp(−x2 /2σ 2 )dx with n even. Then, d In = dx d2 In = dx2 1 n+1 − x22 x e 2σ dx = 0 σ2 −∞ ∞ x2 x2 2n + 1 n − x22 1 n(n − 1)xn−2 e− 2σ2 − x e 2σ + 4 xn+2 e− 2σ2 dx σ2 σ −∞ 2n + 1 1 = n(n − 1)In−2 − In + 4 In+2 = 0 σ2 σ nxn−1 e− 2σ2 − In+2 = σ 2 (2n + 1)In − σ 4 n(n − 1)In−2 √ 2πσ 2 , I2 = σ 2 2πσ 2 . We prove now that √ In = 1 × 3 × 5 × · · · × (n − 1)σ n 2πσ 2

x2

Thus, with initial conditions I0 = √

√ The proof is by induction on n. For n = 2 it is certainly true since I2 = σ 2 2πσ 2 . We assume that the relation holds for n and we will show that it is true for In+2 . Using the previous recursion we have √ In+2 = 1 × 3 × 5 × · · · × (n − 1)σ n+2 (2n + 1) 2πσ 2 √ −1 × 3 × 5 × · · · × (n − 3)(n − 1)nσ n−2 σ 4 2πσ 2 √ = 1 × 3 × 5 × · · · × (n − 1)(n + 1)σ n+2 2πσ 2 Clearly E[X n ] =
√ 1 In 2πσ 2

and E[X n ] = 1 × 3 × 5 × · · · × (n − 1)σ n 78

Problem 4.17 1) fX,Y (x, y) is a PDF so that its integral over the support region of x, y should be one.
1 0 0 1 1 1

fX,Y (x, y)dxdy = K
0 1 0

(x + y)dxdy
1 1 1

= K
0 0 1

xdxdy +
0 0

ydxdy

= K = K Thus K = 1. 2)

1 2 1 1 x y|1 + y 2 x|1 0 2 0 2 0 0

p(X + Y > 1) = 1 − P (X + Y ≤ 1) = 1− = 1− = 1− = 2 3
1 0 1 0 1−x 1−x

(x + y)dxdy x
0 1 0 0

dydx −

1

1−x

dx
0 1 1 0 0

ydy

x(1 − x)dx −

2

(1 − x)2 dx

3) By exploiting the symmetry of fX,Y and the fact that it has to integrate to 1, one immediately sees that the answer to this question is 1/2. The “mechanical” solution is:
1 1

p(X > Y ) =
0 1 y 1

(x + y)dxdy
1 1

=
0 1 y

xdxdy +
0 1 y 1

ydxdy

=
0 1

=
0

1 21 yx dy x dy + 2 y 0 y 1 1 (1 − y 2 )dy + y(1 − y)dy 2 0

=

1 2

4) p(X > Y |X + 2Y > 1) = p(X > Y, X + 2Y > 1)/p(X + 2Y > 1) The region over which we integrate in order to find p(X > Y, X + 2Y > 1) is marked with an A in the following figure. y (1,1) r rr . A r .r . rr . rr x
1/3 x+2y=1

79

Thus
1 x
1−x 2

p(X > Y, X + 2Y > 1) = = = = p(X + 2Y > 1) =

1 3

(x + y)dxdy 1 1−x 1−x 2 ) + (x2 − ( ) ) dx 2 2 2

1
1 3

x(x −

1
1 3

1 15 2 1 x − x− dx 8 4 8

49 108
1 0 1 1
1−x 2

(x + y)dxdy

= = = =

1 1−x 2 1−x ) + (1 − ( ) ) dx 2 2 2 0 1 3 3 3 dx x2 + x + 8 4 8 0 3 1 31 3 1 21 3 1 × x + × x + x 8 3 0 4 2 0 8 0 7 8 x(1 −

Hence, p(X > Y |X + 2Y > 1) = (49/108)/(7/8) = 14/27 5) When X = Y the volume under integration has measure zero and thus P (X = Y ) = 0 6) Conditioned on the fact that X = Y , the new p.d.f of X is fX|X=Y (x) = fX,Y (x, x) 1 0 fX,Y (x, x)dx = 2x.

In words, we re-normalize fX,Y (x, y) so that it integrates to 1 on the region characterized by X = Y . 1 1 The result depends only on x. Then p(X > 2 |X = Y ) = 1/2 fX|X=Y (x)dx = 3/4. 7) fX (x) = fY (y) = 1 2 0 0 1 1 1 (x + y)dx = y + xdx = y + 2 0 0 (x + y)dy = x + ydy = x +
1 1

8) FX (x|X + 2Y > 1) = p(X ≤ x, X + 2Y > 1)/p(X + 2Y > 1) p(X ≤ x, X + 2Y > 1) = = = Hence, fX (x|X + 2Y > 1) = + 6x + 3 3 6 3 8 8 = x2 + x + p(X + 2Y > 1) 7 7 7 80
3 2 8x x 0 x 1
1−v 2

(v + y)dvdy

3 3 2 3 v + v+ dv 8 4 8 0 1 3 3 2 3 x + x + x 8 8 8

1

E[X|X + 2Y > 1] =
0 1

xfX (x|X + 2Y > 1)dx 3 3 6 2 3 x + x + x 7 7 7 0 3 1 41 6 1 31 3 1 2 × x + × x + × x 7 4 0 7 3 0 7 2

= = Problem 4.18 1)

1

=
0

17 28

FY (y) = p(Y ≤ y) = p(X1 ≤ y ∪ X2 ≤ y ∪ · · · ∪ Xn ≤ y) Since the previous events are not necessarily disjoint, it is easier to work with the function 1 − [FY (y)] = 1 − p(Y ≤ y) in order to take advantage of the independence of Xi ’s. Clearly 1 − p(Y ≤ y) = p(Y > y) = p(X1 > y ∩ X2 > y ∩ · · · ∩ Xn > y) = (1 − FX1 (y))(1 − FX2 (y)) · · · (1 − FXn (y)) Differentiating the previous with respect to y we obtain n n n

fY (y) = fX1 (y) i=1 (1 − FXi (y)) + fX2 (y) i=2 (1 − FXi (y)) + · · · + fXn (y)

(1 − FXi (y)) i=n 2) FZ (z) = P (Z ≤ z) = p(X1 ≤ z, X2 ≤ z, · · · , Xn ≤ z) = p(X1 ≤ z)p(X2 ≤ z) · · · p(Xn ≤ z) Differentiating the previous with respect to z we obtain n n n

fZ (z) = fX1 (z) i=1 FXi (z) + fX2 (z) i=2 FXi (z) + · · · + fXn (z) i=n FXi (z)

Problem 4.19 x − x22 1 ∞ 2 − x22 e 2σ dx = 2 x e 2σ dx σ2 σ 0 0 However for the Gaussian random variable of zero mean and variance σ 2 ∞ x2 1 √ x2 e− 2σ2 dx = σ 2 2πσ 2 −∞ E[X] = x Since the quantity under integration is even, we obtain that √ Thus, 1 2πσ 2
0 ∞ x2 1 x2 e− 2σ2 dx = σ 2 2



1 1√ 2πσ 2 σ 2 = σ 2 σ 2 2 ]. In order to find V AR(X) we first calculate E[X E[X] = E[X 2 ] = 1 σ2
∞ 0

π 2
∞ 0 x2 2σ 2 x2

x3 e− 2σ2 dx = −
2

x2

xd[e− 2σ2 ] dx

= −x e

x 2 − 2σ2





+
0 ∞ 0

2xe



= 0 + 2σ 2
0

x − x22 e 2σ dx = 2σ 2 σ2

81

Thus, V AR(X) = E[X 2 ] − (E[X])2 = 2σ 2 − Problem 4.20 Let Z = X + Y . Then, FZ (z) = p(X + Y ≤ z) = Differentiating with respect to z we obtain
∞ ∞ z−y

π 2 π σ = (2 − )σ 2 2 2

−∞ −∞

fX,Y (x, y)dxdy

fZ (z) = = = =

−∞ ∞ −∞ ∞ −∞ ∞ −∞

d dz

z−y −∞

fX,Y (x, y)dxdy d (z − y)dy dz

fX,Y (z − y, y)

fX,Y (z − y, y)dy fX (z − y)fY (y)dy

where the last line follows from the independence of X and Y . Thus fZ (z) is the convolution of fX (x) and fY (y). With fX (x) = αe−αx u(x) and fY (y) = βe−βx u(x) we obtain z fZ (z) =
0

αe−αv βe−β(z−v) dv

If α = β then fZ (z) =
0

z

α2 e−αz dv = α2 ze−αz u−1 (z) αβ e−αz − e−βz u−1 (z) β−α

If α = β then fZ (z) = αβe−βz
0

z

e(β−α)v dv =

Problem 4.21 1) fX,Y (x, y) is a PDF, hence its integral over the supporting region of x, and y is 1.
∞ 0 y ∞ ∞ ∞ y ∞ 0 ∞

fX,Y (x, y)dxdy =
0

Ke−x−y dxdy
∞ y

= K = K
0

e−y

e−x dxdy


1 e−2y dy = K(− )e−2y 2

=K
0

1 2

Thus K should be equal to 2. 2) x fX (x) =
0

2e−x−y dy = 2e−x (−e−y )

x

= 2e−x (1 − e−x ) = 2e−2y



fY (y) = y 2e−x−y dy = 2e−y (−e−x )

0 ∞ y

82

3) fX (x)fY (y) = 2e−x (1 − e−x )2e−2y = 2e−x−y 2e−y (1 − e−x ) = 2e−x−y = fX,Y (x, y) Thus X and Y are not independent. 4) If x < y then fX|Y (x|y) = 0. If x ≥ y, then with u = x − y ≥ 0 we obtain fU (u) = fX|Y (x|y) = 2e−x−y fX,Y (x, y) = = e−x+y = e−u fY (y) 2e−2y

5)


E[X|Y = y] = y xe−x+y dx = ey y ∞ ∞



xe−x dx

= ey −xe−x

+ y y

e−x dx

= ey (ye−y + e−y ) = y + 1

6) In this part of the problem we will use extensively the following definite integral
∞ 0 ∞ ∞ y

xν−1 e−µx dx =

1 (ν − 1)! µν
∞ 0

E[XY ] =
0 ∞

xy2e−x−y dxdy =

2ye−y
∞ 0 y



xe−x dxdy
∞ 0

=
0

2ye−y (ye−y + e−y )dy = 2

y 2 e−2y dy + 2

ye−2y dy

= 2

1 1 2! + 2 2 1! = 1 3 2 2


E[X] = 2
0

xe−x (1 − e−x )dx = 2

∞ 0

xe−x dx − 2

∞ 0

xe−2x dx

= 2−2 E[Y ] = 2
0

∞ ∞

1 3 = 2 2 2 ye−2y dy = 2 1 1 = 22 2
0 ∞ ∞ 0

E[X ] = 2
0

2

x2 e−x (1 − e−x )dx = 2 1 7 2! = 3 2 2 1 1 2! = 23 2

x2 e−x dx − 2

x2 e−2x dx

= 2 · 2! − 2 E[Y 2 ] = 2
0 ∞

y 2 e−2y dy = 2

Hence, COV (X, Y ) = E[XY ] − E[X]E[Y ] = 1 − and ρX,Y = (E[X 2 ] 1 3 1 · = 2 2 4

COV (X, Y ) 1 =√ 2 )1/2 (E[Y 2 ] − (E[Y ])2 )1/2 − (E[X]) 5 83

Problem 4.22

E[X] = E[Y ] = E[XY ] = = COV (X, Y ) =

1 sin θ|π = 0 0 π 0 π 1 2 sin θdθ = (− cos θ)|π = 0 π π 0 π 1 cos θ sin θ dθ π 0 1 2π 1 π sin 2θdθ = sin xdx = 0 2π 0 4π 0 E[XY ] − E[X]E[Y ] = 0 1 π 1 π cos θdθ =

π

Thus the random variables X and Y are uncorrelated. However they are not independent since X 2 + Y 2 = 1. To see this consider the probability p(|X| < 1/2, Y ≥ 1/2). Clearly p(|X| < 1/2)p(Y ≥ 1/2) is different than zero whereas p(|X| < 1/2, Y ≥ 1/2) = √ This is because 0. |X| < 1/2 implies that π/3 < θ < 5π/3 and for these values of θ, Y = sin θ > 3/2 > 1/2. Problem 4.23 √ √ 1) Clearly X > r, Y > r implies that X 2 > r2 , Y 2 > r2 so that X 2 +Y 2 >√ 2 or X 2 + Y 2 > 2r. 2r √ Thus the event E1 (r) = {X > r, Y > r} is a subset of the event E2 (r) = { X 2 + Y 2 > 2r|X, Y > 0} and p(E1 (r)) ≤ p(E2 (r)). 2) Since X and Y are independent p(E1 (r)) = p(X > r, Y > r) = p(X > r)p(Y > r) = Q2 (r) √ Y 3) Using the rectangular to polar transformation V = X 2 + Y 2 , Θ = arctan X it is proved (see text Eq. 4.1.22) that v − v22 e 2σ fV,Θ (v, θ) = 2πσ 2 Hence, with σ 2 = 1 we obtain p( X2 +Y >
2



2r|X, Y > 0) = = =

v − v2 e 2 dvdθ 2r 0 2π v2 1 1 ∞ − v2 ve 2 dv = (−e− 2 ) 4 √2r 4 1 −r2 e 4




π 2

∞ √ 2r

Combining the results of part 1), 2) and 3) we obtain 1 2 Q2 (r) ≤ e−r 4 1 r2 or Q(r) ≤ e− 2 2

Problem 4.24 The following is a program written in Fortran to compute the Q function REAL*8 x,t,a,q,pi,p,b1,b2,b3,b4,b5 PARAMETER (p=.2316419d+00, b1=.31981530d+00, 84

+ + C-

b2=-.356563782d+00, b3=1.781477937d+00, b4=-1.821255978d+00, b5=1.330274429d+00)

pi=4.*atan(1.) C-INPUT PRINT*, ’Enter -x-’ READ*, x Ct=1./(1.+p*x) a=b1*t + b2*t**2. + b3*t**3. + b4*t**4. + b5*t**5. q=(exp(-x**2./2.)/sqrt(2.*pi))*a C-OUTPUT PRINT*, q CSTOP END The results of this approximation along with the actual values of Q(x) (taken from text Table 4.1) are tabulated in the following table. As it is observed a very good approximation is achieved. x Q(x) Approximation −1 1. 1.59 × 10 1.587 × 10−1 −2 1.5 6.68 × 10 6.685 × 10−2 2. 2.28 × 10−2 2.276 × 10−2 −3 2.5 6.21 × 10 6.214 × 10−3 −3 3. 1.35 × 10 1.351 × 10−3 3.5 2.33 × 10−4 2.328 × 10−4 −5 4. 3.17 × 10 3.171 × 10−5 4.5 3.40 × 10−6 3.404 × 10−6 −7 5. 2.87 × 10 2.874 × 10−7 Problem 4.25 The n-dimensional joint Gaussian distribution is fX (x) = 1 (2π)n det(C) e−(x−m)C
−1 (x−m)t

The Jacobian of the linear transformation Y = AXt + b is 1/det(A) and the solution to this equation is x = (y − b)t (A−1 )t We may substitute for x in fX (x) to obtain fY (y). fY (y) = 1 (2π)n/2 (det(C))1/2 |det(A)| 1 (2π)n/2 (det(C))1/2 |det(A)| 1 (2π)n/2 (det(C))1/2 |det(A)| exp −[(y − b)t (A−1 )t − m]C −1 [(y − b)t (A−1 )t − m]t = exp −[yt − bt − mAt ](At )−1 C −1 A−1 [y − b − Amt ] = exp −[yt − bt − mAt ](ACAt )−1 [yt − bt − mAt ]t

85

Thus fY (y) is a n-dimensional joint Gaussian distribution with mean and variance given by mY = b + Amt , Problem 4.26 1) The joint distribution of X and Y is given by fX,Y (x, y) = 1 1 exp − 2πσ 2 2 X Y σ2 0 0 σ2 X Y CY = ACAt

The linear transformations Z = X + Y and W = 2X − Y are written in matrix notation as Z W Thus, (see Prob. 4.25) fZ,W (z, w) = where M =A σ2 0 0 σ2 At = 2σ 2 σ 2 σ 2 5σ 2 = √ 2 2 From the last equality we identify σZ = 2σ 2 , σW = 5σ 2 and ρZ,W = 1/ 10 2) FR (r) = p(R ≤ r) = p(
∞ yr −∞ 2 σZ ρZ,W σZ σW

=

1 1 2 −1

X Y

=A

X Y

1 1 exp − 1/2 2 2πdet(M )

Z W

M −1

Z W ρZ,W σZ σW 2 σW

X ≤ r) Y
0



=
0

fX,Y (x, y)dxdy +

−∞ yr

fX,Y (x, y)dxdy

Differentiating FR (r) with respect to r we obtain the PDF fR (r). Note that d da d db Thus,
∞ a

f (x)dx = f (a) b a b

f (x)dx = −f (b)

FR (r) =
0 ∞

d dr

yr −∞

0

fX,Y (x, y)dxdy +
0 −∞

−∞

d dr



fX,Y (x, y)dxdy yr =
0 ∞

yfX,Y (yr, y)dy − |y|fX,Y (yr, y)dy

yfX,Y (yr, y)dy

= Hence, fR (r) =

−∞

1 − y2 r2 +y2 2σ 2 e dy = 2 2πσ 2 −∞ 1 1 1 2σ 2 = = 2 2πσ 2 2(1 + r2 ) π 1 + r2 |y| 86





y
0

1 −y2 ( 1+r2 ) 2σ 2 dy e 2πσ 2

fR (r) is the Cauchy distribution; its mean is zero and the variance ∞. Problem 4.27 The binormal joint density function is fX,Y (x, y) = 1 2πσ1 σ2 1 − ρ2 exp − 1 × 2(1 − ρ2 )

=

(x − m1 )2 (y − m2 )2 2ρ(x − m1 )(y − m2 ) + − 2 2 σ1 σ2 σ1 σ2 1 exp −(z − m)C −1 (z − m)t (2π)n det(C)

where z = [x y], m = [m1 m2 ] and C=
2 σ1 ρσ1 σ2 2 ρσ1 σ2 σ2

1) With C= 4 −4 −4 9

2 2 we obtain σ1 = 4, σ2 = 9 and ρσ1 σ2 = −4. Thus ρ = − 2 . 3

2) The transformation Z = 2X + Y , W = X − 2Y is written in matrix notation as Z W = 2 1 1 −2 X Y =A X Y

The ditribution fZ,W (z, w) is binormal with mean m = mAt , and covariance matrix C = ACAt . Hence 2 1 4 −4 2 1 9 2 C = = 1 −2 −4 9 1 −2 2 56 The off-diagonal elements of C are equal to ρσZ σW = COV (Z, W ). Thus COV (Z, W ) = 2.
2 3) Z will be Gaussian with variance σZ = 9 and mean

m Z = [ m1 m 2 ] Problem 4.28

2 1

=4

√ fX,Y (x, y) 2πσY exp[−A] f X|Y (x|y) = = fY (y) 2πσX σY 1 − ρ2 X,Y where A = = = (y − mY )2 (x − mX )(y − mY ) (y − mY )2 (x − mX )2 − 2 + 2(1 − ρ2 )σ 2 − 2ρ 2(1 − ρ2 )σ σ 2 2(1 − ρ2 )σX 2σY X Y X,Y X,Y X,Y Y 1 2 2(1 − ρ2 )σX X,Y (x − mX )2 +
2 (y − mY )2 σX ρ2 (x − mX )(y − mY )σX X,Y − 2ρ 2 σY σY 2

1 ρσX x − mX + (y − mY ) 2 )σ 2 σY 2(1 − ρX,Y X 87

Thus f X|Y (x|y) = √ 1 2πσX 1 − ρ2 X,Y exp − 1 ρσX x − mX + (y − mY ) 2 )σ 2 σY 2(1 − ρX,Y X
2

2 which is a Gaussian PDF with mean mX + (y − mY )ρσX /σY and variance (1 − ρ2 )σX . If ρ = 0 X,Y then f X|Y (x|y) = fX (x) which implies that Y does not provide any information about X or X, Y are independent. If ρ = ±1 then the variance of f X|Y (x|y) is zero which means that X|Y is deterministic. This is to be expected since ρ = ±1 implies a linear relation X = AY + b so that knowledge of Y provides all the information about X.

Problem 4.29 1) The random variables Z, W are a linear combination of the jointly Gaussian random variables X, Y . Thus they are jointly Gaussian with mean m = mAt and covariance matrix C = ACAt , where m, C is the mean and covariance matrix of the random variables X and Y and A is the transformation matrix. The binormal joint density function is fZ,W (z, w) = 1 (2π)n det(C)|det(A)| exp −([z w] − m )C
−1

([z w] − m )t

If m = 0, then m = mAt = 0. With C= σ 2 ρσ 2 ρσ 2 σ 2 A= cos θ sin θ − sin θ cos θ

we obtain det(A) = cos2 θ + sin2 θ = 1 and C = = cos θ sin θ − sin θ cos θ σ 2 ρσ 2 ρσ 2 σ 2 cos θ − sin θ sin θ cos θ

σ 2 (1 + ρ sin 2θ) ρσ 2 (cos2 θ − sin2 θ) ρσ 2 (cos2 θ − sin2 θ) σ 2 (1 − ρ sin 2θ)

2) Since Z and W are jointly Gaussian with zero-mean, they are independent if they are uncorrelated. This implies that cos2 θ − sin2 θ = 0 =⇒ θ = π π +k , 4 2 k∈Z

Note also that if X and Y are independent, then ρ = 0 and any rotation will produce independent random variables again. Problem 4.30 1) fX,Y (x, y) is a PDF and its integral over the supporting region of x and y should be one.
∞ ∞ −∞ −∞ 0

fX,Y (x, y)dxdy
0

= = = Thus K = 1

−∞ −∞

K − x2 +y2 2 e dxdy + π

∞ 0 0



K − x2 +y2 2 e dxdy π e− x2 2

0 y2 K 0 − x2 K e 2 dx e− 2 dx + π −∞ π −∞ √ 2 K 1 2( 2π) = K π 2

∞ 0



dx
0

e−

y2 2

dx

88

2) If x < 0 then fX (x) = = If x > 0 then fX (x) = = 1 x2 1 − x2 +y2 2 e dy = e− 2 π π 0 1 − x2 1 √ 1 − x2 e 2 2π = √ e 2 π 2 2π x2 1 x2 1 − x2 +y2 2 e dy = e− 2 π −∞ π 2 1√ x2 1 −x 1 e 2 2π = √ e− 2 π 2 2π
0 ∞

0 −∞

e−

y2 2

dy

∞ 0

e−

y2 2

dy

Thus for every x, fX (x) = √1 e− 2 which implies that fX (x) is a zero-mean Gaussian random 2π variable with variance 1. Since fX,Y (x, y) is symmetric to its arguments and the same is true for the region of integration we conclude that fY (y) is a zero-mean Gaussian random variable of variance 1. 3) fX,Y (x, y) has not the same form as a binormal distribution. For xy < 0, fX,Y (x, y) = 0 but a binormal distribution is strictly positive for every x, y. 4) The random variables X and Y are not independent for if xy < 0 then fX (x)fY (y) = 0 whereas fX,Y (x, y) = 0. 5) E[XY ] = = =
0 x2 +y 2 1 0 1 ∞ ∞ − x2 +y2 2 XY e− 2 dxdy + e dxdy π −∞ −∞ π 0 0 0 ∞ y2 y2 x2 x2 1 0 1 ∞ Xe− 2 dx Y e− 2 dy + Xe− 2 dx Y e− 2 dy π −∞ π 0 −∞ 0 1 2 1 (−1)(−1) + = π π π

Thus the random variables X and Y are correlated since E[XY ] = 0 and E[X] = E[Y ] = 0, so that E[XY ] − E[X]E[Y ] = 0. 6) In general fX|Y (x, y) = fX,Y (x,y) fY (y) .

If y > 0, then 0
2 −x 2

x0 x0 v≤0

for every > 0. Hence, with n = 2000, Z =
2 σY 2

2000 i=1 Xi , 2 σY 2

mXi =

1 4

p(|Z − 500| ≥ 2000 ) ≤
1 2 The variance σY of Y = n with = 0.001 we obtain n i=1 Xi

⇒ p(500 − 2000 ≤ Z ≤ 500 + 2000 ) ≥ 1 −
1 2 n σXi , 2 where σXi = p(1 − p) = 3 16

is

(see Problem 4.13). Thus,

p(480 ≤ Z ≤ 520) ≥ 1 −

3/16 = .063 2 × 10−1
1 n n i=1 Xi

2) Using the C.L.T. the CDF of the random variable Y = σ random variable N (mXi , √n ). Hence P =p 520 480 ≤Y ≤ n n p(1−p) n

converges to the CDF of the
520 n

=Q

480 n

− mXi σ

−Q

− mXi σ

With n = 2000, mXi = 1 , σ 2 = 4 P

we obtain 520 − 500 2000p(1 − p)

= Q

480 − 500 −Q 2000p(1 − p) 20 = 1 − 2Q √ = .682 375 90

Problem 4.33 Consider the random variable vector x = [ ω1 ω1 + ω2 . . . ω1 + ω2 + · · · + ωn ]t where each ωi is the outcome of a Gaussian random variable distributed according to N (0, 1). Since mx,i = E[ω1 + ω2 + · · · + ωi )] = E[ω1 ] + E[ω2 ] + · · · + E[ωi ] = 0 we obtain mx = 0 The covariance matrix is C = E[(x − mx )(x − mx )t ] = E[xxt ] The i, j element (Ci,j ) of this matrix is Ci,j = E[(ω1 + ω2 + · · · + ωi )(ω1 + ω2 + · · · + ωj )] = E[(ω1 + ω2 + · · · + ωmin(i,j) )(ω1 + ω2 + · · · + ωmin(i,j) )] +E[(ω1 + ω2 + · · · + ωmin(i,j) )(ωmin(i,j)+1 + · · · + ωmax(i,j) )] The expectation in the last line of the previous equation is zero. This is true since all the random variables inside the first parenthesis are different from the random variables in the second parenthesis, and for uncorrelated random variables of zero mean E[ωk ωl ] when k = l. Hence, Ci,j = E[(ω1 + ω2 + · · · + ωmin(i,j) )(ω1 + ω2 + · · · + ωmin(i,j) )] min(i,j) min(i,j) min(i,j)

= k=1 min(i,j) l=1

E[ωk ωl ] = k=1 E[ωk ωk ] + k,l=1 k=l

E[ωk ωl ]

= k=1 1 = min(i, j)
 

Thus

C= 


 

1 1 ··· 1 1 2 2   . .  .. . . .  . .  1 2 ··· n

Problem 4.34 The random variable X(t0 ) is uniformly distributed over [−1 1]. Hence, mX (t0 ) = E[X(t0 )] = E[X] = 0 As it is observed the mean mX (t0 ) is independent of the time instant t0 . Problem 4.35 mX (t) = E[A + Bt] = E[A] + E[B]t = 0 where the last equality follows from the fact that A, B are uniformly distributed over [−1 1] so that E[A] = E[B] = 0. RX (t1 , t2 ) = E[X(t1 )X(t2 )] = E[(A + Bt1 )(A + Bt2 )] = E[A2 ] + E[AB]t2 + E[BA]t1 + E[B 2 ]t1 t2 91

The random variables A, B are independent so that E[AB] = E[A]E[B] = 0. Furthermore E[A2 ] = E[B 2 ] = Thus RX (t1 , t2 ) = 1 1 1 x2 dx = x3 |1 = −1 2 6 3 −1 1 1 + t1 t2 3 3
1

Problem 4.36 Since the joint density function of {X(ti }n is a jointly Gaussian density of zero-mean the autoi=1 correlation matrix of the random vector process is simply its covariance matrix. The i, j element of the matrix is RX (ti , tj ) = COV (X(ti )X(tj )) + mX (ti )mX (tj ) = COV (X(ti )X(tj )) = σ 2 min(ti , tj ) Problem 4.37 Since X(t) = X with the random variable uniformly distributed over [−1 1] we obtain fX(t1 ),X(t2 ),···,X(tn ) (x1 , x2 , . . . , xn ) = fX,X,···,X (x1 , x2 , . . . , xn ) for all t1 , . . . , tn and n. Hence, the statistical properties of the process are time independent and by definition we have a stationary process. Problem 4.38 The process is not wide sense stationary for the autocorrelation function depends on the values of t1 , t2 and not on their difference. To see this suppose that t1 = t2 = t. If the process was wide sense stationary, then RX (t, t) = RX (0). However, RX (t, t) = σ 2 t and it depends on t as it is opposed to RX (0) which is independent of t. Problem 4.39 If a process X(t) is M th order stationary, then for all n ≤ M , and ∆ fX(t1 )X(t2 )···X(tn ) (x1 , x2 , · · · , xn ) = fX(t1 +∆)···X(tn +∆) (x1 , · · · xn ) If we let n = 1, then
∞ ∞

mX (0) = E[X(0)] =

−∞

xfX(0) (x)dx =

−∞

xfX(0+t) (x)dx = mX (t)

for all t. Hence, mx (t) is constant. With n = 2 we obtain fX(t1 )X(t2 ) (x1 , x2 ) = fX(t1 +∆)X(t2 +∆) (x1 , x2 ), If we let ∆ = −t1 , then fX(t1 )X(t2 ) (x1 , x2 ) = fX(0)X(t2 −t1 ) (x1 , x2 ) which means that
∞ ∞

∀∆

Rx (t1 , t2 ) = E[X(t1 )X(t2 )] =

−∞ −∞

x1 x2 fX(0)X(t2 −t1 ) (x1 , x2 )dx1 dx2

depends only on the difference τ = t1 − t2 and not on the individual values of t1 , t2 . Thus the M th order stationary process, has a constant mean and an autocorrelation function dependent on τ = t1 − t2 only. Hence, it is a wide sense stationary process. 92

Problem 4.40 1) f (τ ) cannot be the autocorrelation function of a random process for f (0) = 0 < f (1/4f0 ) = 1. Thus the maximum absolute value of f (τ ) is not achieved at the origin τ = 0. 2) f (τ ) cannot be the autocorrelation function of a random process for f (0) = 0 whereas f (τ ) = 0 for τ = 0. The maximum absolute value of f (τ ) is not achieved at the origin. 3) f (0) = 1 whereas f (τ ) > f (0) for |τ | > 1. Thus f (τ ) cannot be the autocorrelation function of a random process. 4) f (τ ) is even and the maximum is achieved at the origin (τ = 0). We can write f (τ ) as f (τ ) = 1.2Λ(τ ) − Λ(τ − 1) − Λ(τ + 1) Taking the Fourier transform of both sides we obtain S(f ) = 1.2sinc2 (f ) − sinc2 (f ) e−j2πf + ej2πf = sinc2 (f )(1.2 − 2 cos(2πf )) As we observe the power spectrum S(f ) can take negative values, i.e. for f = 0. Thus f (τ ) can not be the autocorrelation function of a random process. Problem 4.41 As we have seen in Problem 4.38 the process is not stationary and thus it is not ergodic. This in accordance to our definition of ergodicity as a property of stationary and ergodic processes. Problem 4.42 The random variable ωi takes the values {1, 2, . . . , 6} with probability 1 . Thus 6


EX

= E = E


−∞ ∞ −∞

X 2 (t)dt
2 ωi e−2t u2 (t)dt = E −1 ∞ 0 ∞ 0 2 ωi e−2t dt

=
0

2 E[ωi ]e−2t dt =

1 6

6 i=1

i2 e−2t dt

∞ 91 1 91 ∞ −2t e dt = (− e−2t ) = 6 0 6 2 0 91 = 12 Thus the process is an energy-type process. However, this process is not stationary for 21 mX (t) = E[X(t) = E[ωi ]e−t u−1 (t) = e−t u−1 (t) 6 is not constant.

Problem 4.43 1) We find first the probability of an even number of transitions in the interval (0, τ ]. pN (n = even) = pN (0) + pN (2) + pN (4) + · · · = = = 1 1 + ατ
∞ l=0

ατ 1 + ατ

2

1 1 1 + ατ 1 − (ατ )2 2 (1+ατ ) 1 + ατ 1 + 2ατ 93

ατ The probability pN (n = odd) is simply 1 − pN (n = even) = 1+2ατ . The random process Z(t) takes the value of 1 (at time instant t) if an even number of transitions occurred given that Z(0) = 1, or if an odd number of transitions occurred given that Z(0) = 0. Thus,

mZ (t) = E[Z(t)] = 1 · p(Z(t) = 1) + 0 · p(Z(t) = 0) = p(Z(t) = 1|Z(0) = 1)p(Z(0) = 1) + p(Z(t) = 1|Z(0) = 0)p(Z(0) = 0) 1 1 = pN (n = even) + pN (n = odd) 2 2 1 = 2 2) To determine RZ (t1 , t2 ) note that Z(t + τ ) = 1 if Z(t) = 1 and an even number of transitions occurred in the interval (t, t + τ ], or if Z(t) = 0 and an odd number of transitions have taken place in (t, t + τ ]. Hence, RZ (t + τ, t) = E[Z(t + τ )Z(t)] = 1 · p(Z(t + τ ) = 1, Z(t) = 1) + 0 · p(Z(t + τ ) = 1, Z(t) = 0) +0 · p(Z(t + τ ) = 0, Z(t) = 1) + 0 · p(Z(t + τ ) = 0, Z(t) = 0) = p(Z(t + τ ) = 1, Z(t) = 1) = p(Z(t + τ ) = 1|Z(t) = 1)p(Z(t) = 1) 1 1 + ατ = 2 1 + 2ατ As it is observed RZ (t + τ, t) depends only on τ and thus the process is stationary. The process is not cyclostationary. 3) Since the process is stationary PZ = RZ (0) = Problem 4.44 1) mX (t) = E[X(t)] = E[X cos(2πf0 t)] + E[Y sin(2πf0 t)] = E[X] cos(2πf0 t) + E[Y ] sin(2πf0 t) = 0 where the last equality follows from the fact that E[X] = E[Y ] = 0. 2) RX (t + τ, t) = E[(X cos(2πf0 (t + τ )) + Y sin(2πf0 (t + τ ))) (X cos(2πf0 t) + Y sin(2πf0 t))] = E[X 2 cos(2πf0 (t + τ )) cos(2πf0 t)] + E[XY cos(2πf0 (t + τ )) sin(2πf0 t)] + E[Y X sin(2πf0 (t + τ )) cos(2πf0 t)] + E[Y 2 sin(2πf0 (t + τ )) sin(2πf0 t)] σ2 [cos(2πf0 (2t + τ )) + cos(2πf0 τ )] + = 2 σ2 [cos(2πf0 τ ) − cos(2πf0 (2t + τ ))] 2 = σ 2 cos(2πf0 τ ) 94 1 2

where we have used the fact that E[XY ] = 0. Thus the process is stationary for RX (t + τ, t) depends only on τ . 3) Since the process is stationary PX = RX (0) = σ 2 .
2 2 4) If σX = σY , then

mX (t) = E[X] cos(2πf0 t) + E[Y ] sin(2πf0 t) = 0 and RX (t + τ, t) = E[X 2 ] cos(2πf0 (t + τ )) cos(2πf0 t) + E[Y 2 ] sin(2πf0 (t + τ )) sin(2πf0 t) 2 σX [cos(2πf0 (2t + τ )) − cos(2πf0 τ )] + = 2 2 σY [cos(2πf0 τ ) − cos(2πf0 (2t + τ ))] 2 2 2 σX − σY = cos(2πf0 (2t + τ ) + 2 2 2 σX + σY cos(2πf0 τ ) 2 The process is not stationary for RX (t + τ, t) does not depend only on τ but on t as well. However 1 the process is cyclostationary with period T0 = 2f0 . Note that if X or Y is not of zero mean then the period of the cyclostationary process is T0 = f10 . The power spectral density of X(t) is PX = lim 1 T →∞ T
T 2

−T 2

2 2 2 σX − σY σ 2 + σY cos(2πf0 2t) + X 2 2

dt = ∞

Problem 4.45 1)


mX (t) = E [X(t)] = E 






Ak p(t − kT )

k=−∞

= = m

E[Ak ]p(t − kT ) p(t − kT )

k=−∞ ∞ k=−∞

2) RX (t + τ, t) = E [X(t + τ )X(t)]
 

= E =





Ak Al p(t + τ − kT )p(t − lT )

k=−∞ l=−∞ ∞ ∞ k=−∞ l=−∞ ∞ ∞

E[Ak Al ]p(t + τ − kT )p(t − lT ) RA (k − l)p(t + τ − kT )p(t − lT )

= k=−∞ l=−∞

95

3)
∞ ∞

RX (t + T + τ, t + T ) = k=−∞ l=−∞ ∞ ∞

RA (k − l)p(t + T + τ − kT )p(t + T − lT ) RA (k + 1 − (l + 1))p(t + τ − k T )p(t − l T ) RA (k − l )p(t + τ − k T )p(t − l T )

= k =−∞ l =−∞ ∞ ∞

= k =−∞ l =−∞

= RX (t + τ, t) where we have used the change of variables k = k − 1, l = l − 1. Since mX (t) and RX (t + τ, t) are periodic, the process is cyclostationary. 4) ¯ RX (τ ) = = = = = = where Rp (τ − nT ) = 5) 1 T 1 T 1 T 1 T 1 T 1 T
T

RX (t + τ, t)dt
0 T 0 ∞ ∞

RA (k − l)p(t + τ − kT )p(t − lT )dt
T

k=−∞ l=−∞ ∞ ∞

RA (n) n=−∞ ∞

l=−∞ 0 ∞ T −lT

p(t + τ − lT − nT )p(t − lT )dt p(t + τ − nT )p(t )dt

RA (n) n=−∞ ∞

l=−∞ −lT ∞

RA (n) n=−∞ ∞ n=−∞

−∞

p(t + τ − nT )p(t )dt

RA (n)Rp (τ − nT )

∞ −∞ p(t

+ τ − nT )p(t )dt = p(t) p(−t)|t=τ −nT 1 T
∞ n=−∞

¯ SX (f ) = F[RX (τ )] = F = = = 1 T 1 T 1 T


RA (n)Rp (τ − nT )



RA (n) n=−∞ ∞

−∞ ∞

Rp (τ − nT )e−j2πf τ dτ Rp (τ )e−j2πf (τ
∞ −∞ +nT )

RA (n) n=−∞ ∞ n=−∞

−∞



RA (n)e−j2πf nT

Rp (τ )e−j2πf τ dτ

But, Rp (τ ) = p(τ ) p(−τ ) so that
∞ −∞

Rp (τ )e−j2πf τ dτ



=

= P (f )P ∗ (f ) = |P (f )|2

−∞

p(τ )e−j2πf τ dτ

∞ −∞

p(−τ )e−j2πf τ dτ

where we have used the fact that for real signals P (−f ) = P ∗ (f ). Substituting the relation above to the expression for SX (f ) we obtain SX (f ) = |P (f )|2 T
∞ n=−∞

RA (n)e−j2πf nT 96

=

∞ |P (f )|2 RA (0) + 2 RA (n) cos(2πf nT ) T n=1

where we have used the assumption RA (n) = RA (−n) and the fact ej2πf nT +e−j2πf nT = 2 cos(2πf nT ) Problem 4.46 1) The autocorrelation function of An ’s is RA (k − l) = E[Ak Al ] = δkl where δkl is the Kronecker’s delta. Furthermore t− T T 2 ) = T sinc(T f )e−j2πf 2 P (f ) = F Π( T Hence, using the results of Problem 4.45 we obtain SX (f ) = T sinc2 (T f ) 2) In this case E[An ] = 1 and RA (k − l) = E[Ak Al ]. If k = l, then RA (0) = E[A2 ] = 1 . If k = l, k 2 2 then RA (k − l) = E[Ak Al ] = E[Ak ]E[Al ] = 1 . The power spectral density of the process is 4 SX (f ) = T sinc2 (T f ) 1 1 ∞ + cos(2πkf T ) 2 2 k=1

3) If p(t) = Π( t−3T /2 ) and An = ±1 with equal probability, then 3T SX (f ) =
3T |P (f )|2 1 3T sinc(3T f )e−j2πf 2 RA (0) = T T = 9T sinc2 (3T f )

2

For the second part the power spectral density is SX (f ) = 9T sinc2 (3T f ) 1 1 ∞ + cos(2πkf T ) 2 2 k=1

Problem 4.47 1) E[Bn ] = E[An ] + E[An−1 ] = 0. To find the autocorrelation sequence of Bn ’s we write RB (k − l) = E[Bk Bl ] = E[(Ak + Ak−1 )(Al + Al−1 )] = E[Ak Al ] + E[Ak Al−1 ] + E[Ak−1 Al ] + E[Ak−1 Al−1 ] If k = l, then RB (0) = E[A2 ] + E[A2 ] = 2. If k = l − 1, then RB (1) = E[Ak Al−1 ]] = 1. Similarly, k k−1 if k = l + 1, RB (−1) = E[Ak−1 Al ]] = 1. Thus, RB (k − l) = Using the results of Problem 4.45 we obtain SX (f ) = = |P (f )|2 T


  2 k−l =0 

  0 otherwise

1 k − l = ±1

RB (0) + 2 k=1 RB (k) cos(2πkf T )

|P (f )|2 (2 + 2 cos(2πf T )) T 97

2) Consider the sample sequence of An ’s {· · · , −1, 1, 1, −1, −1, −1, 1, −1, 1, −1, · · ·}. Then the corresponding sequence of Bn ’s is {· · · , 0, 2, 0, −2, −2, 0, 0, 0, 0, · · ·}. The following figure depicts the corresponding sample function X(t). ... ...

If p(t) = Π( t−T /2 ), then |P (f )|2 = T 2 sinc2 (T f ) and the power spectral density is T SX (f ) = T sinc2 (T f )(2 + 2 cos(2πf T )) In the next figure we plot the power spectral density for T = 1.
4 3.5 3 2.5 2 1.5 1 0.5 0 -5

-4

-3

-2

-1

0

1

2

3

4

5

3) If Bn = An + αAn−1 , then RB (k − l) =

  1 + α2    0

α

k−l =0 k − l = ±1 otherwise

The power spectral density in this case is given by SX (f ) = |P (f )|2 (1 + α2 + 2α cos(2πf T )) T

Problem 4.48 In general the mean of a function of two random variables, g(X, Y ), can be found as E[g(X, Y )] = E[E[g(X, Y )|X]] where the outer expectation is with respect to the random variable X. 1) mY (t) = E[X(t + Θ)] = E[E[X(t + Θ)|Θ]] where E[X(t + Θ)|Θ] = = X(t + θ)fX(t)|Θ (x|θ)dx X(t + θ)fX(t) (x)dx = mX (t + θ)

where we have used the independence of X(t) and Θ. Thus mY (t) = E[mX (t + θ)] = 1 T
T

mX (t + θ)dθ = mY
0

98

where the last equality follows from the periodicity of mX (t + θ). Similarly for the autocorrelation function RY (t + τ, t) = E [E[X(t + τ + Θ)X(t + Θ)|Θ]] = E [RX (t + τ + θ, t + θ)] 1 T RX (t + τ + θ, t + θ)dθ = T 0 1 T RX (t + τ, t )dt = T 0 where we have used the change of variables t = t + θ and the periodicity of RX (t + τ, t) 2) SY (f ) = E |YT (f )|2 =E E T →∞ T lim lim |YT (f )|2 Θ T →∞ T lim =E E |XT (f )|2 T →∞ T lim

= E E

|XT (f )ej2πf θ |2 Θ T →∞ T

= E [SX (f )] = SX (f )
1 3) Since SY (f ) = F[ T T 0

RX (t + τ, t)dt] and SY (f ) = SX (f ) we conclude that SX (f ) = F 1 T
T

RX (t + τ, t)dt
0

Problem 4.49 Using Parseval’s relation we obtain
∞ −∞

f 2 SX (f )df



= =

1 (2) δ (τ )RX (τ )dτ 4π 2 −∞ d2 1 = − 2 (−1)2 2 RX (τ )|τ =0 4π dτ 2 1 d = − 2 2 RX (τ )|τ =0 4π dτ − Also,
∞ −∞

−∞ ∞

F −1 [f 2 ]F −1 [SX (f )]dτ

SX (f )df = RX (0)

Combining the two relations we obtain WRM S = Problem 4.50 RXY (t1 , t2 ) = E[X(t1 )Y (t2 )] = E[Y (t2 )X(t1 )] = RY X (t2 , t1 ) If we let τ = t1 −t2 , then using the previous result and the fact that X(t), Y (t) are jointly stationary, so that RXY (t1 , t2 ) depends only on τ , we obtain RXY (t1 , t2 ) = RXY (t1 − t2 ) = RY X (t2 − t1 ) = RY X (−τ )
∞ 2 −∞ f SX (f )df ∞ −∞ SX (f )df

=−

1 d2 RX (τ )|τ =0 4π 2 RX (0) dτ 2

99

Taking the Fourier transform of both sides of the previous relation we obtain SXY (f ) = F[RXY (τ )] = F[RY X (−τ )]


= =

−∞ ∞

RY X (−τ )e−j2πf τ dτ RY X (τ )e−j2πf τ dτ
∗ ∗ = SY X (f )

−∞

Problem 4.51 1) SX (f ) = N0 , RX (τ ) = 2 the output are given by

N0 2 δ(τ ).

The autocorrelation function and the power spectral density of SY (f ) = SX (f )|H(f )|2

RY (t) = RX (τ ) h(τ ) h(−τ ),

f f f With H(f ) = Π( 2B ) we have |H(f )|2 = Π2 ( 2B ) = Π( 2B ) so that

SY (f ) =

f N0 Π( ) 2 2B

Taking the inverse Fourier transform of the previous we obtain the autocorrelation function of the output N0 sinc(2Bτ ) = BN0 sinc(2Bτ ) RY (τ ) = 2B 2 2) The output random process Y (t) is a zero mean Gaussian process with variance
2 σY (t) = E[Y 2 (t)] = E[Y 2 (t + τ )] = RY (0) = BN0

The correlation coefficient of the jointly Gaussian processes Y (t + τ ), Y (t) is ρY (t+τ )Y (t) = COV (Y (t + τ )Y (t)) E[Y (t + τ )Y (t)] RY (τ ) = = σY (t+τ ) σY (t) BN0 BN0

1 1 With τ = 2B , we have RY ( 2B ) = sinc(1) = 0 so that ρY (t+τ )Y (t) = 0. Hence the joint probability density function of Y (t) and Y (t + τ ) is
Y 2 (t+τ )+Y 2 (t) 1 − 2BN0 e 2πBN0

fY (t+τ )Y (t) =

Since the processes are Gaussian and uncorrelated they are also independent. Problem 4.52 The impulse response of a delay line that introduces a delay equal to ∆ is h(t) = δ(t − ∆). The output autocorrelation function is RY (τ ) = RX (τ ) h(τ ) h(−τ ) But,


h(τ ) h(−τ ) = = = Hence,

−∞ ∞ −∞ ∞ −∞

δ(−(t − ∆))δ(τ − (t − ∆))dt δ(t − ∆)δ(τ − (t − ∆))dt δ(t )δ(τ − t )dt = δ(τ )

RY (τ ) = RX (τ ) δ(τ ) = RX (τ ) 100

This is to be expected since a delay line does not alter the spectral characteristics of the input process. Problem 4.53 The converse of the theorem is not true. Consider for example the random process X(t) = cos(2πf0 t) + X where X is a random variable. Clearly mX (t) = cos(2πf0 t) + mX is a function of time. However, passing this process through the LTI system with transfer function f Π( 2W ) with W < f0 produces the stationary random process Y (t) = X. Problem 4.54 ∞ 1) Let Y (t) = −∞ X(τ )h(t − τ )dτ =
∞ ∞ −∞ h(τ )X(t

− τ )dτ . Then the mean mY (t) is
∞ −∞

mY (t) = E[ =

−∞ ∞

h(τ )X(t − τ )dτ ] =

h(τ )E[X(t − τ )]dτ

−∞

h(τ )mX (t − τ )dτ
∞ −∞

If X(t) is cyclostationary with period T then


mY (t + T ) =

−∞

h(τ )mX (t + T − τ )dτ =

h(τ )mX (t − τ )dτ = mY (t)

Thus the mean of the output process is periodic with the same period of the cyclostationary process X(t). The output autocorrelation function is RY (t + τ, t) = E[Y (t + τ )Y (t)]
∞ ∞

= E = Hence,

−∞ −∞ ∞ ∞

h(s)X(t + τ − s)h(v)X(t − v)dsdv

−∞ −∞ ∞

h(s)h(v)RX (t + τ − s, t − v)dsdv


RY (t + T + τ, t + T ) = =

−∞ −∞ ∞ ∞ −∞ −∞

h(s)h(v)RX (t + T + τ − s, t + T − v)dsdv h(s)h(v)RX (t + T + τ − s, t + T − v)dsdv

= RY (t + τ, t) where we have used the periodicity of RX (t+τ, t) for the last equality. Since both mY (t), RY (t+τ, t) are periodic with period T , the output process Y (t) is cyclostationary. 2) The crosscorrelation function is RXY (t + τ, t) = E[X(t + τ )Y (t)] = E X(t + τ )
∞ ∞

−∞

X(t − s)h(s)ds
∞ −∞

=

−∞

E[X(t + τ )X(t − s)]h(s)ds =

RX (t + τ, t − s)h(s)ds
T 2

which is periodic with period T . Integrating the previous over one period, i.e. from − T to 2 obtain ¯ RXY (τ ) = =
∞ −∞ ∞ −∞

we

1 T

T 2

−T 2

RX (t + τ, t − s)dth(s)ds

¯ RX (τ + s)h(s)ds

¯ = RX (τ ) h(−τ ) 101

Similarly we can show that ¯ ¯ RY (τ ) = RX Y (τ ) h(τ ) so that by combining the two we obtain ¯ ¯ RY (τ ) = RX (τ ) h(τ ) h(−τ ) 3) Taking the Fourier transform of the previous equation we obtain the desired relation among the spectral densities of the input and output. SY (f ) = SX (f )|H(f )|2 Problem 4.55 d 1) Y (t) = dt X(t) can be considered as the output process of a differentiator which is known to be a LTI system with impulse response h(t) = δ (t). Since X(t) is stationary, its mean is constant so that mY (t) = mX (t) = [mX (t)] = 0 d To prove that X(t) and dt X(t) are uncorrelated we have to prove that RXX (0) − mX (t)mX (t) = 0 or since mX (t) = 0 it suffices to prove that RXX (0) = 0. But,

RXX (τ ) = RX (τ ) δ (−τ ) = −RX (τ ) δ (τ ) = −RX (τ ) and since RX (τ ) = RX (−τ ) we obtain RXX (τ ) = −RX (τ ) = RX (−τ ) = −RXX (−τ ) Thus RXX (τ ) is an odd function and its value at the origin should be equal to zero RXX (0) = 0 The last proves that X(t) and d dt X(t)

are uncorrelated. d dt X(t)

2) The autocorrelation function of the sum Z(t) = X(t) +

is
X (τ )

RZ (τ ) = RX (τ ) + RX (τ ) + RXX (τ ) + RX If we take the Fourier transform of both sides we obtain

SZ (f ) = SX (f ) + SX (f ) + 2Re[SXX (f )] But, SXX (f ) = F[−RX (τ ) δ (τ )] = SX (f )(−j2πf ) so that Re[SXX (f )] = 0. Thus, SZ (f ) = SX (f ) + SX (f ) Problem 4.56 1) The impulse response of the system is h(t) = L[δ(t)] = δ (t) + δ (t − T ). It is a LTI system so that the output process is a stationary. This is true since Y (t + c) = L[X(t + c)] for all c, so if X(t) and X(t + c) have the same statistical properties, so do the processes Y (t) and Y (t + c). 2) SY (f ) = SX (f )|H(f )|2 . But, H(f ) = j2πf + j2πf e−j2πf T so that SY (f ) = SX (f )4π 2 f 2 1 + e−j2πf T
2

= SX (f )4π 2 f 2 [(1 + cos(2πf T ))2 + sin2 (2πf T )] = SX (f )8π 2 f 2 (1 + cos(2πf T )) 102

3) The frequencies for which |H(f )|2 = 0 will not be present at the output. These frequencies are 1 k f = 0, for which f 2 = 0 and f = 2T + T , k ∈ Z, for which cos(2πf T ) = −1. Problem 4.57 1) Y (t) = X(t) (δ(t) − δ(t − T )). Hence, SY (f ) = SX (f )|H(f )|2 = SX (f )|1 − e−j2πf T |2 = SX (f )2(1 − cos(2πf T )) 2) Y (t) = X(t) (δ (t) − δ(t)). Hence, SY (f ) = SX (f )|H(f )|2 = SX (f )|j2πf − 1|2 = SX (f )(1 + 4π 2 f 2 ) 3) Y (t) = X(t) (δ (t) − δ(t − T )). Hence, SY (f ) = SX (f )|H(f )|2 = SX (f )|j2πf − e−j2πf T |2 = SX (f )(1 + 4π 2 f 2 + 4πf sin(2πf T )) Problem 4.58 Using Schwartz’s inequality E 2 [X(t + τ )Y (t)] ≤ E[X 2 (t + τ )]E[Y 2 (t)] = RX (0)RY (0) where equality holds for independent X(t) and Y (t). Thus |RXY (τ )| = E 2 [X(t + τ )Y (t)]
1 2

≤ RX (0)RY (0)
1/2

1/2

1/2

The second part of the inequality follows from the fact 2ab ≤ a2 + b2 . Thus, with a = RX (0) and 1/2 b = RY (0) we obtain 1 1/2 1/2 RX (0)RY (0) ≤ [RX (0) + RY (0)] 2 Problem 4.59 1) RXY (τ ) = RX (τ ) δ(−τ − ∆) = RX (τ ) δ(τ + ∆) = e−α|τ | δ(τ + ∆) = e−α|τ +∆| RY (τ ) = RXY (τ ) δ(τ − ∆) = e−α|τ +∆| δ(τ − ∆) = e−α|τ |

2)
∞ e−α|v| 1 dv RXY (τ ) = e−α|τ | (− ) = − τ −∞ t − v ∞ ∞ e−α|v| 1 1 RY (τ ) = RXY (τ ) =− dsdv τ −∞ −∞ s − v τ − s

(3) 103

The case of RY (τ ) can be simplified as follows. Note that RY (τ ) = F −1 [SY (f )] where SY (f ) = 2α SX (f )|H(f )|2 . In our case, SX (f ) = α2 +4π2 f 2 and |H(f )|2 = π 2 sgn2 (f ). Since SX (f ) does not contain any impulses at the origin (f = 0) for which |H(f )|2 = 0, we obtain RY (τ ) = F −1 [SY (f )] = π 2 e−α|τ | 3) The system’s transfer function is H(f ) = SXY (f ) = SX (f )H ∗ (f ) = = Thus, RXY (τ ) = F −1 [SXY (f )] 4α τ α − 1 −ατ 1 + α ατ = e e u−1 (−τ ) e u−1 (−τ ) + u−1 (τ ) + 2 1−α 1+α α−1 For the output power spectral density we have SY (f ) = SX (f )|H(f )|2 = SX (f ) 1+4π2 f 2 = SX (f ). 1+4π f Hence, RY (τ ) = F −1 [SX (f )] = e−α|τ | 4) The impulse response of the system is h(t) = RXY (τ ) = e−α|τ | = If τ ≥ T , then RXY (τ ) = − If 0 ≤ τ < T , then RXY (τ ) = =
0 τ +T 1 1 eαv dv + e−αv dv 2T τ −T 2T 0 1 2 − eα(τ −T ) − e−α(τ +T ) 2T α 1 t 2T Π( 2T ).
2 2

−1+j2πf 1+j2πf .

Hence,

−1 − j2πf 2α 2 f 2 1 − j2πf + 4π α−1 1 1+α 1 1 4α + + 2 1 − j2πf 1−α 1 + α α + j2πf α − 1 α − j2πf α2

Hence, τ 1 Π( ) 2T 2T

1 −τ Π( ) = e−α|τ | 2T 2T τ +T

1 2T

τ −T

e−α|v| dv

1 −αv e 2T α

τ +T τ −T

=

1 e−α(τ −T ) − e−α(τ +T ) 2T α

The autocorrelation of the output is given by RY (τ ) = e−α|τ | τ 1 τ 1 Π( ) Π( ) 2T 2T 2T 2T τ 1 −α|τ | Λ( ) = e 2T 2T 2T 1 |x| −α|τ −x| = 1− e dx 2T −2T 2T e−ατ 2αT e + e−2αT − 2 2T α2

If τ ≥ 2T , then RY (τ ) = If 0 ≤ τ < 2T , then RY (τ ) =

τ e−2αT −ατ 1 e−ατ − e + eατ + −2 2 2 4T 2 α2 T α 2T 2 α2 4T α 104

Problem 4.60 Consider the random processes X(t) = Xej2πf0 t and Y (t) = Y ej2πf0 t . Clearly RXY (t + τ, t) = E[X(t + τ )Y ∗ (t)] = E[XY ]ej2πf0 τ However, both X(t) and Y (t) are nonstationary for E[X(t)] = E[X]ej2πf0 t and E[Y (t)] = E[Y ]ej2πf0 t are not constant. Problem 4.61 1) E[X(t)] = = = 4 4 A cos(2πf0 t + θ)dθ π 0 π 4 4A sin(2πf0 t + θ) π 0 π 4A [sin(2πf0 t + ) − sin(2πf0 t)] π 4
1 f0 . π Thus, E[X(t)] is periodic with period T =

RX (t + τ, t) = E[A2 cos(2πf0 (t + τ ) + Θ) cos(2πf0 t + Θ)] A2 E[cos(2πf0 (2t + τ ) + Θ) + cos(2πf0 τ )] = 2 A2 A2 cos(2πf0 τ ) + E[cos(2πf0 (2t + τ ) + Θ)] = 2 2 π A2 4 4 A2 cos(2πf0 τ ) + cos(2πf0 (2t + τ ) + θ)dθ = 2 2 π 0 A2 A2 cos(2πf0 τ ) + (cos(2πf0 (2t + τ )) − sin(2πf0 (2t + τ ))) = 2 π
1 which is periodic with period T = 2f0 . Thus the process is cyclostationary with period T = Using the results of Problem 4.48 we obtain 1 f0 .

SX (f ) = F[

1 T RX (t + τ, t)dt] T 0 A2 A2 cos(2πf0 τ ) + = F 2 Tπ = F = A2 cos(2πf0 τ ) 2

T 0

(cos(2πf0 (2t + τ )) − sin(2πf0 (2t + τ ))dt

A2 (δ(f − f0 ) + δ(f + f0 )) 4

2) RX (t + τ, t) = E[X(t + τ )X(t)] = E[(X + Y )(X + Y )] = E[X 2 ] + E[Y 2 ] + E[Y X] + E[XY ] = E[X 2 ] + E[Y 2 ] + 2E[X][Y ]

105

where the last equality follows from the independence of X and Y . But, E[X] = 0 since X is uniform on [−1, 1] so that RX (t + τ, t) = E[X 2 ] + E[Y 2 ] = 1 1 2 + = 3 3 3

The Fourier transform of RX (t + τ, t) is the power spectral density of X(t). Thus 2 SX (f ) = F[RX (t + τ, t)] = δ(f ) 3 Problem 4.62 1 h(t) = e−βt u−1 (t) ⇒ H(f ) = β+j2πf . The power spectral density of the input process is SX (f ) = 2α F[e−α|τ | ] = α2 +4π2 f 2 . If α = β, then SY (f ) = SX (f )|H(f )|2 = If α = β, then SY (f ) = SX (f )|H(f )|2 = 2α (α2 + 4π 2 f 2 )(β 2 + 4π 2 f 2 ) (α2 2α + 4π 2 f 2 )2

Problem 4.63 ˆ 1) Let Y (t) = X(t)+N (t). The process X(t) is the response of the system h(t) to the input process Y (t) so that RY X (τ ) = RY (τ ) h(−τ ) ˆ = [RX (τ ) + RN (τ ) + RXN (τ ) + RN X (τ )] h(−τ ) Also by definition ˆ RY X (τ ) = E[(X(t + τ ) + N (t + τ ))X(t)] = RX X (τ ) + RN X (τ ) ˆ ˆ ˆ = RX X (τ ) + RN (τ ) h(−τ ) + RN X (τ ) h(−τ ) ˆ Substituting this expression for RY X (τ ) in the previous one, and cancelling common terms we ˆ obtain RX X (τ ) = RX (τ ) h(−τ ) + RXN (τ ) h(−τ ) ˆ 2) ˆ E (X(t) − X(t))2 = RX (0) + RX (0) − RX X (0) − RXX (0) ˆ ˆ ˆ ˆ We can write E (X(t) − X(t))2 in terms of the spectral densities as ˆ E (X(t) − X(t))2


= =

−∞ ∞ −∞

(SX (f ) + SX (f ) − 2SX X (f ))df ˆ ˆ SX (f ) + (SX (f ) + SN (f ) + 2Re[SXN (f )])|H(f )|2

−2(SX (f ) + SXN (f ))H ∗ (f ) df ˆ To find the H(f ) that minimizes E (X(t) − X(t))2 we set the derivative of the previous expression, with respect to H(f ), to zero. By doing so we obtain H(f ) = SX (f ) + SXN (f ) SX (f ) + SN (f ) + 2Re[SXN (f )] 106

3) If X(t) and N (t) are independent, then RXN (τ ) = E[X(t + τ )N (t)] = E[X(t + τ )]E[N (t)] Since E[N (t)] = 0 we obtain RXN (τ ) = 0 and the optimum filter is H(f ) = SX (f ) SX (f ) + N0 2

ˆ The corresponding value of E (X(t) − X(t))2 is ˆ Emin (X(t) − X(t))2 =
∞ −∞

SX (f )N0 df 2SX (f ) + N0

4) With SN (f ) = 1, SX (f ) =

1 1+f 2

and SXN (f ) = 0, then H(f ) =
1 1+f 2 1 + 1+f 2

1

=

1 2 + f2

Problem 4.64 ˆ ˜ 1) Let X(t) and X(t) be the outputs of the systems h(t) and g(t) when the input Z(t) is applied. Then, ˜ ˆ ˆ ˜ E[(X(t) − X(t))2 ] = E[(X(t) − X(t) + X(t) − X(t))2 ] ˆ ˜ ˆ = E[(X(t) − X(t))2 ] + E[(X(t) − X(t))2 ] ˆ ˆ ˜ +E[(X(t) − X(t)) · (X(t) − X(t))] But, ˆ ˆ ˜ E[(X(t) − X(t)) · (X(t) − X(t))] ˆ = E[(X(t) − X(t)) · Z(t) (h(t) − g(t))] ˆ = E (X(t) − X(t))
∞ ∞ −∞

(h(τ ) − g(τ ))Z(t − τ )dτ

=

−∞

ˆ E (X(t) − X(t))Z(t − τ ) (h(τ ) − g(τ ))dτ = 0

ˆ where the last equality follows from the assumption E (X(t) − X(t))Z(t − τ ) = 0 for all t, τ . Thus, ˜ ˆ ˆ ˜ E[(X(t) − X(t))2 ] = E[(X(t) − X(t))2 ] + E[(X(t) − X(t))2 ] and this proves that ˆ ˜ E[(X(t) − X(t))2 ] ≤ E[(X(t) − X(t))2 ] 2) ˆ ˆ E[(X(t) − X(t))Z(t − τ )] = 0 ⇒ E[X(t)Z(t − τ )] = E[X(t)Z(t − τ )] or in terms of crosscorrelation functions RXZ (τ ) = RXZ (τ ) = RZ X (−τ ). However, RZ X (−τ ) = ˆ ˆ ˆ RZ (−τ ) h(τ ) so that RXZ (τ ) = RZ (−τ ) h(τ ) = RZ (τ ) h(τ ) 3) Taking the Fourier of both sides of the previous equation we obtain SXZ (f ) = SZ (f )H(f ) or H(f ) = 107 SXZ (f ) SZ (f )

4) ˆ ˆ E[ 2 (t)] = E (X(t) − X(t))((X(t) − X(t)) ˆ = E[X(t)X(t)] − E[X(t)X(t)] = RX (0) − E = RX (0) − = RX (0) −
∞ −∞ ∞ −∞ ∞ −∞

Z(t − v)h(v)X(t)dv

RZX (−v)h(v)dv RXZ (v)h(v)dv

ˆ ˆ ˆ where we have used the fact that E[(X(t) − X(t))X(t)] = E[(X(t) − X(t))Z(t) h(t)] = 0 Problem 4.65 1) Using the results of Problem 4.45 we obtain
∞ |P (f )|2 RA (0) + 2 RA (k) cos(2πkf T ) SX (f ) = T k=1

Since, An ’s are independent random variables with zero mean RA (k) = σ 2 δ(k) so that SX (f ) = f 2 2 f 1 1 σ2 Π( ) σ = ) Π( 2T T 2W 2W 4W 2W

2) If T =

1 2W

then


PX (f ) =

−∞

f σ2 σ2 Π( )df = 2W 2W 2W

W −W

df = σ 2

3) SX1 (f ) = Hence, E[Ak Aj ] = E[X1 (kT )X1 (jT )] = RX1 ((k − j)T ) = N0 W sinc(2W (k − j)T ) = N0 W sinc(k − j) = N0 W 0 k=j otherwise N0 f Π( ) ⇒ RX1 (τ ) = N0 W sinc(2W τ ) 2 2W

Thus, we obtain the same conditions as in the first and second part of the problem with σ 2 = N0 W . The power spectral density and power content of X(t) will be SX (f ) = f N0 Π( ), 2 2W PX = N0 W

X(t) is the random process formed by sampling X1 (t) at the Nyquist rate. Problem 4.66 the noise equivalent bandwidth of a filter is Bneq =
∞ 2 −∞ |H(f )| df 2 2Hmax

108

If we have an ideal bandpass filter of bandwidth W , then H(f ) = 1 for |f − f0 | < W where f0 is the central frequency of the filter. Hence, Bneq 1 = 2
−f0 + W 2 −f0 − W 2

df +

f0 + W 2 f0 − W 2

df = W

Problem 4.67 In general SXc (f ) = SXs (f ) = If f0 = fc −
W 2

SX (f − f0 ) + SX (f + f0 ) |f | < f0 0 otherwise
  N0  2

in Example 4.6.1, then using the previous formula we obtain SXc (f ) = SXs (f ) = < |f | < |f | < W 2 otherwise
W 2 3W 2

N  0  0

The cross spectral density is given by SXc Xs (f ) = Thus, with f0 = fc −
W 2

j[SX (f + f0 ) − SX (f − f0 )] |f | < f0 0 otherwise

we obtain
  −j N0  2  

SXc Xs (f ) =

0 0

 j N0  2  

− 3W < f < W 2 2 |f | < W 2 3W W 2 K) = But,


p(1 − p)k−1 p(X = k, X > K) = p(X > K) p(X > K)

p(X > K) = k=K+1 p(1 − p)k−1 = p

∞ k=1

K

(1 − p)k−1 − k=1 (1 − p)k−1

= p so that

1 − (1 − 1 − 1 − (1 − p) 1 − (1 − p)

p)K

= (1 − p)K

p(X = k|X > K) = If we let k = K + l with l = 1, 2, . . ., then p(X = k|X > K) =

p(1 − p)k−1 (1 − p)K

p(1 − p)K (1 − p)l−1 = p(1 − p)l−1 (1 − p)K

that is p(X = k|X > K) is the geometrically distributed. Hence, using the results of the first part we obtain H(X|X > K) = −
∞ l=1

p(1 − p)l−1 log2 (p(1 − p)l−1 ) 1−p log2 (1 − p) p

= − log2 (p) − Problem 6.5

H(X, Y ) = H(X, g(X)) = H(X) + H(g(X)|X) = H(g(X)) + H(X|g(X)) But, H(g(X)|X) = 0, since g(·) is deterministic. Therefore, H(X) = H(g(X)) + H(X|g(X)) Since each term in the previous equation is non-negative we obtain H(X) ≥ H(g(X)) Equality holds when H(X|g(X)) = 0. This means that the values g(X) uniquely determine X, or that g(·) is a one to one mapping. Problem 6.6 The entropy of the source is
6

H(X) = − i=1 pi log2 pi = 2.4087

bits/symbol

The sampling rate is fs = 2000 + 2 · 6000 = 14000 Hz

129

This means that 14000 samples are taken per each second. Hence, the entropy of the source in bits per second is given by H(X) = 2.4087 × 14000 (bits/symbol) × (symbols/sec) = 33721.8 bits/second Problem 6.7 Consider the function f (x) = x − 1 − ln x. For x > 1, 1 df (x) =1− >0 dx x Thus, the function is monotonically increasing. Since, f (1) = 0, the latter implies that if x > 1 then, f (x) > f (1) = 0 or ln x < x − 1. If 0 < x < 1, then df (x) 1 =1− f (1) = 0 or ln x < x − 1. Therefore, for every x > 0, ln x ≤ x − 1 with equality if x = 0. Applying the inequality with x = ln
1/N pi ,

we obtain

1/N 1 − ln pi ≤ −1 N pi

Multiplying the previous by pi and adding, we obtain
N 1 − pi ln pi ln pi ≤ N i=1 i=1 N N i=1

1 − pi = 0 N
N

Hence,
N

H(X) ≤ − i=1 pi ln

1 = ln N N

pi = ln N i=1 But, ln N is the entropy (in nats/symbol) of the source when it is uniformly distributed (see Problem 6.2). Hence, for equiprobable symbols the entropy of the source achieves its maximum. Problem 6.8 Suppose that qi is a distribution over 1, 2, 3, . . . and that


iqi = m i=1 1 qi m

Let vi =

1−

1 m

i−1

and apply the inequality ln x ≤ x − 1 to vi . Then, 1 1 1− m m i−1 ln

− ln qi ≤

1 1 1− qi m m

i−1

−1

Multiplying the previous by qi and adding, we obtain


qi ln i=1 1 1 1− m m

i−1



∞ i=1

qi ln qi ≤

∞ i=1

∞ 1 1 (1 − )i−1 − qi = 0 m m i=1

130

But, 1 1 qi ln 1− m m i=1
∞ i−1 ∞

= i=1 qi ln(

1 1 ) + (i − 1) ln(1 − ) m m

= ln( = ln( = ln(

1 ∞ 1 ) + ln(1 − ) (i − 1)qi m m i=1 1 1 ) + ln(1 − ) m m
∞ i=1

iqi −



qi i=1 1 1 ) + ln(1 − )(m − 1) = −H(p) m m

where H(p) is the entropy of the geometric distribution (see Problem 6.4). Hence, −H(p) −
∞ i=1

qi ln qi ≤ 0 =⇒ H(q) ≤ H(p)

Problem 6.9 The marginal probabilities are given by p(X = 0) = k p(X = 0, Y = k) = p(X = 0, Y = 0) + p(X = 0, Y = 1) = p(X = 1, Y = k) = p(X = 1, Y = 1) = k 2 3

p(X = 1) = p(Y = 0) = k 1 3 1 3 2 3

p(X = k, Y = 0) = p(X = 0, Y = 0) =

p(Y = 1) = k p(X = k, Y = 1) = p(X = 0, Y = 1) + p(X = 1, Y = 1) =

Hence, H(X) = − H(X) = − H(X, Y ) = − i=0 1 1 1 1 pi log2 pi = −( log2 + log2 ) = .9183 3 3 3 3 i=0 1 1 1 1 pi log2 pi = −( log2 + log2 ) = .9183 3 3 3 3 i=0
2 1

1

1 1 log2 = 1.5850 3 3

H(X|Y ) = H(X, Y ) − H(Y ) = 1.5850 − 0.9183 = 0.6667 H(Y |X) = H(X, Y ) − H(X) = 1.5850 − 0.9183 = 0.6667 Problem 6.10 H(Y |X) = − x,y p(x, y) log p(y|x)

But, p(y|x) = p(g(x)|x) = 1. Hence, log p(g(x)|x) = 0 and H(Y |X) = 0. Problem 6.11 1) H(X) = −(.05 log2 .05 + .1 log2 .1 + .1 log2 .1 + .15 log2 .15 +.05 log2 .05 + .25 log2 .25 + .3 log2 .3) = 2.5282 131

2) After quantization, the new alphabet is B = {−4, 0, 4} and the corresponding symbol probabilities are given by p(−4) = p(−5) + p(−3) = .05 + .1 = .15 p(0) = p(−1) + p(0) + p(1) = .1 + .15 + .05 = .3 p(4) = p(3) + p(5) = .25 + .3 = .55 Hence, H(Q(X)) = 1.4060. As it is observed quantization decreases the entropy of the source. Problem 6.12 Using the first definition of the entropy rate, we have H = = n→∞ n→∞

lim H(Xn |X1 , . . . Xn−1 ) lim (H(X1 , X2 , . . . , Xn ) − H(X1 , X2 , . . . , Xn−1 ))

However, X1 , X2 , . . . Xn are independent, so that n n−1

H = lim

n→∞

H(Xi ) − i=1 i=1

H(Xi )

= lim H(Xn ) = H(X) n→∞ where the last equality follows from the fact that X1 , . . . , Xn are identically distributed. Using the second definition of the entropy rate, we obtain H = 1 H(X1 , X2 , . . . , Xn ) n 1 n H(Xi ) = lim n→∞ n i=1 n→∞ lim

=

n→∞

lim

1 nH(X) = H(X) n

The second line of the previous relation follows from the independence of X1 , X2 , . . . Xn , whereas the third line from the fact that for a DMS the random variables X1 , . . . Xn are identically distributed independent of n. Problem 6.13 lim H(Xn |X1 , . . . , Xn−1 ) − x1 ,...,xn

H = = =

n→∞

n→∞

lim

p(x1 , . . . , xn ) log2 p(xn |x1 , . . . , xn−1 ) p(x1 , . . . , xn ) log2 p(xn |xn−1 ) x1 ,...,xn

n→∞

lim






= =

n→∞

lim − xn ,xn−1

p(xn , xn−1 ) log2 p(xn |xn−1 )

n→∞

lim H(Xn |Xn−1 )

However, for a stationary process p(xn , xn−1 ) and p(xn |xn−1 ) are independent of n, so that H = lim H(Xn |Xn−1 ) = H(Xn |Xn−1 ) n→∞ 132

Problem 6.14 H(X|Y ) = − x,y p(x, y) log p(x|y) = − x,y p(x|y)p(y) log p(x|y) = y = y p(y) − x p(x|y) log p(x|y)

p(y)H(X|Y = y)

Problem 6.15 1) The marginal distribution p(x) is given by p(x) = H(X) = − x y

p(x, y). Hence, p(x, y) log p(x) x y

p(x) log p(x) = − p(x, y) log p(x) x,y = − Similarly it is proved that H(Y ) = −

x,y

p(x, y) log p(y). p(x)p(y) p(x,y) ,

2) Using the inequality ln w ≤ w − 1 with w = ln

we obtain

p(x)p(y) p(x)p(y) ≤ −1 p(x, y) p(x, y)

Multiplying the previous by p(x, y) and adding over x, y, we obtain p(x, y) ln p(x)p(y) − x,y x,y

p(x, y) ln p(x, y) ≤ x,y p(x)p(y) − x,y p(x, y) = 0

Hence, H(X, Y ) ≤ − x,y p(x, y) ln p(x)p(y) = − x,y p(x, y)(ln p(x) + ln p(y))

= − x,y p(x, y) ln p(x) − x,y p(x, y) ln p(y) = H(X) + H(Y )

Equality holds when Problem 6.16

p(x)p(y) p(x,y)

= 1, i.e when X, Y are independent.

H(X, Y ) = H(X) + H(Y |X) = H(Y ) + H(X|Y ) Also, from Problem 6.15, H(X, Y ) ≤ H(X) + H(Y ). Combining the two relations, we obtain H(Y ) + H(X|Y ) ≤ H(X) + H(Y ) =⇒ H(X|Y ) ≤ H(X) Suppose now that the previous relation holds with equality. Then, − x p(x) log p(x|y) = − x p(x) log p(x) ⇒ x p(x) log(

p(x) )=0 p(x|y)

However, p(x) is always greater or equal to p(x|y), so that log(p(x)/p(x|y)) is non-negative. Since p(x) > 0, the above equality holds if and only if log(p(x)/p(x|y)) = 0 or equivalently if and only if p(x)/p(x|y) = 1. This implies that p(x|y) = p(x) meaning that X and Y are independent.

133

Problem 6.17 ¯ To show that q = λp1 + λp2 is a legitimate probability vector we have to prove that 0 ≤ qi ≤ 1 and i qi = 1. Clearly 0 ≤ p1,i ≤ 1 and 0 ≤ p2,i ≤ 1 so that 0 ≤ λp1,i ≤ λ, If we add these two inequalities, we obtain ¯ 0 ≤ qi ≤ λ + λ =⇒ 0 ≤ qi ≤ 1 Also, qi = i i

¯ ¯ 0 ≤ λp2,i ≤ λ

¯ (λp1,i + λp2,i ) = λ i ¯ p1,i + λ i ¯ p2,i = λ + λ = 1

Before we prove that H(X) is a concave function of the probability distribution on X we show that 1 1 1 1 ln x ≥ 1 − x . Since ln y ≤ y − 1, we set x = y so that − ln x ≤ x − 1 ⇒ ln x ≥ 1 − x . Equality 1 holds when y = x = 1 or else if x = 1. ¯ ¯ H(λp1 + λp2 ) − λH(p1 ) − λH(p2 ) = λ i p1,i log

p1,i ¯ λp1,i + λp2,i

¯ +λ i p2,i log

p2,i ¯ λp1,i + λp2,i ¯ λp1,i + λp2,i p2,i

¯ λp1,i + λp2,i ≥ λ p1,i 1 − p1,i i ¯ = λ(1 − 1) + λ(1 − 1) = 0 Hence,

¯ +λ i p2,i 1 −

¯ ¯ λH(p1 ) + λH(p2 ) ≤ H(λp1 + λp2 ) Problem 6.18 Let pi (xi ) be the marginal distribution of the random variable Xi . Then, n n

H(Xi ) = i=1 i=1

− xi pi (xi ) log pi (xi ) n = − x1 x2

··· xn p(x1 , x2 , · · · , xn ) log i=1 pi (xi )

Therefore, n H(Xi ) − H(X1 , X2 , · · · Xn ) i=1 = x1 x2

··· xn p(x1 , x2 , · · · , xn ) log p(x1 , x2 , · · · , xn ) 1 − xn p(x1 , x2 , · · · , xn ) n i=1 pi (xi ) n i=1 pi (xi )

≥ x1 x2

··· ··· x1 x2 xn

p(x1 , x2 , · · · , xn ) ··· p1 (x1 )p2 (x2 ) · · · pn (xn ) xn =

p(x1 , x2 , · · · , xn ) − x1 x2

= 1−1=0

134

where we have used the inequality ln x ≥ 1 −

1 x

(see Problem 6.17.) Hence, n H(X1 , X2 , · · · Xn ) ≤ i=1 H(Xi )

with equality if

n i=1 pi (xi )

= p(x1 , · · · , xn ), i.e. a memoryless source.

Problem 6.19 1) The probability of an all zero sequence is p(X1 = 0, X2 = 0, · · · , Xn = 0) = p(X1 = 0)p(X2 = 0) · · · p(Xn = 0) = 1 2 n 2) Similarly with the previous case p(X1 = 1, X2 = 1, · · · , Xn = 1) = p(X1 = 1)p(X2 = 1) · · · p(Xn = 1) = 1 2 n 3) p(X1 = 1, · · · , Xk = 1, Xk+1 = 0, · · · Xn = 0) = p(X1 = 1) · · · p(Xk = 1)p(Xk+1 = 0) · · · p(Xn = 0) = 1 2 k 1 2

n−k

=

1 2

n

4) The number of zeros or ones follows the binomial distribution. Hence p(k ones ) = n k 1 2 k 1 2

n−k

=

n k

1 2

n

5) In case that p(Xi = 1) = p, the answers of the previous questions change as follows p(X1 = 0, X2 = 0, · · · , Xn = 0) = (1 − p)n p(X1 = 1, X2 = 1, · · · , Xn = 1) = pn p(first k ones, next n − k zeros) = pk (1 − p)n−k p(k ones ) = n k pk (1 − p)n−k

Problem 6.20 From the discussion in the beginning of Section 6.2 it follows that the total number of sequences of length n of a binary DMS source producing the symbols 0 and 1 with probability p and 1 − p respectively is 2nH(p) . Thus if p = 0.3, we will observe sequences having np = 3000 zeros and n(1 − p) = 7000 ones. Therefore, # sequences with 3000 zeros ≈ 28813 Another approach to the problem is via the Stirling’s approximation. In general the number of binary sequences of length n with k zeros and n − k ones is the binomial coefficient n k = n! k!(n − k)!

135

To get an estimate when n and k are large numbers we can use Stirling’s approximation n! ≈ Hence, # sequences with 3000 zeros = 1 10000! ≈ √ 1010000 3000!7000! 21 2π30 · 70 √ 2πn n e n Problem 6.21 1) The total number of typical sequences is approximately 2nH(X) where n = 1000 and H(X) = − i pi log2 pi = 1.4855

Hence, # typical sequences ≈ 21485.5 2) The number of all sequences of length n is N n , where N is the size of the source alphabet. Hence, 2nH(X) # typical sequences ≈ n ≈ 1.14510−30 # non-typical sequences N − 2nH(X) 3) The typical sequences are almost equiprobable. Thus, p(X = x, x typical) ≈ 2−nH(X) = 2−1485.5 4) Since the number of the total sequences is 2nH(X) the number of bits required to represent these sequences is nH(X) ≈ 1486. 5) The most probable sequence is the one with all a3 ’s that is {a3 , a3 , . . . , a3 }. The probability of this sequence is 1 n 1 1000 p({a3 , a3 , . . . , a3 }) = = 2 2 6) The most probable sequence of the previous question is not a typical sequence. In general in a typical sequence, symbol a1 is repeated 1000p(a1 ) = 200 times, symbol a2 is repeated approximately 1000p(a2 ) = 300 times and symbol a3 is repeated almost 1000p(a3 ) = 500 times. Problem 6.22 1) The entropy of the source is
4

H(X) = − i=1 p(ai ) log2 p(ai ) = 1.8464

bits/output

2) The average codeword length is lower bounded by the entropy of the source for error free reconstruction. Hence, the minimum possible average codeword length is H(X) = 1.8464. 3) The following figure depicts the Huffman coding scheme of the source. The average codeword length is ¯ R(X) = 3 × (.2 + .1) + 2 × .3 + .4 = 1.9 136

0

.4 0 0 1 .6 .3 1

0

10 .3 110 .2 111 .1

1

4) For the second extension of the source the alphabet of the source becomes A2 = {(a1 , a1 ), (a1 , a2 ), . . . (a4 , a4 )} and the probability of each pair is the product of the probabilities of each component, i.e. p((a1 , a2 )) = .2. A Huffman code for this source is depicted in the next figure. The average codeword length in bits per pair of source output is ¯ R2 (X) = 3 × .49 + 4 × .32 + 5 × .16 + 6 × .03 = 3.7300 ¯ ¯ The average codeword length in bits per each source output is R1 (X) = R2 (X)/2 = 1.865. 5) Huffman coding of the original source requires 1.9 bits per source output letter whereas Huffman coding of the second extension of the source requires 1.865 bits per source output letter and thus it is more efficient. (a4 , a4 ) .16 000 0 010 100 110 0010 0011 0110 1010 1110 01110 01111 10110 10111 11110 111110 111111 (a4 , a3 ) (a3 , a4 ) (a3 , a3 ) (a4 , a2 ) (a2 , a4 ) (a3 , a2 ) (a2 , a3 ) (a4 , a1 ) (a2 , a2 ) (a1 , a4 ) (a3 , a1 ) (a1 , a3 ) (a2 , a1 ) (a1 , a2 ) (a1 , a1 ) .12 u 0 0 0 0 u .12 .09 .08 .08 .06 .06 u 0

0 u 1

1 0 0

u u

1 0 u .04 .04 .04 .03 u u 1 u

1 0

1

u

1

1 u 1

.03 .02 .02 .01 u 1

0 0 1 u 1

1

Problem 6.23 The following figure shows the design of the Huffman code. Note that at each step of the algorithm the branches with the lowest probabilities (that merge together) are those at the bottom of the tree. 137

0 10

1 2 1 4

0 0 1

. . .
11...10 111...10 111...11 The entropy of the source is n−1 1 2n−2 1 2n−1 1 2n−1

. . .
0 0 1 1

1 0

1

H(X) = i=1 n−1

1 1 log2 2i + n−1 log2 2n−1 i 2 2 1 1 i log2 2 + n−1 (n − 1) log2 2 i 2 2 i n−1 + n−1 i 2 2

= i=1 n−1

= i=1 In the way that the code is constructed, the first codeword (0) has length one, the second codeword (10) has length two and so on until the last two codewords (111...10, 111...11) which have length n − 1. Thus, the average codeword length is n−1 ¯ R = x∈X p(x)l(x) = i=1 n−1 i + n−1 2i 2

= 2 1 − (1/2)n−1 = H(X) Problem 6.24 The following figure shows the position of the codewords (black filled circles) in a binary tree. Although the prefix condition is not violated the code is not optimum in the sense that it uses more bits that is necessary. For example the upper two codewords in the tree (0001, 0011) can be substituted by the codewords (000, 001) (un-filled circles) reducing in this way the average codeword length. Similarly codewords 1111 and 1110 can be substituted by codewords 111 and 110. e e u u u

0 T c 1

u u e e u u

138

Problem 6.25 The following figure depicts the design of a ternary Huffman code. 0 .22 0 10 11 12 20 21 22 The average codeword length is ¯ R(X) = x .18 .17 .15 .13 .1 .05

0 1 2 0 1 2 .28 2 .50 1

p(x)l(x) = .22 + 2(.18 + .17 + .15 + .13 + .10 + .05)

= 1.78 (ternary symbols/output) For a fair comparison of the average codeword length with the entropy of the source, we compute the latter with logarithms in base 3. Hence, H(X) = − x p(x) log3 p(x) = 1.7047

¯ As it is expected H(X) ≤ R(X). Problem 6.26 If D is the size of the code alphabet, then the Huffman coding scheme takes D source outputs and it merges them to 1 symbol. Hence, we have a decrease of output symbols by D − 1. In K steps of the algorithm the decrease of the source outputs is K(D − 1). If the number of the source outputs is K(D − 1) + D, for some K, then we are in a good position since we will be left with D symbols for which we assign the symbols 0, 1, . . . , D − 1. To meet the above condition with a ternary code the number of the source outputs should be 2K + 3. In our case that the number of source outputs is six we can add a dummy symbol with zero probability so that 7 = 2 · 2 + 3. The following figure shows the design of the ternary Huffman code. 0 .4 0 1 .17 1 20 .15 0 21 .13 1 2 0 220 .1 1 221 .05 2 2 220 .0 Problem 6.27 Parsing the sequence by the rules of the Lempel-Ziv coding scheme we obtain the phrases 0, 00, 1, 001, 000, 0001, 10, 00010, 0000, 0010, 00000, 101, 00001, 000000, 11, 01, 0000000, 110, 0, ... The number of the phrases is 19. For each phrase we need 5 bits plus an extra bit to represent the new source output.

139

Dictionary Location 1 00001 2 00010 3 00011 4 00100 5 00101 6 00110 7 00111 8 01000 9 01001 10 01010 11 01011 12 01100 13 01101 14 01110 15 01111 16 10000 17 10001 18 10010 19 Problem 6.28

Dictionary Contents 0 00 1 001 000 0001 10 00010 0000 0010 00000 101 00001 000000 11 01 0000000 110 0

Codeword 00000 0 00001 0 00000 1 00010 1 00010 0 00101 1 00011 0 00110 0 00101 0 00100 0 01001 0 00111 1 01001 1 01011 0 00011 1 00001 1 01110 0 01111 0 00000

I(X; Y ) = H(X) − H(X|Y ) = − x p(x) log p(x) + x,y p(x, y) log p(x|y) p(x, y) log p(x|y) x,y = − x,y p(x, y) log p(x) + p(x, y) log p(x|y) = p(x)

= x,y p(x, y) log x,y p(x, y) p(x)p(y)

1 1 Using the inequality ln y ≤ y − 1 with y = x , we obtain ln x ≥ 1 − x . Applying this inequality with p(x,y) x = p(x)p(y) we obtain

I(X; Y ) = x,y p(x, y) log

p(x, y) p(x)p(y) p(x)p(y) p(x, y) = x,y ≥ x,y p(x, y) 1 −

p(x, y) − x,y p(x)p(y) = 0

1 ln x ≥ 1 − x holds with equality if x = 1. This means that I(X; Y ) = 0 if p(x, y) = p(x)p(y) or in other words if X and Y are independent.

Problem 6.29 1) I(X; Y ) = H(X)−H(X|Y ). Since in general, H(X|Y ) ≥ 0, we have I(X; Y ) ≤ H(X). Also (see Problem 6.30), I(X; Y ) = H(Y ) − H(Y |X) from which we obtain I(X; Y ) ≤ H(Y ). Combining the two inequalities, we obtain I(X; Y ) ≤ min{H(X), H(Y )} 2) It can be shown (see Problem 6.7), that if X and Z are two random variables over the same set X and Z is uniformly distributed, then H(X) ≤ H(Z). Furthermore H(Z) = log |X |, where |X | is 140

the size of the set X (see Problem 6.2). Hence, H(X) ≤ log |X | and similarly we can prove that H(Y ) ≤ log |Y|. Using the result of the first part of the problem, we obtain I(X; Y ) ≤ min{H(X), H(Y )} ≤ min{log |X |, log |Y|} Problem 6.30 By definition I(X; Y ) = H(X) − H(X|Y ) and H(X, Y ) = H(X) + H(Y |X) = H(Y ) + H(X|Y ). Combining the two equations we obtain I(X; Y ) = H(X) − H(X|Y ) = H(X) − (H(X, Y ) − H(Y )) = H(X) + H(Y ) − H(X, Y ) = H(Y ) − (H(X, Y ) − H(X)) = H(Y ) − H(Y |X) = I(Y ; X) Problem 6.31 1) The joint probability density is given by p(Y = 1, X = 0) = p(Y = 1|X = 0)p(X = 0) = p p(Y = 0, X = 1) = p(Y = 0|X = 1)p(X = 1) = (1 − p) p(Y = 1, X = 1) = (1 − )(1 − p) p(Y = 0, X = 0) = (1 − )p The marginal distribution of Y is p(Y = 1) = p(Y = 0) = Hence, H(X) = −p log2 p − (1 − p) log2 (1 − p) H(Y ) = −(1 + 2 p − − p) log2 (1 + 2 p − − p) −( + p − 2 p) log2 ( + p − 2 p) H(Y |X) = − x,y p + (1 − )(1 − p) = 1 + 2 p − − p (1 − p) + (1 − )p = + p − 2 p

p(x, y) log2 (p(y|x)) = − p log2 − (1 − p) log2

−(1 − )(1 − p) log2 (1 − ) − (1 − )p log2 (1 − ) = − log2 − (1 − ) log2 (1 − ) H(X, Y ) = H(X) + H(Y |X) = −p log2 p − (1 − p) log2 (1 − p) − log2 − (1 − ) log2 (1 − ) H(X|Y ) = H(X, Y ) − H(Y ) = −p log2 p − (1 − p) log2 (1 − p) − log2 − (1 − ) log2 (1 − ) (1 + 2 p − − p) log2 (1 + 2 p − − p) +( + p − 2 p) log2 ( + p − 2 p) I(X; Y ) = H(X) − H(X|Y ) = H(Y ) − H(Y |X) = log2 + (1 − ) log2 (1 − ) −(1 + 2 p − − p) log2 (1 + 2 p − − p) −( + p − 2 p) log2 ( + p − 2 p) 2) The mutual information is I(X; Y ) = H(Y ) − H(Y |X). As it was shown in the first question H(Y |X) = − log2 − (1 − ) log2 (1 − ) and thus it does not depend on p. Hence, I(X; Y ) 141

is maximized when H(Y ) is maximized. However, H(Y ) is the binary entropy function with probability q = 1 + 2 p − − p, that is H(Y ) = Hb (q) = Hb (1 + 2 p − − p) Hb (q) achieves its maximum value, which is one, for q = 1 . Thus, 2 1+2 p− −p= 1 1 =⇒ p = 2 2

3) Since I(X; Y ) ≥ 0, the minimum value of I(X; Y ) is zero and it is obtained for independent X and Y . In this case p(Y = 1, X = 0) = p(Y = 1)p(X = 0) =⇒ p = (1 + 2 p − − p)p or = 1 . This value of epsilon also satisfies 2 p(Y = 0, X = 0) = p(Y = 0)p(X = 0) p(Y = 1, X = 1) = p(Y = 1)p(X = 1) p(Y = 0, X = 1) = p(Y = 0)p(X = 1) resulting in independent X and Y . Problem 6.32 I(X; Y ZW ) = I(Y ZW ; X) = H(Y ZW ) − H(Y ZW |X) = H(Y ) + H(Z|Y ) + H(W |Y Z) −[H(Y |X) + H(Z|XY ) + H(W |XY Z)] = [H(Y ) − H(Y |X)] + [H(Z|Y ) − H(Z|Y X)] +[H(W |Y Z) − H(W |XY Z)] = I(X; Y ) + I(Z|Y ; X) + I(W |ZY ; X) = I(X; Y ) + I(X; Z|Y ) + I(X; W |ZY ) This result can be interpreted as follows: The information that the triplet of random variables (Y, Z, W ) gives about the random variable X is equal to the information that Y gives about X plus the information that Z gives about X, when Y is already known, plus the information that W provides about X when Z, Y are already known. Problem 6.33 1) Using Bayes rule, we obtain p(x, y, z) = p(z)p(x|z)p(y|x, z). Comparing this form with the one given in the first part of the problem we conclude that p(y|x, z) = p(y|x). This implies that Y and Z are independent given X so that, I(Y ; Z|X) = 0. Hence, I(Y ; ZX) = I(Y ; Z) + I(Y ; X|Z) = I(Y ; X) + I(Y ; Z|X) = I(Y ; X) Since I(Y ; Z) ≥ 0, we have I(Y ; X|Z) ≤ I(Y ; X)

2) Comparing p(x, y, z) = p(x)p(y|x)p(z|x, y) with the given form of p(x, y, z) we observe that p(y|x) = p(y) or, in other words, random variables X and Y are independent. Hence, I(Y ; ZX) = I(Y ; Z) + I(Y ; X|Z) = I(Y ; X) + I(Y ; Z|X) = I(Y ; Z|X) 142

Since in general I(Y ; X|Z) ≥ 0, we have I(Y ; Z) ≤ I(Y ; Z|X)

3) For the first case consider three random variables X, Y and Z, taking the values 0, 1 with equal probability and such that X = Y = Z. Then, I(Y ; X|Z) = H(Y |Z) − H(Y |ZX) = 0 − 0 = 0, whereas I(Y ; X) = H(Y )−H(Y |X) = 1−0 = 1. Hence, I(Y ; X|Z) < I(X; Y ). For the second case consider two independent random variables X, Y , taking the values 0, 1 with equal probability and a random variable Z which is the sum of X and Y (Z = X +Y .) Then, I(Y ; Z) = H(Y )−H(Y |Z) = 1 − 1 = 0, whereas I(Y ; Z|X) = H(Y |X) − H(Y |ZX) = 1 − 0 = 1. Thus, I(Y ; Z) < I(Y ; Z|X). Problem 6.34 1) I(X; Y ) = H(X) − H(X|Y ) = − x p(x) log p(x) + x y

p(x, y) log p(x|y)

Using Bayes formula we can write p(x|y) as p(x|y) = Hence, I(X; Y ) = − x p(x, y) = p(y)

p(x)p(y|x) x p(x)p(y|x)

p(x) log p(x) + x y

p(x, y) log p(x|y) p(x)p(y|x) log x y

= − x p(x) log p(x) + p(x)p(y|x) log

p(x)p(y|x) x p(x)p(y|x)

= x y

p(y|x) x p(x)p(y|x)

Let p1 and p2 be given on X and let p = λp1 + (1 − λ)p2 . Then, p is a legitimate probability ¯ vector, for its elements p(x) = λp1 (x) + λp2 (x) are non-negative, less or equal to one and p(x) = x x

¯ λp1 (x) + λp2 (x) = λ x ¯ p1 (x) + λ x ¯ p2 (x) = λ + λ = 1

Furthermore, ¯ ¯ λI(p1 ; Q) + λI(p2 ; Q) − I(λp1 + λp2 ; Q) p(y|x) p1 (x)p(y|x) log = λ p1 (x)p(y|x) x x y ¯ +λ x y

p2 (x)p(y|x) log

p(y|x) x p2 (x)p(y|x) p(y|x) ¯ x (λp1 (x) + λp2 (x))p(y|x)

− x y

¯ (λp1 (x) + λp2 (x))p(y|x) log λp1 (x)p(y|x) log

= x y

¯ (λp1 (x) + λp2 (x))p(y|x) x p1 (x)p(y|x)

+ x y

¯ (λp1 (x) + λp2 (x))p(y|x) ¯ λp2 (x)p(y|x) log x p2 (x)p(y|x)

143

≤ x y

λp1 (x)p(y|x) + x y

¯ (λp1 (x) + λp2 (x))p(y|x) −1 x p1 (x)p(y|x)

¯ (λp1 (x) + λp2 (x))p(y|x) ¯ −1 λp2 (x)p(y|x) x p2 (x)p(y|x)

= 0 where we have used the inequality log x ≤ x − 1. Thus, I(p; Q) is a concave function in p. ¯ 2) The matrix Q = λQ1 + λQ2 is a legitimate conditional probability matrix for its elements ¯ p(y|x) = λp1 (y|x) + λp2 (y|x) are non-negative, less or equal to one and p(y|x) = x y x y

¯ λp1 (y|x) + λp2 (y|x) ¯ p1 (y|x) + λ x y x y

= λ

p2 (y|x)

¯ = λ+λ=λ+1−λ=1 ¯ ¯ I(p; λQ1 + λQ2 ) − λI(p; Q1 ) + λI(p; Q2 ) = x y

¯ p(x)(λp1 (y|x) + λp2 (y|x)) log − x y

¯ λp1 (y|x) + λp2 (y|x) ¯ x p(x)(λp1 (y|x) + λp2 (y|x))

p(x)λp1 (y|x) log ¯ p(x)λp2 (y|x) log x y

p1 (y|x) x p(x)p1 (y|x) p2 (y|x) x p(x)p2 (y|x) ¯ λp1 (y|x) + λp2 (y|x) ¯ x p(x)(λp1 (y|x) + λp2 (y|x)) ¯ λp1 (y|x) + λp2 (y|x) ¯ x p(x)(λp1 (y|x) + λp2 (y|x)) x p(x)p1 (y|x)

− = x y

p(x)λp1 (y|x) log + x y

p1 (y|x) x p(x)p2 (y|x)

¯ p(x)λp2 (y|x) log p(x)λp1 (y|x)

p2 (y|x) −1 −1

≤ x y

¯ λp1 (y|x) + λp2 (y|x) ¯ x p(x)(λp1 (y|x) + λp2 (y|x)) ¯ λp1 (y|x) + λp2 (y|x) ¯ x p(x)(λp1 (y|x) + λp2 (y|x)) λp(x)p1 (y|x) x x p(x)p1 (y|x)

p1 (y|x) p2 (y|x)

+ x y

¯ p(x)λp2 (y|x)

x p(x)p2 (y|x)

= y x p(x)p1 (y|x) x p(x)(λp1 (y|x)

¯ + λp2 (y|x))

¯ λp1 (y|x) + λp2 (y|x)) p1 (y|x)

−λ x y

p(x)p1 (y|x) x p(x)p2 (y|x) y x p(x)(λp1 (y|x)

+ ¯ −λ x ¯ + λp2 (y|x))

x

¯ λp1 (y|x) + λp2 (y|x)) ¯ λp(x)p2 (y|x) p2 (y|x)

p(x)p2 (y|x) y = 0 Hence, I(p; Q) is a convex function on Q. Problem 6.35 1) The PDF of the random variable Y = αX is fY (y) = 1 y fX ( ) |α| α

144

Hence, h(Y ) = − = −
∞ −∞ ∞

fY (y) log(fY (y))dy

1 1 y y fX ( ) log fX ( ) dy |α| α |α| α −∞ ∞ 1 ∞ 1 y y y 1 fX ( )dy − fX ( ) log fX ( ) dy = − log |α| −∞ |α| α α α −∞ |α| 1 + h(X) = log |α| + h(X) = − log |α| 2) A similar relation does not hold if X is a discrete random variable. Suppose for example that X takes the values {x1 , x2 , . . . , xn } with probabilities {p1 , p2 , . . . , pn }. Then, Y = αX takes the values {αx1 , αx2 , . . . , αxn } with probabilities {p1 , p2 , . . . , pn }, so that H(Y ) = − i pi log pi = H(X)

Problem 6.36 1) h(X) = − 1 −x 1 x e λ ln( e− λ )dx λ λ 0 ∞ 1 ∞ 1 x x x 1 −λ e dx + e− λ dx = − ln( ) λ 0 λ λ λ 0 1 ∞ 1 −x = ln λ + e λ xdx λ 0 λ 1 = ln λ + λ = 1 + ln λ λ = 1 and E[x] = x ∞ 1 −λ dx 0 xλe



where we have used the fact 2)

x ∞ 1 −λ dx 0 λe

= λ.

h(X) = −

|x| 1 − |x| 1 e λ ln( e− λ )dx 2λ −∞ 2λ ∞ 1 |x| |x| 1 1 ∞ 1 = − ln( ) e− λ dx + |x| e− λ dx 2λ −∞ 2λ λ −∞ 2λ 0 ∞ x 1 x 1 1 −x e λ dx + x e− λ dx = ln(2λ) + λ −∞ 2λ 2λ 0 1 1 λ+ λ = 1 + ln(2λ) = ln(2λ) + 2λ 2λ



3) h(X) = −
0 −λ

x+λ x+λ ln dx − 2 λ λ2 1 λ2
0 −λ

λ 0 λ

−x + λ −x + λ ln dx 2 λ λ2

= − ln −
0 −λ

x+λ dx + λ2

0 λ 0

−x + λ dx λ2 −x + λ ln(−x + λ)dx λ2

x+λ ln(x + λ)dx − λ2 145

= ln(λ2 ) −
2

2 λ2

λ

z ln zdz
0 λ

2 z 2 ln z z 2 − = ln(λ ) − 2 λ 2 4 1 = ln(λ2 ) − ln(λ) + 2

0

Problem 6.37 1) Applying the inequality ln z ≤ z − 1 to the function z = ln p(x) + ln p(y) − ln p(x, y) ≤

p(x)p(y) p(x,y) ,

we obtain

p(x)p(y) −1 p(x, y)


Multiplying by p(x, y) and integrating over x, y, we obtain
∞ ∞ −∞ −∞

p(x, y) (ln p(x) + ln p(y)) dxdy − ≤
∞ ∞ −∞ −∞



p(x)p(y)dxdy −

−∞ −∞ ∞ ∞

p(x, y) ln p(x, y)dxdy p(x, y)dxdy

= 1−1=0 Hence, h(X, Y ) ≤ −
∞ ∞ −∞ −∞

−∞ −∞

p(x, y) ln p(x)dxdy −





−∞ −∞

p(x, y) ln p(y)dxdy

= h(X) + h(Y ) Also, h(X, Y ) = h(X|Y ) + h(Y ) so by combining the two, we obtain h(X|Y ) + h(Y ) ≤ h(X) + h(Y ) =⇒ h(X|Y ) ≤ h(X) Equality holds if z = p(x)p(y) p(x,y)

= 1 or, in other words, if X and Y are independent.

2) By definition I(X; Y ) = h(X) − h(X|Y ). However, from the first part of the problem h(X|Y ) ≤ h(X) so that I(X; Y ) ≥ 0 Problem 6.38 Let X be the exponential random variable with mean m, that is fX (x) = x 1 −m me

0

x≥0 otherwise

Consider now another random variable Y with PDF fY (x), which is non-zero for x ≥ 0, and such that ∞ E[Y ] = xfY (x)dx = m
0

Applying the inequality ln z ≤ z − 1 to the function x = ln(fX (x)) − ln(fY (x)) ≤

fX (x) fY (x) ,

we obtain

fX (x) −1 fY (x)
∞ 0 ∞

Multiplying both sides by fY (x) and integrating, we obtain
∞ 0

fY (x) ln(fX (x))dx −

∞ 0

fY (x) ln(fY (x))dx ≤ 146

fX (x)dx −

fY (x)dx = 0
0

Hence, h(Y ) ≤ −
∞ 0

fY (x) ln

∞ 1 xfY (x)dx m 0 1 = ln m + m = 1 + ln m = h(X) m

= − ln

1 −x e m dx m ∞ 1 fY (x)dx + m 0

where we have used the results of Problem 6.36. Problem 6.39 Let X be a zero-mean Gaussian random variable with variance σ 2 and Y another zero-mean random variable such that ∞ y 2 fY (y)dy = σ 2
−∞ √
2πσ 2 1

Applying the inequality ln z ≤ z − 1 to the function z = 1 2πσ 2
2 − x2 2σ

e

2 − x2 2σ

fY (x)

, we obtain
2

ln √

e

− ln fY (x) ≤

x √ 1 e− 2σ2 2 2πσ

fY (x)

−1

Multiplying the inequality by fY (x) and integrating, we obtain
∞ −∞

fY (x) ln √

1 2πσ 2



x2 dx + h(Y ) ≤ 1 − 1 = 0 2σ 2

Hence, h(Y ) ≤ − ln √ 1 2πσ 2 + 1 2σ 2
∞ −∞

x2 fX (x)dx

√ √ 1 1 = ln( 2πσ 2 ) + 2 σ 2 = ln(e 2 ) + ln( 2πσ 2 ) 2σ = h(X)

Problem 6.40 1) The entropy of the source is H(X) = −.25 log2 .25 − .75 log2 .75 = .8113 bits/symbol Thus, we can transmit the output of the source using 2000H(X) = 1623 bits/sec with arbitrarily small probability of error. 2) Since 0 ≤ D ≤ min{p, 1 − p} = .25 the rate distortion function for the binary memoryless source is R(D) = Hb (p) − Hb (D) = Hb (.25) − Hb (.1) = .8113 − .4690 = .3423 Hence, the required number of bits per second is 2000R(D) = 685. 3) For D = .25 the rate is R(D) = 0. We can reproduce the source at a distortion of D = .25 with no transmission at all by setting the reproduction vector to be the all zero vector.

147

Problem 6.41 1) For a zero-mean Gaussian source with variance σ 2 and with squared error distortion measure, the rate distortion function is given by R(D) = With R = 1 and σ 2 = 1, we obtain 2 = log 1 =⇒ D = 2−2 = 0.25 D
1 2

log σ D 0

2

0 ≤ D ≤ σ2 otherwise

2) If we set D = 0.01, then 1 1 1 log = log 100 = 3.322 bits/sample 2 0.01 2 Hence, the required transmission capacity is 3.322 bits per source symbol. R= Problem 6.42 λ λ 1) Since R(D) = log D and D = λ , we obtain R(D) = log( λ/2 ) = log(2) = 1 bit/sample. 2 2) The following figure depicts R(D) for λ = 0.1, .2 and .3. As it is observed from the figure, an increase of the parameter λ increases the required rate for a given distortion.
7 6 5 4 3 2 1 0 0

R(D)

l=.3 l=.1 0.05 l=.2 0.1 0.15 Distortion D 0.2 0.25 0.3

Problem 6.43 1) For a Gaussian random variable of zero mean and variance σ 2 the rate-distortion function is 2 given by R(D) = 1 log2 σ . Hence, the upper bound is satisfied with equality. For the lower bound 2 D recall that h(X) = 1 log2 (2πeσ 2 ). Thus, 2 h(X) − 1 log2 (2πeD) = 2 = 1 1 log2 (2πeσ 2 ) − log2 (2πeD) 2 2 1 2πeσ 2 log2 = R(D) 2 2πeD

As it is observed the upper and the lower bounds coincide. 2) The differential entropy of a Laplacian source with parameter λ is h(X) = 1 + ln(2λ). The variance of the Laplacian distribution is σ2 =
∞ −∞

x2

1 − |x| e λ dx = 2λ2 2λ 148

√ Hence, with σ 2 = 1, we obtain λ = 1/2 and h(X) = 1+ln(2λ) = 1+ln( 2) = 1.3466 nats/symbol = 1500 bits/symbol. A plot of the lower and upper bound of R(D) is given in the next figure.
5 Laplacian Distribution, unit variance

4

3
R(D)

2

1

Upper Bound Lower Bound

0

-1 0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

Distortion D

3) The variance of the triangular distribution is given by σ2 = = =
0 −λ λ −x + λ x+λ x2 dx + x2 dx 2 λ λ2 0 1 1 λ 1 4 λ 3 0 x + x + 2 − x4 + x3 4 3 λ 4 3 −λ

√ √ Hence, with σ 2 = 1, we obtain λ = 6 and h(X) = ln(6)−ln( 6)+1/2 = 1.7925 bits /source output. A plot of the lower and upper bound of R(D) is given in the next figure.
4.5 4 3.5 3 2.5
R(D)

1 λ2 λ2 6

λ 0

Triangular distribution, unit variance

2 1.5 1 0.5 0 -0.5 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Lower Bound Upper Bound

Distortion D

Problem 6.44 For a zero-mean Gaussian source of variance σ 2 , the rate distortion function is given by R(D) = 1 σ2 2 −2R . Hence, 2 log D . Expressing D in terms of R, we obtain D(R) = σ 2 σ 2 2−2R1 D(R1 ) 1 D(R1 ) = 2 −2R2 =⇒ R2 − R1 = log2 D(R2 ) σ 2 2 D(R2 ) With
D(R1 ) D(R2 )

= 1000, the number of extra bits needed is R2 − R1 = 149

1 2

log2 1000 = 5.

Problem 6.45 1) Consider the memoryless system Y (t) = Q(X(t)). At any given time t = t1 , the output Y (t1 ) depends only on X(t1 ) and not on any other past or future values of X(t). The nth order density fY (t1 ),...,Y (tn ) (y1 , . . . , yn ) can be determined from the corresponding density fX(t1 ),...,X(tn ) (x1 , . . . , xn ) using J fX(t1 ),...,X(tn ) (x1 , . . . , xn ) fY (t1 ),...,Y (tn ) (y1 , . . . , yn ) = |J(xj , . . . , xj )| n j=1 1 where J is the number of solutions to the system y1 = Q(x1 ), y2 = Q(x2 ), ···, yn = Q(xn )

and J(xj , . . . , xj ) is the Jacobian of the transformation system evaluated at the solution {xj , . . . , xj }. n n 1 1 Note that if the system has a unique solution, then J(x1 , . . . , xn ) = Q (x1 ) · · · Q (x2 ) From the stationarity of X(t) it follows that the numerator of all the terms under summation, in the expression for fY (t1 ),...,Y (tn ) (y1 , . . . , yn ), is invariant to a shift of the time origin. Furthermore, the denominators do not depend on t, so that fY (t1 ),...,Y (tn ) (y1 , . . . , yn ) does not change if ti is replaced by ti + τ . Hence, Y (t) is a strictly stationary process. 2) X(t) − Q(X(t)) is a memoryless function of X(t) and since the latter is strictly stationary, we ˜ conclude that X(t) = X(t) − Q(X(t)) is strictly stationary. Hence, SQNR = E[X 2 (t)] PX E[X 2 (t)] RX (0) = = = ˜ 2 (t)] E[(X(t) − Q(X(t)))2 ] RX (0) PX E[X ˜ ˜

Problem 6.46 1) From Table 6.2 we find that for a unit variance Gaussian process, the optimal level spacing for a 16-level uniform quantizer is .3352. This number has to be multiplied by σ to provide the optimal √ level spacing when the variance of the process is σ 2 . In our case σ 2 = 10 and ∆ = 10 · 0.3352 = 1.060. The quantization levels are x x1 = −ˆ16 = −7 · 1.060 − ˆ x x2 = −ˆ15 = −6 · 1.060 − ˆ x3 = −ˆ14 = −5 · 1.060 − ˆ x x4 = −ˆ13 = −4 · 1.060 − ˆ x x5 = −ˆ12 = −3 · 1.060 − ˆ x x6 = −ˆ11 = −2 · 1.060 − ˆ x x7 = −ˆ10 = −1 · 1.060 − ˆ x 1 2 1 2 1 2 1 2 1 2 1 2 1 2 · 1.060 = −7.950 · 1.060 = −6.890 · 1.060 = −5.830 · 1.060 = −4.770 · 1.060 = −3.710 · 1.060 = −2.650 · 1.060 = −1.590

1 x8 = −ˆ9 = − · 1.060 = −0.530 ˆ x 2 The boundaries of the quantization regions are given by a1 = a15 = −7 · 1.060 = −7.420 150

a2 = a14 = −6 · 1.060 = −6.360 a3 = a13 = −5 · 1.060 = −5.300 a4 = a12 = −4 · 1.060 = −4.240 a5 = a11 = −3 · 1.060 = −3.180 a6 = a10 = −2 · 1.060 = −2.120 a7 = a9 = −1 · 1.060 = −1.060 a8 = 0 2) The resulting distortion is D = σ 2 · 0.01154 = 0.1154. 3) The entropy is available from Table 6.2. Nevertheless we will rederive the result here. The probabilities of the 16 outputs are a15 p(ˆ1 ) = p(ˆ16 ) = Q( √ ) = 0.0094 x x 10 a14 a15 p(ˆ2 ) = p(ˆ15 ) = Q( √ ) − Q( √ ) = 0.0127 x x 10 10 a13 a14 p(ˆ3 ) = p(ˆ14 ) = Q( √ ) − Q( √ ) = 0.0248 x x 10 10 a12 a13 p(ˆ4 ) = p(ˆ13 ) = Q( √ ) − Q( √ ) = 0.0431 x x 10 10 a11 a12 p(ˆ5 ) = p(ˆ12 ) = Q( √ ) − Q( √ ) = 0.0674 x x 10 10 a11 a10 x p(ˆ6 ) = p(ˆ11 ) = Q( √ ) − Q( √ ) = 0.0940 x 10 10 a10 a9 x p(ˆ7 ) = p(ˆ10 ) = Q( √ ) − Q( √ ) = 0.1175 x 10 10 a8 a9 p(ˆ8 ) = p(ˆ9 ) = Q( √ ) − Q( √ ) = 0.1311 x x 10 10 Hence, the entropy of the quantized source is
1

ˆ H(X) = − i=1 6p(ˆi ) log2 p(ˆi ) = 3.6025 x x

This is the minimum number of bits per source symbol required to represent the quantized source. 4) Substituting σ 2 = 10 and D = 0.1154 in the rate-distortion bound, we obtain R= 1 σ2 = 3.2186 log2 D 2

5) The distortion of the 16-level optimal quantizer is D16 = σ 2 · 0.01154 whereas that of the 8-level optimal quantizer is D8 = σ 2 · 0.03744. Hence, the amount of increase in SQNR (db) is 10 log10 SQNR16 0.03744 = 5.111 db = 10 · log10 SQNR8 0.01154

Problem 6.47 With 8 quantization levels and σ 2 = 400 we obtain ∆ = σ.5860 = 20 · 0.5860 = 11.72 151

Hence, the quantization levels are 1 x x1 = −ˆ8 = −3 · 11.72 − 11.72 = −41.020 ˆ 2 1 x2 = −ˆ7 = −2 · 11.72 − 11.72 = −29.300 ˆ x 2 1 x3 = −ˆ6 = −1 · 11.72 − 11.72 = −17.580 ˆ x 2 1 x4 = −ˆ5 = − 11.72 = −5.860 ˆ x 2 The distortion of the optimum quantizer is D = σ 2 · 0.03744 = 14.976 As it is observed the distortion of the optimum quantizer is significantly less than that of Example 6.5.1. The informational entropy of the optimum quantizer is found from Table 6.2 to be 2.761. Problem 6.48 Using Table 6.3 we find the quantization regions and the quantized values for N = 16. These values √ 1/2 should be multiplied by σ = PX = 10, since Table 6.3 provides the optimum values for a unit variance Gaussian source. √ a1 = −a15 = − 10 · 2.401 = −7.5926 √ a2 = −a14 = − 10 · 1.844 = −5.8312 √ a3 = −a13 = − 10 · 1.437 = −4.5442 √ a4 = −a12 = − 10 · 1.099 = −3.4753 √ a5 = −a11 = − 10 · 0.7996 = −2.5286 √ a6 = −a10 = − 10 · 0.5224 = −1.6520 √ a7 = −a9 = − 10 · 0.2582 = −0.8165 a8 = 0 The quantized values are √ x x1 = −ˆ16 = − 10 · 2.733 = −8.6425 ˆ √ x x2 = −ˆ15 = − 10 · 2.069 = −6.5428 ˆ √ x x3 = −ˆ14 = − 10 · 1.618 = −5.1166 ˆ √ x x4 = −ˆ13 = − 10 · 1.256 = −3.9718 ˆ √ x x5 = −ˆ12 = − 10 · 0.9424 = −2.9801 ˆ √ x x6 = −ˆ11 = − 10 · 0.6568 = −2.0770 ˆ √ x x7 = −ˆ10 = − 10 · 0.3881 = −1.2273 ˆ √ x x8 = −ˆ9 = − 10 · 0.1284 = −0.4060 ˆ

The resulting distortion is D = 10 · 0.009494 = 0.09494. From Table 6.3 we find that the minimum 2 ˆ number of bits per source symbol is H(X) = 3.765. Setting D = 0.09494, σ 2 = 10 in R = 1 log2 σ 2 D we obtain R = 3.3594. Thus, the minimum number of bits per source symbol is slightly larger that the predicted one from the rate-distortion bound. Problem 6.49 1) The area between the two squares is 4 × 4 − 2 × 2 = 12. Hence, fX,Y (x, y) = 2 probability fX (x) is given by fX (x) = −2 fX,Y (x, y)dy. If −2 ≤ X < −1, then
2

1 12 .

The marginal

fX (x) =

−2

fX,Y (x, y)dy = 152

1 y 12

2 −2

=

1 3

If −1 ≤ X < 1, then fX (x) = Finally, if 1 ≤ X ≤ 2, then
2

−1 −2

1 dy + 12

2 1

1 1 dy = 12 6

fX (x) =

−2

fX,Y (x, y)dy =

1 y 12

2 −2

=

1 3

The next figure depicts the marginal distribution fX (x). . . . . .1/3. . .. 1/6 -2 Similarly we find that fY (y) = -1
  1  3  
1 6 1 3

1

2

−2 ≤ y < −1 −1 ≤ y < −1 1≤y≤2
1 2

2) The quantization levels x1 , x2 , x3 and x4 are set to − 3 , − 1 , ˆ ˆ ˆ ˆ 2 2 distortion is
−1

and

3 2

respectively. The resulting

DX

= 2 = = = 2 3 2 3 1 12

0 3 1 (x + )2 fX (x)dx + 2 (x + )2 fX (x)dx 2 2 −2 −1 −1 2 0 2 9 1 (x2 + 3x + )dx + (x + x + )dx 4 6 −1 4 −2 −1 1 3 3 2 9 2 1 3 1 2 1 x + x + x + x + x + x 3 2 6 3 4 2 4 −2

0 −1

The total distortion is Dtotal = DX + DY = whereas the resulting number of bits per (X, Y ) pair

1 1 1 + = 12 12 6

R = RX + RY = log2 4 + log2 4 = 4 3) Suppose that we divide the region over which p(x, y) = 0 into L equal subregions. The case of L = 4 is depicted in the next figure.

For each subregion the quantization output vector (ˆ, y ) is the centroid of the corresponding rectx ˆ angle. Since, each subregion has the same shape (uniform quantization), a rectangle with width

153

equal to one and length 12/L, the distortion of the vector quantizer is
1

D =
0

= = If we set D = 1 , we obtain 6

L 12 L 12

1 12 2 L )] dxdy [(x, y) − ( , 2 2L 12 0 12 1 L 1 12 2 ) dxdy (x − )2 + (y − 2 2L 0 0 123 1 12 12 1 1 + 3 + 2 = L 12 L 12 12 L

12 L

√ 12 1 =⇒ L = 144 = 12 = 2 L 12 Thus, we have to divide the area over which p(x, y) = 0, into 12 equal subregions in order to achieve the same distortion. In this case the resulting number of bits per source output pair (X, Y ) is R = log2 12 = 3.585. Problem 6.50 1) The joint probability density function is fXY (x, y) = fX (x) is fX (x) = y fXY (x, y)dy. 1 √ (2 2)2 1 8.

=

The marginal distribution

If −2 ≤ x ≤ 0,then x+2 fX (x) = If 0 ≤ x ≤ 2,then fX (x) =

1 x+2 fX,Y (x, y)dy = y|x+2 = −x−2 8 4 −x−2
−x+2 x−2

1 −x+2 −x + 2 fX,Y (x, y)dy = y|x−2 = 8 4

The next figure depicts fX (x).
1 2 3—— 33 — 3

−2

33

3

——

— —

2

From the symmetry of the problem we have fY (y) = y+2 4 −y+2 4

−2 ≤ y < 0 0≤y≤2

2) DX = 2 = = = 1 2 1 2 1 12
0 3 1 (x + )2 fX (x)dx + 2 (x + )2 fX (x)dx 2 2 −2 −1 −1 0 3 1 1 (x + )2 (x + 2)dx + (x + )2 (−x + 2)dx 2 2 −1 2 −2 −1 1 4 5 3 33 2 9 1 1 4 9 1 x + x + x + x x + x3 + x2 + x + 4 3 8 2 2 4 8 2 −2 −1

0 −1

The total distortion is Dtotal = DX + DY =

1 1 1 + = 12 12 6

154

whereas the required number of bits per source output pair R = RX + RY = log2 4 + log2 4 = 4 3) We divide the square over which p(x, y) = 0 into 24 = 16 equal square regions. The area of each square is 1 and the resulting distortion 2 D = 1 1 (x − √ )2 + (y − √ )2 dxdy 2 2 2 2 0 0 1 1 √ √ 1 2 2 (x − √ )2 dxdy = 4 2 2 0 0 1 √ x 1 4 2 (x2 + − √ )dx = √ 8 2 0 2 16 8 = = 4 √ 2 1 12 1 1 3 1 x + x − √ x2 3 8 2 2
1 √ 2 1 √ 2 1 √ 2

0

Hence, using vector quantization and the same rate we obtain half the distortion. Problem 6.51 X ˘ X = xmax = X/2. Hence, ˘ E[X 2 ] = ˘ With ν = 8 and X 2 = 1 , we obtain 3 SQNR = 3 · 48 · Problem 6.52 1) σ 2 = E[X 2 (t)] = RX (τ )|τ =0 = Hence, ˘ SQNR = 3 · 4ν X 2 = 3 · 4ν With SQNR = 60 db, we obtain 10 log10 3 · 4q 2 = 60 =⇒ q = 9.6733 X2 A2 = 3 · 4ν 2 x2 2A max A2 2 1 = 48 = 48.165(db) 3 1 4
2 −2

1 X2 dx = x3 4 16 · 3

2 −2

=

1 3

The smallest integer larger that q is 10. Hence, the required number of quantization levels is ν = 10. 2) The minimum bandwidth requirement for transmission of a binary PCM signal is BW = νW . Since ν = 10, we have BW = 10W .

155

Problem 6.53 1) E[X 2 (t)] = = = Hence, SQNR =
2 x+2 −x + 2 dx + x2 dx 4 4 −2 0 2 1 1 4 2 3 0 1 1 2 x + x + − x4 + x3 4 4 3 4 4 3 −2 0 2 3 0

x2

3 × 4ν × x2 max

2 3

=

3 × 45 × 22

2 3

= 512 = 27.093(db)

2) If the available bandwidth of the channel is 40 KHz, then the maximum rate of transmission is ν = 40/5 = 8. In this case the highest achievable SQNR is SQNR = 3 × 48 × 22
2 3

= 32768 = 45.154(db)

3) In the case of a guard band of 2 KHz the sampling rate is fs = 2W + 2000 = 12 KHz. The highest achievable rate is ν = 2BW = 6.6667 and since ν should be an integer we set ν = 6. Thus, fs the achievable SQNR is 3 × 46 × 2 3 SQNR = = 2048 = 33.11(db) 22 Problem 6.54 1) The probabilities of the quantized source outputs are p(ˆ1 ) = p(ˆ4 ) = x x p(ˆ2 ) = p(ˆ3 ) = x x Hence, ˆ H(X) = − xi ˆ

x+2 1 −1 1 −1 1 + x = dx = x2 4 8 −2 2 −2 8 −2 1 −x + 2 1 1 1 1 3 dx = − x2 + x = 4 8 0 2 0 8 0
−1

p(ˆi ) log2 p(ˆi ) = 1.8113 bits / output sample x x

˜ ˜ ˜ ˜ 2) Let X = X − Q(X). Clearly if |X| > 0.5, then p(X) = 0. If |X| ≤ 0.5, then there are four ˜ = X − Q(X), which are denoted by x1 , x2 , x3 and x4 . The solution solutions to the equation X x1 corresponds to the case −2 ≤ X ≤ −1, x2 is the solution for −1 ≤ X ≤ 0 and so on. Hence, −(˜ + 0.5) + 2 x −x3 + 2 = 4 4 −x4 + 2 −(˜ + 1.5) + 2 x fX (x4 ) = = 4 4 ˜ The absolute value of (X − Q(X)) is one for X = x1 , . . . , x4 . Thus, for |X| ≤ 0.5 fX (x1 ) = fX (x3 ) =
4

(˜ − 1.5) + 2 x x1 + 2 = 4 4 x2 + 2 (˜ − 0.5) + 2 x fX (x2 ) = = 4 4

fX (˜) = ˜ x i=1 fX (xi ) |(xi − Q(xi )) |

x x x (˜ − 1.5) + 2 (˜ − 0.5) + 2 −(˜ + 0.5) + 2 −(˜ + 1.5) + 2 x + + + 4 4 4 4 = 1 = 156

Problem 6.55 1) RX (t + τ, t) = E[X(t + τ )X(t)] = E[Y 2 cos(2πf0 (t + τ ) + Θ) cos(2πf0 t + Θ)] 1 E[Y 2 ]E[cos(2πf0 τ ) + cos(2πf0 (2t + τ ) + 2Θ)] = 2 and since E[cos(2πf0 (2t + τ ) + 2Θ)] = we conclude that 1 2π


cos(2πf0 (2t + τ ) + 2θ)dθ = 0
0

1 3 RX (t + τ, t) = E[Y 2 ] cos(2πf0 τ ) = cos(2πf0 τ ) 2 2

2) 10 log10 SQNR = 10 log10 Thus,

3 × 4ν × RX (0) x2 max

= 40

4ν = 4 or ν = 8 2 The bandwidth of the process is W = f0 , so that the minimum bandwidth requirement of the PCM system is BW = 8f0 . log10 3) If SQNR = 64 db, then ν = log4 (2 · 106.4 ) = 12 Thus, ν − ν = 4 more bits are needed to increase SQNR by 24 db. The new minimum bandwidth requirement is BW = 12f0 . Problem 6.56 Suppose that the transmitted sequence is x. If an error occurs at the ith bit of the sequence, then the received sequence x is x = x + [0 . . . 010 . . . 0] where addition is modulo 2. Thus the error sequence is ei = [0 . . . 010 . . . 0], which in natural binary coding has the value 2i−1 . If the spacing between levels is ∆, then the error introduced by the channel is 2i−1 ∆. 2) ν Dchannel = i=1 ν

p(error in i bit) · (2i−1 ∆)2 pb ∆2 4i−1 = pb ∆2 i=1 =

1 − 4ν 1−4

= pb ∆ 2

4ν − 1 3

3) The total distortion is Dtotal = Dchannel + Dquantiz. = pb ∆2 = pb x2 4 · x2 4ν − 1 max + max2 N2 3 3·N 157 x2 4ν − 1 + max2 3 3·N

or since N = 2ν Dtotal = 4)

x2 x2 max (1 + 4pb (4ν − 1)) = max (1 + 4pb (N 2 − 1)) 3 · 4ν 3N 2

SNR = ˘ If we let X =
X xmax ,

E[X 2 ] E[X 2 ]3N 2 = 2 Dtotal xmax (1 + 4pb (N 2 − 1))

then

E[X 2 ] x2 max

˘ ˘ = E[X 2 ] = X 2 . Hence, ˘ ˘ 3 · 4ν X 2 3N 2 X 2 = 1 + 4pb (N 2 − 1) 1 + 4pb (4ν − 1)

SNR =

Problem 6.57 1)

log(1 + µ x|x| ) max sgn(x) g(x) = log(1 + µ)

Differentiating the previous using natural logarithms, we obtain g (x) = µ/xmax 1 sgn2 (x) ln(1 + µ) (1 + µ |x| ) xmax

Since, for the µ-law compander ymax = g(xmax ) = 1, we obtain D ≈ = = =
2 ymax 3 × 4ν ∞

−∞ 2 xmax [ln(1 + µ)]2 3 × 4ν µ2

fX (x) dx [g (x)]2
∞ −∞

1 + µ2

|x| |x|2 + 2µ 2 xmax xmax

fX (x)dx

x2 [ln(1 + µ)]2 max ˘ ˘ 1 + µ2 E[X 2 ] + 2µE[|X|] 3 × 4ν µ2 x2 [ln(1 + µ)]2 max ˘ ˘ 1 + µ2 E[X 2 ] + 2µE[|X|] 3 × N 2 µ2

˘ where N 2 = 4ν and X = X/xmax . 2) SQNR = = = E[X 2 ] D E[X 2 ] µ2 3 · N 2 2 2 ˘2 ˘ x2 max [ln(1 + µ)] (µ E[X ] + 2µE[|X|] + 1) ˘ 3µ2 N 2 E[X 2 ] ˘ ˘ [ln(1 + µ)]2 (µ2 E[X 2 ] + 2µE[|X|] + 1)

˘ 3) Since SQNRunif = 3 · N 2 E[X 2 ], we have SQNRµlaw = SQNRunif µ2 ˘ ˘ [ln(1 + µ)]2 (µ2 E[X 2 ] + 2µE[|X|] + 1)

˘ = SQNRunif G(µ, X) 158

where we identify ˘ G(µ, X) = µ2 ˘ ˘ [ln(1 + µ)]2 (µ2 E[X 2 ] + 2µE[|X|] + 1)

3) The truncated Gaussian distribution has a PDF given by
− x2 K e 2σx fY (y) = √ 2πσx
2

where the constant K is such that K Hence, ˘ E[|X|] = = = = K 2πσx 2K √ 2 4 2πσx √
4σx −4σx 4σx 0
2 x |x| − 2σ2 e x dx 4σx 2

4σx −4σx

− x2 1 1 √ e 2σx dx = 1 =⇒ K = = 1.0001 1 − 2Q(4) 2πσx

2

xe



x2 2 2σx

dx
4σx

x K 2 − 2 √ −σx e 2σx 2 2 2πσx 0 K √ (1 − e−2 ) = 0.1725 2 2π

˘ In the next figure we plot 10 log10 SQNRunif and 10 log10 SQNRmu−law vs. 10 log10 E[X 2 ] when the latter varies from −100 to 100 db. As it is observed the µ-law compressor is insensitive to the ˘ dynamic range of the input signal for E[X 2 ] > 1.
200

150 uniform
SQNR (db)

100

50 mu-law 0

-50 -100

-80

-60

-40

-20

0 E[X^2] db

20

40

60

80

100

Problem 6.58 The optimal compressor has the form



g(x) = ymax  where ymax = g(xmax ) = g(1).
∞ −∞

2

1 x 3 −∞ [fX (v)] dv  − 1 ∞ [fX (v)] 3 dv −∞



[fX (v)] 3 dv = = 2

1

1 −1

[fX (v)] 3 dv =
1

1

0 −1

(v + 1) 3 dv +
0

1

1

(−v + 1) 3 dv

1

x 3 dx =
0

1

3 2 159

If x ≤ 0, then x −∞

[fX (v)] 3 dv = =

1

x −1

(v + 1) 3 dv =
0

1

x+1

1 3 4 z 3 dz = z 3 4

x+1 0

4 3 (x + 1) 3 4

If x > 0, then x −∞

[fX (v)] 3 dv = =

1

0 −1

(v + 1) 3 dv +
0

1

x

(−v + 1) 3 dv =

1

3 + 4

1 1−x

z 3 dz

1

4 3 3 + 1 − (1 − x) 3 4 4

Hence, g(x) =

  g(1) (x + 1) 4 − 1 3  g(1) 1 − (1 − x)
4 3

−1 ≤ x < 0 0≤x≤1

The next figure depicts g(x) for g(1) = 1. Since the resulting distortion is (see Equation 6.6.17)
1 0.8 0.6 0.4 0.2 g(x) 0 -0.2 -0.4 -0.6 -0.8 -1 -1 -0.8 -0.6 -0.4 -0.2 0 x 0.2 0.4 0.6 0.8 1

D= we have

1 12 × 4ν

∞ −inf ty

[fX (x)] 3 dx

1

3

=

1 12 × 4ν

3 2

3

SQNR =

32 16 32 1 E[X 2 ] = × 4ν E[X 2 ] = × 4 ν · = 4ν D 9 9 6 27

Problem 6.59 The sampling rate is fs = 44100 meaning that we take 44100 samples per second. Each sample is quantized using 16 bits so the total number of bits per second is 44100 × 16. For a music piece of duration 50 min = 3000 sec the resulting number of bits per channel (left and right) is 44100 × 16 × 3000 = 2.1168 × 109 and the overall number of bits is 2.1168 × 109 × 2 = 4.2336 × 109

160

Chapter 7
Problem 7.1 The amplitudes Am take the values d Am = (2m − 1 − M ) , 2 Hence, the average energy is Eav = = = = = 1 M d2 4M d2 4M
M

m = 1, . . . M

s2 = m m=1 M

M d2 (2m − 1 − M )2 Eg 4M m=1

Eg m=1 [4m2 + (M + 1)2 − 4m(M + 1)]
M M

Eg 4 m=1 m2 + M (M + 1)2 − 4(M + 1) m=1 m

d2 M (M + 1)(2M + 1) M (M + 1) Eg 4 + M (M + 1)2 − 4(M + 1) 4M 6 2 2 − 1 d2 M Eg 3 4

Problem 7.2 The correlation coefficient between the mth and the nth signal points is γmn = sm · sn |sm ||sn |

where sm = (sm1 , sm2 , . . . , smN ) and smj = ± Es . Two adjacent signal points differ in only one N coordinate, for which smk and snk have opposite signs. Hence,
N

sm · sn = j=1 smj snj = j=k smj snj + smk snk

= (N − 1) Furthermore, |sm | = |sn | = (Es ) 2 so that
1

N −2 Es Es − = Es N N N

γmn =

N −2 N

The Euclidean distance between the two adjacent signal points is d= |sm − sn |2 = ±2 Es /N
2

=

4

Es =2 N

Es N

161

Problem 7.3 a) To show that the waveforms ψn (t), n = 1, . . . , 3 are orthogonal we have to prove that
∞ −∞

ψm (t)ψn (t)dt = 0,

m=n

Clearly,
∞ 4

c12 = =

−∞ 2 0

ψ1 (t)ψ2 (t)dt =
0 4

ψ1 (t)ψ2 (t)dt ψ1 (t)ψ2 (t)dt
2

ψ1 (t)ψ2 (t)dt +
2 0

1 4 = 0 = Similarly,

dt −

1 4

4

dt =
2

1 1 × 2 − × (4 − 2) 4 4



4

c13 = =

−∞ 1 1 0

ψ1 (t)ψ3 (t)dt =
0

ψ1 (t)ψ3 (t)dt 1 4
3

4 = 0 and

dt −

1 4

2 1

dt −

dt +
2

1 4

4

dt
3



4

c23 = =

−∞ 1 1 0

ψ2 (t)ψ3 (t)dt =
0

ψ2 (t)ψ3 (t)dt 1 4
3 2

4 = 0

dt −

1 4

2

dt +
1

dt −

1 4

4

dt
3

Thus, the signals ψn (t) are orthogonal. b) We first determine the weighting coefficients


xn =
4

−∞

x(t)ψn (t)dt, 1 2
0 1

n = 1, 2, 3
2 1

x1 =
0 4

x(t)ψ1 (t)dt = − x(t)ψ2 (t)dt = 1 2

dt +
0 4

1 2

dt −

1 2

3

dt +
2

1 2

4

dt = 0
3

x2 =
0 4

x(t)dt = 0 1 2
1 0

x3 =
0

x(t)ψ3 (t)dt = −

dt −

1 2

2

dt +
1

1 2

3

dt +
2

1 2

4

dt = 0
3

As it is observed, x(t) is orthogonal to the signal waveforms ψn (t), n = 1, 2, 3 and thus it can not represented as a linear combination of these functions. Problem 7.4 a) The expansion coefficients {cn }, that minimize the mean square error, satisfy
∞ 4

cn =

−∞

x(t)ψn (t)dt =
0

sin

πt ψn (t)dt 4

162

Hence, c1 = 1 πt 1 2 πt ψ1 (t)dt = sin dt − 4 2 0 4 2 0 2 πt 2 2 πt 4 = − cos + cos π 4 0 π 4 2 2 2 = − (0 − 1) + (−1 − 0) = 0 π π sin
4 4

sin
2

πt dt 4

Similarly, c2 = πt 1 4 πt ψ2 (t)dt = sin dt 4 2 0 4 0 4 2 2 πt 4 = − cos = − (−1 − 1) = π 4 0 π π sin
4

and c3 = πt ψ3 (t)dt 4 0 1 1 1 πt = sin dt − 2 0 4 2 = 0 sin
4

2

sin
1

1 πt dt + 4 2

3

sin
2

1 πt dt − 4 2

4

sin
3

πt dt 4

Note that c1 , c2 can be found by inspection since sin πt is even with respect to the x = 2 axis and 4 ψ1 (t), ψ3 (t) are odd with respect to the same axis. b) The residual mean square error Emin can be found from
∞ 3

Emin = Thus, Emin =

−∞

|x(t)|2 dt − i=1 |ci |2

4 2 1 πt 2 dt − = 4 π 2 0 πt 4 16 1 16 = 2 − sin − =2− 2 π 2 0 π2 π
4

4 0

sin

1 − cos

πt 16 dt − 2 2 π

Problem 7.5 a) As an orthonormal set of basis functions we consider the set ψ1 (t) = ψ3 (t) = 1 0≤t < 1 =⇒ r −A A > < 0 −A λ −λ|n| e 2

f (r|A) = e−λ[|r−A|−|r+A|] f (r| − A)

180

The average probability of error is P (e) = = = = = = 1 1 P (e|A) + P (e| − A) 2 2 1 0 1 ∞ f (r|A)dr + f (r| − A)dr 2 −∞ 2 0 1 0 1 ∞ λ2 e−λ|r−A| dr + λ2 e−λ|r+A| dr 2 −∞ 2 0 λ ∞ −λ|x| λ −A −λ|x| e dx + e dx 4 −∞ 4 A λ 1 λx −A λ 1 −λx ∞ e + − e 4λ 4 λ −∞ A 1 −λA e 2

b) The variance of the noise is
2 σn =

λ 2

∞ −∞ ∞ 0

e−λ|x| x2 dx 2! 2 = 2 3 λ λ

= λ Hence, the SNR is

e−λx x2 dx = λ

SNR = and the probability of error is given by

A2
2 λ2

=

A2 λ2 2 √

1 1 √ 2 2 P (e) = e− λ A = e− 2 2 For P (e) = 10−5 we obtain

2SNR

√ ln(2 × 10−5 ) = − 2SNR =⇒ SNR = 58.534 = 17.6741 dB If the noise was Gaussian, then P (e) = Q √ 2Eb = Q SNR N0

where SNR is the signal to noise ratio at the output of the matched filter. With P (e) = 10−5 we √ find SNR = 4.26 and therefore SNR = 18.1476 = 12.594 dB. Thus the required signal to noise ratio is 5 dB less when the additive noise is Gaussian. Problem 7.24 The energy of the two signals s1 (t) and s2 (t) is Eb = A2 T The dimensionality of the signal space is one, and by choosing the basis function as ψ(t) =
1 √ T 1 − √T

0≤t< T 2 T ≤t≤T 2

181

we find the vector representation of the signals as √ s1,2 = ±A T + n with n a zero-mean Gaussian random variable of variance signals is given by, where Eb = A2 T . Hence, P (e) = Q 2Eb N0
N0 2 .

The probability of error for antipodal




= Q

2A2 T  N0

Problem 7.25 The three symbols A, 0 and −A are used with equal probability. Hence, the optimal detector uses two thresholds, which are A and − A , and it bases its decisions on the criterion 2 2 A: 0: −A : r> − A 2

A A T ) = P (n1 − n2 > T + 0.5) = The average probability of error is P (e) = = 1 1 P (e|s1 ) + P (e|s2 ) 2 2 T −1 − x2 1 1 2 e 2σn dx + 2 −∞ 2 2 2πσn 2 2πσn 1 2 2πσn
∞ T +0.5 − x2 2 2σn

e

dx

∞ T +0.5

e



x2 2 2σn

dx

To find the value of T that minimizes the probability of error, we set the derivative of P (e) with respect to T equal to zero. Using the Leibnitz rule for the differentiation of definite integrals, we obtain (T −1)2 (T +0.5)2 − − ϑP (e) 1 2 2 2σn = e 2σn − e =0 2 ϑT 2 2πσn or (T − 1)2 = (T + 0.5)2 =⇒ T = 0.25 Thus, the optimal decision rule is r1 − r 2 s1 > 0.25 < s2

Problem 7.31 a) The inner product of si (t) and sj (t) is
∞ −∞ ∞ n n

si (t)sj (t)dt = =

−∞ k=1 n n

cik p(t − kTc ) l=1 ∞

cjl p(t − lTc )dt

cik cjl k=1 l=1 n n

−∞

p(t − kTc )p(t − lTc )dt

= k=1 l=1 n

cik cjl Ep δkl cik cjk k=1 = Ep

The quantity n cik cjk is the inner product of the row vectors C i and C j . Since the rows of the k=1 matrix Hn are orthogonal by construction, we obtain
∞ −∞ n

si (t)sj (t)dt = Ep k=1 c2 δij = nEp δij ik

Thus, the waveforms si (t) and sj (t) are orthogonal. 186

b) Using the results of Problem 7.28, we obtain that the filter matched to the waveform n si (t) = k=1 cik p(t − kTc )

can be realized as the cascade of a filter matched to p(t) followed by a discrete-time filter matched to the vector C i = [ci1 , . . . , cin ]. Since the pulse p(t) is common to all the signal waveforms si (t), we conclude that the n matched filters can be realized by a filter matched to p(t) followed by n discrete-time filters matched to the vectors C i , i = 1, . . . , n. Problem 7.32 a) The optimal ML detector selects the sequence C i that minimizes the quantity n D(r, C i ) = k=1 (rk −

Eb Cik )2

The metrics of the two possible transmitted sequences are w n

D(r, C 1 ) = k=1 (rk −

Eb )2 + k=w+1 n

(rk −

Eb )2

and D(r, C 2 ) =

w

(rk − k=1 Eb )2 + k=w+1 (rk +

Eb )2

Since the first term of the right side is common for the two equations, we conclude that the optimal ML detector can base its decisions only on the last n − w received elements of r. That is C2 > < 0 C1

n

n

(rk − k=w+1 Eb ) −
2 k=w+1

(rk +

Eb )

2

or equivalently n rk k=w+1 C1 > < 0 C2

b) Since rk =



Eb Cik + nk , the probability of error P (e|C 1 ) is


P (e|C 1 ) = P  Eb (n − w) +


n k=w+1



nk < 0


= P The random variable u = P (e|C 1 ) = n k=w+1 nk

n k=w+1

nk < −(n − w) Eb 

2 is zero-mean Gaussian with variance σu = (n − w)σ 2 . Hence

1 2π(n − w)σ 2

√ − Eb (n−w) −∞

x2 exp(− )dx = Q  2π(n − w)σ 2 187



Eb (n − w)  σ2



Similarly we find that P (e|C 2 ) = P (e|C 1 ) and since the two sequences are equiprobable


P (e) = Q 

Eb (n − w)  σ2



c) The probability of error P (e) is minimized when Eb (n−w) is maximized, that is for w = 0. This σ2 implies that C 1 = −C 2 and thus the distance between the two sequences is the maximum possible. Problem 7.33 1) The dimensionality of the signal space is two. An orthonormal basis set for the signal space is formed by the signals ψ1 (t) =
2 T,

0,

0≤t< T 2 otherwise

ψ2 (t) =

2 T,

0,

≤t = √ 1 2πN0

A2 T 2

N0 2 .

The probability of error

A2 T ) 2

e

x2 − 2N 0





dx = Q 

A2 T 2N0



where we have used the fact the n = n2 −n1 is a zero-mean Gaussian random variable with variance A2 T N0 . Similarly we find that P (e|s1 ) = Q 2N0 , so that
 

1 1 P (e) = P (e|s1 ) + P (e|s2 ) = Q  2 2

A2 T  2N0

4) The signal waveform ψ1 ( T − t) matched to ψ1 (t) is exactly the same with the signal waveform 2 ψ2 (T − t) matched to ψ2 (t). That is, ψ1 ( T − t) = ψ2 (T − t) = ψ1 (t) = 2 188
2 T,

0,

0≤t< T 2 otherwise

Thus, the optimal receiver can be implemented by using just one filter followed by a sampler which samples the output of the matched filter at t = T and t = T to produce the random variables r1 2 and r2 respectively. 5) If the signal s1 (t) is transmitted, then the received signal r(t) is 1 T r(t) = s1 (t) + s1 (t − ) + n(t) 2 2 The output of the sampler at t = r1 = A r2 = A 2
T 2

and t = T is given by 2T 3A + T 4 2 2T 5 + n1 = T 4 2 A2 T + n2 8 A2 T + n1 8

2T 1 + n2 = T 4 2

If the optimal receiver uses a threshold V to base its decisions, that is s1 > V < s2

r1 − r2

then the probability of error P (e|s1 ) is
 

P (e|s1 ) = P (n2 − n1 > 2 If s2 (t) is transmitted, then

A2 T − V ) = Q 2 8

V A2 T −√  8N0 N0

1 T r(t) = s2 (t) + s2 (t − ) + n(t) 2 2 The output of the sampler at t =
T 2

and t = T is given by

r1 = n 1 r2 = A = The probability of error P (e|s2 ) is
 

2T 3A + T 4 2 A2 T + n2 8

2T + n2 T 4

5 2

P (e|s2 ) = P (n1 − n2 >

5 2

A2 T 8

+ V ) = Q

5 2

A2 T 8N0

V +√  N0

Thus, the average probability of error is given by P (e) = = 1 1 P (e|s1 ) + P (e|s2 ) 2  2  1  Q 2 2





V 1 A2 T 5 − √  + Q 8N0 2 2 N0

V A2 T +√  8N0 N0

189

The optimal value of V can be found by setting differentiate definite integrals, we obtain


ϑP (e) ϑV

equal to zero. Using Leibnitz rule to
2

ϑP (e) = 0 = 2 ϑV or by solving in terms of V

A2 T 8N0

2



V 5 − √  − 2 N0

A2 T 8N0

V +√  N0

V =−

1 8

A2 T 2

6) Let a be fixed to some value between 0 and 1. Then, if we argue as in part 5) we obtain P (e|s1 , a) = P (n2 − n1 > 2 A2 T − V (a)) 8 A2 T + V (a)) 8

P (e|s2 , a) = P (n1 − n2 > (a + 2) and the probability of error is

1 1 P (e|a) = P (e|s1 , a) + P (e|s2 , a) 2 2 For a given a, the optimal value of V (a) is found by setting find that a A2 T V (a) = − 4 2 The mean square estimation of V (a) is
1 ϑP (e|a) ϑV (a)

equal to zero. By doing so we

V =
0

V (a)f (a)da = −

1 4

A2 T 2

1 0

ada = −

1 8

A2 T 2

Problem 7.34 For binary phase modulation, the error probability is
 

P2 = Q With P2 = 10−6 we find from tables that

2Eb = Q N0

A2 T N0



A2 T = 4.74 =⇒ A2 T = 44.9352 × 10−10 N0 If the data rate is 10 Kbps, then the bit interval is T = 10−4 and therefore, the signal amplitude is A= 44.9352 × 10−10 × 104 = 6.7034 × 10−3

Similarly we find that when the rate is 105 bps and 106 bps, the required amplitude of the signal is A = 2.12 × 10−2 and A = 6.703 × 10−2 respectively. Problem 7.35 1) The impulse response of the matched filter is s(t) = u(T − t) =
A T (T

0

− t) cos(2πfc (T − t)) 0 ≤ t ≤ T otherwise 190

2) The output of the matched filter at t = T is
T

g(T )

= = v=T −τ

u(t) s(t)|t=T =

0

u(T − τ )s(τ )dτ

=

= =

A2 T (T − τ )2 cos2 (2πfc (T − τ ))dτ T2 0 A2 T 2 v cos2 (2πfc v)dv T2 0 1 v cos(4πfc v) v2 A2 v 3 − sin(4πfc v) + + 2 3 T 6 4 × 2πfc 8 × (2πfc ) 4(2πfc )2 A2 T2 T3 6 + T2 4 × 2πfc − 1 8 × (2πfc )3 sin(4πfc T ) + T cos(4πfc T ) 4(2πfc )2

T 0

3) The output of the correlator at t = T is
T

q(T ) =
0

u2 (τ )dτ

A2 T 2 = τ cos2 (2πfc τ )dτ T2 0 However, this is the same expression with the case of the output of the matched filter sampled at t = T . Thus, the correlator can substitute the matched filter in a demodulation system and vise versa. Problem 7.36 1) The signal r(t) can be written as r(t) = ± 2Ps cos(2πfc t + φ) + = = 2Pc sin(2πfc t + φ) Ps Pc

2(Pc + Ps ) sin 2πfc t + φ + an tan−1 2PT sin 2πfc t + φ + an cos−1 Pc PT

where an = ±1 are the information symbols and PT is the total transmitted power. As it is observed the signal has the form of a PM signal where θn = an cos−1 Pc PT

Any method used to extract the carrier phase from the received signal can be employed at the receiver. The following figure shows the structure of a receiver that employs a decision-feedback PLL. The operation of the PLL is described in the next part. t = Tb
E Threshold E

v(t)
E

E× n T

E

Tb 0 (·)dt

’ ’

DFPLL cos(2πfc t + φ)

2) At the receiver the signal is demodulated by crosscorrelating the received signal r(t) = 2PT sin 2πfc t + φ + an cos−1 191 Pc PT + n(t)

ˆ ˆ with cos(2πfc t + φ) and sin(2πfc t + φ). The sampled values at the output of the correlators are r1 = r2 = 1 2 1 2 1 ˆ ˆ 2PT − ns (t) sin(φ − φ + θn ) + nc (t) cos(φ − φ + θn ) 2 1 ˆ ˆ 2PT − ns (t) cos(φ − φ + θn ) + nc (t) sin(φ − φ − θn ) 2

where nc (t), ns (t) are the in-phase and quadrature components of the noise n(t). If the detector has made the correct decision on the transmitted point, then by multiplying r1 by cos(θn ) and r2 by sin(θn ) and subtracting the results, we obtain (after ignoring the noise) r1 cos(θn ) = r2 sin(θn ) = 1 2 1 2 ˆ ˆ 2PT sin(φ − φ) cos2 (θn ) + cos(φ − φ) sin(θn ) cos(θn ) ˆ ˆ 2PT cos(φ − φ) cos(θn ) sin(θn ) − sin(φ − φ) sin2 (θn ) 1 2 ˆ 2PT sin(φ − φ)

e(t) = r1 cos(θn ) − r2 sin(θn ) =

The error e(t) is passed to the loop filter of the DFPLL that drives the VCO. As it is seen only the phase θn is used to estimate the carrier phase. 3) Having a correct carrier phase estimate, the output of the lowpass filter sampled at t = Tb is r = ± = ± = ± 1 2 1 2 1 2 2PT sin cos−1 2PT 1− Pc PT +n

Pc +n PT Pc PT +n

2PT 1 −

where n is a zero-mean Gaussian random variable with variance
2 σn = E Tb 0 Tb 0 0 Tb

n(t)n(τ ) cos(2πfc t + φ) cos(2πfc τ + φ)dtdτ

= =

N0 2 N0 4

cos2 (2πfc t + φ)dt

Note that Tb has been normalized to 1 since the problem has been stated in terms of the power of the involved signals. The probability of error is given by P (error) = Q 2PT N0 1− Pc PT

The loss due to the allocation of power to the pilot signal is SNRloss = 10 log10 1 − When Pc /PT = 0.1, then Pc PT

SNRloss = 10 log10 (0.9) = −0.4576 dB

The negative sign indicates that the SNR is decreased by 0.4576 dB.

192

Problem 7.37 1) If the received signal is r(t) = ±gT (t) cos(2πfc t + φ) + n(t) then by crosscorrelating with the signal at the output of the PLL ψ(t) = we obtain
T 0

2 ˆ gt (t) cos(2πfc t + φ) Eg

r(t)ψ(t)dt = ± +

2 Eg
T 0

T 0

2 ˆ gT (t) cos(2πfc t + φ) cos(2πfc t + φ)dt

n(t) 2 Eg
T 0

2 ˆ gt (t) cos(2πfc t + φ)dt Eg
2 gT (t) ˆ ˆ cos(2π2fc t + φ + φ) + cos(φ − φ) dt + n 2

= ± = ±

Eg ˆ cos(φ − φ) + n 2

where n is a zero-mean Gaussian random variable with variance N0 . If we assume that the signal 2 s1 (t) = gT (t) cos(2πfc t + φ) was transmitted, then the probability of error is


P (error|s1 (t)) = P 


Eg ˆ cos(φ − φ) + n < 0 2 Eg cos2 (φ N0 ˆ − φ) 
 



= Q

= Q

2Es

cos2 (φ N0

ˆ − φ) 



ˆ where Es = Eg /2 is the energy of the transmitted signal. As it is observed the phase error φ − φ reduces the SNR by a factor ˆ SNRloss = −10 log10 cos2 (φ − φ) ˆ 2) When φ − φ = 45o , then the loss due to the phase error is SNRloss = −10 log10 cos2 (45o ) = −10 log10 Problem 7.38 1) The closed loop transfer function is H(s) = G(s) 1 G(s)/s = = 2 √ 1 + G(s)/s s + G(s) s + 2s + 1 1 = 3.01 dB 2

The poles of the system are the roots of the denominator, that is √ √ 1 − 2± 2−4 1 = −√ ± j √ ρ1,2 = 2 2 2 Since the real part of the roots is negative, the poles lie in the left half plane and therefore, the system is stable. 193

2) Writing the denominator in the form
2 D = s2 + 2ζωn s + ωn

we identify the natural frequency of the loop as ωn = 1 and the damping factor as ζ = Problem 7.39 1) The closed loop transfer function is H(s) = G(s) K G(s)/s = = = 2 2+s+K 1 + G(s)/s s + G(s) τ1 s s +
K τ1 1 τ1 s

1 √ . 2

+

K τ1

The gain of the system at f = 0 is |H(0)| = |H(s)|s=0 = 1

2) The poles of the system are the roots of the denominator, that is √ −1 ± 1 − 4Kτ1 = ρ1,2 = 2τ1 In order for the system to be stable the real part of the poles must be negative. Since K is greater than zero, the latter implies that τ1 is positive. If in addition we require that the damping factor 1 ζ = 2√τ K is less than 1, then the gain K should satisfy the condition
1

K>

1 4τ1

Problem 7.40 The transfer function of the RC circuit is G(s) =
1 R2 + Cs 1 + τ2 s 1 + R2 Cs 1 = 1 + (R + R )Cs = 1 + τ s R1 + R2 + Cs 1 2 1

From the last equality we identify the time constants as τ2 = R2 C, τ1 = (R1 + R2 )C

Problem 7.41 Assuming that the input resistance of the operational amplifier is high so that no current flows through it, then the voltage-current equations of the circuit are V2 = −AV1 V1 − V2 = R1 + 1 i Cs V1 − V0 = iR where, V1 , V2 is the input and output voltage of the amplifier respectively, and V0 is the signal at the input of the filter. Eliminating i and V1 , we obtain V2 = V1 1+
1 R1 + Cs R 1 R1 + Cs 1 A − AR

194

If we let A → ∞ (ideal amplifier), then V2 1 + R1 Cs 1 + τ2 s = = V1 RCs τ1 s Hence, the constants τ1 , τ2 of the active filter are given by τ1 = RC, τ2 = R1 C

Problem 7.42 Using the Pythagorean theorem for the four-phase constellation, we find d 2 2 r1 + r1 = d2 =⇒ r1 = √ 2 The radius of the 8-PSK constellation is found using the cosine rule. Thus,
2 2 2 d2 = r2 + r2 − 2r2 cos(45o ) =⇒ r2 =

d 2−



2

The average transmitted power of the 4-PSK and the 8-PSK constellation is given by P4,av = d2 , 2 P8,av = d2 √ 2− 2

Thus, the additional transmitted power needed by the 8-PSK signal is P = 10 log10 2d2 √ = 5.3329 dB (2 − 2)d2

We obtain the same results if we use the probability of error given by PM = 2Q 2ρs sin π M

where ρs is the SNR per symbol. In this case, equal error probability for the two signaling schemes, implies that sin π π ρ8,s 2 π 4 =⇒ 10 log10 ρ4,s sin = 20 log10 = 5.3329 dB = ρ8,s sin 4 8 ρ4,s sin π 8
2

Problem 7.43 The constellation of Fig. P-7.43(a) has four points at a distance 2A from the origin and four points √ at a distance 2 2A. Thus, the average transmitted power of the constellation is √ 1 4 × (2A)2 + 4 × (2 2A)2 = 6A2 8 √ The second constellation has four points at a distance 7A from the origin, two points at a dis√ tance 3A and two points at a distance A. Thus, the average transmitted power of the second constellation is √ √ 1 9 4 × ( 7A)2 + 2 × ( 3A)2 + 2A2 = A2 Pb = 8 2 Since Pb < Pa the second constellation is more power efficient. Pa =

195

Problem 7.44 The optimum decision boundary of a point is determined by the perpendicular bisectors of each line segment connecting the point with its neighbors. The decision regions for the V.29 constellation are depicted in the next figure.

Problem 7.45 The following figure depicts a 4-cube and the way that one can traverse it in Gray-code order (see John F. Wakerly, Digital Design Principles and Practices, Prentice Hall, 1990). Adjacent points are connected with solid or dashed lines.
1110 1111

0110 0111 0010 0011

1010 1100

1011 1101

1000 0100 0101

1001

0000

0001

196

One way to label the points of the V.29 constellation using the Gray-code is depicted in the next figure. Note that the maximum Hamming distance between points with distance between them as large as 3 is only 2. Having labeled the innermost points, all the adjacent nodes can be found using the previous figure.

1000

1
1011

1

1001

2

0011

1 2 1
1111 0111 0001

1 1 1
0010 0110

1

1
0101

2 2

1 1
0000

1

2 1
1110

0100

1 1 2
1100 1010

1
1101

Problem 7.46 1) Consider the QAM constellation of Fig. P-7.46. Using the Pythagorean theorem we can find the radius of the inner circle as 1 a2 + a2 = A2 =⇒ a = √ A 2 The radius of the outer circle can be found using the cosine rule. Since b is the third side of a triangle with a and A the two other sides and angle between then equal to θ = 75o , we obtain √ 1+ 3 2 2 2 o A b = a + A − 2aA cos 75 =⇒ b = 2 2) If we denote by r the radius of the circle, then using the cosine theorem we obtain A2 = r2 + r2 − 2r cos 45o =⇒ r = A 2− √

2

3) The average transmitted power of the PSK constellation is
 2

PPSK = 8 ×

1  × 8

A 2−



 =⇒ P PSK = 2 − √2 2

A2

197

whereas the average transmitted power of the QAM constellation √ √ (1 + 3)2 2 1 2 + (1 + 3)2 A2 +4 A =⇒ PQAM = PQAM = 4 A2 8 2 4 8 The relative power advantage of the PSK constellation over the QAM constellation is gain = PPSK 8 √ √ = 1.5927 dB = PQAM (2 + (1 + 3)2 )(2 − 2)

Problem 7.47 1) The number of bits per symbol is k= 4800 4800 = =2 R 2400

Thus, a 4-QAM constellation is used for transmission. The probability of error for an M-ary QAM system with M = 2k , is PM 1 =1− 1−2 1− √ M Q 3kEb (M − 1)N0
2

With PM = 10−5 and k = 2 we obtain Q Eb 2Eb = 5 × 10−6 =⇒ = 9.7682 N0 N0

2 If the bit rate of transmission is 9600 bps, then k= 9600 =4 2400

In this case a 16-QAM constellation is used and the probability of error is PM Thus, Q 1 3 × Eb Eb = × 10−5 =⇒ = 25.3688 3 15 × N0 N0 1 =1− 1−2 1− Q 4 3 × 4 × Eb 15 × N0
2

3 If the bit rate of transmission is 19200 bps, then k= 19200 =8 2400

In this case a 256-QAM constellation is used and the probability of error is PM With PM = 10−5 we obtain 1 =1− 1−2 1− Q 16 Eb = 659.8922 N0 198 3 × 8 × Eb 255 × N0
2

4) The following table gives the SNR per bit and the corresponding number of bits per symbol for the constellations used in parts a)-c). k SNR (db) 2 9.89 4 14.04 8 28.19

As it is observed there is an increase in transmitted power of approximately 3 dB per additional bit per symbol. Problem 7.48 1) Although it is possible to assign three bits to each point of the 8-PSK signal constellation so that adjacent points differ in only one bit, this is not the case for the 8-QAM constellation of Figure P-7.46. This is because there are fully connected graphs consisted of three points. To see this consider an equilateral triangle with vertices A, B and C. If, without loss of generality, we assign the all zero sequence {0, 0, . . . , 0} to point A, then point B and C should have the form B = {0, . . . , 0, 1, 0, . . . , 0} C = {0, . . . , 0, 1, 0, . . . , 0}

where the position of the 1 in the sequences is not the same, otherwise B=C. Thus, the sequences of B and C differ in two bits. 2) Since each symbol conveys 3 bits of information, the resulted symbol rate is Rs = 90 × 106 = 30 × 106 symbols/sec 3

3) The probability of error for an M-ary PSK signal is PM = 2Q 2Es π sin N0 M

whereas the probability of error for an M-ary QAM signal is upper bounded by PM = 4Q 3Eav (M − 1)N0

Since, the probability of error is dominated by the argument of the Q function, the two signals will achieve the same probability of error if 2SNRPSK sin With M = 8 we obtain 2SNRPSK sin π = 8 SNRPSK 3 3SNRQAM =⇒ = = 1.4627 7 SNRQAM 7 × 2 × 0.38272 π = M 3SNRQAM M −1

4) Assuming that the magnitude of the signal points is detected correctly, then the detector for the 8-PSK signal will make an error if the phase error (magnitude) is greater than 22.5o . In the case of the 8-QAM constellation an error will be made if the magnitude phase error exceeds 45o . Hence, the QAM constellation is more immune to phase errors.

199

Problem 7.49 Consider the following waveforms of the binary FSK signaling: u1 (t) = u2 (t) = The correlation of the two signals is γ12 = = = If fc
1 T,

2Eb cos(2πfc t) T 2Eb cos(2πfc t + 2π∆f t) T

1 Eb 1 Eb 1 T

T

u1 (t)u2 (t)dt
0 T 0 T 0

2Eb cos(2πfc t) cos(2πfc t + 2π∆f t)dt T 1 T cos(2π∆f t)dt + cos(2π2fc t + 2π∆f t)dt T 0 1 T
T

then γ12 = cos(2π∆f t)dt =
0

sin(2π∆f T ) 2π∆f T

To find the minimum value of the correlation, we set the derivative of γ12 with respect to ∆f equal to zero. Thus, cos(2π∆f T )2πT ϑγ12 sin(2π∆f T ) =0= 2πT − ϑ∆f 2π∆f T (2π∆f T )2 and therefore, 2π∆f T = tan(2π∆f T ) Solving numerically the equation x = tan(x), we obtain x = 4.4934. Thus, 2π∆f T = 4.4934 =⇒ ∆f = 0.7151 T

and the value of γ12 is −0.2172. Note that when a gradient method like the Gauss-Newton is used to solve the equation f (x) = x − tan(x) = 0, then in order to find the smallest nonzero root, the initial value of the algorithm x0 should be selected in the range ( π , 3π ). 2 2 The probability of error can be expressed in terms of the distance d12 between the signal points, as   2 d12  pb = Q  2N0 The two signal vectors u1 , u2 are of equal energy u1
2

= u2

2

= Eb

and the angle θ12 between them is such that cos(θ12 ) = γ12 Hence, d2 = u1 12 and therefore,
2

+ u2

2

− 2 u1


u2 cos(θ12 ) = 2Es (1 − γ12 )




pb = Q 

2Es (1 − γ12 )  = Q 2N0 200

Es (1 + 0.2172)  N0



Problem 7.50 1) The first set represents a 4-PAM signal constellation. The points of the constellation are {±A, ±3A}. The second set consists of four orthogonal signals. The geometric representation of the signals is s1 = [ A 0 0 0 ] s2 = [ 0 A 0 0 ] s3 = [ 0 0 A 0 ] s4 = [ 0 0 0 A ]

This set can be classified as a 4-FSK signal. The third set can be classified as a 4-QAM signal constellation. The geometric representation of the signals is s1 = [ s2 = [
A √ 2 A √ 2 A √ ] 2 A − √2 A s3 = [ − √2 A ] s4 = [ − √2 A − √2 ] A √ ] 2

2) The average transmitted energy for sets I, II and III is Eav,I Eav,II Eav,III = = = 1 4 1 4 1 4
4

si i=1 4

2

1 = (A2 + 9A2 + 9A2 + A2 ) = 5A2 4 1 = (4A2 ) = A2 4 1 A 2 A2 = (4 × ( + )) = A2 4 2 2

si i=1 4

2

si i=1 2

3) The probability of error for the 4-PAM signal is given by P4,I 2(M − 1) Q = M


6Eav,I 3 = Q (M 2 − 1)N0 2

6 × 5 × A2  3  = Q 15N0 2







2A2  N0

4) When coherent detection is employed, then an upper bound on the probability of error is given by   P4,II,coherent ≤ (M − 1)Q Es = 3Q  N0 A2  N0

If the detection is performed noncoherently, then the probability of error is given by
M −1

P4,II,noncoherent = n=1 (−1)n+1

M −1 n

1 −nρs /(n=1) e n+1

= =

2ρs 3 − ρs 1 3ρs e 2 − e− 3 + e− 4 2 4 A2 2A2 3A 3 − 2N 1 − 4N2 − 0 − e 3N0 + 0 e e 2 4

5) It is not possible to use noncoherent detection for the signal set III. This is because all signals have the same square amplitude for every t ∈ [0, 2T ]. 6) The following table shows the bit rate to bandwidth ratio for the different types of signaling and the results for M = 4. 201

R To achieve a ratio W signal set (QAM).

Type R/W M =4 PAM 2 log2 M 4 QAM log2 M 2 2 log2 M FSK (coherent) 1 M log2 M FSK (noncoherent) 0.5 M of at least 2, we have to select either the first signal set (PAM) or the second

Problem 7.51 1) If the transmitted signal is u0 (t) = then the received signal is r(t) = 2Es cos(2πfc t + φ) + n(t) T 2Es cos(2πfc t), T 0≤t≤T

In the phase-coherent demodulation of M -ary FSK signals, the received signal is correlated with ˆ ˆ each of the M -possible received signals cos(2πfc t + 2πm∆f t + φm ), where φm are the carrier phase estimates. The output of the mth correlator is
T

rm =
0 T

ˆ r(t) cos(2πfc t + 2πm∆f t + φm )dt 2Es ˆ cos(2πfc t + φ) cos(2πfc t + 2πm∆f t + φm )dt T
T

=
0

+
0

ˆ n(t) cos(2πfc t + 2πm∆f t + φm )dt
T 0

= =

2Es T 2Es 1 T 2

1 ˆ ˆ cos(2π2fc t + 2πm∆f t + φm + φ) + cos(2πm∆f t + φm − φ) + n 2
T

0

ˆ cos(2πm∆f t + φm − φ)dt + n
N0 2 .

where n is a zero-mean Gaussian random variable with variance

2) In order to obtain orthogonal signals at the demodulator, the expected value of rm , E[rm ], should be equal to zero for every m = 0. Since E[n] = 0, the latter implies that
T 0

ˆ cos(2πm∆f t + φm − φ)dt = 0,
1 T.

∀m = 0

The equality is true when m∆f is a multiple of condition for orthogonality is

Since the smallest value of m is 1, the necessary 1 T

∆f =

Problem 7.52 The noise components in the sampled output of the two correlators for the mth FSK signal, are given by
T

nmc =
0 T

n(t) n(t)
0

nms =

2 cos(2πfc t + 2πm∆f t)dt T 2 sin(2πfc t + 2πm∆f t)dt T 202

Clearly, nmc , nms are zero-mean random variables since
T

E[nmc ] = E
0 T

n(t) E[n(t)]

2 cos(2πfc t + 2πm∆f t)dt T

=
0

T

E[nms ] = E
0 T

2 cos(2πfc t + 2πm∆f t)dt = 0 T 2 sin(2πfc t + 2πm∆f t)dt n(t) T 2 sin(2πfc t + 2πm∆f t)dt = 0 T

=
0

E[n(t)]

Furthermore,
T T 0 T

E[nmc nkc ] = E = = = = 2 T 2 T 2 T 2 T

0 T 0

2 n(t)n(τ ) cos(2πfc t + 2πm∆f t) cos(2πfc t + 2πk∆f τ )dtdτ T

E[n(t)n(τ )] cos(2πfc t + 2πm∆f t) cos(2πfc t + 2πk∆f τ )dtdτ N0 2 N0 2 N0 2
0 T

cos(2πfc t + 2πm∆f t) cos(2πfc t + 2πk∆f t)dt
0 T 0 T 0

1 (cos(2π2fc t + 2π(m + k)∆f t) + cos(2π(m − k)∆f t)) dt 2 1 N0 δmk dt = δmk 2 2
1 T

where we have used the fact that for fc
T 0

cos(2π2fc t + 2π(m + k)∆f t)dt ≈ 0

and for ∆f =

1 T 0

T

cos(2π(m − k)∆f t)dt = 0,

m=k

Thus, nmc , nkc are uncorrelated for m = k and since they are zero-mean Gaussian they are independent. Similarly we obtain
T T 0

E[nmc nks ] = E = = = = E[nms nks ] =

0 T

2 n(t)n(τ ) cos(2πfc t + 2πm∆f t) sin(2πfc t + 2πk∆f τ )dtdτ T

T 2 E[n(t)n(τ )] cos(2πfc t + 2πm∆f t) sin(2πfc t + 2πk∆f τ )dtdτ T 0 0 2 N0 T cos(2πfc t + 2πm∆f t) sin(2πfc t + 2πk∆f t)dt T 2 0 2 N0 T 1 (sin(2π2fc t + 2π(m + k)∆f t) − sin(2π(m − k)∆f t)) dt T 2 0 2 0 N0 δmk 2

Problem 7.53 1) The noncoherent envelope detector for the on-off keying signal is depicted in the next figure.

203

t=T
E× l E
2 T

r(t)

E

T ' c

t 0 (·)dτ

d d

rc

(·)2 cr k + E T E

cos(2πfc t)

−π 2

c l E×

E

t 0 (·)dτ

d d

t=T rs (·)2

VT Threshold Device

2) If s0 (t) is sent, then the received signal is r(t) = n(t) and therefore the sampled outputs rc , rs are zero-mean independent Gaussian random variables with variance N0 . Hence, the random 2 2 2 variable r = rc + rs is Rayleigh distributed and the PDF is given by p(r|s0 (t)) = r r − r22 2r − N2 e 2σ = e 0 σ2 N0

If s1 (t) is transmitted, then the received signal is r(t) = Crosscorrelating r(t) by
T 2 T

2Eb cos(2πfc t + φ) + n(t) Tb

cos(2πfc t) and sampling the output at t = T , results in

rc = = = =

2 cos(2πfc t)dt r(t) T 0 √ T 2 E T b cos(2πfc t + φ) cos(2πfc t)dt + n(t) Tb 0 0 √ 2 Eb T 1 (cos(2π2fc t + φ) + cos(φ)) dt + nc Tb 0 2 Eb cos(φ) + nc
N0 2 .

2 cos(2πfc t)dt T

where nc is zero-mean Gaussian random variable with variance component we have rs = Eb sin(φ) + ns The PDF of the random variable r =

Similarly, for the quadrature

2 2 rc + rs = Eb + n2 + n2 is (see Problem 4.31) c s √ √ 2 +E r Eb 2r Eb r − r2 +Eb 2r − r N b 2 I 0 I0 = e p(r|s1 (t)) = 2 e 2σ 0 σ σ2 N0 N0

that is a Rician PDF. 3) For equiprobable signals the probability of error is given by P (error) = 1 2
VT −∞

p(r|s1 (t))dr +

1 2



p(r|s0 (t))dr
VT

Since r > 0 the expression for the probability of error takes the form P (error) = = 1 2 1 2
VT 0 VT 0

p(r|s1 (t))dr + r − r2 +Eb e 2σ2 I0 σ2 204

1 ∞ p(r|s0 (t))dr 2 VT √ r Eb 1 ∞ r − r22 dr + e 2σ dr σ2 2 VT σ 2

The optimum threshold level is the value of VT√that minimizes the probability of error. However, E E when Nb 1 the optimum value is close to 2 b and we will use this threshold to simplify the 0 analysis. The integral involving the Bessel function cannot be evaluated in closed form. Instead of I0 (x) we will use the approximation ex I0 (x) ≈ √ 2πx which is valid for large x, that is for high SNR. In this case 1 2
VT 0

r e σ2

r 2 +Eb − 2σ 2

I0

r Eb σ2





1 dr ≈ 2

Eb 2

0

2πσ

r √ 2

Eb

e−(r−



Eb )2 /2σ 2

dr

This integral is further simplified if we observe that for high SNR, the integrand is dominant in the √ vicinity of Eb and therefore, the lower limit can be substituted by −∞. Also r √ 2 Eb ≈ 1 2πσ 2

2πσ and therefore,


1 2

Eb 2

0

2πσ

r √ 2

Eb

e

−(r− Eb



√ )2 /2σ 2

dr ≈ =

1 2

Eb 2

−∞

1 −(r−√Eb )2 /2σ2 e dr 2πσ 2 Eb 2N0

1 Q 2

Finally P (error) = ≤ 1 Q 2 1 Q 2 1 Eb + 2N0 2
∞ √ r 2r − N2 e 0 dr N0

Eb 2

1 − Eb Eb + e 4N0 2N0 2

Problem 7.54 (a) Four phase PSK If we use a pulse shape having a raised cosine spectrum with a rolloff α, the symbol rate is determined from the relation 1 (1 + α) = 50000 2T Hence, 1 105 = T 1+α where W = 105 Hz is the channel bandwidth. The bit rate is 2 2 × 105 = T 1+α bps

(b) Binary FSK with noncoherent detection In this case we select the two frequencies to have a frequency separation of symbol rate. Hence 1 f1 = fc − 2T 1 f2 = f + c + 2T 205

1 T,

where

1 T

is the

where fc is the carrier in the center of the channel band. Thus, we have 1 = 50000 2T or equivalently 1 = 105 T Hence, the bit rate is 105 bps. (c) M = 4 FSK with noncoherent detection In this case we require four frequencies with adjacent frequencies separation of f1 = fc − |

1 T.

Hence, we select

1.5 1 1 1.5 , f2 = fc − , f3 = fc + , f4 = fc + T 2T 2T T
1 2T

where fc is the carrier frequency and

= 25000, or, equivalently,

1 = 50000 T Since the symbol rate is 50000 symbols per second and each symbol conveys 2 bits, the bit rate is 105 bps. Problem 7.55 a) For n repeaters in cascade, the probability of i out of n repeaters to produce an error is given by the binomial distribution n Pi = pi (1 − p)n−i i However, there is a bit error at the output of the terminal receiver only when an odd number of repeaters produces an error. Hence, the overall probability of error is Pn = Podd = i=odd n i

pi (1 − p)n−i

Let Peven be the probability that an even number of repeaters produces an error. Then Peven = i=even n i

pi (1 − p)n−i

and therefore, n Peven + Podd = i=0 n i

pi (1 − p)n−i = (p + 1 − p)n = 1

One more relation between Peven and Podd can be provided if we consider the difference Peven −Podd . Clearly, Peven − Podd = i=even n i n i

pi (1 − p)n−i − i=odd n i

pi (1 − p)n−i n i (−p)i (1 − p)n−i

= i=even a

(−p)i (1 − p)n−i + i=odd = (1 − p − p)n = (1 − 2p)n where the equality (a) follows from the fact that (−1)i is 1 for i even and −1 when i is odd. Solving the system Peven + Podd = 1 Peven − Podd = (1 − 2p)n 206

we obtain

1 Pn = Podd = (1 − (1 − 2p)n ) 2

b) Expanding the quantity (1 − 2p)n , we obtain (1 − 2p)n = 1 − n2p + Since, p n(n − 1) (2p)2 + · · · 2

1 we can ignore all the powers of p which are greater than one. Hence, 1 Pn ≈ (1 − 1 + n2p) = np = 100 × 10−6 = 10−4 2

Problem 7.56 The overall probability of error is approximated by P (e) = KQ Eb N0

Eb Thus, with P (e) = 10−6 and K = 100, we obtain the probability of each repeater Pr = Q N0 = 10−8 . The argument of the function Q[·] that provides a value of 10−8 is found from tables to be

Eb = 5.61 N0 Hence, the required Problem 7.57 a) The antenna gain for a parabolic antenna of diameter D is GR = η πD λ
2 Eb N0

is 5.612 = 31.47

If we assume that the efficiency factor is 0.5, then with λ= we obtain GR = GT = 45.8458 = 16.61 dB b) The effective radiated power is EIRP = PT GT = GT = 16.61 dB c 3 × 108 = 0.3 m = f 109 D = 3 × 0.3048 m

c) The received power is PR = PT GT GR
4πd λ 2

= 2.995 × 10−9 = −85.23 dB = −55.23 dBm

207

Note that dBm = 10 log10 Problem 7.58 a) The antenna gain for a parabolic antenna of diameter D is GR = η πD λ
2

actual power in Watts 10−3

= 30 + 10 log10 (power in Watts )

If we assume that the efficiency factor is 0.5, then with λ= we obtain GR = GT = 54.83 = 17.39 dB b) The effective radiated power is EIRP = PT GT = 0.1 × 54.83 = 7.39 dB c) The received power is PR = PT GT GR
4πd λ 2

c 3 × 108 = 0.3 m = f 109

and

D=1m

= 1.904 × 10−10 = −97.20 dB = −67.20 dBm

Problem 7.59 The wavelength of the transmitted signal is λ= The gain of the parabolic antenna is GR = η πD λ
2

3 × 108 = 0.03 m 10 × 109
2

= 0.6

π10 0.03

= 6.58 × 105 = 58.18 dB

The received power at the output of the receiver antenna is PR = Problem 7.60 a) Since T = 3000 K, it follows that N0 = kT = 1.38 × 10−23 × 300 = 4.14 × 10−21 W/Hz If we assume that the receiving antenna has an efficiency η = 0.5, then its gain is given by πD GR = η λ
2

PT GT GR 3 × 101.5 × 6.58 × 105 = = 2.22 × 10−13 = −126.53 dB 7 d (4π λ )2 (4 × 3.14159 × 4×10 )2 0.03



= 0.5

3.14159 × 50  
3×108 2×109

2

= 5.483 × 105 = 57.39 dB

208

Hence, the received power level is PR = b) If
Eb N0

PT GT GR 10 × 10 × 5.483 × 105 = = 7.8125 × 10−13 = −121.07 dB d 108 (4π λ )2 (4 × 3.14159 × 0.15 )2

= 10 dB = 10, then R= PR N0 Eb N0
−1

=

7.8125 × 10−13 × 10−1 = 1.8871 × 107 = 18.871 Mbits/sec 4.14 × 10−21

Problem 7.61 The overall gain of the system is Gtot = Ga1 + Gos + GBPF + Ga2 = 10 − 5 − 1 + 25 = 29 dB Hence, the power of the signal at the input of the demodulator is Ps,dem = (−113 − 30) + 29 = −114 dB The noise-figure for the cascade of the first amplifier and the multiplier is F1 = Fa1 + Fos − 1 100.5 − 1 = 3.3785 = 100.5 + Ga1 10

We assume that F1 is the spot noise-figure and therefore, it measures the ratio of the available PSD out of the two devices to the available PSD out of an ideal device with the same available gain. That is, Sn,o (f ) F1 = Sn,i (f )Ga1 Gos where Sn,o (f ) is the power spectral density of the noise at the input of the bandpass filter and Sn,i (f ) is the power spectral density at the input of the overall system. Hence, Sn,o (f ) = 10
−175−30 10

× 10 × 10−0.5 × 3.3785 = 3.3785 × 10−20

The noise-figure of the cascade of the bandpass filter and the second amplifier is F2 = FBPF + Fa2 − 1 100.5 − 1 = 100.2 + = 4.307 GBPF 10−0.1

Hence, the power of the noise at the output of the system is Pn,dem = 2Sn,o (f )BGBPF Ga2 F2 = 7.31 × 10−12 = −111.36 dB The signal to noise ratio at the output of the system (input to the demodulator) is SNR = Ps,dem Pn,dem = −114 + 111.36 = −2.64 dB

Problem 7.62 The wavelength of the transmission is λ= 3 × 108 c = = 0.75 m f 4 × 109 209

If 1 MHz is the passband bandwidth, then the rate of binary transmission is Rb = W = 106 bps. Hence, with N0 = 4.1 × 10−21 W/Hz we obtain PR Eb = Rb =⇒ 106 × 4.1 × 10−21 × 101.5 = 1.2965 × 10−13 N0 N0 The transmitted power is related to the received power through the relation PR = PT GT GR PR d =⇒ PT = 4π d 2 GT GR λ (4π λ )
2

Substituting in this expression the values GT = 100.6 , GR = 105 , d = 36 × 106 and λ = 0.75 we obtain PT = 0.1185 = −9.26 dBW Problem 7.63 Since T = 2900 + 150 = 3050 K, it follows that N0 = kT = 1.38 × 10−23 × 305 = 4.21 × 10−21 W/Hz The transmitting wavelength λ is λ= 3 × 108 c = = 0.130 m f 2.3 × 109

Hence, the gain of the receiving antenna is GR = η πD λ
2

= 0.55

3.14159 × 64 0.130

2

= 1.3156 × 106 = 61.19 dB

and therefore, the received power level is PR = PT GT GR 17 × 102.7 × 1.3156 × 106 = = 4.686 × 10−12 = −113.29 dB d 2 1.6×1011 2 (4π λ ) (4 × 3.14159 × 0.130 )

If Eb /N0 = 6 dB = 100.6 , then R= PR N0 Eb N0
−1

=

4.686 × 10−12 × 10−0.6 = 4.4312 × 109 = 4.4312 Gbits/sec 4.21 × 10−21

Problem 7.64 In the non decision-directed timing recovery method we maximize the function Λ2 (τ ) = m 2 ym (τ )

with respect to τ . Thus, we obtain the condition dΛ2 (τ ) =2 dτ ym (τ ) m dym (τ ) =0 dτ

Suppose now that we approximate the derivative of the log-likelihood Λ2 (τ ) by the finite difference dΛ2 (τ ) Λ2 (τ + δ) − Λ2 (τ − δ) ≈ dτ 2δ 210

Then, if we substitute the expression of Λ2 (τ ) in the previous approximation, we obtain dΛ2 (τ ) dτ = = 1 2δ
2 m ym (τ

+ δ) − 2δ

2 m ym (τ

− δ)
2

r(t)u(t − mT − τ − δ)dt m −

r(t)u(t − mT − τ + δ)dt

2

where u(−t) = gR (t) is the impulse response of the matched filter in the receiver. However, this is the expression of the early-late gate synchronizer, where the lowpass filter has been substituted by the summation operator. Thus, the early-late gate synchronizer is a close approximation to the timing recovery system. Problem 7.65 An on-off keying signal is represented as s1 (t) = A cos(2πfc t + θc ), 0 ≤ t ≤ T (binary 1) s2 (t) = 0, 0 ≤ t ≤ T (binary 0) Let r(t) be the received signal, that is r(t) = s(t; θc ) + n(t) where s(t; θc ) is either s1 (t) or s2 (t) and n(t) is white Gaussian noise with variance N0 . The 2 likelihood function, that is to be maximized with respect to θc over the interval [0, T ], is proportional to T 2 Λ(θc ) = exp − [r(t) − s(t; θc )]2 dt N0 0 Maximization of Λ(θc ) is equivalent to the maximization of the log-likelihood function ΛL (θc ) = − 2 N0 2 = − N0
T 0 T 0

[r(t) − s(t; θc )]2 dt r2 (t)dt + 4 N0
T 0

r(t)s(t; θc )dt −

2 N0

T 0

s2 (t; θc )dt

Since the first term does not involve the parameter of interest θc and the last term is simply a constant equal to the signal energy of the signal over [0, T ] which is independent of the carrier phase, we can carry the maximization over the function
T

V (θc ) =
0

r(t)s(t; θc )dt

Note that s(t; θc ) can take two different values, s1 (t) and s2 (t), depending on the transmission of a binary 1 or 0. Thus, a more appropriate function to maximize is the average log-likelihood 1 ¯ V (θc ) = 2
T

r(t)s1 (t)dt +
0

1 2

T

r(t)s2 (t)dt
0

¯ Since s2 (t) = 0, the function V (θc ) takes the form 1 ¯ V (θc ) = 2
T

r(t)A cos(2πfc t + θc )dt
0

211

¯ Setting the derivative of V (θc ) with respect to θc equal to zero, we obtain ¯ ϑV (θc ) =0 = ϑθc 1 2
T

r(t)A sin(2πfc t + θc )dt
0

= cos θc

1 2

T

r(t)A sin(2πfc t)dt + sin θc
0

1 2

T

r(t)A cos(2πfc t)dt
0

Thus, the maximum likelihood estimate of the carrier phase is θc,M L = − arctan
T 0 T 0

r(t)A sin(2πfc t)dt r(t)A cos(2πfc t)dt

212

Chapter 8
Problem 8.1 1) The following table shows the values of Eh (W )/T obtained using an adaptive recursive NewtonCotes numerical integration rule. WT Eh (W )/T 0.5 0.2253 1.0 0.3442 1.5 0.3730 2.0 0.3748 2.5 0.3479 3.0 0.3750

A plot of Eh (W )/T as a function of W T is given in the next figure
0.4 0.35 0.3
Energy / T

0.25 0.2 0.15 0.1 0.05 0 0.5 1 1.5 WT 2 2.5 3 3.5

2) The value of Eh (W ) as W → ∞ is
W →∞

lim Eh (W ) = =



−∞ 1 T

2 gT (t)dt =

T 0

2 gT (t)dt

4 0 T 1 = + 4 2 1 T + 8 0 T T + = 4 8

1 + cos

2 2π T t− dt T 2 T 2π T cos t− dt T 2 0 2π T 1 + cos 2 t − dt T 2 3T = = 0.3750T 8

Problem 8.2 We have y=

  a+n− 1  2   a+n

a+n+

1 2

with Prob. with Prob. with Prob.

1 4 1 4 1 2

By symmetry, Pe = P (e|a = 1) = P (e|a = −1), hence, Pe = P (e|a = −1) = = 1 P (n − 1 > 0) + 2 1 1 1 Q + Q 2 σn 4 213 1 3 1 1 P n− >0 + P n− >0 4 2 4 2 3 1 1 + Q 2σn 4 2σn

Problem 8.3 a) If the transmitted signal is


r(t) = n=−∞ an h(t − nT ) + n(t)

then the output of the receiving filter is


y(t) = n=−∞ an x(t − nT ) + ν(t)

where x(t) = h(t) h(t) and ν(t) = n(t) h(t). If the sampling time is off by 10%, then the samples 1 1 at the output of the correlator are taken at t = (m ± 10 )T . Assuming that t = (m − 10 )T without loss of generality, then the sampled sequence is


ym = n=−∞ an x((m −

1 1 T − nT ) + ν((m − )T ) 10 10

1 If the signal pulse is rectangular with amplitude A and duration T , then ∞ n=−∞ an x((m− 10 T −nT ) is nonzero only for n = m and n = m − 1 and therefore, the sampled sequence is given by

ym = am x(− =

1 1 1 T ) + am−1 x(T − T ) + ν((m − )T ) 10 10 10 9 1 1 am A2 T + am−1 A2 T + ν((m − )T ) 10 10 10

The power spectral density of the noise at the output of the correlator is Sν (f ) = Sn (f )|H(f )|2 = Thus, the variance of the noise is σn u2 = and therefore, the SNR is SNR = 9 10
2 ∞ −∞

N0 2 2 A T sinc2 (f T ) 2

N0 2 N0 2 2 1 N0 2 2 A T sinc2 (f T )df = A T = A T 2 2 T 2 81 2A2 T 2(A2 T )2 = N 0 A2 T 100 N0 = −0.9151 dB due to the mistiming.

As it is observed, there is a loss of 10 log10

81 100

b) Recall from part a) that the sampled sequence is ym =
2

1 9 am A2 T + am−1 A2 T + νm 10 10

The term am−1 A T expresses the ISI introduced to the system. If am = 1 is transmitted, then the 10 probability of error is P (e|am = 1) = = = 1 1 P (e|am = 1, am−1 = 1) + P (e|am = 1, am−1 = −1) 2 2 8 −A2 T − ν 2 − 10 A2 T − ν 2 1 1 √ e N0 A2 T dν + √ e N0 A2 T dν 2 πN0 A2 T −∞ 2 πN0 A2 T −∞
  

1  Q 2

2A2 T  1  + Q N0 2 214

8 10

2



2A2 T  N0

Since the symbols of the binary PAM system are equiprobable the previous derived expression is the probability of error when a symbol by symbol detector is employed. Comparing this with the probability of error of a system with no ISI, we observe that there is an increase of the probability of error by     1 Pdiff (e) = Q  2 Problem 8.4 1) The power spectral density of X(t) is given by Sx (f ) = The Fourier transform of g(t) is GT (f ) = F[g(t)] = AT Hence, |GT (f )|2 = (AT )2 sinc2 (f T ) and therefore, Sx (f ) = A2 T Sa (f )sinc2 (f T ) = A2 T sinc2 (f T ) 2) If g1 (t) is used instead of g(t) and the symbol interval is T , then Sx (f ) = = 1 Sa (f )|G2T (f )|2 T 1 (A2T )2 sinc2 (f 2T ) = 4A2 T sinc2 (f 2T ) T sin πf T −jπf T e πf T 1 Sa (f )|GT (f )|2 T 8 10
2

2A2 T  1  − Q N0 2

2A2 T  N0

3) If we precode the input sequence as bn = an + αan−3 , then
  1 + α2 

Rb (m) =

 

α 0

m=0 m = ±3 otherwise

and therefore, the power spectral density Sb (f ) is Sb (f ) = 1 + α2 + 2α cos(2πf 3T ) To obtain a null at f =
1 3T ,

the parameter α should be such that
1 f= 3

1 + α2 + 2α cos(2πf 3T )|

= 0 =⇒ α = −1

4) The answer to this question is no. This is because Sb (f ) is an analytic function and unless it is identical to zero it can have at most a countable number of zeros. This property of the analytic functions is also referred as the theorem of isolated zeros.

215

Problem 8.5 1) The power spectral density of s(t) is Ss (f ) =
2 σa 1 |GT (f )|2 = |GT (f )|2 T T

The Fourier transform GT (f ) of the signal g(t) is GT (f ) = F Π = = = Hence, T T |GT (f )|2 = T 2 sinc2 ( f ) sin2 (2πf ) 2 4 and therefore, T T Ss (f ) = T sinc2 ( f ) sin2 (2πf ) 2 4 2) If the precoding scheme bn = an + kan−1 is used, then
  1 + k2 

t−
T 2

T 4

−Π

t−
T 2

3T 4

T 3T T T T T sinc( f )e−j2πf 4 − sinc( f )e−j2πf 4 2 2 2 2 T T T T T sinc( f )e−j2πf 2 ej2πf 4 − e−j2πf 4 2 2 T T T T sinc( f ) sin(2πf )2je−j2πf 2 2 2 4

Rb (m) = Thus,

 

k 0

m=0 m = ±1 otherwise

Sb (f ) = 1 + k 2 + 2k cos(2πf T ) and therefore the spectrum of s(t) is T T Ss (f ) = (1 + k 2 + 2k cos(2πf T ))T sinc2 ( f ) sin2 (2πf ) 2 4 In order to produce a frequency null at f =
1 T

we have to choose k in such a way that

1 + k 2 + 2k cos(2πf T )|f =1/T = 1 + k 2 + 2k = 0 The appropriate value of k is −1. 3) If the precoding scheme of the previous part is used, then in order to have nulls at frequencies n f = 4T , the value of the parameter k should be such that 1 + k 2 + 2k cos(2πf T )|f =1/4T = 1 + k 2 = 0 As it is observed it is not possible to achieve the desired nulls with real values of k. Instead of the pre-coding scheme of the previous part we suggest pre-coding of the form bn = an + kan−2

216

In this case Rb (m) = Thus,

  1 + k2   

k 0

m=0 m = ±2 otherwise

Sb (f ) = 1 + k 2 + 2k cos(2π2f T ) n and therefore Sb ( 2T ) = 0 for k = 1.

Problem 8.6 a) The power spectral density of the FSK signal may be evaluated by using equation (8.5.32) with k = 2 (binary) signals and probabilities p0 = p1 = 1 . Thus, when the condition that the carrier 2 phase θ0 and and θ1 are fixed, we obtain S(f ) = 1 4Tb2
∞ n=−∞

|S0 (

n n n 1 ) + S1 ( )|2 δ(f − ) + |S0 (f ) − S1 (f )|2 Tb Tb Tb 4Tb

where S0 (f ) and S1 (f ) are the Fourier transforms of s0 (t) and s1 (t). In particular, S0 (f ) =
0 Tb

s0 (t)e−j2πf t dt
Tb 0

= = Similarly, S1 (f ) =
0

2Eb Tb 1 2

cos(2πf0 t + θ0 )ej2πf t dt,

f0 = fc −

∆f 2

2Eb sin πTb (f − f0 ) sin πTb (f + f0 ) −jπf Tb jθ0 + e e Tb π(f − f0 ) π(f + f0 )

Tb

s1 (t)e−j2πf t dt 2Eb sin πTb (f − f1 ) sin πTb (f + f1 ) −jπf Tb jθ1 e e + Tb π(f − f1 ) π(f + f1 )

= where f1 = fc +
∆f 2 .

1 2

By expressing S(f ) as 1 4Tb2
∞ n=−∞

S(f ) =

|S0 (

n 2 n n ∗ n n )| + |S1 ( )|2 + 2Re[S0 ( )S1 ( )]δ(f − ) Tb Tb Tb Tb Tb

1 ∗ + |S0 (f )|2 + |S1 (f )|2 − 2Re[S0 (f )S1 (f )] 4Tb
∗ we note that the carrier phases θ0 and θ1 affect only the terms Re(S0 S1 ). If we average over the random phases, these terms drop out. Hence, we have

S(f ) =

1 4Tb2

∞ n=−∞

|S0 (

n 2 n n )| + |S1 ( )|2 δ(f − ) Tb Tb Tb

1 |S0 (f )|2 + |S1 (f )|2 + 4Tb where |Sk (f )|2 = Tb Eb sin πTb (f − fk ) sin πTb (f + fk ) + , 2 π(f − fk ) π(f + fk ) k = 0, 1

Note that the first term in S(f ) consists of a sequence of samples and the second term constitutes the continuous spectrum. 217

b) It is apparent from S(f ) that the terms |Sk (f )|2 decay proportionally as |Sk (f )|2 = because the product Tb Eb 2 sin πTb (f − fk ) π(f − fk )
2

1 . (f −fk )2 2

also note that

+

sin πTb (f + fk ) π(f + fk )

sin πTb (f − fk ) sin πTb (f + fk ) × ≈0 π(f − fk ) π(f + fk )
1 Tb .

due to the relation that the carrier frequency fc

Problem 8.7 1) The autocorrelation function of the information symbols {an } is Ra (k) = E[a∗ a + n + k] = n Thus, the power spectral density of v(t) is SV (f ) = where G(f ) = F[g(t)]. If g(t) = AΠ( t− T 2 T

1 × |an |2 δ(k) = δ(k) 4

1 1 Sa (f )|G(f )|2 = |G(f )|2 T T ), we obtain |G(f )|2 = A2 T 2 sinc2 (f T ) and therefore,

SV (f ) = A2 T sinc2 (f T ) In the next figure we plot SV (f ) for T = A = 1.
1 0.9 0.8 0.7 0.6

Sv(f)

0.5 0.4 0.3 0.2 0.1 0 -5 -4 -3 -2 -1 0 1 frequency f 2 3 4 5

2) If g(t) = A sin( πt )Π( t−T /2 ), then 2 T G(f ) = A = = Thus, |G(f )|2 = A2 T 2 1 1 sinc2 ((f + )T ) + sinc2 ((f − )T ) 4 4 4 1 1 πT −2sinc((f + )T )sinc((f − )T ) cos 4 4 2 218
T 1 1 1 1 δ(f − ) − δ(f + ) T sinc(f T )e−j2πf 2 2j 4 2j 4 T π 1 1 AT [δ(f − ) − δ(f + )] sinc(f T )e−j(2πf 2 + 2 ) 2 4 4 πT 1 AT −jπ[(f − 1 )T + 1 ] 1 4 2 e sinc((f − )T ) − sinc((f − )T )e−j 2 2 4 4

and the power spectral of the transmitted signal is SV (f ) = A2 T 1 1 sinc2 ((f + )T ) + sinc2 ((f − )T ) 4 4 4 1 1 πT −2sinc((f + )T )sinc((f − )T ) cos 4 4 2

In the next figure we plot SV (f ) for two special values of the time interval T . The amplitude of the signal A was set to 1 for both cases.
0.45 0.4 0.35 0.3
Sv(f)

0.8 0.7 T=1
Sv(f)

0.6 0.5 T=2 0.4 0.3 0.2 0.1

0.25 0.2 0.15 0.1 0.05 0 -5 -4 -3 -2 -1 0 1 frequency f 2 3 4 5

0 -5

-4

-3

-2

-1 0 1 frequency f

2

3

4

5

3) The first spectral null of the power spectrum density in part 1) is at position Wnull = 1 T

The 3-dB bandwidth is specified by solving the equation: 1 SV (W3dB ) = SV (0) 2 Since sinc2 (0) = 1, we obtain sinc2 (W3dB T ) = 1 1 =⇒ sin(πW3dB T ) = √ πW3dB T 2 2

Solving the latter equation numerically we find that W3dB = 1.3916 0.443 = πT T

To find the first spectral null and the 3-dB bandwidth for the signal with power spectral density in part 2) we assume that T = 1. In this case SV (f ) = A2 1 1 sinc2 ((f + )) + sinc2 ((f − )) 4 4 4

and as it is observed there is no value of f that makes SV (f ) equal to zero. Thus, Wnull = ∞. To find the 3-dB bandwidth note that SV (0) = Solving numerically the equation SV (W3dB ) = 1 A2 1.6212 2 4 1 A2 A2 2sinc( ) = 1.6212 4 4 4

we find that W3dB = 0.5412. As it is observed the 3-dB bandwidth is more robust as a measure for the bandwidth of the signal. 219

Problem 8.8 The transition probability matrix P is


P=

1   2

0 0 1 1

1 0 1 0

0 1 0 1

1 1 0 0

    

Hence, P2 =



1   4

1 2 0 1

0 1 1 2

2 1 1 0


1 0 2 1

    



and

P4 =

1    16 

2 4 4 6

4 2 6 4

4 6 2 4

6 4 4 2


    

and therefore, 1    16 


P4 γ =

2 4 4 6

4 2 6 4

4 6 2 4

6 4 4 2

    

1 0 0 −1 0 1 −1 0    0 −1 1 0  −1 0 0 1
  1  =− γ  4

=

1    16 

−4 0 0 4 0 −4 4 0 0 4 −4 0 4 0 0 −4

Thus, P4 γ = − 1 γ and by pre-multiplying both sides by Pk , we obtain 4 1 Pk+4 γ = − Pk γ 4 Problem 8.9 a) Taking the inverse Fourier transform of H(f ), we obtain h(t) = F −1 [H(f )] = δ(t) + Hence, y(t) = s(t) h(t) = s(t) + α α δ(t − t0 ) + δ(t + t0 ) 2 2 α α s(t − t0 ) + s(t + t0 ) 2 2

b) If the signal s(t) is used to modulate the sequence {an }, then the transmitted signal is


u(t) = n=−∞ an s(t − nT )

The received signal is the convolution of u(t) with h(t). Hence,


y(t) = u(t) h(t) = n=−∞ ∞

an s(t − nT )

δ(t) +

α α δ(t − t0 ) + δ(t + t0 ) 2 2

= n=−∞ an s(t − nT ) +

α ∞ α ∞ an s(t − t0 − nT ) + an s(t + t0 − nT ) 2 n=−∞ 2 n=−∞ 220

Thus, the output of the matched filter s(−t) at the time instant t1 is
∞ ∞

w(t1 ) = n=−∞ an + +

−∞

s(τ − nT )s(τ − t1 )dτ
∞ −∞ ∞ −∞

α ∞ an 2 n=−∞ α ∞ an 2 n=−∞

s(τ − t0 − nT )s(τ − t1 )dτ s(τ + t0 − nT )s(τ − t1 )dτ

If we denote the signal s(t) s(t) by x(t), then the output of the matched filter at t1 = kT is


w(kT ) = n=−∞ an x(kT − nT )

+

α ∞ α ∞ an x(kT − t0 − nT ) + an x(kT + t0 − nT ) 2 n=−∞ 2 n=−∞

c) With t0 = T and k = n in the previous equation, we obtain wk = ak x0 + n=k an xk−n

α α α α + ak x−1 + an xk−n−1 + ak x1 + an xk−n+1 2 2 n=k 2 2 n=k = ak x0 + α α α α an xk−n + xk−n−1 + xk−n+1 x−1 + x1 + 2 2 2 2 n=k

The terms under the summation is the ISI introduced by the channel. Problem 8.10 a) Each segment of the wire-line can be considered as a bandpass filter with bandwidth W = 1200 Hz. Thus, the highest bit rate that can be transmitted without ISI by means of binary PAM is R = 2W = 2400 bps b) The probability of error for binary PAM transmission is P2 = Q 2Eb N0

Hence, using mathematical tables for the function Q[·], we find that P2 = 10−7 is obtained for 2Eb Eb = 5.2 =⇒ = 13.52 = 11.30 dB N0 N0 c) The received power PR is related to the desired SNR per bit through the relation Eb PR =R N0 N0 Hence, with N0 = 4.1 × 10−21 we obtain PR = 4.1 × 10−21 × 1200 × 13.52 = 6.6518 × 10−17 = −161.77 dBW 221

Since the power loss of each segment is Ls = 50 Km × 1 dB/Km = 50 dB the transmitted power at each repeater should be PT = PR + Ls = −161.77 + 50 = −111.77 dBW Problem 8.11 The pulse x(t) having the raised cosine spectrum is x(t) = sinc(t/T ) cos(παt/T ) 1 − 4α2 t2 /T 2

The function sinc(t/T ) is 1 when t = 0 and 0 when t = nT . On the other hand g(t) = cos(παt/T ) = 1 − 4α2 t2 /T 2 1 t=0 bounded t = 0
T 2.

The function g(t) needs to be checked only for those values of t such that 4α2 t2 /T 2 = 1 or αt = However, cos( π x) cos(παt/T ) 2 = lim lim 2 2 2 x→1 1 − x αt→ T 1 − 4α t /T 2 and by using L’Hospital’s rule cos( π x) π π π 2 = lim sin( x) = < ∞ lim x→1 1 − x x→1 2 2 2 Hence, x(nT ) = 1 n=0 0 n=0

meaning that the pulse x(t) satisfies the Nyquist criterion. Problem 8.12 Substituting the expression of Xrc (f ) in the desired integral, we obtain
∞ −∞

Xrc (f )df

=

− 1−α 2T − 1+α 2T
1+α 2T 1−α 2T − 1−α 2T

1−α T πT (−f − ) df + 1 + cos 2 α 2T 1−α T πT (f − ) df 1 + cos 2 α 2T T df + T 2 1−α T +
1+α 2T 1−α 2T

1−α 2T

− 1−α 2T

T df

+ =

− 1+α 2T

T df 2
1+α 2T 1−α 2T

+

− 1−α 2T − 1+α 2T 0 α −T α T α −T

1−α πT (f + )df + cos α 2T α T

cos

1−α πT (f − )df α 2T

= 1+ = 1+

πT xdx + cos α cos

cos

0

πT xdx α

πT xdx = 1 + 0 = 1 α 222

Problem 8.13 Let X(f ) be such that Re[X(f )] =
1 T Π(f T ) + U (f ) |f | < T 0 otherwise

Im[X(f )] =

1 V (f ) |f | < T 0 otherwise

1 with U (f ) even with respect to 0 and odd with respect to f = 2T Since x(t) is real, V (f ) is odd 1 with respect to 0 and by assumption it is even with respect to f = 2T . Then,

x(t) = F −1 [X(f )] = =
1 2T 1 −T 1 2T 1 − 2T

X(f )e

j2πf t

df +
1 T 1 −T

1 2T 1 − 2T

X(f )e

j2πf t

df +

1 T 1 2T

X(f )ej2πf t df

T ej2πf t df +
1 T 1 −T

[U (f ) + jV (f )]ej2πf t df

= sinc(t/T ) +
1 T

[U (f ) + jV (f )]ej2πf t df

Consider first the integral
1 T 1 −T

1 −T

U (f )ej2πf t df . Clearly,
0
1 −T

U (f )ej2πf t df =

U (f )ej2πf t df +
0

1 T

U (f )ej2πf t df

1 1 and by using the change of variables f = f + 2T and f = f − 2T for the two integrals on the right hand side respectively, we obtain
1 T 1 −T

U (f )ej2πf t df π 1 2T 1 − 2T

= e−j T t = a U (f −

π 1 j2πf t df + ej T t )e 2T

1 2T 1 − 2T

U (f +

1 j2πf t df )e 2T

e

π jT t

−e

π −j T t 1 2T 1 − 2T

1 2T 1 − 2T

U (f +

1 j2πf t df )e 2T

π = 2j sin( t) T

U (f +

1 j2πf t )e df 2T
1 2T ,

where for step (a) we used the odd symmetry of U (f ) with respect to f = U (f − For the integral
1 T 1 −T 1 −T 1 T

that is

1 1 ) = −U (f + ) 2T 2T

V (f )ej2πf t df we have

V (f )ej2πf t df
0

= = e

1 −T

V (f )ej2πf t df +
0
1 2T 1 − 2T

1 T

V (f )ej2πf t df
1 2T 1 − 2T

π −j T t

π 1 j2πf t )e V (f − df + ej T t 2T

V (f +

1 j2πf t )e df 2T

223

1 1 However, V (f ) is odd with respect to 0 and since V (f + 2T ) and V (f − 2T ) are even, the translated spectra satisfy 1 1 2T 2T 1 j2πf t 1 j2πf t V (f − df = − V (f + df )e )e 1 1 2T 2T − 2T − 2T

Hence, π x(t) = sinc(t/T ) + 2j sin( t) T π −2 sin( t) T and therefore, x(nT ) = 1 n=0 0 n=0
1 2T 1 − 2T 1 2T 1 − 2T

U (f +

1 j2πf t df )e 2T

U (f +

1 j2πf t )e df 2T

Thus, the signal x(t) satisfies the Nyquist criterion. Problem 8.14 The bandwidth of the channel is W = 3000 − 300 = 2700 Hz Since the minimum transmission bandwidth required for bandpass signaling is R, where R is the rate of transmission, we conclude that the maximum value of the symbol rate for the given channel is Rmax = 2700. If an M -ary PAM modulation is used for transmission, then in order to achieve a bit-rate of 9600 bps, with maximum rate of Rmax , the minimum size of the constellation is M = 2k = 16. In this case, the symbol rate is R= 9600 = 2400 symbols/sec k

1 1 and the symbol interval T = R = 2400 sec. The roll-off factor α of the raised cosine pulse used for transmission is is determined by noting that 1200(1 + α) = 1350, and hence, α = 0.125. Therefore, the squared root raised cosine pulse can have a roll-off of α = 0.125.

Problem 8.15 Since the bandwidth of the ideal lowpass channel is W = 2400 Hz, the rate of transmission is R = 2 × 2400 = 4800 symbols/sec The number of bits per symbol is k= 14400 =3 4800

Hence, the number of transmitted symbols is 23 = 8. If a duobinary pulse is used for transmission, then the number of possible transmitted symbols is 2M − 1 = 15. These symbols have the form bn = 0, ±2d, ±4d, . . . , ±12d where 2d is the minimum distance between the points of the 8-PAM constellation. The probability mass function of the received symbols is P (b = 2md) = 8 − |m| , 64 224 m = 0, ±1, . . . , ±7

An upper bound of the probability of error is given by (see (8.4.33))


PM

1 < 2 1 − 2 Q M

π 4

2



kEb,av  6 2−1 N M 0

With PM = 10−6 and M = 8 we obtain kEb,av = 1.3193 × 103 =⇒ Eb,av = 0.088 N0 Problem 8.16 a) The spectrum of the baseband signal is SV (f ) = where T =
1 2400

1 1 Sa (f )|Xrc (f )|2 = |Xrc (f )|2 T T

and
  T 

Xrc (f ) =

  0

T 2 (1

+ cos(2πT (|f | −

1 4T ))

1 0 ≤ |f | ≤ 4T 1 3 4T ≤ |f | ≤ 4T otherwise

If the carrier signal has the form c(t) = A cos(2πfc t), then the spectrum of the DSB-SC modulated signal, SU (f ), is A SU (f ) = [SV (f − fc ) + SV (f + fc )] 2 A sketch of SU (f ) is shown in the next figure.
AT2 2

-fc-3/4T

-fc

-fc+3/4T

fc-3/4T

fc

fc+3/4T

b) Assuming bandpass coherent demodulation using a matched filter, the received signal r(t) is first passed through a linear filter with impulse response gR (t) = Axrc (T − t) cos(2πfc (T − t)) The output of the matched filter is sampled at t = T and the samples are passed to the detector. The detector is a simple threshold device that decides if a binary 1 or 0 was transmitted depending on the sign of the input samples. The following figure shows a block diagram of the optimum bandpass coherent demodulator. Bandpass r(t) E matched filter gR (t) t=  T Detector d E E d (Threshold .. ¢. ‚ device)

225

Problem 8.17 a) If the power spectral density of the additive noise is Sn (f ), then the PSD of the noise at the output of the prewhitening filter is Sν (f ) = Sn (f )|Hp (f )|2 In order for Sν (f ) to be flat (white noise), Hp (f ) should be such that Hp (f ) = 1 Sn (f )

2) Let hp (t) be the impulse response of the prewhitening filter Hp (f ). That is, hp (t) = F −1 [Hp (f )]. Then, the input to the matched filter is the signal s(t) = s(t) hp (t). The frequency response of ˜ the filter matched to s(t) is ˜
∗ ˜ ˜ Sm (f ) = S ∗ (f )e−j2πf t0 == S ∗ (f )Hp (f )e−j2πf t0

where t0 is some nominal time-delay at which we sample the filter output. 3) The frequency response of the overall system, prewhitening filter followed by the matched filter, is S ∗ (f ) −j2πf t0 ˜ G(f ) = Sm (f )Hp (f ) = S ∗ (f )|Hp (f )|2 e−j2πf t0 = e Sn (f ) 4) The variance of the noise at the output of the generalized matched filter is σ2 =
∞ −∞

Sn (f )|G(f )|2 df =

∞ −∞

|S(f )|2 df Sn (f )

At the sampling instant t = t0 = T , the signal component at the output of the matched filter is


y(T ) = = Hence, the output SNR is

−∞ ∞ −∞

Y (f )ej2πf T df = S(f ) S ∗ (f ) df = Sn (f )
∞ −∞



−∞ ∞ |S(f )|2 −∞

s(τ )g(T − τ )dτ Sn (f ) df

SNR =

y 2 (T ) = σ2

|S(f )|2 df Sn (f )

Problem 8.18 The bandwidth of the bandpass channel is W = 3300 − 300 = 3000 Hz In order to transmit 9600 bps with a symbol rate R = 2400 symbols per second, the number of information bits per symbol should be k= 9600 =4 2400

Hence, a 24 = 16 QAM signal constellation is needed. The carrier frequency fc is set to 1800 Hz, which is the mid-frequency of the frequency band that the bandpass channel occupies. If a pulse 226

with raised cosine spectrum and roll-off factor α is used for spectral shaping, then for the bandpass signal with bandwidth W R = 1200(1 + α) = 1500 and α = 0.25 A sketch of the spectrum of the transmitted signal pulse is shown in the next figure.

1/2T

-3300

-1800

-300

300 600 1800

3300 3000

f

Problem 8.19 The channel bandwidth is W = 4000 Hz. (a) Binary PSK with a pulse shape that has α = 1 . Hence 2 1 (1 + α) = 2000 2T
1 and T = 2667, the bit rate is 2667 bps. 1 (b) Four-phase PSK with a pulse shape that has α = 1 . From (a) the symbol rate is T = 2667 and 2 the bit rate is 5334 bps. 1 (c) M = 8 QAM with a pulse shape that has α = 1 . From (a), the symbol rate is T = 2667 and 2 3 hence the bit rate T = 8001 bps. (d) Binary FSK with noncoherent detection. Assuming that the frequency separation between the 1 1 1 1 two frequencies is ∆f = T , where T is the bit rate, the two frequencies are fc + 2T and fc − 2T . 1 1 Since W = 4000 Hz, we may select 2T = 1000, or, equivalently, T = 2000. Hence, the bit rate is 2000 bps, and the two FSK signals are orthogonal. (e) Four FSK with noncoherent detection. In this case we need four frequencies with separation 1 1 1 of T between adjacent frequencies. We select f1 = fc − 1.5 , f2 = fc − 2T , f3 = fc + 2T , and T 1.5 1 1 f4 = fc + T , where 2T = 500 Hz. Hence, the symbol rate is T = 1000 symbols per second and since each symbol carries two bits of information, the bit rate is 2000 bps. (f) M = 8 FSK with noncoherent detection. In this case we require eight frequencies with frequency 1 separation of T = 500 Hz for orthogonality. Since each symbol carries 3 bits of information, the bit rate is 1500 bps.

Problem 8.20 1) The bandwidth of the bandpass channel is W = 3000 − 600 = 2400 Hz Since each symbol of the QPSK constellation conveys 2 bits of information, the symbol rate of transmission is 2400 = 1200 symbols/sec R= 2 Thus, for spectral shaping we can use a signal pulse with a raised cosine spectrum and roll-off factor α = 1, that is 1 T π|f | Xrc (f ) = [1 + cos(πT |f |)] = cos2 2 2400 2400 227

If the desired spectral characteristic is split evenly between the transmitting filter GT (f ) and the receiving filter GR (f ), then GT (f ) = GR (f ) = 1 π|f | cos , 1200 2400 |f | < 1 = 1200 T

A block diagram of the transmitter is shown in the next figure. an E QPSK GT (f ) l E to Channel E× T

cos(2πfc t) 2) If the bit rate is 4800 bps, then the symbol rate is R= 4800 = 2400 symbols/sec 2

In order to satisfy the Nyquist criterion, the the signal pulse used for spectral shaping, should have the spectrum f X(f ) = T Π W √ f Thus, the frequency response of the transmitting filter is GT (f ) = T Π W . Problem 8.21 The bandwidth of the bandpass channel is W = 4 KHz. Hence, the rate of transmission should be less or equal to 4000 symbols/sec. If a 8-QAM constellation is employed, then the required symbol rate is R = 9600/3 = 3200. If a signal pulse with raised cosine spectrum is used for shaping, the maximum allowable roll-off factor is determined by 1600(1 + α) = 2000 which yields α = 0.25. Since α is less than 50%, we consider a larger constellation. With a 16-QAM constellation we obtain 9600 R= = 2400 4 and 1200(1 + α) = 2000 0r α = 2/3, which satisfies the required conditions. The probability of error for an M -QAM constellation is given by PM = 1 − (1 − P√M )2 where 1 P√M = 2 1 − √ M


Q

3Eav (M − 1)N0


With PM = 10−6 we obtain P√M = 5 × 10−7 and therefore 1 2 × (1 − )Q  4 3Eav  = 5 × 10−7 15 × 2 × 10−10

Using the last equation and the tabulation of the Q[·] function, we find that the average transmitted energy is Eav = 24.70 × 10−9 228

Note that if the desired spectral characteristic Xrc (f ) is split evenly between the transmitting and receiving filter, then the energy of the transmitting pulse is
∞ −∞ 2 gT (t)dt = ∞ −∞

|GT (f )|2 df =

∞ −∞

Xrc (f )df = 1

Hence, the energy Eav = Pav T depends only on the amplitude of the transmitted points and the 1 symbol interval T . Since T = 2400 , the average transmitted power is Eav = 24.70 × 10−9 × 2400 = 592.8 × 10−7 T If the points of the 16-QAM constellation are evenly spaced with minimum distance between them equal to d, then there are four points with coordinates (± d , ± d ), four points with coordinates 2 2 (± 3d , ± 3d ), four points with coordinates (± 3d , ± d ), and four points with coordinates (± d , ± 3d ). 2 2 2 2 2 2 Thus, the average transmitted power is Pav = Pav = 1 2 × 16 10−7 ,
16

(A2 + A2 ) = mc ms i=1 9d2 10d2 1 d2 +4× +8× 4× = 20d2 2 2 2 4

Since Pav = 592.8 ×

we obtain d= Pav = 0.00172 20

Problem 8.22 The roll-off factor α is related to the bandwidth by the expression 1+α = 2W , or equivalently T R(1 + α) = 2W . The following table shows the symbol rate for the various values of the excess bandwidth and for W = 1500 Hz. α R Problem 8.23 The following table shows the precoded sequence, the transmitted amplitude levels, the received signal levels and the decoded sequence, when the data sequence 10010110010 modulates a duobinary transmitting filter. Data seq. dn : Precoded seq. pn : Transmitted seq. an : Received seq. bn : Decoded seq. dn : Problem 8.24 The following table shows the precoded sequence, the transmitted amplitude levels, the received signal levels and the decoded sequence, when the data sequence 10010110010 modulates a modified duobinary transmitting filter. Data seq. dn : Precoded seq. pn : Transmitted seq. an : Received seq. bn : Decoded seq. dn : 1 1 1 2 1 0 0 -1 0 0 229 0 1 1 0 0 1 1 1 2 1 0 1 1 0 0 1 0 -1 -2 1 1 0 -1 -2 1 0 0 -1 0 0 0 0 -1 0 0 1 1 1 2 1 0 0 -1 0 0 1 1 1 0 1 0 1 1 2 0 0 1 1 2 0 1 0 -1 0 1 0 0 -1 -2 0 1 1 1 0 1 1 0 -1 0 1 0 0 -1 -2 0 0 0 -1 -2 0 1 1 1 0 1 0 1 1 2 0 .25 2400 .33 2256 .50 2000 .67 1796 .75 1714 1.00 1500

0 -1

0 -1

0 -1

Problem 8.25 Let X(z) denote the Z-transform of the sequence xn , that is X(z) = n xn z −n

Then the precoding operation can be described as P (z) = D(z) X(z) mod − M

where D(z) and P (z) are the Z-transforms of the data and precoded sequences respectively. For example, if M = 2 and X(z) = 1 + z −1 (duobinary signaling), then P (z) = D(z) =⇒ P (z) = D(z) − z −1 P (z) 1 + z −1

which in the time domain is written as pn = dn − pn−1 and the subtraction is mod-2. 1 However, the inverse filter X(z) exists only if x0 , the first coefficient of X(z) is relatively prime with M . If this is not the case, then the precoded symbols pn cannot be determined uniquely from the data sequence dn . Problem 8.26 In the case of duobinary signaling, the output of the matched filter is x(t) = sinc(2W t) + sinc(2W t − 1) and the samples xn−m are given by xn−m = x(nT − mT ) =
  1 n−m=0    0

1 n−m=1 otherwise

Therefore, the metric µ(a) in the Viterbi algorithm becomes µ(a) = 2 n an rn − n an am xn−m m a2 n

= 2 n an rn − n − n an an−1

= n an (2rn − an − an−1 )

Problem 8.27
The precoding for the duobinary signaling is given by

$$p_m = d_m \ominus p_{m-1}$$

The corresponding trellis has two states associated with the binary values of the history $p_{m-1}$. For the modified duobinary signaling the precoding is

$$p_m = d_m \oplus p_{m-2}$$

Hence, the corresponding trellis has four states depending on the values of the pair $(p_{m-2}, p_{m-1})$. The two trellises are depicted in the next figure. The branches have been labelled as $x/y$, where $x$ is the binary input data $d_m$ and $y$ is the actual transmitted symbol. Note that the trellis for the modified duobinary signal has more states, but the minimum free distance between the paths is $d_{free} = 3$, whereas the minimum free distance between paths for the duobinary signal is 2.

[Figure: one trellis stage for duobinary signaling (two states, $p_{m-1} \in \{0, 1\}$) and for modified duobinary signaling (four states, $(p_{m-2}, p_{m-1}) \in \{00, 01, 10, 11\}$); each branch is labelled $0/{-1}$ or $1/1$.]

Problem 8.28
1) The output of the matched filter demodulator is

$$y(t) = \sum_{k=-\infty}^{\infty} a_k \int_{-\infty}^{\infty} g_T(\tau - kT_b)\, g_R(t - \tau)\, d\tau + \nu(t) = \sum_{k=-\infty}^{\infty} a_k x(t - kT_b) + \nu(t)$$

where

$$x(t) = g_T(t) \star g_R(t) = \frac{\sin\frac{\pi t}{T}}{\frac{\pi t}{T}} \cdot \frac{\cos\frac{\pi t}{T}}{1 - \frac{4t^2}{T^2}}$$

Hence,

$$y(mT_b) = \sum_{k=-\infty}^{\infty} a_k x(mT_b - kT_b) + \nu(mT_b) = a_m + \frac{1}{\pi} a_{m-1} + \frac{1}{\pi} a_{m+1} + \nu(mT_b)$$

The term $\frac{1}{\pi} a_{m-1} + \frac{1}{\pi} a_{m+1}$ represents the ISI introduced by doubling the symbol rate of transmission.

2) In the next figure we show one trellis stage for the ML sequence detector. Since there is precursor ISI, we delay the received signal, used by the ML decoder to form the metrics, by one sample. Thus, the states of the trellis correspond to the pair $(a_{m-1}, a_m)$, and the transition labels correspond to the symbol $a_{m+1}$. Two branches originate from each state. The upper branch is associated with the transmission of $-1$, whereas the lower branch is associated with the transmission of $1$.

[Figure: one trellis stage with the four states $(a_{m-1}, a_m) \in \{(-1,-1), (-1,1), (1,-1), (1,1)\}$; from each state the upper branch carries $a_{m+1} = -1$ and the lower branch $a_{m+1} = 1$.]
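To make the detector concrete, here is a small exhaustive-search ML sketch of ours for this channel; the tap vector `taps` encodes the precursor, main, and postcursor samples $\frac{1}{\pi}, 1, \frac{1}{\pi}$ from part 1, and the branch metric is written directly as squared Euclidean distance to the noise-free output. A Viterbi search over the four trellis states would return the same sequence more efficiently.

```python
from itertools import product
from math import pi

taps = [1 / pi, 1.0, 1 / pi]       # precursor, main, postcursor samples of x(t)

def channel_output(a):
    """Noise-free samples y_m = a_m + (a_{m-1} + a_{m+1}) / pi, edges padded with 0."""
    padded = [0] + list(a) + [0]
    return [sum(t * padded[m + k] for k, t in enumerate(taps))
            for m in range(len(a))]

def ml_detect(y):
    """Pick the +/-1 sequence whose noise-free output is closest to y."""
    return min(product((-1, 1), repeat=len(y)),
               key=lambda a: sum((yi - si) ** 2
                                 for yi, si in zip(y, channel_output(a))))

a_true = (1, -1, -1, 1, 1)
noise = [0.1, -0.2, 0.05, 0.1, -0.1]
y = [s + n for s, n in zip(channel_output(a_true), noise)]
print(ml_detect(y))                 # (1, -1, -1, 1, 1) for this mild noise
```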

Problem 8.29
a) The output of the matched filter at the time instant $mT$ is

$$y_m = \sum_k a_k x_{k-m} + \nu_m = a_m + \frac{1}{4} a_{m-1} + \nu_m$$

The autocorrelation function of the noise samples $\nu_m$ is

$$E[\nu_k \nu_j] = \frac{N_0}{2} x_{k-j}$$

Thus, the variance of the noise is

$$\sigma_\nu^2 = \frac{N_0}{2} x_0 = \frac{N_0}{2}$$

If a symbol by symbol detector is employed and we assume that the symbols $a_m = a_{m-1} = \sqrt{E_b}$ have been transmitted, then the probability of error $P(e | a_m = a_{m-1} = \sqrt{E_b})$ is

$$P(e | a_m = a_{m-1} = \sqrt{E_b}) = P(y_m < 0 | a_m = a_{m-1} = \sqrt{E_b}) = P\left( \nu_m < -\frac{5}{4}\sqrt{E_b} \right) = \frac{1}{\sqrt{2\pi}\,\sigma_\nu} \int_{-\infty}^{-\frac{5}{4}\sqrt{E_b}} e^{-\nu^2 / 2\sigma_\nu^2}\, d\nu = Q\left( \sqrt{\frac{25 E_b}{8 N_0}} \right)$$
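The resulting error probability is easy to evaluate numerically; the sketch below is ours and uses the standard identity $Q(x) = \frac{1}{2}\mathrm{erfc}(x/\sqrt{2})$:

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability, Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * erfc(x / sqrt(2))

# P(e | a_m = a_{m-1} = sqrt(Eb)) = Q(sqrt(25 * Eb / (8 * N0)))
for snr_db in (4, 8, 12):
    eb_n0 = 10 ** (snr_db / 10)
    print(f"Eb/N0 = {snr_db} dB -> P(e) = {Q(sqrt(25 * eb_n0 / 8)):.3e}")
```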
