Improved Robustness Adaptive Step Size LMS Equalization Algorithm and Its Analysis
Lan-Jian Cao, Zhi-Zhong Fu, Qing-Kun Yang
Department of Communications, University of Electronic Science and Technology of China, Chengdu, China
caolanjian@126.com, fuzz@uestc.edu.cn, Yqk86@yahoo.com.cn

Abstract- In order to improve the performance of the LMS (least mean square) adaptive filtering algorithm, an improved robustness adaptive step-size LMS equalization algorithm is presented by establishing a nonlinear relationship between the step-size factor μ(n) and two statistics of the error signal e(n). Compared with other algorithms, this algorithm overcomes the sensitivity to extraneous noise by introducing a correlation statistic of the error signal e(n), and it also improves on previous work in terms of robustness. Theoretical analysis and simulation results indicate that the algorithm has a faster convergence speed and a lower steady-state error, and that it returns to the steady state quickly when the channel varies with time, showing better robustness and convergence than traditional algorithms.

Keywords- adaptive equalization; variable step-size; robustness; least mean square; error signal

I. INTRODUCTION

Adaptive equalization has always been an important technical topic in wireless mobile communication; it is significant for combating the inter-symbol interference (ISI) caused by the multipath effect and the channel's finite bandwidth. Among the numerous equalization algorithms, the least mean square (LMS) algorithm has attracted widespread attention due to its simple structure, high efficiency and convenience for real-time processing. LMS was first proposed by Widrow and Hoff, but there is a contradiction between its convergence speed and its convergence accuracy: a larger step size is beneficial to convergence speed, while a smaller step size is good for convergence accuracy. Many variable step-size LMS algorithms have been proposed to overcome this drawback, based on the idea of selecting a larger step size in the initial stage to obtain a faster convergence speed and then adjusting automatically to a smaller one to acquire higher convergence accuracy after the algorithm has converged. One class of such adaptive LMS algorithms adjusts the step size according to the error signal. Widrow analyzed the contradiction between convergence speed and convergence accuracy of LMS in [1] and argued that both should be considered together when selecting the step size μ(n). Shan established a relationship between the input signals and the error signals by using their cross-correlation in [2], but it has high complexity. Gelfand and Wei introduced the SVSLMS algorithm, which adjusts the step size using a Sigmoid function, in [3]. Although it has a fast convergence speed, it cannot achieve a low steady-state error because its step size cannot change gently when the error signal e(n) is close to 0. Pazaitis and Luo established functions similar to the fixed-parameter Sigmoid form by using higher-order statistics of the error signal in [4] and [5]. These can adjust the step size slowly when the error signal is close to 0, but the higher-order statistics of e(n) are sensitive to extraneous noise, and the fixed parameters α, β are weak in tracking the time variation of the channel, which gives poor robustness. Sun and Li introduced the autocorrelation of the error signal to control the step size in [6], but using the autocorrelation of the error signal to control the parameter β decreases the system's ability to track the error signal.

In this paper, we adopt the Sigmoid-function approach and use the correlation between the current error value and the previous error value as the controlling statistic, adjusting the step-size factor adaptively to obtain relatively high convergence precision: a large step-size factor is adopted when the error range is wide, and the step-size factor adaptively turns to a smaller one when the error range becomes narrow. During step-size adjustment, the correlation statistic between error values not only effectively suppresses the disturbance of the step adjustment caused by uncorrelated noise, but also tracks the channel's time variation swiftly by adapting the parameters of the Sigmoid function, thus achieving better robustness.

This research work was financed by the National Nature Science Foundation of China under contract number 61075013 and China's Postdoctoral Science Foundation under contract number 20100471671.

II. VARIABLE STEP-SIZE LMS EQUALIZER BASED ON SIGMOID FUNCTION

The LMS algorithm is based on minimizing the mean square error between the output of a linear filter and the desired response. If e(n) is the error signal, d(n) is the desired output signal, y(n) is the output of the linear filter, w(n) is the coefficient vector of the linear filter, and u(n) is the input signal vector, the LMS algorithm can be characterized by

y(n) = w^H(n)u(n)    (1)
e(n) = d(n) − y(n)    (2)


The coefficient vector of the equalization filter is updated by

w(n + 1) = w(n) + 2μ(n)u(n)e(n)    (3)

where μ(n) is the step-size factor; it determines the weight update of the coefficient vector at each iteration and is the key factor affecting the algorithm's convergence. In the fixed step-size LMS algorithm, μ(n) is a constant, which leads to an imbalance between convergence speed and steady-state error.
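For concreteness, here is a minimal NumPy sketch of the fixed step-size recursion of Eqs. (1)-(3); the function name, the tap count, the value μ = 0.05 and the assumption of real-valued signals are ours, not the paper's.

```python
import numpy as np

def lms_equalizer(u, d, num_taps=3, mu=0.05):
    """Fixed step-size LMS equalizer following Eqs. (1)-(3); real-valued signals assumed."""
    w = np.zeros(num_taps)                       # coefficient vector w(n)
    e = np.zeros(len(u))                         # error signal e(n)
    for n in range(num_taps - 1, len(u)):
        u_n = u[n - num_taps + 1:n + 1][::-1]    # input vector u(n), most recent sample first
        y_n = w @ u_n                            # filter output, Eq. (1)
        e[n] = d[n] - y_n                        # error signal, Eq. (2)
        w = w + 2 * mu * u_n * e[n]              # coefficient update, Eq. (3)
    return w, e
```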

The variable step-size LMS algorithm based on the Sigmoid function establishes a relationship between the step-size factor μ(n) and the error signal e(n), which can be expressed as

ψ1(n) = 1 − exp(−α|J(n)|)
ψ2(n) = 1/(1 + exp(−α|J(n)|)) − 0.5
μ(n) = βψj(n),  j = 1, 2    (4)

where |·| denotes the absolute value, ψj(n), j = 1, 2 are functions that control the shape of the step-size function, J(n) is a function of the error signal, and α, β are positive constants. Such algorithms all share the features of fast convergence at the initial stage and a smaller step size after the steady-state error shrinks, so they can achieve higher convergence accuracy. The SVSLMS algorithm in [3] lets J(n) = e(n); it uses the first-order statistic of the error signal e(n) to obtain a faster convergence speed, but its step-size factor μ(n) does not change slowly when the error signal e(n) approaches 0. The algorithms in [4] and [5] use higher-order statistics of the error signal e(n) to build J(n), but the higher-order statistics of e(n) are sensitive to extraneous noise, so these algorithms are weak in anti-noise performance. Moreover, the Sigmoid function and its improved algorithms use fixed parameters α, β to adjust μ(n), so their capacity for tracking a time-varying channel is not strong enough.

III. IMPROVED ADAPTIVE VARIABLE STEP-SIZE LMS ALGORITHM

The parameters α, β in the improved algorithms based on the Sigmoid function are fixed. Figures 1 and 2 show curves of the step-size factor μ(n) varying with the error signal e(n) for the fixed-parameter Sigmoid function with J(n) = e²(n). In Fig. 1, β = 0.3 and α takes several different values; in Fig. 2, α = 10 and β takes several different values. The graphs indicate that smaller α, β give a smoother curve and let the step-size factor μ(n) change slowly, but then μ(n) does not take a large value at the system's initialization stage, resulting in slow convergence, a lower capacity for tracking a time-varying channel and weaker robustness. If α, β are increased, the step-size factor changes rapidly and causes oscillation of the steady-state error, although higher convergence accuracy can be achieved.

In order to improve the system's robustness, so that the system follows the time-varying channel quickly, converges faster at the initialization stage and achieves a lower steady-state error in the steady-state phase, we establish a relationship between α, β and e(n):

α(n) = |e(n)/e(n − 1)|
β(n) = ηβ(n − 1) + k|e(n)|^0.5    (5)

where η is an adjusting factor for the parameter β(n) that makes β(n) adjust recursively as the system converges, with 0 < η < 1, and k|e(n)|^0.5 is a compensation term proportional to the error signal e(n). When the channel is time-varying, the error signal e(n) jumps to a large value quickly; at the same time, the statistic |e(n)/e(n − 1)| and the compensation term k|e(n)|^0.5 increase quickly, making α(n) and β(n) adjust to higher values, which resets the step-size factor μ(n) so that it follows the channel's time variation.
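The fixed-parameter behaviour plotted in Figs. 1 and 2 below can be reproduced with a short script such as the following; this is our own sketch, assuming the ψ2 form of Eq. (4) with J(n) = e²(n) and the (α, β) pairs listed in the Fig. 1 caption.

```python
import numpy as np
import matplotlib.pyplot as plt

# Step-size curves of Eq. (4) (psi_2 form) with J(n) = e^2(n) and fixed alpha, beta,
# similar to the curves shown in Figs. 1 and 2.
e = np.linspace(-0.8, 0.8, 401)
for alpha, beta in [(1, 0.3), (10, 0.3), (100, 0.3)]:
    mu = beta * (1.0 / (1.0 + np.exp(-alpha * e**2)) - 0.5)
    plt.plot(e, mu, label=f"alpha = {alpha}, beta = {beta}")
plt.xlabel("e(n)")
plt.ylabel("mu(n)")
plt.legend()
plt.show()
```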
Fig. 1 Change curves of the step size μ(n) versus e(n) when β = 0.3 and α takes different values (α = 1, 10, 100)

Fig. 2 Change curves of the step size μ(n) versus e(n) when α = 10 and β takes different values (β = 0.1, 0.2, 0.3, 0.4, 0.5)
In order to obtain a faster convergence speed and higher convergence accuracy, μ(n) needs to adjust with the error function J(n): μ(n) should take a larger value to make the error signal converge faster when J(n) is large, and adjust to a smaller value to obtain higher convergence accuracy when J(n) is small. To reduce the jitter of J(n) caused by input noise, we use the autocorrelation estimate between the current error and the previous error to control J(n), so that the proposed algorithm excludes variations of J(n) caused by input noise at steady state. J(n) is updated by

J(n) = e(n)e(n − 1)    (6)

Based on this, this paper establishes a variable-parameter functional relationship between the step-size factor μ(n) and the autocorrelation of the error signal e(n); the new adaptive step-size updating function can be formulated as

μ(n) = β(n)(1/(1 + exp(−α(n)|J(n)|)) − 0.5)
J(n) = e(n)e(n − 1)
α(n) = |e(n)/e(n − 1)|
β(n) = ηβ(n − 1) + k|e(n)|^0.5    (7)

where α(n) is a function that controls the shape of μ(n), and β(n) is a function that controls the value range of μ(n). It should be explained that the step-size factor μ(n) equals 0 if the previous error e(n − 1) = 0. Rather than causing instability, μ(n) = 0 accurately reflects the real situation that the last error signal is 0. Alternatively, an initial value can be set for e(n − 1), such as e(n − 1) = e(n), to adjust the step size when e(n − 1) = 0 at the initialization stage of the system. When η is too small, β(n) decreases too fast; we have found through extensive experiments that η = 0.5, k = 0.2 are the ideal values.
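To make the complete recursion concrete, the following Python sketch combines the weight update of Eq. (3) with the step-size rule of Eqs. (5)-(7); the function names, the initial value beta0 = 0.2 and the real-valued signal model are our own assumptions, and e(n − 1) = 0 is handled by setting μ(n) = 0 as described above.

```python
import numpy as np

def step_size(e_n, e_prev, beta_prev, eta=0.5, k=0.2):
    """Proposed variable step size, Eqs. (5)-(7). Returns mu(n) and updated beta(n)."""
    beta_n = eta * beta_prev + k * abs(e_n) ** 0.5            # Eq. (5)
    if e_prev == 0.0:
        return 0.0, beta_n                                    # mu(n) = 0 when e(n-1) = 0
    alpha_n = abs(e_n / e_prev)                               # Eq. (5)
    J_n = e_n * e_prev                                        # Eq. (6): error autocorrelation
    mu_n = beta_n * (1.0 / (1.0 + np.exp(-alpha_n * abs(J_n))) - 0.5)   # Eq. (7)
    return mu_n, beta_n

def proposed_lms(u, d, num_taps=3, beta0=0.2):
    """Adaptive equalizer driven by the proposed step size and the update of Eq. (3)."""
    w = np.zeros(num_taps)
    e = np.zeros(len(u))
    beta_prev, e_prev = beta0, 0.0
    for n in range(num_taps - 1, len(u)):
        u_n = u[n - num_taps + 1:n + 1][::-1]
        e[n] = d[n] - w @ u_n                                 # Eqs. (1)-(2)
        mu_n, beta_prev = step_size(e[n], e_prev, beta_prev)
        w = w + 2 * mu_n * u_n * e[n]                         # Eq. (3)
        e_prev = e[n]
    return w, e
```

Because 0 < η < 1, the β(n) recursion stays bounded whenever the error signal is bounded, which keeps μ(n) within a finite range.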

IV. SIMULATION

According to the proposed adaptive variable step-size LMS algorithm, we establish an adaptive channel equalization environment to measure the performance of the proposed algorithm. The adaptive channel equalizer can be described by the block diagram of Fig. 3: the source signal passes through the channel and is corrupted by additive noise, and the equalizer output y(n) is compared with the training sequence d(n) to form the error signal e(n) that drives the adaptation.

Fig. 3 Block diagram of adaptive channel equalization

The simulation environment is configured as follows:
1) An FIR filter is used to simulate the channel and introduce inter-symbol interference; the channel impulse response can be formulated as

h_i = (1 + cos(2π(i − 2)/P))/2, i = 1, 2, 3;  h_i = 0 otherwise    (8)

where P is a parameter that controls the channel distortion. In the system's initial stage, P = 2.9 and the channel impulse response is ISI = [0.2194 1 0.2194]; the channel's time variation occurs at the 500th sampling point, where P = 3.1 and the channel impulse response becomes ISI = [0.2798 1 0.2798].
2) The order of the channel equalization filter is 3.
3) The channel input signal d(n) is a pseudorandom sequence taking the values +1 and −1, with zero mean and unit variance.
4) The noise introduced in the channel is Gaussian white noise uncorrelated with d(n).
5) There are 1000 sampling points per run and 100 mutually independent experiments.
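As a rough illustration of this configuration (not code from the paper), the channel of Eq. (8), its change at the 500th sample and the ±1 training source can be generated as follows; the helper name channel_h, the random seed and the noise variance are our own assumptions, since the paper does not state the noise power.

```python
import numpy as np

def channel_h(P):
    """Channel impulse response of Eq. (8): h_i = (1 + cos(2*pi*(i - 2)/P)) / 2 for i = 1, 2, 3."""
    return np.array([(1 + np.cos(2 * np.pi * (i - 2) / P)) / 2 for i in (1, 2, 3)])

rng = np.random.default_rng(0)
N = 1000                                        # sampling points per run
d = rng.choice([-1.0, 1.0], size=N)             # pseudorandom +/-1 source: zero mean, unit variance

# Time-varying channel: P = 2.9 up to the 500th sample, P = 3.1 afterwards.
u = np.convolve(d, channel_h(2.9))[:N]
u[500:] = np.convolve(d, channel_h(3.1))[:N][500:]
u += np.sqrt(0.01) * rng.standard_normal(N)     # additive white Gaussian noise (variance assumed)

# u (received signal) and d (training sequence) then drive the 3-tap adaptive equalizer of Fig. 3;
# averaging the squared error over 100 such independent runs gives curves like Figs. 4 and 5.
```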

Fig. 4 Comparison of the step size μ(n) versus iterations between the proposed algorithm and the algorithm in [5]

Fig. 5 Comparison of convergence performance (error e(n) in dB versus iterations) between the proposed algorithm and the algorithm in [5]

Figure 4 shows a comparison of the step-size factor μ(n), as it changes over the iterations, between the proposed robust adaptive LMS algorithm and the improved SVSLMS algorithm proposed in [5]. Compared with the improved SVSLMS algorithm in [5], the proposed algorithm obtains a larger step-size factor at the initialization stage of the system, when larger errors occur, which makes it converge faster. Moreover, the proposed algorithm obtains an accurate enough step size to reach better convergence accuracy. The proposed algorithm also has better robustness, because the step-size factor μ(n) follows the channel's time variation by turning to a higher value quickly when the system suffers an unknown time variation; in addition, α(n) and β(n) quickly convert to higher values at that time. That is to say, with respect to the step-size factor, the proposed algorithm is better than the improved SVSLMS algorithm in [5].

Figure 5 shows a convergence performance comparison between the proposed robust adaptive LMS algorithm and the improved SVSLMS algorithm proposed in [5] under the same channel environment. We set η = 0.5, k = 0.2 in the proposed algorithm; through extensive experiments we found that α = 1, β = 0.2 are the ideal parameters for the improved SVSLMS algorithm in [5]. The figures reveal that, in terms of convergence speed, the error signal of the proposed algorithm converges to a lower level and does so faster over the same short period. The proposed algorithm is also excellent in terms of convergence accuracy: a smaller step-size factor is obtained when the steady-state error is close to zero, so the proposed algorithm achieves a lower steady-state error. The error signal in the proposed algorithm fluctuates less than that in [5], indicating better stability, which is consistent with the previous analysis. Because α(n) and β(n) dynamically regulate the step-size factor μ(n) with a fast reaction speed, the proposed algorithm returns to the steady state faster than the improved SVSLMS algorithm in [5] when the channel undergoes an unknown time variation. This is strong proof of the better robustness of the proposed algorithm.

CONCLUSION

In this paper, we build a nonlinear relationship between the step-size factor μ(n) and statistics related to the error signal. We then make improvements to robustness over previous algorithms and propose a new improved robustness adaptive step-size LMS equalization algorithm. Computer simulation shows that the algorithm obtains its ideal performance when η = 0.5 and k = 0.2; the simulation results are presented in this paper. The proposed algorithm not only overcomes the drawback that the SVSLMS algorithm cannot change gently in the steady stage, but also overcomes the drawback that the higher-order statistics of the error signal e(n) are sensitive to extraneous noise. The introduction of variable parameters endows the proposed algorithm with better robustness. Computer simulation shows that the proposed algorithm has a faster convergence speed and a lower steady-state error, and that it returns to the steady state quickly when the system undergoes unknown time variation, which indicates strong robustness compared with other algorithms.

REFERENCES
[1] Widrow B, McCool J M, Larmore M G, Johnson C R. Stationary and nonstationary learning characteristics of the LMS adaptive filter. Proc IEEE, 1976, 64(8): 1947-1951.
[2] Shan T J, Kailath T. Adaptive algorithm with an automatic gain control feature. IEEE Trans. on Acoustics, Speech, and Signal Processing, 1991, 35(1): 122-127.
[3] Gelfand S B, Wei Y, Krogmeier J V. The stability of variable step-size LMS algorithms. IEEE Trans. on Signal Processing, 1999, 47(12): 3277-3288.
[4] Pazaitis D I, Constantinides A G. A novel kurtosis driven variable step-size adaptive algorithm. IEEE Trans. on Signal Processing, 1999, 47(3): 864-872.
[5] Luo X D, Jia Z H, Wang Q. A new variable step-size LMS adaptive filtering algorithm. Chinese Journal of Electronics, 2006, 34(6): 1123-1126.
[6] Sun E C, Li Y H, Zhang D Y, Yi K C. Adaptive variable-step size LMS filtering algorithm and its analysis. Journal of System Simulation, 2007, 19(14): 3172-3175.

