
A DSP A-Z
Digital Signal Processing: An "A" to "Z"
http://www.unex.ucla.edu

R.W. Stewart
Signal Processing Division Dept. of Electronic and Electrical Eng. University of Strathclyde Glasgow G1 1XW, UK Tel: +44 (0) 141 548 2396 Fax: +44 (0) 141 552 2487 E-mail: r.stewart@eee.strath.ac.uk

M.W. Hoffman
Department of Electrical Eng. 209N Walter Scott Eng. Center PO Box 880511 Lincoln, NE 68588-0511 USA Tel: +1 402 472 1979 Fax: +1 402 472 4732 E-mail: hoffman@unlinfo.unl.edu

© BlueBox Multimedia, R.W. Stewart 1998

The DSPedia
An A-Z of Digital Signal Processing
This text aims to present relevant, accurate and readable definitions of common and not so common terms, algorithms, techniques and information related to DSP technology and applications. It is hoped that the information presented will complement the formal teachings of the many excellent DSP textbooks available and bridge the gaps that often exist between advanced DSP texts and introductory DSP. While some of the entries are particularly detailed, most often in cases where the concept, application or term is particularly important in DSP, you will find that other terms are short, and perhaps even dismissive when it is considered that the term is not directly relevant to DSP or would not benefit from an extensive description. There are 4 key sections to the text:

• DSP terms A-Z (page 1)
• Common Numbers associated with DSP (page 427)
• Acronyms (page 435)
• References (page 443)

Any comment on this text is welcome, and the authors can be emailed at r.stewart@eee.strath.ac.uk or hoffman@ee.unl.edu.

© Bob Stewart, Mike Hoffman 1998. Published by BlueBox Multimedia.


A
A-series Recommendations: Recommendations from the International Telecommunication Union (ITU) telecommunications committee (ITU-T) outlining the work of the committee. See also International Telecommunication Union, ITU-T Recommendations.

A-law Compander: A defined standard nonlinear (logarithmic in fact) quantiser characteristic useful for certain signals. Non-linear quantisers are used in situations where a signal has a large dynamic range, but where signal amplitudes are more logarithmically than linearly distributed. This is the case for normal speech. Speech signals have a very wide dynamic range: harsh "oh" and "b" type sounds have a large amplitude, whereas softer sounds such as "sh" have small amplitudes. If a uniform quantization scheme were used then, although the loud sounds would be represented adequately, the quieter sounds may fall below the threshold of the LSB and therefore be quantized to zero and the information lost. Therefore non-linear quantizers are used such that the quantization step size at low input levels is much smaller than for higher level signals. To some extent this also exploits the logarithmic nature of human hearing.
[Figure: A linear and a non-linear (A-law) input-output characteristic for two 4 bit ADCs. The linear ADC has uniform quantisation, whereas the non-linear ADC has more resolution for low level signals by having a smaller step size for low level inputs.]

A-law quantizers are often implemented by using a nonlinear circuit followed by a uniform quantizer. Two schemes are widely in use, the µ-law in the USA and Japan:

    y = \frac{\ln(1 + \mu x)}{\ln(1 + \mu)}    (1)

and the A-law in Europe:

    y = \frac{1 + \ln(A x)}{1 + \ln A}    (2)


where "ln" is the natural logarithm (base e), and the input signal x is in the range 0 to 1. The ITU have defined standards (G.711) for these quantisers where µ = 255 and A = 87.56. The input/output characteristics of Eqs. 1 and 2 for these two values are virtually identical. Although a non-linear quantiser can be produced with analogue circuitry, it is more usual that a linear quantiser will be used, followed by a digital implementation of the compressor. For example, if a signal has been digitised by a 12 bit linear ADC, then digital µ-law compression can be performed to compress to 8 bits using a modified version of Eq. 1:

    y = 2^7 \frac{\ln(1 + \mu |x| / 2^{11})}{\ln(1 + \mu)} = 128 \frac{\ln(1 + \mu |x| / 2048)}{\ln(1 + \mu)}    (3)

where y is rounded to the nearest integer. After a signal has been compressed and transmitted, at the receiver it can be expanded back to its linear form by using an expander with the inverse characteristic to the compressor.
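As a rough illustration of this digital companding step, the Python sketch below (assuming NumPy is available) applies the logarithmic mapping of Eq. 3 and its inverse. The function names, the signed handling of negative samples and the rounding are illustrative assumptions; a practical codec would use the piecewise linear G.711 approximation mentioned below rather than evaluating logarithms directly.

```python
import numpy as np

MU = 255  # ITU G.711 mu-law constant

def mu_compress(x, in_bits=12, out_bits=8):
    """Compress signed linear PCM samples (in_bits wide) to out_bits, following Eq. 3."""
    x = np.asarray(x, dtype=float)
    full = 2 ** (in_bits - 1)                       # 2048 for a 12 bit input
    y = 2 ** (out_bits - 1) * np.log1p(MU * np.abs(x) / full) / np.log1p(MU)
    return np.sign(x) * np.round(y)                 # rounded to the nearest integer

def mu_expand(y, in_bits=12, out_bits=8):
    """Inverse characteristic: map the 8 bit code back to a 12 bit linear estimate."""
    y = np.asarray(y, dtype=float)
    full = 2 ** (in_bits - 1)
    x = full / MU * np.expm1(np.abs(y) * np.log1p(MU) / 2 ** (out_bits - 1))
    return np.sign(y) * x

# Quiet samples keep proportionally more resolution than loud ones.
samples = np.array([5, 50, 500, 2000])
codes = mu_compress(samples)
print(codes)              # 8 bit codes
print(mu_expand(codes))   # reconstructed 12 bit values
```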
[Figure: The ITU µ-law characteristic for compression from 12 bits (digital input range -2048 to 2047) to 8 bits (digital output range -128 to 127), for µ = 255. Note that if a value of µ = 0 was used then the characteristic is linear, and for µ → ∞ the characteristic tends to a sigmoid/step function.]

Listening tests for µ-law encoded speech reveal that compressing a linear resolution 12 bit speech signal (sampled at 8 kHz) to 8 bits, and then expanding back to a linearly quantised 12 bit signal, does not degrade the speech quality to any significant degree. This can be quantitatively shown by considering the actual quantisation noise signals for the compressed and uncompressed speech signals. In practice Eq. 3 is not evaluated directly by DSP routines; instead a piecewise linear approximation (defined in G.711) to the µ- or A-law characteristic is used. See also Companders, Compression, G-series Recommendations, µ-law.

Absolute Error: Consider the following example: if an analogue voltage of exactly v = 6.285 volts is represented to only one decimal place by rounding, then v′ = 6.3, and the absolute error, ∆v, is defined as the difference between the true value and the estimated value. Therefore,

    v = v′ + ∆v    (4)

and

    ∆v = v − v′    (5)

For this case ∆v = −0.015 volts. Notice that absolute error does not refer to a positive valued error, but only that no normalization of the error has occurred. See also Error Analysis, Quantization Error, Relative Error.

Absolute Pitch: See entry for Perfect Pitch.

Absolute Value: The absolute value of a quantity, x, is usually denoted as |x|. If x ≥ 0, then |x| = x, and if x < 0 then |x| = −x. For example |12123| = 12123, and |−234.5| = 234.5. The absolute value function y = |x| is non-linear and is non-differentiable at x = 0.

[Figure: The absolute value function y = |x|.]

Absorption Coefficient: When sound is absorbed by materials such as walls, foam etc., the amount of sound energy absorbed can be predicted by the material’s absorption coefficient at a particular frequency. The absorption coefficients for a few materials are shown below. A 1.0 indicates that all sound energy is absorbed, and a 0, that none is absorbed. Sound that is not absorbed is reflected. The amplitude of reflected sound waves is given by 1 – A times the amplitude of the impinging sound wave.
[Figure: Absorption coefficient versus frequency (0.1 to 5 kHz) for polyurethane foam, glass-wool, thick carpet and a brick wall; incident sound is partly absorbed and partly reflected.]

Accelerometer: A sensor that measures acceleration, often used for vibration sensing and attitude control applications.

Accumulator: Part of a DSP processor which can add two binary numbers together. The accumulator is part of the ALU (arithmetic logic unit). See also DSP Processor.

Accuracy: The accuracy of a DSP system refers to the error of a quantity compared to its true value. See also Absolute Error, Relative Error, Quantization Noise.


Acoustic Echo Cancellation: For teleconferencing applications or hands free telephony, the loudspeaker and microphone set-up at both locations creates a direct feedback path which can cause instability and therefore failure of the system. To compensate for this echo, acoustic echo cancellers can be introduced:
[Figure: Acoustic echo cancellation for a two-room (full duplex) teleconferencing link. An adaptive filter at each end models the acoustic feedback path H1(f) or H2(f) between loudspeaker and microphone and subtracts the estimated echo from the outgoing line.]

When speaker A in room 1 speaks into microphone 1, the speech will appear at loudspeaker 2 in room 2. However the speech from loudspeaker 2 will be picked up by microphone 2, and transmitted back into room 1 via loudspeaker 1, which in turn is picked up by microphone 1, and so on. Hence unless the loudspeaker and microphone in each room are acoustically isolated (which would require headphones), there is a direct feedback path which may cause stability problems and hence failure of the full duplex speakerphone. Setting up an adaptive filter at each end will attempt to cancel the echo at each outgoing line. Amplifiers, ADCs, DACs, communication channels etc. have been omitted to allow the problem to be clearly defined.

Teleconferencing is very dependent on adaptive signal processing strategies for acoustic echo control. Typically teleconferencing will sample at 8 or 16 kHz and the length of the adaptive filters could be thousands of weights (or coefficients), depending on the acoustic environments where they are being used. See also Adaptive Signal Processing, Echo Cancellation, Least Mean Squares Algorithm, Noise Cancellation, Recursive Least Squares.

Acoustics: The science of sound. See also Absorption, Audio, Echo, Reverberation.

Actuator: A device which takes electrical energy and converts it into some other form, e.g. loudspeakers, AC motors, light emitting diodes (LEDs).

Active Filter: An analog filter that includes amplification components such as op-amps is termed an active filter; a filter that only has resistive, capacitive and inductive elements is termed a passive filter. In DSP systems analog filters are widely used for anti-alias and reconstruction filters, where good roll-off characteristics above fs/2 are required. A simple RC circuit forms a first order (single pole) passive filter with a roll-off of 20 dB/decade (or 6 dB/octave). By cascading RC circuits with an (active) buffer amplifier circuit, higher order filters (with more than one pole) can be easily designed. See also Anti-alias Filter, Filters (Butterworth, Chebyshev, Bessel etc.), Knee, Reconstruction Filter, RC Circuit, Roll-off.


Active Noise Control (ANC): By introducing anti-phase acoustic waveforms, zones of quiet can be introduced at specified areas in space caused by the destructive interference of the offending noise and an artificially induced anti-phase noise:
[Figure: The simple principle of active noise control. An ANC loudspeaker generates anti-phase noise which destructively interferes with the offending periodic noise to create a quiet zone.]

ANC works best for low frequencies up to around 600Hz. This can be intuitively argued by the fact that the wavelength of low frequencies is very long and it is easier to match peaks and troughs to create relatively large zones of quiet. Current applications for ANC can be found inside aircraft, in automobiles, in noisy industrial environments, in ventilation ducts, and in medical MRI equipment. Future applications include mobile telephones and maybe even noisy neighbors! The general active noise control problem is: NOISE
[Figure: The general set up of an active noise controller as a feedback loop where the aim is to minimize the error signal power. A reference microphone provides x(t) to the adaptive noise controller, whose output y(t) reaches the error microphone (in the desired zone of quiet) via the secondary loudspeaker path He(f); the error signal is e(t) = d(t) + ye(t).]


To implement an ANC system in real time the filtered-X LMS or filtered-U LMS algorithms can be used [68], [69]:
[Figure: The filtered-U LMS algorithm for active noise control. The reference microphone signal x(k) is filtered by an adaptive filter with zeroes (weights a) and poles (weights b) to drive the secondary loudspeaker; an estimate Ĥe(z) of the error (secondary) path is used to form the filtered reference signals f(k) and g(k), and the weights are updated as

    a(k+1) = a(k) + 2µ e(k) f(k)
    b(k+1) = b(k) + 2µ e(k) g(k)

Note that if there are no poles, this architecture simplifies to the filtered-X LMS.]
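The Python sketch below (NumPy assumed) illustrates the filtered-X LMS idea on a toy simulated duct. The primary path, secondary path and its estimate are invented FIR responses, the anti-noise is subtracted at the error microphone (so the sign convention differs slightly from the figure above), and a real controller would also have to identify the secondary path estimate on-line.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000, 20000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.1 * rng.standard_normal(n)   # reference (noise source)

P = np.array([0.0, 0.8, 0.4, 0.2])      # assumed primary path (noise -> error mic)
S = np.array([0.0, 0.6, 0.3])           # assumed secondary path (loudspeaker -> error mic)
S_hat = S.copy()                        # ideal estimate of the secondary path

d = np.convolve(x, P)[:n]               # noise arriving at the error microphone

L, mu = 16, 0.002                       # controller length and step size (illustrative)
a = np.zeros(L)
x_buf = np.zeros(L); f_buf = np.zeros(L)
y_buf = np.zeros(len(S)); fx_buf = np.zeros(len(S_hat))
e = np.zeros(n)

for k in range(n):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[k]
    y = a @ x_buf                                   # anti-noise sent to the loudspeaker
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e[k] = d[k] - S @ y_buf                         # residual at the error microphone
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = x[k]
    f = S_hat @ fx_buf                              # filtered-x (reference through S_hat)
    f_buf = np.roll(f_buf, 1); f_buf[0] = f
    a += 2 * mu * e[k] * f_buf                      # filtered-X LMS update

print("error power, first 1000 samples:", np.mean(e[:1000] ** 2))
print("error power, last 1000 samples :", np.mean(e[-1000:] ** 2))
```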

The figure below shows the time and frequency domains for the ANC of an air conditioning duct. Note that the signals shown represent the sound pressure level at the error microphone. In general the zone of quiet does not extend much greater than λ/4 around the error microphone (where λ is the noise wavelength):
[Figure: ANC inside an air conditioning duct. Time analysis (amplitude versus time in ms) and power spectra analysis (magnitude in dB versus frequency, 0 to 1000 Hz) of the sound pressure level at the error microphone before and after switching on the noise canceller. The noise canceller clearly reduces the low frequency (periodic) noise components.]

Sampling rates for ANC can be as low as 1 kHz if the offending noise is very low in frequency (say 50-400 Hz) but can be as high as 50 kHz for certain types of ANC headphones where very rapid adaption is required, even though the maximum frequency being cancelled is not more than a few kHz, which would make the Nyquist rate considerably lower. See also Active Vibration Control, Adaptive Line Enhancer, Adaptive Signal Processing, Least Mean Squares Algorithm, Least Mean Squares Filtered-X Algorithm Convergence, Noise Cancellation.

Active Vibration Control (AVT): DSP techniques for AVT are similar to active noise cancellation (ANC) algorithms and architectures. Actuators are employed to introduce anti-phase vibrations in an attempt to reduce the vibrations of a mechanical system. See also Active Noise Cancellation.


AC-2: An audio compression algorithm developed by Dolby Labs and intended for applications such as high quality digital audio broadcasting. AC-2 claims compression ratios of 6:1 with sound quality almost indistinguishable from CD quality sound under almost all listening conditions. AC-2 is based on psychoacoustic modelling of human hearing. See also Compression, Precision Adaptive Subband Coding (PASC).

Adaptation: Adaptation is the auditory effect whereby a constant and noisy signal is perceived to become less loud or noticeable after prolonged exposure. An example would be the adaptation to the engine noise in a (loud!) propeller aircraft. See also Audiology, Habituation, Psychoacoustics.

Adaptive Differential Pulse Code Modulation (ADPCM): ADPCM is a family of speech compression and decompression algorithms which use adaptive quantizers and adaptive predictors to compress data (usually speech) for transmission. The CCITT standard of ADPCM allows an analog voice conversation sampled at 8 kHz to be carried within a 32 kbits/second digital channel. Three or four bits are used to describe each sample, representing the difference between two adjacent samples. See also Differential Pulse Code Modulation (DPCM), Delta Modulation, Continuously Variable Slope Delta Modulation (CVSD), G.721.

Adaptive Beamformer: A spatial filter (beamformer) that has time-varying, data dependent (i.e., adaptive) weights. See also Beamforming.

Adaptive Equalisation: If the effects of a signal being passed through a particular system are to be "removed" then this is equalisation. See Equalisation.

Adaptive Filter: The generic adaptive filter can be represented as:

x( k )

Adaptive Filter, w(k)

y(k)

+ e( k)



Adaptive Algorithm y ( k ) = Filter { x ( k ), w ( k ) } w ( k + 1 ) = w ( k ) + e ( k )f { d ( ( k ), x ( k ) ) } In the generic adaptive filter architecture the aim can intuitively be described as being to adapt the impulse response of the digital filter such that the input signal x ( k ) is filtered to produce y ( k ) which when subtracted from desired signal d ( k ) , will minimize the power of the error signal e ( k ) .

The adaptive filter output y ( k ) is produced by the filter weight vector, w ( k ) , convolved (in the linear case) with x ( k ) . The adaptive filter weight vector is updated based on a function of the error signal e ( k ) at each time step k to produce a new weight vector, w ( k + 1 ) to be used at the next time step. This adaptive algorithm is used in order that the input signal of the filter, x ( k ) , is filtered to produce an output, y ( k ) , which is similar to the desired signal, d ( k ) , such that the power of the error signal, e ( k ) = d ( k ) – y ( k ) , is minimized. This minimization is essentially achieved by exploiting the correlation that should exist between d ( k ) and y ( k ) .
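A minimal Python sketch of this generic loop may help fix the idea. It uses the LMS update w(k+1) = w(k) + 2µe(k)x(k), one of the gradient techniques discussed below, in a system identification arrangement (see the architectures described next); the unknown system, filter length and step size are arbitrary illustrative choices.

```python
import numpy as np

def lms(x, d, L=32, mu=0.01):
    """Generic LMS adaptive FIR filter: returns output y, error e and final weights w."""
    n = len(x)
    w = np.zeros(L)
    x_buf = np.zeros(L)
    y = np.zeros(n); e = np.zeros(n)
    for k in range(n):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[k]
        y[k] = w @ x_buf              # filter output
        e[k] = d[k] - y[k]            # error against the desired signal
        w += 2 * mu * e[k] * x_buf    # LMS weight update
    return y, e, w

# System identification example: the "unknown system" is a short FIR filter.
rng = np.random.default_rng(1)
unknown = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, unknown)[:len(x)]
_, e, w = lms(x, d, L=8, mu=0.01)
print(np.round(w[:4], 3))             # approaches the unknown impulse response
print(np.mean(e[-500:] ** 2))         # steady state error power (close to zero here)
```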


The adaptive digital filter can be an FIR, IIR, Lattice or even a non-linear (Volterra) filter, depending on the application. The most common by far is the FIR. The adaptive algorithm can be based on gradient techniques such as the LMS, or on recursive least squares techniques such as the RLS. In general different algorithms have different attributes in terms of minimum error achievable, convergence time, and stability. There are at least four general architectures that can be set up for adaptive filters: (1) System identification; (2) Inverse system identification; (3) Noise cancellation; (4) Prediction. Note that all of these architectures have the same generic adaptive filter as shown below (the “Adaptive Algorithm” block explicitly drawn above has been left out for illustrative convenience and clarity):
[Figure: Four adaptive signal processing architectures. (1) System identification: x(k) drives both the unknown system (giving d(k)) and the adaptive filter (giving y(k)). (2) Inverse system identification: s(k) passes through the unknown system to give the adaptive filter input x(k), while a delayed version of s(k) forms d(k). (3) Noise cancellation: d(k) = s(k) + n(k), and a correlated noise reference n′(k) is the adaptive filter input x(k). (4) Prediction: x(k) is a delayed version of d(k) = s(k). In each case e(k) = d(k) − y(k).]

Consider first the system identification; at an intuitive level, if the adaptive algorithm is indeed successful at minimizing the error to zero, then by simple inspection the transfer function of the "Unknown System" must be identical to the transfer function of the adaptive filter. Given that the error of the adaptive filter is now zero, the adaptive filter's weights are no longer updated and will remain in a steady state. As long as the unknown system does not change its characteristics we have now successfully identified (or modelled) the system. If the adaption was not perfect and the error is "very small" rather than zero (which is more likely in real applications) then it is fair to say that we have a good model rather than a perfect model.

Similarly for the inverse system identification, if the error adapts to zero over a period of time, then by observation the transfer function of the adaptive filter must be the exact inverse of the "Unknown System". (Note that the "Delay" is necessary to ensure that the problem is causal and therefore solvable with real systems, i.e. given that the "Unknown System" may introduce a time delay in producing x(k), if the "Delay" was not present in the path to the desired signal the system would be required to produce an anti-delay, or look ahead in time - clearly this is impossible.)

For the noise cancellation architecture, if the input signal is s(k), which is corrupted by additive noise, n(k), then the aim is to use a correlated noise reference signal, n′(k), as an input to the adaptive filter, such that when performing the adaption there is only information available to implicitly model the noise signal, n(k), and therefore when this filter adapts to a steady state we would expect that e(k) ≈ s(k).

Finally, for the prediction filter, if the error is set to be adapted to zero, then the adaptive filter must predict future elements of the input s(k) based only on past observations. This can be performed if the signal s(k) is periodic and the filter is long enough to "remember" past values. One application therefore of the prediction architecture could be to extract periodic signals from stochastic noise signals. The prediction filter can be extended to a "smoothing filter" if data are processed off-line -- this means that samples before and after the present sample are filtered to obtain an estimate of the present sample. Smoothing cannot be done in real-time, however there are important applications where real-time processing is not required (e.g., geophysical seismic signal processing).

A particular application may have elements of more than one single architecture. For example in the following, if the adaptive filter is successful in modelling "Unknown System 1", and inverse modelling "Unknown System 2", then if s(k) is uncorrelated with r(k) the error signal is likely to be e(k) ≈ s(k):

[Figure: An adaptive filtering architecture incorporating elements of system identification, inverse system identification and noise cancellation: the signals s(k) and r(k), Unknown Systems 1 and 2, a delay in the desired signal path, and an adaptive filter whose output y(k) is subtracted from d(k) to form e(k).]

In the four general architectures shown above the unknown systems being investigated will normally be analog in nature, and therefore suitable ADCs and DACs would be used at the various analog input and output points as appropriate. For example if an adaptive filter was being used to find a model of a small acoustic enclosure the overall hardware set up would be:

[Figure: The analog-digital interfacing for a system identification, or modelling, of an acoustic transfer path using a loudspeaker and microphone. A DAC drives the loudspeaker with x(t), the microphone signal d(t) is digitised by an ADC, and the adaptive filter inside the digital signal processor forms y(k) and e(k) from x(k) and d(k).]

See also Adaptive Signal Processing, Acoustic Echo Cancellation, Active Noise Control, Adaptive Line Enhancer, Echo Cancellation, Least Mean Squares (LMS) Algorithm, Least Squares, Noise Cancellation, Recursive Least Squares, Wiener-Hopf Equations.

Adaptive Infinite Impulse Response (IIR) Filters: See Least Mean Squares IIR Algorithms.

Adaptive Line Enhancer (ALE): An adaptive signal processing structure that is designed to enhance or extract periodic (or predictable) components:

∆ p(k ) + n( k)

Adaptive Filter

− y(k)

p(k – ∆) + n(k – ∆)

An adaptive line enhancer. The input signal consists of a periodic component, p ( k ) and a stochastic component, n ( k ) . The delay, ∆, is long enough such that the stochastic component at the input to the adaptive filter, n ( k – ∆ ) is decorrelated with the input n ( k ) . For periodic signal the delay does not decorrelate p ( k ) and p ( k – ∆ ) . When the adaptive filter adapts it will therefore only cancel the periodic signal.

The delay, ∆, should be long enough to decorrelate the broadband “noise-like” signal, resulting in an adaptive filter which extracts the narrowband periodic signal at filter output y ( k ) (or removes the periodic noise from a wideband signal at e ( k ) ). An ALE exploits the knowledge that the signal of interest is periodic, whereas the additive noise is stochastic. If the decorrelation delay, ∆, is long enough then the stochastic noise presented to the d ( k ) input is uncorrelated with the noise presented to the x ( k ) input, however the periodic noise remains correlated:

[Figure: Correlation r(n) = E{p(k)p(k+n)} of a periodic (sine wave) signal, and correlation q(n) = E{n(k)n(k+n)} of a stochastic signal, plotted against lag n; the periodic signal remains correlated at lags ±∆ whereas the stochastic signal does not.]
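A small Python simulation (NumPy assumed) of this delay-and-predict structure: a 60 Hz tone buried in white noise is applied, and an LMS update is used for the adaptive filter. The delay, filter length and step size are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(2)
n, fs = 8000, 1000
t = np.arange(n) / fs
p = np.sin(2 * np.pi * 60 * t)                 # periodic component
noise = rng.standard_normal(n)                 # broadband stochastic component
s = p + noise                                  # observed signal d(k)

delta = 25                                     # decorrelation delay (samples)
L, mu = 32, 0.002
w = np.zeros(L); x_buf = np.zeros(L)
y = np.zeros(n); e = np.zeros(n)

for k in range(delta, n):
    x_buf = np.roll(x_buf, 1); x_buf[0] = s[k - delta]   # delayed input x(k) = s(k - delta)
    y[k] = w @ x_buf                                     # enhanced (periodic) estimate
    e[k] = s[k] - y[k]                                   # broadband residual
    w += 2 * mu * e[k] * x_buf

# y should be dominated by the 60 Hz tone; compare it with the clean tone p.
print(np.corrcoef(y[-2000:], p[-2000:])[0, 1])
```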

Typically an ALE may be used in communication channels or in radar and sonar applications where a low level sinusoid is masked by white or colored noise. In a telecommunications system, an ALE could be used to extract periodic DTMF signals from very high levels of stochastic noise. Alternatively note that the ALE can be used to extract the periodic noise from the stochastic signal by observing the signal e(k). See also Adaptive Signal Processing, Least Mean Squares Algorithm, Noise Cancellation.

Adaptive Noise Cancellation: See Adaptive Signal Processing, Noise Cancellation.

Adaptive Signal Processing: The discrete mathematics of adaptive filtering, originally based on the least squares minimization theory of the celebrated 19th century German mathematician Gauss. Least squares is of course widely used in statistical analysis and virtually every branch of science and engineering. For many DSP applications, however, least squares minimization is applied to real time data and therefore presents the challenge of producing a real time implementation to operate on data arriving at high data rates (from 1 kHz to 100 kHz), and with loosely known statistics and properties. In addition, other cost functions besides least squares are also used. One of the first suggestions of adaptive DSP algorithms was in Widrow and Hoff's classic paper on adaptive switching circuits and the least mean squares (LMS) algorithm at the IRE WESCON Conference in 1960. This paper stimulated great interest by providing a practical and potentially real time solution for least squares implementation. Widrow followed up this work with two definitive and classic papers on adaptive signal processing in the 1970s [152], [153]. Adaptive signal processing has found many applications. A generic breakdown of these applications can be made into the following categories of signal processing problems: signal detection (is it there?), signal estimation (what is it?), parameter or state estimation, signal compression, signal synthesis, signal classification, etc. The common attributes of adaptive signal processing applications include time varying (adaptive) computations (processing) using sensed input values (signals). See also Acoustic Echo Cancellation, Active Noise Control, Adaptive Filter, Adaptive Line Enhancer, Echo Cancellation, Least Mean Squares (LMS) Algorithm, Least Squares, Noise Cancellation, Recursive Least Squares, Wiener-Hopf Equations.

Adaptive Spectral Perceptual Entropy Coding (ASPEC): ASPEC is a means of providing psychoacoustic compression of hi-fidelity audio and was developed by AT&T Bell Labs, Thomson and the Fraunhofer society amongst others. In 1990 features of the ASPEC coding system were incorporated into the International Organization for Standards (ISO) MPEG-1 standard in combination with MUSICAM. See also Masking Pattern Adapted Universal Subband Integrated Coding and Multiplexing (MUSICAM), Precision Adaptive Subband Coding (PASC), Spectral Masking, Psychoacoustics, Temporal Masking.

Adaptive Step Size: See Step Size Parameter.

Adaptive Transform Acoustic Coding (ATRAC): ATRAC coding is used for compression of hi-fidelity audio (usually starting with 16 bit data at 44.1 kHz) to reduce the storage requirement on recording mediums such as the mini-disc (MD) [155]. ATRAC achieves a compression ratio of almost 5:1 with very little perceived difference to uncompressed PCM quality. ATRAC exploits psychoacoustic (spectral) masking properties of the human ear and effectively compresses data by varying the bit resolution used to code different parts of the audio spectrum. More information on the mini-disc (and also ATRAC) can be found in [155].

ATRAC has three key coding stages. First is the subband filtering which splits the signal into three subbands (low: 0 - 5.5 kHz; mid: 5.5 - 11 kHz; high: 11 - 22 kHz) using a two stage quadrature mirror filter (QMF) bank. The second stage then performs a modified discrete cosine transform (MDCT) to produce a frequency representation of the signal. The actual length (no. of samples) of the transform is controlled adaptively via an internal decision process and either uses time frame lengths of 11.6 ms (when in long mode) for all frequency bands, or 1.45 ms (when in short mode) for the high frequency band and 2.9 ms (also called short mode) for the low and mid frequency bands. The choice of mode is usually long, however if a signal has rapidly varying instantaneous power (when say a cymbal is struck) short mode may be required in the low and mid frequency bands to adequately code the rapid attack portion of the waveform. Finally the third stage is to consider the spectral characteristics of the three subbands and allocate bit resolution such that spectral components below the threshold of hearing are not encoded, and that the spectral masking attributes of the signal spectrum are exploited such that the number of bits required to code certain frequency bands is greatly reduced. (See entry for Precision Adaptive Subband Coding (PASC) for a description of quantization noise masking.) ATRAC splits the frequencies from the MDCT into a total of 52 frequency bins which are of varying bandwidth based on the width of the critical bands in the human auditory mechanism. ATRAC then compands and requantizes using a block floating point representation. The wordlength is determined by the bit allocation process based on psychoacoustic models. Each input 11.6 ms time frame of 512 × 16 bit samples or 1024 bytes is compressed to 212 bytes (4.83:1 compression ratio).
[Figure: The three stages of adaptive transform acoustic coding (ATRAC): (1) quadrature mirror filter (QMF) subband coding of the digital audio input (44.1 kHz, 16 bits; 1.4112 Mbits/s) into the 0 - 5.5125 kHz, 5.5125 - 11.025 kHz and 11.025 - 22.05 kHz bands; (2) Modified Discrete Cosine Transform (MDCT) of each band; (3) bit allocation and spectral masking/quantization decision. Data is input for coding in time frames of 512 samples (1024 bytes) and compressed into 212 bytes.]

ATRAC decoding from compressed format back to 44.1 kHz PCM format is achieved by first performing an inverse MDCT on the three subbands (using long mode or short mode data lengths as specified in the coded data). The three time domain signals produced are then reconstructed back into a time domain signal using QMF synthesis filters for output to a DAC. See also Compact Disc, Data Compression, Frequency Range of Hearing, MiniDisc (MD), Psychoacoustics, Precision Adaptive Subband Coding (PASC), Spectral Masking, Subband Filtering, Temporal Masking, Threshold of Hearing.

Additive White Gaussian Noise: The most commonly assumed noise channel in the analysis and design of communications systems. Why is this so? Well, for one, this assumption allows analysis of the resulting system to be tractable (i.e., we can do the analysis). In addition, this is a very good model of electronic circuit noise. In communication systems the modulated signal is often so weak that this circuit noise becomes a dominant effect. The model of a flat (i.e., white) spectrum is good in electronic circuits up to about 10^12 Hz. See also White Noise.

Address Bus: A collection of wires that are used for sending memory address information either inter-chip (between chips) or intra-chip (within a chip). Typically DSP address buses are 16 or 32 bits wide. See also DSP Processor.

Address Registers: Memory locations inside a DSP processor that are used as temporary storage space for addresses of data stored somewhere in memory. The address register width is always greater than or equal to (normally the same as) the width of the DSP processor address bus. Most DSP processors have a number of address registers. See also DSP Processor.

AES/EBU: See Audio Engineering Society, European Broadcast Union.

Aliasing: An irrecoverable effect of sampling a signal too slowly. High frequency components of a signal (over one-half the sampling frequency) cannot be accurately reconstructed in a digital system. Intuitively, the problem of sampling too slowly (aliasing) can be understood by considering that rapidly varying signal fluctuations that take place in between samples cannot be represented at the output. The distortion created by sampling these high frequency signals too slowly is not reversible and can only be avoided by proper aliasing protection as provided by an anti-alias filter or an oversampled analog to digital converter.

[Figure: Sampling a 100 Hz sine wave (period 1/f) at only 80 Hz causes aliasing, and the output samples are interpreted as a 20 Hz sine wave.]
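This particular example is easy to check numerically; in the short Python sketch below (NumPy assumed) the sampled 100 Hz sine is indistinguishable from a 20 Hz sine sampled at the same 80 Hz rate.

```python
import numpy as np

f_signal, fs = 100.0, 80.0            # 100 Hz sine sampled at only 80 Hz
k = np.arange(16)
samples = np.sin(2 * np.pi * f_signal * k / fs)

# The same samples are produced by a 20 Hz sine (the alias |100 - 80| Hz):
alias = np.sin(2 * np.pi * 20.0 * k / fs)
print(np.allclose(samples, alias))    # True: the two are indistinguishable after sampling
```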

See also Anti-alias Filter, Oversampling.

Algorithm: A mathematically based computational method which forms a set of well defined rules or equations for performing a particular task. For example, the FFT algorithm can be coded into a DSP processor assembly language and then used to calculate FFTs from stored (or real-time) digital data.

All-pass Filter: An all-pass filter passes all input frequencies with the same gain, although the phase of the signal will be modified. (A true all-pass filter has a gain of one.) All-pass filters are used for applications such as group delay equalisation, notch filter design, Hilbert transform implementation, and musical instrument synthesis [43]. The simplest all-pass filter is a simple delay! This "filter" passes all frequencies with the same gain, has a linear phase response and introduces a group delay of one sample at all frequencies:

    time domain:   y(k) = x(k − 1)
    z-domain:      Y(z) = z^{-1} X(z),   H(z) = Y(z)/X(z) = z^{-1}

A simple all pass filter. All frequencies are passed with the same gain.

A more general representation of some types of all-pass filters is the general z-domain transfer function for an infinite impulse response (IIR) N pole, N zero filter:

    H(z) = \frac{Y(z)}{X(z)} = \frac{a_0^* z^{-N} + a_1^* z^{-N+1} + \dots + a_{N-1}^* z^{-1} + a_N^*}{a_0 + a_1 z^{-1} + \dots + a_{N-1} z^{-N+1} + a_N z^{-N}} = z^{-N} \frac{A^*(z^{-1})}{A(z)}    (6)

where a^* is the complex conjugate of a. Usually the filter weights are real, therefore a = a^*, and we set a_0 = 1:

    H(z) = \frac{Y(z)}{X(z)} = \frac{z^{-N} + a_1 z^{-N+1} + \dots + a_{N-1} z^{-1} + a_N}{1 + a_1 z^{-1} + \dots + a_{N-1} z^{-N+1} + a_N z^{-N}} = z^{-N} \frac{A(z^{-1})}{A(z)}    (7)


We can easily show that |H(e^{jω})| = 1 (see below) for all frequencies. Note that the numerator polynomial z^{-N} A(z^{-1}) is simply the order reversed z-polynomial of the denominator A(z). For an input signal x(k) the discrete time output of an all-pass filter is:

    y(k) = a_N x(k) + a_{N-1} x(k-1) + \dots + a_1 x(k-N+1) + x(k-N) - a_1 y(k-1) - \dots - a_{N-1} y(k-N+1) - a_N y(k-N)    (8)

In order to be stable, the poles of the all-pass filter must lie within the unit circle. Therefore for the denominator polynomial, if the N roots of the polynomial A(z) are given by

    A(z) = (1 - p_1 z^{-1})(1 - p_2 z^{-1}) \dots (1 - p_N z^{-1})    (9)

then |p_n| < 1 for n = 1 to N in order to ensure all poles are within the unit circle. The poles and zeroes of the all-pass filter are therefore:

    H(z) = a_N \frac{(1 - p_1^{-1} z^{-1})(1 - p_2^{-1} z^{-1}) \dots (1 - p_N^{-1} z^{-1})}{(1 - p_1 z^{-1})(1 - p_2 z^{-1}) \dots (1 - p_N z^{-1})}    (10)

where the roots of the zeroes polynomial z^{-N} A(z^{-1}) are easily calculated to be the inverses of the poles (see following example).
To illustrate the relationship between the roots of a z-domain polynomial and those of its order reversed polynomial, consider a polynomial of order 3 with roots at z = p_1, z = p_2 and z = p_3:

    1 + a_1 z^{-1} + a_2 z^{-2} + a_3 z^{-3} = (1 - p_1 z^{-1})(1 - p_2 z^{-1})(1 - p_3 z^{-1})
        = 1 - (p_1 + p_2 + p_3) z^{-1} + (p_1 p_2 + p_2 p_3 + p_1 p_3) z^{-2} - p_1 p_2 p_3 z^{-3}

Then replacing z with z^{-1} gives:

    1 + a_1 z + a_2 z^2 + a_3 z^3 = (1 - p_1 z)(1 - p_2 z)(1 - p_3 z)

and therefore multiplying both sides by z^{-3} gives:

    z^{-3} + a_1 z^{-2} + a_2 z^{-1} + a_3 = (z^{-1} - p_1)(z^{-1} - p_2)(z^{-1} - p_3)
        = -p_1 p_2 p_3 (1 - p_1^{-1} z^{-1})(1 - p_2^{-1} z^{-1})(1 - p_3^{-1} z^{-1})
        = a_3 (1 - p_1^{-1} z^{-1})(1 - p_2^{-1} z^{-1})(1 - p_3^{-1} z^{-1})

hence revealing the roots of the order reversed polynomial to be at z = 1/p_1, z = 1/p_2 and z = 1/p_3.

Of course, if all of the poles of Eq. 10 lie within the z-domain unit circle then all of the zeroes of the numerator of Eq. 10 will necessarily lie outside of the unit circle of the z-domain, i.e. when |p_n| < 1 for n = 1 to N then |p_n^{-1}| > 1 for n = 1 to N. Therefore an all-pass filter is maximum phase. The magnitude frequency response of the pole at z = p_i and the zero at z = p_i^{-1} is:

    |H_i(e^{jω})| = \left. \frac{|1 - p_i^{-1} z^{-1}|}{|1 - p_i z^{-1}|} \right|_{z = e^{jω}} = \frac{1}{|p_i|}    (11)


If we let p_i = x + jy then, noting that for a real-coefficient filter the zero 1/p_i^* is present whenever the pole p_i is, the frequency response of a pole-zero pair is found by evaluating the transfer function at z = e^{jω}:

    H_i(e^{jω}) = \frac{1 - (p_i^*)^{-1} e^{-jω}}{1 - p_i e^{-jω}} = \frac{1}{p_i^*} \cdot \frac{p_i^* - e^{-jω}}{1 - p_i e^{-jω}} = \frac{1}{p_i^*} G(e^{jω})

where |G(e^{jω})| = 1. This can be shown by first considering that:

    G(e^{jω}) = \frac{x - jy - (\cos ω - j \sin ω)}{1 - (x + jy)(\cos ω - j \sin ω)} = \frac{(x - \cos ω) - j(y - \sin ω)}{1 - x \cos ω - y \sin ω + j(x \sin ω - y \cos ω)}

and therefore the (squared) magnitude frequency response of G(e^{jω}) is:

    |G(e^{jω})|^2 = \frac{(x - \cos ω)^2 + (y - \sin ω)^2}{(1 - (x \cos ω + y \sin ω))^2 + (x \sin ω - y \cos ω)^2}
                  = \frac{1 + x^2 + y^2 - 2x \cos ω - 2y \sin ω}{1 + x^2 + y^2 - 2x \cos ω - 2y \sin ω} = 1

Hence:

    |H_i(e^{jω})| = \frac{1}{|p_i|} = \frac{1}{\sqrt{x^2 + y^2}}

Therefore the magnitude frequency response of the all-pass filter in Eq. 10 is indeed "flat" and given by:

    |H(e^{jω})| = |a_N| |H_1(e^{jω})| |H_2(e^{jω})| \dots |H_N(e^{jω})| = \frac{|a_N|}{|p_1 p_2 \dots p_N|} = 1    (12)

From Eqs. 7 and 10 it is easy to show that |a_N| = |p_1 p_2 \dots p_N|.
Consider the poles and zeroes of a simple 2nd order all-pass filter transfer function (found by simply using the quadratic formula):

    H(z) = \frac{1 + 2z^{-1} + 3z^{-2}}{3 + 2z^{-1} + z^{-2}}
         = \frac{(1 - (-1 + j\sqrt{2}) z^{-1})(1 - (-1 - j\sqrt{2}) z^{-1})}{3 (1 - \frac{-1 + j\sqrt{2}}{3} z^{-1})(1 - \frac{-1 - j\sqrt{2}}{3} z^{-1})}
         = p_1 p_2 \cdot \frac{(1 - p_1^{-1} z^{-1})(1 - p_2^{-1} z^{-1})}{(1 - p_1 z^{-1})(1 - p_2 z^{-1})}

and obviously p_1 = (-1 - j\sqrt{2})/3 and p_2 = (-1 + j\sqrt{2})/3, with p_1^{-1} = -1 + j\sqrt{2} and p_2^{-1} = -1 - j\sqrt{2}. This example demonstrates that, given that the poles must be inside the unit circle for a stable filter, the zeroes will always be outside of the unit circle, i.e. maximum phase.

[Figure: z-domain plot of the poles (inside the unit circle) and zeroes (outside the unit circle) of this 2nd order all-pass filter.]

Any non-minimum phase system (i.e. zeroes outside the unit circle) can always be described as a cascade of a minimum phase filter and a maximum phase all-pass filter. Consider the non-minimum phase filter:

    H(z) = \frac{(1 - α_1 z^{-1})(1 - α_2 z^{-1})(1 - α_3 z^{-1})(1 - α_4 z^{-1})}{(1 - β_1 z^{-1})(1 - β_2 z^{-1})(1 - β_3 z^{-1})}    (13)

where the poles, β_1, β_2, and β_3 are inside the unit circle (to ensure a stable filter) and the zeroes α_1 and α_2 are inside the unit circle, but the zeroes α_3 and α_4 are outside of the unit circle. This filter can be written in the form of a minimum phase system cascaded with an all-pass filter by rewriting as:

    H(z) = \left[ \frac{(1 - α_1 z^{-1})(1 - α_2 z^{-1})(1 - α_3 z^{-1})(1 - α_4 z^{-1})}{(1 - β_1 z^{-1})(1 - β_2 z^{-1})(1 - β_3 z^{-1})} \right] \left[ \frac{(1 - α_3^{-1} z^{-1})(1 - α_4^{-1} z^{-1})}{(1 - α_3^{-1} z^{-1})(1 - α_4^{-1} z^{-1})} \right]

         = \underbrace{\left[ \frac{(1 - α_1 z^{-1})(1 - α_2 z^{-1})(1 - α_3^{-1} z^{-1})(1 - α_4^{-1} z^{-1})}{(1 - β_1 z^{-1})(1 - β_2 z^{-1})(1 - β_3 z^{-1})} \right]}_{\text{Minimum phase filter}} \; \underbrace{\left[ \frac{(1 - α_3 z^{-1})(1 - α_4 z^{-1})}{(1 - α_3^{-1} z^{-1})(1 - α_4^{-1} z^{-1})} \right]}_{\text{All-pass maximum phase filter}}    (14)

Therefore the minimum phase filter has zeroes inside the unit circle at z = α_3^{-1}, z = α_4^{-1} and has exactly the same magnitude frequency response as the original filter, the gain of the all-pass filter being 1. See also All-pass Filter - Phase Compensation, Digital Filter, Infinite Impulse Response Filter, Notch Filter.
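The flat magnitude and non-linear phase of such an all-pass filter are easy to confirm numerically. The short Python sketch below (NumPy and SciPy assumed) uses the 2nd order example above, whose numerator coefficients are simply the reversed denominator coefficients.

```python
import numpy as np
from scipy.signal import freqz, group_delay

b = [1.0, 2.0, 3.0]            # numerator: reversed denominator coefficients
a = [3.0, 2.0, 1.0]            # denominator (poles inside the unit circle)

w, h = freqz(b, a, worN=512)
print(np.max(np.abs(np.abs(h) - 1.0)))     # ~0: the magnitude response is flat (all-pass)

w, gd = group_delay((b, a), w=512)
print(gd[:4])                              # group delay varies with frequency (phase is not linear)
```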

All-pass Filter, Phase Compensation: All pass filters are often used for phase compensation or group delay equalisation where the aim is to cascade an all-pass filter with a particular filter in order to achieve a linear phase response in the passband and leave the magnitude frequency response unchanged. (Given that signal information in the stopband is unwanted then there is usually no need to phase compensate there!). Therefore if a particular filter has a non-linear phase response and therefore non-constant group delay, then it may be possible to design a phase compensating all-pass filter:
[Figure: Cascading an all-pass filter H_A(z) with a non-linear phase filter G(z) in order to linearise the phase response and therefore produce a constant group delay. The magnitude frequency response |G(e^{jω})H_A(e^{jω})| of the cascaded system is the same as |G(e^{jω})| for the original system, while the cascaded phase response is linearised in the passband.]

See also Digital Filter, Infinite Impulse Response Filter, Notch Filter.


All-pass Filter, Fractional Sample Delay Implementation: If it is required to delay a digital signal by a number of discrete sample delays, this is easily accomplished using delay elements:

[Figure: Delaying a signal by 3 samples, using simple delay elements: y(k) = x(k − 3), where one sample period is T = 1/fs seconds.]

Using DSP techniques to delay a signal by a time that is an integer number of sample periods t_s = 1/f_s is therefore relatively straightforward. However delaying by a time that is not an integer number of sampling delays (i.e. a fractional delay) is less straightforward. Another method uses a simple first order all-pass filter to "approximately" implement a fractional sampling delay. Consider the all-pass filter:

    H(z) = \frac{z^{-1} + a}{1 + a z^{-1}}    (15)

To find the phase response, we first calculate:

    H(e^{jω}) = \frac{e^{-jω} + a}{1 + a e^{-jω}} = \frac{\cos ω - j \sin ω + a}{1 + a \cos ω - ja \sin ω} = \frac{(a + \cos ω) - j \sin ω}{1 + a \cos ω - ja \sin ω}    (16)

and therefore:

    ∠H(e^{jω}) = \tan^{-1}\!\left( \frac{-\sin ω}{a + \cos ω} \right) + \tan^{-1}\!\left( \frac{a \sin ω}{1 + a \cos ω} \right)    (17)

For small values of x the approximations tan^{-1} x ≈ x, cos x ≈ 1 and sin x ≈ x hold. Therefore in Eq. 17, for small values of ω we get:

    ∠H(e^{jω}) ≈ \frac{-ω}{a + 1} + \frac{aω}{1 + a} = -\frac{1 - a}{1 + a} ω = -δω    (18)

where δ = (1 − a)/(1 + a). Therefore at "small" frequencies the phase response is linear, thus giving a constant group delay of δ samples. Hence if a signal with a low frequency value f_i, where 2πf_i ...
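A short numerical check of this fractional delay approximation, assuming NumPy and SciPy are available; the requested delay of 0.4 samples is an arbitrary illustrative value.

```python
import numpy as np
from scipy.signal import group_delay

delta = 0.4                        # desired fractional delay (samples)
a = (1 - delta) / (1 + delta)      # from delta = (1 - a)/(1 + a)

b = [a, 1.0]                       # H(z) = (a + z^-1)/(1 + a z^-1)
den = [1.0, a]

w, gd = group_delay((b, den), w=np.linspace(0.01, np.pi, 256))
print(gd[:3])                      # ~0.4 samples at low frequencies
print(gd[-3:])                     # the approximation degrades towards fs/2
```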

Clock: A device which produces a periodic square wave that can be used to synchronize a DSP system. Current technology can produce extremely accurate clocks into the MHz range of frequencies.

Clock Jitter: If the edges of a clock vary in time about their nominal position in a stochastic manner, then this is clock jitter. In ADCs and DACs clock jitter will manifest itself as a raising of the noise floor [78]. See also Quantization Noise.

CMOS (Complementary Metal Oxide Silicon): The (power efficient) integration technology used to fabricate most DSP processors.

Cochlea: The mechanics of the cochlea convert the vibrations from the bones of the middle ear (i.e., the ossicles, often called the hammer, anvil and stirrup) into excitation of the acoustic nerve endings. This excitation is perceived as sound by the brain. See also Ear.

Codebook Coding: A technique for data compression based on signal prediction. The compressed estimate is derived by finding the model that most closely matches the signal based on previous signals. Only the error between the selected model and the actual signal needs to be transmitted. For many types of signal this provides excellent data compression since, provided the codebook is sufficiently large, errors will be small. See also Compression.


Codec: A COder and DECoder. Often used to describe a matched pair of A/D and D/A converters on a single CODEC chip, usually with logarithmic quantizers (A-law for Europe and µ-law for the USA).

Coded Excited Linear Prediction Vocoders (CELP): The CELP vocoder is a speech encoding scheme that can offer good quality speech at relatively low bit rates (4.8 kbits/sec) [133]. The drawback is that this vocoder scheme has a very high computational requirement. CELP is essentially a vector quantization scheme using a codebook at both analyzer and synthesizer. Using CELP a 200 Mbyte hard disk drive could store close to 100 hours of digitized speech. See also Compression.

Coherent: Refers to a detection or demodulation technique that exploits and requires knowledge of the phase of the carrier signal. Incoherent or Noncoherent refers to techniques that ignore or do not require this phase information.

Color Subsampling: A technique widely used in video compression algorithms such as MPEG-1. Color subsampling exploits the fact that the eye is less sensitive to the color (or chrominance) part of an image compared to the luminance part. Since the eye is not as sensitive to changes in color in a small neighborhood of a given pixel, this information is subsampled by a factor of two in each dimension. This subsampling results in one-fourth of the number of chrominance pixels (for each of the two chrominance fields) as are used for the luminance field (or brightness). See also Moving Picture Experts Group.

Column Vector: See Vector.

Comb Filter: A comb digital filter is so called because the magnitude frequency response is periodic and resembles that of a comb. (It is worth noting that the term "comb filter" is not always used consistently in the DSP community.) Comb filters are very simple to implement, either as an FIR filter type structure where all weights are either 1 or 0, or as single pole IIR filters. Consider a simple FIR comb filter:

[Figure: A simple FIR comb filter: the input x(k) passes through N delay elements and is added to (or subtracted from) itself, y(k) = x(k) ± x(k − N).]

The simple comb filter can be viewed as an FIR filter where the first and last filter weights are 1, and all other weights are zero. The comb filter can be implemented with only a shift register and an adder; multipliers are not required. If the two samples are added then the comb filter has a linear gain factor of 2 (i.e. 6 dB) at 0 Hz (DC), thus in some sense giving a low pass characteristic at low frequencies. If they are subtracted the filter has a gain of 0, giving in some sense a band stop filter characteristic at low frequencies.

The transfer function for the FIR comb filter can be found as:

    Y(z) = X(z) ± z^{-N} X(z) = (1 ± z^{-N}) X(z)
    ⇒ H(z) = \frac{Y(z)}{X(z)} = (1 ± z^{-N})    (46)


The zeroes of the comb filter are the N roots of the z-domain polynomial 1 ± z^{-N}. For the case where the samples are subtracted:

    1 - z^{-N} = 0 ⇒ z_n = (1)^{1/N} = (e^{j2πn})^{1/N}, noting e^{j2πn} = 1
    ⇒ z_n = e^{j2πn/N}, where n = 0 … N-1    (47)

And for the case where the samples are added:

    1 + z^{-N} = 0 ⇒ z_n = (-1)^{1/N} = (e^{j2π(n + 1/2)})^{1/N}, noting e^{j2π(n + 1/2)} = -1
    ⇒ z_n = e^{j2π(n + 1/2)/N}, where n = 0 … N-1    (48)

As an example, consider a comb filter H(z) = 1 + z^{-8} and a sampling rate of f_s = 10000 Hz. The impulse response, h(n), frequency response, H(f), and zeroes of the filter can be illustrated as:

[Figure: The impulse response, z-domain plot of the zeroes, and magnitude frequency response (log and linear, 0 to 5000 Hz) of the comb filter H(z) = 1 + z^{-8}. Note that the comb filter is like a set of frequency selective bandpass filters, with the first half-band filter having a low pass characteristic. The number of bands from 0 Hz to fs/2 is N/2. The zeroes are spaced equally around the unit circle and symmetrically about the x-axis, with no zero at z = 1. (There is a zero at z = −1 if N is odd.)]

For the comb filter H(z) = 1 − z^{-8} and a sampling rate of f_s = 10000 Hz, the impulse response, h(n), frequency response, H(f), and zeroes of the filter are:

[Figure: The impulse response, z-domain plot of the zeroes, and magnitude frequency response (log and linear, 0 to 5000 Hz) of the comb filter H(z) = 1 − z^{-8}. The zeroes are spaced equally around the unit circle and symmetrically about the x-axis. There is a zero at z = 1. There is not a zero at z = −1 if N is odd.]
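The zero locations and magnitude responses quoted in these two examples can be verified numerically; the following Python sketch (NumPy and SciPy assumed) checks both H(z) = 1 + z^{-8} and H(z) = 1 − z^{-8}.

```python
import numpy as np
from scipy.signal import freqz

N, fs = 8, 10000

for sign, label in [(+1, "1 + z^-8"), (-1, "1 - z^-8")]:
    b = np.zeros(N + 1); b[0] = 1.0; b[-1] = sign      # H(z) = 1 +/- z^-N
    zeros = np.roots(b)
    print(label, "zero at z=1:", np.any(np.isclose(zeros, 1.0)),
          " zero at z=-1:", np.any(np.isclose(zeros, -1.0)))
    w, h = freqz(b, worN=1024, fs=fs)
    print(label, "gain at DC:", abs(h[0]), " max gain:", np.max(np.abs(h)))
```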

FIR comb filters have linear phase and are unconditionally stable (as are all FIR filters). For more information on unconditional stability and linear phase see the entry for Finite Impulse Response Filters. Another type of comb filter magnitude frequency response can be produced from a single pole IIR filter:

[Figure: A single pole IIR comb filter, y(k) = x(k) ± b y(k − N), realised with N delay elements and a feedback weight b. The closer the weight value b is to 1, the sharper the teeth of the comb filter in the frequency domain (see below). b is of course less than 1, or instability results.]

This type of comb filter is often used in music synthesis and for soundfield processing [43]. Unlike the FIR comb filter note that this comb filter does require at least one multiplication operation. Consider the difference equation of the above single pole IIR comb filter:

    y(k) = x(k) ± b y(k - N)
    ⇒ G(z) = \frac{Y(z)}{X(z)} = \frac{1}{1 ∓ b z^{-N}}    (49)

For a sampling rate of f_s = 10000 Hz, N = 8 and b = 0.6, the impulse response g(n), the frequency response, G(f), and the poles of the filter G(z) = 1/(1 − 0.6 z^{-8}) are:

[Figure: The z-domain plot of the filter poles and the log magnitude frequency response (0 to 5000 Hz) of the one pole comb filter G(z) = 1/(1 − 0.6 z^{-8}). The poles are inside the unit circle and lie on a circle of radius 0.6^{1/8} = 0.938…. As the feedback weight value, b, is decreased (closer to 0), the poles move away from the unit circle towards the origin, and the peaks of the magnitude frequency response become less sharp and provide less gain.]

Increasing the feedback weight, b, to be very close to 1, the "teeth" of the filter become sharper and the gain increases:

[Figure: The z-domain plot of the filter poles and the log magnitude frequency response (0 to 5000 Hz) of the one pole comb filter G(z) = 1/(1 − 0.9 z^{-8}). The poles are just inside the unit circle and lie on a circle of radius 0.9^{1/8} = 0.987….]
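A quick numerical check of these two cases (assuming NumPy): the pole radius is b^{1/N}, and the gain at the peaks of the teeth, where z^{-N} = 1, is 1/(1 − b).

```python
import numpy as np

N = 8
for b in (0.6, 0.9):
    pole_radius = b ** (1.0 / N)
    peak_gain = 1.0 / (1.0 - b)          # |G| at the resonant frequencies 2*pi*n/N
    print(f"b={b}: pole radius {pole_radius:.3f}, peak gain {20*np.log10(peak_gain):.1f} dB")
```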

Of course if b is increased such that b ≥ 1 then the filter is unstable. The IIR comb filter is mainly used in computer music [43] for simulation of musical instruments and in soundfield processing [33] to simulate reverberation. Finally it is worth noting again that the term "comb filter" is used by some to refer to the single pole IIR comb filter described above, and the term "inverse comb filter" to the FIR comb filter, both described above. Other authors refer to both as comb filters. The uniting feature of all comb filters, however, is the periodic (comb like) magnitude frequency response. See also Digital Filter, Finite Impulse Response Filter, Finite Impulse Response Filter - Linear Phase, Infinite Impulse Response Filter, Moving Average Filter.

CDs use cross-interleaved Reed-Solomon coding for error protection. See also Digital Audio Tape (DAT), Red Book, Cross-Interleaved Reed-Solomon Coding.


Compact Disc-Analogue Records Debate: Given that the bandwidth of hi-fidelity digital audio systems is up to 22.05kHz for compact disc (CD) and 24kHz for DAT, it would appear that the full range of hearing is more than covered. However this is one of the key issues of the CD-analogue records debate. The argument of some analog purists is that although humans cannot perceive individual tones above 20kHz, when listening to musical instruments which produce harmonic frequencies above the human range of hearing these high frequencies are perceived in some "collective" fashion. This adds to the perception of live as opposed to recorded music; the debate will probably continue into the next century. See also Compact Disc, Frequency Range of Hearing, Threshold of Hearing.

Compact Disc ROM (CD-ROM): As well as music, CDs can be used to store general purpose computer data, or even video. Thus the disc acts like a Read Only Memory (ROM).

Companders: Compressor and expander (compander) systems are used to improve the SNR of channels. Such systems initially attenuate high level signal components and amplify low level signals (compression). When the signal is received, the lower level signals appear at the receiving end at a level above the channel noise, and when expansion (the inverse of the compression function) is applied an improved signal to noise ratio is maintained. In addition, the original signal is preserved by the inverse relationship between the compression and expansion functions. In the absence of quantization, companders provide two inverse 1-1 mappings that allow the original signal to be recovered exactly. Quantization introduces an irreversible distortion, of course, that does not allow exact recovery of the original signal. See also A-law and µ-law.

Comparator: A device which compares two inputs, and gives an output indicating which input is the larger.

Complex Base: In everyday life base 10 (decimal) is used for numerical manipulation, and inside computers base 2 (binary) is used. When complex numbers are manipulated inside a DSP processor, the real parts and complex parts are treated separately. Therefore to perform the complex multiplication:

(a + jb)(c + jd) = (ac - bd) + j(ad + bc) (51)

where 16 bit numbers are used to represent a, b, c, and d will require four separate real number multiplications and two additions. Therefore an interesting alternative (although not used in practice to the authors' knowledge) is to use the complex base (1 + j), where only the digits 0, 1, and j are used. Setting up a table of the powers of this base gives:

(1+j)^4 = -4 | (1+j)^3 = -2+2j | (1+j)^2 = 2j | (1+j)^1 = 1+j | (1+j)^0 = 1 | Complex Decimal
0 | 0 | 1 | 1 | 0 | 1 + 3j
0 | 0 | 0 | j | 0 | -1 + j
0 | j | 1 | 1 | 1 | j
1 | 0 | 1 | 1 | 0 | -3 + 3j

Numbers in the complex base ( 1 + j ) can then be arithmetically manipulated (addition, subtraction, multiplication) although this is not as straightforward as for binary!
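As an illustration of how such a representation can be evaluated, the following short Python sketch (purely illustrative, not part of any standard DSP library; the digit lists are taken from the table above) converts a string of digits drawn from {0, 1, j} in the base (1 + j) into an ordinary complex number:

    # Evaluate a number expressed in the complex base (1 + j) using only the
    # digits 0, 1 and j. The most significant digit comes first in the list.

    BASE = 1 + 1j

    def complex_base_to_decimal(digits):
        """digits is a list of complex digit values, e.g. [0, 0, 1, 1, 0]."""
        value = 0 + 0j
        for d in digits:
            value = value * BASE + d   # Horner's rule, exactly as for base 10 or base 2
        return value

    # The first row of the table above: digits 0, 0, 1, 1, 0 -> 1 + 3j
    print(complex_base_to_decimal([0, 0, 1, 1, 0]))   # (1+3j)
    # The last row: digits 1, 0, 1, 1, 0 -> -3 + 3j
    print(complex_base_to_decimal([1, 0, 1, 1, 0]))   # (-3+3j)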


Complex Conjugate: A complex number is conjugated by negating the complex part of the number. The complex conjugate is often denoted by a "*". For example, if a = 5 + 7j, then a* = 5 - 7j. (A complex number and its conjugate are often called a conjugate pair.) Note that the product aa* is always a real number:

aa* = (5 + 7j)(5 - 7j) = 25 - 35j + 35j + 49 = 25 + 49 = 74 (52)

and can clearly be calculated by summing the squares of the real and complex parts. (The square root of the product aa* is often referred to as the magnitude of the complex number.) The conjugate of a complex number expressed as a complex exponential is obtained by negating the exponential power:

(e^jω)* = e^-jω (53)

This can be easily seen by noting that:

e^jω = cos ω + j sin ω (54)

and

e^-jω = cos(-ω) + j sin(-ω) = cos ω - j sin ω (55)

given that cosine is an even function, and sine is an odd function. Therefore:

e^jω e^-jω = e^0 = cos²ω + sin²ω = 1 (56)

A simple rule for taking a complex conjugate is: "replace any j by -j". See also Complex Numbers.

Complex Conjugate Reciprocal: The complex conjugate reciprocal of a complex number is found by taking the reciprocal of the complex conjugate of the number. For example, if z = a + bj, then the complex conjugate reciprocal is:

1 ⁄ z* = 1 ⁄ (a - bj) = (a + bj) ⁄ (a² + b²) (57)

See also Complex Numbers, Pole-Zero Flipping.

Complex Exponential Functions: An exponent of a complex number times t, the time variable, provides a fundamental and ubiquitous signal type for linear systems analysis: the damped exponential. These signals describe many electrical and mechanical systems encountered in everyday life, like the suspension system for an automobile. See also Damped Exponential.

Complex LMS: See LMS algorithm.

Complex Numbers: A complex number contains both a real part and a complex part. The complex part is multiplied by the imaginary number j, where j is the square root of -1. (In other branches of applied mathematics i is usually used to represent the imaginary number, however in electrical engineering j is used because the letter i is used to denote electrical current.) For the complex number:

a + jb (58)


a is the real part, where a ∈ ℜ (ℜ is the set of real numbers), and jb is the imaginary part, where b ∈ ℜ. Complex arithmetic can be performed and the result expressed as a real part and imaginary part. For addition:

(a + jb) + (c + jd) = (a + c) + j(b + d) (59)

and for multiplication:

(a + jb)(c + jd) = (ac - bd) + j(ad + bc) (60)

Complex number notation is used to simplify Fourier analysis by allowing the expression of complex sinusoids using the complex exponential e^jω = cos ω + j sin ω. Also in DSP, complex numbers provide a convenient way of representing a two dimensional space, for example in an adaptive beamformer (a two dimensional space), or in an adaptive decision feedback equaliser where the in-phase component is represented by the real part and the quadrature phase component by the imaginary part. See also Complex Conjugate, Complex Sinusoid.

Complex Plane: The complex plane allows the representation of complex numbers by plotting the real part of a complex number on the x-axis, and the imaginary part of the number on the y-axis.
[Figure: the complex plane, with the real part ℜ on the horizontal axis and the imaginary part ℑ on the vertical axis; the example points 2 + 3j, 3, and -3.51 - 3.49j are plotted.]

If a complex number is written as a complex exponential, then the complex plane plot can be interpreted as a phasor diagram, such that for the complex number a + jb:

a + jb = Me^jθ, where M = √(a² + b²) (61)

and

θ = tan⁻¹(b ⁄ a). (62)


If θ is a time dependent function such that θ = ωt, then the phasor will rotate in a counter-clockwise direction with an angular frequency of ω radians per second (or ω ⁄ (2π) rotations per second, i.e., cycles per second or Hertz). See also z-plane, Complex Exponential.

[Figure: a phasor of magnitude M and angle θ in the complex plane, with real part a and imaginary part b, rotating counter-clockwise at ω radians per second.]
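A minimal sketch of this rectangular to polar relationship, using Python's standard cmath module (the numerical values are illustrative assumptions only):

    import cmath
    import math

    z = 2 + 3j                      # the example point 2 + 3j from the complex plane figure

    M = abs(z)                      # M = sqrt(a^2 + b^2)
    theta = cmath.phase(z)          # theta = atan2(b, a), in radians

    print(M, theta)                 # 3.605..., 0.982... rad

    # Reconstructing a + jb from the polar (phasor) form M * e^{j*theta}
    z_reconstructed = M * cmath.exp(1j * theta)
    print(z_reconstructed)          # (2+3j) to within rounding error

    # A phasor rotating at angular frequency w rad/s: its value at time t is M*e^{j(w*t + theta)}
    w = 2 * math.pi * 50            # e.g. 50 Hz, i.e. 100*pi rad/s
    t = 0.001
    print(M * cmath.exp(1j * (w * t + theta)))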

Conjugate Reciprocal: See Complex Conjugate Reciprocal.

Complex Roots: When the roots of a polynomial are calculated, if there is no real solution then the roots are said to be complex. As an example consider the following quadratic polynomial:

y = x² + x + 1 (63)

The roots of this polynomial are where y = 0. Geometrically these are where the graph of y cuts the x-axis. However, plotting this polynomial it is clear that the graph does not cut the x-axis:

[Figure: plot of y = x² + x + 1 against x; the curve has a minimum above the x-axis and therefore never crosses it.]

In this case the roots of the polynomial are not real. Using the quadratic formula we can calculate the roots as:

x = (-1 ± √(1² - 4)) ⁄ 2 = (-1 ± √3 j) ⁄ 2 (64)

and therefore:

x² + x + 1 = (x + 1⁄2 + (√3⁄2)j)(x + 1⁄2 - (√3⁄2)j) (65)
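A quick numerical check of these roots, as a minimal sketch using Python's built-in complex arithmetic (the coefficient values simply restate the polynomial above):

    import cmath

    # Roots of y = x^2 + x + 1 via the quadratic formula with complex arithmetic.
    a, b, c = 1.0, 1.0, 1.0
    disc = cmath.sqrt(b * b - 4 * a * c)        # sqrt(-3) = j*sqrt(3)
    x1 = (-b + disc) / (2 * a)                  # -0.5 + 0.866j
    x2 = (-b - disc) / (2 * a)                  # -0.5 - 0.866j
    print(x1, x2)

    # Substituting the roots back into the polynomial should give (numerically) zero.
    for x in (x1, x2):
        print(x * x + x + 1)                    # approximately 0j in both cases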


This example indicates the fundamental utility of complex number systems. Note that the coefficients of the polynomial are real numbers. It is obvious from the plot of the polynomial that no real solution to y(x) = 0 exists. However, the solution does exist if we choose x from the larger set of complex numbers. In applications involving linear systems, these complex solutions provide a tremendous amount of information about the nature of the problem. Thus real world phenomena can be understood and predicted simply and accurately in a way not possible without the intuition provided by complex mathematics. See also Poles, Zeroes.

Complex Sinusoid: See Damped Exponential.

Compression: Over the last few years compression has emerged as one of the largest areas of real time DSP application for digital audio and video. The simple motivation is that the bandwidth required to transmit digital audio and video signals is considerably higher than that required for analogue transmission of the baseband signal, and also that storage requirements for digital audio and video are very high. Therefore data rates are reduced by essentially reducing the data required to transmit or store a signal, while attempting to maintain the signal quality. For example, the data rate of a stereo CD sampling at 44.1kHz, using 16 bit samples on stereo channels, is:

Data Rate = 44100 × 16 × 2 = 1411200 bits/sec (66)

The often quoted CD transmission bandwidth (assuming binary signalling) is 1.5MHz. Compare this bandwidth with the equivalent analog bandwidth of around 30kHz for two 15kHz analog audio channels. The storage requirements for 60 minutes of music in CD format are:

CD Storage Requirement = 44100 × 2 × 2 × 60 × 60 = 635 Mbytes/60 minutes (67)

In general therefore CD quality PCM audio is difficult to transmit, and storage requirements are very high. As discussed above, if the sampling rate is reduced or the data wordlength reduced, then of course the data rate will be reduced; however, the audio quality will also be affected. Therefore there is a requirement for audio compression algorithms which will reduce the quantity of data, but will not reduce the perceived quality of the audio. For telecommunications where speech is coded at 8kHz using, for example, 8 bit words, the data rate is 64000 bits per second. The typical bandwidth of a telephone line is around 4000Hz, and therefore powerful compression algorithms are clearly necessary. Similarly teleconferencing systems need to compress speech coded at the higher rate of 16 kHz, and a video signal. Ideally no information will be lost by a compression algorithm (i.e. lossless). However, the compression achievable with lossless techniques is typically quite limited. Therefore most audio compression techniques are lossy, such that the aim of the compression algorithm is to remove the components of the signal that do not matter, such as periods of silence, or sounds that will not be heard due to the psychoacoustic behaviour of the ear whereby loud sounds mask quieter ones. For hi-fidelity audio the psychoacoustic or perceptual coding technique is now widely used to compress by factors between 2:1 and almost 12:1. Two recent music formats, the mini-disc and DCC (digital compact cassette), both use perceptual coding techniques and produce compression of 5:1 and 4:1 with virtually no (perceptual) degradation in the quality of the music. Digital audio


compression will continue to be a particularly large area of research and development over the next few years. Applications that will be enabled by real time DSP compression techniques include:
• Telecommunications: Using toll-quality telephone lines to transmit compressed data and speech;
• Digital Audio Broadcasting (DAB): DAB data rates must be as low as possible to minimise the required bandwidth;
• Teleconferencing/Video-phones: Teleconferencing or videophones via telephone circuits and cellular telephone networks;
• Local Video: Using image/video compression schemes, medium quality video broadcasts for organisations such as the police, hospitals etc. are feasible over telephones, ISDN lines, or AM radio channels;
• Audio Storage: If a signal is compressed by a factor of M, then the amount of data that can be stored on a particular medium increases by a factor of M.

The table below summarises a few of the well known audio compression techniques for both hifidelity audio and telecommunications. Currently there exist many different “standard” compression algorithms, and different algorithms have different performance attributes, some remaining proprietary to certain companies.
Algorithm | Compression Ratio | Bit rate (kbits/s) | Audio Bandwidth | Example Application
PASC | 4:1 | 384 | 20kHz | DCC
Dolby AC-2 | 6:1 | 256 | 20kHz | Cinema Sound
MUSICAM | 4:1 to 12:1 | 192 to 256 | 20kHz | Professional Audio
NICAM | 2:1 | 676 | 16kHz | Stereo TV audio
ATRAC | 5:1 | 307 | 20kHz | Mini-disc
ADPCM (G721) | 8:5 to 4:1 | 16, 24, 32, 40 | 4kHz | Telecommunications
IS-54 VSELP | 8:1 | 8 | 4kHz | Telecommunications
LD-CELP (G728) | 4:1 | 16 | 4kHz | Telecommunications

Video compression schemes are also widely researched, developed and implemented. The best known schemes are Moving Picture Experts Group (MPEG), which in fact covers both audio and video, and the ITU H-Series Recommendations (H261 etc). The Joint Photographic Experts Group (JPEG) standards and Joint Bi-level Image Group (JBIG) consider the compression of still images. See also Adaptive Differential Pulse Code Modulation, Adaptive Transform Acoustic Coding (ATRAC), Entropy Coding, Huffman Coding, Arithmetic Coding, Differential Pulse Code Modulation, Digital Compact Cassette, G-Series Recommendations, H-Series Recommendations, Joint Photographic Experts Group, MiniDisc, Moving Picture Experts Group, Transform Coding, Precision Adaptive Subband Coding, Run Length Encoding.

Condition Code Register (CCR): The register inside a DSP processor which contains information on the result of the last instruction executed by the processor. Typically bits (or flags) in the CCR will indicate if the previous instruction had a zero result, positive result, overflow


occurred, and the value of the carry bit. The CCR bits are then used to make conditional decisions (branching). The CCR is sometimes called the Status Register (SR). See also DSP Processor.

Condition Number: See Matrix Properties - Condition Number.

Conditioning: See Signal Conditioning.

Conductive Hearing Loss: If there is a defect in the middle ear this can often reduce the transmission of sound to the inner ear [30]. A simple conductive hearing loss can be caused by as simple a problem as excessive wax in the ear. The audiogram of a person with a conductive hearing loss will often indicate that the hearing loss is relatively uniform over the hearing frequency range. In general a conductive hearing loss can be alleviated with an amplifying hearing aid. See also Audiology, Audiometry, Ear, Hearing Aids, Hearing Impairment, Loudness Recruitment, Sensorineural Hearing Loss, Threshold of Hearing.

Conjugate: See Complex Conjugate.

Conjugate Pair: See Complex Conjugate.

Conjugate Transpose: See Matrix Properties - Hermitian Transpose.

Constructive Interference: The addition of two waveforms with nearly identical phase. Constructive interference is exploited to produce resonance in physical and electrical systems. Constructive interference is also responsible for energy peaks in diffraction patterns. See also Destructive Interference, Beamforming, Diffraction.

[Figure: incident waves reflecting from a boundary; where wave peaks (or wave valleys) of the incident and reflected waves coincide there is constructive interference, and where a peak meets a valley there is destructive interference, i.e., cancellation.]

Continuous Phase Modulation (CPM): A type of modulation in which abrupt phase changes are avoided to reduce the bandwidth of the modulated signal. CPM requires increased decoder complexity. See also Minimum Shift Keying, Viterbi Algorithm.

Continuous Variable Slope Delta Modulator (CVSD): A speech compression technique that was used before ADPCM became popular and standardized by the ITU [133]. Although CVSD


generally produces lower quality speech, it is less sensitive to transmission errors than ADPCM. See also Compression, Delta Modulation.

Control Bus: A collection of wires on a DSP processor used to transmit control information on chip and off chip. An example of control information is stating whether memory is to be read from, or written to. This would be indicated by the single R ⁄ W line. See also DSP Processor.

Convergence: Algorithms such as adaptive algorithms are attempting to find a particular solution to a problem by converging or iterating to the correct solution. Convergence implies that the correct solution is found by continuously reducing the error between the current iterated value and the true solution. When the error is zero (or, more practically, relatively small), the algorithm is said to have converged. For example, consider an algorithm which will update the value of a variable x_n to converge to the square root of a number, a. The iterative update is given by:

x_{n+1} = (1 ⁄ 2)(x_n + a ⁄ x_n) (68)

where the initial guess, x_0, is a/2. The error e_n = x_n - √a will reduce at each iteration, and converge to zero. Because most algorithms converge asymptotically, convergence is often stated to have occurred when a specified error quantity is less than a particular value.
[Figure: finding a square root using the iterative algorithm; the estimate x_n and the error e_n are plotted against the iteration number n, converging to the solution √a = 5.477. Note that after only 6 iterations the algorithm has converged to within 0.03 of the correct answer.]
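A minimal Python sketch of this square root iteration (the value a = 30 and the stopping tolerance are illustrative assumptions, chosen so the result matches the √a ≈ 5.477 shown in the figure):

    def iterative_sqrt(a, tol=1e-6):
        """Converge to sqrt(a) using x_{n+1} = 0.5*(x_n + a/x_n), starting from x_0 = a/2."""
        x = a / 2.0                          # initial guess x0 = a/2, as in the text
        n = 0
        while abs(x * x - a) > tol:          # stop once the estimate squared is close to a
            x = 0.5 * (x + a / x)            # the iterative update of equation (68)
            n += 1
        return x, n

    x, iterations = iterative_sqrt(30.0)
    print(x, iterations)                     # approximately 5.4772 after only a handful of iterations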

Another example is a system identification application using an adaptive LMS FIR filter to model an unknown system. Convergence is said to have occurred when the mean squared error between the output of the actual system and the modelled one (given the same input) is less than a certain value determined by the application designer. Algorithms that do not converge, and perhaps diverge, are usually labelled as unstable. See also Adaptive Signal Processing, Iterative Techniques.

Convolution: When a signal is input to a particular linear system, the impulse response of the system is convolved with the input signal to yield the output signal. For example, when a sampled speech signal is operated on by a digital low pass filter, the output is formed from the convolution of the input signal and the impulse response of the low pass filter:

y(n) = h(n) ⊗ x(n) = Σ_k h(k) x(n - k) (69)

[Figure: graphical illustration of convolution. For n < 0 both the signal x(n) and the filter h(n) are zero, so the convolution output is 0 for n < 0. For each output sample, the time reversed and shifted sequence x(n - k) is overlaid on h(k), the products are summed over the summation variable k, and the result is the output sample y(n); the cases n = -1, 0, 1, 2, …, 7 are shown.]
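The summation of equation (69) can be written directly in a few lines. The following Python sketch (the example impulse response and input sequence are arbitrary illustrations) computes the full convolution of a finite length h(n) and x(n):

    def convolve(h, x):
        """Direct evaluation of y(n) = sum_k h(k) * x(n - k) for finite length sequences."""
        y = [0.0] * (len(h) + len(x) - 1)
        for n in range(len(y)):
            for k in range(len(h)):
                if 0 <= n - k < len(x):          # x(n - k) is zero outside its defined range
                    y[n] += h[k] * x[n - k]
        return y

    h = [0.25, 0.5, 0.25]       # e.g. a short low pass (moving average like) impulse response
    x = [1.0, 2.0, 3.0, 4.0]    # input signal samples
    print(convolve(h, x))       # [0.25, 1.0, 2.0, 3.0, 2.75, 1.0]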

Cooley-Tukey: J.W. Cooley and J.W. Tukey published a noteworthy paper in 1965 highlighting that the discrete Fourier transform (DFT) could be computed in fewer computations by using the fast Fourier transform (FFT) [66]. Reference to the Cooley-Tukey algorithm usually means the FFT. See also Fast Fourier Transform, Discrete Fourier Transform.

Co-processor: Inside a PC, a processor that is additional to the general purpose processor (such as the Intel 80486) is described as a co-processor and will usually only perform demanding


computational tasks. For multi-media applications, DSP processors inside the PC to facilitate speech processing, video and communications are co-processors.

CORDIC: An arithmetic technique that can be used to calculate sin, cos, tan and other trigonometric values using only shifts and adds of binary operands [25].

Core: All DSP applications require very fast MAC operations to be performed, however the algorithms to be implemented, and the necessary peripherals to input data, memory requirements, timers and on-chip CODEC requirements are all slightly different. Therefore companies like Motorola are releasing DSP chips which have a common core but have on-chip special purpose modules and interfaces. For example Motorola's DSP56156 has a 5616 core but with other modules, such as an on-chip CODEC and PLL, to tailor the chip for telecommunications applications. See also DSP Processor.

Correlation: If two signals are correlated then this means that they are in some sense similar. Depending on how similar they are, signals may be described as being weakly correlated or strongly correlated. If two signals, x(k) and y(k), are ergodic then the correlation function, r_xy(n), can be estimated as:

r̂_xy(n) = (1 ⁄ (2M + 1)) Σ_{k = -M}^{M} x(k) y(n + k), for large M (70)
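As a sketch of how this time average estimate might be computed in practice (Python with numpy is assumed; the test signals, the lag range and the value of M are illustrative choices only):

    import numpy as np

    def correlation_estimate(x, y, n, M):
        """Time average estimate r_xy(n) = 1/(2M+1) * sum_{k=-M}^{M} x(k) y(n+k).
        Samples outside the recorded signals are treated as zero."""
        total = 0.0
        for k in range(-M, M + 1):
            xk = x[k] if 0 <= k < len(x) else 0.0
            ynk = y[n + k] if 0 <= n + k < len(y) else 0.0
            total += xk * ynk
        return total / (2 * M + 1)

    # Two signals where y is a delayed copy of x: the estimate peaks at the lag n = 3.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1000)
    y = np.concatenate([np.zeros(3), x[:-3]])        # y(k) = x(k - 3)

    lags = range(-5, 6)
    r = [correlation_estimate(x, y, n, M=400) for n in lags]
    print(max(zip(r, lags)))                          # the largest estimate occurs at lag n = 3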

Taking the discrete Fourier transform (DFT) of the cross correlation function gives the cross spectral density. See also Autocorrelation.

Correlation Matrix: Assuming that a signal x(k) is a wide sense stationary ergodic process, a 3 × 3 correlation matrix can be formed by taking the expectation, E{.}, of the elements of the matrix formed by multiplying the signal vector, x(k) = [x(k) x(k-1) x(k-2)]^T, by its transpose to produce the correlation matrix:

R = E[x(k) x^T(k)] = E [ x²(k)         x(k)x(k-1)     x(k)x(k-2)   ]
                       [ x(k)x(k-1)    x²(k-1)        x(k-1)x(k-2) ]
                       [ x(k)x(k-2)    x(k-1)x(k-2)   x²(k-2)      ]

                   = [ r0 r1 r2 ]
                     [ r1 r0 r1 ] (71)
                     [ r2 r1 r0 ]

where r_n = E[x(k) x(k-n)]. The correlation matrix, R, is Toeplitz symmetric, and for a more general N point data vector the matrix will be N x N in dimension:

R = [ r0       r1       r2       …  r_{N-1} ]
    [ r1       r0       r1       …  r_{N-2} ]
    [ r2       r1       r0       …  r_{N-3} ] (72)
    [ :        :        :        …  :       ]
    [ r_{N-1}  r_{N-2}  r_{N-3}  …  r0      ]
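The following minimal Python sketch (numpy assumed; the example signal and the choice N = 3 are arbitrary illustrations) estimates such a Toeplitz correlation matrix from data, using time averages in place of the expectations on the assumption that the signal is ergodic:

    import numpy as np

    def correlation_matrix(x, N):
        """Estimate the N x N Toeplitz correlation matrix of a real, ergodic signal x
        using time averages r_n = mean of x(k) * x(k - n)."""
        r = [np.mean(x[n:] * x[:len(x) - n]) for n in range(N)]   # r_0, r_1, ..., r_{N-1}
        R = np.empty((N, N))
        for i in range(N):
            for j in range(N):
                R[i, j] = r[abs(i - j)]       # constant along each diagonal (Toeplitz)
        return R

    rng = np.random.default_rng(1)
    x = rng.standard_normal(10000)
    x = 0.7 * np.roll(x, 1) + x               # introduce some correlation between neighbouring samples
    print(correlation_matrix(x, 3))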

The Toeplitz structure (i.e., constant diagonal entries) results from the fact that the diagonal entries all correspond to the same time lag estimate of the correlation, that is, (k) - (k - n) = n is constant. To calculate r_n statistical averages should be used, or if the signal is ergodic then time averages can be used. See also Adaptive Signal Processing, Cross Correlation Vector, Ergodic, Expected Value, Matrix, Matrix Structured - Toeplitz, Wide Sense Stationarity, Wiener-Hopf Equations.

Correlation Vector: See Cross Correlation Vector.

CORTES Algorithm: Coordinate Reduction Time Encoding Scheme (CORTES) is an algorithm for the data compression of ECG signals. CORTES is based on the AZTEC and TP algorithms, using AZTEC to discard clinically insignificant data in the isoelectric region, and applying the TP algorithm to clinically significant high frequency regions of the ECG data [48]. See also AZTEC, Electrocardiogram, TP.

Critical Bands: It is conjectured that a suitable model of the human auditory system is composed of a series of (constant fractional bandwidth) bandpass filters [30] which comprise critical bands. When trying to detect a signal of interest in broadband background noise the listener is thought to make use of a bandpass filter with a centre frequency close to that of the signal of interest. The perception to the listener is that the background noise is somewhat filtered out and only the components within the background noise that lie in the critical band remain. The threshold of hearing of the signal of interest is thus determined by the amount of noise passing through the filter. See also Auditory Filters, Audiology, Audiometry, Fractional Bandwidth, Threshold of Hearing.

Critical Distance: In a reverberant environment, the critical distance is defined as the separation between source and receiver that results in the acoustic energy of the reflected waveforms being equal to the acoustic energy in the direct path. A single number is often used to classify a given environment, although the specific acoustics of a given room may produce different critical distances for alternate source (or receiver) positions. Roughly, the critical distance characterizes how much reverberation exists in a given room. See also Reverberation.

Cross Compiler: This is a piece of software which allows a user to program in a high level language (such as 'C') and generate cross compiled code for the target DSP's assembly language. This code can in turn be assembled and the actual machine code program downloaded to the DSP processor. Although cross-compilers can make program writing much easier, they do not always produce efficient code (i.e. using minimal instructions) and hence it is often necessary to write in assembly language (or hand code) either the entire program or critical sections of the program (via in-line assembly commands in the higher level language program). Motorola produce a C cross compiler for the DSP56000 series, and Texas Instruments produce one for the TMS320 series of DSP processors.

Cross Correlation Vector: A 3 element cross correlation vector, p, for a signal d(k) and a signal x(k) can be calculated from:

p = E{d(k) x(k)} = E [ d(k)x(k)   ]   [ p0 ]
                     [ d(k)x(k-1) ] = [ p1 ] (73)
                     [ d(k)x(k-2) ]   [ p2 ]

Hence for an N element vector:

p = [ p0      ]
    [ p1      ] (74)
    [ :       ]
    [ p_{N-1} ]

where p_n = E{d(k) x(k-n)}, and E{.} is the expected value function. To calculate p_n statistical averages should be used, or if the signal is ergodic then time averages can be used. See also Adaptive Signal Processing, Correlation Matrix, Ergodic, Matrix, Expected Value, Wide Sense Stationarity, Wiener-Hopf Equations.

Cross Interleaved Reed Solomon Coding (CIRC): CIRC is an error correcting scheme which was adopted for use in compact disc (CD) systems [33]. CIRC is an interleaved combination of block (Reed-Solomon) and convolutional error correcting schemes. It is used to correct both burst errors and random bit errors. On a CD player errors can be caused by manufacturing defects, dust, scratches and so on. CIRC coding can be decoded to correct several thousand consecutive bit errors. It is safe to say that without the signal processing that goes into CD error correction and error concealment, the compact discs we see today would be substantially more expensive to produce and, subsequently, the CD players would not be nearly the ubiquitous appliance we see today. See also Compact Disc.

Cross-Talk: The interference of one channel upon another causing the signal from one channel to be detectable (usually at a reduced level) on another channel.

Cut-off Frequency: The cut-off frequency of a filter is the point at which the gain of the filter has dropped by 3dB. Although the term cut-off conjures up the image of a sharp attenuation, 3dB is equivalent to 20log10 √2, i.e. the filtered signal output has half of the power of the input signal (10log10 2). For example, the cut-off frequency of a low pass filter is the frequency at which the filter

gain drops by 3dB when plotted on a log magnitude scale, or reduces by a factor of √2 on a linear scale. A bandpass filter will have two cut-off frequencies. See also Attenuation, Decibels.

[Figure: the cut-off frequency, or 3dB point, of a filter. The left hand side illustrates the cut-off followed by the slow roll-off characteristic (gain in dB against frequency); the right hand side shows the same filter plotted as a linear gain factor (not decibels) against frequency. The cut-off occurs when the gain factor is at 1 ⁄ √2.]

Cyberspace: The name given to the virtual dimension that the world wide network (internet) of connected computers gives rise to in the minds of people who spend a large amount of time "there". Without the DSP modems there would be no cyberspace! See also Internet.

Cyclic Redundancy Check (CRC): A cyclic redundancy check can be performed on digital data transmission systems whereby it is required at the receiver end to check the integrity of the data transmitted. This is most often used as an error detection scheme -- detected errors require retransmission. If both ends know the algebraic method of encoding the original data, the raw data can be CRC coded at the transmission end, and then at the receiving end the cyclic (i.e., efficient) redundancy can be checked. This redundancy check highlights the fact that bit transmission errors have occurred. CRC techniques can be easily implemented using shift registers [40]. See also Characteristic Polynomial, V-series Recommendations.

Cyclostationary: If the autocorrelation function (or second order statistics) of a signal fluctuates periodically with time, then this signal is cyclostationary. See [75] for a tutorial article.


D
Damped Sinusoid: A common solution to linear system problems takes the form:

e^{(a + jb)t} = e^{at} e^{jbt} = e^{at} [cos(bt) + j sin(bt)] (75)

where the complex exponent gives rise to two separate components, an exponential decay term, e^{at}, and a sinusoidal variation term, [cos(bt) + j sin(bt)]. Common examples of systems that give rise to damped sinusoidal solutions are the suspension system in an automobile or the voltage in a passive electrical circuit that has energy storage elements (capacitors and inductors). Because many physical phenomena can be accurately described by coupled differential equations (for which damped sinusoids are common solutions), real world experiences of damped sinusoids are quite common.

Data Acquisition: The general name given to the reading of data using an analog-to-digital converter (ADC) and storing the sampled data on some form of computer memory (e.g., a hard disk drive).

Data Bus: The data bus is a collection of wires on a DSP processor that is used to transmit actual data values between chips, or within the chip itself. See also DSP Processor.

Data Compression: See Compression.

Data Registers: Memory locations inside a DSP processor that can be used for temporary storage of data. The data registers are at least as long as the wordlength of the processor. Most DSP processors have a number of data registers. See also DSP Processor.

Data Vector: The most recent N data values of a particular signal, x(k), can be conveniently represented as a vector, x_k, where k denotes the most recent element in the vector. For example, if N = 5:

[Figure: a sampled signal x(k) plotted against time k for k = 1 to 13.]

If x_k = [x_k  x_{k-1}  x_{k-2}  x_{k-3}  x_{k-4}]^T then, for example, x_7 = [x_7  x_6  x_5  x_4  x_3]^T = [-23  -20  -9  11  29]^T.

More generally, any type of data stored or manipulated as a vector can reasonably be referred to as a data vector. See also Vector, Vector Properties, Weight Vector.

Data Windowing: See Window.


Daughter Module: Most DSP boards are designed to be hosted by an IBM PC. To provide input/output facilities or additional DSP processors, some DSP boards (then called motherboards) have spaces for optional daughter modules to be inserted.

Decade: A decade refers to the interval between two frequencies where one frequency is ten times the other. Therefore, as an example, from 10Hz to 100Hz is a decade, and from 100Hz to 1000Hz is a decade, and so on. See also Logarithmic Frequency, Octave, Roll-off.

Decibels (dB): The logarithmic unit of decibels is used to quantify the power of any signal relative to a reference signal. A power signal dB measure is calculated as 10log10(P1/P0). In DSP, since input signals are voltages, and Power = (Voltage)² divided by Resistance, we conventionally convert a voltage signal into its logarithmic value by calculating 20log10(V1/V0). Decibels are widely used to represent the attenuation or amplification of signals:

Attenuation = 10 log (P1 ⁄ P0) = 20 log (V1 ⁄ V0) (76)

where P0 is the reference power, and V0 is the reference voltage. dB's are used because they often provide a more convenient measure for working with signals (e.g., plotting power spectra) than do linear measures. Often the symbol dB is followed by a letter that indicates how the decibels were computed. For example, dBm indicates a power measurement relative to a milliwatt, whereas dBW indicates power relative to a watt. In acoustics applications, dB can be measured relative to various perceptually relevant scales, such as A-weighting. In this case, noise levels are reported as dB(A) to indicate the relative weighting (A) selected for the measurement. See Sound Pressure Level Weighting Curves, Decibels SPL.

Decibels (dB) SPL: The decibel is universally used to measure acoustic power and sound pressure levels (SPL). The decibel rating for a particular sound is calculated relative to a reference power W0:

10 log (W1 ⁄ W0) (77)

dB SPL is sound pressure measured relative to 20 µ-Pascals (2 × 10^-5 Newtons/m²). Acoustic power is proportional to pressure squared, so pressure based dB are computed via 20log10 pressure ratios. Intensity (or power) based dB computations use 10log10 intensity ratios. The sound level 0dB SPL is a low sound level that was selected to be around the absolute threshold of average human hearing for a pure 1000Hz sinusoid [30]. Normal speech has an SPL value of about 70dB SPL. The acoustic energy 200 feet from a jet aircraft at take-off is about 125dB SPL; this is above the threshold of feeling (meaning you can feel the noise as well as hear it). See also Sound Pressure Level.

Decibels (dB) HL: Hearing Level (HL). See Hearing Level, Audiogram.

Decimation: Decimation is the process of reducing the sampling rate of a signal that has been oversampled. When a signal is bandlimited to a bandwidth that is a factor of 0.5 or less of the sampling frequency (fs ⁄ 2), then the sampling rate can be reduced without loss of information. Oversampling simply means that a signal has been sampled at a rate higher than dictated by the

Nyquist criteria. In DSP systems oversampling is usually done at integral multiples of the Nyquist rate, fn, and usually by a power of two factor such as 4 x's, 8 x's or 64 x's. For a discrete signal oversampled by a factor R, the sampling frequency, fs, is:

fs ≡ fovs = R fn (78)

For an R x’s oversampled signal the only portion of interest is the baseband signal extending from 0 to f n ⁄ 2 Hz. Therefore decimation is required. The oversampled signal is first digitally low pass filtered to f n ⁄ 2 using a digital filter with a sharp cut-off. The resulting signal is therefore now bandlimited to f n ⁄ 2 and can be downsampled by retaining only every R-th sample. Decimation for a system oversampling by a factor of R = 4 can be illustrated as:
[Figure: decimation of a 4 x's oversampled signal, fovs = 4fn, by low pass digital filtering then downsampling by 4, which retains every 4th sample. The analog input passes through the analog anti-alias filter and oversampling ADC at fovs, then a digital low pass filter cutting off at fn ⁄ 2, and finally a downsampler (by 4) before the DSP processor; the accompanying spectra show the baseband signal and the attenuation of the analog and digital filters. The decimation process is essentially a technique whereby anti-alias filtering is being done partly in the analog domain and partly in the digital domain. Note that the decimated Nyquist rate or baseband signal will be delayed by the group delay, td, of the digital low pass filter (which we assume to be linear phase).]

For the oversampling example above where R = 4, any frequencies that exist between fn ⁄ 2 Hz and fovs ⁄ 2 = 2fn after the analog anti-alias filter can be removed with a digital low pass filter prior to downsampling by a factor of 4. Hence the complexity of the analogue low pass anti-alias filter has been reduced by effectively adding a digital low pass stage of anti-alias filtering. So why not just oversample, but not decimate? To illustrate the requirement for decimation where possible, linear digital FIR filtering using an oversampled signal will require RN filter weights (corresponding to T secs), whereas the number of weights in the equivalent Nyquist rate filter will only be N (also corresponding to T secs). Hence the oversampled DSP processing would need to perform R²Nfn multiply/adds per second, compared to the Nyquist rate DSP processing which requires Nfn multiply/adds per second, a factor of R² more. This is clearly not very desirable and a considerable disadvantage of an oversampled system compared to a Nyquist rate system. Therefore this is why an oversampled signal is usually decimated to the Nyquist rate, first by digital low pass filtering, then by downsampling (retaining only every R-th sample).
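A minimal sketch of this filter-then-downsample structure, assuming Python with scipy.signal is available (the filter length, cut-off frequency, sampling rate and R = 4 are illustrative choices rather than a prescribed design):

    import numpy as np
    from scipy import signal

    R = 4                                   # oversampling / decimation factor
    fs_ovs = 32000                          # oversampled rate, e.g. 4 x an 8 kHz Nyquist rate

    # Linear phase FIR low pass filter cutting off at (fs_ovs / R) / 2, i.e. the new fn/2.
    h = signal.firwin(numtaps=101, cutoff=(fs_ovs / R) / 2, fs=fs_ovs)

    # An example oversampled signal: a 1 kHz tone plus an out-of-band 10 kHz component.
    t = np.arange(4096) / fs_ovs
    x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 10000 * t)

    y = signal.lfilter(h, 1.0, x)           # digital anti-alias (low pass) filtering
    y_decimated = y[::R]                    # downsample: retain only every R-th sample

    print(len(x), len(y_decimated))         # 4096 samples in, 1024 samples out at fs_ovs / R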


The word decimation originally comes from a procedure within the Roman armies, where for acts of cowardice the legionnaires were lined up and every 10th man was executed; hence the prefix "dec" meaning ten. See also Anti-alias Filter, Downsampling, Oversampling, Upsampling, Interpolation, Sigma Delta.

Decimation-in-Frequency (DIF): The DFT can be reformulated to give the FFT either as a DIT or a DIF algorithm. Since either the input or the output data values of the FFT appear in bit-reversed order, decimation-in-frequency computation of the FFT provides the output frequency samples in bit-reversed order. See also Bit Reverse Addressing, Discrete Fourier Transform, Fast Fourier Transform, Cooley-Tukey.

Decimation-in-Time (DIT): The DFT can be reformulated to give the FFT either as a DIF or a DIT algorithm. Since either the input or the output data values of the FFT appear in bit-reversed order, decimation-in-time computation of the FFT provides the output frequency samples in proper order when the input time samples are arranged in bit-reversed order. See also Bit Reverse Addressing, Discrete Fourier Transform, Fast Fourier Transform, Cooley-Tukey.

Delay and Sum Beamformer: A relatively simple beamformer in which the outputs from an array of sensors are subject to independent time delays and then summed together. The delays are typically selected to provide a look direction from which the desired signal will constructively interfere at the summer while signals from other directions are attenuated because they tend to destructively interfere. The delays are dictated by the geometry of the array of sensors and the speed of propagation of the wavefront. See also Adaptive Beamformer, Beamformer, Broadside, Endfire.
[Figure: a delay and sum beamformer. Each of the M sensor outputs is delayed by τn = dn ⁄ c, where c is the propagation velocity and dn the additional path length to sensor n for a wavefront arriving from the look direction θ, and the delayed outputs are summed. In a delay-and-sum beamformer, the output from each of the sensors in an array is delayed an appropriate amount (to time-align the desired signal) and then combined via a summation to generate the beamformed output. No amplitude weighting of the sensors is performed.]
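A minimal Python sketch of a delay and sum beamformer, working with integer sample delays for simplicity (numpy assumed; the number of sensors, sampling rate, test signal and delays are illustrative assumptions, not a specific array design):

    import numpy as np

    def delay_and_sum(sensor_signals, delays_samples):
        """Delay each sensor signal by an integer number of samples and sum.
        sensor_signals is an (M, L) array; delays_samples is a length M list."""
        M, L = sensor_signals.shape
        out = np.zeros(L)
        for m in range(M):
            d = delays_samples[m]
            out[d:] += sensor_signals[m, :L - d]      # shift sensor m by d samples, then add
        return out / M

    # Example: 4 sensors, a desired tone arriving with one extra sample of delay per sensor.
    fs = 8000
    t = np.arange(512) / fs
    s = np.sin(2 * np.pi * 500 * t)
    sensors = np.zeros((4, 512))
    for m in range(4):
        sensors[m, m:] = s[:512 - m]                  # sensor m sees the signal delayed by m samples

    # Compensating delays time-align the wavefront before the summation.
    y = delay_and_sum(sensors, delays_samples=[3, 2, 1, 0])
    print(np.max(np.abs(y)))                          # close to 1: coherent (constructive) summation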

Delay LMS: See Least Mean Squares Algorithm Variants.

Delta Modulation: Delta modulation is a technique used to take a sampled signal, x(n), encode the magnitude change from the previous sample, and transmit only the single bit difference (∆) between adjacent samples [2]. If the signal has increased from the previous sample, then a 1 is encoded; if it has decreased, then a -1 is encoded. The received signal is then demodulated by taking successive delta samples and summing to reconstruct the original signal using an integrator. Delta modulation can reduce the number of bits per second to be transmitted down a channel,

compared to PCM. However, when using a delta modulator the sampling rate and step size must be carefully chosen or slope overload and/or granularity problems may occur. See also Adaptive Differential Pulse Code Modulation, Continuously Variable Slope Delta Modulation, Differential Pulse Code Modulation, Integrator, Slope Overload, Granularity Effects.

[Figure: delta modulation. In the modulator the difference xd(n) between the input x(n) and the integrator output is quantized by a 1-bit quantizer at the sampling rate fs to give ∆(n), which is sent over the channel; in the de-modulator the received ∆(n) is integrated and low pass filtered to reconstruct x(n). The accompanying waveforms show the staircase approximation tracking x(n) and the corresponding ±1 sequence ∆(n).]
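A minimal Python sketch of a delta modulator and demodulator pair (numpy assumed; the step size and test signal are illustrative choices and no particular standard is being followed):

    import numpy as np

    def delta_modulate(x, step):
        """Encode x(n) as a +/-1 sequence by comparing each sample with the integrator output."""
        bits = np.zeros(len(x))
        estimate = 0.0                        # integrator (staircase) state in the modulator
        for n in range(len(x)):
            bits[n] = 1.0 if x[n] >= estimate else -1.0
            estimate += step * bits[n]        # the integrator tracks the input in steps of +/-step
        return bits

    def delta_demodulate(bits, step):
        """Reconstruct the staircase approximation by integrating the received +/-1 sequence."""
        return step * np.cumsum(bits)         # a low pass filter would normally smooth this further

    fs = 8000
    t = np.arange(200) / fs
    x = np.sin(2 * np.pi * 100 * t)           # slowly varying input relative to fs
    bits = delta_modulate(x, step=0.1)
    x_hat = delta_demodulate(bits, step=0.1)
    print(np.max(np.abs(x - x_hat)))          # the staircase stays within roughly one step of the input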

Delta-Sigma: Synonymous term with Sigma Delta. See Sigma-Delta.

Descrambler: See Scrambler/Descrambler.

Destructive Interference: The addition of two waveforms with nearly opposite phase. Destructive interference is exploited to cancel unwanted noise, vibrations, and interference in physical and electrical systems. Destructive interference is also responsible for energy nulls in diffraction patterns. See also Diffraction, Constructive Interference, Beamforming.

Determinant: See Matrix Properties - Determinant.

Diagonal Matrix: See Matrix Structured - Diagonal.


Dial Tone: Tones at 350 Hz and 440 Hz make up the dialing tone for telephone systems. See also Dual Tone Multifrequency, Busy Tone, Ringing Tone.
[Figure: spectrum of the dial tone, showing components at approximately 350 Hz and 440 Hz together with 50 Hz mains hum.]

Dichotic: A situation where the aural stimulation reaching both ears is not the same. For example, setting up a demonstration of binaural beats is a dichotic stimulus. The human ear essentially provides dichotic hearing whereby it is possible for the auditory mechanism to process the differing information arriving at both ears and subsequently localize the source. See also Audiometry, Binaural Unmasking, Binaural Beats, Diotic, Lateralization.

Difference Limen (DL): The smallest noticeable difference between two audio stimuli, or the Just Noticeable Difference (JND) between these stimuli. Determination of DL's usually requires that subjects be given a discrimination task. Typically, DL's (or JND's) are computed for two signals that are identical in all respects save the parameter being tested for a DL. For example, if the DL is desired for sound intensity discrimination, two stimuli differing only in intensity would be presented to the subject under test. These stimuli could be tones at a given frequency that are presented for a fixed period. It is interesting to note that the DL for sound intensity (measured in dB) is generally found to be constant over a very wide range (this is known as Weber's law). To have meaning a DL must be specified along with the set up and conditions used to establish the value. For example, stating that the frequency DL for the human ear is 1Hz between the frequencies of 1 - 4 kHz requires that sound pressure levels, stimuli duration, and stimuli decomposition are clearly stated, as varying these parameters will cause variation in the measured frequency DL. See also Audiology, Audiometry, Frequency Range of Hearing, Threshold of Hearing.

Differentiation: See Differentiator.

Differential Phase Shift Keying (DPSK): A type of modulation in which the information bits are encoded in the change of the relative phase from one symbol to the next. DPSK is useful for communicating over time varying channels. DPSK also removes the need for absolute phase synchronization, since the phase information is encoded in a relative way. See also Phase Shift Keying.

Differentiator: A (linear) device that will produce an output that is the derivative of the input. In digital signal processing terms a differentiator is quite straightforward. The output of a differentiator, y(t), will be the rate of change of the signal curve, x(t), at time t. For sampled digital signals the input will be constant for one sampling period, and therefore to differentiate the signal the previous sample value is subtracted from the current value and divided by the sampling period. If the sampling period is normalized to one, then a signal is differentiated in the discrete domain by

subtracting consecutive input samples. A differentiator is implemented using a digital delay element and a summing element to calculate:

y(n) = x(n) - x(n - 1) (79)

In the z-domain the transfer function of a differentiator is:

Y(z) = X(z) - z^-1 X(z)  ⇒  Y(z) ⁄ X(z) = 1 - z^-1 (80)

When viewed in the frequency domain a differentiator has the characteristics of a high pass filter. Thus differentiating a signal with additive noise tends to emphasize or enhance the high frequency components of the additive noise. See also Analog Computer, Integrator, High Pass Filter.

[Figure: analog differentiation of a signal x(t) to give y(t) = dx(t) ⁄ dt; discrete differentiation of x(n) to give y(n) = ∆x(n) ⁄ ∆t; the time domain discrete differentiator signal flow graph, formed from a delay element giving x(n-1) and a subtraction, and the equivalent z-domain representation Y(z) = (1 - z^-1) X(z).]
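A minimal sketch of the first difference differentiator of equation (79), in plain Python (the ramp-and-step input is just an illustration):

    def differentiate(x):
        """y(n) = x(n) - x(n-1), with x(-1) taken as zero."""
        y = []
        previous = 0.0                       # the delay element z^-1, initially empty
        for sample in x:
            y.append(sample - previous)      # subtract the delayed sample from the current one
            previous = sample
        return y

    x = [0.0, 1.0, 2.0, 3.0, 3.0, 3.0, 1.0]  # a ramp, a constant section, then a drop
    print(differentiate(x))                   # [0.0, 1.0, 1.0, 1.0, 0.0, 0.0, -2.0]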

Differential Pulse Code Modulation (DPCM): DPCM is an extension of delta modulation that makes use of redundancy in analog signals to quantize the difference between a discrete input signal and a predicted value to one of P values [2]. (Note that a delta modulator has only the single level ±1.) The integrator shown below performs a summation of all input values as the predictor.

[Figure: a DPCM modulator and de-modulator. In the modulator the difference between the input x(n) and the predicted value x̂(n) from an integrator is quantized by a P-level quantizer at the sampling rate fs to give ∆(n), which is sent over the channel; in the de-modulator the received ∆(n) is integrated and low pass filtered to reconstruct x̃(n).]

More complex DPCM systems require a predictor filter in place of the simple integrator.

[Figure: a DPCM modulator and de-modulator in which the integrators are replaced by linear predictors; the modulator's linear predictor operates on the quantized difference signal, and the de-modulator uses an identical linear predictor to reconstruct x̃(n).]

Note that the

predictor at the modulator end uses the same quantized error values as inputs that are available to the predictor at the demodulator end. If the unquantized error values were used at the modulator end then there would be an accumulated error between the demodulator output and the modulator input with a strictly increasing variance. This does not happen in the above configuration. See also Adaptive Differential Pulse Code Modulation (ADPCM), Delta Modulation, Continuously Variable Slope Delta Modulation (CVSD), Slope Overload, Granularity.

Diffraction: Diffraction is the bending of waves around an object via wave propagation of incident and reflected waves impinging on the object. See also Constructive Interference, Destructive Interference, Head Shadow.
[Figure: example of diffraction of incident waves through an opening in a boundary; the incident waves pass through the opening and spread out as diffracted waves.]

Digital: Represented as a discrete countable quantity. When an analog voltage is passed through an ADC the output is a digitized and sampled version of the input. Note that digitization implies quantization.

Digital Audio: Any aspect of audio reproduction or recording that uses a digital representation of analogue acoustic signals is often referred to generically as digital audio [33], [34], [37]. Over the last 10-20 years digital audio has evolved into three distinguishable groups of application dependent quality:
1. Telephone Speech, 300 - 3400Hz: Typically speech down a telephone line is carried over a channel with a bandwidth extending from around 300Hz to 3400Hz. This bandwidth is adequate for good coherent and intelligible conversation. Music is coherent but unattractive. Clearly intelligible speech can be obtained by sampling at 8kHz with 8 bit PCM samples, corresponding to an uncompressed bit rate of 64kbits/s.

2. Wideband Speech, 50 - 7000Hz: For applications such as teleconferencing, prolonged conversation requires a speech quality that has more naturalness and presence. This is accomplished by retaining low and high frequency components of speech compared to a telephone channel. Music with the same bandwidth will have almost AM radio quality. Good quality speech can be obtained by sampling at 16kHz with 12 bit PCM samples, corresponding to a bit rate of 192kbits/s.

3. High Fidelity Audio, 20 - 20000Hz: For high fidelity music reproduction the reproduced sound should be of comparable quality to the original sound. Wideband audio is sampled at one of the standard frequencies of 32 kHz, 44.1 kHz, or 48 kHz using 16 bit PCM. A stereo compact disc (44.1kHz, 16 bits) has a data rate of 1.4112 Mbits/s.

Generally, when one refers to digital audio applications involving speech materials only (e.g., speech coding) the term speech is directly included in the descriptive term. Consequently, digital audio has come to connote high fidelity audio, with speech applications more precisely defined. The table below summarizes the key parameters for a few well known digital audio applications. Note that to conserve bandwidth and storage requirements DSP enabled compression techniques are applied in a few of these applications.
Technology | Example Application | Sampling Rate (kHz) | Compression | Single Channel Bit Rate (kbits/s)
Digital Audio Tape (DAT) | Professional recording | 48 | No | 768
Compact Disc (CD) | Consumer audio | 44.1 | No | 705.6
Digital Compact Cassette (DCC) | Consumer audio | 32, 44.1, 48 | Yes | 192
MiniDisc (MD) | Consumer audio | 44.1 | Yes | 146
Dolby AC-2 | Cinema sound | 48 | Yes | 128
MUSICAM (ISO Layer II) | Consumer broadcasting | 32, 44.1, 48 | Yes | 16 - 192
NICAM | TV audio | 32 | Yes | 338
PCM A/µ-law (G711) | Telephone | 8 | Yes | 64
ADPCM (G721) | Telephone | 8 | Yes | 16, 24, 32, 40
LD-CELP (G728) | Telephone | 8 | Yes | 16
RPE-LTP (GSM) | Telephone | 8 | Yes | 13.3
Subband ADPCM (G722) | Teleconferencing | 16 | Yes | 64

Digital Audio Systems

Although the digital audio market is undoubtedly very mature, the power of DSP systems is stimulating research and development in a number of areas:
1. Improved compression strategies based on perceptual and predictive coding; compression ratios of up to 20:1 for hi-fidelity audio may eventually be achievable.

2. The provision of surround sound using multichannel systems to allow cinema and "living room" audiences to experience 3-D sound.

3. DSP effects processing: remastering, de-scratching recordings, sound effects, soundfield simulation etc.

4. Noise reduction systems such as adaptive noise controllers, echo cancellers, acoustic echo cancellers, equalization systems.

5. Super-fidelity systems sampling at 96kHz to provide ultrasound [154] (above 20kHz and which is perhaps more tactile than audible), and systems to faithfully reproduce infrasound [138] (below 20Hz and which is most definitely tactile and in some cases rather dangerous!)

Real-time digital audio systems are one of three types: (1) input/output systems (e.g. a telephone/teleconferencing system); (2) output only (e.g. a CD player); or (3) input only (e.g. DAT professional recording). The figure below shows the key elements of a single channel input/output digital audio system. The input signal from a microphone is signal conditioned/amplified as appropriate to the input/output characteristic of the analogue to digital converter (ADC) at a sampling rate of fs Hz. Prior to being input to the ADC stage, the analogue signal is low pass filtered to remove all frequencies above fs ⁄ 2 by the analogue anti-alias filter. The output from the ADC is then a stream of binary numbers, which are compressed, coded and modulated for transmission, broadcasting or recording via/to a suitable medium (e.g. FM radio broadcast, telephone call or CD mastering). When a digital audio signal is received or read it is a stream of binary numbers which are demodulated and decoded/decompressed with DSP processing into a sampled data PCM format for input to a digital to analogue converter (DAC), which outputs to an analogue low pass reconstruction filter stage (also cutting off at fs ⁄ 2) prior to being amplified and output to a loudspeaker (e.g. reception of digital audio FM radio or a telephone call, or playback of a CD).

[Figure: the generic single input, single output channel digital audio signal processing system. On the input side the acoustic signal is amplified and passed through the anti-alias filter and ADC (sampling at fs), followed by DSP processing for coding, compression and modulation; the digital data is then transmitted, broadcast, or recorded and played back. On the output side DSP processing performs decoding, decompression and demodulation before the DAC and reconstruction filter (at fs), amplification and acoustic output.]

See also Compact Disc, Data Compression, Digital Audio Tape, Digital Compact Cassette, MiniDisc, Speech Coding.

Digital Audio Broadcasting (DAB): The transmission of electromagnetic carriers modulated by digital signals. DAB will permit the transmission of high fidelity audio and is more immune to noise and distortion than conventional techniques. Repeater transmitters can receive a DAB signal, clean the signal and retransmit a noise free version. Currently there is a large body of interest in developing DAB consumer systems using a combination of satellite, terrestrial and cable transmission. For terrestrial DAB, however, there is currently no large bandwidth specifically allocated for DAB, and therefore FM radio station owners may be required to volunteer their bands for digital audio broadcasting. See also Compression, Standards.

Digital Audio Tape (DAT): An audio format introduced in the late 1980s to compete with compact disc. DAT samples at 48kHz and uses 16 bit data with stereo channels. Although DAT was a commercial failure for the consumer market it has been adopted as a professional studio recording

medium. A very similar format of 8mm digital tape is also quite commonly used for data storage. See also Digital Compact Cassette, MiniDisc.

Digital Communications: The process of transmitting and receiving messages (information) by sending and decoding one of a finite number of symbols during a sequence of symbol periods. One primary requirement of a digital communication system is that the information must be represented in a digital (or discrete) format. See also Message, Symbol, Symbol Period.

Digital Compact Cassette (DCC): DCC was introduced by Philips in the early 1990s as a combination of the physical format of the popular compact cassette, and featuring new digital audio signal processing and magnetic head technology [83], [52], [150]. Because of physical constraints DCC uses psychoacoustic data compression techniques to increase the amount of data that can be stored on a tape. The DCC mechanism allows it to play both (analog) compact cassette tapes and DCC tapes. The tape speed is 4.75cm/s for both types of tape and a carefully designed thin film head is used to achieve both digital playback and analog playback. The actual tape quality is similar to that used for video tapes. DCC is a competing format to Sony's MiniDisc, which also uses psychoacoustic data compression techniques. If normal stereo 16 bit, 48kHz (1.536 Mbits/sec) PCM digital recording were done on a DCC tape, only about 20 minutes of music could be stored due to the physical restrictions of the tape. Therefore to allow more than an hour of music on a single tape data compression is required. DCC uses precision adaptive subband coding (PASC) to compress the audio by a factor of 4:1 to a data rate of 384 kbits/s (192 kbits/s per channel), thus allowing more than an hour of music to be stored. PASC is based on psychoacoustic compression principles and is similar to the ISO/MPEG layer 1 standard. The input to a PASC encoder can be PCM data of up to 20 bits resolution at sampling rates of 48kHz, 44.1kHz or 32kHz. The quality of music from a PASC encoded DCC is arguably as good as a CD, and in fact for some parameters such as dynamic range a prerecorded DCC tape can have improved performance over a CD (see Precision Adaptive Subband Coding). Eight-to-ten modulation and cross interleaved Reed-Solomon coding (CIRC) are used for the DCC tape channel coding and error correction. In addition to the audio tracks DCC features an auxiliary channel capable of storing 6.75kbits/sec which can be used for storing timing, textual information and copyright protection codes.
[Figure: the Digital Compact Cassette (DCC) signal chain. Left and right inputs pass through an ADC (or the digital I/O), a 32 channel subband filter and psychoacoustic (PASC) coding, followed by error coding/error correction and data modulation to the read/write head; playback reverses the chain through to the DAC. The DCC compresses PCM encoded 48kHz, 44.1kHz or 32kHz digital audio to a bit rate of 384 kbits/s; the PCM input data can have up to 20 bits precision.]

In terms of DSP algorithms the DCC also uses an IIR digital filter for equalization of the thin film magnetic head frequency response, and a 12 weight FIR filter to compensate for the high frequency roll-off of the magnetic channel. See also Compact Disc, Digital Audio, Digital Audio Tape (DAT), MiniDisc, Precision Adaptive Subband Coding (PASC), Psychoacoustics.

86

DSPedia

Digital European Cordless Telephone (DECT): The DECT is a telephone whereby a wireless radio connection at 1.9GHz communicates with a base station that is normally connected to the public switched telephone network. One or more handsets can communicate with each other or the outside world.

Digital Filter: A DSP system that will filter a digital input (i.e., selectively discriminate signals in different frequency bands) according to some pre-designed criteria is called a digital filter. In some situations digital filters are used to modify phase only [10], [7], [21], [31], [29]. A digital filter's characteristics are usually viewed via its frequency response and, for some applications, its phase response (discussed in Finite Impulse Response Filter, and Infinite Impulse Response Filter). For the frequency response, the filter attenuation or gain characteristic can either be specified on a linear gain scale, or more commonly a logarithmic gain scale:
Attenuation (dB) = 10 log (P_out / P_in) = 20 log |Y(f) / X(f)|,    Gain factor = |H(f)| = |Y(f) / X(f)|

(Plots: the linear magnitude response (gain factor 0 to 1) and the logarithmic magnitude response (0 to -80 dB) of a low pass digital filter H(f) = Y(f)/X(f), with the -3dB point at 1000 Hz.)

The above digital filter is a low pass filter cutting off at 1000Hz. Both the linear and logarithmic magnitude responses of the transfer function, H(f) = Y(f)/X(f), are shown. The cut-off frequency of a filter is usually denoted as the "3dB frequency", i.e. at f3dB = 1000 Hz the filter attenuates the power of a sinusoidal signal component at this frequency by 0.5, i.e.

10 log (P_out / P_in) = 20 log |Y(f3dB) / X(f3dB)| = 10 log 0.5 = 20 log 0.707... = -3 dB

The power of the output signal relative to the input signal at f3dB is therefore 0.5, and the signal amplitude is attenuated by 1/√2 = 0.707... . For a low pass filter, signals with a frequency higher than f3dB are attenuated by more than 3dB.

Digital filters are usually designed as either low pass, high pass, band-pass or band-stop:
(Sketches of the gain vs. frequency responses of the four basic filter types: Low Pass, High Pass, Band-Pass and Band-Stop.)
A number of filter design packages will give the user the facility to design a filter for an arbitrary frequency response by “sketching” graphically:
(Sketch of an arbitrary, user defined gain vs. frequency response.)

There are two types of linear digital filters, FIR (finite impulse response filter) and IIR (infinite impulse response filter). An FIR filter is a digital filter that performs a moving, weighted average on a discrete input signal, x(n), to produce an output signal. (For a more intuitive discussion of FIR filtering operation see entry for Finite Impulse Response Filter). The arithmetic computation required by the digital filter is of course performed on a DSP processor or equivalent:

(Signal chain: the analogue input x(t) passes through an anti-alias filter and an ADC sampling at fs; the DSP processor filters the samples x(k) to produce y(k); a DAC and reconstruction filter produce the analogue output y(t).)

The digital filter equations are implemented on the DSP Processor which processes the time sampled data signal to produce a time sampled output data signal.

The actual frequency and phase response of the filter is found by taking the discrete Fourier transform (DFT) of the weight values w0 to wN-1. An FIR digital filter is usually represented by a signal flow graph or by a summation (convolution) equation:

(Signal flow graph: the input samples x(k), x(k-1), x(k-2), x(k-3), ..., x(k-N+1) pass along a tapped delay line, are multiplied by the weights w0, w1, w2, w3, ..., wN-1 and are summed to give y(k).)

y(k) = w0 x(k) + w1 x(k-1) + w2 x(k-2) + w3 x(k-3) + ..... + wN-1 x(k-N+1)

     = Σ (n = 0 to N-1) wn x(k-n) = w^T x_k

where w = [w0 w1 w2 ... wN-1]^T and x_k = [x(k) x(k-1) x(k-2) ... x(k-N+1)]^T. The signal flow graph and the output equation for an FIR digital filter. The filter output y(k) can be expressed as a summation equation, a difference equation or using vector notation.

The signal flow graph can be drawn in a more modular fashion by splitting the N element summer into a series of two element summers: (signal flow graph as above, but with the products wn x(k-n) accumulated through a chain of N-1 two element summing nodes).

The signal flow graph for an FIR filter is often modularized in order that the large N element summer is broken down into a series of N-1 two element summing nodes. The operation, of course, of this filter is identical to the above.
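As a concrete illustration of the summation equation above, a minimal C sketch of a direct-form FIR filter is shown below. The function and variable names are illustrative only and do not correspond to any particular DSP library.

/* Direct-form FIR filter: y(k) = sum over n of w[n]*x(k-n).
   'state' holds the N most recent input samples, state[0] = x(k). */
double fir_filter(double x_k, const double *w, double *state, int N)
{
    double y = 0.0;
    int n;

    /* shift the delay line: x(k-n) <- x(k-n+1) */
    for (n = N - 1; n > 0; n--)
        state[n] = state[n - 1];
    state[0] = x_k;

    /* weighted (moving average) sum over the delay line */
    for (n = 0; n < N; n++)
        y += w[n] * state[n];

    return y;
}

In practice a DSP processor would implement the weighted sum as a single multiply-accumulate (MAC) loop; the shift of the delay line is usually avoided by using a circular buffer.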

An IIR digital filter utilizes feedback (or recursion) in order to achieve a longer impulse response and therefore the possible advantage of a filter with a sharper cut off (i.e., smaller transition bandwidth - see below) but with fewer weights than an FIR digital filter with an analogous frequency response. (For a more intuitive discussion on the operation of an IIR filter see entry for Infinite Impulse Response Filter.) The attraction of few weights is that the filter is cheaper to implement (in

terms of power consumption, DSP cycles and/or cost of DSP hardware). The signal flow graph and output equation for an IIR filter are:

(Signal flow graph: the feedforward weights a0, a1, a2 multiply x(k), x(k-1), x(k-2), and the feedback weights b1, b2, b3 multiply the delayed outputs y(k-1), y(k-2), y(k-3); all products are summed to give y(k).)

y(k) = Σ (n = 0 to 2) an x(k-n) + Σ (n = 1 to 3) bn y(k-n)

     = a0 x(k) + a1 x(k-1) + a2 x(k-2) + b1 y(k-1) + b2 y(k-2) + b3 y(k-3)

     = a^T x_k + b^T y_(k-1)

where a = [a0 a1 a2]^T, x_k = [x(k) x(k-1) x(k-2)]^T, b = [b1 b2 b3]^T and y_(k-1) = [y(k-1) y(k-2) y(k-3)]^T. A signal flow graph and equation for a 2 zero, 3 pole IIR digital filter. The filter output y(k) can be expressed as a summation equation, a difference equation or using vector notation.
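The same 2 zero, 3 pole difference equation can be sketched directly in C; the sign convention below follows the equation above, where the feedback terms are added. This is an illustrative sketch, not a production implementation.

/* 2-zero, 3-pole IIR filter:
   y(k) = a0 x(k) + a1 x(k-1) + a2 x(k-2) + b1 y(k-1) + b2 y(k-2) + b3 y(k-3)
   a[] = {a0, a1, a2}, b[] = {b1, b2, b3},
   xs[] = past inputs, ys[] = past outputs (most recent first). */
double iir_filter(double x_k, const double a[3], const double b[3],
                  double xs[2], double ys[3])
{
    double y = a[0] * x_k + a[1] * xs[0] + a[2] * xs[1]
             + b[0] * ys[0] + b[1] * ys[1] + b[2] * ys[2];

    /* update the delay lines */
    xs[1] = xs[0];  xs[0] = x_k;
    ys[2] = ys[1];  ys[1] = ys[0];  ys[0] = y;

    return y;
}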

Design algorithms to find suitable weights for digital FIR filters are incorporated into many DSP software packages and typically allow the user to specify the parameters of:

• Sampling frequency;
• Passband;
• Transition band;
• Stopband;
• Passband ripple;
• Stopband attenuation;
• No. of weights in the filter.

These parameters allow variations from the ideal (brick wall) filter, with the trade-offs being made by the design engineer. In general, the less stringent the bounds on the various parameters, then the fewer weights the digital filter will require:
(Sketches showing, for each filter type, the passband, transition band(s) and stopband(s), the passband ripple, the -3dB point, the stopband attenuation and the "ideal" (brick wall) response, plotted on a dB gain scale against frequency up to fs/2.) Parameters for specifying low pass, high pass, band-pass and band-stop filters.

After the filter weights are produced by DSP filter design software the impulse response of the digital filter can be plotted, i.e. the filter weights shown against time:

(Plot of the impulse response h(n) for n = 0 to 30, sample period T = 1/10000 secs.)

w0 = w30 = 0.00378...     w8 = w22 = -0.04748...
w1 = w29 = 0.00977...     w9 = w21 = -0.05394...
w2 = w28 = 0.01809...     w10 = w20 = -0.03487...
w3 = w27 = 0.02544...     w11 = w19 = 0.01214...
w4 = w26 = 0.027154...    w12 = w18 = 0.07926...
w5 = w25 = 0.019008...    w13 = w17 = 0.14972...
w6 = w24 = 0.00003...     w14 = w16 = 0.20316...
w7 = w23 = -0.02538...    w15 = 0.22319...
(Truncated to 5 decimal places)

DESIGN 1: Low Pass FIR Filter Impulse Response. The impulse response h(n) = wn of the low pass filter specified in the SystemView design dialog: cut-off frequency 1000 Hz; passband gain 0dB; stopband attenuation 60dB; transition band 500 Hz; passband ripple 5dB and sampling at fs = 10000 Hz. The filter is linear phase and has 31 weights and therefore an impulse response of duration 31/10000 seconds. For this particular filter the weights are represented with floating point real numbers. Note that the filter was designed with 0dB in the passband. As a quick check the sum of all of the coefficients is approximately 1, meaning that if a 0 Hz (DC) signal was input, the output is not amplified or attenuated, i.e. gain = 1 or 0 dB.
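The quick DC-gain check mentioned in the caption can be carried out with a few lines of C using the truncated weight values listed above (the exact sum will differ slightly from the full-precision design).

/* Check of the DESIGN 1 weights: the coefficient sum approximates the
   0 Hz (DC) gain, which should be close to 1 (0 dB) for this design. */
#include <stdio.h>

int main(void)
{
    const double w[31] = {
        0.00378, 0.00977, 0.01809, 0.02544, 0.027154, 0.019008, 0.00003,
       -0.02538,-0.04748,-0.05394,-0.03487, 0.01214, 0.07926, 0.14972,
        0.20316, 0.22319, 0.20316, 0.14972, 0.07926, 0.01214,-0.03487,
       -0.05394,-0.04748,-0.02538, 0.00003, 0.019008, 0.027154, 0.02544,
        0.01809, 0.00977, 0.00378 };
    double sum = 0.0;
    int n;
    for (n = 0; n < 31; n++) sum += w[n];
    printf("DC gain = %f (approximately 1, i.e. 0 dB)\n", sum); /* ~0.995 */
    return 0;
}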

From the impulse response the DFT (or FFT) can be used to produce the filter magnitude frequency response and the actual filter characteristics can be compared with the original desired specification:

(Plots: the linear magnitude response |H(f)| and the logarithmic magnitude response 20 log|H(f)| of DESIGN 1, from 0 to 5000 Hz.) The 1024 point FFT (zero padded) of the above DESIGN 1 low pass filter impulse response. The passband ripple is easier to see in the linear plot, whereas the stopband ripple is easier to see in the logarithmic plot.

To illustrate the operation of the above digital filter, a chirp signal starting at a frequency of 900 Hz, and linearly increasing to 1500 Hz over 0.05 seconds (500 samples) can be input to the filter and the output observed (individual samples are not shown):
(Plots: the chirp input and the filter output amplitudes against time from 0 to 0.04 seconds; the input period sweeps from 1/900 secs to 1/1500 secs through the 1000 Hz cut-off low pass digital filter, and the output amplitude falls away as the chirp frequency increases.)
As the chirp frequency reaches about 1000 Hz, the digital filter attenuates the amplitude output signal by a factor of around 0.7 (3dB) until at 1500 Hz the signal amplitude is attenuated by more than 60 dB or a factor of 0.001. If a low pass filter with less passband ripple and a sharper cut off is required then another filter can be designed, although more weights will be required and the implementation cost of the filter has therefore increased. To illustrate this point, if the above low pass filter is redesigned, but this time with a stopband attenuation of 80dB, a passband ripple of 0.1dB and a transition band of, again,

500 Hz, the impulse response of the filter produced by the DSP design software now requires 67 weights:

(Plot of the impulse response h(n) for n = 0 to 66, sample period T = 1/10000 secs.)

DESIGN 2: Low Pass FIR Filter Impulse Response. The impulse response h(n) = wn of a low pass filter with: cut-off frequency 1000 Hz; passband gain 0dB; stopband attenuation 80dB; transition band 500 Hz; passband ripple 0.1dB and sampling at fs = 10000 Hz. The filter is linear phase and has 67 weights (compare to the above Design 1 which had 31 weights) and therefore an impulse response of duration 67/10000 seconds.

The frequency response of this Design 2 filter can be found by taking the FFT of the digital filter impulse response:
(Plots: the linear magnitude response |H(f)| and the logarithmic magnitude response 20 log|H(f)| of DESIGN 2, from 0 to 5000 Hz.) The 1024 point FFT (zero padded) of the DESIGN 2 low pass filter impulse response. Note that, as specified, the filter roll-off is now steeper, the stopband attenuation is almost 80 dB and the inband ripple is only fractions of a dB.

Therefore low pass, high pass, bandpass, and bandstop digital filters can all be realized by using the formal digital filter design methods that are available in a number of DSP software packages. (Or if you have a great deal of time on your hands you can design them yourself with a paper and pencil and reference to one of the classic DSP textbooks!) There are of course many filter design

trade-offs. For example, as already illustrated above, to design a filter with a fast transition between stopband and passband requires more filter weights than a low pass filter with a slow roll-off in the transition band. However the more filter weights, the higher the computational load on the DSP processor, and the larger the group delay through the filter is likely to be. Care must therefore be taken to ensure that the computational load of the digital filter does not exceed the maximum processing rate of the DSP processor (which can be loosely measured in multiply-accumulates, MACs) being used to implement it. The minimum computation load of DSP processor implementing a digital filter in the time domain is at least:
Computational Load of Digital Filter = ( Sampling Rate × No. of Filter Weights ) MACs

(81)

and is likely to be a factor greater than 1 higher due to the additional overhead of other assembly language instructions to read data in/out, to implement loops etc. Therefore a 100 weight digital filter sampling at 8000 Hz requires a computational load of 800,000 MACs/second (readily achievable in the mid-1990's), whereas a two channel digital audio tape (DAT) system sampling at 48kHz and using stereo digital filters with 1000 weights requires a DSP processor capable of performing almost 100 million MACs per second (verging on the "just about" achievable with late-1990s DSP processor technology). See also Adaptive Filter, Comb Filter, Finite Impulse Response (FIR) Filter, Infinite Impulse Response (IIR) Filter, Group Delay, Linear Phase. Digital Filter Order: The order of a digital filter is specified from the degree of the z-domain polynomial. For example, an N weight FIR filter:

y(k) = w0 x(k) + w1 x(k-1) + ... + wN-1 x(k-N+1)    (82)

can be written as an (N-1)th order z-polynomial:

Y(z) = X(z) [w0 + w1 z^-1 + ... + wN-1 z^-(N-1)] = X(z) z^-(N-1) [w0 z^(N-1) + w1 z^(N-2) + ... + wN-1]    (83)

For an IIR filter, the order of the feedforward and feedback sections of the filter can both be specified. For example an IIR filter with a 0th order feedforward section (i.e. N = 1 above, meaning w0 = 1 and all other weights are 0), and an (M-1)th order feedback section is given by the difference equation:

y(k) = x(k) + b1 y(k-1) + b2 y(k-2) + ... + bM-1 y(k-M+1)    (84)

and the (M-1)th order denominator polynomial is seen from the transfer function:

Y(z)/X(z) = 1 / (1 - b1 z^-1 - ... - bM-2 z^-(M-2) - bM-1 z^-(M-1)) = z^(M-1) / (z^(M-1) - b1 z^(M-2) - ... - bM-2 z - bM-1)    (85)

It is worth noting that for an IIR filter the feedback coefficients are indexed starting at 1, i.e. b1. If a b0 coefficient were added to the above difference equation, it would simply introduce a scaling of the output, y(k). See also Digital Filter, Finite Impulse Response Filter, Infinite Impulse Response Filter.

Digital Soundfield Processing (DSfP): The name given to the artificial addition of echo and reverberation to a digital audio signal. For example a car audio system can add echo and reverberation to the digital music signal prior to it being played through the speakers, thus giving the impression of the acoustics of a large theatre or a stadium. Digital Television: The enabling technologies of digital television are presented in detail in [95], [96]. Digital to Analog Converter (D/A or DAC): A digital to analog converter is a device which will take a stream of digital numbers and convert them to a continuous voltage signal. Every digital to analog converter has an input-output characteristic that specifies the output voltage for a given binary number input. The output of a DAC is very steppy, and will in fact produce frequency components above the sampling frequency. Therefore a reconstruction filter should be used at the output of a DAC to smooth out the steps. Most D/As used in DSP operate using 2's complement arithmetic. See also Reconstruction Filter, Analog to Digital Converter.
(Example of a 5 bit DAC converting a train of two's complement binary values (10000, 10100, 11000, 11001, 00100, 01000, 01100, 01111), i.e. digital values between -16 and +15, into a stepped analog waveform between -2 and +2 volts.)
Digital Video Interactive (DVI): Intel Inc. have produced a proprietary digital video compression technology which is generally known as DVI. Files that are encoded as DVI usually have the suffix ".dvi" (as do LaTeX™ device independent files -- these are different). See also Standards. Diotic: A situation where the aural stimulation reaching both ears is the same. For example, diotic audiometric testing would play exactly the same sounds into both ears. See also Audiometry, Dichotic, Monauralic.


Dirac Impulse or Dirac Delta Function: The continuous time analog of the unit impulse function. See Unit Impulse Function. Direct Broadcast Satellite (DBS): Satellite transmission of television and radio signals may be received directly by a consumer using a (relatively small) parabolic antenna (dish) and a digital tuner. This form of broadcasting is gaining popularity in Europe, Japan, the USA and Australia. Direct Memory Access: Allowing access to read or write RAM without interrupting normal operation of the processor. The TMS320C40 DSP Processor has 6 independent DMA channels that are 8 bits wide and allow access to memory without interrupting the DSP computation operation. See also DSP Processor. Directivity: A measure of the spatial selectivity of an array of sensors, or a single microphone or antenna. Loosely, directivity is the ratio of the gain in the look direction to the average gain in all directions. The higher the directivity, the more concentrated the spatial selectivity of a device is in the look direction compared to all other directions. Mathematically, directivity is defined for a (power) gain function G(θ,φ,f) as:

D(f) = G(0,0,f) / [ (1/4π) ∫_FOV G(θ,φ,f) dΩ ]    (86)

where the look direction (and the maximum of the gain function) is assumed to be θ=0 and φ=0 and the field of view (FOV) is assumed to be Ω = 4π steradians (units of solid angle). Note that the directivity defined above is a function of frequency, f, only. If directivity as a function of frequency, D(f), is averaged (i.e., integrated) over frequency then a single directivity number can be obtained for a wideband system. See also Superdirectivity, Sidelobe, Main Lobe, Endfire. Discrete Cosine Transform (DCT): The DCT is given by the equation:
X(k) = Σ (n = 0 to N-1) x(n) cos(2πkn/N)    for k = 0 to N-1    (87)

The DCT is essentially the discrete Fourier transform (DFT) evaluated only for the real part of the complex exponential:
X(k) = Σ (n = 0 to N-1) x(n) e^(-j2πkn/N)    for k = 0 to N-1    (88)

The DCT is used in a number of speech and image coding algorithms. See also Discrete Fourier Transform. Discrete Fourier Transform: The Fourier transform [57], [58], [93] for continuous signals can be defined as:

x(t) = ∫ (from -∞ to ∞) X(f) e^(j2πft) df    (Synthesis)

X(f) = ∫ (from -∞ to ∞) x(t) e^(-j2πft) dt    (Analysis)    (89)

The Fourier Transform Pair

(Plot: N samples x(n) of an analogue signal, spaced Ts seconds apart and spanning NTs seconds in total.) Sampling an analogue signal, x(t), to produce a discrete time signal, x(nTs), written as x(n). The sampling period is Ts and the sampling frequency is therefore fs = 1/Ts. The total time duration of the N samples is NTs seconds. Just as there exists a continuous time Fourier transform, we can also derive a discrete Fourier transform (DFT) in order to assess what sinusoidal frequency components comprise this signal.

In the case where a signal is sampled at intervals of Ts seconds and is therefore discrete, the Fourier transform analysis equation will become:

X(f) = ∫ (from -∞ to ∞) x(nTs) e^(-j2πfnTs) d(nTs)    (90)

and hence we can write:

X(f) = Σ (n = -∞ to ∞) x(nTs) e^(-j2πfnTs) = Σ (n = -∞ to ∞) x(nTs) e^(-j2πfn/fs)    (91)

To further simplify we can write the discrete time signal simply in terms of its sample number:

X(f) = Σ (n = -∞ to ∞) x(nTs) e^(-j2πfnTs) = Σ (n = -∞ to ∞) x(n) e^(-j2πfn/fs)    (92)

Of course if our signal is causal then the first sample is at n = 0 , and the last sample is at n = N – 1 , giving a total of N samples:

X(f) = Σ (n = 0 to N-1) x(n) e^(-j2πfn/fs)    (93)

By using a finite number of data points this also forces the implicit assumption that our signal is now periodic, with a period of N samples, or NTs seconds (see above figure). Noting that Eq. 93 is actually calculated for a continuous frequency variable, f, in actual fact we need only evaluate this equation at specific frequencies, namely the zero frequency (DC) and harmonics of the "fundamental" frequency, f0 = 1/NTs = fs/N, i.e. the N discrete frequencies 0, f0, 2f0, up to (N-1)f0:

X(k fs / N) = Σ (n = 0 to N-1) x(n) e^(-j2πk fs n / (N fs))    for k = 0 to N-1    (94)

Simplifying to use only the time index, n, and the frequency index, k, gives the discrete Fourier transform:

X(k) = Σ (n = 0 to N-1) x(n) e^(-j2πkn/N)    for k = 0 to N-1    (95)

If we recall that the discrete signal x(n) was sampled at fs, the signal has image (or alias) components above fs/2, and when evaluating Eq. 95 it is only necessary to evaluate up to fs/2. Therefore the DFT is further simplified to:

X(k) = Σ (n = 0 to N-1) x(n) e^(-j2πkn/N)    for k = 0 to N/2    (96)

The Discrete Fourier Transform

Clearly, because we have evaluated the DFT at only N frequencies, the frequency resolution is limited to DFT "bins" of frequency width fs/N Hz. Note that the discrete Fourier transform only requires multiplications and additions, since each complex exponential is computed in its complex number form:

e^(-j2πkn/N) = cos(2πkn/N) - j sin(2πkn/N)    (97)

If the signal x(n) is real valued, then the DFT computation requires approximately N^2 real multiplications and adds (noting that a real value multiplied by a complex value requires two real multiplies). If the signal x(n) is complex then a total of 2N^2 MACs are required (noting that the multiplication of two complex values requires four real multiplications). From the DFT we can calculate a magnitude and a phase response:

X(k) = |X(k)| ∠X(k)    (98)

From a given DFT sequence, we can of course calculate the inverse DFT from:

x(n) = (1/N) Σ (k = 0 to N-1) X(k) e^(j2πnk/N)    (99)
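A direct C evaluation of Eq. 95 and Eq. 99 is sketched below for a real-valued input; it needs on the order of N^2 operations, which is what motivates the fast Fourier transform. The function names are illustrative only.

/* Direct evaluation of the DFT (Eq. 95) and inverse DFT (Eq. 99). */
#include <math.h>

static const double PI = 3.14159265358979323846;

void dft(const double *x, double *Xre, double *Xim, int N)
{
    int k, n;
    for (k = 0; k < N; k++) {
        Xre[k] = 0.0;  Xim[k] = 0.0;
        for (n = 0; n < N; n++) {
            double angle = 2.0 * PI * k * n / N;
            Xre[k] += x[n] * cos(angle);  /* real part of x(n)e^(-j2pi k n/N) */
            Xim[k] -= x[n] * sin(angle);  /* imaginary part */
        }
    }
}

void inverse_dft(const double *Xre, const double *Xim, double *x, int N)
{
    int k, n;
    for (n = 0; n < N; n++) {
        x[n] = 0.0;
        for (k = 0; k < N; k++) {
            double angle = 2.0 * PI * k * n / N;
            /* real part of X(k)e^(+j2pi n k/N), scaled by 1/N */
            x[n] += (Xre[k] * cos(angle) - Xim[k] * sin(angle)) / N;
        }
    }
}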

As an example consider taking the DFT of 128 samples of an 8Hz sine wave sampled at 128 Hz:
(Plots: the time signal x(nTs) over 0 to 1 second, and the DFT magnitude response |X(kf0)| against frequency from 0 to 64 Hz, showing a single peak at 8 Hz.) The time signal shows 128 samples of an 8 Hz sine wave sampled at 128Hz: x(n) = sin(16πn/128). Note that there are exactly an integral number of periods (eight) present over the 128 samples. Taking the DFT exactly identifies the signal as an 8 Hz sinusoid. The DFT magnitude spectrum has an equivalent negative frequency portion which is identical to that of the positive frequencies if the time signal is real valued.


If we take the DFT of the slightly more complex signal consisting of an 8Hz sine wave and a 24Hz sine wave of half the amplitude of the 8Hz component, then:

(Plots: the time signal over 0 to 1 second, and the DFT magnitude response against frequency from 0 to 64 Hz, showing peaks at 8 Hz and 24 Hz.) The time signal shows 128 samples of 8 Hz and 24 Hz sine waves sampled at 128Hz: x(n) = sin(16πn/128) + 0.5 sin(48πn/128). Note that there are exactly an integral number of periods present for both sinusoids over the 128 samples.
Now consider taking the DFT of 128 samples of an 8.5 Hz sine wave sampled at 128 Hz:
(Plots: the time signal over 0 to 1 second, and the DFT magnitude response against frequency from 0 to 64 Hz, showing energy spread around 8 Hz.) The time signal shows 128 samples of an 8.5 Hz sine wave sampled at 128Hz: x(n) = sin(17πn/128). Note that because the 8.5Hz sine wave does not lie exactly on a frequency bin, its energy appears spread over a number of frequency bins around 8Hz.

So why is the signal energy now spread over a number of frequency bins? We can interpret this by recalling that the DFT implicitly assumes that the signal is periodic, and that the N data points being analysed are one full period of the signal. Hence the DFT assumes the signal has the form:

(Sketch: the N samples of the sine wave followed by repeated copies of the same N samples, and so on, forming a periodic signal.)

If there are an integral number of sine wave periods in the N samples input to the DFT computation, then the spectral peaks will fall exactly on one of the frequency bins as shown earlier. Essentially the result produced by the DFT computation has assumed that the signal was periodic, and that the N samples form one period of the signal, and thereafter the period repeats. Hence the DFT assumes the complete signal is as illustrated above (the discrete samples are not shown, for clarity).


If there are not an integral number of periods in the signal (as for the 8.5Hz example), then:

(Sketch: the N samples followed by repeated copies of the same N samples; because the samples do not contain a whole number of periods, a discontinuity appears at each repeat boundary.)

If there are not an integral number of sine wave periods in the N samples input to the DFT computation, then the spectral peaks will not fall exactly on one of the frequency bins. As the DFT computation has assumed that the signal was periodic, the DFT interprets that the signal undergoes a “discontinuity” jump at the end of the N samples. Hence the result of the DFT interprets the time signal as if this discontinuity was part of it. Hence more than one single sine wave is required to produce this waveform and thus a number of frequency bins indicate sine wave components being present.

In order to address the problem of spectral leakage, the DFT is often used in conjunction with a windowing function. See also Basis Function, Discrete Cosine Transform, Discrete Fourier Transform - Redundant Computation, Fast Fourier Transform, Fourier, Fourier Analysis, Fourier Series, Fourier Transform, Frequency Response. Discrete Fourier Transform, Redundant Computation: If we rewrite the form of the DFT in Eq. 96 as:
X(k) = Σ (n = 0 to N-1) x(n) W_N^(-kn)    for k = 0 to N/2    (100)

where W_N = e^(j2π/N).

Therefore to calculate the DFT of a (trivial) signal with 8 samples requires:

X(0) = x(0) + x(1) + x(2) + x(3) + x(4) + x(5) + x(6) + x(7)
X(1) = x(0) + x(1)W_8^(-1) + x(2)W_8^(-2) + x(3)W_8^(-3) + x(4)W_8^(-4) + x(5)W_8^(-5) + x(6)W_8^(-6) + x(7)W_8^(-7)
X(2) = x(0) + x(1)W_8^(-2) + x(2)W_8^(-4) + x(3)W_8^(-6) + x(4)W_8^(-8) + x(5)W_8^(-10) + x(6)W_8^(-12) + x(7)W_8^(-14)
X(3) = x(0) + x(1)W_8^(-3) + x(2)W_8^(-6) + x(3)W_8^(-9) + x(4)W_8^(-12) + x(5)W_8^(-15) + x(6)W_8^(-18) + x(7)W_8^(-21)
                                                                                                         (101)

However note that there is redundant computation in Eq. 101. Consider the third term in the second line of Eq. 101:
x(2)W_8^(-2) = x(2) e^(j2π(-2/8)) = x(2) e^(-jπ/2)    (102)

Now consider the computation of the third term in the fourth line of Eq. 101
x(2)W_8^(-6) = x(2) e^(j2π(-6/8)) = x(2) e^(-j3π/2) = x(2) e^(jπ) e^(-jπ/2) = -x(2) e^(-jπ/2)    (103)
Therefore we can save one multiply operation by noting that the term x(2)W_8^(-6) = -x(2)W_8^(-2). In fact every term in the fourth line of Eq. 101 is available from the terms in the second line of the equation because of the periodicity of W_N^(kn). Hence a considerable saving in multiplicative computations can be achieved. This is the basis of the fast (discrete) Fourier transform discussed under the item Fast Fourier Transform.

Discrete Fourier Transform, Spectral Aliasing: Note that the discrete Fourier transform of a signal x ( n ) is periodic in the frequency domain. If we assume that the signal was real and was sampled above the Nyquist rate f s , then there are no frequency components of interest above f s ⁄ 2 . From the Fourier transform, if we calculate the frequency components up to frequency f s ⁄ 2 then this is equivalent to evaluating the DFT for the first N ⁄ 2 – 1 discrete frequency samples:
X(k) = Σ (n = 0 to N-1) x(n) e^(-j2πkn/N)    for k = 0 to N/2 - 1    (104)

Of course if we evaluate for the next N ⁄ 2 – 1 discrete frequencies (i.e. from f s ⁄ 2 to f s ) then:
X(k) = Σ (n = 0 to N-1) x(n) e^(-j2πkn/N)    for k = N/2 to N - 1    (105)

In Eq. 105, if we substitute the variable i = N - k (i.e. k = N - i) and calculate over the range i = 1 to N/2 (equivalent to the range k = N/2 to N - 1), then:
X(i) = Σ (n = 0 to N-1) x(n) e^(-j2πin/N)    for i = 1 to N/2    (106)

and we can write:

X(N-k) = Σ (n = 0 to N-1) x(n) e^(-j2π(N-k)n/N) = Σ (n = 0 to N-1) x(n) e^(j2πkn/N) e^(-j2πn) = Σ (n = 0 to N-1) x(n) e^(j2πkn/N)    for k = N/2 to N-1    (107)

since e^(-j2πn) = 1 for all integer values of n. For a real valued signal x(n) the final summation in Eq. 107 is the complex conjugate of the DFT of Eq. 95, and therefore:

X(N-k) = X*(k),  and hence  |X(N-k)| = |X(k)|    (108)

Hence when we plot the DFT magnitude it is symmetrical about the N/2 frequency sample, i.e. about the frequency value fs/2 Hz, depending on whether we plot the x-axis as a frequency index or a true frequency value. We can further easily show that if we take a value of the frequency index k above N - 1 (i.e. evaluate the DFT above the frequency fs), then:
X(k + mN) = Σ (n = 0 to N-1) x(n) e^(-j2π(k+mN)n/N) = Σ (n = 0 to N-1) x(n) e^(-j2πkn/N) e^(-j2πmn) = Σ (n = 0 to N-1) x(n) e^(-j2πkn/N) = X(k)    (109)

where m is a positive integer and we note that e^(-j2πmn) = 1. Therefore we can conclude that when evaluating the magnitude response of the DFT the components of specific interest cover the (baseband) frequencies from 0 to fs/2, and the magnitude spectra will be symmetrical about the fs/2 line and periodic with period fs:
(Sketch: the N samples x(n), spanning NTs seconds, are transformed by the discrete Fourier transform into N discrete frequency points spaced 1/NTs Hz apart; the magnitude spectrum |X(k)| is symmetrical about fs/2 and repeats with period fs, i.e. about fs, 2fs, 3fs, ... up to 1/Ts Hz and beyond.)
Spectral aliasing. The main portion of interest of the magnitude response is the “baseband” from 0 to f s ⁄ 2 Hz. The “baseband” spectra is symmetrical about the point f s ⁄ 2 and thereafter periodic with period f s Hz.

See also Discrete Fourier Transform, Fast Fourier Transform, Fast Fourier Transform - Zero Padding, Fourier Analysis, Fourier Series, Fourier Transform.

Discrete Time: After an analog signal has been sampled at regular intervals, each sample corresponds to the signal magnitude at a particular discrete time. If the sampling period is τ secs, then sampling a continuous time analog signal

x(t)    (110)

every τ seconds would produce the samples

x_n = x(n) = x(nτ), for n = 0, 1, 2, 3, …    (111)

For notational convenience the τ is usually dropped, and only the discrete time index, n, is used. Of course, any letter can be used to denote the discrete time index, although the most common are: “n”, “k” and “i”.
(Plots: the analog signal x(t) before sampling, against time t in seconds, and the digital signal x(n) after sampling, against discrete time n.) Sampling a signal x(t) at 1000Hz. The sampling interval is therefore τ = 1/1000 seconds. The sampled signal is denoted as x(n), where the explicit reference to τ has been dropped for notational convenience.

Distortion: If the output of a system differs from the input in a non-linear fashion then distortion has occurred. For example, if a signal is clipped by a DSP system then the output is said to be distorted. By the very nature of non-linear functions, a distorted signal will contain frequency components that were not present in the input signal. Distortion is also sometimes used to describe linear frequency shaping. See also Total Harmonic Distortion. Distribution Function: See Random Variable. Dithering (audio): Dithering is a technique whereby a very low level of noise is added to a signal in order to improve the quality of the psychoacoustically perceived sound. Although the addition of dithering noise to a signal clearly reduces the signal to noise ratio (SNR) because it actually adds more noise to the original signal, the overall sound is likely to be improved by breaking up the correlation between the various signal components and quantization error (which, without dithering, results in the quantization noise being manifested as harmonic or tonal distortion).

One form of dithering adds a white noise dither signal, d(t), with a power of q^2/12, where q is the quantization level of the analog to digital converter (ADC), to the audio signal, x(t), prior to conversion:

(Block diagram: the input signal x(t) and the dither signal d(t) are summed and applied to the analog to digital converter (ADC), producing the dithered sampled output signal y(k).)

Note that without dithering, the quantization noise power introduced by the ADC is q^2/12, and therefore after dithering, the noise power in the digital signal is q^2/6, i.e. the noise power has doubled or increased by 3dB (10 log 2). However the dithered output signal will have decorrelated the quantization error of the ADC and the input signal, thus reducing the harmonic distortion components. This reduction improves the perceived sound quality. The following example illustrates dithering. A 600Hz sine wave of amplitude 6.104 × 10^-5 (= 2/32767) volts was sampled at 48000Hz with a 16 bit ADC which had the following input/output characteristic:
(Plot: the 16 bit ADC maps input voltages between -1 and +1 volts to two's complement binary outputs between -32768 and +32767.) 16 bit Analogue to Digital Converter Input/Output Characteristic.

After analog to digital conversion (with d ( t ) = 0 , i.e. no dithering) the digital output has an amplitude of 2. On a full scale logarithmic plot, 2 corresponds to -84 dB ( = 20 log ( 2 ⁄ 32767 ) ) where

the full scale amplitude of 32767 (= 2^15 - 1) is 0dB. Time and frequency representations of the output of the ADC are shown below, along with a 16384 point FFT of the ADC output:

(Plots: the ADC output amplitude x(n) against time in ms, and the magnitude |X(f)| in dB against frequency in kHz.) The frequency representation of the 600Hz sine wave clearly shows that the quantization noise manifests itself as harmonic distortion. Therefore when this signal is reconverted to analog and replayed, the harmonic distortion may be audible.

The magnitude frequency spectrum of the (undithered) signal clearly highlights the tonal distortion components which result from the conversion of this low level signal. The main distortion components are at 1800Hz, 3000Hz, 4200Hz, and so on, (i.e. at 3, 5, 7,..., times the signal’s fundamental frequency of 600 Hz). However if the signal was first dithered by adding an analog white noise dithering signal, d ( t ) of power q 2 ⁄ 12 prior to ADC conversion then the time and frequency representations of the ADC output are:

(Plots: the dithered ADC output amplitude y(n) against time in ms, and the magnitude |Y(f)| in dB against frequency in kHz.) The frequency representation of the dithered 600Hz sine wave clearly shows that the correlation between the signal and the quantization error has been removed. Therefore if the signal is reconverted to analog and replayed then the quantization noise is now effectively whitened and harmonic distortion of the signal is no longer perceived.

Note that the magnitude frequency spectrum of the dithered signal has a higher average noise floor, but the tonal nature of the quantization noise has been removed. This dithered signal is more

perceptually tolerable to listen to, as the background white noise is less perceptually annoying than the harmonic noise generated without dithering. Note that a common misconception is that dithering can be used to improve the quality of prerecorded 16 bit hi-fidelity audio signals. There are, however, no techniques by which a 16 bit CD output can be dithered to remove or reduce harmonic distortion other than adding levels of noise to mask it! It may appear in the previous figure as if simply perturbing the quantized values would be a relatively simple and effective dithering technique. There are a number of important differences between dithering before and after the quantizer. First, after the quantizer the noise is simply additive and the spectra of the dither and the harmonically distorted signal add (this is the masking of the harmonic distortion referred to above -- requiring a relatively high power dither). Additive dithering before quantization does not result in additive spectra because the quantization is nonlinear. Another difference can be thought of this way: the dither signal is much more likely to cause a change in the quantized level when the input analog signal is close to a quantization boundary (i.e., it does not have to move the signal value very far). After quantization, we have no way of knowing (in the general case) how close an input signal was to a quantization boundary -- so mimicking the dither effect is not, in general, possible. However if a master 20 bit (or higher) resolution recording exists and it is to be remastered to 16 bits, then digital dithering is appropriate, whereby the 20 bit signal can be dithered prior to requantizing to 16 bits. The benefits will be similar to those described above for ADCs. Some simple mathematical analysis of the benefits of dithering for breaking up the correlation between the signal and the quantization noise can be done. The following figure shows the correlation between a sine wave input signal and the quantization error for 1 to 8 bits of signal resolution:
(Plots: the correlation coefficient between signal and quantization error, and the SNR in dB, against the number of bits of signal resolution (1 to 8), each shown with no dither and with single bit dither.) For low resolution signals the correlation between the signal and quantization error is high. This will be heard as tonal or harmonic distortion. However if a simple dithering scheme is applied prior to analog to digital conversion the correlation can be greatly reduced.

For less than 8 bits resolution the correlation between the signal and quantization noise increases to 0.4 and the signal will sound very (harmonically) distorted. The solid line shows the correlation and signal to noise ratio (SNR) of the signal before and after dither has been added. Clearly the dither is successful at breaking up the correlation between signal and quantization noise and the benefits are greatest for low resolutions. However the total quantization noise in the digital signal after dithering is increased by 3dB for all bit resolutions.

A uniformly distributed probability density function (PDF) with a maximum amplitude of half a bit (±q/2) is often used for dithering. Adding a single half bit dither signal successfully decorrelates the expected error, however the second moment of the error remains correlated. To decorrelate the second order moment a second uniformly distributed signal can be added. Higher order moments can be decorrelated by adding additional uniform dither signals, however it is found in practice that two uniform random variables (combining to give a triangular probability density function) are sufficient. The effect of adding two random variables with uniform PDFs p(x) is equivalent to adding a single random signal with a triangular PDF (TPDF):

-q/2 p(x )

q/2 d 1 -q d1 ( t )

q y

-q/2

q/2 d 2

When two uniformly distributed random variables d 1 and d 2 , are added together, the probability density function (PDF) of the result, y is a random variable with a triangular PDF (TPDF) obtained by a convolution of the PDFs of d 1 and d 2 .
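A minimal C sketch of TPDF dithering ahead of requantization is given below. It assumes a quantization step q and uses the standard library rand() purely for illustration; a real audio implementation would use a better random number generator.

/* TPDF dither: two independent uniform values on [-q/2, +q/2] are summed
   (giving a triangular PDF of peak amplitude q) and added before rounding
   to the nearest quantization level q. */
#include <stdlib.h>
#include <math.h>

double uniform_dither(double q)            /* uniform on [-q/2, +q/2] */
{
    return q * ((double)rand() / RAND_MAX - 0.5);
}

double quantize_with_tpdf_dither(double x, double q)
{
    double d = uniform_dither(q) + uniform_dither(q);   /* TPDF dither */
    return q * floor((x + d) / q + 0.5);                 /* quantize to nearest q */
}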

The noise power added to the output signal by one uniform PDF dither is q^2/12, and therefore with two of these dithering signals q^2/6 of noise power is added to the output signal. Noting that the quantization noise power of the ADC is q^2/12, the total noise power of an audio signal dithered with a TPDF is q^2/4, i.e. the total noise power in the output signal has increased by a factor of 3, or by 4.8 dB (10 log 3), over the noise power from the ADC being used without dither. Despite this increase in total noise, the noise power is now more uniformly distributed over frequency (i.e., more white and sounding like a broadband hissing) and the harmonic distortion components caused by correlation between quantization error and the input signal have been effectively attenuated. In order to mathematically illustrate why dither works, an extreme case of low bit resolution will be addressed. For a single bit ADC (stochastic conversion) the quantizer is effectively reduced to a comparator where:

x(n) = sign(v(n)) = 1 if v(n) ≥ 0, and -1 if v(n) < 0    (112)

For a constant (DC) input signal v(t) = V0, then x(n) = 1 if V0 > 0, regardless of the exact magnitude. However a dither signal d(n) with a uniform probability density function over the values -Q/2 to Q/2 can be added before performing the conversion, such that:

x(n) = 1 if v(n) + d(n) ≥ 0, and -1 if v(n) + d(n) < 0    (113)

and taking the mean (expected) value of x(n) gives:

E[x(n)] = E[sign(v(n) + d(n))] = E[sign(n′(n))]    (114)

where n′(n) is a random variable with a uniform distribution over the values V0 - Q/2 to V0 + Q/2. We can therefore show that the expected or mean value of the dithered quantizer output is:

E[x(n)] = (-1) ∫ (from V0 - Q/2 to 0) (1/Q) dn′ + (+1) ∫ (from 0 to V0 + Q/2) (1/Q) dn′

        = -(1/Q)(Q/2 - V0) + (1/Q)(V0 + Q/2) = (2/Q) V0    (115)
Therefore in the mean, the dithered quantizer output is proportional to V0. The same intuitive argument can be seen for time varying x(n), as long as the sampling rate is sufficiently fast compared to the changes in the signal. Dither can be further addressed with oversampling techniques to perform noise shaped dithering. See also Analog to Digital Conversion, Digital to Analog Conversion, Digital Audio, Noise Shaping, Tonal Distortion. Divergence: When an algorithm does not converge to a stable solution and instead progresses ever further away from a solution it may be said to be diverging. See also the Convergence entry. Divide and Conquer: The name given to the general problem solving strategy of first dividing the overall problem into a series of smaller sub-problems, solving these subproblems, and finally using the solutions to the subproblems to give the overall solution. Some people also use this as an approach to competing against external groups or managing people within their own organization. Division: Division is rarely required by real time DSP algorithms such as filtering, FFTs, correlation, adaptive algorithms and so on. Therefore DSP processors do not provide a provision for performing fast division, in the same way that single cycle parallel multipliers are provided. Therefore division is usually performed using a serial algorithm producing a bit at a time result, or using an iterative technique such as Newton-Raphson. Processors such as the DSP56002 can perform a fixed point division in around 12 clock cycles. It is worth pointing out however that some DSP algorithms such as the QR for adaptive signal processing have excellent convergence and stability properties and do require division. Therefore it is possible that in the future some DSP devices may incorporate fast divide and square roots to allow these techniques to be implemented in real time. See also DSP Processor, Parallel Adder, Parallel Multiplier. Dosemeter: See Noise Dosemeter. Dot Product: See Vector Properties - Inner Product. Downsampling: The sampling rate of a digital signal sampled at fs can be downsampled by a factor of M to a sampling frequency fd = fs/M by retaining only every M-th sample. Downsampling can lead to aliasing problems and should be performed in conjunction with a low pass filter that cuts-

off at fs/2M; this combination is usually referred to as a decimator. See also Aliasing, Upsampling, Decimation, Interpolation, Fractional Sampling Rate Conversion.
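A minimal C sketch of the decimator just described is shown below; it assumes the fir_filter() routine sketched in the Digital Filter entry is available for the anti-alias filter, whose weights w[] must be designed separately (e.g. with a filter design package).

/* Decimator: anti-alias low pass filter (cut-off fs/(2M)) followed by
   downsampling by M (keep only every M-th filtered sample). */
double fir_filter(double x_k, const double *w, double *state, int N);

int decimate(const double *x, int num_in, double *y, int M,
             const double *w, double *state, int N)
{
    int k, num_out = 0;
    for (k = 0; k < num_in; k++) {
        double filtered = fir_filter(x[k], w, state, N); /* anti-alias filter */
        if (k % M == 0)                                  /* keep every M-th sample */
            y[num_out++] = filtered;
    }
    return num_out;   /* number of output samples, approx. num_in / M */
}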

(Sketch: the input x(k) at sampling rate fs = 1/ts passes through a downsampler to give the output y(k) at fd = fs/M = 1/td; the input magnitude spectrum |X(f)| is shown up to fs, and the output spectrum |Y(f)| repeats at multiples of fd.)
Dr. Bub: The electronic bulletin board operated by Motorola and providing public domain source code, and Motorola DSP related information and announcements. Driver: The power output from a DAC is usually insufficient to drive an actuator such as a loudspeaker. Although the voltage may be at the correct level, the DAC cannot source enough current to deliver the required power. Therefore a driver in the form of an amplifier is required. See also Signal Conditioning.
(Block diagram: DSP Processor → DAC → Driver Amplifier.)

DSP Board: A DSP board is a generic name for a printed circuit board (PCB) which has a DSP processor, memory, A/D and D/A capabilities, and digital input ports (parallel and serial). For development work most DSP boards are plug-in modules for computers such as the IBM-PC, and Macintosh. The computer is used as a host to allow assembly language programs to be conveniently developed and tested using assemblers and cross compilers. When an application

has been fully developed, a stand-alone DSP board can be realized. See also Daughter Module, DSP Processor, Motherboard.

(Block diagram of a DSP board: the DSP processor connects via address and data buses to ROM, RAM, an interface to the host computer, an analog to digital converter (voltage input), a digital to analog converter (voltage output), and parallel and serial I/O.)

DSP Processor: A microprocessor that has been designed for implementing DSP algorithms. The main features of these chips are fast interrupt response times, a single cycle parallel multiplier, and a subset of the assembly language instructions found on a general purpose microprocessor (e.g. Motorola 68030) to save on silicon area and optimize DSP type instructions. The main DSP processors are the families of the DSP56/96 (Motorola), TMS320 (Texas Instruments), ADSP 2100 (Analog Devices), and DSP16/32 (AT&T). DSP Processors are either floating point or fixed point devices. See also DSP Board.
(Block diagram: data and address registers, parallel multiplier, arithmetic logic unit, instruction decoder, interrupt handler, timers, RAM, ROM and EPROM, interconnected by address, data and control buses.) A Generic DSP Processor.

DSPLINKTM: A bidirectional and parallel 16 bit data interface path used on Loughborough Sound Images Ltd. (UK) and Spectron (USA) DSP boards to allow high speed communication between separate DSP boards and peripheral boards. The use of DSPLINK means that data between separate boards in a PC do not need to communicate data via the PC bus. Dual: A prefix to mean “two of”. For example the Burr Brown DAC2814 chip is described as a Dual 12 Bit Digital to Analog Converter (DAC) meaning that the chip has two separate (or independent) DACs. In the case of DACs and ADCs, if the device is used for hi-fidelity audio dual devices are often referred to as stereo. See also Quad. Dual Slope: A type of A/D converter.

Dual Tone Multifrequency (DTMF): DTMF is the basis of operation of push button tone dialing telephones. Each button on a touch tone telephone is a combination of two frequencies, one from each of two groups of four. 4 × 4 = 16 possible combinations of tone pairs can be encoded using the two groups of four tones. The two groups of four frequencies are: (low) 697Hz, 770Hz, 852Hz, 941Hz, and (high) 1209Hz, 1336Hz, 1477Hz, and 1633Hz:
           1209 Hz   1336 Hz   1477 Hz   1633 Hz
  697 Hz      1         2         3         A
  770 Hz      4         5         6         B
  852 Hz      7         8         9         C
  941 Hz      *         0         #         D

Each button on the keypad is a combination of two DTMF frequencies. (Note most telephones do not have keys A, B, C, D.)
The standards for DTMF signal generation and detection are given in the ITU (International Telecommunication Union) standards Q.23 and Q.24. In current telephone systems, virtually every telephone now uses DTMF signalling to allow transmission of a 16 character alphabet for applications such as number dialing, data entry, voice mail access, password entry and so on. The DTMF specifications commonly adopted are: Signal Frequencies:
• Low Group 697, 770, 852, 941 Hz • High Group: 1209, 1336, 1477, 1633 Hz

Frequency tolerance:
• Operation: ≤ 1.5%

Power levels per frequency:
• Operation: 0 to -25dBm • Non-operation: -55dBm max

Power level difference between frequencies
• +4dB to -8dB

Signal Reception timing:
• Signal duration: operation: 40ms (min) • Signal duration: non-operation: 23ms (max) • Pause duration: 40ms (min); • Signal interruption: 10ms (max); • Signalling velocity: 93 ms/digit (min).

See also Dual Tone Multifrequency - Tone Detection, Dual Tone Multifrequency - Tone Generation, Goertzel's Algorithm. Dual Tone Multifrequency (DTMF), Tone Generation: One method to generate a tone is to use a sine wave look up table. For example some members of the Motorola DSP56000 series of processors include a ROM encoded 256 element sine wave table which can be used for this purpose. Noting that each DTMF signal is a sum of two tones, it should be possible to use a look up table at different sampling rates to produce a DTMF tone. An easier method is to design a "marginally stable" IIR (infinite impulse response) filter whereby the poles of the filter are on the unit circle and the filter impulse response is a sinusoid at the desired frequency. This method of tone generation requires only a few lines of DSP code, and avoids the requirement for "expensive" look-up tables. The structure of an IIR filter suitable for tone generation is simply:

(Signal flow graph: an impulse input x(k) drives a two-pole recursive structure whose feedback weights b1 and -1 act on y(k-1) and y(k-2); the output y(k) is a sustained sinusoid.)

A two pole “marginally stable” IIR filter. For an input of an impulse the filter begins to oscillate.

The operation of this 2 pole filter can be analysed by considering the z-domain representation. The discrete time equation for this filter is:

y(k) = x(k) + Σ (n = 1 to 2) bn y(k-n) = x(k) + b y(k-1) - y(k-2)    (116)

where we now write b1 = b and b2 = -1. Writing this in the z-domain gives:

Y(z) = X(z) + b z^-1 Y(z) - z^-2 Y(z)    (117)

The transfer function, H(z), is therefore:

H(z) = Y(z)/X(z) = 1 / (1 - b z^-1 + z^-2) = 1 / [(1 - p1 z^-1)(1 - p2 z^-1)] = 1 / [1 - (p1 + p2) z^-1 + p1 p2 z^-2]    (118)

where p1 and p2 are the poles of the filter, and b = p1 + p2 and p1 p2 = 1. The poles of the filter, p1,2 (where the notation p1,2 means p1 and p2) can be calculated from the quadratic formula as:
p1,2 = (b ± √(b^2 - 4)) / 2 = (b ± j√(4 - b^2)) / 2    (119)

Given that b is a real value, p1 and p2 are complex conjugates. Rewriting Eq. 119 in polar form gives:

p1,2 = e^(±j tan^-1(√(4 - b^2)/b))    (120)

Considering the denominator polynomial of Eq. 118, the magnitude of the complex conjugate values p1 and p2 are necessarily both 1, and the poles will lie on the unit circle. In terms of the frequency placement of the poles, noting that this is given by:

p1,2 = 1 · e^(±j2πf/fs)    (121)

(where |e^(jω)| = 1 for any ω) for a sampling frequency fs, from Eqs. 121 and 120 it follows that:

tan^-1(√(4 - b^2)/b) = 2πf/fs    (122)
For most telecommunication systems the sampling frequency is fs = 8000Hz. The values of b for the various desired DTMF frequencies of oscillation can therefore be calculated from Eq. 122 to be:

frequency, f / Hz        b
697                      1.707737809
770                      1.645281036
852                      1.568686984
941                      1.478204568
1209                     1.164104023
1336                     0.996370211
1477                     0.798618389
1633                     0.568532707
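Solving Eq. 122 for b gives b = 2 cos(2πf/fs), and the short C sketch below reproduces the table above for fs = 8000 Hz (values agree with the table to the precision shown).

/* Compute the feedback coefficient b = 2*cos(2*pi*f/fs) for each DTMF tone. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    const double fs = 8000.0;
    const double freqs[8] = { 697, 770, 852, 941, 1209, 1336, 1477, 1633 };
    int i;
    for (i = 0; i < 8; i++)
        printf("%6.0f Hz : b = %.9f\n", freqs[i],
               2.0 * cos(2.0 * PI * freqs[i] / fs));
    return 0;
}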

For example, in order to generate the DTMF signal for the digit #1, it is required to produce two tones, one at 697 Hz and one at 1209 Hz. This can be accomplished by using the IIR filter :

(Signal flow graph: an impulse input x(k) excites two "marginally stable" two-pole IIR sections, with feedback weights 1.707737... and -1 (697 Hz) and 1.164104... and -1 (1209 Hz); their outputs are summed to give the dual tone output y(k).)

An IIR filter to produce the DTMF signal for the digit #1. The filter consists of two "marginally stable" two pole IIR filters producing the 697 Hz tone (top) and the 1209 Hz tone (bottom) added together. Note that the filters will have different magnitude responses and therefore the two tones are unlikely to have the same amplitude. The ITU standard allows for this amplitude difference.
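A C sketch of this dual-resonator generator for digit 1 is given below, using the recursion y(k) = x(k) + b y(k-1) - y(k-2) of Eq. 116 with an impulse input; function and variable names are illustrative only.

/* Generate the DTMF tone pair for digit 1 (697 Hz + 1209 Hz) at fs = 8000 Hz
   using two marginally stable two-pole IIR resonators. */
#include <math.h>

void dtmf_digit1(double *out, int num_samples)
{
    const double PI = 3.14159265358979323846;
    const double fs = 8000.0;
    double b_lo = 2.0 * cos(2.0 * PI *  697.0 / fs);   /* 1.70773... */
    double b_hi = 2.0 * cos(2.0 * PI * 1209.0 / fs);   /* 1.16410... */
    double lo1 = 0.0, lo2 = 0.0, hi1 = 0.0, hi2 = 0.0; /* y(k-1), y(k-2) */
    int k;

    for (k = 0; k < num_samples; k++) {
        double x  = (k == 0) ? 1.0 : 0.0;               /* impulse input */
        double lo = x + b_lo * lo1 - lo2;               /* 697 Hz resonator */
        double hi = x + b_hi * hi1 - hi2;               /* 1209 Hz resonator */
        lo2 = lo1; lo1 = lo;
        hi2 = hi1; hi1 = hi;
        out[k] = lo + hi;                               /* dual tone output */
    }
}

As noted in the caption, the two resonators have different impulse response amplitudes, so in practice the two tones may be scaled to meet the ITU power level difference requirement.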

See also Dual Tone Multifrequency (DTMF) - Tone Detection, Dual Tone Multifrequency (DTMF) Tone Detection, Goertzel’s Algorithm. Dual Tone Multifrequency (DTMF), Tone Detection: DTMF tones can be detected by performing a discrete Fourier transform (DFT), and considering the level of power that is present in a particular frequency bin. Because DTMF tones are often used in situations where speech may also be present, it is important that any detection scheme used can distinguish between a tone and a speech signal that happens to have strong tonal components at a DTMF frequency. Therefore for a DTMF tone at f Hz, a detection scheme should check for the signal component at f Hz and also check that there is no discernable component at 2f Hz; quasi-periodic speech components (such as vowel sounds) are rich in (even) harmonics, whereas DTMF tones are not. The number of samples used in calculating the DFT should be shorter than the number of samples in half of a DTMF signalling interval, typically of 50ms duration equivalent to 400 samples at a sampling frequency of f s = 8000 Hz , but be large enough to give a good frequency resolution. The DTMF standards of the International Telecommunication Union (ITU) therefore suggest a value of 205 samples in standards Q.23 and Q.24. Using this 205 point DFT the DTMF fundamental and the second harmonics of the 8 possible tones can be successfully discerned. Simple decision logic is applied to the DFT output to specify which tone is present. The second harmonic is also detected in order that the tones can be discriminated from speech utterances that happen to include a frequency component at one of the 8 frequencies. Speech can have very strong harmonic content, whereas the DTMF tone will not. To add robustness against noise, the same DTMF tones require to be detected in a row to give a valid DTMF signal .

If a 205 point DFT is used, then the frequency resolution will be:

Frequency Resolution = 8000 / 205 = 39.02 Hz    (123)

The DTMF tones therefore do not all lie exactly on the frequency bins. For example the tone at 770 Hz will be detected at the frequency bin of 780 Hz (20 × 39.02 Hz). In general the frequency bin, k, in which to look for a single tone can be calculated as the nearest integer to:

k = f_tone N / fs    (124)

where f_tone is a DTMF frequency, N = 205 and fs = 8000 Hz. The bins for all of the DTMF tones for these parameters are therefore:

frequency, f / Hz    697   770   852   941   1209   1336   1477   1633
bin                   18    20    22    24     31     34     38     42
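The quantity a detector needs from each of the bins above is the signal power in that single DFT bin. The C sketch below evaluates it directly for clarity; Goertzel's algorithm (discussed next and in its own entry) computes exactly this quantity with a cheaper recursion.

/* Power in DFT bin k of an N = 205 sample frame (fs = 8000 Hz). */
#include <math.h>

double bin_power(const double *x, int N, int k)
{
    const double PI = 3.14159265358979323846;
    double re = 0.0, im = 0.0;
    int n;
    for (n = 0; n < N; n++) {
        re += x[n] * cos(2.0 * PI * k * n / N);
        im -= x[n] * sin(2.0 * PI * k * n / N);
    }
    return re * re + im * im;   /* |X(k)|^2 */
}
/* e.g. a 697 Hz tone is checked at bin 18, and its second harmonic at bin 36. */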

When the 2nd harmonic of a DTMF frequency is to be considered, then the bin at twice the fundamental frequency bin value is checked (there should be no appreciable signal power there for a DTMF frequency). When calculating the DFT for DTMF detection, because we are only interested in certain frequencies it is only necessary to calculate the frequency components at the frequency bins of interest. Therefore an efficient algorithm based on the DFT called Goertzel's algorithm is usually used for DTMF tone detection. See also Dual Tone Multifrequency, Dual Tone Multifrequency - Tone Generation, Goertzel's Algorithm. Dynamic Link Library: A library of compiled software routines in a separate file on disk that can be called by a Microsoft Windows program. Dynamic RAM (DRAM): Random access memory that needs to be periodically refreshed (electrically recharged) so that information that is stored electrically is not lost. See also Non-volatile RAM, Static RAM. Dynamic Range: Dynamic range specifies the numerical range, giving an indication of the largest and smallest values that can be correctly represented by a DSP system. For example if 16 bits are used in a system then the linear (amplitude) dynamic range is -2^15 → 2^15 - 1 (-32768 to +32767). Usually dynamic range is given in decibels (dB) calculated from 20 log10 (Linear Range), e.g. for 16 bits 20 log10(2^16) ≈ 96dB.


E

e: The natural logarithm base, e = 2.7182818… . e can be derived by taking the following limit:

e ≡ lim (n → ∞) (1 + 1/n)^n    (125)

See also Exponential Function. Ear: The ear is basically the system of flesh, bone, nerves and brain allowing mammals to perceive and react to sound. It is probably fair to say that a very large percentage of DSP deals with the processing, coding and reproduction of audio signals for presentation to the human ear.
(Diagram labels: pinna, auditory canal, eardrum, inner ear bones, semicircular canals, cochlea, cochlear nerves to the brain.)

A Simplified Diagram of the Human Ear The human ear can be generally described as consisting of three parts, the outer, middle and inner ear. The outer ear consists of the pinna and the ear canal. The shape of the external ear has evolved such that is has good sensitivity to frequencies in the range 2 - 4kHz. Its complex shape provides a number of diffracted and reflected acoustic paths into the middle ear which will modify the spectrum of the arriving sound. As a result a single ear can actually discriminate direction of arrival of broadband sounds. The ear canal leads to the ear drum (tympanic membrane) which can flex in response to sound. Sound is then mechanically conducted to the inner ear interconnection of bones (the ossicles), the malleus (hammer), the incus (anvil) and the stapes (stirrup) which act as an impedance matching network (with the ear drum and the oval window of the cochlea) to improve the transmission of acoustic energy to the inner ear. Muscular suppression of the ossicle movement provides for additional compression of very loud sounds. The inner ear consists mainly of the cochlea and the vestibular system which includes the semicircular canals (these are primarily used for balance). The cochlea is a fluid filled snail-shell shaped organ that is divided along its length by two membranes. Hair cells attached to the basilar membrane detect the displacement of the membrane along the distance from the oval window to the end of the cochlea. Different frequencies are mapped to different spots along the basilar membrane. The further the distance from the oval window, the lower the frequency. The basilar membrane and its associated components can be viewed as acting like a series of bandpass filters


sending information to the brain to interpret [30]. In addition, the output of these filters is logarithmically compressed. The combination of the middle and inner ear mechanics allows signals to be processed over the amazing dynamic range of 120dB. See also Audiology, Audiometer, Audiometry, Auditory Filters, Hearing Impairment, Threshold of Hearing.

EBCDIC: See also ASCII.

Echo: When a sound is reflected off a nearby wall or object, this reflection is called an echo. Subsequent echoes (of echoes), as would be clearly heard in a large, empty room, are referred to collectively as reverberations. Echoes also occur on telecommunication systems where impedance mismatches reflect a signal back to the transmitter. Echoes can sometimes be heard on long distance telephone calls. See also Echo Cancellation, Reverberation.

Echo Cancellation: An echo canceller can be realised [53] with an adaptive signal processing system identification architecture. For example if a telephone line is causing an echo then by incorporating an adaptive echo canceller it should be possible to attenuate this echo:
[Figure: A simple adaptive echo canceller. The input signal A passes through an echo "generator" (e.g. a hybrid telephone connection) to speaker B; an adaptive filter driven by A produces a simulated echo of A which (via DAC/ADC conversion) is subtracted from the returned signal (B + echo of A) to form the output signal. The success of the cancellation will depend on the statistics and relative powers of the signals A and B.]

When speaker A (or data source A) sends information down the telephone line, mismatches in the telephone hybrids can cause echoes to occur. Therefore speaker A will hear an echo of their own voice which can be particularly annoying if the echo path from the near and far end hybrids is particularly long. (Some echo to the earpiece is often desirable for telephone conversation, and the local hybrid is deliberately mismatched. However for data transmission echo is very undesirable and must be removed.) If the echo generating path can be suitably modelled with an adaptive filter, then a negative simulated echo can be added to cancel out the signal A echo. At the other end of the line, telephone user B can also have an echo canceller. In general local echo cancellation (where the adaptive echo canceller is inside the consumer’s telephone/data communication equipment) is only used for data transmission and not speech. Minimum specifications for the ITU V-series of recommendations can be found in the CCITT Blue Book. For V32 modems (9600 bits/sec with Trellis code modulation) an echo reduction ratio of 52dB is required. This is a power reduction of around 160,000 in the echo. Hence the requirement for a powerful DSP processor. For long distance telephone calls where the round trip echo delay is more than 0.1 seconds and suppressed by less than 40dB (this is typical via satellite or undersea cables) line echo on speech

can be a particularly annoying problem. Before adaptive echo cancellers were cost effective to implement, the echo problem was solved by setting up speech detectors and allowing speech to be half duplex. This was inconvenient for speakers who were required to take turns speaking. Adaptive echo cancellers at telephone exchanges have helped to solve this problem. The set up of the telephone exchange echo cancellers is a little different from the above example and the echo is cancelled on the outgoing signal line, rather than the incoming signal line. See also Acoustic Echo Cancellation, Adaptive Filtering, Least Mean Squares Algorithm.

Eigenanalysis: See Matrix Decompositions - Eigenanalysis.

Eigenvalue: See Matrix Decompositions - Eigenanalysis.

Eigenvector: See Matrix Decompositions - Eigenanalysis.

Eight to Fourteen Modulation (EFM): EFM is used in compact disc (CD) players to convert 8 bit symbols to a 14 bit word using a look-up table [33]. When the 14 bit words are used fewer 1-0 and 0-1 transitions are needed than would be the case with the 8 bit words. In addition, the presence of the transitions is guaranteed. This allows required synchronization information to be placed on the disc for every possible data set. In addition, the forced presence of zeros allows the transitions (ones) to occur less frequently than would otherwise be the case. This increases the playing time since more bits can be put on a disk with a fixed minimum feature size (i.e., pit size). See also Compact Disc.

Electrocardiogram (ECG): The general name given to the electrical potentials of the heart sensed by electrodes placed externally on the body (i.e., surface leads) [48]. These potentials can also be sensed by placing electrodes directly on the heart as is done with implantable devices (sometimes referred to as pacemakers). The bandwidth used for a typical clinical ECG signal is about 0.05-100Hz. The peak amplitude of a sensed ECG signal is about 1 mV and for use in a DSP system the ECG will typically need to be amplified by a low noise amplifier with a gain of about 1000 or more.

[Figure: Example ECG waveform, amplitude (mV, roughly 0 to 0.4) plotted against time (secs).]

Electroencephalogram (EEG): The EEG measures small microvolt potentials induced by the brain that are picked up by electrodes placed on the head [48]. The frequency range of interest is about 0.5-60Hz. A number of companies are now making multichannel DSP acquisition boards for recording EEGs at sampling rates of a few hundred Hertz. Electromagnetic Interference (EMI): Unwanted electromagnetic radiation resulting from energy sources that interfere with or modulate desired electrical signals within a system. Electromagnetic Compatibility (EMC): With the proliferation of electronic circuit boards in virtually every walk of life particular care must be taken at the design stage to avoid the electronics


acting as a transmitter of high frequency electromagnetic waves. In general a strip of wire with a high frequency current passing through it can act as an antenna and transmit radio waves. The harmonic content from a simple clock in a simple microprocessor system can easily give off radio signals that may interfere with nearby radio communications devices, or other electronic circuitry. A number of EMC regulations have recently been introduced to guard against unwanted radio wave emissions from electronic systems.

Electromagnetic Spectrum: Electromagnetic waves travel through space at approximately 3 × 10^8 m/s, i.e. the speed of light. In fact, light is a form of electromagnetic radiation for which we have evolved sensors (eyes). The various broadcasting bands are classified as very low (VLF), low (LF), medium (MF), high (HF), very high (VHF), ultra high (UHF), super high (SHF), and extremely high frequencies (EHF). One of the most familiar bands in everyday life is VHF (very high) used by FM radio stations.
[Figure: The electromagnetic spectrum and broadcasting bands: VLF (3-30 kHz), LF (30-300 kHz), MF (300 kHz - 3 MHz, AM radio), HF (3-30 MHz), VHF (30-300 MHz, FM radio), UHF (300 MHz - 3 GHz), SHF (3-30 GHz, satellite), EHF (30-300 GHz), followed by infrared and visible light.]
Electromyogram (EMG): Signals sensed by electrodes placed inside muscles of the body. The frequency range of interest is 10-200Hz.

Electroreception: Electroreception is a means by which fish, animals and birds use electric fields for navigation or communication. There are two types of electric fish: "strongly electric", such as the electric eel, which can use their electrical energy as a defense mechanism, and; "weakly electric", which applies to many common sea and freshwater fish who use electrical energy for navigation and perhaps even communication [151]. Weakly electric fish can have one of two differing patterns of electric discharge: (1) Continuous wave, where a tone like signal is output at frequencies of between 50 and 1000 Hz, and (2) Pulse wave, where trains of pulses lasting about a millisecond are output spaced about 25 milliseconds apart. The signals are generated by a special tubular organ that extends almost from the fish head to tail. By sensing the variation in electrical conductivity caused by objects distorting the electric field, an electrical image can be conveyed to the fish via receptors on its body. The relatively weak electric field, however, means that fish are in general electrically short sighted and cannot sense objects any more than one or two fish lengths away. However this is enough to avoid rocks and other poor electrical conductors which cast electrical shadows that the fish can pick up on. See also Mammals.

Elementary Signals: A set of elementary signals can be defined which have certain properties and can be combined in a linear or non linear fashion with time shifts and periodic extensions to create more complicated signals. Elementary signals are useful for the mathematical analysis and description of signals and systems [47]. Although there is no universally agreed list of elementary signals, a list of the most basic functions is likely to include:
1. Unit Step;
2. Unit Impulse;
3. Rectangular Pulse;
4. Triangular Pulse;
5. Ramp Function;
6. Harmonic Oscillation (sine and cosine waves);
7. Exponential Functions;
8. Complex Exponentials;
9. Mother Wavelets and Scaling Functions.

Both analog and discrete versions of the above elementary signals can be defined. Elementary signals are also referred to as signal primitives. See also Convolution, Fourier Transform Properties, Impulse Response, Sampling Property, Unit Impulse Function, Unit Step Function.

Elliptic Filter: See Filters.

Embedded Control: DSP processors and associated A/D and D/A channels can be used for control of a mechanical system. For example a feedback control algorithm could be used to control the revolution speed of the blade in a sheet metal cutter. Typically the term embedded will imply a real-time system.

Emulator: A hardware board or device which has (hopefully!) the same functionality as an actual DSP chip, and can be used conveniently and effectively for developing and debugging applications before actual implementation on the DSP chip.

Endfire: A beamformer configuration in which the desired signal is located along a line that contains a linear array of sensors. See also Broadside, Superdirectivity.

[Figure: An endfire beamformer. A linear array of M sensors with inter-sensor spacings d_{i,i+1} is followed by delays τ_1, τ_2, τ_3, …, τ_M and a summer or DSP processor producing the output; the endfire look direction lies along the array axis, and τ_n = d_{1,n}/c where c is the propagation velocity.]

Engaged Tone: See also Busy Tone.

Ensemble Averages: A term used interchangeably with statistical average. See Expected Value.

Entropy: See Information Theory.

Entropy Coding: Any type of data compression technique which exploits the fact that some symbols are likely to occur less often than others and assigns fewer bits for coding to the more frequent. For example the letter "e" occurs more often in the English language than the letter "z". Therefore the transmission code for "e" may only use 2 bits, whereas the transmission code for "z" might require 8 bits. The technique can be further enhanced by assigning codes to common groups of letters such as "ch", or "sh". See also Huffman Coding.
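A toy Python illustration of the idea (the symbol probabilities and code lengths below are assumed, not taken from the original text): frequent symbols get short codes, so the average number of bits per symbol drops below that of a fixed-length code.

# Assumed probabilities and a prefix code with lengths 1, 2, 3, 3 bits (e.g. 0, 10, 110, 111).
probabilities = {"e": 0.5, "t": 0.25, "z": 0.125, "q": 0.125}
code_lengths  = {"e": 1,   "t": 2,    "z": 3,     "q": 3}

fixed_bits = 2                                   # 4 symbols -> 2 bits each with a fixed-length code
avg_bits = sum(probabilities[s] * code_lengths[s] for s in probabilities)
print(fixed_bits, avg_bits)                      # 2 bits against 1.75 bits per symbol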


Equal Loudness Contours: Equal loudness gives a measure of the actual SPL of a sound compared to the perceived or judged loudness, i.e. a purely subjective measure. The equal loudness contours are therefore presented for equal phons (the subjective measure of loudness).
[Figure: Equal loudness contours, plotted as SPL (dB) against frequency (10 Hz to 20 kHz) for phon levels from 10 to 120, together with the threshold of hearing.]

The curves are obtained by averaging over a large cross section of the population who do not have hearing impairments [30]. These measurements were first performed by Fletcher and Munson in 1933 [73], and later by Robinson and Dadson in 1956 [126]. See also Audiometry, Auditory Filters, Frequency Range of Hearing, Hearing, Loudness Recruitment, Sound Pressure Level, Sound Pressure Level Weighting Curves, Spectral Masking, Temporal Masking, Temporary Threshold Shift, Threshold of Hearing, Ultrasound. Equal Tempered Scale: See Equitempered Scale. Equalisation: If a signal is passed through a channel (e.g., it is filtered) and the effects of the channel on the signal are removed by making an inverse channel filter using DSP, then this is referred to as equalization. Equalization attempts to restore the frequency and phase characteristic of the signal to the values prior to transmission and is widely used in telecommunications to maximize the reliable transmission data rate, and reduce errors caused by the channel frequency and phase response. Equalization implementations are now commonly found in FAX machines and

telephone MODEMS. Most equalization algorithms are adaptive signal processing least squares or least mean squares based. See also Inverse System Identification.
[Figure: Channel equalization of a telephone channel between the USA and Scotland. The channel has frequency response T(f); the received signal is converted by an A/D, passed through an equalization digital filter with frequency response E(f), and converted back by a D/A. The combined frequency response of channel and equalizer, T(f)E(f), is approximately flat over the 0-4 kHz band.]
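As noted above, most equalizers are least mean squares (LMS) based. The Python sketch below trains an adaptive FIR equalizer with the LMS algorithm so that the cascade of an assumed channel and the equalizer approximates a pure delay; the channel, step size and filter length are illustrative assumptions, not values from the original text.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)                 # training signal
channel = np.array([1.0, 0.5, 0.2])           # assumed channel impulse response
r = np.convolve(x, channel)[:len(x)]          # received (channel filtered) signal

N = 21                                        # equalizer length
delay = 10                                    # training delay
w = np.zeros(N)                               # equalizer weights
mu = 0.01                                     # LMS step size

for k in range(N, len(x)):
    rk = r[k - N + 1:k + 1][::-1]             # most recent N received samples
    y = w @ rk                                # equalizer output
    e = x[k - delay] - y                      # error against delayed reference
    w += 2 * mu * e * rk                      # LMS weight update

print(np.convolve(channel, w)[delay])         # combined response should be near 1 at the delay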

Equitempered Scale: Another name for the well known Western music scale of 12 musical notes in an octave where the ratio of the fundamental frequencies of adjacent notes is a constant of value 2^{1/12} = 1.0594631… . The frequency difference between adjacent notes on the equitempered scale is therefore about 6%. The difference between the logarithm of the fundamental frequency of adjacent notes is therefore a constant of:

log(2^{1/12}) = 0.0250858…    (126)

Hence if a piece of digital music is replayed at a sampling rate that mismatches the original by around 6% (higher or lower), the key of the music will be changed (as well as everything sounding that little bit faster or slower!). See also Music, Music Synthesis, Western Music Scale.

Equivalent Sound Continuous Level (Leq): Sound pressure level, in units of dB (SPL), gives a measure of the instantaneous level of sound. To produce a measure of averaged or integrated sound pressure level over a time interval T, the equivalent sound continuous level can be calculated [46]:

L_{eq,T} = 10 log10( (1/T) ∫_0^T P²(t) dt / P²_ref )    (127)

where P_ref is the standard SPL reference pressure of 2 × 10^-5 N/m² = 20 µPa, and P(t) is the time varying sound pressure. If a particular sound pressure level weighting curve was used, such as the A-weighting scale, then this may be indicated as L_{Aeq,T}. Leq measurements can usually be calculated by good quality SPL meters which will average the sound over a specified time, typically from a few seconds to a few minutes. SPL meters which provide this facility will correspond to IEC 804: 1985 (and BS 6698 in the UK). See also Hearing Impairment, Sound Exposure Meters, Sound Pressure Level, Sound Pressure Level Weighting Curves, Threshold of Hearing.


Ergodic: If a stationary random process (i.e., a signal) is ergodic, then its statistical averages (or ensemble averages) equal the time averages of a single realization of the process. For example given a signal x(n), with a probability density function p{x(n)}, the mean or expected value is calculated from:

Mean of x(n) = E{x(n)} = Σ_n x(n) p{x(n)}    (128)

and the mean squared value is calculated as:

Mean Squared Value of x(n) = E{[x(n)]²} = Σ_n [x(n)]² p{x(n)}    (129)

For a stationary signal the probability density function or a number of realizations of the signal may be difficult or inconvenient to obtain. Therefore if the signal is ergodic the time averages can be used:

E{x(n)} ≈ (1/(M_2 − M_1)) Σ_{n=M_1}^{M_2−1} x(n)   for large (M_2 − M_1)    (130)

and

E{[x(n)]²} ≈ (1/(M_2 − M_1)) Σ_{n=M_1}^{M_2−1} [x(n)]²   for large (M_2 − M_1)    (131)
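A minimal Python sketch of the time averages of Eqs. (130) and (131), using a single assumed realization of a random signal (the signal parameters are illustrative only):

import numpy as np

rng = np.random.default_rng(1)
x = 2.0 + rng.standard_normal(100000)            # one realization, true mean 2, variance 1

M1, M2 = 0, len(x)
mean_est = np.sum(x[M1:M2]) / (M2 - M1)          # Eq. (130)
mean_sq_est = np.sum(x[M1:M2] ** 2) / (M2 - M1)  # Eq. (131)
print(mean_est, mean_sq_est)                     # approximately 2.0 and 5.0 (= 2^2 + 1)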

See also Expected Value, Mean Value, Mean Squared Value, Variance, Wide Sense Stationarity.

Error Analysis: When the cumulative effect of arithmetic round-off errors in an algorithm is calculated, this is referred to as an error analysis. Most error analysis is performed from consideration of relative and absolute errors of quantities. For example, consider two real numbers x and y, that are estimated as x′ and y′ with absolute errors Δx and Δy. Therefore:

x = x′ + Δx,   y = y′ + Δy    (132)

If x and y are added:

w = x + y    (133)

then the error, ∆w , caused by adding the estimated quantities such that w′ = x′ + y′ is calculated by noting that: w = w′ + ∆w = x′ + ∆x + y′ + ∆y (134)

and therefore:

Δw = Δx + Δy    (135)

Therefore the (worst case) error caused by adding (or subtracting) two values is calculated as the sum of the absolute errors. When the product z = xy is formed then:

z = xy = (x′ + Δx)(y′ + Δy) = x′y′ + Δx y′ + Δy x′ + Δx Δy    (136)

Using the estimated quantities to calculate z′ = x′y′, the product error, Δz, is given by:

Δz = z − z′ = Δx y′ + Δy x′ + Δx Δy    (137)

If we assume that the quantities Δx and Δy are small with respect to x′ and y′ then the term ΔxΔy can be neglected and the error in the product is given by:

Δz ≅ Δx y′ + Δy x′    (138)

Dividing both sides of the equation by z, we can express the relative error in z as the sum of the relative errors of x and y:

Δz/z ≅ Δx/x + Δy/y    (139)

The above two results can be used to simplify the error analysis of the arithmetic of many signal processing algorithms. See also Absolute Error, Quantization Noise, Relative Error.

Error Budget: See Total Error Budget.

Error Burst: See Burst Errors.

Error Performance Surface: See Wiener-Hopf Equations.

Euclidean Distance: Loosely, Euclidean distance is simply linear distance, i.e., distance "as the crow flies". More specifically, Euclidean distance is the square root of the sum of the squared differences between two vectors. One example would be the distance between the endpoints of the hypotenuse of a right triangle. This distance satisfies the Pythagorean Theorem, i.e., the square root of the sum of the squares. See also Hamming Distance, Viterbi Algorithm.

Euler's Formula: An important mathematical relationship in dealing with complex numbers and harmonic relationships is given by Euler's Formula:

e^{jθ} = cos θ + j sin θ    (140)

If we think of e^{jθ} as being a 2-dimensional unit length vector (or phasor) that rotates around the origin as θ is varied, then the real part (cos θ) is given by the projection of that vector onto the x-axis, and the imaginary part (sin θ) is given by the projection of that vector onto the y-axis.
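A quick numerical check of Eq. (140) using Python's complex arithmetic (an illustrative snippet, not from the original text):

import cmath, math

theta = 0.7
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(lhs, rhs, abs(lhs - rhs) < 1e-12)    # both sides agree to machine precision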


European Broadcast Union (EBU): The EBU defines standards and recommendations for broadcast of audio, video and data. The EBU has a special relationship with the European Telecommunications Standards Institute (ETSI) through which joint standards are produced such as NICAM 728 (ETS 300 163).
“a network, in general evolving from a telephony integrated digital network (IDN), that provides end to end connectivity to support a wide range of services including voice and non-voice services, to which users have a limited set of standard multi-purpose user network interfaces.”

The ITU-T I-series of recommendations fully defines the operation and existence of ISDN. See also European Telecommunications Standards Institute, International Telecommunication Union, International Organisation for Standards, Standards, I-series Recommendations, ITU-T Recommendations.

European Telecommunications Standards Institute (ETSI): ETSI provides a forum at which all European countries sit to decide upon telecommunications standards. The institute was set up in 1988 for three main reasons: (1) the global (ISO/IEC) standards often left too many questions open; (2) they often do not prescribe enough detail to achieve interoperability; (3) Europe cannot always wait for other countries to agree or follow the standards of the USA and Asia. ETSI has 12 committees covering telecommunications, wired fixed networks, satellite communications, radio communications for the fixed and mobile services, testing methodology, and equipment engineering. ETSI were responsible for the recommendations of GSM (Groupe Spécial Mobile, or Global System for Mobile Communications). See also Comité Européen de Normalisation Electrotechnique, International Telecommunication Union, International Organisation for Standards, Standards.

Evaluation Board: A printed circuit board produced in volume by a company, and intended for evaluation and benchmarking purposes. An evaluation board is often a cut down version of a production board available from the company. A DSP evaluation board is likely to have limited memory available, use a slow clock DSP processor, and be restricted in its convenient expandability. See also DSP Board.

Even Function: The graph of an even function is symmetric about the y-axis such that y = f(x) = f(−x). This simple 1-dimensional intuition is quickly extended to more complex functions by noting that the basic requirement is still f(x) = f(−x) whether x or f(x) are vectors or vector-valued functions or some combination. Example even functions include y = cos x and y = x². In contrast an odd function has point symmetry about the origin such that f(−x) = −f(x). See also Odd Function.

Evoked Potentials: When the brain is excited by audio or visual stimuli, small voltage potentials can be measured on the head, emanating from the brain [48]. These Visually Evoked Potentials (VEP), and Audio Evoked Potentials (AEP) can be sampled, and processed using a DSP system. Evoked potentials can also be measured directly on the brain or the brainstem.

Excess Mean Square Error: See Least Mean Squares (LMS) Algorithm.

Exp: Common notation used for the exponential function. See Exponential Function.

Expected Value: The expected value, E{.}, of a random variable (or a function of a random variable) is simply the average value of the random variable (or of the function of a random variable). The statistical average or mean value of signal x(n) is computed from:

Mean of x(n) = E{x(n)} = Σ_n x(n) p{x(n)}    (141)

where E{x(n)} is "the expected value of x(n)", and p{x(n)} is the probability density function of the random variable x(n). As another example of expected values, the mean squared value of x(n) is calculated as:

Mean Squared Value of x(n) = E{x²(n)} = Σ_n x²(n) p{x(n)}    (142)

Expected value is a linear operation, i.e.,:

E{ax(n) + by(n)} = aE{x(n)} + bE{y(n)}    (143)

where a and b are constants and x(n) and y(n) are random signals generated by known probability density functions, p_y{y(n)} and p_x{x(n)}. For most signals encountered in real time DSP the probability density function is unlikely to be known and therefore the expected value cannot be calculated as suggested above. However if the signal is ergodic, then time averages can be used to approximate the statistical averages. See also Ergodic, Mean Value, Mean Squared Value, Variance, Wide Sense Stationarity.

Exponential Averaging: An exponential averager with parameter α computes an average x̄(n) of a sequence {x(n)} as:

x̄(n) = (1 − α) x̄(n − 1) + α x(n)    (144)

where α is contained in the interval [0,1]. An exponential average (a one pole lowpass filter) is simpler to compute than a moving rectangular window since older data points are simply forgotten by the exponentially decreasing powers of (1 − α). A convenient rule of thumb approximation for the "equivalent rectangular window" of an exponential averager is 1/α data samples. See also Waveform Averaging, Moving Average, Weighted Moving Average.
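A minimal Python sketch of the exponential averager of Eq. (144); the function name and the test values are illustrative assumptions, not part of the original text.

def exponential_average(samples, alpha):
    # x_bar(n) = (1 - alpha) * x_bar(n - 1) + alpha * x(n)   (Eq. 144)
    x_bar = 0.0
    averages = []
    for x in samples:
        x_bar = (1.0 - alpha) * x_bar + alpha * x
        averages.append(x_bar)
    return averages

# With alpha = 0.1 the "equivalent rectangular window" rule of thumb is roughly 1/alpha = 10 samples.
print(exponential_average([1.0] * 20, alpha=0.1)[-1])   # approaches 1.0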

Exponential Function: The simple exponential function is:

y = e^x = exp(x)    (145)

[Figure: The exponential function y = e^x plotted for x from −1 to 3 (y from 0 to about 20).]
where "e" is the base of the natural logarithm, e = 2.7182818… . A key property of the exponential function is that the derivative of e^x is e^x, i.e.

d/dx (e^x) = e^x    (146)

Real causal exponential functions can be used to represent the natural decay of energy in a passive system, such as the voltage decay in an RC circuit. For example consider the discrete time exponential:

x(k) = A e^{−λ k t_s} u(k)    (147)

[Figure: The decaying discrete time exponential x(k), starting at amplitude A and plotted against sample index k = 0, 1, 2, 3, 4, ….]

where u(k) is the unit step function, t s is the sampling period, and A and λ are constants. See also Complex Exponential Functions, Damped Sinusoid, RC Circuit.


F
F-Series Recommendations: The F-series telecommunication recommendations from the International Telecommunication Union (ITU) advisory committee on telecommunications (denoted ITU-T and formerly known as CCITT) provide standards for services other than telephone (ops, quality, service definitions and human factors). Some of the current recommendations (http://www.itu.ch) include:
F.1   Operational provisions for the international public telegram service.
F.2   Operational provisions for the collection of telegram charges.
F.4   Plain and secret language.
F.10  Character error rate objective for telegraph communication using 5-unit start-stop equipment.
F.11  Continued availability of traditional services.
F.14  General provisions for one-stop-shopping arrangements.
F.15  Evaluating the success of new services.
F.16  Global virtual network service.
F.17  Operational aspects of service telecommunications.
F.18  Guidelines on harmonization of international public bureau services.
F.20  The international gentex service.
F.21  Composition of answer-back codes for the international gentex service.
F.23  Grade of service for long-distance international gentex circuits.
F.24  Average grade of service from country to country in the gentex service.
F.30  Use of various sequences of combinations for special purposes.
F.31  Telegram retransmission system.
F.35  Provisions applying to the operation of an international public automatic message switching service for equipments utilizing the International Telegraph Alphabet No. 2.
F.40  International public telemessage service.
F.41  Interworking between the telemessage service and the international public telegram service.
F.59  General characteristics of the international telex service.
F.60  Operational provisions for the international telex service.
F.61  Operational provisions relating to the chargeable duration of a telex call.
F.63  Additional facilities in the international telex service.
F.64  Determination of the number of international telex circuits required to carry a given volume of traffic.
F.65  Time-to-answer by operators at international telex positions.
F.68  Establishment of the automatic intercontinental telex network.
F.69  The international telex service - Service and operational provisions of telex destination codes and telex network identification codes.
F.70  Evaluating the quality of the international telex service.
F.71  Interconnection of private teleprinter networks with the telex network.
F.72  The international telex service - General principles and operational aspects of a store and forward facility.
F.73  Operational principles for communication between terminals of the international telex service and data terminal equipment on packet switched public data networks.
F.74  Intermediate storage devices accessed from the international telex service using single stage selection answerback format.
F.80  Basic requirements for interworking relations between the international telex service and other services.
F.82  Operational provisions to permit interworking between the international telex service and the intex service.
F.86  Interworking between the international telex service and the videotex service.
F.87  Operational principles for the transfer of messages from terminals on the telex network to Group 3 facsimile terminals connected to the public switched telephone network.
F.89  Status enquiry function in the international telex service.
F.91  General statistics for the telegraph services.

F.93  Routing tables for offices connected to the gentex service.
F.95  Table of international telex relations and traffic.
F.96  List of destination indicators.
F.100 Scheduled radiocommunication services.
F.104 International leased circuit services - Customer circuit designations.
F.105 Operational provisions for phototelegrams.
F.106 Operational provisions for private phototelegraph calls.
F.107 Rules for phototelegraph calls established over circuits normally used for telephone traffic.
F.108 Operating rules for international phototelegraph calls to multiple destinations.
F.111 Principles of service for mobile systems.
F.112 Quality objectives for 50-baud start-stop telegraph transmission in the maritime mobile-satellite service.
F.113 Service provisions for aeronautical passenger communications supported by mobile-satellite systems.
F.115 Service objectives and principles for future public land mobile telecommunication systems.
F.120 Ship station identification for VHF/UHF and maritime mobile-satellite services.
F.122 Operational procedures for the maritime satellite data transmission service.
F.125 Numbering plan for access to the mobile-satellite services of INMARSAT from the international telex service.
F.127 Operational procedures for interworking between the international telex service and the service offered by INMARSAT-C system.
F.130 Maritime answer-back codes.
F.131 Radiotelex service codes.
F.140 Point-to-multipoint telecommunication service via satellite.
F.141 International two-way multipoint telecommunication service via satellite.
F.150 Service and operational provisions for the intex service.
F.160 General operational provisions for the international public facsimile services.
F.162 Service and operational requirements of store-and-forward facsimile service.
F.163 Operational requirements of the interconnection of facsimile store-and-forward units.
F.170 Operational provisions for the international public facsimile service between public bureaux (bureaufax).
F.171 Operational provisions relating to the use of store-and-forward switching nodes within the bureaufax service.
F.180 General operational provisions for the international public facsimile service between subscriber stations (telefax).
F.182 Operational provisions for the international public facsimile service between subscribers' stations with Group 3 facsimile machines (Telefax 3).
F.184 Operational provisions for the international public facsimile service between subscriber stations with Group 4 facsimile machines (Telefax 4).
F.190 Operational provisions for the international facsimile service between public bureaux and subscriber stations and vice versa (bureaufax-telefax and vice versa).
F.200 Teletex service.
F.201 Interworking between teletex service and telex service - General principles.
F.202 Interworking between the telex service and the teletex service - General procedures and operational requirements for the international interconnection of telex/teletex conversion facilities.
F.203 Network based storage for the teletex service.
F.220 Service requirements unique to the processable mode number eleven (PM11) used within teletex service.
F.230 Service requirements unique to the mixed mode (MM) used within the teletex service.
F.300 Videotex service.
F.350 Application of T Series recommendations.
F.351 General principles on the presentation of terminal identification to users of the telematic services.
F.353 Provision of telematic and data transmission services on integrated services digital network (ISDN).
F.400 Message handling services: Message Handling System and service overview (X.400).
F.401 Message handling services: naming and addressing for public message handling services.
F.410 Message handling services: the public message transfer service.
F.415 Message handling services: Intercommunication with public physical delivery services.

F.420 Message handling services: the public interpersonal messaging service.
F.421 Message handling services: Intercommunication between the IPM service and the telex service.
F.422 Message handling services: Intercommunication between the IPM service and the teletex service.
F.423 Message handling services: intercommunication between the interpersonal messaging service and the telefax service.
F.435 Message handling: electronic data interchange messaging service.
F.440 Message handling services: the voice messaging service.
F.500 International public directory services.
F.551 Service for the telematic file transfer within Telefax 3, Telefax 4, Teletex services and message handling services.
F.581 Guidelines for programming communication interfaces (PCIs) definition: Service.
F.600 Service and operational principles for public data transmission services.
F.701 Teleconference service.
F.710 General principles for audiographic conference service.
F.711 Audiographic conference teleservice for ISDN.
F.720 Videotelephony services - general.
F.721 Videotelephony teleservice for ISDN.
F.730 Videoconference service - general.
F.732 Broadband videoconference services.
F.740 Audiovisual interactive services.
F.761 Service-oriented requirements for telewriting applications.
F.811 Broadband connection-oriented bearer service.
F.812 Broadband connectionless data bearer service.
F.813 Virtual path service for reserved and permanent communications.
F.850 Principles of Universal Personal Telecommunication (UPT).
F.851 Universal personal telecommunication (UPT) - Service description (service set 1).
F.901 Usability evaluation of telecommunication services.
F.902 Interactive services design guidelines.
F.910 Procedures for designing, evaluating and selecting symbols, pictograms and icons.

For additional detail consult the appropriate standard document or contact the ITU. See also International Telecommunication Union, ITU-T Recommendations, Standards. Far End Echo: Signal echo that is produced by components in far end telephone equipment. Far end echo arrives after near end echo. See also Echo Cancellation, Near End Echo. Fast Fourier Transform (FFT): The FFT [66], [93] is a method of computing the discrete Fourier transform (DFT) that exploits the redundancy in the general DFT equation:
X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πkn/N}    for k = 0 to N − 1    (148)

Noting that the DFT computation of Eq. 148 requires approximately N² complex multiply accumulates (MACs), where N is a power of 2, the radix-2 FFT requires only N log₂N MACs. The computational savings achieved by the FFT is therefore a factor of N/log₂N. When N is large this saving can be considerable. The following table compares the number of MACs required for different values of N for the DFT and the FFT:

N        DFT MACs       FFT MACs
32       1024           160
1024     1048576        10240
32768    ~1 x 10^9      ~0.5 x 10^6
There are a number of different FFT algorithms sometimes grouped via the names Cooley-Tukey, prime factor, decimation-in-time, decimation-in-frequency, radix-2 and so on. The bottom line for all FFT algorithms is, however, that they remove redundancy from the direct DFT computational algorithm of Eq. 148. We can highlight the existence of the redundant computation in the DFT by inspecting Eq. 148. First, for notational simplicity we can rewrite Eq. 148 as:
N–1

X( k) =

– ∑ x ( n )WNkn n=0

for k = 0 to N – 1

(149)

where W = e j2π ⁄ N = cos 2π ⁄ N + j sin 2π ⁄ N Using the DFT algorithm to calculate the first four components of the DFT of a (trivial) signal with only 8 samples requires the following computations:
X( 0) = x(0 ) + x( 1) + x( 2) + x( 3) + x(4 ) + x( 5) + x( 6) + x( 7)
– – – – – – – X ( 1 ) = x ( 0 ) + x ( 1 )W 8 1 + x ( 2 )W 8 2 + x ( 3 )W 8 3 + x ( 4 )W 8 4 + x ( 5 )W 8 5 + x ( 6 )W 8 6 + x ( 7 )W 8 7 – – – – – – – X ( 2 ) = x ( 0 ) + x ( 1 )W 8 2 + x ( 2 )W 8 4 + x ( 3 )W 8 6 + x ( 4 )W 8 8 + x ( 5 )W 8 10 + x ( 6 )W 8 12 + x ( 7 )W 8 14 – – – – – – – X ( 3 ) = x ( 0 ) + x ( 1 )W 8 3 + x ( 2 )W 8 6 + x ( 3 )W 8 9 + x ( 4 )W 8 12 + x ( 5 )W 8 15 + x ( 6 )W 8 18 + x ( 7 )W 8 21

(150)

However note that there is redundant (or repeated) arithmetic computation in Eq. 150. For example, consider the third term in the second line of Eq. 150:

x(2)W_8^{-2} = x(2)e^{j2π(−2/8)} = x(2)e^{−jπ/2}    (151)

Now consider the computation of the third term in the fourth line of Eq. 150:

x(2)W_8^{-6} = x(2)e^{j2π(−6/8)} = x(2)e^{−j3π/2} = e^{−jπ} x(2)e^{−jπ/2} = −x(2)e^{−jπ/2}    (152)

Therefore we can save one multiply operation by noting that the term x(2)W_8^{-6} = −x(2)W_8^{-2}. In fact every term in the fourth line of Eq. 150 is available from the computed terms in the second line of the equation because of the periodicity of W_N^{kn}. Hence a considerable saving in multiplicative computations can be achieved if the computational order of the DFT algorithm is carefully considered.

More generally we can show that the terms in the second line of Eq. 150 are:

x(n)W_8^{-n} = x(n)e^{−j2πn/8} = x(n)e^{−jπn/4}    (153)

and for terms in the fourth line of Eq. 150:

x(n)W_8^{-3n} = x(n)e^{−j6πn/8} = x(n)e^{−j3πn/4} = x(n)e^{−j(π/2 + π/4)n} = x(n)e^{−jπn/2} e^{−jπn/4} = x(n)(−j)^n e^{−jπn/4} = (−j)^n x(n)W_8^{-n}    (154)
This exploitation of the computational redundancy is the basis of the FFT which allows the same result as the DFT to be computed, but with fewer MACs. To more formally derive one version of the FFT (decimation-in-time radix-2), consider splitting the DFT equation into two "half signals" consisting of the odd numbered and even numbered samples, where the total number of samples is a power of 2 (N = 2^n):
X(k) = Σ_{n=0}^{N/2−1} x(2n) e^{−j2πk(2n)/N} + Σ_{n=0}^{N/2−1} x(2n+1) e^{−j2πk(2n+1)/N}
     = Σ_{n=0}^{N/2−1} x(2n) W_N^{-2nk} + Σ_{n=0}^{N/2−1} x(2n+1) W_N^{-(2n+1)k}
     = Σ_{n=0}^{N/2−1} x(2n) W_N^{-2nk} + W_N^{-k} Σ_{n=0}^{N/2−1} x(2n+1) W_N^{-2nk}    (155)
Notice in Eq. 155 that the N point DFT which requires N² MACs in Eq. 148 is now accomplished by performing two N/2 point DFTs requiring a total of 2 × N²/4 MACs, which is a computational saving of 50%. Therefore a next logical step is to take the N/2 point DFTs and perform them as N/4 point DFTs, saving 50% computation again, and so on. As the number of points we started with was a power of 2, we can perform this decimation of the signal a total of log₂N times, and each time reduce the total computation of each stage to that of a "butterfly" operation. If N = 2^n then the computational saving is a factor of:

136

DSPedia

In general equations for an FFT are awkward to write mathematically, and therefore the algorithm is very often represented as a “butterfly” based signal flow graph (SFG), the butterfly being a simple signal flow graph of the form:
[Figure: The butterfly signal flow graph: two inputs a and b from splitting nodes feed two summing nodes producing outputs c and d, via a multiplier W_N^k and a −1 branch. The multiplier W_N^k is a complex number, and the input data a and b may also be complex. One butterfly computation requires one complex multiply and two complex additions (assuming the data is complex).]

A more complete SFG for an 8 point decimation in time radix 2 FFT computation is:

[Figure: A radix-2 decimation-in-time (DIT) Cooley-Tukey FFT signal flow graph for N = 8, with inputs in bit-reversed order x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7), outputs X(0) to X(7) in natural order, and twiddle factor multipliers W_8^0, W_8^1, W_8^2, W_8^3, where W_N^{kn} = e^{−j2πkn/N}. Note that the butterfly computation is repeated through the SFG.]
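A short Python sketch of a radix-2 decimation-in-time FFT, written to mirror the splitting of Eq. (155), may make the recursion concrete. It is an illustrative implementation (the function names are assumptions), checked against the direct DFT of Eq. (148).

import cmath

def fft_dit(x):
    # Radix-2 DIT FFT; assumes len(x) is a power of 2.
    N = len(x)
    if N == 1:
        return list(x)
    even = fft_dit(x[0::2])                      # N/2 point DFT of even numbered samples
    odd = fft_dit(x[1::2])                       # N/2 point DFT of odd numbered samples
    X = [0j] * N
    for k in range(N // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / N) * odd[k]   # W_N^-k times the odd DFT
        X[k] = even[k] + twiddle                 # butterfly: top output
        X[k + N // 2] = even[k] - twiddle        # butterfly: bottom output
    return X

def dft(x):
    # Direct DFT of Eq. (148), for comparison only.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [0, 1, 2, 3, 4, 5, 6, 7]
print(max(abs(a - b) for a, b in zip(fft_dit(x), dft(x))))   # ~1e-15, i.e. identical results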

See also Bit Reverse Addressing, Cooley-Tukey, Discrete Cosine Transform, Discrete Fourier Transform, Fast Fourier Transform - Decimation-in-Time (DIT), Fast Fourier Transform - Decimation-in-Frequency (DIF), Fast Fourier Transform - Zero Padding, Fourier, Fourier Analysis, Fourier Series, Fourier Transform, Frequency Response, Phase Response.

Fast Fourier Transform, Decimation-in-Frequency (DIF): The DFT can be reformulated to give the FFT either as a DIT or a DIF algorithm. For the decimation-in-frequency FFT the input time samples are taken in natural order and the output frequency samples are produced in bit-reversed order. See also Discrete Fourier Transform, Fast Fourier Transform, Fast Fourier Transform - Decimation-in-Time.

Fast Fourier Transform, Decimation-in-Time (DIT): The DFT can be reformulated to give the FFT either as a DIF or a DIT algorithm. Decimation-in-time computation of the FFT provides the output frequency samples in proper order when the input time samples are arranged in bit-reversed order. See also Discrete Fourier Transform, Fast Fourier Transform - Decimation-in-Time, Fast Fourier Transform - Decimation-in-Frequency.

Fast Fourier Transform, Zero Padding: When performing an FFT, the number of data points used in the algorithm is a power of 2 (for radix-2 FFT algorithms). What if a particular process only produces 100 samples and the FFT is required? There are three choices: (1) Truncate the sequence to 64 samples; (2) Pad out the signal to 128 samples by setting the last 28 values to be the same as the first 28 samples; (3) Zero pad the data to 128 samples by setting the last 28 values to zero. Solution (1) will lose signal information and solution (2) will add information which is not necessarily part of the signal (i.e. discontinuities). However, solution (3) will only increase the frequency resolution of the FFT by adding more harmonics and does not affect the integrity of the data.

Fast Given's Rotations: See Matrix Decompositions - Square Root Free Given's Rotations.

Filtered-U LMS: See Active Noise Cancellation.

Filtered-X LMS: See Least Mean Squares Filtered-X Algorithm.

Filters: A circuit designed to pass signals of certain frequencies, and attenuate others. Filters can be analog or digital [45]. In general a filter with N poles (where N is usually the number of reactive circuit elements used, such as capacitors or inductors) will have a roll-off of 6N dB/octave or 20N dB/decade. Although a second order (two pole) active filter increases the final rate of roll-off (see the first and second order responses shown later in this entry), the sharpness of the knee (at the 3dB frequency) of the filter is not improved and a further increase in order will not produce a filter that approaches the ideal filter. Other designs, such as the Butterworth, Chebychev and Bessel filter, produce filters that have a flatter passband characteristic or a much sharper knee. In general, for a fixed order filter, the sharper the knee of the filter the more variation in the gain of the passband. A simple active filter is illustrated below.
[Figure: A simple 3rd order active filter, with input Vin and output Vout.]

The cut-off frequency can be changed by modifying the resistor values. This filter has a roll-off of 18dB/octave, meaning that if used as an anti-alias filter cutting off at fs/2, where f_s is the sampling frequency, the filter would only provide attenuation of 18 dB at fs and hence aliasing problems may occur. A popular (though not necessarily appropriate) rule of thumb is that anti-alias filters

[Figure: First order (passive) and second order (active) RC lowpass filters and their magnitude responses, 20log10|Vout/Vin| (dB) plotted against log10(f/f3dB), compared with the ideal filter. For both circuits f3dB = 1/(2πRC). The first order (passive) RC circuit has |Vout/Vin| = 1/√(1 + (f/f3dB)²) and rolls off at 20dB/decade; the second order active RC circuit (two RC sections separated by a buffer amplifier) has |Vout/Vin| = 1/√(1 + 2(f/f3dB)² + (f/f3dB)⁴) and rolls off at 40dB/decade.]

should provide at least the same attenuation at the sampling frequency as the dynamic range of the wordlength. For example, if using 16 bit arithmetic the dynamic range is 20 log10(2^16) ≈ 96dB, so the roll-off of the filter above the 3dB frequency should be at least 96dB/octave. In designing anti-alias filters, the key requirement is limiting the significance of any aliased frequency components. Because it is the nature of lowpass filters to provide more attenuation at higher frequencies than at lower ones, the aliased components at fs/2 are usually the limiting factor. See also Active Filter, Anti-alias Filter, Bandpass Filter, Digital Filter, High Pass Filter, Low Pass Filter, Knee, Reconstruction Filter, RC Filter, Roll-off.
Bessel Filter: A filter that has a maximally flat phase response in its passband.

Butterworth Filter: This is a filter based on certain mathematical constraints and defining equations. These filters have been used for a very long time in designing stable analog filters. In general the Butterworth filter has a passband that is very flat, at the expense of a slow roll off. The gain of the order n (analog) Butterworth filter can be given as:

|V_out/V_in| = 1 / √(1 + (f/f_3dB)^{2n})    (157)

Chebyshev Filter: A type of filter that has a certain amount of ripple in the passband, but has a very steep roll-off. The gain of the order n (analog) Chebyshev filter can be given as below, where C_n is a special polynomial and ε is a constant that determines the magnitude of the passband ripple. The spelling of Chebyshev has many variants (such as Tschebyscheff).

|V_out/V_in| = 1 / √(1 + ε² C_n²(f/f_3dB))    (158)

Elliptic Filter: A type of filter that achieves the maximum possible roll-off for a particular filter order. The phase response of an elliptic filter is extremely non-linear.
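As a worked illustration of the Butterworth gain formula of Eq. (157), the Python sketch below evaluates an assumed 4th order design at a few frequencies (the cut-off frequency and order are illustrative, not from the original text):

import math

def butterworth_gain(f, f3db, order):
    # |Vout/Vin| = 1 / sqrt(1 + (f/f3dB)^(2n))   (Eq. 157)
    return 1.0 / math.sqrt(1.0 + (f / f3db) ** (2 * order))

f3db = 1000.0
for f in (100.0, 1000.0, 2000.0, 4000.0):
    gain = butterworth_gain(f, f3db, order=4)
    print(f, 20 * math.log10(gain))    # ~0 dB in the passband, -3 dB at f3dB, then roughly -24 dB/octave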

Finite Impulse Response (FIR) Filter: (See first Digital Filter). An FIR digital filter performs a moving weighted average on an input stream of digital data to filter a signal according to some predefined frequency criteria such as a low pass, high pass, band pass, or band-stop filter:
[Figure: Idealised gain against frequency characteristics of low pass, high pass, band-pass and band-stop filters. FIR filters are usually designed with software to be low pass, high pass, band pass or band-stop.]

As discussed under Digital Filter, an FIR filter is interfaced to the real world via analogue to digital converters (ADC) and digital to analogue converters (DAC) and suitable anti-alias and reconstruction filters. An FIR digital filter can be conveniently represented in a signal flow graph:

[Figure: The signal flow graph of an FIR digital filter. The last N input samples x(k), x(k−1), x(k−2), …, x(k−N+1) are weighted by the filter coefficients w0, w1, w2, …, wN−1 and summed to produce the output y(k).]

The general output equation (convolution) for an FIR filter is:

y(k) = w_0 x(k) + w_1 x(k−1) + w_2 x(k−2) + w_3 x(k−3) + … + w_{N−1} x(k−N+1)
     = Σ_{n=0}^{N−1} w_n x(k−n)    (159)
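A minimal Python sketch of the convolution of Eq. (159), using a shift register (delay line) of the last N input samples; the function name and the test weights are illustrative assumptions.

def fir_filter(weights, samples):
    # y(k) = sum_n w_n * x(k - n)   (Eq. 159)
    N = len(weights)
    state = [0.0] * N                 # delay line holding x(k), x(k-1), ..., x(k-N+1)
    output = []
    for x in samples:
        state = [x] + state[:-1]      # shift the new sample into the delay line
        output.append(sum(w * s for w, s in zip(weights, state)))
    return output

# A 4 weight moving average applied to a unit impulse returns the weights themselves,
# i.e. the (finite) impulse response.
print(fir_filter([0.25, 0.25, 0.25, 0.25], [1, 0, 0, 0, 0, 0]))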


The term finite impulse response refers to the fact that the impulse response results in energy at only a finite number of samples, after which the output is zero. Therefore if the input sequence is a unit impulse the FIR filter output will have a finite duration:

[Figure: A unit impulse δ(k) input to a digital FIR filter produces a finite impulse response h(k). The discrete output of a finite impulse response (FIR) filter sampled at fs Hz has a finite duration in time, i.e. the output will decay to zero within a finite time. Time axes are in secs/fs, with sample period T = 1/fs secs.]

This can be illustrated by considering that the FIR filter is essentially a shift register which is clocked once per sampling period. For example consider a simple 4 weight filter:
[Figure: A unit impulse propagating through the delay line of a simple 4 weight FIR filter (weights w0, w1, w2, w3) at times k = 0, 1, 2, 3, 4, 5, ….]

When applying a unit impulse to a filter, the 1 value passes through the filter "shift register" causing the filter impulse response to be output.

As an example, a simple low pass FIR filter can be designed using the DSP design software SystemView by Elanix, with a sampling rate of 10000 Hz, a cut off frequency of around 1000Hz, a stopband attenuation of about 40dB, passband ripple of less than 1 dB and limited to 15 weights. The resulting filter is:
[Figure: Low pass FIR filter impulse response h(n) = w_n, plotted against time n with sample period T = 1/10000 secs. This is filter FIR1, with 15 weights, a sampling rate of 10000 Hz, and cut off frequency designed at around 1000Hz. The (symmetric) weights, truncated to 5 decimal places, are: w0 = w14 = -0.01813…, w1 = w13 = -0.08489…, w2 = w12 = -0.03210…, w3 = w11 = -0.00156…, w4 = w10 = 0.07258…, w5 = w9 = 0.15493…, w6 = w8 = 0.22140…, w7 = 0.25669….]

Noting that a unit impulse contains “all frequencies”, then the magnitude frequency response and phase response of the filter are found from the DFT (or FFT) of the filter weights:
[Figure: The linear magnitude response |H(f)| (Gain) and the logarithmic magnitude response 20log|H(f)| (Gain in dB) of FIR1, plotted from 0 to 5000 Hz, from the 1024 point FFT (zero padded) of the above low pass filter impulse response. As the sampling rate is 10000 Hz the frequency response is only plotted up to 5000 Hz. (Note that the y-axis is labelled Gain rather than Attenuation; this is because -10dB gain is the same as 10dB attenuation. Hence if attenuation was plotted the above figures would be inverted.)]

H(f) Phase (radians)
0 -π -2π -3π -4π -5π -6π 0 1000 2000 3000 4000 5000

Phase Response (unwrapped)

H(f) Phase (radians)

π

Phase Response (wrapped)

π/2 0 -π/2 -π 0 1000 2000 3000 4000 5000

frequency (Hz)

frequency (Hz)

The 1024 point FFT generated phase response (phase shift versus frequency) above low pass filter impulse response, FIR1. Note that the the filter is linear phase and the wrapped and unwrapped phase responses are different ways of representing the same information. The “wrapped” phase response will often produced by DSP software packages and gives phase values between -π and π only. As the phase is calculated as modulo 2π. i.e. a phase shift of θ is the same as a phase shift of θ + 2π and so on. Phase responses are also often plotted using degrees rather than radians.

From the magnitude and phase response plots we can therefore calculate the attenuation and phase shift of different input signal frequencies. For example, if a single frequency at 1500Hz, with an amplitude of 150, is input to the above filter, then the amplitude of the output signal will be around 30, and phase shifted by a little over -2π radians. However, if a single frequency of 500Hz was input, then the output signal amplitude is amplified by a factor of about 1.085 and phase shifted by about -0.7π radians. As a more intuitive and illustrative example of filtering, consider inputting the signal x(k) below to a suitably designed "low pass filter" to produce the output signal y(k):

[Figure: Example of an FIR filter performing low pass filtering: an input signal x(k) passes through a low pass digital filter to produce the output y(k), i.e. the high frequencies are removed by performing a weighted moving average with suitable low pass characteristic weights. The remaining low frequencies are phase shifted (i.e. time delayed) as a result of passing through the filter.]

So, how long is a typical FIR filter? This of course depends on the requirement of the problem being addressed. For the generic filter characteristic shown below more weights are required if:

• A sharper transition bandwidth is required;
• More stopband attenuation is required;
• Very small passband ripple is required.
[Figure: Generic low pass filter magnitude response (Gain in dB against frequency up to fs/2), showing the passband ripple, the -3 dB point, the transition band, the stopband attenuation and the ideal low pass filter. The more stringent the filter requirements of stopband attenuation, transition bandwidth and to a lesser extent passband ripple, the more weights that are required.]

Consider again the design of the above FIR filter (FIR1) which was a low pass filter cutting off at about 1000Hz. Using SystemView, the above criteria can be varied such that the number of filter weights can be increased and a more stringent filter designed. Consider the design of three low pass filters cutting off at 1000 Hz, with stopband attenuation of 40dB and transition bandwidths 500 Hz, 200 Hz and 50 Hz:

[Figure: Magnitude responses (Gain in dB, 0 to -80 dB, plotted against frequency 0 to 5000 Hz) of three low pass filters FIR1, FIR2 and FIR3, labelled with transition band 1000-1500Hz (29 weights), 1000-1200Hz (69 weights) and 1000-1100Hz (269 weights). Low pass filter design parameters: stopband attenuation = 40dB, passband ripple = 1dB and transition bandwidths of 500, 200, and 50 Hz. The sharper the transition band the more filter weights that are required.]

The impulse responses of FIR1, FIR2 and FIR3 are 15, 69 and 269 weights long respectively, with group delays of 7, 34 and 134 samples respectively.

[Figure: The impulse responses of low pass filters FIR1, FIR2, and FIR3 (amplitudes up to about 0.2, time axes in units of 1/10000 secs), all with 40 dB stopband attenuation and 1dB passband ripple, but transition bandwidths of 500, 200 and 50 Hz respectively. Clearly the more stringent the filter parameters, the longer the required impulse response.]

Similarly if the stopband attenuation specification is increased, the number of filter weights required will again increase. For a low pass filter with a cut off frequency again at 1000 Hz, a transition bandwidth of 500 Hz and stopband attenuations of 40 dB, 60 dB and 80 dB:
[Figure: Magnitude responses (Gain in dB, 0 to -80 dB, plotted against frequency 0 to 5000 Hz) of low pass filters FIR1, FIR4 and FIR5, with 29, 41 and 55 weights respectively. Low pass filter design parameters: transition bandwidth = 500Hz, passband ripple = 1dB and stopband attenuations of 40 dB, 60 dB, and 80 dB.]

[Figure: The impulse responses of low pass filters FIR1, FIR4, and FIR5 (amplitudes up to about 0.2, time axes in units of 1/10000 secs), all with 1dB passband ripple and transition bandwidths of 500 Hz, and stopband attenuations of 40, 60 and 80dB respectively. Clearly the more stringent the filter parameters, the longer the required impulse response.]

Similarly if the passband ripple parameter is reduced, then a longer impulse response will be required. See also Adaptive Filter, Digital Filter, Low Pass Filter, High Pass Filter, Bandpass Filter, Bandstop Filter, IIR Filter.

Finite Impulse Response (FIR) Filter, Bit Errors: If we consider the possibility of a random single bit error in the weights of an FIR filter, the effect on the filter magnitude and phase response can be quite dramatic. Consider a simple 15 weight filter:
[Figure: impulse response h(n) of the "correct" filter (T = 1/8000 s) and its magnitude frequency response 20 log |H(f)| in dB, 0 to 4000 Hz.]

Fifteen weight low pass FIR filter cutting off at 800 Hz.
The 3rd coefficient has the value -0.0725..., and in 16 bit fractional binary notation this is 0.000100101001010₂. If a single bit error occurs in the 3rd bit of this binary coefficient then the value becomes 0.001100101001010₂ = -0.1957... The impulse response clearly changes only "a little", whereas the effect on the frequency response is rather more substantial and causes a loss of about 5 dB of attenuation.
[Figure: impulse response h(n) and magnitude frequency response 20 log |H(f)| of the bit error filter, 0 to 4000 Hz.]

15 weight low pass FIR filter cutting off at 800 Hz with the 3rd coefficient in error by a single bit. Note the change to the frequency response compared to the correct filter above.
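The effect of a single bit error in a quantised coefficient can be reproduced numerically. The sketch below assumes Python with numpy and uses an illustrative windowed sinc low pass filter (not the exact coefficients of the filter plotted above); it quantises the weights to 16 bit fractional (Q15) format, flips the 3rd fractional bit of the 3rd coefficient, and compares the magnitude responses:

import numpy as np

fs = 8000.0
n = np.arange(15)
h = 0.2 * np.sinc(0.2 * (n - 7)) * np.hamming(15)   # illustrative 15 weight low pass, ~800 Hz cut off

q = np.round(h * 2**15).astype(int)                 # Q15 integer representation of the weights
q_err = q.copy()
q_err[2] ^= 1 << (15 - 3)                           # toggle the 3rd bit after the binary point
h_err = h.copy()
h_err[2] = q_err[2] / 2.0**15

f = np.fft.rfftfreq(1024, 1.0 / fs)
H = 20 * np.log10(np.abs(np.fft.rfft(h, 1024)) + 1e-12)
H_err = 20 * np.log10(np.abs(np.fft.rfft(h_err, 1024)) + 1e-12)
print("worst stopband gain, correct filter  :", round(np.max(H[f > 1600]), 1), "dB")
print("worst stopband gain, bit error filter:", round(np.max(H_err[f > 1600]), 1), "dB")

As in the figures above, flipping a high significance bit noticeably degrades the stopband, whereas an error in one of the least significant bits would barely register.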

Also, because the impulse response is no longer symmetric, the phase response is no longer linear:

[Figure: phase responses (radians, 0 to -6π) of the correct filter and of the bit error filter, 0 to 4000 Hz.]

Phase response of the original ("correct") filter and the bit error filter. The error in a single coefficient has caused the phase to be no longer exactly linear.

Of course the bit error may have occurred in the least significant bits, in which case the frequency domain effect would be much less pronounced. However, because of the excellent reliability of DSP processors, the occurrence of bit errors in filter coefficients is unlikely. See also Digital Filter, Finite Impulse Response Filter.

Finite Impulse Response (FIR), Group Delay: See Finite Impulse Response Filter - Linear Phase.

Finite Impulse Response Filter (FIR), Linear Phase: If the weights of an N weight real valued FIR filter are symmetric or anti-symmetric, i.e.:

\[
w(n) = \pm w(N-1-n)
\tag{160}
\]

then the filter has linear phase. This means that all frequencies passing through the filter are delayed by the same amount. The impulse response of a linear phase FIR filter can have either an even or odd number of weights.

[Figure: two symmetric impulse responses, each with its line of symmetry marked.]

Symmetric impulse response of an 11 (odd number) weight linear phase FIR filter. Symmetric impulse response of an 8 (even number) weight linear phase FIR filter.


[Figure: two anti-symmetric impulse responses, each with its location of anti-symmetry marked.]

Anti-symmetric impulse response of an 11 (odd number) weight linear phase FIR filter. Anti-symmetric impulse response of an 8 (even number) weight linear phase FIR filter.

The z-domain pole-zero plot of a linear phase filter will always have conjugate pair zeroes, i.e. the zeroes are symmetric about the real axis. The desirable property of linear phase is particularly important in applications where the phase of a signal carries important information. To illustrate the linear phase response, consider inputting a cosine wave of frequency f, sampled at fs samples per second (i.e. cos 2πfk/fs), to a symmetric impulse response FIR filter with an even number of weights N (i.e. w(n) = w(N - n) for n = 0, 1, …, N/2 - 1). For notational convenience let ω = 2πf/fs:
\[
\begin{aligned}
y(k) &= \sum_{n=0}^{N-1} w_n \cos\omega(k-n)
      = \sum_{n=0}^{N/2-1} w_n\bigl(\cos\omega(k-n) + \cos\omega(k-N+n)\bigr) \\
     &= \sum_{n=0}^{N/2-1} 2w_n \cos\omega(k-N/2)\,\cos\omega(n-N/2) \\
     &= 2\cos\omega(k-N/2)\sum_{n=0}^{N/2-1} w_n \cos\omega(n-N/2)
      = M\cos\omega(k-N/2), \qquad M = \sum_{n=0}^{N/2-1} 2w_n \cos\omega(n-N/2)
\end{aligned}
\tag{161}
\]

where the trigonometric identity cos A + cos B = 2 cos((A + B)/2) cos((A - B)/2) has been used. From this equation it can be seen that regardless of the input frequency, the input cosine wave is delayed only by N/2 samples, often referred to as the group delay, and its magnitude is scaled by the factor M. Hence the phase response of such an FIR filter is simply a straight line, -ωN/2. Group delay is often defined as the negative of the derivative of the phase response with respect to angular frequency. Hence, a filter that provides linear phase has a group delay that is constant for all frequencies. An all-pass filter with constant group delay (i.e., linear phase) produces a pure delay for any input time waveform.
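The constant group delay can be confirmed numerically. A minimal sketch, assuming Python with numpy and an arbitrary symmetric 8 weight impulse response; the group delay is estimated as the negative slope of the unwrapped phase and, away from any nulls of the response, is constant at 3.5 samples (half the impulse response duration) for this example:

import numpy as np

w = np.array([1.0, 2.0, -0.5, 3.0, 3.0, -0.5, 2.0, 1.0])   # symmetric impulse response, 8 weights
W = np.fft.rfft(w, 4096)
phase = np.unwrap(np.angle(W))
omega = np.linspace(0.0, np.pi, len(phase))
gd = -np.diff(phase) / np.diff(omega)                       # group delay in samples
print(gd[:5])                                               # ~3.5 samples at low frequencies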


Linear phase FIR filters can be implemented with N ⁄ 2 multiplies and N accumulates compared to the N MACs required by an FIR filter with a non-symmetric impulse response. This can be illustrated by rewriting the output of a symmetric FIR filter with an even number of coefficients:
\[
y(k) = \sum_{n=0}^{N-1} w_n\,x(k-n) = \sum_{n=0}^{N/2-1} w_n\bigl[x(k-n) + x(k-N+n)\bigr]
\tag{162}
\]

Although the number of multiplies is halved, most DSP processors can perform a multiply-accumulate in the same time as an addition, so there is not necessarily a computational advantage in implementing a symmetric FIR filter on a DSP device. One drawback of linear phase filters is of course that they always introduce a delay. Linear phase FIR filters are non-minimum phase, i.e. they will always have zeroes that are on or outside of the unit circle. In the z-domain plot of the z-transform of a linear phase filter, every zero that is not on the unit circle is accompanied by the reciprocal of its complex conjugate. For example:

[Figure: the impulse response h(n) of a simple 5 weight linear phase FIR filter and the corresponding z-domain plot of its zeroes.]

The impulse response of a simple 5 weight linear phase FIR filter and the corresponding z-domain plot. Note that for the zeroes inside the unit circle at z = -0.286 ± 0.3526j, there are conjugate reciprocal zeroes at z = 1/(-0.286 ∓ 0.3526j) = -1.384 ± 1.727j.
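A direct way to see the saving of Eq. 162 in code is to "fold" the delay line so that the two samples sharing a weight are added before the single multiplication. This is a sketch only, assuming Python with numpy and the standard symmetry convention w(n) = w(N-1-n); the function name is illustrative:

import numpy as np

def folded_fir(x, w):
    # Filter x with an even length symmetric FIR using only N/2 multiplications per output sample.
    N = len(w)
    assert N % 2 == 0 and np.allclose(w, w[::-1]), "expects an even length symmetric filter"
    y = np.zeros(len(x))
    xp = np.concatenate((np.zeros(N - 1), x))               # zero padded delay line
    for k in range(len(x)):
        frame = xp[k:k + N][::-1]                           # x(k), x(k-1), ..., x(k-N+1)
        pairs = frame[:N // 2] + frame[N - 1:N // 2 - 1:-1] # add the two samples sharing each weight
        y[k] = np.dot(w[:N // 2], pairs)                    # one multiply per weight pair
    return y

The output is identical to a conventional convolution with w; only the multiplication count changes.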

See also Digital Filter, Finite Impulse Response Filter.

Finite Impulse Response (FIR), Minimum Phase: If the zeroes of an FIR filter all lie within the unit circle on the z-domain plane, then the filter is said to be minimum phase. One simple property is that the inverse filter of a minimum phase FIR filter is a stable IIR filter, i.e. all of the poles lie within the unit circle. See also Finite Impulse Response Filter.

Finite Impulse Response (FIR) Filter, Order Reversed: Consider the general finite impulse response filter with transfer function denoted as H(z):

\[
H(z) = a_0 + a_1 z^{-1} + \dots + a_{N-1} z^{-(N-1)} + a_N z^{-N}
\tag{163}
\]

The order reversed FIR filter transfer function, H_r(z), is given by:

\[
H_r(z) = a_N + a_{N-1} z^{-1} + \dots + a_1 z^{-(N-1)} + a_0 z^{-N}
\tag{164}
\]

The respective FIR filter signal flow graphs (SFG) are simply:
[Figure: signal flow graphs of the FIR filter with weights a_0, a_1, …, a_N, and of the order reversed FIR filter with the weights in the opposite order.]

The signal flow graph for an N+1 weight FIR filter and the order reversed FIR filter. The order reversed FIR filter is the same order as the original FIR filter but with the filter weights in opposite order.

From the z-domain functions above it is easy to show that H_r(z) = z^{-N} H(z^{-1}). The order reversed FIR filter has exactly the same magnitude frequency response as the original FIR filter:

\[
\bigl|H_r(z)\bigr|_{z=e^{j\omega}} = \bigl|z^{-N}H(z^{-1})\bigr|_{z=e^{j\omega}} = \bigl|e^{-j\omega N}H(e^{-j\omega})\bigr| = \bigl|H(e^{-j\omega})\bigr| = \bigl|H(e^{j\omega})\bigr| = \bigl|H(z)\bigr|_{z=e^{j\omega}}
\tag{165}
\]

The phase responses of the two filters are, however, different. The difference can be noted by considering that the zeroes of the order reversed FIR filter are the reciprocals of the zeroes of the original FIR filter, i.e. if the zeroes of Eq. 163 are α_1, α_2, …, α_{N-1}, α_N:

\[
H(z) = (1-\alpha_1 z^{-1})(1-\alpha_2 z^{-1})\cdots(1-\alpha_{N-1} z^{-1})(1-\alpha_N z^{-1})
\tag{166}
\]

then the zeroes of the order reversed polynomial are α_1^{-1}, α_2^{-1}, …, α_{N-1}^{-1}, α_N^{-1}, which can be seen from:

\[
\begin{aligned}
H_r(z) = z^{-N}H(z^{-1}) &= z^{-N}(1-\alpha_1 z)(1-\alpha_2 z)\cdots(1-\alpha_{N-1} z)(1-\alpha_N z) \\
&= (z^{-1}-\alpha_1)(z^{-1}-\alpha_2)\cdots(z^{-1}-\alpha_{N-1})(z^{-1}-\alpha_N) \\
&= (-1)^N \alpha_1\alpha_2\cdots\alpha_{N-1}\alpha_N\,(1-\alpha_1^{-1}z^{-1})(1-\alpha_2^{-1}z^{-1})\cdots(1-\alpha_{N-1}^{-1}z^{-1})(1-\alpha_N^{-1}z^{-1})
\end{aligned}
\tag{167}
\]

As examples consider the 8 weight FIR filter

\[
H(z) = 10 + 5z^{-1} - 3z^{-2} - z^{-3} + 3z^{-4} + 2z^{-5} - z^{-6} + 0.5z^{-7}
\tag{168}
\]

and the corresponding order reversed FIR filter:

\[
H_r(z) = 0.5 - z^{-1} + 2z^{-2} + 3z^{-3} - z^{-4} - 3z^{-5} + 5z^{-6} + 10z^{-7}
\tag{169}
\]
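These relationships are easy to confirm numerically. A minimal sketch, assuming Python with numpy, using the coefficients of Eqs. 168 and 169:

import numpy as np

h  = np.array([10.0, 5.0, -3.0, -1.0, 3.0, 2.0, -1.0, 0.5])   # H(z), Eq. 168
hr = h[::-1]                                                   # Hr(z), Eq. 169

H  = np.abs(np.fft.rfft(h, 1024))
Hr = np.abs(np.fft.rfft(hr, 1024))
print("max magnitude response difference:", np.max(np.abs(H - Hr)))   # ~1e-13

zeros_h  = np.roots(h)                       # zeroes of H(z)
zeros_hr = np.roots(hr)                      # zeroes of Hr(z)
print(np.sort_complex(zeros_hr))
print(np.sort_complex(1.0 / zeros_h))        # the same set: reciprocals of the zeroes of H(z)

The two magnitude responses agree to machine precision, and the zeroes of Hr(z) are the reciprocals of the zeroes of H(z).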


Assuming a sampling frequency of fs = 1, the impulse responses of both filters are easily plotted:

[Figure: impulse response h(k) of the simple FIR filter, and the order reversed impulse response hr(k).]

The corresponding magnitude and phase frequency responses of both filters are:

[Figure: 20 log |H(e^jω)| magnitude response (dB) and H(e^jω) phase response (radians) against normalised frequency, 0 to 0.5 Hz.]

Magnitude and phase frequency response of FIR filter H(z) = 10 + 5z^-1 - 3z^-2 - z^-3 + 3z^-4 + 2z^-5 - z^-6 + 0.5z^-7.

[Figure: 20 log |Hr(e^jω)| magnitude response (dB) and Hr(e^jω) phase response (radians, wrapped) against normalised frequency, 0 to 0.5 Hz.]

Magnitude and phase frequency response of order reversed FIR filter Hr(z) = 0.5 - z^-1 + 2z^-2 + 3z^-3 - z^-4 - 3z^-5 + 5z^-6 + 10z^-7.

and the z-domain plots of both filters' zeroes are:

[Figure: z-domain plot showing the zeroes of the FIR filter H(z) and the zeroes of the order reversed FIR filter Hr(z).]

For a zero α = x + jy we note that |α| = √(x² + y²), and therefore for the related order reversed filter zero at 1/α we note:

\[
\left|\frac{1}{\alpha}\right| = \left|\frac{1}{x+jy}\right| = \left|\frac{x-jy}{x^2+y^2}\right| = \frac{\sqrt{x^2+y^2}}{x^2+y^2} = \frac{1}{\sqrt{x^2+y^2}} = \frac{1}{|\alpha|}
\]

For this particular example H(z) is clearly minimum phase (all zeroes inside the unit circle), and therefore Hr(z) is maximum phase (all zeroes outside of the unit circle).
See also All-pass Filter, Digital Filter, Finite Impulse Response Filter.

Finite Impulse Response (FIR) Filter, Real Time Implementation: For each input sample, an FIR filter must perform N multiply accumulate (MAC) operations:
\[
y(k) = \sum_{n=0}^{N-1} w_n\,x(k-n)
\tag{170}
\]

Therefore if a particular FIR filter is sampling data at fs Hz, then the number of arithmetic operations per second is:

\[
\text{MACs/sec} = N f_s
\tag{171}
\]
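For example (plain Python arithmetic, with an illustrative filter length and sampling rate):

N, fs = 128, 48000
print(N * fs, "MACs per second")    # 6144000, i.e. about 6.1 million MACs/sec for a 128 weight filter at 48 kHz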

Finite Impulse Response (FIR) Filter, Wordlength: For a real time implementation of a digital filter, the wordlength used to represent the filter weights will of course have some bearing on the achievable accuracy of the frequency response. Consider for example the design of a high pass digital filter using 16 bit filter weights:
[Figure: magnitude responses (gain in dB versus frequency, 0 to 5000 Hz) of the filter implemented with 16 bit, 8 bit and 4 bit coefficients.]
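The effect of coefficient wordlength can be explored numerically. A sketch only, assuming Python with numpy and an illustrative windowed sinc low pass design (not the filter of the figure above); the weights are rounded to a B bit fractional representation and the worst stopband gain reported:

import numpy as np

fs = 10000.0
n = np.arange(61)
h = 0.2 * np.sinc(0.2 * (n - 30)) * np.hamming(61)   # illustrative low pass, ~1000 Hz cut off

f = np.fft.rfftfreq(4096, 1.0 / fs)
for bits in (16, 8, 4):
    step = 2.0 ** -(bits - 1)                        # quantisation step of a B bit fractional weight
    hq = np.round(h / step) * step                   # weights rounded to the nearest level
    Hq = 20 * np.log10(np.abs(np.fft.rfft(hq, 4096)) + 1e-12)
    print(bits, "bit weights: worst stopband gain =", round(np.max(Hq[f > 2000]), 1), "dB")

With 16 bit weights the designed stopband is essentially preserved; at 4 bits the quantisation error dominates and the stopband largely disappears, mirroring the trend shown in the figure above.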


Finite Impulse Response (FIR) Filter, Zeroes: An important way of representing an FIR digital filter is with a z-domain plot of the filter zeroes. By writing the transfer function of an FIR filter in the z-domain, the resulting polynomial in z can be factorised to find the roots, which are in fact the "zeroes" of the digital filter. Consider a simple 5 weight FIR filter:

\[
y(k) = -0.3x(k) + 0.5x(k-1) + x(k-2) + 0.5x(k-3) - 0.3x(k-4)
\tag{172}
\]

The signal flow graph of this filter can be represented as:

[Figure: signal flow graph of the 5 weight FIR filter with weights -0.3, 0.5, 1, 0.5, -0.3 applied to x(k), x(k-1), x(k-2), x(k-3), x(k-4).]

The signal flow graph for a 5 weight FIR filter.

The z-domain transfer function of this filter is therefore:

\[
H(z) = \frac{Y(z)}{X(z)} = -0.3 + 0.5z^{-1} + z^{-2} + 0.5z^{-3} - 0.3z^{-4}
\tag{173}
\]

If the z-polynomial of Eq. 173 is factorised (using DSP design software rather than with paper and pencil!) then this gives for this example:
\[
H(z) = -0.3\,(1 - 2.95z^{-1})\bigl(1 - (-0.811 + 0.584j)z^{-1}\bigr)\bigl(1 - (-0.811 - 0.584j)z^{-1}\bigr)(1 - 0.339z^{-1})
\tag{174}
\]

and the zeroes of the FIR filter (corresponding to the roots of the polynomial) are z = 2.95, 0.339, -0.811 + 0.584j, and -0.811 - 0.584j. (Note all quantities have been rounded to 3 decimal places.) The corresponding SFG of the FIR filter written in the zero form of Eq. 174 is therefore:

[Figure: signal flow graph of four cascaded first order sections with coefficients 2.95, 0.339, -0.811 + 0.584j and -0.811 - 0.584j, followed by the overall gain of -0.3.]

The signal flow graph of four first order cascaded filters corresponding to the same impulse response as the 5 weight filter shown above. The first order filter coefficients correspond to the zeroes of the 5 weight filter.

The zeroes of the FIR filter can also be plotted on the z-domain plane:
[Figure: z-domain plot of the zeroes of the FIR filter.]

The zeroes of the FIR filter in Eq. 173. Note that some of the roots are complex. In the case of an FIR filter with real coefficients the zeroes are always symmetric about the real axis (conjugate pairs), such that when the factorised polynomial is multiplied out there are no imaginary values.
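The factorisation of Eq. 173 can be reproduced with a numerical root finder. A one-line check, assuming Python with numpy:

import numpy as np

h = np.array([-0.3, 0.5, 1.0, 0.5, -0.3])    # coefficients of Eq. 172 / Eq. 173
print(np.roots(h))
# approximately: 2.95, -0.811+0.584j, -0.811-0.584j, 0.339  (cf. Eq. 174)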

If all of the zeroes of the FIR filter are within the unit circle then the filter is said to be minimum phase.

FIR Filter: See Finite Impulse Response Filter.

First Order Hold: Interpolation between discrete samples using a straight line. First order hold is a crude form of interpolation. See also Interpolation, Step Reconstruction, Zero Order Hold.

Fixed point: Numbers are represented as integers. 16 bit fixed point can represent a range of 65536 (2^16) numbers (including zero). 24 bit fixed point, as used by some Motorola fixed point DSP processors, can represent a range of 16777216 (2^24) numbers. See also Binary, Binary Point, Floating Point, Two's Complement.

Fixed Point DSP: A DSP processor that can manipulate only fixed point numbers, such as the Motorola DSP56002, the Texas Instruments TMS320C50, the AT&T DSP16, or the Analog Devices ADSP2100. See also Floating Point DSP.

Flash Converter: A type of (expensive) analog to digital converter.

Fletcher-Munson Curves: Fletcher and Munson's 1933 paper [73] studied the definition of sound intensity, the subjective loudness of human hearing, and associated measurements. Most notably they produced a set of equal loudness contours which showed the variation in SPL of tones at different frequencies that are perceived as having the same loudness. The work of Fletcher and Munson was re-evaluated a few years later by Robinson and Dadson [126]. See also Equal Loudness Contours, Frequency Range of Hearing, Loudness Recruitment, Sound Pressure Level, Threshold of Hearing.

Floating Point: Numbers are represented in a floating point notation with a mantissa and an exponent. 32 bit floating point numbers have a 24 bit mantissa and an 8 bit exponent. Motorola DSP processors use the IEEE 754 floating point number format whereas Texas Instruments use their own floating point number format. Both formats give a dynamic range of approximately 2^-128 to 2^128 with a resolution of 24 bits.

fs: Abbreviation for the sampling frequency (in Hz) of a DSP system.

Floating Point Arithmetic Standards: See IEEE Standard 754.


Fourier: Jean Baptiste Fourier (died 1830) made a major contribution to modern mathematics with his work in using trigonometric functions to represent heat and diffusion equations. Fourier's work is now collectively referred to as Fourier analysis. See also Discrete Fourier Transform, Fourier Analysis, Fourier Series, Fourier Transform.

Fourier Analysis: The mathematical tools of the Fourier series, Fourier transform, discrete Fourier transform, magnitude response, phase response and so on can be collectively referred to as Fourier analysis tools. Fourier analysis is widely used in science, engineering and business mathematics. In DSP, representing a signal in the frequency domain using Fourier techniques can bring a number of advantages:
• Physical Meaning: Many real world signals are produced as a sum of harmonic oscillations, e.g. vibrating music strings; vibration induced from the reciprocating motion of an engine; vibration of the vocal tract and other forms of simple harmonic motion. Hence reliable mathematical models can be produced.
• Filtering: It is often useful to filter in a frequency selective manner, e.g. filter out low frequencies.
• Signal Compression: If a signal is periodic over a long time, then rather than transmit the time signal, we can transmit the frequency domain parameters (amplitudes, frequencies and phases) and the signal can be reconstructed at the other end of a communications line.

See also Discrete Fourier Transform, Fast Fourier Transform, Fourier Transform.

Fourier Series: There exists a mathematical theory, called the Fourier series, that allows any periodic waveform in time to be decomposed into a sum of harmonically related sine and cosine waves. The first requirement in realising the Fourier series is to calculate the fundamental period, T, which is the shortest time over which the signal repeats, i.e. for a signal x(t):

\[
x(t) = x(t+T) = x(t+2T) = \dots = x(t+kT)
\tag{175}
\]

[Figure: a periodic signal x(t), with the period T = 1/f0 marked between t0, t0 + T and t0 + 2T.]

The (fundamental) period of a signal x(t) identified as T. The fundamental frequency, f0, is calculated as f0 = 1/T. Clearly x(t0) = x(t0 + T) = x(t0 + 2T).

For a periodic signal with fundamental period T seconds, the Fourier series represents this signal as a sum of sine and cosine components that are harmonics of the fundamental frequency, f 0 = 1 ⁄ T Hz. The Fourier series can be written in a number of different ways:


\[
\begin{aligned}
x(t) &= \sum_{n=0}^{\infty} A_n \cos\!\left(\frac{2\pi n t}{T}\right) + \sum_{n=1}^{\infty} B_n \sin\!\left(\frac{2\pi n t}{T}\right) \\
&= A_0 + \sum_{n=1}^{\infty}\left[ A_n \cos\!\left(\frac{2\pi n t}{T}\right) + B_n \sin\!\left(\frac{2\pi n t}{T}\right)\right] \\
&= A_0 + \sum_{n=1}^{\infty}\bigl[ A_n \cos(2\pi n f_0 t) + B_n \sin(2\pi n f_0 t)\bigr] \\
&= A_0 + \sum_{n=1}^{\infty}\bigl[ A_n \cos(n\omega_0 t) + B_n \sin(n\omega_0 t)\bigr]
 = \sum_{n=0}^{\infty}\bigl[ A_n \cos(n\omega_0 t) + B_n \sin(n\omega_0 t)\bigr] \\
&= A_0 + A_1\cos(\omega_0 t) + A_2\cos(2\omega_0 t) + A_3\cos(3\omega_0 t) + \dots
 + B_1\sin(\omega_0 t) + B_2\sin(2\omega_0 t) + B_3\sin(3\omega_0 t) + \dots
\end{aligned}
\tag{176}
\]

where A_n and B_n are the amplitudes of the various cosine and sine waveforms, and angular frequency is denoted by ω0 = 2πf0 radians/second. Depending on the actual problem being solved we can choose to specify the fundamental periodicity of the waveform in terms of the period (T), frequency (f0), or angular frequency (ω0) as shown in Eq. 176. Note that there is actually no requirement to specifically include a B0 term since sin 0 = 0, although there is an A0 term, since cos 0 = 1, which represents any DC component that may be present in the signal. In more descriptive language the above Fourier series says that any periodic signal can be reproduced by adding a (possibly infinite) series of harmonically related sinusoidal waveforms of amplitudes A_n or B_n. Therefore if a periodic signal with a fundamental period of say 0.01 seconds is identified, then the Fourier series will allow this waveform to be represented as a sum of various cosine and sine waves at frequencies of 100 Hz (the fundamental frequency, f0), 200 Hz, 300 Hz (the harmonic frequencies 2f0, 3f0) and so on. The amplitudes of these cosine and sine waves are given by A0, A1, B1, A2, B2, A3 ... and so on. So how are the values of A_n and B_n calculated? The answer can be derived by some basic trigonometry. Taking the last line of Eq. 176, if we multiply both sides by cos(pω0 t), where p is an arbitrary positive integer, then we get:


cos ( pω 0 t ) x ( t ) = cos ( pω 0 t )

∑ [ An cos ( nω0 t ) + Bn sin ( nω0 t ) ] n=0 (177)

[Figure: decomposition of a periodic signal x(t) into its Fourier series components A0, A1 cos, B1 sin, A2 cos, B2 sin, A3 cos, B3 sin, … which sum back to x(t).]

Fourier series for a periodic signal x(t). If we analyse a periodic signal and realise the cosine and sine wave Fourier coefficients of appropriate amplitudes A_n and B_n, then summing these components will lead to exactly the original signal.

If we now take the average of one fundamental period of both sides, this can be done by integrating the functions over any one period, T :
\[
\begin{aligned}
\int_0^T \cos(p\omega_0 t)\,x(t)\,dt &= \int_0^T \cos(p\omega_0 t)\left(\sum_{n=0}^{\infty} A_n\cos(n\omega_0 t) + \sum_{n=0}^{\infty} B_n\sin(n\omega_0 t)\right) dt \\
&= \sum_{n=0}^{\infty}\int_0^T A_n\cos(p\omega_0 t)\cos(n\omega_0 t)\,dt + \sum_{n=0}^{\infty}\int_0^T B_n\cos(p\omega_0 t)\sin(n\omega_0 t)\,dt
\end{aligned}
\tag{178}
\]

Noting the zero value of the second term in the last line of Eq. 178, i.e. :

\[
\begin{aligned}
\int_0^T B_n\cos(p\omega_0 t)\sin(n\omega_0 t)\,dt &= \frac{B_n}{2}\int_0^T \bigl(\sin(p+n)\omega_0 t - \sin(p-n)\omega_0 t\bigr)\,dt \\
&= \frac{B_n}{2}\int_0^T \sin\frac{(p+n)2\pi t}{T}\,dt - \frac{B_n}{2}\int_0^T \sin\frac{(p-n)2\pi t}{T}\,dt = 0
\end{aligned}
\tag{179}
\]

using the trigonometric identity 2 cos A sin B = sin(A + B) - sin(A - B) and noting that the integral over one period, T, of any harmonic of the term sin[2πt/T] is zero:
[Figure: one period of sin(2πt/T) = sin ω0 t and of sin(6πt/T) = sin 3ω0 t.]

The integral over T of any sine/cosine waveform of frequency f0 = 1/T, or harmonics thereof, 2f0, 3f0, …, is zero, regardless of the amplitude or phase of the signal.

Eq. 179 is true for all values of the positive integers p and n . For the first term in the last line of Eq. 178 the average is only zero if p ≠ n , i.e. :
\[
\int_0^T A_n\cos(p\omega_0 t)\cos(n\omega_0 t)\,dt = \frac{A_n}{2}\int_0^T \bigl(\cos(p+n)\omega_0 t + \cos(p-n)\omega_0 t\bigr)\,dt = 0, \qquad p \neq n
\tag{180}
\]

this time using the trigonometric identity 2 cos A cos B = cos ( A + B ) + cos ( A – B ) . If p = n then:
\[
\int_0^T A_n\cos(n\omega_0 t)\cos(n\omega_0 t)\,dt = A_n\int_0^T \cos^2(n\omega_0 t)\,dt = \frac{A_n}{2}\int_0^T \bigl(1 + \cos 2n\omega_0 t\bigr)\,dt = \frac{A_n}{2}\int_0^T 1\,dt = \frac{A_n}{2}\Bigl[t\Bigr]_0^T = \frac{A_n T}{2}
\tag{181}
\]

Therefore using Eqs. 179, 180, 181 in Eq. 178 we note that:
\[
\int_0^T \cos(p\omega_0 t)\,x(t)\,dt = \frac{A_n T}{2}
\tag{182}
\]

and therefore:

\[
A_n = \frac{2}{T}\int_0^T x(t)\cos(n\omega_0 t)\,dt
\tag{183}
\]

By premultiplying and time averaging Eq. 178 by sin(pω0 t) and using a similar set of simplifications to Eqs. 179, 180, 181 we can similarly show that:

\[
B_n = \frac{2}{T}\int_0^T x(t)\sin(n\omega_0 t)\,dt
\tag{184}
\]

Hence the three key equations for calculating the Fourier series of a periodic signal with fundamental period T are:
\[
\begin{aligned}
x(t) &= \sum_{n=0}^{\infty} A_n \cos\!\left(\frac{2\pi n t}{T}\right) + \sum_{n=1}^{\infty} B_n \sin\!\left(\frac{2\pi n t}{T}\right) \\
A_n &= \frac{2}{T}\int_0^T x(t)\cos(n\omega_0 t)\,dt \\
B_n &= \frac{2}{T}\int_0^T x(t)\sin(n\omega_0 t)\,dt
\end{aligned}
\tag{185}
\]

Fourier Series Equations
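The analysis integrals of Eq. 185 are easily approximated numerically by averaging over one period. A minimal sketch, assuming Python with numpy and a simple test signal made of a 100 Hz cosine and a 300 Hz sine:

import numpy as np

T  = 0.01                                        # fundamental period (f0 = 100 Hz)
t  = np.linspace(0.0, T, 1000, endpoint=False)   # one full period, uniformly sampled
x  = 3.0 * np.cos(2 * np.pi * 100 * t) + 2.0 * np.sin(2 * np.pi * 300 * t)
w0 = 2.0 * np.pi / T

for n in range(1, 5):
    An = 2.0 * np.mean(x * np.cos(n * w0 * t))   # approximates (2/T) * integral over one period
    Bn = 2.0 * np.mean(x * np.sin(n * w0 * t))
    print(n, round(An, 3), round(Bn, 3))
# prints A1 = 3.0 and B3 = 2.0, with all other coefficients essentially zero

As expected, only the harmonics actually present in x(t) produce non-zero coefficients.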

See also Basis Function, Discrete Cosine Transform, Discrete Fourier Transform, Fast Fourier Transform, Fourier, Fourier Analysis, Fourier Series - Amplitude/Phase Representation, Fourier Series - Complex Exponential Representation, Fourier Transform, Frequency Response, Impulse Response, Gibbs Phenomenon, Parseval's Theorem.

Fourier Series, Amplitude/Phase Representation: It is often useful to abbreviate the notation of the Fourier series such that the series is a sum of cosine (or sine) only terms with a phase shift. To perform this notational simplification, first consider the simple trigonometric function:

\[
A\cos\omega t + B\sin\omega t
\tag{186}
\]

where A and B are real numbers. If we introduce another variable, M, such that M = √(A² + B²), then:

\[
\begin{aligned}
A\cos\omega t + B\sin\omega t &= \frac{\sqrt{A^2+B^2}}{\sqrt{A^2+B^2}}\,(A\cos\omega t + B\sin\omega t) \\
&= M\left(\frac{A}{\sqrt{A^2+B^2}}\cos\omega t + \frac{B}{\sqrt{A^2+B^2}}\sin\omega t\right) \\
&= M(\cos\theta\cos\omega t + \sin\theta\sin\omega t) \\
&= M\cos(\omega t - \theta) = \sqrt{A^2+B^2}\,\cos\bigl(\omega t - \tan^{-1}(B/A)\bigr)
\end{aligned}
\tag{187}
\]

since θ is the angle made by a right angled triangle of hypotenuse M = √(A² + B²) and sides of A and B, i.e. tan⁻¹(B/A) = θ.

[Figure: right angled triangle with sides A (adjacent) and B (opposite) and hypotenuse M = √(A² + B²).]

A simple right angled triangle with arbitrary length sides of A and B. The sine of the angle θ is the ratio of the opposite side over the hypotenuse, B/M; the cosine of the angle θ is the ratio of the adjacent side over the hypotenuse, A/M; and the tangent of the angle θ is the ratio of the opposite side over the adjacent side, B/A.

This result shows that the sum of a sine and a cosine waveform of arbitrary amplitudes is a sinusoidal signal of the same frequency but different amplitude and phase from the original sine and cosine terms. Using this result of Eq. 187 to combine each sine and cosine term, we can rewrite the Fourier series of Eq. 176 as:
\[
\begin{aligned}
x(t) &= \sum_{n=0}^{\infty} A_n \cos\!\left(\frac{2\pi n t}{T}\right) + \sum_{n=1}^{\infty} B_n \sin\!\left(\frac{2\pi n t}{T}\right) \\
x(t) &= \sum_{n=0}^{\infty} M_n \cos(n\omega_0 t - \theta_n), \qquad
M_n = \sqrt{A_n^2 + B_n^2}, \quad \theta_n = \tan^{-1}(B_n/A_n)
\end{aligned}
\tag{188}
\]

where A_n and B_n are calculated as before using Eqs. 183 and 184.
[Figure: decomposition of x(t) into A0 plus cosine components M1, M2, M3, … of the form Mn cos(2πnt/T - θn), which sum back to x(t).]

Comparing this Fourier series with the sine/cosine form of Eq. 176, note that the sine and cosine terms have been combined for each frequency to produce a single cosine waveform of amplitude Mn = √(An² + Bn²) and phase θn = tan⁻¹(Bn/An).

From this representation of the Fourier series, we can plot an amplitude line spectrum and a phase spectrum:
[Figure: a periodic signal x(t) passed to a Fourier series calculation, producing an amplitude spectrum with lines M1, M2, M3 at 100, 200, 300 Hz, and a phase spectrum with phase values (e.g. -30°) at the same frequencies.]

The Fourier series components of the form Mn cos(2πnf0 t - θn). The amplitude spectrum shows the amplitudes of each of the cosine waves, and the phase spectrum shows the phase shift (in degrees in this example) of each cosine component. Note that the combination of the amplitude and phase spectrum completely defines the time signal.

See also Discrete Cosine Transform, Discrete Fourier Transform, Fast Fourier Transform - Zero Padding, Fourier, Fourier Analysis, Fourier Series, Fourier Series - Complex Exponential Representation, Fourier Transform, Impulse Response, Gibbs Phenomenon, Parseval's Theorem.

Fourier Series, Complex Exponential Representation: It can be useful and instructive to represent the Fourier series in terms of complex exponentials rather than sine and cosine waveforms. (In the derivation presented below we will assume that the signal under analysis is real valued, although the result extends easily to complex signals.) From Euler's formula, note that:

\[
e^{j\omega} = \cos\omega + j\sin\omega \quad\Rightarrow\quad \cos\omega = \frac{e^{j\omega}+e^{-j\omega}}{2} \quad\text{and}\quad \sin\omega = \frac{e^{j\omega}-e^{-j\omega}}{2j}
\tag{189}
\]

Substituting the complex exponential definitions for sine and cosine in Eq. 176 (defined in item Fourier Series) and rearranging gives:
\[
\begin{aligned}
x(t) &= A_0 + \sum_{n=1}^{\infty}\bigl[A_n\cos(n\omega_0 t) + B_n\sin(n\omega_0 t)\bigr] \\
&= A_0 + \sum_{n=1}^{\infty}\left[A_n\left(\frac{e^{jn\omega_0 t}+e^{-jn\omega_0 t}}{2}\right) + B_n\left(\frac{e^{jn\omega_0 t}-e^{-jn\omega_0 t}}{2j}\right)\right] \\
&= A_0 + \sum_{n=1}^{\infty}\left[\left(\frac{A_n}{2}+\frac{B_n}{2j}\right)e^{jn\omega_0 t} + \left(\frac{A_n}{2}-\frac{B_n}{2j}\right)e^{-jn\omega_0 t}\right] \\
&= A_0 + \sum_{n=1}^{\infty}\left(\frac{A_n - jB_n}{2}\right)e^{jn\omega_0 t} + \sum_{n=1}^{\infty}\left(\frac{A_n + jB_n}{2}\right)e^{-jn\omega_0 t}
\end{aligned}
\tag{190}
\]
For the second summation term, if the sign of the complex sinusoid is negated and the summation limits are reversed, then we can rewrite as:
\[
x(t) = A_0 + \sum_{n=1}^{\infty}\left(\frac{A_n - jB_n}{2}\right)e^{jn\omega_0 t} + \sum_{n=-\infty}^{-1}\left(\frac{A_n + jB_n}{2}\right)e^{jn\omega_0 t}
     = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 t}
\tag{191}
\]
Writing C_n in terms of the Fourier series coefficients of Eqs. 183 and 184 gives:

\[
C_0 = A_0, \qquad C_n = \frac{A_n - jB_n}{2} \ \text{ for } n > 0, \qquad C_n = \frac{A_n + jB_n}{2} \ \text{ for } n < 0
\tag{192}
\]

From Eq. 192, note that for n ≥ 0:

\[
C_n = \frac{A_n - jB_n}{2} = \frac{1}{T}\int_0^T x(t)\cos(n\omega_0 t)\,dt - j\,\frac{1}{T}\int_0^T x(t)\sin(n\omega_0 t)\,dt
    = \frac{1}{T}\int_0^T x(t)\bigl[\cos(n\omega_0 t) - j\sin(n\omega_0 t)\bigr]\,dt
    = \frac{1}{T}\int_0^T x(t)\,e^{-jn\omega_0 t}\,dt
\tag{193}
\]

For n < 0 it is clear from Eq. 192 that C_n = C*_{-n}, where "*" denotes complex conjugate. Therefore we have now defined the Fourier series of a real valued signal using a complex analysis and synthesis equation pair:

\[
\text{Synthesis:}\quad x(t) = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 t}, \qquad
\text{Analysis:}\quad C_n = \frac{1}{T}\int_0^T x(t)\,e^{-jn\omega_0 t}\,dt
\tag{194}
\]

Complex Fourier Series Equations

The complex Fourier series also introduces the concept of "negative frequencies", whereby we view signals of the form e^{j2πf0 t} as a positive complex sinusoid of frequency f0 Hz, and signals of the form e^{-j2πf0 t} as a complex sinusoid of frequency -f0 Hz. Note that the complex Fourier series is more notationally compact, and probably simpler to work with than the general Fourier series. (The "probably" depends on how clear you are in dealing with complex exponentials!) Also, if the signal being analysed is in fact complex, the general Fourier series of Eq. 176 (see Fourier Series) is insufficient, but Eqs. 194 can be used. (For complex signals the coefficient relationship in Eq. 192 will not in general hold.) Assuming the waveform being analysed is real (usually the case), then it is easy to convert the C_n coefficients into A_n and B_n. Also note from Eq. 188 (see item Fourier Series) and Eq. 192 that:
\[
M_n = \sqrt{A_n^2 + B_n^2} = 2\,|C_n|
\tag{195}
\]

noting that |C_n| = √(A_n² + B_n²)/2. Clearly we can also note that for the complex number C_n:

\[
\angle C_n = \tan^{-1}\frac{B}{A} = \theta_n, \qquad \text{i.e. } C_n = |C_n|\,e^{j\theta_n}
\tag{196}
\]
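The relationships of Eqs. 192, 195 and 196 can be checked numerically for a real signal. A short sketch, assuming Python with numpy:

import numpy as np

T  = 1.0
t  = np.linspace(0.0, T, 1000, endpoint=False)
x  = 3.0 * np.cos(2 * np.pi * 1 * t) + 2.0 * np.sin(2 * np.pi * 3 * t)
w0 = 2.0 * np.pi / T

for n in (1, 3):
    Cn = np.mean(x * np.exp(-1j * n * w0 * t))     # approximates (1/T) * integral of x(t) e^{-jn w0 t}
    An = 2.0 * np.mean(x * np.cos(n * w0 * t))
    Bn = 2.0 * np.mean(x * np.sin(n * w0 * t))
    print(n, np.round(Cn, 3), np.round((An - 1j * Bn) / 2, 3), round(2 * abs(Cn), 3))

For each harmonic the computed Cn equals (An - jBn)/2, and 2|Cn| reproduces the amplitude Mn of the cosine-only form.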

Therefore, although a complex exponential does not as such exist as a real world (single wire voltage) signal, we can easily convert from a complex exponential to a real world sinusoid simply by taking the real or imaginary part of the complex Fourier coefficients and using them in the Fourier series equation (see Eq. 176, Fourier Series):

\[
x(t) = \sum_{n=0}^{\infty}\bigl[A_n\cos(n\omega_0 t) + B_n\sin(n\omega_0 t)\bigr]
\tag{197}
\]

There are of course certain time domain signals which can be considered as being complex, i.e. having separate real and imaginary components. This type of signal can be found in some digital communication systems or may be created within a DSP system to allow certain types of computation to be performed. If a signal is decomposed into its complex Fourier series, the resulting values for the various components can be plotted as a line spectrum. As we now have both complex and real values and positive and negative frequencies, this will require two plots, one for the imaginary components and one for the real components:
[Figure: a periodic signal x(t) passed to a complex Fourier series calculation, producing a real valued line spectrum (An) and an imaginary valued line spectrum (Bn), each with lines at ±100, ±200, ±300 Hz.]

The complex Fourier series line spectra. Note that there are both positive and negative frequencies, and for the complex Fourier series of a real valued signal the real line spectrum is symmetrical about f = 0 and the imaginary spectrum has point symmetry about the origin.

Rather than showing the real and imaginary line spectra, it is more usual to plot the magnitude spectrum and phase spectrum:

[Figure: a periodic signal x(t) passed to a complex Fourier series calculation; the magnitude √(An² + Bn²) and phase tan⁻¹(Bn/An) of An + jBn are plotted as a magnitude spectrum (lines M1, M2, M3 at 100, 200, 300 Hz) and a phase spectrum (e.g. -30° at 100 Hz).]

Calculating the magnitude and phase spectra from the complex Fourier series. For a real valued signal the result will be identical, except for a magnitude scaling factor of 2, to that obtained from the amplitude/phase form of the Fourier series shown earlier. As both spectra are symmetrical about the y-axis the negative frequency values are not plotted.

The "ease" of working with complex exponentials over sines and cosines can be illustrated by asking the reader to simplify the following product to a sum of sinusoids:

\[
\sin(\omega_1 t)\sin(\omega_2 t)
\tag{198}
\]

This requires the recollection (or re-derivation!) of trigonometric identities to yield:

\[
\sin(\omega_1 t)\sin(\omega_2 t) = \frac{1}{2}\cos(\omega_1 - \omega_2)t - \frac{1}{2}\cos(\omega_1 + \omega_2)t
\tag{199}
\]

While not particularly arduous, it is somewhat easier to simplify the corresponding product of complex exponentials:

\[
e^{j\omega_1 t}\,e^{j\omega_2 t} = e^{j(\omega_1 + \omega_2)t}
\tag{200}
\]

Although a seemingly simple comment, this is the basis of using complex exponentials rather than sines and cosines; they make the maths easier. Of course in situations where the signal being analysed is complex, then the complex exponential Fourier series must be used. See also Discrete Fourier Transform, Fast Fourier Transform, Fast Fourier Transform - Decimation-in-Time, Fourier, Fourier Analysis, Fourier Series, Fourier Series - Amplitude/Phase Representation, Fourier Transform, Frequency Response, Impulse Response, Gibbs Phenomenon, Parseval's Theorem.

Fourier Transform: The Fourier series (rather than transform) allows a periodic signal to be broken down into a sum of real valued sine and cosine waves (in the case of a real valued signal) or more generally a sum of complex exponentials. However most signals are aperiodic, i.e. not periodic. Therefore the Fourier transform was derived in order to analyse the frequency content of an aperiodic signal. Consider the complex Fourier series of a periodic signal:
\[
x(t) = \sum_{n=-\infty}^{\infty} C_n e^{jn\omega_0 t}, \qquad
C_n = \frac{1}{T}\int_0^T x(t)\,e^{-jn\omega_0 t}\,dt
\tag{201}
\]

[Figure: a periodic signal x(t), with the period T = 1/f0 marked between t0, t0 + T and t0 + 2T.]

A periodic signal x(t) with period T. The fundamental frequency, f0, is calculated simply as f0 = 1/T. Clearly x(t0) = x(t0 + T) = x(t0 + 2T).

The period of the signal has been identified as T and the fundamental frequency is f 0 = 1 ⁄ T . Therefore the Fourier series harmonics occur at frequencies f 0, 2f 0, 3f 0, … .
[Figure: time signal — a square wave of amplitude 1 V with period T = 2 s; and its Fourier series magnitude response, with a line C0 = 0.5 at 0 Hz and decreasing harmonics spaced 0.5 Hz apart, 0 to 4.5 Hz.]

Magnitude response of a (periodic) square wave. The phase response is zero for all components. The fundamental period is T = 2 and therefore the fundamental frequency is f0 = 1/2 = 0.5 Hz, and the harmonics are therefore 0.5 Hz apart when the Fourier series is calculated.

For the above square wave we can calculate the Fourier series using Eq. 201 as:

\[
C_0 = \frac{1}{T}\int_0^T s(t)\,dt = \frac{1}{2}\int_0^1 1\,dt = \frac{1}{2}
\tag{202}
\]

\[
C_n = \frac{1}{T}\int_0^T s(t)\,e^{-j\omega_0 n t}\,dt = \frac{1}{2}\int_0^1 e^{-j\pi n t}\,dt
    = \left[\frac{e^{-j\pi n t}}{-2j\pi n}\right]_0^1 = \frac{e^{-j\pi n} - 1}{-2j\pi n}
    = e^{-j\pi n/2}\left(\frac{e^{j\pi n/2} - e^{-j\pi n/2}}{2j\pi n}\right)
    = \frac{\sin(\pi n/2)}{\pi n}\,e^{-j\pi n/2}
\tag{203}
\]

recalling that sin x = (e^{jx} - e^{-jx})/2j. Noting that e^{-jπn/2} = cos(πn/2) - j sin(πn/2) = ±1 or ±j (depending on the value of n), and recalling from Eqs. 190 and 191 (see Fourier Series) how C_n relates to the coefficients A_n and B_n, the square wave can be decomposed into a sum of harmonically related sine waves of amplitudes:

\[
A_0 = 1/2, \qquad A_n = \begin{cases} 1/n\pi & \text{for odd } n \\ 0 & \text{for even } n \end{cases}
\tag{204}
\]

The amplitude response of the Fourier series is plotted above.
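A quick numerical check of Eqs. 202-204 (assuming Python with numpy) confirms the 1/(nπ) fall-off of the odd harmonics:

import numpy as np

T  = 2.0
t  = np.linspace(0.0, T, 2000, endpoint=False)
s  = (t < 1.0).astype(float)                     # the square wave: 1 for 0 <= t < 1, 0 for 1 <= t < 2
w0 = 2.0 * np.pi / T

for n in range(5):
    Cn = np.mean(s * np.exp(-1j * n * w0 * t))   # approximates (1/T) * integral of s(t) e^{-jn w0 t}
    print(n, round(abs(Cn), 4))
# |C0| = 0.5, |C1| ~ 1/pi = 0.3183, |C2| ~ 0, |C3| ~ 1/(3*pi) = 0.1061, |C4| ~ 0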
Now consider the case where the signal is aperiodic, and is in fact just a single pulse:

[Figure: time signal — a single pulse of amplitude 1 V and duration 1 s, plotted from 0 to 5 s.]

A single aperiodic pulse. This signal is most definitely not periodic and therefore the Fourier series cannot be calculated.

One way to obtain “some” information on the sinusoidal components comprising this aperiodic signal would be to assume the existence of a periodic “relative” or “pseudo-period” of this signal:
[Figure: time signal — the pulse repeated with a pseudo-period Tp = 4 s; and the resulting Fourier series magnitude response with harmonics 0.25 Hz apart, 0 to 4.5 Hz.]

A periodic signal that is clearly a relative of the single pulse aperiodic signal. By adding the pseudo-periods we essentially assume that the single pulse of interest is a periodic signal, and therefore we can now use the Fourier series tools to analyse it. The fundamental period is Tp = 4 and therefore the harmonics of the Fourier series are placed f0 = 0.25 Hz apart.

If we assumed that the "periodicity" of the pulse was even longer, say 8 seconds, then the spacing between the signal harmonics would further decrease:

[Figure: time signal — the pulse repeated with a pseudo-period Tp = 8 s; and the resulting Fourier series magnitude response with harmonics 0.125 Hz apart, 0 to 4.5 Hz.]

If we increase the fundamental pseudo-period to Tp = 8, the harmonics of the Fourier series are more closely spaced at f0 = 1/8 = 0.125 Hz apart. The magnitude of all the harmonics proportionally decreases with the increase in the pseudo-period, as expected since the power of the signal decreases as the pseudo-period increases.

If we further assumed that the period of the signal was such that T → ∞, then f0 → 0, and given the finite energy in the signal, the magnitude of each of the Fourier series sine waves will tend to zero given that the harmonics are now so closely spaced! Hence if we multiply the magnitude response by T and plot the Fourier series we have now realised a graphical interpretation of the Fourier transform:
[Figure: time signal — the single pulse with period T → ∞; and the Fourier series magnitude response with the y-axis scaled by 1/T, 0 to 4.5 Hz.]

If we increase the fundamental pseudo-period such that T → ∞, the frequency spacing between the harmonics of the Fourier series tends to zero, i.e. f0 → 0. Note that the magnitudes of the Fourier series components are scaled down in proportion to the value of the "pseudo" period and in the limit as T → ∞ will tend to zero. Hence the y-axis is plotted in units of 1/T.

To realise the mathematical version of the Fourier transform, first define a new function based on the general Fourier series of Eq. 201 such that:

\[
X(f) = \frac{C_n}{f_0} = C_n T
\tag{205}
\]

then:

\[
x(t) = \sum_{n=-\infty}^{\infty} C_n e^{j2\pi n f_0 t}, \qquad
X(f) = \int_{-T/2}^{T/2} x(t)\,e^{-j2\pi n f_0 t}\,dt = \int_{-\infty}^{\infty} x(t)\,e^{-j2\pi f t}\,dt
\tag{206}
\]

where nf0 becomes the continuous variable f as f0 → 0 and n → ∞. This equation is referred to as the Fourier transform and can of course be written in terms of the angular frequency:

\[
X(\omega) = \int_{-\infty}^{\infty} x(t)\,e^{-j\omega t}\,dt
\tag{207}
\]

Knowing the Fourier transform of a signal, of course allows us to transform back to the original aperiodic signal:
\[
x(t) = \sum_{n=-\infty}^{\infty} C_n e^{j2\pi n f_0 t} = \sum_{n=-\infty}^{\infty} X(f)\,f_0\,e^{j2\pi n f_0 t}
     = \left(\sum_{n=-\infty}^{\infty} X(f)\,e^{j2\pi n f_0 t}\right) f_0
\;\;\Rightarrow\;\; x(t) = \int_{-\infty}^{\infty} X(f)\,e^{j2\pi f t}\,df
\tag{208}
\]

This equation is referred to as the inverse Fourier transform and can also be written in terms of the angular frequency:

\[
x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\,e^{j\omega t}\,d\omega
\tag{209}
\]

Hence we have realised the Fourier transform analysis and synthesis pair of equations:
\[
\begin{aligned}
\text{Synthesis:}\quad & x(t) = \int_{-\infty}^{\infty} X(f)\,e^{j2\pi f t}\,df \\
\text{Analysis:}\quad & X(f) = \int_{-\infty}^{\infty} x(t)\,e^{-j2\pi f t}\,dt
\end{aligned}
\tag{210}
\]

Fourier Transform Pair
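The analysis integral of Eq. 210 can be approximated numerically. A minimal sketch, assuming Python with numpy, for the single unit pulse used above (1 for 0 ≤ t < 1, 0 elsewhere), whose analytic transform has magnitude |sin(πf)/(πf)|:

import numpy as np

t  = np.linspace(0.0, 1.0, 10000, endpoint=False)   # the pulse is non-zero only on 0 <= t < 1
dt = t[1] - t[0]

for f in (0.25, 0.5, 1.5):
    Xf = np.sum(np.exp(-1j * 2 * np.pi * f * t)) * dt       # approximates the integral of x(t) e^{-j2 pi f t}
    print(f, round(abs(Xf), 4), round(abs(np.sinc(f)), 4))  # numerical vs analytic |X(f)|

The numerical and analytic values agree to a few decimal places, and unlike the Fourier series line spectra the result is defined for any (continuous) value of f.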

Therefore the Fourier transform of a continuous time signal, x ( t ) , will be a continuous function in frequency. See also Discrete Cosine Transform, Discrete Fourier Transform, Fast Fourier Transform, Fourier Analysis, Fourier Series, Fourier Series - Complex Exponential Representation, Fourier Transform. Forward Substitution: See Matrix Algorithms - Forward Substitution. Fractals: Fractals can be used to define seemingly irregular 1-D signals or 2-D surfaces using, amongst other things, properties of self similarity. Self similarity occurs when the same pattern repeats itself at different scalings, and is often seen in nature. A good introduction and overview of fractals can be found in [86]. Fractional Binary: See Binary Point.

Fractional Bandwidth: A definition of (relative) bandwidth for a signal obtained by dividing the difference of the highest and lowest frequencies of the signal by its center frequency. The result is a number between 0 and 2. When this number is multiplied by 100, the relative bandwidth can be stated in terms of percentage. See also Bandwidth.

Fractional Delay Implementation: See All-pass Filter - Fractional Sample Delay Implementation.

Fractional Sampling Rate Conversion: Sometimes sampling rate conversions are needed between sampling rates that are not integer multiples of each other, and therefore simple integer downsampling or upsampling cannot be performed. One method of changing sampling rate is to convert a signal back to its analog form using a DAC, then resample the signal using an ADC sampling at the required frequency. In general this is not an acceptable solution as two levels of noise are introduced by the DAC and ADC. Interpolation by a factor of N, followed by decimation by a factor of M, results in a sampling rate change of N/M. The higher the values of N and M, the more computation that is required. For example, to convert from the CD sampling rate of 44100 Hz to the DAT sampling rate of 48000 Hz requires upsampling by a factor of 160 and downsampling by a factor of 147 (a sketch of this conversion follows the figure below). When performing fractional sampling rate conversion the low pass anti-alias filter associated with decimation and the low pass filter used in interpolation can be combined into one digital filter. See also Upsampling, Downsampling, Decimation, Interpolation.
[Figure: fractional sampling rate conversion — the input at rate fs passes through an upsampler (factor N), a low pass filter (cut-off = fs/2max(N,M)), and a downsampler (factor M), giving an output rate of (N/M)fs.]
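The CD to DAT conversion mentioned above can be sketched with scipy's polyphase resampler, which implements the combined upsample / low pass filter / downsample operation of the figure (a sketch only, assuming Python with numpy and scipy available):

import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44100, 48000                 # CD rate to DAT rate: 48000/44100 = 160/147
t = np.arange(0, 0.01, 1.0 / fs_in)
x = np.sin(2 * np.pi * 1000 * t)             # 1 kHz test tone sampled at 44.1 kHz

y = resample_poly(x, up=160, down=147)       # interpolate by 160, filter, decimate by 147
print(len(x), "->", len(y))                  # the sample count grows by the factor 160/147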

Frequency: Frequency is measured in Hertz (Hz) and gives a measure of the number of cycles per second of a signal. For example if a sine wave has a frequency of 300 Hz, this means that the signal has 300 single wavelength cycles in one second. Square waves can also be assigned a frequency, defined as 1/T where T is the period of one cycle of the square wave. See also Sine Wave.

Frequency Domain Adaptive Filtering: The LMS (and other adaptive algorithms) can be configured to operate on time series data that has been transformed into the frequency domain [53], [131].

Frequency, Logarithmic: See Logarithmic Frequency.

Frequency Modulation: One of the three ways of modulating a sine wave signal to carry information. The sine wave or carrier has its frequency changed in accordance with the information signal to be transmitted. See also Amplitude Modulation, Phase Modulation.

Frequency Range of Hearing: The frequency range of hearing typically goes from around 20 Hz up to 20 kHz in healthy young people. For adults the upper range of hearing is more likely to be in the range 11-16 kHz as age erodes the high frequency sensitivity. The threshold of hearing varies over the frequency range, with the most sensitive portion being from around 1-5 kHz, where speech frequencies occur. Low frequencies, below 20 Hz, are tactile and only audible at very high sound pressure levels. Also, listening to frequencies below 20 Hz does not produce any further perception of reducing pitch. Inaudible sound below the lowest perceptible frequency is termed infrasound, and above the highest perceptible frequency is known as ultrasound.


Discrimination between tones at similar frequencies (the JND - just noticeable difference, or DL - difference limen) depends on a number of factors such as the frequency, sound pressure level (SPL), and sound duration. The ear can discriminate by about 1 Hz for frequencies in the range 1-2 kHz where the SPL is about 20 dB above the threshold of hearing and the duration is at least 1/4 second [30]. See also Audiogram, Audiometry, Auditory Filters, Beat Frequencies, Binaural Beats, Difference Limen, Ear, Equal Loudness Contours, Hearing Aids, Hearing Impairment, Hearing Level, Infrasound, Sensation Level, Sound Pressure Level, Spectral Masking, Temporal Masking, Threshold of Hearing, Ultrasound.

Frequency Response: The frequency response of a system defines how the magnitude and phase of signal components at different frequencies will be changed as the signal passes through, or is convolved with, a linear system. For example the frequency response of a digital filter may attenuate low frequency magnitudes, but amplify those at high frequencies. The frequency response of a linear system is calculated by taking the discrete Fourier transform (DFT) of the impulse response or evaluating the z-transform of the linear system for z = e^{jω} = e^{j2πf}. See also Discrete Fourier Transform, Fast Fourier Transform.
[Figure: a digital filter impulse response h(n) and its frequency response (magnitude only) |H(k)|, where H(k) = Σ_{n=0}^{N-1} h(n) e^{-j2πnk/N}.]

Frequency Shift Keying (FSK): A digital modulation technique in which the information bits are encoded in the frequency of a symbol. Typically, the frequencies are chosen so that the symbols are orthogonal over the symbol period. FSK demodulation can be either coherent (phase of carrier signal known) or noncoherent (phase of carrier signal unknown). Given a symbol period of T seconds, signals separated in frequency by 1/T Hz will be orthogonal and will have continuous phase. Signals separated by 1/(2T) Hz will be orthogonal (if demodulated coherently) but will result in phase discontinuities. See also Amplitude Shift Keying, Continuous Phase Modulation, Minimum Shift Keying, Phase Shift Keying. Frequency Transformation: The transformation of any time domain signal into the frequency domain. Frequency Weighting Curves: See Sound Pressure Level Weighting Curves. Frobenius Norm: See Matrix Properties - Norm. Formants: The vocal tract (comprising throat, mouth and lips) can act as an acoustics resonator with more than one resonant frequency. These resonant frequencies are known as formants and they change in frequency while we move tongue and lips in the process of joining speech sounds together (articulation).

Four Wire Circuit: A circuit containing two pairs of wires (or their logical equivalent) for simultaneous (full duplex) two-way transmission. See also Two Wire Channel, Full Duplex, Half Duplex, Simplex.

Fricatives: One of the elementary sounds of speech, namely plosives, fricatives, sibilant fricatives, semi-vowels, and nasals. Fricatives are formed from the lower lip and teeth with air passing through, as when "f" is used in the word "fin". See also Nasals, Plosives, Semi-vowels, and Sibilant Fricatives.

Full Adder: The full adder is the basic single bit arithmetic building block for the design of multibit binary adders, multipliers and arithmetic logic units. The full adder (symbol FA) has three single bit inputs (a, b and carry-in c, often labelled cin) and two single bit outputs (sum sout and carry-out cout), with the truth table:

a  b  cin | cout  sout
0  0   0  |  0     0
0  0   1  |  0     1
0  1   0  |  0     1
0  1   1  |  1     0
1  0   0  |  0     1
1  0   1  |  1     0
1  1   0  |  1     0
1  1   1  |  1     1

In Boolean algebra (where a′ denotes NOT a, (a+b) represents (a OR b), (ab) represents (a AND b), and a ⊕ b represents (a Exclusive-OR b)):

cout = a′bc + ab′c + abc′ + abc = ab + bc + ac
sout = a′b′c + a′bc′ + ab′c′ + abc = (a ⊕ b) ⊕ c

The full adder (FA) simply adds three bits (0 or 1) together to produce a sum bit, sout, and a carry bit, cout.
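The truth table and Boolean equations translate directly into code. A small sketch in Python (function names are illustrative), chaining full adders into a ripple carry adder:

def full_adder(a, b, c):
    # Single bit full adder: sum = (a XOR b) XOR c, carry = ab + bc + ac
    s = (a ^ b) ^ c
    cout = (a & b) | (b & c) | (a & c)
    return s, cout

def ripple_carry_add(x, y, nbits=4):
    # Add two unsigned nbits-wide integers one bit at a time using chained full adders
    carry, result = 0, 0
    for i in range(nbits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry

print(ripple_carry_add(0b1011, 0b0110))      # (1, 1): 11 + 6 = 17, i.e. sum 0001 with carry out 1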

See also Arithmetic Logic Unit, Parallel Adder, Parallel Multiplier, DSP Processor.

Full Duplex: Pertaining to the capability to send and receive simultaneously. See also Half Duplex, Simplex.

Fundamental Frequency: The name given to the lowest (and usually dominant) frequency component, which has associated with it various harmonics (integer multiples of the frequency). In music, for example, the fundamental frequency identifies the note being played, and the various harmonics (and occasionally sub-harmonics) give the note its rich characteristic quality pertaining to the instrument being played. See also Fourier Series, Harmonics, Music, Sub-Harmonic, Western Music Scale.

Fundamental Period: See Fourier Series.

Fuzzy Logic: A mathematical set theory which allows systems to be described in natural language rules. Binary logic, for example, uses only two levels: 0 and 1. Fuzzy logic would still have the levels 0 and 1, but it would also be capable of describing all logic levels in between, perhaps ranging through: almost definitely low, probably low, maybe high or low, probably high, to almost definitely high. Control systems defined by fuzzy logic are currently being implemented in conjunction with DSP algorithms. Essentially fuzzy logic is a technique for representing information and combining objective knowledge (such as mathematical models and precise definitions) with subjective knowledge (a linguistic description of a problem). One advantage often cited about fuzzy systems is that they can produce results almost as good as an "optimum" system, but they are much simpler to implement. A good introduction, with tutorial papers, can be found in [63].


G
G-Series Recommendations: The G-series recommendations from the International Telecommunication Union (ITU) advisory committee on telecommunications (denoted ITU-T, and formerly known as CCITT) propose a number of standards for transmission systems and media, digital systems and networks. From a DSP perspective, G.164/165/166/167 define aspects of echo and acoustic echo cancellation, and some of the G.7xx recommendations define various coding and compression schemes which underpin digital audio telecommunication. The ITU-T G-series recommendations (http://www.itu.ch) can be summarised as:
G.100 G.101 G.102 G.103 G.105 G.111 G.113 G.114 G.117 G.120 G.121 G.122 G.123 G.125 G.126 G.132 G.133 G.134 G.135 G.141 G.142 G.143 G.151 G.152 G.153 G.162 G.164 G.165 G.166 G.167 G.172 G.173 G.174 G.180 G.181 G.191 G.211 G.212 G.213 G.214 Definitions used in Recommendations on general characteristics of international telephone connections and circuits. The transmission plan. Transmission performance objectives and Recommendations. Hypothetical reference connections. Hypothetical reference connection for crosstalk studies. Loudness ratings (LRs) in an international connection. Transmission impairments. One-way transmission time. Transmission aspects of unbalance about earth (definitions and methods). Transmission characteristics of national networks. Loudness ratings (LRs) of national systems. Influence of national systems on stability and talker echo in international connections. Circuit noise in national networks. Characteristics of national circuits on carrier systems. Listener echo in telephone networks. Attenuation distortion. Group-delay distortion. Linear crosstalk. Error on the reconstituted frequency. Attenuation distortion. Transmission characteristics of exchanges. Circuit noise and the use of Companders. General performance objectives applicable to all modern international circuits and national extension circuits. Characteristics appropriate to long-distance circuits of a length not exceeding 2500 km. Characteristics appropriate to international circuits more than 2500 km in length. Characteristics of Companders for telephony. Echo suppressors. Echo cancellers. Characteristics of syllabic Companders for telephony on high capacity long distance systems. Acoustic echo controllers. Transmission plan aspects of international conference calls. Transmission planning aspects of the speech service in digital public land mobile networks. Transmission performance objectives for terrestrial digital wireless systems using portable terminals to access the PSTN. Characteristics of N + M type direct transmission restoration systems for use on digital and analogue sections, links or equipment. Characteristics of 1 + 1 type restoration systems for use on digital transmission links. Software tools for speech and audio coding standardization. Make-up of a carrier link. Hypothetical reference circuits for analogue systems. Interconnection of systems in a main repeater station. Line stability of cable systems.

176
G.215 G.221 G.222 G.223 G.224 G.225 G.226 G.227 G.228 G.229 G.230 G.231 G.232 G.233 G.241 G.242 G.243 G.322 G.325 G.332 G.333 G.334 G.341 G.343 G.344 G.345 G.346 G.352 G.411 G.421 G.422 G.423 G.431 G.441 G.442 G.451 G.473 G.601 G.602 G.611 G.612 G.613 G.614 G.621 G.622 G.623 G.631 G.650 G.651

DSPedia
Hypothetical reference circuit of 5000 km for analogue systems. Overall recommendations relating to carrier-transmission systems. Noise objectives for design of carrier-transmission systems of 2500 km. Assumptions for the calculation of noise on hypothetical reference circuits for telephony. Maximum permissible value for the absolute power level (power referred to one milliwatt) of a signalling pulse. Recommendations relating to the accuracy of carrier frequencies. Noise on a real link. Conventional telephone signal. Measurement of circuit noise in cable systems using a uniform-spectrum random noise loading. Unwanted modulation and phase jitter. Measuring methods for noise produced by modulating equipment and through-connection filters. Arrangement of carrier equipment. 12-channel terminal equipments. Recommendations concerning translating equipments. Pilots on groups, supergroups, etc. Through-connection of groups, supergroups, etc. Protection of pilots and additional measuring frequencies at points where there is a throughconnection. General characteristics recommended for systems on symmetric pair cables. General characteristics recommended for systems providing 12 telephone carrier circuits on a symmetric cable pair [(12+12) systems]. 12 MHz systems on standardized 2.6/9.5 mm coaxial cable pairs. 60 MHz systems on standardized 2.6/9.5 mm coaxial cable pairs. 18 MHz systems on standardized 2.6/9.5 mm coaxial cable pairs. 1.3 MHz systems on standardized 1.2/4.4 mm coaxial cable pairs. 4 MHz systems on standardized 1.2/4.4 mm coaxial cable pairs. 6 MHz systems on standardized 1.2/4.4 mm coaxial cable pairs. 12 MHz systems on standardized 1.2/4.4 mm coaxial cable pairs. 18 MHz systems on standardized 1.2/4.4 mm coaxial cable pairs. Interconnection of coaxial carrier systems of different designs. Use of radio-relay systems for international telephone circuits. Methods of interconnection. Interconnection at audio-frequencies. Interconnection at the baseband frequencies of frequency-division multiplex radio-relay systems. Hypothetical reference circuits for frequency-division multiplex radio-relay systems. Permissible circuit noise on frequency-division multiplex radio-relay systems. Radio-relay system design objectives for noise at the far end of a hypothetical reference circuit with reference to telegraphy transmission. Use of radio links in international telephone circuits. Interconnection of a maritime mobile satellite system with the international automatic switched telephone service; transmission aspects. Terminology for cables. Reliability and availability of analogue cable transmission systems and associated equipments (10) Characteristics of symmetric cable pairs for analogue transmission. Characteristics of symmetric cable pairs designed for the transmission of systems with bit rates of the order of 6 to 34 Mbit/s. Characteristics of symmetric cable pairs usable wholly for the transmission of digital systems with a bit rate of up to 2 Mbits. Characteristics of symmetric pair star-quad cables designed earlier for analogue transmission systems and being used now for digital system transmission at bit rates of 6 to 34 Mbit/s. Characteristics of 0.7/2.9 mm coaxial cable pairs. Characteristics of 1.2/4.4 mm coaxial cable pairs. Characteristics of 2.6/9.5 mm coaxial cable pairs. Types of submarine cable to be used for systems with line frequencies of less than about 45 MHz. Definition and test methods for the relevant parameters of single-mode fibres. 
Characteristics of a 50/125 µm multimode grades index optical fibre cable.

177
G.652 Characteristics of a single-mode optical fibre cable.
G.653 Characteristics of a dispersion-shifted single-mode optical fibre cable.
G.654 Characteristics of a 1550 nm wavelength loss-minimized single-mode optical fibre cable.
G.661 Definition and test methods for relevant generic parameters of optical fibre amplifiers.
G.662 Generic characteristics of optical fibre amplifier devices and sub-systems.
G.701 Vocabulary of digital transmission and multiplexing, and pulse code modulation (PCM) terms.
G.702 Digital hierarchy bit rates.
G.703 Physical/electrical characteristics of hierarchical digital interfaces.
G.704 Synchronous frame structures used at primary and secondary hierarchical levels.
G.705 Characteristics required to terminate digital links on a digital exchange.
G.706 Frame alignment and cyclic redundancy check (CRC) procedures relating to basic frame structures defined in Recommendation G.704.
G.707 Synchronous digital hierarchy bit rates.
G.708 Network node interface for the synchronous digital hierarchy.
G.709 Synchronous multiplexing structure.
G.711 Pulse code modulation (PCM) of voice frequencies.
G.712 Transmission performance characteristics of pulse code modulation.
G.720 Characterization of low-rate digital voice coder performance with non-voice signals.
G.722 7 kHz audio-coding within 64 kbit/s; Annex A: Testing signal-to-total distortion ratio for 7 kHz audio-codecs at 64 kbit/s.
G.724 Characteristics of a 48-channel low bit rate encoding primary multiplex operating at 1544 kbit/s.
G.725 System aspects for the use of the 7 kHz audio codec within 64 kbit/s.
G.726 40, 32, 24, 16 kbit/s Adaptive Differential Pulse Code Modulation (ADPCM). Annex A: Extensions of Recommendation G.726 for use with uniform-quantized input and output.
G.727 5-, 4-, 3- and 2-bits sample embedded adaptive differential pulse code modulation (ADPCM).
G.728 Coding of speech at 16 kbit/s using low-delay code excited linear prediction. Annex G: 16 kbit/s fixed point specification.
G.731 Primary PCM multiplex equipment for voice frequencies.
G.732 Characteristics of primary PCM multiplex equipment operating at 2048 kbit/s.
G.733 Characteristics of primary PCM multiplex equipment operating at 1544 kbit/s.
G.734 Characteristics of synchronous digital multiplex equipment operating at 1544 kbit/s.
G.735 Characteristics of primary PCM multiplex equipment operating at 2048 kbit/s and offering synchronous digital access at 384 kbit/s and/or 64 kbit/s.
G.736 Characteristics of a synchronous digital multiplex equipment operating at 2048 kbit/s.
G.737 Characteristics of an external access equipment operating at 2048 kbit/s offering synchronous digital access at 384 kbit/s and/or 64 kbit/s.
G.738 Characteristics of primary PCM multiplex equipment operating at 2048 kbit/s and offering synchronous digital access at 320 kbit/s and/or 64 kbit/s.
G.739 Characteristics of an external access equipment operating at 2048 kbit/s offering synchronous digital access at 320 kbit/s and/or 64 kbit/s.
G.741 General considerations on second order multiplex equipments.
G.742 Second order digital multiplex equipment operating at 8448 kbit/s and using positive justification.
G.743 Second order digital multiplex equipment operating at 6312 kbit/s and using positive justification.
G.744 Second order PCM multiplex equipment operating at 8448 kbit/s.
G.745 Second order digital multiplex equipment operating at 8448 kbit/s and using positive/zero/negative justification.
G.746 Characteristics of second order PCM multiplex equipment operating at 6312 kbit/s.
G.747 Second order digital multiplex equipment operating at 6312 kbit/s and multiplexing three tributaries at 2048 kbit/s.
G.751 Digital multiplex equipments operating at the third order bit rate of 34368 kbit/s and the fourth order bit rate of 139264 kbit/s and using positive justification.
G.752 Characteristics of digital multiplex equipments based on a second order bit rate of 6312 kbit/s and using positive justification.
G.753 Third order digital multiplex equipment operating at 34368 kbit/s and using positive/zero/negative justification.
G.754 Fourth order digital multiplex equipment operating at 139264 kbit/s and using positive/zero/negative justification.

G.755 Digital multiplex equipment operating at 139264 kbit/s and multiplexing three tributaries at 44736 kbit/s.
G.761 General characteristics of a 60-channel transcoder equipment.
G.762 General characteristics of a 48-channel transcoder equipment.
G.763 Summary of Recommendation G.763.
G.764 Voice packetization - packetized voice protocols.
G.765 Packet circuit multiplication equipment.
G.766 Facsimile demodulation/remodulation for DCME.
G.772 Protected monitoring points provided on digital transmission systems.
G.773 Protocol suites for Q-interfaces for management of transmission systems.
G.774 Synchronous Digital Hierarchy (SDH) management information model for the network element view. G.774.01: Synchronous digital hierarchy (SDH) performance monitoring for the network element view. G.774.02: Synchronous digital hierarchy (SDH) configuration of the payload structure for the network element view. G.774.03: Synchronous digital hierarchy (SDH) management of multiplex-section protection for the network element view.
G.775 Loss of signal (LOS) and alarm indication signal (AIS) defect detection and clearance criteria.
G.780 Vocabulary of terms for synchronous digital hierarchy (SDH) networks and equipment.
G.781 Structure of Recommendations on equipment for the synchronous digital hierarchy (SDH).
G.782 Types and general characteristics of synchronous digital hierarchy (SDH) equipment.
G.783 Characteristics of synchronous digital hierarchy (SDH) equipment functional blocks.
G.784 Synchronous digital hierarchy (SDH) management.
G.791 General considerations on transmultiplexing equipments.
G.792 Characteristics common to all transmultiplexing equipments.
G.793 Characteristics of 60-channel transmultiplexing equipments.
G.794 Characteristics of 24-channel transmultiplexing equipments.
G.795 Characteristics of codecs for FDM assemblies.
G.796 Characteristics of a 64 kbit/s cross-connect equipment with 2048 kbit/s access ports.
G.797 Characteristics of a flexible multiplexer in a plesiochronous digital hierarchy environment.
G.801 Digital transmission models.
G.802 Interworking between networks based on different digital hierarchies and speech encoding laws.
G.803 Architectures of transport networks based on the synchronous digital hierarchy (SDH).
G.804 ATM cell mapping into plesiochronous digital hierarchy (PDH).
G.821 Error performance of an international digital connection forming part of an integrated services digital network.
G.822 Controlled slip rate objectives on an international digital connection.
G.823 The control of jitter and wander within digital networks which are based on the 2048 kbit/s hierarchy.
G.824 The control of jitter and wander within digital networks which are based on the 1544 kbit/s hierarchy.
G.825 The control of jitter and wander within digital networks which are based on the Synchronous Digital Hierarchy (SDH).
G.826 Error performance parameters and objectives for international, constant bit rate digital paths at or above the primary rate.
G.831 Management capabilities of transport networks based on the Synchronous Digital Hierarchy (SDH).
G.832 Transport of SDH elements on PDH networks: Frame and multiplexing structures.
G.901 General considerations on digital sections and digital line systems.
G.911 Parameters and calculation methodologies for reliability and availability of fibre optic systems.
G.921 Digital sections based on the 2048 kbit/s hierarchy.
G.931 Digital line sections at 3152 kbit/s.
G.950 General considerations on digital line systems.
G.951 Digital line systems based on the 1544 kbit/s hierarchy on symmetric pair cables.
G.952 Digital line systems based on the 2048 kbit/s hierarchy on symmetric pair cables.
G.953 Digital line systems based on the 1544 kbit/s hierarchy on coaxial pair cables.
G.954 Digital line systems based on the 2048 kbit/s hierarchy on coaxial pair cables.
G.955 Digital line systems based on the 1544 kbit/s and the 2048 kbit/s hierarchy on optical fibre cables.
G.957 Optical interfaces for equipments and systems relating to the synchronous digital hierarchy.
G.958 Digital line systems based on the synchronous digital hierarchy for use on optical fibre cables.
G.960 Access digital section for ISDN basic rate access.
G.961 Digital transmission system on metallic local lines for ISDN basic rate access.
G.962 Access digital section for ISDN primary rate at 2048 kbit/s.

G.963 Access digital section for ISDN primary rate at 1544 kbit/s.
G.964 V-Interfaces at the digital local exchange (LE) - V5.1-Interface (based on 2048 kbit/s) for the support of access network (AN).
G.965 V-Interfaces at the digital local exchange (LE) - V5.2-Interface (based on 2048 kbit/s) for the support of Access Network (AN).
G.971 General features of optical fibre submarine cable systems.
G.972 Definition of terms relevant to optical fibre submarine cable systems.
G.974 Characteristics of regenerative optical fibre submarine cable systems.
G.981 PDH optical line systems for the local network.

For additional detail consult the appropriate standard document or contact the ITU. See also International Telecommunication Union, ITU-T Recommendations, Standards.

Gabor Spectrogram: An algorithm to transform signals from the time domain to the joint time-frequency domain (similar to the Short Time FFT spectrogram). The Gabor spectrogram is most useful for analyzing signals whose frequency content is time varying, but which does not show up on conventional spectrogram methods. For example, in a particular jet engine the casing vibrates at 50 Hz when running at full speed. If the frequency actually fluctuates by about ±1 Hz around 50 Hz, then when using the conventional FFT the fluctuations may not have enough energy to be detected, or may be smeared due to windowing effects. The Gabor spectrogram on the other hand should be able to highlight the fluctuations.

Gain: An increase in the voltage, or power level, of a signal usually accomplished by an amplifier. Gain is expressed as a factor, or in dB. See also Amplifier.

Gauss Transform: See Matrix Decompositions - Gauss Transform.

Gaussian Distribution: See Random Variable.

Gaussian Elimination: See Matrix Decompositions - Gaussian Elimination.

Gibbs Phenomenon: The Fourier series for a periodic signal with (almost) discontinuities will tend to an infinite series. If the signal is approximated using a finite series of harmonics then the reconstructed signal will tend to oscillate near or on the discontinuities. For example, the Fourier series of a signal, x(t), is given by:


x(t) = \sum_{n=0}^{\infty} A_n \cos\left(\frac{2\pi n t}{T}\right) + \sum_{n=1}^{\infty} B_n \sin\left(\frac{2\pi n t}{T}\right)    (211)

For a signal such as a square wave, the series will be infinite. If however we try to produce the signal using just the first few Fourier series coefficients up to M:
x(t) = \sum_{n=0}^{M} A_n \cos\left(\frac{2\pi n t}{T}\right) + \sum_{n=1}^{M} B_n \sin\left(\frac{2\pi n t}{T}\right)    (212)

then “ringing” will be seen near the discontinuities, since to adequately represent these parts of the waveform we require the high frequency components which have been truncated. This ringing is referred to as Gibbs' phenomenon.

[Figure: time domain plots (amplitude versus time/s) of a square wave x(t) of period Ts and of its truncated Fourier series reconstruction.]
The Fourier series for a square wave is an infinite series of sine waves at frequencies of f_0, 3f_0, 5f_0, … and relative amplitudes of 1, 1/3, 1/5, … If this series is truncated to the 15th harmonic, then the resulting “square wave” rings at the discontinuities.

See also Discrete Fourier Transform, Fourier Series, Fourier Series - Amplitude/Phase Representation, Fourier Series - Complex Exponential Representation, Fourier Transform.

Given's Rotations: See Matrix Decompositions - Given's Rotations.

Global Information Infrastructure (GII): The Global Information Infrastructure will be jointly defined by the International Organization for Standards (ISO), the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU). The ISO, IEC and ITU have all defined various standards that have direct relevance to the interchange of graphics, audio, video and data information via computer and telephone networks, and all therefore have a relevant role to play in the definition of the GII.

Global Minimum: The global minimum of a function is the smallest value taken on by that function. For example, for the function f(x) shown below, the global minimum is at x = x_g. The minima at x_1, x_2 and x_3 are termed local minima:

[Figure: a function f(x) with a global minimum at x = x_g and local minima at x_1, x_2 and x_3.]
The existence of local minima can cause problems when using a gradient descent based adaptive algorithm. In these cases, the algorithm can get stuck in a local minimum. This is not a problem when the cost function is quadratic in the parameter of interest (e.g., the filter coefficients), since quadratic functions (such as a parabola) have a unique minimum (or maximum) or, worst case, a set of continuous minima that all give the same cost. See also Hyperparaboloid, Local Minima, Adaptive IIR Filters, Simulated Annealing.

Glue Logic: To connect different chips on printed circuit boards (PCBs) it is often necessary to use buffers, inverters, latches, logic gates etc. These components are often referred to as glue logic. Many DSP chip designers pride themselves on having eliminated glue logic for chip interfacing, especially between D/A and A/D type chips.

Golden Ears: A term often used to describe a person with excellent hearing, both in terms of frequency range and threshold of hearing. Golden ear individuals can be in demand from recording studios, audio equipment manufacturers, loudspeaker manufacturers and so on. Although a necessary qualification for golden ears is excellent hearing, these individuals most probably learn their trade from many years of audio industry experience. It would be expected that a golden ears individual could “easily” distinguish Compact Disc (CD) from analog records. The big irony is that golden eared individuals cannot distinguish recordings of REO Speedwagon from those of Styx. See also Audiometry, Compact Disc, Frequency Range of Hearing, Threshold of Hearing.

Goertzel's Algorithm: Goertzel's algorithm is used to calculate if a frequency component is present at a particular frequency bin of a discrete Fourier transform (DFT). Consider the DFT equation calculating the discrete frequency domain representation, X(m), of N samples of a discrete time signal x(k):
X(m) = \sum_{n=0}^{N-1} x(n)\, e^{-j\left(\frac{2\pi n m}{N}\right)}, \quad \text{for } m = 0 \text{ to } N-1    (213)

This computation requires N² complex multiply accumulates (CMACs), and the frequency representation will have a resolution of f_s/N Hz. If we only require the frequency component at the p-th frequency bin, only N CMACs are required. Of course the fast Fourier transform (FFT) is usually used instead of the DFT, and this requires N log2 N CMACs. Therefore if a Fourier transform is being performed simply to find if a tonal component is present at one frequency only, it makes more sense to use the DFT. Note that by the nature of the calculation data flow, the FFT cannot calculate a frequency component at one frequency only - it's all bins or none. Goertzel's algorithm provides a formal algorithmic procedure for calculating a single bin DFT. Goertzel's algorithm to calculate the p-th frequency bin of an N point DFT is given by:

s_p(k) = x(k) + 2\cos\left(\frac{2\pi p}{N}\right) s_p(k-1) - s_p(k-2)

y_p(k) = s_p(k) - W_N^p\, s_p(k-1)    (214)

where W_N^p = e^{j\frac{2\pi p}{N}} and the initial conditions s_p(-2) = s_p(-1) = 0 apply.


Eq. 214 calculates the p-th frequency bin of the DFT after the algorithm has processed N data points, i.e. X ( p ) = y p ( N ) . Goertzel’s algorithm can be represented as a second order IIR:

[Figure: an IIR filter representation of Goertzel's algorithm, with recursive coefficients 2cos(2πp/N) and -1 and a non-recursive coefficient -W_N^p = -e^{j2πp/N}.]
An IIR filter representation of Goertzel's algorithm. Note that the non-recursive part of the filter has complex weights, whereas the recursive part has only real weights. The recursive part of this filter is in fact a simple narrowband filter. For an efficient implementation it is best to compute s_p(k) for N samples, and thereafter evaluate y_p(N).

For tone detection (i.e. tone present or not-present), only the signal power of the p-th frequency bin is of interest, i.e. |X(p)|². Therefore from Eq. 214:
|X(p)|^2 = X(p)X^*(p) = y_p(N)\, y_p^*(N)

= s_p^2(N) - 2\cos\left(\frac{2\pi p}{N}\right) s_p(N)\, s_p(N-1) + s_p^2(N-1)    (215)

Goertzel's algorithm is widely used for dual tone multifrequency (DTMF) tone detection because of its simplicity and because it requires less computation than the DFT or FFT. For DTMF tones, there are 8 separate frequencies which must be detected. Therefore a total of 8 frequency bins are required. The International Telecommunication Union (ITU) suggest in standards Q.23 and Q.24 that a 205 point DFT is performed for DTMF detection. To do a full DFT would require 205 × 205 = 42025 complex multiplies and adds (CMACs). To use a zero padded 256 point FFT would require 256 log2 256 = 2048 CMACs. Given that we are only interested in 8 frequency bins (and not 205 or 256), the computation required by Goertzel's algorithm is 8 × 205 = 1640 CMACs. Compared to the FFT, Goertzel's algorithm is simple and requires little memory or assembly language code to program. For DTMF tone detection the frequency bins corresponding to the second harmonic of each tone are also calculated. Hence the total computation of Goertzel's algorithm in this case is 3280 CMACs, which is more than for the FFT. However the simplicity of Goertzel's algorithm means it is still the technique of choice. In order to detect the tones at the DTMF frequencies, and using a 205 point DFT with f_s = 8000 Hz, the frequency bins to evaluate via Goertzel's algorithm are:

frequency, f / Hz    bin
697                  18
770                  20
852                  22
941                  24
1209                 31
1336                 34
1477                 38
1633                 42
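As an illustration of Eqs. 214 and 215, a single-bin detector can be coded in a few lines of C. This is a sketch only (not from the original text); the function and variable names are hypothetical, and a common practical form of the recursion and power calculation is used:

#include <math.h>

/* Goertzel single-bin power: returns an |X(p)|^2 value for the bin nearest
   to 'freq' Hz, given N samples x[] taken at a sampling rate of fs Hz.    */
double goertzel_power(const double *x, int N, double fs, double freq)
{
    int p = (int)(0.5 + (N * freq) / fs);          /* nearest frequency bin */
    double coeff = 2.0 * cos(2.0 * M_PI * p / N);
    double s1 = 0.0, s2 = 0.0;                     /* s_p(k-1), s_p(k-2)    */

    for (int k = 0; k < N; k++) {
        double s0 = x[k] + coeff * s1 - s2;        /* Eq. 214 recursion     */
        s2 = s1;
        s1 = s0;
    }
    /* Eq. 215: power from the final two recursion states */
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

For example, goertzel_power(x, 205, 8000.0, 697.0) evaluates bin 18 of the table above; calling the function once per DTMF frequency gives the 8 (or 16, with second harmonics) bin powers required.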

Note that if the sampling frequency is not 8000 Hz, or a different number of data points are used, then the bin numbers will be different from above. See also Discrete Fourier Transform, Dual Tone Multifrequency, Fast Fourier Transform.

Gram-Schmidt: See Matrix Decompositions - Gram-Schmidt.

Granular Synthesis: A technique for musical instrument sound synthesis [13], [14], [32]. See also Music, Western Music Scale.

Granularity Effects: If the step size is too large in a delta modulator, then the delta modulated signal will give rise to a large error and completely fail to encode signals with a magnitude less than the step size. See also Delta Modulation, Slope Overload.
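To illustrate the granularity effect described in the entry above, the following is a minimal 1-bit delta modulator sketch (hypothetical function and variable names, not from the original text). With a step size delta that is large compared to the input, the reconstruction simply hunts about the signal; a delta that is too small instead produces slope overload:

/* Minimal delta modulator: the reconstructed estimate est[] tracks x[] in
   fixed steps of +/- delta. A delta that is too large produces granular
   error; one that is too small produces slope overload.                   */
void delta_modulate(const double *x, int *bits, double *est, int N, double delta)
{
    double e = 0.0;                        /* current estimate             */
    for (int k = 0; k < N; k++) {
        bits[k] = (x[k] >= e) ? 1 : 0;     /* 1-bit quantised difference   */
        e += bits[k] ? delta : -delta;     /* staircase approximation      */
        est[k] = e;
    }
}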

Graphic Interchange Format (GIF): The GIF format has become a de facto industry standard for the interchange of raster graphic data. GIF was first developed by Compuserve Inc, USA. GIF essentially defines a protocol for on-line transmission and interchange of raster graphic data such that it is completely independent of the hardware used to create or display the image. GIF has a limited, non-exclusive, royalty-free license and has widespread use on the Internet and in many DSP enabled multimedia systems. See also Global Information Infrastructure, Joint Photographic Experts Group, Standards.

Graphical Compiler: A system that allows you to draw your algorithm and application architecture on a computer screen using a library of icons (FIR filters, FFTs etc.), which will then be compiled into executable code, usually 'C', which can then be cross compiled to an appropriate assembly language for implementation on a DSP processor. See also Cross Compiler.

Graphical Equalizer: A device used in music systems to control the frequency content of the output. A graphic equalizer is effectively a set of bandpass filters with independent gain settings that can be implemented in the analog or digital domains.

Group Delay: See Finite Impulse Response Filter.


Group Delay Equalisation: A technique to equalise the phase response of a system to be linear (i.e. constant group delay) by cascading the output of the system with an all pass filter designed to have suitable phase shifting characteristics. The magnitude frequency response of the system cascaded with the all pass filter is the same as that of the system on its own.
[Figure: gain (dB) and phase responses versus frequency of G(e^jω) alone and of the cascade G(e^jω)H_A(e^jω), with the signal flow Input → G(z) → all-pass filter H_A(z) → Output.]
Group delay equalisation by cascading an all-pass filter H_A(z) with a non-linear phase filter G(z) in order to linearise the phase response and therefore produce a constant group delay. The magnitude frequency response of the cascaded system, G(e^jω)H_A(e^jω), is the same as that of the original system, G(e^jω).

The design of group delay equalisers is not a trivial procedure. See also All-pass Filter, Equalisation, Finite Impulse Response Filter - Linear Phase.

Group Speciale Mobile (GSM): The European mobile communication system that implements 13.5 kbps speech coding (with half-rate 6.5 kbps channels optional) and uses Gaussian Minimum Shift Keying (GMSK) modulation [85]. Data transmission is also available at rates slightly below the speech rates. See also Minimum Shift Keying.


H
H261: See H-Series Recommendations - H261.

H320: See H-Series Recommendations - H320.

H-Series Recommendations: The H-series recommendations from the International Telecommunication Union (ITU) advisory committee on telecommunications (denoted ITU-T, and formerly known as CCITT) propose a number of standards for the line transmission of non-telephone signals. Some of the current ITU-T H-series recommendations (http://www.itu.ch) can be summarised as:
H.100 Visual telephone systems.
H.110 Hypothetical reference connections for videoconferencing using primary digital group transmission.
H.120 Codecs for videoconferencing using primary digital group transmission.
H.130 Frame structures for use in the international interconnection of digital codecs for videoconferencing or visual telephony.
H.140 A multipoint international videoconference system.
H.200 Framework for Recommendations for audiovisual services.
H.221 Frame structure for a 64 to 1920 kbit/s channel in audiovisual teleservices.
H.224 A real time control protocol for simplex application using the H.221 LSD/HSD/MLP channels.
H.230 Frame-synchronous control and indication signals for audiovisual systems.
H.231 Multipoint control units for audiovisual systems using digital channels up to 2 Mbit/s.
H.233 Confidentiality system for audiovisual services.
H.234 Encryption key management and authentication system for audiovisual services.
H.242 System for establishing communication between audiovisual terminals using digital channels up to 2 Mbit/s.
H.243 Procedures for establishing communication between three or more audiovisual terminals using digital channels up to 2 Mbit/s.
H.261 Video codec for audiovisual services at p x 64 kbit/s.
H.281 A far end camera control protocol for videoconferences using H.224.
H.320 Narrow-band visual telephone systems and terminal equipment.
H.331 Broadcasting type audiovisual multipoint systems and terminal equipment.

From the point of view of DSP and multimedia systems and algorithms, the above title descriptions of H.242, H.261 and H.320 can be expanded upon as per http://www.itu.ch:
• H.242: The H.242 recommendation defines audiovisual communication using digital channels up to 2 Mbit/s. This recommendation should be read in conjunction with ITU-T recommendations G.725, H.221 and H.230. H.242 is suitable for applications that can use narrow (3 kHz) and wideband (7 kHz) speech together with video, such as video-telephony, audio and videoconferencing and so on. H.242 can produce speech, and optionally video and/or data, at several rates, in a number of different modes. Some applications will require only a single channel, whereas others may require two or more channels to provide the higher bandwidth.

• H.261: The H.261 recommendation describes video coding and decoding methods for the moving picture component of audiovisual services at the rate of p x 64 kbit/s, where p is an integer in the range 1 to 30, i.e. 64 kbit/s to 1.92 Mbit/s. H.261 is suitable for transmission of video over ISDN lines, for applications such as videophones and videoconferencing. The videophone application can tolerate a low image quality and can be achieved for p = 1 or 2. For videoconferencing applications, where the transmitted image is likely to include a few people and last for a long period, higher picture quality is required and p > 6 is required. H.261 defines two picture formats: CIF (Common Intermediate Format) has 288 lines by 360 pixels/line of luminance information and 144 x 180 of chrominance information; and QCIF (Quarter Common Intermediate Format) which is 144 lines by 180 pixels/line of luminance and 72 x 90 of chrominance. The choice of CIF or QCIF depends on available channel capacity and desired quality.

The H.261 encoding algorithm is similar in structure to that of MPEG, although the two are not compatible. It is also worth noting that H.261 requires considerably less CPU power for encoding than MPEG. The algorithm also makes use of the available bandwidth by trading picture quality against motion, so a fast moving image will have a lower quality than a static image. H.261 used in this way is thus a constant-bit-rate encoding rather than a constant-quality, variable-bit-rate encoding.

• H.320: H.320 specifies narrow-band visual telephone services for use in channels where the data rate cannot exceed 1920 kbit/s.

For additional detail consult the appropriate standard document or contact the ITU. See also International Telecommunication Union, ITU-T Recommendations, Standards.

Haas Effect: In a reverberant environment the sound energy received by the direct path can be much lower than the energy received by indirect reflective paths. However the human ear is still able to localize the sound correctly by localizing the first components of the signal to arrive. Later echoes arriving at the ear increase the perceived loudness of the sound as they will have the same general spectrum. This psychoacoustic effect is commonly known as the precedence effect, the law of the first wavefront, or sometimes the Haas effect [30]. The Haas effect applies mainly to short duration sounds or those of a discontinuous or varying form. See also Ear, Lateralization, Source Localization, Threshold of Hearing.

Habituation: Habituation is the effect of the auditory mechanism not perceiving a repetitive noise (which is above the threshold of hearing), such as the ticking of a nearby clock or passing of nearby traffic, until attention is directed towards the sound. See also Adaptation, Psychoacoustics, Threshold of Hearing.

Hamming Distance: Often used in channel coding applications, Hamming distance refers to the number of bit locations in which two binary codewords differ. For example the binary words 10100011 and 10001011 differ in two positions (the third and the fifth from the left), so the Hamming distance between these words is 2. A small code sketch illustrating this bit count is given after the entries below. See also Euclidean Distance, Channel Coding, Viterbi Algorithm.

Hamming Window: See Windows.

Half Duplex: Pertaining to the capability to send and receive data on the same line, but not simultaneously. See also Full Duplex, Simplex.

Hand Coding: When writing programs for DSP processors, 'C' cross compilers are often available. Although algorithm development with cross compilers is faster than when using assembly language, the machine code produced is usually less efficient and compact than would be achieved by writing in assembler. Cleaning up this less efficient assembly code is sometimes referred to as hand-coding. Coding directly in machine code is also referred to as hand-coding. See also Assembly Language, Cross-Compiler, Machine Code.

Handshaking: A communication technique whereby one system acknowledges receipt of data from another system by sending a handshaking signal.
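Referring back to the Hamming Distance entry above, a minimal sketch (hypothetical function name, not from the original text) counts the set bits of the exclusive-OR of the two words:

/* Hamming distance: number of bit positions in which two words differ,
   found by counting the set bits of their exclusive-OR.                 */
unsigned hamming_distance(unsigned a, unsigned b)
{
    unsigned x = a ^ b, count = 0;
    while (x) {
        count += x & 1u;
        x >>= 1;
    }
    return count;
}
/* Example: hamming_distance(0xA3, 0x8B) compares 10100011 and 10001011
   and returns 2, as in the entry above.                                  */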

Harmonic: Given a signal with fundamental frequency of M Hz, harmonics of this signal are at integer multiples of M, i.e. at 2M, 3M, 4M, and so on. See also Fundamental Frequency, Music, Sub-harmonic, Total Harmonic Distortion.

[Figure: magnitude versus frequency (Hz) showing the fundamental frequency at M Hz and harmonics at 2M, 3M and 4M Hz.]
The frequency domain representation of a tone at M Hz with associated harmonics.

harris Window: See Windows.

Hartley Transform: The Hartley transform is “similar” in computational structure (although different in properties) to the Fourier transform. One key difference is that the Hartley transform uses real numbers rather than complex numbers. A good overview of the mathematics and application of the Hartley transform can be found in [121].

Harvard Architecture: A type of microprocessor (and microcomputer) architecture where the memory used to store the program and the memory used to store the data are separate, therefore allowing both program and data to be accessed simultaneously. Some DSPs are described as being a modified Harvard architecture where both program and data memories are separate, but with cross-over links. See also DSP Processor.

Head Shadow: Due to the shape of the human head, incident sounds can be diffracted before reaching the ears. Hence the actual waveform arriving at the ears is different from what would have been received by an ear without the head present. Head shadow is an important consideration in the design of virtual sound systems and in the design of some types of advanced DSP hearing aids. See also Diffraction.

Hearing: The mechanism and process by which mammals perceive changes in acoustic pressure waves, or sound. See also Audiology, Audiometry, Ear, Psychoacoustics, Threshold of Hearing.

Hearing Aids: A hearing aid can be described as any device which aids the wearer by improving the audibility of speech and other sounds. The simplest form of hearing aid is an acoustic amplification device (such as an ear trumpet), and the most complex is probably a cochlear implant system (surgically inserted) which electrically stimulates nerves using acoustically derived signals received from a body worn radio transmitter and microphone. More commonly, hearing aids are recognizable as analogue electronic amplification devices consisting of a microphone and amplifier connected to an acoustic transducer usually just inside the ear. However a hearing aid which simply makes sounds louder is not all that is necessary to allow hearing impaired individuals to hear better. In everyday life we are exposed to a very wide range of sounds coming from all directions with varying intensities, and various degrees of reverberation. Clearly hearing aids are required to be very versatile instruments that are carefully designed around known parameters and functions of the ear, providing compensation techniques that are suitable for the particular type of hearing loss, in particular acoustic environments.


Simple analogue electronic hearing aids can typically provide functions of volume and tone control. More advanced devices may incorporate multi-band control (i.e., simple frequency shaping) and automatic gain control amplifiers to adjust the amplification when loud noises are present. Hearing aids offering multi-band compression with a plethora of digitally adjustable parameters such as attack and release times, etc., have become more popular. Acoustic feedback reduction techniques have also been employed to allow more amplification to be provided before the microphone/transducer loop goes unstable due to feedback (this instability is often detected as an unsatisfied hearing aid wearer with a screeching howl in their ear). Acoustic noise reduction aids that exploit the processing power of advanced DSP processors have also been designed. Digital audio signal processing based hearing aids may have advantages over traditional analogue audio hearing aids. They provide a greater accuracy and flexibility in the choice of electroacoustic parameters and can be easily interfaced to a computer based audiometer. More importantly they can use powerful adaptive signal processing techniques for enhancing speech intelligibility and reducing the effects of background noise and reverberation. Currently however, power and physical size constraints are limiting the availability of DSP hearing aids. See also Audiology, Audiometry, Beamforming, Ear, Head Shadow, Hearing Impairment, Threshold of Hearing.

Hearing Impairment: A reduction in the ability to perceive sound, as compared to the average capability of a cross section of unimpaired young persons. Hearing impairment can be caused by exposure to high sound pressure levels (SPL), can be drug or virus induced, or can occur simply as a result of having lived a long time. A hearing loss can be simply quantified by an audiogram and qualified with more exact audiological language such as sensorineural loss or conductive loss, etc. [4], [30]. See also Audiology, Audiometry, Conductive Hearing Loss, Ear, Hearing, Loudness Recruitment, Sensorineural Hearing Loss, Sound Pressure Level, Threshold of Hearing.

Hearing Level (HL): When the hearing of a person is to be tested, the simplest method is to play pure tones through headphones (using a calibrated audiometer) over a range of frequencies, and determine the minimum sound pressure level (SPL) at which the person can hear the tone. The results could then be plotted as minimum perceived SPL versus frequency. To ascertain if the person has a hearing impairment the plot can be compared with the average minimum level of SPL for a cross section of healthy young people with no known hearing impairments. However if the minimum level of SPL (the threshold of hearing) is plotted as SPL versus frequency, the curve obtained is not a straight line and comparison can be awkward. Therefore for Hearing Level (dB) plots (or audiograms), the deviation from the average threshold of hearing of young people is plotted, with hearing loss indicated by a positive measurement that is plotted lower on the audiogram. The threshold of hearing is therefore the 0 dB line on the Hearing Level (dB) scale. The equivalent dB (HL) and dB (SPL) for some key audiometric frequencies in the UK are [157]:
Frequency (Hz)   250    500    1000   2000   4000   8000
dB (HL)          0      0      0      0      0      0
dB (SPL)         26     15.6   8.2    5.2    7      20

See also Audiogram, Audiometry, Equal Loudness Contours, Frequency Range of Hearing, Hearing Impairment, Loudness Recruitment, Sensation Level, Sound Pressure Level, Threshold of Hearing.

Hearing Loss: See Hearing Impairment.

Hermitian: See Matrix Properties - Hermitian Transpose.

Hermitian Transpose: See Matrix Properties - Hermitian Transpose.

Hertz (Hz): The unit of frequency measurement named after Heinrich Hertz. 1 Hz is 1 cycle per second.

Hexadecimal, Hex: Base 16. Conversion from binary to hex is very straightforward and therefore hex digits have become the standard way of representing binary quantities to programmers. A 16 bit binary number can be easily represented in 4 hex digits by grouping four bits together starting from the binary point and converting to the corresponding hex digit. The hex digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. Hexadecimal entries in DSP assembly language programs are prefixed either by $ or by 0x to differentiate them from decimal entries. An example (with base indicated as subscript): 0010 1010 0011 1111_2 = 2A3F_16 = (2 x 16^3) + (10 x 16^2) + (3 x 16^1) + 15 = 10815_10.

High Pass Filter: A filter which passes only the portions of a signal that have frequencies above a specified cut-off frequency. Frequencies below the cut-off frequency are highly attenuated. See also Digital Filter, Low Pass Filter, Bandpass Filter, Filters.
[Figure: a high pass filter G(f) with input and output signals, and its magnitude response |G(f)| versus frequency showing the cut-off frequency and passband bandwidth.]

Higher Order Statistics: Most stochastic DSP techniques, such as the power spectrum, the least mean squares algorithm and so on, are based on first and second order statistical measures such as mean, variance and autocorrelation. The higher order moments, such as the 3rd order moment (note that the first order moment is the mean, and the second order central moment is the variance), are usually not considered. However there is information to be gathered from a consideration of these higher order statistics. One example is detecting the baud rate of PSK signals. Recently there has been considerable interest in higher order statistics within the DSP community. For information refer to the tutorial article [117]. See also Mean, Variance.

Hilbert Transform: Simply described, a Hilbert transform introduces a phase shift of 90 degrees at all frequencies for a given signal. A Hilbert transform can be implemented by an all-pass phase shift network. Mathematically, the Hilbert transform of a signal x(t) can be computed by linear filtering (i.e., convolution) with a special function:

x_h(t) \equiv x(t) \otimes \frac{1}{\pi t}    (216)

It may be more helpful to think about the Hilbert transform as a filtered version of a signal rather than a “transform” of a signal. The Hilbert transform is useful in constructing single sideband signals (thus conserving bandwidth in communications examples). The transform is also useful in signal analysis by allowing real bandpass signals (such as a radio signal) to be analyzed and simulated as an equivalent complex baseband (or lowpass) process. Virtually all system simulation packages exploit this equivalent representation to allow for timely completion of system simulations. Not obvious from the definition above is the fact that the Hilbert transform of the Hilbert transform of x(t) is -x(t). This may be expected from the heuristic description of the Hilbert transform as a 90 degree phase shift -- i.e., two 90 degree phase shifts are a 180 degree phase shift, which means multiplying by a minus one.

Host: Most DSP boards can be hosted by a general purpose computer, such as an IBM compatible PC. The host allows a DSP designer to develop code using the PC, and then download the DSP program to the DSP board. The DSP board therefore has a host interface. The host usually supplies power (analog, 12V and digital, 5V) to the board. See also DSP Board.

Householder Transformation: See Matrix Decompositions - Householder Transformation.

Huffman Coding: This type of coding exploits the fact that discrete amplitudes of a quantized signal may not occur with equal probability. Variable length codewords can therefore be assigned to a particular data sequence according to their frequency of occurrence. Data that occur frequently are assigned shorter code words, hence data compression is possible.

Hydrophone: An underwater transducer of acoustic energy for sonar applications.

Hyperchief: A Macintosh program developed by a DSP graduate student from 1986 - 1991, somewhere on the west coast of the USA, to simulate the wisdom of a Ph.D. supervisor. However, while accurately simulating the wisdom of a Ph.D. supervisor, Hyperchief precisely illustrated the pitfalls of easy access to powerful computers. Hyperchief is sometimes spelled as Hypercheif (pronounced Hi-per-chife).

Hyperparaboloid: Consider the equation:

e = x^T R x + 2p^T x + s    (217)

where x is an n × 1 vector, R is a positive definite n × n matrix, p is an n × 1 vector, and s is a scalar. The equation is quadratic in x. If n = 1, then e will form a simple parabola, and if n = 2, e can be represented as a (solid) paraboloid:

[Figure: e plotted against x for n = 1 (a parabola), and against x = [x1 x2]^T for n = 2 (a paraboloid).]

The positive definiteness of R ensures that the parabola is up-facing. Note that in both cases e has exactly one minimum point (a global minimum) at the bottom of the parabolic shape. For systems with n ≥ 3, e cannot be shown diagrammatically as four or more dimensions are required! Hence we are asked to imagine the existence of a hyperparaboloid for n ≥ 3, which will also have exactly one minimum point for e. The existence of the hyperparaboloid is much referred to in least squares and least mean squares algorithm derivations. See also Global Minimum, Local Minima.

Hypersignal: An IBM PC based program for DSP written by Hyperception Inc. Hypersignal provides facilities for real time data acquisition in conjunction with various DSP processors, and a menu driven system to perform off-line processing of real-time FFTs, digital filtering, signal acquisition, signal generation, power spectra and so on. DOS and Windows versions are available.

HyTime: HyTime (Hypermedia/Time-Based Structuring Language) is a standardised infrastructure for the representation of integrated, open hypermedia documents, produced by the International Organization for Standards (ISO), Joint Technical Committee, Sub Committee (SC) 18, Working Group (WG) 8 (ISO JTC1/SC18/WG8). See also Bento, Multimedia and Hypermedia Information Coding Experts Group, Standards.


I

i: “i” (along with “k” and “n”) is often used as a discrete time index in DSP notation. See Discrete Time.

I: Often used to denote the identity matrix. See Matrix.

I-Series Recommendations: The I-series telecommunication recommendations from the International Telecommunication Union (ITU) advisory committee on telecommunications (denoted ITU-T and formerly known as CCITT) provide standards for Integrated Services Digital Networks. Some of the current recommendations (http://www.itu.ch) include:
I.112 Vocabulary of terms for ISDNs.
I.113 Vocabulary of terms for broadband aspects of ISDN.
I.114 Vocabulary of terms for universal personal telecommunication.
I.120 Integrated services digital networks (ISDNs).
I.121 Broadband aspects of ISDN.
I.122 Framework for frame mode bearer services.
I.140 Attribute technique for the characterization of telecommunication services supported by an ISDN and network capabilities of an ISDN.
I.141 ISDN network charging capabilities attributes.
I.150 B-ISDN asynchronous transfer mode functional characteristics.
I.200 Guidance to the I.200-series of Recommendations.
I.210 Principles of telecommunication services supported by an ISDN and the means to describe them.
I.211 B-ISDN service aspects.
I.220 Common dynamic description of basic telecommunication services.
I.221 Common specific characteristics of services.
I.230 Definition of bearer service categories.
I.231 Circuit-mode bearer service categories.
I.231.9 Circuit mode 64 kbit/s 8 kHz structured multi-use bearer service category.
I.231.10 Circuit-mode multiple-rate unrestricted 8 kHz structured bearer service category.
I.232 Packet-mode bearer services categories.
I.232.3 User signalling bearer service category (USBS).
I.233 Frame mode bearer services.
I.233.1-2 ISDN frame relaying bearer service/ ISDN frame switching bearer service.
I.241.7 Telephony 7 kHz teleservice.
I.250 Definition of supplementary services.
I.251.1-9 Direct-dialling-in/ Multiple subscriber number/ Calling line identification presentation/ Calling line identification restriction/ Connected Line Identification Presentation (COLP)/ Connected Line Identification Restriction (COLR)/ Malicious call identification/ Sub-addressing supplementary service.
I.252.2-5 Call forwarding busy/ Call forwarding no reply/ Call forwarding unconditional/ Call deflection.
I.253.1-2 Call waiting (CW) supplementary service/ Call hold.
I.254.2 Three-party supplementary service.
I.255.1 Closed user group.
I.255.3-5 Multi-level precedence and preemption service (MLPP)/ Priority service/ Outgoing call barring.
I.256 Advice of charge.
I.257.1 User-to-user signalling.
I.258.2 In-call modification (IM).
I.310 ISDN Network functional principles.
I.311 B-ISDN general network aspects. (See also Q.1201.)
I.312 Principles of intelligent network architecture.
I.320 ISDN protocol reference model.
I.321 B-ISDN protocol reference model and its application.
I.324 ISDN network architecture.
I.325 Reference configurations for ISDN connection types.
I.327 B-ISDN functional architecture.
I.328 Intelligent Network - Service plane architecture.
I.329 Intelligent Network - Global functional plane architecture.
I.330 ISDN numbering and addressing principles.
I.331 Numbering plan for the ISDN era.
I.333 Terminal selection in ISDN.
I.334 Principles relating ISDN numbers/subaddresses to the OSI reference model network layer addresses.
I.350 General aspects of quality of service and network performance in digital networks, including ISDNs.
I.351 Relationships among ISDN performance recommendations.
I.352 Network performance objectives for connection processing delays in an ISDN.
I.353 Reference events for defining ISDN performance parameters.
I.354 Network performance objectives for packet mode communication in an ISDN.
I.355 ISDN 64 kbit/s connection type availability performance.
I.356 B-ISDN ATM layer cell transfer performance.
I.361 B-ISDN ATM layer specification.
I.362 B-ISDN ATM Adaptation Layer (AAL) functional description.
I.363 B-ISDN ATM adaptation layer (AAL) specification.
I.364 Support of broadband connectionless data service on B-ISDN.
I.365.1 Frame relaying service specific convergence sublayer (FR-SSCS).
I.370 Congestion management for the ISDN frame relaying bearer service.
I.371 Traffic control and congestion control in B-ISDN.
I.372 Frame relaying bearer service network-to-network interface requirements.
I.373 Network capabilities to support Universal Personal Telecommunication (UPT).
I.374 Framework Recommendation on “Network capabilities to support multimedia services”.
I.376 ISDN network capabilities for the support of the teleaction service.
I.410 General aspects and principles relating to Recommendations on ISDN user-network interfaces.
I.411 ISDN user-network interfaces - reference configurations.
I.412 ISDN user-network interfaces - Interface structures and access capabilities.
I.413 B-ISDN user-network interface.
I.414 Overview of Recommendations on layer 1 for ISDN and B-ISDN customer accesses.
I.420 Basic user-network interface.
I.421 Primary rate user-network interface.
I.430 Basic user-network interface - Layer 1 specification.
I.431 Primary rate user-network interface - Layer 1 specification.
I.432 B-ISDN user-network interface - Physical layer specification.
I.460 Multiplexing, rate adaption and support of existing interfaces.
I.464 Multiplexing, rate adaption and support of existing interfaces for restricted 64 kbit/s transfer capability.
I.470 Relationship of terminal functions to ISDN.
I.500 General structure of the ISDN interworking Recommendations.
I.501 Service interworking.
I.510 Definitions and general principles for ISDN interworking.
I.511 ISDN-to-ISDN layer 1 internetwork interface.
I.515 Parameter exchange for ISDN interworking.
I.520 General arrangements for network interworking between ISDNs.
I.525 Interworking between ISDN and networks which operate at bit rates of less than 64 kbit/s.
I.530 Network interworking between an ISDN and a public switched telephone network (PSTN).
I.555 Frame relaying bearer service interworking.
I.570 Public/private ISDN interworking.
I.580 General arrangements for interworking between B-ISDN and 64 kbit/s based ISDN.
I.601 General maintenance principles of ISDN subscriber access and subscriber installation.
I.610 B-ISDN operation and maintenance principles and functions.

For additional detail consult the appropriate standard document or contact the ITU. See also International Telecommunication Union, ITU-T Recommendations, Standards.

Ideal Filter: The ideal filter for a DSP application is one which will give absolute discrimination between passband and stopband. The impulse response of an ideal filter is always non-causal, and therefore impossible to build. See also Brick Wall Filter, Digital Filter.
A brick wall filter cutting off at 4000Hz is the ideal anti-alias filter for a DSP application with fs = 8000Hz. All frequencies below 4000Hz are passed perfectly with no amplitude or phase distortion, and all frequencies above 4000Hz are removed. In practice the ideal filter cannot be achieved as it would be non-causal. In an FIR implementation, the more weights that are used, the closer the frequency response will be to the ideal.

Identity Matrix: See Matrix Structured - Identity.

IEEE 488 GPIB: Many DSP laboratory instruments such as data loggers and digital oscilloscopes are equipped with a GPIB (General Purpose Interface Bus). Note that this bus is also referred to as HPIB by Hewlett-Packard, developers of the original bus on which the standard is based. Different devices can then communicate through cables of maximum length 20 metres using an 8-bit parallel protocol with a maximum data transfer of 2 Mbytes/sec.

IEEE Standard 754: The IEEE Standard for binary floating point arithmetic specifies basic and extended floating-point number formats, and add, subtract, multiply, divide, remainder and square root operations. It also provides magnitude compare operations, conversion from/to integer and floating-point formats, and conversions between different floating-point formats and decimal strings. Finally the standard also specifies floating-point exceptions and their handling, including non-numbers caused by divide by zero. The Motorola DSP96000 is an IEEE 754 compliant floating point processor. Devices such as the Texas Instruments TMS320C30 use their own similar (but different!) floating point format. The IEEE Standard 754 has also been adopted by ANSI and is therefore often referred to as ANSI/IEEE Standard 754. See also Standards.

IEEE Standards: The IEEE publish standards in virtually every conceivable area of electronic and electrical engineering. These standards are available from the IEEE, and the titles, classifications and a brief synopsis can be browsed at http://stdsbbs.ieee.org. See also Standards.

Ill-Conditioned: See Matrix Properties - Ill-Conditioned.

Image Interchange Facility (IIF): The IIF has been produced by the International Organization for Standards (ISO), Joint Technical Committee (JTC) 1, sub-committee (SC) 24 (ISO/IEC JTC1/SC24), which is responsible for standards on “Computer graphics and image processing”. The IIF standard is ISO 12087-3 and is the definition of a data format for exchanging image data of an arbitrary structure. The IIF format is designed to allow easy integration into international telecommunication services. See also International Organisation for Standards, JBIG, JPEG, Standards.

Imaginary Number: The imaginary number, denoted by j by electrical engineers (and by i in most other branches of science and mathematics), is the square root of -1. Using imaginary numbers allows the square root of any negative number to be expressed. For example, √(-25) = 5j. See also Complex Numbers, Fourier Analysis, Euler's Formula.

Impulse: An impulse is a signal with very large magnitude which lasts only for a very short time. A mechanical impulse could be applied by striking an object with a hammer; a very large force for a very short time. A voltage impulse would be a very large voltage signal which only lasts for a few milli- or even microseconds. A digital impulse has a magnitude of 1 for one sample, then zero at all other times, and is sometimes called the unit impulse or unit pulse. The mathematical notation for an impulse is usually δ(t) for an analog signal, and δ(n) for a digital impulse. For more details see Unit Impulse Function. See also Convolution, Elementary Signals, Fourier Transform Properties, Impulse Response, Sampling Property, Unit Impulse Function, Unit Step Function.

Impulse Response: When any system is excited by an impulse, the resulting output can be described as the impulse response (or the response of the system to an impulse). For example, striking a bell with a hammer gives rise to the familiar ringing sound of the bell which gradually decays away. This ringing can be thought of as the bell's impulse response, which is characterized by a slowly decaying signal at a fundamental frequency plus harmonics. The bell's physical structure supports certain modes of vibrations and suppresses others. The impulsive input has energy at all frequencies -- the frequencies associated with the supported modes of vibration are sustained while all other frequencies are suppressed. These sustained vibrations give rise to the bell's ringing sound that we hear (after the extremely brief “chink” of the impulsive hammer blow). We can also realize the digital impulse response of a system by applying a unit impulse and observing the output samples that result. From the impulse response of any linear system we can calculate the output signal for any given input signal simply by calculating the convolution of the impulse response with the input signal. Taking the Fourier transform of the impulse response of a system gives the frequency response. See also Convolution, Elementary Signals, Fourier Transform Properties, Impulse, Sampling Property, Unit Impulse Function, Unit Step Function.

Incoherent: See Coherent.

Infinite Impulse Response (IIR) Filter: A digital filter which employs feedback to allow sharper frequency responses to be obtained for fewer filter coefficients. Unlike FIR filters, IIR filters can exhibit instability and must therefore be very carefully designed [10], [42]. The term infinite refers to the fact that the output from a unit pulse input will exhibit nonzero outputs for an arbitrarily long time.

If the digital filter is IIR, then two weight vectors can be defined: one for the feedforward weights and one for the feedback weights:

[Figure: signal flow graph of the filter, with the feedforward (non-recursive, zeroes) section weights a0, a1, a2 acting on x(k), x(k-1), x(k-2), and the feedback (recursive, poles) section weights b1, b2, b3 acting on y(k-1), y(k-2), y(k-3).]

y_k = \sum_{n=0}^{2} a_n x_{k-n} + \sum_{n=1}^{3} b_n y_{k-n} = a_0 x_k + a_1 x_{k-1} + a_2 x_{k-2} + b_1 y_{k-1} + b_2 y_{k-2} + b_3 y_{k-3}

\Rightarrow y_k = \mathbf{a}^T \mathbf{x}_k + \mathbf{b}^T \mathbf{y}_{k-1} = [\,a_0\ a_1\ a_2\,]\begin{bmatrix} x_k \\ x_{k-1} \\ x_{k-2} \end{bmatrix} + [\,b_1\ b_2\ b_3\,]\begin{bmatrix} y_{k-1} \\ y_{k-2} \\ y_{k-3} \end{bmatrix}

A signal flow graph and equation for a 3 zero, 4 pole infinite impulse response filter.
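The difference equation above maps directly to code. The following is an illustrative sketch only (not from the original text), with hypothetical names for the coefficient arrays and history buffers:

/* One output sample of the IIR filter of the figure:
   y(k) = a0*x(k) + a1*x(k-1) + a2*x(k-2) + b1*y(k-1) + b2*y(k-2) + b3*y(k-3).
   xh[] holds x(k-1), x(k-2); yh[] holds y(k-1), y(k-2), y(k-3); b[0] is unused
   so that the indices match the entry's notation.                            */
double iir_step(double xk, const double a[3], const double b[4],
                double xh[2], double yh[3])
{
    double yk = a[0] * xk + a[1] * xh[0] + a[2] * xh[1]
              + b[1] * yh[0] + b[2] * yh[1] + b[3] * yh[2];

    xh[1] = xh[0];  xh[0] = xk;                    /* shift the input history  */
    yh[2] = yh[1];  yh[1] = yh[0];  yh[0] = yk;    /* shift the output history */
    return yk;
}

Calling iir_step() once per input sample, with the history buffers initialised to zero, produces the filter output sequence.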

See also Digital Filter, Finite Impulse Response Filter, Least Mean Squares IIR Algorithms.

Infinite Impulse Response (IIR) LMS: See Least Mean Squares IIR Algorithms.

Infinity (∞) Norm: See Matrix Properties - ∞ Norm.

Information Theory: The name given to the general study of the coding of information. In 1948 Claude E. Shannon presented a mathematical theory describing, among other things, the average amount of information, or the entropy, of an information source. For example, a given alphabet is composed of N symbols (s_1, s_2, s_3, ..., s_N). Symbols from a source that generates random elements from this alphabet are encoded and transmitted via a communication line. The symbols are decoded at the other end. Shannon described a useful relationship between information and the probability distribution of the source symbols: if the probability of receiving a particular symbol is very high then it does not convey a great deal of information, and if low, then it does convey a high degree of information. In addition, his measure was logarithmically based. According to Shannon's measure, the self information conveyed by a single symbol that occurs with probability P_i is:

I(s_i) = \log_2\left(\frac{1}{P_i}\right)    (218)

The average amount of information, or first order entropy, of a source can then be expressed as:
H_r(s) = \sum_{i=1}^{N} P_i \log_2\left(\frac{1}{P_i}\right)    (219)
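As a small worked sketch of Eq. 219 (not from the original text; the function name is hypothetical):

#include <math.h>

/* First order entropy, Eq. 219: H = sum_i P[i] * log2(1/P[i]) bits/symbol.
   Zero-probability symbols contribute nothing to the sum.                   */
double first_order_entropy(const double *P, int N)
{
    double H = 0.0;
    for (int i = 0; i < N; i++)
        if (P[i] > 0.0)
            H += P[i] * log2(1.0 / P[i]);
    return H;
}
/* Example: four equiprobable symbols (P[i] = 0.25 each) give H = 2 bits/symbol. */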

Infrasonic: Of, or relating to infrasound. See Infrasound.


Infrasound: Acoustic signals (speed in air, 330 ms⁻¹) having frequencies below 20 Hz, the low frequency limit of human hearing, are known as infrasound. Although sounds as low as 3 Hz have been shown to be aurally detectable, there is no perceptible reduction in pitch and the sounds will also be tactile. Infrasound is a topic close to the heart of a number of professional recording engineers who believe that it is vitally important to the overall sound of music. In general CDs and DATs can record down to around 5 Hz. Exposure to very high levels of infrasound can be extremely dangerous, and certain frequencies can cause organs and other body parts to resonate:
Area of Body         Approximate Resonance Range (Hz)
Motion sickness      0.3-0.6
Abdomen              3-5
Spine/pelvis         4-6
Testicle/Bladder     10
Head/Shoulders       20-30
Eyeball              60-90
Jaw/Skull            120-200

Infrasound has been considered as a weapon for the military and also as a means of crowd control, whereby the bladder is irritated. See also Sound, Ultrasound.

Inner Product: See Vector Operations - Inner Product.

In-Phase: See Quadrature.

Instability: A system or algorithm goes unstable when feedback (either physical or mathematical) causes the system output to oscillate uncontrollably. For example, if a microphone is connected to an amplifier and then to a loudspeaker, and the microphone is brought close to the speaker, then the familiar feedback howl occurs; this is instability. Similarly in a DSP algorithm, mathematical feedback in the equations being implemented (recursion) may cause instability. Therefore to ensure a system is stable, feedback must be carefully controlled.

Institute of Electrical Engineers (IEE): The IEE is a UK based professional body representing electronic and electrical engineers. The IEE publish a number of signal processing related publications each month, and also organize DSP related colloquia and conferences.

Institute of Electrical and Electronic Engineers, Inc. (IEEE): The IEEE is a USA based professional body covering every aspect of electronic and electrical engineering. The IEEE publishes a very large number of journals each month, which include a number of notable signal processing journals such as Transactions on Signal Processing, Transactions on Speech and Audio Processing, Transactions on Biomedical Engineering, Transactions on Image Processing and so on.

Integration (1): The simplest mathematical interpretation of integration is taking the area under a graph.

Integration (2): The generic term for the implementation of many transistors on a single substrate of silicon. The technology refers to the actual process used to produce the transistors: CMOS is the integration technology for MOSFET transistors; Bipolar is the integration technology for TTL. The number of transistors on a single device is often indicated by one of the acronyms, SSI, MSI, LSI, VLSI, or ULSI.
Acronym   Technology                       No. of Transistors
SSI       Small Scale Integration          < 10
MSI       Medium Scale Integration         < 1000
LSI       Large Scale Integration          < 10000
VLSI      Very Large Scale Integration
ULSI      Ultra Large Scale Integration

n. Note that if m = n then the vectors span the entire space ℜ^m. See also Vector Properties - Space/Subspace.

• Transpose Vector: The transpose of a vector is formed by interchanging the rows and columns and is denoted by the superscript T. For example, for a vector x:

x = \begin{bmatrix} a \\ b \\ c \end{bmatrix}, \quad x^T = [\,a\ b\ c\,]    (590)

• 2-norm: See Vector Properties - Norm.

• Unit Vector: A unit vector with respect to the p-norm is one for which ||x||_p = 1. See also Vector Properties - Norm.

• Weight Vector: The name given to the vector formed by the weights of an FIR filter.

See also Matrix, Vector Operations.

Vector Scaling: See Vector Operations - Scaling.

Vector Sum Excited Linear Prediction (VSELP): Similar to CELP vocoders except that VSELP uses more than one codebook. VSELP also has the additional advantage that it can be run on fixed point DSP processors, unlike CELP which requires floating point computation.

Vector Transpose: See Vector Operations - Transpose.

Vibration: A continuous to and fro motion, or reciprocating motion. Vibrations at audible frequencies give rise to sound.

Vibrato: This is a simple frequency modulating effect applied to the output of a musical instrument. For example a mechanical arm on a guitar can be used to frequency modulate the output to produce a warbling effect. Vibrato can also be performed digitally by simple frequency modulation of a signal. See also Music, Tremolo.

Virtual Instrument: The terminology used by some companies for a measuring instrument that is implemented on a PC but is presented in a form that resembles the well known analog version of the instrument. For example a virtual oscilloscope presents all of the normal controls as buttons and dials actually drawn on the screen, in order that the instrument can immediately be used by an engineer whether they are familiar with DSP or not.

Virtual Reality: A virtual instrument (substitute) for living. Ultimately, this application of DSP image and audio may prove to be very addictive.

Visually Evoked Potential: See Evoked Potentials.

Viterbi Algorithm: This algorithm is a means of solving an optimization problem (that can be framed on a trellis -- or structured set of pathways) by calculating the cost (or metric) for each possible path and selecting the path with the minimum metric [103]. The algorithm has proven extremely useful for decoding convolutional codes and trellis coded modulation. For these applications, the paths are defined on a trellis and the metrics are Hamming distance for convolutional codes and Euclidean distance for trellis coded modulation. These metrics result in the smallest possible probability of error when signals are transmitted over an additive white Gaussian noise channel (this is a common modelling assumption in communications). See also Additive



White Gaussian Noise (AWGN), Channel Coding, Trellis Coded Modulation, Euclidean Distance, Hamming Distance. Viterbi Decoder: A technique for decoding convolutionally encoded data streams that uses the Viterbi algorithm (with a Hamming distance metric) to minimize the probability of data errors in a digital receiver. See Viterbi Algorithm. See also Channel Coding. VLSI: Very Large Scale Integration. The name given to the process of integrating millions of transistors on a single silicon chip to realize various digital devices (logic gates, flip-flops) which in turn are used to make system level components such as microprocessors, all on a single chip. VME Bus: A bus found in SUN workstations, VAXs and others. Many DSP board manufacturers make boards for VME bus, although they are usually a little more expensive than for the PC-Bus. Vocoders: A vocoder analyzes the spectral components of speech to try to identify the parameters of the speech waveform that are perceived by the human ear. These parameters are then extracted, transmitted and used at the receiver to synthesize (approximately) the original speech pattern. The resulting waveform may differ considerably from the original, although it will sound like the original speech signal. Vocoders have become popular at very low bit rates (2.4kbits/sec). Volatile: Semiconductor Memory that loses its contents when the power is removed is volatile. See also Non-Volatile, Dynamic RAM, Static RAM. Volterra Filter: A filter based on the non linear Volterra series, and used in DSP to model certain types of non-linearity. The second order Volterra filter includes second order terms such that the output of the filter is given by:
y(k) = Σ_{n=0 to N−1} w_n(k) x(k−n) + Σ_{i=0 to N−1} Σ_{j=0 to N−1} w_ij x(k−i) x(k−j)    (591)

where w_n are the linear weights and w_ij are the quadratic weights. Adaptive LMS based Volterra filters are also widely investigated and a good tutorial article can be found in [109]. (A short code sketch of this filter is given at the end of this section.)

Voice Grade Channel: A communications channel suitable for transmission of speech, analog data, or facsimile, generally over a frequency band from 300Hz to 3400Hz.

Volume Unit (VU): VU meters have been used in recording for many years and give a measure of the relative loudness of a sound [14], [46]. In general a sound of long duration is actually perceived by the human ear as louder than a short duration burst of the same sound. VU meters have rather a "sluggish" mechanical response, and therefore have an in-built capability to model the human ear temporal loudness response. An ANSI standard exists for the design of VU meters. See also Sound Pressure Level.

Von Hann Window: See Windows.

VXI Bus: A high performance bus used with instruments that can fit on a single PCB card. This standard is capable of transmitting data at up to 10 Mbytes/sec.
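To make the Volterra filter entry above concrete, the following is a minimal Python sketch of the second order filter of equation (591), taking the linear weights as time invariant; the function name volterra2 and the weight values are illustrative assumptions, not from the original text.

import numpy as np

def volterra2(x, w_lin, w_quad):
    # Second order Volterra filter (cf. equation (591)) with time invariant
    # weights: a linear FIR term plus a quadratic term over all pairs of
    # delayed inputs. w_lin has length N; w_quad is an N x N array.
    N = len(w_lin)
    y = np.zeros(len(x))
    for k in range(len(x)):
        # x_vec = [x(k), x(k-1), ..., x(k-N+1)], zero padded before k = 0
        x_vec = np.array([x[k - n] if k - n >= 0 else 0.0 for n in range(N)])
        y[k] = w_lin @ x_vec + x_vec @ w_quad @ x_vec
    return y

# Example with illustrative weights and a short input sequence
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(volterra2(x, np.array([0.5, 0.25, 0.1]), 0.01 * np.ones((3, 3))))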


W
Waterfall Plot: A graphical 3-D plot that shows frequency plotted on the X-axis, signal power on the Y-axis, and time elapsing on the Z-axis (into the computer screen). As time elapses and segments of data are transformed by the FFT, the screen can appear like a waterfall as the 2-D spectra pass along the Z-axis.

Warble Tone: If an audible pure tone is frequency modulated (FM) by a smaller pure tone (typically a few Hz) the perceived signal is often referred to as a warble tone, i.e. the signal is perceived to be varying between two frequencies around the carrier tone frequency. Warble tones are often used in audiometric testing where stimuli signals are played to a subject through a loudspeaker in a testing room. If pure tones were used there is a possibility that a zone of acoustic destructive interference would occur at or near the patient's head thus making the test erroneous. The use of warble tones greatly reduces this possibility as the zones of destructive interference will not be static. To produce a warble tone, consider a carrier tone at frequency f_c, frequency modulated by another tone at frequency f_m:

w(t) = sin(2πf_c t + β sin 2πf_m t) = sin θ(t),   i.e.   θ(t) = 2πf_c t + β sin 2πf_m t    (592)

where β is the modulation index which controls the maximum frequency deviation from the carrier frequency. For example if a carrier tone f_c = 1000 Hz is to be modulated by a tone f_m = 5 Hz such that the warble tone signal frequency varies between 900 Hz and 1100 Hz at a rate 5 times per second, then noting that the instantaneous frequency of an FM tone, f, is given by:

f = (1/2π) dθ(t)/dt = f_c + β f_m cos 2πf_m t    (593)

the modulation index required is β = 20 to give the required frequency swing. See also Audiometer, Audiometry, Binaural Beats, Constructive Interference, Destructive Interference.
A warble tone (amplitude plotted against time in seconds): an audible frequency carrier tone is modulated by a lower frequency modulating tone, usually of a few Hz.
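A minimal Python sketch of generating the warble tone of equations (592) and (593); the sampling rate and the one second duration are illustrative assumptions.

import numpy as np

fs = 8000                 # sampling frequency in Hz (illustrative choice)
fc, fm, beta = 1000.0, 5.0, 20.0   # carrier, modulating frequency, modulation index
t = np.arange(0, 1.0, 1.0 / fs)

# Warble tone w(t) = sin(2*pi*fc*t + beta*sin(2*pi*fm*t)), equation (592)
w = np.sin(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

# Instantaneous frequency f = fc + beta*fm*cos(2*pi*fm*t), equation (593)
f_inst = fc + beta * fm * np.cos(2 * np.pi * fm * t)
print(f_inst.min(), f_inst.max())   # approximately 900 Hz and 1100 Hz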

Watt: The surname of the Scottish engineer James Watt who gave his name to the unit of power. In an electrical system power is calculated from:
P = V · I = I²R = V²/R    (594)

Waveform: The representation of a signal plotted (usually) as voltage against time, where the voltage will represent some analog time varying quantity (e.g. audio, speech and so on).



Waveform Averaging: (Ensemble Averaging) The process of taking a number of measurements of a periodic signal, summing the respective elements in each record and dividing by the number of measurements. Waveform averaging is often used to reduce the noise when the noise and periodic signal are uncorrelated. As an example, averaging is widely used in ECG signal analysis where the process retains the correlated frequencies of the periodic signal and removes the uncorrelated noise to reveal the distinctive ECG complex.

Wavelet Transform: The wavelet transform is an operation that transforms a signal by integrating it with specific functions, often known as the kernel functions. These kernel functions may be referred to as the mother wavelet and the associated scaling function. Using the scaling function and mother wavelet, multi-scale translations and compressions of these functions can be produced. The wavelet transform actually generalizes the time frequency representation of the short time Fourier Transform (STFT). Compared to the STFT the wavelet transform allows non-uniform bandwidths or frequency bins and allows resolution to be different at different frequencies. Over the last few years DSP has seen considerable interest and application of the wavelet transform, and the interested reader is referred to [49].

Web: See World Wide Web.

Weighted Moving Average (WMA): See Finite Impulse Response (FIR) filter. See also Moving Average.

Weight Vector: The weights of an FIR digital filter can be expressed in vector notation such that the output of the digital filter can be conveniently expressed as a row-column vector product (or inner product). For example, for a four weight FIR filter with input x(k) and output y(k):

y(k) = Σ_{n=0 to 3} w_n x(k−n) = w_0 x(k) + w_1 x(k−1) + w_2 x(k−2) + w_3 x(k−3)

⇒ y(k) = w^T x = [w_0 w_1 w_2 w_3] [x(k) x(k−1) x(k−2) x(k−3)]^T

If the digital filter is IIR, then two weight vectors can be defined: one for the feedforward weights and one for the feedback weights. For further notational brevity the two weight vectors and two data

vectors can be respectively combined into a single weight vector, and a data vector consisting of past input data and past output samples:

y(k) = Σ_{n=0 to 2} a_n x(k−n) + Σ_{n=1 to 3} b_n y(k−n) = a_0 x(k) + a_1 x(k−1) + a_2 x(k−2) + b_1 y(k−1) + b_2 y(k−2) + b_3 y(k−3)

⇒ y(k) = a^T x_k + b^T y_{k−1} = [a_0 a_1 a_2] [x(k) x(k−1) x(k−2)]^T + [b_1 b_2 b_3] [y(k−1) y(k−2) y(k−3)]^T

⇒ y(k) = [a^T b^T] [x_k ; y_{k−1}] = w^T u_k

See also Vector Properties and Definitions - Weight Vector.
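A minimal Python sketch of the FIR case above, forming the filter output as the inner product w^T x(k); the weight and data values are illustrative assumptions.

import numpy as np

# Four weight FIR filter output y(k) = w^T x(k), as in the equations above.
w = np.array([0.2, 0.4, 0.4, 0.2])            # illustrative weight vector
x_k = np.array([1.0, 0.5, -0.25, 0.0])        # [x(k), x(k-1), x(k-2), x(k-3)]
y_k = np.dot(w, x_k)                          # w0*x(k) + w1*x(k-1) + ...
print(y_k)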

Weighting Curves: See Sound Pressure Level Weighting Curves. Weights: The name given to the multipliers of a digital filter. For example, a particular FIR may be described as having 32 weights. The terms weights and coefficients are used interchangeably. See also FIR filter, IIR filter, Adaptive Filter. Well-Conditioned Matrix: See Matrix Properties - Well Conditioned. Western Music Scale: The Western music scale is based around musical notes separated by octaves [14]. If a note, X, is an octave higher than another note, Y, then the fundamental frequency of X is twice that of Y. From one octave frequency to the next in the Western music scale, there are twelve equitempered frequencies which are spaced one semitone apart, where a semitone is a logarithmic increase in frequency (If the two octave frequencies are counted then there are thirteen



notes). The Western music scale can be best illustrated on the well known piano keyboard which comprises a full chromatic scale:
A section of the familiar piano keyboard (white keys F3 to G5) with the names of the notes marked. One octave is twelve equitempered notes (sometimes called the chromatic scale), or eight notes of a major scale. The black keys represent various sharps (#) and flats (b). The piano keyboard extends in both directions repeating the same twelve note scale. Neighboring keys (black or white) are defined as being a semitone apart. If one note separates two keys, then they are a tone apart. The letters A to G are the names given to the notes.

The International Pitch Standard defines the fundamental frequency of the note A4 as being 440 Hz. The note A4 is the first A above middle C (C4) which is located near the middle of a piano keyboard. Each note on the piano keyboard is characterised by its fundamental frequency, f_0, which is usually the loudest component caused by the fundamental mode of vibration of the piano string being played. The "richness" of the sound of a single note is caused by the existence of other modes of vibration which occur at harmonics (or integer multiples) of the fundamental, i.e. 2f_0, 3f_0 and so on. The characteristic sound of a musical instrument is produced by the particular harmonics that make up each note. On the equitempered Western music scale the logarithmic difference between the fundamental frequencies of all notes is equal. Therefore noting that in one octave the frequency of the thirteenth note in sequence is double that of the first note, then if the notes are equitempered the ratio of the fundamental frequencies of adjacent notes must be 2^(1/12) = 1.0594631… . As defined the ratio between the first and thirteenth note is then of course (2^(1/12))^12 = 2, or an octave. The actual logarithmic difference in frequency between two adjacent notes on the keyboard is:

log 2^(1/12) = 0.025085…    (595)

Two adjacent notes in the Western music scale are defined as being one semitone apart, and two notes separated by two semitones are a tone apart. For example, musical notes B and C are a semitone apart, whereas G and A are a tone apart as they are separated by Ab.

Therefore the fundamental frequencies of 3 octaves of the Western music scale can be summarised in the following table, where the fundamental frequency of the next semitone is calculated by multiplying the current note fundamental frequency by 1.0594631...:
Note   Fundamental        Note   Fundamental        Note   Fundamental
       frequency (Hz)            frequency (Hz)            frequency (Hz)
C3     130.812            C4     261.624             C5     523.248
C#3    138.591            C#4    277.200             C#5    554.400
D3     146.832            D4     293.656             D5     587.312
Eb3    155.563            Eb4    311.124             Eb5    622.248
E3     164.814            E4     329.648             E5     659.296
F3     174.614            F4     349.228             F5     698.456
F#3    184.997            F#4    370.040             F#5    740.080
G3     195.998            G4     392.040             G5     784.080
Ab3    207.652            Ab4    415.316             Ab5    830.632
A3     220                A4     440                 A5     880
Bb3    233.068            Bb4    466.136             Bb5    932.327
B3     246.928            B4     493.856             B5     987.767
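The table values follow directly from the 440 Hz pitch standard and the equitempered semitone ratio 2^(1/12); a minimal Python sketch (the function name note_frequency is an illustrative assumption):

# Fundamental frequency of any equitempered note, counted in semitones from
# A4 = 440 Hz (the International Pitch Standard); ratio per semitone is 2^(1/12).
def note_frequency(semitones_from_A4):
    return 440.0 * (2.0 ** (semitones_from_A4 / 12.0))

print(note_frequency(0))     # A4, 440 Hz
print(note_frequency(-9))    # C4 (middle C), approximately 261.6 Hz
print(note_frequency(3))     # C5, approximately 523.2 Hz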

A correctly tuned musical instrument will therefore produce notes with the frequencies as stated above. However it is the existence of subtle fundamental frequency harmonics that gives every instrument its unique sound qualities. It is also worth noting that certain instruments may have some or all notes tuned “sharp” or “flat” to create a desired effect. Also noting that pitch perception and frequency is not a linear relationship the high frequencies of certain instruments may be tuned slightly “sharp”. Music is rarely represented in terms of its fundamental frequencies and instead music staffs are used to represent the various notes that make up a particular composition. A piece of music is usually played in a particular musical key which is a subset of eight notes of an octave and where those eight notes have aesthetically pleasing perceptible qualities. The major key scales are



realised by starting at a root note and selecting the other notes of the key in intervals of 1, 1, 1/2, 1, 1, 1, 1/2 tones (where 1/2 tone is a semitone). For example the C-major and G-major scales are:

Starting at any root note, X, of the chromatic scale, the X-major scale can be produced by selecting notes in steps of 1, 1, 1/2, 1, 1, 1, 1/2 tones. The above shows examples of the C- and G-major scales. There are a total of 12 major scales possible.

There are many other forms of musical keys, such as the natural minors which are formed by the root note and then choosing in steps of 1, 1/2, 1, 1, 1/2, 1, 1. For more information on the rather elegant and simple mathematics of musical keys, refer to a text on music theory.
C-major scale: treble staff C4 D4 E4 F4 G4 A4 B4 C5 D5 E5 F5 G5; bass staff G2 A2 B2 C3 D3 E3 F3 G3 A3 B3 C4 D4.

Music notation for the C major scale which has no sharps or flats (i.e., only the white notes of the piano keyboard). Different notes are represented by different lines and spaces on the staff (the five parallel lines). The treble clef (the “g” like letter marking the G-line on the top left hand side of the staff) usually defines the melody of a tune, whereas the bass clef (the “f” like letter marking the F-line on the bottom left hand side of the staff) defines the bass line. Note that middle C (C4) is represented on a “ledger” line existing between the treble and bass staffs. On a piano the treble is played with the right hand, and the bass with the left hand. For other scales (major or minor), the required sharps and flats are shown next to the bass and treble clefs. Many musical instruments only have the capability of playing either the treble or bass, e.g. the flute can only play the treble clef, or the double bass can only play the bass clef.


G-major scale: treble staff C4 D4 E4 F#4 G4 A4 B4 C5 D5 E5 F#5 G5; bass staff G2 A2 B2 C3 D3 E3 F#3 G3 A3 B3 C4 D4.

Music notation for the G major scale which has one sharp (sharps and flats are the black notes of the piano keyboard). Therefore whenever an F note is indicated by the music, then an F# should be played in order to ensure that the G-major scale is used.

So what are the qualities of the Western music scale that make it pleasurable to listen to? The first reason is familiarity. We are exposed to music from a very early age and most people can recognise and recall a simple major scale or a tune composed of notes from a major scale. The other reasons are that the ratios of the frequencies of certain notes when played together are "almost" low integer ratios and these chords of more than one note take on a very "full" sound. For example the C-major chord is composed of the 1st, 3rd and 5th notes of the C-major scale, i.e. C, E, G. If we consider the ratios of the fundamental frequencies of these notes:

E/C = 2^(4/12) = 1.2599… ≈ 5/4,   G/C = 2^(7/12) = 1.4983… ≈ 3/2,   G/E = 2^(3/12) = 1.189… ≈ 6/5    (596)

they can be approximated by "almost" integer ratios of the fundamental frequencies. (Note that on the very old scales -- the Just scale and the Pythagorean scale -- these ratios were exact). When these three notes are played together the frequency differences actually reinforce the fundamental which produces a rich strong sound. This can be seen by considering the simple trigonometric identities:

C + E = cos C_0 + cos(2^(1/3) C_0) ≈ 2 cos((1 − 5/4)/2 · C_0) cos((1 + 5/4)/2 · C_0) = 2 cos(C_0/8) cos(9C_0/8)    (597)

and

C + G = cos C_0 + cos(2^(7/12) C_0) ≈ 2 cos((1 − 3/2)/2 · C_0) cos((1 + 3/2)/2 · C_0) = 2 cos(C_0/4) cos(5C_0/4)    (598)

where C_0 = 2πf_C t and f_C is the fundamental frequency of the C note. Adding together the C and E results in a sound that may be interpreted as a C three octaves below C0 modulating a D. Similarly the addition of the C and G results in a sound that may be interpreted as a C two octaves below C0 modulating an E. The existence of these various modulating subharmonics leads to the "full" and aesthetically pleasing sound of the chord. In addition to major chords, there are many others such as the minor, the seventh and so on. All of the chords have their own distinctive sound to which we have become accustomed and associated certain styles of music. Prior to the existence of the equitempered scale there were other scales which used perfect integer ratios between notes. Also around the world there are still many other music scales to be found, particularly in Asia. See also Digital Audio, Just Music Scale, Music, Music Synthesis, Pythagorean Scale.

White Noise: A signal that (in theory) contains all frequencies and is (for most purposes) completely unpredictable. Most white noise is defined as being Gaussian, which means that it has definable properties of mean (average value) and variance (a measure of its power). White noise has a constant power per unit bandwidth, and is labelled white because of the analogy with white light (containing all visible light frequencies with nearly equal power). In a digital system, a white noise sequence has a flat spectrum from 0Hz to half the sampling frequency.

Wide Sense Stationarity: If a discrete time signal, x(k), has a time invariant mean:

E{x(k)} = Σ_k x(k) p{x(k)}    (599)

and a time invariant autocorrelation function:

r(n) = Σ_k x(k) x(k−n) p{x(k)}    (600)

that is a function only of the time separation, n, but not of the absolute time, k, then the signal is said to be wide sense stationary. Therefore if the signal, x(k), is also ergodic, then:

E{x(k)} ≅ (1/(M2 − M1)) Σ_{k=M1 to M2−1} x(k),   for any M1 and M2 where M2 » M1    (601)

and

E{x²(k)} ≅ (1/(M2 − M1)) Σ_{k=M1 to M2−1} [x(k)]²,   for any M1 and M2 where M2 » M1    (602)
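As a small numerical illustration of the ergodic time averages in (601) and (602), the following Python sketch estimates the mean and mean squared value of a synthetic stochastic signal; the signal parameters are illustrative assumptions.

import numpy as np

# Time averages over one long record of an ergodic, wide sense stationary
# signal approximate E{x(k)} and E{x^2(k)}, as in (601) and (602).
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=2.0, size=100000)   # illustrative signal

mean_est = np.sum(x) / len(x)        # estimate of E{x(k)}, close to 0
msq_est = np.sum(x ** 2) / len(x)    # estimate of E{x^2(k)}, close to 4
print(mean_est, msq_est)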

For derivation and subsequent implementation of least mean squares DSP algorithms using stochastic signals, assuming wide sense stationarity is usually satisfactory. See Autocorrelation, Expected Value, Least Mean Squares, Mean Value, Mean Squared Value, Strict Sense Stationary, Variance, Wiener-Hopf Equations.

Wideband: A signal that uses a large portion of a particular frequency band may be described as wideband. The classification into wideband and narrowband depends on the particular application being described. For example, the noise from a reciprocating (piston) engine may be described as narrowband as it consists of one main frequency (the drone of the engine) plus some frequency components around this frequency, whereas the noise from a jet engine could be described as wideband as it covers a much larger frequency band and is more white (random) in its make-up. In telecommunications wideband or broadband may describe a circuit that provides more bandwidth than a voice grade telephone line (300-3000Hz), i.e. a circuit or channel that allows frequencies of up to 20kHz to pass. These types of telecommunication broadband channels are used for voice, high speed data communications, radio, TV and local area data networks.
Narrowband engine noise and wideband engine noise compared: sound pressure (dB) plotted against frequency, 0.1 to 25.6 kHz.

Widrow: Professor Bernard Widrow of Stanford University, USA, generally credited with developing the LMS algorithm for adaptive digital signal processing systems. The LMS algorithm is occasionally referred to as Widrow's algorithm.

Wiener-Hopf Equations: Consider the following architecture based on an FIR filter (with weights w_0 to w_{N−1}) and a subtraction element:


The output of an FIR filter, y ( k ) is subtracted from a desired signal, d ( k ) to produce an error signal, e ( k ) . If there is some correlation between the input signal, x ( k ) and the desired signal, d ( k ) then values can be calculated for the filter weights, w ( 0 ) to w ( N – 1 ) in order to minimize the mean squared error, E { e 2 ( k ) } .

If the signal x ( k ) and d ( k ) are in some way correlated, then certain applications and systems may require that the digital filter weights, w ( 0 ) to w ( N – 1 ) are set to values such that the power of the error signal, e ( k ) is minimised. If weights are found that minimize the error power in the mean squared sense, then this is often referred to as the Wiener-Hopf solution.



To derive the Wiener Hopf solution it is useful to use a vector notation for the input vector and the weight vector. The output of the filter, y(k), is the convolution of the weight vector and the input vector:
y(k) = Σ_{n=0 to N−1} w_n x(k−n) = w^T x(k)    (603)

where,

w = [w_0 w_1 w_2 … w_{N−2} w_{N−1}]^T    (604)

and,

x(k) = [x(k) x(k−1) x(k−2) … x(k−N+2) x(k−N+1)]^T    (605)

Assuming that x(k) and d(k) are wide sense stationary processes and are correlated in some sense, then the error, e(k) = d(k) − y(k), can be minimised in the mean squared sense. To derive the Wiener-Hopf equations consider first the squared error:

e²(k) = [d(k) − y(k)]² = d²(k) + [w^T x(k)]² − 2d(k) w^T x(k) = d²(k) + w^T x(k) x^T(k) w − 2 w^T d(k) x(k)    (606)

Taking expected (or mean) values we can write the mean squared error (MSE), E{e²(k)}, as:

E{e²(k)} = E{d²(k)} + w^T E{x(k) x^T(k)} w − 2 w^T E{d(k) x(k)}    (607)

Writing in terms of the N × N correlation matrix,

R = E{x(k) x^T(k)} =
    [ r_0      r_1      r_2      …  r_{N−1} ]
    [ r_1      r_0      r_1      …  r_{N−2} ]
    [ r_2      r_1      r_0      …  r_{N−3} ]
    [  :        :        :       …    :     ]
    [ r_{N−1}  r_{N−2}  r_{N−3}  …  r_0     ]    (608)

and the N × 1 cross correlation vector,

p = E{d(k) x(k)} = [p_0 p_1 p_2 … p_{N−1}]^T    (609)

gives,

ζ = E{e²(k)} = E{d²(k)} + w^T R w − 2 w^T p    (610)

where ζ is used for notational convenience to denote the MSE performance surface. Given that this equation is quadratic in w then there is only one minimum value. The minimum mean squared error (MMSE) solution, w_opt, can be found by setting the (partial derivative) gradient vector, ∇, to zero:

∇ = ∂ζ/∂w = 2Rw − 2p = 0    (611)

⇒ w_opt = R^{−1} p    (612)

A simple block diagram for the Wiener-Hopf calculation: the input signal x(k) is passed through the FIR digital filter y(k) = w^T x(k), the output y(k) is subtracted from the desired signal d(k) to give the error signal e(k), and the weights are calculated as w = R^{−1}p. Note that there is no feedback and therefore, assuming R is non-singular, the algorithm is unconditionally stable.

To appreciate the quadratic and single minimum nature of the error performance surface consider the trivial case of a one weight filter:

ζ = E{d²(k)} + r w² − 2wp    (613)



where E [ d 2 ( k ) ] , r, and p are all constant scalars. Plotting mean squared error (MSE), ζ , against the weight vector, w, produces a parabola (upfacing):

The mean square error (MSE) performance surface, ζ, for a single weight filter: at the point of zero gradient, ∇ = dζ/dw = 2rw − 2p = 0, the MMSE occurs at w_opt = r^{−1} p.

The MMSE solution occurs when the surface has gradient, ∇ = 0 . If the filter has two weights the performance surface is a paraboloid which can be drawn in 3 dimensions:
The mean square error (MSE) performance surface, ζ, for a two weight filter: at the point of zero gradient, ∇ = dζ/dw = 2Rw − 2p = 0, the MMSE occurs at w_opt = [w_0 w_1]^T_opt = [[r_0 r_1], [r_1 r_0]]^{−1} [p_0 p_1]^T.

If the filter has more than three weights then we cannot draw the performance surface in three dimensions, however, mathematically there is only one minimum point which occurs when the gradient vector is zero. A performance surface with more than three dimensions is often called a hyperparaboloid. To actually calculate the Wiener-Hopf solution, w_opt = R^{−1}p, requires that the R matrix and p vector are realised from the data x(k) and d(k), and the R matrix is then inverted prior to premultiplying vector p. Given that we assumed that x(k) and d(k) are stationary and ergodic, then we can estimate all elements of R and p from:

r_n = (1/M) Σ_{i=0 to M−1} x_i x_{i+n}   and   p_n = (1/M) Σ_{i=0 to M−1} x_i d_{i+n}    (614)
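A minimal Python sketch of this one step solution: the correlation terms are estimated by time averages in the spirit of equation (614) and the weights obtained as w_opt = R^{−1}p. The function name wiener_hopf, the data lengths and the "unknown" filter used for the check are illustrative assumptions.

import numpy as np

def wiener_hopf(x, d, N):
    # Time-average estimates of r_n = E{x(k)x(k-n)} and p_n = E{d(k)x(k-n)}
    # (cf. equation (614)), then the one step solution w_opt = R^-1 p.
    M = len(x)
    r = np.array([np.dot(x[:M - n], x[n:]) / M for n in range(N)])
    p = np.array([np.dot(x[:M - n], d[n:]) / M for n in range(N)])
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])  # Toeplitz
    return np.linalg.solve(R, p)

# Illustrative check: d(k) is x(k) filtered by an "unknown" 3 weight FIR filter,
# so the Wiener-Hopf solution should approximately recover those weights.
rng = np.random.default_rng(1)
x = rng.normal(size=20000)
d = np.convolve(x, [0.8, -0.4, 0.2])[:len(x)]
print(wiener_hopf(x, d, 3))          # close to [0.8, -0.4, 0.2]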

Calculation of R and p requires approximately 2MN multiply and accumulate (MAC) operations where M is the number of samples in a “suitably” representative data sequence, and N is the

adaptive filter length. The inversion of R requires around N³ MACs, and the matrix-vector multiplication, N² MACs. Therefore the total number of computations in performing this one step algorithm is 2MN + N³ + N² MACs. The computation load is therefore very high and real time operation is computationally expensive. More importantly, if the statistics of signals x(k) or d(k) change, then the filter weights will need to be recalculated, i.e. the algorithm has no tracking capabilities. Hence direct implementation of the Wiener-Hopf solution is not practical for real time DSP implementation because of the high computational load, and the need to recalculate when the signal statistics change. For this reason real time systems which need to minimize an error signal power use gradient descent based adaptive filters such as the least mean squares (LMS) or recursive least squares (RLS) type algorithms. See also Adaptive Filter, Correlation Matrix, Correlation Vector, Least Mean Squares Algorithm, Least Squares.

Whitening Filter: A filter that takes a stochastic signal and produces a white noise output [77]. If the input stochastic signal is an autoregressive process, the whitening filters are all-zero FIR filters. See also Autoregressive Model.

Window: A window is a set of numbers that multiply a set of N adjacent data samples. If the data was sampled at frequency f_s, then the window weights N/f_s seconds of data. There are a number of semi-standardized data weighting windows used to pre-weight data prior to frequency domain calculations (FFT/DFT). The most common are the Bartlett, Blackmann, Blackmann-harris, Hamming, harris and Von Hann (Hanning) windows, defined below (a short code sketch follows the list):
• Bartlett Window: A data weighting window used prior to frequency transformation (FFT) to reduce spectral leakage. Compared to the uniform window (no weighting) the Bartlett window doubles the width of the main lobe, while attenuating the main sidelobe by 26dB, compared to the 13dB of the uniform window. For N data samples, the Bartlett window is defined by:

h(n) = 1.0 − |n|/(N/2)   for n = −N/2, …, −2, −1, 0, 1, 2, …, N/2    (615)

• Blackmann Window: A data weighting window used prior to frequency transformation (FFT) providing improvements over the Bartlett and Von Hann windows by increasing spectral leakage rejection. For N data samples, the Blackmann window is defined by:
h(n) = Σ_{k=0 to 2} a(k) cos(2knπ/N)   for n = −N/2, …, −2, −1, 0, 1, 2, …, N/2    (616)

with coefficients: a(0) = 0.42659701, a(1) = 0.49659062, a(2) = 0.07684867 • Blackmann-harris Window: A type of data window often used in the calculation of FFTs/DFTs for reducing spectral leakage. Similar to the Blackman window, but with four cosine terms:
h(n) = Σ_{k=0 to 3} a(k) cos(2knπ/N)   for n = −N/2, …, −2, −1, 0, 1, 2, …, N/2    (617)

with coefficients: a(0) = 0.3635819, a(1) = 0.4891775, a(2) = 0.1365995, a(3) = 0.0106411

• Hamming Window: A data weighting window used prior to frequency transformation (FFT) to reduce spectral leakage. Compared to the uniform window (no weighting) the Hamming window doubles the width of the main lobe, while attenuating the main sidelobe by 46dB, compared to the 13dB of the uniform window. Compared to the similar Von Hann window, the Hamming window sidelobes do not decay as rapidly. For N data samples, the Hamming window is defined by:

h(n) = 0.54 + 0.46 cos(2nπ/N)   for n = −N/2, …, −2, −1, 0, 1, 2, …, N/2    (618)

• harris Window: A data weighting window used prior to frequency transformation (FFT) to reduce spectral leakage (similar to the Bartlett and Von Hann windows). For N data samples, the harris window is defined by:

h(n) = Σ_{k=0 to 3} a(k) cos(2knπ/N)   for n = −N/2, …, −2, −1, 0, 1, 2, …, N/2    (619)

with coefficients: a(0) = 0.3066923, a(1) = 0.4748398, a(2) = 0.1924696, a(3) = 0.0259983

• Von Hann Window: A data weighting window used prior to frequency transformation (FFT). Compared to the uniform window (no weighting) the Von Hann window doubles the width of the main lobe, while attenuating the main sidelobe by 32dB, compared to the 13dB of the uniform window. For N data samples, the Von Hann window is defined by:

h(n) = 0.5 + 0.5 cos(2nπ/N)   for n = −N/2, …, −2, −1, 0, 1, 2, …, N/2    (620)
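A minimal Python sketch of two of the windows defined above (Von Hann and Hamming), applied to a block of data prior to an FFT; the function names and the choice of N = 64 are illustrative assumptions.

import numpy as np

def von_hann(N):
    # h(n) = 0.5 + 0.5*cos(2*pi*n/N) for n = -N/2, ..., N/2 (equation (620))
    n = np.arange(-(N // 2), N // 2 + 1)
    return 0.5 + 0.5 * np.cos(2 * np.pi * n / N)

def hamming(N):
    # h(n) = 0.54 + 0.46*cos(2*pi*n/N) for n = -N/2, ..., N/2 (equation (618))
    n = np.arange(-(N // 2), N // 2 + 1)
    return 0.54 + 0.46 * np.cos(2 * np.pi * n / N)

# Weight an N+1 = 65 sample block of data prior to an FFT to reduce leakage
x = np.random.randn(65)
X = np.fft.fft(x * von_hann(64))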

Wold Decomposition: H. Wold showed that any stationary stochastic discrete time process, x ( n ) , can be decomposed into two components: (1) a general linear regression of white noise; and (2) a predictable process. The general linear regression of white noise is given by:
u(k) = v(k) + Σ_{n=1 to ∞} b_n v(k−n)   with   Σ_{n=1 to ∞} b_n < ∞    (621)

and the predictable process, s ( n ) , can be entirely predicted from its own past samples. s ( n ) and v ( n ) are uncorrelated, i.e. E { v ( n )s ( k ) } = 0 for all n, k [77]. See also Autoregressive Modelling, Yule Walker Equations. Woodbury’s Identity: See Matrix Properties - Inversion Lemma. Wordlength: The size of the basic unit of arithmetic computation inside a DSP processor. For a fixed point DSP processor the wordlength is at least 16 bits, and in the case of the DSP56000, it is 24 bits. Floating point DSP processors usually use 32 bit wordlengths. See also DSP Processor, Parallel Multiplier. World Wide Web (WWW): The World Wide Web (or the web) has become the de facto standard on the internet for storing, finding and transferring open information; hypertext (with text, graphics and audio) is used to access information. Most universities and companies involved in DSP now have web servers with home pages where the information available on a particular machine is summarised. There are also likely to be hypertext links available for cross referencing to additional information. The best way to understand the existence and usefulness of the World Wide Web is to use it with tools such as Mosaic or Netscape. Speak to your system manager or call up your phone company or internet service provider for more information.

Woofer: The section of a loudspeaker that reproduces low frequencies is often called the woofer. The name is derived from the low pitched woof of a dog. The antithesis to the woofer is the tweeter. See also Tweeter.




X
X-Series Recommendations: The X-series telecommunication recommendations from the International Telecommunication Union (ITU), advisory committee on telecommunications (denoted ITU-T and formerly known as CCITT) provide standards for data networks and open system communication. For details on this series of recommendations consult the appropriate standard document or contact the ITU. The well known X.400 standards are defined for the exchange of multimedia messages by store-and-forward transfer. The X.400 standards therefore provide an international service for the movement of electronic messages without restriction on the types of encoded information conveyed. The ITU formed a collaborative partnership with the International Organization for Standards for the development and continued definition of X.400 in 1988 (see ISO 10021 (Parts 1-7)). A joint technical committee was also formed by the ISO and the International Electrotechnical Commission (IEC). See also International Electrotechnical Commission, International Organization for Standards, International Telecommunication Union, ITU-T Recommendations, Standards.

xk: x_k or x(k) is often the name assigned to the input signal of a DSP system: x(k) → DSP System → y(k).




Y

yk: y_k or y(k) is usually the name assigned to the output signal of a DSP system: x(k) → DSP System → y(k).

Yule Walker Equations: Consider a stochastic signal, u ( k ) produced by inputting white noise, v ( k ) to an all-pole filter:
v(k) (white noise) → Autoregressive Model {b_1, b_2, …, b_M} → u(k) (modelled signal, or autoregressive process)

The output signal u ( k ) is referred to as an autoregressive process, and was generated by a white noise input at v ( k ) .

If the inverse problem is posed such that you are given the autoregressive signal u(k) and the order of the process (say M), then the autoregressive filter weights {b_1, b_2, …, b_M} that produced the given process from a white noise signal, v(k), can be found by solving the Yule Walker equations:

b_AR = R^{−1} r    (622)

where the vector b = [b_1 … b_{M−1} b_M]^T, R is the M × M correlation matrix:

R = E{u(k−1) u^T(k−1)} =
    [ r_0      r_1      …  r_{M−1} ]
    [ r_1      r_0      …  r_{M−2} ]
    [  :        :       …    :     ]
    [ r_{M−1}  r_{M−2}  …  r_0     ]    (623)

and r the M × 1 correlation vector,

r = E{u(k) u(k−1)} = [r_1 r_2 … r_M]^T    (624)

where r n = E { u ( k )u ( k – n ) } = E { u ( k – n )u ( k ) } , where E { . } is the expectation operator. See also Autoregressive Modelling.
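A minimal Python sketch of the Yule Walker solution: the autocorrelation terms are estimated by time averages from a record of u(k) and the weights obtained as b = R^{−1}r. The function name yule_walker and the AR(2) test process are illustrative assumptions.

import numpy as np

def yule_walker(u, M):
    # Time-average estimates of r_n = E{u(k)u(k-n)}, then b = R^-1 r as in
    # equations (622) to (624).
    L = len(u)
    r = np.array([np.dot(u[:L - n], u[n:]) / L for n in range(M + 1)])
    R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
    return np.linalg.solve(R, r[1:M + 1])

# Illustrative check with an AR(2) process u(k) = 0.75u(k-1) - 0.5u(k-2) + v(k)
rng = np.random.default_rng(2)
v = rng.normal(size=50000)
u = np.zeros_like(v)
for k in range(2, len(v)):
    u[k] = 0.75 * u[k - 1] - 0.5 * u[k - 2] + v[k]
print(yule_walker(u, 2))   # approximately [0.75, -0.5]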




Z
Z-1: Derived from the z-transform of a signal, z^−1 is taken to mean a delay of one sample period. Sometimes denoted simply as ∆.

Zeroes: A sampled impulse response (e.g. of a digital filter) can be transferred into the Z-domain, and the zeroes of the function can be found by factorizing the polynomial to find the roots:

H(z) = 1 − 3z^−1 + 2z^−2 = (1 − z^−1)(1 − 2z^−1)    (625)

i.e. the zeros are z = 1 and z = 2.

Zero Order Hold: If a signal is upsampled or reconstructed by holding the same value until the next sample value, then this is a zero order hold. Also called step reconstruction. See First Order Hold, Reconstruction Filter.

Zero-Padding: See Fast Fourier Transform - Zero Padding.

Zoran: A manufacturer and designer of special purpose DSP devices.

Z-transform: A mathematical transformation used for theoretical analysis of discrete systems. Transforming a signal or a system into the z-domain can greatly facilitate the understanding of a particular system [10].
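A minimal Python sketch of finding the zeros of the example above numerically; numpy.roots is applied to the numerator written as a polynomial in z.

import numpy as np

# H(z) = 1 - 3z^-1 + 2z^-2 = z^-2 (z^2 - 3z + 2), so its zeros are the roots
# of z^2 - 3z + 2, matching the factorisation (1 - z^-1)(1 - 2z^-1).
print(np.roots([1.0, -3.0, 2.0]))   # [2. 1.], i.e. zeros at z = 1 and z = 2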




Common Numbers Associated with DSP
In this section numerical values which are in some way associated with DSP and its applications are listed. The entries are given in an alphabetical type order, where 0 is before 1, 1 is before 2 and so on, with no regard to the actual magnitude of the number. Decimal points are ignored.

0 dB: If a system attenuates a signal by 0 dB then the signal output power is the same as the signal input power, i.e. 10 log(P_out/P_in) = 10 log 1 = 0 dB.    (626)

0x: Used as a prefix by Texas Instruments processors to indicate hexadecimal numbers.

0.0250858... : The base 10 logarithm of the ratio of the fundamental frequency of any two neighboring notes (one semi-tone apart) on a musical instrument tuned to the Western music scale. See also Western Music Scale.

0.6366197: An approximation of 2/π. See also 3.92dB.

1 bit A/D: An alternative name for a Sigma-Delta (Σ-∆) A/D.

1 bit D/A: An alternative name for a Sigma-Delta (Σ-∆) D/A.

1 bit idea: An alternative name for a really stupid concept.

10-12 W/m2: See entry for 2 x10-5 N/m2.

1004Hz: When measuring the bandwidth of a telephone line, the 0dB point is taken at 1004 Hz.

10149: The ISO/IEC standard number for the compact disc read only system description. Sometimes referred to as the Yellow Book. See also Red Book.

10198: The ISO/IEC standard number for JPEG compression.

1024: 2^10. The number of elements in 1k, when referring to memory sizes, i.e. 1 kbyte = 1024 bytes.

1.024 Mbits/sec: The bit rate of a digital audio system sampling at f_s = 32000 Hz with 2 (stereo) channels and 16 bits per sample.

1070 Hz: One of the FSK (frequency shift keying) carrier frequencies for the Bell 103, 300 bits/sec modem. Other frequencies are 1270 Hz, 2025 Hz and 2225 Hz.

103: The Bell 103 was a popular 300 bits/sec modem standard.

1.05946...: The twelfth root of 2, i.e. 2^(1/12). This number is the basis of the modern western music scale whereby the ratio of the fundamental frequencies of any two adjacent notes on the scale is 1.05946... See also Music, Western Music Scale.

10.8dB: Used in relation to quantisation noise power calculations; 10 log(1/12) = −10.8 dB.

11.2896 MHz: 2 × 5.6448 MHz and used as a clock for oversampling sigma delta ADCs and DACs. A 5.6448 MHz sampling frequency can be decimated by a factor of 128 to 44.1 kHz, a standard hifidelity audio sampling frequency for CD players.



115200 bits/sec: The 111520 bits/sec modem is an eight times speed version of the very popular 14400 modem and became available in the mid 1990s. This modem uses echo cancellation, data equalisation, and data compression technique to achieve this data rate. See also 300, 2400, Vseries recommendations. 11544: The ISO/IEC standard number for JBIG compression. 11172: The ISO/IEC standard number for MPEG-1 video compression. 120 dB SPL: The nominal threshold of pain from a sound expressed as a sound pressure level. 1200 Hz: The carrier frequency of the originating end of the ITU V22 modem standard. The answering end uses a carrier frequency of 2400Hz. Also one of the carrier frequencies for the FSK operation of the Bell 202 and 212 standards, the other one being 2400Hz. 1209 Hz: One of the frequency tones used for DTMF signalling. See also Dual Tone Multifrequency. 12.288 MHz: 2 × 6.144 MHz and used as a clock for oversampling sigma delta ADCs and DACs. 6.144 MHz sampling frequency can be decimated by a factor of 128 to 48kHz, a standard hifidelity audio sampling frequency for DAT. 128: 27 12.8 MHz: 2 × 6.4 MHz and used as a clock for oversampling sigma delta ADCs and DACs. 6.4 MHz sampling frequency can be decimated by a factor of 64 to a sampling frequency of 100kHz. 13 dB: The attentuation of the first sidelobe of the function 10 log sin x ⁄ x is approximately 13 dB. See also Sine Function. 1336 Hz: One of the frequency tones used for DTMF signalling. See also Dual Tone Multifrequency. 13522: The ISO/IEC standard number for MHEG multimedia coding. 13818: The ISO/IEC standard number for MPEG-2 video compression. -13 dB: The ISO/IEC standard number for MPEG-2 video compression. 1.4112 Mbits/sec: The bit rate of a CD player sampling at fs = 44100Hz, with 2 (stereo) channels and 16 bits per sample. 14400 bits/sec: The 14400 bits/sec modems was six times speed version of the very popular 2400 modem and became available in the early 1990s, with the cost falling dramatically in a few years. See also 300, 2400, V-series recommendations. 1.452 - 1.492 GHz: The 40 MHz radio frequency band allocated for satellite DAB (digital audio broadcasting) at the 1992 World Administrative Radio Conference in Spain. Due to other plans for this bandwidth, a number of countries selected other bandwidths such as 2.3 GHz in the USA, and 2.5 GHz in fifteen other countries. 147: The number of the European digital audio broadcasting (DAB) project started in 1987, and formally named Eureka 147. This system has been adopted by ETSI (the European

Telecommunication Standards Institute) for DAB and currently uses MPEG Audio Layer 2 for compression.

147:160: The ratio of the sampling rates of a CD player and a DAT player after dividing both by their largest (integer) common divisor, i.e. 44100/300 : 48000/300 = 147 : 160.    (627)

1477 Hz: One of the frequency tones used for DTMF signalling. See also Dual Tone Multifrequency.

1.536 Mbits/sec: The bit rate of a DAT player sampling at fs = 48000Hz, with 2 (stereo) channels and 16 bits per sample.

160: See 147.

1633 Hz: One of the frequency tones used for DTMF signalling. See also Dual Tone Multifrequency.

16384: 2^14

1.76 dB: Used in relation to quantisation noise power calculations; 10 log 1.5 = 1.76 dB.

176.4kHz: The sample rate when 4 ×'s oversampling a CD signal where the sampling frequency f_s = 44.1kHz.

1800 Hz: The carrier frequency of the QAM (quadrature amplitude modulation) ITU V32 modem standard.

2 bits: American slang for a quarter (dollar).

2-D FFT: The extension of the (1-D) FFT into two dimensions to allow Fourier transforms on images.

2 × 10-5 N/m2: The reference intensity, sometimes denoted as Iref, for the measurement of sound pressure levels (SPL). This intensity can also be expressed as 10-12 W/m2, or as 20 µPa (micropascals). This intensity was chosen as it was close to the absolute level of a tone at 1000Hz that can just be detected by the human ear; the average human threshold of hearing at 1000Hz is about 6.5dB. The displacement of the eardrum at this sound power level is suggested to be 1/10th the diameter of a hydrogen molecule!

20 dB/octave: Usually used to indicate how good a low pass filter attenuates at frequencies above the 3dB point. 20dB per octave means that each time the frequency doubles then the attenuation of the filter increases by a factor of 10, since 20 dB = 20 log10(10). 20dB/decade is the same roll-off as 6dB/octave. See also Decibels, Roll-off.

20 µPa (micropascals): See entry for 2 x10-5 N/m2.

205: The number of data points used in Goertzel's algorithm (a form of discrete Fourier transform (DFT)) for tone detection.



2025 Hz: One of the FSK (frequency shift keying) carrier frequencies for the Bell 103, 300 bits/sec modem. Other frequencies are 1070 Hz, 1270 Hz and 2225 Hz. 2048: 211 2100: The part number of most Analog Devices fixed point DSP processors. 21000: The part number of most Analog Devices floating point DSP processors. 2225 Hz: One of the FSK (frequency shift keying) carrier frequencies for the Bell 103, 300 bits/sec modem. Other frequencies are 1070 Hz, 1270 Hz and 2025 Hz. 24 bits: The fixed point wordlength of some members of the Motorola DSP56000 family of DSP processors. 2400 bits/sec: The 2400 bits/sec modems appeared in the early 1990s as low cost communication devices for remote computer access and FAX transmission. The bit rate of 2400 was chosen as it is a factor of 8 faster than the previous 300 bits/sec modem. Data rates of 2400 were achieved by using echo cancellation and data equalisation techniques. The 2400 bits/sec modem dominated the market until the cost of the 9600 modems started to fall in about 1992. To ensure a simple backwards operation compatibility all modems are now produced in factors of 2400, i.e. 4800, 7200, 9600, 14400, 28800, 57600, 115200. See also V-series recommendations. 2400 Hz: The carrier frequency of the answering end of the ITU V22 modem standard. The originating end uses a carrier frequency of 1200Hz. Also one of the carrier frequencies for the FSK operation of the Bell 202 and 212 standards, the other one being 1200Hz. 256: 28 26 dB: The attentuation of the first sidelobe of the function 20 log sin x ⁄ x is approximately 26 dB. See also Sine Function. 261.624 Hz: The fundamental frequency of middle C on a piano tuned to the Western music scale. See also 440 Hz. 2.718281... : The (truncated) value of e, the natural logarithm. 28800 bits/sec: The 28800 bits/sec modem is an eight times speed version of the very popular 14400 modem and became available in the mid 1990s. This modem uses echo cancellation, data equalisation, and data compression technique to achieve this data rate. See also 300, 2400, Vseries recommendations. 2.8224 MHz: An intermediate oversampling frequency used for sigma delta ADCs and DACs used with CD audio systems. 2.8224 MHz can be decimated by a factor of 64 to 44.1 kHz, the standard sampling frequency of CD players. 3 dB: See 3.01dB. 3.01 dB: The approximate value of 10 log 10 ( 0.5 ) = 3.0103 . If a signal is attenuated by 3dB then its power is halved. 300: The largest (integer) common denominator of the sampling rates of a CD player, and a DAT player, i.e.

44100/300 : 48000/300 = 147 : 160    (628)

300 bits/sec: The bit rate of the first commercial computer modems. Although 28800 bits/sec is now easily achievable, 300 bits/sec modems probably outsell all other speeds of modems by virtue of the fact that most credit card telephone verification systems can perform the verification task at 300 bits/sec in a few seconds. See also Bell 103, 2400, V-series recommendations. 3.072 MHz: An intermediate oversampling frequency used for sigma delta ADCs and DACs used with DAT and other professional audio systems. 3.072 MHz can be decimated by a factor of 64 to 48kHz, the current standard professional hifidelity audio sampling frequency. 32 kHz: A standard hifidelity audio sampling rate. The sampling rate of NICAM for terrestrial broadcasting of stereo audio for TV systems in the United Kingdom. 32 bits: The wordlength of most floating point DSP processors. 24 bits are used for the mantissa, and 8 bits for the exponent. 3.2 MHz: An intermediate oversampling frequency for sigma delta ADCs and DACs that can be decimated by a factor of 32 to 100 kHz. 320: The part number for most Texas Instruments DSP devices. 32768: 215 3.3 Volt Devices: DSP processor manufacturers are now releasing devices that will function with 3 volt power supplies, leading to a reduction of power consumption. 350 Hz: Tones at 350 Hz and 440 Hz make up the dialing tone for telephone systems. 35786 km: The height above the earth of a satellite geostationary orbit. This leads to between 240 and 270ms one way propagation delay for satellite enabled telephone calls. On a typical international telephone connection the round-trip delay can be as much as 0.6 seconds making voice conversation difficult. In the likely case of additional echoes voice conversation is almost impossible without the use of echo cancellation strategies. +++ 352.8 bits/sec: One quarter of the bit rate of hifidelity CD audio sampled at 44.1 kHz, with 16 bit samples and stereo channels ( 44100 × 16 × 2 = 1411200 bits/sec ). The data compression scheme known as PASC (psychoacoustic subband coding) used on DCC (digital compact cassette) compresses by a factor 4:1 and therefore has a data rate of 384 bits/sec when used on data sampled at 44.1kHz. & 352.8kHz: The sample rate when 8 ×’s oversampling a CD signal where the sampling frequency is f s = 44.1kHz . +++ 384 bits/sec: One quarter of the bit rate of hifidelity audio sampled at 48kHz, with 16 bit samples and stereo channels ( 48000 × 16 × 2 = 1536000 bits/sec ). The data compression scheme known as PASC (psychoacoustic subband coding) used on DCC (digital compact cassette) compresses by a factor 4:1 and therefore has a data rate of 384 bits/sec when used on data sampled at 44.1kHz. & 3.92dB: The attenuation of the frequency response of a step reconstructed signal at f s ⁄ 2 . The attenuation is the result of the zero order hold “step” reconstruction which is equivalent to


convolving the signal with a unit pulse of time duration t_s = 1/f_s, or in the frequency domain, multiplying by the sinc function, H(f):

H(f) = sin(πf t_s) / (πf t_s)    (629)

Therefore at f_s/2, the droop in the output signal spectrum has a value of:

H(f_s/2) = sin(π/2) / (π/2) = 2/π = 0.63662    (630)

which in dB's can be expressed as:

20 log(2/π) = −3.922398 dB, i.e. an attenuation of about 3.92 dB.    (631)

4 dB: Sometimes used as an approximation to 3.92dB. See also 3.92dB.

4096: 2^12

4294967296: 2^32

440 Hz: The fundamental frequency of the first A note above middle C on a piano tuned to the Western music scale. Definition of the frequency of this one note allows the fundamental tuning frequency of all other notes to be defined. Also the pair of tones at 440 Hz and 350 Hz make up the telephone dialing tone, and 440 Hz and 480 Hz make up the ringing tone for telephone systems.

44.1kHz: The sampling rate of Compact Disc (CD) players. This sampling frequency was originally chosen to be compatible with U-matic video tape machines which had either a 25 or 30Hz frame rate, i.e. 25 and 30 are both factors of 44100.

44.056kHz: A variant of the CD sampling rate. The 44.1 kHz rate was originally chosen to be compatible with U-matic video tape machines which had either a 25 or 30Hz frame rate, i.e. 25 and 30 are both factors of 44100. When master recording was done on a 29.97Hz frame rate video machine, this required the sampling rate to be modified to a nearby number compatible with 29.97, i.e. 44.056kHz. This sampling rate is redundant now.

4.76cm/s: The tape speed of compact cassette players, and also of digital compact cassette players (DCC).

4.77 dB: 10 log 3 ≈ 4.77dB, i.e. a signal that has its power amplified by a factor of 3, has an amplification of 4.77dB.

48kHz: The sampling rate of digital audio tape (DAT) recorders, and the sampling rate used by most professional audio systems.

480 Hz: The tone pair 480 Hz and 620 Hz make up the busy signal on telephone systems.

4800 bits/sec: The 4800 bits/sec modems was a double speed version of the very popular 2400 modem. Data rates of 4800 were achieved using echo cancellation and data equalisation techniques. See also 2400, V-series recommendations. 512: 29 56000: The part number for most Motorola fixed point DSP devices. 5.6448 MHz: An oversampling frequency for sigma delta ADCs and DACs used with CD players. 5.6448 MHz can be decimated by a factor of 128 to 44.1kHz the standard hifidelity audio sampling frequency for CD players. 57200 bits/sec: The 57200 bits/sec data rate modem is an 4 times speed version of the very popular 14400 modem and became available in the mid 1990s. This modem uses echo cancellation, data equalisation, and data compression technique to achieve this data rate. See also 300, 2400, V-series recommendations. 6dB/octave: The “6” is an approximation for 20 log 10 2 = 6.0206 . Usually used to indicate how good a low pass filter attenuates at frequencies above the 3dB point. 6dB per octave means that each time the frequency doubles then the attenuation of the filter increases by a factor of 2, since. 6dB/octave is the same roll-off as 20dB/decade. See also Decibels, Roll-off. 6.144 MHz: An oversampling frequency for sigma delta ADCs and DACs used with DAT and other professional audio systems. 6.144 MHz can be decimated by a factor of 128 to 48kHz to the current standard professional hifidelity audio sampling frequency. 620 Hz: The tone pair 480 Hz and 620 Hz make up the busy signal on telephone systems. 6.4 MHz: An oversampling frequency for sigma delta ADCs and DACs that can be decimated by a factor of 64 to 100 kHz. 64kBits/sec: A standard channel bandwidth for data communications. If a channel has a bandwidth of approximately 4kHz, then the Nyquist sampling rate would be 8kHz, and data of 8 bit wordlength is sufficient to allow good fidelity of speech to be transmitted. Note that 64000 bits/sec = 8000Hz × 8 bits. 6.4 MHz: A common sampling rate for a 64 times oversampled sigma-delta ( Σ-∆ ) A/D, resulting in up to 16 or more bits of resolution at 100kHz after decimation by 64. 65536: 216 697 Hz: One of the frequency tones used for DTMF signalling. See also Dual Tone Multifrequency. & 705600 bits/sec: The bit rate of a single channel of a CD player, with 16 bit samples, and sampling at f s = 44100kHz . & 705.6 kHz: The sample rate when 16 ×’s oversampling a CD signal where the sampling frequency f s = 44100kHz . 7200 bits/sec: The 7200 bits/sec modems was a three times speed version of the very popular 2400 modem and became available in the early 1990s, with the cost falling dramatically in a few



years. Data rates of 7200 were achieved using echo cancellation and data equalisation techniques. See also 2400, V-series recommendations. 741 Op-Amp: The part number of a very popular operational amplifier chip widely used for signal conditioning, amplification, and anti-alias, reconstruction filters. 768000 bits/sec: The bit rate of a single channel DAT player with 16 bits per sample, and sampling at f s = 48000 Hz . 770 Hz: One of the frequency tones used for DTMF signalling. See also Dual Tone Multifrequency. 8 kHz: The sampling rate of most telephonic based speech communication. 8192: 213 852 Hz: One of the frequency tones used for DTMF signalling. See also Dual Tone Multifrequency. 941 Hz: One of the frequency tones used for DTMF signalling. See also Dual Tone Multifrequency. 9.54dB: 20 log 3 ≈ 9.54dB , i.e. a signal that has its voltage amplfied by a factor of 3, has an amplification of 9.54 dB. 9600 bits/sec: The 9600 bits/sec modems was a four times speed version of the very popular 2400 modem and became available in the early 1990s, with the cost falling dramatically in a few years. Data rates of 9600 were achieved by using echo cancellation and data equalisation techniques. See also 2400, V-series recommendations. 96000: The part number for most Motorola 32 bit floating point devices.


Acronyms:
ADC - Analogue to Digital Converter. ADSL - Advanced Digital Subscriber Line ADSR - Attack-Decay-Sustain-Release. AES/EBU - Audio Engineering Society/European Broadcast Union. A/D - Analogue to Digital Converter. ADPCM - Adaptive Differential Pulse Code Modulation. ANC - Active noise cancellation. ANSI - American National Standards Institute. AIC - Analogue Interfacing Chip. ARB - Arbitrary Waveform Generation. ASCII - American Standard Code for Information Interchange. ASIC - Application Specific Integrated Circuit. ASK - Amplitude Shift Keying. ASPEC - Adaptive Spectral Perceptual Entropy Coding . ASSP - Acoustics, Speech and Signal Processing. AVT - Active Vibration Control. AWGN - Additive White Gausssian Noise. BER - Bit Error Rate. BISDN - Broadband Integrated Services Digital Network. BPF - Band pass filter. BPSK - Binary Phase Shift Keying. CCR - Condition Code Register. CCITT - Comité Consultatif International Télégraphique et Téléphonique. (International Consultative Committee on Telegraphy and Telecommunication, now known as ITU-T.) CCIR - Comité Consultatif International Radiocommunication. (International Consultative Committee on Radiocommunication, now known as ITU-R.) CD - Compact Disc CD-DV: Compact Disc Digital Video.

CELP - Coded Excited Linear Prediction Vocoders.


CENELEC - Comité Européen de Normalisation Electrotechnique (European Committee for Electrotechnical Standardization). CIF - Common Intermediate Format. CIRC - Cross Interleaved Reed Solomon code. CISC - Complex Instruction Set Computer. CPM - Continuous Phase Modulation. CPU - Central Processing Unit. CQFP - Ceramic Quad Flat Pack. CRC - Cyclic Redundancy Check. CVSD - Continuous variable slope delta modulator. D/A - Digital to analogue converter. DAB - Digital Audio Broadcasting. DAC - Digital to analogue converter. dB - decibels. DECT - Digital European Cordless Telephone. DL - Difference Limen. & DARS - Digital Audio Radio Services. DBS - Direct Broadcast Satellites. DCC - Digital Compact Cassette. DCT - Discrete Cosine Transform. & DDS - Direct Digital Synthesis. DECT - Digital European Cordless Telephone. DFT - Discrete Fourier Transform. DLL - Dynamic Link Library. DMS - Direct Memory Access. DPCM - Differential Pulse Code Modulation. DPSK - Differential Phase Shift Keying. DRAM - Dynamic Random Acces Memory.

DSL - Digital Subscriber Line DSP - Digital Signal Processing. DTMF - Dual tone Multifrequency. DSfP - Digital Soundfield Processing. ECG - Electrocardiograph. EEG - Electroencephalograph. EFM - Eight to Fourteen Modulation. EMC - Electromagnetic compatibility. EPROM - Electrically programmable read only memory. EEPROM - Electrically Erasable Programmable Read Only Memory. EQ - Equalization (usually in acoustic applications). ETSI - European Telecommunications Standards Institute. FIR - Finite Impulse Response. FFT - Fast Fourier Transform. FSK - Frequency Shift Keying. G - prefix meaning 10 9 , as in GHz, thousands of millions of Hertz GII - Global Information Infrastructure. GIF - Graphic Interchange Format. GSM - Global System For Mobile Communications (Group Speciale Mobile). HDSL - High speed Digital Subscriber Line hhtp - Hypertext Transfer Protocol. IEEE - Institute of Electrical and Electronic Engineers (USA). IEE - Institute of Electrical Engineers (UK). IEC - International Electrotechnical Commission. IIR - Infinite impulse response. IIF - Image Interchange Facility. INMARSAT - International Mobile Satellite Organization. ISDN - Integrated Services Digital Network. ISO - International Organisation for Standards.

ISO/IEC JTC - International Organization for Standardization / International Electrotechnical Commission Joint Technical Committee.
ITU - International Telecommunication Union.
ITU-R - International Telecommunication Union - Radiocommunication.
ITU-T - International Telecommunication Union - Telecommunication.
I/O - Input/Output.
JBIG - Joint Bi-level Image Group.
JND - Just Noticeable Difference.
JPEG - Joint Photographic Experts Group.
JTC - Joint Technical Committee.
k - prefix meaning 10^3, as in kHz, thousands of Hertz.
LFSR - Linear Feedback Shift Register.
LPC - Linear Predictive Coding.
LSB - Least Significant Bit.
M - prefix meaning 10^6, as in MHz, millions of Hertz.
MAC - Multiply Accumulate.
MFLOPS - Millions of Floating Point Operations per Second.
MIDI - Musical Instrument Digital Interface.
MAF - Minimum Audible Field.
MAP - Minimum Audible Pressure.
MIPS - Millions of Instructions per Second.
MLPC - Multipulse Linear Predictive Coding.
MA - Moving Average.
MD - Mini-Disc.
MMSE - Minimum Mean Squared Error.
MHEG - Multimedia and Hypermedia Experts Group.
MPEG - Moving Picture Experts Group.
MRELP - M..
ms - millisecond (10^-3 seconds).
MSB - Most Significant Bit.
MSE - Mean Squared Error.
MSK - Minimum Shift Keying.
MIX - Modular Interface eXtension.
MUSICAM - Masking pattern adapted Universal Subband Integrated Coding And Multiplexing.
NRZ - Non Return to Zero.
ns - nanosecond (10^-9 seconds).
OKPSK - Offset-Keyed Phase Shift Keying.
OKQAM - Offset-Keyed Quadrature Amplitude Modulation.
OOK - On Off Keying.
OPSK - Offset-Keyed Phase Shift Keying.
OQAM - Offset-Keyed Quadrature Amplitude Modulation.
PAM - Pulse Amplitude Modulation.
PASC - Precision Adaptive Subband Coding.
PCM - Pulse Code Modulation.
PCMCIA - Personal Computer Memory Card International Association.
PN - Pseudo-Noise.
ppm - Parts per million.
PPM - Pulse Position Modulation.
PRBS - Pseudo Random Binary Sequence.
PSK - Phase Shift Keying.
PSTN - Public Switched Telephone Network.
PTS - Permanent Threshold Shift.
PWM - Pulse Width Modulation.
PDA - Personal Digital Assistant.
PGA - Pin Grid Array.
PID - Proportional Integral Derivative controller.
PQFP - Plastic Quad Flat Pack.
PRNS - Pseudo Random Noise Sequence.

QAM - Quadrature Amplitude Modulation.
QPSK - Quadrature Phase Shift Keying.
RAM - Random Access Memory.
RBDS - Radio Broadcast Data System.
RELP - Residual Excited Linear Prediction Vocoder.
RIFF - Resource Interchange File Format.
RISC - Reduced Instruction Set Computer.
RLC - Run Length Coding.
RLE - Run Length Encoding.
ROM - Read Only Memory.
RPE - Recursive Prediction Error, or Regular Pulse Excitation.
RZ - Return to Zero.
Rx - Receive.
SBM - Super Bit Mapping (a trademark of Sony).
SCMS - Serial Copy Management System.
SFG - Signal Flow Graph.
SGML - Standard Generalized Markup Language.
S/H - Sample and Hold.
SINR - Signal to Interference plus Noise Ratio.
SNR - Signal to Noise Ratio.
S/N - Signal to Noise Ratio.
S/P-DIF - Sony/Philips Digital Interface Format.
SR - Status Register.
SPL - Sound Pressure Level.
SRAM - Static Random Access Memory.
SRC - Sample Rate Converter.
TPDF - Triangular Probability Density Function.
TCM - Trellis Coded Modulation.
THD - Total Harmonic Distortion.

THD+N - Total Harmonic Distortion plus Noise.
TTS - Temporary Threshold Shift.
Tx - Transmit.
VSELP - Vector Sum Excited Linear Prediction.
VU - Volume Unit.
WMA - Weighted Moving Average.
WWW - World Wide Web.

µsec - microsecond (10^-6 seconds).

Standards Organisations
ANSI - American National Standards Institute.
BS - British Standard.
IEC - International Electrotechnical Commission.
IEEE - Institute of Electrical and Electronics Engineers.
ISO - International Organization for Standardization.


References and Further Reading
Textbooks
[1] S. Banks. Signal Processing, Image Processing and Pattern Recognition. Prentice Hall, Englewood Cliffs, NJ, 1990.
[2] T.P. Barnwell III, K. Nayebi, C.H. Richardson. Speech Coding. A Computer Laboratory Textbook. John Wiley and Sons, 1996.
[3] A. Bateman and W. Yates. Digital Signal Processing Design. Pitman Publishing, 1988.
[4] E.H. Berger, W.D. Ward, J.C. Morrill, L.H. Royster. Noise and Hearing Conservation Manual, 4th Edition. American Industrial Hygiene Association.
[5] R.L. Brewster. ISDN Technology. Chapman & Hall, London, 1993.
[6] R.G. Brown, P.Y.C. Hwang. Introduction to Random Signals and Applied Kalman Filtering. John Wiley and Sons, 1992.
[7] C.S. Burrus, J.H. McLellan, A.V. Oppenheim, T.W. Parks, R.W. Schafer, H.W. Schuessler. Computer Based Exercises for Signal Processing Using Matlab. Prentice Hall, 1994.
[8] J.C. Candy, G.C. Temes. Oversampling Delta-Sigma Data Converters. IEEE Press, Piscataway, NJ, 1992.
[9] L.W. Couch II. Modern Communication Systems: Principles and Applications. Prentice-Hall, Englewood Cliffs, NJ, 1995.

[10] D.J. DeFatta, J.G. Lucas, W.S. Hodgkiss. Digital Signal Processing: A System Design Approach. John Wiley, New York, 1988.
[11] J.R. Deller, J.G. Proakis, J.H.K. Hansen. Discrete Time Processing of Speech Signals. MacMillan, New York, 1993.
[12] P.D. Denyer and D. Renshaw. VLSI Signal Processing - A Bit Serial Approach. Addison-Wesley, 1995.
[13] G. De Poli, A. Piccialli, C. Roads. Representations of Musical Signals. The MIT Press, Boston, USA, 1991.
[14] J.M. Eargle. Music Sound and Technology. Van Nostrand Reinhold, 1990.
[15] G.H. Golub, C.F. Van Loan. Matrix Computations. Johns Hopkins University Press, 1989.
[16] J.G. Gibson. The Mobile Communications Handbook. CRC Press/IEEE Press, 1996.
[17] S. Haykin. Adaptive Filter Theory (2nd Edition). Prentice Hall, Englewood Cliffs, NJ, 1990.
[18] S. Haykin. Neural Networks: A Comprehensive Foundation. MacMillan College, 1994.
[19] D.R. Hush and B.G. Horne. Progress in supervised neural networks. IEEE Signal Processing Magazine, Vol. 10, No. 1, pp. 8-39, January 1993.
[20] K. Hwang, F. Briggs. Computer Architecture and Parallel Processing. McGraw-Hill, 1985.
[21] E.C. Ifeachor, B.W. Jervis. Digital Signal Processing: A Practical Approach. Addison-Wesley, 1993.
[22] N. Kalouptsidis, Theodoridis. Adaptive System Identification and Signal Processing Algorithms. Prentice Hall, 1993.


[23] A. Kamas, E.A. Lee. Digital Signal Processing Experiments. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[24] S.Y. Kung. Digital Neurocomputing. Prentice-Hall, Englewood Cliffs, NJ, 1992.
[25] S.Y. Kung. VLSI Array Processors. Prentice-Hall, Englewood Cliffs, NJ, 1987.
[26] P.A. Lynn. An Introduction to the Analysis and Processing of Signals, 1982.
[27] J.D. Martin. Signals and Processes: A Foundation Course.
[28] C. Marven, G. Ewers. A Simple Approach to Digital Signal Processing. Texas Instruments Publication, 1993.
[29] R.M. Mersereau, M.J.T. Smith. Digital Filtering. John Wiley, New York, 1993.
[30] B.C.J. Moore. An Introduction to the Psychology of Hearing.
[31] A.V. Oppenheim, R.W. Schafer. Discrete Time Signal Processing. Prentice Hall, Englewood Cliffs, NJ, 1989.
[32] R.A. Penfold. Synthesizers for Musicians. PC Publishing, London, 1989.
[33] K. Pohlmann. Advanced Digital Audio. Howard Sams, Indiana, 1991.
[34] K. Pohlmann. An Introduction to Digital Audio. Howard Sams, Indiana, 1989.
[35] T.S. Rappaport. Wireless Communications. IEEE Press, New York, 1996.
[36] P. Regalia. Adaptive IIR Filtering. Marcel Dekker, 1995.
[37] F. Rumsey. Digital Audio. Butterworth-Heinemann, 1991.
[38] E. Rogers and Y. Li. Parallel Processing in a Control Systems Environment. Prentice Hall, Englewood Cliffs, NJ, 1993.
[39] K. Sayood. Introduction to Data Compression. Morgan-Kaufman, 1995.
[40] M. Schwartz. Information, Transmission, and Modulation Noise. McGraw-Hill.
[41] N.J.A. Sloane, A.D. Wyner (Editors). Claude Elwood Shannon: Collected Papers. IEEE Press, Piscataway, NJ, 1993. ISBN 0-7803-0434-9.
[42] M.J.T. Smith, R.M. Mersereau. Introduction to Digital Signal Processing: A Computer Laboratory Textbook. John Wiley and Sons, 1992.
[43] K. Steiglitz. A Digital Signal Processing Primer. Addison-Wesley, 1996.
[44] C.A. Stewart and R. Atkinson. Basic Analogue Computer Techniques. McGraw-Hill, London, 1967.
[45] N. Storey. Electronics: A Systems Approach. Addison-Wesley, 1992.
[46] M. Talbot-Smith (Editor). Audio Engineer's Reference Book. Focal Press, ISBN 0 7506 0386 0, 1994.
[47] F.J. Taylor. Principles of Signals and Systems. McGraw-Hill, New York, 1994.
[48] W.J. Tompkins. Biomedical Digital Signal Processing. Prentice Hall, Englewood Cliffs, NJ, 1993.
[49] P.P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice Hall, Englewood Cliffs, NJ, 1993.
[50] S.V. Vaseghi. Advanced Signal Processing and Digital Noise Reduction. John Wiley/B.G. Tuebner, 1996.
[51] J. Watkinson. An Introduction to Digital Audio. Focal Press, ISBN 0 240 51378 9, 1994.
[52] J. Watkinson. Compression in Video and Audio. Focal Press, ISBN 0240513940, April 1994.

[53] B. Widrow and S. Stearns. Adaptive Signal Processing. Prentice Hall, 1985.
[54] J. Watkinson. The Art of Digital Audio, 2nd Edition. ISBN 0240 51320 7, 1993.

Technical Papers
[55] P.M. Aziz, H.V. Sorenson, J.V. Der Spiegel. An overview of sigma delta converters. IEEE Signal Processing Magazine, Vol. 13, No. 1, pp. 61-84, January 1996.
[56] J.W. Arthur. Modern SAW-based pulse compression systems for radar application. Part 2: Practical systems. IEE Electronics Communication Engineering Journal, Vol. 8, No. 2, pp. 57-78, April 1996.
[57] G.M. Blair. A review of the Discrete Fourier Transform. Part 1: Manipulating the power of two. IEE Electronics and Communication Engineering Journal, Vol. 7, No. 4, pp. 169-176, August 1995.
[58] G.M. Blair. A review of the Discrete Fourier Transform. Part 2: Non-radix algorithms, real transforms and noise. IEE Electronics Communication Engineering Journal, Vol. 7, No. 5, pp. 187-194, October 1995.
[59] J.A. Cadzow. Blind deconvolution via cumulant extrema. IEEE Signal Processing Magazine, Vol. 13, No. 3, pp. 24-42, May 1996.
[60] J. Cadzow. Signal processing via least squares error modelling. IEEE ASSP Magazine, Vol. 7, No. 4, pp. 12-31, October 1990.
[61] G. Cain, A. Yardim, D. Morling. All-Thru DSP Provision, Essential for the Modern EE. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 93, pp. I-4 to I-9, Minneapolis, 1993.
[62] C. Cellier. Lossless audio data compression for real time applications. 95th AES Convention, New York, Preprint 3780, October 1993.
[63] S. Chand and S.L. Chiu (editors). Special Issue on Fuzzy Logic with Engineering Applications. Proceedings of the IEEE, Vol. 83, No. 3, pp. 343-483, March 1995.
[64] R. Chellappa, C.L. Wilson and S. Sirohey. Human and machine recognition of faces. Proceedings of the IEEE, Vol. 83, No. 5, pp. 705-740, May 1995.
[65] J. Crowcroft. The Internet: a tutorial. IEE Electronics Communication Engineering Journal, Vol. 8, No. 3, pp. 113-122, June 1996.
[66] J.W. Cooley. How the FFT gained acceptance. IEEE Signal Processing Magazine, Vol. 9, No. 1, pp. 10-13, January 1992.
[67] J.R. Deller, Jr. Tom, Dick and Mary discover the DFT. IEEE Signal Processing Magazine, Vol. 11, No. 2, pp. 36-50, April 1994.
[68] S.J. Elliot and P.A. Nelson. Active Noise Control. IEEE Signal Processing Magazine, Vol. 10, No. 4, pp. 12-35, October 1993.
[69] L.J. Eriksson. Development of the filtered-U algorithm for active noise control. Journal of the Acoustical Society of America, Vol. 89, No. 1, pp. 27-265, 1991.
[70] H. Fan. A (new) Ohio yankee in King Gustav's country. IEEE Signal Processing Magazine, Vol. 12, No. 2, pp. 38-40, March 1995.
[71] P.L. Feintuch. An adaptive recursive LMS filter. Proceedings of the IEEE, Vol. 64, No. 11, pp. 1622-1624, November 1976.
[72] D. Fisher. Coding MPEG1 image data on compact discs. Electronic Product Design (UK), Vol. 14, No. 11, pp. 26-33, November 1993.


[73] H. Fletcher and W.A. Munson. Loudness, its definition, measurement and calculation. Journal of the Acoustical Society of America, Vol. 70, pp. 1646-1654, 1933.
[74] M. Fontaine and D.G. Smith. Bandwidth allocation and connection admission control in ATM networks. IEE Electronics Communication Engineering Journal, Vol. 8, No. 4, pp. 156-164, August 1996.
[75] W. Gardner. Exploitation of spectral redundancy in cyclostationary signals. IEEE Signal Processing Magazine, Vol. 8, No. 2, pp. 14-36, April 1991.
[76] H. Gish and M. Schmidt. Text independent speaker identification. Vol. 11, No. 4, pp. 18-32, October 1994.
[77] P.M. Grant. Signal processing hardware and software. IEEE Signal Processing Magazine, Vol. 13, No. 1, pp. 86-88, January 1996.
[78] S. Harris. The effects of sampling clock jitter on Nyquist sampling analog to digital converters, and on oversampling delta sigma ADCs. Journal of the Audio Engineering Society, July 1990.
[79] S. Heath. Multimedia standards and interoperability. Electronic Product Design (UK), Vol. 15, No. 9, pp. 33-37, November 1993.
[80] F. Hlawatsch and G.F. Boudreaux-Bartels. Linear and quadratic time-frequency signal representations. IEEE Signal Processing Magazine, Vol. 9, No. 2, pp. 21-67, April 1992.
[81] D.R. Hush and B.G. Horne. Progress in supervised neural networks. IEEE Signal Processing Magazine, Vol. 10, No. 1, pp. 8-39, January 1993.
[82] Special Issue on DSP Education. IEEE Signal Processing Magazine, Vol. 9, No. 4, October 1992.
[83] Special Issue on Fuzzy Logic with Engineering Applications. Proceedings of the IEEE, Vol. 83, No. 3, March 1995.
[84] A. Hoogendoorn. Digital Compact Cassette. Proceedings of the IEEE, Vol. 82, No. 10, pp. 1479-1489, October 1994.
[85] B. Jabbari (editor). Special Issue on Wireless Networks for Mobile and Personal Communications. Proceedings of the IEEE, Vol. 82, No. 9, September 1994.
[86] D.L. Jaggard (editor). Special Section on Fractals in Electrical Engineering. Proceedings of the IEEE, Vol. 81, No. 10, pp. 1423-1523, October 1993.
[87] N. Jayant, J. Johnston, R. Safranek. Signal compression based on models of human perception. Proceedings of the IEEE, Vol. 81, No. 10, pp. 1385-1382, October 1993.
[88] C.R. Johnson. Yet still more on the interaction of adaptive filtering, identification, and control. IEEE Signal Processing Magazine, Vol. 12, No. 2, pp. 22-37, March 1995.
[89] R.K. Jurgen. Broadcasting with Digital Audio. IEEE Spectrum, Vol. 33, No. 3, pp. 52-59, March 1996.
[90] S.M. Kay and S.L. Marple. Spectrum Analysis - A Modern Perspective. Proceedings of the IEEE, Vol. 69, No. 11, pp. 1380-1419, November 1981.
[91] K. Karnofsky. Speeding DSP algorithm design. IEEE Spectrum, Vol. 33, No. 7, pp. 79-82, July 1996.
[92] W. Klippel. Compensation for non-linear distortion of horn loudspeakers by digital signal processing. Journal of the Audio Engineering Society, Vol. 44, No. 11, pp. 964-972, November 1996.
[93] P. Kraniauskas. A plain man's guide to the FFT. IEEE Signal Processing Magazine, Vol. 11, No. 2, pp. 24-36, April 1994.
[94] F. Kretz and F. Cola. Standardizing Hypermedia Information Objects. IEEE Communications Magazine, May 1992.

[95] M. Kunt (Editor). Special Issue on Digital Television, Part 1: Technologies. Proceedings of the IEEE, Vol. 83, No. 6, June 1995.
[96] M. Kunt (Editor). Special Issue on Digital Television, Part 2: Hardware and Applications. Proceedings of the IEEE, Vol. 83, No. 7, July 1995.
[97] T.I. Laakso, V. Valimaki, M. Karjalainen and U.K. Laine. Splitting the unit delay. IEEE Signal Processing Magazine, Vol. 13, No. 1, pp. 30-60, January 1996.
[98] T.I. Laakso, V. Valimaki, M. Karjalainen, U.K. Laine. Splitting the Unit Delay. IEEE Signal Processing Magazine, Vol. 13, No. 1, pp. 30-60, January 1996.
[99] P. Lapsley and G. Blalock. How to estimate DSP processor performance. IEEE Spectrum, Vol. 33, No. 7, pp. 74-78, July 1996.
[100] V.O.K. Li and X. Qui. Personal communication systems. Proceedings of the IEEE, Vol. 83, No. 9, pp. 1210-1243, September 1995.
[101] R.P. Lippmann. An introduction to computing with neural nets. IEEE ASSP Magazine, Vol. 4, No. 2, pp. 4-22, April 1987.
[102] G.C.P. Lokhoff. DCC: Digital Compact Cassette. IEEE Transactions on Consumer Electronics, Vol. 37, No. 3, pp. 702-706, August 1991.
[103] H. Lou. Implementing the Viterbi Algorithm. IEEE Signal Processing Magazine, Vol. 12, No. 5, pp. 42-52, September 1995.
[104] J. Lipoff. Personal communications networks bridging the gap between cellular and cordless phones. Proceedings of the IEEE, Vol. 82, No. 4, pp. 564-571, April 1994.
[105] M. Liou. Overview of the p*64 kbit/s Video Coding Standard. Communications of the ACM, April 1991.
[106] G-K. Ma and F.J. Taylor. Multiplier policies for digital signal processing. IEEE ASSP Magazine, Vol. 7, No. 1, pp. 6-20, January 1990.
[107] Y. Mahieux, G. Le Tourneur and A. Saliou. A microphone array for multimedia workstations. Journal of the Audio Engineering Society, Vol. 44, No. 5, pp. 331-353, May 1996.
[108] D.T. Magill, F.D. Natali and G.P. Edwards. Spread-spectrum technology for commercial applications. Proceedings of the IEEE, Vol. 82, No. 4, pp. 572-584, April 1994.
[109] V.J. Mathews. Adaptive Polynomial Filters. IEEE Signal Processing Magazine, Vol. 8, No. 3, pp. 10-26, July 1991.
[110] N. Morgan and H. Bourland. Neural networks for statistical recognition of continuous speech. Proceedings of the IEEE, Vol. 83, No. 5, pp. 742-770, May 1995.
[111] N. Morgan and H. Bourland. Continuous speech recognition. IEEE Signal Processing Magazine, Vol. 12, No. 3, pp. 24-42, May 1995.
[112] N. Morgan and H. Bourland. Neural networks for statistical recognition of speech. Proceedings of the IEEE, Vol. 83, No. 5, pp. 742-770, May 1995.
[113] A. Miller. From here to ATM. IEEE Spectrum, Vol. 31, No. 6, pp. 20-24, June 1994.
[114] Y.K. Muthusamy, E. Barnard and R.A. Cole. IEEE Signal Processing Magazine, Vol. 11, No. 4, pp. 33-41, October 1994.
[115] R.N. Mutagi. Pseudo noise sequences for engineers. IEE Electronics Communication Engineering Journal, Vol. 8, No. 2, pp. 79-87, April 1996.


[116] R.N. Mutagi. Pseudo noise sequences for engineers. IEE Electronics and Communication Engineering Journal, Vol. 8, No. 2, pp. 79-87, April 1996.
[117] C.L. Nikias and J.M. Mendel. Signal processing with higher order statistics. IEEE Signal Processing Magazine, Vol. 10, No. 3, pp. 10-37, July 1993.
[118] P.A. Nelson, F. Orduna-Bustamante, D. Engler, H. Hamada. Experiments on a system for the synthesis of virtual acoustic sources. Journal of the Audio Engineering Society, Vol. 44, No. 11, pp. 973-989, November 1996.
[119] P.A. Nelson, F. Orduna-Bustamante, H. Hamada. Multichannel signal processing techniques in the reproduction of sound. Journal of the Audio Engineering Society, Vol. 44, No. 11, pp. 973-989, November 1996.
[120] P. Noll. Digital audio coding for visual communications. Proceedings of the IEEE, Vol. 83, No. 6, pp. 925-943, June 1995.
[121] K.J. Olejniczak and G.T. Heydt (editors). Special Section on the Hartley Transform. Proceedings of the IEEE, Vol. 82, No. 3, pp. 372-447, March 1994.
[122] J. Picone. Continuous speech recognition using hidden Markov models. IEEE ASSP Magazine, Vol. 7, No. 3, pp. 26-41, July 1990.
[123] M. Poletti. The design of encoding functions for stereophonic and polyphonic sound systems. Journal of the Audio Engineering Society, Vol. 44, No. 11, pp. 948-963, November 1996.
[124] P.A. Ramsdale. The development of personal communications. IEE Electronics Communication Engineering Journal, Vol. 8, No. 3, pp. 143-151, June 1996.
[125] P. Regalia, S.K. Mitra, P.P. Vaidyanathan. The digital all-pass filter: a versatile building block. Proceedings of the IEEE, Vol. 76, No. 1, pp. 19-37, January 1988.
[126] D.W. Robinson and R.S. Dadson. A redetermination of the equal loudness relations for pure tones. British Journal of Applied Physics, Vol. 7, pp. 166-181, 1956.
[127] R.W. Robinson. Tools for Embedded Digital Signal Processing. IEEE Spectrum, Vol. 29, No. 11, pp. 81-84, November 1992.
[128] C.W. Sanchez. An Understanding and Implementation of the SCMS Serial Copy Management System for Digital Audio Transmission. 94th AES Convention, Preprint #3518, March 1993. R. Schafer and T. Sikora. Digital video coding standards and their role in video communications. Proceedings of the IEEE, Vol. 83, No. 6, pp. 907-924, June 1995.
[129] C.E. Shannon. A mathematical theory of communication. The Bell System Technical Journal, Vol. 27, pp. 379-423, July 1948. (Reprinted in Claude Elwood Shannon: Collected Papers [41].)
[130] C.E. Shannon. The Bandwagon (Editorial). Institute of Radio Engineers, Transactions on Information Theory, Vol. IT-2, p. 3, March 1956. (Reprinted in Claude Elwood Shannon: Collected Papers [41].)
[131] J.J. Shynk. Frequency domain and multirate adaptive filtering. IEEE Signal Processing Magazine, Vol. 9, No. 1, pp. 10-37, January 1992.
[132] J.J. Shynk. Adaptive IIR filtering. IEEE ASSP Magazine, Vol. 6, No. 2, pp. 4-21, April 1989.
[133] H.F. Silverman and D.P. Morgan. The application of dynamic programming to connected speech recognition. IEEE ASSP Magazine, Vol. 7, No. 3, pp. 6-25, July 1990.
[134] J.L. Smith. Data compression and perceived quality. IEEE Signal Processing Magazine, Vol. 12, No. 5, pp. 58-59, September 1995.

[135] A.S. Spanias. Speech coding: a tutorial review. Proceedings of the IEEE, Vol. 82, No. 10, pp. 1541-1582, October 1994.
[136] A.O. Steinhardt. Householder transforms in signal processing. IEEE Signal Processing Magazine, Vol. 5, No. 3, pp. 4-12, July 1988.
[137] R.W. Stewart. Practical DSP for Scientists. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 93, pp. I-32 to I-35, Minneapolis, 1993.
[138] C. Stone. Infrasound. Audio Media, Issue 55, AM Publishing Ltd, London, June 1995.
[139] J.A. Storer. Special Section on Data Compression. Proceedings of the IEEE, Vol. 82, No. 6, pp. 856-955, June 1994.
[140] P. Strobach. New forms of Levinson and Schur algorithms. IEEE Signal Processing Magazine, Vol. 8, No. 1, pp. 12-36, January 1991.
[141] J.R. Treichler, I. Fijalkow, and C.R. Johnson, Jr. Fractionally spaced equalizers. IEEE Signal Processing Magazine, Vol. 13, No. 3, pp. 65-81, May 1996.
[142] B.D. Van Veen and K. Buckley. Beamforming: A Versatile Approach to Spatial Filtering. IEEE ASSP Magazine, Vol. 5, No. 2, pp. 4-24, April 1988.
[143] V. Valimaki, J. Huopaniemi, M. Karjalainen and Z. Janosy. Physical modeling of plucked string instruments with application to real time sound synthesis. Journal of the Audio Engineering Society, Vol. 44, No. 5, pp. 331-353, May 1996.
[144] V.D. Vaughn and T.S. Wilkinson. System considerations for multispectral image compression designs. IEEE Signal Processing Magazine, Vol. 12, No. 1, pp. 19-31, January 1995.
[145] S.A. White. Applications of distributed arithmetic to digital signal processing: a tutorial review. IEEE ASSP Magazine, Vol. 6, No. 3, pp. 4-19, July 1989.
[147] W.H.W. Tuttlebee. Cordless telephones and cellular radios: synergies of DECT and GSM. IEE Electronics Communication Engineering Journal, Vol. 8, No. 5, pp. 213-223, October 1996.
[148] Working Group on Communication Aids for the Hearing Impaired. Speech perception aids for hearing impaired people: current status and needed research. Journal of the Acoustical Society of America, Vol. 90, No. 2, 1991.
[149] R.D. Wright. Signal processing hearing aids. Hearing Aid Audiology Group, Special Publication, British Society of Audiology, London, 1992.
[150] F. Wylie. Digital audio data compression. IEE Electronics and Communication Engineering Journal, pp. 5-10, February 1995.
[151] I. Wickelgren. The Strange Senses of Other Species. IEEE Spectrum, Vol. 33, No. 3, pp. 32-37, March 1996.
[152] B. Widrow et al. Adaptive Noise Cancellation: Principles and Applications. Proceedings of the IEEE, Vol. 63, pp. 1692-1716, 1975.
[153] B. Widrow et al. Stationary and non-stationary learning characteristics of the LMS adaptive filter. Proceedings of the IEEE, Vol. 64, pp. 1151-1162, 1976.
[154] T. Yamamoto, K. Koguchi, M. Tsuchida. Proposal of a 96 kHz sampling digital audio. 97th AES Convention, October 1994, Audio Engineering Society Preprint 3884 (F-5).
[155] T. Yoshida. The rewritable minidisc system. Proceedings of the IEEE, Vol. 82, No. 10, pp. 1492-1500, October 1994.
[156] Y.Q. Zhang, W. Li, M.L. Liou (Editors). Special Issue on Advances in Image and Video Compression. Proceedings of the IEEE, Vol. 83, No. 2, February 1995.


[157] British Society of Audiology. Recommended procedures for pure tone audiometry. British Journal of Audiometry, Vol. 15, pp. 213-216, 1981.
[158] IEC-958/IEC-85, Digital Audio Interface / Amendment. International Electrotechnical Commission, 1990.
[159] DSP Education Session. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 92, pp. 73-109, San Francisco, 1992.
[160] Special Section on the Hartley Transform (Edited by K.J. Olejniczak and G.T. Heydt). Proceedings of the IEEE, Vol. 82, No. 3, March 1994.
[161] Special Issue on Advances in Image and Video Compression (Edited by Y.Q. Zhang, W. Li and M.L. Liou). Proceedings of the IEEE, Vol. 83, No. 2, February 1995.
[162] Special Issue on Digital Television, Part 2: Hardware and Applications (Editor M. Kunt). Proceedings of the IEEE, Vol. 83, No. 7, July 1995.
[163] Special Issue on Electrical Therapy of Cardiac Arrhythmias (Edited by R.E. Ideker and R.C. Barr). Proceedings of the IEEE, Vol. 84, No. 3, March 1996.
[164] Special Section on Data Compression (Editor J.A. Storer). Proceedings of the IEEE, Vol. 82, No. 6, June 1994.
[165] Special Section on Field Programmable Gate Arrays (Editor A. El Gamal). Proceedings of the IEEE, Vol. 81, No. 7, July 1993.
[166] Special Issue on Wireless Networks for Mobile and Personal Communications (Editor B. Jabbari). Proceedings of the IEEE, Vol. 82, No. 9, September 1994.
[167] Special Issue on Digital Television, Part 1: Technologies (Editor M. Kunt). Proceedings of the IEEE, Vol. 83, No. 6, June 1995.
[168] Special Issue on Time-Frequency Analysis (Editor P.J. Loughlin). Proceedings of the IEEE, Vol. 84, No. 9, September 1996.
[169] Technology 1995. IEEE Spectrum, Vol. 32, No. 1, January 1995.
