Guide to Meteorological Instruments and Methods of Observation

WMO-No. 8


Seventh edition 2008

WMO-No. 8
© World Meteorological Organization, 2008

The right of publication in print, electronic and any other form and in any language is reserved by WMO. Short extracts from WMO publications may be reproduced without authorization, provided that the complete source is clearly indicated. Editorial correspondence and requests to publish, reproduce or translate this publication in part or in whole should be addressed to:

Chairperson, Publications Board
World Meteorological Organization (WMO)
7 bis, avenue de la Paix
P.O. Box No. 2300
CH-1211 Geneva 2, Switzerland

Tel.: +41 (0) 22 730 84 03
Fax: +41 (0) 22 730 80 40
E-mail:

ISBN 978-92-63-10008-5

NOTE

The designations employed in WMO publications and the presentation of material in this publication do not imply the expression of any opinion whatsoever on the part of the Secretariat of WMO concerning the legal status of any country, territory, city or area, or of its authorities, or concerning the delimitation of its frontiers or boundaries. Opinions expressed in WMO publications are those of the authors and do not necessarily reflect those of WMO. The mention of specific companies or products does not imply that they are endorsed or recommended by WMO in preference to others of a similar nature which are not mentioned or advertised.


Foreword

One of the purposes of the World Meteorological Organization (WMO) is to coordinate the activities of its 188 Members in the generation of data and information on weather, climate and water, according to internationally agreed standards. With this in mind, each session of the World Meteorological Congress adopts Technical Regulations which lay down the meteorological practices and procedures to be followed by WMO Members. These Technical Regulations are supplemented by a number of Manuals and Guides which describe in more detail the practices, procedures and specifications that Members are requested to follow and implement. While Manuals contain mandatory practices, Guides such as this one contain recommended practices.

The first edition of the Guide to Meteorological Instruments and Methods of Observation was published in 1954 and consisted of twelve chapters. Since then, standardization has remained a key concern of the Commission for Instruments and Methods of Observation (CIMO) activities, and CIMO has periodically reviewed the contents of the Guide, making recommendations for additions and amendments whenever appropriate. The present, seventh, edition is a fully revised version which includes additional topics and chapters reflecting recent technological developments. Its purpose, as with the previous editions, is to give comprehensive and up-to-date guidance on the most effective practices for carrying out meteorological observations and measurements. This edition was prepared through the collaborative efforts of 42 experts from 17 countries and was adopted by the fourteenth session of CIMO (Geneva, December 2006).

The Guide describes most instruments, systems and techniques in regular use, from the simplest to the most complex and sophisticated, but does not attempt to deal with methods and instruments used only for research or experimentally. Furthermore, the Guide is not intended to be a detailed instruction manual for use by observers and technicians; rather, it is intended to provide the basis for the preparation of manuals by National Meteorological and Hydrological Services (NMHSs) or other interested users operating observing systems, to meet their specific needs. However, no attempt is made to specify the fully detailed design of instruments, since to do so might hinder their further development. It was instead considered preferable to restrict standardization to the essential requirements and to confine recommendations to those features which are generally most common to various configurations of a given instrument or measurement system.

Although the Guide is written primarily for NMHSs, many other organizations and research and educational institutions taking meteorological observations have found it useful, so their requirements have been kept in mind in the preparation of the Guide. Additionally, many instrument manufacturers have recognized the usefulness of the Guide in the development and production of instruments and systems especially suited to Members' needs. Because of the considerable demand for this publication, a decision was taken to make it available on the WMO website to all interested users. Therefore, on behalf of WMO, I wish to express my gratitude to all those NMHSs, technical commissions, expert teams and individuals who have contributed to this publication.

(M. Jarraud) Secretary-General



Part I. Measurement of Meteorological Variables

CHAPTER 1. General
CHAPTER 2. Measurement of temperature
CHAPTER 3. Measurement of atmospheric pressure
CHAPTER 4. Measurement of humidity
CHAPTER 5. Measurement of surface wind
CHAPTER 6. Measurement of precipitation
CHAPTER 7. Measurement of radiation
CHAPTER 8. Measurement of sunshine duration
CHAPTER 9. Measurement of visibility
CHAPTER 10. Measurement of evaporation
CHAPTER 11. Measurement of soil moisture
CHAPTER 12. Measurement of upper-air pressure, temperature and humidity
CHAPTER 13. Measurement of upper wind
CHAPTER 14. Present and past weather; state of the ground
CHAPTER 15. Observation of clouds
CHAPTER 16. Measurement of ozone
CHAPTER 17. Measurement of atmospheric composition



Part II. Observing Systems

CHAPTER 1. Measurements at automatic weather stations
CHAPTER 2. Measurements and observations at aeronautical meteorological stations
CHAPTER 3. Aircraft observations
CHAPTER 4. Marine observations
CHAPTER 5. Special profiling techniques for the boundary layer and the troposphere
CHAPTER 6. Rocket measurements in the stratosphere and mesosphere
CHAPTER 7. Locating the sources of atmospherics
CHAPTER 8. Satellite observations
CHAPTER 9. Radar measurements
CHAPTER 10. Balloon techniques
CHAPTER 11. Urban observations
CHAPTER 12. Road meteorological measurements

Part III. Quality Assurance and Management of Observing Systems

CHAPTER 1. Quality management
CHAPTER 2. Sampling meteorological variables
CHAPTER 3. Data reduction
CHAPTER 4. Testing, calibration and intercomparison
CHAPTER 5. Training of instrument specialists

List of contributors to the Guide


Part I. Measurement of Meteorological Variables

Contents

CHAPTER 1. GENERAL
1.1 Meteorological observations
1.2 Meteorological observing systems
1.3 General requirements of a meteorological station
1.4 General requirements of instruments
1.5 Measurement standards and definitions
1.6 Uncertainty of measurements
Annex 1.A. Regional centres
Annex 1.B. Operational measurement uncertainty requirements and instrument performance
Annex 1.C. Station exposure description
References and further reading

CHAPTER 2. MEASUREMENT OF TEMPERATURE
2.1 General
2.2 Liquid-in-glass thermometers
2.3 Mechanical thermographs
2.4 Electrical thermometers
2.5 Radiation shields
Annex. Defining the fixed points of the International Temperature Scale of 1990
References and further reading

CHAPTER 3. MEASUREMENT OF ATMOSPHERIC PRESSURE
3.1 General
3.2 Mercury barometers
3.3 Electronic barometers
3.4 Aneroid barometers
3.5 Barographs
3.6 Bourdon-tube barometers
3.7 Barometric change
3.8 General exposure requirements
3.9 Barometer exposure
3.10 Comparison, calibration and maintenance
3.11 Adjustment of barometer readings to other levels
3.12 Pressure tendency and pressure tendency characteristic
Annex 3.A. Correction of barometer readings to standard conditions
Annex 3.B. Regional standard barometers
References and further reading

CHAPTER 4. MEASUREMENT OF HUMIDITY
4.1 General
4.2 The psychrometer
4.3 The hair hygrometer
4.4 The chilled-mirror dewpoint hygrometer
4.5 The lithium chloride heated condensation hygrometer (dew cell)
4.6 Electrical resistive and capacitive hygrometers
4.7 Hygrometers using absorption of electromagnetic radiation
4.8 Safety
4.9 Standard instruments and calibration
Annex 4.A. Definitions and specifications of water vapour in the atmosphere
Annex 4.B. Formulae for the computation of measures of humidity
References and further reading

CHAPTER 5. MEASUREMENT OF SURFACE WIND
5.1 General
5.2 Estimation of wind
5.3 Simple instrumental methods
5.4 Cup and propeller sensors
5.5 Wind-direction vanes
5.6 Other wind sensors
5.7 Sensors and sensor combinations for component resolution
5.8 Data-processing methods
5.9 Exposure of wind instruments
5.10 Calibration and maintenance
Annex. The effective roughness length
References and further reading

CHAPTER 6. MEASUREMENT OF PRECIPITATION
6.1 General
6.2 Siting and exposure
6.3 Non-recording precipitation gauges
6.4 Precipitation gauge errors and corrections
6.5 Recording precipitation gauges
6.6 Measurement of dew, ice accumulation and fog precipitation
6.7 Measurement of snowfall and snow cover
Annex 6.A. Precipitation intercomparison sites
Annex 6.B. Suggested correction procedures for precipitation measurements
References and further reading

CHAPTER 7. MEASUREMENT OF RADIATION
7.1 General
7.2 Measurement of direct solar radiation
7.3 Measurement of global and diffuse sky radiation
7.4 Measurement of total and long-wave radiation
7.5 Measurement of special radiation quantities
7.6 Measurement of UV radiation
Annex 7.A. Nomenclature of radiometric and photometric quantities
Annex 7.B. Meteorological radiation quantities, symbols and definitions
Annex 7.C. Specifications for world, regional and national radiation centres
Annex 7.D. Useful formulae
Annex 7.E. Diffuse sky radiation – correction for a shading ring
References and further reading


CHAPTER 8. MEASUREMENT OF SUNSHINE DURATION
8.1 General
8.2 Instruments and sensors
8.3 Exposure of sunshine detectors
8.4 General sources of error
8.5 Calibration
8.6 Maintenance
Annex. Algorithm to estimate sunshine duration from direct global irradiance measurements
References and further reading

CHAPTER 9. MEASUREMENT OF VISIBILITY
9.1 General
9.2 Visual estimation of meteorological optical range
9.3 Instrumental measurement of the meteorological optical range
References and further reading

CHAPTER 10. MEASUREMENT OF EVAPORATION
10.1 General
10.2 Atmometers
10.3 Evaporation pans and tanks
10.4 Evapotranspirometers (lysimeters)
10.5 Estimation of evaporation from natural surfaces
References and further reading

CHAPTER 11. MEASUREMENT OF SOIL MOISTURE
11.1 General
11.2 Gravimetric direct measurement of soil water content
11.3 Soil water content: indirect methods
11.4 Soil water potential instrumentation
11.5 Remote sensing of soil moisture
11.6 Site selection and sample size
References and further reading

CHAPTER 12. MEASUREMENT OF UPPER-AIR PRESSURE, TEMPERATURE AND HUMIDITY
12.1 General
12.2 Radiosonde electronics
12.3 Temperature sensors
12.4 Pressure sensors
12.5 Relative humidity sensors
12.6 Ground station equipment
12.7 Radiosonde operations
12.8 Radiosonde errors
12.9 Comparison, calibration and maintenance
12.10 Computations and reporting
Annex 12.A. Accuracy requirements (standard error) for upper-air measurements for synoptic meteorology, interpreted for conventional upper-air and wind measurements


Annex 12.B. Performance limits for upper wind and radiosonde temperature, relative humidity and geopotential height
Annex 12.C. Guidelines for organizing radiosonde intercomparisons and for the establishment of test sites
References and further reading

CHAPTER 13. MEASUREMENT OF UPPER WIND
13.1 General
13.2 Upper-wind sensors and instruments
13.3 Measurement methods
13.4 Exposure of ground equipment
13.5 Sources of error
13.6 Comparison, calibration and maintenance
13.7 Corrections
References and further reading

CHAPTER 14. PRESENT AND PAST WEATHER; STATE OF THE GROUND
14.1 General
14.2 Observation of present and past weather
14.3 State of the ground
14.4 Special phenomena
Annex. Criteria for light, moderate and heavy precipitation intensity
References and further reading

CHAPTER 15. OBSERVATION OF CLOUDS
15.1 General
15.2 Estimation and observation of cloud amount, height and type
15.3 Instrumental measurements of cloud amount
15.4 Measurement of cloud height using a searchlight
15.5 Measurement of cloud height using a balloon
15.6 Rotating-beam ceilometer
15.7 Laser ceilometer
References and further reading

CHAPTER 16. MEASUREMENT OF OZONE
16.1 General
16.2 Surface ozone measurements
16.3 Total ozone measurements
16.4 Measurements of the vertical profile of ozone
16.5 Corrections to ozone measurements
16.6 Aircraft and satellite observations
Annex 16.A. Units for total and local ozone
Annex 16.B. Measurement theory
References and further reading

CHAPTER 17. MEASUREMENT OF ATMOSPHERIC COMPOSITION
17.1 General
17.2 Measurement of specific variables
17.3 Quality assurance
References and further reading



1.1 METEOROLOGICAL OBSERVATIONS

1.1.1 General

Meteorological (and related environmental and geophysical) observations are made for a variety of reasons. They are used for the real-time preparation of weather analyses, forecasts and severe weather warnings, for the study of climate, for local weather-dependent operations (for example, local aerodrome flying operations, construction work on land and at sea), for hydrology and agricultural meteorology, and for research in meteorology and climatology. The purpose of the Guide to Meteorological Instruments and Methods of Observation is to support these activities by giving advice on good practices for meteorological measurements and observations.

There are many other sources of additional advice, and users should refer to the references placed at the end of each chapter for a bibliography of theory and practice relating to instruments and methods of observation. The references also contain national practices, national and international standards, and specific literature. They also include reports published by the World Meteorological Organization (WMO) for the Commission for Instruments and Methods of Observation (CIMO) on technical conferences, instrumentation and international comparisons of instruments. Many other Manuals and Guides issued by WMO refer to particular applications of meteorological observations (see especially those relating to the Global Observing System (WMO, 2003a; 1989), aeronautical meteorology (WMO, 1990), hydrology (WMO, 1994), agricultural meteorology (WMO, 1981) and climatology (WMO, 1983)).

Quality assurance and maintenance are of special interest for instrument measurements. Throughout this Guide many recommendations are made in order to meet the stated performance requirements. In particular, Part III of this Guide is dedicated to quality assurance and management of observing systems. It is recognized that quality management and training of instrument specialists are of the utmost importance. Therefore, on the recommendation of CIMO (at its ninth session (1985), through Recommendation 19), several regional associations of WMO have set up Regional Instrument Centres (RICs) to maintain standards and provide advice. Their terms of reference and locations are given in Annex 1.A.

The definitions and standards stated in this Guide (see section 1.5.1) will always conform to internationally adopted standards. Basic documents to be referred to are the International Meteorological Vocabulary (WMO, 1992a) and the International Vocabulary of Basic and General Terms in Metrology (ISO, 1993a).

1.1.2 Representativeness

The representativeness of an observation is the degree to which it accurately describes the value of the variable needed for a specific purpose. Therefore, it is not a fixed quality of any observation, but results from a joint appraisal of instrumentation, measurement interval and exposure against the requirements of some particular application. For instance, synoptic observations should typically be representative of an area up to 100 km around the station, but for small-scale or local applications the considered area may have dimensions of 10 km or less.

In particular, applications have their own preferred timescales and space scales for averaging, station density and resolution of phenomena: small for agricultural meteorology, large for global long-range forecasting. Forecasting scales are closely related to the timescales of the phenomena; thus, shorter-range weather forecasts require more frequent observations from a denser network over a limited area in order to detect any small-scale phenomena and their quick development. Using various sources (WMO, 2003a; 2001; Orlanski, 1975), horizontal meteorological scales may be classified as follows, with a factor-two uncertainty:
(a) Microscale (less than 100 m) for agricultural meteorology, for example, evaporation;
(b) Toposcale or local scale (100 m–3 km), for example, air pollution, tornadoes;
(c) Mesoscale (3–100 km), for example, thunderstorms, sea and mountain breezes;
(d) Large scale (100–3 000 km), for example, fronts, various cyclones, cloud clusters;
(e) Planetary scale (larger than 3 000 km), for example, long upper-tropospheric waves.
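The scale classification above amounts to a simple lookup. The following Python sketch is our own illustration (it is not part of the Guide); the category names and boundaries follow the list above, bearing in mind that the boundaries themselves carry a factor-two uncertainty:

```python
def horizontal_scale(length_m: float) -> str:
    """Classify a horizontal length scale in metres, per the list above.

    Boundaries carry a factor-two uncertainty, so labels are indicative.
    """
    if length_m < 100:
        return "microscale"       # e.g. evaporation (agricultural meteorology)
    if length_m < 3_000:
        return "toposcale"        # local scale: air pollution, tornadoes
    if length_m < 100_000:
        return "mesoscale"        # thunderstorms, sea and mountain breezes
    if length_m < 3_000_000:
        return "large scale"      # fronts, cyclones, cloud clusters
    return "planetary scale"      # long upper-tropospheric waves
```

A 50 km thunderstorm complex, for example, falls in the mesoscale.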



Section 1.6 discusses the required and achievable uncertainties of instrument systems. The stated achievable uncertainties can be obtained with good instrument systems that are properly operated, but are not always obtained in practice. Good observing practices require skill, training, equipment and support, which are not always available in sufficient degree. The measurement intervals required vary by application: minutes for aviation, hours for agriculture, and days for climate description. Data storage arrangements are a compromise between available capacity and user needs.

Good exposure, which is representative on scales from a few metres to 100 km, is difficult to achieve (see section 1.3). Errors of unrepresentative exposure may be much larger than those expected from the instrument system in isolation. A station in a hilly or coastal location is likely to be unrepresentative on the large scale or mesoscale. However, good homogeneity of observations in time may enable users to employ data even from unrepresentative stations for climate studies.

1.1.3 Metadata


The purpose of this Guide and related WMO publications is to ensure the reliability of observations by standardization. However, local resources and circumstances may cause deviations from the agreed standards of instrumentation and exposure. A typical example is that of regions with much snowfall, where the instruments are mounted higher than usual so that they can be useful in winter as well as summer.

Users of meteorological observations often need to know the actual exposure, type and condition of the equipment and its operation, and perhaps the circumstances of the observations. This is now particularly significant in the study of climate, in which detailed station histories have to be examined. Metadata (data about data) should be kept concerning all of the station establishment and maintenance matters described in section 1.3, and concerning changes which occur, including calibration and maintenance history and changes in exposure and staff (WMO, 2003b). Metadata are especially important for elements which are particularly sensitive to exposure, such as precipitation, wind and temperature. One very basic form of metadata is information on the existence, availability and quality of meteorological data and of the metadata about them.

1.2 METEOROLOGICAL OBSERVING SYSTEMS

The requirements for observational data may be met using in situ measurements or remote-sensing (including space-borne) systems, according to the ability of the various sensing systems to measure the elements needed. WMO (2003a) describes the requirements in terms of global, regional and national scales and according to the application area. The Global Observing System, designed to meet these requirements, is composed of the surface-based subsystem and the space-based subsystem. The surface-based subsystem comprises a wide variety of types of stations according to the particular application (for example, surface synoptic station, upper-air station, climatological station, and so on). The space-based subsystem comprises a number of spacecraft with on-board sounding missions and the associated ground segment for command, control and data reception. The succeeding paragraphs and chapters in this Guide deal with the surface-based system and, to a lesser extent, with the space-based subsystem. To derive certain meteorological observations by automated systems, for example, present weather, a so-called "multi-sensor" approach is necessary, where an algorithm is applied to compute the result from the outputs of several sensors.
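The multi-sensor idea can be sketched in a few lines. The following Python fragment is purely illustrative: the sensor inputs, thresholds and output labels are our own invention, and real present-weather algorithms and WMO code tables are far more elaborate.

```python
def present_weather(precip_rate_mm_h: float,
                    visibility_m: float,
                    air_temp_c: float) -> str:
    """Combine several sensor outputs into one present-weather label.

    Toy scheme: a precipitation detector, a visibility meter and an air
    thermometer feed a single algorithm, as in a multi-sensor approach.
    """
    if precip_rate_mm_h > 0.1:
        # Air temperature discriminates rain from snow in this toy scheme.
        return "snow" if air_temp_c <= 0.0 else "rain"
    if visibility_m < 1_000:
        return "fog"
    return "no significant weather"
```

The point is structural: no single sensor observes "present weather"; the observation is computed from several instrument outputs.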


1.3 GENERAL REQUIREMENTS OF A METEOROLOGICAL STATION

The requirements for elements to be observed according to the type of station and observing network are detailed in WMO (2003a). In this section, the observational requirements of a typical climatological station or a surface synoptic network station are considered. The following elements are observed at a station making surface observations (the chapters refer to Part I of the Guide):

Present weather (Chapter 14)
Past weather (Chapter 14)
Wind direction and speed (Chapter 5)
Cloud amount (Chapter 15)
Cloud type (Chapter 15)
Cloud-base height (Chapter 15)
Visibility (Chapter 9)
Temperature (Chapter 2)
Relative humidity (Chapter 4)
CHAPTER 1. GENERAL


Atmospheric pressure (Chapter 3)
Precipitation (Chapter 6)
Snow cover (Chapter 6)
Sunshine and/or solar radiation (Chapters 7, 8)
Soil temperature (Chapter 2)
Evaporation (Chapter 10)


Instruments exist which can measure all of these elements, except cloud type. However, with current technology, instruments for present and past weather, cloud amount and height, and snow cover are not able to make observations of the whole range of phenomena, whereas human observers are able to do so. Some meteorological stations take upper-air measurements (Part I, Chapters 12 and 13), measurements of soil moisture (Part I, Chapter 11), ozone (Part I, Chapter 16) and atmospheric composition (Part I, Chapter 17), and some make use of special instrument systems as described in Part II of this Guide. Details of observing methods and appropriate instrumentation are contained in the succeeding chapters of this Guide.

1.3.1 Automatic weather stations

Most of the elements required for synoptic, climatological or aeronautical purposes can be measured by automatic instrumentation (Part II, Chapter 1). As the capabilities of automatic systems increase, the ratio of purely automatic weather stations to observer-staffed weather stations (with or without automatic instrumentation) increases steadily. The guidance in the following paragraphs regarding siting and exposure, changes of instrumentation, and inspection and maintenance applies equally to automatic weather stations and staffed weather stations.

1.3.2 Observers

Meteorological observers are required for a number of reasons, as follows:
(a) To make synoptic and/or climatological observations to the required uncertainty and representativeness with the aid of appropriate instruments;
(b) To maintain instruments, metadata documentation and observing sites in good order;
(c) To code and dispatch observations (in the absence of automatic coding and communication systems);
(d) To maintain in situ recording devices, including the changing of charts when provided;
(e) To make or collate weekly and/or monthly records of climatological data where automatic systems are unavailable or inadequate;
(f) To provide supplementary or back-up observations when automatic equipment does not make observations of all required elements, or when it is out of service;
(g) To respond to public and professional enquiries.

Observers should be trained and/or certified by an authorized Meteorological Service to establish their competence to make observations to the required standards. They should have the ability to interpret instructions for the use of instrumental and manual techniques that apply to their own particular observing systems. Guidance on the instrument training requirements for observers is given in Part III, Chapter 5.

1.3.3 Siting and exposure

Site selection

Meteorological observing stations are designed so that representative measurements (or observations) can be taken according to the type of station involved. Thus, a station in the synoptic network should make observations to meet synoptic-scale requirements, whereas an aviation meteorological observing station should make observations that describe the conditions specific to the local (aerodrome) site. Where stations are used for several purposes, for example, aviation, synoptic and climatological purposes, the most stringent requirement will dictate the precise location of an observing site and its associated sensors. A detailed study on siting and exposure is published in WMO (1993a). As an example, the following considerations apply to the selection of site and instrument exposure requirements for a typical synoptic or climatological station in a regional or national network:
(a) Outdoor instruments should be installed on a level piece of ground, preferably no smaller than 25 m x 25 m where there are many installations, but in cases where there are relatively few installations (as in Figure 1.1)



Figure 1.1. Layout of an observing station in the northern hemisphere showing minimum distances between installations

[Figure: plan of the station enclosure showing the thermometer screen, soil thermometers at several depths (5 cm to 100 cm), two raingauges and a recording raingauge, a cup-counter anemometer on a slender 2 m pole, a sunshine recorder on a 2 m pillar, a grass minimum thermometer, and a weeded bare patch with a bare-soil minimum thermometer; installations are separated by minimum distances of about 1.5 m.]



the area may be considerably smaller, for example, 10 m x 7 m (the enclosure). The ground should be covered with short grass or a surface representative of the locality, and surrounded by open fencing or palings to exclude unauthorized persons. Within the enclosure, a bare patch of ground of about 2 m x 2 m is reserved for observations of the state of the ground and of soil temperature at depths of equal to or less than 20 cm (Part I, Chapter 2) (soil temperatures at depths greater than 20 cm can be measured outside this bare patch of ground). An example of the layout of such a station is given in Figure 1.1 (taken from WMO, 1989);
(b) There should be no steeply sloping ground in the vicinity, and the site should not be in a hollow. If these conditions are not met, the observations may show peculiarities of entirely local significance;
(c) The site should be well away from trees, buildings, walls or other obstructions. The distance of any such obstacle (including fencing) from the raingauge should not be less than twice the height of the object above the rim of the gauge, and preferably four times the height;
(d) The sunshine recorder, raingauge and anemometer must be exposed according to their requirements, preferably on the same site as the other instruments;
(e) It should be noted that the enclosure may not be the best place from which to estimate the wind speed and direction; another observing point, more exposed to the wind, may be desirable;
(f) Very open sites which are satisfactory for most instruments are unsuitable for raingauges. For such sites, the rainfall catch is reduced in conditions other than light winds and some degree of shelter is needed;
(g) If objects such as trees or buildings in the surroundings of the instrument enclosure, perhaps at some distance, obstruct the horizon significantly, alternative viewpoints should be selected for observations of sunshine or radiation;






(h) The position used for observing cloud and visibility should be as open as possible and command the widest possible view of the sky and the surrounding country;
(i) At coastal stations, it is desirable that the station command a view of the open sea. However, the station should not be too near the edge of a cliff because the wind eddies created by the cliff will affect the wind and precipitation measurements;
(j) Night observations of cloud and visibility are best made from a site unaffected by extraneous lighting.
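The rule above that an obstacle should be no closer to the raingauge than twice its height above the rim of the gauge, and preferably four times that height, is easy to check numerically. This Python sketch is our own; the function name and return labels are illustrative, not from the Guide:

```python
def raingauge_siting(obstacle_height_m: float,
                     rim_height_m: float,
                     distance_m: float) -> str:
    """Judge an obstacle distance against the 2x/4x height rule.

    The rule uses the obstacle's height above the rim of the gauge:
    at least twice that height away is acceptable, four times preferred.
    """
    effective_height = obstacle_height_m - rim_height_m
    if effective_height <= 0:
        return "acceptable"        # obstacle does not rise above the rim
    if distance_m >= 4 * effective_height:
        return "preferred"
    if distance_m >= 2 * effective_height:
        return "acceptable"
    return "too close"
```

For a 5 m fence and a gauge rim at 0.3 m, the minimum distance is 9.4 m and the preferred distance 18.8 m.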

It is obvious that some of the above considerations are somewhat contradictory and require compromise solutions. Detailed information appropriate to specific instruments and measurements is given in the succeeding chapters.

Coordinates of the station

The position of a station referred to in the World Geodetic System 1984 (WGS-84) Earth Geodetic Model 1996 (EGM96) must be accurately known and recorded (for an explanation of the WGS-84 and recording issues, see ICAO, 2002). The coordinates of a station are:
(a) The latitude in degrees with a resolution of 1 in 1 000;
(b) The longitude in degrees with a resolution of 1 in 1 000;
(c) The height of the station above mean sea level, namely, the elevation of the station, to the nearest metre.

Mean sea level (MSL) is defined in WMO (1992a). The fixed reference level of MSL should be a well-defined geoid, like the WGS-84 Earth Geodetic Model 1996 (EGM96). (Geoid: the equipotential surface of the Earth's gravity field which best fits, in a least squares sense, global MSL.)

These coordinates refer to the plot on which the observations are taken and may not be the same as those of the town, village or airfield after which the station is named. The elevation of the station is defined as the height above mean sea level of the ground on which the raingauge stands or, if there is no raingauge, the ground beneath the thermometer screen. If there is neither raingauge nor screen, it is the average level of terrain in the vicinity of the station. If the station reports pressure, the elevation to which the station pressure relates must be separately specified. It is the datum level to which barometric reports at the station refer; such barometric values are termed "station pressure" and are understood to refer to the given level for the purpose of maintaining continuity in the pressure records (WMO, 1993b). If a station is located at an aerodrome, other elevations must be specified (see Part II, Chapter 2, and WMO, 1990). Definitions of measures of height and mean sea level are given in WMO (1992a).

1.3.4 Changes of instrumentation and homogeneity

The characteristics of an observing site will generally change over time, for example, through the growth of trees or the erection of buildings on adjacent plots. Sites should be chosen to minimize these effects, if possible. Documentation of the geography of the site and its exposure should be kept and regularly updated as a component of the metadata (see Annex 1.C and WMO, 2003b).

It is especially important to minimize the effects of changes of instrument and/or changes in the siting of specific instruments. Although the static characteristics of new instruments might be well understood, when they are deployed operationally they can introduce apparent changes in site climatology. In order to guard against this eventuality, observations from new instruments should be compared over an extended interval (at least one year; see the Guide to Climatological Practices (WMO, 1983)) before the old measurement system is taken out of service. The same applies when there has been a change of site. Where this procedure is impractical at all sites, it is essential to carry out comparisons at selected representative sites to attempt to deduce changes in measurement data which might be a result of changing technology or enforced site changes.

1.3.5 Inspection and maintenance

Inspection of stations

All synoptic land stations and principal climatological stations should be inspected no less than once every two years. Agricultural meteorological and special stations should be inspected at intervals sufficiently short to ensure the maintenance of a high standard of observations and the correct functioning of instruments.



The principal objective of such inspections is to ascertain that:
(a) The siting and exposure of instruments are known, acceptable and adequately documented;
(b) Instruments are of the approved type, in good order, and regularly verified against standards, as necessary;
(c) There is uniformity in the methods of observation and the procedures for calculating derived quantities from the observations;
(d) The observers are competent to carry out their duties;
(e) The metadata information is up to date.

Further information on the standardization of instruments is given in section 1.5.

Maintenance

Observing sites and instruments should be maintained regularly so that the quality of observations does not deteriorate significantly between station inspections. Routine (preventive) maintenance schedules include regular "housekeeping" at observing sites (for example, grass cutting and cleaning of exposed instrument surfaces) and manufacturers' recommended checks on automatic instruments. Routine quality control checks carried out at the station or at a central point should be designed to detect equipment faults at the earliest possible stage. Depending on the nature of the fault and the type of station, the equipment should be replaced or repaired according to agreed priorities and timescales. As part of the metadata, it is especially important that a log be kept of instrument faults, exposure changes, and remedial action taken where data are used for climatological purposes. Further information on station inspection and management can be found in WMO (1989).

1.4 GENERAL REQUIREMENTS OF INSTRUMENTS

1.4.1 Desirable characteristics

The most important requirements for meteorological instruments are the following:
(a) Uncertainty, according to the stated requirement for the particular variable;
(b) Reliability and stability;
(c) Convenience of operation, calibration and maintenance;
(d) Simplicity of design which is consistent with requirements;
(e) Durability;
(f) Acceptable cost of instrument, consumables and spare parts.

With regard to the first two requirements, it is important that an instrument should be able to maintain a known uncertainty over a long period. This is much better than having a high initial uncertainty that cannot be retained for long under operating conditions. Initial calibrations of instruments will, in general, reveal departures from the ideal output, necessitating corrections to observed data during normal operations. It is important that the corrections should be retained with the instruments at the observing site and that clear guidance be given to observers for their use.

Simplicity, strength of construction, and convenience of operation and maintenance are important since most meteorological instruments are in continuous use year in, year out, and may be located far away from good repair facilities. Robust construction is especially desirable for instruments that are wholly or partially exposed to the weather. Adherence to such characteristics will often reduce the overall cost of providing good observations, outweighing the initial cost.

1.4.2 Recording instruments

In many of the recording instruments used in meteorology, the motion of the sensing element is magnified by levers that move a pen on a chart on a clock-driven drum. Such recorders should be as free as possible from friction, not only in the bearings, but also between the pen and paper. Some means of adjusting the pressure of the pen on the paper should be provided, but this pressure should be reduced to a minimum consistent with a continuous legible trace. Means should also be provided in clock-driven recorders for making time marks. In the design of recording instruments that will be used in cold climates, particular care must be taken to ensure that their performance is not adversely affected by extreme cold and moisture, and that routine procedures (time marks, and so forth) can be carried out by the observers while wearing gloves. Recording instruments should be compared frequently with instruments of the direct-reading type.



An increasing number of instruments make use of electronic recording in magnetic media or in semiconductor microcircuits. Many of the same considerations given for bearings, friction and cold-weather servicing apply to the mechanical components of such instruments.

1.5 MEASUREMENT STANDARDS AND DEFINITIONS

1.5.1 Definitions of standards of measurement

The term "standard" and other similar terms denote the various instruments, methods and scales used to establish the uncertainty of measurements. A nomenclature for standards of measurement is given in the International Vocabulary of Basic and General Terms in Metrology, which was prepared simultaneously by the International Bureau of Weights and Measures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization, the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the International Organization of Legal Metrology, and issued by ISO (1993a). Some of the definitions are as follows:

(Measurement) standard: A material measure, measuring instrument, reference material or measuring system intended to define, realize, conserve or reproduce a unit or one or more values of a quantity to serve as a reference.
Examples: 1 kg mass standard; 100 Ω standard resistor.
Notes:
1. A set of similar material measures or measuring instruments that, through their combined use, constitutes a standard is called a "collective standard".
2. A set of standards of chosen values that, individually or in combination, provides a series of values of quantities of the same kind is called a "group standard".

International standard: A standard recognized by an international agreement to serve internationally as the basis for assigning values to other standards of the quantity concerned.

National standard: A standard recognized by a national decision to serve, in a country, as the basis for assigning values to other standards of the same quantity.

Primary standard: A standard that is designated or widely acknowledged as having the highest metrological qualities and whose value is accepted without reference to other standards of the same quantity.

Secondary standard: A standard whose value is assigned by comparison with a primary standard of the same quantity.

Reference standard: A standard, generally having the highest metrological quality available at a given location or in a given organization, from which the measurements taken there are derived.

Working standard: A standard that is used routinely to calibrate or check material measures, measuring instruments or reference materials.
Notes:
1. A working standard is usually calibrated against a reference standard.
2. A working standard used routinely to ensure that measurements are being carried out correctly is called a "check standard".

Transfer standard: A standard used as an intermediary to compare standards.
Note: The term "transfer device" should be used when the intermediary is not a standard.

Travelling standard: A standard, sometimes of special construction, intended for transport between different locations.

Collective standard: A set of similar material measures or measuring instruments fulfilling, by their combined use, the role of a standard.
Example: the World Radiometric Reference.
Notes:
1. A collective standard is usually intended to provide a single value of a quantity.
2. The value provided by a collective standard is an appropriate mean of the values provided by the individual instruments.

Traceability: A property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties.
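An unbroken chain of comparisons, each with a stated uncertainty, lends itself to a simple calculation. As an illustration of ours (not part of the Guide), independent standard uncertainties along such a chain are conventionally combined in quadrature, following the root-sum-square rule of the ISO Guide to the Expression of Uncertainty in Measurement:

```python
import math


def chain_uncertainty(link_uncertainties: list[float]) -> float:
    """Combine the stated standard uncertainties of each comparison link.

    Assumes the links are independent, so the combined standard
    uncertainty is the root sum of squares of the individual ones.
    """
    return math.sqrt(sum(u * u for u in link_uncertainties))


# A working barometer traced to a national standard via a reference
# standard, with illustrative per-link uncertainties in hPa:
u_total = chain_uncertainty([0.05, 0.10, 0.20])
```

Each extra link in the chain can only increase the combined uncertainty, which is one reason the chain of comparisons is kept as short as practicable.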



Calibration: The set of operations which establish, under specified conditions, the relationship between values indicated by a measuring instrument or measuring system, or values represented by a material measure, and the corresponding known values of a measurand (the physical quantity being measured).
Notes:
1. The result of a calibration permits the estimation of errors of indication of the measuring instrument, measuring system or material measure, or the assignment of marks on arbitrary scales.
2. A calibration may also determine other metrological properties.
3. The result of a calibration may be recorded in a document, sometimes called a calibration certificate or calibration report.
4. The result of a calibration is sometimes expressed as a calibration factor, or as a series of calibration factors in the form of a calibration curve.

1.5.2 Procedures for standardization

In order to control effectively the standardization of meteorological instruments on a national and international scale, a system of national and regional standards has been adopted by WMO. The locations of the regional standards for pressure and radiation are given in Part I, Chapter 3 (Annex 3.B), and Part I, Chapter 7 (Annex 7.C), respectively. In general, regional standards are designated by the regional associations, and national standards by the individual Members. Unless otherwise specified, instruments designated as regional and national standards should be compared by means of travelling standards at least once every five years. It is not essential for the instruments used as travelling standards to possess the uncertainty of primary or secondary standards; they should, however, be sufficiently robust to withstand transportation without changing their calibration. Similarly, the instruments in operational use at a Service should be periodically compared directly or indirectly with the national standards. Comparisons of instruments within a Service should, as far as possible, be made at the time when the instruments are issued to a station and subsequently during each regular inspection of the station, as recommended in section 1.3.5. Portable standard instruments used by inspectors should be checked against the standard instruments of the Service before and after each tour of inspection. Comparisons should be carried out between operational instruments of different designs (or principles of operation) to ensure homogeneity of measurements over space and time (see section 1.3.4).
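The five-year comparison interval for regional and national standards can be tracked with a simple date check. A minimal sketch, using a helper of our own and assuming the interval is counted in calendar years:

```python
from datetime import date


def comparison_overdue(last_comparison: date, today: date,
                       max_interval_years: int = 5) -> bool:
    """Return True if a standard's next comparison is overdue.

    Travelling-standard comparisons are due at least once every
    max_interval_years (five years by default, per section 1.5.2).
    """
    try:
        due = last_comparison.replace(
            year=last_comparison.year + max_interval_years)
    except ValueError:
        # 29 February in a non-leap target year: fall back to 28 February.
        due = last_comparison.replace(
            year=last_comparison.year + max_interval_years, day=28)
    return today > due
```

The same check, with a shorter interval, could flag portable standards for verification before and after each tour of inspection.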

1.5.3 Symbols, units and constants

Symbols and units

Instrument measurements produce numerical values. The purpose of these measurements is to obtain physical or meteorological quantities representing the state of the local atmosphere. For meteorological practices, instrument readings represent variables, such as "atmospheric pressure", "air temperature" or "wind speed". A variable with symbol a is usually represented in the form a = {a}·[a], where {a} stands for the numerical value and [a] stands for the symbol for the unit. General principles concerning quantities, units and symbols are stated by ISO (1993b) and IUPAP (1987). The International System of Units (SI) should be used as the system of units for the evaluation of meteorological elements included in reports for international exchange. This system is published and updated by BIPM (1998). Guides for the use of SI are issued by NIST (1995) and ISO (1993b). Variables not defined as an international symbol by the International System of Quantities (ISQ), but commonly used in meteorology, can be found in the International Meteorological Tables (WMO, 1966) and relevant chapters in this Guide.

The following units should be used for meteorological observations:
(a) Atmospheric pressure, p, in hectopascals (hPa);
(b) Temperature, t, in degrees Celsius (°C) or T in kelvin (K);
Note: The Celsius and kelvin temperature scales should conform to the actual definition of the International Temperature Scale (for 2004: ITS-90; see BIPM, 1990).

(c) Wind speed, in both surface and upper-air observations, in metres per second (m s–1);
(d) Wind direction in degrees clockwise from north or on the scale 0–36, where 36 is the wind from the north and 09 the wind from the east (°);
(e) Relative humidity, U, in per cent (%);
(f) Precipitation (total amount) in millimetres (mm) or kilograms per square metre (kg m–2);
The unit “pascal” is the principal SI derived unit for the pressure quantity. The unit and symbol “bar” is a unit outside the SI system; in every document where it is used, this unit (bar) should be defined in relation to the SI. Its continued use is not encouraged. By definition, 1 mbar (millibar) ≡≡ 1 hPa (hectopascal). Assuming that 1 mm equals 1 kg m–2 independent of temperature.



CHaPTEr 1. GENEral


Precipitation intensity, Ri, in millimetres per hour (mm h–1) or kilograms per m–2 per second (kg m–2 s–1);6 (h) Snow water equivalent in kilograms per m–2 (kg m–2); (i) Evaporation in millimetres (mm); (j) Visibility in metres (m); (k) Irradiance in watts per m2 and radiant exposure in joules per m2 (W m–2, J m–2); (l) Duration of sunshine in hours (h); (m) Cloud height in metres (m); (n) Cloud amount in oktas; (o) Geopotential, used in upper-air observations, in standard geopotential metres (m’).
Note: Height, level or altitude are presented with respect to


The term measurement is carefully defined in section 1.6.2, but in most of this Guide it is used less strictly to mean the process of measurement or its result, which may also be called an “observation”. A sample is a single measurement, typically one of a series of spot or instantaneous readings of a sensor system, from which an average or smoothed value is derived to make an observation. For a more theoretical approach to this discussion, see Part III, Chapters 2 and 3. The terms accuracy, error and uncertainty are carefully defined in section 1.6.2, which explains that accuracy is a qualitative term, the numerical expression of which is uncertainty. This is good practice and is the form followed in this Guide. Formerly, the common and less precise use of accuracy was as in “an accuracy of ±x”, which should read “an uncertainty of x”. sources and estimates of error

a well-defined reference. Typical references are Mean Sea Level (MSL), station altitude or the 1013.2 hPa plane.

The standard geopotential metre is defined as 0.980 665 of the dynamic metre; for levels in the troposphere, the geopotential is close in numerical value to the height expressed in metres. constants

The following constants have been adopted for meteorological use: (a) Absolute temperature of the normal ice point T0 = 273.15 K (t = 0.00°C); (b) Absolute temperature of the triple point of water T = 273.16 K (t = 0.01°C), by definition of ITS-90; (c) Standard normal gravity (gn) = 9.806 65 m s–2; (d) Density of mercury at 0°C = 1.359 51 · 104 kg m–3. The values of other constants are given in WMO (1973; 1988).

The sources of error in the various meteorological measurements are discussed in specific detail in the following chapters of this Guide, but in general they may be seen as accumulating through the chain of traceability and the measurement conditions. It is convenient to take air temperature as an example to discuss how errors arise, but it is not difficult to adapt the following argument to pressure, wind and other meteorological quantities. For temperature, the sources of error in an individual measurement are as follows: (a) Errors in the international, national and working standards, and in the comparisons made between them. These may be assumed to be negligible for meteorological applications; (b) Errors in the comparisons made between the working, travelling and/or check standards and the field instruments in the laboratory or in liquid baths in the field (if that is how the traceability is established). These are small if the practice is good (say ±0.1 K uncertainty at the 95 per cent confidence level, including the errors in (a) above), but may quite easily be larger, depending on the skill of the operator and the quality of the equipment; (c) Non-linearity, drift, repeatability and reproducibility in the field thermometer and its transducer (depending on the type of thermometer element); (d) The effectiveness of the heat transfer between the thermometer element and the air in the thermometer shelter, which should ensure that the element is at thermal equilibrium

1.6 1.6.1

uncertainty of MeasureMents

Meteorological measurements general

This section deals with definitions that are relevant to the assessment of accuracy and the measurement of uncertainties in physical measurements, and concludes with statements of required and achievable uncertainties in meteorology. First, it discusses some issues that arise particularly in meteorological measurements.
6 Recommendation 3 (CBS-XII), Annex 1, adopted through Resolution 4 (EC-LIII).





with the air (related to system time-constant or lag coefficient). In a well-designed aspirated shelter this error will be very small, but it may be large otherwise; The effectiveness of the thermometer shelter, which should ensure that the air in the shelter is at the same temperature as the air immediately surrounding it. In a welldesigned case this error is small, but the difference between an effective and an ineffective shelter may be 3°C or more in some circumstances; The exposure, which should ensure that the shelter is at a temperature which is representative of the region to be monitored. Nearby sources and heat sinks (buildings, other unrepresentative surfaces below and around the shelter) and topography (hills, land-water boundaries) may introduce large errors. The station metadata should contain a good and regularly updated description of exposure (see Annex 1.C) to inform data users about possible exposure errors.

analysis. In that case, the mean and standard deviation of the differences between the station and the analysed field may be calculated, and these may be taken as the errors in the station measurement system (including effects of exposure). The uncertainty in the estimate of the mean value in the long term may, thus, be made quite small (if the circumstances at the station do not change), and this is the basis of climate change studies. 1.6.2 Definitions of measurements and their errors

The following terminology relating to the accuracy of measurements is taken from ISO (1993a), which contains many definitions applicable to the practices of meteorological observations. ISO (1995) gives very useful and detailed practical guidance on the calculation and expression of uncertainty in measurements. Measurement: A set of operations having the objective of determining the value of a quantity.
Note: The operations may be performed automatically.

Systematic and random errors both arise at all the above-mentioned stages. The effects of the error sources (d) to (f) can be kept small if operations are very careful and if convenient terrain for siting is available; otherwise these error sources may contribute to a very large overall error. However, they are sometimes overlooked in the discussion of errors, as though the laboratory calibration of the sensor could define the total error completely. Establishing the true value is difficult in meteorology (Linacre, 1992). Well-designed instrument comparisons in the field may establish the characteristics of instruments to give a good estimate of uncertainty arising from stages (a) to (e) above. If station exposure has been documented adequately, the effects of imperfect exposure can be corrected systematically for some parameters (for example, wind; see WMO, 2002) and should be estimated for others. Comparing station data against numerically analysed fields using neighbouring stations is an effective operational quality control procedure, if there are sufficient reliable stations in the region. Differences between the individual observations at the station and the values interpolated from the analysed field are due to errors in the field as well as to the performance of the station. However, over a period, the average error at each point in the analysed field may be assumed to be zero if the surrounding stations are adequate for a sound

Result of a measurement: Value attributed to a measurand (the physical quantity that is being measured), obtained by measurement.
Notes: 1. When a result is given, it should be made clear whether it refers to the indication, the uncorrected result or the corrected result, and whether several values are averaged. 2. A complete statement of the result of a measurement includes information about the uncertainty of the measurement.

Corrected result: The result of a measurement after correction for systematic error. Value (of a quantity): The magnitude of a particular quantity generally expressed as a unit of measurement multiplied by a number. Example: Length of a rod: 5.34 m. True value (of a quantity): A value consistent with the definition of a given particular quantity. = ±
Notes: 1. 2. This is a value that would be obtained by a perfect True values are by nature indeterminate. measurement.

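The convention above, a quantity expressed as numerical value times unit, with a complete result of measurement carrying its uncertainty, can be sketched as a small data structure. This is an illustrative sketch only; the class and field names are assumptions, not part of this Guide:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Quantity:
    """A variable a = {a}·[a]: numerical value {a} plus unit symbol [a].

    The optional uncertainty allows a complete statement of a result of
    measurement, which should include its uncertainty (section 1.6.2).
    Illustrative sketch only; names are not prescribed by the Guide.
    """
    value: float                         # {a}, the numerical value
    unit: str                            # [a], the unit symbol, e.g. "hPa"
    uncertainty: Optional[float] = None  # in the same unit, if known

    def __str__(self) -> str:
        if self.uncertainty is None:
            return f"{self.value} {self.unit}"
        return f"({self.value} ± {self.uncertainty}) {self.unit}"

# Example: atmospheric pressure reported in hectopascals.
p = Quantity(1013.2, "hPa", uncertainty=0.2)
print(p)
```

Keeping the numerical value and the unit symbol together in one object mirrors the a = {a}·[a] notation and makes it harder to exchange a value without its unit.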
Accuracy (of measurement): The closeness of the agreement between the result of a measurement and a true value of the measurand.

Notes:
1. “Accuracy” is a qualitative concept.
2. The term “precision” should not be used for “accuracy”.

Repeatability (of results of measurements): The closeness of the agreement between the results of successive measurements of the same measurand carried out under the same measurement conditions.

Notes:
1. These conditions are called repeatability conditions.
2. Repeatability conditions include:
(a) The same measurement procedure;
(b) The same observer;
(c) The same measuring instrument used under the same conditions (including weather);
(d) The same location;
(e) Repetition over a short period of time.
3. Repeatability may be expressed quantitatively in terms of the dispersion characteristics of the results.

Reproducibility (of results of measurements): The closeness of the agreement between the results of measurements of the same measurand carried out under changed measurement conditions.

Notes:
1. A valid statement of reproducibility requires specification of the conditions changed.
2. The changed conditions may include:
(a) The principle of measurement;
(b) The method of measurement;
(c) The observer;
(d) The measuring instrument;
(e) The reference standard;
(f) The location;
(g) The conditions of use (including weather);
(h) The time.
3. Reproducibility may be expressed quantitatively in terms of the dispersion characteristics of the results.
4. Here, results are usually understood to be corrected results.

Uncertainty (of measurement): A variable associated with the result of a measurement that characterizes the dispersion of the values that could be reasonably attributed to the measurand.

Notes:
1. The variable may be, for example, a standard deviation (or a given multiple thereof), or the half-width of an interval having a stated level of confidence.
2. Uncertainty of measurement comprises, in general, many components. Some of these components may be evaluated from the statistical distribution of the results of a series of measurements and can be characterized by experimental standard deviations. The other components, which can also be characterized by standard deviations, are evaluated from assumed probability distributions based on experience or other information.
3. It is understood that the result of the measurement is the best estimate of the value of the measurand, and that all components of uncertainty, including those arising from systematic effects, such as components associated with corrections and reference standards, contribute to the dispersion.

Error (of measurement): The result of a measurement minus a true value of the measurand.

Note: Since a true value cannot be determined, in practice a conventional true value is used.

Deviation: The value minus its conventional true value.

Random error: The result of a measurement minus the mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions.

Notes:
1. Random error is equal to error minus systematic error.
2. Because only a finite number of measurements can be taken, it is possible to determine only an estimate of random error.

Systematic error: A mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions minus a true value of the measurand.

Notes:
1. Systematic error is equal to error minus random error.
2. Like true value, systematic error and its causes cannot be completely known.

Correction: The value added algebraically to the uncorrected result of a measurement to compensate for a systematic error.

1.6.3 Characteristics of instruments

Some other properties of instruments which must be understood when considering their uncertainty are taken from ISO (1993a).

Sensitivity: The change in the response of a measuring instrument divided by the corresponding change in the stimulus.

Note: Sensitivity may depend on the value of the stimulus.
Discrimination: The ability of a measuring instrument to respond to small changes in the value of the stimulus.

Resolution: A quantitative expression of the ability of an indicating device to distinguish meaningfully between closely adjacent values of the quantity indicated.

Hysteresis: The property of a measuring instrument whereby its response to a given stimulus depends on the sequence of preceding stimuli.

Stability (of an instrument): The ability of an instrument to maintain its metrological characteristics constant with time.

Drift: The slow variation with time of a metrological characteristic of a measuring instrument.

Response time: The time interval between the instant when a stimulus is subjected to a specified abrupt change and the instant when the response reaches and remains within specified limits around its final steady value.

The following other definitions are used frequently in meteorology:

Statements of response time: The time for 90 per cent of the step change is often given. The time for 50 per cent of the step change is sometimes referred to as the half-time.

Calculation of response time: In most simple systems, the response to a step change is:

Y = A(1 − e^(−t/τ))

where Y is the change after elapsed time t; A is the amplitude of the step change applied; t is the elapsed time from the step change; and τ is a characteristic variable of the system having the dimension of time. The variable τ is referred to as the time-constant or the lag coefficient. It is the time taken, after a step change, for the instrument to reach 1/e of the final steady reading. In other systems, the response is more complicated and will not be considered here (see also Part III, Chapter 2).

Lag error: The error that a set of measurements may possess due to the finite response time of the observing instrument.

Figure 1.2. The distribution of data in an instrument comparison

1.6.4 The measurement uncertainties of a single instrument

ISO (1995) should be used for the expression and calculation of uncertainties. It gives a detailed practical account of definitions and methods of reporting, and a comprehensive description of suitable statistical methods, with many illustrative examples.

The statistical distributions of observations

To determine the uncertainty of any individual measurement, a statistical approach is to be considered in the first place. For this purpose, the following definitions are stated (ISO, 1993; 1995):
(a) Standard uncertainty;
(b) Expanded uncertainty;
(c) Variance, standard deviation;
(d) Statistical coverage interval.

If n comparisons of an operational instrument are made with the measured variable and all other significant variables held constant, if the best estimate of the true value is established by use of a reference standard, and if the measured variable has a Gaussian distribution,7 the results may be displayed as in Figure 1.2. In this figure, T is the true value, Ō is the mean of the n values O observed with one instrument, and σ is the standard deviation of the observed values with respect to their mean values.

7 However, note that several meteorological variables do not follow a Gaussian distribution. See section

In this situation, the following characteristics can be identified:
(a) The systematic error, often termed bias, given by the algebraic difference Ō – T. Systematic errors cannot be eliminated but may often be reduced. A correction factor can be applied to compensate for the systematic effect. Typically, appropriate calibrations and

adjustments should be performed to eliminate the systematic errors of sensors. Systematic errors due to environmental or siting effects can only be reduced;
(b) The random error, which arises from unpredictable or stochastic temporal and spatial variations. The measure of this random effect can be expressed by the standard deviation σ determined after n measurements, where n should be large enough. In principle, σ is a measure for the uncertainty of Ō;
(c) The accuracy of measurement, which is the closeness of the agreement between the result of a measurement and a true value of the measurand. The accuracy of a measuring instrument is the ability to give responses close to a true value. Note that “accuracy” is a qualitative concept;
(d) The uncertainty of measurement, which represents a parameter associated with the result of a measurement, that characterizes the dispersion of the values that could be reasonably attributed to the measurand. The uncertainties associated with the random and systematic effects that give rise to the error can be evaluated to express the uncertainty of measurement.

Estimating the true value

In normal practice, observations are used to make an estimate of the true value. If a systematic error does not exist or has been removed from the data, the true value can be approximated by taking the mean of a very large number of carefully executed independent measurements. When fewer measurements are available, their mean has a distribution of its own and only certain limits within which the true value can be expected to lie can be indicated. In order to do this, it is necessary to choose a statistical probability (level of confidence) for the limits, and the error distribution of the means must be known. A very useful and clear explanation of this notion and related subjects is given by Natrella (1966). Further discussion is given by Eisenhart (1963).

Estimating the true value – n large

When the number of n observations is large, the distribution of the means of samples is Gaussian, even when the observational errors themselves are not. In this situation, or when the distribution of the means of samples is known to be Gaussian for other reasons, the limits between which the true value of the mean can be expected to lie are obtained from:

Upper limit: LU = X + k · σ/√n

Lower limit: LL = X − k · σ/√n

where X is the average of the observations O corrected for systematic error; σ is the standard deviation of the whole population; and k is a factor, according to the chosen level of confidence, which can be calculated using the normal distribution function. Some values of k are as follows:

Level of confidence    90%      95%      99%
k                      1.645    1.960    2.575

The level of confidence used in the table above is for the condition that the true value will not be outside the one particular limit (upper or lower) to be computed. When stating the level of confidence that the true value will lie between both limits, both the upper and lower outside zones have to be considered. With this in mind, it can be seen that k takes the value 1.96 for a 95 per cent probability, and that the true value of the mean lies between the limits LU and LL.

Estimating the true value – n small

When n is small, the means of samples conform to Student’s t distribution provided that the observational errors have a Gaussian or near-Gaussian distribution. In this situation, and for a chosen level of confidence, the upper and lower limits can be obtained from:

Upper limit: LU ≈ X + t · σ̂/√n

Lower limit: LL ≈ X − t · σ̂/√n

where t is a factor (Student’s t) which depends upon the chosen level of confidence and the number n of measurements; and σ̂ is the estimate of the standard deviation of the whole population, made from the measurements obtained, using:

σ̂² = Σi=1..n (Xi − X)²/(n − 1) = n · σ0²/(n − 1)

where Xi is an individual value Oi corrected for systematic error. Some values of t are as follows:

Level of confidence
df       90%      95%       99%
1        6.314    12.706    63.657
4        2.132    2.776     4.604
8        1.860    2.306     3.355
60       1.671    2.000     2.660

where df is the degrees of freedom related to the number of measurements by df = n − 1. The level of confidence used in this table is for the condition that the true value will not be outside the one particular limit (upper or lower) to be computed. When stating the level of confidence that the true value will lie between the two limits, allowance has to be made for the case in which n is large. With this in mind, it can be seen that t takes the value 2.306 for a 95 per cent probability that the true value lies between the limits LU and LL, when the estimate is made from nine measurements (df = 8). The values of t approach the values of k as n becomes large, and it can be seen that the values of k are very nearly equalled by the values of t when df equals 60. For this reason, tables of k (rather than tables of t) are quite often used when the number of measurements of a mean value is greater than 60 or so.

Estimating the true value – additional remarks

Investigators should consider whether or not the distribution of errors is likely to be Gaussian. The distribution of some variables themselves, such as sunshine, visibility, humidity and ceiling, is not Gaussian and their mathematical treatment must, therefore, be made according to rules valid for each particular distribution (Brooks and Carruthers, 1953). In practice, observations contain both random and systematic errors. In every case, the observed mean value has to be corrected for the systematic error insofar as it is known. When doing this, the estimate of the true value remains inaccurate because of the random errors as indicated by the expressions and because of any unknown component of the systematic error. Limits should be set to the uncertainty of the systematic error and should be added to those for random errors to obtain the overall uncertainty. However, unless the uncertainty of the systematic error can be expressed in probability terms and combined suitably with the random error, the level of confidence is not known. It is desirable, therefore, that the systematic error be fully determined.

Expressing the uncertainty

If random and systematic effects are recognized, but reduction or corrections are not possible or not applied, the resulting uncertainty of the measurement should be estimated. This uncertainty is determined after an estimation of the uncertainty arising from random effects and from imperfect correction of the result for systematic effects. It is common practice to express the uncertainty as “expanded uncertainty” in relation to the “statistical coverage interval”. To be consistent with common practice in metrology, the 95 per cent confidence level, or k = 2, should be used for all types of measurements, namely:

expanded uncertainty = k σ = 2σ  (1.7)

As a result, the true value, defined in section 1.6.2, will be expressed as: X ± 2σ.

Measurements of discrete values

While the state of the atmosphere may be described well by physical variables or quantities, a number of meteorological phenomena are expressed in terms of discrete values. Typical examples of such values are the detection of sunshine, precipitation or lightning and freezing precipitation. All these parameters can only be expressed by “yes” or “no”. For a number of parameters, all of which are members of the group of present weather phenomena, more than two possibilities exist. For instance, discrimination between drizzle, rain, snow, hail and their combinations is required when reporting present weather. For these practices, uncertainty calculations like those stated above are not applicable. Some of these parameters are related to a numerical threshold value (for example, sunshine detection using direct radiation intensity), and the determination of the uncertainty of any derived variable (for example, sunshine

duration) can be calculated from the estimated uncertainty of the source variable (for example, direct radiation intensity). However, this method is applicable only for derived parameters, and not for the typical present weather phenomena. Although a simple numerical approach cannot be presented, a number of statistical techniques are available to determine the quality of such observations. Such techniques are based on comparisons of two data sets, with one set defined as a reference. Such a comparison results in a contingency matrix, representing the cross-related frequencies of the mutual phenomena. In its most simple form, when a variable is Boolean (“yes” or “no”), such a matrix is a two by two matrix with the number of equal occurrences in the elements of the diagonal axis and the “missing hits” and “false alarms” in the other elements. Such a matrix makes it possible to derive verification scores or indices to be representative for the quality of the observation. This technique is described by Murphy and Katz (1985). An overview is given by Kok (2000).

1.6.5 Accuracy requirements

General

The uncertainty with which a meteorological variable should be measured varies with the specific purpose for which the measurement is required. In general, the limits of performance of a measuring device or system will be determined by the variability of the element to be measured on the spatial and temporal scales appropriate to the application.

Any measurement can be regarded as made up of two parts: the signal and the noise. The signal constitutes the quantity which is to be determined, and the noise is the part which is irrelevant. The noise may arise in several ways: from observational error, because the observation is not made at the right time and place, or because short-period or small-scale irregularities occur in the observed quantity which are irrelevant to the observations and need to be smoothed out. Assuming that the observational error could be reduced at will, the noise arising from other causes would set a limit to the accuracy. Further refinement in the observing technique would improve the measurement of the noise but would not give much better results for the signal. At the other extreme, an instrument – the error of which is greater than the amplitude of the signal itself – can give little or no information about the signal. Thus, for various purposes, the amplitudes of the noise and the signal serve, respectively, to determine:
(a) The limits of performance beyond which improvement is unnecessary;
(b) The limits of performance below which the data obtained would be of negligible value.
This argument, defining and determining limits (a) and (b) above, was developed extensively for upper-air data by WMO (1970). However, statements of requirements are usually derived not from such reasoning but from perceptions of practically attainable performance, on the one hand, and the needs of the data users, on the other.

Required and achievable performance

The performance of a measuring system includes its reliability, capital, recurrent and lifetime cost, and spatial resolution, but the performance under discussion here is confined to uncertainty (including scale resolution) and resolution in time. Various statements of requirements have been made, and both needs and capability change with time. The statements given in Annex 1.B are the most authoritative at the time of writing, and may be taken as useful guides for development, but they are not fully definitive.

The requirements for the variables most commonly used in synoptic, aviation and marine meteorology, and in climatology are summarized in Annex 1.B.8 It gives requirements only for surface measurements that are exchanged internationally. Details on the observational data requirements for Global Data-processing and Forecasting System Centres for global and regional exchange are given in WMO (1992b). The uncertainty requirement for wind measurements is given separately for speed and direction because that is how wind is reported.

The ability of individual sensors or observing systems to meet the stated requirements is changing constantly as instrumentation and observing technology advance. The characteristics of typical sensors or systems currently available are given in Annex 1.B.9 It should be noted that the achievable operational uncertainty in many cases does not meet the stated requirements. For some of the quantities, these uncertainties are achievable only with the highest quality equipment and procedures. Uncertainty requirements for upper-air measurements are dealt with in Part I, Chapter 12.

8 Established by the CBS Expert Team on Requirements for Data from Automatic Weather Stations (2004) and approved by the president of CIMO for inclusion in this edition of the Guide after consultation with the presidents of the other technical commissions.
9 Established by the CIMO Expert Team on Surface Technology and Measurement Techniques (2004) and confirmed for inclusion in this Guide by the president of CIMO.

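The confidence-limit calculations of section 1.6.4 can be sketched as follows. This is an illustrative sketch, assuming the k and t factors are read from the tables in that section; the function names are not from this Guide:

```python
import math

def limits_n_large(mean, sigma, n, k=1.96):
    """LL, LU = mean -/+ k*sigma/sqrt(n) for large n.

    Valid when the distribution of sample means is Gaussian;
    k = 1.96 gives a 95 per cent two-sided level of confidence.
    """
    half_width = k * sigma / math.sqrt(n)
    return mean - half_width, mean + half_width

def limits_n_small(observations, t):
    """LL, LU ~= mean -/+ t*sigma_hat/sqrt(n) for small n (Student's t).

    t depends on df = n - 1 and the level of confidence, e.g. t = 2.306
    for df = 8 at 95 per cent (table in section 1.6.4); sigma_hat is the
    estimate of the population standard deviation,
    sigma_hat**2 = sum((Xi - X)**2) / (n - 1).
    """
    n = len(observations)
    mean = sum(observations) / n
    sigma_hat = math.sqrt(
        sum((x - mean) ** 2 for x in observations) / (n - 1))
    half_width = t * sigma_hat / math.sqrt(n)
    return mean - half_width, mean + half_width

def expanded_uncertainty(sigma, k=2.0):
    """Expanded uncertainty k*sigma; k = 2 for the 95 per cent level."""
    return k * sigma
```

For example, a mean of nine readings (df = 8) would be combined with t = 2.306 from the table, while a long series could use k = 1.96 directly.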
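For the yes/no observations discussed under “Measurements of discrete values” in section 1.6.4, the two by two contingency matrix and some simple verification scores can be sketched as below. The score definitions follow common verification practice (proportion correct, probability of detection, false alarm ratio); they are assumptions for illustration, not formulas given in this Guide:

```python
def contingency_matrix(observed, reference):
    """Cross-tabulate two Boolean series, with `reference` as the truth.

    Returns (hits, misses, false_alarms, correct_negatives): equal
    occurrences on the diagonal, "missing hits" and "false alarms" in
    the off-diagonal elements.
    """
    hits = misses = false_alarms = correct_negatives = 0
    for obs, ref in zip(observed, reference):
        if obs and ref:
            hits += 1
        elif not obs and ref:
            misses += 1
        elif obs and not ref:
            false_alarms += 1
        else:
            correct_negatives += 1
    return hits, misses, false_alarms, correct_negatives

def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Common verification indices derived from the matrix."""
    total = hits + misses + false_alarms + correct_negatives
    return {
        "proportion_correct": (hits + correct_negatives) / total,
        "probability_of_detection": hits / (hits + misses),
        "false_alarm_ratio": false_alarms / (hits + false_alarms),
    }

# Example: a sensor's precipitation yes/no flags against a reference.
sensor = [True, True, False, False, True, False]
reference = [True, False, False, True, True, False]
scores = verification_scores(*contingency_matrix(sensor, reference))
```

The same tabulation extends to multi-category present weather (drizzle, rain, snow, hail) by enlarging the matrix to one row and column per category.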
ANNEX 1.A

REGIONAL CENTRES

1. Considering the need for the regular calibration and maintenance of meteorological instruments to meet the increasing needs for high-quality meteorological and hydrological data, the need for building the hierarchy of the traceability of measurements to the International System of Units (SI) standards, Members’ requirements for the standardization of meteorological and related environmental instruments, the need for international instrument comparisons and evaluations in support of worldwide data compatibility and homogeneity, the need for training instrument experts and the role played by Regional Instrument Centres (RICs) in the Global Earth Observing System of Systems, the Natural Disaster Prevention and Mitigation Programme and other WMO cross-cutting programmes, it has been recommended that:10

A. Regional Instrument Centres with full capabilities and functions should have the following capabilities to carry out their corresponding functions:

Capabilities:
(a) A RIC must have, or have access to, the necessary facilities and laboratory equipment to perform the functions necessary for the calibration of meteorological and related environmental instruments;
(b) A RIC must maintain a set of meteorological standard instruments and establish the traceability of its own measurement standards and measuring instruments to the SI;
(c) A RIC must have qualified managerial and technical staff with the necessary experience to fulfil its functions;
(d) A RIC must develop its individual technical procedures for the calibration of meteorological and related environmental instruments using calibration equipment employed by the RIC;
(e) A RIC must develop its individual quality assurance procedures;
(f) A RIC must participate in, or organize, interlaboratory comparisons of standard calibration instruments and methods;
(g) A RIC must, when appropriate, utilize the resources and capabilities of the Region according to the Region’s best interests;
(h) A RIC must, as far as possible, apply international standards applicable for calibration laboratories, such as ISO/IEC 17025;
(i) A recognized authority must assess a RIC, at least every five years, to verify its capabilities and performance;

Corresponding functions:
(j) A RIC must assist Members of the Region in calibrating their national meteorological standards and related environmental monitoring instruments;
(k) A RIC must participate in, or organize, WMO and/or regional instrument intercomparisons, following relevant CIMO recommendations;
(l) According to relevant recommendations on the WMO Quality Management Framework, a RIC must make a positive contribution to Members regarding the quality of measurements;
(m) A RIC must advise Members on enquiries regarding instrument performance, maintenance and the availability of relevant guidance materials;
(n) A RIC must actively participate, or assist, in the organization of regional workshops on meteorological and related environmental instruments;
(o) The RIC must cooperate with other RICs in the standardization of meteorological and related environmental measurements;
(p) A RIC must regularly inform Members and report,11 on an annual basis, to the president of the regional association and to the WMO Secretariat on the services offered to Members and activities carried out;

B. Regional Instrument Centres with basic capabilities and functions should have the following capabilities to carry out their corresponding functions:

Capabilities:
(a) A RIC must have the necessary facilities and laboratory equipment to perform the functions necessary for the calibration of meteorological and related environmental instruments;
(b) A RIC must maintain a set of meteorological standard instruments12 and establish the traceability of its own measurement standards and measuring instruments to the SI;
(c) A RIC must have qualified managerial and technical staff with the necessary experience to fulfil its functions;
(d) A RIC must develop its individual technical procedures for the calibration of meteorological and related environmental instruments using calibration equipment employed by the RIC;
(e) A RIC must develop its individual quality assurance procedures;
(f) A RIC must participate in, or organize, interlaboratory comparisons of standard calibration instruments and methods;
(g) A RIC must, when appropriate, utilize the resources and capabilities of the Region according to the Region’s best interests;
(h) A RIC must, as far as possible, apply international standards applicable for calibration laboratories, such as ISO/IEC 17025;
(i) A recognized authority must assess a RIC, at least every five years, to verify its capabilities and performance;

Corresponding functions:
(j) A RIC must assist Members of the Region in calibrating their national meteorological standards and related environmental monitoring instruments according to capabilities (b);
(k) According to relevant recommendations on the WMO Quality Management Framework, a RIC must make a positive contribution to Members regarding the quality of measurements;
(l) A RIC must advise Members on enquiries regarding instrument performance, maintenance and the availability of relevant guidance materials;
(m) The RIC must cooperate with other RICs in the standardization of meteorological and related environmental instruments;
(n) A RIC must regularly inform Members and report,13 on an annual basis, to the president of the regional association and to the WMO Secretariat on the services offered to Members and activities carried out.

2. The following RICs have been designated by the regional associations concerned: Algiers (Algeria), Cairo (Egypt), Casablanca (Morocco), Nairobi (Kenya) and Gaborone (Botswana) for RA I; Beijing (China) and Tsukuba (Japan) for RA II; Buenos Aires (Argentina) for RA III; Bridgetown (Barbados), Mount Washington (United States) and San José (Costa Rica) for RA IV; Manila (Philippines) and Melbourne (Australia) for RA V; Bratislava (Slovakia), Ljubljana (Slovenia) and Trappes (France) for RA VI.

10 Recommended by the Commission for Instruments and Methods of Observation at its fourteenth session, held in 2006.
11 A Web-based approach is recommended.
13 A Web-based approach is recommended.

corresponding functions: (j) A RIC must assist Members of the Region in calibrating their national standard


For calibrating one or more of the following variables: temperature, humidity, pressure or others specified by the Region.


A Web-based approach is recommended.

ANNEX 1.B OPERATIONAL MEASUREMENT UNCERTAINTY REQUIREMENTS AND INSTRUMENT PERFORMANCE
Column legend (applies to every entry below):
(1) Variable; (2) Range; (3) Reported resolution; (4) Mode of measurement/observation; (5) Required measurement uncertainty; (6) Sensor time constant; (7) Output averaging time; (8) Achievable measurement uncertainty; (9) Remarks.

1. Temperature

1.1 Air temperature
Range: –80 – +60°C. Reported resolution: 0.1 K. Mode: I.
Required uncertainty: 0.3 K for ≤ –40°C; 0.1 K for > –40°C and ≤ +40°C; 0.3 K for > +40°C.
Sensor time constant: 20 s. Output averaging time: 1 min. Achievable uncertainty: 0.2 K.
Remarks: Achievable uncertainty and effective time-constant may be affected by the design of the thermometer solar radiation screen. Time-constant depends on the air-flow over the sensor.

1.2 Extremes of air temperature
Range: –80 – +60°C. Reported resolution: 0.1 K. Mode: I.
Required uncertainty: 0.5 K for ≤ –40°C; 0.3 K for > –40°C and ≤ +40°C; 0.5 K for > +40°C.
Sensor time constant: 20 s. Output averaging time: 1 min. Achievable uncertainty: 0.2 K.

1.3 Sea surface temperature
Range: –2 – +40°C. Reported resolution: 0.1 K. Mode: I.
Required uncertainty: 0.1 K. Sensor time constant: 20 s. Output averaging time: 1 min. Achievable uncertainty: 0.2 K.

2. Humidity

2.1 Dewpoint temperature
Range: –80 – +35°C. Reported resolution: 0.1 K. Mode: I.
Required uncertainty: 0.1 K. Sensor time constant: 20 s. Output averaging time: 1 min. Achievable uncertainty: 0.5 K.
Remarks: Wet-bulb temperature (psychrometer). If measured directly and in combination with air temperature (dry bulb), large errors are possible due to aspiration and cleanliness problems (see also note 11).

2.2 Relative humidity
Range: 0 – 100%. Reported resolution: 1%. Mode: I.
Required uncertainty: 1%. Sensor time constant: 20 s (psychrometer); 40 s (solid state and others). Output averaging time: 1 min. Achievable uncertainty: 0.2 K (psychrometer); 3% (solid state and others).
Remarks: Solid state sensors may show significant temperature and humidity dependence.

3. Atmospheric pressure

3.1 Pressure
Range: 500 – 1 080 hPa. Reported resolution: 0.1 hPa. Mode: I.
Required uncertainty: 0.1 hPa. Sensor time constant: 20 s. Output averaging time: 1 min. Achievable uncertainty: 0.3 hPa.
Remarks: Both station pressure and MSL pressure. Measurement uncertainty is seriously affected by dynamic pressure due to wind if no precautions are taken. Inadequate temperature compensation of the transducer may affect the measurement uncertainty significantly.

3.2 Tendency
Range: Not specified. Reported resolution: 0.1 hPa.
Required uncertainty: 0.2 hPa. Achievable uncertainty: 0.2 hPa.
Remarks: Difference between instantaneous values.

4. Clouds

4.1 Cloud amount
Range: 0/8 – 8/8. Reported resolution: 1/8. Mode: I.
Required uncertainty: 1/8. Sensor time constant: n/a. Achievable uncertainty: 2/8.
Remarks: Period (30 s) clustering algorithms may be used to estimate low cloud amount automatically.

4.2 Height of cloud base
Range: 0 m – 30 km. Reported resolution: 10 m. Mode: I.
Required uncertainty: 10 m for ≤ 100 m; 10% for > 100 m. Achievable uncertainty: ~10 m.
Remarks: Achievable measurement uncertainty is undetermined because no clear definition exists for instrumentally measured cloud-base height (e.g. based on penetration depth or significant discontinuity in the extinction profile). Significant bias during precipitation.

4.3 Height of cloud top
Range: Not available.

5. Wind

5.1 Speed
Range: 0 – 75 m s–1. Reported resolution: 0.5 m s–1. Mode: A.
Required uncertainty: 0.5 m s–1 for ≤ 5 m s–1; 10% for > 5 m s–1. Sensor time constant: distance constant 2–5 m. Output averaging time: 2 and/or 10 min. Achievable uncertainty: 0.5 m s–1 for ≤ 5 m s–1; 10% for > 5 m s–1.
Remarks: Average over 2 and/or 10 min. Non-linear devices; care needed in design of averaging process. Distance constant is usually expressed as response length.

5.2 Direction
Range: 0 – 360°. Mode: A.
Required uncertainty: 5°. Output averaging time: 2 and/or 10 min. Achievable uncertainty: 5°.
Remarks: Averages computed over Cartesian components (see Part III, Chapter 3, section 3.6, of this Guide).

5.3 Gusts
Range: 0.1 – 150 m s–1. Reported resolution: 0.1 m s–1. Mode: A.
Required uncertainty: 10%. Achievable uncertainty: 0.5 m s–1 for ≤ 5 m s–1; 10% for > 5 m s–1.
Remarks: Highest 3 s average should be recorded.

6. Precipitation

6.1 Amount (daily)
Range: 0 – 500 mm. Reported resolution: 0.1 mm. Mode: T.
Required uncertainty: 0.1 mm for ≤ 5 mm; 2% for > 5 mm. Sensor time constant: n/a. Achievable uncertainty: the larger of 5% or 0.1 mm.
Remarks: Quantity based on daily amounts. Measurement uncertainty depends on aerodynamic collection efficiency of gauges and evaporation losses in heated gauges.

6.2 Depth of snow
Range: 0 – 25 m. Reported resolution: 1 cm.
Required uncertainty: 1 cm for ≤ 20 cm; 5% for > 20 cm. Achievable uncertainty: 1 cm for ≤ 10 cm; 10% for > 10 cm.
Remarks: Average depth over an area representative of the observing site.

6.3 Thickness of ice accretion on ships
Range: Not specified. Reported resolution: 1 cm.

6.4 Precipitation intensity
Range: 0.02 mm h–1 – 2 000 mm h–1. Reported resolution: 0.1 mm h–1. Mode: I.
Required uncertainty: (trace) n/a for 0.02 – 0.2 mm h–1; 0.1 mm h–1 for 0.2 – 2 mm h–1; 5% for > 2 mm h–1. Sensor time constant: < 30 s. Output averaging time: 1 min.
Remarks: Uncertainty values for liquid precipitation only. Uncertainty is seriously affected by wind. Sensors may show significant non-linear behaviour. For < 0.2 mm h–1: detection only (yes/no). Sensor time constant is significantly affected during solid precipitation using catchment types of gauges.

7. Radiation

7.1 Sunshine duration (daily)
Range: 0 – 24 h. Reported resolution: 60 s. Mode: T.
Required uncertainty: 0.1 h. Sensor time constant: 20 s. Output averaging time: n/a. Achievable uncertainty: the larger of 0.1 h or 2%.

7.2 Net radiation, radiant exposure (daily)
Range: Not specified. Reported resolution: 1 J m–2. Mode: T.
Required uncertainty: 0.4 MJ m–2 for ≤ 8 MJ m–2; 5% for > 8 MJ m–2. Sensor time constant: 20 s. Output averaging time: n/a. Achievable uncertainty: 0.4 MJ m–2 for ≤ 8 MJ m–2; 5% for > 8 MJ m–2.
Remarks: Radiant exposure expressed as daily sums (amount) of (net) radiation.

8. Visibility

8.1 Meteorological optical range (MOR)
Range: 10 m – 100 km. Reported resolution: 1 m. Mode: I.
Required uncertainty: 50 m for ≤ 600 m; 10% for > 600 m – ≤ 1 500 m; 20% for > 1 500 m. Sensor time constant: < 30 s. Output averaging time: 1 and 10 min. Achievable uncertainty: the larger of 20 m or 20%.
Remarks: Achievable measurement uncertainty may depend on the cause of obscuration. Quantity to be averaged: extinction coefficient (see Part III, Chapter 3, section 3.6, of this Guide). Preference for averaging logarithmic values.

8.2 Runway visual range (RVR)
Range: 10 m – 1 500 m. Reported resolution: 1 m. Mode: A.
Required uncertainty: 10 m for ≤ 400 m; 25 m for > 400 m – ≤ 800 m; 10% for > 800 m. Sensor time constant: < 30 s. Output averaging time: 1 and 10 min. Achievable uncertainty: the larger of 20 m or 20%.
Remarks: In accordance with WMO-No. 49, Volume II, Attachment A (2004 ed.) and ICAO Doc 9328-AN/908 (second ed., 2000).

9. Waves

9.1 Significant wave height
Range: 0 – 50 m. Reported resolution: 0.1 m. Mode: A.
Required uncertainty: 0.5 m for ≤ 5 m; 10% for > 5 m. Sensor time constant: 0.5 s. Output averaging time: 20 min. Achievable uncertainty: 0.5 m for ≤ 5 m; 10% for > 5 m.
Remarks: Average over 20 min for instrumental measurements.

9.2 Wave period
Range: 0 – 100 s. Reported resolution: 1 s. Mode: A.
Required uncertainty: 0.5 s. Sensor time constant: 0.5 s. Output averaging time: 20 min. Achievable uncertainty: 0.5 s.
Remarks: Average over 20 min for instrumental measurements.

9.3 Wave direction
Range: 0 – 360°. Reported resolution: 1°. Mode: A.
Required uncertainty: 10°. Sensor time constant: 0.5 s. Output averaging time: 20 min. Achievable uncertainty: 20°.
Remarks: Average over 20 min for instrumental measurements.

10. Evaporation

10.1 Amount of pan evaporation
Range: 0 – 100 mm. Reported resolution: 0.1 mm. Mode: T.
Required uncertainty: 0.1 mm for ≤ 5 mm; 2% for > 5 mm. Sensor time constant: n/a.

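The wind entries' remark that averages are computed over Cartesian components, rather than by averaging direction angles directly, can be illustrated with a short sketch. This is an illustrative Python fragment, not a procedure prescribed by this Guide (see Part III, Chapter 3, section 3.6 for the recommended treatment):

```python
import math

def mean_wind(speeds, directions_deg):
    """Vector-average wind: average the Cartesian (u, v) components,
    since direction angles wrap around at 360 degrees and cannot be
    averaged arithmetically (e.g. 350 deg and 10 deg average to north,
    not to 180 deg)."""
    u = [s * math.sin(math.radians(d)) for s, d in zip(speeds, directions_deg)]
    v = [s * math.cos(math.radians(d)) for s, d in zip(speeds, directions_deg)]
    um, vm = sum(u) / len(u), sum(v) / len(v)
    speed = math.hypot(um, vm)
    direction = math.degrees(math.atan2(um, vm)) % 360.0
    return speed, direction
```

For two equal-speed samples from 350° and 10°, the vector average correctly yields a northerly direction, with the mean speed slightly reduced by the directional spread.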
Notes:
1. Column 1 gives the basic variable.
2. Column 2 gives the common range for most variables; limits depend on local climatological conditions.
3. Column 3 gives the most stringent resolution as determined by the Manual on Codes (WMO-No. 306).
4. In column 4: I = Instantaneous: in order to exclude the natural small-scale variability and the noise, an average value over a period of 1 min is considered as a minimum and most suitable; averages over periods of up to 10 min are acceptable. A = Averaging: average values over a fixed period, as specified by the coding requirements. T = Totals: totals over a fixed period, as specified by coding requirements.


5. Column 5 gives the recommended measurement uncertainty requirements for general operational use, i.e. of Level II data according to FM 12, 13, 14, 15 and their BUFR equivalents. They have been adopted by all eight technical commissions and are applicable for synoptic, aeronautical, agricultural and marine meteorology, hydrology, climatology, etc. These requirements are applicable for both manned and automatic weather stations as defined in the Manual on the Global Observing System (WMO-No. 544). Individual applications may have less stringent requirements. The stated value of required measurement uncertainty represents the uncertainty of the reported value with respect to the true value and indicates the interval in which the true value lies with a stated probability. The recommended probability level is 95 per cent (k = 2), which corresponds to the 2σ level for a normal (Gaussian) distribution of the variable. The assumption that all known corrections are taken into account implies that the errors in reported values will have a mean value (or bias) close to zero. Any residual bias should be small compared with the stated measurement uncertainty requirement. The true value is the value which, under operational conditions, perfectly characterizes the variable to be measured/observed over the representative time interval, area and/or volume required, taking into account siting and exposure.
6. Columns 2 to 5 refer to the requirements established by the CBS Expert Team on Requirements for Data from Automatic Weather Stations in 2004.
7. Columns 6 to 8 refer to the typical operational performance established by the CIMO Expert Team on Surface Technology and Measurement Techniques in 2004.
8. Achievable measurement uncertainty (column 8) is based on sensor performance under nominal and recommended exposure that can be achieved in operational practice. It should be regarded as a practical aid to users in defining achievable and affordable requirements.
9. n/a = not applicable.
10. The term uncertainty has preference over accuracy, i.e. uncertainty is in accordance with ISO standards on the uncertainty of measurements (ISO, 1995).
11. Dewpoint temperature, relative humidity and air temperature are linked, and thus their uncertainties are linked. When averaging, preference is given to absolute humidity as the principal variable.
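The convention in note 5 — expanded uncertainty at the 95 per cent level with coverage factor k = 2 — can be illustrated numerically. The following Python sketch is for illustration only; the helper function and readings are invented for the example:

```python
import statistics

def expanded_uncertainty(readings, reference, k=2.0):
    """Bias and expanded uncertainty (coverage factor k = 2, roughly
    the 95 per cent level for a normal distribution) of a sensor
    compared against a reference value."""
    errors = [r - reference for r in readings]
    bias = statistics.mean(errors)   # should be close to zero after corrections
    u = statistics.stdev(errors)     # standard uncertainty, 1 sigma
    return bias, k * u

# Invented comparison of a thermometer against a 20.00 degC reference:
readings = [20.1, 19.9, 20.2, 20.0, 19.8, 20.1, 20.0, 19.9]
bias, U = expanded_uncertainty(readings, 20.00)
```

Here the residual bias is essentially zero, as note 5 assumes once known corrections are applied, and the reported value would be stated as lying within ±U of the true value with about 95 per cent probability.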





ANNEX 1.C STATION EXPOSURE DESCRIPTION

The accuracy with which an observation describes the state of a selected part of the atmosphere is not the same as the uncertainty of the instrument, because the value of the observation also depends on the instrument's exposure to the atmosphere. This is not a technical matter, so its description is the responsibility of the station observer or attendant. In practice, an ideal site with perfect exposure is seldom available and, unless the actual exposure is adequately documented, the reliability of observations cannot be determined (WMO, 2002). Station metadata should contain the following aspects of instrument exposure:
(a) Height of the instruments above the surface (or below it, for soil temperature);
(b) Type of sheltering and degree of ventilation for temperature and humidity;
(c) Degree of interference from other instruments or objects (masts, ventilators);
(d) Microscale and toposcale surroundings of the instrument, in particular:
(i) The state of the enclosure's surface, influencing temperature and humidity, and nearby major obstacles (buildings, fences, trees) and their size;
(ii) The degree of horizon obstruction for sunshine and radiation observations;
(iii) Surrounding terrain roughness and major vegetation, influencing the wind;
(iv) All toposcale terrain features such as small slopes, pavements and water surfaces;
(v) Major mesoscale terrain features, such as coasts, mountains or urbanization.

Most of these matters will be semi-permanent, but any significant changes (growth of vegetation, new buildings) should be recorded in the station logbook, and dated.

For documenting the toposcale exposure, a map with a scale not larger than 1:25 000, showing contours of ≈ 1 m elevation differences, is desirable. On this map the locations of buildings and trees (with height), surface cover and installed instruments should be marked. At the map edges, major distant terrain features (for example, built-up areas, woods, open water, hills) should be indicated. Photographs are useful if they are not merely close-ups of the instrument or shelter, but are taken at sufficient distance to show the instrument and its terrain background. Such photographs should be taken from all cardinal directions.

The necessary minimum metadata for instrument exposure can be provided by filling in the template given on the next page for every station in a network (see Figure 1.3). An example of how to do this is shown in WMO (2003b). The classes used here for describing terrain roughness are given in Part I, Chapter 5, of the Guide. A more extensive description of metadata matters is given in WMO (2004).
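The metadata items (a) to (d) above, together with the dated logbook of changes, can be captured in a simple structured record. The sketch below is a hypothetical illustration in Python; the class and field names are ours, not a WMO schema:

```python
from dataclasses import dataclass, field

@dataclass
class InstrumentExposure:
    """Hypothetical record of the minimum exposure metadata listed
    above; field names are illustrative, not a WMO standard."""
    height_above_surface_m: float   # negative for soil temperature depths
    sheltering: str                 # e.g. "Stevenson screen, natural ventilation"
    interference: str               # nearby masts, ventilators, other instruments
    surroundings: str               # microscale and toposcale description
    mesoscale_features: str         # coasts, mountains, urbanization
    logbook: list = field(default_factory=list)

    def record_change(self, date, note):
        """Dated logbook entry for significant changes
        (new buildings, growth of vegetation, ...)."""
        self.logbook.append((date, note))
```

Keeping the logbook inside the record mirrors the requirement that exposure changes be recorded and dated alongside the semi-permanent description.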


Figure 1.3. General template for station exposure metadata. The template provides fields for: update date; latitude, longitude and station elevation; a 0–200 m map grid on which the enclosure, buildings, roads, and trees and bushes are marked, with obstacle heights (m) and elevation contours; a radiation horizon diagram (slopes 1:6, 1:10 and 1:20; elevations 0°, 4° and 8°, around the compass from N through E, S and W); temperature and humidity (sensor height, artificial ventilation yes/no, surface cover under the screen, soil under the screen); precipitation (gauge rim height); wind (anemometer height, free-standing yes/no and, if not, building height, width and length); terrain roughness class towards N, E, S and W; and remarks.




REFERENCES AND FURTHER READING

Bureau International des Poids et Mesures/Comité Consultatif de Thermométrie, 1990: The International Temperature Scale of 1990 (ITS-90) (H. Preston-Thomas). Metrologia, 27, pp. 3–10.
Bureau International des Poids et Mesures, 1998: The International System of Units (SI). Seventh edition, BIPM, Sèvres/Paris.
Brooks, C.E.P. and N. Carruthers, 1953: Handbook of Statistical Methods in Meteorology. MO 538, Meteorological Office, London.
Eisenhart, C., 1963: Realistic evaluation of the precision and accuracy of instrument calibration systems. National Bureau of Standards–C, Engineering and Instrumentation, Journal of Research, Volume 67C, Number 2, April–June 1963.
International Civil Aviation Organization, 2002: World Geodetic System — 1984 (WGS-84) Manual. ICAO Doc 9674–AN/946, second edition, Quebec.
International Organization for Standardization, 1993a: International Vocabulary of Basic and General Terms in Metrology. Prepared by BIPM/ISO/OIML/IEC/IFCC/IUPAC and IUPAP, second edition, Geneva.
International Organization for Standardization, 1993b: ISO Standards Handbook: Quantities and Units. ISO 31:1992, third edition, Geneva.
International Organization for Standardization, 1995: Guide to the Expression of Uncertainty of Measurement. Published in the name of BIPM/IEC/IFCC/ISO/IUPAC/IUPAP and OIML, first edition, Geneva.
International Union of Pure and Applied Physics, 1987: Symbols, Units, Nomenclature and Fundamental Constants in Physics. SUNAMCO Document IUPAP-25 (E.R. Cohen and P. Giacomo), reprinted from Physica 146A, pp. 1–68.
Kok, C.J., 2000: On the Behaviour of a Few Popular Verification Scores in Yes/No Forecasting. Scientific Report WR-2000-04, KNMI, De Bilt.
Linacre, E., 1992: Climate Data and Resources – A Reference and Guide. Routledge, London, 366 pp.
Murphy, A.H. and R.W. Katz (eds.), 1985: Probability, Statistics and Decision Making in the Atmospheric Sciences. Westview Press, Boulder.
National Institute of Standards and Technology, 1995: Guide for the Use of the International System of Units (SI) (B.N. Taylor). NIST Special Publication No. 811, Gaithersburg, United States.
Natrella, M.G., 1966: Experimental Statistics. National Bureau of Standards Handbook 91, Washington DC.
Orlanski, I., 1975: A rational subdivision of scales for atmospheric processes. Bulletin of the American Meteorological Society, 56, pp. 527–530.
World Meteorological Organization, 1966: International Meteorological Tables (S. Letestu, ed.) (1973 amendment), WMO-No. 188.TP.94, Geneva.
World Meteorological Organization, 1970: Performance Requirements of Aerological Instruments (C.L. Hawson). Technical Note No. 112, WMO-No. 267.TP.151, Geneva.
World Meteorological Organization, 1981: Guide to Agricultural Meteorological Practices. Second edition, WMO-No. 134, Geneva.
World Meteorological Organization, 1983: Guide to Climatological Practices. Second edition, WMO-No. 100, Geneva (updates available at …).
World Meteorological Organization, 1988: Technical Regulations. Volume I, Appendix A, WMO-No. 49, Geneva.
World Meteorological Organization, 1989: Guide on the Global Observing System. WMO-No. 488, Geneva.
World Meteorological Organization, 1990: Guide on Meteorological Observation and Information Distribution Systems at Aerodromes. WMO-No. 731, Geneva.
World Meteorological Organization, 1992a: International Meteorological Vocabulary. Second edition, WMO-No. 182, Geneva.
World Meteorological Organization, 1992b: Manual on the Global Data-processing and Forecasting System. Volume I – Global Aspects, Appendix I-2, WMO-No. 485, Geneva.
World Meteorological Organization, 1993a: Siting and Exposure of Meteorological Instruments (J. Ehinger). Instruments and Observing Methods Report No. 55, WMO/TD-No. 589, Geneva.
World Meteorological Organization, 1993b: Weather Reporting. Volume A – Observing Stations, WMO-No. 9, Geneva.
World Meteorological Organization, 1994: Guide to Hydrological Practices. Fifth edition, WMO-No. 168, Geneva.



World Meteorological Organization, 2001: Lecture Notes for Training Agricultural Meteorological Personnel. Second edition, WMO-No. 551, Geneva.
World Meteorological Organization, 2002: Station exposure metadata needed for judging and improving the quality of observations of wind, temperature and other parameters (J. Wieringa and E. Rudel). Papers Presented at the WMO Technical Conference on Meteorological and Environmental Instruments and Methods of Observation (TECO-2002), Instruments and Observing Methods Report No. 75, WMO/TD-No. 1123, Geneva.
World Meteorological Organization, 2003a: Manual on the Global Observing System. Volume I – Global Aspects, WMO-No. 544, Geneva.
World Meteorological Organization, 2003b: Guidelines on Climate Metadata and Homogenization (P. Llansó, ed.). World Climate Data and Monitoring Programme (WCDMP) Series Report No. 53, WMO/TD-No. 1186, Geneva.


CHAPTER 2. MEASUREMENT OF TEMPERATURE

2.1 GENERAL

2.1.1 Definition

WMO (1992) defines temperature as a physical quantity characterizing the mean random motion of molecules in a physical body. Temperature is characterized by the behaviour whereby two bodies in thermal contact tend to an equal temperature. Thus, temperature represents the thermodynamic state of a body, and its value is determined by the direction of the net flow of heat between two bodies. In such a system, the body which overall loses heat to the other is said to be at the higher temperature. Defining the physical quantity temperature in relation to the "state of a body", however, is difficult. A solution is found by defining an internationally approved temperature scale based on universal freezing and triple points.[1] The current such scale is the International Temperature Scale of 1990 (ITS-90)[2] and its temperature is indicated by T90. For the meteorological range (–80 to +60°C) this scale is based on a linear relationship with the electrical resistance of platinum and the triple point of water, defined as 273.16 kelvin (BIPM, 1990).

For meteorological purposes, temperatures are measured for a number of media. The most common variable measured is air temperature (at various heights). Other variables are ground, soil, grass minimum and seawater temperature. WMO (1992) defines air temperature as "the temperature indicated by a thermometer exposed to the air in a place sheltered from direct solar radiation". Although this definition cannot be used as the definition of the thermodynamic quantity itself, it is suitable for most applications.

2.1.2 Units and scales

The thermodynamic temperature (T), with units of kelvin (K) (also defined as "kelvin temperature"), is the basic temperature. The kelvin is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water. The temperature (t), in degrees Celsius (or "Celsius temperature"), defined by equation 2.1, is used for most meteorological purposes (from the ice-point secondary reference in Table 2 in the annex):

t/°C = T/K – 273.15    (2.1)
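Equation 2.1 amounts to a fixed offset between the two scales. As a minimal numerical check (an illustrative Python fragment, not part of the Guide):

```python
def kelvin_to_celsius(T):
    """Equation 2.1: t/degC = T/K - 273.15."""
    return T - 273.15

def celsius_to_kelvin(t):
    """Inverse of equation 2.1."""
    return t + 273.15
```

For example, the triple point of water, 273.16 K, corresponds to 0.01°C, and the two functions are exact inverses of each other.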

A temperature difference of one degree Celsius (°C) is equal to one kelvin (K). Note that the unit K is used without the degree symbol.

In the thermodynamic scale of temperature, measurements are expressed as differences from absolute zero (0 K), the temperature at which the molecules of any substance possess no kinetic energy. The scale of temperature in general use since 1990 is the ITS-90 (see the annex), which is based on assigned values for the temperatures of a number of reproducible equilibrium states (see Table 1 in the annex) and on specified standard instruments calibrated at those temperatures. The ITS was chosen in such a way that the temperature measured against it is identical to the thermodynamic temperature, with any difference being within the present limits of measurement uncertainty. In addition to the defined fixed points of the ITS, other secondary reference points are available (see Table 2 in the annex). Temperatures of meteorological interest are obtained by interpolating between the fixed points by applying the standard formulae in the annex.

2.1.3 Meteorological requirements

General


Notes:
1. The authoritative body for this scale is the International Bureau of Weights and Measures/Bureau International des Poids et Mesures (BIPM), Sèvres (Paris); see http://www. BIPM's Consultative Committee for Thermometry (CCT) is the executive body responsible for establishing and realizing the ITS.
2. Practical information on ITS-90 can be found on the ITS-90 website:


Meteorological requirements for temperature measurements primarily relate to the following:
(a) The air near the Earth's surface;
(b) The surface of the ground;
(c) The soil at various depths;
(d) The surface levels of the sea and lakes;
(e) The upper air.

These measurements are required, either jointly or independently and locally or globally, for input to numerical weather prediction models, for hydrological and agricultural purposes, and as indicators of climatic variability. Local temperature also has direct physiological significance for the day-to-day activities of the world's population. Measurements of temperature may be required as continuous records or may be sampled at different time intervals. This chapter deals with requirements relating to (a), (b) and (c).

Accuracy requirements

The range, reported resolution and required uncertainty for temperature measurements are detailed in Part I, Chapter 1, of this Guide. In practice, it may not be economical to provide thermometers that meet the required performance directly. Instead, cheaper thermometers, calibrated against a laboratory standard, are used with corrections being applied to their readings as necessary. It is necessary to limit the size of the corrections to keep residual errors within bounds. Also, the operational range of the thermometer will be chosen to reflect the local climatic range. As an example, the table below gives an acceptable range of calibration and errors for thermometers covering a typical measurement range.

All temperature-measuring instruments should be issued with a certificate confirming compliance with the appropriate uncertainty or performance specification, or a calibration certificate that gives the corrections that must be applied to meet the required uncertainty. This initial testing and calibration should be performed by a national testing institution or an accredited calibration laboratory. Temperature-measuring instruments should also be checked subsequently at regular intervals, the exact apparatus used for this calibration being dependent on the instrument or sensor to be calibrated.

Response times of thermometers
For routine meteorological observations there is no advantage in using thermometers with a very small time-constant or lag coefficient, since the temperature of the air continually fluctuates up to one or two degrees within a few seconds. Thus, obtaining a representative reading with such a thermometer would require taking the mean of a number of readings, whereas a thermometer with a larger time-constant tends to smooth out the rapid fluctuations. Too long a time-constant, however, may result in errors when long-period changes of temperature occur. It is recommended that the time-constant, defined as the time required by the thermometer to register 63.2 per cent of a step change in air temperature, should be 20 s. The time-constant depends on the air-flow over the sensor.

Recording the circumstances in which measurements are taken
Thermometer characteristic requirements

Thermometer type | Span of scale (°C) | Range of calibration (°C) | Maximum error | Maximum difference between maximum and minimum correction within the range | Maximum variation of correction within any interval of 10°C
Ordinary | –30 to 45 | –30 to 40 | …

… 0°C and ice for t < 0°C, respectively of 0.5 and 0.1 K, for a relative humidity U of 50 per cent and a range of true air temperatures (where the dry-bulb reading is assumed to give the true air temperature).

Table 4.3. Error in derived relative humidity resulting from wet- and ice-bulb index errors ε(tx) for U = 50 per cent

Air temperature (°C)           –30  –20  –10    0   10   20   30   40   50
ε(U) (%) for ε(tx) = 0.5 K      60   27   14    8    5    4    3    2    2
ε(U) (%) for ε(tx) = 0.1 K      12    5    3    2    1  0.5  0.5  0.5    0

Precision platinum electrical resistance thermometers are widely used in place of mercury-in-glass thermometers, in particular where remote reading and continuous measurements are required. It is necessary to ensure that the devices, and the interfacing electrical circuits selected, meet the performance requirements. These are detailed in Part I, Chapter . Particular care should always be taken with regard to self-heating effects in electrical thermometers. The psychrometric formulae in Annex 4.B used for Assmann aspiration psychrometers are also valid if platinum resistance thermometers are used in place of the mercury-in-glass instruments, with different configurations of elements and thermometers. The formula for water on the wet bulb is also valid for some transversely ventilated psychrometers (WMO, 1989a).


Thermometer lag coefficients: To obtain the highest accuracy with a psychrometer it is desirable to arrange for the wet and dry bulbs to have approximately the same lag coefficient; with thermometers having the same bulb size, the wet bulb has an appreciably smaller lag than the dry bulb.
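The lag coefficient discussed above is the time-constant of a first-order response: after a step change in air temperature, a thermometer with time-constant τ has registered 1 − e⁻¹ ≈ 63.2 per cent of the step at t = τ. A short illustrative Python sketch, with invented numbers:

```python
import math

def first_order_response(T0, T_air, tau, t):
    """Temperature registered by a thermometer with lag coefficient
    (time-constant) tau, t seconds after a step change in air
    temperature from T0 to T_air."""
    return T_air + (T0 - T_air) * math.exp(-t / tau)

# After exactly one time-constant (here tau = 20 s), the thermometer
# has registered about 63.2 per cent of a 5 K step:
T = first_order_response(T0=20.0, T_air=25.0, tau=20.0, t=20.0)
fraction_registered = (T - 20.0) / (25.0 - 20.0)
```

Matching the lag coefficients of the wet and dry bulbs means both sensors smooth the air-temperature fluctuations in the same way, so their difference (the wet-bulb depression) is not biased by transients.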







Errors relating to ventilation: Errors due to insufficient ventilation become much more serious through the use of inappropriate humidity tables (see the sections covering individual psychrometer types).

Errors due to excessive covering of ice on the wet bulb: Since a thick coating of ice will increase the lag of the thermometer, it should be removed immediately by dipping the bulb into distilled water.

Errors due to contamination of the wet-bulb sleeve or to impure water: Large errors may be caused by the presence of substances that alter the vapour pressure of water. The wet bulb with its covering sleeve should be washed at regular intervals in distilled water to remove soluble impurities. This procedure is more frequently necessary in some regions than others, for example, at or near the sea or in areas subject to air pollution.

Errors due to heat conduction from the thermometer stem to the wet-bulb system: The conduction of heat from the thermometer stem to the wet bulb will reduce the wet-bulb depression and lead to determinations of humidity that are too high. The effect is most pronounced at low relative humidity but can be effectively eliminated by extending the wet-bulb sleeve at least  cm beyond the bulb up the stem of the thermometer.

The Assmann aspirated psychrometer

Description

Two mercury-in-glass thermometers, mounted vertically side by side in a chromium- or nickel-plated polished metal frame, are connected by ducts to an aspirator. The aspirator may be driven by a spring or an electric motor. One thermometer bulb has a well-fitted muslin wick which, before use, is moistened with distilled water. Each thermometer is located inside a pair of coaxial metal tubes, highly polished inside and out, which screen the bulbs from external thermal radiation. The tubes are all thermally insulated from each other.

A WMO international intercomparison of Assmann-type psychrometers from 10 countries (WMO, 1989a) showed that there is good agreement between dry- and wet-bulb temperatures of psychrometers with dimensional specifications close to the original specification, and with aspiration rates above . m s–1. Not all commercially available instruments fully comply. A more detailed discussion is found in WMO (1989a).

The performance of the Assmann psychrometer in the field may be as good as the achievable accuracy stated in Table 4.1, and with great care it can be significantly improved. Annex 4.B lists standard formulae for the computation of measures of humidity using an Assmann psychrometer,[4] which are the bases of some of the other artificially ventilated psychrometers, in the absence of well-established alternatives.

[4] Recommended by the Commission for Instruments and Methods of Observation at its tenth session (1989).

Observation procedure

The wick, which must be free of grease, is moistened with distilled water. Dirty or crusty wicks should be replaced. Care should be taken not to introduce a water bridge between the wick and the radiation shield. The mercury columns of the thermometers should be inspected for breaks, which should be closed up, or the thermometer should be replaced.

The instrument is normally operated with the thermometers held vertically. The thermometer stems should be protected from solar radiation by turning the instrument so that the lateral shields are in line with the sun. The instrument should be tilted so that the inlet ducts open into the wind, but care should be taken so that solar radiation does not fall on the thermometer bulbs. A wind screen is necessary in very windy conditions, when the rotation of the aspirator is otherwise affected.

The psychrometer should be in thermal equilibrium with the surrounding air. At air temperatures above 0°C, at least three measurements at 1 min intervals should be taken following an aspiration period. Below 0°C it is necessary to wait until the freezing process has finished, and to observe whether there is water or ice on the wick. During the freezing and thawing processes the wet-bulb temperature remains constant at 0°C. In the case of outdoor measurements, several measurements should be taken and the average used. Thermometer readings should be made with a resolution of 0.1 K or better.

A summary of the observation procedure is as follows:
(a) Moisten the wet bulb;
(b) Wind the clockwork motor (or start the electric motor);
(c) Wait  or  min or until the wet-bulb reading has become steady;
(d) Read the dry bulb;
(e) Read the wet bulb;
(f) Check the reading of the dry bulb.

Exposure and siting

with sensibly the wet-bulb temperature and in sufficient (but not excessive) quantity. If no wick is used, the wet bulb should be protected from dirt by enclosing the bulb in a small glass tube between readings. It is recommended that screen psychrometers be artificially aspirated. Both thermometers should be aspirated at an air speed of about  m s–1. Both spring-wound and electrically driven aspirators are in common use. The air should be drawn in horizontally across the bulbs, rather than vertically, and exhausted in such a way as to avoid recirculation. The performance of the screen psychrometer may be much worse than that shown in Table 4.1, especially in light winds if the screen is not artificially ventilated. The psychrometric formulae given in section 4..1.1 apply to screen psychrometers, but the coefficients are quite uncertain. A summary of some of the formulae in use is given by Bindon (1965). If there is artificial ventilation at  m s–1 or more across the wet bulb, the values given in Annex 4.B may be applied, with a psychrometer coefficient of 6.5 · 10–4 K–1 for water. However, values from 6.50 to 6.78 · 10–4 are in use for wet bulbs above 0°C, and 5.70 to 6.5 · 10–4 for below 0°C. For a naturally ventilated screen psychrometer, coefficients in use range from 7.7 to 8.0 · 10–4 above freezing and 6.8 to 7.. 10–4 for below freezing when there is some air movement in the screen, which is probably nearly always the case. However, coefficients up to 1 · 10–4 for water and 10.6 · 10–4 for ice have been advocated for when there is no air movement. The psychrometer coefficient appropriate for a particular configuration of screen, shape of wet bulb and degree of ventilation may be determined by comparison with a suitable working or reference standard, but there will be a wide scatter in the data, and a very large experiment would be necessary to obtain a stable result. 
Even when a coefficient has been obtained by such an experiment, the confidence limits for any single observation will be wide, and there would be little justification for departing from established national practices. special observation procedures

Observations should be made in an open area with the instrument either suspended from a clamp or attached using a bracket to a thin post, or held with one hand at arm’s length with the inlets slightly inclined into the wind. The inlets should be at a height of 1. to  m above ground for normal measurements of air temperature and humidity. Great care should be taken to prevent the presence of the observer or any other nearby sources of heat and water vapour, such as the exhaust pipe of a motor vehicle, from having an influence on the readings. Calibration

The ventilation system should be checked regularly, at least once per month. The calibration of the thermometers should also be checked regularly. The two may be compared together, with both thermometers measuring the dry-bulb temperature. Comparison with a certified reference thermometer should be performed at least once a year. maintenance

Between readings, the instrument should be stored in an unheated room or be otherwise protected from precipitation and strong insolation. When not in use, the instrument should be stored indoors in a sturdy packing case such as that supplied by the manufacturer. 4.2.3 screen psychrometer description

Two mercury-in-glass thermometers are mounted vertically in a thermometer screen. The diameter of the sensing bulbs should be about 10 mm. One of the bulbs is fitted with a wet-bulb sleeve, which should fit closely to the bulb and extend at least 0 mm up the stem beyond it. If a wick and water reservoir are used to keep the wet-bulb sleeve in a moist condition, the reservoir should preferably be placed to the side of the thermometer and with the mouth at the same level as, or slightly lower than, the top of the thermometer bulb. The wick should be kept as straight as possible and its length should be such that water reaches the bulb

The procedures described in section 4..1.5 apply to the screen psychrometer. In the case of a naturally aspirated wet bulb, provided that the



water reservoir has about the same temperature as the air, the correct wet-bulb temperature will be attained approximately 15 min after fitting a new sleeve; if the water temperature differs substantially from that of the air, it may be necessary to wait for 0 min. exposure and siting


The exposure and siting of the screen are described in Part I, Chapter 2.

4.2.4 Sling or whirling psychrometers

Description

A small portable type of whirling or sling psychrometer consists of two mercury-in-glass thermometers mounted on a sturdy frame, which is provided with a handle and spindle, located at the furthest end from the thermometer bulbs, by means of which the frame and thermometers may be rotated rapidly about a horizontal axis. The wet-bulb arrangement varies according to individual design. Some designs shield the thermometer bulbs from direct insolation, and these are to be preferred for meteorological measurements. The psychrometric formulae in Annex 4.B may be used.

Observation procedure

The following guidelines should be applied:
(a) All instructions with regard to the handling of Assmann aspirated psychrometers apply also to sling psychrometers;
(b) Sling psychrometers lacking radiation shields for the thermometer bulbs should be shielded from direct insolation in some other way;
(c) Thermometers should be read at once after aspiration ceases because the wet-bulb temperature will begin to rise immediately, and the thermometers are likely to be subject to insolation effects.

4.2.5 Heated psychrometer

The principle of the heated psychrometer is that the water-vapour content of an air mass does not change if it is heated. This property may be exploited to the advantage of the psychrometer, avoiding the need to maintain an ice bulb under freezing conditions. A heated psychrometer would be suitable for automatic weather stations.

Description

Air is drawn into a duct where it passes over an electrical heating element and then into a measuring chamber containing both dry- and wet-bulb thermometers and a water reservoir. The heating element control circuit ensures that the air temperature does not fall below a certain level, which might typically be 10°C. The temperature of the water reservoir is maintained in a similar way. Thus, provided that the wet-bulb depression is less than 10 K, neither the water in the reservoir nor the water at the wick will freeze, and continuous operation of the psychrometer is ensured even when the air temperature is below 0°C. At temperatures above 10°C the heater may be automatically switched off, at which point the instrument reverts to normal psychrometric operation. Electrical thermometers are used so that they may be entirely enclosed within the measuring chamber, without the need for visual readings. A second dry-bulb thermometer is located at the inlet of the duct to provide a measurement of the ambient air temperature, so that the ambient relative humidity may be determined. The psychrometric thermometer bulbs are axially aspirated at an air velocity in the region of 3 m s–1.

Exposure and siting

The instrument itself should be mounted outside a thermometer screen. The air inlet, where ambient air temperature is measured, should be inside the screen.

4.2.6 The WMO reference psychrometer

The reference psychrometer and procedures for its operation are described in WMO (1992). The wet- and dry-bulb elements are enclosed in an aspirated shield, for use as a free-standing instrument. Its significant characteristic is that the psychrometer coefficient is calculable from the theory of heat and mass exchanges at the wet bulb; it differs from the coefficient of other psychrometers, with a value of 6.53 · 10–4 K–1 at 50 per cent relative humidity, 20°C and 1 000 hPa. Its wet-bulb temperature is very close to the theoretical value (see Annex 4.A, paragraphs 18 and 19). This is achieved by ensuring that the evaporation at the wet bulb is very efficient and that extraneous heating is minimized. The nature of the air-flow over the wet bulb is controlled by careful shaping of the duct and the bulb, and by controlling the ventilation rate. The double shield is highly reflective externally and blackened on the inside, and the thermometer elements are insulated and separated by a shield. The shields and the wet-bulb element (which contains the thermometer) are made of stainless steel to minimize thermal conduction. The procedures for the use of the reference psychrometer ensure that the wet bulb is completely free of grease, even in the monomolecular layers that always arise from handling any part of the apparatus with the fingers. This is probably the main reason for the close relation of the coefficient to the theoretical value, and for its difference from the psychrometer coefficients of other instruments. The reference psychrometer is capable of great accuracy: 0.38 per cent uncertainty in relative humidity at 50 per cent relative humidity and 20°C. It has also been adopted as the WMO reference psychrometer. It is designed for use in the field but is not suitable for routine use. It should be operated only by staff accustomed to very precise laboratory work. Its use as a reference instrument is discussed in section 4.9.7.
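The role of the psychrometer coefficient can be illustrated with a short calculation. The sketch below is illustrative only: it uses a Magnus-type approximation for the saturation vapour pressure (the formulation in Annex 4.B differs in detail), the aspirated coefficient of 6.53 · 10–4 K–1, and hypothetical function names.

```python
import math

A_ASPIRATED = 6.53e-4  # psychrometer coefficient for water, K^-1

def saturation_vapour_pressure(t_c):
    """Saturation vapour pressure over water, hPa (Magnus-type fit;
    an approximation standing in for the Guide's formulation)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(t_dry, t_wet, p_hpa, coeff=A_ASPIRATED):
    """RH (per cent) from the psychrometric formula
    e = e_w(T_wet) - A * p * (T_dry - T_wet)."""
    e = saturation_vapour_pressure(t_wet) - coeff * p_hpa * (t_dry - t_wet)
    return 100.0 * e / saturation_vapour_pressure(t_dry)

# Spread in the derived RH across the range of screen-psychrometer
# coefficients quoted earlier (6.50 to 6.78 * 10^-4 above 0 degC)
spread = abs(relative_humidity(20.0, 15.0, 1000.0, 6.50e-4)
             - relative_humidity(20.0, 15.0, 1000.0, 6.78e-4))
```

For a 5 K wet-bulb depression at 20°C and 1 000 hPa, the quoted coefficient range alone shifts the derived relative humidity by only about half a per cent, which helps explain why there is little justification for departing from established national practice.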

4.3 The hair hygrometer

4.3.1 General considerations

Any absorbing material tends to equilibrium with its environment in terms of both temperature and humidity. The water-vapour pressure at the surface of the material is determined by the temperature and the amount of water bound by the material. Any difference between this pressure and the water-vapour pressure of the surrounding air will be equalized by the exchange of water molecules. The change in the length of hair has been found to be a function primarily of the change in relative humidity with respect to liquid water (both above and below an air temperature of 0°C), with an increase of about 2 to 2.5 per cent when the humidity changes from 0 to 100 per cent. By rolling the hairs to produce an elliptical cross-section and by dissolving out the fatty substances with alcohol, the ratio of the surface area to the enclosed volume increases and yields a decreased lag coefficient, which is particularly relevant for use at low air temperatures. This procedure also results in a more linear response function, although the tensile strength is reduced. For accurate measurements, a single hair element is to be preferred, but a bundle of hairs is commonly used to provide a degree of ruggedness. Chemical treatment with barium sulphide (BaS) or sodium sulphide (Na2S) yields further linearity of response. The hair hygrograph or hygrometer is considered to be a satisfactory instrument for use in situations or during periods where extreme and very low humidities are seldom or never found. The mechanism of the instrument should be as simple as possible, even if this makes it necessary to have a non-linear scale. This is especially important in industrial regions, since air pollutants may act on the surface of the moving parts of the mechanism and increase friction between them. The rate of response of the hair hygrometer is very dependent on air temperature. At –10°C the lag of the instrument is approximately three times greater than the lag at 10°C. For air temperatures between 0 and 30°C and relative humidities between 20 and 80 per cent, a good hygrograph should indicate 90 per cent of a sudden change in humidity within about 3 min. A good hygrograph in perfect condition should be capable of recording relative humidity at moderate temperatures with an uncertainty of ±3 per cent. At low temperatures, the uncertainty will be greater. Using hair pre-treated by rolling (as described above) is a requirement if useful information is to be obtained at low temperatures.

4.3.2 Description

The detailed mechanism of hair hygrometers varies according to the manufacturer. Some instruments incorporate a transducer to provide an electrical signal, and these may also provide a linearizing function so that the overall response of the instrument is linear with respect to changes in relative humidity. The most commonly used hair hygrometer is the hygrograph. This employs a bundle of hairs held under slight tension by a small spring and connected to a pen arm in such a way as to magnify a change in the length of the bundle. A pen at the end of the pen arm is in contact with a paper chart fitted around a metal cylinder and registers the angular displacement of the arm.
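The temperature dependence of the hair's lag, noted in the general considerations above, can be sketched with a first-order response model. Everything here is illustrative: the exponential interpolation of the time constant and the 150 s reference value are assumptions, anchored only to the statement that the lag at –10°C is about three times that at 10°C.

```python
import math

def time_constant(t_air_c, tau_at_10c=150.0):
    """Illustrative lag time constant (s); chosen to triple between
    +10 degC and -10 degC, interpolated exponentially (an assumption)."""
    k = math.log(3.0) / 20.0  # growth rate per degC of cooling
    return tau_at_10c * math.exp(k * (10.0 - t_air_c))

def hair_reading(rh_ambient, rh_initial, elapsed_s, t_air_c):
    """First-order (exponential) approach of the hair element toward
    the ambient relative humidity after a step change."""
    tau = time_constant(t_air_c)
    return rh_ambient + (rh_initial - rh_ambient) * math.exp(-elapsed_s / tau)
```

Reaching 90 per cent of a step change takes about 2.3 time constants, so a hygrograph that settles within a few minutes at 10°C would, under this model, need roughly three times as long at –10°C.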



The cylinder rotates about its axis at a constant rate determined by a mechanical clock movement. The rate of rotation is usually one revolution either per week or per day. The chart has a scaled time axis that extends round the circumference of the cylinder and a scaled humidity axis parallel to the axis of the cylinder. The cylinder normally stands vertically. The mechanism connecting the pen arm to the hair bundle may incorporate specially designed cams that translate the non-linear extension of the hair in response to humidity changes into a linear angular displacement of the arm. The hair used in hygrometers may be of synthetic fibre. Where human hair is used, it is normally first treated as described in section 4.3.1 to improve both the linearity of its response and its response lag, although this does result in a lower tensile strength. The pen arm and clock assembly are normally housed in a box with glass panels which allow the registered humidity to be observed without disturbing the instrument, and with one end open to allow the hair element to be exposed in free space outside the limits of the box. The sides of the box are separate from the solid base, but the end opposite the hair element is attached to it by a hinge. This arrangement allows free access to the clock cylinder and hair element. The element may be protected by an open mesh cage.

4.3.3 Observation procedure

The hair hygrometer should always be tapped lightly before being read in order to free any tension in the mechanical system. The hygrograph should, as far as possible, not be touched between changes of the charts except to make time marks. Both the hygrometer and the hygrograph can normally be read to the nearest 1 per cent of relative humidity. Attention is drawn to the fact that the hair hygrometer measures relative humidity with respect to saturation over liquid water even at air temperatures below 0°C. The humidity of the air may change very rapidly and, therefore, accurate setting of time marks on a hygrograph is very important. In making the marks, the pen arm should be moved only in the direction of decreasing humidity on the chart. This is done so that the hairs are slackened by the displacement and, to bring the pen back to its correct position, the restoring force is applied by the tensioning spring. However, the effect of hysteresis may be evidenced in the failure of the pen to return to its original position.

4.3.4 Exposure and siting

The hygrograph or hygrometer should be exposed in a thermometer screen. Ammonia is very destructive to natural hair, so exposure in the immediate vicinity of stables and industrial plants using ammonia should be avoided. When used in polar regions, the hygrograph should preferably be exposed in a special thermometer screen which provides the instrument with sufficient protection against precipitation and drifting snow. For example, a cover for the thermometer screen can be made of fine-meshed net (Mullergas) as a precautionary measure to prevent the accumulation of snow crystals on the hairs and the bearing surfaces of the mechanical linkage. This method can be used only if there is no risk of the net being wetted by melting snow crystals.

4.3.5 Sources of error

Changes in zero offset

For various reasons which are poorly understood, the hygrograph is liable to change its zero. The most likely cause is that excess tension has been induced in the hairs. For instance, the hairs may be stretched if time marks are made in the direction of increasing humidity on the chart or if the hygrograph mechanism sticks during decreasing humidity. The zero may also change if the hygrograph is kept in very dry air for a long time, but the change may be reversed by placing the instrument in a saturated atmosphere for a sufficient length of time.

Errors due to contamination of the hair

Most kinds of dust will cause appreciable errors in observations (perhaps as much as 15 per cent relative humidity). In most cases this may be eliminated, or at least reduced, by cleaning and washing the hairs. However, the harmful substances found in dust may also be destructive to hair (see section 4.3.4).

Hysteresis

Hysteresis is exhibited both in the response of the hair element and in the recording mechanism of the hair hygrometer. Hysteresis in the recording



mechanism is reduced through the use of a hair bundle, which allows a greater loading force to overcome friction. It should be remembered that the displacement magnification of the pen arm lever applies also to the frictional force between the pen and paper; overcoming this force requires a proportionately higher tension in the hair. The correct setting of the tensioning spring is also required to minimize hysteresis, as is the correct operation of all parts of the transducing linkage. The main fulcrum and any linearizing mechanism in the linkage introduce much of the total friction. Hysteresis in the hair element is normally a short-term effect related to the absorption-desorption processes and is not a large source of error once vapour-pressure equilibrium is established (see section 4.3.5.1 in respect of prolonged exposure at low humidity).

4.3.6 Calibration and comparisons

The readings of a hygrograph should be checked as frequently as is practical. In the case where wet- and dry-bulb thermometers are housed in the same thermometer screen, these may be used to provide a comparison whenever suitable steady conditions prevail, but otherwise field comparisons have limited value due to the difference in response rate of the instruments. Accurate calibration can only be obtained through the use of an environmental chamber and by comparison with reference instruments. The 100 per cent humidity point may be checked, preferably indoors with a steady air temperature, by surrounding the instrument with a saturated cloth (though the correct reading will not be obtained if a significant mass of liquid water droplets forms on the hairs). The ambient indoor humidity may provide a low relative humidity checkpoint for comparison against a reference aspirated psychrometer. A series of readings should be obtained. Long-term stability and bias may be appraised by presenting comparisons with a reference aspirated psychrometer in terms of a correlation function.

4.3.7 Maintenance

Observers should be encouraged to keep the hygrometer clean. The hair should be washed at frequent intervals using distilled water on a soft brush to remove accumulated dust or soluble contaminants. At no time should the hair be touched by fingers. The bearings of the mechanism should be kept clean and a small amount of clock oil should be applied occasionally. The bearing surfaces of any linearizing mechanism will contribute largely to the total friction in the linkage, which may be minimized by polishing the surfaces with graphite. This procedure may be carried out by using a piece of blotting paper rubbed with a lead pencil. With proper care, the hairs may last for several years in a temperate climate when not subject to severe atmospheric pollution. Recalibration and adjustment will be required when hairs are replaced.

4.4 The chilled-mirror dewpoint hygrometer

4.4.1 General considerations

Theory

The dewpoint (or frost-point) hygrometer is used to measure the temperature at which moist air, when cooled, reaches saturation and a deposit of dew (or ice) can be detected on a solid surface, which usually is a mirror. The deposit is normally detected optically. The principle of the measurement is described below. The thermodynamic dewpoint is defined for a plane surface of pure water. In practice, water droplets have curved surfaces, over which the saturation vapour pressure is higher than for the plane surface (known as the Kelvin effect). Hydrophobic contaminants will exaggerate this effect, while soluble ones will have the opposite effect and lower the saturation vapour pressure (the Raoult effect). The Kelvin and Raoult effects (which, respectively, raise and lower the apparent dewpoint) are minimized if the critical droplet size adopted is large rather than small; this reduces the curvature effect directly and reduces the Raoult effect by lowering the concentration of any soluble contaminant.

Principles

When moist air at temperature T, pressure p and mixing ratio rw (or ri) is cooled, it eventually reaches its saturation point with respect to a free water



surface (or to a free ice surface at lower temperatures) and a deposit of dew (or frost) can be detected on a solid non-hygroscopic surface. The temperature of this saturation point is called the thermodynamic dewpoint temperature Td (or the thermodynamic frost-point temperature Tf). The corresponding saturation vapour pressure with respect to water e′w (or ice e′i) is a function of Td (or Tf), as shown in the following equations:

e′w(p, Td) = f(p) · ew(Td) = r · p / (0.621 98 + r)

e′i(p, Tf) = f(p) · ei(Tf) = r · p / (0.621 98 + r)

The hygrometer measures Td or Tf. Despite the great dynamic range of moisture in the troposphere, this instrument is capable of detecting both very high and very low concentrations of water vapour by means of a thermal sensor alone. Cooling using a low-boiling-point liquid has been used but is now largely superseded, except for very low water-vapour concentrations. It follows from the above that it must also be possible to determine whether the deposit is supercooled liquid or ice when the surface temperature is at or below freezing point. The chilled-mirror hygrometer is used for meteorological measurements and as a reference instrument both in the field and in the laboratory.

4.4.2 Description

Sensor assembly

The most widely used systems employ a small polished-metal reflecting surface, cooled electrically using a Peltier-effect device. The sensor consists of a thin metallic mirror of small (2 to 5 mm) diameter that is thermally regulated using a cooling assembly (and possibly a heater), with a temperature sensor (thermocouple or platinum resistance thermometer) embedded on the underside of the mirror. The mirror should have a high thermal conductance, optical reflectivity and corrosion resistance, combined with a low permeability to water vapour. Suitable materials include gold, rhodium-plated silver, chromium-plated copper and stainless steel. The mirror should be equipped with a (preferably automatic) device for detecting contaminants that may increase or decrease the apparent dewpoint (see the description of auxiliary systems below), so that they may be removed.

Optical detection assembly

An electro-optical system is usually employed to detect the formation of condensate and to provide the input to the servo-control system regulating the temperature of the mirror. A narrow beam of light is directed at the mirror at an angle of incidence of about 55°. The light source may be incandescent but is now commonly a light-emitting diode. In simple systems, the intensity of the directly reflected light is detected by a photodetector that regulates the cooling and heating assembly through a servo-control. The specular reflectivity of the surface decreases as the thickness of the deposit increases; cooling should cease while the deposit is thin, with a reduction in reflectance in the range of 5 to 40 per cent. More elaborate systems use an auxiliary photodetector which detects the light scattered by the deposit; the two detectors together are capable of very precise control. A second, uncooled, mirror may be used to improve the control system. Greatest precision is obtained by controlling the mirror to a temperature at which condensate neither accumulates nor dissipates; in practice, however, the servo-system will oscillate around this temperature. The response time of the mirror to heating and cooling is critical in respect of the amplitude of the oscillation, and should be of the order of 1 to 2 s. The air-flow rate is also important for maintaining a stable deposit on the mirror. It is possible to determine the temperature at which condensation occurs with a precision of 0.05 K. It is feasible, but a time-consuming and skilled task, to observe the formation of droplets by using a microscope and to regulate the mirror temperature under manual control.

Thermal control assembly

A Peltier-effect thermo-junction device provides a simple reversible heat pump; the polarity of direct-current energization determines whether heat is pumped to, or from, the mirror. The device is bonded to, and in good thermal contact with, the underside of the mirror. For very low dewpoints, a multistage Peltier device may be required. Thermal control is achieved by using an electrical servo-system that takes as input the signal from the optical detector subsystem. Modern systems operate under microprocessor control. A low-boiling-point fluid, such as liquid nitrogen, may be used to provide cooling, but this technique is no longer widely used. Similarly, electrical resistance wire may be used for heating but has been superseded with the advent of small Peltier devices.

Temperature display system

The mirror temperature, as measured by the electrical thermometer embedded beneath the mirror surface, is presented to the observer as the dewpoint of the air sample. Commercial instruments normally include an electrical interface for the mirror thermometer and a digital display, but may also provide digital and analogue electrical outputs for use with data-logging equipment. A chart recorder is particularly useful for monitoring the performance of the instrument in the case where the analogue output provides a continuous registration of the mirror thermometer signal but the digital display does not.

Auxiliary systems

A microscope may be incorporated to provide a visual method of discriminating between supercooled water droplets and ice crystals for mirror temperatures below 0°C. Some instruments have a detector mounted on the mirror surface to provide an automatic procedure for this purpose (for example, a capacitive sensor), while others employ a method based on reflectance. A microprocessor-based system may incorporate algorithms to calculate and display relative humidity. In this case, it is important that the instrument discriminate correctly between a water and an ice deposit. Many instruments provide an automatic procedure for minimizing the effects of contamination. This may be a regular heating cycle in which volatile contaminants are evaporated and removed in the air stream. Systems that clean the mirror automatically by means of a wiper are also in use. For meteorological measurements, and in most laboratory applications, a small pump is required to draw the sampled air through the measuring chamber. A regulating device is also required to set the flow at a rate that is consistent with the stable operation of the mirror temperature servo-control system and at an acceptable rate of response to changes in humidity. The optimum flow rate is dependent upon the moisture content of the air sample and is normally within the range of 0.5 to 1 l min–1.

4.4.3 Observation procedure

The correct operation of a dewpoint hygrometer depends upon achieving an appropriate volume air-flow rate through the measuring chamber. The setting of a regulator for this purpose, usually a throttling device located downstream of the measuring chamber, is likely to require adjustment to accommodate diurnal variations in air temperature. Adjustment of the air-flow will disturb the operation of the hygrometer, and it may even be advisable to initiate a heating cycle. Both measures should be taken in sufficient time for stable operation to be achieved before a reading is taken. The amount of time required will depend upon the control cycle of the individual instrument, and the manufacturer's instructions should be consulted for guidance on the air-flow rate to be set and for details of the control cycle. The condition of the mirror should be checked frequently, and the mirror cleaned as necessary. The stable operation of the instrument does not necessarily imply that the mirror is clean. It should be washed with distilled water and dried carefully by wiping it with a soft cloth or cotton dabstick to remove any soluble contaminant. Care must be taken not to scratch the surface of the mirror, most particularly where the surface has a thin plating to protect the substrate or where an ice/liquid detector is incorporated. If an air filter is not in use, cleaning should be performed at least daily. If an air filter is in use, its condition should be inspected at each observation. The observer should take care not to stand next to the air inlet or to allow the outlet to become blocked. For readings at, or below, 0°C the observer should determine whether the mirror condensate is supercooled water or ice. If no automatic indication is given, the mirror must be observed. From time to time the operation of any automatic system should be verified. An uncertainty of ±0.3 K over a wide dewpoint range (–60 to 50°C) is specified for the best instruments.
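The equations in the theory section can be inverted to recover the humidity variables from the measured mirror temperature. A minimal sketch, assuming a Magnus-type fit for ew (the Guide's reference formulation differs slightly) and taking the enhancement factor f(p) as unity; the function names are hypothetical.

```python
import math

def e_sat_water(t_c):
    """Saturation vapour pressure over water, hPa (Magnus-type fit)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def mixing_ratio_from_dewpoint(t_d_c, p_hpa, f=1.0):
    """Invert e' = r p / (0.621 98 + r): from the mirror (dewpoint)
    temperature and pressure, recover the mixing ratio r (kg/kg).
    f is the enhancement factor, approximated here as 1."""
    e = f * e_sat_water(t_d_c)
    return 0.62198 * e / (p_hpa - e)
```

At a dewpoint of 10°C and 1 000 hPa this gives a mixing ratio of roughly 7.7 g/kg, illustrating how a single temperature measurement characterizes the water-vapour content.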




4.4.4 Exposure and siting

The criteria for the siting of the sensor unit are similar to those for any aspirated hygrometer, although less stringent than for either a psychrometer or a relative humidity sensor, given that the dew or frost point of an air sample is unaffected by changes in the ambient temperature provided that it remains above the dewpoint at all times. For this reason, a temperature screen is not required. The sensor should be exposed in an open space and may be mounted on a post, within a protective housing structure, with an air inlet at the required level. An air-sampling system is required. This is normally a small pump that must draw air from the outlet port of the measuring chamber and eject it away from the inlet duct. Recirculation of the air-flow should be avoided as this represents a poor sampling technique, although under stable operation the water-vapour content at the outlet should be effectively identical to that at the inlet. Recirculation may be avoided by fixing the outlet above the inlet, although this may not be effective under radiative atmospheric conditions, when a negative air-temperature lapse rate exists. An air filter should be provided for continuous outdoor operation. It must be capable of allowing an adequate throughflow of air without a large blocking factor, as this may result in a significant drop in air pressure and affect the condensation temperature in the measuring chamber. A sintered metal filter may be used in this application to capture all but the smallest aerosol particles. A metal filter has the advantage that it may be heated easily by an electrical element in order to keep it dry under all conditions. It is more robust than the membrane-type filter and more suited to passing the relatively high air-flow rates required by the chilled-mirror method as compared with the sorption method. On the other hand, a metallic filter may be more susceptible to corrosion by atmospheric pollutants than some membrane filters.
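The warning about the filter's blocking factor can be made quantitative. At constant mixing ratio the chamber vapour pressure scales with total pressure, so a pressure drop depresses the observed condensation temperature. The sketch below again uses a Magnus-type approximation and is illustrative only.

```python
import math

def e_sat(t_c):
    # Magnus-type saturation vapour pressure over water, hPa (approximation)
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def dewpoint_from_e(e_hpa):
    # Inverted Magnus-type formula: vapour pressure (hPa) -> dewpoint (degC)
    x = math.log(e_hpa / 6.112)
    return 243.12 * x / (17.62 - x)

def dewpoint_shift(t_d_c, pressure_ratio):
    """Change in observed dewpoint when chamber pressure falls to
    pressure_ratio of the ambient value, at constant mixing ratio."""
    return dewpoint_from_e(e_sat(t_d_c) * pressure_ratio) - t_d_c
```

Under these assumptions, a 1 per cent pressure drop at a dewpoint of 10°C lowers the condensation temperature by roughly 0.15 K, which is comparable to the uncertainty quoted for the best instruments.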
4.4.5 Calibration

Regular comparisons against a reference instrument, such as an Assmann psychrometer or another chilled-mirror hygrometer, should be made, as the operation of a field chilled mirror is subject to a number of influences which may degrade its performance. An instrument continuously in the field should be the subject of weekly check measurements. As the opportunity arises, its operation at both dew and frost points should be verified. When the mirror temperature is below 0°C, the deposit should be inspected visually, if this is possible, to determine whether it is of supercooled water or ice.

A useful check is to compare the mirror temperature measurement with the air temperature while the thermal control system of the hygrometer is inactive. The instrument should be aspirated, and the air temperature measured at the mouth of the hygrometer air intake. This check is best performed under stable, non-condensing conditions. In bright sunshine, the sensor and duct should be shaded and allowed to come to equilibrium. The aspiration rate may be increased for this test.

An independent field calibration of the mirror thermometer interface may be performed by simulating the thermometer signal. In the case of a platinum resistance thermometer, a standard platinum resistance simulation box, or a decade resistance box and a set of appropriate tables, may be used. A special simulator interface for the hygrometer control unit may also be required.


4.5 The lithium chloride heated condensation hygrometer (dew cell)

4.5.1 General considerations

4.5.1.1 Principles

The physical principles of the heated salt-solution method are discussed earlier in this chapter. The equilibrium vapour pressure at the surface of a saturated lithium chloride solution is exceptionally low. As a consequence, a solution of lithium chloride is extremely hygroscopic under typical conditions of surface atmospheric humidity; if the ambient vapour pressure exceeds the equilibrium vapour pressure of the solution, water vapour will condense over it (for example, at 0°C water vapour condenses over a plane surface of a saturated solution of lithium chloride when the relative humidity is as low as 15 per cent).

A thermodynamically self-regulating device may be achieved if the solution is heated directly by passing an electrical current through it from a constant-voltage device. An alternating current should be used to prevent polarization of the solution. As the electrical conductivity decreases, so will the heating current, and an equilibrium point will be reached whereby a constant temperature is maintained; any cooling of the solution will result in the condensation of water vapour, thus causing an increase in conductivity and an increase in heating current, which will reverse the cooling trend. Heating beyond the balance point will evaporate water until the consequent fall in conductivity reduces the electrical heating to the point where it is exceeded by heat losses, and cooling ensues.

It follows from the above that there is a lower limit to the ambient vapour pressure that may be measured in this way at any given temperature. Below this value, the salt solution would have to be cooled in order for water vapour to condense. This would be equivalent to the chilled-mirror method except that, in the latter case, condensation takes place at a lower temperature when saturation is achieved with respect to a pure water surface, namely, at the ambient dewpoint.

A degree of uncertainty is inherent in the method due to the existence of four different hydrates of lithium chloride. At certain critical temperatures, two of the hydrates may be in equilibrium with the aqueous phase, and the equilibrium temperature achieved by heating is affected according to the hydrate transition that follows. The most serious ambiguity for meteorological purposes occurs for ambient dewpoint temperatures below –12°C. For an ambient dewpoint of –23°C, the potential difference in equilibrium temperature, according to which one of the two hydrate-solution transitions takes place, results in an uncertainty of ±3.5 K in the derived dewpoint value.

4.5.1.2 Description

The dew-cell hygrometer measures the temperature at which the equilibrium vapour pressure for a saturated solution of lithium chloride is equal to the ambient water-vapour pressure. Empirical transformation equations, based on saturation vapour-pressure data for lithium chloride solution and for pure water, provide for the derivation of the ambient water-vapour pressure and dewpoint with respect to a plane surface of pure water. The dewpoint temperature range of –12 to 25°C results in dew-cell temperatures in the range of 17 to 71°C.

Sensors with direct heating

The sensor consists of a tube, or bobbin, with a resistance thermometer fitted axially within. The external surface of the tube is covered with a glass fibre material (usually tape wound around and along the tube) that is soaked with an aqueous solution of lithium chloride, sometimes combined with potassium chloride. Bifilar silver or gold wire is wound over the covering of the bobbin, with equal spacing between the turns. An alternating electrical current source is connected to the two ends of the bifilar winding; this is commonly derived from the normal electrical supply (50 or 60 Hz). The lithium chloride solution is electrically conductive to a degree determined by the concentration of solute. A current passes between adjacent bifilar windings, which act as electrodes, and through the solution. The current heats the solution, which increases in temperature.

Except under conditions of extremely low humidity, the ambient vapour pressure will be higher than the equilibrium vapour pressure over the solution of lithium chloride at ambient air temperature, and water vapour will condense onto the solution. As the solution is heated by the electrical current, a temperature will eventually be reached above which the equilibrium vapour pressure exceeds the ambient vapour pressure, evaporation will begin, and the concentration of the solution will increase.

An operational equilibrium temperature exists for the instrument, depending upon the ambient water-vapour pressure. Above the equilibrium temperature, evaporation will increase the concentration of the solution, and the electrical current and the heating will decrease, allowing heat losses to cause the temperature of the solution to fall. Below the equilibrium temperature, condensation will decrease the concentration of the solution, and the electrical current and the heating will increase and cause the temperature of the solution to rise. At the equilibrium temperature, neither evaporation nor condensation occurs because the equilibrium vapour pressure and the ambient vapour pressure are equal.

In practice, the equilibrium temperature measured is influenced by individual characteristics of sensor construction and has a tendency to be higher than that predicted from equilibrium vapour-pressure data for a saturated solution of lithium chloride. However, reproducibility is sufficiently good to allow the use of a standard transfer function for all sensors constructed to a given specification. Strong ventilation affects the heat-transfer characteristics of the sensor, and fluctuations in ventilation lead to unstable operation.

In order to minimize the risk of excessive current when switching on the hygrometer (as the resistance of the solution at ambient temperature is rather low), a current-limiting device, in the form of a small lamp, is normally connected to the heater element. The lamp is chosen so that, at normal bobbin operating currents, the filament resistance will be low enough for the hygrometer to function properly, while the operating current for the incandescent lamp (even allowing for a bobbin offering no electrical resistance) is below a value that might damage the heating element.

The equilibrium vapour pressure for saturated lithium chloride depends upon the hydrate that is in equilibrium with the aqueous solution. In the range of solution temperatures corresponding to dewpoints of –12 to 41°C, the monohydrate normally occurs. Below –12°C, the dihydrate forms, and above 41°C, anhydrous lithium chloride forms. Close to the transition points, the operation of the hygrometer is unstable and the readings ambiguous. However, the –12°C lower dewpoint limit may be extended to –30°C by the addition of a small amount of potassium chloride (KCl).

Sensors with indirect heating

Improved accuracy, compared with the arrangement described above, may be obtained when a solution of lithium chloride is heated indirectly. The conductance of the solution is measured between two platinum electrodes and provides control of a heating coil.

4.5.2 Operational procedure

Readings of the equilibrium temperature of the bobbin are taken and a transfer function is applied to obtain the dewpoint temperature. Disturbing the sensor should be avoided, as the equilibrium temperature is sensitive to changes in heat losses at the bobbin surface. The instrument should be energized continuously. If it is allowed to cool below the equilibrium temperature for any length of time, condensation will occur and the electrolyte will drip off. Check measurements with a working reference hygrometer must be taken at regular intervals and the instrument must be cleaned and retreated with a lithium chloride solution, as necessary.

A current-limiting device should be installed if not provided by the manufacturer; otherwise the high current may damage the sensor when the instrument is powered up.

4.5.3 Exposure and siting

The hygrometer should be located in an open area in a housing structure which protects it from the effects of wind and rain. A system for providing a steady aspiration rate is required. The heat from the hygrometer may affect other instruments; this should be taken into account when choosing its location. The operation of the instrument will be affected by atmospheric pollutants, particularly substances which dissociate in solution and produce a significant ion concentration.

4.5.4 Sources of error

An electrical resistance thermometer is required for measuring the equilibrium temperature; the usual sources of error for thermometry are present. The equilibrium temperature achieved is determined by the properties of the solute, and significant amounts of contaminant will have an unpredictable effect. Variations in aspiration affect the heat-exchange mechanisms and, thus, the stability of operation of the instrument. A steady aspiration rate is required for stable operation.

4.5.5 Calibration

A field calibration should be performed at least once a month, by means of comparison with a working standard instrument. Calibration of the bobbin thermometer and temperature display should be performed regularly, as for other operational thermometers and display systems.

4.5.6 Maintenance

The lithium chloride should be renewed regularly. This may be required once a month, but will depend upon the level of atmospheric pollution. When renewing the solution, the bobbin should be washed with distilled water and fresh solution subsequently applied. The housing structure should be cleaned at the same time.
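As noted under the operational procedure, the dewpoint is obtained by applying a transfer function to the measured bobbin equilibrium temperature. The sketch below stands in for that empirical function with a hypothetical two-point linear calibration; real instruments use manufacturer-supplied empirical transformation equations, and the calibration pairs here are illustrative only.

```python
# Hypothetical two-point calibration: (bobbin equilibrium temp °C, dewpoint °C).
# Real dew cells use an empirical, manufacturer-supplied transfer function,
# not a straight line; these endpoints are for illustration only.
CAL = [(17.0, -12.0), (71.0, 25.0)]

def dewpoint_from_bobbin(t_bobbin_c):
    """Linearly interpolate a dewpoint from the bobbin temperature."""
    (t0, d0), (t1, d1) = CAL
    return d0 + (d1 - d0) * (t_bobbin_c - t0) / (t1 - t0)

print(round(dewpoint_from_bobbin(44.0), 2))  # midpoint of the sketch range
```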



Fresh solution may be prepared by mixing five parts by weight of anhydrous lithium chloride with 100 parts by weight of distilled water; this is equivalent to 1 g of anhydrous lithium chloride to 20 ml of water. The temperature-sensing apparatus should be maintained in accordance with the recommendations for electrical instruments used for making air temperature measurements, bearing in mind the difference in the range of temperatures measured.
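The 5:100 by-weight recipe above reduces to a one-line helper for preparing a batch of solution; the function name is an illustrative assumption.

```python
def licl_grams(water_ml):
    """Grams of anhydrous LiCl for a given volume of distilled water,
    following the 5 parts LiCl per 100 parts water (by weight) recipe,
    i.e. 1 g per 20 ml."""
    return water_ml * 5.0 / 100.0

print(licl_grams(200.0))  # 10.0 g for 200 ml of water
```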

4.6 Electrical resistive and capacitive hygrometers

4.6.1 General considerations

Certain hygroscopic materials exhibit changes in their electrical properties in response to a change in the ambient relative humidity, with only a small temperature dependence. Electrical relative humidity sensors are increasingly used for remote-reading applications, particularly where a direct display of relative humidity is required. Since many of them have very non-linear responses to changes in humidity, the manufacturers often supply them with special data-processing and display systems.

4.6.2 Electrical resistance

Sensors made from chemically treated plastic material having an electrically conductive surface layer on the non-conductive substrate may be used for meteorological purposes. The surface resistivity varies according to the ambient relative humidity. The process of adsorption, rather than absorption, is dominant because the humidity-sensitive part of such a sensor is restricted to the surface layer. As a result, this type of sensor is capable of responding rapidly to a change in ambient humidity. This class of sensor includes various electrolytic types in which the availability of conductive ions in a hygroscopic electrolyte is a function of the amount of adsorbed water vapour. The electrolyte may take various physical forms, such as liquid or gel solutions, or an ion-exchange resin. The change in impedance to an alternating current, rather than to a direct current, is measured in order to avoid polarization of the electrolyte. A low-frequency supply can be used, given that the DC resistance is to be measured, and it is therefore possible to employ quite long leads between the sensor and its electrical interface.

4.6.3 Electrical capacitance

The method is based upon the variation of the dielectric properties of a solid, hygroscopic material in relation to the ambient relative humidity. Polymeric materials are most widely used for this purpose. The water bound in the polymer alters its dielectric properties owing to the large dipole moment of the water molecule. The active part of the humidity sensor consists of a polymer foil sandwiched between two electrodes to form a capacitor, and the electrical impedance of this capacitor provides a measure of relative humidity. The nominal value of capacitance may be only a few or several hundred picofarads, depending upon the size of the electrodes and the thickness of the dielectric. This will, in turn, influence the range of excitation frequency used to measure the impedance of the device, which is normally at least several kilohertz and, thus, requires that short connections be made between the sensor and the electrical interface to minimize the effect of stray capacitance. Therefore, capacitance sensors normally have the electrical interface built into the probe, and it is necessary to consider the effect of environmental temperature on the performance of the circuit components.

4.6.4 Observation procedure

Sensors based on changes in the electronic properties of hygroscopic materials are frequently used for the remote reading of relative humidity and also for automatic weather stations.

4.6.5 Exposure and siting

The sensors should be mounted inside a thermometer screen. The manufacturer's advice regarding the mounting of the actual sensor should be followed. The use of protective filters is mandatory. Direct contact with liquid water will seriously harm sensors which use a hygroscopic electrolyte as the sensing element; great care should be taken to prevent liquid water from reaching the sensitive element of such sensors.
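Because these sensors respond non-linearly to humidity, the interface electronics or logging software must apply a calibration curve, as noted under the general considerations. The sketch below uses piecewise-linear interpolation over hypothetical capacitance-to-humidity calibration points; real coefficients come from the manufacturer or a calibration run.

```python
import bisect

# Hypothetical calibration points (capacitance in pF, relative humidity in %).
# Real probes ship with manufacturer-determined curves; these values are
# purely illustrative.
CAL_POINTS = [(170.0, 10.0), (185.0, 40.0), (205.0, 75.0), (220.0, 100.0)]

def rh_from_capacitance(c_pf):
    """Piecewise-linear interpolation of relative humidity from capacitance."""
    caps = [c for c, _ in CAL_POINTS]
    i = bisect.bisect_left(caps, c_pf)
    i = min(max(i, 1), len(CAL_POINTS) - 1)   # clamp to the nearest segment
    (c0, r0), (c1, r1) = CAL_POINTS[i - 1], CAL_POINTS[i]
    return r0 + (r1 - r0) * (c_pf - c0) / (c1 - c0)

print(round(rh_from_capacitance(195.0), 1))
```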





4.6.6 Calibration

Field and laboratory calibrations should be carried out as for hair hygrometers. Suitable auxiliary equipment to enable checks by means of salt solutions is available for most sensors of this type.

4.6.7 Maintenance

Observers should be encouraged to maintain the hygrometer in clean conditions (see the general guidance on hygrometer maintenance in this chapter).


4.7 Hygrometers using absorption of electromagnetic radiation

The water molecule absorbs electromagnetic radiation (EMR) in a range of wavebands and discrete wavelengths; this property can be exploited to obtain a measure of the molecular concentration of water vapour in a gas. The most useful regions of the electromagnetic spectrum, for this purpose, lie in the ultraviolet and infrared regions, and the techniques are therefore often classified as optical hygrometry or, more correctly, EMR absorption hygrometry. The method makes use of measurements of the attenuation of radiation in a waveband specific to water-vapour absorption, along the path between a source of the radiation and a receiving device. There are two principal methods for determining the degree of attenuation of the radiation:
(a) Transmission of narrow-band radiation at a fixed intensity to a calibrated receiver: The most commonly used source of radiation is hydrogen gas; the emission spectrum of hydrogen includes the Lyman-alpha line at 121.6 nm, which coincides with a water-vapour absorption band in the ultraviolet region where there is little absorption by other common atmospheric gases. The measuring path is typically a few centimetres in length;
(b) Transmission of radiation at two wavelengths, one of which is strongly absorbed by water vapour and the other of which is either not absorbed or only very weakly absorbed: If a single source is used to generate the radiation at both wavelengths, the ratio of their emitted intensities may be accurately known, so that the attenuation at the absorbed wavelength can be determined by measuring the ratio of their intensities at the receiver. The most widely used source for this technique is a tungsten lamp, filtered to isolate a pair of wavelengths in the infrared region. The measuring path is normally greater than 1 m.

Both types of EMR absorption hygrometer require frequent calibration and are more suitable for measuring changes in vapour concentration than absolute levels. The most widespread application of the EMR absorption hygrometer is to monitor very high frequency variations in humidity, since the method does not require the detector to achieve vapour-pressure equilibrium with the sample. The time constant of an optical hygrometer is typically just a few milliseconds. The use of optical hygrometers remains restricted to research activities.
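The attenuation measurement described above can be reduced to a vapour concentration through the Beer-Lambert law. In this sketch the absorption coefficient, path length and emitted-intensity ratio are illustrative placeholders; a real instrument obtains them by calibration, which is why the text stresses frequent recalibration.

```python
import math

# Beer-Lambert sketch for a two-wavelength absorption hygrometer.
# All three constants below are hypothetical values for illustration;
# real instruments determine them by calibration.
K_ABS = 0.30          # mass absorption coefficient (m^2 g^-1), illustrative
PATH_M = 1.0          # measuring path length (m)
EMITTED_RATIO = 1.0   # I0_abs / I0_ref, known when a single source is used

def vapour_density(received_abs, received_ref):
    """Water-vapour density (g m^-3) from the received intensity ratio,
    via transmittance = exp(-K_ABS * density * PATH_M)."""
    transmittance = (received_abs / received_ref) / EMITTED_RATIO
    return -math.log(transmittance) / (K_ABS * PATH_M)

print(round(vapour_density(0.5, 1.0), 3))
```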



Chemical agents are widely used in the measurement of humidity. The properties of such agents should always be made known to the personnel handling them. All chemicals should be kept in secure and clearly labelled containers and stored in an appropriate environment. Instructions concerning the use of toxic materials may be prescribed by local authorities. Saturated salt solutions are widely used in the measurement of humidity. The notes that follow give some guidance for the safe use of some commonly used salts:
(a) Barium chloride (BaCl2): Colourless crystals; very soluble in water; stable, but may emit toxic fumes in a fire; no hazardous reaction with water, acids, bases, oxidizers or combustible materials; ingestion causes nausea, vomiting, stomach pains and diarrhoea; harmful if inhaled as dust and if it comes into contact with the skin; irritating to the eyes; treat with copious amounts of water and obtain medical attention if ingested;
(b) Calcium chloride (CaCl2): Colourless crystals; deliquescent; very soluble in water, dissolves with an increase in heat; will initiate exothermic polymerization of methyl vinyl ether; can react with zinc to liberate hydrogen; no hazardous reactions with acids, bases, oxidizers or combustibles; irritating to the skin, eyes and




respiratory system; ingestion causes gastric irritation; ingestion of large amounts can lead to hypercalcaemia, dehydration and renal damage; treat with copious amounts of water and obtain medical attention;
(c) Lithium chloride (LiCl): Colourless crystals; stable if kept dry; very soluble in water; may emit toxic fumes in a fire; ingestion may affect the ionic balance of the blood, leading to anorexia, diarrhoea, vomiting, dizziness and central nervous system disturbances; kidney damage may result if sodium intake is low (provide plenty of drinking water and obtain medical attention); no hazardous reactions with water, acids, bases, oxidizers or combustibles;



(d) Magnesium nitrate (Mg(NO3)2): Colourless crystals; deliquescent; very soluble in water; may ignite combustible material; can react vigorously with deoxidizers and can decompose spontaneously in dimethylformamide; may emit toxic fumes in a fire (fight the fire with a water spray); ingestion of large quantities can have fatal effects (provide plenty of drinking water and obtain medical attention); may irritate the skin and eyes (wash with water);
(e) Potassium nitrate (KNO3): White crystals or crystalline powder; very soluble in water; stable, but may emit toxic fumes in a fire (fight the fire with a water spray); ingestion of large quantities causes vomiting, but it is

Table 4.4. Standard instruments for the measurement of humidity

                                            Dewpoint temperature           Relative humidity
Standard instrument                         Range (°C)   Uncertainty (K)   Range (%)   Uncertainty (%)

Primary standard
  Requirement                               –60 to –15   0.3
                                            –15 to 40    0.1               5 to 100    0.2
  Gravimetric hygrometer                    –60 to –35   0.25
                                            –35 to 35    0.03
                                            35 to 60     0.25
  Standard two-temperature
    humidity generator                      –75 to –15   0.25
                                            –15 to 30    0.1
                                            30 to 80     0.2
  Standard two-pressure
    humidity generator                      –75 to 30    0.2               5 to 100    0.2

Secondary standard
  Requirement                               –80 to –15   0.75
                                            –15 to 40    0.25              5 to 100
  Chilled-mirror hygrometer                 –60 to 40    0.15
  Reference psychrometer                                                   5 to 100

Reference standard
  Requirement                               –80 to –15   1.0
                                            –15 to 40    0.3               5 to 100    1.5
  Reference psychrometer                                                   5 to 100    0.6
  Chilled-mirror hygrometer                 –60 to 40

Working standard
  Requirement                               –15 to 40                      5 to 100    2
  Assmann psychrometer                      –10 to 25                      40 to 90    1
  Chilled-mirror hygrometer                 –10 to 30

rapidly excreted in urine (provide plenty of drinking water); may irritate the eyes (wash with water); no hazardous reaction with water, acids, bases, oxidizers or combustibles;
(f) Sodium chloride (NaCl): Colourless crystals or white powder; very soluble in water; stable; no hazardous reaction with water, acids, bases, oxidizers or combustibles; ingestion of large amounts may cause diarrhoea, nausea, vomiting, deep and rapid breathing and convulsions (in severe cases, obtain medical attention).


Advice concerning the safe use of mercury is given in Part I, Chapter 3.


4.9 Standard instruments and calibration

4.9.1 Principles involved in the calibration of hygrometers

Precision in the calibration of humidity sensors entails special problems, to a great extent owing to the relatively small quantity of water vapour which can exist in an air sample at normal temperatures, but also due to the general difficulty of isolating and containing gases and, more particularly, vapour. An ordered hierarchy of international traceability in humidity standards is only now emerging. An absolute standard for humidity (namely, a realization of the physical definition for the quantity of humidity) can be achieved by gravimetric hygrometry. The reference psychrometer (within its limited range) is also a form of primary standard, in that its performance is calculable. The calibration of secondary, reference and working standards involves several steps. Table 4.4 shows a summary of humidity standard instruments and their performances.

A practical field calibration is most frequently done by means of well-designed aspirated psychrometers and dewpoint sensors as working standards. These working standards must be traceable to the higher levels of standards by careful comparisons. Any instrument used as a standard must be individually calibrated for all variables involved in calculating humidity (air temperature, wet-bulb temperature, dewpoint temperature, and so forth). Other factors affecting performance, such as air-flow, must also be checked.

4.9.2 Calibration intervals and methods

Regular calibration is required for all humidity sensors in the field. For chilled-mirror and heated dewpoint hygrometers that use a temperature detector, calibration can be checked whenever a regular maintenance routine is performed. Comparison with a working standard, such as an Assmann psychrometer, should be performed at least once a month. The use of a standard type of aspirated psychrometer, such as the Assmann, as a working standard has the advantage that its integrity can be verified by comparing the dry- and wet-bulb thermometers, and that adequate aspiration may be expected from a healthy-sounding fan. The reference instrument should itself be calibrated at an interval appropriate to its type.

Saturated salt solutions can be applied with sensors that require only a small-volume sample. A very stable ambient temperature is required, and it is difficult to be confident about their use in the field. When using salt solutions for control purposes, it should be borne in mind that the nominal humidity value given for the salt solution itself is not traceable to any primary standard.

4.9.3 Laboratory calibration

Laboratory calibration is essential for maintaining accuracy in the following ways: (a) Field and working standard instruments: Laboratory calibration of field and working standard instruments should be carried out on the same regular basis as for other operational thermometers. For this purpose, the chilled-mirror sensor device may be considered separately from the control unit. The mirror thermometer should be calibrated independently and the control unit should be calibrated on the same regular basis as other items of precision electronic equipment. The calibration of a field instrument in a humidity generator is not strictly necessary if the components have been calibrated separately, as described previously. The correct operation of an instrument may be verified under stable room conditions by comparison with a reference instrument, such as an Assmann psychrometer or a standard chilled-mirror hygrometer. If the field instrument incorporates an ice detector, the correct operation of this system should be verified.

(b)


Reference and standard instruments: Laboratory calibration of reference and standard instruments requires a precision humidity generator and a suitable transfer standard hygrometer. Two-pressure and two-temperature humidity generators are able to deliver a suitable controlled flow of air at a predetermined temperature and dewpoint. The calibration should be performed at least every 12 months and over the full range of the reference application for the instrument. The calibration of the mirror thermometer and the temperature display system should be performed independently at least once every 12 months.

4.9.4 Primary standards

Gravimetric hygrometry

and allowed to expand isothermally in a second chamber at a lower pressure P. Both chambers are maintained at the same temperature in an oil bath. The relative humidity of the water vapour-gas mixture is straightforwardly related to the total pressures in each of the two chambers through Dalton’s law of partial pressures; the partial pressure e’ of the vapour in the low-pressure chamber will have the same relation to the saturation vapour pressure e’w as the total pressure in the high-pressure saturator has to the total pressure in the low-pressure chamber. Thus, the relative humidity Uw is given by: Uw = 100 · e’/e’w = 100 · P1/P (4.5)


The relation also holds for the solid phase if the gas is saturated with respect to ice at pressure P1: Ui = 100 · e’/e’i = 100 · P1/P (4.6)

The gravimetric method yields an absolute measure of the water-vapour content of an air sample in terms of its humidity mixing ratio. This is obtained by first removing the water vapour from the sample using a known mass of a drying agent, such as anhydrous phosphorous pentoxide (PO5) or magnesium perchlorate (Mg(ClO4)). The mass of the water vapour is determined by weighing the drying agent before and after absorbing the vapour. The mass of the dry sample is determined either by weighing (after liquefaction to render the volume of the sample manageable) or by measuring its volume (and having knowledge of its density). The complexity of the apparatus required to accurately carry out the procedure described limits the application of this method to the laboratory environment. In addition, a substantial volume sample of air is required for accurate measurements to be taken and a practical apparatus requires a steady flow of the humid gas for a number of hours, depending upon the humidity, in order to remove a sufficient mass of water vapour for an accurate weighing measurement. As a consequence, the method is restricted to providing an absolute calibration reference standard. Such an apparatus is found mostly in national calibration standards laboratories. dynamic two-pressure standard humidity generator

dynamic two-temperature standard humidity generator

This laboratory apparatus provides a stream of humid gas at temperature T 1 having a dew- or frost-point temperature T  . Two temperaturecontrolled baths, each equipped with heat exchangers and one with a saturator containing either water or ice, are used first to saturate the air-stream at temperature T1 and then to heat it isobarically to temperature T  . In practical designs, the air-stream is continuously circulated to ensure saturation. Test instruments draw off air at temperature T and a flow rate that is small in proportion to the main circulation. 4.9.5 secondary standards

A secondary standard instrument should be carefully maintained and removed from the calibration laboratory only for the purpose of calibration with a primary standard or for intercomparison with other secondary standards. Secondary standards may be used as transfer standards from the primary standards. A chilled-mirror hygrometer may be used as a secondary standard instrument under controlled conditions of air temperature, humidity and pressure. For this purpose, it should be calibrated from a recognized accredited laboratory, giving uncertainty limits throughout the operational range of the instrument. This calibration must be directly traceable to a primary standard and should be renewed at an appropriate interval (usually once every 1 months).

This laboratory apparatus serves to provide a source of humid gas whose relative humidity is determined on an absolute basis. A stream of the carrier gas is passed through a saturating chamber at pressure P1



General considerations for chilled-mirror hygrometers are discussed in section 4.4. This method presents a fundamental technique for determining atmospheric humidity. Provided that the instrument is maintained and operated correctly, following the manufacturer’s instructions, it can provide a primary measurement of dew or frost point within limits of uncertainty determined by the correspondence between the mirror surface temperature at the appropriate point of the condensation/evaporation cycle and the temperature registered by the mirror thermometer at the observation time. The Kelvin and Raoult effects upon the condensation temperature must be taken into consideration, and any change of the air pressure resulting from the sampling technique must be taken into account by using the equations given in section 4.4.1.

4.9.6 Working standards (and field reference instruments)

A chilled-mirror hygrometer or an Assmann psychrometer may be used as a working standard for comparisons under ambient conditions in the field or the laboratory. For this purpose, it is necessary to have performed comparisons at least at the reference standard level. The comparisons should be performed at least once every 12 months under stable room conditions. The working standard will require a suitable aspiration device to sample the air.

4.9.7 The WMO reference psychrometer

This type of psychrometer is essentially a primary standard because its performance is calculable. However, its main use is as a highly accurate reference instrument, specifically for type-testing other instrument systems in the field. It is intended for use as a free-standing instrument, alongside the screen or other field instruments, and must be made precisely to its general specification and operated by skilled staff experienced in precise laboratory work; careful attention should be given to aspiration and to preventing the wet bulb from being contaminated by contact with fingers or other objects. There are, however, simple tests by which the readings may be validated at any time, and these should be used frequently during operation. The psychrometer’s description and operating instructions are given in WMO (1992).

4.9.8 Saturated salt solutions

Vessels containing saturated solutions of appropriate salts may be used to calibrate relative humidity sensors. Commonly used salts and their saturation relative humidities at 25°C are as follows:

Barium chloride (BaCl2): 90.3 per cent
Sodium chloride (NaCl): 75.3 per cent
Magnesium nitrate (Mg(NO3)2): 52.9 per cent
Calcium chloride (CaCl2): 29.0 per cent
Lithium chloride (LiCl): 11.1 per cent

It is important that the surface area of the solution is large compared with that of the sensor element and the enclosed volume of air, so that equilibrium may be achieved quickly; an airtight access port is required for the test sensor. The temperature of the vessel should be measured and maintained at a constant level, as the saturation humidity for most salts has a significant temperature coefficient. Care should be taken when using saturated salt solutions. The degree of toxicity and corrosivity of salt solutions should be known to the personnel dealing with them. The salts listed above may all be used quite safely, but it is nevertheless important to avoid contact with the skin, and to avoid ingestion and splashing into the eyes. The salts should always be kept in secure and clearly labelled containers which detail any hazards involved. Care should be taken when dissolving calcium chloride crystals in water, as much heat is evolved. Section 4.8 deals with chemical hazards in greater detail.

Although the use of saturated salt solutions provides a simple method to adjust some (relative) humidity sensors, such adjustment cannot be considered a traceable calibration of the sensors. The (nominal) values of salt solutions have, at present, no general traceability to reference standards. Measurements from sensors adjusted by means of the saturated salt solution method should always be checked against calibration standards after adjustment.
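The saturated salt solution method described above amounts to comparing sensor readings against a set of fixed reference points. A minimal sketch in Python, using the nominal saturation humidities from the text (valid near 25°C); the function and variable names are illustrative, not from this Guide:

```python
# Nominal saturation relative humidities (% RH) of saturated salt
# solutions near 25 degC, as tabulated in the text above.
SALT_RH_25C = {
    "BaCl2": 90.3,      # barium chloride
    "NaCl": 75.3,       # sodium chloride
    "Mg(NO3)2": 52.9,   # magnesium nitrate
    "CaCl2": 29.0,      # calcium chloride
    "LiCl": 11.1,       # lithium chloride
}

def sensor_errors(readings):
    """Return reading minus nominal reference RH for each salt tested.

    readings: dict mapping salt name -> sensor reading (% RH) taken after
    the sensor has equilibrated in the closed vessel.
    """
    return {salt: readings[salt] - SALT_RH_25C[salt] for salt in readings}

# Hypothetical readings from a sensor under test at two reference points
errors = sensor_errors({"NaCl": 77.1, "LiCl": 10.5})
```

As the text stresses, deviations found this way indicate only adjustment offsets; they do not constitute a traceable calibration.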



ANNEX 4.A
DEFINITIONS AND SPECIFICATIONS OF WATER VAPOUR IN THE ATMOSPHERE
(adapted from the Technical Regulations (WMO-No. 49), Volume I, Appendix B)

(1) The mixing ratio r of moist air is the ratio of the mass mv of water vapour to the mass ma of dry air with which the water vapour is associated:

r = mv/ma (4.A.1)

(2) The specific humidity, mass concentration or moisture content q of moist air is the ratio of the mass mv of water vapour to the mass mv + ma of moist air in which the mass of water vapour mv is contained:

q = mv/(mv + ma) (4.A.2)

(3) Vapour concentration (density of water vapour in a mixture) or absolute humidity: for a mixture of water vapour and dry air, the vapour concentration ρv is defined as the ratio of the mass of vapour mv to the volume V occupied by the mixture:

ρv = mv/V (4.A.3)

(4) Mole fraction of the water vapour of a sample of moist air: the mole fraction xv of the water vapour of a sample of moist air, composed of a mass ma of dry air and a mass mv of water vapour, is defined by the ratio of the number of moles of water vapour (nv = mv/Mv) to the total number of moles of the sample nv + na, where na indicates the number of moles of dry air (na = ma/Ma) of the sample concerned. This gives:

xv = nv/(na + nv) (4.A.4)

or:

xv = r/(0.621 98 + r) (4.A.5)

where r is merely the mixing ratio (r = mv/ma) of the water vapour of the sample of moist air.

(5) The vapour pressure e′ of water vapour in moist air at total pressure p and with mixing ratio r is defined by:

e′ = r · p/(0.621 98 + r) = xv · p (4.A.6)

(6) Saturation: Moist air at a given temperature and pressure is said to be saturated if its mixing ratio is such that the moist air can coexist in neutral equilibrium with an associated condensed phase (liquid or solid) at the same temperature and pressure, the surface of separation being plane.

(7) Saturation mixing ratio: The symbol rw denotes the saturation mixing ratio of moist air with respect to a plane surface of the associated liquid phase. The symbol ri denotes the saturation mixing ratio of moist air with respect to a plane surface of the associated solid phase. The associated liquid and solid phases referred to consist of almost pure water and almost pure ice, respectively, there being some dissolved air in each.

(8) Saturation vapour pressure in the pure phase: The saturation vapour pressure ew of pure aqueous vapour with respect to water is the pressure of the vapour when in a state of neutral equilibrium with a plane surface of pure water at the same temperature and pressure; similarly for ei with respect to ice; ew and ei are temperature-dependent functions only, namely:

ew = ew(T) (4.A.7)

ei = ei(T) (4.A.8)

(9) Mole fraction of water vapour in moist air saturated with respect to water: The mole fraction of water vapour in moist air saturated with respect to water, at pressure p and temperature T, is the mole fraction xvw of the water vapour of a sample of moist air, at the same pressure p and the same temperature T, that is in stable equilibrium in the presence of a plane surface of water containing the amount of dissolved air corresponding to equilibrium. Similarly, xvi will be used to indicate the saturation mole fraction with respect to a plane surface of ice containing the amount of dissolved air corresponding to equilibrium.

(10) Saturation vapour pressure of moist air: The saturation vapour pressure with respect to water e′w of moist air at pressure p and temperature T is defined by:

e′w = rw · p/(0.621 98 + rw) = xvw · p (4.A.9)

Similarly, the saturation vapour pressure with respect to ice e′i of moist air at pressure p and temperature T is defined by:

e′i = ri · p/(0.621 98 + ri) = xvi · p (4.A.10)

(11) Relations between saturation vapour pressures of the pure phase and of moist air: In the meteorological range of pressure and temperature the following relations hold with an error of 0.5 per cent or less:

e′w = ew (4.A.11)

e′i = ei (4.A.12)

(12) The thermodynamic dewpoint temperature Td of moist air at pressure p and with mixing ratio r is the temperature at which moist air, saturated with respect to water at the given pressure, has a saturation mixing ratio rw equal to the given mixing ratio r.

(13) The thermodynamic frost-point temperature Tf of moist air at pressure p and mixing ratio r is the temperature at which moist air, saturated with respect to ice at the given pressure, has a saturation mixing ratio ri equal to the given ratio r.

(14) The dewpoint and frost-point temperatures so defined are related to the mixing ratio r and pressure p by the respective equations:

e′w(p, Td) = f(p) · ew(Td) = xv · p = r · p/(0.621 98 + r) (4.A.13)

e′i(p, Tf) = f(p) · ei(Tf) = xv · p = r · p/(0.621 98 + r) (4.A.14)

(15)¹ The relative humidity Uw with respect to water of moist air at pressure p and temperature T is the ratio in per cent of the vapour mole fraction xv to the vapour mole fraction xvw which the air would have if it were saturated with respect to water at the same pressure p and temperature T. Accordingly:

Uw = 100 (xv/xvw)p,T = 100 (p · xv/p · xvw)p,T = 100 (e′/e′w)p,T (4.A.15)

where subscripts p,T indicate that each term is subject to identical conditions of pressure and temperature. The last expression is formally similar to the classic definition based on the assumption of Dalton’s law of partial pressures. Uw is also related to the mixing ratio r by:

Uw = 100 (r/rw) · (0.621 98 + rw)/(0.621 98 + r) (4.A.16)

where rw is the saturation mixing ratio at the pressure and temperature of the moist air.

(16)¹ The relative humidity Ui with respect to ice of moist air at pressure p and temperature T is the ratio in per cent of the vapour mole fraction xv to the vapour mole fraction xvi which the air would have if it were saturated with respect to ice at the same pressure p and temperature T. Corresponding to the defining equation in paragraph 15:

Ui = 100 (xv/xvi)p,T = 100 (p · xv/p · xvi)p,T = 100 (e′/e′i)p,T (4.A.17)

(17) Relative humidity at temperatures less than 0°C is to be evaluated with respect to water. The advantages of this procedure are as follows:
(a) Most hygrometers which are essentially responsive to the relative humidity indicate relative humidity with respect to water at all temperatures;
(b) The majority of clouds at temperatures below 0°C consist of water, or mainly of water;
(c) Relative humidities greater than 100 per cent would in general not be observed. This is of particular importance in synoptic weather messages, since the atmosphere is often supersaturated with respect to ice at temperatures below 0°C;
(d) The majority of existing records of relative humidity at temperatures below 0°C are expressed on a basis of saturation with respect to water.

¹ Equations 4.A.15 and 4.A.17 do not apply to moist air when pressure p is less than the saturation vapour pressure of pure water and ice, respectively, at temperature T.

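The basic relations among the humidity variables defined in this annex (equations 4.A.1 to 4.A.6) can be sketched directly in code. A minimal Python illustration follows; function names are illustrative, not from this Guide:

```python
# Minimal sketch of the Annex 4.A relations between humidity variables.
# Symbols follow the annex: r mixing ratio (kg/kg), q specific humidity,
# xv mole fraction, e' vapour pressure (same unit as p).

EPS = 0.62198  # Mv/Ma, ratio of molar masses of water vapour and dry air

def specific_humidity(r):
    """q = mv/(mv + ma) = r/(1 + r)   (equation 4.A.2)."""
    return r / (1.0 + r)

def mole_fraction(r):
    """xv = r/(0.621 98 + r)   (equation 4.A.5)."""
    return r / (EPS + r)

def vapour_pressure(r, p):
    """e' = r p/(0.621 98 + r) = xv * p   (equation 4.A.6)."""
    return mole_fraction(r) * p

def mixing_ratio_from_e(e, p):
    """Invert equation 4.A.6: r = EPS * e/(p - e)."""
    return EPS * e / (p - e)

r = 0.010            # 10 g of vapour per kg of dry air
p = 1000.0           # hPa
e = vapour_pressure(r, p)                            # about 15.8 hPa
assert abs(mixing_ratio_from_e(e, p) - r) < 1e-12    # exact round trip
```

The round trip at the end checks that the algebraic inversion of equation 4.A.6 is consistent.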
(18) The thermodynamic wet-bulb temperature of moist air at pressure p, temperature T and mixing ratio r is the temperature Tw attained by the moist air when brought adiabatically to saturation at pressure p by the evaporation into the moist air of liquid water at pressure p and temperature Tw and containing the amount of dissolved air corresponding to equilibrium with saturated air of the same pressure and temperature. Tw is defined by the equation:

h(p, T, r) + [rw(p, Tw) − r] · hw(p, Tw) = h(p, Tw, rw(p, Tw))

where rw(p,Tw) is the mixing ratio of saturated moist air at pressure p and temperature Tw; hw(p,Tw) is the enthalpy of 1 gram of pure water at pressure p and temperature Tw; h(p,T,r) is the enthalpy of 1 + r grams of moist air, composed of 1 gram of dry air and r grams of water vapour, at pressure p and temperature T; and h(p,Tw,rw(p,Tw)) is the enthalpy of 1 + rw grams of saturated air, composed of 1 gram of dry air and rw grams of water vapour, at pressure p and temperature Tw. (This is a function of p and Tw only and may appropriately be denoted by hsw(p,Tw).)

If air and water vapour are regarded as ideal gases with constant specific heats, the above equation becomes:

T − Tw = [rw(p, Tw) − r] · Lv(Tw)/(cpa + r · cpv) (4.A.18)

where Lv(Tw) is the heat of vaporization of water at temperature Tw, cpa is the specific heat of dry air at constant pressure, and cpv is the specific heat of water vapour at constant pressure.

Note: Thermodynamic wet-bulb temperature as here defined has for some time been called “temperature of adiabatic saturation” by air-conditioning engineers.

(19) The thermodynamic ice-bulb temperature of moist air at pressure p, temperature T and mixing ratio r is the temperature Ti at which pure ice at pressure p must be evaporated into the moist air in order to saturate it adiabatically at pressure p and temperature Ti. The saturation is with respect to ice. Ti is defined by the equation:

h(p, T, r) + [ri(p, Ti) − r] · hi(p, Ti) = h(p, Ti, ri(p, Ti))

where ri(p,Ti) is the mixing ratio of saturated moist air at pressure p and temperature Ti; hi(p,Ti) is the enthalpy of 1 gram of pure ice at pressure p and temperature Ti; h(p,T,r) is the enthalpy of 1 + r grams of moist air, composed of 1 gram of dry air and r grams of water vapour, at pressure p and temperature T; and h(p,Ti,ri(p,Ti)) is the enthalpy of 1 + ri grams of saturated air, composed of 1 gram of dry air and ri grams of water vapour, at pressure p and temperature Ti. (This is a function of p and Ti only, and may appropriately be denoted by hsi(p,Ti).)

If air and water vapour are regarded as ideal gases with constant specific heats, the above equation becomes:

T − Ti = [ri(p, Ti) − r] · Ls(Ti)/(cpa + r · cpv) (4.A.19)

where Ls(Ti) is the heat of sublimation of ice at temperature Ti.

The relationship between Tw and Ti as defined and the wet-bulb or ice-bulb temperature as indicated by a particular psychrometer is a matter to be determined by carefully controlled experiment, taking into account the various variables concerned, for example, ventilation, size of thermometer bulb and radiation.

Note: The enthalpy of a system in equilibrium at pressure p and temperature T is defined as E + pV, where E is the internal energy of the system and V is its volume. The sum of the enthalpies of the phases of a closed system is conserved in adiabatic isobaric processes.

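The ideal-gas form of the wet-bulb relation lends itself to a simple numerical solution. The sketch below finds Tw by bisection, treating Lv as constant and using typical textbook values for Lv, cpa and cpv together with a Magnus-type saturation formula (as in Annex 4.B); these numerical choices are illustrative assumptions, not values prescribed by this Guide:

```python
import math

# Illustrative solution of T - Tw = [rw(p,Tw) - r] * Lv / (cpa + r*cpv).
LV = 2.501e6   # J/kg, heat of vaporization of water near 0 degC (assumed constant)
CPA = 1005.0   # J/(kg K), specific heat of dry air at constant pressure
CPV = 1850.0   # J/(kg K), specific heat of water vapour at constant pressure
EPS = 0.62198  # ratio of molar masses of water vapour and dry air

def e_w(t):
    """Saturation vapour pressure over water, hPa (t in degC)."""
    return 6.112 * math.exp(17.62 * t / (243.12 + t))

def r_w(p, t):
    """Saturation mixing ratio (kg/kg) at pressure p (hPa), temperature t."""
    e = e_w(t)
    return EPS * e / (p - e)

def wet_bulb(t, r, p, tol=1e-6):
    """Find Tw by bisection between an arbitrary low bound and t."""
    def residual(tw):
        return (t - tw) - (r_w(p, tw) - r) * LV / (CPA + r * CPV)
    lo, hi = -40.0, t
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # residual decreases with tw: positive means tw is still too low
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tw = wet_bulb(t=20.0, r=0.007, p=1000.0)   # roughly 13 to 14 degC
```

At saturation (r = rw(p, T)) the routine returns Tw = T, as the defining equation requires.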


ANNEX 4.B
FORMULAE FOR THE COMPUTATION OF MEASURES OF HUMIDITY
(see also section 4.1.2)

Saturation vapour pressure:
ew(t) = 6.112 exp [17.62 t/(243.12 + t)]    Water (−45 to 60°C) (pure phase)
e′w(p,t) = f(p) · ew(t)    Moist air
ei(t) = 6.112 exp [22.46 t/(272.62 + t)]    Ice (−65 to 0°C) (pure phase)
e′i(p,t) = f(p) · ei(t)    Moist air
f(p) = 1.0016 + 3.15 · 10⁻⁶ p − 0.074 p⁻¹    [see note]

Dew point and frost point:
td = 243.12 · ln [e′/6.112 f(p)] / (17.62 − ln [e′/6.112 f(p)])    Water (−45 to 60°C)
tf = 272.62 · ln [e′/6.112 f(p)] / (22.46 − ln [e′/6.112 f(p)])    Ice (−65 to 0°C)

Psychrometric formulae for the Assmann psychrometer:
e′ = e′w(p,tw) − 6.53 · 10⁻⁴ · (1 + 0.000 944 tw) · p · (t − tw)    Water
e′ = e′i(p,ti) − 5.75 · 10⁻⁴ · p · (t − ti)    Ice

Relative humidity:
U = 100 e′/e′w(p,t) %
U = 100 e′w(p,td)/e′w(p,t) %

Units applied:
t = air temperature (dry-bulb temperature);
tw = wet-bulb temperature;
ti = ice-bulb temperature;
td = dewpoint temperature;
tf = frost-point temperature;
p = pressure of moist air;
ew(t) = saturation vapour pressure in the pure phase with regard to water at the dry-bulb temperature;
ew(tw) = saturation vapour pressure in the pure phase with regard to water at the wet-bulb temperature;
ei(t) = saturation vapour pressure in the pure phase with regard to ice at the dry-bulb temperature;
ei(ti) = saturation vapour pressure in the pure phase with regard to ice at the ice-bulb temperature;
e′w(t) = saturation vapour pressure of moist air with regard to water at the dry-bulb temperature;
e′w(tw) = saturation vapour pressure of moist air with regard to water at the wet-bulb temperature;
e′i(t) = saturation vapour pressure of moist air with regard to ice at the dry-bulb temperature;
e′i(ti) = saturation vapour pressure of moist air with regard to ice at the ice-bulb temperature;
U = relative humidity.

Note: In fact, f is a function of both pressure and temperature, i.e. f = f(p, t), as explained in WMO (1966) in the introduction to Table 4.10. In practice, the temperature dependency (±0.1 per cent) is much smaller than the pressure dependency (0 to +0.6 per cent) and may therefore be omitted in the formula above (see also WMO (1989a), Chapter 10). This formula, however, should be used only for pressures around 1 000 hPa (i.e. surface measurements) and not for upper-air measurements, for which WMO (1966), Table 4.10 should be used.
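The formulae above can be transcribed directly into code. The following Python sketch is valid, like the formulae themselves, for surface pressures near 1 000 hPa; function names are illustrative:

```python
import math

# Annex 4.B formulae: temperatures in degC, pressures in hPa.

def f(p):
    """Enhancement factor for moist air."""
    return 1.0016 + 3.15e-6 * p - 0.074 / p

def e_w(t):
    """Saturation vapour pressure over water, pure phase (-45 to 60 degC)."""
    return 6.112 * math.exp(17.62 * t / (243.12 + t))

def e_i(t):
    """Saturation vapour pressure over ice, pure phase (-65 to 0 degC)."""
    return 6.112 * math.exp(22.46 * t / (272.62 + t))

def dew_point(e, p):
    """Dew point td from the vapour pressure e' of moist air."""
    x = math.log(e / (6.112 * f(p)))
    return 243.12 * x / (17.62 - x)

def relative_humidity(e, t, p):
    """U = 100 e'/e'w(p,t), per cent."""
    return 100.0 * e / (f(p) * e_w(t))

p = 1013.25
e = f(p) * e_w(12.0)   # moist-air saturation vapour pressure at 12 degC
assert abs(dew_point(e, p) - 12.0) < 1e-9   # td inverts e'w(p,t) exactly
```

The assertion at the end verifies that the dew-point formula is the exact algebraic inverse of the saturation formula over water.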



REFERENCES AND FURTHER READING

Bindon, H.H., 1965: A critical review of tables and charts used in psychrometry. In: A. Wexler (ed.), Humidity and Moisture, Volume 1, Reinhold, New York, pp. 3–15. Sonntag, D., 1990: Important new values of the physical constants of 1986, vapour pressure formulations based on the ITS-90 and psychrometer formulae. Zeitschrift für Meteorologie, Volume 40, Number 5, pp. 340–344. Sonntag, D., 1994: Advancements in the field of hygrometry. Zeitschrift für Meteorologie, Volume 3, Number 2, pp. 51–66. Wexler, A. (ed.), 1965: Humidity and Moisture. Volumes 1 and 3, Reinhold, New York. World Meteorological Organization, 1966: International Meteorological Tables (S. Letestu, ed.). WMO-No. 188.TP.94, Geneva.

World Meteorological Organization, 1988: Technical Regulations. Volume I, WMO-No. 49, Geneva. World Meteorological Organization, 1989a: WMO Assmann Aspiration Psychrometer Intercomparison (D. Sonntag). Instruments and Observing Methods Report No. 34, WMO/TD-No. 289, Geneva. World Meteorological Organization, 1989b: WMO International Hygrometer Intercomparison (J. Skaar, K. Hegg, T. Moe and K. Smedstud). Instruments and Observing Methods Report No. 38, WMO/TD-No. 316, Geneva. World Meteorological Organization, 1992: Measurement of Temperature and Humidity (R.G. Wylie and T. Lalas). Technical Note No. 194, WMO-No. 759, Geneva.


MEASUREMENT OF SURFACE WIND

5.1 General

5.1.1 Definitions

The following definitions are used in this chapter (see Mazzarella, 1972, for more details).

Wind velocity is a three-dimensional vector quantity with small-scale random fluctuations in space and time superimposed upon a larger-scale organized flow. It is considered in this form in relation to, for example, airborne pollution and the landing of aircraft. For the purpose of this Guide, however, surface wind will be considered mainly as a two-dimensional vector quantity specified by two numbers representing direction and speed. The extent to which wind is characterized by rapid fluctuations is referred to as gustiness, and single fluctuations are called gusts.

Most users of wind data require the averaged horizontal wind, usually expressed in polar coordinates as speed and direction. More and more applications also require information on the variability or gustiness of the wind. For this purpose, three quantities are used, namely the peak gust and the standard deviations of wind speed and direction.

Averaged quantities are quantities (for example, horizontal wind speed) that are averaged over a period of 10 to 60 min. This chapter deals mainly with averages over 10 min intervals, as used for forecasting purposes. Climatological statistics usually require averages over each entire hour, day and night. Aeronautical applications often use shorter averaging intervals (see Part II, Chapter 2). Averaging periods shorter than a few minutes do not sufficiently smooth the usually occurring natural turbulent fluctuations of wind; therefore, 1 min “averages” should be described as long gusts.

Peak gust is the maximum observed wind speed over a specified time interval. With hourly weather reports, the peak gust refers to the wind extreme in the last full hour.

Gust duration is a measure of the duration of the observed peak gust. The duration is determined by the response of the measuring system. Slowly responding systems smear out the extremes and measure long smooth gusts; fast-response systems may indicate sharp wave-front gusts with a short duration. For the definition of gust duration an ideal measuring chain is used, namely a single filter that takes a running average over t0 seconds of the incoming wind signal. Extremes detected behind such a filter are defined as peak gusts with duration t0. Other measuring systems with various filtering elements are said to measure gusts with duration t0 when a running average filter with integration time t0 would have produced an extreme with the same height (see Beljaars, 1987; WMO, 1987 for further discussion).

Standard deviation is:

su = √( (ui − U)² ) = √( (Σ(ui²) − (Σui)²/n) / n )

where u is a time-dependent signal (for example, horizontal wind speed) with average U and an overbar indicates time-averaging over n samples ui. The standard deviation is used to characterize the magnitude of the fluctuations in a particular signal.

Time-constant (of a first-order system) is the time required for a device to detect and indicate about 63 per cent of a step-function change.

Response length is approximately the passage of wind (in metres) required for the output of a wind-speed sensor to indicate about 63 per cent of a step-function change of the input speed.

Critical damping (of a sensor such as a wind vane, having a response best described by a second-order differential equation) is the value of damping which gives the most rapid transient response to a step change without overshoot.

Damping ratio is the ratio of the actual damping to the critical damping.

Undamped natural wavelength is the passage of wind that would be required by a vane to go through one period of an oscillation if there were no damping. It is less than the actual “damped” wavelength by a factor √(1 − D²) if D is the damping ratio.

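The running-average gust definition and the standard deviation formula above can be sketched numerically. A minimal Python illustration (names are illustrative, not from this Guide):

```python
# A running average over n_avg consecutive samples defines gusts of the
# corresponding duration; the maximum of the filtered signal is the peak gust.

def peak_gust(samples, n_avg):
    """Maximum of a running mean over n_avg consecutive samples."""
    window = [sum(samples[i:i + n_avg]) / n_avg
              for i in range(len(samples) - n_avg + 1)]
    return max(window)

def std_dev(samples):
    """su = sqrt((sum(ui^2) - (sum(ui))^2 / n) / n), as in the text."""
    n = len(samples)
    s1 = sum(samples)
    s2 = sum(u * u for u in samples)
    return ((s2 - s1 * s1 / n) / n) ** 0.5

u = [4.0, 5.0, 7.0, 12.0, 6.0, 5.0]   # wind-speed samples, m/s
g3 = peak_gust(u, 3)                  # strongest 3-sample running mean
su = std_dev(u)
```

With 1 Hz sampling, `peak_gust(u, 3)` corresponds to a gust of 3 s duration, the duration recommended for routine peak-gust reporting in section 5.1.3.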
5.1.2 Units and scales

Wind speed should be reported to a resolution of 0.5 m s–1 or in knots (0.515 m s–1) to the nearest unit, and should represent, for synoptic reports, an average over 10 min. Averages over a shorter period are necessary for certain aeronautical purposes (see Part II, Chapter 2).

Wind direction should be reported in degrees to the nearest 10°, using a 01 ... 36 code (for example, code 02 means that the wind direction is between 15 and 25°), and should represent an average over 10 min (for synoptic purposes; see Part II, Chapter 2). Wind direction is defined as the direction from which the wind blows, and is measured clockwise from geographical north, namely, true north. “Calm” should be reported when the average wind speed is less than 1 kn. The direction in this case is coded as 00. Wind direction at stations within 1° of the North Pole or 1° of the South Pole should be reported according to Code Table 0878 in WMO (1995). The azimuth ring should be aligned with its zero coinciding with the Greenwich 0° meridian.

There are important differences compared to the synoptic requirement for measuring and reporting wind speed and direction for aeronautical purposes at aerodromes for aircraft take-off and landing (see Part II, Chapter 2). Wind direction should be measured, namely, from the azimuth setting, with respect to true north at all meteorological observing stations. At aerodromes the wind direction must be indicated and reported with respect to magnetic north for aeronautical observations and with an averaging time of 2 min. Where the wind measurements at aerodromes are disseminated beyond the aerodrome as synoptic reports, the direction must be referenced to true north and have an averaging time of 10 min.

5.1.3 Meteorological requirements

Wind observations or measurements are required for weather monitoring and forecasting, for wind-load climatology, for the probability of wind damage and the estimation of wind energy, and as part of the estimation of surface fluxes, for example, evaporation, for air pollution dispersion and agricultural applications. Performance requirements are given in Part I, Chapter 1, Annex 1.B. An accuracy for horizontal speed of 0.5 m s–1 below 5 m s–1 and better than 10 per cent above 5 m s–1 is usually sufficient. Wind direction should be measured with an accuracy of 5°.

Apart from mean wind speed and direction, many applications require standard deviations and extremes (see section 5.8.2). The required accuracy is easily obtained with modern instrumentation. The most difficult aspect of wind measurement is the exposure of the anemometer. Since it is nearly impossible to find a location where the wind speed is representative of a large area, it is recommended that estimates of exposure errors be made (see section 5.9).

Many applications require information about the gustiness of the wind. Such applications include “nowcasts” for aircraft take-off and landing, wind-load climatology, air pollution dispersion problems and exposure correction. Two variables are suitable for routine reporting, namely the standard deviation of wind speed and direction and the 3 s peak gust (see Recommendations 3 and 4 (CIMO-X) (WMO, 1990)).

5.1.4 Methods of measurement and observation

Surface wind is usually measured by a wind vane and a cup or propeller anemometer. When the instrumentation is temporarily out of operation or when it is not provided, the direction and force of the wind may be estimated subjectively (the table below provides wind speed equivalents in common use for estimations). The instruments and techniques specifically discussed here are only a few of the more convenient ones available and do not comprise a complete list. The references and further reading at the end of this chapter provide ample literature on this subject.

The sensors briefly described below are cup-rotor and propeller anemometers, and direction vanes. Cup and vane, propeller and vane, and propellers alone are common combinations. Other classic sensors, such as the pitot tube, are less used now for routine measurements but can perform satisfactorily, while new types being developed or currently in use as research tools may become practical for routine measurement with advanced technology.

For nearly all applications, it is necessary to measure the averages of wind speed and direction. Many applications also need gustiness data. A wind-measuring system, therefore, consists not only of a sensor, but also of a processing and recording system. The processing takes care of the averaging and the computation of the standard deviations and extremes. In its simplest form, the processing can be done by writing the wind signal with a pen recorder and estimating the mean and extreme by reading the record.

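The averaging performed by the processing system must treat wind direction as a circular quantity; averaging the reported degrees directly fails across north (for example, 350° and 10° would wrongly average to 180°). One common approach, shown here as an illustrative sketch rather than a procedure mandated by this chapter, is to average the Cartesian wind components and convert back to speed and meteorological direction (degrees from true north, direction the wind blows *from*):

```python
import math

def vector_average(speeds, directions_deg):
    """Vector-average wind samples; returns (mean speed, mean direction)."""
    n = len(speeds)
    # u eastward, v northward components of the flow (wind blowing *towards*)
    u = sum(-s * math.sin(math.radians(d))
            for s, d in zip(speeds, directions_deg)) / n
    v = sum(-s * math.cos(math.radians(d))
            for s, d in zip(speeds, directions_deg)) / n
    speed = math.hypot(u, v)
    # convert back to the meteorological "direction from" convention
    direction = math.degrees(math.atan2(-u, -v)) % 360.0
    return speed, direction

spd, dirn = vector_average([5.0, 5.0], [350.0, 10.0])
# mean direction is north (not 180 degrees), mean speed slightly below 5 m/s
```

In coded reports, north is conventionally reported as 360 and calm as 00 (see section 5.1.2); the sketch above leaves that final encoding step to the reporting stage.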


Wind speed equivalents

Beaufort scale number and description; wind speed equivalent at a standard height of 10 m above open flat ground in (kn), (m s–1), (km h–1) and (mi h–1); specifications for estimating speed over land:

0  Calm             <1     0–0.2      <1     <1     Calm; smoke rises vertically
1  Light air        1–3    0.3–1.5    1–5    1–3    Direction of wind shown by smoke drift, but not by wind vanes
2  Light breeze     4–6    1.6–3.3    6–11   4–7    Wind felt on face; leaves rustle; ordinary vanes moved by wind
3  Gentle breeze    7–10   3.4–5.4    12–19  8–12   Leaves and small twigs in constant motion; wind extends light flag
4  Moderate breeze  11–16  5.5–7.9    20–28  13–18  Raises dust and loose paper; small branches are moved
5  Fresh breeze     17–21  8.0–10.7   29–38  19–24  Small trees in leaf begin to sway; crested wavelets form on inland waters
6  Strong breeze    22–27  10.8–13.8  39–49  25–31  Large branches in motion; whistling heard in telegraph wires; umbrellas used with difficulty

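For the subjective estimation described in section 5.1.4, the speed equivalents reduce to a simple threshold lookup. An illustrative Python sketch using the standard upper class limits in m s–1 for Beaufort forces 0 to 6 (names are illustrative):

```python
# Upper limits (m/s) of the standard Beaufort classes for forces 0..6.
BEAUFORT_UPPER_MS = [0.2, 1.5, 3.3, 5.4, 7.9, 10.7, 13.8]

def beaufort(speed_ms):
    """Return the Beaufort force (0-6) for a 10 m wind speed in m/s."""
    for force, upper in enumerate(BEAUFORT_UPPER_MS):
        if speed_ms <= upper:
            return force
    raise ValueError("above strong breeze; beyond the range covered here")

assert beaufort(4.0) == 3    # gentle breeze
assert beaufort(12.0) == 6   # strong breeze
```

The mapping is deliberately coarse: the Beaufort scale is an estimation aid for when instruments are unavailable, not a substitute for measurement.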
4 m s–1) and to average σu/U and/or σθ over all available data per wind sector class (30° wide) and per season (surface roughness depends, for example, on tree foliage). The values of z0u can now be determined with the above equations, where comparison of the results from σu and σθ gives some idea of the accuracy obtained. In cases where no standard deviation information is available, but the maximum gust is determined per wind speed averaging period (either 10 min or 1 h), the ratios of these maximum gusts to the averages in the same period (gust factors) can also be used to determine z0u (Verkaik, 2000). Knowledge of system dynamics, namely, the response length of the sensor and the response time of the recording chain, is required for this approach.

Terrain classification from Davenport (1960) adapted by Wieringa (1980b) in terms of aerodynamic roughness length z0

Class  Short terrain description                            z0 (m)
1      Open sea, fetch at least 5 km                        0.000 2
2      Mud flats, snow; no vegetation, no obstacles         0.005
3      Open flat terrain; grass, few isolated obstacles     0.03
4      Low crops; occasional large obstacles, x/H > 20      0.10
5      High crops; scattered obstacles, 15 < x/H < 20       0.25
6      Parkland, bushes; numerous obstacles, x/H ≈ 10       0.5
7      Regular large obstacle coverage (suburb, forest)     1.0
8      City centre with high- and low-rise buildings        ≥ 2


Here x is a typical upwind obstacle distance and H is the height of the corresponding major obstacles. For more detailed and updated terrain class descriptions see Davenport and others (2000) (see also Part II, Chapter 11, Table 11.2).

where cu = 2.2 and cv = 1.9 and κ = 0.4 for unfiltered measurements of su and sd. For the measuring systems

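When assigning a roughness length to a station, the terrain classification above can be held as a small lookup table. An illustrative structure (values as tabulated; the class 8 entry is the lower bound of its open-ended range):

```python
# Davenport (1960) / Wieringa (1980b) roughness classes, z0 in metres.
TERRAIN_Z0 = {
    1: ("Open sea, fetch at least 5 km", 0.0002),
    2: ("Mud flats, snow; no vegetation, no obstacles", 0.005),
    3: ("Open flat terrain; grass, few isolated obstacles", 0.03),
    4: ("Low crops; occasional large obstacles, x/H > 20", 0.10),
    5: ("High crops; scattered obstacles, 15 < x/H < 20", 0.25),
    6: ("Parkland, bushes; numerous obstacles, x/H ~ 10", 0.5),
    7: ("Regular large obstacle coverage (suburb, forest)", 1.0),
    8: ("City centre with high- and low-rise buildings", 2.0),  # >= 2 m
}

def roughness_length(terrain_class):
    """Return the tabulated z0 (m) for a Davenport terrain class 1-8."""
    return TERRAIN_Z0[terrain_class][1]
```

In practice the class is assigned per wind sector, since the effective roughness of a station usually differs by direction.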


REFERENCES AND FURTHER READING

Ackermann, G.R., 1983: Means and standard deviations of horizontal wind components. Journal of Climate and Applied Meteorology, 22, pp. 959–961. Albers, A., H. Klug and D. Westermann, 2000: Outdoor Comparison of Cup Anemometers. DEWI Magazin, No. 17, August 2000. Beljaars, A.C.M., 1987: The influence of sampling and filtering on measured wind gusts. Journal of Atmospheric and Oceanic Technology, 4, pp. 613–626. Busch, N.E. and L. Kristensen, 1976: Cup anemometer overspeeding. Journal of Applied Meteorology, 15, pp. 1328–1332. Coppin, P.A., 1982: An examination of cup anemometer overspeeding. Meteorologische Rundschau, 35, pp. 1–11. Curran, J.C., G.E. Peckham, D. Smith, A.S. Thom, J.S.G. McCulloch and I.C. Strangeways, 1977: Cairngorm summit automatic weather station. Weather, 32, pp. 60–63. Davenport, A.G., 1960: Rationale for determining design wind velocities. Journal of the Structural Division, American Society of Civil Engineers, 86, pp. 39–68. Davenport, A.G., C.S.B. Grimmond, T.R. Oke and J. Wieringa, 2000: Estimating the roughness of cities and sheltered country. Preprints of the Twelfth American Meteorological Society Conference on Applied Climatology (Asheville, NC, United States), pp. 96–99. Evans, R.A. and B.E. Lee, 1981: The problem of anemometer exposure in urban areas: a wind-tunnel study. Meteorological Magazine, 110, pp. 188–189. Frenkiel, F.N., 1951: Frequency distributions of velocities in turbulent flow. Journal of Meteorology, 8, pp. 316–320. Gill, G.C., L.E. Olsson, J. Sela and M. Suda, 1967: Accuracy of wind measurements on towers or stacks. Bulletin of the American Meteorological Society, 48, pp. 665–674. Gold, E., 1936: Wind in Britain – The Dines and some notable records during the last 40 years. Quarterly Journal of the Royal Meteorological Society, 62, pp. 167–206. Grimmond, C.S.B., T.S. King, M. Roth and T.R. Oke, 1998: Aerodynamic roughness of urban areas derived from wind observations. Boundary-Layer Meteorology, 89, pp. 1–24.

Kaimal, J.C., 1980: Sonic anemometers. Air-sea Interaction: Instruments and Methods (F. Dobson, L. Hasse and R. Davis, eds), Plenum Press, New York, pp. 81–96. Lenschow, D.H. (ed.), 1986: Probing the Atmospheric Boundary Layer. American Meteorological Society, Boston. MacCready, P.B., 1966: Mean wind speed measurements in turbulence. Journal of Applied Meteorology, 5, pp. 219–225. MacCready, P.B. and H.R. Jex, 1964: Response characteristics and meteorological utilization of propeller and vane wind sensors. Journal of Applied Meteorology, 3, pp. 182–193. Makinwa, K.A.A., J.H. Huijsing and A. Hagedoorn, 2001: Industrial design of a solid-state wind sensor. Proceedings of the First ISA/IEEE Conference, Houston, November 2001, pp. 68–71. Mazzarella, D.A., 1972: An inventory of specifications for wind-measuring instruments. Bulletin of the American Meteorological Society, 53, pp. 860–871. Mollo-Christensen, E. and J.R. Seesholtz, 1967: Wind tunnel measurements of the wind disturbance field of a model of the Buzzards Bay Entrance Light Tower. Journal of Geophysical Research, 72, pp. 3549–3556. Patterson, J., 1926: The cup anemometer. Transactions of the Royal Society of Canada, 20, Series III, pp. 1–54. Smith, S.D., 1980: Dynamic anemometers. Air-sea Interaction: Instruments and Methods (F. Dobson, L. Hasse and R. Davis, eds), Plenum Press, New York, pp. 65–80. Taylor, P.A. and R.J. Lee, 1984: Simple guidelines for estimating wind speed variations due to small scale topographic features. Climatological Bulletin, Canadian Meteorological and Oceanographic Society, 18, pp. 3–22. Van Oudheusden, B.W. and J.H. Huijsing, 1991: Microelectronic thermal anemometer for the measurement of surface wind. Journal of Atmospheric and Oceanic Technology, 8, pp. 374–384. Verkaik, J.W., 2000: Evaluation of two gustiness models for exposure correction calculations. Journal of Applied Meteorology, 39, pp. 1613–1626. Walmsley, J.L., I.B. Troen, D.P. Lalas and P.J. Mason, 1990: Surface-layer flow in complex terrain: Comparison of models and full-scale observations. Boundary-Layer Meteorology, 52, pp. 259–281.



Wieringa, J., 1967: Evaluation and design of wind vanes. Journal of Applied Meteorology, 6, pp. 1114–1122. Wieringa, J., 1980a: A revaluation of the Kansas mast influence on measurements of stress and cup anemometer overspeeding. Boundary-Layer Meteorology, 18, pp. 411–430. Wieringa, J., 1980b: Representativeness of wind observations at airports. Bulletin of the American Meteorological Society, 61, pp. 962–971. Wieringa, J., 1983: Description requirements for assessment of non-ideal wind stations, for example Aachen. Journal of Wind Engineering and Industrial Aerodynamics, 11, pp. 121–131. Wieringa, J., 1986: Roughness-dependent geographical interpolation of surface wind speed averages. Quarterly Journal of the Royal Meteorological Society, 112, pp. 867–889. Wieringa, J., 1996: Does representative wind information exist? Journal of Wind Engineering and Industrial Aerodynamics, 65, pp. 1–12. Wieringa, J. and F.X.C.M. van Lindert, 1971: Application limits of double-fin and coupled wind vanes. Journal of Applied Meteorology, 10, pp. 137–145. World Meteorological Organization, 1981: Review of Reference Height for and Averaging Time of Surface Wind Measurements at Sea (F.W. Dobson). Marine Meteorology and Related Oceanographic Activities Report No. 3, Geneva. World Meteorological Organization, 1984a: Compendium of Lecture Notes for Training Class IV Meteorological Personnel (B.J. Retallack). Volume II – Meteorology (second edition), WMO-No. 266, Geneva. World Meteorological Organization, 1984b: Distortion of the wind field by the Cabauw Meteorological Tower (H.R.A. Wessels). Papers Presented at the WMO Technical Conference on Instruments and Cost-effective Meteorological Observations (TECEMO), Instruments and Observing Methods Report No. 15, Geneva.

World Meteorological Organization, 1987: The Measurement of Gustiness at Routine Wind Stations: A Review (A.C.M. Beljaars). Instruments and Observing Methods Report No. 31, Geneva. World Meteorological Organization, 1989: Wind Measurements Reduction to a Standard Level (R.J. Shearman and A.A. Zelenko). Marine Meteorology and Related Oceanographic Activities Report No. 22, WMO/TD-No. 311, Geneva. World Meteorological Organization, 1990: Abridged Final Report of the Tenth Session of the Commission for Instruments and Methods of Observation. WMO-No. 727, Geneva. World Meteorological Organization, 1991: Guidance on the Establishment of Algorithms for Use in Synoptic Automatic Weather Stations: Processing of Surface Wind Data (D. Painting). Report of the CIMO Working Group on Surface Measurements, Instruments and Observing Methods Report No. 47, WMO/TD-No. 452, Geneva. World Meteorological Organization, 1995: Manual on Codes. Volume I.1, WMO-No. 306, Geneva. World Meteorological Organization, 2000: Wind measurements: Potential wind speed derived from wind speed fluctuations measurements, and the representativity of wind stations (J.P. van der Meulen). Papers Presented at the WMO Technical Conference on Meteorological and Environmental Instruments and Methods of Observation (TECO-2000), Instruments and Observing Methods Report No. 74, WMO/TD-No. 1028, p. 72, Geneva. World Meteorological Organization, 2001: Lecture notes for training agricultural meteorological personnel (second edition; J. Wieringa and J. Lomas). WMO-No. 551, Geneva (sections 5.3.3 and 9.2.4). Wyngaard, J.C., 1981: The effects of probe-induced flow distortion on atmospheric turbulence measurements. Journal of Applied Meteorology, 20, pp. 784–794.


MEASUREMENT OF PRECIPITATION



This chapter describes the well-known methods of precipitation measurement at ground stations. It does not discuss measurements which attempt to define the structure and character of precipitation, or which require specialized instrumentation, which are not standard meteorological observations (such as drop size distribution). Radar and satellite measurements, and measurements at sea, are discussed in Part II. Information on precipitation measurements, including, in particular, more detail on snow cover measurements, can also be found in WMO (1992a; 1998). The general problem of representativeness is particularly acute in the measurement of precipitation. Precipitation measurements are particularly sensitive to exposure, wind and topography, and metadata describing the circumstances of the measurements are particularly important for users of the data. The analysis of precipitation data is much easier and more reliable if the same gauges and siting criteria are used throughout the networks. This should be a major consideration in designing networks.

6.1.1 Definitions

Precipitation is defined as the liquid or solid products of the condensation of water vapour falling from clouds or deposited from the air onto the ground. It includes rain, hail, snow, dew, rime, hoar frost and fog precipitation. The total amount of precipitation which reaches the ground in a stated period is expressed in terms of the vertical depth of water (or water equivalent in the case of solid forms) to which it would cover a horizontal projection of the Earth's surface. Snowfall is also expressed by the depth of fresh, newly fallen snow covering an even horizontal surface (see section 6.7).

6.1.2 Units and scales

The unit of precipitation is linear depth, usually in millimetres (volume/area), or kg m–2 (mass/area) for liquid precipitation. Daily amounts of precipitation should be read to the nearest 0.2 mm and, if feasible, to the nearest 0.1 mm; weekly or monthly amounts should be read to the nearest 1 mm (at least). Daily measurements of precipitation should be taken at fixed times common to the entire network or networks of interest. Less than 0.1 mm (0.2 mm in the United States) is generally referred to as a trace. The rate of rainfall (intensity) is similarly expressed in linear measures per unit time, usually millimetres per hour. Snowfall measurements are taken in units of centimetres and tenths, to the nearest 0.2 cm. Less than 0.2 cm is generally called a trace. The depth of snow on the ground is usually measured daily in whole centimetres.

6.1.3 Meteorological and hydrological requirements

Part I, Chapter 1, Annex 1.B gives a broad statement of the requirements for accuracy, range and resolution for precipitation measurements, and gives 5 per cent as the achievable accuracy (at the 95 per cent confidence level). The common observation times are hourly, three-hourly and daily, for synoptic, climatological and hydrological purposes. For some purposes, a much greater time resolution is required to measure very high rainfall rates over very short periods. For some applications, storage gauges are used with observation intervals of weeks or months, or even a year in mountains and deserts.

6.1.4 Measurement methods

Instruments

Precipitation gauges (or raingauges, if only liquid precipitation can be measured) are the most common instruments used to measure precipitation. Generally, an open receptacle with vertical sides is used, usually in the form of a right cylinder, with a funnel if its main purpose is to measure rain. Since various sizes and shapes of orifice and gauge heights are used in different countries, the measurements are not strictly comparable (WMO, 1989a). The volume or weight of the catch is measured, the latter in particular for solid precipitation. The gauge orifice may be at one of many specified
heights above the ground or at the same level as the surrounding ground. The orifice must be placed above the maximum expected depth of snow cover, and above the height of significant potential in-splashing from the ground. For solid precipitation measurement, the orifice is above the ground and an artificial shield is placed around it. The most commonly used gauge height varies between 0.5 and 1.5 m in more than 100 countries (WMO, 1989a). The measurement of precipitation is very sensitive to exposure, and in particular to wind. Section 6.2 discusses exposure, while section 6.4 discusses at some length the errors to which precipitation gauges are prone, and the corrections that may be applied. This chapter also describes some other special techniques for measuring other types of precipitation (dew, ice, and the like) and snow cover. Some new techniques which are appearing in operational use are not described here, for example, the optical raingauge, which makes use of optical scattering. Useful sources of information on new methods under development are the reports of recurrent conferences, such as the international workshops on precipitation measurement (Slovak Hydrometeorological Institute and Swiss Federal Institute of Technology, 1993; WMO, 1989b) and those organized by the Commission for Instruments and Methods of Observation (WMO, 1998). Point measurements of precipitation serve as the primary source of data for areal analysis. However, even the best measurement of precipitation at one point is only representative of a limited area, the size of which is a function of the length of the accumulation period, the physiographic homogeneity of the region, local topography and the precipitation-producing process. Radar and, more recently, satellites are used to define and quantify the spatial distribution of precipitation. The techniques are described in Part II of this Guide.
In principle, a suitable integration of all three sources of areal precipitation data into national precipitation networks (automatic gauges, radar, and satellite) can be expected to provide sufficiently accurate areal precipitation estimates on an operational basis for a wide range of precipitation data users. Instruments that detect and identify precipitation, as distinct from measuring it, may be used as present weather detectors, and are referred to in Part I, Chapter 14.

Reference gauges and intercomparisons

Several types of gauges have been used as reference gauges. The main feature of their design is that of reducing or controlling the effect of wind on the catch, which is the main reason for the different behaviours of gauges. They are chosen also to reduce the other errors discussed in section 6.4. Ground-level gauges are used as reference gauges for liquid precipitation measurement. Because of the absence of wind-induced error, they generally show more precipitation than any elevated gauge (WMO, 1984). The gauge is placed in a pit with the gauge rim at ground level, sufficiently distant from the nearest edge of the pit to avoid in-splashing. A strong plastic or metal anti-splash grid with a central opening for the gauge should span the pit. Provision should be made for draining the pit. Pit gauge drawings are given in WMO (1984). The reference gauge for solid precipitation is the gauge known as the Double Fence Intercomparison Reference. It has octagonal vertical double fences surrounding a Tretyakov gauge, which itself has a particular form of wind-deflecting shield. Drawings and a description are given by Goodison, Sevruk and Klemm (1989), in WMO (1985), and in the final report of the WMO intercomparison of solid precipitation gauges (WMO, 1998). Recommendations for comparisons of precipitation gauges against the reference gauges are given in Annex 6.A.1

Documentation

The measurement of precipitation is particularly sensitive to gauge exposure, so metadata about the measurements must be recorded meticulously to compile a comprehensive station history that is available for climate and other studies and quality assurance. Section 6.2 discusses the site information that must be kept, namely detailed site descriptions, including vertical angles to significant obstacles around the gauge, gauge configuration, height of the gauge orifice above ground and height of the wind speed measuring instrument above ground.
1 Recommended by the Commission for Instruments and Methods of Observation at its eleventh session (1994).



Changes in observational techniques for precipitation, mainly the use of a different type of precipitation gauge and a change of gauge site or installation height, can cause temporal inhomogeneities in precipitation time series (see Part III, Chapter 2). The use of differing types of gauges and site exposures causes spatial inhomogeneities. This is due to the systematic errors of precipitation measurement, mainly the wind-induced error. While adjustment techniques based on statistics can remove the inhomogeneities relative to the measurements of surrounding gauges, the correction of precipitation measurements for the wind-induced error can eliminate the bias of measured values from any type of gauge. The following sections (especially section 6.4) on the various instrument types discuss the corrections that may be applied to precipitation measurements. Such corrections have uncertainties, and the original records and the correction formulae should be kept. Any changes in the observation methods should also be documented.

6.2 SITING AND EXPOSURE

All methods for measuring precipitation should aim to obtain a sample that is representative of the true amount falling over the area which the measurement is intended to represent, whether on the synoptic scale, mesoscale or microscale. The choice of site, as well as the systematic measurement error, is, therefore, important. For a discussion of the effects of the site, see Sevruk and Zahlavova (1994). The location of precipitation stations within the area of interest is important, because the number and locations of the gauge sites determine how well the measurements represent the actual amount of precipitation falling in the area. Areal representativeness is discussed at length in WMO (1992a), for rain and snow. WMO (1994) gives an introduction to the literature on the calculation of areal precipitation and corrections for topography. The effects on the wind field of the immediate surroundings of the site can give rise to local excesses and deficiencies in precipitation. In general, objects should not be closer to the gauge than a distance of twice their height above the gauge orifice. For each site, the average vertical angle of obstacles should be estimated, and a site plan should be made. Sites on a slope or the roof of a building should be avoided. Sites selected for measuring snowfall and/or snow cover should be in areas sheltered as much as possible from the wind. The best sites are often found in clearings within forests or orchards, among trees, in scrub or shrub forests, or where other objects act as an effective wind-break for winds from all directions. Preferably, however, the effects of the wind, and of the site on the wind, can be reduced by using a ground-level gauge for liquid precipitation or by making the air-flow horizontal above the gauge orifice using the following techniques (listed in order of decreasing effectiveness):
(a) In areas with homogeneous dense vegetation, the height of such vegetation should be kept at the same level as the gauge orifice by regular clipping;
(b) In other areas, by simulating the effect in (a) through the use of appropriate fence structures;
(c) By using windshields around the gauge.
The surface surrounding the precipitation gauge can be covered with short grass, gravel or shingle, but hard, flat surfaces, such as concrete, should be avoided to prevent excessive in-splashing.

6.3 NON-RECORDING PRECIPITATION GAUGES

6.3.1 Ordinary gauges

Instruments

The commonly used precipitation gauge consists of a collector placed above a funnel leading into a container where the accumulated water and melted snow are stored between observation times. Different gauge shapes are in use worldwide as shown in Figure 6.1. Where solid precipitation is common and substantial, a number of special modifications are used to improve the accuracy of measurements. Such modifications include the removal of the raingauge funnel at the beginning of the snow season or the provision of a special snow fence (see WMO, 1998) to protect the catch from blowing out. Windshields around the gauge reduce the error caused by deformation of the wind field above the gauge and by snow drifting into the gauge. They are advisable for rain and essential for snow. A wide variety of gauges are in use (see WMO, 1989a).









Figure 6.1. Different shapes of standard precipitation gauges. The solid lines show streamlines and the dashed lines show the trajectories of precipitation particles. The first gauge shows the largest wind field deformation above the gauge orifice, and the last gauge the smallest; consequently, the wind-induced error for the first gauge is larger than for the last gauge (Sevruk and Nespor, 1994).

The stored water is either collected in a measure or poured from the container into a measure, or its level in the container is measured directly with a graduated stick. The size of the collector orifice is not critical for liquid precipitation, but an area of at least 200 cm2 is required if solid forms of precipitation are expected in significant quantity. An area of 200 to 500 cm2 will probably be found most convenient. The most important requirements of a gauge are as follows:
(a) The rim of the collector should have a sharp edge and should fall away vertically on the inside, and be steeply bevelled on the outside; the design of gauges used for measuring snow should be such that any narrowing of the orifice caused by accumulated wet snow about the rim is small;
(b) The area of the orifice should be known to the nearest 0.5 per cent, and the construction should be such that this area remains constant while the gauge is in normal use;
(c) The collector should be designed to prevent rain from splashing in and out. This can be achieved if the vertical wall is sufficiently deep and the slope of the funnel is sufficiently steep (at least 45 per cent); suitable arrangements are shown in Figure 6.2;
(d) The construction should be such as to minimize wetting errors;
(e) The container should have a narrow entrance and be sufficiently protected from radiation to minimize the loss of water by evaporation.
Precipitation gauges used in locations where only weekly or monthly readings are practicable should be similar in design to the type used for daily measurements, but with a container of larger capacity and stronger construction.



The measuring cylinder should be made of clear glass or plastic which has a suitable coefficient of thermal expansion and should be clearly marked to show the size or type of gauge with which it is to be used. Its diameter should be less than 33 per cent of that of the rim of the gauge; the smaller the relative diameter, the greater the precision of measurement. The graduations should be finely engraved; in general, there should be marks at 0.2 mm intervals and clearly figured lines at each whole millimetre. It is also desirable that the line corresponding to 0.1 mm be marked. The maximum error of the graduations should not exceed ±0.05 mm at or above the 2 mm graduation mark and ±0.02 mm below this mark. To measure small precipitation amounts with adequate precision, the inside diameter of the measuring cylinder should taper off at its base. In all measurements, the bottom of the water meniscus should define the water level, and the cylinder should be kept vertical when reading, to avoid parallax errors. Repetition of the main graduation lines on the back of the measure is also helpful for reducing such errors.
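The graduation rules above all rest on the same arithmetic: the depth of precipitation is the catch volume divided by the orifice area, and a weighed catch can be treated the same way since 1 g of water occupies very nearly 1 cm3. A minimal sketch (the function names and sample figures are ours, not from this Guide):

```python
def depth_mm_from_volume(volume_cm3, orifice_area_cm2):
    """Precipitation depth (mm) over the gauge orifice:
    depth = volume / area, converted from cm to mm."""
    return 10.0 * volume_cm3 / orifice_area_cm2

def depth_mm_from_weight(gross_g, can_g, orifice_area_cm2):
    """Depth from a weighed catch: subtract the known weight of the
    empty can, then treat 1 g of water as 1 cm3 (adequate for gauge work)."""
    return depth_mm_from_volume(gross_g - can_g, orifice_area_cm2)

# 100 cm3 collected by a 200 cm2 gauge is 5.0 mm of precipitation:
print(depth_mm_from_volume(100.0, 200.0))         # 5.0
print(depth_mm_from_weight(350.0, 250.0, 200.0))  # 5.0
```

The smaller the measuring cylinder's cross-section relative to the gauge orifice, the more this ratio magnifies the water column, which is why a narrower cylinder gives greater reading precision.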

Dip-rods should be made of cedar wood, or another suitable material that does not absorb water appreciably and possesses only a small capillary effect. Wooden dip-rods are unsuitable if oil has been added to the collector to suppress evaporation. When this is the case, rods made of metal or other materials from which oil can be readily cleaned must be used. Non-metallic rods should be provided with a brass foot to avoid wear and be graduated according to the relative areas of cross-section of the gauge orifice and the collector; graduations should be marked at least every 10 mm and include an allowance for the displacement caused by the rod itself. The maximum error in the dip-rod graduation should not exceed ±0.5 mm at any point. A dip-rod measurement should be checked using a volumetric measure, wherever possible.

Operation

The measuring cylinder must be kept vertical when it is being read, and the observer must be aware of parallax errors. Snow collected in non-recording precipitation gauges should be either weighed or melted immediately after each observation and then measured using a standard graduated measuring cylinder. It is also possible to measure the precipitation catch by accurate weighing, a procedure which has several advantages: the total weight of the can and contents is measured and the known weight of the can is subtracted; there is little likelihood of spilling the water; and any water adhering to the can is included in the weight. The commonly used methods are, however, simpler and cheaper.

Calibration and maintenance

The graduation of the measuring cylinder or stick must, of course, be consistent with the chosen size of the collector. The calibration of the gauge, therefore, includes checking the diameter of the gauge orifice and ensuring that it is within allowable tolerances. It also includes volumetric checks of the measuring cylinder or stick.

Figure 6.2. Suitable collectors for raingauges (the lines shown must intersect the vertical wall below the rim of the gauge)

Routine maintenance should include keeping the gauge level at all times in order to prevent an out-of-level gauge (see Rinehart, 1983; Sevruk, 1984). The outer container of the gauge and the graduate should be kept clean, both inside and outside, by using a long-handled brush, soapy water and a clean water rinse. Worn, damaged or broken parts should be replaced, as required. The vegetation around the gauge should be kept trimmed to 5 cm (where applicable). The exposure should be checked and recorded.

6.3.2 Storage gauges

Storage gauges are used to measure total seasonal precipitation in remote and sparsely inhabited areas. Such gauges consist of a collector above a funnel, leading into a container that is large enough to store the seasonal catch (or the monthly catch in wet areas). A layer of no less than 5 mm of a suitable oil or other evaporation
suppressant should be placed in the container to reduce evaporation (WMO, 1972). This layer should allow the free passage of precipitation into the solution below it. An antifreeze solution may be placed in the container to convert any snow which falls into the gauge into a liquid state. It is important that the antifreeze solution remain dispersed. A mixture of 37.5 per cent by weight of commercial calcium chloride (78 per cent purity) and 62.5 per cent water makes a satisfactory antifreeze solution. Alternatively, aqueous solutions of ethylene glycol or of an ethylene glycol and methanol mixture can be used. While more expensive, the latter solutions are less corrosive than calcium chloride and give antifreeze protection over a much wider range of dilution resulting from subsequent precipitation. The volume of the solution initially placed in the container should not exceed 33 per cent of the total volume of the gauge. In some countries, this antifreeze and oil solution is considered toxic waste and, therefore, harmful to the environment. Guidelines for the disposal of toxic substances should be obtained from local environmental protection authorities. The seasonal precipitation catch is determined by weighing or measuring the volume of the contents of the container (as with ordinary gauges; see section 6.3.1). The amount of oil and antifreeze solution placed in the container at the beginning of the season and any contraction in the case of volumetric measurements must be carefully taken into account. Corrections may be applied as with ordinary gauges. The operation and maintenance of storage gauges in remote areas pose several problems, such as the capping of the gauge by snow or difficulty in locating the gauge for recording the measurement, and so on, which require specific monitoring. Particular attention should be paid to assessing the quality of data from such gauges.
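The charge quantities described above are easy to compute; the function names below are ours, while the 33 per cent volume limit and the 37.5/62.5 weight split come from the text:

```python
def cacl2_mixture_kg(total_solution_kg):
    """Split a target mass of antifreeze solution into commercial
    calcium chloride (78 per cent purity) and water,
    37.5 / 62.5 per cent by weight."""
    return 0.375 * total_solution_kg, 0.625 * total_solution_kg

def max_charge_volume(gauge_volume):
    """The oil and antifreeze charge initially placed in the container
    should not exceed 33 per cent of the total gauge volume."""
    return 0.33 * gauge_volume

salt_kg, water_kg = cacl2_mixture_kg(10.0)
print(salt_kg, water_kg)        # 3.75 6.25
print(max_charge_volume(30.0))  # maximum charge for a 30 L gauge, in L
```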

6.4 PRECIPITATION GAUGE ERRORS AND CORRECTIONS

It is convenient to discuss at this point the errors and corrections that apply in some degree to most precipitation gauges, whether they are recording or non-recording gauges. The particular cases of recording gauges are discussed in section 6.5. Comprehensive accounts of errors and corrections can be found in WMO (1982; 1984; 1986; and, specifically for snow, 1998). Details of the models currently used for adjusting raw precipitation data in Canada, Denmark, Finland, the Russian Federation, Switzerland and the United States are given in WMO (1982). WMO (1989a) gives a description of how the errors occur. There are collected conference papers on the topic in WMO (1986; 1989b). The amount of precipitation measured by commonly used gauges may be less than the actual precipitation reaching the ground by up to 30 per cent or more. Systematic losses will vary by type of precipitation (snow, mixed snow and rain, and rain). The systematic error of solid precipitation measurements is commonly large and may be of an order of magnitude greater than that normally associated with liquid precipitation measurements. For many hydrological purposes it is necessary first to make adjustments to the data in order to allow for the error before making the calculations. The adjustments cannot, of course, be exact (and may even increase the error). Thus, the original data should always be kept as the basic archives, both to maintain continuity and to serve as the best base for future improved adjustments if, and when, they become possible.

The true amount of precipitation may be estimated by correcting for some or all of the various error terms listed below:
(a) Error due to systematic wind field deformation above the gauge orifice: typically 2 to 10 per cent for rain and 10 to 50 per cent for snow;
(b) Error due to the wetting loss on the internal walls of the collector;
(c) Error due to the wetting loss in the container when it is emptied: typically 2 to 15 per cent in summer and 1 to 8 per cent in winter, for (b) and (c) together;
(d) Error due to evaporation from the container (most important in hot climates): 0 to 4 per cent;
(e) Error due to blowing and drifting snow;
(f) Error due to the in- and out-splashing of water: 1 to 2 per cent;
(g) Random observational and instrumental errors, including incorrect gauge reading times.

The first six error components are systematic and are listed in order of general importance. The net error due to blowing and drifting snow and to in- and out-splashing of water can be either negative or positive, while net systematic errors due to the wind field and other factors are negative. Since the errors listed as (e) and (f) above are generally difficult to quantify, the general model for adjusting the data from most gauges takes the following form:

Pk = kPc = k (Pg + ΔP1 + ΔP2 + ΔP3)

where Pk is the adjusted precipitation amount; k (see Figure 6.3) is the adjustment factor for the effects of wind field deformation; Pc is the amount of precipitation caught by the gauge collector; Pg is the measured amount of precipitation in the gauge; ΔP1 is the adjustment for the wetting loss on the internal walls of the collector; ΔP2 is the adjustment for wetting loss in the container after emptying; and ΔP3 is the adjustment for evaporation from the container. The corrections are applied to daily or monthly totals or, in some practices, to individual precipitation events. In general, the supplementary data needed to make such adjustments include the wind speed at the gauge orifice during precipitation, drop size, precipitation intensity, air temperature and humidity, and the characteristics of the gauge site. Wind speed and precipitation type or intensity may be sufficient variables to determine the corrections. Wind speed alone is sometimes used. At sites where such observations are not made, interpolation between the observations made at adjacent sites may be used for making such adjustments, but with caution, and for monthly rainfall data only. For most precipitation gauges, wind speed is the most important environmental factor contributing to the under-measurement of solid precipitation. These data must be derived from standard meteorological observations at the site in order to provide daily adjustments.
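The adjustment model can be applied directly once k and the ΔP terms have been determined by the methods described in this section. A sketch (the function name and sample numbers are ours, not from this Guide):

```python
def adjusted_precipitation(p_gauge, k, d_wetting_collector,
                           d_wetting_container, d_evaporation):
    """General adjustment model: Pk = k * (Pg + dP1 + dP2 + dP3).

    p_gauge             -- Pg, measured catch in the gauge (mm)
    k                   -- adjustment factor for wind field deformation
    d_wetting_collector -- dP1, wetting loss on the collector walls (mm)
    d_wetting_container -- dP2, wetting loss in the emptied container (mm)
    d_evaporation       -- dP3, evaporation from the container (mm)
    """
    p_caught = (p_gauge + d_wetting_collector
                + d_wetting_container + d_evaporation)
    return k * p_caught

# 10.0 mm measured, k = 1.05, 0.2 mm + 0.2 mm wetting, 0.1 mm evaporation:
print(round(adjusted_precipitation(10.0, 1.05, 0.2, 0.2, 0.1), 3))  # 11.025
```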
In particular, if wind speed is not measured at gauge orifice height, it can be derived by using a mean wind speed reduction procedure, given
knowledge of the roughness of the surrounding surface and the angular height of surrounding obstacles. A suggested scheme is shown in Annex 6.B.2 This scheme is very site-dependent, and estimation requires a good knowledge of the station and gauge location. Shielded gauges catch more precipitation than their unshielded counterparts, especially for solid precipitation. Therefore, gauges should be shielded either naturally (for example, forest clearing) or artificially (for example, Alter, Canadian Nipher type, Tretyakov windshield) to minimize the adverse effect of wind speed on measurements of solid precipitation (refer to WMO, 1994 and 1998, for some information on shield design). Wetting loss (Sevruk, 1974a) is another cumulative systematic loss from manual gauges which varies with precipitation and gauge type; its magnitude is also a function of the number of times the gauge is emptied. Average wetting loss can be up to 0.2 mm per observation. At synoptic stations where precipitation is measured every 6 h, this can become a very significant loss. In some countries, wetting loss has been calculated to be 15 to 20 per cent of the measured winter precipitation. Correction for wetting loss at the time of observation is a feasible alternative. Wetting loss can be kept low in a well-designed gauge. The internal surfaces should be of a material which can be kept smooth and clean; paint, for example, is unsuitable, but baked enamel is satisfactory. Seams in the construction should be kept to a minimum. Evaporation losses (Sevruk, 1974b) vary by gauge type, climatic zone and time of year. Evaporation loss is a problem with gauges that do not have a funnel device in the bucket, especially in late spring at mid-latitudes. Losses of over 0.8 mm per day have been reported. Losses during winter are much less than during comparable summer months, ranging from 0.1 to 0.2 mm per day. These losses, however, are cumulative. 
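Annex 6.B is not reproduced here. As an illustration only of reducing a measured wind speed to gauge-orifice height, a neutral-stability logarithmic profile is a common basis for such procedures; the function, the roughness value and the heights below are our assumptions, not the annex scheme:

```python
import math

def wind_at_gauge_height(u_ref, z_ref, z_gauge, z0):
    """Reduce a wind speed measured at height z_ref (m) to the gauge
    orifice height z_gauge (m) with a neutral logarithmic profile:
        u(z) = u_ref * ln(z / z0) / ln(z_ref / z0)
    where z0 (m) is the roughness length of the surrounding surface."""
    return u_ref * math.log(z_gauge / z0) / math.log(z_ref / z0)

# 5 m/s at a 10 m mast over short grass (z0 ~ 0.01 m), gauge rim at 1 m:
print(round(wind_at_gauge_height(5.0, 10.0, 1.0, 0.01), 2))  # 3.33
```

A real reduction must also allow for the angular height of surrounding obstacles, which is why the annex scheme is described as very site-dependent.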
In a well-designed gauge, only a small water surface is exposed, its ventilation is minimized, and the water temperature is kept low by a reflective outer surface. It is clear that, in order to achieve data compatibility when using different gauge types and shielding during all weather conditions, corrections to the actual measurements are necessary. In all cases where precipitation measurements are adjusted in an attempt to reduce errors, it is strongly recommended that both the measured and adjusted values be published.

2 A wind reduction scheme recommended by the Commission for Instruments and Methods of Observation at its eleventh session (1994).

Figure 6.3. Conversion factor k, defined as the ratio of "correct" to measured precipitation, for rain (top) and snow (bottom), for two unshielded gauges as a function of wind speed uhp, intensity i and type of weather situation, according to Nespor and Sevruk (1999). On the left is the German Hellmann manual standard gauge, and on the right the recording tipping-bucket gauge by Lambrecht. Void symbols in the top diagrams refer to orographic rain, and black ones to showers. Note the different scales for rain and snow. For shielded gauges, k can be reduced to 50 and 70 per cent for snow and mixed precipitation, respectively (WMO, 1998). Heat losses are not considered in the diagrams (in Switzerland they vary with altitude between 10 and 50 per cent of the measured values of fresh snow).


6.5 RECORDING PRECIPITATION GAUGES

Recording precipitation automatically has the advantage that it can provide better time resolution than manual measurements, and it is possible to reduce the evaporation and wetting losses. These readings are, of course, subject to the wind effects discussed in section 6.4. Three types of automatic precipitation recorders are in general use, namely the weighing-recording type, the tilting or tipping-bucket type, and the float type. Only the weighing type is satisfactory for measuring all kinds of precipitation, the use of the other two types being for the most part limited to the measurement of rainfall. Some new automatic gauges that measure precipitation without using moving parts are available. These gauges use devices such as capacitance probes, pressure transducers, and optical or small radar devices to provide an electronic signal that is proportional to the precipitation equivalent. The clock device that times intervals and dates the time record is a very important component of the recorder.

6.5.1 Weighing-recording gauge

Instruments

In these instruments, the weight of a container, together with the precipitation accumulated therein, is recorded continuously, either by means of a spring mechanism or with a system of balance weights. All precipitation, both liquid and solid, is recorded as it falls. This type of gauge normally has no provision for emptying itself; the capacity (namely, the maximum accumulation between
recharge) ranges from 150 to 750 mm. The gauges must be maintained to minimize evaporation losses, which can be accomplished by adding sufficient oil or other evaporation suppressants inside the container to form a film over the water surface. Any difficulties arising from oscillation of the balance in strong winds can be reduced with an oil damping mechanism or, if recent work is substantiated, by suitably programming a microprocessor to eliminate this effect on the readings. Such weighing gauges are particularly useful for recording snow, hail, and mixtures of snow and rain, since the solid precipitation does not need to be melted before it can be recorded. For winter operation, the catchment container is charged with an antifreeze solution (see section 6.3.2) to dissolve the solid contents. The amount of antifreeze depends on the expected amount of precipitation and the minimum temperature expected at the time of minimum dilution. The weight of the catchment container, measured by a calibrated spring, is translated from a vertical to an angular motion through a series of levers or pulleys. This angular motion is then communicated mechanically to a drum or strip chart or digitized through a transducer. The accuracy of these types of gauges is related directly to their measuring and/or recording characteristics, which can vary with manufacturer.

Errors and corrections

Except for error due to the wetting loss in the container when it is emptied, weighing-recording gauges are susceptible to all of the other sources of error discussed in section 6.4. It should also be noted that automatic recording gauges alone cannot identify the type of precipitation. A significant problem with this type of gauge is that precipitation, particularly freezing rain or wet snow, can stick to the inside of the gauge orifice and not fall into the bucket until some time later. This severely limits the ability of weighing-recording gauges to provide accurate timing of precipitation events. Another common fault with weighing-type gauges is wind pumping. This usually occurs during high winds when turbulent air currents passing over and around the catchment container cause oscillations in the weighing mechanism. By using programmable data-logging systems, errors associated with such anomalous recordings can be minimized by averaging readings over short time intervals, namely, 1 min. Timing errors in the instrument clock may assign the catch to the wrong period or date.

Some potential errors in manual methods of precipitation measurement can be eliminated or at least minimized by using weighing-recording gauges. Random measurement errors associated with human observer error and certain systematic errors, particularly evaporation and wetting loss, are minimized. In some countries, trace observations are officially given a value of zero, thus resulting in a biased underestimate of the seasonal precipitation total. This problem is minimized with weighing-type gauges, since even very small amounts of precipitation will accumulate over time. The correction of weighing gauge data on an hourly or daily basis may be more difficult than on longer time periods, such as monthly climatological summaries. Ancillary data from automatic weather stations, such as wind at gauge height, air temperature, present weather or snow depth, will be useful in interpreting and correcting accurately the precipitation measurements from automatic gauges.

Weighing-recording gauges usually have few moving parts and, therefore, should seldom require calibration. Calibration commonly involves the use of a series of weights which, when placed in the bucket or catchment container, provide a predetermined value equivalent to an amount of precipitation. Calibrations should normally be done in a laboratory setting and should follow the manufacturer’s instructions. Routine maintenance should be conducted every three to four months, depending on precipitation conditions at the site. Both the exterior and interior of the gauge should be inspected for loose or broken parts and to ensure that the gauge is level. Any manual read-out should be checked against the removable data record to ensure consistency before removing and annotating the record. The bucket or catchment container should be emptied, inspected, cleaned, if required, and recharged with oil for rainfall-only operation or with antifreeze and oil if solid precipitation is expected (see section 6.3.2). The recording device should be set to zero in order to make maximum use of the gauge range. The tape, chart supply or digital memory as well as the power supply should be checked and replaced, if required. A voltohmmeter may be required to set the gauge output to zero when a data logger is used or to check the power supply of the gauge or recording system. Timing intervals and dates of record must be checked.

6.5.2 Tipping-bucket gauge

The tipping-bucket raingauge is used for measuring accumulated totals and the rate of rainfall, but does not meet the required accuracy because of the large non-linear errors, particularly at high precipitation rates.

Instruments

The principle behind the operation of this instrument is simple. A light metal container or bucket divided into two compartments is balanced in unstable equilibrium about a horizontal axis. In its normal position, the bucket rests against one of two stops, which prevents it from tipping over completely. Rain water is conducted from a collector into the uppermost compartment and, after a predetermined amount has entered the compartment, the bucket becomes unstable and tips over to its alternative rest position. The bucket compartments are shaped in such a way that the water is emptied from the lower one. Meanwhile, rain continues to fall into the newly positioned upper compartment. The movement of the bucket as it tips over can be used to operate a relay contact to produce a record consisting of discontinuous steps; the distance between each step on the record represents the time taken for a specified small amount of rain to fall. This amount of rain should not exceed 0.2 mm if detailed records are required. The bucket takes a small but finite time to tip and, during the first half of its motion, additional rain may enter the compartment that already contains the calculated amount of rainfall. This error can be appreciable during heavy rainfall (250 mm h–1), but it can be controlled. The simplest method is to use a device like a siphon at the foot of the funnel to direct the water to the buckets at a controlled rate. This smoothes out the intensity peaks of very short-period rainfall. Alternatively, a device can be added to accelerate the tipping action; essentially, a small blade is impacted by the water falling from the collector and is used to apply an additional force to the bucket, varying with rainfall intensity. The tipping-bucket gauge is particularly convenient for automatic weather stations because it lends itself to digital methods. The pulse generated by a contact closure can be monitored by a data logger and totalled over selected periods to provide precipitation amount. It may also be used with a chart recorder.

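The totalling of contact-closure pulses by a data logger can be sketched as follows. The 0.2 mm bucket size and the 10 min reporting period are illustrative values, not a specification.

```python
def rainfall_totals(tip_times, bucket_mm=0.2, period_s=600):
    """Convert logged tip times (seconds from start of record) into
    rainfall amounts (mm) per fixed reporting period. Each contact
    closure represents one bucket tip of `bucket_mm` of rain."""
    if not tip_times:
        return []
    n_periods = int(max(tip_times) // period_s) + 1
    totals = [0.0] * n_periods
    for t in tip_times:
        totals[int(t // period_s)] += bucket_mm
    return [round(x, 3) for x in totals]

# Hypothetical record: five tips in the first 10 min, two in the second.
print(rainfall_totals([50, 120, 300, 420, 590, 700, 1100]))  # → [1.0, 0.4]
```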
Errors and corrections

Since the tipping-bucket raingauge has sources of error which differ somewhat from those of other gauges, special precautions and corrections are advisable. Some sources of error include the following: (a) The loss of water during the tipping action in heavy rain can be minimized but not eliminated; (b) With the usual bucket design, the exposed water surface is large in relation to its volume, meaning that appreciable evaporation losses can occur, especially in hot regions. This error may be significant in light rain; (c) The discontinuous nature of the record may not provide satisfactory data during light drizzle or very light rain. In particular, the time of onset and cessation of precipitation cannot be accurately determined; (d) Water may adhere to both the walls and the lip of the bucket, resulting in rain residue in the bucket and additional weight to be overcome by the tipping action. Tests on waxed buckets produced a 4 per cent reduction in the volume required to tip the balance compared with non-waxed buckets. Volumetric calibration can change, without adjustment of the calibration screws, by variation of bucket wettability through surface oxidation or contamination by impurities and variations in surface tension; (e) The stream of water falling from the funnel onto the exposed bucket may cause over-reading, depending on the size, shape and position of the nozzle; (f) The instrument is particularly prone to bearing friction and to having an improperly balanced bucket because the gauge is not level. Careful calibration can provide corrections for the systematic parts of these errors. The measurements from tipping-bucket raingauges may be corrected for effects of exposure in the same way as other types of precipitation gauge. Heating devices can be used to allow for measurements during the cold season, particularly of solid precipitation.
However, the performance of heated tipping-bucket gauges has been found to be very poor as a result of large errors due to both wind and evaporation of melting snow. Therefore, these types of gauges are not recommended for use in winter precipitation measurement in regions where temperatures fall below 0°C for prolonged periods.

Calibration and maintenance

Calibration of the tipping bucket is usually accomplished by passing a known amount of water through the tipping mechanism at various rates and by adjusting the mechanism to the known volume. This procedure should be followed under laboratory conditions. Owing to the numerous error sources, the collection characteristics and calibration of tipping-bucket raingauges are a complex interaction of many variables. Daily comparisons with the standard raingauge can provide useful correction factors, and are good practice. The correction factors may vary from station to station. Correction factors are generally greater than 1.0 (under-reading) for low-intensity rain, and less than 1.0 (over-reading) for high-intensity rain. The relationship between the correction factor and intensity is not linear but forms a curve. Routine maintenance should include cleaning the accumulated dirt and debris from funnel and buckets, as well as ensuring that the gauge is level. It is highly recommended that the tipping mechanism be replaced with a newly calibrated unit on an annual basis. Timing intervals and dates of record must be checked.

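The intensity-dependent correction described above can be applied in software by interpolating a station-specific calibration curve. The factor table below is invented purely for illustration; real factors must come from comparisons with the standard raingauge at each station, and linear interpolation is itself a simplification of the curved relationship noted above.

```python
def correct_rain_rate(measured_mm_h,
                      calibration=((1, 1.05), (10, 1.02), (50, 0.98), (100, 0.94))):
    """Apply an intensity-dependent correction factor to a measured
    tipping-bucket rainfall rate (mm/h). `calibration` holds
    (intensity, factor) pairs: factors > 1.0 correct under-reading at
    low intensity, factors < 1.0 correct over-reading at high intensity."""
    pts = sorted(calibration)
    if measured_mm_h <= pts[0][0]:
        factor = pts[0][1]
    elif measured_mm_h >= pts[-1][0]:
        factor = pts[-1][1]
    else:
        # Piecewise-linear interpolation between calibration points.
        for (x0, f0), (x1, f1) in zip(pts, pts[1:]):
            if x0 <= measured_mm_h <= x1:
                factor = f0 + (f1 - f0) * (measured_mm_h - x0) / (x1 - x0)
                break
    return measured_mm_h * factor
```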
6.5.3 Float gauge

In this type of instrument, the rain passes into a float chamber containing a light float. As the level of the water within the chamber rises, the vertical movement of the float is transmitted, by a suitable mechanism, to the movement of a pen on a chart or a digital transducer. By suitably adjusting the dimensions of the collector orifice, the float and the float chamber, any desired chart scale can be used. In order to provide a record over a useful period (24 h are normally required) either the float chamber has to be very large (in which case a compressed scale on the chart or other recording medium is obtained), or a mechanism must be provided for emptying the float chamber automatically and quickly whenever it becomes full, so that the chart pen or other indicator returns to zero. Usually a siphoning arrangement is used. The actual siphoning process should begin precisely at the predetermined level with no tendency for the water to dribble over at either the beginning or the end of the siphoning period, which should not be longer than 15 s. In some instruments, the float chamber assembly is mounted on knife edges so that the full chamber overbalances; the surge of the water assists the siphoning process, and, when the chamber is empty, it returns to its original position. Other rain recorders have a forced siphon which operates in less than 5 s. One type of forced siphon has a small chamber that is separate from the main chamber and accommodates the rain that falls during siphoning. This chamber empties into the main chamber when siphoning ceases, thus ensuring a correct record of total rainfall.

A heating device (preferably controlled by a thermostat) should be installed inside the gauge if there is a possibility that water might freeze in the float chamber during the winter. This will prevent damage to the float and float chamber and will enable rain to be recorded during that period. A small heating element or electric lamp is suitable where a mains supply of electricity is available, otherwise other sources of power may be employed. One convenient method uses a short heating strip wound around the collecting chamber and connected to a large-capacity battery. The amount of heat supplied should be kept to the minimum necessary in order to prevent freezing, because the heat may reduce the accuracy of the observations by stimulating vertical air movements above the gauge and increasing evaporation losses. A large undercatch by unshielded heated gauges, caused by the wind and the evaporation of melting snow, has been reported in some countries, as is the case for weighing gauges (see section 6.5.1). Apart from the fact that calibration is performed using a known volume of water, the maintenance procedures for this gauge are similar to those of the weighing-recording gauge (see section 6.5.1).

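The freedom to choose a chart scale by adjusting the dimensions of the collector orifice and the float chamber follows from simple geometry: the float rises by the orifice-to-chamber area ratio for every unit of rainfall depth. The dimensions in this sketch are illustrative only.

```python
import math

def chart_magnification(orifice_diameter_mm, chamber_diameter_mm):
    """Ratio between float rise and rainfall depth in a float gauge:
    the level in the chamber rises by (orifice area / chamber area)
    for each unit depth of rain caught by the collector."""
    a_orifice = math.pi * (orifice_diameter_mm / 2) ** 2
    a_chamber = math.pi * (chamber_diameter_mm / 2) ** 2
    return a_orifice / a_chamber

# A 160 mm orifice feeding an 80 mm diameter chamber expands the record 4x;
# a chamber wider than the orifice compresses it instead.
print(chart_magnification(160, 80))  # → 4.0
```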

6.6 Measurement of dew, ice accumulation and fog precipitation


6.6.1 Measurement of dew and leaf wetness

The deposition of dew is essentially a nocturnal phenomenon and, although relatively small in amount and locally variable, is of much interest in arid zones; in very arid regions, it may be of the same order of magnitude as the rainfall. The exposure of plant leaves to liquid moisture from dew, fog and precipitation also plays an important role in plant disease, insect activity, and the harvesting and curing of crops. In order to assess the hydrological contribution of dew, it is necessary to distinguish between dew formed: (a) As a result of the downward transport of atmospheric moisture condensed on cooled surfaces, known as dew-fall; (b) By water vapour evaporated from the soil and plants and condensed on cooled surfaces, known as distillation dew; (c) As water exuded by leaves, known as guttation. All three forms of dew may contribute simultaneously to the observed dew, although only the first provides additional water to the surface, and the latter usually results in a net loss. A further source of moisture results from fog or cloud droplets being collected by leaves and twigs and reaching the ground by dripping or by stem flow. The amount of dew deposited on a given surface in a stated period is usually expressed in units of kg m–2 or in millimetres depth of dew. Whenever possible, the amount should be measured to the nearest tenth of a millimetre.

Leaf wetness may be described as light, moderate or heavy, but its most important measures are the time of onset or duration.

A review of the instruments designed for measuring dew and the duration of leaf wetness, as well as a bibliography, is given in WMO (1992b). The following methods for the measurement of leaf wetness are considered. The amount of dew depends critically on the properties of the surface, such as its radiative properties, size and aspect (horizontal or vertical). It may be measured by exposing a plate or surface, which can be natural or artificial, with known or standardized properties, and assessing the amount of dew by weighing it, visually observing it, or making use of some other quantity such as electrical conductivity. The problem lies in the choice of the surface, because the results obtained instrumentally are not necessarily representative of the dew deposit on the surrounding objects. Empirical relationships between the instrumental measurements and the deposition of dew on a natural surface should, therefore, be established for each particular set of surface and exposure conditions; empirical relationships should also be established to distinguish between the processes of dew formation if that is important for the particular application. A number of instruments are in use for the direct measurement of the occurrence, amount and duration of leaf wetness and dew. Dew-duration recorders use either elements which themselves change in such a manner as to indicate or record the wetness period, or electrical sensors in which the electrical conductivity of the surface of natural or artificial leaves changes in the presence of water resulting from rain, snow, wet fog or dew. In dew balances, the amount of moisture deposited in the form of precipitation or dew is weighed and recorded. In most instruments providing a continuous trace, it is possible to distinguish between moisture deposits caused by fog, dew or rain by considering the type of trace. The only certain method of measuring net dew-fall by itself is through the use of a very sensitive lysimeter (see Part I, Chapter 10). In WMO (1992b) two particular electronic instruments for measuring leaf wetness are advocated for development as reference instruments, and various leaf-wetting simulation models are proposed. Some use an energy balance approach (the inverse of evaporation models), while others use correlations. Many of them require micrometeorological measurements. Unfortunately, there is no recognized standard method of measurement to verify them.

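For the electrical sensors mentioned above, wetness onset and duration are typically derived by thresholding a conductance signal. The sketch below uses an arbitrary normalized threshold; operational sensors require site- and sensor-specific calibration.

```python
def wetness_periods(readings, threshold=0.5):
    """Derive leaf-wetness onset and duration from (time_min, value)
    pairs of normalized sensor conductance. Returns a list of
    (onset_min, duration_min) tuples; `threshold` is illustrative."""
    periods = []
    onset = None
    for t, value in readings:
        if value >= threshold and onset is None:
            onset = t                            # wetness begins
        elif value < threshold and onset is not None:
            periods.append((onset, t - onset))   # wetness ends
            onset = None
    if onset is not None:                        # still wet at end of record
        periods.append((onset, readings[-1][0] - onset))
    return periods

data = [(0, 0.1), (10, 0.7), (20, 0.9), (30, 0.8), (40, 0.2), (50, 0.1)]
print(wetness_periods(data))  # → [(10, 30)]
```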
6.6.2 Measurement of ice accumulation

Ice can accumulate on surfaces as a result of several phenomena. Ice accumulation from freezing precipitation, often referred to as glaze, is the most dangerous type of icing condition. It may cause extensive damage to trees, shrubs and telephone and power lines, and create hazardous conditions on roads and runways. Hoar frost (commonly called frost) forms when air with a dew-point temperature below freezing is brought to saturation by cooling. Hoar frost is a deposit of interlocking ice crystals formed by direct sublimation on objects, usually of small diameter, such as tree branches, plant stems, leaf edges, wires, poles, and so forth. Rime is a white or milky and opaque granular deposit of ice formed by the rapid freezing of supercooled water drops as they come into contact with an exposed object.



Measurement methods

At meteorological stations, the observation of ice accumulation is generally more qualitative than quantitative, primarily due to the lack of a suitable sensor. Ice accretion indicators, usually made of anodized aluminium, are used to observe and report the occurrence of freezing precipitation, frost or rime icing. Observations of ice accumulation can include both the measurement of the dimensions and the weight of the ice deposit as well as a visual description of its appearance. These observations are particularly important in mountainous areas where such accumulation on the windward side of a mountain may exceed the normal precipitation. A system consisting of rods and stakes with two pairs of parallel wires (one pair oriented north-south and the other east-west) can be used to accumulate ice. The wires may be suspended at any level, and the upper wire of each pair should be removable. At the time of observation, both upper wires are removed, placed in a special container, and taken indoors for melting and weighing of the deposit. The cross-section of the deposit is measured on the permanently fixed lower wires. Recording instruments are used in some countries for continuous registration of rime. A vertical or horizontal rod, ring or plate is used as the sensor, and the increase in the amount of rime with time is recorded on a chart. A simple device called an icescope is used to determine the appearance and presence of rime and hoar frost on a snow surface. The ice-scope consists of a round plywood disc, 30 cm in diameter, which can be moved up or down and set at any height on a vertical rod fixed in the ground. Normally, the disc is set flush with the snow surface to collect the rime and hoar frost. Rime is also collected on a 20 cm diameter ring fixed on the rod, 20 cm from its upper end. A wire or thread 0.2 to 0.3 mm in diameter, stretched between the ring and the top end of the rod, is used for the observation of rime deposits. 
If necessary, each sensor can be removed and weighed.

Ice on pavements

Sensors have been developed and are in operation to detect and describe ice on roads and runways, and to support warning and maintenance programmes. With a combination of measurements, it is possible to detect dry and wet snow and various forms of ice. One sensor using two electrodes embedded in the road, flush with the surface, measures the electrical conductivity of the surface and readily distinguishes between dry and wet surfaces. A second measurement, of ionic polarizability, determines the ability of the surface to hold an electrical charge; a small charge is passed between a pair of electrodes for a short time, and the same electrodes measure the residual charge, which is higher when there is an electrolyte with free ions, such as salty water. The polarizability and conductivity measurements together can distinguish between dry, moist and wet surfaces, frost, snow, white ice and some de-icing chemicals. However, because the polarizability of the non-crystalline black ice is indistinguishable from water under some conditions, the dangerous black ice state can still not be detected with the two sensors. In at least one system, this problem has been solved by adding a third specialized capacitive measurement which detects the unique structure of black ice.

The above method is a passive technique. There is an active in situ technique that uses either a heating element, or both heating and cooling elements, to melt or freeze any ice or liquid present on the surface. Simultaneous measurements of temperature and of the heat energy involved in the thaw-freeze cycle are used to determine the presence of ice and to estimate the freezing point of the mixture on the surface. Most in situ systems include a thermometer to measure the road surface temperature. The quality of the measurement depends critically on the mounting (especially the materials) and exposure, and care must be taken to avoid radiation errors.

There are two remote-sensing methods under development which lend themselves to car-mounted systems. The first method is based on the reflection of infrared and microwave radiation at several frequencies (about 3 000 nm and 3 GHz, respectively). The microwave reflections can determine the thickness of the water layer (and hence the risk of aquaplaning), but not the ice condition. Two infrared frequencies can be used to distinguish between dry, wet and icy conditions. It has also been demonstrated that the magnitude of reflected power at wavelengths around 2 000 nm depends on the thickness of the ice layer. The second method applies pattern recognition techniques to the reflection of laser light from the pavement, to distinguish between dry and wet surfaces, and black ice.

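The combination of conductivity, polarizability and a capacitive black-ice channel lends itself to simple decision logic. All thresholds and channel scalings below are invented for the sketch; operational sensors rely on manufacturer calibrations.

```python
def classify_surface(conductivity, polarizability, black_ice_signal=None):
    """Coarse road-surface state from normalized (0-1) sensor channels.
    Conductivity separates dry from wet; polarizability separates
    electrolytes (such as salty water) from frozen deposits; an optional
    third, capacitive channel is needed to catch black ice, whose
    polarizability can match that of water."""
    if conductivity < 0.1 and polarizability < 0.1:
        state = "dry"
    elif polarizability > 0.6:
        state = "wet (possibly with de-icing chemicals)"
    elif conductivity > 0.3:
        state = "moist"
    else:
        state = "frost, snow or white ice"
    # Capacitive override: black ice is invisible to the first two channels.
    if black_ice_signal is not None and black_ice_signal > 0.8:
        state = "black ice"
    return state
```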
6.6.3 Measurement of fog precipitation

Fog consists of minute water droplets suspended in the atmosphere to form a cloud at the Earth’s surface. Fog droplets have diameters from about 1 to 40 μm and fall velocities from less than 1 to approximately 5 cm s–1. In fact, the fall speed of fog droplets is so low that, even in light winds, the drops will travel almost horizontally. When fog is present, horizontal visibility is usually less than 5 km; it is rarely observed when the temperature and dew point differ by more than 2°C. Meteorologists are generally more concerned with fog as an obstruction to vision than as a form of precipitation. However, from a hydrological standpoint, some forested high-elevation areas experience frequent episodes of fog as a result of the advection of clouds over the surface of the mountain, where the consideration of precipitation alone may seriously underestimate the water input to the watershed (Stadtmuller and Agudelo, 1990). More recently, the recognition of fog as a water supply source in upland areas (Schemenauer and Cereceda, 1994a) and as a wet deposition pathway (Schemenauer and Cereceda, 1991; Vong, Sigmon and Mueller, 1991) has led to the requirement for standardizing methods and units of measurement. The following methods for the measurement of fog precipitation are considered. Although there have been a great number of measurements for the collection of fog by trees and various types of collectors over the last century, it is difficult to compare the collection rates quantitatively. The most widely used fog-measuring instrument consists of a vertical wire mesh cylinder centrally fixed on the top of a raingauge in such a way that it is fully exposed to the free flow of the air. The cylinder is 10 cm in diameter and 22 cm in height, and the mesh is 0.2 cm by 0.2 cm (Grunow, 1960). The droplets from the moisture-laden air are deposited on the mesh and drop down into the gauge collector where they are measured or registered in the same way as rainfall.
Some problems with this instrument are its small size, the lack of representativeness with respect to vegetation, the storage of water in the small openings in the mesh, and the ability of precipitation to enter directly into the raingauge portion, which confounds the measurement of fog deposition. In addition, the calculation of fog precipitation by simply subtracting the amount of rain in a standard raingauge (Grunow, 1963) from that in the fog collector leads to erroneous results whenever wind is present.

An inexpensive, 1 m2 standard fog collector and standard unit of measurement is proposed by Schemenauer and Cereceda (1994b) to quantify the importance of fog deposition to forested high-elevation areas and to measure the potential collection rates in denuded or desert mountain ranges. The collector consists of a flat panel made of a durable polypropylene mesh and mounted with its base 2 m above the ground. The collector is coupled to a tipping-bucket raingauge to determine deposition rates. When wind speed measurements are taken in conjunction with the fog collector, reasonable estimates of the proportions of fog and rain being deposited on the vertical mesh panel can be made. The output of this collector is measured in litres of water. Since the surface area is 1 m2, this gives a collection in l m–2.
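Converting the coupled tipping-bucket record of the 1 m2 standard fog collector into the proposed unit (litres per square metre, per unit time) is straightforward. The 0.01 l bucket volume below is an assumed gauge calibration, not a standard value.

```python
def fog_deposition_rate(tips, bucket_litres=0.01, panel_area_m2=1.0, hours=1.0):
    """Deposition rate in litres per square metre per hour (l m-2 h-1)
    from the number of tipping-bucket tips recorded over `hours`."""
    volume_l = tips * bucket_litres
    return volume_l / (panel_area_m2 * hours)

# Hypothetical fog event: 250 tips (2.5 l) collected over 12 h.
print(round(fog_deposition_rate(250, hours=12), 2))  # → 0.21 l per m2 per hour
```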


6.7 Measurement of snowfall and snow cover

The authoritative texts on this topic are WMO (1994) and WMO (1992a), which cover the hydrological aspects, including the procedures, for snow surveying on snow courses. The following is a brief account of some simple and well-known methods, and a brief review of the instrumentation. Snowfall is the depth of freshly fallen snow deposited over a specified period (generally 24 h). Thus, snowfall does not include the deposition of drifting or blowing snow. For the purposes of depth measurements, the term “snow” should also include ice pellets, glaze, hail, and sheet ice formed directly or indirectly from precipitation. Snow depth usually means the total depth of snow on the ground at the time of observation. The water equivalent of a snow cover is the vertical depth of the water that would be obtained by melting the snow cover.

6.7.1 Snowfall depth

Direct measurements of the depth of fresh snow on open ground are taken with a graduated ruler or scale. A sufficient number of vertical measurements should be made in places where drifting is considered absent in order to provide a representative average. Where extensive drifting of snow has occurred, a greater number of measurements are needed to obtain a representative depth. Special precautions should be taken so as not to measure any previously fallen snow. This can be done by sweeping a suitable patch clear beforehand or by covering the top of the old snow surface with a piece of suitable material (such as wood with a slightly rough surface, painted white) and measuring the depth accumulated on it. On a sloping surface (to be avoided, if possible) measurements should still be taken with the measuring rod vertical. If there is a layer of old snow, it would be incorrect to calculate the depth of the new snow from the difference between two consecutive measurements of total depth of snow, since lying snow tends to become compressed and to suffer ablation.

6.7.2 Direct measurements of snow cover depth

Depth measurements of snow cover or snow accumulated on the ground are taken with a snow ruler or similar graduated rod which is pushed down through the snow to the ground surface. It may be difficult to obtain representative depth measurements using this method in open areas since the snow cover drifts and is redistributed under the effects of the wind, and may have embedded ice layers that limit penetration with a ruler. Care should be taken to ensure that the total depth is measured, including the depth of any ice layers which may be present. A number of measurements are taken and averaged at each observing station. A number of snow stakes, painted with rings of alternate colours or another suitable scale, provide a convenient means of measuring the total depth of snow on the ground, especially in remote regions. The depth of snow at the stake or marker may be observed from distant ground points or from aircraft by means of binoculars or telescopes. The stakes should be painted white to minimize the undue melting of the snow immediately surrounding them. Aerial snow depth markers are vertical poles (of variable length, depending on the maximum snow depth) with horizontal cross-arms mounted at fixed heights on the poles and oriented according to the point of observation. The development of an inexpensive ultrasonic ranging device to provide reliable snow depth measurements at automatic stations has provided a feasible alternative to the standard observation, both for snow depth and fresh snowfall (Goodison and others, 1988). This sensor can be utilized to control the quality of automatic recording gauge measurements by providing additional details on the type, amount and timing of precipitation. It is capable of an uncertainty of ±2.5 cm.

6.7.3 Direct measurements of snow water equivalent

The standard method of measuring water equivalent is by gravimetric measurement using a snow tube to obtain a sample core. This method serves as the basis for snow surveys, a common procedure in many countries for obtaining a measure of water equivalent. The method consists of either melting each sample and measuring its liquid content or weighing the frozen sample. A measured quantity of warm water or a heat source can be used to melt the sample. Cylindrical samples of fresh snow may be taken with a suitable snow sampler and either weighed or melted. Details of the available instruments and sampling techniques are described in WMO (1994). Often a standard raingauge overflow can be used for this method. Snowgauges measure snowfall water equivalent directly. Essentially, any non-recording precipitation gauge can also be used to measure the water equivalent of solid precipitation. Snow collected in these types of gauges should be either weighed or melted immediately after each observation, as described earlier in this chapter. The recording-weighing gauge will catch solid forms of precipitation as well as liquid forms, and record the water equivalent in the same manner as liquid forms (see section 6.5.1). The water equivalent of solid precipitation can also be estimated using the depth of fresh snowfall. This measurement is converted to water equivalent by using an appropriate specific density. Although the relationship stating that 1 cm of fresh snow equals the equivalent of 1 mm of water may be used with caution for long-term average values, it may be highly inaccurate for a single measurement, as the specific density ratio of snow may vary between 0.03 and 0.4.

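The depth-to-water-equivalent conversion discussed above is a single multiplication, but the spread of possible density ratios is what makes it unreliable for a single measurement:

```python
def swe_mm(depth_cm, density_ratio):
    """Snow water equivalent (mm) from fresh-snow depth (cm) and a
    specific density ratio: 1 cm of snow at ratio r melts to 10*r mm
    of water. The ratio may vary between about 0.03 and 0.4."""
    return depth_cm * density_ratio * 10.0

# 15 cm of fresh snow:
print(round(swe_mm(15, 0.1), 2))   # → 15.0 (the "1 cm snow = 1 mm water" rule)
print(round(swe_mm(15, 0.03), 2))  # → 4.5 (light, dry snow)
print(round(swe_mm(15, 0.4), 2))   # → 60.0 (dense, wet snow)
```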
6.7.4 Snow pillows

Snow pillows of various dimensions and materials are used to measure the weight of the snow that accumulates on the pillow. The most common pillows are flat circular containers (with a diameter of 3.7 m) made of rubberized material and filled with an antifreeze mixture of methyl alcohol and water or a methanol-glycol-water solution. The pillow is installed on the surface of the ground, flush with the ground, or buried under a thin layer of soil or sand. In order to prevent damage to the equipment and to preserve the snow cover in its natural condition, it is recommended that the site be fenced in. Under normal conditions, snow pillows can be used for 10 years or more. Hydrostatic pressure inside the pillow is a measure of the weight of the snow on the pillow. Measuring the hydrostatic pressure by means of a float-operated liquid-level recorder or a pressure transducer provides a method of continuous measurement of the water equivalent of the snow cover. Variations in the accuracy of the measurements may be induced by temperature changes. In shallow snow cover, diurnal temperature changes may cause expansion or contraction of the fluid in the pillow, thus giving spurious indications of snowfall or snow melt. In deep mountain areas, diurnal temperature fluctuations are unimportant, except at the beginning and end of the snow season. The access tube to the measurement unit should be installed in a temperature-controlled shelter or in the ground to reduce the temperature effects. In situ and/or telemetry data-acquisition systems can be installed to provide continuous measurements of snow water equivalent through the use of charts or digital recorders. Snow pillow measurements differ from those taken with standard snow tubes, especially during the snow-melt period. They are most reliable when the snow cover does not contain ice layers, which can cause “bridging” above the pillows. A comparison of the water equivalent of snow determined by a snow pillow with measurements taken by the standard method of weighing shows that these may differ by 5 to 10 per cent.

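The hydrostatic pressure inside the pillow converts directly to water equivalent, since a 1 mm deep layer of water over 1 m2 weighs 1 kg. A pressure-transducer reading in pascals can be converted as follows (the function name and example value are illustrative):

```python
RHO_WATER = 1000.0  # density of water, kg m-3
G = 9.81            # gravitational acceleration, m s-2

def pillow_swe_mm(pressure_pa):
    """Snow water equivalent (mm) from the hydrostatic pressure (Pa)
    measured inside a snow pillow: depth = p / (rho_water * g),
    expressed in millimetres."""
    return pressure_pa / (RHO_WATER * G) * 1000.0

print(round(pillow_swe_mm(2943.0), 1))  # → 300.0 mm of water equivalent
```

Given the 5 to 10 per cent differences from snow-tube measurements noted above, such values are best treated as an index to be checked periodically against gravimetric sampling.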
6.7.5 Radioisotope snowgauges

Nuclear gauges measure the total water equivalent of the snow cover and/or provide a density profile. They are a non-destructive method of sampling and are adaptable to in situ recording and/or telemetry systems. Nearly all systems operate on the principle that water, snow or ice attenuates radiation. As with other methods of point measurement, siting in a representative location is critical for interpreting and applying point measurements as areal indices. The gauges used to measure total water content consist of a radiation detector and a source, which is either natural or artificial. One part (for example, the detector/source) of the system is located at the base of the snowpack, and the other at a height greater than the maximum expected snow depth. As snow accumulates, the count rate decreases in proportion to the water equivalent of the snowpack. Systems using an artificial source of radiation are used at fixed locations to obtain measurements only for that site. A system using naturally occurring uranium as a ring source around a single pole detector has been successfully used to measure packs of up to 500 mm of water equivalent, or a depth of 150 cm.

A profiling radioactive snowgauge at a fixed location provides data on total snow water equivalent and density and permits an accurate study of the water movements and density changes that occur with time in a snowpack (Armstrong, 1976). A profiling gauge consists of two parallel vertical access tubes, spaced approximately 66 cm apart, which extend from a cement base in the ground to a height above the maximum expected depth of snow. A gamma ray source is suspended in one tube, and a scintillation gamma-ray detector, attached to a photomultiplier tube, in the other. The source and detector are set at equal depths within the snow cover and a measurement is taken. Vertical density profiles of the snow cover are obtained by taking measurements at depth increments of about 2 cm. A portable gauge (Young, 1976) which measures the density of the snow cover by backscatter, rather than transmission of the gamma rays, offers a practical alternative to digging deep snow pits, while instrument portability makes it possible to assess areal variations of density and water equivalent.

The method of gamma radiation snow surveying is based on the attenuation by snow of gamma radiation emanating from natural radioactive elements in the top layer of the soil. The greater the water equivalent of the snow, the more the radiation is attenuated. Terrestrial gamma surveys can consist of a point measurement at a remote location, a series of point measurements, or a selected traverse over a region (Loijens, 1975). The method can also be used on aircraft. The equipment includes a portable gamma-ray spectrometer that utilizes a small scintillation crystal to measure the rays in a wide spectrum and in three spectral windows (namely, potassium, uranium and thorium emissions). With this method, measurements of



gamma levels are required at the point, or along the traverse, prior to snow cover. In order to obtain absolute estimates of the snow water equivalent, it is necessary to correct the readings for soil moisture changes in the upper 10 to 20 cm of soil, for variations in background radiation resulting from cosmic rays and instrument drift, and for the washout of radon gas (which is a source of gamma radiation) in precipitation with subsequent build-up in the soil or snow. Also, in order to determine the relationship between spectrometer count rates and

water equivalent, supplementary snow water equivalent measurements are initially required. Snow tube measurements are the common reference standard. The natural gamma method can be used for snowpacks which have up to 300 mm water equivalent; with appropriate corrections, its precision is ±20 mm. The advantage of this method over the use of artificial radiation sources is the absence of a radiation risk.
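Both the radioisotope and the natural gamma techniques rest on the exponential attenuation of gamma radiation by the mass of water between source and detector. A minimal sketch of the inversion, assuming a simple Beer-Lambert relation with an illustrative attenuation coefficient (the coefficient and count rates are not values from this Guide):

```python
import math

def swe_from_count_rates(n0, n, mu_per_mm=0.0077):
    """Invert n = n0 * exp(-mu_per_mm * swe) for the snow water equivalent.

    n0: count rate measured with no snow present
    n: count rate measured through the snowpack
    mu_per_mm: effective attenuation coefficient per mm of water
               (illustrative value, not taken from the Guide)
    Returns the snow water equivalent in mm.
    """
    return math.log(n0 / n) / mu_per_mm

# Example: a count rate that has fallen to half its snow-free value
swe = swe_from_count_rates(1000.0, 500.0)
```

In practice the effective coefficient would be determined empirically against snow tube measurements, which, as noted above, serve as the common reference standard.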



ANNEX 6.A PRECIPITATION INTERCOMPARISON SITES

The Commission for Instruments and Methods of Observation, at its eleventh session, held in 1994, made the following statement regarding precipitation intercomparison sites:

The Commission recognized the benefits of national precipitation sites or centres where past, current and future instruments and methods of observation for precipitation can be assessed on an ongoing basis at evaluation stations. These stations should:
(a) Operate the WMO recommended gauge configurations for rain (pit gauge) and snow (Double Fence Intercomparison Reference (DFIR)). Installation and operation will follow specifications of the WMO precipitation intercomparisons. A DFIR installation is not required when only rain is observed;
(b) Operate past, current and new types of operational precipitation gauges or other methods of observation according to standard operating procedures and evaluate the accuracy and performance against WMO recommended reference instruments;
(c) Take auxiliary meteorological measurements which will allow the development and tests for the application of precipitation correction procedures;
(d) Provide quality control of data and archive all precipitation intercomparison data, including the related meteorological observations and the metadata, in a readily acceptable format, preferably digital;
(e) Operate continuously for a minimum of 10 years;
(f) Test all precipitation correction procedures available (especially those outlined in the final reports of the WMO intercomparisons) on the measurement of rain and solid precipitation;
(g) Facilitate the conduct of research studies on precipitation measurements.

It is not expected that the centres provide calibration or verification of instruments. They should make recommendations on national observation standards and should assess the impact of changes in observational methods on the homogeneity of precipitation time series in the region. The site would provide a reference standard for calibrating and validating radar or remote-sensing observations of precipitation.



ANNEX 6.B SUGGESTED CORRECTION PROCEDURES FOR PRECIPITATION MEASUREMENTS
The Commission for Instruments and Methods of Observation, at its eleventh session, held in 1994, made the following statement regarding the correction procedures for precipitation measurements:

The correction methods are based on simplified physical concepts as presented in the Instruments Development Inquiry (Instruments and Observing Methods Report No. 24, WMO/TD-No. 231). They depend on the type of precipitation gauge applied. The effect of wind on a particular type of gauge has been assessed by using intercomparison measurements with the WMO reference gauges, namely the pit gauge for rain and the Double Fence Intercomparison Reference (DFIR) for snow, as is shown in the International Comparison of National Precipitation Gauges with a Reference Pit Gauge (Instruments and Observing Methods Report No. 17, WMO/TD-No. 38) and by the preliminary results of the WMO Solid Precipitation Measurement Intercomparison.

The reduction of wind speed to the level of the gauge orifice should be made according to the following formula:

u_hp = [log(h/z0) / log(H/z0)] · (1 – 0.024α) · u_H

where u_hp is the wind speed at the level of the gauge orifice; h is the height of the gauge orifice above ground; z0 is the roughness length (0.01 m for winter and 0.03 m for summer); H is the height of the wind speed measuring instrument above ground; u_H is the wind speed measured at the height H above ground; and α is the average vertical angle of obstacles around the gauge.

The angle α depends on the exposure of the gauge site and can be based either on the average value of direct measurements, on one of the eight main directions of the wind rose of the vertical angle of obstacles (in 360°) around the gauge, or on the classification of the exposure using metadata as stored in the archives of Meteorological Services. The classes are as follows:
Class | Angle | Description
Exposed site | 0–5° | Only a few small obstacles such as bushes, a group of trees, a house
Mainly exposed site | | Small groups of trees or bushes, or one or two houses
Mainly protected site | | Parks, forest edges, village centres, farms, groups of houses, yards
Protected site | | Young forest, small forest clearing, park with big trees, city centres, closed deep valleys, strongly rugged terrain, leeward of big hills
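The wind speed reduction formula above can be sketched in code; the function and variable names are illustrative, not taken from the Guide:

```python
import math

def wind_at_gauge_orifice(u_H, h, H, z0, alpha_deg):
    """Reduce wind speed u_H, measured at height H, to the gauge-orifice
    height h: u_hp = [log(h/z0) / log(H/z0)] * (1 - 0.024 * alpha) * u_H.

    z0 is the roughness length (0.01 m in winter, 0.03 m in summer) and
    alpha_deg is the average vertical angle of obstacles, in degrees.
    """
    return math.log(h / z0) / math.log(H / z0) * (1 - 0.024 * alpha_deg) * u_H

# Example: anemometer at 10 m reads 5.0 m/s; gauge orifice at 1 m;
# winter roughness length (0.01 m); obstacle angle 5 degrees.
u_hp = wind_at_gauge_orifice(5.0, h=1.0, H=10.0, z0=0.01, alpha_deg=5.0)
```

Note that the ratio of logarithms is independent of the logarithm base, so natural or common logarithms give the same result.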




Wetting losses occur with the moistening of the inner walls of the precipitation gauge. They depend on the shape and the material of the gauge, as well as on the type and frequency of precipitation. For example, for the Hellmann gauge they amount to an average of 0.3 mm on a rainy and 0.15 mm on a snowy day; the respective values for the Tretyakov gauge are 0.2 mm and 0.1 mm. Information on wetting losses for other types of gauges can be found in Methods of Correction for Systematic Error in Point Precipitation Measurement for Operational Use (WMO-No. 589).
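Applied as a per-day correction, the average wetting losses quoted above translate into a simple adjustment of a measured total. The function below is an illustrative sketch, not a correction procedure prescribed by the Guide:

```python
# Average wetting loss per precipitation day, in mm, as quoted in the text:
# (rainy day, snowy day)
WETTING_LOSS_MM = {
    "hellmann": (0.3, 0.15),
    "tretyakov": (0.2, 0.1),
}

def corrected_total(measured_mm, gauge, rainy_days, snowy_days):
    """Add the average wetting loss back onto a measured precipitation total."""
    rain_loss, snow_loss = WETTING_LOSS_MM[gauge]
    return measured_mm + rainy_days * rain_loss + snowy_days * snow_loss

# Example: 100 mm measured with a Hellmann gauge over 10 rainy and 4 snowy days
total = corrected_total(100.0, "hellmann", rainy_days=10, snowy_days=4)
```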


REFERENCES AND FURTHER READING

Armstrong, R.L., 1976: The application of isotopic profiling snow-gauge data to avalanche research. Proceedings of the Forty-fourth Annual Western Snow Conference, Atmospheric Environment Service, Canada, pp. 12–19.

Goodison, B.E., J.R. Metcalfe, R.A. Wilson and K. Jones, 1988: The Canadian automatic snow depth sensor: A performance update. Proceedings of the Fifty-sixth Annual Western Snow Conference, Atmospheric Environment Service, Canada, pp. 178–181.

Goodison, B.E., B. Sevruk and S. Klemm, 1989: WMO solid precipitation measurement intercomparison: Objectives, methodology and analysis. In: International Association of Hydrological Sciences, 1989: Atmospheric Deposition. Proceedings, Baltimore Symposium (May 1989), IAHS Publication No. 179, Wallingford.

Grunow, J., 1960: The productiveness of fog precipitation in relation to the cloud droplet spectrum. In: American Geophysical Union, 1960: Physics of Precipitation. Geophysical Monograph No. 5, Proceedings of the Cloud Physics Conference (3–5 June 1959, Woods Hole, Massachusetts), Publication No. 746, pp. 110–117.

Grunow, J., 1963: Weltweite Messungen des Nebelniederschlags nach der Hohenpeissenberger Methode. In: International Union of Geodesy and Geophysics, General Assembly (Berkeley, California, 19–31 August 1963), International Association of Scientific Hydrology Publication No. 65, 1964, pp. 324–342.

Loijens, H.S., 1975: Measurements of snow water equivalent and soil moisture by natural gamma radiation. Proceedings of the Canadian Hydrological Symposium-75 (11–14 August 1975, Winnipeg), pp. 43–50.

Nespor, V. and B. Sevruk, 1999: Estimation of wind-induced error of rainfall gauge measurements using a numerical simulation. Journal of Atmospheric and Oceanic Technology, Volume 16, Number 4, pp. 450–464.

Rinehart, R.E., 1983: Out-of-level instruments: Errors in hydrometeor spectra and precipitation measurements. Journal of Climate and Applied Meteorology, 22, pp. 1404–1410.

Schemenauer, R.S. and P. Cereceda, 1991: Fog water collection in arid coastal locations. Ambio, Volume 20, Number 7, pp. 303–308.

Schemenauer, R.S. and P. Cereceda, 1994a: Fog collection's role in water planning for developing countries. Natural Resources Forum, Volume 18, Number 2, pp. 91–100.

Schemenauer, R.S. and P. Cereceda, 1994b: A proposed standard fog collector for use in high-elevation regions. Journal of Applied Meteorology, Volume 33, Number 11, pp. 1313–1322.

Sevruk, B., 1974a: Correction for the wetting loss of a Hellman precipitation gauge. Hydrological Sciences Bulletin, Volume 19, Number 4, pp. 549–559.

Sevruk, B., 1974b: Evaporation losses from containers of Hellman precipitation gauges. Hydrological Sciences Bulletin, Volume 19, Number 2, pp. 231–236.

Sevruk, B., 1984: Comments on "Out-of-level instruments: Errors in hydrometeor spectra and precipitation measurements". Journal of Climate and Applied Meteorology, 23, pp. 988–989.

Sevruk, B. and V. Nespor, 1994: The effect of dimensions and shape of precipitation gauges on the wind-induced error. In: M. Desbois and F. Desalmand (eds): Global Precipitation and Climate Change, NATO ASI Series, I26, Springer Verlag, Berlin, pp. 231–246.

Sevruk, B. and L. Zahlavova, 1994: Classification system of precipitation gauge site exposure: Evaluation and application. International Journal of Climatology, 14(b), pp. 681–689.

Slovak Hydrometeorological Institute and Swiss Federal Institute of Technology, 1993: Precipitation measurement and quality control. Proceedings of the International Symposium on Precipitation and Evaporation (B. Sevruk and M. Lapin, eds) (Bratislava, 20–24 September 1993), Volume I, Bratislava and Zurich.

Smith, J.L., H.G. Halverson and R.A. Jones, 1972: Central Sierra Profiling Snowgauge: A Guide to Fabrication and Operation. USAEC Report TID-25986, National Technical Information Service, U.S. Department of Commerce, Washington DC.

Stadtmuller, T. and N. Agudelo, 1990: Amount and variability of cloud moisture input in a tropical cloud forest. In: Proceedings of the Lausanne Symposia (August/November), IAHS Publication No. 193, Wallingford.



Vong, R.J., J.T. Sigmon and S.F. Mueller, 1991: Cloud water deposition to Appalachian forests. Environmental Science and Technology, 25(b), pp. 1014–1021.

World Meteorological Organization, 1972: Evaporation losses from storage gauges (B. Sevruk). Distribution of Precipitation in Mountainous Areas, Geilo Symposium (Norway, 31 July–5 August 1972), Volume II, technical papers, WMO-No. 326, Geneva, pp. 96–102.

World Meteorological Organization, 1982: Methods of Correction for Systematic Error in Point Precipitation Measurement for Operational Use (B. Sevruk). Operational Hydrology Report No. 21, WMO-No. 589, Geneva.

World Meteorological Organization, 1984: International Comparison of National Precipitation Gauges with a Reference Pit Gauge (B. Sevruk and W.R. Hamon). Instruments and Observing Methods Report No. 17, WMO/TD-No. 38, Geneva.

World Meteorological Organization, 1985: International Organizing Committee for the WMO Solid Precipitation Measurement Intercomparison. Final report of the first session (distributed to participants only), Geneva.

World Meteorological Organization, 1986: Papers Presented at the Workshop on the Correction of Precipitation Measurements (B. Sevruk, ed.) (Zurich, Switzerland, 1–3 April 1985). Instruments and Observing Methods Report No. 25, WMO/TD-No. 104, Geneva.

World Meteorological Organization, 1989a: Catalogue of National Standard Precipitation Gauges (B. Sevruk and S. Klemm). Instruments and Observing Methods Report No. 39, WMO/TD-No. 313, Geneva.

World Meteorological Organization, 1989b: International Workshop on Precipitation Measurements (B. Sevruk, ed.) (St Moritz, Switzerland, 3–7 December 1989). Instruments and Observing Methods Report No. 48, WMO/TD-No. 328, Geneva.

World Meteorological Organization, 1992a: Snow Cover Measurements and Areal Assessment of Precipitation and Soil Moisture (B. Sevruk, ed.). Operational Hydrology Report No. 35, WMO-No. 749, Geneva.

World Meteorological Organization, 1992b: Report on the Measurement of Leaf Wetness (R.R. Getz). Agricultural Meteorology Report No. 38, WMO/TD-No. 478, Geneva.

World Meteorological Organization, 1994: Guide to Hydrological Practices. Fifth edition, WMO-No. 168, Geneva.

World Meteorological Organization, 1998: WMO Solid Precipitation Measurement Intercomparison: Final Report (B.E. Goodison, P.Y.T. Louie and D. Yang). Instruments and Observing Methods Report No. 67, WMO/TD-No. 872, Geneva.

Young, G.J., 1976: A portable profiling snow-gauge: Results of field tests on glaciers. Proceedings of the Forty-fourth Annual Western Snow Conference, Atmospheric Environment Service, Canada, pp. 7–11.


MEASUREMENT OF RADIATION



The various fluxes of radiation to and from the Earth's surface are among the most important variables in the heat economy of the Earth as a whole and at any individual place at the Earth's surface or in the atmosphere. Radiation measurements are used for the following purposes:
(a) To study the transformation of energy within the Earth-atmosphere system and its variation in time and space;
(b) To analyse the properties and distribution of the atmosphere with regard to its constituents, such as aerosols, water vapour, ozone, and so on;
(c) To study the distribution and variations of incoming, outgoing and net radiation;
(d) To satisfy the needs of biological, medical, agricultural, architectural and industrial activities with respect to radiation;
(e) To verify satellite radiation measurements and algorithms.

Such applications require a widely distributed regular series of records of solar and terrestrial surface radiation components and the derivation of representative measures of the net radiation. In addition to the publication of serial values for individual observing stations, an essential objective must be the production of comprehensive radiation climatologies, whereby the daily and seasonal variations of the various radiation constituents of the general thermal budget may be more precisely evaluated and their relationships with other meteorological elements better understood.

A very useful account of radiation measurements and the operation and design of networks of radiation stations is contained in WMO (1986a). This manual describes the scientific principles of the measurements and gives advice on quality assurance, which is most important for radiation measurements. The Baseline Surface Radiation Network (BSRN) Operations Manual (WMO, 1998) gives an overview of the latest state of radiation measurements.
Following normal practice in this field, errors and uncertainties are expressed in this chapter as a 66 per cent confidence interval of the difference from the true quantity, which is similar to a standard

deviation of the population of values. Where needed, specific uncertainty confidence intervals are indicated and uncertainties are estimated using the International Organization for Standardization method (ISO, 1995). For example, 95 per cent uncertainty implies that the stated uncertainty is for a confidence interval of 95 per cent.

7.1.1 Definitions

Annex 7.A contains the nomenclature of radiometric and photometric quantities. It is based on definitions recommended by the International Radiation Commission of the International Association of Meteorology and Atmospheric Sciences and by the International Commission on Illumination (ICI). Annex 7.B gives the meteorological radiation quantities, symbols and definitions.

Radiation quantities may be classified into two groups according to their origin, namely solar and terrestrial radiation. In the context of this chapter, "radiation" can imply a process or apply to multiple quantities. For example, "solar radiation" could mean solar energy, solar exposure or solar irradiance (see Annex 7.B).

Solar energy is the electromagnetic energy emitted by the sun. The solar radiation incident on the top of the terrestrial atmosphere is called extra-terrestrial solar radiation; the 97 per cent of it which is confined to the spectral range 290 to 3 000 nm is called solar (or sometimes shortwave) radiation. Part of the extra-terrestrial solar radiation penetrates through the atmosphere to the Earth's surface, while part of it is scattered and/or absorbed by the gas molecules, aerosol particles, cloud droplets and cloud crystals in the atmosphere.

Terrestrial radiation is the long-wave electromagnetic energy emitted by the Earth's surface and by the gases, aerosols and clouds of the atmosphere; it is also partly absorbed within the atmosphere. For a temperature of 300 K, 99.99 per cent of the power of the terrestrial radiation has a wavelength longer than 3 000 nm and about 99 per cent longer than 5 000 nm. For lower temperatures, the spectrum is shifted to longer wavelengths.
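The quoted spectral fractions for a 300 K blackbody can be checked by numerically integrating the Planck law. This short verification is illustrative and not part of the Guide; the integration limits and step count are arbitrary choices:

```python
import math

def fraction_beyond(wavelength_m, T):
    """Fraction of blackbody emissive power at temperature T (K) radiated
    at wavelengths longer than wavelength_m, by trapezoidal integration
    of the Planck law between 0.1 micrometres and 1 mm."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23

    def planck(lam):
        # Spectral emissive power (Planck's law, per unit wavelength)
        return (2.0 * math.pi * h * c ** 2 / lam ** 5) / (
            math.exp(h * c / (lam * k * T)) - 1.0)

    def integrate(lo, hi, n=20000):
        step = (hi - lo) / n
        inner = sum(planck(lo + i * step) for i in range(1, n))
        return step * (inner + 0.5 * (planck(lo) + planck(hi)))

    total = integrate(1e-7, 1e-3)  # effectively the whole spectrum at 300 K
    return integrate(wavelength_m, 1e-3) / total

# At 300 K, almost all terrestrial radiation lies beyond 3 000 nm,
# and roughly 99 per cent beyond 5 000 nm.
f3000 = fraction_beyond(3e-6, 300.0)
f5000 = fraction_beyond(5e-6, 300.0)
```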



Since the spectral distributions of solar and terrestrial radiation overlap very little, they can very often be treated separately in measurements and computations. In meteorology, the sum of both types is called total radiation.

Light is the radiation visible to the human eye. The spectral range of visible radiation is defined by the spectral luminous efficiency for the standard observer. The lower limit is taken to be between 360 and 400 nm, and the upper limit between 760 and 830 nm (ICI, 1987). Thus, 99 per cent of the visible radiation lies between 400 and 730 nm. The radiation of wavelengths shorter than about 400 nm is called ultraviolet (UV), and longer than about 800 nm, infrared radiation. The UV range is sometimes divided into three sub-ranges (IEC, 1987):

UV-A: 315–400 nm
UV-B: 280–315 nm
UV-C: 100–280 nm

7.1.2 Units and scales

Units

the uncertainty of radiation measurements. With the results of many comparisons of 15 individual absolute pyrheliometers of 10 different types, a WRR has been defined. The old scales can be transferred into the WRR using the following factors:
WRR/Ångström scale 1905 = 1.026
WRR/Smithsonian scale 1913 = 0.977
WRR/IPS 1956 = 1.026
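Applying a scale-transfer factor is a single multiplication of the old-scale reading by the corresponding ratio. The sketch below is illustrative; the dictionary keys are informal labels, not WMO nomenclature:

```python
# WRR-to-old-scale ratios, as quoted above
WRR_FACTOR = {
    "angstrom_1905": 1.026,
    "smithsonian_1913": 0.977,
    "ips_1956": 1.026,
}

def to_wrr(value, old_scale):
    """Convert an irradiance expressed in an old scale to the WRR."""
    return value * WRR_FACTOR[old_scale]

# Example: 1000 W/m2 on the Smithsonian 1913 scale corresponds to
# about 977 W/m2 in the WRR.
v = to_wrr(1000.0, "smithsonian_1913")
```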

The WRR is accepted as representing the physical units of total irradiance within 0.3 per cent (99 per cent uncertainty of the measured value).

Realization of the World Radiometric Reference: World Standard Group

In order to guarantee the long-term stability of the new reference, a group of at least four absolute pyrheliometers of different design is used as the WSG. At the time of incorporation into this group, the instruments are given a reduction factor to correct their readings to the WRR. To qualify for membership of this group, a radiometer must fulfil the following specifications:
(a) Long-term stability must be better than 0.2 per cent of the measured value;
(b) The 95 per cent uncertainty of the series of measurements with the instrument must lie within the limits of the uncertainty of the WRR;
(c) The instrument has to have a different design from the other WSG instruments.
To meet the stability criterion, the instruments of the WSG are intercompared at least once a year, and, for this reason, the WSG is kept at the WRC Davos.

Computation of World Radiometric Reference values

In order to calibrate radiometric instruments, the reading of a WSG instrument, or one that is directly traceable to the WSG, should be used. During international pyrheliometer comparisons (IPCs), the WRR value is calculated from the mean of at least three participating instruments of the WSG. To yield WRR values, the readings of the WSG instruments are always corrected with the individual reduction factor, which is determined at the time of their

The International System of Units (SI) is to be preferred for meteorological radiation variables. A general list of the units is given in Annexes 7.A and 7.B. standardization

The responsibility for the calibration of radiometric instruments rests with the World, Regional and National Radiation Centres, the specifications for which are given in Annex 7.C. Furthermore, the World Radiation Centre (WRC) at Davos is responsible for maintaining the basic reference, the World Standard Group (WSG) of instruments, which is used to establish the World Radiometric Reference (WRR). During international comparisons, organized every five years, the standards of the regional centres are compared with the WSG, and their calibration factors are adjusted to the WRR. They, in turn, are used to transmit the WRR periodically to the national centres, which calibrate their network instruments using their own standards. definition of the World radiometric reference In the past, several radiation references or scales have been used in meteorology, namely the Ångström scale of 1905, the Smithsonian scale of 1913, and the international pyrheliometric scale of 1956 (IPS 1956). The developments in absolute radiometry in recent years have very much reduced



incorporation into the WSG. Since the calculation of the mean value of the WSG, serving as the reference, may be jeopardized by the failure of one or more radiometers belonging to the WSG, the Commission for Instruments and Methods of Observation resolved¹ that at each IPC an ad hoc group should be established comprising the Rapporteur on Meteorological Radiation Instruments (or designate) and at least five members, including the chairperson. The director of the comparison must participate in the group's meetings as an expert. The group should discuss the preliminary results of the comparison, based on criteria defined by the WRC, evaluate the reference and recommend the updating of the calibration factors.

7.1.3 Meteorological requirements

Data to be recorded

and best practice uncertainties are stated for the Global Climate Observing System's Baseline Surface Radiation Network (see WMO, 1998). It may be said generally that good-quality measurements are difficult to achieve in practice, and for routine operations they can be achieved only with modern equipment and redundant measurements. Some systems still in use fall short of best practice, the lesser performance having been acceptable for many applications. However, data of the highest quality are increasingly in demand.

Sampling and recording

Irradiance and radiant exposure are the quantities most commonly recorded and archived, with averages and totals over 1 h. There are also many requirements for data over shorter periods, down to 1 min or even tens of seconds (for some energy applications). Daily totals of radiant exposure are frequently used, but these are expressed as a mean daily irradiance. For climatological purposes, measurements of direct solar radiation shorter than a day are needed at fixed true solar hours, or at fixed air-mass values. Measurements of atmospheric extinction must be made with very short response times to reduce the uncertainties arising from variations in air mass.

For radiation measurements, it is particularly important to record and make available information about the circumstances of the observations. This includes the type and traceability of the instrument, its calibration history, and its location in space and time, spatial exposure and maintenance record.

Uncertainty

The uncertainty requirements can best be satisfied by making observations at a sampling period less than the 1/e time-constant of the instrument, even when the data to be finally recorded are integrated totals for periods of up to 1 h, or more. The data points may be integrated totals or an average flux calculated from individual samples. Digital data systems are greatly to be preferred. Chart recorders and other types of integrators are much less convenient, and the resultant quantities are difficult to maintain at adequate levels of uncertainty.

Times of observation

In a worldwide network of radiation measurements, it is important that the data be homogeneous not only for calibration, but also for the times of observation. Therefore, all radiation measurements should be referred to what is known in some countries as local apparent time, and in others as true solar time. However, standard or universal time is attractive for automatic systems because it is easier to use, but is acceptable only if a reduction of the data to true solar time does not introduce a significant loss of information (that is to say, if the sampling and storage rates are high enough, as indicated in the section above). See Annex 7.D for useful formulae for the conversion from standard to solar time.

7.1.4 Measurement methods

Statements of uncertainty for net radiation are given in Part I, Chapter 1. The required 66 per cent uncertainty for radiant exposure for a day, stated by WMO for international exchange, is 0.4 MJ m–2 for ≤ 8 MJ m–2 and 5 per cent for > 8 MJ m–2. There are no formally agreed statements of required uncertainty for other radiation quantities, but uncertainty is discussed in the sections of this chapter dealing with the various types of measurements,

¹ Recommended by the Commission for Instruments and Methods of Observation at its eleventh session (1994).

Meteorological radiation instruments are classified using various criteria, namely the type of variable to be measured, the field of view, the spectral response, the main use, and the like. The most important types of classifications are listed in Table 7.1. The quality of the instruments is characterized by items (a) to (h) below. The instruments and their operation are described in sections 7.2 to 7.4 below. WMO (1986a) provides a detailed account of instruments and the principles according to which they operate.



Absolute radiometers are self-calibrating, meaning that the irradiance falling on the sensor is replaced by electrical power, which can be accurately measured. The substitution, however, cannot be perfect; the deviation from the ideal case determines the uncertainty of the radiation measurement. Most radiation sensors, however, are not absolute and must be calibrated against an absolute instrument. The uncertainty of the measured value, therefore, depends on the following factors, all of which should be known for a well-characterized instrument: (a) Resolution, namely, the smallest change in the radiation quantity which can be detected by the instrument; (b) Long-term drifts of sensitivity (the ratio of electrical output signal to the irradiance applied),





namely, the maximum possible change over, for example, one year;
(c) Changes in sensitivity owing to changes of environmental variables, such as temperature, humidity, pressure and wind;
(d) Non-linearity of response, namely, changes in sensitivity associated with variations in irradiance;
(e) Deviation of the spectral response from that postulated, namely the blackness of the receiving surface, the effect of the aperture window, and so on;
(f) Deviation of the directional response from that postulated, namely cosine response and azimuth response;
(g) Time-constant of the instrument or the measuring system;
(h) Uncertainties in the auxiliary equipment.

Table 7.1. Meteorological radiation instruments

Instrument classification | Parameter to be measured | Main use | Viewing angle (sr) (see Figure 7.1)
Absolute pyrheliometer | Direct solar radiation | Primary standard | 5 × 10⁻³ (approx. 2.5° half-angle)
Pyrheliometer | Direct solar radiation | (a) Secondary standard for calibrations (b) Network | 5 × 10⁻³ to 2.5 × 10⁻²
Spectral pyrheliometer | Direct solar radiation in broad spectral bands (e.g., with OG 530, RG 630, etc. filters) | Network | 5 × 10⁻³ to 2.5 × 10⁻²
Sunphotometer | Direct solar radiation in narrow spectral bands (e.g., at 500 ± 2.5 nm, 368 ± 2.5 nm) | (a) Standard (b) Network | 1 × 10⁻³ to 1 × 10⁻² (approx. 2.3° full angle)
Pyranometer | (a) Global (solar) radiation (b) Diffuse sky (solar) radiation (c) Reflected solar radiation | (a) Working standard (b) Network | 2π
Spectral pyranometer | Global (solar) radiation in broadband spectral ranges (e.g., with OG 530, RG 630, etc. filters) | Network | 2π
Net pyranometer | Net global (solar) radiation | (a) Working standard (b) Network | 4π
Pyrgeometer | (a) Upward long-wave radiation (downward-looking) (b) Downward long-wave radiation (upward-looking) | Network | 2π
Pyrradiometer | Total radiation | Working standard | 2π
Net pyrradiometer | Net total radiation | Network | 4π


For the definition of these angles, refer to Figure 7.1. During the comparison of instruments with different view-limiting geometries, it should be kept in mind that the aureole radiation influences the readings more significantly for larger slope and aperture angles. The difference can be as great as 2 per cent between the two apertures mentioned above for an air mass of 1.0. In order to enable climatological comparison of direct solar radiation data during different seasons, it may be necessary to reduce all data to a mean sun-Earth distance:

E_N = E/R²


Figure 7.1. View-limiting geometry: the opening half-angle is arctan(R/d); the slope angle is arctan((R – r)/d), where R is the radius of the front aperture, r the radius of the receiving surface, and d the distance between them

Instruments should be selected according to their end-use and the required uncertainty of the derived quantity. Certain instruments perform better for particular climates, irradiances and solar positions.

where E_N is the solar radiation, normalized to the mean sun-Earth distance, which is defined to be one astronomical unit (AU) (see Annex 7.D); E is the measured direct solar radiation; and R is the sun-Earth distance in astronomical units.

7.2.1 Direct solar radiation


MEASUREMENT OF DIRECT SOLAR RADIATION

Direct solar radiation is measured by means of pyrheliometers, the receiving surfaces of which are arranged to be normal to the solar direction. By means of apertures, only the radiation from the sun and a narrow annulus of sky is measured; the latter radiation component is sometimes referred to as circumsolar radiation or aureole radiation. In modern instruments, this extends out to a half-angle of about 2.5° on some models, and to about 5° from the sun's centre (corresponding, respectively, to 5 × 10⁻³ and 5 × 10⁻² sr). The construction of the pyrheliometer mounting must allow for the rapid and smooth adjustment of the azimuth and elevation angles. A sighting device is usually included in which a small spot of light or solar image falls upon a mark in the centre of the target when the receiving surface is exactly normal to the direct solar beam. For continuous recording, it is advisable to use automatic sun-following equipment (a sun tracker).

As to the view-limiting geometry, it is recommended that the opening half-angle be 2.5° (5 × 10⁻³ sr) and the slope angle 1° for all new designs of direct solar radiation instruments.
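The view-limiting geometry of Figure 7.1 can be computed directly from the instrument dimensions; the function name and the example dimensions below are illustrative, not values from the Guide:

```python
import math

def view_limiting_angles(R_aperture, r_receiver, d):
    """Opening half-angle arctan(R/d) and slope angle arctan((R - r)/d),
    both returned in degrees, for front-aperture radius R, receiving-surface
    radius r and separation d (any consistent length unit)."""
    half = math.degrees(math.atan(R_aperture / d))
    slope = math.degrees(math.atan((R_aperture - r_receiver) / d))
    return half, slope

# Example dimensions chosen to realize approximately the recommended
# geometry (half-angle 2.5 degrees, slope angle 1 degree):
# R = 10 mm, r = 6 mm, d = 229 mm.
half, slope = view_limiting_angles(10.0, 6.0, 229.0)
```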

Some of the characteristics of operational pyrheliometers (other than primary standards) are given in Table 7.2 (adapted from ISO, 1990a), with indicative estimates of the uncertainties of measurements made with them if they are used with appropriate expertise and quality control. Cheaper pyrheliometers are available (see ISO, 1990a), but without effort to characterize their response the resulting uncertainties reduce the quality of the data, and, given that a sun tracker is required, in most cases the incremental cost for a good pyrheliometer is minor. The estimated uncertainties are based on the following assumptions: (a) Instruments are well-maintained, correctly aligned and clean; (b) 1 min and 1 h figures are for clear-sky irradiances at solar noon; (c) Daily exposure values are for clear days at mid-latitudes. Primary standard pyrheliometers

An absolute pyrheliometer can define the scale of total irradiance without resorting to reference sources or radiators. The limits of uncertainty of the definition must be known; the quality of this knowledge determines the reliability of an absolute pyrheliometer. Only specialized laboratories should operate and maintain primary standards. Details of their construction and operation are given in WMO (1986a). However, for the sake of completeness, a brief account is given here.



All absolute pyrheliometers of modern design use cavities as receivers and electrically calibrated, differential heat-flux meters as sensors. At present, this combination has proved to yield the lowest uncertainty possible for the radiation levels encountered in solar radiation measurements (namely, up to 1.5 kW m–2). Normally, the electrical calibration is performed by replacing the radiative power with electrical power, which is dissipated in a heater winding as close as possible to where the absorption of solar radiation takes place. The uncertainties of such an instrument's measurements are determined by a close examination of the physical properties of the instrument and by performing laboratory measurements and/or model calculations to determine the deviations from ideal behaviour, that is, how perfectly the electrical substitution can be achieved. This procedure is called the characterization of the instrument. The following specifications should be met by an absolute pyrheliometer (an individual instrument, not a type) to be designated and used as a primary standard:
(a) At least one instrument out of a series of manufactured radiometers has to be fully characterized. The 95 per cent uncertainty of this characterization should be less than 2 W m–2 under the clear-sky conditions suitable for calibration (see ISO, 1990a). The 95 per cent uncertainty (for all components of the uncertainty) for a series of measurements should not exceed 4 W m–2 for any measured value;
(b) Each individual instrument of the series must be compared with the one which has been characterized, and no individual instrument should deviate from this instrument by more than the characterization uncertainty as determined in (a) above;
(c) A detailed description of the results of such comparisons and of the characterization of the instrument should be made available upon request;
(d) Traceability to the WRR by comparison with the WSG, or some carefully established reference with traceability to the WSG, is needed in order to prove that the design is within the state of the art. The latter is fulfilled if the 95 per cent uncertainty for a series of measurements traceable to the WRR is less than 1 W m–2.
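The electrical-substitution principle described above can be illustrated with a minimal numerical sketch. The aperture area, cavity absorptance and heater values below are hypothetical, and a real characterization involves many further correction terms:

```python
# Electrical substitution in an absolute cavity pyrheliometer (illustrative sketch).
# Shuttered phase: electrical power P = U * I heats the cavity to a reference level.
# Open phase: solar radiation produces the same heat-flux signal, so the measured
# irradiance is the substituted electrical power per unit effective aperture:
#   E = P_elec / (A * alpha)

def substituted_irradiance(u_volts, i_amps, aperture_m2, absorptance):
    """Irradiance equivalent to the dissipated electrical power (W m-2)."""
    p_elec = u_volts * i_amps            # electrical power, W
    return p_elec / (aperture_m2 * absorptance)

# Hypothetical values: 50 mm^2 precision aperture, cavity absorptance 0.999
e = substituted_irradiance(u_volts=4.0, i_amps=0.0125,
                           aperture_m2=5.0e-5, absorptance=0.999)
print(round(e, 1))  # ≈ 1001 W m-2 for these made-up numbers
```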

Table 7.2. Characteristics of operational pyrheliometers

Characteristic                                                     High qualitya   Good qualityb
Response time (95 per cent response)                               < 15 s          < 30 s
Zero offset (response to 5 K h–1 change in ambient temperature)    2 W m–2         4 W m–2
Resolution (smallest detectable change in W m–2)                   0.5             1
Stability (percentage of full scale, change/year)                  0.1             0.5
Temperature response (percentage maximum error due to change
  of ambient temperature within an interval of 50 K)               1               2
Non-linearity (percentage deviation from the responsivity at
  500 W m–2 due to the change of irradiance within
  100 W m–2 to 1 100 W m–2)                                        0.2             0.5
Spectral sensitivity (percentage deviation of the product of
  spectral absorptance and spectral transmittance from the
  corresponding mean within the range 300 to 3 000 nm)             0.5             1.0
Tilt response (percentage deviation from the responsivity at
  0° tilt (horizontal) due to change in tilt from 0° to 90°
  at 1 000 W m–2)                                                  0.2             0.5
Achievable uncertainty, 95 per cent confidence level (see above):
  1 min totals: per cent / kJ m–2                                  0.9 / 0.56      1.8 / 1
  1 h totals: per cent / kJ m–2                                    0.7 / 21        1.5 / 54
  Daily totals: per cent / kJ m–2                                  0.5 / 200       1.0 / 400

a Near state of the art; suitable for use as a working standard; maintainable only at stations with special facilities and staff.
b Acceptable for network operations.


Secondary standard pyrheliometers

An absolute pyrheliometer which does not meet the specification for a primary standard or which



is not fully characterized can be used as a secondary standard if it is calibrated by comparison with the WSG with a 95 per cent uncertainty for a series of measurements less than 1 W m–2. Other types of instruments with measurement uncertainties similar or approaching those for primary standards may be used as secondary standards. The Ångström compensation pyrheliometer has been, and still is, used as a convenient secondary standard instrument for the calibration of pyranometers and other pyrheliometers. It was designed by K. Ångström as an absolute instrument, and the Ångström scale of 1905 was based on it; now it is used as a secondary standard and must be calibrated against a standard instrument. The sensor consists of two platinized manganin strips, each of which is about 18 mm long, 2 mm wide and about 0.02 mm thick. They are blackened with a coating of candle soot or with an optical matt black paint. A thermo-junction of copperconstantan is attached to the back of each strip so that the temperature difference between the strips can be indicated by a sensitive galvanometer or an electrical micro-voltmeter. The dimensions of the strip and front diaphragm yield opening half-angles and slope angles as listed in Table 7.3.
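The irradiance reduction used with this instrument (equation 7.2, given below) can be sketched as follows. The calibration constant and current readings are hypothetical, and the pairing of successive readings shown here is one plausible reduction, since operating instructions differ between instruments:

```python
# Sketch of the irradiance reduction for an Ångström compensation pyrheliometer
# (equation 7.2): E = K * iL * iR, with K the calibration constant (W m-2 A-2)
# from comparison with a primary standard. All values are hypothetical.

K = 5.50e6  # hypothetical calibration constant, W m-2 A-2

# Alternating left/right strip compensation currents (A); the first reading is
# excluded, as the text describes, and successive iL-iR pairs are used.
readings = [0.01310, 0.01302, 0.01305, 0.01299, 0.01304]
pairs = list(zip(readings[1:], readings[2:]))  # overlapping successive pairs

irradiances = [K * i_l * i_r for i_l, i_r in pairs]
E = sum(irradiances) / len(irradiances)  # mean direct irradiance, W m-2
```

Each derived value corresponds to the geometric mean of the solar irradiances at the two reading times, which is why only quasi-stable conditions give meaningful results.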

Table 7.3. View-limiting geometry of Ångström pyrheliometers

Angle                 Vertical        Horizontal
Opening half-angle    5° – 8°         ~ 2°
Slope angle           0.7° – 1.0°     1.2° – 1.6°

The measurement set consists of three or more cycles, during which the left- or right-hand strip is alternately shaded from or exposed to the direct solar beam. The shaded strip is heated by an electric current, which is adjusted in such a way that the thermal electromotive force of the thermocouple and, hence, the temperature difference between the two strips approach zero. Before and after a measuring sequence, the zero is checked either by shading or by exposing both strips simultaneously. Depending on which of these methods is used and on the operating instructions of the manufacturer, the irradiance calculation differs slightly. The method adopted for the IPCs uses the following formula:

E = K·iL·iR (7.2)

where E is the irradiance in W m–2; K is the calibration constant determined by comparison with a primary standard (W m–2 A–2); and iL and iR are the currents in amperes measured with the left- or right-hand strip exposed to the direct solar beam, respectively.

Before and after each series of measurements, the zero of the system is adjusted electrically by using either of the foregoing methods, the zeros being called “cold” (shaded) or “hot” (exposed), as appropriate. Normally, the first reading, say iR, is excluded, and only the following iL–iR pairs are used to calculate the irradiance. When comparing such a pyrheliometer with other instruments, the irradiance derived from the currents corresponds to the geometric mean of the solar irradiances at the times of the readings of iL and iR.

The auxiliary instrumentation consists of a power supply, a current-regulating device, a nullmeter and a current monitor. The sensitivity of the nullmeter should be about 0.05 · 10–6 A per scale division for a low input impedance (< 10 Ω), or about 0.5 µV for a high input impedance (> 10 kΩ). Under these conditions, a temperature difference of about 0.05 K between the junctions of the copper–constantan thermocouple causes a deflection of one scale division, which indicates that one of the strips is receiving an excess heat supply amounting to about 0.3 per cent. The uncertainty of the derived direct solar irradiance is highly dependent on the quality of the current-measuring device, whether a moving-coil milliammeter or a digital multimeter which measures the voltage across a standard resistor, and on the operator's skill. The fractional error in the output value of irradiance is twice as large as the fractional error in the reading of the electric current. The heating current is directed to either strip by means of a switch and is normally controlled by separate rheostats in each circuit. The switch can also cut the current off so that the zero can be determined. The resolution of the rheostats should be sufficient to allow the nullmeter to be adjusted to within one half of a scale division.

Field and network pyrheliometers

These pyrheliometers generally make use of a thermopile as the detector. They have view-limiting geometry similar to that of standard pyrheliometers. Older models tend to have larger fields of view and slope angles. These design features were intended primarily to reduce the need for accurate sun tracking. However, the larger the slope (and opening) angle, the larger the amount of aureole radiation sensed by the detector; this amount may reach several per cent for high optical depths and large limiting angles. With new designs of sun tracker, including computer-assisted trackers in both passive and active (sun-seeking) configurations, larger slope angles are no longer necessary. However, a slope angle of 1° is still required to ensure that the energy from the direct solar beam is distributed evenly on the detector, and to allow for minor sun-tracker pointing errors of the order of 0.1°. The intended use of the pyrheliometer may dictate the selection of a particular type of instrument. Some manually oriented models, such as the Linke–Fuessner actinometer, are used mainly for spot measurements, while others, such as the EKO, Eppley, Kipp & Zonen and Middleton types, are designed specifically for the long-term monitoring of direct irradiance. Before deploying an instrument, the user must consider the significant differences found among operational pyrheliometers, as follows:
(a) The field of view of the instrument;
(b) Whether the instrument measures both the long-wave and short-wave portions of the spectrum (namely, whether the aperture is open or covered with a glass or quartz window);
(c) The temperature compensation or correction methods;
(d) The magnitude and variation of the zero irradiance signal;
(e) Whether the instrument can be installed on an automated tracking system for long-term monitoring;
(f) Whether, for the calibration of other operational pyrheliometers, differences (a) to (c) above are the same, and whether the pyrheliometer is of the quality required to calibrate other network instruments.

Calibration of pyrheliometers

All pyrheliometers, other than absolute pyrheliometers, must be calibrated by comparison, using the sun as the source, with a pyrheliometer that has traceability to the WSG and a likely uncertainty of calibration equal to or better than that of the pyrheliometer being calibrated. As all solar radiation data must be referred to the WRR, absolute pyrheliometers also use a factor determined by comparison with the WSG and not their individually determined one. After such a comparison (for example, during the periodically organized IPCs), such a pyrheliometer can be used as a standard to calibrate, again by comparison with the sun as a source, secondary standards and field pyrheliometers. Secondary standards can also be used to calibrate field instruments, but with increased uncertainty. The quality of sun-source calibrations may depend on the aureole influence if instruments with different view-limiting geometries are compared. Also, the quality of the results will depend on the variability of the solar irradiance if the time constants and zero irradiance signals of the pyrheliometers are significantly different. Lastly, environmental conditions, such as temperature, pressure and net long-wave irradiance, can influence the results. If a very high quality of calibration is required, only data taken during very clear and stable days should be used. The procedures for the calibration of field pyrheliometers are given in an ISO standard (ISO, 1990b). From recent experience at IPCs, a period of five years between traceable calibrations to the WSG should suffice for primary and secondary standards. Field pyrheliometers should be calibrated every one to two years; the more prolonged the use and the more rigorous the conditions, the more often they should be calibrated.

7.2.2 Spectral direct solar irradiance and measurement of optical depth

Spectral measurements of the direct solar irradiance are used in meteorology mainly to determine optical depth (see Annex 7.B) in the atmosphere. They are also used for medical, biological, agricultural and solar-energy applications. The aerosol optical depth represents the total extinction, namely, scattering and absorption by aerosols in the size range 100 to 10 000 nm radius, for the column of the atmosphere equivalent to unit optical air mass. Particulate matter, however, is not the only influencing factor for optical depth. Other atmospheric constituents, such as air molecules (Rayleigh scatterers), ozone, water vapour, nitrogen dioxide and carbon dioxide, also contribute to the total extinction of the beam. Most optical depth measurements are taken to better understand the loading of the atmosphere by aerosols. However, optical depth measurements of other constituents, such as water vapour, ozone and nitrogen dioxide, can be obtained if appropriate wavebands are selected.



Table 7.4. Specification of idealized Schott glass filters

Schott type    Typical 50% cut-off         Mean transmission    Approximate temperature coefficient
               wavelength (nm)             (3 mm thickness)     of short-wave cut-off (nm K–1)
               Short         Long
OG 530         526 ± 2       2 900         0.92                 0.12
RG 630         630 ± 2       2 900         0.92                 0.17
RG 700         702 ± 2       2 900         0.92                 0.18

The temperature coefficients for Schott filters are as given by the manufacturer. The short-wave cut-offs are adjusted to the standard filters used for calibration. Checks on the short and long wavelength cut-offs are required to reduce uncertainties in derived quantities.
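Because the cut-off wavelengths depend on temperature, a first-order correction using the coefficients of Table 7.4 can be sketched as below. The 20 °C reference temperature is an assumption for illustration, and the linear model is only an approximation of the real filter behaviour:

```python
# Linear temperature correction of a Schott filter's short-wave 50% cut-off,
# using the coefficients of Table 7.4. The reference temperature (20 degC) is
# an assumed value for this sketch.

CUTOFF_NM = {"OG 530": 526.0, "RG 630": 630.0, "RG 700": 702.0}   # at T_REF_C
COEFF_NM_PER_K = {"OG 530": 0.12, "RG 630": 0.17, "RG 700": 0.18}
T_REF_C = 20.0

def cutoff_at(filter_name, temp_c):
    """Approximate 50% cut-off wavelength (nm) at the given temperature."""
    return CUTOFF_NM[filter_name] + COEFF_NM_PER_K[filter_name] * (temp_c - T_REF_C)

# At -10 degC the RG 630 cut-off shifts down by about 5 nm relative to 20 degC.
print(cutoff_at("RG 630", -10.0))
```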

The aerosol optical depth δa(λ) at a specific wavelength λ is based on the Bouguer–Lambert law (or Beer's law for monochromatic radiation) and can be determined by:

δa(λ) = [ln(E0(λ)/E(λ)) − Σi δi(λ)·mi] / ma   (7.3)

where δa(λ) is the aerosol optical depth at a waveband centred at wavelength λ; ma is the air mass for aerosols (unity for the vertical beam); δi(λ) is the optical depth for species i, other than aerosols, at a waveband centred at wavelength λ; mi is the air mass for extinction species i, other than aerosols; E0(λ) is the spectral solar irradiance outside the atmosphere at wavelength λ; and E(λ) is the spectral solar irradiance at the surface at wavelength λ.

Optical thickness is the total extinction along the path through the atmosphere, that is, the air mass multiplied by the optical depth, m·δ. Turbidity τ is the same quantity as optical depth, but using base 10 rather than base e in Beer's law, as follows:

τ(λ)·m = log10(E0(λ)/E(λ))   (7.4)

accordingly:

δ(λ) = 2.303·τ(λ)   (7.5)

In meteorology, two types of measurements are performed, namely broadband pyrheliometry and narrowband sun radiometry (sometimes called sun photometry). Since the aerosol optical depth is defined only for monochromatic radiation or for a very narrow wavelength range, it can be applied directly to the evaluation of sun photometer data, but not to broadband pyrheliometer data. Aerosol optical depth observations should be made only when no visible clouds are within 10° of the sun. When sky conditions permit, as many observations as possible should be made in a day, and a maximum range of air masses should be covered, preferably in intervals of Δm of less than 0.2. Only instantaneous values can be used for the determination of aerosol optical depth; instantaneous means that the measurement process takes less than 1 s.

Broadband pyrheliometry

Broadband pyrheliometry makes use of a carefully calibrated pyrheliometer with broadband glass filters in front of it to select the spectral bands of interest. The specifications of the classical filters used are summarized in Table 7.4. The cut-off wavelengths depend on temperature, and some correction of the measured data may be needed. The filters must be properly cleaned before use. In operational applications, they should be checked daily and cleaned if necessary. The derivation of aerosol optical depth from broadband data is very complex, and there is no standard procedure. Use may be made both of tables which are calculated from typical filter data and of some assumptions on the state of the atmosphere. The reliability of the results depends on how well the filter used corresponds to the filter in the calculations and on how good the atmospheric assumptions are. Details of the evaluation and the corresponding tables can be found in WMO (1978). A discussion of the techniques is given by Kuhn (1972) and Lal (1972).
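The optical depth relations above (the Bouguer–Lambert law and the base-10 turbidity) can be sketched numerically. The irradiances, air masses and non-aerosol optical depths below are hypothetical illustration values:

```python
import math

# Aerosol optical depth from the Bouguer-Lambert law:
#   delta_a = ( ln(E0/E) - sum_i delta_i * m_i ) / m_a
# and the corresponding base-10 turbidity tau = delta / ln(10).
# All numbers are hypothetical.

def aerosol_optical_depth(e0, e, m_a, other_terms):
    """other_terms: list of (delta_i, m_i) for non-aerosol extinction species."""
    total = math.log(e0 / e)                      # total optical thickness m*delta
    non_aerosol = sum(d * m for d, m in other_terms)
    return (total - non_aerosol) / m_a

delta_a = aerosol_optical_depth(
    e0=1.90, e=1.20,             # extraterrestrial vs surface spectral irradiance (arbitrary units)
    m_a=1.5,                     # aerosol air mass
    other_terms=[(0.144, 1.5),   # Rayleigh optical depth near 500 nm (typical value) and its air mass
                 (0.015, 1.5)],  # ozone contribution (hypothetical)
)
tau_a = delta_a / math.log(10)   # base-10 turbidity of the same aerosol extinction
```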



Sun radiometry (photometry) and aerosol optical depth

A narrowband sun radiometer (or photometer) usually consists of a narrowband interference filter and a photovoltaic detector, usually a silicon photodiode. The full field of view of the instrument is 2.5° with a slope angle of 1° (see Figure 7.1). Although the derivation of optical depth using these devices is conceptually simple, many early observations from these devices have not produced useful results. The main problems have been the shifting of the instrument response because of changing filter transmissions and detector characteristics over short periods, and poor operator training for manually operated devices. Accurate results can be obtained with careful operating procedures and frequent checks of instrument stability. The instrument should be calibrated frequently, preferably using in situ methods or using reference devices maintained by a radiation centre with expertise in optical depth determination. Detailed advice on narrowband sun radiometers and network operations is given in WMO (1993a). To calculate aerosol optical depth from narrowband sun radiometer data with small uncertainty, the station location, pressure, temperature, column ozone amount, and an accurate time of measurement must be known (WMO, 2005). The most accurate calculation of the total and aerosol optical depth from spectral data at wavelength λ (the centre wavelength of its filter) makes use of the following:
δa(λ) = [ln(S0(λ)/(S(λ)·R²)) − (P/P0)·δR(λ)·mR − δO3(λ)·mO3 − ...] / ma   (7.6)

where S(λ) is the instrument reading (for example, in volts or counts); S0(λ) is the hypothetical reading corresponding to the top-of-the-atmosphere spectral solar irradiance at 1 AU (this can be established by extrapolation to air-mass zero by various Langley methods, or from the radiation centre which calibrated the instrument); R is the sun–Earth distance (in astronomical units; see Annex 7.D); P is the atmospheric pressure; P0 is the standard atmospheric pressure; and the second, third and subsequent terms in the numerator are the contributions of Rayleigh, ozone and other extinctions. This can be simplified for less accurate work by assuming that the relative air masses for each of the components are equal.

For all wavelengths, Rayleigh extinction must be considered. Ozone optical depth must be considered at wavelengths of less than 340 nm and throughout the Chappuis band. Nitrogen dioxide optical depths should be considered for all wavelengths of less than 650 nm, especially if measurements are taken in areas that have urban influences. Although there are weak water vapour absorption bands even within the 500 nm spectral region, water vapour absorption can be neglected for wavelengths of less than 650 nm. Further references on wavelength selection can be found in WMO (1986b). A simple algorithm to calculate Rayleigh-scattering optical depths is a combination of the procedure outlined by Fröhlich and Shaw (1980) and the Young (1981) correction. For more precise calculations the algorithm by Bodhaine and others (1999) is also available. Both ozone and nitrogen dioxide follow Beer's law of absorption. The WMO World Ozone Data Centre recommends the ozone absorption coefficients of Bass and Paur (1985) in the UV region and Vigroux (1953) in the visible region. Nitrogen dioxide absorption coefficients can be obtained from Schneider and others (1987). For the reduction of wavelengths influenced by water vapour, the work of Frouin, Deschamps and Lecomte (1990) may be considered. Because of the complexity of water vapour absorption, bands that are influenced significantly should be avoided unless deriving the water vapour amount by spectral solar radiometry.

7.2.3 Exposure

For continuous recording and reduced uncertainties, an accurate sun tracker that is not influenced by environmental conditions is essential. Sun tracking to within 0.2° is required, and the instruments should be inspected at least once a day, and more frequently if weather conditions so demand (with protection against adverse conditions). The principal exposure requirement for a recording instrument is the same as that for a pyrheliometer, namely, freedom from obstructions to the solar beam at all times and seasons of the year. Furthermore, the site should be chosen so that the incidence of fog, smoke and airborne pollution is as typical as possible of the surrounding area. For continuous recording, protection is needed against rain, snow, and so forth. The optical window, for instance, must be protected as it is usually made of quartz and is located in front of the instrument. Care must be taken to ensure that such a window is kept clean and that condensation does not appear on the inside. For the successful derivation of aerosol optical depth, such attention is required, as a 1 per cent change in transmission at unit air mass translates into a 0.010 change in optical depth. For example, for transmission measurements at 500 nm at clean sea-level sites, a 0.010 change represents between 20 and 50 per cent of the mean winter aerosol optical depth.
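The narrowband sun-radiometer evaluation described above can be sketched in two steps: a Langley regression of ln(signal) against air mass to estimate the extraterrestrial constant S0, followed by the aerosol optical depth relation with the pressure-scaled Rayleigh term. All readings, optical depths and site values below are hypothetical:

```python
import math

# (1) Langley method: on a stable, clear half-day, ln S = ln S0 - delta_total * m,
# so a least-squares fit extrapolated to air mass m = 0 yields ln S0.
def langley_ln_s0(air_masses, signals):
    """Least-squares intercept of ln(signal) vs air mass."""
    xs, ys = air_masses, [math.log(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx  # ln S0 (the total optical depth is -slope)

# Synthetic noise-free Langley plot with delta_total = 0.25 and ln S0 = 1.0
m_values = [2.0, 3.0, 4.0, 5.0, 6.0]
s_values = [math.exp(1.0 - 0.25 * m) for m in m_values]
ln_s0 = langley_ln_s0(m_values, s_values)  # recovers 1.0 for noise-free data

# (2) Aerosol optical depth, subtracting Rayleigh and ozone contributions:
#   delta_a = [ ln(S0/(S*R^2)) - (P/P0)*delta_R*m_R - delta_O3*m_O3 ] / m_a
def aerosol_od(signal, ln_s0, r_au, p_hpa, m_r, delta_r, m_o3, delta_o3, m_a):
    total = ln_s0 - math.log(signal * r_au ** 2)
    return (total - (p_hpa / 1013.25) * delta_r * m_r - delta_o3 * m_o3) / m_a
```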

7.3 Measurement of global and diffuse sky radiation

The solar radiation received from a solid angle of 2π sr on a horizontal surface is referred to as global radiation. This includes radiation received directly from the solid angle of the sun's disc, as well as diffuse sky radiation that has been scattered in traversing the atmosphere. The instrument needed for measuring solar radiation from a solid angle of 2π sr onto a plane surface, over a spectral range from 300 to 3 000 nm, is the pyranometer. The pyranometer is sometimes used to measure solar radiation on inclined surfaces, and in the inverted position to measure reflected global radiation. When measuring the diffuse sky component of solar radiation, the direct solar component is screened from the pyranometer by a shading device (see below). Pyranometers normally use thermo-electric, photoelectric, pyro-electric or bimetallic elements as sensors. Since pyranometers are exposed continually in all weather conditions, they must be robust in design and resist the corrosive effects of humid air (especially near the sea). The receiver should be hermetically sealed inside its casing, or the casing must be easy to take off so that any condensed moisture can be removed. Where the receiver is not permanently sealed, a desiccator is usually fitted in the base of the instrument. The properties of pyranometers which are of concern when evaluating the uncertainty and quality of radiation measurement are: sensitivity, stability, response time, cosine response, azimuth response, linearity, temperature response, thermal offset, zero irradiance signal and spectral response. Further advice on the use of pyranometers is given in ISO (1990c) and WMO (1998). Table 7.5 (adapted from ISO, 1990a) describes the characteristics of pyranometers of various levels of performance, with the uncertainties that may be achieved with appropriate facilities, well-trained staff and good quality control under the sky conditions outlined in 7.2.1.

7.3.1 Calibration of pyranometers

The calibration of a pyranometer consists of the determination of one or more calibration factors and their dependence on environmental conditions, such as:
(a) Temperature;
(b) Irradiance level;
(c) Spectral distribution of irradiance;
(d) Temporal variation;
(e) Angular distribution of irradiance;
(f) Inclination of the instrument;
(g) The net long-wave irradiance for thermal offset correction;
(h) The calibration methods.
Normally, it is necessary to specify the test environmental conditions, which can be quite different for different applications. The method and conditions must also be given in some detail in the calibration certificate. There are a variety of methods for calibrating pyranometers using the sun or laboratory sources. These include the following:
(a) By comparison with a standard pyrheliometer for the direct solar irradiance and a calibrated shaded pyranometer for the diffuse sky irradiance;
(b) By comparison with a standard pyrheliometer using the sun as a source, with a removable shading disc for the pyranometer;
(c) With a standard pyrheliometer using the sun as a source, and two pyranometers to be calibrated alternately measuring global and diffuse irradiance;
(d) By comparison with a standard pyranometer using the sun as a source, under other natural conditions of exposure (for example, a uniform cloudy sky and direct solar irradiance not statistically different from zero);
(e) In the laboratory, on an optical bench with an artificial source, either at normal incidence or at some specified azimuth and elevation, by comparison with a similar pyranometer previously calibrated outdoors;
(f) In the laboratory, with the aid of an integrating chamber simulating diffuse sky radiation, by comparison with a similar type of pyranometer previously calibrated outdoors.
These are not the only methods; (a) to (d) are those most commonly used. However, it is essential that,



Table 7.5. Characteristics of operational pyranometers

Characteristic                                                    High qualitya   Good qualityb   Moderate qualityc
Response time (95 per cent response)                              < 15 s          < 30 s          < 60 s
Zero offset:
 (a) response to 200 W m–2 net thermal radiation (ventilated)     7 W m–2         15 W m–2        30 W m–2
 (b) response to 5 K h–1 change in ambient temperature            2 W m–2         4 W m–2         8 W m–2
Resolution (smallest detectable change)                           1 W m–2         5 W m–2         10 W m–2
Stability (change per year, percentage of full scale)             0.8             1.5             3.0
Directional response for beam radiation (the range of errors
 caused by assuming that the normal incidence responsivity is
 valid for all directions when measuring, from any direction,
 a beam radiation whose normal incidence irradiance is
 1 000 W m–2)                                                     10 W m–2        20 W m–2        30 W m–2
Temperature response (percentage maximum error due to any
 change of ambient temperature within an interval of 50 K)        2               4               8
Non-linearity (percentage deviation from the responsivity at
 500 W m–2 due to any change of irradiance within the range
 100 to 1 000 W m–2)                                              0.5             1               3
Spectral sensitivity (percentage deviation of the product of
 spectral absorptance and spectral transmittance from the
 corresponding mean within the range 300 to 3 000 nm)             2               5               10
Tilt response (percentage deviation from the responsivity at
 0° tilt (horizontal) due to change in tilt from 0° to 90°
 at 1 000 W m–2)                                                  0.5             2               5
Achievable uncertainty (95 per cent confidence level):
 Hourly totals                                                    3%              8%              20%
 Daily totals                                                     2%              5%              10%

a Near state of the art; suitable for use as a working standard; maintainable only at stations with special facilities and staff.
b Acceptable for network operations.
c Suitable for low-cost networks where moderate to low performance is acceptable.

except for (b), either the zero irradiance signals for all instruments are known or pairs of identical model pyranometers in identical configurations are used. Ignoring these offsets and differences can bias the results significantly. Method (c) is considered to give very good results without the need for a calibrated pyranometer. It is difficult to determine a specific number of measurements on which to base the calculation of the pyranometer calibration factor. However, the standard error of the mean can be calculated and should be less than the desired limit when sufficient readings have been taken under the desired conditions. The principal variations (apart from

fluctuations due to atmospheric conditions and observing limitations) in the derived calibration factor are due to the following: (a) Departures from the cosine law response, particularly at solar elevations of less than 10° (for this reason it is better to restrict calibration work to occasions when the solar elevation exceeds 30°); (b) The ambient temperature; (c) Imperfect levelling of the receiver surface; (d) Non-linearity of instrument response; (e) The net long-wave irradiance between the detector and the sky. The pyranometer should be calibrated only in the position of use.
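The standard-error criterion mentioned above, that readings be accumulated until the standard error of the mean of the derived calibration factors falls below a desired limit, can be sketched as follows. The factors and the limit are hypothetical:

```python
import statistics

# Check whether enough calibration readings have been taken: the standard error
# of the mean (SEM) of the derived factors should be below a chosen limit.
# The factors (W m-2 per uV) and the limit below are hypothetical.

def sem(values):
    """Standard error of the mean: sample stdev / sqrt(n)."""
    return statistics.stdev(values) / len(values) ** 0.5

factors = [0.1012, 0.1019, 0.1008, 0.1015, 0.1011, 0.1017]
limit = 0.0005
enough_readings = sem(factors) < limit  # True: more readings not strictly needed
```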



When using the sun as the source, the apparent solar elevation should be measured or computed (to the nearest 0.01°) for this period from solar time (see Annex 7.D). The mean instrument or ambient temperature should also be noted.

By reference to a standard pyrheliometer and a shaded reference pyranometer

In this method, described in ISO (1993), the pyranometer's response to global irradiance is calibrated against the sum of separate measurements of the direct and diffuse components. Periods with clear skies and steady radiation (as judged from the record) should be selected. The vertical component of the direct solar irradiance is determined from the pyrheliometer output, and the diffuse sky irradiance is measured with a second pyranometer that is continuously shaded from the sun. The direct component is eliminated from the diffuse sky pyranometer by shading the whole outer dome of the instrument with a disc of sufficient size mounted on a slender rod and held some distance away. The diameter of the disc and its distance from the receiver surface should be chosen in such a way that the screened angle approximately equals the aperture angles of the pyrheliometer. Rather than using the radius of the pyranometer sensor, the radius of the outer dome should be used to calculate the slope angle of the shading disc and pyranometer combination. This shading arrangement occludes a close approximation of both the direct solar beam and the circumsolar sky irradiance as sensed by the pyrheliometer. On a clear day, the diffuse sky irradiance is less than 15 per cent of the global irradiance; hence, the calibration factor of the reference pyranometer does not need to be known very accurately. However, care must be taken to ensure that the zero irradiance signals from both pyranometers are accounted for, given that for some pyranometers under clear-sky conditions the zero irradiance signal can be as high as 15 per cent of the diffuse sky irradiance. The calibration factor is then calculated according to:

E · sin h + Vs·ks = V·k   (7.7)

or:

k = (E · sin h + Vs·ks)/V   (7.8)
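The component-sum calculation of equation 7.8 can be sketched numerically; all readings and factors below are hypothetical:

```python
import math

# Calibration factor of the test pyranometer by the component-sum method
# (k = (E*sin(h) + Vs*ks) / V), using simultaneous readings from a
# pyrheliometer and a shaded reference pyranometer. Values are hypothetical.

def pyranometer_k(e_direct, h_deg, v_global_uV, v_diffuse_uV, k_shaded):
    """k in W m-2 per uV of the pyranometer under calibration."""
    e_global = e_direct * math.sin(math.radians(h_deg)) + v_diffuse_uV * k_shaded
    return e_global / v_global_uV

k = pyranometer_k(e_direct=900.0,      # pyrheliometer direct-beam irradiance, W m-2
                  h_deg=50.0,          # apparent solar elevation, degrees
                  v_global_uV=7500.0,  # output of the pyranometer being calibrated
                  v_diffuse_uV=900.0,  # output of the shaded reference pyranometer
                  k_shaded=0.1)        # its calibration factor, W m-2 per uV
```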

where E is the direct solar irradiance measured with the pyrheliometer (W m–2); V is the global irradiance output of the pyranometer to be calibrated (µV); Vs is the diffuse sky irradiance output of the shaded reference pyranometer (µV); h is the apparent solar elevation at the time of reading; k is the calibration factor of the pyranometer to be calibrated (W m–2 µV–1); and ks is the calibration factor of the shaded reference pyranometer (W m–2 µV–1). All the signal measurements are taken simultaneously. The direct, diffuse and global components will change during the comparison, and care must be taken with appropriate sampling and averaging to ensure that representative values are used.

By reference to a standard pyrheliometer

This method, described in ISO (1993a), is similar to the method of the preceding paragraph, except that the diffuse sky irradiance signal is measured by the same pyranometer. The direct component is temporarily eliminated from the pyranometer by shading the whole outer dome of the instrument, as described above. The period required for occulting depends on the steadiness of the radiation flux and the response time of the pyranometer, including the time interval needed to bring the temperature and long-wave emission of the glass dome to equilibrium; 10 times the thermopile's 1/e time constant should generally be sufficient. The difference between the representative shaded and unshaded outputs from the pyranometer is due to the vertical component of the direct solar irradiance E measured by the pyrheliometer. Thus:

E · sin h = (Vun – Vs) · k   (7.9)

or:

k = (E · sin h)/(Vun – Vs)   (7.10)

where E is the representative direct solar irradiance at normal incidence measured by the pyrheliometer (W m–2); Vun is the representative output signal of the pyranometer (µV) when in unshaded (or global) irradiance mode; Vs is the representative output signal of the pyranometer (µV) when in shaded (or diffuse sky) irradiance mode; h is the apparent solar elevation; and k is the calibration factor (W m–2 µV–1), which is the inverse of the sensitivity (µV W–1 m2).
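The shade/unshade calculation of equation 7.10 can be sketched as follows. The linear time-interpolation of the unshaded signal to the time of the shaded reading, and all names, are illustrative assumptions rather than prescriptions from the Guide:

```python
import math

def shade_unshade_calibration(times_un, V_un, t_shaded, V_shaded, E, h_deg):
    """Equation 7.10, k = E*sin(h) / (Vun - Vs), using an unshaded
    signal linearly interpolated to the time of the shaded reading,
    as one might do over a series of shade/unshade cycles.
    times_un: two times (s) bracketing the shaded reading
    V_un: unshaded pyranometer signals (uV) at those two times
    t_shaded: time (s) of the representative shaded reading
    V_shaded: shaded (diffuse sky) signal (uV)
    E: direct irradiance at normal incidence (W m-2)
    h_deg: apparent solar elevation (degrees)
    """
    t0, t1 = times_un
    w = (t_shaded - t0) / (t1 - t0)
    # Time-interpolated unshaded signal, to reduce the effect of
    # temporal changes in global irradiance during the cycle:
    Vun = V_un[0] + w * (V_un[1] - V_un[0])
    return E * math.sin(math.radians(h_deg)) / (Vun - V_shaded)
```

Because the same pyranometer is used in differential mode, no zero irradiance correction is applied here, consistent with the text.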



Both the direct and diffuse components will change during the comparison, and care must be taken with the appropriate sampling and averaging to ensure that representative values of the shaded and unshaded outputs are used for the calculation. To reduce uncertainties associated with representative signals, a continuous series of shade and unshade cycles should be performed and time-interpolated values used to reduce temporal changes in global and diffuse sky irradiance. Since the same pyranometer is being used in differential mode, and the difference in zero irradiance signals for global and diffuse sky irradiance is negligible, there is no need to account for zero irradiances in equation 7.10.

alternate calibration using a pyrheliometer

This method uses the same instrumental set-up as the methods described above, but only requires the pyrheliometer to provide calibrated irradiance data (E); the two pyranometers are assumed to be uncalibrated (Forgan, 1996). The method calibrates both pyranometers by solving a pair of simultaneous equations analogous to equation 7.7. Irradiance signal data are initially collected with the pyrheliometer while one pyranometer (pyranometer A) measures global irradiance signals (VgA) and the other pyranometer (pyranometer B) measures diffuse sky irradiance signals (VdB) over a range of solar zenith angles in clear-sky conditions. After sufficient data have been collected in the initial configuration, the pyranometers are exchanged so that pyranometer A, which initially measured the global irradiance signal, now measures the diffuse sky irradiance signal (VdA), and vice versa for pyranometer B. The assumption is made that for each pyranometer the diffuse (kd) and global (kg) calibration coefficients are equal, so that the calibration coefficient for pyranometer A is given by:

kA = kgA = kdA  (7.11)

with an identical assumption for the pyranometer B coefficients. Then, for a time t0 in the initial period, a modified version of equation 7.7 is:

E(t0) · sin(h(t0)) = kA · VgA(t0) – kB · VdB(t0)  (7.12)

For a time t1 in the alternate period, when the pyranometers have been exchanged:

E(t1) · sin(h(t1)) = kB · VgB(t1) – kA · VdA(t1)  (7.13)

As the only unknowns in equations 7.12 and 7.13 are kA and kB, these can be solved for any pair of times (t0, t1). Pairs covering a range of solar elevations provide an indication of the directional response. The resultant calibration information for both pyranometers is representative of the global calibration coefficients and produces almost identical information to the method using a shaded reference pyranometer, but without the need for a calibrated pyranometer. As with that method, to produce coefficients with minimum uncertainty, this alternate method requires that the irradiance signals from the pyranometers be adjusted to remove any estimated zero irradiance offset. To reduce uncertainties due to changing directional response, it is recommended to use a pair of pyranometers of the same model and to select observation pairs for which sin h(t0) ≈ sin h(t1). The method is ideally suited to automatic field monitoring situations where the three solar irradiance components (direct, diffuse and global) are monitored continuously. Experience suggests that the data collection necessary for the application of this method may be conducted during as little as one day, with the exchange of instruments taking place around solar noon. However, at a field site, the extended periods and the days on either side of the instrument change may be used for data selection, provided that the pyrheliometer has a valid calibration.
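The pair of simultaneous equations forms a 2 × 2 linear system that can be solved in closed form. This sketch uses illustrative names and assumes the signals have already been corrected for zero irradiance offsets, as the text requires:

```python
def solve_pyranometer_pair(S0, VgA0, VdB0, S1, VgB1, VdA1):
    """Solve equations 7.12 and 7.13 for the two calibration factors:
        S0 = kA*VgA0 - kB*VdB0      (initial period, time t0)
        S1 = kB*VgB1 - kA*VdA1      (alternate period, time t1)
    S0 and S1 are the vertical components E*sin(h) of the direct
    irradiance (W m-2) from the pyrheliometer; the V terms are the
    pyranometer signals (uV). Returns (kA, kB) in W m-2 uV-1.
    Names are illustrative, not from the Guide.
    """
    # Cramer's rule for the 2x2 system:
    det = VgA0 * VgB1 - VdB0 * VdA1
    kA = (S0 * VgB1 + S1 * VdB0) / det
    kB = (S1 * VgA0 + S0 * VdA1) / det
    return kA, kB
```

In an automated set-up this would be applied to many (t0, t1) pairs with sin h(t0) close to sin h(t1), and the results averaged.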

By comparison with a reference pyranometer

As described in ISO (1992b), this method entails the simultaneous operation of two pyranometers mounted horizontally, side by side, outdoors for a sufficiently long period to acquire representative results. If the instruments are of the same model and monitoring configuration, only one or two days should be sufficient. The more pronounced the difference between the types of pyranometer configurations, the longer the period of comparison required. A long period, however, could be replaced by several shorter periods covering typical conditions (clear, cloudy, overcast, rainfall, snowfall, and so on). The derivation of the instrument factor is straightforward but, in the case of different pyranometer models, the resultant uncertainty is more likely to reflect the difference between the models than the stability of the instrument being calibrated. Data selection should be carried out when irradiances are relatively high and varying slowly. Each mean value of the ratio R of the response of the test instrument to that of the reference instrument may be used to calculate k = R · kr, where kr is the calibration factor of the reference and k is the calibration factor being derived. During a sampling period,



provided that the time between measurements is less than the 1/e time-constant of the pyranometers, data collection can occur during times of fluctuating irradiance. The mean temperature of the instruments or the ambient temperature should be recorded during all outdoor calibration work to allow for any temperature effects.

By comparison in the laboratory

There are two methods which involve laboratory-maintained artificial light sources providing either direct or diffuse irradiance. In both cases, the test pyranometer and a reference standard pyranometer are exposed under the same conditions. In one method, the pyranometers are exposed to a stabilized tungsten-filament lamp installed at the end of an optical bench. A practical source for this type of work is a 0.5 to 1.0 kW halogen lamp mounted in a water-cooled housing with forced ventilation and with its emission limited to the solar spectrum by a quartz window. This kind of lamp can be used if the standard and the instrument to be calibrated have the same spectral response. For general calibrations, a high-pressure xenon lamp with filters to give an approximate solar spectrum should be used. When calibrating pyranometers in this way, reflection effects should be excluded from the instruments by using black screens. The usual procedure is to install the reference instrument and measure the radiant flux. The reference is then removed and the measurement repeated using the test instrument. The reference is then replaced and another determination is made. Repeated alternation with the reference should produce a set of measurement data of good precision (about 0.5 per cent). In the other method, the calibration procedure uses an integrating light system, such as a sphere or hemisphere illuminated by tungsten lamps, with the inner surface coated with highly reflective diffuse-white paint. This offers the advantage of simultaneous exposure of the reference pyranometer and the instrument to be calibrated. Since the sphere or hemisphere simulates a sky with an approximately uniform radiance, the angle errors of the instrument at 45° dominate. As the cosine error at these angles is normally low, the repeatability of integrating-sphere measurements is generally within 0.5 per cent. As for the source used to illuminate the sphere, the same considerations apply as for the first method.

routine checks on calibration factors

There are several methods for checking the constancy of pyranometer calibration, depending upon the equipment available at a particular station. Every opportunity to check the performance of pyranometers in the field must be seized. At field stations where carefully preserved standards (either pyrheliometers or pyranometers) are available, the basic calibration procedures described above may be employed. Where standards are not available, other techniques can be used. If there is a simultaneous record of direct solar radiation, the two records can be examined for consistency by the method used for direct standardization, as explained above. This simple check should be applied frequently. If there are simultaneous records of global and diffuse sky radiation, the two records should be frequently examined for consistency. In periods of total cloud cover, the global and diffuse sky radiation should be identical, and these periods can be used when a shading disc is used for monitoring diffuse sky radiation. When using shading bands, it is recommended that the band be removed so that the diffuse sky pyranometer measures global radiation and its data can be compared to simultaneous data from the global pyranometer. The record may be verified with the aid of a travelling working standard sent from the central station of the network or from a nearby station. Lastly, if calibrations are not performed at the site, the pyranometer can be exchanged for a similar one sent from the calibration facility. Either of the last two methods should be used at least once a year. Pyranometers used for measuring reflected solar radiation should be moved into an upright position and checked using the methods described above.

7.3.2 Performance of pyranometers

Considerable care and attention to detail are required to attain the desirable standard of uncertainty. A number of properties of pyranometers and measurement systems should be evaluated so that the uncertainty of the resultant data can be estimated. For example, it has been demonstrated that, for a continuous record of global radiation without ancillary measurements of diffuse sky and direct radiation, an uncertainty better than 5 per cent in daily totals represents the result of good and careful work. Similarly, when a protocol similar to that proposed by WMO (1998) is used, uncertainties for daily totals can be of the order of 2 per cent.



sensor levelling

For accurate global radiation measurements with a pyranometer it is essential that the spirit level indicate when the plane of the thermopile is horizontal. This can be tested in the laboratory on an optical levelling table using a collimated lamp beam at about a 20° elevation. The levelling screws of the instrument are adjusted until the response is as constant as possible during rotation of the sensor in azimuth. The spirit level is then readjusted, if necessary, to indicate the horizontal plane. This is called radiometric levelling and should be the same as physical levelling of the thermopile. However, this may not be true if the quality of the thermopile surface is not uniform.

change of sensitivity due to ambient temperature variation

Thermopile instruments exhibit changes in sensitivity with variations in instrument temperature. Some instruments are equipped with integrated temperature compensation circuits in an effort to maintain a constant response over a large range of temperatures. The temperature coefficient of sensitivity may be measured in a temperature-controlled chamber. The temperature in the chamber is varied over a suitable range in 10° steps and held steady at each step until the response of the pyranometer has stabilized. The data are then fitted with a smooth curve. If the maximum percentage difference due to temperature response over the operational ambient range is 2 per cent or more, a correction should be applied on the basis of the fit of the data. If no temperature chamber is available, the standardization method with pyrheliometers (see section 7.3.1) can be used at different ambient temperatures. Attention should be paid to the fact that not only the temperature, but also, for example, the cosine response (namely, the effect of solar elevation) and non-linearity (namely, variations of solar irradiance) can change the sensitivity.

Variation of response with orientation

The calibration factor of a pyranometer may very well be different when the instrument is used in an orientation other than that in which it was calibrated. Inclination testing of pyranometers can be conducted in the laboratory or with the standardization methods described in section 7.3.1. It is recommended that the pyranometer be calibrated in the orientation in which it will be used. A correction for tilting is not recommended unless the instrument’s response has been characterized for a variety of conditions.

Variation of response with angle of incidence

The dependence of the directional response of the sensor upon solar elevation and azimuth is usually known as the Lambert cosine response and the azimuth response, respectively. Ideally, the solar irradiance response of the receiver should be proportional to the cosine of the zenith angle of the solar beam, and constant for all azimuth angles. For pyranometers, it is recommended that the cosine error (or percentage difference from ideal cosine response) be specified for at least two solar elevation angles, preferably 30° and 10°. A better way of prescribing the directional response is given in Table 7.5, which specifies the permissible error for all angles. Only lamp sources should be used to determine the variation of response with the angle of incidence, because the spectral distribution of the sun changes with the angle of elevation. Using the sun as a source, an apparent variation of response with solar elevation angle could be observed which is, in fact, a variation due to non-homogeneous spectral response.

uncertainties in hourly and daily totals

As most pyranometers in a network are used to determine hourly or daily exposures (or exposures expressed as mean irradiances), it is evident that the uncertainties in these values are important. Table 7.5 lists the expected maximum deviation from the true value, excluding calibration errors. The types of pyranometers in the third column of Table 7.5 (namely, those of moderate quality) are not suitable for hourly or daily totals, although they may be suitable for monthly and yearly totals.

7.3.3 installation and maintenance of pyranometers

The site selected to expose a pyranometer should be free from any obstruction above the plane of the sensing element and, at the same time, should be readily accessible. If it is impracticable to obtain such an exposure, the site must be as free as possible of obstructions that may shadow it at any time



in the year. The pyranometer should not be close to light-coloured walls or other objects likely to reflect solar energy onto it; nor should it be exposed to artificial radiation sources. In most places, a flat roof provides a good location for mounting the radiometer stand. If such a site cannot be obtained, a stand placed some distance from buildings or other obstructions should be used. If practicable, the site should be chosen so that no obstruction, in particular within the azimuth range of sunrise and sunset over the year, has an elevation exceeding 5°. Other obstructions should not reduce the total solar angle by more than 0.5 sr. At stations where this is not possible, complete details of the horizon and the solid angle subtended should be included in the description of the station.

A site survey should be carried out before the initial installation of a pyranometer, whenever its location is changed, or if a significant change occurs with regard to any surrounding obstructions. An excellent method of doing this is to use a survey camera that provides azimuthal and elevation grid lines on the negative. A series of exposures should be made to identify the angular elevation above the plane of the receiving surface of the pyranometer and the angular range in azimuth of all obstructions throughout the full 360° around the pyranometer. If a survey camera is not available, the angular outline of obscuring objects may be mapped out by means of a theodolite or a compass and clinometer combination. The description of the station should include the altitude of the pyranometer above sea level (that is, the altitude of the station plus the height of the pyranometer above the ground), together with its geographical longitude and latitude. It is also most useful to have a site plan, drawn to scale, showing the position of the recorder, the pyranometer, and all connecting cables.
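As an illustration of screening a horizon survey against these siting criteria (the 5° elevation guideline and the 0.5 sr limit on obstructed solar angle), the following hypothetical helper checks a list of surveyed obstructions; the data layout and all names are assumptions for illustration, not a procedure from the Guide:

```python
import math

def check_horizon(obstructions):
    """Screen a horizon survey: flag obstructions rising more than
    5 degrees above the plane of the receiving surface, and estimate
    the solid angle each subtends (the total should stay below about
    0.5 sr). Each obstruction is (azimuth_width_deg, elevation_deg),
    treated as a block reaching from the horizon to its top elevation.
    """
    report = []
    total_sr = 0.0
    for width_deg, elev_deg in obstructions:
        # Solid angle of a horizon band: Omega = width * (sin(top) - sin(0))
        omega = math.radians(width_deg) * math.sin(math.radians(elev_deg))
        total_sr += omega
        report.append({
            "width_deg": width_deg,
            "elev_deg": elev_deg,
            "solid_angle_sr": omega,
            "exceeds_5_deg": elev_deg > 5.0,
        })
    return report, total_sr
```

Such a check could be rerun whenever the site survey is repeated after a change in the surroundings.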
The accessibility of instrumentation for frequent inspection is probably the most important single consideration when choosing a site. It is most desirable that pyranometers and recorders be inspected at least daily, and preferably more often. The foregoing remarks apply equally to the exposure of pyranometers on ships, towers and buoys. The exposure of pyranometers on these platforms is a very difficult and sometimes hazardous undertaking. Seldom can an instrument be mounted where it is not affected by at least one significant obstruction (for example, a tower). Because of platform

motion, pyranometers are subject to wave motion and vibration. Precautions should be taken, therefore, to ensure that the plane of the sensor is kept horizontal and that severe vibration is minimized. This usually requires the pyranometer to be mounted on suitably designed gimbals.

correction for obstructions to a free horizon

If the direct solar beam is obstructed (which is readily detected on cloudless days), the record should be corrected wherever possible to reduce uncertainty. Only when there are separate records of global and diffuse sky radiation can the diffuse sky component of the record be corrected for obstructions. The procedure requires first that the diffuse sky record be corrected, and the global record subsequently adjusted. The fraction of the sky itself which is obscured should not be computed, but rather the fraction of the irradiance coming from that part of the sky which is obscured. Radiation incident at angles of less than 5° makes only a very small contribution to the total. Since the diffuse sky radiation limited to an elevation of 5° contributes less than 1 per cent to the diffuse sky radiation, it can normally be neglected. Attention should be concentrated on objects subtending angles of 10° or more, as well as those which might intercept the solar beam at any time. In addition, it must be borne in mind that light-coloured objects can reflect solar radiation onto the receiver. Strictly speaking, when determining corrections for the loss of diffuse sky radiation due to obstacles, the variance in sky radiance over the hemisphere should be taken into account. However, the only practical procedure is to assume that the radiance is isotropic, that is, the same from all parts of the sky. In order to determine the relative reduction in diffuse sky irradiance for obscuring objects of finite size, the following expression may be used:

ΔEsky = π–1 ∫Φ ∫Θ sin θ cos θ dθ dφ  (7.14)

where θ is the angle of elevation; φ is the azimuth angle, Θ is the extent in elevation of the object; and Φ is the extent in azimuth of the object. The expression is valid only for obstructions with a black surface facing the pyranometer. For other objects, the correction has to be multiplied by a reduction factor depending on the reflectivity of the object. Snow glare from a low sun may even lead to an opposite sign for the correction.
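Under the isotropic assumption, the double integral in equation 7.14 has a closed form for an obstruction of constant angular extent in azimuth and elevation. This sketch evaluates it; the function and parameter names are illustrative:

```python
import math

def diffuse_loss_fraction(az_width_deg, elev_low_deg, elev_high_deg):
    """Relative reduction in diffuse sky irradiance (equation 7.14)
    for a black, rectangular obstruction under the isotropic-sky
    assumption. For constant angular extents the integral reduces to:
        (1/pi) * width * (sin^2(e2) - sin^2(e1)) / 2
    with the azimuth width in radians and e1, e2 the lower and upper
    elevation limits of the obstruction.
    """
    phi = math.radians(az_width_deg)
    s1 = math.sin(math.radians(elev_low_deg))
    s2 = math.sin(math.radians(elev_high_deg))
    return phi * (s2 * s2 - s1 * s1) / (2.0 * math.pi)
```

As a consistency check with the text, a full 360° band below 5° elevation yields a fraction under 1 per cent, while the whole hemisphere yields exactly 1. For a non-black obstruction the result would be scaled by a reduction factor depending on its reflectivity, as noted above.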



Installation of pyranometers for measuring global radiation

A pyranometer should be securely attached to whatever mounting stand is available, using the holes provided in the tripod legs or in the baseplate. Precautions should always be taken to avoid subjecting the instrument to mechanical shocks or vibration during installation. This operation is best effected as follows. First, the pyranometer should be oriented so that the emerging leads or the connector are located poleward of the receiving surface. This minimizes heating of the electrical connections by the sun. Instruments with Moll–Gorczynski thermopiles should be oriented so that the line of thermo-junctions (the long side of the rectangular thermopile) points east-west. This constraint sometimes conflicts with the first, depending on the type of instrument; in such cases the east-west orientation should have priority, since the connector can be shaded if necessary. When towers are nearby, the instrument should be situated on the side of the tower towards the Equator, and as far away from the tower as practical. Radiation reflected from the ground or the base should not be allowed to irradiate the instrument body from underneath. A cylindrical shading device can be used, but care should be taken to ensure that natural ventilation still occurs and is sufficient to maintain the instrument body at ambient temperature. The pyranometer should then be secured lightly with screws or bolts and levelled with the aid of the levelling screws and spirit level provided. After this, the retaining screws should be tightened, taking care that the setting is not disturbed so that, when properly exposed, the receiving surface is horizontal, as indicated by the spirit level. The stand or platform should be sufficiently rigid so that the instrument is protected from severe shocks and the horizontal position of the receiver surface is not changed, especially during periods of high winds and strong solar energy. The cable connecting the pyranometer to its recorder should have twin conductors and be waterproof.
The cable should be firmly secured to the mounting stand to minimize rupture or intermittent disconnection in windy weather. Wherever possible, the cable should be properly buried and protected underground if the recorder is located at a distance. The use of shielded cable is recommended; the pyranometer, cable and recorder being connected by a very low resistance conductor to a

common ground. As with other types of thermoelectric devices, care must be exercised to obtain a permanent copper-to-copper junction between all connections prior to soldering. All exposed junctions must be weatherproof and protected from physical damage. After identification of the circuit polarity, the other extremity of the cable may be connected to the data-collection system in accordance with the relevant instructions.

Installation of pyranometers for measuring diffuse sky radiation

For measuring or recording separate diffuse sky radiation, the direct solar radiation must be screened from the sensor by a shading device. Where continuous records are required, the pyranometer is usually shaded either by a small metal disc held in the sun’s beam by a sun tracker, or by a shadow band mounted on a polar axis. The first method entails the rotation of a slender arm synchronized with the sun’s apparent motion. If tracking is based on sun synchronous motors or solar almanacs, frequent inspection is essential to ensure proper operation and adjustment, since spurious records are otherwise difficult to detect. Sun trackers with sun-seeking systems minimize the likelihood of such problems. The second method involves frequent personal attention at the site and significant corrections to the record on account of the appreciable screening of diffuse sky radiation by the shading arrangement. Assumptions about the sky radiance distribution and band dimensions are required to correct for the band and increase the uncertainty of the derived diffuse sky radiation compared to that using a sun-seeking disc system. Annex 7.E provides details on the construction of a shading ring and the necessary corrections to be applied. A significant error source for diffuse sky radiation data is the zero irradiance signal. In clear sky conditions the zero irradiance signal is the equivalent of 5 to 10 W m–2 depending on the pyranometer model, and could approach 15 per cent of the diffuse sky irradiance. The Baseline Surface Radiation Network (BSRN) Operations Manual (WMO, 1998) provides methods to minimize the influence of the zero irradiance signal. The installation of a diffuse sky pyranometer is similar to that of a pyranometer which measures global radiation. However, there is the complication of an equatorial mount or shadow-band stand. The distance to a neighbouring pyranometer should be sufficient to guarantee that the shading ring or disc



never shadows it. This may be more important at high latitudes, where the sun angle can be very low. Since the diffuse sky radiation from a cloudless sky may be less than one tenth of the global radiation, careful attention should be given to the sensitivity of the recording system.

Installation of pyranometers for measuring reflected radiation

The height above the surface should be 1 to 2 m. In summer-time, the ground should be covered by grass that is kept short. For regions with snow in winter, a mechanism should be available to adjust the height of the pyranometer in order to maintain a constant separation between the snow and the instrument. Although the mounting device is within the field of view of the instrument, it should be designed to cause less than 2 per cent error in the measurement. Access to the pyranometer for levelling should be possible without disturbing the surface beneath, especially if it is snow.

Installation and maintenance of pyranometers on special platforms

Very special care should be taken when installing equipment on such diverse platforms as ships, buoys, towers and aircraft. Radiation sensors mounted on ships should be provided with gimbals because of the substantial motion of the platform. If a tower is employed exclusively for radiation equipment, it may be capped by a rigid platform on which the sensors can be mounted. Obstructions to the horizon should be kept to the side of the platform farthest from the Equator, and booms for holding albedometers should extend towards the Equator. Radiation sensors should be mounted as high as is practicable above the water surface on ships, buoys and towers, in order to keep the effects of water spray to a minimum. Radiation measurements have been taken successfully from aircraft for a number of years. Care must be exercised, however, in selecting the correct pyranometer and proper exposure. Particular attention must be paid during installation, especially for systems that are difficult to access, to ensure the reliability of the observations. It may be desirable, therefore, to provide a certain amount of redundancy by installing duplicate measuring systems at certain critical sites.

Maintenance of pyranometers

Pyranometers in continuous operation should be inspected at least once a day and perhaps more frequently, for example when meteorological observations are being made. During these inspections, the glass dome of the instrument should be wiped clean and dry (care should be taken not to disturb routine measurements during the daytime). If frozen snow, glazed frost, hoar frost or rime is present, an attempt should be made to remove the deposit very gently (at least temporarily), with the sparing use of a de-icing fluid, before wiping the glass clean. A daily check should also ensure that the instrument is level, that there is no condensation inside the dome, and that the sensing surfaces are still black. In some networks, the exposed dome of the pyranometer is ventilated continuously by a blower to avoid or minimize deposits in cold weather, and to cool the dome in calm weather situations. The temperature difference between the ventilating air and the ambient air should not be more than about 1 K. If local pollution or sand forms a deposit on the dome, it should be wiped very gently, preferably after blowing off most of the loose material or after wetting it a little, in order to prevent the surface from being scratched. Such abrasive action can appreciably alter the original transmission properties of the material. Desiccators should be kept charged with active material (usually a colour-indicating silica gel).


7.4 Measurement of total and long-wave radiation

The measurement of total radiation includes both short wavelengths of solar origin (300 to 3 000 nm) and longer wavelengths of terrestrial and atmospheric origin (3 000 to 100 000 nm). The instruments used for this purpose are pyrradiometers. They may be used for measuring either upward or downward radiation flux components, and a pair of them may be used to measure the difference between the two, which is the net radiation. Single-sensor pyrradiometers, with an active surface on both sides, are also used for measuring net radiation. Pyrradiometer sensors must have a constant sensitivity across the whole wavelength range from 300 to 100 000 nm. The measurement of long-wave radiation can be accomplished either indirectly, by subtracting the measured global radiation from the total radiation



measured, or directly, by using pyrgeometers. Most pyrgeometers eliminate the short wavelengths by means of filters which have a constant transparency to long wavelengths while being almost opaque to the shorter wavelengths (300 to 3 000 nm). Some pyrgeometers can be used only during the night as they have no means for eliminating solar short-wave radiation.

7.4.1 instruments for the measurement of total radiation

Table 7.7 lists the characteristics of pyrradiometers of various levels of performance, and the uncertainties to be expected in the measurements obtained from them.

7.4.2 calibration of pyrradiometers and net pyrradiometers

One problem with instruments for measuring total radiation is that there are no absorbers which have a completely constant sensitivity over the extended range of wavelengths concerned. The use of thermally sensitive sensors requires a good knowledge of the heat budget of the sensor. Otherwise, it is necessary to reduce sensor convective heat losses to near zero by protecting the sensor from the direct influence of the wind. The technical difficulties linked with such heat losses are largely responsible for the fact that net radiative fluxes are determined less precisely than global radiation fluxes. In fact, different laboratories have developed their own pyrradiometers on technical bases which they consider to be the most effective for reducing the convective heat transfer in the sensor. During the last few decades, pyrradiometers have been built which, although not perfect, embody good measurement principles. Thus, there is a great variety of pyrradiometers employing different methods for eliminating, or allowing for, wind effects, as follows:
(a) No protection, in which case empirical formulae are used to correct for wind effects;
(b) Determination of wind effects by the use of electrical heating;
(c) Stabilization of wind effects through artificial ventilation;
(d) Elimination of wind effects by protecting the sensor from the wind.
Table 7.6 provides an analysis of the sources of error arising in pyrradiometric measurements and proposes methods for determining these errors. It is difficult to determine the precision likely to be obtained in practice. In situ comparisons at different sites between different designs of pyrradiometer yield results manifesting differences of up to 5 to 10 per cent under the best conditions. In order to improve such results, an exhaustive laboratory study should precede the in situ comparison in order to determine the different effects separately.

Pyrradiometers and net pyrradiometers can be calibrated for short-wave radiation using the same methods as those used for pyranometers (see section 7.3.1), using the sun and sky as the source. In the case of one-sensor net pyrradiometers, the downward-looking side must be covered by a cavity of known and steady temperature. Long-wave radiation calibration is best done in the laboratory with black body cavities. However, it is possible to perform field calibrations. In the case of a net pyrradiometer, the downward flux L↓ is measured separately by using a pyrgeometer; or the upper receiver may be covered as above with a cavity, and the temperature of the snow or water surface Ts is measured directly. In this case, the radiative flux received by the instrument amounts to:

L* = L↓ – εσTs⁴  (7.15)

and:

V = L* · K, or K = V/L*  (7.16)

where ε is the emittance of the water or snow surface (normally taken as 1); σ is the Stefan-Boltzmann constant (5.670 4 · 10–8 W m–2 K–4); Ts is the underlying surface temperature (K); L↓ is the irradiance measured by the pyrgeometer or calculated from the temperature of the cavity capping the upper receiver (W m–2); L* is the radiative flux at the receiver (W m–2); V is the output of the instrument (µV); and K is the sensitivity (µV/(W m–2)). The instrument sensitivities should be checked periodically in situ by careful selection of well-described environmental conditions with slowly varying fluxes. The symmetry of net pyrradiometers requires regular checking. This is done by inverting the instrument, or the pair of instruments, in situ and noting any difference in output. Differences of greater than 2 per cent of the likely full scale between the two directions demand instrument recalibration because either the ventilation rates or absorption factors have become significantly different for the two sensors. Such tests should also be carried out during calibration or installation.
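The field-calibration relations of equations 7.15 and 7.16 amount to a few lines of arithmetic. The following is a minimal sketch; the numerical values (downward flux, surface temperature, thermopile output) are assumed examples for illustration, not instrument-specific figures:

```python
# Field calibration of a net pyrradiometer over a snow or water surface,
# following equations 7.15 and 7.16. All numbers are illustrative.
SIGMA = 5.6704e-8  # Stefan-Boltzmann constant (W m-2 K-4)

def net_flux_at_receiver(L_down, T_s, emittance=1.0):
    """L* = L_down - eps * sigma * Ts^4 (equation 7.15)."""
    return L_down - emittance * SIGMA * T_s ** 4

def sensitivity(V_uV, L_star):
    """K = V / L* (equation 7.16); V in microvolts, L* in W m-2."""
    return V_uV / L_star

# Example: pyrgeometer measures 250 W m-2 downward, melting snow at 271.15 K,
# net pyrradiometer output -450 uV.
L_star = net_flux_at_receiver(L_down=250.0, T_s=271.15)  # about -56.5 W m-2
K = sensitivity(V_uV=-450.0, L_star=L_star)              # about 8 uV/(W m-2)
```

The negative net flux simply reflects that the snow surface emits more than it receives in this example, so a negative thermopile output yields a positive sensitivity.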



Table 7.6. Sources of error in pyrradiometric measurements

Element influencing the measurements: Screening properties
Nature of influence – with domes: spectral characteristics of transmission; without domes: none
Effects on the precision of measurements: (a) spectral variations in the calibration coefficient; (b) reduced incident radiation on the detector due to short-wave diffusion in the domes (depends on thickness); (c) ageing and other variations in the sensors
Methods for determining these characteristics: (a) determine spectrally the extinction in the screen; (b) measure the effect of diffuse sky radiation, or measure the effect with a varying angle of incidence; (c) spectral analysis: compare with a new dome; determine the extinction of the dome

Element: Convection effects
Nature of influence – with domes: changes due to non-radiative energy exchanges: sensor–dome environment (thermal resistance); without domes: changes due to non-radiative energy exchanges: sensor–air (variation in the areal exchange coefficient)
Effects on the precision of measurements: uncontrolled changes due to wind gusts are critical in computing the radiative flux divergence in the lowest layer of the atmosphere
Methods: study the dynamic behaviour of the instrument as a function of temperature and speed in a wind tunnel

Element: Effects of hydrometeors (rain, snow, fog, dew, frost) and dust
Nature of influence – with domes: variation of the spectral transmission plus the non-radiative heat exchange by conduction and change; without domes: variation of the spectral character of the sensor and of the dissipation of heat by evaporation
Effects on the precision of measurements: changes due to variations in the spectral characteristics of the sensor and to non-radiative energy transfers
Methods: study the influence of forced ventilation on the effects

Element: Properties of the sensor surface (emissivity)
Nature of influence: depends on the spectral absorption of the blackening substance on the sensor
Effects on the precision of measurements: changes in the calibration coefficient (a) as a function of spectral response; (b) as a function of the intensity and azimuth of incident radiation; (c) as a function of temperature effects
Methods: (a) spectrophotometric analysis of the calibration of the absorbing surfaces; (b) measure the sensor's sensitivity variability with the angle of incidence

Element: Temperature effects
Nature of influence: non-linearity of the sensor as a function of temperature
Effects on the precision of measurements: a temperature coefficient is required
Methods: study the influence of forced ventilation on these effects

Element: Asymmetry effects
Nature of influence: (a) differences between the thermal capacities and resistances of the upward- and downward-facing sensors; (b) differences in the ventilation of the upward- and downward-facing sensors; (c) control and regulation of sensor levelling
Effects on the precision of measurements: (a) influence on the time-constant of the instrument; (b) error in the determination of the calibration factors for the two sensors
Methods: (a) control the thermal capacity of the two sensor surfaces; (b) control the time-constant over a narrow temperature range

Table 7.7. Characteristics of operational pyrradiometers

Characteristic | High quality(a) | Good quality(b) | Moderate quality(c)
Resolution (W m–2) | 1 | 5 | 10
Stability (annual change; per cent of full scale) | 2% | 5% | 10%
Cosine response error at 10° elevation | 3% | 7% | 15%
Azimuth error at 10° elevation (additional to the cosine error) | 3% | 5% | 10%
Temperature dependence (–20 to 40°C) | 1% | 2% | 5%
Non-linearity (deviation from mean) | 0.5% | 2% | 5%
Variation in spectral sensitivity integrated over … | 2% | 5% | 10%

(a) Near state of the art; maintainable only at stations with special facilities and specialist staff.
(b) Acceptable for network operations.
(c) Suitable for low-cost networks where moderate to low performance is acceptable.


7.4.3 Instruments for the measurement of long-wave radiation

Over the last decade, significant advances have been made in the measurement of terrestrial radiation by pyrgeometers, which block out solar radiation. Early instruments of this type had significant problems with premature ageing of the materials used to block the short-wave portion of the spectrum while remaining transparent to the long-wave portion. However, with the advent of the silicon-domed pyrgeometer, this stability problem has been greatly reduced. Nevertheless, the measurement of terrestrial radiation is still more difficult and less understood than the measurement of solar irradiance. Pyrgeometers are subject to the same errors as pyrradiometers (see Table 7.6). Pyrgeometers have developed in two forms. In the first form, the thermopile receiving surface is covered with a hemispheric dome inside which an interference filter is deposited. In the second form, the thermopile is covered with a flat plate on which the interference filter is deposited. In both cases, the surface on which the interference filter is deposited is made of silicon. The first style of instrument provides a full hemispheric field of view, while for the second a 150° field of view is typical and the hemispheric flux is modelled using the manufacturer's procedures. The argument used for the latter method is that the deposition of filters on the inside of a hemisphere has greater imprecisions than the modelling of the flux below 30° elevations. Both types of instrument are operated on the principle that the measured output signal is the difference between the irradiance emitted from the source and the black-body radiative temperature of the instrument. In general, this can be approximated by the following equation:

L↓i = V/K + 5.670 4 · 10–8 · Td4


where L↓i is the infrared terrestrial irradiance (W m–2); V is the voltage output from the sensing element (µV); K is the instrument sensitivity to infrared irradiance (µV/(W m–2)); and Td is the detector temperature (K). Several recent comparisons have been made using instruments of similar manufacture in a variety of measurement configurations. These studies have indicated that, following careful calibration, fluxes measured at night agree to within 2 per cent, but in periods of high solar energy the difference between instruments may reach 13 per cent. The reason for the differences is that the silicon dome and the associated interference filter do not have a sharp and reproducible cut-off between solar and terrestrial radiation, and it is not a perfect reflector of solar energy. Thus, solar heating occurs. By shading the instrument, ventilating it as recommended by ISO (1990a), and measuring the temperature of the dome and the instrument case, this discrepancy can be reduced to less than 5 per cent of the thermopile signal (approximately 15 W m–2). Based upon these and other comparisons, the following recommendations should be followed for the measurement of long-wave radiation: (a) When using pyrgeometers that have a built-in battery circuit to emulate the black-body condition of the instrument, extreme care must be taken to ensure that the battery is well maintained. Even a small change in the battery voltage will significantly increase the measurement error. If at all possible, the battery should be removed from the instrument, and the case and dome temperatures of the instrument should be measured according to the manufacturer's instructions; (b) Where possible, both the case and dome temperatures of the instrument should be measured and used in the determination of irradiance; (c) The instrument should be ventilated; (d) For best results, the instrument should be shaded from direct solar irradiance by a small sun-tracking disc as used for diffuse sky radiation measurement.
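The pyrgeometer equation above reduces directly to code. The following sketch uses assumed example values (thermopile signal, sensitivity and detector temperature) purely for illustration:

```python
# Infrared terrestrial irradiance from a pyrgeometer:
# L_down = V/K + sigma * Td^4. All numbers are illustrative.
SIGMA = 5.6704e-8  # Stefan-Boltzmann constant (W m-2 K-4)

def infrared_irradiance(V_uV, K_uV_per_Wm2, T_detector_K):
    """Infrared terrestrial irradiance (W m-2) from thermopile output V (uV),
    sensitivity K (uV/(W m-2)) and detector temperature Td (K)."""
    return V_uV / K_uV_per_Wm2 + SIGMA * T_detector_K ** 4

# Example: -350 uV signal, sensitivity 4 uV/(W m-2), detector at 288 K.
L_down = infrared_irradiance(-350.0, 4.0, 288.0)  # roughly 300 W m-2
```

The negative thermopile voltage is typical: the sky is usually colder than the detector, so the net exchange term V/K is subtracted from the detector's own black-body emission.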

These instruments should be calibrated at national or regional calibration centres by using black-body calibration units. Experiments using near-black-body radiators fashioned from large hollowed blocks of ice have also met with good success. The calibration centre should provide information on the best method of determining the atmospheric irradiance from a pyrgeometer depending upon which of the above recommendations are being followed.

7.4.4 Installation of pyrradiometers and pyrgeometers

Pyrradiometers and pyrgeometers are generally installed at a site which is free from obstructions, or at least has no obstruction with an angular size greater than 5° in any direction, and which has a low sun angle at all times during the year. A daily check of the instruments should ensure that: (a) The instrument is level; (b) Each sensor and its protection devices are kept clean and free from dew, frost, snow and rain; (c) The domes do not retain water (any internal condensation should be dried up); (d) The black receiver surfaces have emissivities very close to 1. Additionally, where polythene domes are used, it is necessary to check from time to time that UV effects have not changed the transmission characteristics. A half-yearly exchange of the upper dome is recommended.

Since it is not generally possible to directly measure the reflected solar radiation and the upward long-wave radiation exactly at the surface level, it is necessary to place the pyranometers and pyrradiometers at a suitable distance from the ground to measure these upward components. Such measurements integrate the radiation emitted by the surface beneath the sensor. For pyranometers and pyrradiometers which have an angle of view of 2π sr and are installed 2 m above the surface, 90 per cent of all the radiation measured is emitted by a circular surface underneath having a diameter of 12 m (this figure is 95 per cent for a diameter of 17.5 m and 99 per cent for one of 39.8 m), assuming that the sensor uses a cosine detector. This characteristic of integrating the input over a relatively large circular surface is advantageous when the terrain has large local variations in emittance, provided that the net pyrradiometer can be installed far enough from the surface to achieve a field of view which is representative of the local terrain. The output of a sensor located too close to the surface will show large effects caused by its own shadow, in addition to the observation of an unrepresentative portion of the terrain. On the other hand, the readings from a net pyrradiometer located too far from the surface can be rendered unrepresentative of the fluxes near that surface because of the existence of undetected radiative flux divergences. Usually a height of 2 m above short homogeneous vegetation is adopted, while in the case of tall vegetation, such as a forest, the height should be sufficient to eliminate local surface heterogeneities adequately.

7.4.5 Recording and data reduction

In general, the text in section 7.1.3 applies to pyrradiometers and pyrgeometers. Furthermore, the following effects can specifically influence the readings of these radiometers, and they should be recorded: (a) The effect of hydrometeors on non-protected and non-ventilated instruments (rain, snow, dew, frost); (b) The effect of wind and air temperature; (c) The drift of zero of the data system. This is much more important for pyrradiometers, which can yield negative values, than for pyranometers, where the zero irradiance signal is itself a property of the net irradiance at the sensor surface. Special attention should be paid to the position of instruments if the derived long-wave radiation requires subtraction of the solar irradiance component measured by a pyranometer; the pyrradiometer and pyranometer should be positioned within 5 m of each other and in such a way that they are essentially influenced in the same way by their environment.
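The source-area figures quoted for a downward-facing sensor (90 per cent from a 12 m circle at 2 m height, and so on) follow from the view factor of a cosine (Lambertian) receiver over a uniform plane, F = r²/(r² + h²). This closed form is a standard result assumed here rather than stated in the text; a short check:

```python
import math

def source_diameter(height_m, fraction):
    """Diameter of the surface circle contributing `fraction` of the signal
    for a cosine detector at height h over a uniform Lambertian plane:
    F = r^2 / (r^2 + h^2)  =>  r = h * sqrt(F / (1 - F))."""
    r = height_m * math.sqrt(fraction / (1.0 - fraction))
    return 2.0 * r

# Reproduces the figures in the text for a sensor mounted 2 m above ground:
d90 = source_diameter(2.0, 0.90)  # 12.0 m
d95 = source_diameter(2.0, 0.95)  # about 17.4 m (quoted as 17.5 m)
d99 = source_diameter(2.0, 0.99)  # about 39.8 m
```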

Table 7.8. Photopic spectral luminous efficiency values (unity at wavelength of maximum efficacy)

Wavelength (nm)  Photopic V(λ)   |  Wavelength (nm)  Photopic V(λ)
380              0.000 04        |  590              0.757
390              0.000 12        |  600              0.631
400              0.000 4         |  610              0.503
410              0.001 2         |  620              0.381
420              0.004 0         |  630              0.265
430              0.011 6         |  640              0.175
440              0.023           |  650              0.107
450              0.038           |  660              0.061
460              0.060           |  670              0.032
470              0.091           |  680              0.017
480              0.139           |  690              0.008 2
490              0.208           |  700              0.004 1
500              0.323           |  710              0.002 1
510              0.503           |  720              0.001 05
520              0.710           |  730              0.000 52
530              0.862           |  740              0.000 25
540              0.954           |  750              0.000 12
550              0.995           |  760              0.000 06
560              0.995           |  770              0.000 03
570              0.952           |  780              0.000 015
580              0.870           |


7.5 Measurement of special radiation quantities


7.5.1 Measurement of daylight

Illuminance is the incident flux of radiant energy that emanates from a source with wavelengths between 380 and 780 nm and is weighted by the response of the human eye to energy in this wavelength region. The ICI has defined the response of the human eye to photons with a peak responsivity at 555 nm. Figure 7.2 and Table 7.8 provide the relative response of the human eye normalized to this frequency. Luminous efficacy is defined as the relationship between radiant emittance (W m–2) and luminous emittance (lm). It is a function of the relative luminous sensitivity V(λ) of the human eye and a normalizing factor Km (683) describing the number of lumens emitted per watt of electromagnetic radiation from a monochromatic source of 555.19 nm (the freezing point of platinum), as follows:
Φv = Km ∫380^780 Φ(λ) V(λ) dλ


where Φv is the luminous flux (lm m–2 or lux); Φ(λ) is the spectral radiant flux (W m–2 nm–1); V(λ) is the sensitivity of the human eye; and Km is the normalizing constant relating luminous to radiation quantities. Quantities and units for luminous variables are given in Annex 7.A.
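The luminous-efficacy integral can be evaluated numerically from tabulated V(λ) values. The sketch below uses a coarse 50 nm subset of Table 7.8 and a flat 1 W m–2 nm–1 test spectrum, both assumed here purely for illustration; real daylight spectra are far from flat:

```python
# Numerical sketch of Phi_v = K_m * integral Phi(lambda) V(lambda) d(lambda),
# using the trapezoidal rule over a coarse wavelength grid.
K_M = 683.0  # maximum luminous efficacy (lm/W)

def illuminance(wavelengths_nm, spectral_irradiance, v_lambda):
    """Trapezoidal integration of Phi(lambda) * V(lambda), scaled by K_m."""
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dl = wavelengths_nm[i + 1] - wavelengths_nm[i]
        f0 = spectral_irradiance[i] * v_lambda[i]
        f1 = spectral_irradiance[i + 1] * v_lambda[i + 1]
        total += 0.5 * (f0 + f1) * dl
    return K_M * total

# V(lambda) every 50 nm from Table 7.8; flat test spectrum of 1 W m-2 nm-1.
wl = [400, 450, 500, 550, 600, 650, 700, 750]
v = [0.0004, 0.038, 0.323, 0.995, 0.631, 0.107, 0.0041, 0.00012]
lux = illuminance(wl, [1.0] * len(wl), v)
```

A finer grid (the full 10 nm tabulation of Table 7.8) would of course give a more accurate integral; the coarse grid is only meant to show the mechanics.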



Illuminance meters comprise a photovoltaic detector, one or more filters to yield sensitivity according to the V(λ) curve, and often a temperature control circuit to maintain signal stability. The ICI has developed a detailed guide to the measurement of daylight (ICI, 1994) which describes expected practices in the installation of equipment, instrument characterization, data-acquisition procedures and initial quality control. The measurement of global illuminance parallels the measurement of global irradiance. However, the standard illuminance meter must be temperature controlled or corrected from at least –10 to 40°C. Furthermore, it must be ventilated to prevent condensation and/or frost from coating the outer surface of the sensing element. Illuminance meters should normally be able to measure fluxes over the range 1 to 20 000 lx. Within this range, uncertainties should remain within the limits of Table 7.9. These values are


Figure 7.2. Relative luminous sensitivity V(λ) of the human eye for photopic vision



Table 7.9. Specification of illuminance meters

Specification | Uncertainty percentage
V(λ) match | 2.5
UV response | 0.2
IR response | 0.2
Cosine response | 1.5
Fatigue at 10 klx | 0.1
Temperature coefficient | 0.1 K–1
Linearity | 0.2
Settling time | 0.1 s

based upon ICI recommendations (ICI, 1987), but only for uncertainties associated with high-quality illuminance meters specifically intended for external daylight measurements. Diffuse sky illuminance can be measured following the same principles used for the measurement of diffuse sky irradiance. Direct illuminance measurements should be taken with instruments having a field of view whose open half-angle is no greater than 2.85° and whose slope angle is less than 1.76°.

Calibration

Calibrations should be traceable to a Standard Illuminant A following the procedures outlined in ICI (1987). Such equipment is normally available only at national standards laboratories. The calibration and tests of specification should be performed yearly. These should also include tests to determine ageing, zero setting drift, mechanical stability and climatic stability. It is also recommended that a field standard be used to check calibrations at each measurement site between laboratory calibrations.

Recording and data reduction

The ICI has recommended that the following climatological variables be recorded: (a) Global and diffuse sky daylight illuminance on horizontal and vertical surfaces; (b) Illuminance of the direct solar beam; (c) Sky luminance for 0.08 sr intervals (about 10° · 10°) all over the hemisphere; (d) Photopic albedo of characteristic surfaces such as grass, earth and snow. Hourly or daily integrated values are usually needed. The hourly values should be referenced to true solar time. For the presentation of sky luminance data, stereographic maps depicting isolines of equal luminance are most useful.

7.6 Measurement of UV radiation

Measurements of solar UV radiation are in demand because of its effects on the environment and human health, and because of the enhancement of radiation at the Earth's surface as a result of ozone depletion (Kerr and McElroy, 1993). The UV spectrum is conventionally divided into three parts, as follows: (a) UV-A is the band with wavelengths of 315 to 400 nm, namely, just outside the visible spectrum. It is less biologically active and its intensity at the Earth's surface does not vary with atmospheric ozone content; (b) UV-B is defined as radiation in the 280 to 315 nm band. It is biologically active and its intensity at the Earth's surface depends on the atmospheric ozone column, to an extent depending on wavelength. A frequently used expression of its biological activity is its erythemal effect, which is the extent to which it causes the reddening of white human skin; (c) UV-C, in wavelengths of 100 to 280 nm, is completely absorbed in the atmosphere and does not occur naturally at the Earth's surface.

UV-B is the band on which most interest is centred for measurements of UV radiation. An alternative, but now non-standard, definition of the boundary between UV-A and UV-B is 320 nm rather than 315 nm. Measuring UV radiation is difficult because of the small amount of energy reaching the Earth's surface, the variability due to changes in stratospheric ozone levels, and the rapid increase in the magnitude of the flux with increasing wavelength. Figure 7.3 illustrates changes in the spectral irradiance between 290 and 325 nm at the top of the atmosphere and at the surface in W m–2 nm–1. Global UV irradiance is strongly affected by atmospheric phenomena such as clouds, and to a lesser extent by atmospheric aerosols. The influence of surrounding surfaces is also significant because of multiple scattering. This is especially the case in snow-covered areas. Difficulties in the standardization of UV radiation measurement stem from the variety of uses to which the measurements are put. Unlike most meteorological measurements, standards based upon global needs have not yet been reached. In many countries, measurements of UV radiation are not taken by Meteorological Services, but by health or environmental protection authorities. This leads to further difficulties in the standardization of instruments and methods of observation.

Figure 7.3. Model results illustrating the effect of increasing ozone levels on the transmission of UV-B radiation through the atmosphere (curves show extraterrestrial irradiance and surface irradiance for 250, 300 and 350 milli-atmosphere-centimetre ozone; irradiance (W m–2 nm–1) against wavelength (nm))

Guidelines and standard procedures have been developed on how to characterize and calibrate UV spectroradiometers and UV filter radiometers used to measure solar UV irradiance (see WMO, 1996; 1999a; 1999b; 2001). Application of the recommended procedures for data quality assurance performed at sites operating instruments for solar UV radiation measurements will ensure a valuable UV radiation database. This is needed to derive a climatology of solar UV irradiance in space and time for studies of the Earth's climate. Requirements for measuring sites and instrument specifications are also provided in these documents. Requirements for UV-B measurements were put forward in the WMO Global Ozone Research and Monitoring Project (WMO, 1993b) and are reproduced in Table 7.10. The following instrument descriptions are provided for general information and for assistance in selecting appropriate instrumentation.

Table 7.10. Requirements for UV-B global spectral irradiance measurements

UV-B:
1. Wavelength resolution: 1.0 nm or better
2. Temporal resolution: 10 min or better
3. Directional (angular): separation into direct and diffuse components or better; radiances
4. Meticulous calibration strategy

Ancillary data:
(a) Required: 1. Total column ozone (within 100 km); 2. Aerosol optical depth; 3. Ground albedo; 4. Cloud cover
(b) Highly recommended: 1. Aerosol profile using lidar; 2. Vertical ozone distribution; 3. Sky brightness; 4. Global solar irradiance; 5. Polarization of zenith radiance; 6. Column water amount

7.6.1 Instruments

Three general types of instruments are available commercially for the measurement of UV radiation. The first class of instruments use broadband filters. These instruments integrate over either the UV-B or UV-A spectrum or the entire broadband UV region responsible for affecting human health. The second class of instruments use one or more interference filters to integrate over discrete portions of the UV-A and/or UV-B spectrum. The third class of instruments are


Figure 7.4. Erythemal curves as presented by Parrish, Jaenicke and Anderson (1982) and McKinlay and Diffey (1987) (erythemal action spectra normalized to 1 at 250 nm; relative response against wavelength (nm))



spectroradiometers that measure across a predefined portion of the spectrum sequentially using a fixed passband.

Broadband sensors

Most, but not all, broadband sensors are designed to measure a UV spectrum that is weighted by the erythemal function proposed by McKinlay and Diffey (1987) and reproduced in Figure 7.4. Another action spectrum found in some instruments is that of Parrish, Jaenicke and Anderson (1982). Two methods (and their variations) are used to accomplish this hardware weighting. One of the means of obtaining erythemal weighting is to first filter out nearly all visible wavelength light using UV-transmitting, black-glass blocking filters. The remaining radiation then strikes a UV-sensitive phosphor. In turn, the green light emitted by the phosphor is filtered again by using coloured glass to remove any non-green visible light before impinging on a gallium arsenide or a gallium arsenide phosphide photodiode. The quality of the instrument is dependent on such items as the quality of the outside protective quartz dome, the cosine response of the instrument, the temperature stability, and the ability of the manufacturer to match the erythemal curve with a combination of glass and diode characteristics. Instrument temperature stability is crucial, both with respect to the electronics and the response of the phosphor to incident UV radiation. Phosphor efficiency decreases by approximately 0.5 per cent K–1 and its wavelength response curve is shifted by approximately 1 nm towards longer wavelengths for every 10 K. This latter effect is particularly important because of the steepness of the radiation curve at these wavelengths. More recently, instruments have been developed to measure erythemally weighted UV irradiance using thin-film metal interference filter technology and specially developed silicon photodiodes. These overcome many problems associated with phosphor technology, but must contend with very low photodiode signal levels and filter stability.

Other broadband instruments use one or the other measurement technology to measure the complete spectra by using either a combination of glass filters or interference filters. The bandpass is as narrow as 20 nm full-width half-maximum (FWHM) to as wide as 80 nm FWHM for instruments measuring a combination of UV-A and UV-B radiation. Some manufacturers of these instruments provide simple algorithms to approximate erythemal dosage from the unweighted measurements. The maintenance of these instruments consists of ensuring that the domes are cleaned, the instrument is level, the desiccant (if provided) is active, and the heating/cooling system is working correctly, if so equipped. Otherwise, the care they require is similar to that of a pyranometer.

Narrowband sensors

The definition of narrowband for this classification of instrument is vague. The widest bandwidth for instruments in this category is 10 nm FWHM. The narrowest bandwidth at present for commercial instruments is of the order of 2 nm FWHM. These sensors use one or more interference filters to obtain information about a portion of the UV spectra. The simplest instruments consist of a single filter, usually at a wavelength that can be measured by a good-quality, UV-enhanced photodiode. Wavelengths near 305 nm are typical for such instruments. The out-of-band rejection of such filters should be equal to, or greater than, 10–6 throughout the sensitive region of the detector. Higher quality instruments of this type either use Peltier cooling to maintain a constant temperature near 20°C or heaters to increase the instrument filter and diode temperatures to above normal ambient temperatures, usually 40°C. However, the latter alternative markedly reduces the life of interference filters. A modification of this type of instrument uses a photomultiplier tube instead of the photodiode. This allows the accurate measurement of energy from shorter wavelengths and lower intensities at all measured wavelengths. Manufacturers of instruments that use more than a single filter often provide a means of reconstructing the complete UV spectrum through modelled relationships developed around the measured wavelengths. Single wavelength instruments are used similarly to supplement the temporal and spatial resolution of more sophisticated spectrometer networks or for long-term accurate monitoring of specific bands to detect trends in the radiation environment. The construction of the instruments must be such that the radiation passes through the filter close to normal incidence so that wavelength shifting to shorter wavelengths is avoided. For



example, a 10° departure from normal incidence may cause a wavelength shift of 1.5 nm, depending on the refractive index of the filter. The effect of temperature can also be significant in altering the central wavelength by about 0.012 nm K–1 on very narrow filters (< 1 nm). Maintenance for simple one-filter instruments is similar to that of the broadband instruments. For instruments that have multiple filters in a moving wheel assembly, maintenance will include determining whether or not the filter wheel is properly aligned. Regular testing of the high-voltage power supply for photomultiplier-equipped instruments and checking the quality of the filters are also recommended.

Spectroradiometers

The most sophisticated commercial instruments are those that use either ruled or holographic gratings to disperse the incident energy into a spectrum. The low energy of the UV radiation compared with that in the visible spectrum necessitates a strong out-of-band rejection. This is achieved by using a double monochromator or by blocking filters, which transmit only UV radiation, in conjunction with a single monochromator. A photomultiplier tube is most commonly used to measure the output from the monochromator. Some less expensive instruments use photodiode or charge-coupled detector arrays. These instruments are unable to measure energy in the shortest wavelengths of the UV-B radiation and generally have more problems associated with stray light. Monitoring instruments are now available with several self-checking features. Electronic tests include checking the operation of the photomultiplier and the analogue-to-digital conversion. Tests to determine whether the optics of the instrument are functioning properly include testing the instrument by using internal mercury lamps and standard quartz halogen lamps. While these do not give absolute calibration data, they provide the operator with information on the stability of the instrument both with respect to spectral alignment and intensity. Commercially available instruments are constructed to provide measurement capabilities from approximately 290 nm to the mid-visible wavelengths, depending upon the type of construction and configuration. The bandwidth of the measurements is usually between 0.5 and 2.0 nm. The time required to complete a full scan across the grating depends upon both the wavelength resolution and the total spectrum to be measured. Scan times to perform a spectral scan across the UV region and part of the visible region (290 to 450 nm) with small wavelength steps range from less than 1 min per scan with modern fast scanning spectroradiometers to about 10 min for some types of conventional high-quality spectroradiometers. For routine monitoring of UV radiation it is recommended that the instrument either be environmentally protected or developed in such a manner that the energy incident on a receiver is transmitted to a spectrometer housed in a controlled climate. In both cases, care must be taken in the development of optics so that uniform responsivity is maintained down to low solar elevations. The maintenance of spectroradiometers designed for monitoring UV-B radiation requires well-trained on-site operators who will care for the instruments. It is crucial to follow the manufacturer's maintenance instructions because of the complexity of this instrument.

7.6.2 Calibration

The calibration of all sensors in the UV-B is both very important and difficult. Guidelines on the calibration of UV spectroradiometers and UV filter radiometers have been given in WMO (1996; 1999a; 1999b; 2001) and in the relevant scientific literature. Unlike pyranometers, which can be traced back to a standard set of instruments maintained at the WRR, these sensors must be either calibrated against light sources or against trap detectors. The latter, while promising in the long-term calibration of narrowband filter instruments, are still not readily available. Therefore, the use of standard lamps that are traceable to national standards laboratories remains the most common means of calibrating sensors measuring in the UV-B. Many countries do not have laboratories capable of characterizing lamps in the UV. In these countries, lamps are usually traceable to the National Institute of Standards and Technology in the United States or to the Physikalisch-Technische Bundesanstalt in Germany. It is estimated that a 5 per cent uncertainty in spot measurements at 300 nm can be achieved only under the most rigorous conditions at the present



time. The uncertainty of measurements of daily totals is about the same, using best practice. Fast changes in cloud cover and/or cloud optical depths at the measuring site require fast spectral scans and small sampling time steps between subsequent spectral scans, in order to obtain representative daily totals of spectral UV irradiance. Measurements of erythemal irradiance would have uncertainties typically in the range 5 to 20 per cent, depending on a number of factors, including the quality of the procedures and the equipment. The sources of error are discussed in the following paragraphs and include: (a) Uncertainties associated with standard lamps; (b) The stability of instruments, including the stability of the spectral filter and, in older instruments, temperature coefficients; (c) Cosine error effects; (d) The fact that the calibration of an instrument varies with wavelength, and that: (i) The spectrum of a standard lamp is not the same as the spectrum being measured; (ii) The spectrum of the UV-B irradiance being measured varies greatly with the solar zenith angle. The use of standard lamps as calibration sources leads to large uncertainties at the shortest wavelengths, even if the transfer of the calibration is perfect. For example, at 250 nm the uncertainty associated with the standard irradiance is of the order of 2.2 per cent. When transferred to a standard lamp, another 1 per cent uncertainty is added. At 350 nm, these uncertainties decrease to approximately 1.3 and 0.7 per cent, respectively. Consideration must also be given to the set-up and handling of standard lamps. Even variations as small as 1 per cent in the current, for example, can lead to errors in the UV flux of 10 per cent or more at the shortest wavelengths. Inaccurate distance measurements between the lamp and the instrument being calibrated can also lead to errors in the order of 1 per cent as the inverse square law applies to the calibration. 
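The distance sensitivity mentioned above follows directly from the inverse square law. The sketch below illustrates it; the 500 mm nominal lamp-to-instrument distance is an illustrative assumption (a common working distance for standard-lamp calibrations), not a figure from the text:

```python
# Relative irradiance error caused by a lamp-to-instrument distance error,
# from the inverse square law E ~ 1/d^2. Distances are illustrative.
def distance_error_pct(nominal_mm, actual_mm):
    """Percentage error in irradiance if the lamp actually sits at
    actual_mm while the calibration assumes nominal_mm."""
    return ((nominal_mm / actual_mm) ** 2 - 1.0) * 100.0

# A 2.5 mm positioning error at an assumed 500 mm distance gives about -1%:
err = distance_error_pct(500.0, 502.5)
```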
Webb and others (1994) discuss various aspects of uncertainty related to the use of standard lamps in the calibration of UV or visible spectroradiometers.
While broadband instruments are the least expensive to purchase, they are the most difficult to characterize. The problems associated with these instruments stem from: (a) the complex set of filters used to integrate the incoming radiation into the erythemal signal; and (b) the fact that the spectral nature of the atmosphere changes with air mass, ozone amount and other atmospheric constituents that are probably unknown to the instrument user. Even if the characterization of the instrument using calibrated lamp sources is perfect, the difference in spectral properties between the atmosphere and the laboratory source affects the uncertainty of the final measurements. The use of high-output deuterium lamps, a double monochromator and careful filter selection will help in the characterization of these instruments, but the number of laboratories capable of calibrating these devices is extremely limited.
Narrowband sensors are easier to characterize than broadband sensors because of the smaller variation in calibrating source intensities over the narrower wavelength pass-band. Trap detectors could potentially be used effectively for narrowband sensors, but have been used only in research projects to date. In recalibrating these instruments, whether they have a single filter or multiple filters, care must be taken to ensure that the spectral characteristics of the filters have not shifted over time.
Spectrometer calibration is straightforward, assuming that the instrument has been maintained between calibrations. Once again, it must be emphasized that the transfer from the standard lamp is difficult because of the care that must be taken in setting up the calibration (see above). The instrument should be calibrated in the same position as that in which the measurements are to be taken, as many spectroradiometers are adversely affected by changes in orientation. The calibration of a spectrometer should also include testing the accuracy of the wavelength positioning of the monochromator, checking for any changes in internal optical alignment and cleanliness, and an overall test of the electronics. Periodic testing of the out-of-band rejection, possibly by scanning a helium-cadmium laser (λ = 325 nm), is also advisable.
Most filter instrument manufacturers indicate a calibration frequency of once a year.
Spectroradiometers should be calibrated at least twice a year, and more frequently if they do not have the ability to perform self-checks on the photomultiplier output or the wavelength selection. In all cases, absolute calibrations of the instruments should be performed by qualified technicians at the sites on a regular schedule. The sources used for calibration must guarantee that the calibration can be traced back to absolute radiation standards kept at certified national metrological institutes. If the results of quality assurance routines applied at a site indicate a significant change in an instrument's performance or a change of its calibration level over time, an additional calibration may be needed between two regular calibrations. All calibrations should be based on the expertise and documentation available at the site and on guidelines and procedures such as those published in WMO (1996; 1999a; 1999b; 2001). In addition to absolute calibrations of instruments, intercomparisons between the sources used for calibration (for example, calibration lamps) and the measuring instruments are useful for detecting and removing inconsistencies or systematic differences between station instruments at different sites.



ANNEX 7.A
NOMENCLATURE OF RADIOMETRIC AND PHOTOMETRIC QUANTITIES

(1) Radiometric quantities

Name | Symbol | Unit | Relation | Remarks
Radiant energy | Q, (W) | J = W s | – | –
Radiant flux | Φ, (P) | W | Φ = dQ/dt | Power
Radiant flux density | (M), (E) | W m–2 | dΦ/dA = d²Q/(dA · dt) | Radiant flux of any origin crossing an area element
Radiant exitance | M | W m–2 | M = dΦ/dA | Radiant flux of any origin emerging from an area element
Irradiance | E | W m–2 | E = dΦ/dA | Radiant flux of any origin incident onto an area element
Radiance | L | W m–2 sr–1 | L = d²Φ/(dΩ · dA · cosθ) | The radiance is a conservative quantity in an optical system
Radiant exposure | H | J m–2 | H = dQ/dA = ∫ E dt (from t1 to t2) | May be used for daily sums of global radiation, etc.
Radiant intensity | I | W sr–1 | I = dΦ/dΩ | May be used only for radiation outgoing from “point sources”

(2) Photometric quantities

Name | Symbol | Unit
Quantity of light | Qv | lm s
Luminous flux | Φv | lm
Luminous exitance | Mv | lm m–2
Illuminance | Ev | lm m–2 = lx
Light exposure | Hv | lm m–2 s = lx s
Luminous intensity | Iv | lm sr–1 = cd
Luminance | Lv | lm m–2 sr–1 = cd m–2
Luminous flux density | (Mv; Ev) | lm m–2



(3) Optical characteristics

Characteristic | Symbol | Definition | Remarks
Emissivity | ε | ε = M/Mb, where Mb is the radiant exitance of a black body at the same temperature | ε = 1 for a black body
Absorptance | a | a = Φa/Φi | Φa and Φi are the absorbed and incident radiant flux, respectively
Reflectance | ρ | ρ = Φr/Φi | Φr is the reflected radiant flux
Transmittance | τ | τ = Φt/Φi | Φt is the radiant flux transmitted through a layer or a surface
Optical depth | δ | τ = e–δ | In the atmosphere, δ is defined in the vertical. Optical thickness equals δ/cosθ, where θ is the apparent zenith angle



ANNEX 7.B
METEOROLOGICAL RADIATION QUANTITIES, SYMBOLS AND DEFINITIONS

Quantity | Symbol | Relation | Definitions and remarks | Units

Downward radiation | Φ↓(a), Q↓, M↓, E↓, L↓, H↓ | Φ↓ = Φg↓ + Φl↓; Q↓ = Qg↓ + Ql↓; M↓ = Mg↓ + Ml↓; E↓ = Eg↓ + El↓; L↓ = Lg↓ + Ll↓; H↓ = Hg↓ + Hl↓ (g = global, l = long wave) | Downward radiant flux, radiant energy, radiant exitance(b), irradiance, radiance and radiant exposure for a specified time interval | W; J (W s); W m–2; W m–2; W m–2 sr–1; J m–2 per time interval

Upward radiation | Φ↑(a), Q↑, M↑, E↑, L↑, H↑ | Φ↑ = Φr↑ + Φl↑; Q↑ = Qr↑ + Ql↑; M↑ = Mr↑ + Ml↑; E↑ = Er↑ + El↑; L↑ = Lr↑ + Ll↑; H↑ = Hr↑ + Hl↑ | Upward radiant flux, radiant energy, radiant exitance, irradiance, radiance and radiant energy per unit area for a specified time interval | W; J (W s); W m–2; W m–2; W m–2 sr–1; J m–2 per time interval

Global radiation | Eg↓ | Eg↓ = E cosθ + Ed↓ | Hemispherical irradiance on a horizontal surface (θ = apparent solar zenith angle)(c); subscript d = diffuse | W m–2

Sky radiation: downward diffuse solar radiation | Φd↓, Qd↓, Md↓, Ed↓, Ld↓, Hd↓ | – | As for downward radiation | As for downward radiation

Upward/downward long-wave radiation | Φl↑, Φl↓; Ql↑, Ql↓; Ml↑, Ml↓; El↑, El↓; Hl↑, Hl↓ | – | Subscript l = long wave. If only atmospheric radiation is considered, the subscript a may be added, e.g. Φl,a↑ | As for downward radiation

Reflected solar radiation | Φr↑, Qr↑, Mr↑, Er↑, Lr↑, Hr↑ | – | Subscript r = reflected (the subscripts s (specular) and d (diffuse) may be used if a distinction is to be made between these two components) | As for downward radiation

Net radiation | Φ*, Q*, M*, E*, L*, H* | Φ* = Φ↓ – Φ↑; Q* = Q↓ – Q↑; M* = M↓ – M↑; E* = E↓ – E↑; L* = L↓ – L↑; H* = H↓ – H↑ | The subscript g or l is to be added to each of the symbols if only short-wave or long-wave net radiation quantities are considered | As for downward radiation

Direct solar radiation | E | E = E0 τ = E0 e–δ/cosθ | τ = atmospheric transmittance; δ = optical depth (vertical) | W m–2

Solar constant | E0 | – | Solar irradiance, normalized to mean sun-Earth distance | W m–2

Notes:
(a) The symbols – or + could be used instead of ↓ or ↑ (e.g. Φ+ ≡ Φ↑).
(b) Exitance is radiant flux emerging from the unit area; irradiance is radiant flux received per unit area. For flux density in general, the symbol M or E can be used. Although not specifically recommended, the symbol F, defined as Φ/area, may also be introduced.
(c) In the case of inclined surfaces, θ is the angle between the normal to the surface and the direction to the sun.
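The direct solar radiation relation in the table (Beer's law along the slant path, with the vertical optical depth stretched by 1/cosθ) can be sketched numerically. The function name and the example values below are illustrative only:

```python
from math import cos, exp, radians

def direct_irradiance(e0_w_m2, optical_depth, zenith_deg):
    """E = E0 * exp(-delta / cos(theta)): direct solar irradiance at the
    surface, with vertical optical depth delta and apparent zenith angle
    theta, in a plane-parallel, non-refracting atmosphere."""
    return e0_w_m2 * exp(-optical_depth / cos(radians(zenith_deg)))

# Illustrative values: delta = 0.30 and the sun at 60 deg zenith
# (relative air mass 1/cos(60 deg) = 2):
e = direct_irradiance(1367.0, 0.30, 60.0)   # about 750 W m-2
```

At 60° zenith the slant optical path is exactly twice the vertical one, which is why turbid days lose their direct beam so quickly as the sun sinks.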



ANNEX 7.C
SPECIFICATIONS FOR WORLD, REGIONAL AND NATIONAL RADIATION CENTRES

World Radiation Centres

The World Radiation Centres were designated by the Executive Committee at its thirtieth session in 1978, through Resolution 11 (EC-XXX), to serve as centres for the international calibration of meteorological radiation standards within the global network and to maintain the standard instruments for this purpose. A World Radiation Centre shall fulfil the following requirements. It should either:
1. (a) Possess and maintain a group of at least three stable absolute pyrheliometers, with a traceable 95 per cent uncertainty of less than 1 W m–2 with respect to the World Radiometric Reference; in stable, clear sun conditions with direct irradiances above 700 W m–2, 95 per cent of any single measurement of direct solar irradiance is expected to be within 4 W m–2 of the irradiance. The World Radiation Centre Davos is requested to maintain the World Standard Group for realization of the World Radiometric Reference;
(b) It shall undertake to train specialists in radiation;
(c) The staff of the centre should provide for continuity and include qualified scientists with wide experience in radiation;
(d) It shall take all steps necessary to ensure, at all times, the highest possible quality of its standards and testing equipment;
(e) It shall serve as a centre for the transfer of the World Radiometric Reference to the regional centres;
(f) It shall have the necessary laboratory and outdoor facilities for the simultaneous comparison of large numbers of instruments and for data reduction;
(g) It shall follow closely or initiate developments leading to improved standards and/or methods in meteorological radiometry;
(h) It shall be assessed by an international agency or by CIMO experts, at least every five years, to verify the traceability of the direct solar radiation measurements;
or:
2. (a) Provide and maintain an archive for solar radiation data from all the Member States of WMO;

(b) The staff of the centre should provide for continuity and include qualified scientists with wide experience in radiation;
(c) It shall take all steps necessary to ensure, at all times, the highest possible quality of, and access to, its database;
(d) It shall be assessed by an international agency or by CIMO experts, at least every five years.

Regional Radiation Centres

A Regional Radiation Centre is a centre designated by a regional association to serve as a centre for intraregional comparisons of radiation instruments within the Region and to maintain the standard instrument necessary for this purpose. A Regional Radiation Centre shall satisfy the following conditions before it is designated as such and shall continue to fulfil them after being designated:
(a) It shall possess and maintain a standard group of at least three stable pyrheliometers, with a traceable 95 per cent uncertainty of less than 1 W m–2 with respect to the World Standard Group; in stable, clear sun conditions with direct irradiances above 700 W m–2, 95 per cent of any single measurement of direct solar irradiance is expected to be within 6 W m–2 of the irradiance;
(b) One of the radiometers shall be compared through a WMO/CIMO-sanctioned comparison, or calibrated, at least once every five years against the World Standard Group;
(c) The standard radiometers shall be intercompared at least once a year to check the stability of the individual instruments. If the mean ratio, based on at least 100 measurements and with a 95 per cent uncertainty of less than 0.1 per cent, has changed by more than 0.2 per cent, and if the erroneous instrument cannot be identified, a recalibration at one of the World Radiation Centres must be performed prior to further use as a standard;
(d) It shall have, or have access to, the necessary facilities and laboratory equipment for checking and maintaining the accuracy of the auxiliary measuring equipment;

(e) It shall provide the necessary outdoor facilities for the simultaneous comparison of national standard radiometers from the Region;
(f) The staff of the centre should provide for continuity and include a qualified scientist with wide experience in radiation;
(g) It shall be assessed by a national or international agency or by CIMO experts, at least every five years, to verify the traceability of the direct solar radiation measurements.

National Radiation Centres

A National Radiation Centre is a centre designated at the national level to serve as a centre for the calibration, standardization and checking of the instruments used in the national network of radiation stations and for maintaining the national standard instrument necessary for this purpose. A National Radiation Centre shall satisfy the following requirements:
(a) It shall possess and maintain at least two pyrheliometers for use as a national reference for the calibration of radiation instruments in the national network of radiation stations, with a traceable 95 per cent uncertainty of less than 4 W m–2 with respect to the regional representation of the World Radiometric Reference; in stable, clear sun conditions with direct irradiances above 700 W m–2, 95 per cent of any single measurement of direct solar irradiance is expected to be within 20 W m–2 of the irradiance;
(b) One of the national standard radiometers shall be compared with a regional standard at least once every five years;
(c) The national standard radiometers shall be intercompared at least once a year to check the stability of the individual instruments. If the mean ratio, based on at least 100 measurements and with a 95 per cent uncertainty of less than 0.2 per cent, has changed by more than 0.6 per cent, and if the erroneous instrument cannot be identified, a recalibration at one of the Regional Radiation Centres must be performed prior to further use as a standard;
(d) It shall have, or have access to, the necessary facilities and equipment for checking the performance of the instruments used in the national network;
(e) The staff of the centre should provide for continuity and include a qualified scientist with experience in radiation.
National Radiation Centres shall be responsible for preparing and keeping up to date all necessary technical information for the operation and maintenance of the national network of radiation stations. Arrangements should be made for the collection of the results of all radiation measurements taken in the national network of radiation stations, and for the regular scrutiny of these results with a view to ensuring their accuracy and reliability. If this work is done by some other body, the National Radiation Centre shall maintain close liaison with the body in question.

List of World and Regional Radiation Centres

World Radiation Centres:
Davos (Switzerland)
St Petersburg² (Russian Federation)

² Mainly operated as a World Radiation Data Centre under the Global Atmosphere Watch Strategic Plan.

Regional Radiation Centres:
Region I (Africa): Cairo (Egypt), Khartoum (Sudan), Kinshasa (Democratic Republic of the Congo), Lagos (Nigeria), Tamanrasset (Algeria), Tunis (Tunisia)
Region II (Asia): Pune (India), Tokyo (Japan)
Region III (South America): Buenos Aires (Argentina), Santiago (Chile), Huayao (Peru)
Region IV (North America, Central America and the Caribbean): Toronto (Canada), Boulder (United States), Mexico City/Colima (Mexico)
Region V (South-West Pacific): Melbourne (Australia)
Region VI (Europe): Budapest (Hungary), Davos (Switzerland), St Petersburg (Russian Federation), Norrköping (Sweden), Trappes/Carpentras (France), Uccle (Belgium), Lindenberg (Germany)
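The annual stability check for standard radiometers described above (a mean ratio from at least 100 measurements, a 95 per cent uncertainty limit, and a drift limit relative to the previous calibration) can be sketched as follows. The function names are illustrative, and the 95 per cent uncertainty is approximated as twice the standard error of the mean, which is an assumption rather than a prescription of the Guide:

```python
from math import sqrt
from statistics import mean, stdev

def ratio_check(ref_series, test_series):
    """Mean test/reference irradiance ratio and an approximate 95 per cent
    uncertainty of that mean (2 standard errors), from paired readings."""
    ratios = [t / r for t, r in zip(test_series, ref_series)]
    m = mean(ratios)
    u95 = 2.0 * stdev(ratios) / sqrt(len(ratios))
    return m, u95

def needs_recalibration(prev_ratio, curr_ratio, u95,
                        drift_limit=0.006, u95_limit=0.002):
    """True when the ratio is known well enough (u95 below the limit) and has
    drifted by more than the limit (0.6 per cent for national standards;
    0.2 per cent / 0.1 per cent would apply at regional centres)."""
    return u95 < u95_limit and abs(curr_ratio - prev_ratio) > drift_limit
```

Usage: compare this year's mean ratio against last year's; only a drift that exceeds both the limit and the statistical noise should trigger a recalibration at the next-higher centre.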



ANNEX 7.D
USEFUL FORMULAE

General
All astronomical data can be derived from tables in the nautical almanacs or ephemeris tables. However, approximate formulae are presented here for practical use. Michalsky (1988a, 1988b) compared several sets of approximate formulae and found that the best are the equations presented as convenient approximations in the Astronomical Almanac (United States Naval Observatory, 1993). They are reproduced here for convenience.

The position of the sun
To determine the actual location of the sun, the following input values are required:
(a) Year;
(b) Day of year (for example, 1 February is day 32);
(c) Fractional hour in universal time (UT) (for example, hours + minute/60 + number of hours from Greenwich);
(d) Latitude in degrees (north positive);
(e) Longitude in degrees (east positive).
The Astronomical Almanac reckons the Julian date (JD) from a prime JD set at noon UT on 1 January 2000; this JD is 2 451 545.0. The JD to be determined can be found from:

JD = 2 432 916.5 + delta · 365 + leap + day + hour/24

where:
delta = year – 1949
leap = integer portion of (delta/4)
The constant 2 432 916.5 is the JD for 0000 UT on 1 January 1949 and is used simply for convenience.
Using the above time, the ecliptic coordinates can be calculated according to the following steps (L, g and l are in degrees):
(a) n = JD – 2 451 545;
(b) L (mean longitude) = 280.460 + 0.985 647 4 · n (0 ≤ L < 360°);
(c) g (mean anomaly) = 357.528 + 0.985 600 3 · n (0 ≤ g < 360°);
(d) l (ecliptic longitude) = L + 1.915 · sin (g) + 0.020 · sin (2g) (0 ≤ l < 360°);
(e) ep (obliquity of the ecliptic) = 23.439 – 0.000 000 4 · n (degrees).

It should be noted that the specifications indicate that all multiples of 360° should be added or subtracted until the final value falls within the specified range. From the above equations, the celestial coordinates can be calculated – the right ascension (ra) and the declination (dec) – by:

tan (ra) = cos (ep) · sin (l)/cos (l)
sin (dec) = sin (ep) · sin (l)

To convert from celestial coordinates to local coordinates, that is, from right ascension and declination to azimuth (A) and altitude (a), it is convenient to use the local hour angle (ha). This is calculated by first determining the Greenwich mean sidereal time (GMST, in hours) and the local mean sidereal time (LMST, in hours):

GMST = 6.697 375 + 0.065 709 824 2 · n + hour (UT)

where 0 ≤ GMST < 24 h;

LMST = GMST + (east longitude)/(15° h–1)

From the LMST, the hour angle (ha) is calculated as (ha and ra are in degrees):

ha = 15 · LMST – ra (–180° ≤ ha < 180°)

Before the sun reaches the meridian, the hour angle is negative. Caution should be observed when using this term, because it is opposite to what some solar researchers use. The calculations of the solar elevation (el) and the solar azimuth (az) follow (az and el are in degrees): sin (el) = sin (dec) · sin (lat) + cos (dec) · cos (lat) · cos (ha) and: sin (az) = –cos (dec) · sin (ha)/cos (el)



cos (az) = (sin (dec) – sin (el) · sin (lat))/(cos (el) · cos (lat))

where the azimuth is from 0° north, positive through east.
To take into account atmospheric refraction, and to derive the apparent solar elevation (h) or the apparent solar zenith angle, the Astronomical Almanac proposes the following equations:
(a) A simple expression for the refraction r for zenith angles less than 75°:

r = 0.004 52° · P tan z/(273 + T)

where z is the zenith distance in degrees; P is the pressure in hectopascals; and T is the temperature in °C;
(b) For zenith angles greater than 75° and altitudes below 15°, the following approximate formula is recommended:

r = P (0.159 4 + 0.019 6 a + 0.000 02 a²)/[(273 + T)(1 + 0.505 a + 0.084 5 a²)]

where a is the elevation (90° – z) in degrees.
The apparent solar elevation is then h = el + r, and the apparent solar zenith angle z0 = z – r.

Sun-Earth distance
The present-day eccentricity of the orbit of the Earth around the sun is small, but significant to the extent that the square of the sun-Earth distance R and, therefore, the solar irradiance at the Earth, varies by 3.3 per cent from the mean. In astronomical units (AU), to an uncertainty of 10–4:

R = 1.000 14 – 0.016 71 · cos (g) – 0.000 14 · cos (2g)

where g is the mean anomaly defined above. The solar eccentricity correction is defined as the square of the ratio of the mean sun-Earth distance (R0 = 1 AU) to the actual sun-Earth distance:

E0 = (R0/R)²

Air mass
In calculations of extinction, the path length through the atmosphere, which is called the absolute optical air mass, must be known. The relative air mass for an arbitrary atmospheric constituent, m, is the ratio of the air mass along the slant path to the air mass in the vertical direction; hence, it is a normalizing factor. In a plane-parallel, non-refracting atmosphere, m equals 1/sin h0 or 1/cos z0.

Local apparent time
The mean solar time, on which our civil time is based, is derived from the motion of an imaginary body called the mean sun, which is considered as moving at uniform speed in the celestial equator at a rate equal to the average rate of movement of the true sun. The difference between this fixed time reference and the variable local apparent time is called the equation of time, Eq, which may be positive or negative depending on the relative positions of the true sun and the mean sun. Thus:

LAT = LMT + Eq = CT + LC + Eq

where LAT is the local apparent time (also known as TST, true solar time); LMT is the local mean time; CT is the civil time (referred to a standard meridian, and thus also called standard time); and LC is the longitude correction (4 min for every degree). LC is positive if the local meridian is east of the standard meridian, and vice versa.
For the computation of Eq, in minutes, the following approximation may be used:

Eq = 0.017 2 + 0.428 1 cos Θ0 – 7.351 5 sin Θ0 – 3.349 5 cos 2Θ0 – 9.361 9 sin 2Θ0

where Θ0 = 2πdn/365 in radians, or Θ0 = 360° dn/365 in degrees, and where dn is the day number, ranging from 0 on 1 January to 364 on 31 December for a normal year or to 365 for a leap year. The maximum error of this approximation is 35 s (which is excessive for some purposes, such as air-mass determination).
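The steps above can be collected into a single routine. This is a sketch following the Astronomical Almanac approximations reproduced in this annex; the function names are illustrative, and the accuracy is only that of the underlying approximations (valid roughly 1950-2050):

```python
from math import asin, atan2, cos, degrees, pi, radians, sin, tan

def solar_position(year, day_of_year, hour_ut, lat_deg, lon_deg):
    """Approximate solar elevation and azimuth (degrees, azimuth from north
    through east), following the Astronomical Almanac formulae above."""
    delta = year - 1949
    leap = delta // 4                                   # integer part of delta/4
    jd = 2432916.5 + 365.0 * delta + leap + day_of_year + hour_ut / 24.0
    n = jd - 2451545.0

    mean_lon = (280.460 + 0.9856474 * n) % 360.0        # L, mean longitude
    g = radians((357.528 + 0.9856003 * n) % 360.0)      # mean anomaly
    lam = radians((mean_lon + 1.915 * sin(g) + 0.020 * sin(2.0 * g)) % 360.0)
    ep = radians(23.439 - 0.0000004 * n)                # obliquity of the ecliptic

    ra = atan2(cos(ep) * sin(lam), cos(lam)) % (2.0 * pi)   # right ascension
    dec = asin(sin(ep) * sin(lam))                          # declination

    gmst = (6.697375 + 0.0657098242 * n + hour_ut) % 24.0
    lmst = (gmst + lon_deg / 15.0) % 24.0
    ha = radians((15.0 * lmst - degrees(ra) + 180.0) % 360.0 - 180.0)

    lat = radians(lat_deg)
    el = asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
    az = atan2(-cos(dec) * sin(ha),
               (sin(dec) - sin(el) * sin(lat)) / cos(lat)) % (2.0 * pi)
    return degrees(el), degrees(az)

def refraction_deg(zenith_deg, pressure_hpa=1013.25, temp_c=15.0):
    """Simple refraction correction r (degrees), for zenith angles below 75 deg."""
    return 0.00452 * pressure_hpa * tan(radians(zenith_deg)) / (273.0 + temp_c)

def equation_of_time_min(day_number):
    """Eq (minutes); day_number runs from 0 on 1 January to 364 (365 leap)."""
    t = 2.0 * pi * day_number / 365.0
    return (0.0172 + 0.4281 * cos(t) - 7.3515 * sin(t)
            - 3.3495 * cos(2.0 * t) - 9.3619 * sin(2.0 * t))

def sun_earth_factor(g_deg):
    """E0 = (R0/R)**2 from the mean anomaly g (degrees), with R in AU."""
    g = radians(g_deg)
    r = 1.00014 - 0.01671 * cos(g) - 0.00014 * cos(2.0 * g)
    return (1.0 / r) ** 2
```

As a plausibility check, near the March equinox at the Greenwich meridian and the Equator, `solar_position` gives an elevation close to 90° shortly after 12 UT, and `equation_of_time_min` reproduces the familiar extremes of roughly -14 min in mid-February and +16 min in early November.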



ANNEX 7.E
DIFFUSE SKY RADIATION – CORRECTION FOR A SHADING RING

The shading ring is mounted on two rails oriented parallel to the Earth’s axis, in such a way that the centre of the ring coincides with the pyranometer during the equinox. The diameter of the ring ranges from 0.5 to 1.5 m and the ratio of the width to the radius b/r ranges from 0.09 to 0.35. The adjustment of the ring to the solar declination is made by sliding the ring along the rails. The length of the shading band and the height of the mounting of the rails relative to the pyranometer are determined from the solar position during the summer solstice; the higher the latitude, the longer the shadow band and the lower the rails.

Several authors, for example, Drummond (1956), Dehne (1980) and Le Baron, Peterson and Dirmhirn (1980), have proposed formulae for operational corrections to the sky radiation accounting for the part not measured due to the shadow band. For a ring with b/r < 0.2, the radiation Dv lost during a day can be expressed as:

Dv ≈ (b/r) cos³δ ∫ L(t) · sin h(t) dt (integrated from trise to tset)

where δ is the declination of the sun; t is the hour angle of the sun; trise and tset are the hour angles at sunrise and sunset, respectively, for a mathematical horizon (Φ being the geographic latitude, trise = –tset and cos tset = –tan Φ · tan δ); L(t) is the sky radiance during the day; and h(t) is the solar elevation.
With this expression and some assumptions on the sky radiance, a correction factor f can be determined:

f = 1/(1 – Dv/D)

D being the unobscured sky radiation. For an overcast sky with a radiance constant in time and direction (for which D = πL), this gives:

(Dv/D)overcast = (b/πr) cos³δ [(tset – trise) · sin Φ · sin δ + cos Φ · cos δ · (sin tset – sin trise)]

In the figure below, an example of this correction factor is given for both a clear and an overcast sky, compared with the corresponding empirical curves. It is evident that the deviations from the theoretical curves depend on climatological factors of the station and should be determined experimentally by comparing the instrument equipped with a shading ring with an instrument shaded by a continuously traced disc. If no experimental data are available for the station, data computed for the overcast case with the corresponding b/r should be used.
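A hedged sketch of the overcast-sky correction factor f = 1/(1 – Dv/D), assuming an isotropic sky radiance (D = πL, Drummond's geometry); the function name and the clamping for polar day/night are illustrative additions:

```python
from math import acos, cos, pi, radians, sin, tan

def shading_ring_correction(lat_deg, dec_deg, b_over_r):
    """Overcast-sky correction factor f for a shading band of relative width
    b/r, assuming sky radiance constant in time and direction (D = pi * L)."""
    phi = radians(lat_deg)
    dec = radians(dec_deg)
    # hour angle of sunset for a mathematical horizon: cos(t_set) = -tan(phi)*tan(dec)
    x = max(-1.0, min(1.0, -tan(phi) * tan(dec)))   # clamp for polar day/night
    t_set = acos(x)                                 # radians; t_rise = -t_set
    lost = (b_over_r / pi) * cos(dec) ** 3 * (
        2.0 * t_set * sin(phi) * sin(dec) +
        2.0 * sin(t_set) * cos(phi) * cos(dec))     # Dv/D for the overcast case
    return 1.0 / (1.0 - lost)

# Illustrative case: b/r = 0.169 (as in the figure), a 45 deg latitude station:
f_equinox = shading_ring_correction(45.0, 0.0, 0.169)    # roughly 1.08
f_solstice = shading_ring_correction(45.0, 23.5, 0.169)  # larger, band higher in sky
```

The resulting values of order 1.05 to 1.12 are consistent with the range of the calculated curves in the figure; real stations should still derive the factor empirically, as the text advises.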

[Figure: Comparison of calculated and empirically determined correction factors for a shading ring, with b/r = 0.169; f indicates calculated curves and F indicates empirical ones (after Dehne, 1980). The curves f clear, F clear, f overcast and F overcast are plotted against solar declination from –23.5° to +23.5°, with the correction factor ranging from 1.00 to about 1.15.]



REFERENCES AND FURTHER READING

Bass, A.M. and R.J. Paur, 1985: The ultraviolet cross-sections of ozone: I. The measurements. In: Atmospheric Ozone (C.S. Zerefos and A. Ghazi, eds), Reidel, Dordrecht, pp. 606–610.
Bodhaine, B.A., N.B. Wood, E.G. Dutton and J.R. Slusser, 1999: On Rayleigh optical depth calculations. Journal of Atmospheric and Oceanic Technology, 16, pp. 1854–1861.
Dehne, K., 1980: Vorschlag zur standardisierten Reduktion der Daten verschiedener nationaler Himmelsstrahlungs-Messnetze. Annalen der Meteorologie (Neue Folge), 16, pp. 57–59.
Drummond, A.J., 1956: On the measurement of sky radiation. Archiv für Meteorologie, Geophysik und Bioklimatologie, Serie B, 7, pp. 413–436.
Forgan, B.W., 1996: A new method for calibrating reference and field pyranometers. Journal of Atmospheric and Oceanic Technology, 13, pp. 638–645.
Fröhlich, C. and G.E. Shaw, 1980: New determination of Rayleigh scattering in the terrestrial atmosphere. Applied Optics, Volume 19, Issue 11, pp. 1773–1775.
Frouin, R., P.-Y. Deschamps and P. Lecomte, 1990: Determination from space of atmospheric total water vapour amounts by differential absorption near 940 nm: Theory and airborne verification. Journal of Applied Meteorology, 29, pp. 448–460.
International Commission on Illumination, 1987: Methods of Characterizing Illuminance Meters and Luminance Meters. ICI-No. 69-1987.
International Commission on Illumination, 1994: Guide to Recommended Practice of Daylight Measurement. ICI-No. 108-1994.
International Electrotechnical Commission, 1987: International Electrotechnical Vocabulary. Chapter 845: Lighting, IEC 60050-845.
International Organization for Standardization, 1990a: Solar Energy – Specification and Classification of Instruments for Measuring Hemispherical Solar and Direct Solar Radiation. ISO 9060.
International Organization for Standardization, 1990b: Solar Energy – Calibration of Field Pyrheliometers by Comparison to a Reference Pyrheliometer. ISO 9059.
International Organization for Standardization, 1990c: Solar Energy – Field Pyranometers – Recommended Practice for Use. ISO/TR 9901.
International Organization for Standardization, 1992: Solar Energy – Calibration of Field Pyranometers by Comparison to a Reference Pyranometer. ISO 9847.
International Organization for Standardization, 1993: Solar Energy – Calibration of a Pyranometer Using a Pyrheliometer. ISO 9846.
International Organization for Standardization, 1995: Guide to the Expression of Uncertainty in Measurement. Geneva.
Kerr, J.B. and T.C. McElroy, 1993: Evidence for large upward trends of ultraviolet-B radiation linked to ozone depletion. Science, 262, pp. 1032–1034.
Kuhn, M., 1972: Die spektrale Transparenz der antarktischen Atmosphäre. Teil I: Meßinstrumente und Rechenmethoden. Archiv für Meteorologie, Geophysik und Bioklimatologie, Serie B, 20, pp. 207–248.
Lal, M., 1972: On the evaluation of atmospheric turbidity parameters from actinometric data. Geofísica Internacional, Volume 12, Number 2, pp. 1–11.
Le Baron, B.A., W.A. Peterson and I. Dirmhirn, 1980: Corrections for diffuse irradiance measured with shadowbands. Solar Energy, 25, pp. 1–13.
McKinlay, A.F. and B.L. Diffey, 1987: A reference action spectrum for ultraviolet induced erythema in human skin. In: Human Exposure to Ultraviolet Radiation: Risks and Regulations (W.F. Passchier and B.F.M. Bosnjakovic, eds), Elsevier, Amsterdam, pp. 83–87.
Michalsky, J.J., 1988a: The astronomical almanac's algorithm for approximate solar position (1950–2050). Solar Energy, Volume 40, Number 3, pp. 227–235.
Michalsky, J.J., 1988b: Errata. The astronomical almanac's algorithm for approximate solar position (1950–2050). Solar Energy, Volume 41, Number 1.
Parrish, J.A., K.F. Jaenicke and R.R. Anderson, 1982: Erythema and melanogenesis action spectra of normal human skin. Photochemistry and Photobiology, 36, pp. 187–191.
Rüedi, I., 2001: International Pyrheliometer Comparison IPC-IX, Results and Symposium. MeteoSwiss Working Report No. 197, Davos and Zurich.
Schneider, W., G.K. Moortgat, G.S. Tyndall and J.P. Burrows, 1987: Absorption cross-sections of NO2 in the UV and visible region (200–700 nm) at 298 K. Journal of Photochemistry and Photobiology, A: Chemistry, 40, pp. 195–217.
United States Naval Observatory, 1993: The Astronomical Almanac. Nautical Almanac Office, Washington DC.
Vigroux, E., 1953: Contribution à l'étude expérimentale de l'absorption de l'ozone. Annales de Physique, 8, pp. 709–762.
Webb, A.R., B.G. Gardiner, M. Blumthaler and P. Foster, 1994: A laboratory investigation of two ultraviolet spectroradiometers. Photochemistry and Photobiology, Volume 60, Number 1, pp. 84–90.
World Meteorological Organization, 1978: International Operations Handbook for Measurement of Background Atmospheric Pollution. WMO-No. 491, Geneva.
World Meteorological Organization, 1986a: Revised Instruction Manual on Radiation Instruments and Measurements. World Climate Research Programme Publications Series No. 7, WMO/TD-No. 149, Geneva.
World Meteorological Organization, 1986b: Recent Progress in Sunphotometry: Determination of the Aerosol Optical Depth. Environmental Pollution Monitoring and Research Programme Report No. 43, WMO/TD-No. 143, Geneva.
World Meteorological Organization, 1993a: Report of the WMO Workshop on the Measurement of Atmospheric Optical Depth and Turbidity (Silver Spring, United States, 6–10 December 1993). Global Atmosphere Watch Report No. 101, WMO/TD-No. 659, Geneva.
World Meteorological Organization, 1993b: Report of the Second Meeting of the Ozone Research Managers of the Parties to the Vienna Convention for the Protection of the Ozone Layer (Geneva, 10–12 March 1993). WMO Global Ozone Research and Monitoring Project Report No. 32, Geneva.
World Meteorological Organization, 1996: WMO/UMAP Workshop on Broad-band UV Radiometers (Garmisch-Partenkirchen, Germany, 22–23 April 1996). Global Atmosphere Watch Report No. 120, WMO/TD-No. 894, Geneva.
World Meteorological Organization, 1998: Baseline Surface Radiation Network (BSRN): Operations Manual. WMO/TD-No. 879, Geneva.
World Meteorological Organization, 1999a: Guidelines for Site Quality Control of UV Monitoring. Global Atmosphere Watch Report No. 126, WMO/TD-No. 884, Geneva.
World Meteorological Organization, 1999b: Report of the LAP/COST/WMO Intercomparison of Erythemal Radiometers (Thessaloniki, Greece, 13–23 September 1999). Global Atmosphere Watch Report No. 141, WMO/TD-No. 1051, Geneva.
World Meteorological Organization, 2001: Instruments to Measure Solar Ultraviolet Radiation. Part 1: Spectral Instruments. Global Atmosphere Watch Report No. 125, WMO/TD-No. 1066, Geneva.
World Meteorological Organization, 2005: WMO/GAW Experts Workshop on a Global Surface-Based Network for Long Term Observations of Column Aerosol Optical Properties (Davos, Switzerland, 8–10 March 2004). Global Atmosphere Watch Report No. 162, WMO/TD-No. 1287, Geneva.
Young, A.T., 1981: On the Rayleigh-scattering optical depth of the atmosphere. Journal of Applied Meteorology, 20, pp. 328–330.


MEASUREMENT OF SUNSHINE DURATION



The term “sunshine” is associated with the brightness of the solar disc exceeding the background of diffuse sky light, or, as is better observed by the human eye, with the appearance of shadows behind illuminated objects. As such, the term is related more to visual radiation than to energy radiated at other wavelengths, although both aspects are inseparable. In practice, however, the first definition was established directly by the relatively simple Campbell-Stokes sunshine recorder (see section 8.2.3), which detects sunshine if the beam of solar energy concentrated by a special lens is able to burn a special dark paper card. This recorder was already introduced in meteorological stations in 1880 and is still used in many networks. Since no international regulations on the dimensions and quality of the special parts were established, applying different laws of the principle gave different sunshine duration values. In order to homogenize the data of the worldwide network for sunshine duration, a special design of the Campbell-Stokes sunshine recorder, the socalled interim reference sunshine recorder (IRSR), was recommended as the reference (WMO, 1962). The improvement made by this “hardware definition” was effective only during the interim period needed for finding a precise physical definition allowing for both designing automatic sunshine recorders and approximating the “scale” represented by the IRSR as near as possible. With regard to the latter, the settlement of a direct solar threshold irradiance corresponding to the burning threshold of the Campbell-Stokes recorders was strongly advised. Investigations at different stations showed that the threshold irradiance for burning the card varied between 70 and 280 W m– 2 (Bider, 1958; Baumgartner, 1979). 
However, further investigations, especially performed with the IRSR in France, resulted in a mean value of 120 W m–2, which was finally proposed as the threshold of direct solar irradiance to distinguish bright sunshine.1 With regard to the spread of test results, a threshold accuracy of 20 per cent in instrument specifications is accepted. A pyrheliometer was

recommended as the reference sensor for the detection of the threshold irradiance. For future refinement of the reference, the settlement of the field-of-view angle of the pyrheliometer seems to be necessary (see Part I, Chapter 7, section 7.2).

8.1.1 Definition

According to WMO (2003),2 sunshine duration during a given period is defined as the sum of the sub-periods for which the direct solar irradiance exceeds 120 W m–2.

8.1.2 Units and scales

The physical quantity of sunshine duration (SD) is, evidently, time. The units used are seconds or hours. For climatological purposes, derived terms such as “hours per day” or “daily sunshine hours” are used, as well as percentage quantities such as “relative daily sunshine duration”, where SD may be related to the extraterrestrial possible or to the maximum possible sunshine duration (SD0 and SDmax, respectively). The measurement period (day, decade, month, year, and so on) is an important addendum to the unit.

8.1.3 Meteorological requirements

Performance requirements are given in Part I, Chapter 1. Hours of sunshine should be measured with an uncertainty of ±0.1 h and a resolution of 0.1 h. Since the number and steepness of the threshold transitions of direct solar radiation determine the possible uncertainty of sunshine duration, the meteorological requirements on sunshine recorders are essentially correlated with the climatological cloudiness conditions (WMO, 1985). In the case of a cloudless sky, only the hourly values around sunrise or sunset can (depending on the amount of dust) be erroneous because of an imperfectly adjusted threshold or spectral dependencies.
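For sampled data, the definition and requirements above reduce to counting the sampling intervals on which the direct solar irradiance exceeds 120 W m–2, and relating the total to the astronomical day length SD0. The following minimal Python sketch illustrates this; the function names and sample values are illustrative only, and the declination uses Cooper's simple approximation rather than anything prescribed by this Guide:

```python
import math

THRESHOLD = 120.0  # W m-2, WMO threshold of direct solar irradiance

def sunshine_duration(direct_irradiance, step_s):
    """Sunshine duration (hours) from equally spaced samples of direct
    solar irradiance (W m-2) taken every step_s seconds.  With pyranometric
    data, the direct irradiance can first be derived as (G - D)/cos(zenith)."""
    above = sum(1 for sample in direct_irradiance if sample > THRESHOLD)
    return above * step_s / 3600.0

def max_sunshine_duration(lat_deg, day_of_year):
    """Astronomical day length SD0 (hours), i.e. the maximum possible
    sunshine duration, using Cooper's solar-declination approximation."""
    decl = math.radians(23.45) * math.sin(2.0 * math.pi * (284 + day_of_year) / 365.0)
    cos_h0 = -math.tan(math.radians(lat_deg)) * math.tan(decl)
    cos_h0 = max(-1.0, min(1.0, cos_h0))   # clamp for polar day and night
    return 2.0 * math.degrees(math.acos(cos_h0)) / 15.0

# One hour of 1 min samples: 40 min of bright sun, then 20 min of cloud.
samples = [650.0] * 40 + [80.0] * 20
sd = sunshine_duration(samples, 60)            # 2/3 h
sd_rel = sd / max_sunshine_duration(50.0, 81)  # relative SD (near the equinox, SD0 is about 12 h)
```

The threshold comparison mirrors the pyrheliometric method recommended as the reference; the day-length term supplies the denominator of the relative sunshine duration defined in section 8.1.2.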

1 Recommended by the Commission for Instruments and Methods of Observation at its eighth session (1981) through Recommendation 10 (CIMO-VIII).

2 Recommended by the Commission for Instruments and Methods of Observation at its tenth session (1989) through Recommendation 16 (CIMO-X).



In the case of scattered clouds (cumulus, stratocumulus), the steepness of the transition is high and the irradiance measured from the cloudy sky with a pyrheliometer is generally lower than 80 W m–2, which means low requirements on the threshold adjustment. However, the field-of-view angle of the recorder can influence the result if bright cloud clusters are near the sun. The highest precision is required if high cloud layers (cirrus, altostratus) with small variations of the optical thickness attenuate the direct solar irradiance around the level of about 120 W m–2. The field-of-view angle is effective, as well as the precision of the threshold adjustment. The requirements on sunshine recorders vary, depending on site and season, according to the dominant cloud formation. The latter can be roughly described by three ranges of relative daily sunshine duration SD/SD0 (see section 8.1.2), namely “cloudy sky” (0 ≤ SD/SD0 < 0.3), “scattered clouds” (0.3 ≤ SD/SD0 < 0.7) and “fair weather” (0.7 ≤ SD/SD0 ≤ 1.0). The results for dominantly clouded sky generally show the highest percentage of deviations from the reference.

Application of sunshine duration data

One of the first applications of SD data was to characterize the climate of sites, especially of health resorts. This also takes into account the psychological effect of strong solar light on human well-being. It is still used by some local authorities to promote tourist destinations. The description of past weather conditions, for instance of a month, usually contains the course of daily SD data. For these fields of application, an uncertainty of about 10 per cent of mean SD values seemed to be acceptable over many decades.

Correlations to other meteorological variables

The most important correlation between sunshine duration and global solar radiation G is described by the so-called Ångström formula:

G/G0 = a + b · (SD/SD0)  (8.1)

where G/G0 is the so-called clearness index (related to the extraterrestrial global irradiation), SD/SD0 is the corresponding sunshine duration (related to the extraterrestrial possible SD value), and a and b are constants which have to be determined monthly. The uncertainty of the monthly means of daily global irradiation derived in this way from Campbell-Stokes data was found to be lower than 10 per cent in summer, rising to 30 per cent in winter, as reported for German stations (Golchert, 1981). The Ångström formula implies an inverse correlation between cloud amount and sunshine duration. This relationship is not fulfilled for high and thin cloudiness, and obviously not for cloud fields which do not cover the sun, so that the degree of inverse correlation depends first of all on the magnitude of the statistical data collected (Stanghellini, 1981; Angell, 1990). Improving the accuracy of SD data should reduce the scatter of the statistical results, but even perfect data can generate sufficient results only on a statistical basis.

Requirement of automated records

Since electrical power is available in an increasing number of places, the advantage of the Campbell-Stokes recorder of being self-sufficient is of decreasing importance. Furthermore, the daily maintenance requirement of replacing the burn card makes the use of Campbell-Stokes recorders problematic at automatic weather stations or at stations with reduced numbers of personnel. Another essential reason to replace Campbell-Stokes recorders with new automated measurement procedures is to avoid the expense of visual evaluations and to obtain more precise results on data carriers permitting direct computerized data processing.

8.1.4 Measurement methods

The principles used for measuring sunshine duration, and the pertinent types of instruments, are briefly listed below:
(a) Pyrheliometric method: Pyrheliometric detection of the transition of direct solar irradiance through the 120 W m–2 threshold (according to Recommendation 10 (CIMO-VIII)). Duration values are readable from time counters triggered by the appropriate upward and downward transitions. Type of instrument: a pyrheliometer combined with an electronic or computerized threshold discriminator and a time-counting device;




(b) Pyranometric method:
(i) Pyranometric measurement of global (G) and diffuse (D) solar irradiance to derive the direct solar irradiance for comparison with the WMO threshold value, and further as in (a) above. Type of instrument: a radiometer system of two fitted pyranometers and one sunshade device, combined with an electronic or computerized threshold discriminator and a time-counting device;
(ii) Pyranometric measurement of global (G) solar irradiance to roughly estimate sunshine duration. Type of instrument: a pyranometer combined with an electronic or computerized device which is able to deliver 10 min means as well as minimum and maximum global (G) solar irradiance within those 10 min;
(c) Burn method: Threshold effect of burning paper caused by focused direct solar radiation (heat effect of absorbed solar energy). The duration is read from the total burn length. Type of instrument: Campbell-Stokes sunshine recorders, especially the recommended version, namely the IRSR (see section 8.2);
(d) Contrast method: Discrimination of the insolation contrasts between several sensors in different positions relative to the sun, with the aid of a specific difference of the sensor output signals which corresponds to an equivalent of the WMO recommended threshold (determined by comparisons with reference SD values), and further as in (b) above. Type of instrument: specially designed multi-sensor detectors (mostly equipped with photovoltaic cells) combined with an electronic discriminator and a time counter;
(e) Scanning method: Discrimination of the irradiance received from continuously scanned, small sky sectors with regard to an equivalent of the WMO recommended irradiance threshold (determined by comparisons with reference SD values). Type of instrument: one-sensor receivers equipped with a special scanning device (a rotating diaphragm or mirror, for instance) and combined with an electronic discriminator and a time-counting device.

The sunshine duration measurement methods described in the following paragraphs are examples of ways to achieve the above-mentioned principles. Instruments using these methods, with the exception of the Foster switch recorder, participated in the WMO Automatic Sunshine Duration Measurement Comparison in Hamburg from 1988 to 1989 and in the comparison of pyranometers and electronic sunshine duration recorders of Regional Association VI in Budapest in 1984 (WMO, 1986). The description of the Campbell-Stokes sunshine recorder in section 8.2.3 is relatively detailed, since this instrument is still widely used in national networks and the specifications and evaluation rules recommended by WMO should be considered (note, however, that this method is no longer recommended,3 since the duration of bright sunshine is not recorded with sufficient consistency). A historical review of sunshine recorders is given in Coulson (1975), Hameed and Pittalwala (1989) and Sonntag and Behrens (1992).

3 See Recommendation 10 (CIMO-VIII).

8.2 INSTRUMENTS AND SENSORS

8.2.1 Pyrheliometric method

General

This method, which represents a direct consequence of the WMO definition of sunshine (see section 8.1.1) and is therefore recommended for obtaining reference values of sunshine duration, requires a weatherproof pyrheliometer and a reliable solar tracker to point the radiometer automatically, or at least semi-automatically, at the position of the sun. The method can be modified by the choice of pyrheliometer, the field-of-view angle of which influences the irradiance measured when clouds surround the sun. The sunshine threshold can be monitored by continuously comparing the pyrheliometer output with the threshold equivalent voltage Vth = 120 W m–2 · R µV W–1 m2, which is calculable from the responsivity R of the pyrheliometer. A threshold transition is detected if ΔV = V – Vth changes its sign. The connected time counter runs while ΔV > 0.

Sources of error


The field-of-view angle is not yet settled by agreed definitions (see Part I, Chapter 7, section 7.2). Considerable differences between the results of two pyrheliometers with different field-of-view angles are possible, especially if the sun is surrounded by clouds. Furthermore, the typical errors of pyrheliometers, namely the tilt effect, temperature dependence, non-linearity and zero offset, depend on the class of the pyrheliometer. Larger errors appear if the alignment to the sun is not precise or if the entrance window is covered by rain or snow.

8.2.2 Pyranometric method

General

The pyranometric method to derive sunshine duration data is based on the fundamental relationship between the direct solar radiation (I) and the global (G) and diffuse (D) solar radiation:

I · cos ζ = G – D  (8.2)

where ζ is the solar zenith angle and I · cos ζ is the horizontal component of I. To fulfil equation 8.2 exactly, the shaded field-of-view angle of the pyranometer measuring D must be equal to the field-of-view angle of the pyrheliometer (see Part I, Chapter 7). Furthermore, the spectral ranges, as well as the time constants, of the pyrheliometers and pyranometers should be as similar as possible. In the absence of a sun-tracking pyrheliometer, but where computer-assisted pyranometric measurements of G and D are available, the WMO sunshine criterion can be expressed according to equation 8.2 by:

(G – D)/cos ζ > 120 W m–2  (8.3)

which is applicable to instantaneous readings. The modifications of this method at different stations concern first of all:
(a) The choice of pyranometer;
(b) The shading device applied (shade ring or shade disc with solar tracker) and its shade geometry (shade angle);
(c) The correction of shade-ring losses.

As a special modification, the replacement of the criterion in equation 8.3 by a statistically derived parameterization formula (to avoid the determination of the solar zenith angle) for applications in simpler data-acquisition systems should be mentioned (Sonntag and Behrens, 1992). The pyranometric method using only one pyranometer to estimate sunshine duration is based on two assumptions on the relation between irradiance and cloudiness, as follows:
(a) A rather accurate calculation of the potential global irradiance at the Earth’s surface, based on the calculated value of the extraterrestrial irradiation (G0) and taking into account the diminishing due to scattering in the atmosphere. The diminishing factor depends on the solar elevation h and the turbidity T of the atmosphere. The ratio between the measured global irradiance and this calculated value of the clear-sky global irradiance is a good measure of the presence of clouds;
(b) An evident difference between the minimum and maximum values of the global irradiance, measured during a 10 min interval, presumes a temporary eclipse of the sun by clouds. On the other hand, in the case of no such difference, there is either no sunshine, or sunshine during the whole 10 min interval (namely, SD = 0 or SD = 10 min).
Based on these assumptions, an algorithm can be used (Slob and Monna, 1991) to calculate the daily SD from the sum of 10 min SD values. Within this algorithm, SD is determined for succeeding 10 min intervals (namely, SD10’ = ƒ · 10 min, where ƒ is the fraction of the interval with sunshine, 0 ≤ ƒ ≤ 1). The diminishing factor largely depends on the optical path of the sunlight travelling through the atmosphere. Because this path is related to the elevation of the sun, h = 90° – ζ, the algorithm discriminates between three time zones. Although usually ƒ = 0 or ƒ = 1, special attention is given to 0 < ƒ < 1. This algorithm is given in the annex. The uncertainty is about 0.6 h for daily sums.

Sources of error

According to equation 8.3, the measuring errors in global and diffuse solar irradiance are propagated into the calculated direct solar irradiance and are strongly amplified with increasing solar zenith angle. Therefore, the accuracy of the corrections for losses of diffuse solar energy by the use of shade rings (WMO, 1984a) and the choice of pyranometer quality are important for reducing the uncertainty level of the results.

8.2.3 The Campbell-Stokes sunshine recorder (burn method)

The Campbell-Stokes sunshine recorder consists essentially of a glass sphere mounted concentrically in a section of a spherical bowl, the diameter of which is such that the sun’s rays are focused sharply on a card held in grooves in the bowl. The method of supporting the sphere differs according to whether the instrument is operated in polar, temperate or tropical latitudes. To obtain useful results, both the spherical segment and the sphere should be made with great precision, and the mounting should be designed so that the sphere can be accurately centred therein. Three overlapping pairs of grooves are provided in the spherical segment so that the cards can be suited to different seasons of the year (one pair for both equinoxes), their length and shape being selected to suit the geometrical optics of the system. It should be noted that the aforementioned problem of burns obtained under variable cloud conditions indicates that this instrument, and indeed any instrument using this method, does not provide accurate data of sunshine duration. The table below summarizes the main specifications and requirements for a Campbell-Stokes sunshine recorder of the IRSR grade. A recorder to be used as an IRSR should comply with the detailed specifications issued by the UK Met Office, and IRSR record cards should comply with the detailed specifications issued by Météo-France.

Adjustments

In installing the recorder, the following adjustments are necessary:
(a) The base must be levelled;
(b) The spherical segment should be adjusted so that the centre line of the equinoctial card lies in the celestial Equator (the scale of latitude marked on the bowl support facilitates this task);
(c) The vertical plane through the centre of the sphere and the noon mark on the spherical segment must lie in the plane of the geographic meridian (north-south adjustment).
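The meridian check in (c) relies on the clock time of local apparent noon, which differs from mean noon by the equation of time. A hedged sketch of that conversion follows; Spencer's (1971) Fourier series is assumed here as a convenient approximation and is not prescribed by this Guide, and the function names are illustrative:

```python
import math

def equation_of_time_minutes(day_of_year):
    """Approximate equation of time (apparent minus mean solar time, in
    minutes), using Spencer's (1971) Fourier series."""
    g = 2.0 * math.pi * (day_of_year - 1) / 365.0
    return 229.18 * (0.000075
                     + 0.001868 * math.cos(g) - 0.032077 * math.sin(g)
                     - 0.014615 * math.cos(2.0 * g) - 0.040849 * math.sin(2.0 * g))

def local_apparent_noon_utc(longitude_deg_east, day_of_year):
    """UTC hour at which the sun crosses the local meridian: mean noon
    shifted by 4 min per degree of longitude and by the equation of time."""
    return 12.0 - longitude_deg_east / 15.0 - equation_of_time_minutes(day_of_year) / 60.0
```

Observing the solar image at the time returned by `local_apparent_noon_utc` (converted to station clock time) allows the noon-mark test described below the table.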

Campbell-Stokes recorder (IRSR grade) specifications

Glass sphere:
  Shape: uniform
  Diameter: 10 cm
  Colour: very pale or colourless
  Refractive index: 1.52 ± 0.02
  Focal length: 75 mm for sodium “d” light

Spherical segment:
  Material: gunmetal or equivalent durability
  Radius: 73 mm
  Additional specifications:
  (a) Central noon line engraved transversely across the inner surface;
  (b) Adjustment for inclination of segment to horizontal according to latitude;
  (c) Double base with provision for levelling and azimuth setting

Record cards:
  Material: good-quality pasteboard not affected appreciably by moisture
  Width: accurate to within 0.3 mm
  Thickness: 0.4 ± 0.05 mm
  Moisture effect: within 2 per cent
  Colour: dark, homogeneous, no difference detected in diffuse daylight
  Graduations: hour lines printed in black

A recorder is best tested for (c) above by observing the image of the sun at local apparent noon; if the instrument is correctly adjusted, the image should fall on the noon mark of the spherical segment or card.

Evaluation

In order to obtain uniform results from Campbell-Stokes recorders, it is especially important to conform closely to the following directions for measuring the IRSR records. The daily total duration of bright sunshine should be determined by marking off on the edge of a card of the same curvature the lengths corresponding to each mark and by measuring, to the nearest tenth of an hour, the total length obtained along the card at the level of the recording. The evaluation of the record should be made as follows:
(a) In the case of a clear burn with round ends, the length should be reduced at each end by an amount equal to half the radius of curvature of the end of the burn; this will normally correspond to a reduction of the overall length of each burn by 0.1 h;
(b) In the case of circular burns, the length measured should be equal to half the diameter of the burn. If more than one circular burn occurs on the daily record, two or three burns should be considered equivalent to 0.1 h of sunshine; four, five or six burns equivalent to 0.2 h of sunshine; and so on in steps of 0.1 h;
(c) Where the mark is only a narrow line, the whole length of this mark should be measured, even when the card is only slightly discoloured;
(d) Where a clear burn is temporarily reduced in width by at least a third, an amount of 0.1 h should be subtracted from the total length for each such reduction in width, but the maximum subtracted should not exceed one half of the total length of the burn.
In order to assess the random and systematic errors made while evaluating the records, and to ensure the objectivity of the comparison results, it is recommended that the evaluations corresponding to each one of the instruments compared be made successively and independently by two or more persons trained in this type of work.

Special versions

Since the standard Campbell-Stokes sunshine recorder does not record all the sunshine received during the summer months at stations with latitudes higher than about 65°, some countries use modified versions. One possibility is to use two Campbell-Stokes recorders operated back to back, one of them installed in the standard manner, while the other faces north. In many climates, it may be necessary to heat the device to prevent the deposition of frost and dew. Comparisons in climates like that of northern Europe between heated and normally operated instruments have shown that the amount of sunshine not measured by the normal version, but recorded by the heated device, is about 1 per cent of the monthly mean in summer and about 5 to 10 per cent of the monthly mean in winter.

Sources of error

The errors of this recorder are mainly generated by the dependence on the temperature and humidity of the burn card, as well as by the overburning effect, especially in the case of scattered clouds (Ikeda, Aoshima and Miyake, 1986). The morning values are frequently disturbed by dew or frost at middle and high latitudes.

8.2.4 Contrast-evaluating devices

The Foster sunshine switch is an optical device that was introduced operationally in the network of the United States in 1953 (Foster and Foskett, 1953). It consists of a pair of selenium photocells, one of which is shielded from direct sunshine by a shade ring. The cells are corrected so that in the absence of the direct solar beam no signal is produced. The switch is activated when the direct solar irradiance exceeds about 85 W m–2 (Hameed and Pittalwala, 1989). The position of the shade ring requires adjustment only four times a year to allow for seasonal changes in the sun’s apparent path across the sky.

8.2.5 Contrast-evaluating and scanning devices

General

A number of different opto-electronic sensors, namely contrast-evaluating and scanning devices (see, for example, WMO, 1984b), were compared during the WMO Automatic Sunshine Duration Measurement Comparison at the Regional Radiation Centre of Regional Association VI in Hamburg (Germany) from 1988 to 1989. The report of this comparison contains detailed descriptions of all the instruments and sensors that participated in this event.

Sources of error


The distribution of cloudiness over the sky, or solar radiation reflected by the surroundings, can influence the results of contrast-evaluating devices because of the different procedures used to evaluate the contrast and the relatively large field-of-view angles of the cells in the arrays used. Silicon photovoltaic cells without filters typically have their maximum responsivity in the near-infrared, and the results therefore depend on the spectrum of the direct solar radiation.

For scanning devices, since the relatively small, slit-shaped, rectangular field-of-view angles differ considerably from the circular-symmetrical one of the reference pyrheliometer, the cloud distribution around the sun can cause deviations from the reference values. Because of the small field of view, an imperfect glass dome may be a specific source of uncertainty. The spectral responsivity of the sensor should also be considered, in addition to the solar elevation error. At present, only one of the commercial recorders using a pyroelectric detector is thought to be free of spectral effects.

8.3 EXPOSURE OF SUNSHINE DETECTORS

The three essential aspects for the correct exposure of sunshine detectors are as follows:
(a) The detectors should be firmly fixed to a rigid support. This is not required for the SONI (WMO, 1984b) sensors, which are designed also for use on buoys;
(b) The detector should provide an uninterrupted view of the sun at all times of the year, throughout the whole period when the sun is more than 3° above the horizon. This recommendation can be modified in the following cases:
(i) Small antennas or other obstructions of small angular width (≤ 2°) are acceptable if no alternative site is available. In this case, the position, elevation and angular width of the obstructions should be well documented, and the potential loss of sunshine hours during particular hours and days should be estimated by astronomical calculation of the apparent solar path;
(ii) In mountainous regions (valleys, for instance), natural obstructions are acceptable as a factor of the local climate and should be well documented, as mentioned above;
(c) The site should be free of surrounding surfaces that could reflect a significant amount of direct solar radiation towards the detector. Reflected radiation can influence mainly the results of contrast-measuring devices. To overcome this interference, white paint should be avoided and nearby surfaces should either be kept free of snow or be screened;
(d) The adjustment of the detector axis is mentioned above. For some detectors, the manufacturers recommend tilting the axis, depending on the season.

8.4 GENERAL SOURCES OF ERROR

The uncertainty of sunshine duration recorded using different types of instruments and methods was demonstrated, as deviations from reference values, for the weather conditions of Hamburg (Germany) in 1988–1989. The reference values are also somewhat uncertain because of the uncertainty of the calibration factor of the pyrheliometer used and the dimensions of its field-of-view angle (dependency on the aureole). For single values, the time constant should also be considered. General sources of uncertainty are as follows:
(a) The calibration of the recorder (adjustment of the irradiance threshold equivalent (see section 8.5));
(b) The typical variation of the recorder response due to meteorological conditions (for example, temperature, cloudiness, dust) and the position of the sun (for example, errors of direction, solar spectrum);
(c) The poor adjustment and instability of important parts of the instrument;
(d) The simplified or erroneous evaluation of the values measured;
(e) Erroneous time-counting procedures;
(f) Dirt and moisture on optical and sensing surfaces;
(g) Poor quality of maintenance.

8.5 CALIBRATION

The following general remarks should be made before the various calibration methods are described:
(a) No standardized method to calibrate SD detectors is available;
(b) For outdoor calibrations, the pyrheliometric method has to be used to obtain reference data;
(c) Because of the differences between the designs of the SD detectors and the reference instrument, as well as the natural variability of the measuring conditions, calibration results must be determined by long-term comparisons (some months);
(d) Generally, the calibration of SD detectors requires a specific procedure to adjust their threshold value (electronically for opto-electric devices, by software for pyranometric systems);
(e) For opto-electric devices with an analogue output, the duration of the calibration period should be relatively short;
(f) The indoor method (using a lamp) is recommended primarily for regular testing of the stability of field instruments.

Outdoor methods

Comparison of sunshine duration data

Reference values SDref have to be measured simultaneously with the sunshine duration values SDcal of the detector to be calibrated. The reference instrument used should be a pyrheliometer on a solar tracker combined with an irradiance threshold discriminator (see section 8.1.4). Alternatively, a regularly recalibrated sunshine recorder of selected precision may be used. Since the accuracy requirement of the sunshine threshold of a detector varies with the meteorological conditions (see section 8.1.3), the comparison results must be derived statistically from data sets covering long periods. If the method is applied to the total data set of a period (with typical cloudiness conditions), the first calibration result is the ratio qtot = Σtot SDref / Σtot SDcal. The comparison periods required may extend to six months at European mid-latitudes. Therefore, the facilities to calibrate network detectors should permit the calibration of several detectors simultaneously. (The use of qtot as a correction factor for the Σ SD values gives reliable results only if the periods to be evaluated have the same cloud formation as during the calibration period. Therefore, this method is not recommended.) If the method is applied to data sets which are selected according to specific measurement conditions (for example, cloudiness, solar elevation angle, relative sunshine duration, daytime), it may be possible, for instance, to find factors qsel = Σsel SDref / Σsel SDcal statistically for different types of cloudiness. The factors could also be used to correct data sets for which the cloudiness is clearly specified. On the other hand, an adjustment of the threshold equivalent voltage is recommended, especially if qsel values for worse cloudiness conditions (such as cirrus and altostratus) are considered. An iterative procedure to validate the adjustment is also necessary; depending on the weather, some weeks or months of comparison may be needed.
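The total and selective calibration factors described above can be sketched as follows; the function names and the sample figures are illustrative only and are not taken from this Guide:

```python
def calibration_factor(sd_ref, sd_cal):
    """q_tot = sum(SD_ref) / sum(SD_cal) for paired totals (hours)."""
    total_ref = sum(sd_ref)
    total_cal = sum(sd_cal)
    if total_cal == 0:
        raise ValueError("detector recorded no sunshine in the period")
    return total_ref / total_cal

def selective_factors(records):
    """q_sel per cloudiness class from (class, SD_ref, SD_cal) records."""
    sums = {}
    for cloud_class, ref, cal in records:
        r, c = sums.setdefault(cloud_class, [0.0, 0.0])
        sums[cloud_class] = [r + ref, c + cal]
    return {k: (r / c if c else None) for k, (r, c) in sums.items()}

# Hypothetical daily totals (hours) from a reference pyrheliometric system
# and from the detector under calibration.
q_tot = calibration_factor([8.1, 3.2, 0.0, 10.4], [7.9, 3.6, 0.1, 10.0])
```

As the text notes, such ratios are only reliable statistically; adjusting the detector's threshold equivalent and re-comparing iteratively is the recommended route.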











REFERENCES AND FURTHER READING

Angell, J.K., 1990: Variation in United States cloudiness and sunshine duration between 1950 and the drought year of 1988. Journal of Climate, 3, pp. 296–308.
Baumgartner, T., 1979: Die Schwellenintensität des Sonnenscheinautographen Campbell-Stokes an wolkenlosen Tagen. Arbeitsberichte der Schweizerischen Meteorologischen Zentralanstalt, No. 84, Zürich.
Bider, M., 1958: Über die Genauigkeit der Registrierungen des Sonnenscheinautographen Campbell-Stokes. Archiv für Meteorologie, Geophysik und Bioklimatologie, Serie B, Volume 9, No. 2, pp. 199–230.
Coulson, K.L., 1975: Solar and Terrestrial Radiation. Methods and Measurements. Academic Press, New York, pp. 215–233.
Foster, N.B. and L.W. Foskett, 1953: A photoelectric sunshine recorder. Bulletin of the American Meteorological Society, 34, pp. 212–215.
Golchert, H.J., 1981: Mittlere monatliche Globalstrahlungsverteilungen in der Bundesrepublik Deutschland. Meteorologische Rundschau, 34, pp. 143–151.
Hameed, S. and I. Pittalwala, 1989: An investigation of the instrumental effects on the historical sunshine record of the United States. Journal of Climate, 2, pp. 101–104.
Ikeda, K., T. Aoshima and Y. Miyake, 1986: Development of a new sunshine-duration meter. Journal of the Meteorological Society of Japan, Volume 64, No. 6, pp. 987–993.
Jaenicke, R. and F. Kasten, 1978: Estimation of atmospheric turbidity from the burned traces of the Campbell-Stokes sunshine recorder. Applied Optics, 17, pp. 2617–2621.
Painter, H.E., 1981: The performance of a Campbell-Stokes sunshine recorder compared with a simultaneous record of normal incidence irradiance. The Meteorological Magazine, 110, pp. 102–109.
Slob, W.H. and W.A.A. Monna, 1991: Bepaling van een directe en diffuse straling en van zonneschijnduur uit 10-minuutwaarden van de globale straling. KNMI TR136, De Bilt.
Sonntag, D. and K. Behrens, 1992: Ermittlung der Sonnenscheindauer aus pyranometrisch gemessenen Bestrahlungsstärken der Global- und Himmelsstrahlung. Berichte des Deutschen Wetterdienstes, No. 181.
Stanghellini, C., 1981: A simple method for evaluating sunshine duration by cloudiness observations. Journal of Applied Meteorology, 20, pp. 320–323.
World Meteorological Organization, 1962: Abridged Final Report of the Third Session of the Commission for Instruments and Methods of Observation. WMO-No. 116 R.P. 48, Geneva.
World Meteorological Organization, 1982: Abridged Final Report of the Eighth Session of the Commission for Instruments and Methods of Observation. WMO-No. 590, Geneva.
World Meteorological Organization, 1984a: Diffuse solar radiation measured by the shade ring method improved by a new correction formula (K. Dehne). Papers Presented at the WMO Technical Conference on Instruments and Cost-effective Meteorological Observations (TECIMO). Instruments and Observing Methods Report No. 15, Geneva, pp. 263–267.
World Meteorological Organization, 1984b: A new sunshine duration sensor (P. Lindner). Papers Presented at the WMO Technical Conference on Instruments and Cost-effective Meteorological Observations (TECIMO). Instruments and Observing Methods Report No. 15, Geneva, pp. 179–183.
World Meteorological Organization, 1985: Dependence on threshold solar irradiance of measured sunshine duration (K. Dehne). Papers Presented at the Third WMO Technical Conference on Instruments and Methods of Observation (TECIMO-III). Instruments and Observing Methods Report No. 22, WMO/TD-No. 50, Geneva, pp. 263–271.
World Meteorological Organization, 1986: Radiation and Sunshine Duration Measurements: Comparison of Pyranometers and Electronic Sunshine Duration Recorders of RA VI (G. Major). Instruments and Observing Methods Report No. 16, WMO/TD-No. 146, Geneva.
World Meteorological Organization, 1990: Abridged Final Report of the Tenth Session of the Commission for Instruments and Methods of Observation. WMO-No. 727, Geneva.
World Meteorological Organization, 2003: Manual on the Global Observing System. WMO-No. 544, Geneva.


MEASUREMENT OF VISIBILITY

9.1 General

9.1.1 Definitions



Visibility was first defined for meteorological purposes as a quantity to be estimated by a human observer, and observations made in that way are widely used. However, the estimation of visibility is affected by many subjective and physical factors. The essential meteorological quantity, which is the transparency of the atmosphere, can be measured objectively and is represented by the meteorological optical range (MOR).

The meteorological optical range is the length of path in the atmosphere required to reduce the luminous flux in a collimated beam from an incandescent lamp, at a colour temperature of 2 700 K, to 5 per cent of its original value, the luminous flux being evaluated by means of the photometric luminosity function of the International Commission on Illumination.

Visibility, meteorological visibility (by day) and meteorological visibility at night1 are defined as the greatest distance at which a black object of suitable dimensions (located on the ground) can be seen and recognized when observed against the horizon sky during daylight or could be seen and recognized during the night if the general illumination were raised to the normal daylight level (WMO, 1992a; 2003).

Visual range (meteorological): Distance at which the contrast of a given object with respect to its background is just equal to the contrast threshold of an observer (WMO, 1992a).

Airlight is light from the sun and the sky which is scattered into the eyes of an observer by atmospheric suspensoids (and, to a slight extent, by air molecules) lying in the observer’s cone of vision. That is, airlight reaches the eye in the same manner as diffuse sky radiation reaches the Earth’s surface. Airlight is the fundamental factor limiting the daytime horizontal visibility for black objects, because its contributions, integrated along the cone of vision from eye to object, raise the apparent luminance of a sufficiently remote black object to a level which is indistinguishable from that of the background sky. Contrary to subjective estimates, most of the airlight entering observers’ eyes originates in portions of their cone of vision lying rather close to them.

The following four photometric quantities are defined in detail in various standards, such as by the International Electrotechnical Commission (IEC, 1987):
(a) Luminous flux (symbol: F (or Φ); unit: lumen) is a quantity derived from radiant flux by evaluating the radiation according to its action upon the International Commission on Illumination standard photometric observer;
(b) Luminous intensity (symbol: I; unit: candela or lm sr–1) is luminous flux per unit solid angle;
(c) Luminance (symbol: L; unit: cd m–2) is luminous intensity per unit area;
(d) Illuminance (symbol: E; unit: lux or lm m–2) is luminous flux per unit area.

The extinction coefficient (symbol σ) is the proportion of luminous flux lost by a collimated beam, emitted by an incandescent source at a colour temperature of 2 700 K, while travelling the length of a unit distance in the atmosphere. The coefficient is a measure of the attenuation due to both absorption and scattering.

The luminance contrast (symbol C) is the ratio of the difference between the luminance of an object and its background and the luminance of the background.

The contrast threshold (symbol ε) is the minimum value of the luminance contrast that the human eye can detect, namely, the value which allows an object to be distinguished from its background. The contrast threshold varies with the individual.
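These four quantities are linked by simple geometric relations: intensity is flux per unit solid angle, and a point source of intensity I produces an illuminance of I/x² at a distance x in a perfectly transparent atmosphere (the σ = 0 limit of Allard's law used later in this chapter). A minimal sketch in Python; the function names are ours, for illustration only:

```python
import math

def intensity_from_flux(flux_lm, solid_angle_sr):
    """Luminous intensity (cd) = luminous flux (lm) per unit solid angle (sr)."""
    return flux_lm / solid_angle_sr

def illuminance_from_point_source(intensity_cd, distance_m):
    """Illuminance (lux) at distance x from a point source of intensity I,
    ignoring atmospheric extinction: E = I / x**2."""
    return intensity_cd / distance_m ** 2

# An isotropic source emitting about 1 257 lm radiates into 4*pi steradians,
# i.e. roughly 100 cd:
print(round(intensity_from_flux(1257.0, 4 * math.pi)))  # 100
# which gives an illuminance of 1 lux at 10 m:
print(illuminance_from_point_source(100.0, 10.0))       # 1.0
```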

1 To avoid confusion, visibility at night should not be defined in general as “the greatest distance at which lights of specified moderate intensity can be seen and identified” (see the Abridged Final Report of the Eleventh Session of the Commission for Instruments and Methods of Observation (WMO-No. 807)). If visibility is to be reported based on the assessment of light sources, it is recommended that a visual range be defined by specifying precisely the appropriate light intensity and its application, such as runway visual range. Nevertheless, at its eleventh session CIMO agreed that further investigations were necessary in order to resolve the practical difficulties of the application of this definition.



The illuminance threshold (symbol Et) is the smallest illuminance, required by the eye, for the detection of point sources of light against a background of specified luminance. The value of Et, therefore, varies according to lighting conditions.

The transmission factor (symbol T) is defined, for a collimated beam from an incandescent source at a colour temperature of 2 700 K, as the fraction of luminous flux which remains in the beam after traversing an optical path of a given length in the atmosphere. The transmission factor is also called the transmission coefficient. The terms transmittance or transmissive power of the atmosphere are also used when the path is defined, that is, of a specific length (for example, in the case of a transmissometer). In this case, T is often multiplied by 100 and expressed in per cent.

9.1.2 Units and scales

The meteorological visibility or MOR is expressed in metres or kilometres. The measurement range varies according to the application. While for synoptic meteorological requirements, the scale of MOR readings extends from below 100 m to more than 70 km, the measurement range may be more restricted for other applications. This is the case for civil aviation, where the upper limit may be 10 km. This range may be further reduced when applied to the measurement of runway visual range representing landing and take-off conditions in reduced visibility. Runway visual range is required only between 50 and 1 500 m (see Part II, Chapter 2). For other applications, such as road or sea traffic, different limits may be applied according to both the requirements and the locations where the measurements are taken.

The errors of visibility measurements increase in proportion to the visibility, and measurement scales take this into account. This fact is reflected in the code used for synoptic reports by the use of three linear segments with decreasing resolution, namely, 100 to 5 000 m in steps of 100 m, 6 to 30 km in steps of 1 km, and 35 to 70 km in steps of 5 km. This scale allows visibility to be reported with a better resolution than the accuracy of the measurement, except when visibility is less than about 1 000 m.

9.1.3 Meteorological requirements

The concept of visibility is used extensively in meteorology in two distinct ways. First, it is one of the elements identifying air-mass characteristics, especially for the needs of synoptic meteorology and climatology. Here, visibility must be representative of the optical state of the atmosphere. Secondly, it is an operational variable which corresponds to specific criteria or special applications. For this purpose, it is expressed directly in terms of the distance at which specific markers or lights can be seen. One of the most important special applications is found in meteorological services to aviation (see Part II, Chapter 2).

The measure of visibility used in meteorology should be free from the influence of extra-meteorological conditions; it must be simply related to intuitive concepts of visibility and to the distance at which common objects can be seen under normal conditions. MOR has been defined to meet these requirements, as it is convenient for the use of instrumental methods by day and night, and as the relations between MOR and other measures of visibility are well understood. MOR has been formally adopted by WMO as the measure of visibility for both general and aeronautical uses (WMO, 1990a). It is also recognized by the International Electrotechnical Commission (IEC, 1987) for application in atmospheric optics and visual signalling.

MOR is related to the intuitive concept of visibility through the contrast threshold. In 1924, Koschmieder, followed by Helmholtz, proposed a value of 0.02 for ε. Other values have been proposed by other authors. They vary from 0.007 7 to 0.06, or even 0.2. The smaller value yields a larger estimate of the visibility for given atmospheric conditions. For aeronautical requirements, it is accepted that ε is higher than 0.02, and it is taken as 0.05 since, for a pilot, the contrast of an object (runway markings) with respect to the surrounding terrain is much lower than that of an object against the horizon.

It is assumed that, when an observer can just see and recognize a black object against the horizon, the apparent contrast of the object is 0.05, and, as explained below, this leads to the choice of 0.05 as the transmission factor adopted in the definition of MOR. Accuracy requirements are discussed in Part I, Chapter 1.

9.1.4 Measurement methods
Visibility is a complex psycho-physical phenomenon, governed mainly by the atmospheric extinction coefficient associated with solid and liquid particles held in suspension in the atmosphere; the extinction is caused primarily by



scattering rather than by absorption of the light. Its estimation is subject to variations in individual perception and interpretative ability, as well as the light source characteristics and the transmission factor. Thus, any visual estimate of visibility is subjective.

When visibility is estimated by a human observer it depends not only on the photometric and dimensional characteristics of the object which is, or should be, perceived, but also on the observer’s contrast threshold. At night, it depends on the intensity of the light sources, the background illuminance and, if estimated by an observer, the adaptation of the observer’s eyes to darkness and the observer’s illuminance threshold. The estimation of visibility at night is particularly problematic. The first definition of visibility at night in section 9.1.1 is given in terms of equivalent daytime visibility in order to ensure that no artificial changes occur in estimating the visibility at dawn and twilight. The second definition has practical applications especially for aeronautical requirements, but it is not the same as the first and usually gives different results. Both are evidently imprecise.

Instrumental methods measure the extinction coefficient from which the MOR may be calculated. The visibility may then be calculated from knowledge of the contrast and illuminance thresholds, or by assigning agreed values to them. It has been pointed out by Sheppard (1983) that:
“strict adherence to the definition (of MOR) would require mounting a transmitter and receiver of appropriate spectral characteristics on two platforms which could be separated, for example along a railroad, until the transmittance was 5 per cent. Any other approach gives only an estimate of MOR.”

However, fixed instruments are used on the assumption that the extinction coefficient is independent of distance. Some instruments measure attenuation directly and others measure the scattering of light to derive the extinction coefficient. These are described in section 9.3. The brief analysis of the physics of visibility in this chapter may be useful for understanding the relations between the various measures of the extinction coefficient, and for considering the instruments used to measure it.

Visual perception — photopic and scotopic vision

The conditions of visual perception are based on the measurement of the photopic efficiency of the human eye with respect to monochromatic radiation in the visible light spectrum. The terms photopic vision and scotopic vision refer to daytime and night-time conditions, respectively. The adjective photopic refers to the state of accommodation of the eye for daytime conditions of ambient luminance. More precisely, the photopic state is defined as the visual response of an observer with normal sight to the stimulus of light incident on the retinal fovea (the most sensitive central part of the retina). The fovea permits fine details and colours to be distinguished under such conditions of adaptation. In the case of photopic vision (vision by means of the fovea), the relative luminous efficiency of the eye varies with the wavelength of the incident light. The luminous efficiency of the eye in photopic vision is at a maximum for a wavelength of 555 nm. The response curve for the relative efficiency of the eye at the various wavelengths of the visible spectrum may be established by taking the efficiency at a wavelength of 555 nm as a reference value. The curve in Figure 9.1, adopted by the International Commission on Illumination for an average normal observer, is therefore obtained.

Figure 9.1. Relative luminous efficiency of the human eye for monochromatic radiation. The continuous line indicates daytime vision, while the broken line indicates night-time vision.

Night-time vision is said to be scotopic (vision involving the rods of the retina instead of the fovea). The rods, the peripheral part of the retina, have no sensitivity to colour or fine details, but are particularly sensitive to low light intensities. In scotopic vision, maximum luminous efficiency corresponds to a wavelength of 507 nm. Scotopic vision requires a long period of accommodation, up to 30 min, whereas photopic vision requires only 2 min.

Basic equations

The basic equation for visibility measurements is the Bouguer-Lambert law:

F = F0 e^(–σx) (9.1)

where F is the luminous flux received after a length of path x in the atmosphere and F0 is the flux for x = 0. Differentiating, we obtain:

σ = –(1/F) · (dF/dx) (9.2)

Note that this law is valid only for monochromatic light, but may be applied to a spectral flux to a good approximation. The transmission factor is:

T = F/F0 (9.3)

Mathematical relationships between MOR and the different variables representing the optical state of the atmosphere may be deduced from the Bouguer-Lambert law. From equations 9.1 and 9.3 we may write:

T = F/F0 = e^(–σx) (9.4)

If this law is applied to the MOR definition T = 0.05, then x = P and the following may be written:

T = 0.05 = e^(–σP) (9.5)

Hence, the mathematical relation of MOR to the extinction coefficient is:

P = (1/σ) · ln (1/0.05) ≈ 3/σ (9.6)

where ln is the log to base e or the natural logarithm. When combining equation 9.4, after being deduced from the Bouguer-Lambert law, and equation 9.6, the following equation is obtained:

P = x · ln (0.05)/ln (T) (9.7)

This equation is used as a basis for measuring MOR with transmissometers where x is, in this case, equal to the transmissometer baseline a in equation 9.14.

Meteorological visibility in daylight

The contrast of luminance is:

C = (Lb – Lh)/Lh (9.8)

where Lh is the luminance of the horizon, and Lb is the luminance of the object. The luminance of the horizon arises from the airlight scattered from the atmosphere along the observer’s line of sight. It should be noted that, if the object is darker than the horizon, C is negative, and that, if the object is black (Lb = 0), C = –1.

In 1924, Koschmieder established a relationship, which later became known as Koschmieder’s law, between the apparent contrast (Cx) of an object, seen against the horizon sky by a distant observer, and its inherent contrast (C0), namely, the contrast that the object would have against the horizon when seen from very short range. Koschmieder’s relationship can be written as:

Cx = C0 e^(–σx) (9.9)

This relationship is valid provided that the scatter coefficient is independent of the azimuth angle and that there is uniform illumination along the whole path between the observer, the object and the horizon. If a black object is viewed against the horizon (C0 = –1) and the apparent contrast is –0.05, equation 9.9 reduces to:

0.05 = e^(–σx) (9.10)

Comparing this result with equation 9.5 shows that when the magnitude of the apparent contrast of a black object, seen against the horizon, is 0.05, that object is at MOR (P).

Meteorological visibility at night

The distance at which a light (a night visibility marker) can be seen at night is not simply related to MOR. It depends not only on MOR and the intensity of the light, but also on the illuminance at the observer’s eye from all other light sources.

In 1876, Allard proposed the law of attenuation of light from a point source of known intensity (I) as a function of distance (x) and extinction coefficient (σ). The illuminance (E) of a point light source is given by:

E = I · x^(–2) · e^(–σx) (9.11)

When the light is just visible, E = Et and the following may be written:

σ = (1/x) · ln {I/(Et · x^2)} (9.12)

Noting that P = (1/σ) · ln (1/0.05) in equation 9.6, we may write:

P = x · ln (1/0.05)/ln {I/(Et · x^2)} (9.13)
The relationship between MOR and the distance at which lights can be seen is described in section 9.2.3, while the application of this equation to visual observations is described in section 9.2.
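The relations in equations 9.6, 9.7 and 9.13 are simple enough to express directly in code. The following sketch is illustrative only (the function names are ours); it computes MOR from an extinction coefficient, from a transmissometer reading over a known baseline, and from the distance at which a lamp of known intensity is just visible:

```python
import math

T_MOR = 0.05  # transmission factor defining MOR

def mor_from_extinction(sigma):
    """Equation 9.6: P = (1/sigma) * ln(1/0.05), roughly 3/sigma."""
    return math.log(1.0 / T_MOR) / sigma

def mor_from_transmissometer(T, baseline):
    """Equation 9.7: P = a * ln(0.05)/ln(T), a being the baseline."""
    return baseline * math.log(T_MOR) / math.log(T)

def mor_from_light(x, intensity_cd, Et):
    """Equation 9.13: MOR from the distance x at which a lamp of the given
    intensity (cd) is just visible at illuminance threshold Et (lux)."""
    return x * math.log(1.0 / T_MOR) / math.log(intensity_cd / (Et * x ** 2))

# A transmission factor of 5 per cent over a 75 m baseline means the
# baseline itself equals the MOR:
print(mor_from_transmissometer(0.05, 75.0))  # 75.0
# An extinction coefficient of 0.01 per metre gives a MOR of about 300 m:
print(round(mor_from_extinction(0.01)))      # 300
```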


9.2 Visual estimation of meteorological optical range



9.2.1 General

A meteorological observer can make a visual estimation of MOR using natural or man-made objects (groups of trees, rocks, towers, steeples, churches, lights, and so forth). Each station should prepare a plan of the objects used for observation, showing their distances and bearings from the observer. The plan should include objects suitable for daytime observations and objects suitable for night-time observations. The observer must also give special attention to significant directional variations of MOR.

Observations should be made by observers who have “normal” vision and have received suitable training. The observations should normally be made without any additional optical devices (binoculars, telescope, theodolite, and the like) and, preferably, not through a window, especially when objects or lights are observed at night. The eye of the observer should be at a normal height above the ground (about 1.5 m); observations should, thus, not be made from the upper storeys of control towers or other high buildings. This is particularly important when visibility is poor.

When visibility varies in different directions, the value recorded or reported may depend on the use to be made of the report. In synoptic messages, the lower value should be reported, but in reports for aviation the guidance in WMO (1990a) should be followed.

9.2.2 Estimation of meteorological optical range by day

For daytime observations, the visual estimation of visibility gives a good approximation of the true value of MOR.

Provided that they meet the following requirements, objects at as many different distances as possible should be selected for observation during the day. Only black, or nearly black, objects which stand out on the horizon against the sky should be chosen. Light-coloured objects or objects located close to a terrestrial background should be avoided as far as possible. This is particularly important when the sun is shining on the object. Provided that the albedo of the object does not exceed about 25 per cent, no error larger than 3 per cent will be caused if the sky is overcast, but it may be much larger if the sun is shining. Thus, a white house would be unsuitable, but a group of dark trees would be satisfactory, except when brightly illuminated by sunlight. If an object against a terrestrial background has to be used, it should stand well in front of the background, namely, at a distance at least half that of the object from the point of observation. A tree at the edge of a wood, for example, would not be suitable for visibility observations.

For observations to be representative, they should be made using objects subtending an angle of no less than 0.5° at the observer’s eye. An object subtending an angle less than this becomes invisible at a shorter distance than would large objects in the same circumstances. It may be useful to note that a hole of 7.5 mm in diameter, punched in a card and held at arm’s length, subtends this angle approximately; a visibility object viewed through such an aperture should, therefore, completely fill it. At the same time, however, such an object should not subtend an angle of more than 5°.

9.2.3 Estimation of meteorological optical range at night

Methods which may be used to estimate MOR at night from visual observations of the distance of perception of light sources are described below.

Any source of light may be used as a visibility object, provided that the intensity in the direction of observation is well defined and known. However, it is generally desirable to use lights which can be regarded as point sources, and whose intensity is not greater in any one more favoured direction than in another and not confined to a solid angle which is too small. Care must be taken to ensure the mechanical and optical stability of the light source. A distinction should be made between sources known as point sources, in the vicinity of which there is no other source or area of light, and clusters



of lights, even though separated from each other. In the latter case, such an arrangement may affect the visibility of each source considered separately. For measurements of visibility at night, only the use of suitably distributed point sources is recommended.

It should be noted that observations at night, using illuminated objects, may be affected appreciably by the illumination of the surroundings, by the physiological effects of dazzling, and by other lights, even when these are outside the field of vision and, more especially, if the observation is made through a window. Thus, an accurate and reliable observation can be made only from a dark and suitably chosen location. Furthermore, the importance of physiological factors cannot be overlooked, since these are an important source of measurement dispersion. It is essential that only qualified observers with normal vision take such measurements. In addition, it is necessary to allow a period of adaptation (usually from 5 to 15 min) during which the eyes become accustomed to the darkness.

For practical purposes, the relationship between the distance of perception of a light source at night and the value of MOR can be expressed in two different ways, as follows:
(a) For each value of MOR, by giving the value of luminous intensity of the light, so that there is a direct correspondence between the distance where it is barely visible and the value of MOR;
(b) For a light of a given luminous intensity, by giving the correspondence between the distance of perception of the light and the value of MOR.

The second relationship is easier and also more practical to use since it would not be an easy matter to install light sources of differing intensities at different distances. The method involves using light sources which either exist or are installed around the station and replacing I, x and Et in equation 9.13 by the corresponding values of the available light sources.
In this way, the Meteorological Services can draw up tables giving values of MOR as a function of background luminance and the light sources of known intensity. The values to be assigned to the illuminance threshold Et vary considerably in accordance with the ambient luminance. The following values, considered as average observer values, should be used:
(a) 10^–6.0 lux at twilight and at dawn, or when there is appreciable light from artificial sources;
(b) 10^–6.7 lux in moonlight, or when it is not yet quite dark;
(c) 10^–7.5 lux in complete darkness, or with no light other than starlight.

Tables 9.1 and 9.2 give the relations between MOR and the distance of perception of light sources for each of the above methods for different observation conditions. They have been compiled to guide Meteorological Services in the selection or installation of lights for night visibility observations and in the preparation of instructions for their observers for the computation of MOR values.

Table 9.1. Relation between MOR and intensity of a just-visible point source for three values of Et

Luminous intensity (candela) of lamps only just visible at distances given in column P

MOR P (m)   Twilight          Moonlight         Complete darkness
            (Et = 10^–6.0)    (Et = 10^–6.7)    (Et = 10^–7.5)
   100          0.2               0.04              0.006
   200          0.8               0.16              0.025
   500          5                 1                 0.16
 1 000         20                 4                 0.63
 2 000         80                16                 2.5
 5 000        500               100                16
10 000      2 000               400                63
20 000      8 000             1 600               253
50 000     50 000            10 000             1 580

Table 9.2. Relation between MOR and the distance at which a 100 cd point source is just visible for three values of Et

Distance of perception (metres) of a lamp of 100 cd as a function of MOR value

MOR P (m)   Twilight          Moonlight         Complete darkness
            (Et = 10^–6.0)    (Et = 10^–6.7)    (Et = 10^–7.5)
   100         250               290               345
   200         420               500               605
   500         830             1 030             1 270
 1 000       1 340             1 720             2 170
 2 000       2 090             2 780             3 650
 5 000       3 500             5 000             6 970
10 000       4 850             7 400            10 900
20 000       6 260            10 300            16 400
50 000       7 900            14 500            25 900
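Entries of this kind can be reproduced numerically from equations 9.6, 9.11 and 9.12: with σ = ln(1/0.05)/P, the intensity just visible at distance x = P is I = Et · P²/0.05, and the perception distance of a lamp of given intensity is found by inverting Allard's law. A sketch under those assumptions (function names and the bisection search are ours):

```python
import math

LN20 = math.log(1.0 / 0.05)  # ln(1/0.05), the factor in equation 9.6

def just_visible_intensity(P, Et):
    """Intensity (cd) of a lamp only just visible at distance x = P when
    the MOR is P (equations 9.12 and 9.6): I = Et * P**2 / 0.05."""
    return Et * P ** 2 / 0.05

def perception_distance(P, intensity_cd, Et):
    """Distance at which a lamp of the given intensity (cd) is just visible
    for MOR P, solving Allard's law E(x) = Et (equation 9.11) with
    sigma = ln(1/0.05)/P.  E(x) decreases with x, so bisection suffices."""
    sigma = LN20 / P
    illuminance = lambda x: intensity_cd * x ** -2 * math.exp(-sigma * x)
    lo, hi = 1.0, 1.0e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if illuminance(mid) > Et:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Twilight threshold, MOR = 1 000 m: a 20 cd lamp is just visible at 1 km,
print(round(just_visible_intensity(1000, 10 ** -6.0)))    # 20
# and a 100 cd lamp is seen out to about 1 340 m:
print(round(perception_distance(1000, 100, 10 ** -6.0)))
```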

An ordinary 100 W incandescent bulb provides a light source of approximately 100 cd.

In view of the substantial differences caused by relatively small variations in the values of the visual illuminance threshold and by different conditions of general illumination, it is clear that Table 9.2 is not intended to provide an absolute criterion of visibility, but indicates the need for calibrating the lights used for night-time estimation of MOR so as to ensure as far as possible that night observations made in different locations and by different Services are comparable.

9.2.4 Estimation of meteorological optical range in the absence of distant objects

At certain locations (open plains, ships, and so forth), or when the horizon is restricted (valley or cirque), or in the absence of suitable visibility objects, it is impossible to make direct estimations, except for relatively low visibilities. In such cases, unless instrumental methods are available, values of MOR higher than those for which visibility points are available have to be estimated from the general transparency of the atmosphere. This can be done by noting the degree of clarity with which the most distant visibility objects stand out. Distinct outlines and features, with little or no fuzziness of colours, are an indication that MOR is greater than the distance between the visibility object and the observer. On the other hand, indistinct visibility objects are an indication of the presence of haze or of other phenomena reducing MOR.

9.2.5 Accuracy of visual observations

General

Observations of objects should be made by observers who have been suitably trained and have what is usually referred to as normal vision. This human factor has considerable significance in the estimation of visibility under given atmospheric conditions, since the perception and visual interpretation capacity vary from one individual to another.

Accuracy of daytime visual estimates of meteorological optical range

Observations show that estimates of MOR based on instrumental measurements are in reasonable agreement with daytime estimates of visibility. Visibility and MOR should be equal if the observer’s contrast threshold is 0.05 (using the criterion of recognition) and the extinction coefficient is the same in the vicinity of both the instrument and the observer. Middleton (1952) found, from 1 000 measurements, that the mean contrast ratio threshold for a group of 10 young airmen trained as meteorological observers was 0.033 with a range, for individual observations, from less than 0.01 to more than 0.2. Sheppard (1983) has pointed out that when the Middleton data are plotted on a logarithmic scale they show good agreement with a Gaussian distribution. If the Middleton data represent normal observing conditions, we must expect daylight estimates of visibility to average about 14 per cent higher than MOR with a standard deviation of 20 per cent of MOR. These calculations are in excellent agreement with the results from the First WMO Intercomparison of Visibility Measurements (WMO, 1990b), where it was found that, during daylight, the observers’ estimates of visibility were about 15 per cent higher than instrumental measurements of MOR. The interquartile range of differences between the observer and the instruments was about 30 per cent of the measured MOR. This corresponds to a standard deviation of about 22 per cent, if the distribution is Gaussian.

Accuracy of night-time visual estimates of meteorological optical range

From Table 9.2 in section 9.2.3, it is easy to see how misleading the values of MOR can be if based simply on the distance at which an ordinary light is visible, without making due allowance for the intensity of the light and the viewing conditions. This emphasizes the importance of giving precise, explicit instructions to observers and of providing training for visibility observations.

Note that, in practice, the use of the methods and tables described above for preparing plans of luminous objects is not always easy. The light sources used as objects are not necessarily well located or of stable, known intensity, and are not always point sources. With respect to this last point, the lights may be wide- or narrow-beam, grouped, or even of different colours to which the eye has different sensitivity. Great caution must be exercised in the use of such lights.
The estimation of the visual range of lights can produce reliable estimates of visibility at night only when lights and their background are carefully chosen; when the viewing conditions of the observer are carefully controlled; and when considerable time can be devoted to the observation to ensure that the observer’s eyes are fully accommodated to the viewing conditions. Results from the First WMO Intercomparison of Visibility Measurements (WMO, 1990b) show that, during the hours of darkness, the observer’s estimates of visibility were about 30 per cent higher than instrumental measurements of MOR. The interquartile range of differences between the observer and the instruments was only slightly greater than that found during daylight (about 35 to 40 per cent of the measured MOR).
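The 14 per cent figure quoted above follows directly from Koschmieder's law: an observer whose contrast threshold is ε sees a black object out to V = P · ln(1/ε)/ln(1/0.05), so a threshold below 0.05 yields a visibility estimate above MOR. A minimal check (the function name is ours):

```python
import math

def visibility_vs_mor(epsilon):
    """Ratio V/P of the visibility seen by an observer with contrast
    threshold epsilon to the MOR (threshold 0.05), from Koschmieder's law:
    epsilon = exp(-sigma * V) and 0.05 = exp(-sigma * P)."""
    return math.log(1.0 / epsilon) / math.log(1.0 / 0.05)

# Middleton's mean threshold of 0.033 implies daylight estimates about
# 14 per cent above MOR:
print(round(100 * (visibility_vs_mor(0.033) - 1)))  # 14
```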

distant object with that of the sky background (for example, the Lohle telephotometer), but they are not normally used for routine measurements since, as stated above, it is preferable to use direct visual observations. These instruments may, however, be useful for extrapolating MOR beyond the most distant object. Visual extinction meters A very simple instrument for use with a distant light at night takes the form of a graduated neutral filter, which reduces the light in a known proportion and can be adjusted until the light is only just visible. The meter reading gives a measure of the transparency of the air between the light and the observer, and, from this, the extinction coefficient can be calculated. The overall accuracy depends mainly on variations in the sensitivity of the eye and on fluctuations in the radiant intensity of the light source. The error increases in proportion to MOR. The advantage of this instrument is that it enables MOR values over a range from 100 m to 5 km to be measured with reasonable accuracy, using only three well-spaced lights, whereas without it a more elaborate series of lights would be essential if the same degree of accuracy were to be achieved. However, the method of using such an instrument (determining the point at which a light appears or disappears) considerably affects the accuracy and homogeneity of the measurements. transmissometers The use of a transmissometer is the method most commonly used for measuring the mean extinction coefficient in a horizontal cylinder of air between a transmitter, which provides a modulated flux light source of constant mean power, and a receiver incorporating a photodetector (generally a photodiode at the focal point of a parabolic mirror or a lens). The most frequently used light source is a halogen lamp or xenon pulse discharge tube. Modulation of the light source prevents disturbance from sunlight. 
The transmission factor is determined from the photodetector output and this allows the extinction coefficient and the MOR to be calculated. Since transmissometer estimates of MOR are based on the loss of light from a collimated beam, which depends on scatter and absorption, they are closely related to the definition of MOR. A good, well-maintained transmissometer


9.3 Instrumental measurement of the meteorological optical range



The adoption of certain assumptions allows the conversion of instrumental measurements into MOR. It is not always advantageous to use an instrument for daytime measurements if a number of suitable visibility objects can be used for direct observations. However, a visibility-measuring instrument is often useful for night observations, when no visibility objects are available, or for automatic observing systems. Instruments for the measurement of MOR may be classified into one of the following two categories:
(a) Those measuring the extinction coefficient or transmission factor of a horizontal cylinder of air: attenuation of the light is due to both scattering and absorption by particles in the air along the path of the light beam;
(b) Those measuring the scatter coefficient of light from a small volume of air: in natural fog, absorption is often negligible and the scatter coefficient may be considered as being the same as the extinction coefficient.

Both of the above categories include instruments used for visual measurements by an observer and instruments using a light source and an electronic device comprising a photoelectric cell or a photodiode to detect the emitted light beam. The main disadvantage of visual measurements is that substantial errors may occur if observers do not allow sufficient time for their eyes to become accustomed to the conditions (particularly at night). The main characteristics of these two categories of MOR-measuring instruments are described below.

9.3.2 Instruments measuring the extinction coefficient

Telephotometric instruments

A number of telephotometers have been designed for daytime measurement of the extinction coefficient by comparing the apparent luminance of a



working within its range of highest accuracy provides a very good approximation to the true MOR. There are two types of transmissometer: (a) Those with a transmitter and a receiver in different units and at a known distance from each other, as illustrated in Figure 9.2;

hydrometeors (such as rain or snow) or lithometeors (such as blowing sand), MOR values must be treated with circumspection.

If the measurements are to remain acceptable over a long period, the luminous flux must remain constant during this same period. When halogen light is used, the problem of lamp filament ageing is less critical and the flux remains more constant. However, some transmissometers use feedback systems (sensing and measuring a small portion of the emitted flux), giving greater homogeneity of the luminous flux with time or compensation for any change.

As will be seen in the section dealing with the accuracy of MOR measurements, the value adopted for the transmissometer baseline determines the MOR measurement range. It is generally accepted that this range is between about 1 and 25 times the baseline length. A further refinement of the transmissometer measurement principle is to use two receivers or retroreflectors at different distances to extend both the lower limit (short baseline) and the upper limit (long baseline) of the MOR measurement range. These instruments are referred to as "double baseline" instruments. In some cases of very short baselines (a few metres), a light-emitting diode has been used as a light source, namely, a monochromatic light close to the infrared. However, it is generally recommended that polychromatic light in the visible spectrum be used to obtain a representative extinction coefficient.

Visibility lidars

The lidar (light detection and ranging) technique, as described for the laser ceilometer in Part I, Chapter 15, may be used to measure visibility when the beam is directed horizontally. The range-resolved profile of the backscattered signal S depends on the output signal S0, the distance x, the backscatter coefficient β and the transmission factor T, such that:

S(x) ~ S0 · 1/x² · β(x) · T²   (9.15)

where T = exp(–∫ σ(x) dx), the integral being taken over the path from the instrument out to the distance x.




Figure 9.2. Double-ended transmissometer

(b) Those with a transmitter and a receiver in the same unit, with the emitted light being reflected by a remote mirror or retroreflector (the light beam travelling to the reflector and back), as illustrated in Figure 9.3.

Figure 9.3. Single-ended transmissometer

The distance covered by the light beam between the transmitter and the receiver is commonly referred to as the baseline and may range from a few metres to 150 m (or even 300 m), depending on the range of MOR values to be measured and the applications for which these measurements are to be used. As seen in the expression for MOR in equation 9.7, the relation:

P = a · ln(0.05)/ln(T)   (9.14)

where a is the transmissometer baseline, is the basic formula for transmissometer measurements. Its validity depends on the assumptions that the application of the Koschmieder and Bouguer-Lambert laws is acceptable and that the extinction coefficient along the transmissometer baseline is the same as that in the path between an observer and an object at MOR. The relationship between the transmission factor and MOR is valid for fog droplets, but when visibility is reduced by other

Under the condition of horizontal homogeneity of the atmosphere, β and σ are constant and the extinction coefficient σ is determined from only two points of the profile:

ln(S(x) · x²/S0) ~ ln β – 2σx   (9.16)
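As an illustration of this two-point retrieval, the following sketch (Python; the function names are illustrative, not taken from any operational system) recovers σ from two range-corrected lidar returns and converts it to MOR via Koschmieder's law with the 5 per cent contrast threshold:

```python
import math

def extinction_from_lidar(x1, s1, x2, s2):
    """Extinction coefficient sigma from two range gates of a horizontal
    lidar profile, assuming a homogeneous atmosphere (equation 9.16):
    ln(S(x) * x^2) ~ ln(beta) - 2*sigma*x, so the slope between two
    range-corrected points gives -2*sigma."""
    y1 = math.log(s1 * x1 ** 2)
    y2 = math.log(s2 * x2 ** 2)
    return (y1 - y2) / (2.0 * (x2 - x1))

def mor_from_sigma(sigma):
    """MOR from the extinction coefficient via Koschmieder's law with
    a 5 per cent contrast threshold: P = -ln(0.05) / sigma."""
    return -math.log(0.05) / sigma
```

For example, with σ = 0.003 m⁻¹ the second function gives an MOR of roughly 1 km, consistent with the remark below that lidar accuracy deteriorates as MOR approaches 2 000 m.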



In an inhomogeneous atmosphere the range-dependent quantities β(x) and σ(x) may be separated with the Klett algorithm (Klett, 1985). As MOR approaches 2 000 m, the accuracy of the lidar method becomes poor.

9.3.3 Instruments measuring the scatter coefficient

located in the same housing and below the light source, where it receives the light backscattered by the volume of air sampled. Several researchers have tried to find a relationship between visibility and the coefficient of back scatter, but it is generally accepted that the correlation is not satisfactory.

The attenuation of light in the atmosphere is due to both scattering and absorption. The presence of pollutants in the vicinity of industrial zones, of ice crystals (freezing fog) or of dust may make the absorption term significant. However, in general, the absorption factor is negligible and the scatter phenomena due to reflection, refraction or diffraction on water droplets constitute the main factor reducing visibility. The extinction coefficient may then be considered as equal to the scatter coefficient, and an instrument for measuring the latter can, therefore, be used to estimate MOR.

Measurements are most conveniently taken by concentrating a beam of light on a small volume of air and by determining, through photometric means, the proportion of light scattered in a sufficiently large solid angle and in directions which are not critical. Provided that it is completely screened from interference from other sources of light, or that the light source is modulated, an instrument of this type can be used during both the day and night. The scatter coefficient b is a function that may be written in the following form:

b = (2π/Φv) ∫ I(φ) sin(φ) dφ, the integral being taken from φ = 0 to φ = π   (9.17)

where Φv is the flux entering the volume of air V and I(φ) is the intensity of the light scattered in direction φ with respect to the incident beam.

Note that the accurate determination of b requires the measurement and integration of light scattered out of the beam over all angles. Practical instruments measure the scattered light over a limited angle and rely on a high correlation between the limited integral and the full integral. Three measurement methods are used in these instruments: back scatter, forward scatter, and scatter integrated over a wide angle.

(a) Back scatter: In these instruments (Figure 9.4), a light beam is concentrated on a small volume of air in front of the transmitter, the receiver being




Figure 9.4. Visibility meter measuring back scatter

(b) Forward scatter: Several authors have shown that the best angle is between 20° and 50°. The instruments, therefore, comprise a transmitter and a receiver, the angle between the beams being 20° to 50°. Another arrangement involves placing either a single diaphragm half-way between a transmitter and a receiver, or two diaphragms each a short distance from either a transmitter or a receiver. Figure 9.5 illustrates the two configurations that have been used.





Figure 9.5. Two configurations of visibility meters measuring forward scatter


(c) Scatter over a wide angle: Such an instrument, illustrated in Figure 9.6, which is usually known as an integrating nephelometer, is based on the principle of measuring scatter



over as wide an angle as possible, ideally 0 to 180°, but in practice about 0 to 120°. The receiver is positioned perpendicularly to the axis of the light source which provides light over a wide angle. Although, in theory, such an instrument should give a better estimate of the scatter coefficient than an instrument measuring over a small range of scattering angles, in practice it is more difficult to prevent the presence of the instrument from modifying the extinction coefficient in the air sampled. Integrating nephelometers are not widely used for measuring MOR, but this type of instrument is often used for measuring pollutants.
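Equation 9.17 can be approximated numerically from discrete angular samples; the following sketch (Python; the names are illustrative) applies the trapezoidal rule over whatever angular range an instrument actually covers:

```python
import math

def scatter_coefficient(phi, intensity, flux_v):
    """Approximate b = (2*pi/flux_v) * integral of I(phi)*sin(phi) dphi
    (equation 9.17) with the trapezoidal rule.
    phi: scattering angles in radians, in increasing order;
    intensity: I(phi) sampled at those angles;
    flux_v: flux entering the sampled volume of air."""
    total = 0.0
    for i in range(len(phi) - 1):
        f0 = intensity[i] * math.sin(phi[i])
        f1 = intensity[i + 1] * math.sin(phi[i + 1])
        total += 0.5 * (f0 + f1) * (phi[i + 1] - phi[i])
    return 2.0 * math.pi * total / flux_v
```

For isotropic scattering (I constant) integrated over the full range 0 to π the result tends to 4πI/Φv, which provides a convenient check; a real nephelometer covering only about 0° to 120° yields the truncated integral discussed above.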


9.3.4 Instrument exposure and siting

Measuring instruments should be located in positions which ensure that the measurements are representative for the intended purpose. Thus, for general synoptic purposes, the instruments should be installed at locations free from local atmospheric pollution, for example, smoke, industrial pollution or dusty roads. The volume of air in which the extinction coefficient or scatter coefficient is measured should normally be at the eye level of an observer, about 1.5 m above the ground.

It should be borne in mind that transmissometers and instruments measuring the scatter coefficient should be installed in such a way that the sun is not in the optical field at any time of the day, either by mounting with a north-south optical axis (to ±45°) horizontally, for latitudes up to 50°, or by using a system of screens or baffles.

For aeronautical purposes, measurements are to be representative of conditions at the airport. These conditions, which relate more specifically to airport operations, are described in Part II, Chapter 2.

The instruments should be installed in accordance with the directions given by the manufacturers. Particular attention should be paid to the correct alignment of transmissometer transmitters and receivers and to the correct adjustment of the light beam. The poles on which the transmitters/receivers are mounted should be mechanically firm (while remaining frangible when installed at airports) to avoid any misalignment due to ground movement during freezing and, particularly, during thawing. In addition, the mountings must not distort under the thermal stresses to which they are exposed.

9.3.5 Calibration and maintenance



Figure 9.6. Visibility meter measuring scattered light over a wide angle

In all the above instruments, as for most transmissometers, the receivers comprise photodetector cells or photodiodes. The light used is pulsed (for example, a high-intensity discharge into xenon). These types of instruments require only limited space (1 to 2 m in general). They are, therefore, useful when no visibility objects or light sources are available (on board ships, by roadsides, and so forth). Since the measurement relates only to a very small volume of air, the representativeness of measurements for the general state of the atmosphere at the site may be open to question. However, this representativeness can be improved by averaging a number of samples or measurements. In addition, smoothing of the results is sometimes achieved by eliminating extreme values.

The use of these types of instruments has often been limited to specific applications (for example, highway visibility measurements, or determining whether fog is present) or to cases where less precise MOR measurements are adequate. These instruments are now being used in increasing numbers in automatic meteorological observation systems because of their ability to measure MOR over a wide range and their relatively low susceptibility to pollution compared with transmissometers.
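The averaging with extreme-value elimination mentioned above can be as simple as a trimmed mean over the samples collected in an averaging period; a minimal sketch (Python; the trimming depth is an illustrative choice, not a prescribed value):

```python
def smoothed_mor(samples, trim=2):
    """Trimmed mean of MOR samples from one averaging period:
    discard the 'trim' lowest and 'trim' highest values, then
    average the rest, as a simple form of extreme-value
    elimination before reporting a single MOR value."""
    ordered = sorted(samples)
    kept = ordered[trim:len(ordered) - trim] if trim else ordered
    return sum(kept) / len(kept)
```

For instance, smoothed_mor([900, 1000, 1100, 1050, 5000, 100, 1000], trim=1) averages the five central values, discarding the two outliers produced by a patch of fog or a passing obstruction.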

In order to obtain satisfactory and reliable observations, instruments for the measurement of MOR should be operated and maintained under the conditions prescribed by the manufacturers, and should be kept continuously in good working order. Regular checks and calibration in accordance with the manufacturer’s recommendations should ensure optimum performance. Calibration in very good visibility (over 10 to 15 km) should be carried out regularly. Atmospheric conditions resulting in erroneous calibration



must be avoided. When, for example, there are strong updraughts, or after heavy rain, considerable variations in the extinction coefficient are encountered in the layer of air close to the ground; if several transmissometers are in use on the site (as is the case at airports), dispersion is observed in their measurements. Calibration should not be attempted under such conditions.

Note that in the case of most transmissometers, the optical surfaces must be cleaned regularly, and daily servicing must be planned for certain instruments, particularly at airports. The instruments should be cleaned during and/or after major atmospheric disturbances, since rain or violent showers together with strong wind may cover the optical systems with a large number of water droplets and solid particles, resulting in major MOR measurement errors. The same is true for snowfall, which could block the optical systems. Heating systems are often placed at the front of the optical systems to improve instrument performance under such conditions. Air-blowing systems are sometimes used to reduce these problems and the need for frequent cleaning. However, it must be pointed out that such blowing and heating systems may generate air currents warmer than the surrounding air and may adversely affect the measurement of the extinction coefficient of the air mass. In arid zones, sandstorms or blowing sand may block the optical system and even damage it.

9.3.6 Sources of error in the measurement of meteorological optical range and estimates of accuracy

caution. Another factor that must be taken into account when discussing representativeness of measurements is the homogeneity of the atmosphere itself. At all MOR values, the extinction coefficient of a small volume of the atmosphere normally fluctuates rapidly and irregularly, and individual measurements of MOR from scatter meters and short-baseline transmissometers, which have no in-built smoothing or averaging system, show considerable dispersion. It is, therefore, necessary to take many samples and to smooth or average them to obtain a representative value of MOR. The analysis of the results from the First WMO Intercomparison of Visibility Measurements (WMO, 1990b) indicates that, for most instruments, no benefit is gained by averaging over more than 1 min, but for the "noisiest" instruments an averaging time of 2 min is preferable.

Accuracy of telephotometers and visual extinction meters

Visual measurements based on the extinction coefficient are difficult to take. The main source of error is the variability and uncertainty of the performance of the human eye. These errors have been described in the sections dealing with the methods of visual estimation of MOR.

Accuracy of transmissometers

The sources of error in transmissometer measurements may be summarized as follows:
(a) Incorrect alignment of transmitters and receivers;
(b) Insufficient rigidity and stability of transmitter/receiver mountings (freezing and thawing of the ground, thermal stress);
(c) Ageing and incorrect centring of lamps;
(d) Calibration error (visibility too low or calibration carried out in unstable conditions affecting the extinction coefficient);
(e) Instability of the system electronics;
(f) Remote transmission of the extinction coefficient as a low-current signal subject to interference from electromagnetic fields (particularly at airports).
It is preferable to digitize the signals;
(g) Disturbance due to the rising or setting of the sun, and poor initial orientation of the transmissometers;
(h) Atmospheric pollution dirtying the optical systems;
(i) Local atmospheric conditions (for example,

General

All practical operational instruments for the measurement of MOR sample a relatively small region of the atmosphere compared with that scanned by a human observer. Instruments can provide an accurate measurement of MOR only when the volume of air that they sample is representative of the atmosphere around the point of observation out to a radius equal to MOR. It is easy to imagine a situation, with patchy fog or a local rain or snow storm, in which the instrument reading is misleading. However, experience has shown that such situations are not frequent and that the continuous monitoring of MOR using an instrument will often lead to the detection of changes in MOR before they are recognized by an unaided observer. Nevertheless, instrumental measurements of MOR must be interpreted with



rain showers and strong winds, snow) giving unrepresentative extinction coefficient readings or diverging from the Koschmieder law (snow, ice crystals, rain, and so forth).

The use of a transmissometer that has been properly calibrated and well maintained should give good representative MOR measurements if the extinction coefficient in the optical path of the instrument is representative of the extinction coefficient everywhere within the MOR. However, a transmissometer has only a limited range over which it can provide accurate measurements of MOR. A relative error curve for MOR may be plotted by differentiating the basic transmissometer formula (see equation 9.14). Figure 9.7 shows how the relative error varies with transmission, assuming that the measurement accuracy of the transmission factor T is 1 per cent.
of 1.25 and 10.7 times the baseline length, the relative MOR error should be low and of the order of 5 per cent, assuming that the error of T is 1 per cent. The relative error of MOR exceeds 10 per cent when MOR is less than 0.87 times the baseline length or more than 27 times this length. When the measurement range is extended further, the error increases rapidly and becomes unacceptable. However, results from the First WMO Intercomparison of Visibility Measurements (WMO, 1990b) show that the best transmissometers, when properly calibrated and maintained, can provide measurements of MOR with a standard error of about 10 per cent when MOR is up to 60 times their baseline.

Accuracy of scatter meters

The principal sources of error in measurements of MOR taken with scatter meters are as follows:
(a) Calibration error (visibility too low or calibration carried out in unstable conditions affecting the extinction coefficient);
(b) Lack of repeatability in terms of procedure or materials when using opaque scatterers for calibration;
(c) Instability of the system electronics;
(d) Remote transmission of the scatter coefficient as a low-current or voltage signal subject to interference from electromagnetic fields (particularly at airports). It is preferable to digitize the signals;
(e) Disturbance due to the rising or setting of the sun, and poor initial orientation of the instrument;
(f) Atmospheric pollution dirtying the optical systems (these instruments are much less sensitive to dirt on their optics than transmissometers, but heavy soiling does have an effect);
(g) Atmospheric conditions (for example, rain, snow, ice crystals, sand, local pollution) giving a scatter coefficient that differs from the extinction coefficient.

Results from the First WMO Intercomparison of Visibility Measurements (WMO, 1990b) show that scatter meters are generally less accurate than transmissometers at low values of MOR and show greater variability in their readings.
There was also evidence that scatter meters, as a class, were more affected by precipitation than transmissometers. However, the best scatter meters showed little or no susceptibility to precipitation and provided estimates of MOR with

[Figure 9.7 (curve not reproduced): relative error in MOR for a transmissometer baseline of 75 m, with annotated measuring ranges of MOR 55 m to 4 000 m, 65 m to 2 000 m and 95 m to 800 m.]
Figure 9.7. Error in measurements of meteorological optical range as a function of a 1 per cent error in transmittance

This 1 per cent value of transmission error, which may be considered as correct for many older instruments, does not include instrument drift, dirt on optical components, or the scatter of measurements due to the phenomenon itself. If the accuracy drops to around 2 to 3 per cent (taking the other factors into account), the relative error values given on the vertical axis of the graph must be multiplied by the same factor of 2 or 3. Note also that the relative MOR measurement error increases exponentially at each end of the curve, thereby setting both upper and lower limits to the MOR measurement range. The example shown by the curve indicates the limit of the measuring range if an error of 5, 10 or 20 per cent is accepted at each end of the range measured, with a baseline of 75 m. It may also be deduced that, for MOR measurements between the limits



standard deviation of about 10 per cent over a range of MOR from about 100 m to 50 km. Almost all the scatter meters in the intercomparison exhibited significant systematic error over part of their measurement range. Scatter meters showed very

low susceptibility to contamination of their optical systems. An overview of the differences between scatter meters and transmissometers is given by WMO (1992b).
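The transmissometer error behaviour discussed in this section follows directly from differentiating equation 9.14; a numerical sketch (Python; the function names are illustrative), taking the transmittance error as an absolute 0.01:

```python
import math

def mor_from_transmittance(baseline, t):
    """MOR from the transmission factor T measured over a baseline a
    (equation 9.14): P = a * ln(0.05) / ln(T)."""
    return baseline * math.log(0.05) / math.log(t)

def relative_mor_error(baseline, mor, delta_t=0.01):
    """Relative MOR error dP/P for an absolute transmittance error
    delta_t, obtained by differentiating equation 9.14:
    dP/P = delta_t / (T * |ln T|), where T = 0.05**(a/P)."""
    t = 0.05 ** (baseline / mor)
    return delta_t / (t * abs(math.log(t)))
```

For a 75 m baseline this reproduces the limits quoted in the text: the relative error is close to 5 per cent at MOR of 1.25 and 10.7 baseline lengths, and close to 10 per cent at 0.87 and 27 baseline lengths.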



References and further reading

International Electrotechnical Commission, 1987: International Electrotechnical Vocabulary. Chapter 845: Lighting, IEC 50.

Klett, J.D., 1985: Lidar inversion with variable backscatter/extinction ratios. Applied Optics, 24, pp. 1638–1643.

Middleton, W.E.K., 1952: Vision Through the Atmosphere. University of Toronto Press, Toronto.

Sheppard, B.E., 1983: Adaptation to MOR. Preprints of the Fifth Symposium on Meteorological Observations and Instrumentation (Toronto, 11–15 April 1983), pp. 226–269.

World Meteorological Organization, 1989: Guide on the Global Observing System. WMO-No. 488, Geneva.

World Meteorological Organization, 1990a: Guide on Meteorological Observation and Information Distribution Systems at Aerodromes. WMO-No. 731, Geneva.

World Meteorological Organization, 1990b: The First WMO Intercomparison of Visibility Measurements: Final Report (D.J. Griggs, D.W. Jones, M. Ouldridge and W.R. Sparks). Instruments and Observing Methods Report No. 41, WMO/TD-No. 401, Geneva.

World Meteorological Organization, 1992a: International Meteorological Vocabulary. WMO-No. 182, Geneva.

World Meteorological Organization, 1992b: Visibility measuring instruments: Differences between scatterometers and transmissometers (J.P. van der Meulen). Papers Presented at the WMO Technical Conference on Instruments and Methods of Observation (TECO-92) (Vienna, Austria, 11–15 May 1992), Instruments and Observing Methods Report No. 49, WMO/TD-No. 462, Geneva.

World Meteorological Organization, 2003: Manual on the Global Observing System. WMO-No. 544, Geneva.

CHAPTER 10

MEASUREMENT OF EVAPORATION

10.1 General

10.1.1 Definitions



10.1.3 Meteorological requirements


The International Glossary of Hydrology (WMO/UNESCO, 1992) and the International Meteorological Vocabulary (WMO, 1992) present the following definitions (but note some differences):

(Actual) evaporation: Quantity of water evaporated from an open water surface or from the ground.

Transpiration: Process by which water from vegetation is transferred into the atmosphere in the form of vapour.

(Actual) evapotranspiration (or effective evapotranspiration): Quantity of water vapour evaporated from the soil and plants when the ground is at its natural moisture content.

Potential evaporation (or evaporativity): Quantity of water vapour which could be emitted by a surface of pure water, per unit surface area and unit time, under existing atmospheric conditions.

Potential evapotranspiration: Maximum quantity of water capable of being evaporated in a given climate from a continuous expanse of vegetation covering the whole ground and well supplied with water. It includes evaporation from the soil and transpiration from the vegetation from a specific region in a specific time interval, expressed as depth of water.

If the term potential evapotranspiration is used, the types of evaporation and transpiration occurring must be clearly indicated. For more details on these terms refer to WMO (1994).

10.1.2 Units and scales

Estimates both of evaporation from free water surfaces and from the ground, and of evapotranspiration from vegetation-covered surfaces, are of great importance to hydrological modelling and in hydrometeorological and agricultural studies, for example, for the design and operation of reservoirs and of irrigation and drainage systems.

Performance requirements are given in Part I, Chapter 1. For daily totals, an extreme outer range is 0 to 100 mm, with a resolution of 0.1 mm. The uncertainty, at the 95 per cent confidence level, should be ±0.1 mm for amounts of less than 5 mm, and ±2 per cent for larger amounts. A figure of 1 mm has been proposed as an achievable accuracy. In principle, the usual instruments could meet these accuracy requirements, but difficulties with exposure and practical operation cause much larger errors (WMO, 1976).

Factors affecting the rate of evaporation from any body or surface can be broadly divided into two groups, meteorological factors and surface factors, either of which may be rate-limiting. The meteorological factors may, in turn, be subdivided into energy and aerodynamic variables. Energy is needed to change water from the liquid to the vapour phase; in nature, this is largely supplied by solar and terrestrial radiation. Aerodynamic variables, such as wind speed at the surface and the vapour pressure difference between the surface and the lower atmosphere, control the rate of transfer of the evaporated water vapour.

It is useful to distinguish between situations where free water is present on the surface and those where it is not. Factors of importance include the amount and state of the water and also those surface characteristics which affect the transfer process to the air or through the body surface. Resistance to moisture transfer to the atmosphere depends, for example, on surface roughness; in arid and semi-arid areas, the size and shape of the evaporating surface is also extremely important.
Transpiration from vegetation, in addition to the meteorological and surface factors already noted, is largely determined by plant characteristics and responses. These include, for example, the number

The rate of evaporation is defined as the amount of water evaporated from a unit surface area per unit of time. It can be expressed as the mass or volume of liquid water evaporated per unit area per unit of time, usually as the equivalent depth of liquid water evaporated per unit of time from the whole area. The unit of time is normally a day. The amount of evaporation should be read in millimetres (WMO, 2003). Depending on the type of instrument, the usual measuring accuracy is 0.1 to 0.01 mm.
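Expressing evaporation as an equivalent depth makes conversions straightforward, since 1 mm of water over 1 m² corresponds to 1 litre (about 1 kg); a minimal sketch (Python; the reservoir area in the usage note is a made-up example value):

```python
def evaporated_volume_m3(depth_mm, area_m2):
    """Volume of water (m^3) lost from a surface for a given
    evaporation depth: 1 mm over 1 m^2 is 1 litre, i.e. 0.001 m^3."""
    return depth_mm / 1000.0 * area_m2
```

For example, a daily evaporation of 5 mm from a 2 km² reservoir, evaporated_volume_m3(5.0, 2e6), corresponds to about 10 000 m³ of water.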



and size of stomata (openings in the leaves), and whether these are open or closed. Stomatal resistance to moisture transfer shows a diurnal response, but is also considerably dependent upon the availability of soil moisture to the rooting system. The availability of soil moisture for the roots and for evaporation from bare soil depends on the capillary supply, namely, on the texture and composition of the soil. Evaporation from lakes and reservoirs is influenced by the heat storage of the water body.

Methods for estimating evaporation and evapotranspiration are generally indirect: either by point measurements by an instrument or gauge, or by calculation using other measured meteorological variables (WMO, 1997).

10.1.4 Measurement methods

For reservoirs or lakes, and for plots or small catchments, estimates may be made by water budget, energy budget, aerodynamic and complementarity approaches. The latter techniques are discussed in section 10.5. It should also be emphasized that different evaporimeters or lysimeters represent physically different measurements. The adjustment factors required for them to represent lake or actual or potential evaporation and evapotranspiration are necessarily different. Such instruments and their exposure should, therefore, always be described very carefully and precisely, in order to understand the measuring conditions as fully as possible. More details on all methods are found in WMO (1994).

Direct measurements of evaporation or evapotranspiration from extended natural water or land surfaces are not practicable at present. However, several indirect methods derived from point measurements or other calculations have been developed which provide reasonable results.

The water loss from a standard saturated surface is measured with evaporimeters, which may be classified as atmometers and pan or tank evaporimeters. These instruments do not directly measure either evaporation from natural water surfaces, actual evapotranspiration or potential evapotranspiration. The values obtained cannot, therefore, be used without adjustment to arrive at reliable estimates of lake evaporation or of actual and potential evapotranspiration from natural surfaces.

An evapotranspirometer (lysimeter) is a vessel or container placed below the ground surface and filled with soil, on which vegetation can be cultivated. It is a multi-purpose instrument for the study of several phases of the hydrological cycle under natural conditions. Estimates of evapotranspiration (or evaporation in the case of bare soil) can be made by measuring and balancing all the other water budget components of the container, namely, precipitation, underground water drainage and change in the water storage of the block of soil. Usually, surface runoff is eliminated. Evapotranspirometers can also be used for the estimation of the potential evaporation of the soil or of the potential evapotranspiration of plant-covered soil, if the soil moisture is kept at field capacity.
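The lysimeter water budget just described reduces to a simple balance once all components are expressed as depths over the container area; a sketch (Python; function and variable names are illustrative):

```python
def evapotranspiration_mm(precipitation_mm, drainage_mm, storage_change_mm):
    """Evapotranspiration from a lysimeter water budget with surface
    runoff eliminated: ET = precipitation - drainage - change in
    soil-water storage, all as depths in mm over the container area."""
    return precipitation_mm - drainage_mm - storage_change_mm
```

For example, 12.0 mm of rain with 3.5 mm drained from the container and a 2.0 mm increase in stored water implies 6.5 mm of evapotranspiration.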

10.2 Atmometers


10.2.1 Instrument types

An atmometer is an instrument that measures the loss of water from a wetted, porous surface. The wetted surfaces are either porous ceramic spheres, cylinders or plates, or exposed filter-paper discs saturated with water.

The evaporating element of the Livingstone atmometer is a ceramic sphere of about 5 cm in diameter, connected to a water reservoir bottle by a glass or metal tube; the atmospheric pressure on the surface of the water in the reservoir keeps the sphere saturated with water.

The Bellani atmometer consists of a ceramic disc fixed in the top of a glazed ceramic funnel, into which water is conducted from a burette that acts as a reservoir and measuring device.

The evaporating element of the Piche evaporimeter is a disc of filter paper attached to the underside of an inverted graduated cylindrical tube, closed at one end, which supplies water to the disc. Successive measurements of the volume of water remaining in the graduated tube give the amount lost by evaporation in any given time.

10.2.2 Measurements taken by atmometers

Although atmometers are frequently considered to give a relative measure of evaporation from plant surfaces, their measurements do not, in fact, bear any simple relation to evaporation from natural surfaces.



Readings from Piche evaporimeters with carefully standardized shaded exposures have been used with some success to derive the aerodynamic term (a product of a wind function and the saturation vapour pressure deficit) required for evaporation estimation by, for example, Penman's combination method, after local correlations between them were obtained.

While it may be possible to relate the loss from atmometers to that from a natural surface empirically, a different relation may be expected for each type of surface and for differing climates. Atmometers are likely to remain useful in small-scale surveys. Their great advantages are their small size, low cost and small water requirements. Dense networks of atmometers can be installed over a small area for micrometeorological studies. The use of atmometers is not recommended for water resource surveys if other data are available.

10.2.3 Sources of error in atmometers

The adoption of the Russian 20 m2 tank as the international reference evaporimeter has been recommended. 10.3.1 united states class a pan

The United States Class A pan is of cylindrical design, 25.4 cm deep and 120.7 cm in diameter. The bottom of the pan is supported 3 to 5 cm above the ground level on an open-frame wooden platform, which enables air to circulate under the pan, keeps the bottom of the pan above the level of water on the ground during rainy weather, and enables the base of the pan to be inspected without difficulty. The pan itself is constructed of 0.8 mm thick galvanized iron, copper or monel metal, and is normally left unpainted. The pan is filled to 5 cm below the rim (which is known as the reference level). The water level is measured by means of either a hookgauge or a fixed-point gauge. The hookgauge consists of a movable scale and vernier fitted with a hook, the point of which touches the water surface when the gauge is correctly set. A stilling well, about 10 cm across and about 30 cm deep, with a small hole at the bottom, breaks any ripples that may be present in the tank, and serves as a support for the hookgauge during an observation. The pan is refilled whenever the water level, as indicated by the gauge, drops by more than 2.5 cm from the reference level. 10.3.2 russian ggi-3000 pan

One of the major problems in the operation of atmometers is keeping the evaporating surfaces clean. Dirty surfaces will affect significantly the rate of evaporation, in a way comparable to the wet bulb in psychrometry. Furthermore, the effect of differences in their exposure on evaporation measurements is often remarkable. This applies particularly to the exposure to air movement around the evaporating surface when the instrument is shaded.


evaPoration Pans anD tanks

Evaporation pans or tanks have been made in a variety of shapes and sizes and there are different modes of exposing them. Among the various types of pans in use, the United States Class A pan, the Russian GGI-3000 pan and the Russian 20 m2 tank are described in the following subsections. These instruments are now widely used as standard network evaporimeters and their performance has been studied under different climatic conditions over fairly wide ranges of latitude and elevation. The pan data from these instruments possess stable, albeit complicated and climate-zone-dependent, relationships with the meteorological elements determining evaporation, when standard construction and exposure instructions have been carefully followed.

The Russian GGI-3000 pan is of cylindrical design, with a surface area of 3 000 cm2 and a depth of 60 cm. The bottom of the pan is cone-shaped. The pan is set in the soil with its rim 7.5 cm above the ground. In the centre of the tank is a metal index tube upon which a volumetric burette is set when evaporation observations are made. The burette has a valve, which is opened to allow its water level to equalize that in the pan. The valve is then closed and the volume of water in the burette is accurately measured. The height of the water level above the metal index tube is determined from the volume of water in, and the dimensions of, the burette. A needle attached to the metal index tube indicates the height to which the water level in the pan should be adjusted. The water level should be maintained so that it does not fall more than 5 mm or rise more than 10 mm above the needle point. A GGI-3000 raingauge with a collector that has an area of 3 000 cm2 is usually installed next to the GGI-3000 pan.

10.3.3 Russian 20 m2 tank


This tank has a surface area of 20 m2 and a diameter of about 5 m; it is cylindrical with a flat bottom and is 2 m deep. It is made of 4 to 5 mm thick welded iron sheets and is installed in the soil with its rim 7.5 cm above the ground. The inner and exposed outer surfaces of the tank are painted white. The tank is provided with a replenishing vessel and a stilling well with an index pipe upon which the volumetric burette is set when the water level in the tank is measured. Inside the stilling well, near the index pipe, a small rod terminating in a needle point indicates the height to which the water level is to be adjusted. The water level should always be maintained so that it does not fall more than 5 mm below or rise more than 10 mm above the needle point. A graduated glass tube attached laterally to the replenishing tank indicates the amount of water added to the tank and provides a rough check of the burette measurement.

10.3.4 Measurements taken by evaporation pans and tanks



Three types of exposure are mainly used for pans and tanks, as follows:
(a) Sunken, where the main body of the tank is below ground level, the evaporating surface being at or near the level of the surrounding surface;
(b) Above ground, where the whole of the pan and the evaporating surface are at some small height above the ground;
(c) Mounted on moored floating platforms on lakes or other water bodies.

The rate of evaporation from a pan or tank evaporimeter is measured by the change in level of its free water surface. This may be done by devices such as those described above for Class A pans and GGI-3000 pans. Several types of automatic evaporation pans are in use. The water level in such a pan is kept constant by releasing water into the pan from a storage tank or by removing water from the pan when precipitation occurs. The amount of water added to, or removed from, the pan is recorded. In some tanks or pans, the level of the water is also recorded continuously by means of a float in the stilling well. The float operates a recorder. Measurements of pan evaporation are the basis of several techniques for estimating evaporation and evapotranspiration from natural surfaces whose water loss is of interest. Measurements taken by evaporation pans are advantageous because they are, in any case, the result of the impact of the total meteorological variables, and because pan data are available immediately and for any period required. Pans are, therefore, frequently used to obtain information about evaporation on a routine basis within a network.

10.3.5 Exposure of evaporation pans and tanks

Evaporation stations should be located at sites that are fairly level and free from obstructions such as trees, buildings, shrubs or instrument shelters. Such single obstructions, when small, should be no closer than 5 times their height above the pan; for clustered obstructions, this becomes 10 times. Plots should be sufficiently large to ensure that readings are not influenced by spray drift or by upwind edge effects from a cropped or otherwise different area. Such effects may extend to more than 100 m. The plot should be fenced off to protect the instruments and to prevent animals from interfering with the water level; however, the fence should be constructed in such a way that it does not affect the wind structure over the pan. The ground cover at the evaporation station should be maintained as similar as possible to the natural cover common to the area. Grass, weeds, and the like should be cut frequently to keep them below the level of the pan rim in the case of sunken pans (7.5 cm). Preferably this same grass height of below 7.5 cm applies also to Class A pans. Under no circumstances should the instrument be placed on a concrete slab or asphalt, or on a layer of crushed rock. This type of evaporimeter should not be shaded from the sun.

10.3.6 Sources of error in evaporation pans and tanks

The mode of pan exposure leads both to various advantages and to sources of measurement error. Pans installed above the ground are inexpensive and easy to install and maintain. They stay cleaner than sunken tanks, as dirt does not, to any large extent, splash or blow into the water from the surroundings. Any leakage that develops after installation is relatively easy to detect and rectify. However, the amount of water evaporated is greater than that from sunken pans, mainly because of the additional radiant energy intercepted by the sides. Adverse side-wall effects can be largely eliminated by using an insulated pan, but this adds to the cost, would violate standard construction instructions and would change the "stable" relations mentioned in section 10.3.

Sinking the pan into the ground tends to reduce objectionable boundary effects, such as radiation on the side walls and heat exchange between the atmosphere and the pan itself, but the disadvantages are as follows:
(a) More unwanted material collects in the pan, with the result that it is difficult to clean;
(b) Leaks cannot easily be detected and rectified;
(c) The height of the vegetation adjacent to the pan is somewhat more critical.
Moreover, appreciable heat exchange takes place between the pan and the soil, and this depends on many factors, including soil type, water content and vegetation cover.

A floating pan approximates evaporation from the lake more closely than an onshore pan exposed either above or at ground level, even though the heat-storage properties of the floating pan are different from those of the lake. It is, however, influenced by the particular lake in which it floats and is not necessarily a good indicator of evaporation from the lake. Observational difficulties are considerable and, in particular, splashing frequently renders the data unreliable. Such pans are also costly to install and operate.

In all modes of exposure, it is most important that the tank be made of non-corrosive material and that all joints be made in such a way as to minimize the risk of the tank developing leaks.

Heavy rain and very high winds are likely to cause splash-out from pans and may invalidate the measurements. The level of the water surface in the evaporimeter is important. If the evaporimeter is too full, as much as 10 per cent (or more) of any rain falling may splash out, leading to an overestimate of evaporation. Too low a water level will lead to a reduced evaporation rate (of about 2.5 per cent for each centimetre below the reference level of 5 cm, in temperate regions) due to excessive shading and sheltering by the rim. If the water depth is allowed to become very shallow, the rate of evaporation rises due to increased heating of the water surface. It is advisable to restrict the permitted water-level range either by automatic methods, by adjusting the level at each reading, or by taking action to remove water when the level reaches an upper-limit mark and to add water when it reaches a lower-limit mark.

10.3.7 Maintenance of evaporation pans and tanks

An inspection should be carried out at least once a month, with particular attention being paid to the detection of leaks. The pan should be cleaned out as often as necessary to keep it free from litter, sediment, scum and oil films. It is recommended that a small amount of copper sulphate, or of some other suitable algicide, be added to the water to restrain the growth of algae.

If the water freezes, all the ice should be broken away from the sides of the tank and the measurement of the water level should be taken while the ice is floating. Provided that this is done, the fact that some of the water is frozen will not significantly affect the water level. If the ice is too thick to be broken, the measurement should be postponed until it can be broken; the evaporation should then be determined for the extended period.

It is often necessary to protect the pan from birds and other small animals, particularly in arid and tropical regions. This may be achieved by the use of the following:
(a) Chemical repellents: in all cases where such protection is used, care must be taken not to change significantly the physical characteristics of the water in the evaporimeter;
(b) A wire-mesh screen supported over the pan: standard screens of this type are in routine use in a number of areas. They prevent water loss caused by birds and animals, but also reduce the evaporation loss by partly shielding the water from solar radiation and by reducing wind movement over the water surface. In order to obtain an estimate of the error introduced by the effect of the wire-mesh screen on the wind field and the thermal characteristics of the pan, it is advisable to compare readings from the protected pan with those of a standard pan at locations where interference does not occur. Tests with a protective cylinder made of 25 mm hexagonal-mesh steel wire netting supported by an 8 mm steel-bar framework showed a consistent reduction of 10 per cent in the evaporation rate at three different sites over a two-year period.
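The water-balance bookkeeping behind the pan measurements described in section 10.3.4 can be sketched as follows. This is an illustrative sketch only, not a WMO-prescribed algorithm; the function and variable names are our own:

```python
def pan_evaporation(level_prev_cm, level_now_cm, rainfall_cm, water_added_cm=0.0):
    """Evaporation (cm of water depth) between two pan water-level readings.

    Between readings the level falls by the evaporated depth and rises by
    rainfall caught and any water added at a refill, so:
        evaporation = (previous level - current level) + rainfall + water added
    All quantities are depths (cm) over the pan surface.
    """
    return (level_prev_cm - level_now_cm) + rainfall_cm + water_added_cm
```

For example, a level falling from 20.0 cm to 19.5 cm while 0.2 cm of rain fell corresponds to 0.7 cm of evaporation; a negative result signals an inconsistent pair of readings, for instance after splash-out.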

10.4 Evapotranspirometers (lysimeters)

Several types of lysimeters have been described in the technical literature. Details of the design of some instruments used in various countries are described in WMO (1966; 1994). In general, a lysimeter consists of a soil-filled inner container and retaining walls or an outer container, as well as special devices for measuring percolation and changes in the soil-moisture content.

There is no universal international standard lysimeter for measuring evapotranspiration. The surface area of lysimeters in use varies from 0.05 to some 100 m2 and their depth varies from 0.1 to 5 m. According to their method of operation, lysimeters can be classified into non-weighable and weighable instruments. Each of these devices has its special merits and drawbacks, and the choice of any type of lysimeter depends on the problem to be studied.

Non-weighable (percolation-type) lysimeters can be used only for long-term measurements, unless the soil-moisture content can be measured by some independent and reliable technique. Large-area percolation-type lysimeters are used for water budget and evapotranspiration studies of tall, deep-rooting vegetation cover, such as mature trees. Small, simple types of lysimeters in areas with bare soil or grass and crop cover can provide useful results for practical purposes under humid conditions. This type of lysimeter can easily be installed and maintained at low cost and is, therefore, suitable for network operations.

Weighable lysimeters, unless of a simple microlysimeter type for soil evaporation, are much more expensive, but their advantage is that they secure reliable and precise estimates of short-term values of evapotranspiration, provided that the necessary design, operation and siting precautions have been taken. Several weighing techniques using mechanical or hydraulic principles have been developed. The simpler, small lysimeters are usually lifted out of their sockets and transferred to mechanical scales by means of mobile cranes. Alternatively, the container of a lysimeter can be mounted on a permanently installed mechanical scale for continuous recording. The design of the weighing and recording system can be considerably simplified by using load cells with strain gauges of variable electrical resistance. The hydraulic weighing systems use the principle of fluid displacement resulting from the changing buoyancy of a floating container (the so-called floating lysimeter), or the principle of fluid pressure changes in hydraulic load cells.

The large weighable and recording lysimeters are recommended for precision measurements in research centres and for the standardization and parameterization of other methods of evapotranspiration measurement and the modelling of evapotranspiration. Small weighable types of lysimeters are quite useful and suitable for network operation. Microlysimeters for soil evaporation are a relatively recent development.

10.4.1 Measurements taken by lysimeters

The rate of evapotranspiration may be estimated from the general equation of the water budget for the lysimeter container: evapotranspiration equals precipitation/irrigation minus percolation minus the change in water storage. Hence, the observational programme on lysimeter plots includes precipitation/irrigation, percolation and the change in soil water storage. It is useful to complete this programme with observations of plant growth and development.

Precipitation (and irrigation, if any) is preferably measured at ground level by standard methods. Percolation is collected in a tank and its volume may be measured at regular intervals or recorded. For precision measurements of the change in water storage, the careful gravimetric techniques described above are used. When weighing, the lysimeter should be sheltered to avoid wind-loading effects.

The application of the volumetric method is quite satisfactory for estimating long-term values of evapotranspiration. With this method, measurements are taken of the amount of precipitation and percolation, and it is assumed that the change in water storage tends to zero over the period of observation. Changes in the soil moisture content may be determined by bringing the moisture in the soil up to field capacity at the beginning and at the end of the period.
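The lysimeter water budget just described can be sketched directly in code. This is an illustrative sketch with our own function and variable names, not part of the Guide:

```python
def lysimeter_evapotranspiration(precipitation_mm, irrigation_mm,
                                 percolation_mm, storage_change_mm):
    """Evapotranspiration (mm) from the lysimeter water budget:

        ET = precipitation + irrigation - percolation - change in storage

    storage_change_mm is positive when soil water storage has increased.
    In the volumetric method the storage change is assumed to tend to
    zero over a sufficiently long observation period.
    """
    return precipitation_mm + irrigation_mm - percolation_mm - storage_change_mm
```

For example, 12.0 mm of rain with 3.5 mm of percolation and a 2.0 mm increase in storage gives an evapotranspiration of 6.5 mm for the period.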

10.4.2 Exposure of evapotranspirometers
Observations of evapotranspiration should be representative of the plant cover and moisture conditions of the general surroundings of the station (WMO, 2003). In order to simulate representative evapotranspiration rates, the soil and plant cover of the lysimeter should correspond to the soil and vegetation of the surrounding area, and disturbances caused by the existence of the instrument should be minimized. The most important requirements for the exposure of lysimeters are given below.

In order to maintain the same hydromechanical properties of the soil, it is recommended that the lysimeter be placed into the container as an undisturbed block (monolith). In the case of light, rather homogeneous soils and a large container, it is sufficient to fill the container layer by layer in the same sequence and with the same density as in the natural profile.

In order to simulate the natural drainage process in the container, restricted drainage at the bottom must be prevented. Depending on the soil texture, it may be necessary to maintain the suction at the bottom artificially by means of a vacuum supply.

Apart from microlysimeters for soil evaporation, a lysimeter should be sufficiently large and deep, and its rim as low as practicable, to make it possible to have a representative, free-growing vegetation cover, without restriction to plant development.

In general, the siting of lysimeters is subject to fetch requirements, such as those of evaporation pans, namely, the plot should be located beyond the zone of influence of buildings, even single trees, meteorological instruments, and so on. In order to minimize the effects of advection, lysimeter plots should be located at a sufficient distance from the upwind edge of the surrounding area, that is, not less than 100 to 150 m. The prevention of advection effects is of special importance for measurements taken at irrigated land surfaces.

10.4.3 Sources of error in lysimeter measurements

Lysimeter measurements are subject to several sources of error caused by the disturbance of the natural conditions by the instrument itself. Some of the major effects are as follows:
(a) Restricted growth of the rooting system;
(b) Change of eddy diffusion by discontinuity between the canopy inside the lysimeter and that in the surrounding area. Any discontinuity may be caused by the annulus formed by the containing and retaining walls and by discrepancies in the canopy itself;
(c) Insufficient thermal equivalence of the lysimeter to the surrounding area, caused by:
    (i) Thermal isolation from the subsoil;
    (ii) Thermal effects of the air rising or descending between the container and the retaining walls;
    (iii) Alteration of the thermal properties of the soil through alteration of its texture and its moisture conditions;
(d) Insufficient equivalence of the water budget to that of the surrounding area, caused by:
    (i) Disturbance of soil structure;
    (ii) Restricted drainage;
    (iii) Vertical seepage at walls;
    (iv) Prevention of surface runoff and lateral movement of soil water.
Some suitable arrangements exist to minimize lysimeter measurement errors, for example, regulation of the temperature below the container, reduction of vertical seepage at the walls by flange rings, and so forth. In addition to the careful design of the lysimeter equipment, sufficient representativeness of the plant community and the soil type of the area under study is of great importance. Moreover, the siting of the lysimeter plot must be fully representative of the natural field conditions.

10.4.4 Maintenance of lysimeters

Several arrangements are necessary to maintain the representativeness of the plant cover inside the lysimeter. All agricultural and other operations (sowing, fertilizing, mowing, and the like) in the container and the surrounding area should be carried out in the same way and at the same time. In order to avoid errors due to rainfall catch, the plants near and inside the container should be kept vertical, and broken leaves and stems should not extend over the surface of the lysimeter. The maintenance of the technical devices is peculiar to each type of instrument and cannot be described here.

It is advisable to test the evapotranspirometer for leaks at least once a year by covering its surface to prevent evapotranspiration and by observing whether, over a period of days, the volume of drainage equals the amount of water added to its surface.

10.5 Estimation of evaporation from natural surfaces

Consideration of the factors which affect evaporation, as outlined in section 10.1.3, indicates that the rate of evaporation from a natural surface



will necessarily differ from that of an evaporimeter exposed to the same atmospheric conditions, because the physical characteristics of the two evaporating surfaces are not identical. In practice, evaporation or evapotranspiration rates from natural surfaces are of interest, for example, reservoir or lake evaporation, crop evaporation, as well as areal amounts from extended land surfaces such as catchment areas. In particular, accurate areal estimates of evapotranspiration from regions with varied surface characteristics and land-use patterns are very difficult to obtain (WMO, 1966; 1997). Suitable methods for the estimation of lake or reservoir evaporation are the water budget, energy budget and aerodynamic approaches, the combination method of aerodynamic and energy-balance equations, and the use of a complementarity relationship between actual and potential evaporation. Furthermore, pan evaporation techniques exist which use pan evaporation for the establishment of a lake-to-pan relation. Such relations are specific to each pan type and mode of exposure. They also depend on the climatic conditions (see WMO, 1985; 1994 (Chapter 37)). The water non-limiting point or areal values of evapotranspiration from vegetation-covered land surfaces may be obtained by determining such potential (or reference crop) evapotranspiration with the same methods as those indicated above for lake applications, but adapted to vegetative conditions. Some methods use additional growth stage-dependent coefficients for each type of vegetation, such as crops, and/or an integrated crop stomatal resistance value for the vegetation as a whole. The Royal Netherlands Meteorological Institute employs the following procedure established by G.F. Makkink (Hooghart, 1971) for calculating the daily (24 h) reference vegetation evaporation from the averaged daily air temperature and the daily amount of global radiation as follows: Saturation vapour pressure at air temperature T:

es(T) = 6.107 · 10^[7.5 · T / (237.3 + T)]  (hPa)

Slope of the curve of saturation water vapour pressure versus temperature at T:

δ(T) = [7.5 · 237.3 · ln(10) · es(T)] / (237.3 + T)²  (hPa/°C)

Psychrometric constant: γ(T) = 0.646 + 0.0006 · T  (hPa/°C)
Specific heat of evaporation of water: λ(T) = 1 000 · (2 501 - 2.38 · T)  (J/kg)
Density of water: ρ = 1 000  (kg/m³)
Global radiation (24 h amount): Q  (J/m²)
Air temperature (24 h average): T  (°C)

Daily reference vegetation evaporation (in mm):

Er = [1 000 · 0.65 · δ(T) · Q] / [{δ(T) + γ(T)} · ρ · λ(T)]

Note: The constant 1 000 is for conversion from metres to millimetres; the constant 0.65 is a typical empirical constant.

By relating the measured rate of actual evapotranspiration to estimates of the water non-limiting potential evapotranspiration, and subsequently relating this normalized value to the soil water content, soil water deficits, or the water potential in the root zone, it is possible to devise coefficients with which the actual evapotranspiration rate can be calculated for a given soil water status. Point values of actual evapotranspiration from land surfaces can be estimated more directly from observations of the changes in soil water content measured by sampling soil moisture on a regular basis. Evapotranspiration can be measured even more accurately using a weighing lysimeter. Further methods make use of turbulence measurements (for example, the eddy-correlation method) and profile measurements (for example, in boundary-layer data methods and, at two heights, in the Bowen-ratio energy-balance method). They are much more expensive and require special instruments and sensors for humidity, wind speed and temperature. Such estimates, valid for the type of soil and canopy under study, may be used as reliable independent reference values in the development of empirical relations for evapotranspiration modelling.
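As an illustration, the Makkink procedure described above can be coded directly. The following is a sketch with our own function and variable names, taking the temperature in °C and the daily global radiation sum in J/m²:

```python
import math

def makkink_reference_evaporation(t_mean_c, q_global_jm2):
    """Daily (24 h) reference vegetation evaporation Er (mm) after Makkink,
    following the procedure used by the Royal Netherlands Meteorological
    Institute (Hooghart, 1971)."""
    # Saturation vapour pressure at air temperature T (hPa)
    es = 6.107 * 10.0 ** (7.5 * t_mean_c / (237.3 + t_mean_c))
    # Slope of the saturation vapour pressure curve at T (hPa/degC)
    delta = 7.5 * 237.3 * math.log(10.0) * es / (237.3 + t_mean_c) ** 2
    # Psychrometric constant (hPa/degC)
    gamma = 0.646 + 0.0006 * t_mean_c
    # Specific heat of evaporation of water (J/kg)
    lam = 1000.0 * (2501.0 - 2.38 * t_mean_c)
    # Density of water (kg/m^3)
    rho = 1000.0
    # 1000 converts metres to millimetres; 0.65 is the empirical constant
    return 1000.0 * 0.65 * delta * q_global_jm2 / ((delta + gamma) * rho * lam)
```

For a day with a mean temperature of 15 °C and a global radiation sum of 15 MJ/m², this gives roughly 2.5 mm of reference evaporation.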




The difficulty in determining basin evapotranspiration arises from the discontinuities in surface characteristics which cause variable evapotranspiration rates within the area under consideration. When considering short-term values, it is necessary to estimate evapotranspiration by using empirical relationships. Over a long period (in order to minimize storage effects) the water-budget approach can be used to estimate basin evapotranspiration (see WMO, 1971). One approach, suitable for estimates from extended areas, refers to the atmospheric water balance and derives areal evapotranspiration from radiosonde data. WMO (1994, Chapter 38) describes the abovementioned methods, their advantages and their application limits.

The measurement of evaporation from a snow surface is difficult and probably no more accurate than the computation of evaporation from water. Evaporimeters made of polyethylene or colourless plastic are used in many countries for the measurement of evaporation from snow-pack surfaces; observations are made only when there is no snowfall. Estimates of evaporation from snow cover can be made from observations of air humidity and wind speed at one or two levels above the snow surface and at the snow-pack surface, using the turbulent diffusion equation. The estimates are most reliable when evaporation values are computed for periods of five days or more.



REFERENCES AND FURTHER READING

Hooghart, J.C. (ed.), 1971: Evaporation and Weather. TNO Committee of Hydrological Research, Technical Meeting 44, Proceedings and Information No. 39, TNO, The Hague.
World Meteorological Organization, 1966: Measurement and Estimation of Evaporation and Evapotranspiration. Technical Note No. 83, WMO-No. 201.TP.105, Geneva.
World Meteorological Organization, 1971: Problems of Evaporation Assessment in the Water Balance (C.E. Hounam). WMO/IHD Report No. 13, WMO-No. 285, Geneva.
World Meteorological Organization, 1973: Atmospheric Vapour Flux Computations for Hydrological Purposes (J.P. Peixoto). WMO/IHD Report No. 20, WMO-No. 357, Geneva.
World Meteorological Organization, 1976: The CIMO International Evaporimeter Comparisons. WMO-No. 449, Geneva.
World Meteorological Organization, 1977: Hydrological Application of Atmospheric Vapour-Flux Analyses (E.M. Rasmusson). Operational Hydrology Report No. 11, WMO-No. 476, Geneva.

World Meteorological Organization, 1985: Casebook on Operational Assessment of Areal Evaporation. Operational Hydrology Report No. 22, WMO-No. 635, Geneva.
World Meteorological Organization, 1992: International Meteorological Vocabulary. Second edition, WMO-No. 182, Geneva.
World Meteorological Organization/United Nations Educational, Scientific and Cultural Organization, 1992: International Glossary of Hydrology. WMO-No. 385, Geneva.
World Meteorological Organization, 1994: Guide to Hydrological Practices. Fifth edition, WMO-No. 168, Geneva.
World Meteorological Organization, 1997: Estimation of Areal Evapotranspiration. Technical Reports in Hydrology and Water Resources No. 56, WMO/TD-No. 785, Geneva.
World Meteorological Organization, 2003: Manual on the Global Observing System. Volume I, WMO-No. 544, Geneva.

CHAPTER 11

MEASUREMENT OF SOIL MOISTURE



Soil moisture is an important component in the atmospheric water cycle, both on a small agricultural scale and in large-scale modelling of land/atmosphere interaction. Vegetation and crops always depend more on the moisture available at root level than on precipitation occurrence. Water budgeting for irrigation planning, as well as the actual scheduling of irrigation action, requires local soil moisture information. Knowledge of the degree of soil wetness helps to forecast the risk of flash floods, or the occurrence of fog.

Nevertheless, soil moisture has seldom been observed routinely at meteorological stations. Documentation of soil wetness was usually restricted to the description of the "state of the ground" by means of WMO Code Tables 0901 and 0975, and its measurement was left to hydrologists, agriculturalists and other actively interested parties. Around 1990 the interest of meteorologists in soil moisture measurement increased. This was partly because, after the pioneering work by Deardorff (1978), numerical atmosphere models at various scales became more adept at handling fluxes of sensible and latent heat in soil surface layers. Moreover, newly developed soil moisture measurement techniques are more feasible for meteorological stations than most of the classic methods.

To satisfy the increasing need for determining soil moisture status, the most commonly used methods and instruments will be discussed, including their advantages and disadvantages. Some less common observation techniques are also mentioned.

11.1.1 Definitions

Soil moisture determinations measure either the soil water content or the soil water potential.

Soil water content

Soil water content is an expression of the mass or volume of water in the soil, while the soil water potential is an expression of the soil water energy status. The relation between content and potential is not universal and depends on the characteristics of the local soil, such as soil density and soil texture.

Soil water content on the basis of mass is expressed in the gravimetric soil moisture content, θg, defined by:

θg = Mwater/Msoil (11.1)

where Mwater is the mass of the water in the soil sample and Msoil is the mass of dry soil that is contained in the sample. Values of θg in meteorology are usually expressed in per cent.

Because precipitation, evapotranspiration and solute transport variables are commonly expressed in terms of flux, volumetric expressions for water content are often more useful. The volumetric soil moisture content of a soil sample, θv, is defined as:

θv = Vwater/Vsample (11.2)

where Vwater is the volume of water in the soil sample and Vsample is the total volume of dry soil + air + water in the sample. Again, the ratio is usually expressed in per cent. The relationship between gravimetric and volumetric moisture contents is:

θv = θg (ρb/ρw) (11.3)

where ρb is the dry soil bulk density and ρw is the soil water density.

The basic technique for measuring soil water content is the gravimetric method, described below in section 11.2. Because this method is based on direct measurements, it is the standard with which all other methods are compared. Unfortunately, gravimetric sampling is destructive, rendering repeat measurements on the same soil sample impossible. Because of the difficulties of accurately measuring dry soil and water volumes, volumetric water contents are not usually determined directly.
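Equations 11.1 and 11.3 translate directly into code. The following sketch uses our own function names and assumes masses in kg and densities in kg/m³:

```python
def gravimetric_content(sample_mass_wet, sample_mass_dry):
    """Gravimetric soil moisture content, theta_g = M_water / M_soil
    (equation 11.1), returned as a dimensionless ratio."""
    return (sample_mass_wet - sample_mass_dry) / sample_mass_dry

def volumetric_from_gravimetric(theta_g, dry_bulk_density, water_density=1000.0):
    """Volumetric from gravimetric content, theta_v = theta_g * (rho_b / rho_w)
    (equation 11.3); densities in kg/m^3."""
    return theta_g * dry_bulk_density / water_density
```

For example, a 0.120 kg moist sample that dries to 0.100 kg has θg = 0.20 (20 per cent); with a dry bulk density of 1 300 kg/m³ this corresponds to θv = 0.26.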

Soil water potential describes the energy status of the soil water and is an important parameter for water transport analysis, water storage estimates and soil-plant-water relationships. A difference in water potential between two soil locations indicates a tendency for water flow, from high to low potential. When the soil is drying, the water potential becomes more negative and the work that must be



done to extract water from the soil increases. This makes water uptake by plants more difficult, so the water potential in the plant drops, resulting in plant stress and, eventually, severe wilting. Formally, the water potential is a measure of the ability of soil water to perform work or, in the case of negative potential, of the work required to remove the water from the soil. The total water potential ψt, the combined effect of all force fields, is given by:

ψt = ψz + ψm + ψo + ψp (11.4)

where ψz is the gravitational potential, based on elevation above mean sea level; ψm is the matric potential, the suction due to the attraction of water by the soil matrix; ψo is the osmotic potential, due to the energy effects of solutes in water; and ψp is the pressure potential, the hydrostatic pressure below a water surface.

The potentials which are not related to the composition of water or soil are together called the hydraulic potential, ψh. In saturated soil this is expressed as ψh = ψz + ψp, while in unsaturated soil it is expressed as ψh = ψz + ψm. When the phrase “water potential” is used in studies, perhaps with the notation ψw, it is advisable to check the author’s definition, because this term has been used for ψm + ψz as well as for ψm + ψo. The gradients of the separate potentials will not always be significantly effective in inducing flow; for example, ψo requires a semi-permeable membrane to induce flow, and ψp will exist in saturated or ponded conditions, but most practical applications are in unsaturated soil.

11.1.2 Units

In solving the mass balance or continuity equations for water, it must be remembered that the components of water content parameters are not dimensionless. Gravimetric water content is the weight of soil water contained in a unit weight of soil (kg water/kg dry soil). Likewise, volumetric water content is a volume fraction (m3 water/m3 soil). The basic unit for expressing water potential is energy (joule, kg m2 s–2) per unit mass, J kg–1. Alternatively, energy per unit volume (J m–3) is equivalent to pressure, expressed in pascals (Pa = kg m–1 s–2). Units encountered in older literature are the bar (= 100 kPa), the atmosphere (= 101.32 kPa) and pounds per square inch (= 6.895 kPa). A third class of units are those of pressure head in (centi)metres of water or mercury, that is, energy per unit weight. The relation between the three classes of potential units is:

ψ (J kg–1) = ψ (Pa)/γ = g · ψ (m) (11.5)

where γ = 1 000 kg m–3 (the density of water) and g = 9.81 m s–2 (the acceleration of gravity). Because the soil water potential has a large range, it is often expressed logarithmically, usually in pressure head of water. A common unit for this is the pF, equal to the base-10 logarithm of the absolute value of the head of water expressed in centimetres.

11.1.3 Meteorological requirements

Soil consists of individual particles and aggregates of mineral and organic materials, separated by spaces or pores which are occupied by water and air. The relative amount of pore space decreases with increasing soil grain size (intuitively, one would expect the opposite). The movement of liquid water through soil depends upon the size, shape and, generally, the geometry of the pore spaces. If a large quantity of water is added to a block of otherwise “dry” soil, some of it will drain away rapidly, by the effect of gravity, through any relatively large cracks and channels. The remainder will tend to displace some of the air in the spaces between particles, the larger pore spaces first. Broadly speaking, a well-defined “wetting front” will move downwards into the soil, leaving an increasingly thick layer retaining all the moisture it can hold against gravity. That soil layer is then said to be at “field capacity”, a state that for most soils occurs at about ψ ≈ –10 kPa (pF ≈ 2). This state must not be confused with the undesirable situation of “saturated” soil, in which all the pore spaces are occupied by water. After a saturation event, such as heavy rain, the soil usually needs at least 24 h to reach field capacity. When the moisture content falls below field capacity, the subsequent limited movement of water in the soil is partly liquid, partly in the vapour phase by distillation (related to temperature gradients in the soil), and sometimes by transport in plant roots. Plant roots within the block will extract liquid water from the water films around the soil particles with which they are in contact. The rate at which this extraction is possible depends on the soil moisture potential. A point is reached at which the forces holding moisture films to soil particles cannot be overcome by root suction; plants are then starved of
water and lose turgidity: soil moisture has reached the “wilting point”, which in most cases occurs at a soil water potential of –1.5 MPa (pF = 4.2). In agriculture, the soil water available to plants is commonly taken to be the quantity held between field capacity and the wilting point, and this varies greatly between soils: in sandy soils it may be less than 10 volume per cent, while in soils with much organic matter it can be over 40 volume per cent.

Usually it is desirable to know the soil moisture content and potential as a function of depth. Evapotranspiration models concern mostly a shallow depth (tens of centimetres); agricultural applications need moisture information at root depth (of the order of a metre); and atmospheric general circulation models incorporate a number of layers down to a few metres. For hydrological and water-balance needs – such as catchment-scale runoff models, as well as for effects upon soil properties such as soil mechanical strength, thermal conductivity and diffusivity – information on deep soil water content is needed.

The accuracy needed in water content determinations, and the spatial and temporal resolution required, vary by application. An often-occurring problem is the inhomogeneity of many soils, meaning that a single observation location cannot provide absolute knowledge of the regional soil moisture, but only relative knowledge of its change.

11.1.4 Measurement methods

The methods and instruments available to evaluate soil water status may be classified in three ways. First, a distinction is made between the determination of water content and the determination of water potential. Second, a so-called direct method requires the availability of sizeable representative terrain from which large numbers of soil samples can be taken for destructive evaluation in the laboratory; indirect methods use an instrument placed in the soil to measure some soil property related to soil moisture. Third, methods can be ranked according to operational applicability, taking into account the regular labour involved, the degree of dependence on laboratory availability, the complexity of the operation and the reliability of the result. Moreover, the preliminary cost of acquiring instrumentation must be compared with the subsequent costs of local routine observation and data processing.

Reviews such as WMO (1968; 1989; 2001) and Schmugge, Jackson and McKim (1980) are very useful for learning about practical problems, but dielectric measurement methods were only developed well after 1980, so the older reviews should not be relied upon too much when choosing an operational method.

There are four operational alternatives for the determination of soil water content. First, there is classic gravimetric moisture determination, a simple direct method. Second, there is lysimetry, a non-destructive variant of gravimetric measurement: a container filled with soil is weighed either occasionally or continuously to indicate changes in total mass, which may in part or wholly be due to changes in soil moisture (lysimeters are discussed in more detail in Part I, Chapter 10). Third, water content may be determined indirectly by various radiological techniques, such as neutron scattering and gamma absorption. Fourth, water content can be derived from the dielectric properties of soil, for example, by using time-domain reflectometry.

Soil water potential measurement can be performed by several indirect methods, in particular using tensiometers, resistance blocks and soil psychrometers. None of these instruments is at present effective over the full range of possible water potential values. For an extended study of all methods of soil moisture measurement, up-to-date handbooks are provided by Klute (1986), Dirksen (1999), and Smith and Mullins (referenced here as Gardner and others, 2001, and Mullins, 2001).


11.2 Gravimetric direct measurement of soil water content

The gravimetric soil moisture content θg is typically determined directly. Soil samples of about 50 g are removed from the field with the best available tools (shovels, spiral hand augers, bucket augers, perhaps power-driven coring tubes), disturbing the sample soil structure as little as possible (Dirksen, 1999). The soil sample should be placed immediately in a leak-proof, seamless, pre-weighed and identified container. As the samples will be placed in an oven, the container should be able to withstand high temperatures without melting or losing significant mass. The most common soil containers are aluminium cans, but non-metallic containers should be used if the samples are to be dried in microwave ovens in the laboratory. If soil samples are to be transported for a considerable distance, tape should be used to seal the container to avoid moisture loss by evaporation.



The samples and container are weighed in the laboratory both before and after drying, the difference being the mass of water originally in the sample. The drying procedure consists of placing the open container in an electrically heated oven at 105°C until the mass stabilizes at a constant value. The drying times required usually vary between 16 and 24 h. Note that drying at 105 ±5°C is part of the usually accepted definition of “soil water content”, originating from the aim of measuring only the content of “free” water which is not bound to the soil matrix (Gardner and others, 2001).

If the soil samples contain considerable amounts of organic matter, excessive oxidation may occur at 105°C and some organic matter will be lost from the sample. Although the specific temperature at which excessive oxidation occurs is difficult to specify, lowering the oven temperature from 105 to 70°C seems to be sufficient to avoid significant loss of organic matter, but this can lead to water content values that are too low. Oven temperatures and drying times should therefore be checked and reported.

Microwave oven drying for the determination of gravimetric water content may also be used effectively (Gee and Dodson, 1981). In this method, the soil water temperature is quickly raised to boiling point, and then remains constant for a period owing to the consumption of heat in vaporizing the water. However, the temperature rises rapidly as soon as the energy absorbed by the soil water exceeds the energy needed to vaporize it. Caution should be exercised with this method, as temperatures can become high enough to melt plastic containers if stones are present in the soil sample.

Gravimetric soil water contents of air-dry (25°C) mineral soil are often less than 2 per cent but, as the soil approaches saturation, the water content may increase to values between 25 and 60 per cent, depending on soil type.
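The arithmetic of equations 11.1 and 11.3 applied to such an oven-dried sample is straightforward. The sketch below is illustrative only; all masses and the bulk density are hypothetical values, not figures from this Guide.

```python
# Gravimetric water content from oven drying (equation 11.1), and
# conversion to volumetric content (equation 11.3).
# All sample values below are hypothetical, for illustration only.

def gravimetric_water_content(m_wet, m_dry, m_container):
    """theta_g = M_water / M_dry_soil, returned as a fraction."""
    m_water = m_wet - m_dry          # mass lost during drying
    m_soil = m_dry - m_container     # mass of the dry soil alone
    return m_water / m_soil

def volumetric_water_content(theta_g, rho_b, rho_w=1000.0):
    """theta_v = theta_g * (rho_b / rho_w), equation 11.3 (densities in kg m-3)."""
    return theta_g * rho_b / rho_w

# A 50 g soil sample in a 20 g can: 78.0 g before drying, 70.0 g after.
theta_g = gravimetric_water_content(m_wet=78.0, m_dry=70.0, m_container=20.0)
theta_v = volumetric_water_content(theta_g, rho_b=1300.0)  # hypothetical dry bulk density
print(f"theta_g = {theta_g:.1%}, theta_v = {theta_v:.1%}")  # 16.0% and 20.8%
```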
Volumetric soil water content, θv, may range from less than 10 per cent for air-dry soil to between 40 and 50 per cent for mineral soils approaching saturation. Soil θv determination requires measurement of soil density, for example, by coating a soil clod with paraffin and weighing it in air and water, or some other method (Campbell and Henshall, 2001). Water contents for stony or gravelly soils can be grossly misleading. When rocks occupy an appreciable volume of the soil, they modify direct measurement of soil mass, without making a similar contribution to the soil porosity. For example, gravimetric water content may be 10 per cent for a soil sample with a bulk density of 2 000 kg m–3;

however, the water content of the same sample based on the finer soil material alone (stones and gravel excluded) would be 20 per cent if the bulk density of the fine soil material were 1 620 kg m–3. Although the gravimetric water content of the fine soil fraction, θg,fines, is the value usually used for spatial and temporal comparison, there may also be a need to determine the volumetric water content of a gravelly soil; the latter value may be important in calculating the volume of water in a root zone. The relationship between the gravimetric water content of the fine soil material and the bulk volumetric water content is given by:

θv,stony = θg,fines (ρb/ρw)/(1 + Mstones/Mfines) (11.6)

where θv,stony is the bulk volumetric water content of soil containing stones or gravel, ρb is the bulk density of the stony soil, ρw is the density of water, and Mstones and Mfines are the masses of the stone and fine soil fractions (Klute, 1986).

11.3 Soil water content: indirect methods

The capacity of soil to retain water is a function of soil texture and structure. When a soil sample is removed, the soil being evaluated is disturbed, so its water-holding capacity is altered. Indirect methods of measuring soil water are helpful, as they allow information to be collected repeatedly at the same location without disturbing the soil water system. Moreover, most indirect methods determine the volumetric soil water content without any need for soil density determination.

11.3.1 Radiological methods

Two different radiological methods are available for measuring soil water content. One is the widely used neutron scatter method, which is based on the interaction of high-energy (fast) neutrons and the nuclei of hydrogen atoms in the soil. The other method measures the attenuation of gamma rays as they pass through soil. Both methods use portable equipment for multiple measurements at permanent observation sites and require careful calibration, preferably with the soil in which the equipment is to be used. When using any radiation-emitting device, some precautions are necessary. The manufacturer will provide a shield that must be used at all times. The only time the probe leaves the shield is when it is lowered into the soil access tube. When the guidelines and regulations regarding radiation hazards stipulated by the manufacturers and health



authorities are followed, there is no need to fear exposure to excessive radiation levels, regardless of the frequency of use. Nevertheless, whatever the type of radiation-emitting device used, the operator should wear some type of film badge so that personal exposure levels can be evaluated and recorded on a monthly basis.

Neutron scattering method

In neutron soil moisture detection (Visvalingam and Tandy, 1972; Greacen, 1981), a probe containing a radioactive source emitting high-energy (fast) neutrons, together with a counter of slow neutrons, is lowered into the ground. Hydrogen nuclei, having about the same mass as neutrons, are at least 10 times as effective in slowing down neutrons upon collision as most other nuclei in the soil. Because in any soil most hydrogen is in water molecules, the density of slow “thermalized” neutrons in the vicinity of the neutron probe is nearly proportional to the volumetric soil water content.

Some fraction of the slowed neutrons, after a number of collisions, will again reach the probe and its counter. When the soil water content is large, not many neutrons are able to travel far before being thermalized, and then 95 per cent of the counted returning neutrons come from a relatively small soil volume. In wet soil the “radius of influence” may be only 15 cm, while in dry soil it may increase to 50 cm. Therefore, the measured soil volume varies with water content, and thin layers cannot be resolved. This method is hence less suitable for localizing water-content discontinuities, and it cannot be used effectively in the top 20 cm of soil on account of the soil-air discontinuity.

Several source and detector arrangements are possible in a neutron probe, but it is best to have a probe with a double detector and a central source, typically in a cylindrical container. Such an arrangement allows a nearly spherical zone of influence and leads to a more linear relation of neutron count to soil water content. A cable is used to attach the probe to the main instrument electronics, so that the probe can be lowered into a previously installed access tube. The access tube should be seamless and thick enough (at least 1.25 mm) to be rigid, but not so thick that it slows neutrons down significantly. It must be made of non-corrosive material, such as stainless steel, aluminium or plastic, although polyvinylchloride should be avoided as it absorbs slow neutrons. Usually, a straight tube with a diameter of 5 cm is sufficient for the probe to be lowered into the tube without a risk of jamming. Care should be taken when installing the access tube to ensure that no air voids exist between the tube and the soil matrix. At least 10 cm of the tube should extend above the soil surface, in order to allow the box containing the electronics to be mounted on top of the access tube. All access tubes should be fitted with a removable cap to keep rainwater out.

In order to enhance experimental reproducibility, the soil water content is not derived directly from the number of slow neutrons detected, but rather from a count ratio (CR), given by:

CR = Csoil/Cbackground (11.7)

where Csoil is the count of thermalized neutrons detected in the soil and Cbackground is the count of thermalized neutrons in a reference medium. All neutron probe instruments now come with a reference standard for these background calibrations, usually against water. The standard in which the probe is placed should be at least 0.5 m in diameter so as to represent an “infinite” medium. Calibration to determine Cbackground can be done by a series of ten 1 min readings, to be averaged, or by a single 1 h reading. Csoil is determined by averaging several soil readings at a particular depth/location. For calibration purposes, it is best to take three samples around the access tube and to average the water contents corresponding to the average CR calculated for that depth. A minimum of five different water contents should be evaluated for each depth. Although some calibration curves may be similar, a separate calibration for each depth should be conducted. The lifetime of most probes is more than 10 years.

Gamma-ray attenuation

Whereas the neutron method measures the volumetric water content in a large sphere, gamma-ray absorption scans a thin layer. The dual-probe gamma device is nowadays mainly used in the laboratory, since dielectric methods have become operational for field use. Other reasons are that gamma rays are more dangerous to work with than neutron scattering devices, and that their operational costs are relatively high. Changes in gamma attenuation for a given mass absorption coefficient can be related to changes in total soil density. As the attenuation of gamma rays is due to mass, it is not possible to determine water content unless the attenuation of gamma rays due



to the local dry soil density is known and remains unchanged with changing water content. Determining the soil water content accurately from the difference between the total and dry density attenuation values is therefore not simple. Compared with neutron scattering, gamma-ray attenuation has the advantage of allowing accurate measurements a few centimetres below the air-surface interface. Although the method has a high degree of resolution, the small soil volume evaluated will exhibit more spatial variation due to soil heterogeneities (Gardner and Calissendorff, 1967).

11.3.2 Soil water dielectrics

When a medium is placed in the electric field of a capacitor or waveguide, its influence on the electric forces in that field is expressed as the ratio between the forces in the medium and the forces which would exist in a vacuum. This ratio, called permittivity or “dielectric constant”, is for liquid water about 20 times larger than that of average dry soil, because water molecules are permanent dipoles. The dielectric properties of ice, and of water bound to the soil matrix, are comparable to those of dry soil. Therefore, the volumetric content of free soil water can be determined from the dielectric characteristics of wet soil by reliable, fast, non-destructive measurement methods, without the potential hazards associated with radioactive devices. Moreover, such dielectric methods can be fully automated for data acquisition. At present, two methods which evaluate soil water dielectrics are commercially available and used extensively, namely time-domain reflectometry and frequency-domain measurement.

Time-domain reflectometry

Time-domain reflectometry determines the dielectric constant of the soil by monitoring the travel of an electromagnetic pulse, which is launched along a waveguide formed by a pair of parallel rods embedded in the soil. The pulse is reflected at the end of the waveguide, and its propagation velocity, which is inversely proportional to the square root of the dielectric constant, can be measured accurately by modern electronics. The most widely used relation between soil dielectrics and soil water content was experimentally summarized by Topp, Davis and Annan (1980) as follows:

θv = –0.053 + 0.029 ε – 5.5 · 10–4 ε2 + 4.3 · 10–6 ε3 (11.8)

where ε is the dielectric constant of the soil water system. This empirical relationship has proved to be applicable in many soils, roughly independently of texture and gravel content (Drungil, Abt and Gish, 1989). However, soil-specific calibration is desirable for soils with low density or with a high organic content. For complex soil mixtures, the De Loor equation has proved useful (Dirksen and Dasberg, 1993).

Generally, the parallel probes are separated by 5 cm and vary in length from 10 to 50 cm; the rods of the probe can be of any metallic substance. The sampling volume is essentially a cylinder of a few centimetres in radius around the parallel probes (Knight, 1992). The coaxial cable from the probe to the signal-processing unit should not be longer than about 30 m. Soil water profiles can be obtained from a buried set of probes, each placed horizontally at a different depth and linked to a field data logger by a multiplexer.

Frequency-domain measurement

While time-domain reflectometry uses microwave frequencies in the gigahertz range, frequency-domain sensors measure the dielectric constant at a single frequency, usually in the megahertz range. The microwave dielectric probe utilizes an open-ended coaxial cable and a single reflectometer at the probe tip to measure amplitude and phase at a particular frequency. Soil measurements are referenced to air, and are typically calibrated with dielectric blocks and/or liquids of known dielectric properties. One advantage of using liquids for calibration is that a perfect electrical contact between the probe tip and the material can be maintained (Jackson, 1990). As a single, small probe tip is used, only a small volume of soil is ever evaluated, and soil contact is therefore critical. As a result, this method is excellent for laboratory or point measurements, but is likely to be subject to spatial variability problems if used on a field scale (Dirksen, 1999).
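The Topp, Davis and Annan relation (equation 11.8) is a simple cubic and easy to evaluate. The sketch below merely restates it in code; the example permittivity values are chosen for illustration.

```python
# Volumetric water content from the measured dielectric constant, using the
# empirical Topp, Davis and Annan (1980) relation (equation 11.8).

def topp_theta_v(eps):
    """theta_v as a fraction; the relation applies to typical mineral soils."""
    return -0.053 + 0.029 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

# Dry mineral soil has eps of roughly 3-5; wet soil can reach 25 or more:
for eps in (4.0, 25.0):
    print(f"eps = {eps:4.1f} -> theta_v = {topp_theta_v(eps):.1%}")  # 5.4% and 39.5%
```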


11.4 Soil water potential instrumentation

The basic instruments capable of measuring matric potential are sufficiently inexpensive and reliable to be used in field-scale monitoring programmes. However, each instrument has a limited accessible water potential range. Tensiometers work well only in wet soil, while resistance blocks do better in moderately dry soil.
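The working ranges quoted in this chapter (tensiometers usable to about –85 kPa; nylon and gypsum resistance blocks between roughly –50 kPa and –1 500 kPa) amount to a simple range check when selecting an instrument. The sketch below is illustrative only; the function name and the hard boundaries are assumptions, since real working ranges vary by instrument.

```python
# Rough matric potential working ranges for the sensors discussed in this
# chapter. Boundaries are indicative figures from the text, not exact limits.

def suitable_sensors(psi_kpa):
    """Return the sensors usable at matric potential psi_kpa (more negative = drier)."""
    sensors = []
    if -85.0 <= psi_kpa <= 0.0:       # tensiometer practical limit, about -85 kPa
        sensors.append("tensiometer")
    if -1500.0 <= psi_kpa <= -50.0:   # nylon/gypsum resistance blocks
        sensors.append("resistance block")
    return sensors

print(suitable_sensors(-30.0))   # wet soil: tensiometer only
print(suitable_sensors(-60.0))   # overlap region: both instruments
print(suitable_sensors(-500.0))  # moderately dry soil: resistance block only
```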





11.4.1 Tensiometers

The most widely used and least expensive water potential measuring device is the tensiometer. Tensiometers are simple instruments, usually consisting of a porous ceramic cup and a sealed plastic cylindrical tube connecting the porous cup to some pressure-recording device at the top of the cylinder. They measure the matric potential, because solutes can move freely through the porous cup.

The tensiometer establishes a quasi-equilibrium condition with the soil water system. The porous ceramic cup acts as a membrane through which water flows, and it must therefore remain saturated if it is to function properly. Consequently, all the pores in the ceramic cup and the cylindrical tube are initially filled with de-aerated water. Once in place, the tensiometer will be subject to negative soil water potentials, causing water to move from the tensiometer into the surrounding soil matrix. The water movement from the tensiometer will create a negative potential, or suction, in the tensiometer cylinder, which will register on the recording device. For recording, a simple U-tube filled with water and/or mercury, a Bourdon-type vacuum gauge or a pressure transducer (Marthaler and others, 1983) is suitable. If the soil water potential increases, water moves from the soil back into the tensiometer, resulting in a less negative water potential reading.

This exchange of water between the soil and the tensiometer, as well as the tensiometer’s exposure to negative potentials, will cause dissolved gases to be released from the solution, forming air bubbles. The formation of air bubbles will alter the pressure readings in the tensiometer cylinder and result in faulty readings. Another limitation is that the tensiometer has a practical working limit of ψ ≈ –85 kPa. Beyond –100 kPa (≈ 1 atm), water will boil at ambient temperature, forming water vapour bubbles which destroy the vacuum inside the tensiometer cylinder.
Consequently, the cylinders occasionally need to be de-aired with a hand-held vacuum pump and then refilled. Under drought conditions, appreciable amounts of water can move from the tensiometer into the soil; thus, tensiometers can alter the very condition they were designed to measure. Evidence of this process is that excavated tensiometers often have large numbers of roots accumulated in the proximity of the ceramic cups. Typically, when the tensiometer acts as an “irrigator”, so much water is lost through the ceramic cup that a vacuum in the cylinder cannot be maintained, and the tensiometer gauge becomes inoperative.
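The –85 kPa working limit can be expressed in the head and pF units of section 11.1.2. A minimal sketch, assuming a water density of 1 000 kg m–3 and g = 9.81 m s–2:

```python
import math

# Convert a matric potential in kPa to pressure head (cm of water) and to pF,
# where pF = log10(|head in cm|); see section 11.1.2.

RHO_W = 1000.0   # density of water, kg m-3
G = 9.81         # acceleration of gravity, m s-2

def head_cm(psi_kpa):
    """Pressure head in centimetres of water for a potential given in kPa."""
    return abs(psi_kpa) * 1000.0 / (RHO_W * G) * 100.0

def pF(psi_kpa):
    return math.log10(head_cm(psi_kpa))

print(f"field capacity,    -10 kPa : pF = {pF(-10):.1f}")    # 2.0
print(f"tensiometer limit, -85 kPa : pF = {pF(-85):.1f}")    # 2.9
print(f"wilting point,  -1 500 kPa : pF = {pF(-1500):.1f}")  # 4.2
```

The values reproduce the figures quoted earlier in the chapter: field capacity near pF 2 and the wilting point near pF 4.2.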

Before installation, but after the tensiometer has been filled with water and degassed, the ceramic cup must remain wet. Wrapping the ceramic cup in wet rags or inserting it into a container of water will keep the cup wet during transport from the laboratory to the field. In the field, a hole of the appropriate size and depth is prepared. The hole should be large enough to create a snug fit on all sides, and long enough so that the tensiometer extends sufficiently above the soil surface for de-airing and refilling access. Since the ceramic cup must remain in contact with the soil, it may be beneficial in stony soil to prepare a thin slurry of mud from the excavated site and to pour it into the hole before inserting the tensiometer. Care should also be taken to ensure that the hole is backfilled properly, thus eliminating any depressions that may lead to ponded conditions adjacent to the tensiometer. The latter precaution will minimize any water movement down the cylinder walls, which would produce unrepresentative soil water conditions.

Only a small portion of the tensiometer is exposed to ambient conditions, but its interception of solar radiation may induce thermal expansion of the upper tensiometer cylinder. Similarly, temperature gradients from the soil surface to the ceramic cup may result in thermal expansion or contraction of the lower cylinder. To minimize the risk of temperature-induced false water potential readings, the tensiometer cylinder should be shaded and constructed of non-conducting materials, and readings should be taken at the same time every day, preferably in the early morning.

A new development is the osmotic tensiometer, in which the tube of the meter is filled with a polymer solution in order to function better in dry soil. For more information on tensiometers, see Dirksen (1999) and Mullins (2001).

11.4.2 Resistance blocks

Electrical resistance blocks, although insensitive to water potentials in the wet range, are excellent companions to the tensiometer. They consist of electrodes encased in some type of porous material that within about two days will reach a quasi-equilibrium state with the soil. The most common block materials are nylon fabric, fibreglass and gypsum, with a working range of about –50 kPa (for nylon) or –100 kPa (for gypsum) up to –1 500 kPa. Typical block sizes are 4 cm × 4 cm × 1 cm. Gypsum blocks last a few years, but less in very wet or saline soil (Perrier and Marsh, 1958).



This method determines water potential as a function of electrical resistance, measured with an alternating current bridge (usually ≈ 1 000 Hz), because direct current gives polarization effects. However, resistance decreases if the soil is saline, falsely indicating a wetter soil. Gypsum blocks are less sensitive to soil salinity effects because the electrodes are consistently exposed to a saturated solution of calcium sulphate. The output of gypsum blocks must be corrected for temperature (Aggelides and Londra, 1998).

Because resistance blocks do not protrude above the ground, they are excellent for semi-permanent agricultural networks of water potential profiles, if installation is careful and systematic (WMO, 2001). When installing resistance blocks, it is best to dig a small trench for the lead wires before preparing the hole for the blocks, in order to minimize water movement along the wires to the blocks. A possible field problem is that shrinking and swelling soil may break contact with the blocks. On the other hand, resistance blocks do not affect the distribution of plant roots.

Resistance blocks are relatively inexpensive, but they need to be calibrated individually. This is generally accomplished by saturating the blocks in distilled water and then subjecting them to predetermined pressures in a pressure-plate apparatus (Wellings, Bell and Raynor, 1985), at a minimum of five different pressures before field installation. Unfortunately, the resistance is lower on a drying curve than on a wetting curve, generating hysteresis errors in the field, because resistance blocks are slow to equilibrate with varying soil wetness (Tanner and Hanks, 1952). As resistance-block calibration curves change with time, they need to be calibrated before installation and checked regularly afterwards, either in the laboratory or in the field.
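The individual calibration just described produces a small table of resistance-potential pairs, and a field reading is then converted by interpolating between them. The following sketch uses hypothetical gypsum-block calibration data and a log-linear interpolation; both the numbers and the interpolation choice are illustrative, not a prescribed procedure.

```python
import bisect
import math

# Convert a gypsum-block resistance reading to matric potential by
# interpolating in an individually determined calibration table.
# The five calibration pairs below are hypothetical.

# (resistance in ohm, matric potential in kPa); resistance rises as soil dries
CALIBRATION = [
    (550.0, -100.0),
    (1200.0, -300.0),
    (2800.0, -600.0),
    (6500.0, -1000.0),
    (15000.0, -1500.0),
]

def potential_from_resistance(r_ohm):
    """Piecewise-linear interpolation in log(resistance); clamps at both ends."""
    logs = [math.log(r) for r, _ in CALIBRATION]
    psis = [p for _, p in CALIBRATION]
    x = math.log(r_ohm)
    if x <= logs[0]:
        return psis[0]
    if x >= logs[-1]:
        return psis[-1]
    i = bisect.bisect_right(logs, x)
    frac = (x - logs[i - 1]) / (logs[i] - logs[i - 1])
    return psis[i - 1] + frac * (psis[i] - psis[i - 1])

print(potential_from_resistance(2800.0))  # a calibration point: -600.0 kPa
```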

11.4.3 Psychrometers

Psychrometers are used in laboratory research on soil samples as a standard for other techniques (Mullins, 2001), but a field version, called the Spanner psychrometer, is also available (Rawlins and Campbell, 1986). It consists of a miniature thermocouple placed within a small chamber with a porous wall. The thermocouple is cooled by the Peltier effect, condensing water on the wire junction. As water evaporates from the junction, its temperature decreases and a current is produced which is measured by a meter. Such measurements are quick to respond to changes in soil water potential, but are very sensitive to temperature and salinity (Merrill and Rawlins, 1972).

The lowest water potential typically associated with active plant water uptake corresponds to a relative humidity of between 98 and 100 per cent. This implies that, if the water potential in the soil is to be measured accurately to within 10 kPa, the temperature would have to be controlled to better than 0.001 K. The use of field psychrometers is therefore most appropriate for low matric potentials, of less than –300 kPa. In addition, the instrument components differ in heat capacity, so diurnal soil temperature fluctuations can induce temperature gradients in the psychrometer (Brunini and Thurtell, 1982). Consequently, Spanner psychrometers should not be used at depths of less than 0.3 m, and readings should be taken at the same time each day, preferably in the early morning. In summary, soil psychrometry is a difficult and demanding method, even for specialists.

11.5 Remote sensing of soil moisture

Earlier in this chapter it was mentioned that a single observation location cannot provide absolute knowledge of regional soil moisture, but only relative knowledge of its change, because soils are often very inhomogeneous. Nowadays, however, measurements from space-borne instruments using remote-sensing techniques are available for determining soil moisture in the upper soil layer. This allows interpolation at the mesoscale for estimating evapotranspiration rates, evaluating plant stress, and so on, and also facilitates moisture-balance input to weather models (Jackson and Schmugge, 1989; Saha, 1995). The usefulness of soil moisture determination at meteorological stations has thereby been increased greatly, because satellite measurements need "ground truth" to provide accuracy in the absolute sense. Moreover, station measurements are necessary to provide information about moisture in deeper soil layers, which cannot be observed from satellites or aircraft. Some principles of the airborne measurement of soil moisture are briefly given here; for more details see Part II, Chapter 8.

Two uncommon properties of water in soil make it accessible to remote sensing. First, as already discussed in the context of time-domain reflectometry, the dielectric constant of water is an order of magnitude larger than that of dry soil at microwave wavelengths. In remote sensing, this feature can be used either passively or actively (Schmugge, Jackson and McKim, 1980). Passive sensing analyses the natural microwave emissions from the Earth's surface, while active sensing evaluates the backscatter of a signal sent by the satellite.



The microwave radiometer response will range from an emissivity of about 0.95 for dry soil down to 0.6 or lower for wet soil. For active satellite radar measurements, an increase of about 10 dB in return is observed as soil goes from dry to wet. The microwave emission is referred to as brightness temperature Tb and is proportional to the emissivity β and the temperature of the soil surface, Tsoil, or:

Tb = β Tsoil     (11.9)

where Tsoil is in kelvin, and β depends on soil texture, surface roughness and vegetation. Any vegetation canopy will influence the soil component. The volumetric water content is related to the total active backscatter St by:

θv = L (St – Sv) (RA)^–1     (11.10)

where L is a vegetation attenuation coefficient; Sv is the backscatter from vegetation; R is a soil surface roughness term; and A is a soil moisture sensitivity term. As a result, microwave response to soil water content can be expressed as an empirical relationship. The sampling depth in the soil is of the order of 5 to 10 cm. The passive technique is robust, but its pixel resolution is limited to not less than 10 km because satellite antennas have a limited size. The pixel resolution of active satellite radar is better by more than a factor of 100, but active sensing is very sensitive to surface roughness and requires calibration against surface data.

The second remote-sensing feature of soil water is its relatively large heat capacity and thermal conductivity. Therefore, moist soils have a large thermal inertia. Accordingly, if cloudiness does not interfere, remote sensing of the diurnal range of surface temperature can be used to estimate soil moisture (Idso and others, 1975; Van de Griend, Camillo and Gurney, 1985).

Site selection and sample size

Standard soil moisture observations at principal stations should be made at several depths between 10 cm and 1 m, and also lower if there is much deep infiltration. Observation frequency should be approximately once per week. Indirect measurements should not necessarily be carried out in the meteorological enclosure, but rather near it, below a sufficiently horizontal natural surface which is typical of the uncultivated environment. The representativeness of any soil moisture observation point is limited because of the high probability of significant variations, both horizontally and vertically, of soil structure (porosity, density, chemical composition). Horizontal variations of soil water potential tend to be relatively smaller than such variations of soil water content. Gravimetric water content determinations are only reliable at the point of measurement, making a large number of samples necessary to describe adequately the soil moisture status of the site. To estimate the number of samples n needed at a local site to estimate soil water content at an observed level of accuracy (L), the sample size can be estimated from:

n = 4 (σ²/L²)     (11.11)

where σ² is the sample variance generated from a preliminary sampling experiment. For example, if a preliminary sampling yielded a (typical) σ² of 25 per cent and the accuracy level needed to be within 3 per cent, 12 samples would be required from the site (if it can be assumed that water content is normally distributed across the site).

A regional approach divides the area into strata based on the uniformity of relevant variables within the strata, for example, similarity of hydrological response, soil texture, soil type, vegetative cover, slope, and so on. Each stratum can be sampled independently and the data recombined by weighting the results for each stratum by its relative area. The most critical factor controlling the distribution of soil water in low-sloping watersheds is topography, which is often a sufficient criterion for subdivision into spatial units of homogeneous response. Similarly, sloping rangeland will need to be more intensively sampled than flat cropland. However, the presence of vegetation tends to diminish the soil moisture variations caused by topography.

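The sample-size estimate of equation 11.11 can be sketched in a few lines; the figures below simply repeat the worked example in the text (σ² = 25, L = 3 per cent):

```python
import math

def samples_needed(variance, accuracy):
    """Number of gravimetric samples n = 4 * sigma^2 / L^2 (equation 11.11),
    rounded up, assuming water content is normally distributed across the site."""
    return math.ceil(4.0 * variance / accuracy ** 2)

# Worked example from the text: sigma^2 = 25, target accuracy L = 3 per cent
print(samples_needed(25.0, 3.0))  # -> 12
```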


References and further reading

Aggelides, S.M. and P.A. Londra, 1998: Comparison of empirical equations for temperature correction of gypsum sensors. Agronomy Journal, 90, pp. 441–443.
Brunini, O. and G.W. Thurtell, 1982: An improved thermocouple hygrometer for in situ measurements of soil water potential. Soil Science Society of America Journal, 46, pp. 900–904.
Campbell, D.J. and J.K. Henshall, 2001: Bulk density. In: K.A. Smith and C.E. Mullins (eds), Soil and Environmental Analysis: Physical Methods. Marcel Dekker, New York, pp. 315–348.
Deardorff, J.W., 1978: Efficient prediction of ground surface temperature and moisture, with inclusion of a layer of vegetation. Journal of Geophysical Research, 83, pp. 1889–1904.
Dirksen, C., 1999: Soil Physics Measurements. Catena Verlag, Reiskirchen, Germany, 154 pp.
Dirksen, C. and S. Dasberg, 1993: Improved calibration of time domain reflectometry soil water content measurements. Soil Science Society of America Journal, 57, pp. 660–667.
Drungil, C.E.C., K. Abt and T.J. Gish, 1989: Soil moisture determination in gravelly soils with time domain reflectometry. Transactions of the American Society of Agricultural Engineering, 32, pp. 177–180.
Gardner, W.H. and C. Calissendorff, 1967: Gamma-ray and neutron attenuation measurement of soil bulk density and water content. Proceedings of the Symposium on the Use of Isotope and Radiation Techniques in Soil Physics and Irrigation Studies (Istanbul, 12–16 June 1967). International Atomic Energy Agency, Vienna, pp. 101–112.
Gardner, C.M.K., D.A. Robinson, K. Blyth and J.D. Cooper, 2001: Soil water content. In: K.A. Smith and C.E. Mullins (eds), Soil and Environmental Analysis: Physical Methods. Marcel Dekker, New York, pp. 1–64.
Gee, G.W. and M.E. Dodson, 1981: Soil water content by microwave drying: A routine procedure. Soil Science Society of America Journal, 45, pp. 1234–1237.
Greacen, E.L., 1981: Soil Water Assessment by the Neutron Method. CSIRO, Australia, 140 pp.
Idso, S.B., R.D. Jackson, R.J. Reginato and T.J. Schmugge, 1975: The utility of surface temperature measurements for the remote sensing of surface soil water status. Journal of Geophysical Research, 80, pp. 3044–3049.
Jackson, T.J., 1990: Laboratory evaluation of a field-portable dielectric/soil moisture probe. IEEE Transactions on Geoscience and Remote Sensing, 28, pp. 241–245.
Jackson, T.J. and T.J. Schmugge, 1989: Passive microwave remote sensing system for soil moisture: Some supporting research. IEEE Transactions on Geoscience and Remote Sensing, 27, pp. 225–235.
Klute, A. (ed.), 1986: Methods of Soil Analysis, Part 1: Physical and Mineralogical Methods. American Society of Agronomy, Madison, Wisconsin, United States, 1188 pp.
Knight, J.H., 1992: Sensitivity of time domain reflectometry measurements to lateral variations in soil water content. Water Resources Research, 28, pp. 2345–2352.
Marthaler, H.P., W. Vogelsanger, F. Richard and J.P. Wierenga, 1983: A pressure transducer for field tensiometers. Soil Science Society of America Journal, 47, pp. 624–627.
Merrill, S.D. and S.L. Rawlins, 1972: Field measurement of soil water potential with thermocouple psychrometers. Soil Science, 113, pp. 102–109.
Mullins, C.E., 2001: Matric potential. In: K.A. Smith and C.E. Mullins (eds), Soil and Environmental Analysis: Physical Methods. Marcel Dekker, New York, pp. 65–93.
Perrier, E.R. and A.W. Marsh, 1958: Performance characteristics of various electrical resistance units and gypsum materials. Soil Science, 86, pp. 140–147.
Rawlins, S.L. and G.S. Campbell, 1986: Water potential: Thermocouple psychrometry. In: A. Klute (ed.), Methods of Soil Analysis, Part 1: Physical and Mineralogical Methods. American Society of Agronomy, Madison, Wisconsin, United States, pp. 597–618.
Saha, S.K., 1995: Assessment of regional soil moisture conditions by coupling satellite sensor data with a soil-plant system heat and moisture balance model. International Journal of Remote Sensing, 16, pp. 973–980.
Schmugge, T.J., T.J. Jackson and H.L. McKim, 1980: Survey of methods for soil moisture determination. Water Resources Research, 16, pp. 961–979.
Tanner, C.B. and R.J. Hanks, 1952: Moisture hysteresis in gypsum moisture blocks. Soil Science Society of America Proceedings, 16, pp. 48–51.
Topp, G.C., J.L. Davis and A.P. Annan, 1980: Electromagnetic determination of soil water content: Measurement in coaxial transmission lines. Water Resources Research, 16, pp. 574–582.
Van de Griend, A.A., P.J. Camillo and R.J. Gurney, 1985: Discrimination of soil physical parameters, thermal inertia and soil moisture from diurnal surface temperature fluctuations. Water Resources Research, 21, pp. 997–1009.
Visvalingam, M. and J.D. Tandy, 1972: The neutron method for measuring soil moisture content: A review. European Journal of Soil Science, 23, pp. 499–511.
Wellings, S.R., J.P. Bell and R.J. Raynor, 1985: The Use of Gypsum Resistance Blocks for Measuring Soil Water Potential in the Field. Report No. 92, Institute of Hydrology, Wallingford, United Kingdom.
World Meteorological Organization, 1968: Practical Soil Moisture Problems in Agriculture. Technical Note No. 97, WMO-No. 235.TP.128, Geneva.
World Meteorological Organization, 1989: Land Management in Arid and Semi-arid Areas. Technical Note No. 186, WMO-No. 662, Geneva.
World Meteorological Organization, 2001: Lecture Notes for Training Agricultural Meteorological Personnel (J. Wieringa and J. Lomas). Second edition, WMO-No. 551, Geneva.

CHAPTER 12

MEASUREMENT OF UPPER-AIR PRESSURE, TEMPERATURE AND HUMIDITY

12.1 GENERAL

12.1.1 Definitions



The following definitions from WMO (1992; 2003a) are relevant to upper-air measurements using a radiosonde:

Radiosonde: Instrument intended to be carried by a balloon through the atmosphere, equipped with devices to measure one or several meteorological variables (pressure, temperature, humidity, etc.), and provided with a radio transmitter for sending this information to the observing station.

Radiosonde observation: An observation of meteorological variables in the upper air, usually atmospheric pressure, temperature and humidity, by means of a radiosonde. Note: The radiosonde may be attached to a balloon, or it may be dropped (dropsonde) from an aircraft or rocket.

Radiosonde station: A station at which observations of atmospheric pressure, temperature and humidity in the upper air are made by electronic means.

Upper-air observation: A meteorological observation made in the free atmosphere, either directly or indirectly.

Upper-air station, upper-air synoptic station, aerological station: A surface location from which upper-air observations are made.

Sounding: Determination of one or several upper-air meteorological variables by means of instruments carried aloft by balloon, aircraft, kite, glider, rocket, and so on.

This chapter primarily deals with radiosonde systems. Measurements using special platforms or specialized equipment, or made indirectly by remote-sensing methods, are discussed in various chapters of Part II of this Guide. Radiosonde systems are normally used to measure pressure, temperature and relative humidity. At most operational sites, the radiosonde system is also used for upper-wind determination (see Part I, Chapter 13). In addition, some radiosondes are flown with sensing systems for atmospheric constituents, such as ozone concentration or radioactivity. These additional measurements are not discussed in any detail in this chapter.

12.1.2 Units used in upper-air measurements

The units of measurement for the meteorological variables of radiosonde observations are hectopascals for pressure, degrees Celsius for temperature, and per cent for relative humidity. Relative humidity is reported relative to saturated vapour pressure over a water surface, even at temperatures less than 0°C.

The unit of geopotential height used in upper-air observations is the standard geopotential metre, defined as 0.980 665 dynamic metres. In the troposphere, the value of the geopotential height is approximately equal to the geometric height expressed in metres. The values of the physical functions and constants adopted by WMO (1988) should be used in radiosonde computations.

12.1.3 Meteorological requirements

Radiosonde data for meteorological operations

Upper-air measurements of temperature and relative humidity are two of the basic measurements used in the initialization of the analyses of numerical weather prediction models for operational weather forecasting. Radiosondes provide most of the in situ temperature and relative humidity measurements over land, while radiosondes launched from remote islands or ships provide limited coverage over the oceans. Temperatures with vertical resolution similar to that of radiosondes can be observed by aircraft, either during ascent and descent or at cruise levels. Aircraft observations are used to supplement the radiosonde observations, particularly over the sea. Satellite observations of temperature and water vapour distribution have lower vertical resolution than radiosonde or aircraft measurements. Satellite observations have the greatest impact on numerical weather prediction analyses over



the oceans and other areas of the globe where radiosonde and aircraft observations are sparse or unavailable.

Accurate measurements of the vertical structure of temperature and water vapour fields in the troposphere are extremely important for all types of forecasting, especially regional and local forecasting. The measurements indicate the existing vertical structure of cloud or fog layers. Furthermore, the vertical structure of temperature and water vapour fields determines the stability of the atmosphere and, subsequently, the amount and type of cloud that will be forecast. Radiosonde measurements of the vertical structure can usually be provided with sufficient accuracy to meet most user requirements. However, negative systematic errors in radiosonde relative humidity measurements of high humidity in clouds cause problems in numerical weather prediction analyses if the error is not compensated for.

High-resolution measurements of the vertical structure of temperature and relative humidity are important for environmental pollution studies (for instance, identifying the depth of the atmospheric boundary layer). High vertical resolution is also necessary for forecasting the effects of atmospheric refraction on the propagation of electromagnetic radiation or sound waves. Civil aviation, artillery and other ballistic applications, such as space vehicle launches, have operational requirements for measurements of the density of air at given pressures (derived from radiosonde temperature and relative humidity measurements).

Radiosonde observations are vital for studies of upper-air climate change. Hence, it is important to keep adequate records of the systems used for measurements and of any changes in the operating or correction procedures used with the equipment. In this context, it has proved necessary to establish the changes in radiosonde instruments and practices that have taken place since radiosondes came into regular use (see, for instance, WMO, 1993a). Climate change studies based on radiosonde measurements require extremely high stability in the systematic errors of the radiosonde measurements. However, the errors in early radiosonde measurements of some meteorological variables, particularly relative humidity and pressure, were too large to provide acceptable long-term references at all heights reported by the radiosondes. Thus, improvements to and changes in radiosonde design were necessary. Furthermore,

expenditure limitations on meteorological operations require that radiosonde consumables remain cheap if widespread radiosonde use is to continue. Therefore, certain compromises in system measurement accuracy have to be accepted by users, taking into account that radiosonde manufacturers are producing systems that need to operate over an extremely wide range of meteorological conditions:
(a) 1 050 to 5 hPa for pressure;
(b) 50 to –90°C for temperature;
(c) 100 to 1 per cent for relative humidity;
with the systems being able to sustain continuous reliable operation when operating in heavy rain, in the vicinity of thunderstorms, and in severe icing conditions.

Relationships between satellite and radiosonde upper-air measurements

Nadir-viewing satellite observing systems do not measure vertical structure with the same accuracy or degree of confidence as radiosonde or aircraft systems. The current satellite temperature and water vapour sounding systems either observe upwelling radiances from carbon dioxide or water vapour emissions in the infrared, or alternatively oxygen or water vapour emissions at microwave frequencies (see Part II, Chapter 8). The radiance observed by a satellite channel is composed of atmospheric emissions from a range of heights in the atmosphere. This range is determined by the distribution of emitting gases in the vertical and the atmospheric absorption at the channel frequencies. Most radiances from satellite temperature channels approximate mean layer temperatures for a layer at least 10 km thick. The height distribution (weighting function) of the observed temperature channel radiance will vary with geographical location to some extent. This is because the radiative transfer properties of the atmosphere have a small dependence on temperature. The concentrations of the emitting gas may vary to a small extent with location and cloud; aerosol and volcanic dust may also modify the radiative heat exchange. Hence, basic satellite temperature sounding observations provide good horizontal resolution and spatial coverage worldwide for relatively thick layers in the vertical, but the precise distribution in the vertical of the atmospheric emission observed may be difficult to specify at any given location. Most radiances observed by nadir-viewing satellite water vapour channels in the troposphere originate from layers of the atmosphere about 4 to 5 km



thick. The pressures of the atmospheric layers contributing to the radiances observed by a water vapour channel vary with location to a much larger extent than for the temperature channels. This is because the thickness and central pressure of the layer observed depend heavily on the distribution of water vapour in the vertical. For instance, the layers observed in a given water vapour channel will be lowest when the upper troposphere is very dry. The water vapour channel radiances observed depend on the temperature of the water vapour. Therefore, water vapour distribution in the vertical can be derived only once suitable measurements of vertical temperature structure are available. Limb-viewing satellite systems can provide measurements of atmospheric structure with higher vertical resolution than nadir-viewing systems; an example of this type of system is temperature and water vapour measurement derived from global positioning system (GPS) radio occultation. In this technique, vertical structure is measured along paths in the horizontal of at least 200 km (Kursinski and others, 1997). Thus, the techniques developed for using satellite sounding information in numerical weather prediction models incorporate information from other observing systems, mainly radiosondes and aircraft. This information may be contained in an initial estimate of vertical structure at a given location, which is derived from forecast model fields or is found in catalogues of possible vertical structure based on radiosonde measurements typical of the geographical location or air mass type. In addition, radiosonde measurements are used to cross-reference the observations from different satellites or the observations at different view angles from a given satellite channel. The comparisons may be made directly with radiosonde observations or indirectly through the influence from radiosonde measurements on the vertical structure of numerical forecast fields. 
Hence, radiosonde and satellite sounding systems are complementary observing systems and provide a more reliable global observation system when used together.

Maximum height of radiosonde observations

Radiosonde observations are used regularly for measurements up to heights of about 35 km. However, many observations worldwide are not made to heights greater than about 25 km, because of the higher cost of the balloons and gas necessary to lift the equipment to the lowest pressures. Temperature errors in many radiosonde systems increase rapidly at low pressures. Therefore, some of the available radiosonde systems are unsuitable for observing at the lowest pressures. The problems associated with the contamination of sensors during flight and the very long time-constants of sensor response at low temperatures and pressures limit the usefulness of radiosonde relative humidity measurements to the troposphere.

Accuracy requirements

This section and the next summarize the requirements for radiosonde accuracy and compare them with operational performance. A detailed discussion of performance and sources of errors is given in later sections. The practical accuracy requirements for radiosonde observations are included in Annex 12.A. WMO (1970) describes a very useful approach to the consideration of the performance of instrument systems, which bears on the system design. Performance is based on observed atmospheric variability. Two limits are defined as follows:
(a) The limit of performance beyond which improvement is unnecessary for various purposes;
(b) The limit of performance below which the data obtained would be of negligible value for various purposes.
The performance limits derived by WMO (1970) for upper-wind and for radiosonde temperature, relative humidity and geopotential height measurements are contained in Tables 1 to 4 of Annex 12.B.

Temperature: requirements and performance

Most modern radiosonde systems measure temperature in the troposphere with a standard error of between 0.1 and 0.5 K. This performance is usually within a factor of three of the optimum performance suggested in Table 2 of Annex 12.B. Unfortunately, standard errors larger than 1 K are still found in some radiosonde networks in tropical regions. The measurements at these stations fall outside the lower performance limit found in Table 2 of Annex 12.B, and are in the category where the measurements have negligible value for the stated purpose.



At pressures higher than about 30 hPa in the stratosphere, the measurement accuracy of most modern radiosondes is similar to the measurement accuracy in the troposphere. Thus, in this part of the stratosphere, radiosonde measurement errors are about twice the stated optimum performance limit. At pressures lower than 30 hPa, the errors in older radiosonde types increase rapidly with decreasing pressure and in some cases approach the limit where they cease to be useful for the stated purpose. The rapid escalation in radiosonde temperature measurement errors at very low pressure results from an increase in temperature errors associated with infrared and solar radiation, coupled with a rapid increase in errors in the heights assigned to the temperatures. At very low pressures, even relatively small errors in the radiosonde pressure measurements will produce large errors in height and, hence, in reported temperature (see the discussion of geopotential heights below).

Relative humidity

Errors in modern radiosonde relative humidity measurements are at least a factor of two or three larger than the optimum performance limit for high relative humidity suggested in Table 3 of Annex 12.B, for the troposphere above the convective boundary layer. Furthermore, the errors in radiosonde relative humidity measurements increase as temperature decreases. For some sensor types, errors at temperatures lower than –40°C may exceed the limit where the measurements have no value for the stated purpose.

Geopotential heights

Errors in geopotential height determined from radiosonde observations differ according to whether the height is for a specified pressure level or for the height of a given turning point in the temperature or relative humidity structure, such as the tropopause. The error, εz(t1), in the geopotential height at a given time into flight is given by:

εz(t1) = (R/g) ∫[p0 → p1] (εT(p) − (δT/δp) εp(p)) dp/p
       + (R/g) ∫[p1 → p1 + εp(p1)] (Tv(p) + εT(p) − (δT/δp) εp(p)) dp/p     (12.1)

where p0 is the surface pressure; p1 is the true pressure at time t1; p1 + εp(p1) is the actual pressure indicated by the radiosonde at time t1; εT(p) and εp(p) are the errors in the radiosonde temperature and pressure measurements, respectively, as a function of pressure; Tv(p) is the virtual temperature at pressure p; and R and g are the gas and gravitational constants as specified in WMO (1988).



Table 12.1. Errors in geopotential height (m)
(Typical errors in standard levels, εz(ps), and significant levels, εz(t1), for given temperature and pressure errors, at or near specified levels. Errors are similar in northern and southern latitudes.)

                                                     300 hPa   100 hPa   30 hPa   10 hPa
Temperature error εT = 0.25 K, pressure error εp = 0 hPa
  Standard and significant levels                        9        17       26       34
Temperature error εT = 0 K, pressure error εp = –1 hPa
  25°N          Standard level                           3         5        6       –4
                Significant level                       26        70      213      625
  50°N summer   Standard level                           3         5        1      –20
                Significant level                       26        72      223      680
  50°N winter   Standard level                           3        12       –2      –24
                Significant level                       27        72      211      650


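For the simplest case in Table 12.1 (a constant temperature error εT = 0.25 K and zero pressure error), the height-error integral reduces to εz = (R/g) εT ln(p0/ps). A short numerical check (the values R = 287.05 J kg⁻¹ K⁻¹, g = 9.806 65 m s⁻² and p0 = 1 000 hPa are assumptions close to, but not quoted from, the WMO (1988) constants):

```python
import math

R = 287.05   # specific gas constant for dry air (J kg^-1 K^-1), assumed value
G = 9.80665  # standard gravity (m s^-2), assumed value

def height_error(eps_t, p0_hpa, ps_hpa):
    """Standard-level geopotential height error (m) for a constant
    temperature error eps_t (K) and zero pressure error:
    eps_z = (R/g) * eps_t * ln(p0/ps)."""
    return (R / G) * eps_t * math.log(p0_hpa / ps_hpa)

# Reproduce the first row of Table 12.1 (eps_T = 0.25 K, eps_p = 0)
for ps in (300, 100, 30, 10):
    print(ps, "hPa:", round(height_error(0.25, 1000.0, ps)), "m")
```

Rounded to the nearest metre this gives 9, 17, 26 and 34 m at 300, 100, 30 and 10 hPa, consistent with the first row of the table.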

For a specified standard pressure level, ps, the pressure of the upper integration limit in the height computation is specified and is not subject to the radiosonde pressure error. Hence, the error in the standard pressure level geopotential height reduces to:

εz(ps) = (R/g) ∫[p0 → ps] (εT(p) − (δT/δp) εp(p)) dp/p     (12.2)

Table 12.1 shows the errors in geopotential height that are caused by radiosonde sensor errors for typical atmospheres. It shows that the geopotentials of given pressure levels can be measured quite well, which is convenient for the synoptic and numerical analysis of constant-pressure surfaces. However, large errors may occur in the heights of significant levels, such as the tropopause and other turning points, and in other levels calculated between the standard levels. Large height errors in the stratosphere resulting from pressure sensor errors of 2 or 3 hPa are likely to be of greatest significance in routine measurements in the tropics, where there are always significant temperature gradients in the vertical throughout the stratosphere. Ozone concentrations in the stratosphere also have pronounced gradients in the vertical, and height assignment errors will introduce significant errors into ozonesonde reports at all latitudes.

The optimum performance requirements for the heights of isobaric surfaces in a synoptic network, as stated in Table 4 of Annex 12.B, place extremely stringent requirements on radiosonde measurement accuracy. For instance, the best modern radiosondes would do well if height errors were only a factor of five higher than the optimum performance in the troposphere and an order of magnitude higher than the optimum performance in the stratosphere.

12.1.4 Measurement methods

This section discusses radiosonde methods in general terms. Details of instrumentation and procedures are given in other sections.

Constraints on radiosonde design

Certain compromises are necessary when designing a radiosonde. Temperature measurements are found to be most reliable when sensors are exposed unprotected above the top of the radiosonde, but this also leads to direct exposure to solar radiation. In most modern radiosondes, coatings are applied to the temperature sensor to minimize solar heating. Software corrections for the residual solar heating are then applied during data processing.

Nearly all relative humidity sensors require some protection from rain. A protective cover or duct reduces the ventilation of the sensor and hence the speed of response of the sensing system as a whole. The cover or duct also provides a source of contamination after passing through cloud. However, in practice, the requirement to protect relative humidity sensors from rain or ice is usually more important than perfect exposure to the ambient air. Thus, protective covers or ducts are usually used with a relative humidity sensor.

Pressure sensors are usually mounted internally to minimize the temperature changes in the sensor during flight and to avoid conflicts with the exposure of the temperature and relative humidity sensors.

Other important features required in radiosonde design are reliability, robustness, light weight and small dimensions. With modern electronic multiplexing readily available, it is also important to sample the radiosonde sensors at a high rate. If possible, this rate should be about once per second, corresponding to a minimum sample separation of about 5 m in the vertical. Since radiosondes are generally used only once, or not more than a few times, they must be designed for mass production at low cost. Ease and stability of calibration are very important, since radiosondes must often be stored for long periods (more than a year) prior to use. (Many of the most important Global Climate Observing System stations, for example, in Antarctica, are on sites where radiosondes cannot be delivered more than once per year.)

A radiosonde should be capable of transmitting an intelligible signal to the ground receiver over a slant range of at least 200 km. The voltage of the radiosonde battery varies with both time and temperature. Therefore, the radiosonde must be designed to accept battery variations without a loss of measurement accuracy or an unacceptable drift in the transmitted radio frequency.

Radio frequency used by radiosondes

The radio frequency spectrum bands currently used for most radiosonde transmissions are shown in Table 12.2. These correspond to the meteorological aids allocations specified by the International Telecommunication Union (ITU) Radiocommunication Sector radio regulations.



Table 12.2. Primary frequencies used by radiosondes in the meteorological aids bands

Radio frequency band (MHz)   Status    ITU regions
400.15 – 406                 Primary   All
1 668.4 – 1 700              Primary   All

Most secondary radar systems manufactured and deployed in the Russian Federation operate in a radio frequency band centred at 1 780 MHz.

The radio frequency actually chosen for radiosonde operations in a given location will depend on various factors. At sites where strong upper winds are common, slant ranges to the radiosonde are usually large and balloon elevations are often very low. Under these circumstances, the 400-MHz band will normally be chosen for use since a good communication link from the radiosonde to the ground system is more readily achieved at 400 MHz than at 1 680 MHz. When upper winds are not so strong, the choice of frequency will, on average, be determined by the method of upper-wind measurement used (see Part I, Chapter 13). The frequency band of 400 MHz is usually used when navigational aid windfinding is chosen, and 1 680 MHz when radiotheodolites or a tracking antenna are to be used with the radiosonde system.

The radio frequencies listed in Table 12.2 are allocated on a shared basis with other services. In some countries, the national radiocommunication authority has allocated part of the bands to other users, and the whole of the band is not available for radiosonde operations. In other countries, where large numbers of radiosonde systems are deployed in a dense network, there are stringent specifications on radio frequency drift and the bandwidth occupied by an individual flight. Any organization proposing to fly radiosondes should check that suitable radio frequencies are available for its use and should also check that it will not interfere with the radiosonde operations of the National Meteorological Service.

There are now strong pressures, supported by government radiocommunication agencies, to improve the efficiency of radio frequency use. Therefore, radiosonde operations will have to share with a greater range of users in the future. Wideband radiosonde systems occupying most of the available spectrum of the meteorological aids bands will become impracticable in many countries. Therefore, preparations for the future in most countries should be based on the principle that radiosonde transmitters and receivers will have to work with bandwidths of much less than 1 MHz in order to avoid interfering signals. Transmitter stability may have to be better than ±5 kHz in countries with dense radiosonde networks, and not worse than about ±200 kHz in most of the remaining countries.

National Meteorological Services need to maintain contact with national radiocommunication authorities in order to keep adequate radio frequency allocations and to ensure that their operations are protected from interference. Radiosonde operations will also need to avoid interference with, or from, data collection platforms transmitting to meteorological satellites between 401 and 403 MHz, with the downlinks from meteorological satellites between 1 690 and 1 700 MHz, and with the command and data acquisition operations for meteorological satellites at a limited number of sites between 1 670 and 1 690 MHz.

12.2 Radiosonde electronics

12.2.1 General features

A basic radiosonde design usually comprises three main parts, as follows:
(a) The sensors plus references;
(b) An electronic transducer, converting the output of the sensors and references into electrical signals;
(c) The radio transmitter.

In rawinsonde systems (see Part I, Chapter 13), there are also electronics associated with the reception and retransmission of radionavigation signals, or transponder system electronics for use with secondary radars. Radiosondes are usually required to measure more than one meteorological variable. Reference signals are used to compensate for instability in the conversion between sensor output and transmitted telemetry. Thus, a method of switching between various sensors and references in a predetermined cycle is required. Most modern radiosondes use electronic switches operating at high speed with one measurement cycle lasting typically between 1 and 2 s. This rate of sampling allows the meteorological variables to be sampled at height intervals of between 5 and 10 m at normal rates of ascent.
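The height-interval figures above follow directly from the ascent rate and the measurement cycle time; a minimal sketch (the 5 m/s ascent rate is the typical value quoted elsewhere in this chapter):

```python
# Vertical distance between successive samples is the balloon ascent
# rate multiplied by the duration of one measurement cycle.

def sampling_interval_m(ascent_rate_m_s, cycle_s):
    """Height interval (m) between successive radiosonde samples."""
    return ascent_rate_m_s * cycle_s

# A 1-2 s measurement cycle at a typical 5 m/s ascent gives 5-10 m spacing:
print(sampling_interval_m(5.0, 1.0))  # 5.0
print(sampling_interval_m(5.0, 2.0))  # 10.0
```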




12.2.2 Power supply for radiosondes

Radiosonde batteries should be of sufficient capacity to power the radiosonde for the required flight time in all atmospheric conditions. For radiosonde ascents to 5 hPa, radiosonde batteries should be of sufficient capacity to supply the required currents for up to three hours, given that ascents may often be delayed and that flight times may be as long as two hours. Three hours of operation would be required if descent data from the radiosonde were to be used. Batteries should be as light as practicable and should have a long storage life. They should also be environmentally safe following use. Many modern radiosondes can tolerate significant changes in output voltage during flight.

Two types of batteries are in common use: the dry-cell type and water-activated batteries. Dry batteries have the advantage of being widely available at very low cost because of the high volume of production worldwide. However, they may have the disadvantage of limited shelf life. Also, their output voltage may vary more during discharge than that of water-activated batteries. Water-activated batteries usually use a cuprous chloride and sulphur mixture. The batteries can be stored for long periods. The chemical reactions in water-activated batteries generate internal heat, reducing the need for thermal insulation and helping to stabilize the temperature of the radiosonde electronics during flight. These batteries are not manufactured on a large scale for other users. Therefore, they are generally manufactured directly by the radiosonde manufacturers. Care must be taken to ensure that batteries do not constitute an environmental hazard once the radiosonde falls to the ground after the balloon has burst.

12.2.3 Methods of data transmission

Radio transmitter

A wide variety of transmitter designs are in use. Solid-state circuitry is mainly used up to 400 MHz and valve (cavity) oscillators may be used at 1 680 MHz. Modern transmitter designs are usually crystal-controlled to ensure good frequency stability during the sounding. Good frequency stability during handling on the ground prior to launch and during flight is important. At 400 MHz, widely used radiosonde types are expected to have a transmitter power output lower than 250 mW. At 1 680 MHz the most widely used radiosonde type has a power output of about 330 mW. The modulation of the transmitter varies with radiosonde type. It would be preferable in future for radiosonde manufacturers to standardize the transmission of data from the radiosonde to the ground station. In any case, the radiocommunication authorities in many regions of the world will require that radiosonde transmitters meet certain specifications in future, so that the occupation of the radio frequency spectrum is minimized and other users can share the nominated meteorological aids radio frequency bands (see section ).

12.3 Temperature sensors

12.3.1 General requirements

The best modern temperature sensors have a speed of response to changes of temperature which is fast enough to ensure that systematic bias from thermal lag during an ascent remains less than 0.1 K through any layer of depth of 1 km. At typical radiosonde rates of ascent, this is achieved in most locations with a sensor time-constant of response faster than 1 s in the early part of the ascent. In addition, the temperature sensors should be designed to be as free as possible from radiation errors introduced by direct or backscattered solar radiation or heat exchange in the infrared. Infrared errors can be avoided by using sensor coatings that have low emissivity in the infrared. In the past, the most widely used white sensor coatings had high emissivity in the infrared. Measurements by these sensors were susceptible to significant errors from infrared heat exchange (see section ). Temperature sensors also need to be sufficiently robust to withstand buffeting during launch and sufficiently stable to retain accurate calibration over several years. Ideally, the calibration of temperature sensors should be sufficiently reproducible to make individual sensor calibration unnecessary. The main types of temperature sensors in routine use are thermistors (ceramic resistive semiconductors), capacitive sensors, bimetallic sensors and thermocouples.

The rate of response of the sensor is usually measured in terms of the time-constant of response, τ. This is defined (as in section 1.6.3 in Part I, Chapter 1) by:

dTe/dt = –1/τ · (Te – T)    (12.3)


Part I. Measurement of Meteorological Variables

where Te is the temperature of the sensor and T is the true air temperature. Thus, the time-constant is defined as the time required to respond by 63 per cent to a sudden change of temperature. The time-constant of the temperature sensor is proportional to thermal capacity and inversely proportional to the rate of heat transfer by convection from the sensor. Thermal capacity depends on the volume and composition of the sensor, whereas the heat transfer from the sensor depends on the sensor surface area, the heat transfer coefficient and the rate of the air mass flow over the sensor. The heat transfer coefficient has a weak dependence on the diameter of the sensor. Thus, the time-constants of response of temperature sensors made from a given material are approximately proportional to the ratio of the sensor volume to its surface area. Consequently, thin sensors of large surface area are the most effective for obtaining a fast response. The variation of the time-constant of response with the mass rate of air flow can be expressed as:

τ ∝ (ρ · v)^(–n)

where ρ is the air density, v the air speed over the sensor, and n a constant.

Note: For a sensor exposed above the radiosonde body on an outrigger, v would correspond to the rate of ascent, but the air speed over the sensor may be lower than the rate of ascent if the sensor were mounted in an internal duct.

The value of n varies between 0.4 and 0.8, depending on the shape of the sensor and on the nature of the air flow (laminar or turbulent). Representative values of the time-constant of response of the older types of temperature sensors are shown in Table 12.3 at pressures of 1 000, 100 and 10 hPa, for a rate of ascent of 5 m s–1. These values were derived from a combination of laboratory testing and comparisons with very fast response sensors during ascent in radiosonde comparison tests. As noted above, modern capacitative sensors and bead thermistors have time-constants of response faster than 1 s at 1 000 hPa.

Table 12.3. Typical time-constants of response of radiosonde temperature sensors

Temperature sensor   τ at 1 000 hPa (s)   τ at 100 hPa (s)   τ at 10 hPa (s)
(the body of this table is not recoverable from the source text)

12.3.2 Thermistors

Thermistors are usually made of a ceramic material whose resistance changes with temperature. The sensors have a high resistance that decreases with absolute temperature. The relationship between resistance, R, and temperature, T, can be expressed approximately as:

R = A · exp (B/T)    (12.5)

where A and B are constants. Sensitivity to temperature changes is very high, but the response to temperature changes is far from linear since the sensitivity decreases roughly with the square of the absolute temperature. As thermistor resistance is very high, typically tens of thousands of ohms, self-heating from the voltage applied to the sensor is negligible. It is possible to manufacture very small thermistors and, thus, fast rates of response can be obtained. Solar heating of a modern chip thermistor is around 1°C at 10 hPa.

12.3.3 Thermocapacitors

If V > √2·V0, the orbit becomes parabolic: the satellite has reached escape velocity and will not remain in orbit around the Earth. A geostationary orbit is achieved if the satellite orbits in the same direction as the Earth's rotation, with a period of one day. If the orbit is circular above the Equator it becomes stationary relative to the Earth and, therefore, always views the same area

Figure 8.1. Geometry of satellite orbits: (a) elements of a satellite elliptical orbit (semi-major axis a, eccentricity e, height h, Earth radius R, perigee); (b) circular satellite orbit; (c) satellite orbital elements on the celestial shell (first point of Aries, angle θ, inclination i, equatorial plane and orbit plane)
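The geostationary condition described above (a circular, equatorial orbit with a period of one sidereal day) fixes the orbital radius through Kepler's third law. A hedged numerical sketch; GM, the Earth radius and the sidereal day are standard constants, not values taken from this Guide:

```python
import math

# Kepler's third law: T^2 = 4 * pi^2 * a^3 / GM.  Solving for the radius a
# of a circular orbit with a one-sidereal-day period gives the
# geostationary altitude above the Earth's surface.

GM = 3.986004418e14    # Earth's gravitational parameter (m^3 s^-2)
R_EQ = 6.378e6         # equatorial radius of the Earth (m)
T_SID = 86164.1        # sidereal day (s)

def geostationary_altitude_m():
    a = (GM * T_SID ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
    return a - R_EQ

print(round(geostationary_altitude_m() / 1e3))  # → 35786 (km)
```

This reproduces the roughly 35 800 km altitude quoted for the geostationary satellites of the World Weather Watch system later in this chapter.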

Chapter 6. Rocket Measurements in the Stratosphere and Mesosphere


the sensors in order to avoid heating of the sensors due to the Joule effect caused by the electromagnetic energy radiated from the transmitter; the power of the latter should, in any case, be limited to the minimum necessary (from 200 to 500 mW). With the use of such low transmission power, together with a distance between the transmitter and the receiving station which may be as much as 150 km, it is usually necessary to use high-gain directional receiving antennas.

On reception, and in order to be able to assign the data to appropriate heights, the signals obtained after demodulation or decoding are recorded on multichannel magnetic tape together with the time-based signals from the tracking radar. Time correlation between the telemetry signals and radar position data is very important.

6.4 Temperature measurement by inflatable falling sphere

The inflatable falling sphere is a simple 1 m diameter mylar balloon containing an inflation mechanism and nominally weighing about 155 g. The sphere is deployed at an altitude of approximately 115 km, where it begins its free fall under gravitational and wind forces. After being deployed, the sphere is inflated to a super pressure of approximately 10 to 12 hPa by the vaporization of a liquid, such as isopentane. The surface of the sphere is metallized to enable radar tracking for position information as a function of time. To achieve the accuracy and precision required, the radar must be a high-precision tracking system, such as an FPS-16 C-band radar or better. The radar-measured position information and the coefficient of drag are then used in the equations of motion to calculate atmospheric density and winds. The calculation of density requires knowledge of the sphere's coefficient of drag over a wide range of flow conditions (Luers, 1970; Engler and Luers, 1978). Pressure and temperature are also calculated for the same altitude increments as density. Sphere measurements are affected only by the external physical forces of gravity, drag acceleration and winds, which makes the sphere a potentially more accurate measurement than other in situ measurements (Schmidlin, Lee and Michel, 1991).

The motion of the falling sphere is described by a simple equation of motion in a frame of reference having its origin at the centre of the Earth, as follows:

m · dV/dt = m·g – ρ·Cd·As·(Vr·|Vr|)/2 – ρ·Vb·g – 2m·(ω × V)    (6.5)

where As is the cross-sectional area of the sphere; Cd is the coefficient of drag; g is the acceleration due to gravity; m is the sphere mass; V is the sphere velocity; Vr is the motion of the sphere relative to the air; Vb is the volume of the sphere; ρ is the atmospheric density; and ω is the Earth's angular velocity. The relative velocity of the sphere with respect to the air mass is defined as Vr = V – Va, where Va is the total wind velocity. Cd is calculated on the basis of the relative velocity of the sphere. The terms on the right-hand side of equation 6.5 represent the gravity, friction, buoyancy and Coriolis forces, respectively. After simple mathematical manipulation, equation 6.5 is decomposed into three orthogonal components, including the vertical component of the equation of motion from which the density is calculated, thus obtaining:


ρ = 2m·(gz – z̈ – Cz) / [Cd·As·Vr·(ż – wz) + 2·Vb·gz]    (6.7)


where gz is the acceleration of gravity at level z; wz is the vertical wind component, usually assumed to be zero; ż is the vertical component of the sphere's velocity; and z̈ is the vertical component of the sphere's acceleration. The magnitudes of the buoyancy force (Vb·gz) and the Coriolis force (Cz) terms compared to the other terms of equation 6.7 are small and are either neglected or treated as perturbations. The temperature profile is extracted from the retrieved atmospheric density using the hydrostatic equation and the equation of state, as follows:

Tz = Ta · (ρa/ρz) + [M0/(R·ρz)] · ∫z^a ρh·g·dh

where h is the height, the variable of integration; M0 is the molecular weight of dry air; R is the universal gas constant; Ta is temperature in K at reference


Part II. Observing Systems

altitude a; Tz is temperature in K at level z; ρa is the density at reference altitude a; ρh is the density to be integrated over the height interval h to a; and ρz is the density at altitude z. Note that the source of temperature error is the uncertainty associated with the retrieved density value. The error in the calculated density comprises high and low spatial frequency components. The high-frequency component may arise from many sources, such as measurement error, computational error and/or atmospheric variability, and is somewhat random. Nonetheless, the error amplitude may be suppressed by statistical averaging. The low-frequency component, however, including bias and linear variation, may be related to actual atmospheric features and is difficult to separate from the measurement error.
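The temperature retrieval just described can be sketched numerically. This is an illustrative, simplified implementation of the hydrostatic/equation-of-state relation above, assuming a trapezoidal quadrature and constant g; the density profile is invented for the demonstration, not observational data:

```python
import math

# T(z) = Ta * rho_a / rho_z + M0 / (R * rho_z) * integral_z^a rho * g dh,
# integrating the retrieved density from level z up to the reference
# altitude a.  Constant g and trapezoidal quadrature are simplifications.

M0 = 0.0289644   # molecular weight of dry air (kg/mol)
R = 8.314462     # universal gas constant (J mol^-1 K^-1)
G = 9.80665      # acceleration of gravity, held constant here (m/s^2)

def temperature_at(z_index, heights, densities, t_ref):
    """heights (m) ascending; t_ref (K) applies at the top (reference) level."""
    top = len(heights) - 1
    rho_a, rho_z = densities[top], densities[z_index]
    integral = 0.0
    for i in range(z_index, top):
        dh = heights[i + 1] - heights[i]
        integral += 0.5 * (densities[i] + densities[i + 1]) * G * dh
    return t_ref * rho_a / rho_z + M0 / (R * rho_z) * integral

# Invented near-isothermal profile (scale height chosen to correspond
# to a temperature of about 205 K):
heights = [50000.0 + 1000.0 * i for i in range(11)]
densities = [1.0e-3 * math.exp(-(h - 50000.0) / 6000.0) for h in heights]
print(temperature_at(0, heights, densities, t_ref=205.0))
```

For this exponential test profile the retrieval returns close to 205 K at the bottom level, as expected for an isothermal layer.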

6.5 Calculation of other aerological variables

6.5.1 Pressure and density

Knowledge of the air temperature, given by the sensor as a function of height, enables atmospheric pressure and density at various levels to be determined. In a dry atmosphere with constant molecular weight, making use of the hydrostatic equation:

dp = –g·ρ·dz    (6.8)

and the perfect gas law:

ρ = (M/R) · (p/T)    (6.9)

the relationship between pressures pi and pi–1 at the two levels zi and zi–1, between which the temperature gradient is approximately constant, may be expressed as:

pi = ai · pi–1    (6.10)

where:

ai = exp { –[M·g0/(R·Ti–1)] · [rT/(rT + zi–1)]² · [1 – (Ti – Ti–1)/(2·Ti–1)] · (zi – zi–1) }    (6.11)

and g0 is the acceleration due to gravity at sea level; M is the molecular weight of the air; pi is the pressure at the upper level zi; pi–1 is the pressure at the lower level zi–1; rT is the radius of the Earth; R is the gas constant (for a perfect gas); Ti is the temperature at the upper level zi; Ti–1 is the temperature at the lower level zi–1; zi is the upper level; and zi–1 is the lower level.

By comparison with a balloon-borne radiosonde from which a pressure value p is obtained, an initial pressure pi may be determined for the rocket sounding at the common level zi, which usually lies near 20 km, or approximately 50 hPa. Similarly, by using the perfect gas law (equation 6.9), the density profile ρ can be determined. This method is based on step-by-step integration from the lower to the upper levels. It is, therefore, necessary to have very accurate height and temperature data for the various levels.

6.5.2 Speed of sound, thermal conductivity and viscosity

Using the basic data for pressure and temperature, other parameters, which are essential for elaborating simulation models, are often computed, such as the following:
(a) The speed of sound Vs:

Vs = [γ·(R/M)·T]^(1/2)

where γ = Cp/Cv;
(b) The coefficient of thermal conductivity, λ, of the air, expressed in W m–1 K–1:

λ = 2.650 2 × 10^(–3) · T^(3/2) / [T + 245.4 × 10^(–12/T)]

(c) The coefficient of viscosity of the air, μ, expressed in N s m–2:

μ = 1.458 × 10^(–6) · T^(3/2) / (T + 110.4)

Networks and comparisons

At present, only one or two countries carry out regular soundings of the upper atmosphere. Reduction in operational requirements and the high costs associated with the launch operation tend to limit the number of stations and launching frequency.


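The step-by-step pressure integration (equations 6.10 and 6.11) can be sketched numerically. A hedged illustration assuming that ai combines sea-level gravity reduced with height by [rT/(rT + z)]² and a linearized layer-mean of 1/T; the constants are standard values and the test profile is invented:

```python
import math

# Step-by-step pressure integration p_i = a_i * p_(i-1) (equations 6.10
# and 6.11), starting from a pressure tied to a radiosonde near 20 km.

M = 0.0289644    # molecular weight of air (kg/mol)
R = 8.314462     # universal gas constant (J mol^-1 K^-1)
G0 = 9.80665     # gravity at sea level (m/s^2)
RT = 6.371e6     # radius of the Earth (m)

def a_i(t_lo, t_hi, z_lo, z_hi):
    """Pressure ratio across a layer with near-constant temperature gradient."""
    g_factor = (RT / (RT + z_lo)) ** 2              # gravity decrease with height
    t_factor = 1.0 - (t_hi - t_lo) / (2.0 * t_lo)   # linearized mean of 1/T
    return math.exp(-(M * G0 / (R * t_lo)) * g_factor * t_factor * (z_hi - z_lo))

def pressure_profile(p_start, heights, temps):
    p = [p_start]
    for i in range(1, len(heights)):
        p.append(p[-1] * a_i(temps[i - 1], temps[i], heights[i - 1], heights[i]))
    return p

# Invented isothermal 220 K segment from 20 to 30 km, starting at 50 hPa:
heights = [20000.0 + 2000.0 * i for i in range(6)]
temps = [220.0] * 6
print(pressure_profile(50.0, heights, temps))
```

The pressure falls by roughly a factor of five over this 10 km isothermal layer, consistent with a scale height of about 6.4 km at 220 K.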

These include a 20-channel high-resolution infrared radiation sounder (HIRS), a 4-channel microwave sounding unit (MSU) and a 3-channel infrared stratospheric sounding unit (SSU). The instrument characteristics of the TOVS are described in Table 8.3, which shows the number of channels; the nadir field of view; the aperture; viewing scan angle; swath width; the number of pixels viewed per swath (steps); and data digitization level, for four instruments carried on NOAA series polar-orbiting satellites. Comparable data for the AVHRR are also included for comparison. Annexes 8.A and 8.B contain details of the AVHRR and HIRS channels and their applications. There are other instruments on the NOAA polar orbiters, including the solar backscatter ultraviolet (SBUV) and the Earth radiation budget experiment (ERBE) radiometers. In mid-latitudes, a polar orbiter passes overhead twice daily. Selection of the time of day at which this occurs at each longitude involves optimizing the operation of instruments and reducing the times needed between observations and the delivery of data to forecast computer models. The addition of a 20-channel microwave sounder, the advanced microwave sounding unit, beginning on NOAA-K, will greatly increase the data flow from the spacecraft. This, in turn, will force changes in the direct-broadcast services. Two other sensors with a total of seven channels, the MSU and the SSU, are to be eliminated at the same time.

Geostationary satellites

Imager

The radiometer used on United States geostationary satellites up to GOES-7 (all of which were stabilized by spinning) has a name that reflects its lineage: visible infrared spin-scan radiometer (VISSR) refers to its imaging channels. As the VISSR atmospheric sounder (VAS), it now includes 12 infrared channels. Eight parallel visible fields of view (0.55–0.75 µm) view the sunlit Earth with 1 km resolution.

Sounder

Twelve infrared channels observe upwelling terrestrial radiation in bands from 3.945 to 14.74 µm. Of these, two are window channels and observe the surface, seven observe radiation in the atmospheric carbon dioxide absorption bands, while the remaining three observe radiation in the water vapour bands. The selection of channels has the effect of observing atmospheric radiation from varying heights within the atmosphere. Through a mathematical inversion process, an estimate of temperatures versus height in the lower atmosphere and stratosphere can be obtained. Another output is an estimate of atmospheric water vapour, in several deep layers. The characteristics of the VAS/VISSR instrument are shown in Table 8.4, which provides details of the scans by GOES satellites, including nadir fields of view for visible and infrared channels; scan angles (at the spacecraft); the resulting swath width on the Earth's surface; the number of picture elements (pixels) per swath; and the digitization level for each pixel.

Ancillary sensors

Two additional systems for data collection are operational on the GOES satellites. Three sensors combine to form the space environment monitor. These report solar X-ray emission levels and monitor magnetic field strength and arrival rates for high-energy particles. A data-collection system receives radioed reports from Earth-located data-collection platforms and, via transponders, forwards these to a central processing facility. Platform operators may also receive their data by direct broadcast.

New systems

GOES-8, launched in 1994, has three-axis stabilization and no longer uses the VAS/VISSR system. It has an imager and a sounder similar in many respects to AVHRR and TOVS, respectively, but with higher horizontal resolution.

8.2.4 Current operational meteorological and related satellite series

For details of operational and experimental satellites, see WMO (1994b). For convenience, a brief description is given here. The World Weather Watch global observation satellite system is summarized in Figure 8.4. There are many other satellites for communication, environmental and military purposes, some of which also have meteorological applications. The following are low-orbiting satellites:
(a) TIROS-N/NOAA-A series: the United States civil satellites. The system comprises at least two satellites, the latest of which is NOAA-12, launched in 1991. They provide image services and carry instruments for temperature sounding as well as for data collection and data platform location. Some of the products of the systems are provided on the Global Telecommunication System (GTS);
(b) DMSP series: the United States military satellites. These provide image and microwave sounding data, and the SSM/I instrument provides microwave imagery. Their real-time transmissions are encrypted, but can be made available for civil use;
(c) METEOR-2, the Russian series: these provide image and sounding services, but lower quality infrared imagery. Limited data available on the GTS includes cloud images at southern polar latitudes;
(d) FY-1 series: launched by China, providing imaging services, with visible and infrared channels;
(e) SPOT: a French satellite providing commercial high-resolution imaging services;
(f) ERS-1: an experimental European Space Agency satellite providing sea-surface temperatures, surface wind and wave information and other oceanographic and environmental data, launched in 1991.

Table 8.3. Instrument systems on NOAA satellites

Instrument   Number of channels   Field of view (km)   Aperture (cm)   Scan angle (°)   Swath width (km)   Steps   Data (bits)
SSU          3                    147                  8               ±40              ±736               8       12
MSU          4                    105                  —               ±47.4            ±1 174             11      12
HIRS         20                   17                   15              ±49.5            ±1 120             56      13

Table 8.4. Visible and infrared instrument systems on NOAA spin-scanning geostationary satellites

Channel    Field of view (km)   Scan angle (°)   Swath width (km)   Pixels/swath   Digits (bits)
Visible    1                    ±8.70            ±9 050             8 × 15 228     6
Infrared   7–14                 ±3.45            ±2 226             3 822          10




The following are geostationary satellites:
(a) GOES: the United States satellites. At present the GOES series products include imagery, soundings and cloud motion data. When two satellites are available, they are usually located at 75°W and 135°W;
(b) GMS: the Japanese satellites, providing a range of services similar to GOES, but with no soundings, operating at 140°E;
(c) METEOSAT: the EUMETSAT satellites built by the European Space Agency, providing a range of services similar to GOES, operating at zero longitude;
(d) INSAT: the Indian satellite with three-axis stabilization located at 74°E, initially launched in 1989, providing imagery, but only cloud-drift winds are available on the GTS.

There are, therefore, effectively four geosynchronous satellites presently in operation.

8.3 Meteorological observations

8.3.1 Retrieval of geophysical quantities from radiance measurements

The quantity measured by the sensors on satellites is radiance in a number of defined spectral bands. The data are transmitted to ground stations and may be used to compile images, or quantitatively to calculate temperatures, concentrations of water vapour and other radiatively active gases, and other properties of the Earth's surface and atmosphere. The measurements taken may be at many levels, and from them profiles through the atmosphere may be constructed.
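The radiance-to-temperature conversion mentioned here (developed later in this chapter as the brightness temperature) can be sketched by inverting the Planck function. The physical constants are standard values, and the 11 µm example radiance is simulated rather than measured:

```python
import math

# Conversion of a measured band radiance to brightness temperature by
# inverting the Planck function: T = c2 / (lam * ln(c1 / (lam^5 * B) + 1)).

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
K = 1.380649e-23     # Boltzmann constant (J/K)
C1 = 2.0 * H * C ** 2      # first radiation constant
C2 = H * C / K             # second radiation constant (m K)

def planck_radiance(lam_m, t_k):
    """Spectral radiance B_lambda (W m^-2 sr^-1 m^-1) of a black body."""
    return C1 / (lam_m ** 5 * (math.exp(C2 / (lam_m * t_k)) - 1.0))

def brightness_temperature(lam_m, radiance):
    """Temperature (K) of the black body emitting this radiance at lam_m."""
    return C2 / (lam_m * math.log(C1 / (lam_m ** 5 * radiance) + 1.0))

lam = 11e-6                          # 11 um window channel
B = planck_radiance(lam, 280.0)      # simulate a measurement
print(round(brightness_temperature(lam, B), 1))  # → 280.0
```

Because the inversion is exact for a monochromatic radiance, the round trip recovers the input temperature; real channels integrate over a finite band and use calibrated channel constants instead.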



Figure 8.4. The World Weather Watch global observation satellite system: polar-orbiting satellites such as METEOR (Russian Federation) at about 850 km, and geostationary satellites at about 35 800 km, including GOMS (Russian Federation) at 76°E, GMS (Japan) at 140°E and GOES-W (United States) at 135°W

Conceptually, images are continuous two-dimensional distributions of brightness. It is this continuity that the brain seems so adept at handling. In practice, satellite images are arrangements of closely spaced picture elements (pixels), each with a particular brightness. When viewed at a suitable distance, they are indistinguishable from continuous functions. The eye and brain exploit the relative contrasts within scenes at various spatial frequencies to identify positions and types of many weather phenomena. It is usual to use the sounding data in numerical models, and hence they, and most other quantitative data derived from the array of pixels, are often treated as point values. The radiance data from the visible channels may be converted to brightness, or to the reflectance of the surface being observed. Data from the infrared channels may be converted to temperature, using the concept of brightness temperature (see section ).

There are limits to both the amount and the quality of information that can be extracted from a field of radiances measured from a satellite. It is useful to consider an archetypal passive remote-sensing system to see where these limits arise. It is assumed that the surface and atmosphere together reflect, or emit, or both, electromagnetic radiation towards the system. The physical processes may be summarized as follows. The variations in reflected radiation are caused by:
(a) Sun elevation;
(b) Satellite–sun azimuth angle;
(c) Satellite viewing angle;
(d) Transparency of the object;
(e) Reflectivity of the underlying surface;
(f) The extent to which the object is filling the field of view;
(g) Overlying thin layers (thin clouds or aerosols).

Many clouds are far from plane parallel and horizontally homogeneous.
It is also known, from the interpretation of common satellite images, that other factors of importance are:
(a) Sun-shadowing by higher objects;
(b) The shape of the object (the cloud topography) giving shades and shadows in the reflected light.

Variations in emitted radiation are mainly caused by:
(a) The satellite viewing angle;
(b) Temperature variations of the cloud;
(c) Temperature variations of the surface (below the cloud);
(d) The temperature profile of the atmosphere;
(e) Emissivity variations of the cloud;
(f) Emissivity variations of the surface;
(g) Variations within the field of view of the satellite instrument;
(h) The composition of the atmosphere between the object and the satellite (water vapour, carbon dioxide, ozone, thin clouds, aerosols, etc.).

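Several of these factors can be illustrated with a toy two-component model: for a semi-transparent cloud over a warmer surface, the sensed radiance mixes cloud-top and surface emission, so the apparent temperature lies between the two. A hedged sketch; the emissivity and temperatures are invented for illustration:

```python
import math

# Two-component emitted radiance at 11 um: eps * B(T_cloud) plus the
# surface emission transmitted through the cloud, (1 - eps) * B(T_surface).
# The apparent (brightness) temperature of the mixture lies between the
# cloud-top and surface temperatures.

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
K = 1.380649e-23     # Boltzmann constant (J/K)
C1 = 2.0 * H * C ** 2
C2 = H * C / K

def planck(lam, t):
    return C1 / (lam ** 5 * (math.exp(C2 / (lam * t)) - 1.0))

def inv_planck(lam, b):
    return C2 / (lam * math.log(C1 / (lam ** 5 * b) + 1.0))

lam = 11e-6
eps_cloud = 0.6      # assumed cloud emissivity
b_mix = eps_cloud * planck(lam, 230.0) + (1 - eps_cloud) * planck(lam, 290.0)
print(round(inv_planck(lam, b_mix), 1))  # apparent T, between 230 and 290 K
```

Because the Planck function is strongly nonlinear at these wavelengths, the apparent temperature is not the emissivity-weighted mean of the two temperatures.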
This is Wien’s law. For the sun, T is 6 000 K and is 0.48μ. For the Earth, T is 290 K and λm is 10μ. The total flux emitted by a black body is: E = Bλ dλ = T4



Essentially, the system consists of optics to collect the radiation, a detector to determine how much there is, some telecommunications equipment to digitize this quantity (conversion into counts) and transmit it to the ground, some more equipment to receive the information and decode it into something useful, and a device to display the information. At each stage, potentially useful information about a scene being viewed is lost. This arises as a consequence of a series of digitization processes that transform the continuous scene. These include resolutions in space, wavelength and radiometric product, as discussed in section Radiance and brightness temperature

Where σ is Stefan’s constant. B is proportional to T at microwave and far infrared wavelengths (the RayleighJeans part of the spectrum). The typical dependence of B on T for λ at or below λm is shown in Figure 8.5. If radiance in a narrow wavelength band is measured, the Planck function can be used to calculate the temperature of the black body that emitted it:

Tλ =

c2 ⎡ c1 ⎡ λ ln 5 + 1 λ Bλ


where c1 and c2 are derived constants. This is known as the brightness temperature, and for most purposes the radiances transmitted from the satellite are converted to these quantities Tλ.

Emission from a black body

A black body absorbs all radiation which falls upon it. In general, a body absorbs only a fraction of incident radiation; the fraction is known as the absorptivity, and it is wavelength dependent. Similarly, the efficiency for emission is known as the emissivity. At a given wavelength:

emissivity = absorptivity (8.5)





This is Kirchhoff’s law. The radiance (power per unit area per steradian) per unit wavelength interval emitted by a black body at temperature T and at wavelength λ is given by:


Bλ(T) = 2πhc²λ⁻⁵ / [exp(hc/kλT) − 1]   (8.6)


Figure 8.5. Temperature dependence of the Planck function

where Bλ (W m–2 sr–1 cm–1) and its equivalent in wave number units, Bν (W m–2 sr–1 cm), are known as the Planck function. c, h and k are the speed of light, the Planck constant, and the Boltzmann constant, respectively. The following laws can be derived from equation 8.6. Bλ peaks at wavelength λm given by:

λm T = 0.29 cm K

Atmospheric absorption

Atmospheric absorption in the infrared is dominated by absorption bands of water vapour, carbon dioxide, ozone, and so on. Examination of radiation within these bands enables the characteristics of the atmosphere to be determined: its temperature and the concentration of the absorbers. However, there are regions of the spectrum where absorption is low, providing the possibility for a satellite sensor to view the surface or cloud top and to determine its temperature or other characteristics. Such spectral regions are called “windows”. There is a particularly important window near the peak of the Earth/atmosphere emission curve, around 11 µm (see Figure 8.3).

Resolution

Spatial resolution

The continuous nature of the scene is divided into a number of discrete picture elements, or pixels, that are governed by the size of the optics, the integration time of the detectors and possibly by subsequent sampling. The size of the object that can be resolved in the displayed image depends upon the size of these pixels. Owing to the effects of diffraction by elements of the optical system, the focused image of a distant point object in the scene has a characteristic angular distribution known as a point spread function or Airy pattern (Figure 8.6(a)). Two distant point objects that are displaced within the field of view are considered separable (the Rayleigh criterion) if the angle between the maxima of their point spread functions is greater than λ/D, where λ is the wavelength of the radiation and D is the diameter of the beam (Figure 8.6(b)). However, if these two point spread functions are close enough to be focused on the same detector, they cannot be resolved. In many remote-sensing systems, it is the effective displacement of adjacent detectors that limits the spatial resolution. Only if they are close together, as in Figure 8.6(c), can the two objects be resolved.

A general method of determining the resolution of the optical system is to compute or measure its modulation transfer function. The modulation of a sinusoidal function is the ratio of half its peak-to-peak amplitude to its mean value. The modulation transfer function is derived by evaluating the ratio of the output to input modulations as a function of the wavelength (or spatial frequency) of the sinusoid.

In practice, many space-borne systems use the motion of the satellite to extend the image along its track, and moving mirrors to build up the picture across the track. In such systems, the focused image of the viewed objects is scanned across a detector. The output from the detector is integrated over short periods to achieve the separation of objects. The value obtained for each integration is a complicated convolution of the point spread functions of every object within the scene with the spatial response of the detector and the time of each integration.

An alternative to scanning by moving mirrors is the use of linear arrays of detectors. With no moving parts, they are much more reliable than scanning mirrors; however, they introduce problems in the intercalibration of the different detectors.
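The Rayleigh criterion above gives the smallest separable angle as roughly λ/D, from which a diffraction-limited ground resolution follows. A back-of-envelope sketch (the aperture and orbit height are invented figures, not taken from the Guide):

```python
# Diffraction-limited angular and ground resolution for a hypothetical sensor:
# two points are separable when their angular separation exceeds ~lambda/D.
wavelength = 11e-6   # m, thermal infrared window channel
aperture_d = 0.2     # m, assumed telescope aperture diameter
altitude = 850e3     # m, assumed polar-orbit height

angle = wavelength / aperture_d          # rad, smallest separable angle
ground_res = angle * altitude            # m, corresponding distance at the surface
print(f"{angle*1e6:.0f} urad, about {ground_res:.0f} m at the subsatellite point")
```

As the text notes, the detector spacing rather than diffraction is often the limiting factor in practice.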

The instantaneous scene is focused by the optics onto a detector which responds to the irradiance upon it. The response can either be through a direct effect on the electronic energy levels within the detector (quantum detection) or through the radiation being absorbed, warming the detector and changing some characteristic of it, such as resistance (thermal detection). Voltages caused by a number of extraneous sources are also detected, including those due to the following: (a) The thermal motion of electrons within the detector (Johnson noise); (b) Surface irregularities and electrical contacts; (c) The quantum nature of electrical currents (shot noise).

Figure 8.6. Optical resolution: (a) the irradiance profile of the Airy diffraction pattern (the central disc contains 85 per cent of the total irradiance; maximum values of the rings are 1.7 per cent and 0.4 per cent); (b) optical separation of adjacent points (unresolved, just resolved, well resolved); (c) the effect of detectors on resolution (points may be resolved optically but not detected, or resolved and detected, depending on detector placement)



To increase the signal-to-noise ratio, the system can be provided with large collecting optics, cooled detectors and long detector integration times. The combination of signal and noise voltages (an analogue signal) is integrated in time to produce a digital value. The sequence of integrated values corresponding to each line of the scene has then to be encoded and transmitted to the ground. Having received, decoded and processed the data into useful products, the images can be displayed on a suitable device. Usually, this involves representing each pixel value as a suitable colour on a monitor or a shade of grey on a facsimile recorder.

Display resolution

Thus, the continuous observed scene has been transformed into discrete pixels on a monitor. The discrete nature of the image is only noticeable when the resolutions of the image and the display device are grossly mismatched. The pixels on a typical monitor are separated by approximately 0.3 mm. Each pixel itself comprises three dots of different coloured phosphors. At a reasonable viewing distance of 75 cm, the eye can only resolve the pixels if they have high contrast. Note that the resolution of the eye, about 0.2 mrad, is limited by the separation of the photosensitive cells in the retina.

The last part of the system involves the interpretive skills of the forecaster, who uses the images to obtain information about weather systems.
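The display figures quoted above can be checked with a line of arithmetic: the angle subtended by one 0.3 mm pixel at 75 cm, compared with the roughly 0.2 mrad resolution of the eye (a quick sketch, not part of the Guide):

```python
import math

pixel_pitch_m = 0.3e-3     # pixel separation on a typical monitor
viewing_dist_m = 0.75      # reasonable viewing distance
eye_res_rad = 0.2e-3       # approximate angular resolution of the eye

# Angle subtended by one pixel; for small angles this is ~ pitch/distance
pixel_angle = 2.0 * math.atan(pixel_pitch_m / (2.0 * viewing_dist_m))
print(pixel_angle * 1e3)   # about 0.4 mrad, roughly twice the eye's resolution
```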

Calibration

Calibration of the visible channels

The two visible channels on the AVHRR instrument are calibrated before launch. Radiances measured by the two channels are calculated from:

Li = Ai Si   (8.10)

Ai = Gi Xi + Ii   (8.11)

where i is the channel number; L is radiance (W m–2 sr–1); X is the digital count (10 bits); G is the calibration gain (slope); I is the calibration intercept; A is equivalent albedo; and S is equivalent solar radiance, computed from the solar constant and the spectral response of each channel. G and I are measured before launch. Equivalent albedo, A, is the percentage of the incoming top-of-the-atmosphere solar radiance (with the sun in zenith) that is reflected and measured by the satellite radiometer in the spectral interval valid for each channel. Atmospheric absorption and scattering effects are neglected. The term “equivalent albedo” is used here to indicate that it is not strictly a true albedo value, because measurements are taken in a limited spectral interval and the values are not corrected for atmospheric effects.

To calculate the reflectance of each pixel (allowing for the varying solar zenith angle, satellite zenith angle and sun-satellite azimuth angle), the concept of bidirectional reflectance may be applied:

Ri(μ0, μ, φ) = Ai/μ0   (8.12)

where Ri is the bidirectional reflectance; μ0 is the cosine of the solar zenith angle; μ is the cosine of the satellite zenith angle; and φ is the sun-satellite azimuth angle.

One disadvantage of a fixed pre-launch calibration algorithm is that conditions in the satellite orbit could be considerably different from ground conditions, thus leading to incorrect albedo values. The effects of radiometer degradation with time can also seriously affect the calibration. Both effects have been observed for earlier satellites. Also, changes in calibration techniques and coefficients from one satellite to the next in the series need attention by the user. The conclusion is that, until an on-board calibration technique can be realized, radiometer data from the visible channels have to be examined carefully to discover discrepancies from the nominal calibration algorithms.
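The visible-channel calibration amounts to two linear steps and a cosine correction. A minimal sketch, in which the gain, intercept and equivalent solar radiance are invented placeholder values, not the real AVHRR pre-launch coefficients:

```python
import numpy as np

# Counts -> equivalent albedo (A = G*X + I), albedo -> radiance (L = A*S),
# and albedo -> bidirectional reflectance (R = A/mu0). G, I and S are
# hypothetical placeholders for illustration only.
G, I = 0.095, -3.8          # assumed gain (% per count) and intercept (%)
S = 5.2                     # assumed equivalent solar radiance per unit albedo

counts = np.array([120, 400, 900])     # 10-bit digital counts
albedo = G * counts + I                # equivalent albedo (%)
radiance = albedo * S                  # W m-2 sr-1
mu0 = np.cos(np.radians(40.0))         # cosine of a 40-degree solar zenith angle
reflectance = albedo / mu0             # eq. 8.12, same units as A
```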

Calibration of infrared channels

Unlike the visible channels, the infrared channels are calibrated continuously on board the satellite. A linear relation is established between the radiometer digital counts and radiance. The calibration coefficients may be estimated for every scan line by using two reference measurements. A cold reference point is obtained by viewing space, which acts as a black body at about 3 K, essentially a zero-radiance source. The other reference point is obtained from an internal black body, the temperature of which is monitored. The Planck function (see section 8.3.2) then gives the radiance (W m–2 sr–1) at each wavelength. A linear relationship between radiance and digital counts derived from the fixed points is used. A small non-linear correction is also applied. Difficulties of various sorts may arise. For example, during some autumn months, the calibration of NOAA-10 channel 3 data has suffered from



serious errors (giving temperatures that are too high). Although the reason for this is not clear, it may be caused by conditions when the satellite in the ascending node turns from illuminated to dark conditions. Rapid changes of internal black-body temperatures could then occur, and the application of a constant calibration algorithm may be incorrect.

Calibration of HIRS and MSU

For HIRS (see Annex 8.B), calibration measurements are taken every 40 scan lines and occupy 3 scan lines (for which no Earth-view data are available). The procedure is essentially the same as for the AVHRR, using the two known temperatures. For MSU (see Annex 8.B), the calibration sequence takes place at the end of each scan line, so that no Earth-view data are lost. Again, a two-point calibration is provided from warm and cold reference sources. However, for MSU channel frequencies and typical Earth-view temperatures, the measured radiances are in the Rayleigh-Jeans tail of the Planck function, where radiance is proportional to brightness temperature. Therefore, the data may be calibrated into brightness temperature directly (see section 8.3.2).
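The two-point calibration described above can be sketched as follows. The counts and black-body temperature are invented for illustration; the space view is taken as a zero-radiance source, and the small non-linear correction mentioned in the text is omitted:

```python
import math

C1 = 1.191042972e-16  # W m^2 sr^-1, first radiation constant (2hc^2)
C2 = 1.438776877e-2   # m K, second radiation constant (hc/k)

def planck(lam, t):
    """Black-body spectral radiance at wavelength lam (m) and temperature t (K)."""
    return C1 * lam**-5 / (math.exp(C2 / (lam * t)) - 1.0)

# Two reference measurements fix the counts->radiance line:
lam = 11e-6                   # channel wavelength
c_space, c_bb = 990.0, 420.0  # assumed digital counts for the two references
t_bb = 290.0                  # monitored internal black-body temperature (K)

l_space, l_bb = 0.0, planck(lam, t_bb)   # space view ~ zero radiance
slope = (l_bb - l_space) / (c_bb - c_space)
intercept = l_space - slope * c_space

def counts_to_radiance(counts):
    return slope * counts + intercept
```

By construction, a scene returning the black-body counts maps back to the black-body radiance, and the space-view counts map to zero.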

Digitization

The digitization of the radiance provides a number of discrete values separated by constant steps. The temperature differences corresponding to these steps in radiance define the quanta of temperature in the final image. Owing to the non-linearity of the black-body function with temperature, the size of these steps depends upon the temperature. AVHRR data are digitized using 10 bits, thereby providing 1 024 different values. For the thermal infrared channels, the temperature step at 300 K is about 0.1 K, but it is 0.3 K at 220 K.

Other systems are digitized using different numbers of bits. The infrared images for METEOSAT use 8 bits, but the visible and water-vapour channels have only 6 significant bits. Interestingly, tests have demonstrated that a monochrome satellite image can be displayed without serious degradation using the equivalent of only 5 bits.

Remapping

The requirements for the rapid processing of large amounts of data are best met by using digital computers. In an operational system, the most intensive computational task is to change the projection in which the image is displayed. This is necessary partly because of the distortions arising from viewing the curved Earth using a scanning mirror, and partly because of the need to use images in conjunction with other meteorological data on standard chart backgrounds. A key element in the process of remapping the image as seen from space (“space-view”) to fit the required projection is knowing the position on the Earth of each pixel (“navigation”). This is achieved by knowing the orbital characteristics of the satellite (supplied by the satellite operator), the precise time at which each line of the image was recorded, and the geometry of the scan.

In practice, the remapping is carried out as follows. The position within the space-view scene that corresponds to the centre of each pixel in the final reprojected image is located, using the orbital data and the geometry of the final projection. The values of the pixels at, and in the locality of, this point are used to compute a new value, effectively a weighted average of the nearby values, which is assigned to the pixel in the final image. Many sophisticated methods have been studied to perform this weighted average. Most are not applicable to near-real-time applications because of the large amount of computing effort required. However, the increasing availability of parallel-processing computers is expected to change this position.

8.3.2 Vertical profiles of temperature and humidity

The TIROS operational vertical sounder system

The TIROS-N/NOAA-A series of satellites carry the TOVS system, including the HIRS and MSU instruments. They observe radiation upwelling from the Earth and atmosphere, which is given by the radiative transfer equation (RTE):

Lλ = Bλ(T(ps)) τλ(ps) + ∫ps→0 Bλ(T(p)) [dτλ(p)/dp] dp   (8.13)

where Bλ is the Planck function at wavelength λ; Lλ is the upwelling radiance; T(p) is the temperature as a function of pressure p; ps is the surface pressure; and τλ is the transmittance.
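Equation 8.13 can be evaluated on a discrete pressure grid. The sketch below uses crude, assumed temperature and transmittance profiles (invented for illustration, not from the Guide) in an 11 µm window channel, summing the surface term and the layer-by-layer atmospheric contributions:

```python
import math
import numpy as np

C1, C2 = 1.191042972e-16, 1.438776877e-2   # radiation constants (SI)
lam = 11e-6                                # window-region wavelength

def planck(t):
    return C1 * lam**-5 / (math.exp(C2 / (lam * t)) - 1.0)

# Assumed profiles from the surface (1000 hPa) upwards:
p = np.linspace(1000.0, 10.0, 100)          # hPa
t = 288.0 - 0.065 * (1000.0 - p)            # crude linear temperature profile (K)
tau = np.exp(-0.3 * (p / 1000.0) ** 2)      # assumed transmittance from level p to space

# Discretized RTE: surface term plus trapezoidal sum of B(T) dtau over the column
b = np.array([planck(tk) for tk in t])
surface_term = planck(t[0]) * tau[0]
atmos_term = np.sum(0.5 * (b[1:] + b[:-1]) * np.diff(tau))
radiance = surface_term + atmos_term
```

Because the transmittance increments sum (with the surface term's weight) to about one, the result is a weighted average of the Planck radiances along the profile, which is exactly what the weighting function dτλ/dp expresses.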



The first term is the contribution from the Earth’s surface and the second the radiation from the atmosphere; dτλ/dp is called the weighting function. The solution of the RTE is the basis of atmospheric sounding. The upwelling radiance at the top of the atmosphere arises from a combination of the Planck function and the spectral transmittance. The Planck function conveys temperature information; the transmittance is associated with the absorption and density profile of radiatively active gases; and the weighting function contains profile information. For different wavelengths, the weighting function will peak at different altitudes. Temperature soundings may be constructed if a set of wavelength intervals can be chosen such that the corresponding radiances originate to a significant extent from different layers in the atmosphere. Figure 8.7 shows typical weighting functions which have been used for processing data from HIRS.

The solution of the RTE is very complex, mainly because of the overlap in the weighting functions shown in Figure 8.7. A number of different methods have been developed to derive temperature and humidity profiles. A general account of several methods is given by Smith (1985), and developments are reported in the successive reports of the TOVS Study Conferences (CIMSS, 1991). Early methods which were widely used were based on regressions between radiances and ground truth (from radiosondes) under various atmospheric conditions. Better results are obtained from solutions of the RTE, described as physical retrievals.

The basic principle by which water vapour concentration is calculated is illustrated by a procedure used in some physical retrieval schemes. The temperature profile is calculated using wavelengths in which carbon dioxide emits, and it is also calculated using wavelengths in which water vapour emits, with an assumed vertical distribution of water vapour. The difference between the two temperature profiles is due to the difference between the assumed and the actual water vapour profiles, and the actual profile may therefore be deduced.

In most Meteorological Services, the retrieval of geophysical quantities for use in numerical weather prediction is carried out by using physical methods. At NOAA, data are retrieved by obtaining a first guess using a library search method followed by a full physical retrieval based

Figure 8.7. TOVS weighting functions (normalized), with panels for the HIRS longwave CO2 channels, the HIRS shortwave CO2/H2O channels, the HIRS water vapour and longwave window channels, the SSU 15 μm CO2 channels and the MSU microwave O2 channels

Figure 8.8. Schematic illustration of a group of weighting functions for nadir viewing and the effect of scanning off nadir on one of these functions



on a solution of the RTE. Other Services, such as the UK Met Office and the Australian Bureau of Meteorology, use a numerical model first guess followed by a full solution of the RTE. The latest development is a trend towards a variational solution of the RTE in the presence of all other data available at the time of analysis. This can be extended to four dimensions to allow asynoptic data to contribute over a suitable period. It is necessary for all methods to identify and use pixels with no cloud, or to allow for the effects of cloud. Procedures for this are described in section 8.3.3.

The limb effect

The limb effect is illustrated in Figure 8.8. As the angle of view moves away from the vertical, the path length of the radiation through the atmosphere increases. Therefore, the transmittances from all levels to space decrease and the peak of the weighting function rises. If the channel senses radiation from an atmospheric layer in which there is a temperature lapse rate, the measured radiance will change; for tropospheric channels it will tend to decrease. It is, therefore, necessary for some applications to convert the measured radiances to estimate the brightness temperature that would have been measured if the instrument had viewed the same volume vertically. The limb-correction method may be applied, or a physical retrieval method.

Limb corrections are applied to brightness temperatures measured at non-zero nadir angle. They are possible because the weighting function of the nadir view for one channel will, in general, peak at a level intermediate between the weighting function peaks of two channels at the angle of measurement. Thus, for a given angle θ, the difference for channel i between the brightness temperature at nadir and at the angle of measurement may be expressed as a linear combination of the measured brightness temperatures in a number of channels j:

(TB)θ=0,i − (TB)θ,i = aθ0,i + Σj aθj,i (TB)θ,j   (8.14)

The coefficients aθj,i are found by multiple linear regression on synthetic brightness temperatures computed for a representative set of profiles.

It is possible to remove the need for a limb correction. For example, a temperature retrieval algorithm may be used with a different set of regression coefficients for each scan angle. However, if a regression retrieval is performed in which one set of coefficients (appropriate to a zero scan angle) is used, all brightness temperatures must be converted to the same angle of view, usually the nadir. The weakness of the regression approach to the limb effect is the difficulty of developing regressions for different cloud, temperature and moisture regimes. A better approach, which has now become operational in some centres, is to use the physical retrieval method, in which the radiative transfer equation is solved for every scan angle at which measurements are required.

Limb scanning for soundings

Operational meteorological sounders look straight down from the satellite to the Earth’s surface, but an alternative approach is to look at the Earth’s limb. The weighting functions are very sharp for limb-scanning sensors and always peak at the highest pressure in the field of view. Hence, good vertical resolution (1 km) is obtained with a horizontal resolution of around 10 km. Somewhat poorer resolutions are available with vertical sounding, although it is not possible to make measurements lower than about 15 km altitude with limb-sounding techniques, and therefore vertical sounding is necessary for tropospheric measurements.
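The regression fit of equation 8.14 can be sketched with a least-squares solve. In the sketch below, random numbers stand in for the synthetic brightness temperatures that a radiative transfer model would supply, and the "true" coefficients are invented so that the fit has something to recover:

```python
import numpy as np

# Multiple linear regression for limb-correction coefficients (eq. 8.14 style).
rng = np.random.default_rng(0)
n_profiles, n_channels = 500, 4

# Stand-in off-nadir brightness temperatures (K) for the training profiles
tb_theta = 250.0 + 20.0 * rng.standard_normal((n_profiles, n_channels))

# Pretend the nadir-minus-angle correction is exactly linear in the measured
# brightness temperatures; true_a = [a_theta0, a_theta1, ...] is hypothetical.
true_a = np.array([1.5, 0.30, -0.20, 0.15, 0.05])
X = np.hstack([np.ones((n_profiles, 1)), tb_theta])
correction = X @ true_a + 0.01 * rng.standard_normal(n_profiles)

# Least-squares regression recovers the coefficients from the training set
fitted_a, *_ = np.linalg.lstsq(X, correction, rcond=None)
```

The operational difficulty noted in the text, that one set of coefficients cannot serve all cloud, temperature and moisture regimes, corresponds here to the training set not being representative of the scenes actually observed.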

Resolution and accuracy

The accuracy of satellite retrievals is difficult to assess. As with many other observing systems, there is the problem of determining “what is truth?” A widely used method of assessing accuracy is the study of statistics of differences between retrievals and collocated radiosonde profiles. Such statistics will include the retrieval errors, and will also contain contributions from radiosonde errors (which include the effects of both discrepancies from the true profile along the radiosonde ascent path and the degree to which this profile is representative of the surrounding volume of atmosphere) and collocation errors caused by the separation in space and time between the satellite sounding and the radiosonde ascent. Although retrieval-radiosonde collocation statistics are very useful, they should not be treated simply as measurements of retrieval error.

Brightness temperatures

It is important to note the strong non-linearity in the equations converting radiances to brightness temperatures. This means that, when dealing






with brightness temperatures, the true temperature measurement accuracy of the radiometer varies with the temperature. This is not the case when handling radiances, as these are linearly related to the radiometer counts. In the AVHRR, all three infrared channels have rapidly decreasing accuracy for lower temperatures. This can be seen in Figure 8.9 (which shows only two channels).

Figure 8.9. Typical calibration curves for AVHRR channels 3 and 4, from digital counts to brightness temperatures. The curve for AVHRR channel 5 is very similar to the curve for AVHRR channel 4.

Table 8.5. Uncertainty (K) of AVHRR IR channels
Temperature (K)   Channel 3   Channel 4
200               ~10         ~0.3
220               2.5         0.22
270               0.18        0.10
320               0.03        0.06
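The behaviour summarized in Table 8.5 follows from the shape of the Planck function: the temperature equivalent of one digital count is the radiance quantum divided by dB/dT, and dB/dT collapses at low temperatures, fastest at the shorter wavelength. A sketch with an assumed, purely illustrative 10-bit radiance range (not the real AVHRR channel ranges):

```python
import math

C1 = 1.191042972e-16  # W m^2 sr^-1, first radiation constant (2hc^2)
C2 = 1.438776877e-2   # m K, second radiation constant (hc/k)

def planck(lam, t):
    return C1 * lam**-5 / (math.exp(C2 / (lam * t)) - 1.0)

def temp_step(lam, t, radiance_per_count):
    """Temperature quantum for one count: dT = dL / (dB/dT) (central difference)."""
    dt = 0.01
    dbdt = (planck(lam, t + dt) - planck(lam, t - dt)) / (2.0 * dt)
    return radiance_per_count / dbdt

# Assume each channel's 1023 counts span black-body radiances from 180 K to 320 K
for lam in (3.7e-6, 11e-6):
    one_count = (planck(lam, 320.0) - planck(lam, 180.0)) / 1023.0
    steps = [temp_step(lam, t, one_count) for t in (200.0, 250.0, 300.0)]
    print(lam, steps)  # the step shrinks as the scene warms, fastest at 3.7 um
```

The exact numbers depend on the assumed radiance range, but the trend, a much larger temperature quantum at low temperatures in the 3.7 µm channel, is exactly the pattern of Table 8.5.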

Comparisons of measurement accuracies for channel 3 (Annex 8.A) and channel 4 show some differences. When treating 10-bit values, the uncertainties are as shown in Table 8.5. Channel 3 shows a stronger non-linearity than channel 4, leading to much lower accuracies for low temperatures than channel 4. Channel 5 is very similar to channel 4. Channel 3 is much less accurate at low temperatures, but better than channel 4 at temperatures higher than 290 K.

Soundings

Figure 8.10 shows typical difference statistics from the UK Met Office retrieval system. The bias and standard deviation profiles for retrieval-radiosonde differences are shown. These are based on all collocations obtained from NOAA-11 retrievals during July 1991, with collocation criteria of 3 h time separation and 150 km horizontal separation. If the set of profiles in the collocations is large, and both are representative of the same population, the biases in these statistics should be very small. The biases found, about 1° at some pressure levels, are to be expected here, where collocations for a limited period and limited area may not be representative of a zonal set. The standard deviations, while larger than the equivalent values for retrieval errors alone, exhibit some of the expected characteristics of the retrieval error profile. They have a minimum in the mid-troposphere, with higher values near the surface and the tropopause. The lower tropospheric values reflect problems



associated with residual cloud contamination and various surface effects. Low-level inversions will also tend to cause retrieval problems. The tropopause values reflect both the lack of information in the radiances from this part of the profile and the tendency of the retrieval method to smooth out features of this type.

Resolution

The field of view of the HIRS radiometer (Table 8.3) is about 17 km at the subsatellite point, and profile calculations can be made out to the edge of the swath, where the field is elliptical with an axis of about 55 km. Profiles can be calculated at any horizontal grid size, but they are not independent if they are closer than the field of view. Temperature soundings are calculated down to the cloud top, or to the surface if the MSU instrument is used. Over land and close to the coast, the horizontal variability of temperature and emissivity causes uncertainties which limit their use in numerical models below about 500 hPa. The vertical resolution of the observations is related to the weighting functions and is typically about 3 km. This poor vertical resolution is one of the main shortcomings of the present sounding system for numerical weather prediction, and it will be improved in the next generation of sounding instruments, such as the atmospheric infrared sounder (AIRS) and the high-resolution interferometer sounder (HIS).
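The collocation statistics discussed above reduce to a simple computation once the pairs are filtered by the time and distance criteria. A sketch on synthetic data (all values invented; real statistics come from matched retrieval and radiosonde files):

```python
import numpy as np

# Retrieval-radiosonde collocation statistics: keep pairs within 3 h and
# 150 km, then compute bias and standard deviation of the temperature
# differences at each pressure level.
rng = np.random.default_rng(1)
n = 400
dt_hours = rng.uniform(0.0, 6.0, n)          # time separations (h)
dist_km = rng.uniform(0.0, 300.0, n)         # horizontal separations (km)
diffs = rng.normal(0.5, 1.5, (n, 5))         # retrieval minus sonde (K), 5 levels

keep = (dt_hours <= 3.0) & (dist_km <= 150.0)
bias = diffs[keep].mean(axis=0)              # per-level bias profile (K)
stdev = diffs[keep].std(axis=0)              # per-level standard deviation (K)
```

As the text cautions, these standard deviations bundle retrieval, radiosonde and collocation errors together and should not be read as retrieval error alone.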

8.3.3 Cloud and land surface characteristics and cloud clearing

Cloud and land surface observations

The scheme developed at the UK Met Office is typical of those that may be used to extract information about clouds and the surface. It applies a succession of tests to each pixel within a scene in an attempt to identify cloud. The first is a threshold test in the infrared; essentially, any pixels colder than a specified temperature are deemed to contain cloud. The second test looks at the local variance of temperatures within an image: high values indicate either mixtures of clear and cloudy pixels, or pixels containing clouds at different levels, while small values at low temperatures indicate fully cloudy pixels.

The brightness temperatures of an object in different channels depend upon the variations with wavelength of the emissivity of the object and of the attenuation of radiation by the atmosphere. For thin clouds, temperatures in AVHRR channel 3 (3.7 µm) (Annex 8.A) are warmer than those in channel 4 (11 µm) (see Figure 8.11(a)). The converse is true for thick low cloud, this being the basis of the fog detection scheme described by Eyre, Brownscombe and Allam (1984) (see Figure 8.11(b)). The difference between AVHRR channels 4 and 5 (11 µm and 12 µm) is sensitive to the thickness of cloud and to the water vapour content of the atmosphere. A threshold applied to this difference facilitates the detection of thin cirrus. During the day, reflected solar radiation, adjusted to eliminate the effects of variations of solar elevation, can also be used: a threshold test separates bright cloud from dark surfaces. A fourth test uses the ratio of the radiance of the near-infrared channel 2 (0.9 µm) to that of the visible channel 1 (0.6 µm). This ratio has a value that is:
(a) Close to unity for clouds;
(b) About 0.5 for water, due to the enhanced backscattering by aerosols at short wavelengths;
(c) About 1.5 for land, and particularly growing vegetation, due to the high reflectance of leafy structures in the near infrared.

Having detected the location of the pixels uncontaminated by cloud using these methods, it is possible to determine some surface parameters. Of these, the most important is sea-surface temperature (section 8.3.6). Land surfaces have highly variable emissivities that make calculations very uncertain.

Cloud parameters can be extracted using extensions to the series of tests outlined previously. These include cloud-top temperatures, fractional cloud cover and optical thickness. The height of the cloud top may be calculated in several ways. The simplest method is to use brightness temperatures from one or more channels to calculate the cloud-top temperature, and to infer the height from a temperature profile, usually derived from a numerical model. This method works well for heavy stratiform and cumulus cloud fields, but not for semi-transparent clouds such as cirrus, or for fields of small cumulus clouds. Smith and Platt (1978) showed how to use the radiative transfer equation in close pairs of HIRS channels to calculate the pressure and, hence, the height of the tops of scattered or thin cloud, with errors typically between half and a quarter of the cloud thickness of semi-transparent layers.
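The succession of threshold tests described above can be sketched as array operations. The thresholds and pixel values below are invented for illustration; operational schemes tune them by channel, region and season:

```python
import numpy as np

# Toy per-pixel cloud tests in the spirit of the scheme described above.
t11 = np.array([295.0, 278.0, 252.0, 230.0])   # channel 4 brightness temps (K)
t12 = np.array([294.8, 276.5, 251.6, 229.0])   # channel 5 brightness temps (K)
vis = np.array([0.05, 0.30, 0.55, 0.60])       # channel 1 (0.6 um) reflectance
nir = np.array([0.08, 0.33, 0.52, 0.58])       # channel 2 (0.9 um) reflectance

cold_test = t11 < 270.0                  # colder than threshold -> cloud
split_window = (t11 - t12) > 1.2         # large 11-12 um difference -> thin cirrus
bright_test = vis > 0.25                 # bright in the visible -> cloud over dark surface
ratio = nir / vis                        # near 1 cloud, ~0.5 water, ~1.5 vegetation
ratio_test = np.abs(ratio - 1.0) < 0.2   # near-unity ratio -> cloud

cloudy = cold_test | split_window | bright_test | ratio_test
print(cloudy)  # [False  True  True  True]
```

A pixel is flagged cloudy if any single test fires; the clear pixels that survive all tests are the ones used for surface products such as sea-surface temperature.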



It should be stressed that such products can be derived only from data streams that contain precise calibration data. These data can only be considered as images when they are displayed on a suitable device. Although in some cases they are derived to be used as input variables for mesoscale numerical models, much useful information can be gained through viewing them. Various combinations of radiometer channels are used to define particular types of clouds, snow and vegetation, as shown for example in Figure 8.12.

Soundings of the TIROS operational vertical sounder in the presence of cloud

Cloud clearing

Infrared radiances are affected markedly by the presence of clouds, since most clouds are almost opaque in this wavelength region. Consequently, the algorithms used in the retrieval of tropospheric temperature must be able to detect clouds which have a significant effect on the radiances and, if possible, make allowances for these effects. This is usually done by
Figure 8.10. Error statistics for vertical profiles (UK Met Office): bias and RMS profiles of retrieval-radiosonde differences for temperature and dewpoint



correcting the measured radiances to obtain “clear-column” values, namely, the radiances which would be measured from the same temperature and humidity profiles in the absence of cloud. In many retrieval schemes, the inversion process converts clear-column radiances to atmosphere parameters, and so a preliminary cloud-clearing step is required. Many of the algorithms developed are variants of the adjacent field of view or N* method (Smith, 1985). In this approach, the measured radiances, R1

and R2, in two adjacent fields of view (hereafter referred to as “spots”) of a radiometer channel can, under certain conditions, be expressed as follows:

R1 = N1 Rcloudy + (1 − N1) Rclear
R2 = N2 Rcloudy + (1 − N2) Rclear   (8.15)

where Rclear and Rcloudy are the radiances appropriate to clear and completely overcast conditions, respectively; and N1 and N2 are the effective fractional



Figure 8.11. Calculation of temperature in the presence of clouds: (a) the effect of semi-transparent cloud on radiances: the surface radiance Bs is reduced by semi-transparent cloud to τBs, and the temperature corresponding to τBs is higher for 3.7 μm than for 11 μm; (b) the effect of different emissivity on radiances: the radiance received at the satellite is Bsat = E B(Ts), where E is the emissivity, B is the black-body function and Ts is the surface temperature; for low cloud and fog, E at 11 μm ≈ 1.0 and E at 3.7 μm ≈ 0.85, so the temperature corresponding to E B(Ts) is higher for 11 μm than for 3.7 μm



cloud coverages in spots 1 and 2. In deriving these equations, the following assumptions have been made: (a) That the atmospheric profile and surface characteristics in the two spots are the same; (b) That only one layer of cloud is present; (c) That the cloud top has the same height (and temperature) in both spots. If the fractional cloud coverages in the two spots are different (N1 ≠ N2), equation 8.15 may be solved simultaneously to give the clear radiance:

Rclear = (R1 – N* R2)/(1 – N*) (8.16)

where N* = N1/N2.

This method has been considerably elaborated, using HIRS and MSU channels, the horizontal resolution of which is sufficient for the assumptions to hold true sufficiently often. In this method, regression between co-located measurements in the MSU2 channel and the HIRS channels is used, and the coefficients are updated regularly, usually on a weekly basis. Newer methods are now being applied, using AVHRR data to help clear the HIRS field of view. Furthermore, full physical retrieval methods are possible, using AVHRR and TOVS data, in which the fractional cloud cover and cloud height and amount can be explicitly computed from the observed radiances.

8.3.4 Wind measurements

cloud drift winds

Cloud drift winds are produced from geostationary satellite images by tracking cloud tops, usually for two half-hour periods between successive infrared images. The accuracy of the winds is limited to the extent that cloud motion represents the wind (for example a convective cloud cluster may move with the speed of a mesoscale atmospheric disturbance, and not with the speed of an identifiable wind). It also depends on the extent to which a representative cloud height can be determined from the brightness temperature field. In addition, the accuracy of the winds is dependent on the time interval and, to a limited extent, on the correlations between the cloud images used in their calculation, the spatial resolution of these images, the error in the first-guess fields, the degree to which the first-guess field limits the search for correlated patterns in sequential images, and the amount of development taking place in the clouds.

Mean vector differences between cloud drift winds and winds measured by wind-finding radars within 100 nm were typically 3, 5 and 7 m s–1 for low, middle and high clouds, respectively, for one month. These indicate that the errors are comparable at low levels with those for conventional measurements. The wind estimation process is typically fully automatic. Target cloud areas covering about 20 × 20 pixels are chosen from half-hourly images using criteria which include a suitable range of brightness temperatures and gradients within each trial area. Once the targets have been selected, auto-tracking is performed, using typically a 6 or 12 h numerical prognosis as a first-guess field to search for well-correlated target areas. Root-mean-square differences may be used to compare the arrays of brightness temperatures of the target and search areas in order to estimate motion. The first guess reduces the size of the search area that is necessary to obtain the wind vector, but it also constrains the results to lie within a certain range of the forecast wind field. Error flags are assigned to each measurement on the basis of several characteristics, including the differences between the successive half-hour vectors and the difference between the measurement and the first-guess field. These error flags can be used in numerical analysis to give appropriate weight to the data. The number of measurements for each synoptic hour is, of course, limited by the existence of suitable clouds and is typically of the order of 600 vectors per hemisphere. At high latitudes, sequential images from polar-orbiting satellites can be used to produce cloud motion vectors in the latitudes not reached by the geostationary satellites. A further development of the same technique is to calculate water vapour winds, using satellite images of the water vapour distribution.

scatterometer surface winds

(a) Snow, cumulonimbus (Cb), nimbostratus (Ns), altocumulus (Ac), cumulus (Cu) over land, cirrus (Ci) over land, sunglint, land and sea in the A1 – (A1 – A2) feature space. The figure is extracted from the database for summer, NOAA-10 and a sun elevation around 40°.

(b) Object classes in the A1 – (T3 – T4) feature space. From the same database section as in (a). Separability of snow and clouds is apparent. A problem is the discrimination of stratus and sunglint (Sg) during summer. Sunglint/spring is also included.

figure 8.12. Identification of cloud and surface properties

The scatterometer is an instrument on the experimental ERS-1 satellite, which produces routine wind measurements over the sea surface. The technique will become operational on satellites now being prepared. As soon as microwave radar became widely used in the 1940s, it was found that, at low elevation angles, surrounding terrain (or, at sea, waves) caused large, unwanted echoes. Ever since, designers and users of radar equipment have sought to reduce this noise. Researchers investigating the effect found that the backscattered echo from the sea became large with increasing wind speed, thus opening the possibility of remotely measuring the wind. Radars designed to measure this type of echo are known as scatterometers. Backscattering is due principally to in-phase reflections from a rough surface; for incidence angles of more than about 20° from the vertical, this occurs when the Bragg condition is met:

Λ sin θi = nλ/2 (8.17)

where Λ is the surface roughness wavelength; λ is the radar wavelength; θi is the incidence angle; and n = 1, 2, 3, … First-order Bragg scattering (n = 1), at microwave frequencies, arises from the small ripples (cats’ paws) generated by the instantaneous surface wind stress. The level of backscatter from an extended target, such as the sea surface, is generally termed the normalized radar cross-section, or σ0. For a given geometry and transmitted power, σ0 is proportional to the power received back at the radar. In terms of other known or measurable radar parameters:

σ0 = 64 π³ R⁴ PR / (PT λ² LS G0² (G/G0)² A) (8.18)

where PT is the transmitted power and PR is the power received back at the radar; R is the slant range to the target of area A; λ is the radar wavelength; LS includes atmospheric attenuation and other system losses; G0 is the peak antenna gain; and G/G0 is the relative antenna gain in the target direction. Equation 8.18 is often referred to as the radar equation. σ0 may be set in a linear form (as above) or in decibels (dB), i.e. σ0dB = 10 log10 σ0lin. Experimental evidence from scatterometers operating over the ocean shows that σ0 increases with surface wind speed (as measured by ships or buoys), decreases with incidence angle, and is dependent on the radar beam angle relative to wind direction. Figure 8.13 is a plot of σ0 aircraft data against wind direction for various wind speeds. Direction 0° corresponds to looking upwind, 90° to crosswind and 180° to downwind. The European Space Agency has coordinated a number of experiments to confirm these types of curves at 5.3 GHz, which is the operating frequency for this instrument on the ERS-1 satellite. Several aircraft scatterometers have been flown close to instrumented ships and buoys in the North Sea, the Atlantic and the Mediterranean. The σ0 data are then correlated with the surface wind, which has been adjusted to a common anemometer height of 10 m (assuming neutral stability). An empirical model function has been fitted to these data, of the form:

σ0 = a0 Uγ (1 + a1 cos φ + a2 cos 2φ) (8.19)

where the coefficients a0, a1, a2 and γ are dependent on the incidence angle. This model relates the neutral-stability wind speed at 10 m, U, and the wind direction relative to the radar, φ, to the normalized radar cross-section. It may also be the case that σ0 is a function of sea-surface temperature, sea state and surface slicks (natural or man-made). However, these parameters have yet to be demonstrated as having any



significant effect on the accuracy of wind vector retrieval. Since σ0 shows a clear relationship with wind speed and direction, in principle, measuring σ0 at two or more different azimuth angles allows both wind speed and direction to be retrieved. However, the direction retrieved may not be unique; there may be ambiguous directions. In 1978, a wind scatterometer was flown on a satellite – the SEASAT-A satellite scatterometer (SASS) – for the first time and ably demonstrated the accuracy of this new form of measurement. The specification was for root-mean-square accuracies of 2 m s–1 for wind speed and 20° for direction. Comparisons with conventional wind measurements showed that these figures were met if the rough wind direction was known, so as to select the best from the ambiguous set of SASS directions. The SASS instrument used two beams either side of the spacecraft, whereas the ERS-1 scatterometer uses a third, central beam to improve wind direction discrimination; however, since it is only a single-sided instrument, it provides less coverage. Each of the three antennas produces a narrow beam of radar energy in the horizontal, but wide beam in the vertical, resulting in a narrow band of illumination of the sea surface across the 500 km width of the swath. As the satellite travels

forward, the centre and then the rear beam measure from the same part of the ocean as the fore beam. Hence, each part of the swath, divided into 50 km squares, has three σ0 measurements taken at different relative directions to the local surface wind vector. Figure 8.14 shows the coverage of the scatterometer for the North Atlantic over 24 h. These swaths are not static and move westwards to fill in the large gaps on subsequent days. Even so, the coverage is not complete, owing to the relatively small swath width in relation to, for example, that of the AVHRR imager on the NOAA satellites. However, there is potentially a wind available every 50 km within the coverage area, globally, and the European Space Agency delivers this information to operational users within 3 h of the measurement time. The raw instrument data are recorded on board and replayed to European Space Agency ground stations each orbit, the principal station being at Kiruna in northern Sweden, where the wind vectors are derived. As already mentioned, the scatterometer principally measures the power level of the backscatter at a given location at different azimuth angles. Since the geometry, such as range and incidence angles, is known, equation 8.18 can be used to calculate a triplet of values of σ0 for each cell. In theory, it should be possible to use the model function (equation 8.19) to extract the two pieces of information required (wind speed and direction) using appropriate simultaneous equations. However, in practice, this is not feasible; the three σ0s will have a finite measurement error, and the function itself is highly nonlinear. Indeed, the model, initially based on aircraft data, may not be applicable to all circumstances. Wind speed and direction must therefore be extracted numerically, usually by minimizing a function of the form:
R = Σi=1..3 [(σ0i – σ0(U, φi, θi))/(σ0i Kpi)]² (8.20)

where R is effectively the sum of squares of the residuals, comparing the measured values of σ0 to those from the model function (using an estimate of wind speed and direction), weighted by the noise in each beam, Kpi, which is related to the S/N ratio. The wind vector estimate is refined so as to minimize R. Starting at different first-guess wind directions, the numerical solution can converge on up to four distinct, or ambiguous, wind vectors, although there are often only two obviously different ones, usually about 180° apart. One of these is the “correct” solution, in that it is the closest to the true wind direction and within the required root-mean-square accuracies of 2 m s–1 and 20°. Algorithms have been developed to select the correct set of solutions. Numerical model wind fields are also used as first-guess fields to aid such analyses. Work is currently under way with ERS-1 data to calibrate and validate satellite winds using surface and low-level airborne measurements.

figure 8.13. Measured backscatter, σ0 (in decibels), against relative wind direction for different wind speeds. Data are for 13 GHz, vertical polarization.

Microwave radiometer surface wind speed

The special sensor microwave imagers (SSM/I) flying on the DMSP satellite provide microwave radiometric brightness temperatures at several frequencies (19, 22, 37 and 85.5 GHz) and at both vertical and horizontal polarization. Several algorithms have been developed to measure a variety of meteorological parameters. Surface wind speeds over sea (not over land) can be measured to an accuracy of a few metres per second using a regression equation on the brightness temperatures in several channels. Work continues to verify and develop these algorithms, which are not yet used operationally.

8.3.5 Precipitation

Visible/infrared techniques

Visible/infrared techniques derive qualitative or quantitative estimates of rainfall from satellite imagery through indirect relationships between solar radiance reflected by clouds (or cloud brightness temperatures) and precipitation. A number of methods have been developed and tested during the past 15 years with a measured degree of success. There are two basic approaches, namely the “life-history” and the “cloud-indexing” techniques. The first technique uses data from geostationary

figure 8.14. ERS-1 subsatellite tracks and wind scatterometer coverage of the North Atlantic region over one day. The large gaps are partially filled on subsequent days; nominally this occurs in a three-day cycle. The dashed lines show the limits of reception for the Kiruna ground station in Sweden.



satellites, which produce images usually every half hour, and has been mostly applied to convective systems. The second technique, also based on cloud classification, does not require a series of consecutive observations of the same cloud system. It must be noted, however, that, up to now, none of these techniques has been shown to be “transportable”. In other words, relationships derived for a given region and a given period may not be valid for a different location and/or season. Other problems include difficulties in defining rain/no-rain boundaries and the inability to cope with rainfall patterns at the mesoscale or local scale. Scientists working in this field are aware of these problems; for this reason, it is current practice to speak of the derivation of “precipitation indices” rather than rain rates.

cloud-indexing methods

Cloud indexing was the first technique developed to estimate precipitation from space. It is based on the assumption that the probability of rainfall over a given area is related to the amount and type of cloudiness present over the area. Hence, it could be postulated that precipitation can be characterized by the structure of the upper surface of the associated cloudiness. In addition, in the case of convective precipitation, it could also be postulated that a relationship exists between the capacity of a cumuliform cloud to produce rain and its vertical as well as its horizontal dimensions. The vertical extent of a convective cloud is related to its cloud-top brightness temperature (higher cloud tops are associated with colder brightness temperatures). The approach is, therefore, to perform a cloud structure analysis (objective or subjective) based on the definition of a criterion relating cloudiness to a coefficient (or index) of precipitation. This characteristic may be, for instance, the number of image pixels above a given threshold level. The general approach for cloud-indexing methods involving infrared observations is to derive a relationship between a precipitation index (PI) and a function of the cloud surface area, S(TBB), associated with the background brightness temperature (TBB) colder than a given threshold T0. This relationship can generally be expressed as follows:

PI = A0 + Σi Ai S(TBBi), for TBBi < T0 (8.21)

If desired, an additional term related to the visible image can be included on the right-hand side of equation 8.21. The next step is to relate PI to a physical quantity related in some way to rain. This is done by adjusting the coefficients Ai and the threshold level T0 by comparison with independent observations, such as raingauge or radar data. One of the problems inherent in this technique is the bias created by the potential presence of high-level non-precipitating clouds such as cirrus. Another limitation resides in the fact that the satellite measurement represents an instantaneous observation integrated over space, while raingauge observations are integrated over time at a given site.

life-history methods

Life-history methods, as indicated by their name, are based on the observation of a series of consecutive images obtained from a geostationary satellite. It has been observed that the amount of precipitation associated with a given cloud is also related to its stage of development. Therefore, two clouds presenting the same aspect (from the visible and infrared images point of view) may produce different quantities of rain, depending on whether they are growing or decaying. As with the cloud-indexing technique, a relationship is derived between a PI and a function of the cloud surface area, S(TBB), associated with a given brightness temperature (TBB) lying above a given threshold level. In addition, cloud evolution is taken into account and expressed in terms of the rate of change of S(TBB) between two consecutive observations. An equation, as complex as desired, may be derived between PI and functions of S(TBB) and its derivative with respect to time:

PI = A0 + A1 S(TBB) + A2 dS(TBB)/dt, for TBB < T0 (8.22)

Here, also, another step is necessary in order to relate the PI defined by the equation to a physical quantity related to rain. Many such relationships have already been published. These publications have been discussed extensively, and it has been demonstrated, at least





for one instance, that taking into account the cloud evolution with time added unnecessary complexity and that comparable success could be obtained with a simple cloud-indexing technique. Recently, more physics has been introduced into the various schemes. Improvements include the following: (a) The use of cloud models to take into account the stratiform precipitation often associated with convective rainfall and to help with cloud classification; (b) The use of cloud microphysics, such as drop-size/rain-rate relations; (c) The introduction of simultaneous upper-tropospheric water vapour observations; (d) The introduction of a time lag between the satellite observations and the ground-based measurements. It has also become evident that satellite data can be used in conjunction with radar observations, not only to validate a method, but as a complementary tool. FRONTIERS (forecasting rain optimized using new techniques of interactively enhanced radar and satellite), developed by the UK Met Office, provides an example of the combined use of satellite imagery and radar observations. Various comparisons between different methods over the same test cases have now been performed and published. However, any final statement concerning the success (or lack thereof) of visible/infrared methods must be treated with extreme caution. The degree of success is very strongly related to the space-time scales considered, and it cannot be expected that a regression developed and tested for use in climate studies will also be valid for the estimation of mesoscale precipitation. It must also be kept in mind that it is always easy to adjust regression coefficients for a particular case and claim that the method has been validated.

Microwave techniques

Visible and infrared measurements represent observations of the upper surfaces of clouds only. In contrast, it is often believed that microwave radiation is not affected by the presence of clouds. This statement is not generally true. Its degree of validity varies with the microwave frequency used as well as with the type of cloud being observed. One major difference between infrared and microwave radiation is the fact that, while the ocean surface emissivity is nearly equal to one in the infrared, its value (although variable) is much smaller in the microwave region (from 5 to 200 GHz in this case). Therefore, the background brightness temperature (TBB) of the ocean surface appears much colder in the microwave. Over land, the emissivity is close to one, but varies greatly depending on the soil moisture. As far as microwaves are concerned, several different effects are associated with the presence of clouds over the ocean. They are highly frequency dependent. Currently, active methods (space-borne radar) are being developed for experimental use.

8.3.6 sea-surface temperatures

Satellite measurements of radiation emitted from the ocean surface may be used to derive estimates of sea-surface temperature, to complement in situ observation systems (for example, ships and drifting buoys), for use in real-time meteorological or oceanographic applications, and in climate studies. Although satellites measure the temperature of a layer of ocean less than about 1 mm thick, the satellite data compare very favourably with conventional data. The great advantage of satellite data is geographical coverage, which generally far surpasses that available by conventional means. Also, in many cases, the frequency of satellite observations is better than that obtained using drifting buoys, although this depends on the satellite and the latitude of observation, among other things. Satellite sea-surface temperature measurements are most commonly made at infrared wavelengths and, to a lesser degree, at microwave wavelengths. Scanning radiometers are generally used. In the infrared, the essence of the derivation is to remove any pixels contaminated by cloud and to correct the measured infrared brightness temperatures for attenuation by water vapour. Cloud-free pixels must be identified extremely carefully so as to ensure that radiances for the ocean are not affected by clouds, which generally radiate at much colder temperatures than the ocean surface. Algorithms have been developed for the specific purpose of cloud clearing for infrared sea-surface temperature measurements (for example, Saunders and Kriebel, 1988). Satellite infrared sea-surface temperatures can be derived only in cloud-free areas, whereas at microwave wavelengths cloud attenuation is far smaller, so that in all but heavy convective situations the microwave measurements are available. The disadvantage of the microwave data is that the instrument spatial resolution is usually of the order of several tens of kilometres, whereas infrared resolution is generally around 1 to



5 km. Microwave sea-surface temperature measurements are discussed by Alishouse and McClain (1985). Infrared techniques

Most satellite measurements are taken in the 10.5 to 12.5 µm atmospheric window, for which corrections to measured brightness temperatures due to water vapour attenuation may be as much as 10 K in warm, moist (tropical) atmospheres. Sea-surface temperature derivation techniques usually address this problem in one of two ways. In the differing path length (multilook) method, observations are taken of the same sea location at differing look angles. Because atmospheric attenuation is proportional to atmospheric path length, measurements at two look angles can be used to correct for the attenuation. An example of an instrument that uses this technique is the along-track scanning radiometer (ATSR), a new-generation infrared radiometer that has a dual-angle view of the sea and is built specifically to provide accurate sea-surface temperature measurements (Prata and others, 1990). It is carried on board the European Space Agency remote-sensing satellite ERS-1, launched in July 1991. In the split-window technique, atmospheric attenuation corrections can be made because of differential absorption in a given window region of the atmosphere (for example, 10.5 to 12.5 µm) and the highly wavelength-dependent nature of water vapour absorption. The differing infrared brightness temperatures measured at any two wavelengths within the infrared 10 to 12 µm window support theoretical studies which indicate a highly linear relation between any pair of infrared temperatures and the correction needed. Hence, the difference in atmospheric attenuation between a pair of wavelengths is proportional to the difference in attenuation between a second pair. One window is chosen as a perfect window (through which the satellite “sees” the ocean surface), and one wavelength is common to both pairs. A typical split-window algorithm is of the form:

TS = a0 + T11 + a1 (T11 – T12) (8.23)

where TS is the sea-surface temperature; the T values are brightness temperatures at 11 or 12 µm, as indicated; and a0 and a1 are constants. Algorithms of this general form have been derived for use with daytime or night-time measurements, and using several infrared channels (for example, McClain, Pichel and Walton, 1985).

Instruments

A number of satellite-borne instruments have been used for sea-surface temperature measurements (Rao and others, 1990), as follows:
(a) NOAA AVHRR;
(b) GOES VAS;
(c) NOAA HIRS/MSU;
(d) GMS VISSR;
(e) SEASAT and Nimbus-7 SMMR (scanning multichannel microwave radiometer);
(f) DMSP SSM/T (special sensor microwave temperature sounder).
By far the most widely used source of satellite sea-surface temperatures has been the AVHRR, using channels 3, 4 and 5 (Annex 8.A).

comparison with ground-based observations

Before considering the comparison of satellite-derived sea-surface temperatures with in situ measurements, it is important to understand what satellite instruments actually measure. Between about 3 and 14 µm, satellite radiometers measure only emitted radiation from a “skin” layer about 1 mm thick. The true physical temperature of this skin layer can differ from the sea temperature below (say, at a depth from a few metres to several tens of metres) by up to several K, depending on the prevailing conditions and on a number of factors such as:
(a) The mixing of the upper layers of the ocean due to wind, or gravitational settling at night after the topmost layers radiatively cool;
(b) Heating of the ocean surface by sunlight;
(c) Evaporation;
(d) Rainfall;
(e) Currents;
(f) Upwelling and downwelling.
The most serious of these problems can be the heating of the top layer of the ocean on a calm, sunny day. To some degree, the disparity between satellite and in situ sea-surface temperatures is circumvented by using daytime and night-time algorithms, which have been specially tuned to take into account diurnal oceanic effects. Alternatively, night-time satellite sea-surface temperatures are often preferred because the skin effect and the oceanic thermocline are at a minimum at night. It should also be remembered that ship measurements refer to a point value at a given depth (“intake temperature”) of 10 m or



more, whereas the satellite is measuring radiance averaged over a large area (from 1 up to several tens or hundreds of square kilometres). Note that the ship data can often be highly variable in terms of quality. Rao and others (1990) show a comparison of global multichannel satellite sea-surface temperatures with drifting buoys. The bias is very small and the root-mean-square deviation is about 0.5 K. Typically, comparisons of infrared satellite sea-surface temperatures with in situ data (for example, buoys) show biases within 0.1 K and errors in the range of 0.4 to 0.6 K. Rao and others (1990) also show a comparison of microwave satellite sea-surface temperatures (using the SMMR instrument) with ship observations. The bias is 0.22 K and the standard deviation is 0.75 K for the one-month comparison. In summary, satellite-derived sea-surface temperatures provide a very important source of observations for use in meteorological and oceanographic applications. Because satellite instruments provide distinctly different measurements of sea temperature than do ships or buoys, care must be taken when merging the satellite data with conventional data. However, many of these possible problems of merging slightly disparate data sets have been overcome by careful tuning of satellite sea-surface temperature algorithms to ensure that the satellite data are consistent with a reference point defined by drifting buoy observations.

8.3.7 upper tropospheric humidity

The method used to extract values of upper tropospheric humidity (from geostationary satellite data) is based on the interpretation of the 6.7 µm water-vapour channel radiances, and the results represent a mean value throughout a deep layer in the atmosphere between approximately 600 and 300 hPa. The limits of this atmospheric column cannot be precisely specified, since the contribution function of the water-vapour channel varies in altitude in proportion to the water-vapour content of the atmosphere. The output of segment processing provides a description of all identified surfaces (cloud, land or sea), and the upper tropospheric humidity product is derived only for segments not containing medium and high cloud. The horizontal resolution is that of the nominal segment, and values are expressed as percentage relative humidity. The product is extracted twice daily from METEOSAT (based on image data for 1100 and 2300 UTC) and is distributed over the GTS in the WMO SATOB code.

8.3.8 total ozone

Solar ultraviolet light striking the atmosphere is partly absorbed and partly backscattered to space. Since ozone is the principal backscatterer, the SBUV radiometer, which measures backscattered ultraviolet, allows calculation of the global distribution and time variation of atmospheric ozone. Measurements in the ultraviolet band, 160 to 400 nm, are now of great interest as being indicative of possible climate changes. In addition to the SBUV, the total ozone mapping spectrometer (TOMS) instrument carried on board Nimbus-7 is a monochromator measuring radiation in six bands from 0.28 to 0.3125 µm. It has provided total ozone estimates within about 2 per cent of ground-based data for over a decade and has been one of the prime sources of data in monitoring the “ozone hole”. Rather than measuring at ultraviolet or visible wavelengths, a 9.7 µm ozone absorption band in the thermal infrared has allowed measurement of total ozone column density by using satellite-borne radiometers which either limb scan or scan subsatellite (for example, the TOVS instrument package on NOAA satellites includes a 9.7 µm channel). The accuracy of this type of satellite measurement compared with ground-based (for example, Dobson spectrophotometer) data is around 10 per cent, primarily because of the reliance upon only one channel (Ma, Smith and Woolf, 1984). It should be noted that the great advantage of the satellite data over ground-based data (ozone sondes or Dobson measurements) is the temporal and wide spatial coverage, making such data extremely important in monitoring global ozone depletion, especially over the polar regions, where conventional observation networks are very sparse. During the 1990s, further specialized satellite instruments which measure ozone levels or other related upper-atmospheric constituents began to come into service. These included several instruments on the NASA upper atmosphere research satellite (UARS); the polar ozone and aerosol measurement instrument (POAM II) on Spot-3, a remote-sensing satellite launched in 1993; the stratospheric aerosol and gas experiment 3 (SAGE III); and a range of instruments



which were scheduled for launch on the Earth Observation System (EOS) polar orbiters in the late 1990s. 8.3.9 volcanic ash detection

Volcanic ash clouds present a severe hazard to aviation. Since 1970 alone, there have been a large number of dangerous and costly incidents involving jet aircraft inadvertently flying through ash clouds ejected from volcanoes, especially in the Asian-Pacific region and the Pacific rim, where there are large numbers of active volcanoes. As a result of this problem, WMO, the International Civil Aviation Organization and other organizations have been working actively towards the provision of improved detection and warning systems and procedures so that the risk to passengers and aircraft might be minimized. The discrimination of volcanic ash clouds from normal (water/ice) clouds using single-channel infrared or visible satellite imagery is often extremely difficult, if not impossible, primarily because ash clouds often appear in regions where cloudiness and thunderstorm activity are common and the two types of cloud look similar. However, techniques have been developed for utilizing the split window channels on the NOAA AVHRR instrument to aid in distinguishing ash clouds from normal clouds, and to improve the delineation of ash clouds which may not be visible on single-channel infrared images. The technique involving AVHRR relies on the fact that the microphysical properties of ash clouds differ from those of water/ice clouds in the thermal infrared, so that over ash cloud the brightness temperature difference between channels 4 and 5 of the AVHRR instrument, T4–T5, is usually negative, down to about –10 K, whereas for water/ice clouds T4–T5 is close to zero or small and positive (Prata, 1989; Potts, 1993). This principle of detection of volcanic ash clouds is currently being used in the development of multichannel radiometers which are ground- or aircraft-based. Very few studies have taken place with in situ observations of volcanic ash clouds in order to ascertain the quality and accuracy of volcanic ash cloud discrimination using AVHRR.
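The split-window test described above reduces, per pixel, to a sign check on the channel 4 minus channel 5 brightness temperature difference. A minimal sketch, assuming an illustrative –0.5 K cut-off (the function name and threshold are not values prescribed by this Guide; operational thresholds are tuned per scene):

```python
def ash_flag(t4_k: float, t5_k: float, threshold_k: float = -0.5) -> bool:
    """Flag a pixel as possible volcanic ash when T4 - T5 is clearly negative.

    t4_k, t5_k: AVHRR channel 4 and 5 brightness temperatures (K).
    threshold_k: illustrative cut-off (assumed); over ash the difference
    is typically negative, down to about -10 K, while water/ice clouds
    give values near zero or slightly positive.
    """
    return (t4_k - t5_k) < threshold_k

print(ash_flag(258.0, 263.0))  # difference of -5 K: likely ash
print(ash_flag(255.0, 254.8))  # small positive difference: water/ice cloud
```

As the text notes, a dispersed ash cloud over warm surfaces can push T4–T5 back towards zero, so a single fixed threshold of this kind is only a first filter.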
Ground-based reports of volcanic eruptions tend to be used operationally to alert meteorologists to the fact that satellite imagery can then be used to monitor the subsequent evolution and movement of ash

clouds. It should be noted that the technique has its limitations, for example, in cases where the ash cloud may be dispersed and underlying radiation from water/ice clouds or sea/land surfaces may result in T4–T5 values being close to zero or positive, rather than negative as expected over volcanic ash cloud.

8.3.10 Normalized difference vegetation indices

Satellite observations may be used to identify and monitor vegetation (Rao and others, 1990). Applications include crop monitoring, deforestation monitoring, forest management, drought assessment and flood monitoring. The technique relies on the fact that the reflectance of green vegetation is low at visible wavelengths but very high in the region from about 0.7 to 1.3 µm (owing to the interaction of the incident radiation with chlorophyll). However, the reflectance over surfaces such as soil or water remains low in both the near-infrared and visible regions. Hence, satellite techniques for the assessment of vegetation generally use the difference in reflectivity between a visible channel and a near-infrared channel around 1 µm. As an example, the normalized difference vegetation index (NDVI) using AVHRR data, which is very widely used, is defined as:

NDVI = (Ch2 – Ch1)/(Ch2 + Ch1) (8.24)

Values for this index are generally in the range of 0.1 to 0.6 over vegetation, with the higher values being associated with greater greenness and/or density of the plant canopy. By contrast, over clouds, snow, water or rock, NDVI is either very close to zero or negative. Satellite monitoring of vegetation was first used extensively around the mid-1970s. It has since been refined principally as a result of a gradual improvement in the theoretical understanding of the complex interaction between vegetation and incident radiation, and better knowledge of satellite instrument characteristics and corrections required for the satellite measurements. As with sea-surface temperature satellite measurements, the processing of satellite data for NDVIs involves many corrections – for geometry of satellite view and solar illumination, atmospheric effects such as aerosols and water vapour, instrument calibration characteristics, and so on. Also, at the outset, cloud clearing is carried out to obtain cloud-free pixels.
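Equation 8.24 can be sketched directly; the reflectance values in the usage lines are illustrative assumptions chosen only to reproduce the ranges quoted above:

```python
def ndvi(ch1: float, ch2: float) -> float:
    """Normalized difference vegetation index (equation 8.24).

    ch1: AVHRR channel 1 (visible) reflectance.
    ch2: AVHRR channel 2 (near-infrared) reflectance.
    """
    return (ch2 - ch1) / (ch2 + ch1)

# Illustrative reflectances (assumed, not measured values):
print(round(ndvi(0.08, 0.40), 3))  # green vegetation: high near-IR, low visible
print(round(ndvi(0.10, 0.09), 3))  # water/rock: near zero or slightly negative
```

The first case falls in the 0.1–0.6 vegetation range noted above; the second is near zero, as expected over non-vegetated surfaces.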



The three main instruments used in vegetation monitoring by satellite are the NOAA AVHRR, and the Landsat multispectral scanner and thematic mapper. Interpretation of NDVIs and their application to various areas of meteorology or to Earth system science rely on an understanding of exactly what the satellite instrument is measuring, which is a complex problem. This is because within the field of view green leaves may be oriented at different angles, there may be different types of vegetation and there may be vegetation-free parts of the field of view. Nevertheless, NDVI correlates with ground-measured parameters, as illustrated in Figure 8.15 (Paltridge and Barber, 1988), which shows NDVI

(called V0) plotted against fuel moisture content derived from ground sampling of vegetation at various locations viewed by the NOAA AVHRR instrument. The graph shows that NDVI is well correlated with fuel moisture content, except beyond a critical value of fuel moisture content for which the vegetation is very green, and for which the NDVI remains at a constant level. Hence, NDVIs may be very useful in fire weather forecasting. Figure 8.16 (Malingreau, 1986) shows NDVI development over a three-year period, in (a) a rice field area of Thailand and (b) a wheat-rice cropping system in China. Peaks in NDVI correspond to dry season and wet season rice crops in the (a) graph

Figure 8.15. Full-cover, satellite-observed vegetation index (V0) as a function of fuel moisture content (%), with an indicative flammability (fire potential) scale; data points are from the Ararat, Lilydale, Yallourn and Loy Yang sites. Each point is a particular location average at a particular sampling time (see text).



and to wheat and rice crops in the (b) graph, respectively.

8.3.11 Other parameters

A number of other parameters are now being estimated from satellites, including various atmospheric trace gases, soil moisture (from synthetic aperture radar data (ERS-1)), integrated water vapour (SSM/I), cloud liquid water (SSM/I), distribution of flood waters, and the Earth’s radiation budget (ERBE) (on the NOAA polar orbiters). Atmospheric pressure has not yet been reliably measured from space. Atmospheric instability can be measured from temperature and humidity profiles. Bush-fires have been successfully monitored using satellite instruments, especially the NOAA AVHRR (Robinson, 1991). Channel 3 (at the 3.7 µm window) is extremely sensitive to the presence of “hot spots”, namely, regions in which the brightness temperature may range from 400 up to about 1 000 K. It is sensitive because of the strong temperature sensitivity of the Planck function and the peaking of black-body radiance from hot objects at around 4 µm. Hot spots show up on channel 3 images extremely prominently, thereby allowing fire fronts to be accurately detected. In combination with channel 1 and 4 images, which may be used for the identification of smoke and cloud, respectively, channel 3 images are very useful in fire detection.

Snow and ice can be detected using instruments such as AVHRR (visible and infrared) or the SMMR (microwave) on Nimbus-7 (Gesell, 1989). With AVHRR, the detection process involves the discrimination between snow/ice and various surfaces such as land, sea or cloud. The variation with wavelength of the spectral characteristics of these surfaces is exploited by using algorithms incorporating techniques such as thresholds; ratios of radiances or reflectivities at different wavelengths; differences between radiances or reflectivities; or spatial coherence. The disadvantage of using AVHRR is that detection is limited by the presence of cloud; this is important because cloudiness may be very high in the areas of interest. At microwave wavelengths, sea-ice detection relies on the strong contrast between sea and ice, due to the widely differing emissivities (and hence brightness temperatures) of these surfaces at microwave wavelengths. The main advantage of microwave detection is the all-weather capability, although the spatial resolution is generally tens of kilometres compared to 1 km for AVHRR.

8.4 Related facilities

8.4.1 Satellite telemetry
All satellites receive instructions and transmit data using telemetry facilities. However, all weather satellites in geostationary orbit and some in polar orbits have on-board transponders which receive data telemetered to them from data collection platforms (DCPs) at outstations. This facility allows the satellites to act as telemetering relay stations. The advantages offered by satellite telemetry are the following: (a) Repeater stations are not required; (b) The installation of outstations and receivers is simple; (c) Outstations can be moved from site to site with ease; (d) Outstations are unobtrusive; their antennas are small and do not require high masts; (e) There is little restriction through topography; (f) One receiver can receive data from outstations covering over a quarter of the Earth’s surface; (g) Because power requirements are minimal, solar power is adequate;

Figure 8.16. NDVI development curves for (a) irrigated rice in the Bangkok Plain (Thailand) and (b) a wheat-rice cropping system in Jiangsu Province (China)







The signal level is such that it can be received by a 2 m diameter dish antenna, although 1.5 m is often adequate. The dish houses a “down converter”, used to convert the incoming signal from 1 694.5 MHz to 137 MHz for input to a receiver, which decodes the transmissions, outputting the data in ASCII characters to a printer or personal computer.
(h) Equipment reliability is high, both on board the spacecraft and in the field;
(i) A frequency licence is not required by the user, the satellite operator being licensed;
(j) As many receivers as required can be operated, without the need to increase power or facilities at the outstations.

Figure 8.17. The Meteosat DCP telemetry system

8.4.2 The Meteosat data collection platform telemetry system

The unit which forms the heart of an outstation is the DCP. This is an electronic unit, similar in many ways to a logger, which can accept either several analogue voltage inputs directly from sensors, or serial data (RS232) from a processing unit between the sensors and the DCP. It also contains a small memory to store readings taken between transmissions, a processor section for overall management, a clock circuit, the radio transmitter, and either a directional or omnidirectional antenna. Up to 600 bytes can be stored in the memory for transmission at 100 bits per second. This capacity can be doubled, but this requires two 1 min time slots for transmission. The capacity is set by the amount of data that can be transmitted in a 1 min time slot. When manufactured, DCPs are programmed with their address (an 8-digit octal number) and with their time of transmission, both specified by EUMETSAT. In future designs, these are likely to be programmable by the user, to provide greater flexibility. In operation, an operator sets the DCP’s internal clock to GMT. This is carried out either with a “synchronizer unit” or with a portable personal computer. Up to 15 s of drift is permitted either way; thereafter the clock must be reset. At its appointed times, the DCP transmits the accumulated contents of its memory to METEOSAT and then clears the memory, ready to receive the next set of data for transmission in the next time slot. This operation is repeated indefinitely. The synchronizer (or personal computer) can also be used to give the station a name (e.g. its location) and to carry out a range of tests which include checking the clock setting, battery voltage, transmitter state, analogue inputs and the memory contents. It is also possible to speed up the clock to test overall performance, including the making of a test transmission (into a dummy load, to prevent interference by transmitting outside the allocated time slot).
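The 1 min time-slot limit quoted above can be checked directly: 600 bytes at 100 bits per second take 48 s, which fits one slot, while the doubled capacity does not. A quick sketch (plain arithmetic on the figures in the text, no other assumptions):

```python
def transmission_time_s(n_bytes: int, bit_rate_bps: float = 100.0) -> float:
    """Time to send n_bytes at the DCP transmission rate of 100 bits per second."""
    return n_bytes * 8 / bit_rate_bps

print(transmission_time_s(600))   # 48.0 s: fits within one 1 min slot
print(transmission_time_s(1200))  # 96.0 s: the doubled capacity needs two slots
```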


Figure 8.17 illustrates the METEOSAT DCP telemetry system. It should be noted that similar systems are implemented on the GOES, GMS and INSAT satellites and are outlined in WMO (1989). The systems for other geostationary satellites are similar. The outstation (A) transmits its measurements to METEOSAT (B) along path 1 at set time intervals (hourly, three-hourly, daily, etc.). It has a 1 min time slot in which to transmit its data, on a frequency of between 402.01 MHz and 402.20 MHz at a power of 5 W (25 to 40 W for mobile outstations, with omnidirectional antenna). The satellite immediately retransmits these data to the European Space Operations Centre (ESOC) ground station (C), sited in the Odenwald near Michelstadt, Germany, along path 2 at a frequency of around 1 675 MHz. From here, the data are sent by landline to ESOC, some 40 km north-west of Odenwald in Darmstadt (D). Here they are quality controlled, archived and, where appropriate, distributed on the Global Telecommunications Network. They are also retained at the ground station and returned to METEOSAT (multiplexed with imagery data) from a second dish antenna (E), along path 3, for retransmission to users via the satellite along path 4.



A DCP will fit into a small housing and can be powered by a solar-charged battery. The remainder of the outstation comprises the sensors, which are similar to those at a conventional logging station or at a ground-based radio telemetry installation.

8.4.3 Images

The images are built up, line by line, by a multispectral radiometer (see previous sections). METEOSAT spins on its axis at 100 revolutions per minute, scanning the Earth in horizontal lines from east to west. A mirror makes a small step from south to north at each rotation, building up a complete scan of the Earth in 25 min (including 5 min for resetting the mirror for the next scan). The visible image is formed of 5 000 lines, each of 5 000 pixels, giving a resolution of 2.5 km immediately beneath the satellite (the resolution is lower at higher latitudes). The two infrared images each comprise 2 500 lines of 2 500 pixels, giving a subsatellite resolution of 5 km. The images are transmitted digitally, line by line, at 330 000 bits per second, while the scanner is looking at space. These transmissions are not meant for the end-user and go directly to the ground station, where they are processed by ESOC and subsequently disseminated to users, back via METEOSAT, on two separate channels. The first channel is for high-quality digital image data for reception by a primary data user station. The second channel transmits the images in the analogue form known as weather facsimile (WEFAX), a standard used by most meteorological satellites (including polar orbiters). These can be received by secondary data user stations, which receive images covering different sections of the Earth’s surface in the METEOSAT field of view. Transmissions follow a daily schedule, one image being transmitted every 4 min. These stations also receive the DCP transmissions.
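The line counts and subsatellite resolutions quoted for the images are mutually consistent: each image spans roughly one Earth diameter. A quick consistency check (plain arithmetic on the figures in the text; the Earth diameter is a standard value, not from this Guide):

```python
EARTH_DIAMETER_KM = 12_742  # mean Earth diameter (standard value, assumed)

def swath_km(lines: int, resolution_km: float) -> float:
    """Extent covered by an image of `lines` lines at the given subsatellite resolution."""
    return lines * resolution_km

print(swath_km(5_000, 2.5))  # visible image: 12 500 km, about one Earth diameter
print(swath_km(2_500, 5.0))  # infrared image: same extent at half the resolution
```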
DCP data handling

In addition to acquiring and disseminating the images, METEOSAT also currently has 66 channels for relaying DCP data from outstations to the

ground station. Of these, half are reserved for international use, that is, for mobile DCPs passing from the field of view of one geostationary meteorological satellite into that of the next. The remainder are for fixed “regional” DCPs. Each channel can accommodate as many DCPs as its frequency of reporting and their report lengths permit. Thus, with three-hourly reporting times and 1 min messages from all DCPs, and with a 30 s buffer period between each (to allow for clock drift), each channel could accommodate 120 DCPs, making a total of 7 920.

8.4.4 Polar-orbiting satellite telemetry systems

Polar satellites have low orbits in the north/south direction with a period of about 100 min. Consequently, they do not appear stationary at one point in the sky. Instead, they appear over the horizon, pass across the sky (not necessarily directly overhead) and set at the opposite horizon. They are visible for about 10 min at each pass, but this varies depending on the angle at which they are visible. Such orbits dictate that a different mode of operation is necessary for a telemetry system using them. Unlike geostationary systems, the DCPs used with polar-orbiting satellites (called data collection systems – DCSs) cannot transmit at set times, nor can their antennas be directed at one point in the sky. Instead, the DCSs are given set intervals at which to transmit, ranging from 100 to 200 s. They use a similar, but not identical, frequency to DCPs, and their antennas are, necessarily, omnidirectional. Each outstation is given a slightly different transmission interval so as to reduce the chances of coincidental transmissions from two stations. Further separation of outstations is achieved by the fact that, owing to the satellite’s motion, a Doppler shift in received frequency occurs. This is different for each DCS because it occupies a different location relative to the satellite. This last feature is also used to enable the position of moving outstations to be followed. This is one of the useful features of polar orbits, and can enable, for example, a drifting buoy to be both tracked and its data collected. Furthermore, the buoy can move completely around the world and still be followed by the same satellite. This is the basis of the Argos system which operates on the NOAA satellites and is managed by France. Even fixed DCSs can make use of the feature, in that it enables data to be



collected from any point on Earth via the one satellite. The transmissions from DCSs are received by the satellite at some point in its overpass. The means of transferring the received data to the user has to be different from that adopted for METEOSAT. They follow two routes. In the first route, the received data are immediately retransmitted, in real time, in the ultra high frequency range, and can be received by a user’s receiver on an omnidirectional antenna. To ensure communication, both receiver and outstation must be within a range of not more than about 2 000 km of each other, since both must be able to see the satellite at the same time. In the second route, the received data are recorded on a magnetic tape logger on board the spacecraft and retransmitted to ground stations as the satellite

passes over. These stations are located in the United States and France (Argos system). From here, the data are put onto the GTS or sent as a printout by post if there is less urgency. The cost of using the polar satellites is not small and, while they have some unique advantages over geostationary systems, they are less useful as general-purpose telemetry satellites. Their greatest value is that they can collect data from high latitudes, beyond the reach of geostationary satellites. They can also be of value in those areas of the world not currently covered by geostationary satellites. For example, the Japanese GMS satellite does not currently provide a retransmission facility, and users can receive data only via the GTS. Until such a time as all of the Earth’s surface is covered by geostationary satellites with retransmission facilities, polar-orbiting satellites will usefully fill the gap.
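The period of about 100 min quoted above for polar orbiters fixes their altitude through Kepler's third law. A minimal sketch inverting that relation (the gravitational parameter and Earth radius are standard physical values, not figures from this Guide):

```python
import math

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3 s^-2 (standard value)
EARTH_RADIUS_M = 6.371e6    # mean Earth radius, m (standard value)

def altitude_km(period_min: float) -> float:
    """Circular-orbit altitude implied by an orbital period, via Kepler's third law:
    T^2 = 4 * pi^2 * a^3 / GM, solved for the semi-major axis a."""
    t = period_min * 60.0
    semi_major_axis = (GM_EARTH * t**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    return (semi_major_axis - EARTH_RADIUS_M) / 1000.0

# A 100 min period corresponds to an altitude of roughly 765 km,
# consistent with the low orbits described for polar satellites.
print(round(altitude_km(100.0)))
```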



ANNEX 8.A

ADVANCED VERY HIGH RESOLUTION RADIOMETER CHANNELS

Nadir resolution: 1.1 km; swath width: > 2 600 km

Channel  Wavelength (μm)  Primary uses
1        0.58–0.68        Daytime cloud surface mapping
2        0.725–1.10       Surface water, ice, snowmelt
3        3.55–3.93        Sea-surface temperature, night-time cloud mapping
4        10.30–11.30      Sea-surface temperature, day and night cloud mapping
5        11.50–12.50      Sea-surface temperature, day and night cloud mapping



ANNEX 8.B

HIRS CHANNELS AND THEIR APPLICATIONS

Television infrared observation satellite operational vertical sounder: high resolution infrared sounder channels

Channel  Central wavelength (μm)  Primary uses
1        15.00                    Temperature sounding
2        14.70                    Temperature sounding
3        14.50                    Temperature sounding
4        14.20                    Temperature sounding
5        14.00                    Temperature sounding
6        13.70                    Temperature sounding
7        13.40                    Temperature sounding
8        11.10                    Surface temperature and cloud detection
9        9.70                     Total ozone
10       8.30                     Water vapour sounding
11       7.30                     Water vapour sounding
12       6.70                     Water vapour sounding
13       4.57                     Temperature sounding
14       4.52                     Temperature sounding
15       4.46                     Temperature sounding
16       4.40                     Temperature sounding
17       4.24                     Temperature sounding
18       4.00                     Surface temperature
19       3.70                     Surface temperature
20       0.70                     Cloud detection

Microwave sounding unit channels

Channel  Central frequency (GHz)  Primary uses
1        50.31                    Surface emissivity and cloud attenuation
2        53.73                    Temperature sounding
3        54.96                    Temperature sounding
4        57.95                    Temperature sounding

Stratospheric sounding unit channels

Three 15 μm channels for temperature sounding.



REFERENCES AND FURTHER READING

Alishouse, J.C. and E.P. McClain, 1985: Sea surface temperature determinations. Advances in Geophysics, Volume 27, pp. 279–296.
Cooperative Institute for Meteorological Satellite Studies, 1991: Technical Proceedings of the Sixth International TOVS Study Conference. University of Wisconsin (see also proceedings of previous conferences, 1988, 1989).
Eyre, J.R., J.L. Brownscombe and R.J. Allam, 1984: Detection of fog at night using advanced very high resolution radiometer (AVHRR) imagery. Meteorological Magazine, Volume 113, pp. 266–271.
Gesell, G., 1989: An algorithm for snow and ice detection using AVHRR data. International Journal of Remote Sensing, Volume 10, pp. 897–905.
King-Hele, D., 1964: Theory of Satellite Orbits in an Atmosphere. Butterworths, London.
Ma, X.L., W.L. Smith and H.M. Woolf, 1984: Total ozone from NOAA satellites: A physical model for obtaining measurements with high spatial resolution. Journal of Climate and Applied Meteorology, Volume 23, Issue 9, pp. 1309–1314.
Malingreau, J.P., 1986: Global vegetation dynamics: Satellite observations over Asia. International Journal of Remote Sensing, Volume 7, pp. 1121–1146.
Massey, H., 1964: Space Physics. Cambridge University Press, London.
McClain, E.P., W.G. Pichel and C.C. Walton, 1985: Comparative performance of AVHRR-based multichannel sea surface temperatures. Journal of Geophysical Research, Volume 90, pp. 11587–11601.
Paltridge, G.W. and J. Barber, 1988: Monitoring grassland dryness and fire potential in Australia with NOAA/AVHRR data. Remote Sensing of Environment, Volume 25, pp. 381–394.
Potts, R.J., 1993: Satellite observations of Mt Pinatubo ash clouds. Australian Meteorological Magazine, Volume 42, pp. 59–68.
Prata, A.J., 1989: Observations of volcanic ash clouds in the 10–12 micron window using AVHRR/2 data. International Journal of Remote Sensing, Volume 10, pp. 751–761.
Prata, A.J., R.P. Cechet, I.J. Barton and D.T. Llewellyn-Jones, 1990: The along-track scanning radiometer for ERS-1: Scan geometry and data simulation. IEEE Transactions on Geoscience and Remote Sensing, Volume 28, pp. 3–13.
Rao, P.K., S.J. Holmes, R.K. Anderson, J.S. Winston and P.E. Lehr, 1990: Weather Satellites: Systems, Data, and Environmental Applications. American Meteorological Society, Boston.
Robinson, J.M., 1991: Fire from space: Global fire evaluation using infrared remote sensing. International Journal of Remote Sensing, Volume 12, pp. 3–24.
Saunders, R.W. and K.T. Kriebel, 1988: An improved method for detecting clear sky and cloudy radiances from AVHRR data. International Journal of Remote Sensing, Volume 9, pp. 123–150.
Smith, W.L., 1985: Satellites. In D.D. Houghton (ed.): Handbook of Applied Meteorology. Wiley, New York, pp. 380–472.
Smith, W.L. and C.M.R. Platt, 1978: Comparison of satellite-deduced cloud heights with indications from radiosonde and ground-based laser measurements. Journal of Applied Meteorology, Volume 17, pp. 1796–1802.
World Meteorological Organization, 1989: Guide on the Global Observing System. WMO-No. 488, Geneva.
World Meteorological Organization, 1994a: Information on Meteorological and Other Environmental Satellites. Third edition, WMO-No. 411, Geneva.
World Meteorological Organization, 1994b: Application of Satellite Technology: Annual Progress Report 1993. WMO Satellite Report No. SAT-12, WMO/TD-No. 628, Geneva.
World Meteorological Organization, 2003: Manual on the Global Observing System. WMO-No. 544, Geneva.


CHAPTER 9

RADAR MEASUREMENTS



This chapter is an elementary discussion of meteorological microwave radars – the weather radar – used mostly to observe hydrometeors in the atmosphere. It places particular emphasis on the technical and operational characteristics that must be considered when planning, developing and operating radars and radar networks in support of Meteorological and Hydrological Services. It is supported by a substantial list of references. It also briefly mentions the high-frequency radar systems used for observation of the ocean surface. Radars used for vertical profiles are discussed in Part II, Chapter 5.

9.1.1 The weather radar

Meteorological radars are capable of detecting precipitation and variations in the refractive index in the atmosphere which may be generated by local variations in temperature or humidity. Radar echoes may also be produced from airplanes, dust, birds or insects. This chapter deals with radars in common operational usage around the world. The meteorological radars having characteristics best suited for atmospheric observation and investigation transmit electromagnetic pulses in the 3–10 GHz frequency range (10–3 cm wavelength, respectively). They are designed for detecting and mapping areas of precipitation, measuring their intensity and motion, and perhaps their type. Higher frequencies are used to detect smaller hydrometeors, such as cloud or even fog droplets. Although this has valuable applications in cloud physics research, these frequencies are generally not used in operational forecasting because of excessive attenuation of the radar signal by the intervening medium. At lower frequencies, radars are capable of detecting variations in the refractive index of clear air, and they are used for wind profiling. Although they may detect precipitation, their scanning capabilities are limited by the size of the antenna required to achieve effective resolution. The returned signal from the transmitted pulse encountering a weather target, called an echo, has an amplitude, a phase and a polarization. Most operational radars worldwide are still limited to analysis of the amplitude feature that is related to the size distribution and numbers of particles in the (pulse) volume illuminated by the radar beam. The amplitude is used to determine a parameter called the reflectivity factor (Z) to estimate the mass of precipitation per unit volume or the intensity of precipitation through the use of empirical relations. A primary application is thus to detect, map and estimate the precipitation at ground level instantaneously, nearly continuously and over large areas. Some research radars have used reflectivity factors measured at two polarizations of the transmitted and received waveform. Research continues to determine the value and potential of polarization systems for precipitation measurement and target state, but operational systems do not exist at present. Doppler radars have the capability of determining the phase difference between the transmitted and received pulse. The difference is a measure of the mean Doppler velocity of the particles — the reflectivity-weighted average of the radial components of the displacement velocities of the hydrometeors in the pulse volume. The Doppler spectrum width is a measurement of the spatial variability of the velocities and provides some indication of the wind shear and turbulence. Doppler radars offer a significant new dimension to weather radar observation and most new systems have this capability. Modern weather radars should have characteristics optimized to produce the best data for operational requirements, and should be adequately installed, operated and maintained to utilize the capability of the system to the meteorologists’ advantage.

9.1.2 Radar characteristics, terms and units

The selection of the radar characteristics, and consideration of the climate and the application, are important for determining the acceptable accuracy of measurements for precipitation estimation (Tables 9.1, 9.2 and 9.3).

9.1.3 Meteorological applications

Radar observations have been found most useful for the following: (a) Severe weather detection, tracking and warning;

(b) Surveillance of synoptic and mesoscale weather systems;
(c) Estimation of precipitation amounts.

Table 9.2. Some meteorological radar parameters and units

Symbol  Parameter                                   Units
Ze      Equivalent or effective radar reflectivity  mm6 m–3 or dBZ
Vr      Mean radial velocity                        m s–1
σv      Spectrum width                              m s–1
Zdr     Differential reflectivity                   dB
CDR     Circular depolarization ratio               dB
LDR     Linear depolarization ratio                 dB
kdp     Propagation phase                           degree km–1
ρ       Correlation coefficient                     (dimensionless)

The radar characteristics of any one radar will not be ideal for all applications. The selection criteria of a radar system are usually optimized to meet several applications, but they can also be specified to best meet a specific application of major importance. The choices of wavelength, beamwidth, pulse length, and pulse repetition frequencies (PRFs) have particular consequences. Users should therefore carefully consider the applications and climatology before determining the radar specifications.

Severe weather detection and warning

A radar is the only realistic surface-based means of monitoring severe weather over a wide area. Radar echo intensities, area and patterns can be used to identify areas of severe weather, including thunderstorms with probable hail and damaging winds. Doppler radars that can identify and provide a measurement of intense winds associated with gust fronts, downbursts and tornadoes add a new dimension. The nominal range of coverage is about 200 km, which is sufficient for local short-range forecasting and warning. Radar networks are used to extend the coverage (Browning and others, 1982). Effective warnings require effective interpretation performed by alert and well-trained personnel.

Table 9.1. Radar frequency bands

Radar band  Frequency         Wavelength      Nominal
UHF         300–1 000 MHz     1–0.3 m         70 cm
L           1 000–2 000 MHz   0.3–0.15 m      20 cm
S a         2 000–4 000 MHz   15–7.5 cm       10 cm
C a         4 000–8 000 MHz   7.5–3.75 cm     5 cm
X a         8 000–12 500 MHz  3.75–2.4 cm     3 cm
Ku          12.5–18 GHz       2.4–1.66 cm     1.50 cm
K           18–26.5 GHz       1.66–1.13 cm    1.25 cm
Ka          26.5–40 GHz       1.13–0.75 cm    0.86 cm
W           94 GHz            0.30 cm         0.30 cm

a Most common weather radar bands.

Table 9.3. Physical radar parameters and units

Symbol  Parameter                            Units
c       Speed of light                       m s–1
f       Transmitted frequency                Hz
fd      Doppler frequency shift              Hz
Pr      Received power                       mW or dBm
Pt      Transmitted power                    kW
PRF     Pulse repetition frequency           Hz
T       Pulse repetition time (=1/PRF)       ms
Ω       Antenna rotation rate                degree s–1 or rpm
λ       Transmitted wavelength               cm
φ       Azimuth angle                        degree
θ       Beamwidth between half-power points  degree
τ       Pulse width                          μs
γ       Elevation angle                      degree

Surveillance of synoptic and mesoscale systems

Radars can provide a nearly continuous monitoring of weather related to synoptic and mesoscale storms over a large area (say a range of 220 km,
area 125 000 km2) if unimpeded by hills. Owing to ground clutter at short ranges and the Earth’s curvature, the maximum practical range for weather observation is about 200 km. Over large water areas, other means of observation are often not available or possible. Networks can extend the coverage and may be cost effective. Radars provide a good description of precipitation. Narrower beamwidths provide better resolution of patterns and greater effectiveness at longer ranges. In regions where very heavy and extensive precipitation is common, a 10-cm wavelength is needed for good precipitation measurements. In other areas, such as mid-latitudes, 5 cm radars may be effective at much lower cost. The 3 cm wavelength suffers from too much attenuation in precipitation to be very effective, except for very light rain or snow conditions. Development work is beginning on the concept of dense networks of 3 cm radars with polarimetric capabilities that could overcome the attenuation problem of stand-alone 3 cm radars. Precipitation estimation Radars have a long history of use in estimating the intensity and thereby the amount and distribution of precipitation with a good resolution in time and space. Most studies have been associated with rainfall, but snow measurements can also be taken with appropriate allowances for target composition. Readers should consult reviews by Joss and Waldvogel (1990), and Smith (1990) for a comprehensive discussion of the state of the art, the techniques, the problems and pitfalls, and the effectiveness and accuracy. Ground-level precipitation estimates from typical radar systems are made for areas of typically 2 km2, successively for 5–10 minute periods using low elevation plan position indicator scans with beamwidths of 1°. The radar estimates have been found to compare with spot precipitation gauge measurements within a factor of two. Gauge and radar measurements are both estimates of a continually varying parameter. 
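The conversion from measured reflectivity to rain rate that underlies such estimates is commonly made with an empirical power law. A minimal sketch follows; the Marshall–Palmer coefficients used here (Z = 200 R^1.6) are a widely quoted assumption, not a value prescribed by this Guide, and operational coefficients vary with climate and precipitation type:

```python
def rain_rate_mm_per_h(dbz, a=200.0, b=1.6):
    """Convert a reflectivity factor in dBZ to a rainfall rate in mm h-1
    using an empirical Z-R power law Z = a * R**b.

    The defaults a=200, b=1.6 (Marshall-Palmer) are illustrative only;
    operational values depend on climate and precipitation type."""
    z = 10.0 ** (dbz / 10.0)        # dBZ -> Z in mm6 m-3
    return (z / a) ** (1.0 / b)     # invert Z = a * R**b

def accumulation_mm(dbz, minutes):
    """Accumulated precipitation implied by one scan interval at a
    steady rate (mm)."""
    return rain_rate_mm_per_h(dbz) * minutes / 60.0
```

For a 40 dBZ echo this gives a rate of roughly 11.5 mm h–1, which over a 5–10 minute scan interval yields the per-interval accumulation that is then summed over time.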
The gauge samples an extremely small area (100 cm², 200 cm²), while the radar integrates over a volume, on a much larger scale. The comparability may be enhanced by adjusting the radar estimates with gauge measurements.

9.1.4 Meteorological products

A radar can be made to provide a variety of meteorological products to support various applications. The products that can be generated by a weather radar depend on the type of radar, its signal-processing characteristics, and the associated radar control and analysis system. Most modern radars automatically perform a volume scan consisting of a number of full azimuth rotations of the antenna at several elevation angles. All raw polar data are stored in a three-dimensional array, commonly called the volume database, which serves as the data source for further data processing and archiving. By means of application software, a wide variety of meteorological products is generated and displayed as images on a high-resolution colour display monitor. Grid or pixel values and conversion to x-y coordinates are computed using three-dimensional interpolation techniques. For a typical Doppler weather radar, the displayed variables are reflectivity, rainfall rate, radial velocity and spectrum width. Each image pixel represents the colour-coded value of a selected variable. The following is a list of the measurements and products generated, most of which are discussed in this chapter:
(a) The plan position indicator: A polar format display of a variable, obtained from a single full antenna rotation at one selected elevation. It is the classic radar display, used primarily for weather surveillance;
(b) The range height indicator: A display of a variable obtained from a single elevation sweep, typically from 0 to 90°, at one azimuth. It is also a classic radar display that shows detailed cross-section structures, and it is used for identifying severe storms, hail and the bright band;
(c) The constant altitude plan position indicator (CAPPI): A horizontal cross-section display of a variable at a specified altitude, produced by interpolation from the volume data. It is used for surveillance and for the identification of severe storms. It is also useful for monitoring the weather at specific flight levels for air traffic applications. The "no data" regions as seen in the CAPPI (close to and away from the radar with reference to the selected altitude) are filled with the data from the highest and lowest elevations, respectively, in another form of CAPPI, called "Pseudo CAPPI";
(d) Vertical cross-section: A display of a variable above a user-defined surface vector (not necessarily through the radar). It is produced by interpolation from the volume data;
(e) The column maximum: A display, in plan, of the maximum value of a variable above each point of the area being observed;
(f) Echo tops: A display, in plan, of the height of the highest occurrence of a selectable reflectivity contour, obtained by searching in the volume data. It is an indicator of severe weather and hail;
(g) Vertically integrated liquid: An indicator of the intensity of severe storms. It can be displayed, in plan, for any specified layer of the atmosphere.

In addition to these standard or basic displays, other products can be generated to meet the particular requirements of users for purposes such as hydrology, nowcasting (see section 9.10) or aviation:
(a) Precipitation accumulation: An estimate of the precipitation accumulated over time at each point in the area observed;
(b) Precipitation subcatchment totals: Area-integrated accumulated precipitation;
(c) Velocity azimuth display (VAD): An estimate of the vertical profile of wind above the radar. It is computed from a single antenna rotation at a fixed elevation angle;
(d) Velocity volume processing, which uses three-dimensional volume data;
(e) Storm tracking: A product from complex software to determine the tracks of storm cells and to predict future locations of storm centroids;
(f) Wind shear: An estimate of the radial and tangential wind shear at a height specified by the user;
(g) Divergence profile: An estimate of divergence from the radial velocity data, from which the divergence profile is obtained given some assumptions;
(h) Mesocyclone: A product from sophisticated pattern recognition software that identifies rotation signatures within the three-dimensional base velocity data that are on the scale of the parent mesocyclonic circulation often associated with tornadoes;
(i) Tornadic vortex signature: A product from sophisticated pattern recognition software that identifies gate-to-gate shear signatures within the three-dimensional base velocity data that are on the scale of tornadic vortex circulations.

9.1.5 Radar accuracy requirements

The accuracy requirements depend on the most important applications of the radar observations. Appropriately installed, calibrated and maintained modern radars are relatively stable and do not produce significant measurement errors. External factors, such as ground clutter effects, anomalous propagation, attenuation and propagation effects, beam effects, target composition, particularly with variations and changes in the vertical, and rain rate-reflectivity relationship inadequacies, contribute most to the inaccuracy. By considering only errors attributable to the radar system, the measurable radar parameters can be determined with an acceptable accuracy (Table 9.4).

Table 9.4. Accuracy requirements

Parameter   Definition                Acceptable accuracy (a)
φ           Azimuth angle             0.1°
γ           Elevation angle           0.1°
Vr          Mean Doppler velocity     1.0 m s–1
Z           Reflectivity factor       1 dBZ
σv          Doppler spectrum width    1 m s–1

(a) These figures are relative to a normal Gaussian spectrum with a standard deviation smaller than 4 m s–1. Velocity accuracy deteriorates when the spectrum width grows, while reflectivity accuracy improves.

9.2 Radar technology

9.2.1 Principles of radar measurement

The principles of radar and the observation of weather phenomena were established in the 1940s. Since that time, great strides have been made in improving equipment, signal and data processing and its interpretation. The interested reader should consult some of the relevant texts for greater detail. Good references include Skolnik (1970) for engineering and equipment aspects; Battan (1981) for meteorological phenomena and applications; Atlas (1964; 1990), Sauvageot (1982) and WMO (1985) for a general review; Rinehart (1991) for modern techniques; and Doviak and Zrnic (1993) for Doppler radar principles and applications. A brief summary of the principles follows. Most meteorological radars are pulsed radars. Electromagnetic waves at fixed preferred frequencies are transmitted from a directional antenna into the atmosphere in a rapid succession of short pulses. Figure 9.1 shows a directional radar antenna emitting a pulse-shaped beam of electromagnetic



energy over the Earth’s curved surface and illuminating a portion of a meteorological target. Many of the physical limitations and constraints of the observation technique are immediately apparent from the figure. For example, there is a limit to the minimum altitude that can be observed at far ranges due to the curvature of the Earth. A parabolic reflector in the antenna system concentrates the electromagnetic energy in a conical-shaped beam that is highly directional. The width of the beam increases with range; for example, a nominal 1° beam spreads to 0.9, 1.7 and 3.5 km at ranges of 50, 100, and 200 km, respectively. The short bursts of electromagnetic energy are absorbed and scattered by any meteorological targets encountered. Some of the scattered energy is reflected back to the radar antenna and receiver. Since the electromagnetic wave travels with the speed of light (that is, 2.99 × 10⁸ m s–1), by measuring the time between the transmission of the pulse and its return, the range of the target is determined. Between successive pulses, the receiver listens for any return of the wave. The return signal from the target is commonly referred to as the radar echo. The strength of the signal reflected back to the radar receiver is a function of the concentration, size and water phase of the precipitation particles that make up the target. The power return, Pr, therefore provides a measure of the characteristics of the meteorological target and is, but not uniquely, related to a precipitation rate depending on the form of precipitation. The “radar range equation”

relates the power return from the target to the radar characteristics and parameters of the target. The power measurements are determined by the total power backscattered by the target within a volume being sampled at any one instant — the pulse volume (i.e. sample volume). The pulse volume dimensions are dependent on the radar pulse length in space (h) and the antenna beamwidths in the vertical (φb) and the horizontal (θb). The beamwidth, and therefore the pulse volume, increases with range. Since the power that arrives back at the radar is involved in a two-way path, the pulse-volume length is only one half pulse length in space (h/2) and is invariant with range. The location of the pulse volume in space is determined by the position of the antenna in azimuth and elevation and the range to the target. The range (r) is determined by the time required for the pulse to travel to the target and to be reflected back to the radar. Particles within the pulse volume are continuously shuffling relative to one another. This results in phase effects in the scattered signal and in intensity fluctuations about the mean target intensity. Little significance can be attached to a single echo intensity measurement from a weather target. At least 25 to 30 pulses must be integrated to obtain a reasonable estimation of mean intensity (Smith, 1995). This is normally carried out electronically in an integrator circuit. Further averaging of pulses in range, azimuth and time is often conducted to increase the sampling size and accuracy of the estimate. It follows that the space resolution is coarser.

9.2.2 The radar equation for precipitation targets


Meteorological targets consist of a volume of more or less spherical particles composed entirely of ice and/or water and randomly distributed in space. The power backscattered from the target volume is dependent on the number, size, composition, relative position, shape and orientation of the scattering particles. The total power backscattered is the sum of the power backscattered by each of the scattering particles. Using this target model and electromagnetic theory, Probert-Jones (1962) developed an equation relating the echo power received by the radar to the parameters of the radar and the targets’ range and scattering characteristics. It is generally accepted as being a reliable relationship to provide quantitative reflectivity measurements with good accuracy, bearing in mind the generally realistic assumptions made in the derivation:

Figure 9.1. Propagation of electromagnetic waves through the atmosphere for a pulse weather radar; ha is the height of the antenna above the Earth’s surface, R is the range, h is the length of the pulse, h/2 is the sample volume depth and H is the height of the pulse above the Earth’s surface



Pr = (π³ / (1024 ln 2)) · (Pt h G² θb φb |K|² / λ²) · (10⁻¹⁸ Z / r²)   (9.1)

where Pr is the power received back at the radar, averaged over several pulses, in watts; Pt is the peak power of the pulse transmitted by the radar in watts; h is the pulse length in space, in metres (h = cτ/2, where c is the speed of light and τ is the pulse duration); G is the gain of the antenna over an isotropic radiator; θb and φb are the horizontal and vertical beamwidths, respectively, of the antenna radiation pattern at the –3 dB level of one-way transmission, in radians; λ is the wavelength of the transmitted wave, in metres; |K|² is the refractive index factor of the target; r is the slant range from the radar to the target, in metres; and Z is the radar reflectivity factor (usually taken as the equivalent reflectivity factor Ze when the target characteristics are not well known), in mm⁶ m⁻³. The second term in the equation contains the radar parameters, and the third term the parameters depending on the range and characteristics of the target. The radar parameters, except for the transmitted power, are relatively fixed, and, if the transmitter is operated and maintained at a constant output (as it should be), the equation can be simplified to:

Pr = C |K|² Z / r²   (9.2)

where C is the radar constant. There are a number of basic assumptions inherent in the development of the equation which have varying importance in the application and interpretation of the results. Although they are reasonably realistic, the conditions are not always met exactly and, under particular conditions, will affect the measurements (Aoyagi and Kodaira, 1995). These assumptions are summarized as follows:
(a) The scattering precipitation particles in the target volume are homogeneous dielectric spheres whose diameters are small compared to the wavelength, that is, D < 0.06 λ for strict application of Rayleigh scattering approximations;
(b) The pulse volume is completely filled with randomly scattered precipitation particles;
(c) The reflectivity factor Z is uniform throughout the sampled pulse volume and constant during the sampling interval;
(d) The particles are all water drops or all ice particles, that is, all particles have the same refractive index factor |K|² and the power scattering by the particles is isotropic;
(e) Multiple scattering (among particles) is negligible;
(f) There is no attenuation in the intervening medium between the radar and the target volume;
(g) The incident and backscattered waves are linearly co-polarized;
(h) The main lobe of the antenna radiation pattern is Gaussian in shape;
(i) The antenna is a parabolic reflector type of circular cross-section;
(j) The gain of the antenna is known or can be calculated with sufficient accuracy;
(k) The contribution of the side lobes to the received power is negligible;
(l) Blockage of the transmitted signal by ground clutter in the beam is negligible;
(m) The peak power transmitted (Pt) is the actual power transmitted at the antenna, that is, all waveguide losses, and so on, and attenuation in the radar dome, are considered;
(n) The average power measured (Pr) is averaged over a sufficient number of pulses or independent samples to be representative of the average over the target pulse volume.
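The simplified form of the radar equation, Pr = C |K|² Z / r², lends itself to a quick numerical check. In the sketch below, all parameter values (transmitted power, gain, beamwidths, and so on) are illustrative assumptions only, not Guide specifications:

```python
import math

def radar_constant(pt_w, pulse_len_m, gain, theta_b_rad, phi_b_rad, wavelength_m):
    """Gather the fixed radar parameters of the range equation into the
    single constant C of the simplified form Pr = C |K|^2 Z / r^2."""
    return (math.pi ** 3 / (1024.0 * math.log(2.0))) * \
        pt_w * pulse_len_m * gain ** 2 * theta_b_rad * phi_b_rad / wavelength_m ** 2

def received_power_w(c_radar, k2, z_mm6_m3, range_m):
    """Simplified radar equation; the 1e-18 factor converts Z from
    mm6 m-3 to m6 m-3."""
    return c_radar * k2 * 1e-18 * z_mm6_m3 / range_m ** 2

# Illustrative S-band values: 500 kW peak power, 300 m pulse length in
# space, 40 dB antenna gain, 1 degree beamwidths, 10 cm wavelength.
C = radar_constant(500e3, 300.0, 1e4, math.radians(1.0), math.radians(1.0), 0.10)
```

With |K|² = 0.93 for water and a fixed Z, doubling the range quarters the received power, as the r² term requires.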

This simplified expression relates the echo power measured by the radar to the radar reflectivity factor Z, which is in turn related to the rainfall rate. These factors and their relationship are crucial for interpreting the intensity of the target and estimating precipitation amounts from radar measurements. Despite the many assumptions, the expression provides a reasonable estimate of the target mass. This estimate can be improved by further consideration of factors in the assumptions.

9.2.3 Basic weather radar

The basic weather radar consists of the following: (a) A transmitter to produce power at microwave frequency; (b) An antenna to focus the transmitted microwaves into a narrow beam and receive the returning power; (c) A receiver to detect, amplify and convert the microwave signal into a low frequency signal; (d) A processor to extract the desired information from the received signal; (e) A system to display the information in an intelligible form. Other components that maximize the radar capability are: (a) A processor to produce supplementary displays;




(b) A recording system to archive the data for training, study and records.

A basic weather radar may be non-coherent, that is, the phase of successive pulses is random and unknown. Almost exclusively, current systems use computers for radar control, digital signal processing, recording, product displays and archiving. The power backscattered from a typical radar is of the order of 10⁻⁸ to 10⁻¹⁵ W, covering a range of about 70 dB from the strongest to weakest targets detectable. To adequately cover this range of signals, a logarithmic receiver was used in the past. However, modern operational and research radars with linear receivers with 90 dB dynamic range (and other sophisticated features) are just being introduced (Heiss, McGrew and Sirmans, 1990; Keeler, Hwang and Loew, 1995). Many pulses must be averaged in the processor to provide a significant measurement; they can be integrated in different ways, usually in a digital form, and must account for the receiver transfer function (namely, linear or logarithmic). In practice, for a typical system, the signal at the antenna is received, amplified, averaged over many pulses, corrected for receiver transfer, and converted to a reflectivity factor Z using the radar range equation. The reflectivity factor is the most important parameter for radar interpretation. The factor derives from the Rayleigh scattering model and is defined theoretically as the sum of particle (drop) diameters to the sixth power in the sample volume:

Z = ∑vol D⁶   (9.3)

where the unit of Z is mm⁶ m⁻³. In many cases, the numbers of particles, composition and shape are not known and an equivalent or effective reflectivity factor Ze is defined. Snow and ice particles must refer to an equivalent Ze which represents Z, assuming the backscattering particles were all spherical drops. A common practice is to work in a logarithmic scale, or dBZ units, which are numerically defined as dBZ = 10 log₁₀ Ze. Volumetric observations of the atmosphere are normally made by scanning the antenna at a fixed elevation angle and subsequently incrementing the elevation angle in steps at each revolution. An important consideration is the resolution of the targets. Parabolic reflector antennas are used to focus the waves into a pencil-shaped beam. Larger reflectors create narrower beams, greater resolution and sensitivity, at increasing costs. The beamwidth, the angle subtended by the line between the two points on the beam where the power is one half that at the axis, is dependent on the wavelength, and may be approximated by:

θe = 70 λ / d   (9.4)

where the units of θe are degrees; and d is the antenna diameter in the same units as λ. Good weather radars have beamwidths of 0.5 to 1°. The useful range of weather radars, except for long-range detection only of thunderstorms, is of the order of 200 km. The beam at an elevation of, for example, 0.5° is at a height of 4 km above the Earth’s surface. Also, the beamwidth is of the order of 1.5 km or greater. For good quantitative precipitation measurements, the range is less than 200 km. At long ranges, the beam is too high for ground estimates. Also, beam spreading reduces resolution and the measurement can be affected by underfilling with target. Technically, there is a maximum unambiguous range determined by the pulse repetition frequency (equation 9.6), since the range must be measured during the listening period between pulses. At usual PRFs this is not a problem. For example, with a PRF of 250 pulses per second, the maximum range is 600 km. At higher PRFs, typically 1 000 pulses per second, required for Doppler systems, the range will be greatly reduced to about 150 km. New developments may ameliorate this situation (Joe, Passarelli and Siggia, 1995).

9.2.4 Doppler radar

The development of Doppler weather radars and their introduction to weather surveillance provide a new dimension to the observations (Heiss, McGrew and Sirmans, 1990). Doppler radar provides a measure of the targets’ velocity along a radial from the radar in a direction either towards or away from the radar. A further advantage of the Doppler technique is the greater effective sensitivity to low reflectivity targets near the radar noise level when the velocity field can be distinguished in a noisy Z field. At the normal speeds of meteorological targets, the frequency shift is relatively small compared with the radar frequency and is very difficult to measure. An easier task is to retain the phase of the transmitted pulse, compare it with the phase of the received pulse and then determine the change in phase between successive pulses. The time rate



of change of the phase is then directly related to the frequency shift, which in turn is directly related to the target velocity (the Doppler effect). If the phase changes by more than ±180°, the velocity estimate is ambiguous. The highest unambiguous velocity that can be measured by a Doppler radar is the velocity at which the target moves, between successive pulses, more than a quarter of the wavelength. At higher speeds, an additional processing step is required to retrieve the correct velocity. The maximum unambiguous Doppler velocity depends on the radar wavelength (λ) and the PRF, and can be expressed as:

Vmax = ± PRF · λ / 4   (9.5)

The maximum unambiguous range can be expressed as:

rmax = c / (2 · PRF)   (9.6)
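The opposing pulls of these two expressions can be checked numerically; a small sketch, using the nominal 10-cm S-band wavelength:

```python
C_LIGHT = 2.998e8  # speed of light, m s-1

def v_max(prf_hz, wavelength_m):
    """Maximum unambiguous Doppler velocity, Vmax = PRF * lambda / 4 (m s-1)."""
    return prf_hz * wavelength_m / 4.0

def r_max(prf_hz):
    """Maximum unambiguous range, rmax = c / (2 * PRF) (m)."""
    return C_LIGHT / (2.0 * prf_hz)
```

At a PRF of 1 000 Hz, a 10-cm radar gives Vmax = 25 m s–1 but rmax of only about 150 km; dropping the PRF to 250 Hz extends rmax to about 600 km at the cost of Vmax. For a given wavelength the product Vmax · rmax = λc/8 is fixed, whatever PRF is chosen.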

Thus, Vmax and rmax are related by the equation:

Vmax · rmax = ± λ c / 8   (9.7)

These relationships show the limits imposed by the selection of the wavelength and PRF. A high PRF is desirable to increase the unambiguous velocity; a low PRF is desirable to increase the radar range. A compromise is required until better technology is available to retrieve the information unambiguously outside these limits (Doviak and Zrnic, 1993; Joe, Passarelli and Siggia, 1995). The relationship also shows that the longer wavelengths have higher limits. In numerical terms, for a typical S-band radar with a PRF of 1 000 Hz, Vmax = ±25 m s–1, while for an X-band radar Vmax = ±8 m s–1. Because the frequency shift of the returned pulse is measured by comparing the phases of the transmitted and received pulses, the phase of the transmitted pulses must be known. In a non-coherent radar, the phase at the beginning of successive pulses is random and unknown, so such a system cannot be used for Doppler measurements; however, it can be used for the basic operations described in the previous section. Some Doppler radars are fully coherent; their transmitters employ very stable frequency sources, in which phase is determined and known from pulse to pulse. Semi-coherent radar systems, in which the phase of successive pulses is random but known, are cheaper and more common. Fully coherent radars typically employ klystrons in their high-power output amplifiers and have their receiver frequencies derived from the same source as their transmitters. This approach greatly reduces the phase instabilities found in semi-coherent systems, leading to improved ground clutter rejection and better discrimination of weak clear-air phenomena which might otherwise be masked. The microwave transmitter for non-coherent and semi-coherent radars is usually a magnetron, given that it is relatively simple, cheaper and provides generally adequate performance for routine observations. A side benefit of the magnetron is the reduction of the Doppler response to second- or third-trip echoes (echoes arriving from beyond the maximum unambiguous range) due to their random phase, although the same effect could be obtained in coherent systems by introducing known pseudo-random phase disturbances into the receiver and transmitter. Non-coherent radars can be converted relatively easily to a semi-coherent Doppler system. The conversion should also include the more stable coaxial-type magnetron. Both reflectivity factor and velocity data are extracted from the Doppler radar system. The target is typically a large number of hydrometeors (raindrops, snowflakes, ice pellets, hail, etc.) of all shapes and sizes, moving at different speeds due to the turbulent motion within the volume and due to their fall speeds. The velocity field is therefore a spectrum of velocities – the Doppler spectrum (Figure 9.2). Two systems of different complexity are used to process the Doppler parameters. The simpler pulse pair processing (PPP) system uses the comparison of successive pulses in the time domain to extract mean velocity and spectrum width. The second and more complex system uses a fast Fourier transform (FFT) processor to produce a full spectrum of velocities in each sample volume. The PPP system is faster, less computationally intensive and better at low signal-to-noise ratios, but has poorer clutter rejection characteristics than the FFT system. Modern systems try to use the best of both approaches by removing clutter using FFT techniques and subsequently using PPP to determine the radial velocity and spectral width.

9.2.5 Polarization diversity radars

Experiments with polarization diversity radars have been under way for many years to determine their potential for enhanced radar observations of the weather (Bringi and Hendry, 1990). Promising studies point towards the possibility of differentiating between hydrometeor types, a step to discriminating between rain, snow and hail. There are practical technical difficulties, and the techniques and applications have not progressed beyond the research stage to operational usage. The potential value of polarization diversity measurements for precipitation measurement would seem to lie in the fact that better drop size distribution and knowledge of the precipitation types would improve the measurements. Recent work at the United States National Severe Storms Laboratory (Melnikov and others, 2002) on adding polarimetric capability to the NEXRAD radar has demonstrated a robust engineering design utilizing simultaneous transmission and reception of both horizontally and vertically polarized pulses. The evaluation of polarimetric moments, and derived products for rainfall accumulation and hydrometeor classification, has shown that this design holds great promise as a basis for adding polarization diversity to the entire NEXRAD network. There are two basic radar techniques in current usage. One system transmits a circularly polarized wave, and the copolar and orthogonal polarizations are measured. The other system alternately transmits pulses with horizontal then vertical polarization utilizing a high-power switch. The linear system is generally preferred since meteorological information retrieval is less calculation intensive. The latter technique is more common as conventional radars are converted to have polarization capability. However, the former type of system has some distinct technological advantages. Various polarization bases (Holt, Chandra and Wood, 1995) and dual transmitter systems (Mueller and others, 1995) are in the experimental phase. The main differences in requirements from conventional radars relate to the quality of the antenna system, the accuracy of the electronic calibration and the signal processing. Matching the beams, switching polarizations and the measurement of small differences in signals are formidable tasks requiring great care when applying the techniques. The technique is based on micro-differences in the scattering particles. Spherical raindrops become elliptically shaped, with the major axis in the horizontal plane, when falling freely in the atmosphere. The oblateness of the drop is related to drop size. The power backscattered from an oblate spheroid is larger for a horizontally polarized wave than for a vertically polarized wave, assuming Rayleigh scattering. Using suitable assumptions, a drop size distribution can be inferred and thus a rainfall rate can be derived. The differential reflectivity, called ZDR, is defined as 10 times the logarithm of the ratio of the horizontally polarized reflectivity ZH to the vertically polarized reflectivity ZV. Comparisons of the equivalent reflectivity factor Ze and the differential reflectivity ZDR suggest that the target may be separated as being hail, rain, drizzle or snow (Seliga and Bringi, 1976).
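The ZDR definition just given is straightforward to compute once ZH and ZV are available in linear units; a minimal sketch:

```python
import math

def zdr_db(zh, zv):
    """Differential reflectivity ZDR = 10 log10(ZH / ZV), where ZH and ZV
    are the horizontally and vertically polarized reflectivities in
    linear units (mm6 m-3)."""
    return 10.0 * math.log10(zh / zv)
```

Near-spherical scatterers return ZDR close to 0 dB, while oblate raindrops return positive values; the threshold values used to separate hydrometeor classes in practice are system- and study-dependent and are not specified here.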


Figure 9.2. The Doppler spectrum of a weather echo and a ground target. The ground target contribution is centred on zero and is much narrower than the weather echo.
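The mean velocity of the weather part of such a spectrum is, in the pulse pair (PPP) approach described in section 9.2.4, obtained from the phase of the lag-one autocovariance of the I/Q sample series at a range gate. A minimal sketch; the sign convention (positive velocity for increasing phase) is an assumption, and real processors add clutter filtering and noise handling:

```python
import cmath
import math

def pulse_pair_velocity(iq, prf_hz, wavelength_m):
    """Pulse-pair estimate of mean radial velocity (m s-1) from complex
    I/Q samples taken at one range gate, one sample per pulse.

    The phase of the lag-one autocovariance R1 gives the mean Doppler
    shift; velocities are unambiguous only within +/- PRF * lambda / 4."""
    r1 = sum(iq[n + 1] * iq[n].conjugate() for n in range(len(iq) - 1))
    return wavelength_m * prf_hz * cmath.phase(r1) / (4.0 * math.pi)
```

A synthetic series with a uniform phase advance per pulse is recovered exactly, provided the true velocity lies inside the Nyquist interval.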



As an electromagnetic wave propagates through a medium with oblate particles, the phase of the incident beam is altered. The effect on the vertical and horizontal phase components depends on the oblateness and is embodied in a parameter termed the specific differential phase (KDP). For heavy rainfall measurements, KDP has certain advantages (Zrnic and Ryzhkov, 1995). English and others (1991) demonstrated that the use of KDP for rainfall estimation is much better than Z for rainfall rates greater than about 20 mm hr–1 at the S-band. Propagation effects on the incident beam due to the intervening medium can dominate target backscatter effects and confound the interpretation of the resulting signal. Bebbington (1992) designed a parameter for a circularly polarized radar, termed the degree of polarization, which was insensitive to propagation effects. This parameter is similar to linear correlation for linearly polarized radars. It appears to have value in target discrimination. For example, extremely low values are indicative of scatterers that are randomly oriented, such as those caused by airborne grass or ground clutter (Holt and others, 1993).

9.2.6 Ground clutter rejection

used to generate a clutter map that is subtracted from the radar pattern collected in precipitating conditions. The problem with this technique is that the pattern of ground clutter changes over time. These changes are primarily due to changes in meteorological conditions; a prime example is anomalous propagation echoes that last several hours and then disappear. Micro-changes to the environment cause small fluctuations in the pattern of ground echoes which confound the use of clutter maps. Adaptive techniques (Joss and Lee, 1993) attempt to determine dynamically the clutter pattern to account for the short-term fluctuations, but they are not good enough to be used exclusively, if at all. Doppler processing techniques attempt to remove the clutter from the weather echo from a signalprocessing perspective. The basic assumption is that the clutter echo is narrow in spectral width and that the clutter is stationary. However, to meet these first criteria, a sufficient number of pulses must be acquired and processed in order to have sufficient spectral resolution to resolve the weather from the clutter echo. A relatively large Nyquist interval is also needed so that the weather echo can be resolved. The spectral width of ground clutter and weather echo is generally much less than 1–2 m s–1 and greater than 1–2 m s–1, respectively. Therefore, Nyquist intervals of about 8 m s–1 are needed. Clutter is generally stationary and is identified as a narrow spike at zero velocity in the spectral representation (Figure 9.2). The spike has finite width because the ground echo targets, such as swaying trees, have some associated motions. Time domain processing to remove the zero velocity (or DC) component of a finite sequence is problematic since the filtering process will remove weather echo at zero velocity as well (zrnic and Hamidi, 1981). 
Adaptive spectral (Fourier transform) processing can remove the ground clutter from the weather echoes even if they are overlapped (Passarelli and others, 1981; Crozier and others, 1991). This is a major advantage of spectral processing. Once the clutter echo has been stripped out, the significant meteorological parameters can be computed. An alternative approach takes advantage of the observation that structures contributing to ground clutter are very small in scale (less than, for example, 100 m). Range sampling is carried out at a very fine resolution (less than 100 m) and clutter is identified using reflectivity and Doppler signal processing. Range averaging (to a final resolution of 1 km) is performed with clutter-free range bins. The philosophy is to detect and ignore range bins with clutter, rather than to correct for

Echoes due to non-precipitation targets are known as clutter, and should be eliminated. Echoes caused by clear air or insects, which can be used to map out wind fields, are an exception. Clutter can be the result of a variety of targets, including buildings, hills, mountains, aircraft and chaff, to name just a few. Good radar siting is the first line of defence against ground clutter effects. However, clutter is always present to some extent. The intensity of ground clutter is inversely proportional to wavelength (Skolnik, 1970), whereas backscatter from rain is inversely proportional to the fourth power of wavelength. Therefore, shorter wavelength radars are less affected by ground clutter. Point targets, like aircraft, can be eliminated, if they are isolated, by removing echoes that occupy a single radar resolution volume. Weather targets are distributed over several radar resolution volumes. The point targets can be eliminated during the data-processing phase. Point targets, like aircraft echoes, embedded within precipitation echoes may not be eliminated with this technique, depending on relative strength. Distributed targets require more sophisticated signal and data-processing techniques. A conceptually attractive idea is to use clutter maps. The patterns of radar echoes in non-precipitating conditions are

CHAPTER 9. RADAR MEASUREMENTS


the clutter (Joss and Lee, 1993; Lee, Della Bruna and Joss, 1995). This is radically different from the previously discussed techniques and it remains to be seen whether the technique will be effective in all situations, in particular in anomalous propagation situations where the clutter is widespread. Polarization radars can also identify clutter. However, more work is needed to determine their advantages and disadvantages. Clutter can be reduced by careful site selection (see section 9.7). Radars used for long-range surveillance, such as for tropical cyclones or in a widely scattered network, are usually placed on hilltops to extend the useful range, and are therefore likely to see many clutter echoes. A simple suppression technique is to scan automatically at several elevations, and to discard the data at the shorter ranges from the lower elevations, where most of the clutter exists. By processing the radar data into CAPPI products, low elevation data is rejected automatically at short ranges.

to strike the Earth and cause ground echoes not normally encountered. The phenomenon occurs when the index of refraction decreases rapidly with height, for example, an increase in temperature and a decrease in moisture with height. These echoes must be dealt with in producing a precipitation map. This condition is referred to as anomalous propagation (AP or ANAPROP). Some “clear air” echoes are due to turbulent inhomogeneities in the refractive index found in areas of turbulence, layers of enhanced stability, wind shear cells, or strong inversions. These echoes usually occur in patterns, mostly recognizable, but must be eliminated as precipitation fields (Gossard and Strauch, 1983).

9.3.2 Attenuation in the atmosphere

Microwaves are subject to attenuation by absorption and scattering owing to atmospheric gases, clouds and precipitation.

Attenuation by gases
Gases attenuate microwaves in the 3–10 cm bands. Absorption by atmospheric gases is due mainly to water vapour and oxygen molecules. Attenuation by water vapour is directly proportional to the pressure and absolute humidity and increases almost linearly with decreasing temperature. The concentration of oxygen, to altitudes of 20 km, is relatively uniform. Attenuation by oxygen is also proportional to the square of the pressure. Attenuation by gases varies slightly with the climate and the season. It is significant at weather radar wavelengths over the longer ranges and can amount to 2 to 3 dB at the longer wavelengths and 3 to 4 dB at the shorter wavelengths, over a range of 200 km. Compensation seems worthwhile and can be quite easily accomplished automatically. Attenuation can be computed as a function of range on a seasonal basis for the ray paths used in precipitation measurement and applied as a correction to the precipitation field.

Attenuation by hydrometeors
Attenuation by hydrometeors can result from both absorption and scattering. It is the most significant source of attenuation. It depends on the shape, size, number and composition of the particles. This dependence has made it very difficult to overcome in any quantitative way using radar observations alone. It has not yet been satisfactorily overcome for automated operational measurement systems. However, the phenomenon must be recognized and


9.3 Propagation and scattering of radar signals

Electromagnetic waves propagate in straight lines, in a homogeneous medium, with the speed of light. The Earth’s atmosphere is not homogeneous and microwaves undergo refraction, absorption and scattering along their path. The atmosphere is usually vertically stratified and the rays change direction depending on the changes in height of the refractive index (or temperature and moisture). When the waves encounter precipitation and clouds, part of the energy is absorbed and a part is scattered in all directions or back to the radar site.

9.3.1 Refraction in the atmosphere

The amount of bending of electromagnetic waves can be predicted by using the vertical profile of temperature and moisture (Bean and Dutton, 1966). Under normal atmospheric conditions, the waves travel in a curve bending slightly earthward. The ray path can bend either upwards (sub-refraction) or more earthward (super-refraction). In either case, the altitude of the beam will be in error if the standard atmosphere is assumed. From a precipitation measurement standpoint, the greatest problem occurs under super-refractive or “ducting” conditions. The ray can bend sufficiently



the effects reduced by some subjective intervention using general knowledge. Attenuation is dependent on wavelength. At 10 cm wavelengths, the attenuation is rather small, while at 3 cm it is quite significant. At 5 cm, the attenuation may be acceptable for many climates, particularly in the high mid-latitudes. Wavelengths below 5 cm are not recommended for good precipitation measurement except for short-range applications (Table 9.5).

Table 9.5. One-way attenuation relationships

Wavelength (cm)    Relation (dB km–1)
10                 0.000 343 R^0.97
5                  0.001 8 R^1.05
3.2                0.01 R^1.21
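The relationships in Table 9.5 can be applied directly. The following sketch evaluates the one-way specific attenuation at a rainfall rate of 50 mm hr–1 and the resulting two-way loss over a 20 km rain-filled path; the rain rate and path length are chosen arbitrarily for illustration.

```python
# One-way specific attenuation k = a * R**b (dB/km), coefficients after Table 9.5
COEFFS = {10.0: (0.000343, 0.97), 5.0: (0.0018, 1.05), 3.2: (0.01, 1.21)}

def specific_attenuation(wavelength_cm, rain_rate):
    """One-way attenuation in dB/km for a rain rate R in mm/hr."""
    a, b = COEFFS[wavelength_cm]
    return a * rain_rate ** b

R, path_km = 50.0, 20.0
for wl in (10.0, 5.0, 3.2):
    k = specific_attenuation(wl, R)
    two_way = 2.0 * k * path_km  # total two-way loss over the path (dB)
    print(f"{wl:4.1f} cm: {k:.3f} dB/km one-way, {two_way:.1f} dB two-way")
```

The two-way loss at 3.2 cm comes out near 45 dB for this path, against well under 1 dB at 10 cm, which illustrates numerically why wavelengths below 5 cm are not recommended except at short range.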

which is the justification for equation 9.3. |K|², the refractive index factor, is equal to 0.93 for liquid water and 0.197 for ice. The radar power measurements are used to derive the scattering intensity of the target by using equation 9.2 in the form:


z = C P̄r r² / |K|²

where P̄r is the received power, r the range and C the radar constant.


The method and problems of interpreting the reflectivity factor in terms of precipitation rate (R) are discussed in section 9.9.

9.3.4 Scattering in clear air

Note to Table 9.5: after Burrows and Attwood (1949); one-way specific attenuations at 18°C; R is in units of mm hr–1.

For precipitation estimates by radar, some general statements can be made with regard to the magnitude of attenuation. Attenuation is dependent on the water mass of the target, thus heavier rains attenuate more; clouds with much smaller mass attenuate less. Ice particles attenuate much less than liquid particles. Clouds and ice clouds cause little attenuation and can usually be ignored. Snow or ice particles (or hailstones) can grow much larger than raindrops. They become wet as they begin to melt and result in a large increase in reflectivity and, therefore, in attenuation properties. This can distort precipitation estimates.

9.3.3 Scattering by clouds and precipitation

In regions without precipitating clouds, it has been found that echoes are mostly due to insects or to strong gradients of refractive index in the atmosphere. The echoes are of very low intensity and are detected only by very sensitive radars. Equivalent Ze values for clear-air phenomena generally appear in the range of –5 to –55 dBZ, although these are not true Z parameters, since the physical process generating the echoes is entirely different. For precipitation measurement, these echoes are a minor “noise” in the signal. They can usually be associated with some meteorological phenomenon, such as a sea breeze or thunderstorm outflows. Clear-air echoes can also be associated with birds and insects in very low concentrations. Echo strengths of 5 to 35 dBZ are not unusual, especially during migrations (Table 9.6).

Table 9.6. Typical backscatter cross-sections for various targets

Object                       σb (m²)
Aircraft                     10 to 1 000
Human                        0.14 to 1.05
Weather balloon              0.01
Birds                        0.001 to 0.01
Bees, dragonflies, moths     3 × 10–6 to 10–5
2 mm water drop              1.8 × 10–10

The signal power detected and processed by the radar (namely, echo) is power backscattered by the target, or by hydrometeors. The backscattering cross-section (σb) is defined as the area of an isotropic scatterer that would return to the emitting source the same amount of power as the actual target. The backscattering cross-section of spherical particles was first determined by Mie (1908). Rayleigh found that, if the ratio of the particle diameter to the wavelength was equal to or less than 0.06, a simpler expression could be used to determine the backscatter cross-section:

σb = π⁵ |K|² D⁶ / λ⁴

where D is the particle diameter and λ the wavelength.
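As a numerical check, the Rayleigh expression reproduces the 2 mm water drop entry of Table 9.6 when evaluated at a 10 cm wavelength, using |K|² = 0.93 for liquid water as given earlier. Note that D/λ = 0.02 here, well within the Rayleigh criterion of 0.06.

```python
import math

def rayleigh_backscatter(diameter_m, wavelength_m, k2=0.93):
    """Rayleigh backscattering cross-section (m^2) for a small sphere."""
    return math.pi ** 5 * k2 * diameter_m ** 6 / wavelength_m ** 4

sigma = rayleigh_backscatter(0.002, 0.10)  # 2 mm drop at a 10 cm wavelength
print(f"{sigma:.2e} m^2")  # of the order of 1.8e-10 m^2, as in Table 9.6
```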



Although normal radar processing would interpret the signal in terms of Z or R, the scattering properties of the clear atmosphere are quite different from those of hydrometeors. The strength of clear-air scattering is most often expressed in terms of the structure parameter of the refractive index, Cn². This is a measure of the mean-square fluctuations



of the refractive index as a function of distance (Gossard and Strauch, 1983).

9.4 Velocity measurements

9.4.1 The Doppler spectrum

Doppler radars measure velocity by estimating the frequency shift produced by an ensemble of moving targets. Doppler radars also provide information about the total power returned and about the spectrum width of the precipitation particles within the pulse volume. The mean Doppler velocity is equal to the mean motion of scatterers weighted by their cross-sections and, for near horizontal antenna scans, is essentially the air motion towards or away from the radar. Likewise, the spectrum width is a measure of the velocity dispersion, that is, the shear or turbulence within the resolution volume. A Doppler radar measures the phase of the returned signal by referencing the phase of the received signal to the transmitter. The phase is measured in rectangular form by producing the in-phase (I) and quadrature (Q) components of the signal. The I and Q are samples at a fixed range location. They are collected and processed to obtain the mean velocity and spectrum width. 9.4.2 Doppler ambiguities

and Zrnic, 1993) or continuity techniques (Eilts and Smith, 1990). In the former, radial velocity estimates are collected at two different PRFs with different maximum unambiguous velocities and are combined to yield a new estimate of the radial velocity with an extended unambiguous velocity. For example, a C band radar using PRFs of 1 200 and 900 Hz has nominal unambiguous velocities of 16 and 12 m s–1, respectively. The amount of aliasing can be deduced from the difference between the two velocity estimates to dealias the velocity to an extended velocity range of ±48 m s–1 (Figure 9.3). Continuity techniques rely on having sufficient echo to discern that there are aliased velocities and correcting them by assuming velocity continuity (no discontinuities of greater than 2Vmax). There is a range limitation imposed by the use of high PRFs (greater than about 1 000 Hz) as described in section 9.2. Echoes beyond the maximum range will be aliased back into the primary range. For radars with coherent transmitters (e.g., klystron systems), the echoes will appear within the primary range. For coherent-on-receive systems, the second trip echoes will appear as noise (Joe, Passarelli and Siggia, 1995; Passarelli and others, 1981).

9.4.3 Vertically pointing measurements

To detect returns at various ranges from the radar, the returning signals are sampled periodically, usually about every microsecond, to obtain information about every 150 m in range. This sampling can continue until it is time to transmit the next pulse. A sample point in time (corresponding to a distance from the radar) is called a range gate. The radial wind component throughout a storm or precipitation area is mapped as the antenna scans. A fundamental problem with the use of any pulse Doppler radar is the removal of ambiguity in Doppler mean velocity estimates, that is, velocity folding. Discrete equi-spaced samples of a time-varying function result in a maximum unambiguous frequency equal to one half of the sampling frequency (fs). Subsequently, frequencies greater than fs/2 are aliased (“folded”) into the Nyquist co-interval (±fs/2) and are interpreted as velocities within ±λfs/4, where λ is the wavelength of transmitted energy. Techniques to dealias the velocities include dual PRF techniques (Crozier and others, 1991; Doviak

In principle, a Doppler radar operating in the vertically-pointing mode is an ideal tool for obtaining accurate cloud-scale measurements of vertical wind speeds and drop-size distributions (DSDs). However, the accuracy of vertical velocities and DSDs derived from the Doppler spectra has been limited by the strong mathematical interdependence of the two quantities. The real difficulty is that the Doppler spectrum is measured as a function of the scatterers’ total vertical velocity, that is, the terminal hydrometeor fall speeds plus updrafts or downdrafts. In order to compute the DSD from a Doppler spectrum taken at vertical incidence, the spectrum must be expressed as a function of terminal velocity alone. Errors of only ±0.25 m s–1 in vertical velocity can cause errors of 100 per cent in drop number concentrations (Atlas, Srivastava and Sekhon, 1973). A dual-wavelength technique has been developed (termed the Ratio method) by which vertical air velocity may be accurately determined independently of the DSD. In this approach, there is a trade-off between potential accuracy and potential for successful application.

9.4.4 Measurement of velocity fields

A great deal of information can be determined in real time from a single Doppler radar. It should be noted that the interpretation of radial velocity estimates from a single radar is not always unambiguous. Colour displays of single-Doppler radial velocity patterns aid in the real-time interpretation of the associated reflectivity fields and can reveal important features not evident in the reflectivity structures alone (Burgess and Lemon, 1990). Such a capability is of particular importance in the identification and tracking of severe storms. On typical colour displays, velocities between ±Vmax are assigned one of typically 8 to 15 or more colours. Velocities extending beyond the Nyquist interval enter the scale of colours at the opposite end. This process may be repeated if the velocities are aliased by more than one Nyquist interval. Doppler radar can also be used to derive vertical profiles of horizontal winds. When the radar’s antenna is tilted above the horizontal, increasing range implies increasing height. A profile of wind with height can be obtained by sinusoidal curve-fitting to the observed data (termed velocity azimuth display (VAD) after Lhermitte and Atlas, 1961) if the wind is relatively uniform over the area of the scan. The winds along the zero radial velocity contour are perpendicular to the radar beam axis. The colour display may be used to easily interpret VAD data obtained from large-scale precipitation systems. Typical elevated conical scan patterns in widespread

precipitation reveal an S-shaped zero radial velocity contour as the mean wind veers with height (Wood and Brown, 1986). On other occasions, closed contours representing jets are evident. Since the measurement accuracy is good, divergence estimates can also be obtained by employing the VAD technique. This technique cannot be accurately applied during periods of convective precipitation around the radar. However, moderately powerful, sensitive Doppler radars have successfully obtained VAD wind profiles and divergence estimates in the optically clear boundary layer during all but the coldest months, up to heights of 3 to 5 km above ground level. The VAD technique seems well suited for winds from precipitation systems associated with extratropical and tropical cyclones. In the radar’s clear-air mode, a time series of measurements of divergence and derived vertical velocity is particularly useful in nowcasting the probability of deep convection. Since the mid-1970s, experiments have been made for measuring three-dimensional wind fields using multiple Doppler arrays. Measurements taken at a given location inside a precipitation area may be combined, by using a proper geometrical transformation, in order to obtain the three wind components. Such estimations are also possible with only two radars, using the continuity equation. Kinematic analysis of a wind field is described in Browning and Wexler (1968).
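The VAD principle described above can be sketched numerically: for a uniform horizontal wind, the radial velocity around a conical scan is a sinusoid in azimuth, and its first harmonic yields the wind components. This is a minimal sketch with a synthetic, noise-free wind; operational VAD processing must also handle data gaps, outliers and aliasing.

```python
import math

N_AZ = 36                      # evenly spaced azimuths around one scan
ELEV = math.radians(1.0)       # antenna elevation angle
U, V = 5.0, 10.0               # synthetic wind components (m/s), east/north

az = [2.0 * math.pi * i / N_AZ for i in range(N_AZ)]
# Radial velocity seen by the radar for a horizontal wind (vertical motion ignored)
vr = [(U * math.sin(a) + V * math.cos(a)) * math.cos(ELEV) for a in az]

# First-harmonic (Fourier) coefficients of the azimuth series
a1 = 2.0 / N_AZ * sum(x * math.sin(a) for x, a in zip(vr, az))
b1 = 2.0 / N_AZ * sum(x * math.cos(a) for x, a in zip(vr, az))

u_est, v_est = a1 / math.cos(ELEV), b1 / math.cos(ELEV)
speed = math.hypot(u_est, v_est)   # recovered wind speed
```

The zeroth harmonic (azimuthal mean) of the same series, not computed here, is what the divergence estimates mentioned in the text are derived from.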

(Figure 9.3 plots measured velocity or velocity difference (m s–1) against actual velocity (m s–1).)

Figure 9.3. Solid and dashed lines show Doppler velocity measurements taken with two different pulse repetition frequencies (1 200 and 900 Hz for a C band radar). Speeds greater than the maximum unambiguous velocities are aliased. The differences (dotted line) between the Doppler velocity estimates are distinct and can be used to identify the degree of aliasing.
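The dual-PRF scheme of Figure 9.3 can be sketched as follows. The unambiguous velocity for each PRF follows from Vmax = λ·PRF/4; sampling the same true velocity with both PRFs and comparing the two folded estimates identifies the fold numbers. This is a simplified sketch for noise-free data; operational dealiasing must also apply spatial continuity checks.

```python
def vmax(wavelength_m, prf_hz):
    """Nyquist (unambiguous) velocity of a pulse Doppler radar (m/s)."""
    return wavelength_m * prf_hz / 4.0

WL = 0.0533                                   # C band wavelength (m)
V1, V2 = vmax(WL, 1200.0), vmax(WL, 900.0)    # ~16 and ~12 m/s, as in the text

def alias(v, vm):
    """Fold a true velocity into the Nyquist interval [-vm, +vm)."""
    return (v + vm) % (2.0 * vm) - vm

def dealias(m1, m2, vm1, vm2):
    """Return the unfolded velocity on which both PRF estimates agree.
    Assumes vm1 > vm2; the fold-number ranges cover the extended
    unambiguous interval of roughly +/- vm1*vm2/(vm1 - vm2)."""
    best = None
    for n1 in (-1, 0, 1):
        c1 = m1 + 2.0 * n1 * vm1
        for n2 in (-2, -1, 0, 1, 2):
            c2 = m2 + 2.0 * n2 * vm2
            if best is None or abs(c1 - c2) < best[0]:
                best = (abs(c1 - c2), 0.5 * (c1 + c2))
    return best[1]

true_v = 30.0                                 # beyond both Nyquist velocities
m1, m2 = alias(true_v, V1), alias(true_v, V2)
recovered = dealias(m1, m2, V1, V2)           # back to ~30 m/s
```

For this PRF pair the extended interval V1·V2/(V1 − V2) is close to 48 m s–1, consistent with the ±48 m s–1 quoted in the text.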




9.5 Sources of error

Radar beam filling
In many cases, and especially at large ranges from the radar, the pulse volume is not completely filled with homogeneous precipitation. Precipitation intensities often vary widely on small scales; at large distances from the radar, the pulse volume increases in size. At the same time, the effects of the Earth’s curvature become significant. In general, measurements may be quantitatively useful for ranges of less than 100 km. This effect is important for cloud-top height measurements and the estimation of reflectivity.

Non-uniformity of the vertical distribution of precipitation
The first parameter of interest when taking radar measurements is usually precipitation at ground level. Because of the effects of beam width, beam tilting and the Earth’s curvature, radar measurements of precipitation are averages over a considerable depth rather than values at the surface. These measurements are dependent on the details of the vertical distribution of precipitation and can contribute large errors to estimates of precipitation on the ground.

Variations in the Z-R relationship
A variety of Z-R relationships have been found for different precipitation types. However, from the radar alone (except for dual-polarized radars) these variations in the types and size distributions of hydrometeors cannot be estimated. In operational applications, this variation can be a significant source of error.

Attenuation by intervening precipitation
Attenuation by rain may be significant, especially at the shorter radar wavelengths (5 and 3 cm). Attenuation by snow, although less than for rain, may be significant over long path lengths.

Beam blocking
Depending on the radar installation, the radar beam may be partly or completely occulted by the topography or obstacles located between the radar and the target. This results in underestimations of reflectivity and, hence, of rainfall rate. 
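The size of the Z-R error noted above can be illustrated by comparing two widely used relations: the Marshall-Palmer relation Z = 200 R^1.6 and a relation often used for convective rain, Z = 300 R^1.4. Neither relation is prescribed by the text here; they are quoted only to show the spread that different precipitation types produce.

```python
def rain_rate(dbz, a, b):
    """Invert Z = a * R**b for the rain rate R (mm/hr), given Z in dBZ."""
    z = 10.0 ** (dbz / 10.0)   # convert dBZ to linear reflectivity factor
    return (z / a) ** (1.0 / b)

dbz = 50.0                         # a heavy-rain reflectivity
r_mp = rain_rate(dbz, 200.0, 1.6)    # Marshall-Palmer: ~49 mm/hr
r_conv = rain_rate(dbz, 300.0, 1.4)  # convective relation: ~63 mm/hr
# The same measured reflectivity maps to rain rates differing by ~30%.
```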
Attenuation due to a wet radome
Most radar antennas are protected from wind and rain by a radome, usually made of fibreglass. The

radome is engineered to cause little loss in the radiated energy. For instance, the two-way loss due to this device can easily be kept to less than 1 dB at the C band, under normal conditions. However, under intense rainfall, the surface of the radome can become coated with a thin film of water or ice, resulting in a strong azimuth-dependent attenuation. Experience with the NEXRAD WSR-88D radars shows that coating radomes with a special hydrophobic paint essentially eliminates this source of attenuation, at least at 10 cm wavelengths.

Electromagnetic interference
Electromagnetic interference from other radars or devices, such as microwave links, may be an important source of error in some cases. This type of problem is easily recognized by observation. It may be solved by negotiation, by changing frequency, by using filters in the radar receiver, and sometimes by software.

Ground clutter
The contamination of rain echoes by ground clutter may cause very large errors in precipitation and wind estimation. Ground clutter should first be minimized by good antenna engineering and a good choice of radar location. This effect may be greatly reduced by a combination of hardware clutter suppression devices (Aoyagi, 1983) and signal and data processing. Ground clutter is greatly increased in situations of anomalous propagation.

Anomalous propagation
Anomalous propagation distorts the radar beam path and has the effect of increasing ground clutter by refracting the beam towards the ground. It may also cause the radar to detect storms located far beyond the usual range, introducing errors in their range determination because of range aliasing. Anomalous propagation is frequent in some regions, when the atmosphere is subject to strong decreases in humidity and/or increases in temperature with height. 
Clutter returns owing to anomalous propagation may be very misleading to untrained human observers and are more difficult to eliminate fully by processing than normal ground clutter.

Antenna accuracy
The antenna position may be known to within 0.2° with a well-engineered system. Errors may also be produced by the excessive width of the radar beam



or by the presence of sidelobes, in the presence of clutter or of strong precipitation echoes.

Electronics stability
Modern electronic systems are subject to small variations with time. This may be controlled by using a well-engineered monitoring system, which will keep the variations of the electronics to within less than 1 dB, or activate an alarm when a fault is detected.

Processing accuracy
The signal processing must be designed to optimize the sampling capacities of the system. The variances in the estimation of reflectivity, Doppler velocity and spectrum width must be kept to a minimum. Range and velocity aliasing may be significant sources of error.

Radar range equation
There are many assumptions made when interpreting radar-received power measurements in terms of the meteorological parameter Z by means of the radar range equation. Non-conformity with the assumptions can lead to error.

due both to an increase in the amount of material and to the difficulty in meeting tolerances over a greater size. Within the bands of weather radar interest (S, C, X and K), the sensitivity of the radar or its ability to detect a target is strongly dependent on the wavelength. It is also significantly related to antenna size, gain and beamwidth. For the same antenna, the target detectability increases with decreasing wavelength. There is an increase in sensitivity of 8.87 dB in theory and 8.6 dB in practice from 5 to 3 cm wavelengths. Thus, the shorter wavelengths provide better sensitivity. At the same time, the beamwidth is narrower for better resolution and gain. The great disadvantage is that smaller wavelengths have much larger attenuation.

9.6.3 Attenuation


9.6 Optimizing radar characteristics

Radar rays are attenuated most significantly in rain, less in snow and ice, and even less in clouds and atmospheric gases. In broad terms, attenuation at the S band is relatively small and generally not too significant. The S band radar, despite its cost, is essential for penetrating the very high reflectivities in mid-latitude and subtropical severe storms with wet hail. X-band radars can be subject to severe attenuation over short distances, and they are not suitable for precipitation rate estimates, or even for surveillance, except at very short range when shadowing or obliteration of more distant storms by nearer storms is not important. The attenuation in the C band lies between the two.

9.6.4 Transmitter power


9.6.1 Selecting a radar

A radar is a highly effective observation system. The characteristics of the radar and the climatology determine the effectiveness for any particular application. No single radar can be designed to be the most effective for all applications. Characteristics can be selected to maximize the proficiency to best suit one or more applications, such as tornado detection. Most often, for general applications, compromises are made to meet several user requirements. Many of the characteristics are interdependent with respect to performance and, hence, the need for optimization in reaching a suitable specification. Cost is a significant consideration. Much of the interdependence can be visualized by reference to the radar range equation. A brief note on some of the important factors follows. 9.6.2 Wavelength

Target detectability is directly related to the peak power output of the radar pulse. However, there are practical limits to the amount of power output that is dictated by power tube technology. Unlimited increases in power are not the most effective means of increasing the target detectability. For example, doubling the power only increases the system sensitivity by 3 dB. Technically, the maximum possible power output increases with wavelength. Improvements in receiver sensitivity, antenna gain, or choice of wavelength may be better means of increasing detection capability. Magnetrons and klystrons are common power tubes. Magnetrons cost less but are less frequency stable. For Doppler operation, the stability of klystrons was thought to be mandatory. An analysis by Strauch (1981) concluded that magnetrons could be quite effective for general meteorological applications; many Doppler radars today are based on magnetrons. Ground echo rejection techniques and clear air detection applications may favour

The larger the wavelength, the greater the cost of the radar system, particularly antenna costs for comparable beamwidths (i.e. resolution). This is



klystrons. On the other hand, magnetron systems simplify rejecting second trip echoes. At normal operating wavelengths, conventional radars should detect rainfall intensities of the order of 0.1 mm h–1 at 200 km and have peak power outputs of the order of 250 kW or greater in the C band. 9.6.5 Pulse length


9.6.7 Antenna system, beamwidth, speed and gain

Weather radars normally use a horn fed antenna with a parabolic reflector to produce a focused narrow conical beam. Two important considerations are the beamwidth (angular resolution) and the power gain. For common weather radars, the size of the antenna increases with wavelength and with the narrowness of the beam required. Weather radars normally have beamwidths in the range of 0.5 to 2.0°. For a 0.5 and 1.0° beam at a C band wavelength, the antenna reflector diameter is 7.1 and 3.6 m, respectively; at S band it is 14.3 and 7.2 m. The cost of the antenna system and pedestal increases much more than linearly with reflector size. There is also an engineering and cost limit. The tower must also be appropriately chosen to support the weight of the antenna. The desirability of having a narrow beam to maximize the resolution and enhance the possibility of having the beam filled with target is particularly critical for the longer ranges. For a 0.5° beam, the azimuthal (and vertical) cross-beam width at 50, 100 and 200 km range is 0.4, 0.9 and 1.7 km, respectively. For a 1.0° beam, the widths are 0.9, 1.7 and 3.5 km. Even with these relatively narrow beams, the beamwidth at the longer ranges is substantially large. The gain of the antenna is also inversely proportional to the beamwidth and thus, the narrower beams also enhance system sensitivity by a factor equal to differential gain. The estimates of reflectivity and precipitation require a nominal minimal number of target hits to provide an acceptable measurement accuracy. The beam must thus have a reasonable dwell time on the target in a rotating scanning mode of operation. Thus, there are limits to the antenna rotation speed. Scanning cycles cannot be decreased without consequences. For meaningful measurements of distributed targets, the particles must have sufficient time to change their position before an independent estimate can be made. Systems generally scan at the speed range of about 3 to 6 rpm. 
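The antenna-size and cross-beam figures quoted above follow from two standard approximations: the half-power beamwidth of a parabolic reflector, θ ≈ 70 λ/D (in degrees), and the linear cross-beam width w ≈ r·θ at range r. This is a sketch; the factor 70 is a typical value that depends on the antenna illumination taper. The last line also reproduces the 8.87 dB sensitivity figure quoted in section 9.6.2.

```python
import math

def beamwidth_deg(wavelength_m, diameter_m, taper=70.0):
    """Approximate half-power beamwidth of a parabolic antenna (degrees)."""
    return taper * wavelength_m / diameter_m

def cross_beam_width_km(range_km, beamwidth_deg_):
    """Linear beam width at a given range (small-angle approximation)."""
    return range_km * math.radians(beamwidth_deg_)

# C band (5.33 cm): a 7.1 m dish gives ~0.5 deg, a 3.6 m dish ~1.0 deg
bw_half = beamwidth_deg(0.0533, 7.1)
bw_one = beamwidth_deg(0.0533, 3.6)

# A 0.5 deg beam is ~0.9 km wide at 100 km range
w100 = cross_beam_width_km(100.0, 0.5)

# Rayleigh backscatter scales as wavelength**-4, hence 40*log10(5/3) dB
sens_gain_db = 40.0 * math.log10(5.0 / 3.0)   # ~8.87 dB from 5 to 3 cm
```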
Most weather radars are linearly polarized with the direction of the electric field vector transmitted being either horizontal or vertical. The choice is not clear cut, but the most common polarization is horizontal. Reasons for favouring horizontal polarization include: (a) sea and ground echoes are generally less with horizontal; (b) lesser sidelobes

The pulse length determines the target resolving power of the radar in range. The range resolution or the ability of the radar to distinguish between two discrete targets is proportional to the half pulse length in space. For most klystrons and magnetrons, the maximum ratio of pulse width to PRF is about 0.001. Common pulse lengths are in the range of 0.3 to 4 µs. A pulse length of 2 µs has a resolving power of 300 m, and a pulse of 0.5 µs can resolve 75 m. Assuming that the pulse volume is filled with target, doubling the pulse length increases the radar sensitivity by 6 dB with receiver-matched filtering, while decreasing the resolution; decreasing the pulse length decreases the sensitivity while increasing the resolution. Shorter pulse lengths allow more independent samples of the target to be acquired in range and the potential for increased accuracy of estimate. 9.6.6 Pulse repetition frequency

The PRF should be as high as practical to obtain the maximum number of target measurements per unit time. A primary limitation of the PRF is the unwanted detection of second trip echoes. Most conventional radars have unambiguous ranges beyond the useful range of weather observation by the radar. An important limit on weather target useful range is the substantial height of the beam above the Earth even at ranges of 250 km. For Doppler radar systems, high PRFs are used to increase the Doppler unambiguous velocity measurement limit. The disadvantages of higher PRFs are noted above. The PRF factor is not a significant cost consideration but has a strong bearing on system performance. Briefly, high PRFs are desirable to increase the number of samples measured, to increase the maximum unambiguous velocity that can be measured, and to allow higher permissible scan rates. Low PRFs are desirable to increase the maximum unambiguous range that can be measured, and to provide a lower duty cycle.
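The "substantial height of the beam above the Earth even at ranges of 250 km" mentioned above can be quantified with the standard 4/3 effective Earth-radius model. This is a sketch assuming standard refraction, a smooth Earth and a radar at the surface.

```python
import math

EFFECTIVE_RADIUS_KM = 4.0 / 3.0 * 6371.0   # 4/3 Earth-radius model

def beam_height_km(range_km, elev_deg):
    """Height of the beam centre above the radar under standard refraction."""
    re = EFFECTIVE_RADIUS_KM
    phi = math.radians(elev_deg)
    return (math.sqrt(range_km ** 2 + re ** 2
                      + 2.0 * range_km * re * math.sin(phi)) - re)

h250 = beam_height_km(250.0, 0.0)   # ~3.7 km even at zero elevation
h100 = beam_height_km(100.0, 0.0)   # well under 1 km
```

Even with the antenna horizontal, the beam centre is nearly 4 km above the surface at 250 km range, which is why the useful range for surface precipitation estimates is limited well before the unambiguous range is reached.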



in the horizontal provide more accurate measurements in the vertical; and (c) greater backscatter from rain due to the falling drop ellipticity. However, at low elevation angles, better reflection of horizontally polarized waves from plane ground surfaces may produce an unwanted range-dependent effect. In summary, a narrow beamwidth affects system sensitivity, detectability, horizontal and vertical resolution, effective range and measurement accuracy. The drawback of small beamwidth is mainly cost. For these reasons, the smallest affordable beamwidth has proven to improve greatly the utility of the radar (Crozier and others, 1991).

9.6.8 Typical weather radar characteristics

As discussed, the radar characteristics and parameters are interdependent. The technical limits on the radar components and the availability of manufactured components are important considerations in the design of radar systems. Z-only radars are the conventional non-coherent pulsed radars that have been in use for decades and are still very useful. Doppler radars are the newer generation of radars that add a new dimension to the observations: they provide estimates of radial velocity. Micro-Doppler radars are radars developed for better detection of small-scale microbursts and tornadoes over very limited areas, such as for air-terminal protection.
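The interdependence of wavelength, antenna diameter and beamwidth can be illustrated with the common parabolic-antenna approximation θ ≈ 70λ/D (degrees). This approximation is an assumption of the sketch, not a specification from the characteristics table:

```python
# Sketch using the common parabolic-antenna approximation (an assumption,
# not a value taken from Table 9.7): beamwidth in degrees ~= 70 * lambda / D.

def beamwidth_deg(wavelength_m, diameter_m):
    return 70.0 * wavelength_m / diameter_m

# Pairs similar to the table entries: a 6.2 m C-band (5.33 cm) dish gives
# roughly the 0.6 deg listed for the Doppler C-band radar, and an 8.6 m
# S-band (10.7 cm) dish gives close to 1 deg.
print(round(beamwidth_deg(0.0533, 6.2), 2))
print(round(beamwidth_deg(0.107, 8.6), 2))
```

The same relation shows why a narrow beam at S band forces a much larger (and costlier) antenna than at C band.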

The characteristics of typical radars used in general weather applications are given in Table 9.7.

Table 9.7. Specifications of typical meteorological radars

Type                     Z only    Doppler    Z only     Doppler    Micro-Doppler
Band                     C         C          S          S          C
Frequency (GHz)          5.6       5.6        3.0        2.8        5.6
Wavelength (cm)          5.33      5.33       10.0       10.7       5.4
Peak power (kW)          250       250        500        1 000      250
Pulse length (µs)        2.0       0.5, 2.0   0.25, 4.0  1.57, 4.5  1.1
PRF (Hz)                 250–300   250–1 200  200–800    300–1 400  235–2 000
Receiver                 log       log/lin    log        log/lin    log/lin
MDS (dBm)                –105      –105       –110       –113       –106
Antenna diameter (m)     3.7       6.2        3.7        8.6        7.6
Beamwidth (°)            1.1       0.6        1.8        1.0        0.5
Gain (dB)                44        48         38.5       45         51
Polarization             H         H          H          H          H

9.7 Radar installation

9.7.1 Optimum site selection

Optimum site selection for installing a weather radar depends on the intended use. When there is a definite zone that requires storm warnings, the best compromise is usually to locate the equipment at a distance of between 20 and 50 km from the area of interest, and generally upwind of it according to the main storm track. It is recommended that the radar be installed slightly away from the main storm track in order to avoid measurement problems when storms pass over the radar. At the same time, this should lead to good resolution over the area of interest and permit better advance warning of approaching storms (Leone and others, 1989). In the case of a radar network intended primarily for synoptic applications, radars at mid-latitudes should be located at a distance of approximately 150 to 200 km from one another. The distance may be increased at latitudes closer to the Equator if the radar echoes of interest frequently reach high altitudes. In all cases, narrow-beam radars will yield the best accuracy for precipitation measurements. The choice of a radar site is influenced by many economic and technical factors, as follows:
(a) The existence of roads for reaching the radar;
(b) The availability of power and telecommunication links. It is frequently necessary to add commercially available lightning protection devices;

CHAPTER 9. RADAR MEASUREMENTS


(c) The cost of land;
(d) The proximity to a monitoring and maintenance facility;
(e) Beam-blockage obstacles must be avoided. No obstacle should be present at an angle greater than a half beamwidth above the horizon, or with a horizontal width greater than a half beamwidth;
(f) Ground clutter must be avoided as much as possible. For a radar to be used for applications at relatively short range, it is sometimes possible to find, after a careful site inspection and examination of detailed topographic maps, a relatively flat area in a shallow depression, the edges of which would serve as a natural clutter fence for the antenna pattern sidelobes, with minimum blockage of the main beam. In all cases, the site survey should include a camera and optical theodolite check for potential obstacles. In certain cases, it is useful to employ a mobile radar system to confirm the suitability of the site. On some modern radars, software and hardware are available to greatly suppress ground clutter with minimum rejection of weather echoes (Heiss, McGrew and Sirmans, 1990);
(g) When the radar is required for long-range surveillance, as may be the case for tropical cyclones or other applications on the coast, it will usually be placed on a hill-top. It will see a great deal of clutter, which may not be so important at long ranges (see section 9.2.6 for clutter suppression);
(h) Every survey of potential sites should include a careful check for electromagnetic interference, in order to avoid as much as possible interference with other communication systems such as television, microwave links or other radars. There should also be confirmation that microwave radiation does not constitute a health hazard to populations living near the proposed radar site (Skolnik, 1970; Leone and others, 1989).

9.7.2 Telecommunications and remote displays

Recent developments in telecommunications and computer technology allow the transmission of radar data to a large number of remote displays. In particular, computer systems exist that are capable of assimilating data from many radars as well as from other data sources, such as satellites. It is also possible to monitor and control the operation of a radar remotely, which allows unattended operation. Owing to these technical advances, in many countries "nowcasting" is carried out at sites removed from the radar location. Pictures may be transmitted by almost any modern transmission means, such as telephone lines (dedicated or not), fibre-optic links, radio or microwave links, and satellite communication channels. The most widely used transmission systems are dedicated telephone lines, because they are easily available and relatively low in cost in many countries. It should be kept in mind that radars are often located at remote sites where advanced telecommunication systems are not available. Radar pictures may now be transmitted in a few seconds owing to rapid developments in communication technology. For example, a product with a 100 km range and a resolution of 0.5 km may have a file size of 160 kBytes. Using a compression algorithm, the file size may be reduced to about 20 to 30 kBytes in GIF format. This product file can be transmitted on an analogue telephone line in less than 30 s, while on an ISDN 64 kbps circuit it may take no more than 4 s. However, the transmission of more reflectivity levels or of additional data, such as volume scans of reflectivity or Doppler data, will increase the transmission time.

9.8 Calibration and maintenance

The calibration and maintenance of any radar should follow the manufacturer's prescribed procedures. The following is an outline.

9.8.1 Calibration

Ideally, the complete calibration of reflectivity uses an external target of known radar reflectivity factor, such as a metal-coated sphere. The concept is to check whether the antenna and waveguides have their nominal characteristics. However, this method is very rarely used because of the practical difficulty of flying a sphere and because of contamination by multiple ground reflections. Antenna parameters can also be verified by solar flux measurements. Routine calibration ignores the antenna but includes the waveguide and transmitter-receiver system. Typically, the following actions are prescribed:
(a) Measurement of the emitted power and waveform in the proper frequency band;
(b) Verification of the transmitted frequency and frequency spectrum;

(c) Injection of a known microwave signal before the receiver stage, in order to check whether the levels of reflectivity indicated by the radar are correctly related to the power of the input;
(d) Measurement of the signal-to-noise ratio, which should be within the nominal range according to the radar specifications.
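The signal-injection check in (c) amounts to verifying the linearity of the receiver chain. A hypothetical sketch, assuming injected and indicated powers in dBm (the function name and the readings below are invented illustration, not vendor procedure):

```python
# Hypothetical sketch of check (c): inject known signal levels ahead of the
# receiver and verify that indicated power tracks injected power linearly
# (slope close to 1 dB/dB). Names and readings are illustrative only.

def linearity(injected_dbm, indicated_dbm):
    # Least-squares slope and intercept of indicated vs injected power (dB).
    n = len(injected_dbm)
    mx = sum(injected_dbm) / n
    my = sum(indicated_dbm) / n
    sxx = sum((x - mx) ** 2 for x in injected_dbm)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(injected_dbm, indicated_dbm))
    slope = sxy / sxx
    return slope, my - slope * mx

injected = [-90.0, -80.0, -70.0, -60.0, -50.0]   # test-generator settings
indicated = [-89.6, -79.7, -69.5, -59.6, -49.4]  # invented radar readings

slope, offset = linearity(injected, indicated)
# A healthy receiver chain gives a slope close to 1; the offset is the
# calibration bias to be corrected.
print(round(slope, 3), round(offset, 2))
```

A slope far from unity would point to receiver compression or a faulty attenuator rather than a simple calibration offset.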

If any of these calibration checks indicates a change or a bias, corrective adjustments need to be made. Doppler calibration includes: the verification and adjustment of phase stability using fixed targets or artificial signals; the scaling of the real and imaginary parts of the complex video; and the testing of the signal processor with known, artificially generated signals. Levelling and elevation are best checked by tracking the position of the sun in receive-only mode and by using available sun-location information; otherwise, mechanical levels on the antenna are needed. The presence or absence of echoes from fixed ground targets may also serve as a crude check of transmitter or receiver performance. Although modern radars are usually equipped with very stable electronic components, calibrations must be performed often enough to guarantee the reliability and accuracy of the data. Calibration must be carried out either by qualified personnel or by automatic techniques such as online diagnostic and test equipment. In the first case, which requires manpower, calibration should optimally be conducted at least every week; in the second, it may be performed daily or even semi-continuously. Simple comparative checks on echo strength and location can be made frequently, using two or more overlapping radars viewing an appropriate target.

9.8.2 Maintenance

Modern radars, if properly installed and operated, should not be subject to frequent failures. Some manufacturers claim that their radars have a mean time between failures (MTBF) of the order of a year. However, these claims are often optimistic, and realizing the quoted MTBF requires scheduled preventive maintenance. A routine maintenance plan and sufficient technical staff are necessary in order to minimize repair time. Preventive maintenance should include at least a monthly check of all radar parts subject to wear, such as gears, motors, fans and infrastructure. The results of the checks should be recorded in a radar logbook by local maintenance staff and, when appropriate, sent to the central maintenance facility. When there are many radars, there might be a centralized logistic supply and a repair workshop. The latter receives failed parts from the radars, repairs them and passes them on to logistics for storage as stock parts, to be used as needed in the field. For corrective maintenance, the service should be sufficiently equipped with the following:
(a) Spare parts for all of the most sensitive components, such as tubes, solid-state components, boards, chassis, motors, gears, power supplies, and so forth. Experience shows that it is desirable to have 30 per cent of the initial radar investment in critical spare parts on the site. If there are many radars, this percentage may be lowered to about 20 per cent, with a suitable distribution between central and local maintenance;
(b) Test equipment, including the calibration equipment mentioned above. Typically, this would amount to approximately 15 per cent of the radar value;
(c) Well-trained personnel capable of identifying problems and making repairs rapidly and efficiently.
A competent maintenance organization should result in radar availability 96 per cent of the time on a yearly basis, with standard equipment. Better performance is possible at a higher cost. Recommended minimum equipment for calibration and maintenance includes the following:
(a) Microwave signal generator;
(b) Microwave power meter;
(c) MHz oscilloscope;
(d) Microwave frequency meter;
(e) Standard-gain horns;
(f) Intermediate-frequency signal generator;
(g) Microwave components, including loads, couplers, attenuators, connectors, cables, adapters, and so on;
(h) Versatile microwave spectrum analyser at the central facility;
(i) Standard electrical and mechanical tools and equipment.
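The 96 per cent availability figure can be related to MTBF and mean time to repair (MTTR) with the standard steady-state formula A = MTBF/(MTBF + MTTR). This formula is an assumption of the sketch, not stated in the text:

```python
# Back-of-envelope sketch (assumed standard reliability formula, not from
# the Guide): steady-state availability = MTBF / (MTBF + MTTR).

def availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

def max_mttr_h(mtbf_h, target):
    # Longest mean repair time compatible with an availability target.
    return mtbf_h * (1.0 - target) / target

MTBF_H = 365.0 * 24.0  # the optimistic one-year MTBF claimed by manufacturers
# With a one-year MTBF, 96 per cent availability tolerates a mean outage of
# roughly two weeks per failure; a shorter real-world MTBF demands
# proportionally faster repairs.
print(round(max_mttr_h(MTBF_H, 0.96) / 24.0, 1))  # days
```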


9.9 Precipitation measurements

The measurement of precipitation by radars has been a subject of interest since the early days of radar meteorology. The most important advantage of using radars for precipitation measurements is the coverage of a large area with high spatial and temporal resolution from a single observing point and in real time.



Furthermore, the two-dimensional picture of the weather situation can be extended over a very large area by compositing data from several radars. However, only recently has it become possible to take measurements over a large area with an accuracy that is acceptable for hydrological applications. Unfortunately, a precise assessment of this accuracy is not possible, partly because no satisfactory basis of comparison is available. A common approach is to use a network of gauges as a reference against which to compare the radar estimates. This approach has an intuitive appeal, but suffers from a fundamental limitation: there is no reference standard against which to establish the accuracy of areal rainfall measured by the gauge network on the scale of the radar beam. Nature does not provide homogeneous, standard rainfall events for testing the network, and there is no higher standard against which to compare the network data. Therefore, the true rainfall for the area or the accuracy of the gauge network is not known. Indeed, there are indications that the gauge accuracy may, for some purposes, be far inferior to what is commonly assumed, especially if the estimates come from a relatively small number of raingauges (Neff, 1977).

9.9.1 Precipitation characteristics affecting radar measurements: the Z-R relation

Precipitation is usually measured by using the Z-R relation:

Z = A Rb (9.10)

where A and b are constants. The relationship is not unique, and very many empirical relations have been developed for various climates or localities and storm types. Nominal and typical values for the coefficient and exponent are A = 200, b = 1.60 (Marshall and Palmer, 1948; Marshall and Gunn, 1952). The equation is developed under a number of assumptions that may not always be completely valid. Nevertheless, history and experience have shown that the relationship in most instances provides a good estimate of precipitation at the ground unless there are obvious anomalies. There are some generalities that can be stated. At 5 and 10 cm wavelengths, the Rayleigh approximation is valid for most practical purposes unless hailstones are present. Large concentrations of ice mixed with liquid can cause anomalies, particularly near the melting level. By taking into account the refractive index factor for ice (i.e. |K|2 = 0.208) and by choosing an appropriate relation between the reflectivity factor and precipitation rate (Ze against R), precipitation amounts can be estimated reasonably well in snow conditions (the value of 0.208, instead of 0.197 for ice, accounts for the change in particle diameter between water and ice particles of equal mass).

The rainfall rate (R) is a product of the mass content and the fall velocity in a radar volume. It is roughly proportional to the fourth power of the particle diameters. Therefore, there is no unique relationship between radar reflectivity and the precipitation rate, since the relationship depends on the particle size distribution. Thus, the natural variability in drop-size distributions is an important source of uncertainty in radar precipitation measurements. Empirical Z-R relations and the variations from storm to storm and within individual storms have been the subject of many studies over the past forty years. A Z-R relation can be obtained by calculating values of Z and R from measured drop-size distributions. An alternative is to compare Z measured aloft by the radar (in which case it is called the "equivalent radar reflectivity factor" and labelled Ze) with R measured at the ground. The latter approach attempts to reflect any differences between the precipitation aloft and that which reaches the ground. It may also include errors in the radar calibration, so that the result is not strictly a Z-R relationship. The possibility of accounting for part of the variability of the Z-R relation by stratifying storms according to rain type (such as convective, non-cellular, orographic) has received a good deal of attention. No great improvements have been achieved, and questions remain as to the practicality of applying this technique on an operational basis. Although variations in the drop-size distribution are certainly important, their relative importance is frequently overemphasized. After some averaging over time and/or space, the errors associated with these variations will rarely exceed a factor of two in rain rate. They are the main sources of the variations in well-defined experiments at near ranges. However, at longer ranges, errors caused by the inability to observe the precipitation close to the ground, and by incomplete beam-filling, are usually dominant. These errors, despite their importance, have been largely ignored. Because of the growth or evaporation of precipitation, air motion and changes of phase (ice and water in the melting layer, or bright band), highly variable vertical reflectivity profiles are observed, both within a given storm and from storm to storm. Unless the beam width is quite narrow, this will
lead to a non-uniform distribution of reflectivity within the radar sample volume. In convective rainfall, experience shows that there is less difficulty with the vertical profile problem. However, in stratiform rain or snow, the vertical profile becomes more important. With increasing range, the beam becomes wider and higher above the ground. Therefore, the differences between estimates of rainfall by radar and the rain measured at the ground also increase. Reflectivity usually decreases with height; therefore, rain is underestimated by radar in stratiform or snow conditions. At long ranges, for low-level storms, and especially when low antenna elevations are blocked by obstacles such as mountains, the underestimate may be severe. This type of error often tends to dominate all others. It is easily overlooked when observing storms at close ranges only, or when analysing storms that are all located at roughly the same range. These and other questions, such as the choice of wavelength, errors caused by attenuation, considerations when choosing a radar site for hydrological applications, hardware calibration of radar systems, sampling and averaging, and the meteorological adjustment of radar data, are discussed in Joss and Waldvogel (1990), Smith (1990) and Sauvageot (1994). The following considers only rainfall measurements; little operational experience is available on radar measurements of snow, and even less on measurements of hail.

9.9.2 Measurement procedures

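Deducing rainfall from reflectivity relies on inverting equation 9.10. A minimal numerical sketch, assuming the nominal Marshall-Palmer values A = 200 and b = 1.60 (the function name is illustrative):

```python
# Numerical illustration of equation 9.10 with the nominal Marshall-Palmer
# values A = 200, b = 1.60; the coefficients, not the code, come from the text.

A_COEF, B_EXP = 200.0, 1.60

def rain_rate_mm_h(dbz):
    # Convert dBZ to Z (mm**6 m**-3), then invert Z = A * R**b for R (mm/h).
    z = 10.0 ** (dbz / 10.0)
    return (z / A_COEF) ** (1.0 / B_EXP)

for dbz in (20, 30, 40, 50):
    print(dbz, round(rain_rate_mm_h(dbz), 1))
```

With these coefficients, 40 dBZ corresponds to roughly 11.5 mm/h; other published Z-R relations map the same reflectivity to appreciably different rates, which is the variability discussed above.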
The basic procedure for deducing rainfall rates from measured radar reflectivities for hydrological applications requires the following steps:
(a) Making sure that the hardware is stable by calibration and maintenance;
(b) Correcting for errors using the vertical reflectivity profile;
(c) Taking into account all the information about the Ze-R relationship and deducing the rainfall;
(d) Adjustment with raingauges.
The first three steps are based on known physical factors, and the last one uses a statistical approach to compensate for residual errors. This allows the statistical methods to work most efficiently. In the past, a major limitation on carrying out these steps was imposed by the analogue circuitry and photographic techniques used for data recording and analysis. It was, therefore, extremely difficult to determine and make the necessary adjustments, and certainly not in real time. Today, the data may be obtained in three dimensions in a manageable form, and the computing power is available for accomplishing these tasks. Much of the current research is directed towards developing techniques for doing so on an operational basis (Ahnert and others, 1983).

The methods of approach for (b) to (d) above, and the adequacy of the results obtained from radar precipitation measurement, depend greatly on the situation. This can include the specific objective, the geographic region to be covered, the details of the application, and other factors. In certain situations, an interactive process is desirable, such as that developed for FRONTIERS and described in Appendix A of Joss and Waldvogel (1990). It makes use of all pertinent information available in modern weather data centres. To date, no one method of compensating for the effects of the vertical reflectivity profile in real time is widely accepted ((b) above). However, three compensation methods can be identified:
(a) Range-dependent correction: The effect of the vertical profile is associated with the combination of the increasing height of the beam axis and the spreading of the beam with range. Consequently, a climatological mean range-dependent factor can be applied to obtain a first-order correction. Different factors may be appropriate for different storm categories, for example, convective versus stratiform;
(b) Spatially-varying adjustment: In situations where the precipitation characteristics vary systematically over the surveillance area, or where the radar coverage is non-uniform because of topography or local obstructions, corrections varying with both azimuth and range may be useful. If sufficient background information is available, mean adjustment factors can be incorporated in suitable look-up tables. Otherwise, the corrections have to be deduced from the reflectivity data themselves or from comparisons with gauge data (a difficult proposition in either case);
(c) Full vertical profiles: The vertical profiles in storms vary with location and time, and the lowest level visible to the radar usually varies because of irregularities in the radar horizon. Consequently, a point-by-point correction process using a representative vertical profile for each zone of concern may be needed to obtain the best results. Representative profiles can be obtained from the radar volume scan data themselves, from climatological summaries, or from storm models. This is the most complex approach but can be implemented



with modern data systems (Joss and Lee, 1993). After making the profile corrections, a reflectivity/ rain-rate relationship should be used which is appropriate to the situation, geography and season, in order to deduce the value of R ((c) in the first paragraph of this section). There is general agreement that comparisons with gauges should be made routinely, as a check on radar performance, and that appropriate adjustments should be made if a radar bias is clearly indicated ((d) in the first paragraph of this section). In situations where radar estimates are far from the mark due to radar calibration or other problems, such adjustments can bring about significant improvements. However, the adjustments do not automatically ensure improvements in radar estimates, and sometimes the adjusted estimates are poorer than the original ones. This is especially true for convective rainfall where the vertical extent of echo mitigates the difficulties associated with the vertical profile, and the gauge data are suspect because of unrepresentative sampling. Also, the spatial decorrelation distance may be small, and the gauge-radar comparison becomes increasingly inaccurate with distance from the gauge. A general guideline is that the adjustments will produce consistent improvements only when the systematic differences (that is, the bias) between the gauge and radar rainfall estimates are larger than the standard deviation of the random scatter of the gauge versus radar comparisons. This guideline makes it possible to judge whether gauge data should be used to make adjustments and leads to the idea that the available data should be tested before any adjustment is actually applied. Various methods for accomplishing this have been explored, but at this time there is no widely accepted approach. Various techniques for using polarization diversity radar to improve rainfall measurements have been proposed. 
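The gauge-adjustment guideline above (adjust only when the systematic bias exceeds the standard deviation of the random scatter) can be sketched as follows; the function name and the accumulation values are invented for illustration:

```python
# Sketch of the gauge-adjustment guideline; names and values are invented.
import math

def should_adjust(gauge_mm, radar_mm):
    diffs = [g - r for g, r in zip(gauge_mm, radar_mm)]
    n = len(diffs)
    bias = sum(diffs) / n
    # Sample standard deviation of the gauge-radar scatter about the bias.
    scatter = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return abs(bias) > scatter, bias, scatter

gauges = [5.0, 8.0, 3.0, 10.0, 6.0]   # invented gauge accumulations (mm)
radar = [3.2, 5.5, 2.1, 7.0, 4.0]     # invented collocated radar estimates
adjust, bias, scatter = should_adjust(gauges, radar)
print(adjust, round(bias, 2), round(scatter, 2))
```

Here the bias clearly exceeds the scatter, so an adjustment would be expected to improve the radar estimates; with a noisier comparison the test would correctly advise against adjusting.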
In particular, it has been suggested that the difference between reflectivities measured at horizontal and vertical polarization (ZDR) can provide useful information about drop-size distributions (Seliga and Bringi, 1976). An alternative method is to use the specific differential phase (KDP), which depends on large oblate spheroids distorting the shape of the transmitted wave. The method relies on the hydrodynamic distortion of the shapes of large raindrops: more intense rainfall with larger drops gives stronger polarization signatures. There is still considerable controversy, however, as to whether

this technique has promise for operational use in precipitation measurement (English and others, 1991). At close ranges (with high spatial resolution), polarization diversity radars may give valuable information about precipitation particle distributions and other parameters pertinent to cloud physics. At longer ranges, it is impossible to be sure that the radar beam is filled with a homogeneous distribution of hydrometeors. Consequently, the empirical relationship between the polarimetric signature and the drop-size distribution becomes increasingly uncertain. Of course, knowing more about the Z-R relation will help but, even if multiparameter techniques worked perfectly well, the error caused by the Z-R relation could be reduced only from 33 to 17 per cent, as shown by Ulbrich and Atlas (1984). For short-range hydrological applications, the corrections for the other biases already discussed are usually much greater, perhaps by an order of magnitude or more.

9.9.3 State of the art and summary

Over the years, much research has been directed towards exploring the potential of radars as an instrument for measuring rain. In general, radar measurements of rain, deduced from an empirical Z-R relation, agree well with gauge measurements for ranges close to the radar. Increased variability and underestimation by the radar occur at longer ranges. For example, the Swiss radar estimates, at a range of 100 km on average, only 25 per cent of the actual raingauge amount, despite the fact that it measures 100 per cent at close ranges. Similar, but not quite so dramatic, variations are found in flat country or in convective rain. The reasons are the Earth curvature, shielding by topography and the spread of the radar beam with range. Thus, the main shortcoming in using radars for precipitation measurements and for hydrology in operational applications comes from the inability to measure precipitation close enough to the ground over the desired range of coverage. Because this problem often does not arise in well-defined experiments, it has not received the attention that it deserves as a dominant problem in operational applications. Thanks to the availability of inexpensive, high-speed data-processing equipment, it is now possible to determine the echo distribution in the whole radar coverage area in three dimensions. This knowledge, together with knowledge about the position of the radar and the orography around it, makes it possible



to correct in real time for a large fraction of the vertical profile problem, or at least to estimate its magnitude. This correction allows an extension of the region in which accuracy acceptable for many hydrological applications is obtained. To make the best possible use of radars, the following rules should be respected:
(a) The radar site should be chosen such that precipitation is seen by the radar as close as possible to the ground. "Seen" means here that there is no shielding or clutter echo, or that the influence of clutter can be eliminated, for instance by Doppler analysis. This condition may frequently restrict the useful radar range for quantitative work to the nearest 50 to 100 km;
(b) The wavelength and antenna size should be chosen such that a suitable compromise between attenuation caused by precipitation and good spatial resolution is achieved. At longer ranges, this may require a shorter wavelength to achieve a sufficiently narrow beam, or a larger antenna if S-band use is necessary because of frequent attenuation by huge, intense cells;
(c) Systems should be rigorously maintained and quality controlled, including by ensuring the sufficient stability and calibration of the equipment;
(d) Unless measurements of reflectivity are taken immediately over the ground, they should be corrected for errors originating from the vertical profile of reflectivity. As these profiles change with time, reflectivity should be monitored continuously by the radar. The correction may need to be calculated for each pixel, as it depends on the height of the lowest visible volume above the ground. It is important that the correction for the vertical reflectivity profile, as it is the dominant one at longer ranges, be carried out before any other adjustments;
(e) The sample size must be adequate for the application.
For hydrological applications, and especially when adjusting radar estimates with gauges, it is desirable to integrate the data over a number of hours and/or square kilometres. Integration has to be performed over the desired quantity (the linear rainfall rate R) to avoid any bias caused by this integration. Even a crude estimate of the actual vertical reflectivity profile can produce an important improvement. Polarimetric measurements may provide some further improvement, but it has yet to be demonstrated that the additional cost and

complexity and the risk of misinterpreting polarization measurements can be justified for operational applications in hydrology. The main advantages of radars are their high spatial and temporal resolution, wide-area coverage and immediacy (real-time data). Radars also have the capability of measuring over inaccessible areas, such as lakes, and of following a "floating target" or a "convective complex" in a real-time sequence, for instance to make a short-term forecast. Although radar is less well suited to giving absolutely accurate rain amounts, good quantitative information is already being obtained from radar networks in many places. It is unlikely that radars will ever completely replace the raingauge, since gauges provide additional information and are essential for adjusting and/or checking radar indications. On the other hand, as many specialists have pointed out, an extremely dense and costly network of gauges would be needed to obtain the resolution that is easily attainable with radars.

9.9.4 Area-time integral technique

Climatological applications not requiring real-time data can take advantage of the close relationship between the total amount of rainfall and the area and duration of a rain shower (Byers, 1948; Leber, Merrit and Robertson, 1961). Without using a Z-R relationship, Doneaud and others (1984; 1987) found a linear relationship between the rained-upon area and the total rainfall within that area with a very small dispersion. This relationship is dependent on the threshold selected to define the rain area. While this has limited use in real-time short-term forecasting applications, its real value should be in climatological studies and applications.
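The area-time integral (ATI) idea can be sketched as follows; the echo areas and the mean-rate factor S are invented placeholders standing in for a climatologically fitted, threshold-dependent relation:

```python
# Sketch of the area-time integral (ATI) technique; the echo areas and the
# factor S are invented placeholders for a climatologically fitted relation.

def area_time_integral(areas_km2, dt_h):
    # ATI: rained-upon area above a fixed reflectivity threshold,
    # integrated over the shower's lifetime.
    return sum(areas_km2) * dt_h  # km**2 h

areas = [40.0, 120.0, 180.0, 90.0, 20.0]  # echo area above threshold, each 0.5 h
ati = area_time_integral(areas, 0.5)

S_MM_PER_H = 3.7  # assumed mean rain rate over the echo area (mm/h)
# 1 mm of rain over 1 km**2 is 1 000 m**3, so S * ATI gives the total rain
# volume in thousands of cubic metres.
print(ati, ati * S_MM_PER_H)
```

The linearity found by Doneaud and others means the single factor S, fitted per threshold and climate, converts ATI directly to total rainfall without a Z-R relation.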


9.10 Severe weather detection and nowcasting applications


9.10.1 Utilization of reflectivity information

The most commonly used criterion for radar detection of potentially severe thunderstorms today is reflectivity intensity. Operational forecasters are advised to look for regions of high reflectivities (50 dBZ or greater). These include the spiral-bands and eyewall structures that identify tropical cyclones. Hook or finger-like echoes, overhangs and other echo shapes obtained from radar volume scans are used to warn of tornadoes or severe thunderstorms



(Lemon, Burgess and Brown, 1978), but the false-alarm rate is high. Improved severe thunderstorm detection has been obtained more recently through the processing of digital reflectivity data obtained by automatic volume scanning at 5 to 10 min update rates. Reflectivity mass measurements, such as vertically integrated liquid and severe weather probability, have led to improved severe thunderstorm detection and warning, especially for hail. Many techniques have been proposed for identifying hail with a 10 cm conventional radar, such as the presence of a 50 dBZ echo at 3 or 8 km heights (Dennis, Schock and Koscielski, 1970; Lemon, Burgess and Brown, 1978). However, verification studies have not yet been reported for other parts of the world. Federer and others (1978) found that the height of the 45 dBZ contour must exceed the height of the zero-degree level by more than 1.4 km for hail to be likely. An extension of this method has been verified at the Royal Netherlands Meteorological Institute and is being used operationally (Holleman and others, 2000; Holleman, 2001). A different approach towards improved hail detection involves the application of dual-wavelength radars, usually X and S bands (Eccles and Atlas, 1973). The physics of what the radar sees at these various wavelengths (hydrometeor cross-section changes or intensity distribution) is crucial for understanding the strengths and limitations of these techniques. Studies of polarization diversity show some promise of improved hail detection and heavy rainfall estimation based upon differential reflectivity (ZDR) as measured by a dual-polarization Doppler radar (Seliga and Bringi, 1976). Since the late 1970s, computer systems have been used to provide time-lapse and zoom capabilities for radar data.
The British FRONTIERS system (Browning and Collier, 1982; Collier, 1989), the Japanese AMeDAS system, the French ARAMIS system (Commission of the European Communities, 1989) and the United States PROFS system allow the user to interact and produce composite colour displays from several remote radars at once, as well as to blend the radar data with other types of information. The synthesis of radar data with raingauge data provides a powerful nowcasting product for monitoring rainfall. “Radar-AMeDAS Precipitation Analysis” is one of the products provided in Japan (Makihara, 2000). Echo intensity obtained from a radar network is converted into precipitation rate using a Ze-R relationship, and 1 h precipitation amount is estimated from the precipitation rate.
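The conversion chain described above can be sketched as follows, assuming for illustration the classic Marshall and Palmer (1948) relationship Z = 200 R^1.6; operational Ze–R coefficients vary with climate and precipitation type.

```python
def rain_rate_mm_per_h(dbz, a=200.0, b=1.6):
    """Convert reflectivity (dBZ) to rain rate R (mm/h) through a
    Z = a * R**b relationship; a=200, b=1.6 are the Marshall-Palmer
    coefficients, used here purely as an illustrative choice."""
    z_linear = 10.0 ** (dbz / 10.0)          # reflectivity in mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

def one_hour_accumulation(dbz_samples, dt_min=10.0):
    """Accumulate 1 h precipitation (mm) from reflectivity samples taken
    every dt_min minutes (e.g. a 5-10 min volume-scan update cycle)."""
    return sum(rain_rate_mm_per_h(d) * dt_min / 60.0 for d in dbz_samples)

# Six 10-minute scans at a steady 40 dBZ:
print(round(one_hour_accumulation([40.0] * 6), 1))  # → 11.5 (mm)
```

In an operational chain such as Radar-AMeDAS, a field of such estimates would then be adjusted with raingauge totals, as the text describes.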

The estimated amounts are then calibrated using raingauge precipitation amounts to provide a map of 1 h precipitation amount with high accuracy.

9.10.2 Utilization of Doppler information

The best method for measuring winds inside precipitation is the multiple Doppler method, which has been deployed since the mid-1970s for scientific field programmes of limited duration. However, real-time operational use of dual- or triple-Doppler analyses is not anticipated at present because of spatial coverage requirements. An exception may be the limited area requirements of airports, where a bistatic system may be useful (Wurman, Randall and Burghart, 1995). The application of Doppler radar to real-time detection and tracking of severe thunderstorms began in the early 1970s. Donaldson (1970) was probably the first to identify a vortex flow feature in a severe thunderstorm. Quasi-operational experiments have demonstrated that a very high percentage of these single-Doppler vortex signatures are accompanied by damaging hail, strong straight-line winds or tornadoes (Ray and others, 1980; JDOP, 1979). Since then, the existence of two useful severe storm features with characteristic patterns or “signatures” has become apparent. The first was that of a mesocyclone, which is a vertical column of rising rotating air typically 2 to 10 km in diameter. The mesocyclone signature (or velocity couplet) is observed forming in the mid-levels of a storm and descending to cloud base, coincident with tornado development (Burgess, 1976; Burgess and Lemon, 1990). This behaviour has led to improved tornado warning lead times, of 20 min or longer, during quasi-operational experiments in Oklahoma (JDOP, 1979). Most of the Doppler observations have been made in the United States, and it is not yet known whether this signature can be generalized. During experiments in Oklahoma, roughly 50 per cent of all mesocyclones produced verified tornadoes; also, all storms with violent tornadoes formed in environments with strong shear and possessed strong mesocyclones (Burgess and Lemon, 1990). The second signature – the tornado vortex signature (TVS) – is produced by the tornado itself.
It is the location of a very small circulation embedded within the mesocyclone. In some cases, the TVS has been detected aloft half an hour or more before a tornado touched the ground. Several years of experience with TVS have demonstrated its great utility for determining tornado location, usually



within ±1 km. It is estimated that 50 to 70 per cent of the tornadoes east of the Rocky Mountain high plains in the United States can be detected (Brown and Lemon, 1976). Large Doppler spectrum widths (second moment) have been identified with tornado location. However, large values of spectrum width have also been well correlated with strong turbulence within storms. Divergence calculated from the radial velocity data appears to be a good measure of the total divergence. Estimations of storm-summit radial divergence match those of the echo-top height, which is an updraft strength indicator. Quasi-operational Doppler experiments have shown that an increase in divergence magnitude is likely to be the earliest indicator that a storm is becoming severe. Moreover, large divergence values near the storm top were found to be a useful hail indicator. Low-level divergence signatures of downbursts have been routinely made with terminal Doppler weather radars for the protection of aircraft during take-off and landing. These radars are specially built for limited-area surveillance and repeated rapid scanning of the airspace around the airport terminals. The microburst has a life cycle of between 10 and 20 min, which requires specialized radar systems for effective detection. In this application, the radar-computer system automatically provides warnings to the air-traffic control tower (Michelson, Schrader and Wieler, 1990). Doppler radar studies of the role of boundary layer convergence lines in new thunderstorm formation support earlier satellite cloud-arc studies. There are indications that mesoscale boundary-layer convergence lines (including intersecting gust fronts from prior convection) play a major role in determining where and when storms will form. Wilson and Schreiber (1986) have documented and explained several cases of tornado genesis by non-precipitation-induced wind shear lines, as observed by Doppler radar (Mueller and Carbone, 1987).
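The radial-velocity divergence estimate mentioned above can be illustrated with a crude finite-difference sketch; gate spacing and velocities below are invented for the example, and this is not an operational signature-detection algorithm.

```python
def radial_divergence(vr, dr_m):
    """Centred finite-difference estimate of d(Vr)/dr (s^-1) along one
    radar beam, from radial velocities vr (m/s) at range gates spaced
    dr_m metres apart. A minimal sketch of the divergence signature
    discussed in the text."""
    div = []
    for i in range(1, len(vr) - 1):
        div.append((vr[i + 1] - vr[i - 1]) / (2.0 * dr_m))
    return div

# Outbound velocities increasing with range near a storm summit
# suggest divergent outflow (m/s at successive 1 km gates):
vr = [2.0, 6.0, 10.0, 14.0, 18.0]
print(radial_divergence(vr, 1000.0))  # → [0.004, 0.004, 0.004]
```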
Recent improvements in digital radar data-processing and display techniques have led to the development of new quantitative, radar-based products for hydrometeorological applications. A number of European countries and Japan are using such radar products with numerical models for operational flood forecasting and control (for example, see Cluckie and Owens, 1987). Thus, major advances now appear possible in the 0 to 2 h time-specific forecasts of thunderstorms. The development of this potential will require

the efficient integration of Doppler radar, high-resolution satellite data, and surface and sounding data. Doppler radars are particularly useful for monitoring tropical cyclones and providing data on their eye, eyewall and spiral-band dynamic evolution, as well as the location and intensity of hurricane-force winds (Ruggiero and Donaldson, 1987; Baynton, 1979).


High-frequency radars for ocean surface measurements

Radio signals in the high-frequency radio band (from 3 to 30 MHz) are backscattered from waves on the sea surface, and their frequency is Doppler shifted. They can be detected by a high-frequency radar set up to observe them. The strength of the returned signal is due to constructive interference of the rays scattered from successive sea waves spaced so that the scattered rays are in resonance, as occurs in a diffraction grating. In the case of grazing incidence, the resonance occurs when the sea wavelength is half the radio wavelength. The returned signal is Doppler shifted because of the motion of the sea waves. From the Doppler spectrum it is possible to determine the direction of motion of the sea waves, with a left-right ambiguity across the direction of the beam that can be resolved by making use of other information, such as a first-guess field. If the sea waves are in equilibrium with the surface wind, this yields the wind direction; this is the basic sea measurement taken with high-frequency radar. Analysis of the returned spectrum can be developed further to yield the spectrum of sea waves and an indication of wind speed. Measurements can be obtained up to 200 km or more with ground-wave radars, and up to 3 000 km or more with sky-wave radars (using reflection from the ionosphere). The latter are known as over-the-horizon radars. Most operational high-frequency radars are military, but some are used to provide routine wind direction data, over very wide areas, to Hydrometeorological Services. Accounts of high-frequency radars with meteorological applications, with extensive further references, are given in Shearman (1983), Dexter, Heron and Ward (1982), Keenan and Anderson (1987), and Harlan and Georges (1994).
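The grazing-incidence Bragg resonance can be worked through numerically. Assuming deep-water gravity waves, the resonant sea wavelength is half the radio wavelength and the first-order Doppler shift follows from the wave phase speed; the function below is an illustrative sketch, not part of any particular radar system.

```python
import math

def bragg_parameters(radar_freq_hz, g=9.81, c_light=3.0e8):
    """Bragg scattering at grazing incidence: the resonant sea wavelength
    is half the radio wavelength; the first-order Doppler shift follows
    from the deep-water phase speed c = sqrt(g*L/2*pi) of that wave."""
    radio_wavelength = c_light / radar_freq_hz                       # m
    sea_wavelength = radio_wavelength / 2.0                          # m
    phase_speed = math.sqrt(g * sea_wavelength / (2.0 * math.pi))    # m/s
    doppler_shift = 2.0 * phase_speed / radio_wavelength             # Hz
    return sea_wavelength, doppler_shift

# A 10 MHz HF radar: 30 m radio wavelength, 15 m resonant sea wave.
sea_l, f_b = bragg_parameters(10e6)
print(round(sea_l, 1), round(f_b, 3))  # → 15.0 0.323
```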



REFERENCES AND FURTHER READING
Ahnert, P.R., M. Hudlow, E. Johnson, D. Greene and M. Dias, 1983: Proposed on-site processing system for NEXRAD. Preprints of the Twenty-first Conference on Radar Meteorology (Edmonton, Canada), American Meteorological Society, Boston, pp. 378–385. Aoyagi, J., 1983: A study on the MTI weather radar system for rejecting ground clutter. Papers in Meteorology and Geophysics, Volume 33, Number 4, pp. 187–243. Aoyagi, J. and N. Kodaira, 1995: The reflection mechanism of radar rain echoes. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 246–248. Atlas, D., 1964: Advances in radar meteorology. Advances in Geophysics (H.E. Landsberg and J. Van Meighem, eds.), Volume 10, Academic Press, New York, pp. 317–479. Atlas, D. (ed.), 1990: Radar in Meteorology. American Meteorological Society, Boston. Atlas, D., R.C. Srivastava and R.S. Sekhon, 1973: Doppler radar characteristics of precipitation at vertical incidence. Reviews of Geophysics and Space Physics, Volume 11, Number 1, pp. 1–35. Battan, L.J., 1981: Radar Observation of the Atmosphere. University of Chicago Press, Chicago. Baynton, H.W., 1979: The case for Doppler radars along our hurricane affected coasts. Bulletin of the American Meteorological Society, Volume 60, pp. 1014–1023. Bean, B.R. and E.J. Dutton, 1966: Radio Meteorology. Washington DC, U.S. Government Printing Office. Bebbington, D.H.O., 1992: Degree of Polarization as a Radar Parameter and its Susceptibility to Coherent Propagation Effects. Preprints from URSI Commission F Symposium on Wave Propagation and Remote Sensing (Ravenscar, United Kingdom), pp. 431–436. Bringi, V.N. and A. Hendry, 1990: Technology of polarization diversity radars for meteorology. In: Radar in Meteorology (D. Atlas, ed.), American Meteorological Society, Boston, pp. 153–190. Browning, K.A. and R. Wexler, 1968: The determination of kinetic properties of a wind field using Doppler radar.
Journal of Applied Meteorology, Volume 7, pp. 105–113. Brown, R.A. and L.R. Lemon, 1976: Single Doppler radar vortex recognition: Part II – Tornadic vortex signatures. Preprints of the Seventeenth Conference on Radar Meteorology (Seattle, Washington), American Meteorological Society, Boston, pp. 104–109. Browning, K.A. and C.G. Collier, 1982: An integrated radar-satellite nowcasting system in the United Kingdom. In: Nowcasting (K.A. Browning, ed.). Academic Press, London, pp. 47–61. Browning, K.A., C.G. Collier, P.R. Larke, P. Menmuir, G.A. Monk and R.G. Owens, 1982: On the forecasting of frontal rain using a weather radar network. Monthly Weather Review, Volume 110, pp. 534–552. Burgess, D.W., 1976: Single Doppler radar vortex recognition: Part I – Mesocyclone signatures. Preprints of the Seventeenth Conference on Radar Meteorology (Seattle, Washington), American Meteorological Society, Boston, pp. 97–103. Burgess, D.W. and L.R. Lemon, 1990: Severe thunderstorm detection by radar. In: Radar in Meteorology (D. Atlas, ed.). American Meteorological Society, Boston, pp. 619–647. Burrows, C.R. and S.S. Attwood, 1949: Radio Wave Propagation. Academic Press, New York. Byers, H.R., 1948: The use of radar in determining the amount of rain falling over a small area. Transactions of the American Geophysical Union, pp. 187–196. Cluckie, I.D. and M.E. Owens, 1987: Real-time rainfall run-off models and use of weather radar information. In: Weather Radar and Flood Forecasting (V.K. Collinge and C. Kirby, eds). John Wiley and Sons, New York. Collier, C.G., 1989: Applications of Weather Radar Systems: A Guide to Uses of Radar Data in Meteorology and Hydrology. John Wiley and Sons, Chichester, England. Commission of the European Communities, 1990: Une revue du programme ARAMIS (J.L. Cheze). Seminar on Cost Project 73: Weather Radar Networking (Brussels, 5–8 September 1989), pp. 80–85. Crozier, C.L., P. Joe, J. Scott, H. Herscovitch and T.
Nichols, 1991: The King City operational Doppler radar: Development, all-season applications and forecasting. Atmosphere-Ocean, Volume 29, pp. 479–516. Dennis, A.S., C.A. Schock and A. Koscielski, 1970: Characteristics of hailstorms of western South Dakota. Journal of Applied Meteorology, Volume 9, pp. 127–135. Dexter, P.E., M.L. Heron and J.F. Ward, 1982: Remote sensing of the sea-air interface using HF radars. Australian Meteorological Magazine, Volume 30, pp. 31–41.



Donaldson, R.J., Jr., 1970: Vortex signature recognition by a Doppler radar. Journal of Applied Meteorology, Volume 9, pp. 661–670. Doneaud, A.A., S. Ionescu-Niscov, D.L. Priegnitz and P.L. Smith, 1984: The area-time integral as an indicator for convective rain volumes. Journal of Climate and Applied Meteorology, Volume 23, pp. 555–561. Doneaud, A.A., J.R. Miller Jr., L.R. Johnson, T.H. Vonder Haar and P. Laybe, 1987: The area-time integral technique to estimate convective rain volumes over areas applied to satellite data: A preliminary investigation. Journal of Climate and Applied Meteorology, Volume 26, pp. 156–169. Doviak, R.J. and D.S. Zrnic, 1993: Doppler Radar and Weather Observations. Second edition, Academic Press, San Diego. Eccles, P.J. and D. Atlas, 1973: A dual-wavelength radar hail detector. Journal of Applied Meteorology, Volume 12, pp. 847–854. Eilts, M.D. and S.D. Smith, 1990: Efficient dealiasing of Doppler velocities using local environment constraints. Journal of Atmospheric and Oceanic Technology, Volume 7, pp. 118–128. English, M.E., B. Kochtubajda, F.D. Barlow, A.R. Holt and R. McGuiness, 1991: Radar measurement of rainfall by differential propagation phase: A pilot experiment. Atmosphere-Ocean, Volume 29, pp. 357–380. Federer, B., A. Waldvogel, W. Schmid, F. Hampel, E. Rosini, D. Vento and P. Admirat, 1978: Grossversuch IV: Design of a randomized hail suppression experiment using the Soviet method. Pure and Applied Geophysics, Volume 117, pp. 548–571. Gossard, E.E. and R.G. Strauch, 1983: Radar Observations of Clear Air and Clouds. Elsevier Scientific Publication, Amsterdam. Harlan, J.A. and T.M. Georges, 1994: An empirical relation between ocean-surface wind direction and the Bragg line ratio of HF radar sea echo spectra. Journal of Geophysical Research, Volume 99, C4, pp. 7971–7978. Heiss, W.H., D.L. McGrew and D. Sirmans, 1990: NEXRAD: Next generation weather radar (WSR-88D). Microwave Journal, Volume 33, Number 1, pp. 79–98.
Holleman, I., 2001: Hail Detection Using Single-polarization Radar. Scientific Report, Royal Netherlands Meteorological Institute (KNMI) WR-2001-01, De Bilt. Holleman, I., H.R.A. Wessels, J.R.A. Onvlee and S.J.M. Barlag, 2000: Development of a hail-detection product. Physics and Chemistry of the Earth, Part B, 25, pp. 1293–1297.

Holt, A.R., M. Chandra, and S.J. Wood, 1995: Polarisation diversity radar observations of storms at C-Band. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 188–189. Holt, A.R., P.I. Joe, R. McGuinness and E. Torlaschi, 1993: Some examples of the use of degree of polarization in interpreting weather radar data. Proceedings of the Twenty-sixth International Conference on Radar Meteorology, American Meteorological Society, pp. 748–750. Joe, P., R.E. Passarelli and A.D. Siggia, 1995: Second trip unfolding by phase diversity processing. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 770–772. Joint Doppler Operational Project, 1979: Final Report on the Joint Doppler Operational Project. NOAA Technical Memorandum, ERL NSSL-86, Norman, Oklahoma, National Severe Storms Laboratory. Joss, J. and A. Waldvogel, 1990: Precipitation measurement and hydrology. In: Radar in Meteorology (D. Atlas, ed.), American Meteorological Society, Boston, pp. 577–606. Joss, J. and R.W. Lee, 1993: Scan strategy, clutter suppression calibration and vertical profile corrections. Preprints of the Twenty-sixth Conference on Radar Meteorology (Norman, Oklahoma), American Meteorological Society, Boston, pp. 390–392. Keeler, R.J., C.A. Hwang and E. Loew, 1995: Pulse compression weather radar waveforms. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 767–769. Keenan, T.D. and S.J. Anderson, 1987: Some examples of surface wind field analysis based on Jindalee skywave radar data. Australian Meteorological Magazine, 35, pp. 153–161. Leber, G.W., C.J. Merrit, and J.P. Robertson, 1961: WSR-57 Analysis of Heavy Rains. Preprints of the Ninth Weather Radar Conference, American Meteorological Society, Boston, pp. 102–105. Lee, R., G. Della Bruna and J. 
Joss, 1995: Intensity of ground clutter and of echoes of anomalous propagation and its elimination. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 651–652. Lemon, L.R., D.W. Burgess and R.A. Brown, 1978: Tornadic storm airflow and morphology derived from single-Doppler radar measurements. Monthly Weather Review, Volume 106, pp. 48–61.



Leone, D.A., R.M. Endlich, J. Petriceks, R.T.H. Collis and J.R. Porter, 1989: Meteorological considerations used in planning the NEXRAD network. Bulletin of the American Meteorological Society, Volume 70, pp. 4–13. Lhermitte, R. and D. Atlas, 1961: Precipitation motion by pulse Doppler radar. Preprints of the Ninth Weather Radar Conference, American Meteorological Society, Boston, pp. 218–233. Makihara, Y., 2000: Algorithms for precipitation nowcasting focused on detailed analysis using radar and raingauge data. Technical Report of the Meteorological Research Institute, JMA, 39, pp. 63–111. Marshall, J.S. and K.L.S. Gunn, 1952: Measurement of snow parameters by radar. Journal of Meteorology, Volume 9, pp. 322–327. Marshall, J.S. and W.M. Palmer, 1948: The distribution of raindrops with size. Journal of Meteorology, Volume 5, pp. 165–166. Melnikov, V., D.S. Zrnic, R.J. Doviak and J.K. Carter, 2002: Status of the dual polarization upgrade on the NOAA’s research and development WSR-88D. Preprints of the Eighteenth International Conference on Interactive Information Processing Systems (Orlando, Florida), American Meteorological Society, Boston, pp. 124–126. Michelson, M., W.W. Schrader and J.G. Wieler, 1990: Terminal Doppler weather radar. Microwave Journal, Volume 33, Number 2, pp. 139–148. Mie, G., 1908: Beiträge zur Optik trüber Medien, speziell kolloidaler Metallösungen. Annalen der Physik, 25, pp. 377–445. Mueller, C.K. and R.E. Carbone, 1987: Dynamics of a thunderstorm outflow. Journal of the Atmospheric Sciences, Volume 44, pp. 1879–1898. Mueller, E.A., S.A. Rutledge, V.N. Bringi, D. Brunkow, P.C. Kennedy, K. Pattison, R. Bowie and V. Chandrasekar, 1995: CSU-CHILL radar upgrades. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 703–706. Neff, E.L., 1977: How much rain does a rain gage gage? Journal of Hydrology, Volume 35, pp. 213–220. Passarelli, R.E., Jr., P. Romanik, S.G.
Geotis and A.D. Siggia, 1981: Ground clutter rejection in the frequency domain. Preprints of the Twentieth Conference on Radar Meteorology (Boston, Massachusetts), American Meteorological Society, Boston, pp. 295–300. Probert-Jones, J.R., 1962: The radar equation in meteorology. Quarterly Journal of the Royal Meteorological Society, Volume 88, pp. 485–495.

Ray, P.S., C.L. Ziegler, W. Bumgarner and R.J. Serafin, 1980: Single- and multiple-Doppler radar observations of tornadic storms. Monthly Weather Review, Volume 108, pp. 1607–1625. Rinehart, R.E., 1991: Radar for Meteorologists. Grand Forks, North Dakota, University of North Dakota, Department of Atmospheric Sciences. Ruggiero, F.H. and R.J. Donaldson, Jr., 1987: Wind field derivatives: A new diagnostic tool for analysis of hurricanes by a single Doppler radar. Preprints of the Seventeenth Conference on Hurricanes and Tropical Meteorology (Miami, Florida), American Meteorological Society, Boston, pp. 178–181. Sauvageot, H., 1982: Radarmétéorologie. Eyrolles, Paris. Sauvageot, H., 1994: Rainfall measurement by radar: A review. Atmospheric Research, Volume 35, pp. 27–54. Seliga, T.A. and V.N. Bringi, 1976: Potential use of radar differential reflectivity measurements at orthogonal polarizations for measuring precipitation. Journal of Applied Meteorology, Volume 15, pp. 69–76. Shearman, E.D.R., 1983: Radio science and oceanography. Radio Science, Volume 18, Number 3, pp. 299–320. Skolnik, M.I. (ed.), 1970: Radar Handbook. McGraw-Hill, New York. Smith, P.L., 1990: Precipitation measurement and hydrology: Panel report. In: Radar in Meteorology (D. Atlas, ed.), American Meteorological Society, Boston, pp. 607–618. Smith, P.L., 1995: Dwell time considerations for weather radars. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 760–762. Strauch, R.G., 1981: Comparison of meteorological Doppler radars with magnetron and klystron transmitters. Preprints of the Twentieth Conference on Radar Meteorology (Boston, Massachusetts), American Meteorological Society, Boston, pp. 211–214. Ulbrich, C.W. and D. Atlas, 1984: Assessment of the contribution of differential polarization to improve rainfall measurements. Radio Science, Volume 19, Number 1, pp. 49–57. Wilson, J.W. and W.E.
Schreiber, 1986: Initiation of convective storms at radar-observed boundary-layer convergence lines. Monthly Weather Review, Volume 114, pp. 2516–2536. Wood, V.T. and R.A. Brown, 1986: Single Doppler velocity signature interpretation of nondivergent environmental winds. Journal of Atmospheric and Oceanic Technology, Volume 3, pp. 114–128.



World Meteorological Organization, 1985: Use of Radar in Meteorology (G.A. Clift). Technical Note No. 181, WMO-No. 625, Geneva. Wurman, J., M. Randall and C. Burghart, 1995: Real-time vector winds from a bistatic Doppler radar network. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 725–728.

Zrnic, D.S. and S. Hamidi, 1981: Considerations for the Design of Ground Clutter Cancelers for Weather Radar. Report DOT/FAA/RD-81/72, NTIS, 77 pp. Zrnic, D.S. and A.V. Ryzhkov, 1995: Advantages of rain measurements using specific differential phase. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, pp. 35–37.

CHAPTER 10

BALLOON TECHNIQUES

10.1 Balloons

10.1.1 Main types of balloons

Two main categories of balloons are used in meteorology, as follows:
(a) Pilot balloons, which are used for the visual measurement of upper wind, and ceiling balloons for the measurement of cloud-base height. Usually they do not carry an appreciable load and are therefore considerably smaller than radiosonde balloons. They are almost invariably of the spherical extensible type and their chief requirement, apart from the ability to reach satisfactory heights, is that they should keep a good spherical shape while rising;
(b) Balloons which are used for carrying recording or transmitting instruments for routine upper-air observations are usually of the extensible type and spherical in shape. They are usually known as radiosonde or sounding balloons. They should be of sufficient size and quality to enable the required load (usually 200 g to 1 kg) to be carried up to heights as great as 35 km (WMO, 2002) at a rate of ascent sufficiently rapid to enable reasonable ventilation of the measuring elements.
For the measurement of upper winds by radar methods, large pilot balloons (100 g) or radiosonde balloons are used, depending on the weight and drag of the airborne equipment. Other types of balloons used for special purposes are not described in this chapter. Constant-level balloons that rise to, and float at, a pre-determined level are made of inextensible material. Large constant-level balloons are partly filled at release. Super-pressure constant-level balloons are filled to extend fully the balloon at release. Tetroons are small super-pressure constant-level balloons, tetrahedral in shape, used for trajectory studies. The use of tethered balloons for profiling is discussed in Part II, Chapter 5.

10.1.2 Balloon materials and properties

The best basic materials for extensible balloons are high-quality natural rubber latex and a synthetic latex based upon polychloroprene. Natural latex holds its shape better than polychloroprene, which is stronger and can be made with a thicker film for a given performance. It is less affected by temperature, but more affected by the ozone and ultraviolet radiation at high altitudes, and has a shorter storage life. Both materials may be compounded with various additives to improve their storage life, strength and performance at low temperatures both during storage and during flight, and to resist ozone and ultraviolet radiation. As one of the precautions against explosion, an antistatic agent may also be added during the manufacture of balloons intended to be filled with hydrogen. There are two main processes for the production of extensible balloons. A balloon may be made by dipping a form into latex emulsion, or by forming it on the inner surface of a hollow mould. Moulded balloons can be made with more uniform thickness, which is desirable for achieving high altitudes as the balloon expands, and the neck can be made in one piece with the body, which avoids the formation of a possible weak point. Polyethylene is the inextensible material used for constant-level balloons.

10.1.3 Balloon specifications

The finished balloons should be free from foreign matter, pinholes or other defects and must be homogeneous and of uniform thickness. They should be provided with necks of between 1 and 5 cm in diameter and 10 to 20 cm long, depending on the size of the balloon. In the case of sounding balloons, the necks should be capable of withstanding a force of 200 N without damage. In order to reduce the possibility of the neck being pulled off, it is important that the thickness of the envelope should increase gradually towards the neck; a sudden discontinuity of thickness forms a weak spot. Balloons are distinguished in size by their nominal weights in grams. The actual weight of individual balloons should not differ from the specified nominal weight by more than 10 per cent, or preferably 5 per cent. They should be capable of expanding to at least four times, and preferably five or six times, their unstretched diameter and of maintaining this expansion for at least 1 h. When inflated, balloons should be spherical or pear-shaped.



The question of specified shelf life of balloons is important, especially in tropical conditions. Artificial ageing tests exist but they are not reliable guides. One such test is to keep sample balloons in an oven at a temperature of 80°C for four days, this being reckoned as roughly equivalent to four years in the tropics, after which the samples should still be capable of meeting the minimum expansion requirement. Careful packing of the balloons so that they are not exposed to light (especially sunlight), fresh air or extremes of temperature is essential if rapid deterioration is to be prevented. Balloons manufactured from synthetic latex incorporate a plasticizer to resist the stiffening or freezing of the film at the low temperatures encountered near and above the tropopause. Some manufacturers offer alternative balloons for daytime and night-time use, the amount of plasticizer being different.
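The weight and expansion tolerances given in section 10.1.3 can be expressed as a simple acceptance check. The sketch below is illustrative only (function name, messages and sample values are invented); it is not an official test procedure.

```python
def check_balloon(nominal_weight_g, actual_weight_g, burst_diameter_ratio):
    """Check a sounding balloon against the tolerances in the text:
    actual weight within 10 per cent (preferably 5 per cent) of nominal,
    and expansion to at least four times the unstretched diameter."""
    dev = abs(actual_weight_g - nominal_weight_g) / nominal_weight_g
    issues = []
    if dev > 0.10:
        issues.append("weight outside 10% tolerance")
    elif dev > 0.05:
        issues.append("weight outside preferred 5% tolerance")
    if burst_diameter_ratio < 4.0:
        issues.append("expansion below fourfold minimum")
    return issues or ["acceptable"]

# A nominal 600 g balloon weighing 634 g that expanded 4.8-fold:
print(check_balloon(600, 634, 4.8))  # → ['weight outside preferred 5% tolerance']
```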


10.2 Balloon behaviour

10.2.1 Rate of ascent

From the principle of buoyancy, the total lift of a balloon is given by the buoyancy of the volume of gas in it, as follows:

T = V (ρ – ρg) = 0.523 D^3 (ρ – ρg) (10.1)

where T is the total lift; V is the volume of the balloon; ρ is the density of the air; ρg is the density of the gas; and D is the diameter of the balloon, which is assumed to be spherical. All units are in the International System of Units. For hydrogen at ground level, the buoyancy (ρ – ρg) is about 1.2 kg m–3. All the quantities in equation 10.1 change with height.

The free lift L of a balloon is the amount by which the total lift exceeds the combined weight W of the balloon and its load (if any):

L = T – W (10.2)

namely, it is the net buoyancy or the additional weight which the balloon, with its attachments, will just support without rising or falling. It can be shown by the principle of dynamic similarity that the rate of ascent V of a balloon in still air can be expressed by a general formula:

V = q L^n / (L + W)^(1/3)

in which q and n depend on the drag coefficient, and therefore on the Reynolds number, vρD/µ (µ being the viscosity of the air). Unfortunately, a large number of meteorological balloons, at some stages of flight, have Reynolds numbers within the critical region of 1×10^5 to 3×10^5, where a rapid change of drag coefficient occurs, and they may not be perfectly spherical. Therefore, it is impracticable to use a simple formula which is valid for balloons of different sizes and different free lifts. The values of q and n in the above equation must, therefore, be derived by experiment; they are typically, very approximately, about 150 and about 0.5, respectively, if the ascent rate is expressed in m min–1. Other factors, such as the change of air density and gas leakage, can also affect the rate of ascent and can cause appreciable variation with height.

In conducting soundings during precipitation or in icing conditions, a free lift increase of up to about 75 per cent, depending on the severity of the conditions, may be required. An assumed rate of ascent should not be used in any conditions other than light precipitation. A precise knowledge of the rate of ascent is not usually necessary except in the case of pilot- and ceiling-balloon observations, where there is no other means of determining the height. The rate of ascent depends largely on the free lift and air resistance acting on the balloon and train. Drag can be more important, especially in the case of non-spherical balloons. Maximum height depends mainly on the total lift and on the size and quality of the balloon.

10.2.2 Balloon performance

The table below lists typical figures for the performance of various sizes of balloons. They are very approximate. If precise knowledge of the performance of a particular balloon and train is necessary, it must be obtained by analysing actual flights. Balloons can carry payloads greater than those listed in the table if the total lift is increased. This is achieved by using more gas and by increasing the volume of the balloon, which will affect the rate of ascent and the maximum height. The choice of a balloon for meteorological purposes is dictated by the load, if any, to be carried, the rate of ascent, the altitude required, whether the balloon is to be used for visual tracking, and by the cloud cover with regard to its colour. Usually, a rate of ascent between 300 and 400 m min–1 is desirable in order to minimize the time required for observation; it may also be necessary in order to provide sufficient ventilation for the radiosonde sensors. In choosing a balloon, it is also necessary to bear in mind that the altitude attained is usually less when the temperature at release is very low.

Typical balloon performance

Weight (g)   Diameter at     Payload (g)   Free lift (g)   Rate of ascent   Maximum
             release (cm)                                  (m min–1)        height (km)
10           30              0             5               60               12
30           50              0             60              150              13
100          90              0             300             250              20
200          120             250           500             300              21
350          130             250           600             300              26
600          140             250           900             300              31
1 000        160             250           1 100           300              34
1 500        180             1 000         1 300           300              34
3 000        210             1 000         1 700           300              38

For balloons used in regular operations, it is beneficial to determine the free lift that produces optimum burst heights. For instance, it has been found that a reduction in the average rate of ascent from 390 to 310 m min–1 with some mid-size balloons, achieved by reducing the amount of gas used for inflation, may give an increase of 2 km, on average, in the burst height. Burst height records should be kept and reviewed to ensure that optimum practice is sustained. Daytime visual observations are facilitated by using uncoloured balloons on clear sunny days, and dark-coloured ones on cloudy days. The performance of a balloon is best gauged by the maximum linear extension it will withstand before bursting and is conveniently expressed as the ratio of the diameter (or circumference) at burst to that of the unstretched balloon. The performance of a balloon in flight, however, is not necessarily the same as that indicated by a bursting test on the ground. Performance can be affected by rough handling when the balloon is filled and by stresses induced during launches in gale conditions. In flight, the extension of the balloon may be affected by the loss of elasticity at low temperatures, by the chemical action of oxygen, ozone and ultraviolet radiation, and by manufacturing faults such as pinholes or weak spots. A balloon of satisfactory quality should, however, give at least a fourfold extension in an actual sounding. The thickness of

the film at release is usually in the range of 0.1 to 0.2 mm. There is always a small excess of pressure p1 within the balloon during ascent, amounting to a few hPa, owing to the tension of the rubber. This sets a limit to the external pressure that can be reached. It can be shown that, if the temperature is the same inside and outside the balloon, this limiting pressure p is given by:


p ≅ (1.07 W/L0 + 0.075) p1


where W is the weight of the balloon and apparatus; and L0 is the free lift at the ground, both expressed in grams. If the balloon is capable of reaching the height corresponding with p, it will float at this height.
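These relations can be sketched numerically. The following is an illustrative calculation, not text from the Guide: the lift values of 1.203 kg m–3 for hydrogen and 1.115 kg m–3 for helium are those quoted in section 10.5.1, the example figures follow the table of typical balloon sizes above, and the limiting pressure is evaluated as p ≅ (1.07 W/L0 + 0.075) p1 with W and L0 in grams and p1 of a few hPa.

```python
# Illustrative balloon-filling arithmetic (a sketch, not operational guidance).
# Assumed basis: total lift of hydrogen ~1.203 kg per m3 (helium ~1.115) at
# 1 013 hPa and 15 degrees C, and the limiting-pressure relation
# p ~= (1.07*W/L0 + 0.075)*p1, with W and L0 in grams.

def inflation_volume_m3(balloon_g, payload_g, free_lift_g, lift_kg_per_m3=1.203):
    """Gas volume whose buoyancy balances balloon, payload and free lift."""
    total_lift_kg = (balloon_g + payload_g + free_lift_g) / 1000.0
    return total_lift_kg / lift_kg_per_m3

def limiting_pressure_hPa(W_g, L0_g, p1_hPa):
    """Pressure ceiling at which the balloon would float, assuming equal
    temperatures inside and outside the balloon."""
    return (1.07 * W_g / L0_g + 0.075) * p1_hPa

# A 350 g balloon with a 250 g payload and 600 g free lift (table values):
v = inflation_volume_m3(350, 250, 600)                    # ~1.0 m3 of hydrogen
p = limiting_pressure_hPa(W_g=600, L0_g=600, p1_hPa=3.0)  # p1 of a few hPa
print(round(v, 2), round(p, 1))
```

With W equal to L0, the floating pressure is only slightly above p1 itself, which is why the excess pressure of the rubber matters at all for the ceiling.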

10.3 Handling balloons

10.3.1 Storage


It is very important that radiosonde balloons should be correctly stored if their best performance is still to be obtained after several months. It is advisable to restrict balloon stocks to the safe minimum allowed by operational needs. Frequent deliveries, wherever possible, are preferable to purchasing in large quantities with consequent long periods of storage. To avoid the possibility of using balloons that have been in storage for a long period, balloons should always be used in the order of their date of manufacture. It is generally possible to obtain the optimum performance up to about 18 months after manufacture, provided that the storage conditions are



carefully chosen. Instructions are issued by many manufacturers for their own balloons and these should be observed meticulously. The following general instructions are applicable to most types of radiosonde balloons. Balloons should be stored away from direct sunlight and, if possible, in the dark. At no time should they be stored adjacent to any source of heat or ozone. Balloons made of either polychloroprene or a mixture of polychloroprene and natural rubber may deteriorate if exposed to the ozone emitted by large electric generators or motors. All balloons should be kept in their original packing until required for preflight preparations. Care should be taken to see that they do not come into contact with oil or any other substance that may penetrate the wrapping and damage the balloons. Wherever possible, balloons should be stored in a room at temperatures of 15 to 25°C; some manufacturers give specific guidance on this point and such instructions should always be followed.

10.3.2 Conditioning

Balloons made from natural rubber do not require special heat treatment before use, as natural rubber does not freeze at the temperatures normally experienced in buildings used for human occupation. It is, however, preferable for balloons that have been stored for a long period at temperatures below 10°C to be brought to room temperature for some weeks before use. Polychloroprene balloons suffer a partial loss of elasticity during prolonged storage at temperatures below 10°C. For the best results, this loss should be restored prior to inflation by conditioning the balloon. The manufacturer's recommendations should be followed. It is common practice to place the balloon in a thermally insulated chamber with forced air circulation, maintained at a suitable temperature and humidity for some days before inflation, or alternatively to use a warm water bath. At polar stations during periods of extremely low temperatures, the balloons to be used should have special characteristics that enable them to maintain strength and elasticity in such conditions.

10.3.3 Inflation

If a balloon launcher is not used, a special room, preferably isolated from other buildings, should be provided for filling balloons. It should be well ventilated (e.g. NFPA, 1999). If hydrogen gas is to be used, special safety precautions are essential (see section 10.6). The building should be free from any source of sparks, and all electric switches and fittings should be spark-proof; other necessary details are given in section 10.6.2. If helium gas is to be used, provision may be made for heating the building during cold weather. The walls, doors and floor should have a smooth finish and should be kept free from dust and grit. Heating hydrogen-inflation areas can be accomplished by steam, hot water or any other indirect means; however, electric heating, if any, shall be in compliance with national electrical codes (e.g. NFPA 50A for Class I, Division 2, locations). Protective clothing (see section 10.6.4) should be worn during inflation. The operator should not stay in a closed room with a balloon containing hydrogen. The hydrogen supply should be controlled, and the filling operation observed, from outside the filling room if the doors are shut; the doors should be open when the operator is in the room with the balloon. Balloons should be inflated slowly because sudden expansion may cause weak spots in the balloon film. It is desirable to provide a fine adjustment valve for regulating the gas flow. The desired amount of inflation (free lift) can be determined by using either a filling nozzle of the required weight or one which forms one arm of a balance on which the balloon lift can be weighed. The latter is less convenient, unless it is desirable to allow for variations in the weights of balloons, which is hardly necessary for routine work. It is useful to have a valve fitted to the weight type of filler, and a further refinement, used in some services, is a valve that can be adjusted to close automatically at the required lift.

10.3.4 Launching

The balloon should be kept under a shelter until everything is ready for its launch. Prolonged exposure to bright sunshine should be avoided, as this may cause rapid deterioration of the balloon fabric and may even result in its bursting before leaving the ground. Protective clothing should be worn during manual launches. No special difficulties arise when launching radiosonde balloons in light winds. Care should always be taken to see that there is no risk of the balloon and instruments striking obstructions before they



rise clear of trees and buildings in the vicinity of the station. Release problems can be avoided to a large extent by carefully planning the release area. It should be selected to have a minimum of obstructions that may interfere with launching; the station buildings should be designed and sited considering the prevailing wind, gust effects on the release area and, in cold climates, drifting snow. It is also advisable in high winds to keep the suspension of the instrument below the balloon as short as possible during launching, by using some form of suspension release or unwinder. A convenient device consists of a reel on which the suspension cord is wound and a spindle to which is attached an air brake or escapement mechanism that allows the suspension cord to unwind slowly after the balloon is released. Mechanical balloon launchers have the great advantage that they can be designed to offer almost fool-proof safety, by separating the operator from the balloon during filling and launching. They can be automated to various degrees, even to the point where the whole radiosonde operation requires no operator to be present. They might not be effective at wind speeds above 20 m s–1. Provision should be made for adequate ventilation of the radiosonde sensors before release, and the construction should desirably be such that the structure will not be damaged by fire or explosion.
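The parachute specification given in section 10.4.2 (a descent rate not exceeding about 5 m s–1 near the ground for instruments up to 2 kg under a canopy of about 2 m diameter) can be checked with a simple equilibrium drag balance. The drag coefficient and air density below are assumed illustrative values, not figures from the Guide.

```python
import math

def descent_speed_ms(mass_kg, canopy_diameter_m, cd=0.8, rho=1.225, g=9.81):
    """Equilibrium descent speed: drag 0.5*rho*cd*A*v^2 balances weight m*g.
    cd ~0.8 for a flat canopy and sea-level rho are assumed values."""
    area = math.pi * (canopy_diameter_m / 2.0) ** 2
    return math.sqrt(2.0 * mass_kg * g / (rho * cd * area))

# A 2 kg instrument under a 2 m diameter paper or plastic canopy:
v = descent_speed_ms(2.0, 2.0)
print(round(v, 1))  # ~3.6 m/s, comfortably below the ~5 m/s target
```

The quadratic dependence of drag on speed means the result is fairly insensitive to the assumed drag coefficient: halving cd raises the descent speed by only about 40 per cent.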

10.4 Accessories for balloon ascents

10.4.1 Illumination for night ascents

The light source in general use for night-time pilot-balloon ascents is a small electric torch battery and lamp. A battery of two 1.5 V cells, or a water-activated type used with a 2.5 V 0.3 A bulb, is usually suitable. Alternatively, a device providing light by means of chemical fluorescence may be used. For high-altitude soundings, however, a more powerful system of 2 to 3 W, together with a simple reflector, is necessary. If the rate of ascent is to remain unchanged when a lighting unit is used, a small increase in free lift is theoretically required; that is to say, the total lift must be increased by more than the extra weight carried (see equation 10.3). In practice, however, the increase required is probably less than that calculated, since the load improves the aerodynamic shape and the stability of the balloon.

At one time, night ascents were carried out with a small candle in a translucent paper lantern suspended some 2 m or so below the balloon. However, there is a risk of flash or explosion if the candle is brought near the balloon or the source of hydrogen, and a risk of starting a forest fire or other serious fires upon return to the Earth. Thus, the use of candles is strongly discouraged.

10.4.2 Parachutes

In order to reduce the risk of damage caused by a falling sounding instrument, it is usual practice to attach a simple type of parachute. The main requirements are that it should open reliably and reduce the speed of descent to a rate not exceeding about 5 m s–1 near the ground. It should also be water-resistant. For instruments weighing up to 2 kg, a parachute made from waterproof paper or plastic film of about 2 m diameter and with strings about 3 m long is satisfactory. In order to reduce the tendency for the strings to twist together in flight, it is advisable to attach them to a light hoop of wood, plastic or metal of about 40 cm in diameter just above the point where they are joined together. When a radar reflector for wind-finding is part of the train, it can be incorporated into the parachute and can serve to keep the strings apart. The strings and attachments must be able to withstand the opening of the parachute. If light-weight radiosondes are used (less than about 250 g), the radar reflector alone may provide sufficient drag during descent.

10.5 Gases for inflation

10.5.1 General

The two gases most suitable for meteorological balloons are helium and hydrogen. The former is much to be preferred because it is free from the risk of fire and explosion. However, since the use of helium is limited mainly to the few countries which have an abundant natural supply, hydrogen is more generally used (see WMO, 1982). The buoyancy (total lift) of helium is 1.115 kg m–3 at a pressure of 1 013 hPa and a temperature of 15°C. The corresponding figure for pure hydrogen is 1.203 kg m–3; for commercial hydrogen the figure is slightly lower. It should be noted that the use of hydrogen aboard ships is no longer permitted under the general conditions imposed for marine insurance. In these



circumstances, the extra cost of using helium has to be reckoned against the life-threatening hazards and the extra cost of insurance, if such insurance can be arranged. Apart from the cost and trouble of transportation, the supply of compressed gas in cylinders affords the most convenient way of providing gas at meteorological stations. However, at places where the cost or difficulty of supplying cylinders is prohibitive, the use of an on-station hydrogen generator (see section 10.5.3) should present no great difficulties.

10.5.2 Gas cylinders

For general use, steel gas cylinders, capable of holding 6 m3 of gas compressed to a pressure of 18 MPa (10 MPa in the tropics), are probably the most convenient size. However, where the consumption of gas is large, as at radiosonde stations, larger capacity cylinders, or banks of standard cylinders all linked by a manifold to the same outlet valve, can be useful. Such arrangements will minimize handling by staff. In order to avoid the risk of confusion with other gases, hydrogen cylinders should be painted a distinctive colour (red is used in many countries) and otherwise marked according to national regulations. Their outlet valves should have left-handed threads to distinguish them from cylinders of non-combustible gases. Cylinders should be provided with a cap to protect the valves in transit. Gas cylinders should be tested at regular intervals, ranging from two to five years depending on the national regulations in force, by subjecting them to an internal pressure at least 50 per cent greater than their normal working pressure. Hydrogen cylinders should not be exposed to heat and, in tropical climates, they should be protected from direct sunshine. Preferably, they should be stored in a well-ventilated shed which allows any hydrogen leaks to escape to the open air.

10.5.3 Hydrogen generators

Hydrogen can be produced on site in various kinds of hydrogen generators. All generator plants and hydrogen storage facilities shall be legibly marked with adequate warnings according to national regulations (e.g. “This unit contains hydrogen”; “Hydrogen – Flammable gas – No smoking – No open flames”). The following have proven to be the most suitable processes for generating hydrogen for meteorological purposes:
(a) Ferro-silicon and caustic soda with water;
(b) Aluminium and caustic soda with water;
(c) Calcium hydride and water;
(d) Magnesium-iron pellets and water;
(e) Liquid ammonia with hot platinum catalyst;
(f) Methanol and water with a hot catalyst;
(g) Electrolysis of water.

Most of the chemicals used in these methods are hazardous, and the relevant national standards and codes of practice should be scrupulously followed, including correct markings and warnings. They require special transportation, storage, handling and disposal. Many of them are corrosive, as is the residue after use. If the reactions are not carefully controlled, they may produce excess heat and pressure. Methanol, being a poisonous alcohol, can be deadly if ingested, as it may be by substance abusers. In particular, caustic soda, which is widely used, requires considerable care on the part of the operator, who should have adequate protection, especially for the eyes, from contact not only with the solution, but also with the fine dust which is liable to arise when the solid material is being put into the generator. An eye-wash bottle and a neutralizing agent, such as vinegar, should be kept at hand in case of an accident. Some of the chemical methods operate at high pressure, with a consequently greater risk of accident. High-pressure generators should be tested every two years to a pressure at least twice the working pressure. They should be provided with a safety device to relieve excess pressure. This is usually a bursting disc, and it is very important that the operational instructions be strictly followed with regard to the material, size and form of the discs, and the frequency of their replacement. Even if a safety device is efficient, its operation is very liable to be accompanied by the ejection of hot solution. High-pressure generators must be carefully cleaned out before recharging, since remains of the previous charge may considerably reduce the available volume of the generator and thus increase the working pressure beyond the design limit.

Unfortunately, calcium hydride and magnesium-iron, which have the advantage of avoiding the use of caustic soda, are expensive to produce and are therefore likely to be acceptable only for special purposes. Since these two materials produce hydrogen from water, it is essential that they be stored in containers which are completely damp-proof. In
the processes using catalysts, care must be taken to avoid catalyst contamination. All systems produce gas at sufficient pressure for filling balloons. However, the production rates of some systems (electrolysis in particular) are too low, and the gas must be produced and stored before it is needed, either in compressed form or in a gasholder. The processes using the electrolysis of water or the catalytic cracking of methanol are attractive because of their relative safety and moderate recurrent cost, and because of the non-corrosive nature of the materials used. These two processes, as well as the liquid ammonia process, require electric power. The equipment is rather complex and must be carefully maintained and subjected to detailed daily check procedures to ensure that the safety control systems are effective. Water for electrolysis must have low mineral content.
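The production-versus-demand point can be illustrated with a small sizing sketch. All figures below are hypothetical assumptions for illustration only, not operational recommendations from the Guide.

```python
# Hypothetical sizing sketch: does a slow generator (e.g. electrolysis) need
# buffer storage to meet the daily sounding schedule? All numbers here are
# illustrative assumptions.

def storage_needed_m3(flights_per_day, gas_per_flight_m3, production_m3_per_h):
    """Gas that must come from storage when daily demand exceeds production."""
    daily_demand = flights_per_day * gas_per_flight_m3
    daily_production = production_m3_per_h * 24.0
    return max(0.0, daily_demand - daily_production)

# Two soundings a day at an assumed 1.5 m3 each, against 0.1 m3/h production:
print(storage_needed_m3(2, 1.5, 0.1))  # daily shortfall to be drawn from storage
```

Even when daily production matches demand, a gasholder or compressed storage is still needed in practice, because a balloon is filled in minutes while the gas is produced over many hours.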

10.6 Use of hydrogen and safety precautions

10.6.1 General

Hydrogen can easily be ignited by a small spark and burns with a nearly invisible flame. It can burn when mixed with air over a wide range of concentrations, from 4 to 74 per cent by volume (NFPA, 1999), and can explode in concentrations between 18 and 59 per cent. In either case, a nearby operator can receive severe burns over the entire surface of any exposed skin, and an explosion can throw the operator against a wall or the ground, causing serious injury. It is possible to eliminate the risk of an accident by using very carefully designed procedures and equipment, provided that they are diligently observed and maintained (Gremia, 1977; Ludtke and Saraduke, 1992; NASA, 1968). The provision of adequate safety features for the buildings in which hydrogen is generated and stored, or for the areas in which balloons are filled or released, does not always receive adequate attention (see the following section). In particular, there must be comprehensive training and continual meticulous monitoring and inspection to ensure that operators follow the procedures. The great advantage of automatic balloon launchers (see section 10.3.4) is that they can be made practically fool-proof and prevent operator injuries by completely separating the operator from the hydrogen.

An essential starting point for the consideration of hydrogen precautions is to follow the various national standards and codes of practice concerned with the risks presented by explosive atmospheres in general. Additional information on the precautions that should be followed will be found in publications dealing with explosion hazards, such as those in hospitals and other industrial situations where similar problems exist. The operator should never be in a closed room with an inflated balloon. Other advice on safety matters can be found throughout this chapter.

10.6.2 Building design

Provisions should be made to avoid the accumulation of free hydrogen and of static charges, as well as the occurrence of sparks, in any room where hydrogen is generated, stored or used. The accumulation of hydrogen must be avoided even when a balloon bursts within the shelter during the course of inflation (WMO, 1982). Safety provisions must be part of the structural design of hydrogen buildings (NFPA, 1999; SAA, 1985). Climatic conditions and national standards and codes are constraints within which it is possible to adopt many designs and materials suitable for safe hydrogen buildings. Codes are advisory and are used as a basis of good practice. Standards are published in the form of specifications for materials, products and safe practices. They should deal with topics such as flame-proof electric-light fittings, electrical apparatus in explosive atmospheres, the ventilation of rooms with explosive atmospheres, and the use of plastic windows, bursting discs, and so on (WMO, 1982). Both codes and standards should contain information that is helpful and relevant to the design of hydrogen buildings and should be consistent with recommended national practice. Guidance should be sought from national standards authorities when hydrogen buildings are designed or when the safety of existing buildings is reviewed, in particular for aspects such as the following:
(a) The preferred location for hydrogen systems;
(b) The fire resistance of proposed materials, as related to the fire-resistance ratings that must be respected;
(c) Ventilation requirements, including a roof of light construction to ensure that hydrogen and the products of an explosion are vented from the highest point of the building;
(d) Suitable electrical equipment and wiring;
(e) Fire protection (extinguishers and alarms);
(f) Provision for the operator to control the inflation of the balloon from outside the filling room.

Measures should be taken to minimize the possibility of sparks being produced in rooms where hydrogen is handled. Thus, any electrical system (switches, fittings, wiring) should be kept outside these rooms; otherwise, special spark-proof switches, pressurized to prevent the ingress of hydrogen, and similarly suitable wiring, should be provided. It is also advisable to illuminate the rooms using exterior lights which shine in through windows. For the same reasons, any tools used should not produce sparks. The observer’s shoes should not be capable of emitting sparks, and adequate lightning protection should be provided. If sprinkler systems are used in any part of the building, consideration should be given to the possible hazard of hydrogen escaping after the fire has been extinguished. Hydrogen detection systems exist and may be used, for instance, to switch off power to the hydrogen generator and activate an alarm when hydrogen reaches 20 per cent of the lower explosive limit, and to activate a second alarm at 40 per cent of the lower explosive limit. A hazard zone should be designated around the generator, storage and balloon area into which entry is permitted only when protective clothing is worn (see section 10.6.4). Balloon launchers (see section 10.3.4) typically avoid the need for a special balloon-filling room and greatly simplify the design of hydrogen facilities.

10.6.3 Static charges

The hazards of balloon inflation and balloon release can be considerably reduced by preventing static charges in the balloon-filling room, on the observer’s clothing, and on the balloon itself. Loeb (1958) provides information on the static electrification process. Static charge control is effected by good earthing provisions for hydrogen equipment and filling-room fittings. Static discharge grips for observers can remove charges generated on clothing (WMO, 1982).

Charges on balloons are more difficult to deal with. Balloon fabrics, especially pure latex, are very good insulators. Static charges are generated when two insulating materials in contact with each other are separated. A single brief contact with the observer’s clothing or hair can generate a 20 kV charge, which is more than sufficient to ignite a mixture of air and hydrogen if it is discharged through an efficient spark. Charges on a balloon may take many hours to dissipate through the fabric to earth or naturally into the surrounding air. Also, it has been established that, when a balloon bursts, the separation of the film along a split in the fabric can generate sparks energetic enough to cause ignition. Electrostatic charges can be prevented or removed by spraying water onto the balloon during inflation, by dipping balloons into antistatic solution (with or without drying them off before use), by using balloons with an antistatic additive in the latex, or by blowing ionized air over the balloon. Merely earthing the neck of the balloon is not sufficient. The maximum electrostatic potential that can be generated or held on a balloon surface decreases with increasing humidity, but the magnitude of the effect is not well established. Some tests carried out on inflated 20 g balloons indicated that spark energies sufficient to ignite hydrogen-oxygen mixtures are unlikely to be reached when the relative humidity of the air is greater than 60 per cent. Other studies have suggested relative humidities from 50 to 76 per cent as safe limits, yet others indicate that energetic sparks may occur at even higher relative humidity. It may be said that static discharge is unlikely when the relative humidity exceeds 70 per cent, but this should not be relied upon (see Cleves, Sumner and Wyatt, 1971). It is strongly recommended that fine water sprays be used on the balloon, because the wetting and earthing of the balloon will remove most of the static charges from the wetted portions.

The sprays should be designed to wet as large an area of the balloon as possible and to cause continuous streams of water to run from the balloon to the floor. If the doors are kept shut, the relative humidity inside the filling room can rise to 75 per cent or higher, thus reducing the probability of sparks energetic enough to cause ignition. Balloon release should proceed promptly once the sprays are turned off and the filling-shed doors opened. Other measures for reducing the build-up of static charge include the following (WMO, 1982):




(a) The building should be provided with a complete earthing (grounding) system, with all fittings, hydrogen equipment and the lightning conductor separately connected to a single earth, which itself must comply with national specifications for earth electrodes. Provision should be made to drain electrical charges from the floor;
(b) Static discharge points should be provided for the observers;
(c) The windows should be regularly coated with an antistatic solution;
(d) Operators should be encouraged not to wear synthetic clothing or insulating shoes. It is good practice to provide operators with partially conducting footwear;
(e) Any contact between the observer and the balloon should be minimized; this can be facilitated by locating the balloon filler at a height of 1 m or more above the floor.
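The 20 kV figure quoted earlier in this section puts these measures in perspective. A rough order-of-magnitude sketch follows; the body capacitance and hydrogen's minimum ignition energy are typical literature values assumed here for illustration, not figures from the Guide.

```python
# Order-of-magnitude spark-energy estimate motivating the antistatic measures.
# Assumed values: human body capacitance ~150 pF; minimum ignition energy of a
# hydrogen-air mixture ~0.02 mJ (typical literature figures).

def spark_energy_mJ(capacitance_pF, voltage_kV):
    """Stored electrostatic energy E = 0.5*C*V^2, returned in millijoules."""
    c = capacitance_pF * 1e-12   # farads
    v = voltage_kV * 1e3         # volts
    return 0.5 * c * v * v * 1e3

energy = spark_energy_mJ(150.0, 20.0)  # the ~20 kV charge from a single contact
print(round(energy, 1))  # ~30 mJ, over a thousand times the ~0.02 mJ needed
```

This is why a single brush against clothing or hair is treated as a credible ignition source, and why wetting and earthing, rather than mere avoidance of deliberate sparks, are required.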


10.6.4 Protective clothing and first-aid facilities

Proper protective clothing should be worn whenever hydrogen is being used, during all parts of the operations, including generation procedures, when handling cylinders, and during balloon inflation and release. The clothing should include a light-weight flame-proof coat with a hood made of non-synthetic, antistatic material and a covering for the lower face, glasses or goggles, cotton gloves, and any locally recommended anti-flash clothing (see Hoschke and others, 1979). First-aid facilities appropriate to the installation should be provided. These should include initial remedies for flash burns and broken limbs. When chemicals are used, suitable neutralizing solutions should be on hand, for example, citric acid for caustic soda burns. An eye-wash apparatus ready for instant use should be available (WMO, 1982).



References and further reading

Atmospheric Environment Service (Canada), 1978: The Use of Hydrogen for Meteorological Purposes in the Canadian Atmospheric Environment Service, Toronto.
Cleves, A.C., J.F. Sumner and R.M.H. Wyatt, 1971: The Effect of Temperature and Relative Humidity on the Accumulation of Electrostatic Charges on Fabrics and Primary Explosives. Proceedings of the Third Conference on Static Electrification (London).
Gremia, J.O., 1977: A Safety Study of Hydrogen Balloon Inflation Operations and Facilities of the National Weather Service. Trident Engineering Associates, Annapolis, Maryland.
Hoschke, B.N., and others, 1979: Report to the Bureau of Meteorology on Protection Against the Burn Hazard from Exploding Hydrogen-filled Meteorological Balloons. CSIRO Division of Textile Physics and the Department of Housing and Construction, Australia.
Loeb, L.B., 1958: Static Electrification. Springer-Verlag, Berlin.
Ludtke, P. and G. Saraduke, 1992: Hydrogen Gas Safety Study Conducted at the National Weather Service Forecast Office. Norman, Oklahoma.
National Aeronautics and Space Administration, 1968: Hydrogen Safety Manual. NASA Technical Memorandum TM-X-52454, NASA Lewis Research Center, United States.
National Fire Protection Association, 1999: NFPA 50A: Standard for Gaseous Hydrogen Systems at Consumer Sites. 1999 edition, National Fire Protection Association, Quincy, Massachusetts.

National Fire Protection Association, 2002: NFPA 68: Guide for Venting of Deflagrations. 2002 edition, National Fire Protection Association, Quincy, Massachusetts.
National Fire Protection Association, 2005: NFPA 70: National Electrical Code. 2005 edition, National Fire Protection Association, Quincy, Massachusetts.
National Fire Protection Association, 2006: NFPA 220: Standard on Types of Building Construction. 2006 edition, National Fire Protection Association, Quincy, Massachusetts.
Rosen, B., V.H. Dayan and R.L. Proffit, 1970: Hydrogen Leak and Fire Detection: A Survey. NASA SP-5092.
Standards Association of Australia, 1970: AS C99: Electrical equipment for explosive atmospheres – Flameproof electric lighting fittings.
Standards Association of Australia, 1980: AS 1829: Intrinsically safe electrical apparatus for explosive atmospheres.
Standards Association of Australia, 1985: AS 1482: Electrical equipment for explosive atmospheres – Protection by ventilation – Type of protection V.
Standards Association of Australia, 1995: AS/NZS 1020: The control of undesirable static electricity.
Standards Association of Australia, 2004: AS 1358: Bursting discs and bursting disc devices – Application, selection and installation.
World Meteorological Organization, 1982: Meteorological Balloons: The Use of Hydrogen for Inflation of Meteorological Balloons. Instruments and Observing Methods Report No. 13, Geneva.

CHAPTER 11

URBAN OBSERVATIONS



There is a growing need for meteorological observations conducted in urban areas. Urban populations continue to expand, and Meteorological Services are increasingly required to supply meteorological data in support of detailed forecasts for citizens, building and urban design, energy conservation, transportation and communications, air quality and health, storm water and wind engineering, and insurance and emergency measures. At the same time, Meteorological Services have difficulty in making urban observations that are not severely compromised. This is because most developed sites make it impossible to conform to the standard guidelines for site selection and instrument exposure given in Part I of this Guide owing to obstruction of air-flow and radiation exchange by buildings and trees, unnatural surface cover and waste heat and water vapour from human activities.

This chapter provides information to enable the selection of sites, the installation of a meteorological station and the interpretation of data from an urban area. In particular, it deals with the case of what is commonly called a “standard” climate station. Despite the complexity and inhomogeneity of urban environments, useful and repeatable observations can be obtained. Every site presents a unique challenge. To ensure that meaningful observations are obtained requires careful attention to certain principles and concepts that are virtually unique to urban areas. It also requires the person establishing and running the station to apply those principles and concepts in an intelligent and flexible way that is sensitive to the realities of the specific environment involved. Rigid “rules” have little utility. The need for flexibility runs slightly counter to the general notion of standardization that is promoted as WMO observing practice. In urban areas, it is sometimes necessary to accept exposure

[Figure: three panels at (a) mesoscale, (b) local scale and (c) microscale, showing the urban “plume”, mixing layer, surface layer, inertial sublayer and roughness sublayer over urban and rural terrain.]

Figure 11.1. Schematic of climatic scales and vertical layers found in urban areas: planetary boundary layer (PBL), urban boundary layer (UBL), urban canopy layer (UCL), rural boundary layer (RBL) (modified from Oke, 1997).



over non-standard surfaces at non-standard heights, to split observations between two or more locations, or to be closer than usual to buildings or waste heat exhausts. The units of measurement and the instruments used in urban areas are the same as those for other environments. Therefore, only those aspects that are unique to urban areas, or that are made difficult to handle because of the nature of cities, such as the choice of site, instrument exposure and the documentation of metadata, are covered in this chapter. The timing and frequency of observations and the coding of reports should follow appropriate standards (WMO, 1983; 1988; 1995; 2003b; 2006). With regard to automated stations and the requirements for message coding and transmission, quality control, maintenance (noting any special demands of the urban environment) and calibration, the recommendations of Part II, Chapter 1, should be followed. 11.1.1 Definitions and concepts station rationale (b)

The clarity of the reason for establishing an urban station is essential to its success. Two of the most usual reasons are the wish to represent the meteorological environment at a place for general climatological purposes and the wish to provide data in support of the needs of a particular user. In both cases, the spatial and temporal scales of interest must be defined, and, as outlined below, the siting of the station and the exposure of the instruments in each case may have to be very different.

Horizontal scales

There is no more important an input to the success of an urban station than an appreciation of the concept of scale. There are three scales of interest (Oke, 1984; Figure 11.1):
(a) Microscale: Every surface and object has its own microclimate on it and in its immediate vicinity. Surface and air temperatures may vary by several degrees over very short distances, even millimetres, and air-flow can be greatly perturbed by even small objects. Typical scales of urban microclimates relate to the dimensions of individual buildings, trees, roads, streets, courtyards, gardens, and so forth, and extend from less than 1 m to hundreds of metres. The formulation of the guidelines in Part I of this Guide specifically aims to avoid microclimatic effects. The climate station recommendations are designed to standardize all sites, as far as practical. This explains the use of a standard height of measurement, a single surface cover, minimum distances to obstacles and little horizon obstruction. The aim is to achieve climate observations that are free of extraneous microclimate signals and hence characterize local climates. With even more stringent standards, first order stations may be able to represent conditions at synoptic space and time scales. The data may be used to assess climate trends at even larger scales. Unless the objectives are very specialized, urban stations should also avoid microclimate influences; however, this is hard to achieve;
(b) Local scale: This is the scale that standard climate stations are designed to monitor. It includes landscape features such as topography, but excludes microscale effects. In urban areas this translates to mean the climate of neighbourhoods with similar types of urban development (surface cover, size and spacing of buildings, activity). The signal is the integration of a characteristic mix of microclimatic effects arising from the source area in the vicinity of the site. The source area is the portion of the surface upstream that contributes the main properties of the flux or meteorological concentration being measured (Schmid, 2002). Typical scales are one to several kilometres;
(c) Mesoscale: A city influences weather and climate at the scale of the whole city, typically tens of kilometres in extent. A single station is not able to represent this scale.

Vertical scales

An essential difference between the climate of urban areas and that of rural or airport locations is that in cities the vertical exchanges of momentum, heat and moisture do not occur at a (nearly) plane surface, but in a layer of significant thickness, called the urban canopy layer (UCL) (Figure 11.1). The height of the UCL is approximately equivalent to that of the mean height of the main roughness elements (buildings and trees), zH (see Figure 11.4 for parameter definitions). The microclimatic effects of individual surfaces and obstacles persist for a short distance away from their source and are then mixed and muted by the action of turbulent eddies. The distance required before the effect is obliterated depends on the magnitude of the effect, wind speed and stability (namely, stable, neutral or unstable).



This blending occurs both in the horizontal and the vertical. As noted, horizontal effects may persist up to a few hundred metres. In the vertical, the effects of individual features are discernible in the roughness sublayer (RSL), which extends from ground level to the blending height zr, where the blending action is complete. Rule-of-thumb estimates and field measurements indicate that zr can be as low as 1.5 zH at densely built (closely spaced) and homogeneous sites, but greater than 4 zH in low density areas (Grimmond and Oke, 1999; Rotach, 1999; Christen, 2003). An instrument placed below zr may register microclimate anomalies, but, above that, it “sees” a blended, spatially averaged signal that is representative of the local scale.

There is another height restriction to consider. This arises because each local scale surface type generates an internal boundary layer, in which the flow structure and thermodynamic properties are adapted to that surface type. The height of the layer grows with increasing fetch (the distance upwind to the edge where the transition to a distinctly different surface type occurs). The rate at which the internal boundary layer grows with fetch distance depends on the roughness and stability. In rural conditions, the height to fetch ratios might vary from as small as 1:10 in unstable conditions to as large as 1:500 in stable cases, and the ratio decreases as the roughness increases (Garratt, 1992; Wieringa, 1993). Urban areas tend towards neutral stability owing to the enhanced thermal and mechanical turbulence associated with the heat island and their large roughness. Therefore, a height to fetch ratio of about 1:100 is considered typical. The internal boundary layer height is taken above the displacement height zd, which is the reference level for flow above the blending height. (For an explanation of zd, see Figure 11.4 and Note 2 in Table 11.2.)

For example, take a hypothetical densely built district with zH of 10 m.
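The arithmetic for this hypothetical district can be sketched in a few lines. The factor 1.5 for zr and the approximation zd ≈ 0.7 zH are the rule-of-thumb figures quoted in the text for a densely built site, not universal constants:

```python
# Fetch requirement for a local-scale urban measurement, using the
# rule-of-thumb relations quoted in the text:
#   z_r   ~ 1.5 * z_H        (blending height, densely built site)
#   z_d   ~ 0.7 * z_H        (displacement height, common approximation)
#   fetch ~ 100 * (z_r - z_d)  (height-to-fetch ratio of about 1:100)

def required_fetch(z_h, zr_factor=1.5, zd_factor=0.7, fetch_ratio=100.0):
    """Return (z_r, z_d, fetch), all in metres, for mean roughness-element
    height z_h (m)."""
    z_r = zr_factor * z_h            # blending height
    z_d = zd_factor * z_h            # displacement height
    fetch = fetch_ratio * (z_r - z_d)
    return z_r, z_d, fetch

# Densely built district with z_H = 10 m:
z_r, z_d, fetch = required_fetch(10.0)
print(f"z_r = {z_r:.0f} m, z_d = {z_d:.0f} m, required fetch = {fetch:.0f} m")
```

This reproduces the 0.8 km figure worked through in the text; for a less densely built site the factors, and hence the required fetch, would differ.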
This means that zr is at least 15 m. If this height is chosen to be the measurement level, the fetch requirement over similar urban terrain is likely to be at least 0.8 km, since fetch = 100 (zr – zd) and zd will be about 7 m. This can be a significant site restriction, because the implication is that, if the urban terrain is not similar out to at least this distance around the station site, observations will not be representative of the local surface type. At less densely developed sites, where heat island and roughness effects are weaker, the fetch requirements are likely to be greater.

At heights above the blending height, but within the local internal boundary layer, measurements are within an inertial sublayer (Figure 11.1), where standard boundary layer theory applies. Such theory governs the form of the mean vertical profiles of meteorological variables (including air temperature, humidity and wind speed) and the behaviour of turbulent fluxes, spectra and statistics. This provides a basis for:
(a) The calculation of the source area (or “footprint”, see below) from which the turbulent flux or the concentration of a meteorological variable originates; hence, this defines the distance upstream for the minimum acceptable fetch;
(b) The extrapolation of a given flux or property through the inertial layer and also downwards into the RSL (and, although it is less reliable, into the UCL).
In the inertial layer, fluxes are constant with height and the mean values of meteorological properties are invariant horizontally. Hence, observations of fluxes and standard variables possess significant utility and are able to characterize the underlying local-scale environment. Extrapolation into the RSL is less prescribed.

Source areas (“footprints”)

A sensor placed above a surface “sees” only a portion of its surroundings. This is called the “source area” of the instrument, which depends on its height and the characteristics of the process transporting the surface property to the sensor. For upwelling radiation signals (short- and long-wave radiation and surface temperature viewed by an infrared thermometer), the field of view of the instrument and the geometry of the underlying surface set what is seen. By analogy, sensors such as thermometers, hygrometers, gas analysers and anemometers “see” properties such as temperature, humidity, atmospheric gases and wind speed and direction which are carried from the surface to the sensor by turbulent transport. A conceptual illustration of these source areas is given in Figure 11.2. The source area of a downfacing radiometer with its sensing element parallel to the ground is a circular patch with the instrument at its centre (Figure 11.2). The radius (r) of the circular source area contributing to the radiometer signal at height (z1) is given by Schmid and others (1991) as:

r = z1 (1/F – 1)^–1/2 (11.1)

where F is the view factor, namely the proportion of the measured flux at the sensor for which that area is responsible. Depending on its field of view, a radiometer may see only a limited circle, or it may



extend to the horizon. In the latter case, the instrument usually has a cosine response, so that towards the horizon it becomes increasingly difficult to define the actual source area seen. Hence, the use of the view factor which defines the area contributing a set proportion (often selected as 50, 90, 95, 99 or 99.5 per cent) of the instrument’s signal. The source area of a sensor that derives its signal via turbulent transport is not symmetrically distributed around the sensor location. It is elliptical in shape and is aligned in the upwind direction from the tower (Figure 11.2). If there is a wind, the effect of the surface area at the base of the mast is effectively zero, because turbulence cannot transport the influence up to the sensor level. At some distance in the upwind direction the source starts to affect the sensor; this effect rises to a peak, thereafter decaying at greater distances (for the shape in both the x and y directions see Kljun, Rotach and Schmid, 2002; Schmid, 2002). The distance upwind to the first surface area contributing to the signal, to the point of peak influence, to the furthest upwind surface influencing the measurement, and the area of the so-called “footprint” vary considerably over time. They depend on the height of measurement (larger at greater heights), surface roughness, atmospheric stability (increasing from unstable to stable) and whether a turbulent flux or a meteorological concentration is being measured (larger for the concentration) (Kljun, Rotach and Schmid, 2002).
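For the radiometer case, the circular source-area radius follows directly from the view-factor relation of Schmid and others (1991), r = z1 (1/F – 1)^–1/2, which is equivalent to r = z1 √(F/(1 – F)). A minimal sketch, with an illustrative mounting height:

```python
import math

def radiation_source_radius(z1, view_factor):
    """Radius (m) of the circular patch contributing the fraction
    `view_factor` (F) of the signal of a downfacing radiometer at height
    z1 (m): r = z1 * (1/F - 1)**-0.5, i.e. z1 * sqrt(F / (1 - F))."""
    if not 0.0 < view_factor < 1.0:
        raise ValueError("view factor must lie strictly between 0 and 1")
    return z1 * math.sqrt(view_factor / (1.0 - view_factor))

# Radiometer mounted at 2 m: half the signal comes from within one
# mounting height of the mast, while capturing 99 per cent of the signal
# requires a patch of roughly ten mounting heights in radius.
for F in (0.5, 0.9, 0.95, 0.99):
    print(f"F = {F:.2f}: r = {radiation_source_radius(2.0, F):.1f} m")
```

The rapid growth of r as F approaches 1 is why a set proportion (50, 90, 95, 99 or 99.5 per cent) has to be chosen when quoting a source area.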

Methods to calculate the dimensions of flux and concentration “footprints” are available (Schmid, 2002; Kljun and others, 2004). Although the situation illustrated in Figure 11.2 is general, it applies best to instruments placed in the inertial sublayer, well above the complications of the RSL and the complex geometry of the three-dimensional urban surface. Within the UCL, the way in which the effects of radiation and turbulent source areas decay with distance has not yet been reliably evaluated. It can be surmised that they depend on the same properties and resemble the overall forms of those in Figure 11.2. However, obvious complications arise due to the complex radiation geometry, and the blockage and channelling of flow, which are characteristic of the UCL. Undoubtedly, the immediate environment of the station is by far the most critical and the extent of the source area of convective effects grows with stability and the height of the sensor. The distance influencing screen-level (~1.5 m) sensors may be a few tens of metres in neutral conditions, less when they are unstable and perhaps more than 100 m when they are stable. At a height of 3 m, the equivalent distances probably extend up to about 300 m in the stable case. The circle of influence on a screen-level temperature or humidity sensor is thought to have a radius of about 0.5 km typically, but this is likely to depend upon the building density.
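The indicative screen-level influence distances quoted above can be collected into a rough lookup. This is only an illustrative encoding of the qualitative figures in the text (the exact numbers chosen here are interpretations of phrases such as “a few tens of metres”), not a validated footprint model such as those of Schmid (2002) or Kljun and others (2004):

```python
# Indicative upwind distances (m) influencing a screen-level temperature or
# humidity sensor in the UCL.  Values are rough readings of the qualitative
# guidance in the text and are order-of-magnitude only.
INFLUENCE_DISTANCE_M = {
    # (sensor height in m, stability): indicative distance in m
    (1.5, "unstable"): 20.0,   # "less" than a few tens of metres
    (1.5, "neutral"): 40.0,    # "a few tens of metres"
    (1.5, "stable"): 150.0,    # "perhaps more than 100 m"
    (3.0, "stable"): 300.0,    # "up to about 300 m"
}

def influence_distance(height_m, stability):
    """Look up the indicative source distance for a screen-level sensor."""
    key = (height_m, stability)
    if key not in INFLUENCE_DISTANCE_M:
        raise ValueError("no indicative figure given in the text for "
                         f"height {height_m} m, stability '{stability}'")
    return INFLUENCE_DISTANCE_M[key]

print(influence_distance(1.5, "neutral"))  # a few tens of metres
```

A real assessment of a candidate site should use a footprint calculation rather than this lookup; the table merely makes the stability dependence explicit.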


Figure 11.2. Conceptual representation of source areas contributing to sensors for radiation and turbulent fluxes or concentrations. If the sensor is a radiometer, 50 or 90 per cent of the flux originates from the area inside the respective circle. If the sensor is responding to a property of turbulent transport, 50 or 90 per cent of the signal comes from the area inside the respective ellipses. These are dynamic in the sense that they are oriented into the wind and hence move with wind direction and stability.



Measurement approaches

It follows from the preceding discussion that, if the objective of an instrumented urban site is to monitor the local-scale climate near the surface, there are two viable approaches:
(a) Locate the site in the UCL at a location surrounded by average or “typical” conditions for the urban terrain, and place the sensors at heights similar to those used at non-urban sites. This assumes that the mixing induced by flow around obstacles is sufficient to blend properties to form a UCL average at the local scale;
(b) Mount the sensors on a tall tower above the RSL and obtain blended values that can be extrapolated down into the UCL.
In general, approach (a) works best for air temperature and humidity, and approach (b) for wind speed and direction and precipitation. For radiation, the only significant requirement is for an unobstructed horizon. Urban stations, therefore, often consist of instruments deployed both below and above roof level; this requires that site assessment and description include the scales relevant to both contexts.

Urban site description

The magnitude of each urban scale does not agree precisely with those commonly given in textbooks. The scales are conferred by the dimensions of the morphometric features that make up an urban landscape. This places emphasis on the need to adequately describe the properties of urban areas which affect the atmosphere. The most important basic features are the urban structure (dimensions of the buildings and the spaces between them, the street widths and street spacing), the urban cover (built-up, paved and vegetated areas, bare soil, water), the urban fabric (construction and natural materials) and the urban metabolism (heat, water and pollutants due to human activity). Hence, the characterization of the sites of urban climate stations must take account of these descriptors, use them in selecting potential sites, and incorporate them in metadata that accurately describe the setting of the station.

These four basic features of cities tend to cluster to form characteristic urban classes. For example, most central areas of cities have relatively tall buildings that are densely packed together, so the ground is largely covered with buildings or paved surfaces made of durable materials such as stone, concrete, brick and asphalt, and there are large heat releases from furnaces, air conditioners, chimneys and vehicles. Near the other end of the spectrum there are districts with low density housing of one- or two-storey buildings of relatively light construction and considerable garden or vegetated areas, with low heat releases but perhaps large irrigation inputs.

No universally accepted scheme of urban classification for climatic purposes exists. A good approach to the built components is that of Ellefsen (1991), who developed a set of urban terrain zone (UTZ) types. He initially differentiates according to three types of building contiguity (attached (row), detached but close-set, detached and open-set). These are further divided into a total of 17 sub-types by function, location in the city, and building height, construction and age. Application of the scheme requires only aerial photography, which is generally available; the scheme has been applied in several cities around the world and seems to possess generality.

Ellefsen’s scheme can be used to describe urban structure for roughness, airflow, radiation access and screening. It can be argued that the scheme indirectly includes aspects of urban cover, fabric and metabolism, because a given structure carries with it the type of cover, materials and degree of human activity. Ellefsen’s scheme is less useful, however, when built features are scarce and there are large areas of vegetation (urban forest, low plant cover, grassland, scrub, crops), bare ground (soil or rock) and water (lakes, swamps, rivers).

A simpler scheme of urban climate zones (UCZs) is illustrated in Table 11.1. It incorporates groups of Ellefsen’s zones, plus a measure of the structure, zH/W (see Table 11.1, Note c), shown to be closely related to flow, solar shading and the heat island, and also a measure of the surface cover (% built) that is related to the degree of surface permeability.

The importance of UCZs is not their absolute accuracy to describe the site, but their ability to classify areas of a settlement into districts which are similar in their capacity to modify the local climate, and to identify potential transitions to different UCZs. Such a classification is crucial when beginning to set up an urban station, so that the spatial homogeneity criteria are met approximately for a station in the UCL or above the RSL. In what follows, it is assumed that the morphometry of the urban area, or a portion of it, has been assessed using detailed maps, and/or aerial photographs, satellite imagery (visible and/or thermal), planning documents or at least a visual survey conducted from a vehicle and/or on foot. Although



Table 11.1. Simplified classification of distinct urban forms arranged in approximate decreasing order of their ability to have an impact on local climate (Oke, 2004 unpublished)
Urban climate zone a | Roughness class b | Aspect ratio c | % built (impermeable) d

1. Intensely developed urban with detached close-set high-rise buildings with cladding, e.g. downtown towers | – | – | > 90
2. Intensely high density urban with 2–5 storey, attached or very-close set buildings often of bricks or stone, e.g. old city core | – | – | > 85
3. Highly developed, medium density urban with row or detached but close-set houses, stores and apartments, e.g. urban housing | – | – | –
4. Highly developed, low or medium density urban with large low buildings and paved parking, e.g. shopping malls, warehouses | – | – | –
5. Medium development, low density suburban with 1 or 2 storey houses, e.g. suburban houses | – | 0.2–0.6, up to > 1 with trees | –
6. Mixed use with large buildings in open landscape, e.g. institutions such as hospitals, universities, airports | – | 0.1–0.5, depends on trees | < 40
7. Semi-rural development, scattered houses in natural or agricultural areas, e.g. farms, estates | – | > 0.05, depends on trees | < 10

(– : value not legible in the source; the “Image” column of schematic sketches is omitted.)


a A simplified set of classes that includes aspects of the schemes of Auer (1978) and Ellefsen (1990/91), plus physical measures relating to wind, and thermal and moisture control (columns on the right). Approximate correspondence between UCZs and Ellefsen’s urban terrain zones is: 1 (Dc1, Dc8), 2 (A1–A4, Dc2), 3 (A5, Dc3–5, Do2), 4 (Do1, Do4, Do5), 5 (Do3), 6 (Do6), 7 (none).

b Effective terrain roughness according to the Davenport classification (Davenport and others, 2000); see Table 11.2.
c Aspect ratio = zH/W is the average height of the main roughness elements (buildings, trees) divided by their average spacing; in the city centre this is the street canyon height/width. This measure is known to be related to flow regime types (Oke, 1987) and thermal controls (solar shading and longwave screening) (Oke, 1981). Tall trees increase this measure significantly.


d Average proportion of ground plan covered by built features (buildings, roads and paved and other impervious areas); the rest of the area is occupied by pervious cover (green space, water and other natural surfaces). Permeability affects the moisture status of the ground and hence humidification and evaporative cooling potential.



land-use maps can be helpful, it should be appreciated that they depict the function and not necessarily the physical form of the settlement. The task of urban description should result in a map with areas of UCZs delineated. Herein, the UCZs as illustrated in Table 11.1 are used. The categories may have to be adapted to accommodate special urban forms characteristic of some ancient cities or of unplanned urban development found in some less-developed countries. For example, many towns and cities in Africa and Asia do not have as large a fraction of the surface covered by impervious materials, and roads may not be paved.

Choosing a location and site for an urban station

First, it is necessary to establish clearly the purpose of the station. If there is to be only one station inside the urban area, it must be decided if the aim is to monitor the greatest impact of the city, or of a more representative or typical district, or if it is to characterize a particular site (where there may be perceived to be climate problems or where future development is planned). Areas where there is the highest probability of finding maximum effects can be judged initially by reference to the ranked list of UCZ types in Table 11.1. Similarly, the likelihood that a station will be typical can be assessed using the ideas behind Table 11.1 and choosing extensive areas of similar urban development for closer investigation.

The search can be usefully refined in the case of air temperature and humidity by conducting spatial surveys, wherein the sensor is carried on foot, or mounted on a bicycle or a car, and taken through areas of interest. After several repetitions, cross-sections or isoline maps may be drawn (see Figure 11.3), revealing where areas of thermal or moisture anomaly or interest lie. Usually, the best time to do this is a few hours after sunset or before sunrise on nights with relatively calm air-flow and cloudless skies. This maximizes the potential for the differentiation of microclimate and local climate differences. It is not advisable to conduct such surveys close to sunrise or sunset, because weather variables change so rapidly at these times that meaningful spatial comparisons are difficult.

If the station is to be part of a network to characterize spatial features of the urban climate, a broader view is needed. This consideration should be informed by thinking about the typical spatial form of urban climate distributions. For example, the isolines of urban heat and moisture “islands” indeed look like the contours of their topographic namesakes (Figure 11.3). They have relatively sharp “cliffs”, often a “plateau” over much of the urban area interspersed with localized “mounds” and “basins” of warmth/coolness and moistness/dryness. These features are co-located with patches of greater or lesser development such as clusters of apartments, shops, factories or parks, open areas or water. Therefore, a decision must be made: is the aim to make a representative sample of the UCZ diversity, or is it to faithfully reflect the spatial structure? In most cases the latter is too ambitious with a fixed-station network in the UCL. This is because it would require many stations to depict the gradients near the periphery, the plateau region, and the highs and lows of the nodes of weaker and stronger than average urban development.

If measurements are to be taken from a tower, with sensors above the RSL, the blending action produces more muted spatial patterns and the question of distance of fetch to the nearest border between UCZs, and the urban-rural fringe, becomes relevant. Whereas a distance to a change in UCZ of 0.5 to 1 km may be acceptable inside the UCL, for a tower-mounted sensor the requirement is likely to be more like a few kilometres of fetch.

Figure 11.3. Typical spatial pattern of isotherms in a large city at night with calm, clear weather illustrating the heat island effect (after Oke, 1982).



Since the aim is to monitor local climate attributable to an urban area, it is necessary to avoid extraneous microclimatic influences or other local or mesoscale climatic phenomena that will complicate the urban record. Therefore, unless there is specific interest in topographically generated climate patterns, such as the effects of cold air drainage down valleys and slopes into the urban area, or the speed-up or sheltering of winds by hills and escarpments, or fog in river valleys or adjacent to water bodies, or geographically locked cloud patterns, and so on, it is sensible to avoid locations subject to such local and mesoscale effects. On the other hand, if a benefit or hazard is derived from such events, it may be relevant to design the network specifically to sample its effects on the urban climate, such as the amelioration of an overly hot city by sea or lake breezes.

11.2.2 Siting

Once a choice of UCZ type and its general location inside the urban area is made, the next step is to inspect the map, imagery and photographic evidence to narrow down candidate locations within a UCZ. Areas of reasonably homogeneous urban development without large patches of anomalous structure, cover or material are sought. A precise definition of “reasonably” is, however, not possible; almost every real urban district has its own idiosyncrasies that reduce its homogeneity at some scale. Although a complete list is therefore not possible, the following are examples of what to avoid: unusually wet patches in an otherwise dry area; individual buildings that jut up by more than half the average building height; a large paved car park in an area of irrigated gardens; a large, concentrated heat source like a heating plant or a tunnel exhaust vent. Proximity to transition zones between different UCZ types should be avoided, as should sites where there are plans for, or a likelihood of, major urban redevelopment. The level of concern about anomalous features decreases with distance away from the site itself, as discussed in relation to source areas.

In practice, for each candidate site a “footprint” should be estimated for radiation (for example, equation 11.1) and for turbulent properties. Then, key surface properties such as the mean height and density of the obstacles and characteristics of the surface cover and materials should be documented within these footprints. Their homogeneity should then be judged, either visually or using statistical methods.

Once target areas of acceptable homogeneity for a screen-level or high-level (above-RSL) station are selected, it is helpful to identify potential “friendly” site owners who could host it. If a government agency is seeking a site, it may already own land in the area which is used for other purposes or have good relations with other agencies or businesses (offices, work yards, spare land, rights of way), including schools, universities, utility facilities (electricity, telephone, pipelines) and transport arteries (roads, railways). These are good sites, because access may be permitted, and also because they often have security against vandalism and may have electrical power connections.

Building roofs have often been used as sites for meteorological observations. This may often have been based on the mistaken belief that at this elevation the instrument shelter is free from the complications of the UCL. In fact, roof-tops have their own very distinctly anomalous microclimates that lead to erroneous results. Air-flow over a building creates strong perturbations in speed, direction and gustiness which are quite unlike the flow at the same elevation away from the building or near the ground (Figure 11.5). Flat-topped buildings may actually create flows on their roofs that are counter to the main external flow, and speeds vary from extreme jetting to near calm. Roofs are also constructed of materials that are thermally rather extreme. In light winds and cloudless skies they can become very hot by day and cold by night. Hence, there is often a sharp gradient of air temperature near the roof. Furthermore, roofs are designed to be waterproof and to shed water rapidly. This, together with their openness to solar radiation and the wind, makes them anomalously dry. In general, therefore, roofs are very poor locations for air temperature, humidity, wind and precipitation observations, unless the instruments are placed on very tall masts. They can, however, be good for observing incoming radiation components.

Once the site has been chosen, it is essential that the details of the site characteristics (metadata) be fully documented (see section 11.4).

11.3 Instrument exposure

11.3.1 Modifications to standard practice

In many respects, the generally accepted standards for the exposure of meteorological instruments set out in Part I of this Guide apply to urban sites. However, there will be many occasions when it is



impossible or makes no sense to conform. This section recommends some principles that will assist in such circumstances; however, all eventualities cannot be anticipated. The recommendations here remain in agreement with the general objectives set out in Part I, Chapter 1.

Many urban stations have been placed over short grass in open locations (parks, playing fields), and as a result they are actually monitoring modified rural-type conditions, not representative urban ones. This leads to the curious finding that some rural-urban pairs of stations show no urban effect on temperature (Peterson, 2003). The guiding principle for the exposure of sensors in the UCL should be to locate them in such a manner that they monitor conditions that are representative of the environment of the selected UCZ. In cities and towns it is inappropriate to use sites similar to those which are standard in open rural areas. Instead, it is recommended that urban stations be sited over surfaces that, within a microscale radius, are representative of the local-scale urban environment. The % built category (Table 11.1) is a crude guide to the recommended underlying surface.

The requirement that most obviously cannot be met at many urban sites is the distance from obstacles: the site should be located well away from trees, buildings, walls or other obstructions (Part I, Chapter 1). Rather, it is recommended that the urban station be centred in an open space where the surrounding aspect ratio (zH/W) is approximately representative of the locality. When installing instruments at urban sites it is especially important to use shielded cables, because of the ubiquity of power lines and other sources of electrical noise at such locations.

11.3.2 Temperature

Air temperature

the lower UCL might be too well sheltered, forced ventilation of the sensor is recommended. If a network includes a mixture of sensor assemblies with and without shields and ventilation, this might contribute to inter-site differences; practices should therefore be uniform. The surface over which air temperature is measured and the exposure of the sensor assembly should follow the recommendations given in the previous section, namely, the surface should be typical of the UCZ and the thermometer screen or shield should be centred in a space with approximately average zH/W. In a very densely built-up UCZ this might mean that it is located only 5 to 10 m from buildings that are 20 to 30 m high. If the site is a street canyon, zH/W applies only to the cross-section normal to the axis of the street. The orientation of the street axis may also be relevant because of systematic sun-shade patterns. If continuous monitoring is planned, north-south oriented streets are favoured over east-west ones because there is less phase distortion, although the daytime course of temperature may be rather peaked. At non-urban stations the recommended screen height is between 1.25 and 2 m above ground level. While this is also acceptable for urban sites, it may be better to relax this requirement to allow greater heights. This should not lead to significant error in most cases, especially in