Guide to Meteorological Instruments and Methods of Observation

WMO-No. 8

Seventh edition 2008

WMO-No. 8
© World Meteorological Organization, 2008

The right of publication in print, electronic and any other form and in any language is reserved by WMO. Short extracts from WMO publications may be reproduced without authorization, provided that the complete source is clearly indicated. Editorial correspondence and requests to publish, reproduce or translate this publication in part or in whole should be addressed to:

Chairperson, Publications Board
World Meteorological Organization (WMO)
7 bis, avenue de la Paix
P.O. Box No. 2300
CH-1211 Geneva 2, Switzerland

Tel.: +41 (0) 22 730 84 03
Fax: +41 (0) 22 730 80 40
E-mail: publications@wmo.int

ISBN 978-92-63-10008-5

NOTE
The designations employed in WMO publications and the presentation of material in this publication do not imply the expression of any opinion whatsoever on the part of the Secretariat of WMO concerning the legal status of any country, territory, city or area, or of its authorities, or concerning the delimitation of its frontiers or boundaries.

Opinions expressed in WMO publications are those of the authors and do not necessarily reflect those of WMO. The mention of specific companies or products does not imply that they are endorsed or recommended by WMO in preference to others of a similar nature which are not mentioned or advertised.

PREFACE

One of the purposes of the World Meteorological Organization (WMO) is to coordinate the activities of its 188 Members in the generation of data and information on weather, climate and water, according to internationally agreed standards. With this in mind, each session of the World Meteorological Congress adopts Technical Regulations which lay down the meteorological practices and procedures to be followed by WMO Members. These Technical Regulations are supplemented by a number of Manuals and Guides which describe in more detail the practices, procedures and specifications that Members are requested to follow and implement. While Manuals contain mandatory practices, Guides such as this one contain recommended practices.

The first edition of the Guide to Meteorological Instruments and Methods of Observation was published in 1954 and consisted of twelve chapters. Since then, standardization has remained a key concern of the Commission for Instruments and Methods of Observation (CIMO) activities, and CIMO has periodically reviewed the contents of the Guide, making recommendations for additions and amendments whenever appropriate.

The present, seventh, edition is a fully revised version which includes additional topics and chapters reflecting recent technological developments. Its purpose, as with the previous editions, is to give comprehensive and up-to-date guidance on the most effective practices for carrying out meteorological observations and measurements. This edition was prepared through the collaborative efforts of 42 experts from 17 countries and was adopted by the fourteenth session of CIMO (Geneva, December 2006).

The Guide describes most instruments, systems and techniques in regular use, from the simplest to the most complex and sophisticated, but does not attempt to deal with methods and instruments used only for research or experimentally. Furthermore, the Guide is not intended to be a detailed instruction manual for use by observers and technicians, but rather, it is intended to provide the basis for the preparation of manuals by National Meteorological and Hydrological Services (NMHSs) or other interested users operating observing systems, to meet their specific needs. However, no attempt is made to specify the fully detailed design of instruments, since to do so might hinder their further development. It was instead considered preferable to restrict standardization to the essential requirements and to confine recommendations to those features which are generally most common to various configurations of a given instrument or measurement system.

Although the Guide is written primarily for NMHSs, many other organizations and research and educational institutions taking meteorological observations have found it useful, so their requirements have been kept in mind in the preparation of the Guide. Additionally, many instrument manufacturers have recognized the usefulness of the Guide in the development and production of instruments and systems especially suited to Members' needs. Because of the considerable demand for this publication, a decision was taken to make it available on the WMO website to all interested users.

Therefore, on behalf of WMO, I wish to express my gratitude to all those NMHSs, technical commissions, expert teams and individuals who have contributed to this publication.

(M. Jarraud) Secretary-General

Contents


Part I. MEASUREMENT OF METEOROLOGICAL VARIABLES
CHAPTER 1. General .... I.1–1
CHAPTER 2. Measurement of temperature .... I.2–1
CHAPTER 3. Measurement of atmospheric pressure .... I.3–1
CHAPTER 4. Measurement of humidity .... I.4–1
CHAPTER 5. Measurement of surface wind .... I.5–1
CHAPTER 6. Measurement of precipitation .... I.6–1
CHAPTER 7. Measurement of radiation .... I.7–1
CHAPTER 8. Measurement of sunshine duration .... I.8–1
CHAPTER 9. Measurement of visibility .... I.9–1
CHAPTER 10. Measurement of evaporation .... I.10–1
CHAPTER 11. Measurement of soil moisture .... I.11–1
CHAPTER 12. Measurement of upper-air pressure, temperature and humidity .... I.12–1
CHAPTER 13. Measurement of upper wind .... I.13–1
CHAPTER 14. Present and past weather; state of the ground .... I.14–1
CHAPTER 15. Observation of clouds .... I.15–1
CHAPTER 16. Measurement of ozone .... I.16–1
CHAPTER 17. Measurement of atmospheric composition .... I.17–1


Part II. OBSERVING SYSTEMS
CHAPTER 1. Measurements at automatic weather stations .... II.1–1
CHAPTER 2. Measurements and observations at aeronautical meteorological stations .... II.2–1
CHAPTER 3. Aircraft observations .... II.3–1
CHAPTER 4. Marine observations .... II.4–1
CHAPTER 5. Special profiling techniques for the boundary layer and the troposphere .... II.5–1
CHAPTER 6. Rocket measurements in the stratosphere and mesosphere .... II.6–1
CHAPTER 7. Locating the sources of atmospherics .... II.7–1
CHAPTER 8. Satellite observations .... II.8–1
CHAPTER 9. Radar measurements .... II.9–1
CHAPTER 10. Balloon techniques .... II.10–1
CHAPTER 11. Urban observations .... II.11–1
CHAPTER 12. Road meteorological measurements .... II.12–1

Part III. QUALITY ASSURANCE AND MANAGEMENT OF OBSERVING SYSTEMS
CHAPTER 1. Quality management .... III.1–1
CHAPTER 2. Sampling meteorological variables .... III.2–1
CHAPTER 3. Data reduction .... III.3–1
CHAPTER 4. Testing, calibration and intercomparison .... III.4–1
CHAPTER 5. Training of instrument specialists .... III.5–1

LIST OF CONTRIBUTORS TO THE GUIDE .... III.3–1

Part I. MEASUREMENT OF METEOROLOGICAL VARIABLES

Part I. MEASUREMENT OF METEOROLOGICAL VARIABLES
CONTENTS

CHAPTER 1. GENERAL .... I.1–1
1.1 Meteorological observations .... I.1–1
1.2 Meteorological observing systems .... I.1–2
1.3 General requirements of a meteorological station .... I.1–2
1.4 General requirements of instruments .... I.1–6
1.5 Measurement standards and definitions .... I.1–7
1.6 Uncertainty of measurements .... I.1–9
Annex 1.A. Regional centres .... I.1–17
Annex 1.B. Operational measurement uncertainty requirements and instrument performance .... I.1–19
Annex 1.C. Station exposure description .... I.1–25
References and further reading .... I.1–27

CHAPTER 2. MEASUREMENT OF TEMPERATURE .... I.2–1
2.1 General .... I.2–1
2.2 Liquid-in-glass thermometers .... I.2–4
2.3 Mechanical thermographs .... I.2–10
2.4 Electrical thermometers .... I.2–11
2.5 Radiation shields .... I.2–16
Annex. Defining the fixed points of the International Temperature Scale of 1990 .... I.2–18
References and further reading .... I.2–20

CHAPTER 3. MEASUREMENT OF ATMOSPHERIC PRESSURE .... I.3–1
3.1 General .... I.3–1
3.2 Mercury barometers .... I.3–3
3.3 Electronic barometers .... I.3–8
3.4 Aneroid barometers .... I.3–11
3.5 Barographs .... I.3–12
3.6 Bourdon-tube barometers .... I.3–13
3.7 Barometric change .... I.3–13
3.8 General exposure requirements .... I.3–14
3.9 Barometer exposure .... I.3–14
3.10 Comparison, calibration and maintenance .... I.3–15
3.11 Adjustment of barometer readings to other levels .... I.3–20
3.12 Pressure tendency and pressure tendency characteristic .... I.3–21
Annex 3.A. Correction of barometer readings to standard conditions .... I.3–22
Annex 3.B. Regional standard barometers .... I.3–25
References and further reading .... I.3–26

CHAPTER 4. MEASUREMENT OF HUMIDITY .... I.4–1
4.1 General .... I.4–1
4.2 The psychrometer .... I.4–6
4.3 The hair hygrometer .... I.4–12
4.4 The chilled-mirror dewpoint hygrometer .... I.4–14
4.5 The lithium chloride heated condensation hygrometer (dew cell) .... I.4–17
4.6 Electrical resistive and capacitive hygrometers .... I.4–20
4.7 Hygrometers using absorption of electromagnetic radiation .... I.4–21
4.8 Safety .... I.4–21
4.9 Standard instruments and calibration .... I.4–23
Annex 4.A. Definitions and specifications of water vapour in the atmosphere .... I.4–26
Annex 4.B. Formulae for the computation of measures of humidity .... I.4–29
References and further reading .... I.4–30

CHAPTER 5. MEASUREMENT OF SURFACE WIND .... I.5–1
5.1 General .... I.5–1
5.2 Estimation of wind .... I.5–3
5.3 Simple instrumental methods .... I.5–4
5.4 Cup and propeller sensors .... I.5–4
5.5 Wind-direction vanes .... I.5–5
5.6 Other wind sensors .... I.5–5
5.7 Sensors and sensor combinations for component resolution .... I.5–6
5.8 Data-processing methods .... I.5–6
5.9 Exposure of wind instruments .... I.5–8
5.10 Calibration and maintenance .... I.5–11
Annex. The effective roughness length .... I.5–12
References and further reading .... I.5–13

CHAPTER 6. MEASUREMENT OF PRECIPITATION .... I.6–1
6.1 General .... I.6–1
6.2 Siting and exposure .... I.6–3
6.3 Non-recording precipitation gauges .... I.6–3
6.4 Precipitation gauge errors and corrections .... I.6–6
6.5 Recording precipitation gauges .... I.6–8
6.6 Measurement of dew, ice accumulation and fog precipitation .... I.6–11
6.7 Measurement of snowfall and snow cover .... I.6–14
Annex 6.A. Precipitation intercomparison sites .... I.6–18
Annex 6.B. Suggested correction procedures for precipitation measurements .... I.6–19
References and further reading .... I.6–20

CHAPTER 7. MEASUREMENT OF RADIATION .... I.7–1
7.1 General .... I.7–1
7.2 Measurement of direct solar radiation .... I.7–5
7.3 Measurement of global and diffuse sky radiation .... I.7–11
7.4 Measurement of total and long-wave radiation .... I.7–19
7.5 Measurement of special radiation quantities .... I.7–24
7.6 Measurement of UV radiation .... I.7–25
Annex 7.A. Nomenclature of radiometric and photometric quantities .... I.7–31
Annex 7.B. Meteorological radiation quantities, symbols and definitions .... I.7–33
Annex 7.C. Specifications for world, regional and national radiation centres .... I.7–35
Annex 7.D. Useful formulae .... I.7–37
Annex 7.E. Diffuse sky radiation – correction for a shading ring .... I.7–39
References and further reading .... I.7–40


CHAPTER 8. MEASUREMENT OF SUNSHINE DURATION .... I.8–1
8.1 General .... I.8–1
8.2 Instruments and sensors .... I.8–3
8.3 Exposure of sunshine detectors .... I.8–7
8.4 General sources of error .... I.8–7
8.5 Calibration .... I.8–7
8.6 Maintenance .... I.8–9
Annex. Algorithm to estimate sunshine duration from direct global irradiance measurements .... I.8–10
References and further reading .... I.8–11

CHAPTER 9. MEASUREMENT OF VISIBILITY .... I.9–1
9.1 General .... I.9–1
9.2 Visual estimation of meteorological optical range .... I.9–5
9.3 Instrumental measurement of the meteorological optical range .... I.9–8
References and further reading .... I.9–15

CHAPTER 10. MEASUREMENT OF EVAPORATION .... I.10–1
10.1 General .... I.10–1
10.2 Atmometers .... I.10–2
10.3 Evaporation pans and tanks .... I.10–3
10.4 Evapotranspirometers (lysimeters) .... I.10–6
10.5 Estimation of evaporation from natural surfaces .... I.10–7
References and further reading .... I.10–10

CHAPTER 11. MEASUREMENT OF SOIL MOISTURE .... I.11–1
11.1 General .... I.11–1
11.2 Gravimetric direct measurement of soil water content .... I.11–3
11.3 Soil water content: indirect methods .... I.11–4
11.4 Soil water potential instrumentation .... I.11–6
11.5 Remote sensing of soil moisture .... I.11–8
11.6 Site selection and sample size .... I.11–9
References and further reading .... I.11–10

CHAPTER 12. MEASUREMENT OF UPPER-AIR PRESSURE, TEMPERATURE AND HUMIDITY .... I.12–1
12.1 General .... I.12–1
12.2 Radiosonde electronics .... I.12–6
12.3 Temperature sensors .... I.12–7
12.4 Pressure sensors .... I.12–9
12.5 Relative humidity sensors .... I.12–12
12.6 Ground station equipment .... I.12–15
12.7 Radiosonde operations .... I.12–16
12.8 Radiosonde errors .... I.12–18
12.9 Comparison, calibration and maintenance .... I.12–28
12.10 Computations and reporting .... I.12–31
Annex 12.A. Accuracy requirements (standard error) for upper-air measurements for synoptic meteorology, interpreted for conventional upper-air and wind measurements .... I.12–34


Annex 12.B. Performance limits for upper wind and radiosonde temperature, relative humidity and geopotential height .... I.12–35
Annex 12.C. Guidelines for organizing radiosonde intercomparisons and for the establishment of test sites .... I.12–40
References and further reading .... I.12–44

CHAPTER 13. MEASUREMENT OF UPPER WIND .... I.13–1
13.1 General .... I.13–1
13.2 Upper-wind sensors and instruments .... I.13–4
13.3 Measurement methods .... I.13–10
13.4 Exposure of ground equipment .... I.13–12
13.5 Sources of error .... I.13–13
13.6 Comparison, calibration and maintenance .... I.13–18
13.7 Corrections .... I.13–19
References and further reading .... I.13–21

CHAPTER 14. PRESENT AND PAST WEATHER; STATE OF THE GROUND .... I.14–1
14.1 General .... I.14–1
14.2 Observation of present and past weather .... I.14–2
14.3 State of the ground .... I.14–5
14.4 Special phenomena .... I.14–5
Annex. Criteria for light, moderate and heavy precipitation intensity .... I.14–7
References and further reading .... I.14–8

CHAPTER 15. OBSERVATION OF CLOUDS .... I.15–1
15.1 General .... I.15–1
15.2 Estimation and observation of cloud amount, height and type .... I.15–3
15.3 Instrumental measurements of cloud amount .... I.15–5
15.4 Measurement of cloud height using a searchlight .... I.15–5
15.5 Measurement of cloud height using a balloon .... I.15–7
15.6 Rotating-beam ceilometer .... I.15–7
15.7 Laser ceilometer .... I.15–8
References and further reading .... I.15–11

CHAPTER 16. MEASUREMENT OF OZONE .... I.16–1
16.1 General .... I.16–1
16.2 Surface ozone measurements .... I.16–3
16.3 Total ozone measurements .... I.16–4
16.4 Measurements of the vertical profile of ozone .... I.16–11
16.5 Corrections to ozone measurements .... I.16–16
16.6 Aircraft and satellite observations .... I.16–17
Annex 16.A. Units for total and local ozone .... I.16–18
Annex 16.B. Measurement theory .... I.16–20
References and further reading .... I.16–22

CHAPTER 17. MEASUREMENT OF ATMOSPHERIC COMPOSITION .... I.17–1
17.1 General .... I.17–1
17.2 Measurement of specific variables .... I.17–1
17.3 Quality assurance .... I.17–10
References and further reading .... I.17–12

CHAPTER 1

GENERAL

1.1 METEOROLOGICAL OBSERVATIONS

1.1.1 General

Meteorological (and related environmental and geophysical) observations are made for a variety of reasons. They are used for the real-time preparation of weather analyses, forecasts and severe weather warnings, for the study of climate, for local weather-dependent operations (for example, local aerodrome flying operations, construction work on land and at sea), for hydrology and agricultural meteorology, and for research in meteorology and climatology. The purpose of the Guide to Meteorological Instruments and Methods of Observation is to support these activities by giving advice on good practices for meteorological measurements and observations.

There are many other sources of additional advice, and users should refer to the references placed at the end of each chapter for a bibliography of theory and practice relating to instruments and methods of observation. The references also contain national practices, national and international standards, and specific literature. They also include reports published by the World Meteorological Organization (WMO) for the Commission for Instruments and Methods of Observation (CIMO) on technical conferences, instrumentation, and international comparisons of instruments. Many other Manuals and Guides issued by WMO refer to particular applications of meteorological observations (see especially those relating to the Global Observing System (WMO, 2003a; 1989), aeronautical meteorology (WMO, 1990), hydrology (WMO, 1994), agricultural meteorology (WMO, 1981) and climatology (WMO, 1983)).

Quality assurance and maintenance are of special interest for instrument measurements. Throughout this Guide many recommendations are made in order to meet the stated performance requirements. Particularly, Part III of this Guide is dedicated to quality assurance and management of observing systems. It is recognized that quality management and training of instrument specialists is of utmost importance. Therefore, on the recommendation of CIMO,1 several regional associations of WMO have set up Regional Instrument Centres (RICs) to maintain standards and provide advice. Their terms of reference and locations are given in Annex 1.A.

The definitions and standards stated in this Guide (see section 1.5.1) will always conform to internationally adopted standards. Basic documents to be referred to are the International Meteorological Vocabulary (WMO, 1992a) and the International Vocabulary of Basic and General Terms in Metrology (ISO, 1993a).

1 Recommended by the Commission for Instruments and Methods of Observation at its ninth session (1985) through Recommendation 19.

1.1.2 Representativeness

The representativeness of an observation is the degree to which it accurately describes the value of the variable needed for a specific purpose. Therefore, it is not a fixed quality of any observation, but results from joint appraisal of instrumentation, measurement interval and exposure against the requirements of some particular application. For instance, synoptic observations should typically be representative of an area up to 100 km around the station, but for small-scale or local applications the considered area may have dimensions of 10 km or less.

In particular, applications have their own preferred timescales and space scales for averaging, station density and resolution of phenomena: small for agricultural meteorology, large for global long-range forecasting. Forecasting scales are closely related to the timescales of the phenomena; thus, shorter-range weather forecasts require more frequent observations from a denser network over a limited area in order to detect any small-scale phenomena and their quick development. Using various sources (WMO, 2003a; 2001; Orlanski, 1975), horizontal meteorological scales may be classified as follows, with a factor two uncertainty:
(a) Microscale (less than 100 m) for agricultural meteorology, for example, evaporation;
(b) Toposcale or local scale (100 m–3 km), for example, air pollution, tornadoes;
(c) Mesoscale (3–100 km), for example, thunderstorms, sea and mountain breezes;
(d) Large scale (100–3 000 km), for example, fronts, various cyclones, cloud clusters;
(e) Planetary scale (larger than 3 000 km), for example, long upper tropospheric waves.
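As an illustration only (it is not part of the Guide), the following Python sketch maps a horizontal dimension in metres to the scale categories (a) to (e) above; the boundary values are those listed, and the factor two uncertainty means that phenomena near a boundary may reasonably be assigned to either class.

# Illustrative sketch only: classify a horizontal scale (in metres)
# according to categories (a) to (e) above.
def horizontal_scale_class(length_m: float) -> str:
    """Return the scale category for a horizontal dimension in metres."""
    if length_m < 100:               # (a) microscale: less than 100 m
        return "microscale"
    if length_m < 3_000:             # (b) toposcale or local scale: 100 m - 3 km
        return "toposcale (local scale)"
    if length_m < 100_000:           # (c) mesoscale: 3 - 100 km
        return "mesoscale"
    if length_m < 3_000_000:         # (d) large scale: 100 - 3 000 km
        return "large scale"
    return "planetary scale"         # (e) planetary scale: larger than 3 000 km


print(horizontal_scale_class(20_000))   # a thunderstorm cell: "mesoscale"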


Section 1.6 discusses the required and achievable uncertainties of instrument systems. The stated achievable uncertainties can be obtained with good instrument systems that are properly operated, but are not always obtained in practice. Good observing practices require skill, training, equipment and support, which are not always available in sufficient degree. The measurement intervals required vary by application: minutes for aviation, hours for agriculture, and days for climate description. Data storage arrangements are a compromise between available capacity and user needs.

Good exposure, which is representative on scales from a few metres to 100 km, is difficult to achieve (see section 1.3). Errors of unrepresentative exposure may be much larger than those expected from the instrument system in isolation. A station in a hilly or coastal location is likely to be unrepresentative on the large scale or mesoscale. However, good homogeneity of observations in time may enable users to employ data even from unrepresentative stations for climate studies.

1.1.3 Metadata

The purpose of this Guide and related WMO publications is to ensure reliability of observations by standardization. However, local resources and circumstances may cause deviations from the agreed standards of instrumentation and exposure. A typical example is that of regions with much snowfall, where the instruments are mounted higher than usual so that they can be useful in winter as well as summer. Users of meteorological observations often need to know the actual exposure, type and condition of the equipment and its operation; and perhaps the circumstances of the observations. This is now particularly significant in the study of climate, in which detailed station histories have to be examined. Metadata (data about data) should be kept concerning all of the station establishment and maintenance matters described in section 1.3, and concerning changes which occur, including calibration and maintenance history and the changes in terms of exposure and staff (WMO, 2003b). Metadata are especially important for elements which are particularly sensitive to exposure, such as precipitation, wind and temperature. One very basic form of metadata is information on the existence, availability and quality of meteorological data and of the metadata about them.
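By way of illustration, a minimal station-metadata record along the lines described above might look like the following Python sketch; the field names and structure are assumptions for the example, not a WMO-prescribed schema (see WMO, 2003b, for the recommended content).

# Hypothetical sketch of a minimal station-metadata record; the field names
# are illustrative and do not represent a WMO-prescribed format.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MetadataEvent:
    date: str          # ISO date of the change, e.g. "2005-06-01"
    element: str       # affected element, e.g. "precipitation"
    description: str   # e.g. "gauge relocated 15 m north of enclosure"


@dataclass
class StationMetadata:
    station_id: str
    latitude_deg: float
    longitude_deg: float
    elevation_m: float
    exposure_notes: str                           # description of the current exposure
    history: List[MetadataEvent] = field(default_factory=list)

    def log_change(self, date: str, element: str, description: str) -> None:
        """Record an exposure, calibration or maintenance change."""
        self.history.append(MetadataEvent(date, element, description))


# Example values are invented for illustration only.
station = StationMetadata("12345", 46.812, 6.943, 491.0, "open grass enclosure")
station.log_change("2008-05-14", "temperature", "screen replaced; electrical thermometer installed")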

1.2 METEOROLOGICAL OBSERVING SYSTEMS

The requirements for observational data may be met using in situ measurements or remote-sensing (including space-borne) systems, according to the ability of the various sensing systems to measure the elements needed. WMO (2003a) describes the requirements in terms of global, regional and national scales and according to the application area. The Global Observing System, designed to meet these requirements, is composed of the surface-based subsystem and the space-based subsystem. The surface-based subsystem comprises a wide variety of types of stations according to the particular application (for example, surface synoptic station, upper-air station, climatological station, and so on). The space-based subsystem comprises a number of spacecraft with on-board sounding missions and the associated ground segment for command, control and data reception. The succeeding paragraphs and chapters in this Guide deal with the surface-based system and, to a lesser extent, with the space-based subsystem.

To derive certain meteorological observations by automated systems, for example, present weather, a so-called "multi-sensor" approach is necessary, where an algorithm is applied to compute the result from the outputs of several sensors.
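A deliberately simplified Python sketch of such a multi-sensor algorithm is given below; the sensors, thresholds and weather classes are assumptions chosen only to show the principle of combining several sensor outputs, not an operational method.

# Deliberately simplified multi-sensor sketch: the thresholds and classes
# below are illustrative assumptions, not an operational algorithm.
def present_weather(precip_rate_mm_h: float,
                    air_temp_c: float,
                    visibility_m: float) -> str:
    """Combine several sensor outputs into a crude present-weather class."""
    if precip_rate_mm_h > 0.1:
        return "snow" if air_temp_c <= 0.0 else "rain"
    if visibility_m < 1_000:
        return "fog"
    if visibility_m < 10_000:
        return "mist or haze"
    return "no significant weather"


print(present_weather(0.0, 2.5, 600))   # no precipitation, low visibility: "fog"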

1.3 GENERAL REQUIREMENTS OF A METEOROLOGICAL STATION

The requirements for elements to be observed according to the type of station and observing network are detailed in WMO (2003a). In this section, the observational requirements of a typical climatological station or a surface synoptic network station are considered.

The following elements are observed at a station making surface observations (the chapters refer to Part I of the Guide):
Present weather (Chapter 14)
Past weather (Chapter 14)
Wind direction and speed (Chapter 5)
Cloud amount (Chapter 15)
Cloud type (Chapter 15)
Cloud-base height (Chapter 15)
Visibility (Chapter 9)
Temperature (Chapter 2)
Relative humidity (Chapter 4)


Atmospheric pressure (Chapter 3)
Precipitation (Chapter 6)
Snow cover (Chapter 6)
Sunshine and/or solar radiation (Chapters 7, 8)
Soil temperature (Chapter 2)
Evaporation (Chapter 10)

Instruments exist which can measure all of these elements, except cloud type. However, with current technology, instruments for present and past weather, cloud amount and height, and snow cover are not able to make observations of the whole range of phenomena, whereas human observers are able to do so.

Some meteorological stations take upper-air measurements (Part I, Chapters 12 and 13), measurements of soil moisture (Part I, Chapter 11), ozone (Part I, Chapter 16) and atmospheric composition (Part I, Chapter 17), and some make use of special instrument systems as described in Part II of this Guide. Details of observing methods and appropriate instrumentation are contained in the succeeding chapters of this Guide.

1.3.1 Automatic weather stations

Most of the elements required for synoptic, climatological or aeronautical purposes can be measured by automatic instrumentation (Part II, Chapter 1). As the capabilities of automatic systems increase, the ratio of purely automatic weather stations to observer-staffed weather stations (with or without automatic instrumentation) increases steadily. The guidance in the following paragraphs regarding siting and exposure, changes of instrumentation, and inspection and maintenance applies equally to automatic weather stations and staffed weather stations.

1.3.2 Observers

Meteorological observers are required for a number of reasons, as follows:
(a) To make synoptic and/or climatological observations to the required uncertainty and representativeness with the aid of appropriate instruments;
(b) To maintain instruments, metadata documentation and observing sites in good order;
(c) To code and dispatch observations (in the absence of automatic coding and communication systems);
(d) To maintain in situ recording devices, including the changing of charts when provided;
(e) To make or collate weekly and/or monthly records of climatological data where automatic systems are unavailable or inadequate;
(f) To provide supplementary or back-up observations when automatic equipment does not make observations of all required elements, or when it is out of service;
(g) To respond to public and professional enquiries.

Observers should be trained and/or certified by an authorized Meteorological Service to establish their competence to make observations to the required standards. They should have the ability to interpret instructions for the use of instrumental and manual techniques that apply to their own particular observing systems. Guidance on the instrument training requirements for observers will be given in Part III, Chapter 5.

1.3.3 Siting and exposure

1.3.3.1 Site selection

Meteorological observing stations are designed so that representative measurements (or observations) can be taken according to the type of station involved. Thus, a station in the synoptic network should make observations to meet synoptic-scale requirements, whereas an aviation meteorological observing station should make observations that describe the conditions specific to the local (aerodrome) site. Where stations are used for several purposes, for example, aviation, synoptic and climatological purposes, the most stringent requirement will dictate the precise location of an observing site and its associated sensors. A detailed study on siting and exposure is published in WMO (1993a).

Figure 1.1. Layout of an observing station in the northern hemisphere showing minimum distances between installations

As an example, the following considerations apply to the selection of site and instrument exposure requirements for a typical synoptic or climatological station in a regional or national network:
(a) Outdoor instruments should be installed on a level piece of ground, preferably no smaller than 25 m x 25 m where there are many installations, but in cases where there are relatively few installations (as in Figure 1.1) the area may be considerably smaller, for example, 10 m x 7 m (the enclosure). The ground should be covered with short grass or a surface representative of the locality, and surrounded by open fencing or palings to exclude unauthorized persons. Within the enclosure, a bare patch of ground of about 2 m x 2 m is reserved for observations of the state of the ground and of soil temperature at depths of equal to or less than 20 cm (Part I, Chapter 2) (soil temperatures at depths greater than 20 cm can be measured outside this bare patch of ground). An example of the layout of such a station is given in Figure 1.1 (taken from WMO, 1989);
(b) There should be no steeply sloping ground in the vicinity, and the site should not be in a hollow. If these conditions are not met, the observations may show peculiarities of entirely local significance;
(c) The site should be well away from trees, buildings, walls or other obstructions. The distance of any such obstacle (including fencing) from the raingauge should not be less than twice the height of the object above the rim of the gauge, and preferably four times the height (this rule is illustrated in the sketch after this list);
(d) The sunshine recorder, raingauge and anemometer must be exposed according to their requirements, preferably on the same site as the other instruments;
(e) It should be noted that the enclosure may not be the best place from which to estimate the wind speed and direction; another observing point, more exposed to the wind, may be desirable;
(f) Very open sites which are satisfactory for most instruments are unsuitable for raingauges. For such sites, the rainfall catch is reduced in conditions other than light winds and some degree of shelter is needed;
(g) If objects such as trees or buildings in the surroundings of the instrument enclosure, perhaps at some distance, obstruct the horizon significantly, alternative viewpoints should be selected for observations of sunshine or radiation;
(h) The position used for observing cloud and visibility should be as open as possible and command the widest possible view of the sky and the surrounding country;
(i) At coastal stations, it is desirable that the station command a view of the open sea. However, the station should not be too near the edge of a cliff because the wind eddies created by the cliff will affect the wind and precipitation measurements;
(j) Night observations of cloud and visibility are best made from a site unaffected by extraneous lighting.
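The obstacle-distance rule in item (c) above can be checked numerically; the following Python sketch is an illustration only and is not part of the Guide.

# Illustrative check of the siting rule in item (c): an obstacle should be
# at least twice (preferably four times) its height above the gauge rim away.
def raingauge_siting_check(obstacle_height_m: float,
                           gauge_rim_height_m: float,
                           distance_m: float) -> str:
    """Classify obstacle separation as 'preferred', 'acceptable' or 'too close'."""
    effective_height = obstacle_height_m - gauge_rim_height_m
    if effective_height <= 0:
        return "preferred"          # obstacle does not rise above the gauge rim
    if distance_m >= 4 * effective_height:
        return "preferred"          # at least four times the height
    if distance_m >= 2 * effective_height:
        return "acceptable"         # at least twice the height
    return "too close"


# A 5 m tree, gauge rim at 0.3 m, 12 m away: acceptable but not preferred.
print(raingauge_siting_check(5.0, 0.3, 12.0))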

It is obvious that some of the above considerations are somewhat contradictory and require compromise solutions. Detailed information appropriate to specific instruments and measurements is given in the succeeding chapters.

1.3.3.2 Coordinates of the station

The position of a station referred to in the World Geodetic System 1984 (WGS-84) Earth Geodetic Model 1996 (EGM96) must be accurately known and recorded.2 The coordinates of a station are:
(a) The latitude in degrees with a resolution of 1 in 1 000;
(b) The longitude in degrees with a resolution of 1 in 1 000;
(c) The height of the station above mean sea level,3 namely, the elevation of the station, to the nearest metre.

These coordinates refer to the plot on which the observations are taken and may not be the same as those of the town, village or airfield after which the station is named. The elevation of the station is defined as the height above mean sea level of the ground on which the raingauge stands or, if there is no raingauge, the ground beneath the thermometer screen. If there is neither raingauge nor screen, it is the average level of terrain in the vicinity of the station. If the station reports pressure, the elevation to which the station pressure relates must be separately specified. It is the datum level to which barometric reports at the station refer; such barometric values being termed "station pressure" and understood to refer to the given level for the purpose of maintaining continuity in the pressure records (WMO, 1993b).

If a station is located at an aerodrome, other elevations must be specified (see Part II, Chapter 2, and WMO, 1990). Definitions of measures of height and mean sea level are given in WMO (1992a).

2 For an explanation of the WGS-84 and recording issues, see ICAO, 2002.
3 Mean sea level (MSL) is defined in WMO, 1992a. The fixed reference level of MSL should be a well-defined geoid, like the WGS-84 Earth Geodetic Model 1996 (EGM96) [Geoid: the equipotential surface of the Earth's gravity field which best fits, in a least squares sense, global MSL].
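For illustration, the coordinate resolutions stated above can be applied as in the following Python sketch; the class, field names and example values are hypothetical.

# Illustrative only: hold station coordinates at the resolutions stated above
# (1 in 1 000 of a degree for latitude/longitude, nearest metre for elevation).
from dataclasses import dataclass


@dataclass(frozen=True)
class StationCoordinates:
    latitude_deg: float    # positive north
    longitude_deg: float   # positive east
    elevation_m: int       # height of the station above mean sea level

    @classmethod
    def from_survey(cls, lat: float, lon: float, elev: float) -> "StationCoordinates":
        """Round surveyed values to the resolutions required for reporting."""
        return cls(round(lat, 3), round(lon, 3), round(elev))


# Invented survey values, rounded to 0.001 degree and 1 m.
print(StationCoordinates.from_survey(46.24712, 6.13894, 412.6))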

1.3.4 Changes of instrumentation and homogeneity

The characteristics of an observing site will generally change over time, for example, through the growth of trees or erection of buildings on adjacent plots. Sites should be chosen to minimize these effects, if possible. Documentation of the geography of the site and its exposure should be kept and regularly updated as a component of the metadata (see Annex 1.C and WMO, 2003b).

It is especially important to minimize the effects of changes of instrument and/or changes in the siting of specific instruments. Although the static characteristics of new instruments might be well understood, when they are deployed operationally they can introduce apparent changes in site climatology. In order to guard against this eventuality, observations from new instruments should be compared over an extended interval (at least one year; see the Guide to Climatological Practices (WMO, 1983)) before the old measurement system is taken out of service. The same applies when there has been a change of site. Where this procedure is impractical at all sites, it is essential to carry out comparisons at selected representative sites to attempt to deduce changes in measurement data which might be a result of changing technology or enforced site changes.

1.3.5 Inspection and maintenance

1.3.5.1 Inspection of stations

All synoptic land stations and principal climatological stations should be inspected no less than once every two years. Agricultural meteorological and special stations should be inspected at intervals sufficiently short to ensure the maintenance of a high standard of observations and the correct functioning of instruments.


The principal objective of such inspections is to ascertain that:
(a) The siting and exposure of instruments are known, acceptable and adequately documented;
(b) Instruments are of the approved type, in good order, and regularly verified against standards, as necessary;
(c) There is uniformity in the methods of observation and the procedures for calculating derived quantities from the observations;
(d) The observers are competent to carry out their duties;
(e) The metadata information is up to date.

Further information on the standardization of instruments is given in section 1.5.

1.3.5.2 Maintenance

Observing sites and instruments should be maintained regularly so that the quality of observations does not deteriorate significantly between station inspections. Routine (preventive) maintenance schedules include regular "housekeeping" at observing sites (for example, grass cutting and cleaning of exposed instrument surfaces) and manufacturers' recommended checks on automatic instruments. Routine quality control checks carried out at the station or at a central point should be designed to detect equipment faults at the earliest possible stage. Depending on the nature of the fault and the type of station, the equipment should be replaced or repaired according to agreed priorities and timescales. As part of the metadata, it is especially important that a log be kept of instrument faults, exposure changes, and remedial action taken where data are used for climatological purposes.

Further information on station inspection and management can be found in WMO (1989).
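As an illustration of the kind of routine quality control check mentioned above (the design of such checks is treated in Part III of this Guide), the following Python sketch flags two common equipment faults: readings outside a plausible range and a sensor that repeats a constant value. The limits used are example values only, not recommendations from the Guide.

# Illustrative routine quality-control check; the limits are example values.
from typing import List, Sequence


def basic_qc(values: Sequence[float],
             lower: float,
             upper: float,
             max_repeats: int = 6) -> List[str]:
    """Return a list of detected problems for a series of readings."""
    problems: List[str] = []
    if any(v < lower or v > upper for v in values):
        problems.append("value outside plausible range")
    if len(values) > max_repeats and len(set(values[-max_repeats:])) == 1:
        problems.append("sensor appears stuck (constant output)")
    return problems


# Ten identical temperature readings: flagged as a possible stuck sensor.
print(basic_qc([21.4] * 10, lower=-80.0, upper=60.0))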

1.4 GENERAL REQUIREMENTS OF INSTRUMENTS

1.4.1 Desirable characteristics

The most important requirements for meteorological instruments are the following:
(a) Uncertainty, according to the stated requirement for the particular variable;
(b) Reliability and stability;
(c) Convenience of operation, calibration and maintenance;
(d) Simplicity of design which is consistent with requirements;
(e) Durability;
(f) Acceptable cost of instrument, consumables and spare parts.

With regard to the first two requirements, it is important that an instrument should be able to maintain a known uncertainty over a long period. This is much better than having a high initial uncertainty that cannot be retained for long under operating conditions.

Initial calibrations of instruments will, in general, reveal departures from the ideal output, necessitating corrections to observed data during normal operations. It is important that the corrections should be retained with the instruments at the observing site and that clear guidance be given to observers for their use.

Simplicity, strength of construction, and convenience of operation and maintenance are important since most meteorological instruments are in continuous use year in, year out, and may be located far away from good repair facilities. Robust construction is especially desirable for instruments that are wholly or partially exposed to the weather. Adherence to such characteristics will often reduce the overall cost of providing good observations, outweighing the initial cost.
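The application of retained calibration corrections, as described above, can be illustrated with a minimal Python sketch; a simple linear correction is assumed here purely for illustration.

# Minimal sketch: apply a calibration correction retained with the instrument.
# A linear correction is assumed purely for illustration.
def corrected_reading(raw: float, offset: float, scale: float = 1.0) -> float:
    """Return the corrected value for a raw instrument reading."""
    return scale * raw + offset


# A barometer found 0.3 hPa low at calibration: correction applied to each reading.
print(corrected_reading(1012.7, offset=0.3))   # corrected pressure, about 1013.0 hPa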

1.4.2 Recording instruments

In many of the recording instruments used in meteorology, the motion of the sensing element is magnified by levers that move a pen on a chart on a clock-driven drum. Such recorders should be as free as possible from friction, not only in the bearings, but also between the pen and paper. Some means of adjusting the pressure of the pen on the paper should be provided, but this pressure should be reduced to a minimum consistent with a continuous legible trace. Means should also be provided in clock-driven recorders for making time marks.

In the design of recording instruments that will be used in cold climates, particular care must be taken to ensure that their performance is not adversely affected by extreme cold and moisture, and that routine procedures (time marks, and so forth) can be carried out by the observers while wearing gloves. Recording instruments should be compared frequently with instruments of the direct-reading type.


An increasing number of instruments make use of electronic recording in magnetic media or in semiconductor microcircuits. Many of the same considerations given for bearings, friction and coldweather servicing apply to the mechanical components of such instruments.

1.5 Measurement standards and definitions

1.5.1 Definitions of standards of measurement

The term "standard" and other similar terms denote the various instruments, methods and scales used to establish the uncertainty of measurements. A nomenclature for standards of measurement is given in the International Vocabulary of Basic and General Terms in Metrology, which was prepared simultaneously by the International Bureau of Weights and Measures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization, the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the International Organization of Legal Metrology, and issued by ISO (1993a). Some of the definitions are as follows:

(Measurement) standard: A material measure, measuring instrument, reference material or measuring system intended to define, realize, conserve or reproduce a unit or one or more values of a quantity to serve as a reference.
Examples: 1 kg mass standard; 100 Ω standard resistor.
Notes:
1. A set of similar material measures or measuring instruments that, through their combined use, constitutes a standard is called a "collective standard".
2. A set of standards of chosen values that, individually or in combination, provides a series of values of quantities of the same kind is called a "group standard".

Collective standard: A set of similar material measures or measuring instruments fulfilling, by their combined use, the role of a standard.
Example: The World Radiometric Reference.
Notes:
1. A collective standard is usually intended to provide a single value of a quantity.
2. The value provided by a collective standard is an appropriate mean of the values provided by the individual instruments.

International standard: A standard recognized by an international agreement to serve internationally as the basis for assigning values to other standards of the quantity concerned.

National standard: A standard recognized by a national decision to serve, in a country, as the basis for assigning values to other standards of the same quantity.

Primary standard: A standard that is designated or widely acknowledged as having the highest metrological qualities and whose value is accepted without reference to other standards of the same quantity.

Secondary standard: A standard whose value is assigned by comparison with a primary standard of the same quantity.

Reference standard: A standard, generally having the highest metrological quality available at a given location or in a given organization, from which the measurements taken there are derived.

Working standard: A standard that is used routinely to calibrate or check material measures, measuring instruments or reference materials.
Notes:
1. A working standard is usually calibrated against a reference standard.
2. A working standard used routinely to ensure that measurements are being carried out correctly is called a "check standard".

Transfer standard: A standard used as an intermediary to compare standards.
Note: The term "transfer device" should be used when the intermediary is not a standard.

Travelling standard: A standard, sometimes of special construction, intended for transport between different locations.

Traceability: A property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties.


Calibration: The set of operations which establish, under specified conditions, the relationship between values indicated by a measuring instrument or measuring system, or values represented by a material measure, and the corresponding known values of a measurand (the physical quantity being measured).
Notes:
1. The result of a calibration permits the estimation of errors of indication of the measuring instrument, measuring system or material measure, or the assignment of marks on arbitrary scales.
2. A calibration may also determine other metrological properties.
3. The result of a calibration may be recorded in a document, sometimes called a calibration certificate or calibration report.
4. The result of a calibration is sometimes expressed as a calibration factor, or as a series of calibration factors in the form of a calibration curve.

1.5.2 Procedures for standardization
In order to control effectively the standardization of meteorological instruments on a national and international scale, a system of national and regional standards has been adopted by WMO. The locations of the regional standards for pressure and radiation are given in Part I, Chapter 3 (Annex 3.B), and Part I, Chapter 7 (Annex 7.C), respectively. In general, regional standards are designated by the regional associations, and national standards by the individual Members. Unless otherwise specified, instruments designated as regional and national standards should be compared by means of travelling standards at least once every five years. It is not essential for the instruments used as travelling standards to possess the uncertainty of primary or secondary standards; they should, however, be sufficiently robust to withstand transportation without changing their calibration. Similarly, the instruments in operational use at a Service should be periodically compared directly or indirectly with the national standards. Comparisons of instruments within a Service should, as far as possible, be made at the time when the instruments are issued to a station and subsequently during each regular inspection of the station, as recommended in section 1.3.5. Portable standard instruments used by inspectors should be checked against the standard instruments of the Service before and after each tour of inspection. Comparisons should be carried out between operational instruments of different designs (or principles of operation) to ensure homogeneity of measurements over space and time (see section 1.3.4).

1.5.3 Symbols, units and constants

1.5.3.1 Symbols and units

Instrument measurements produce numerical values. The purpose of these measurements is to obtain physical or meteorological quantities representing the state of the local atmosphere. For meteorological practices, instrument readings represent variables, such as "atmospheric pressure", "air temperature" or "wind speed". A variable with symbol a is usually represented in the form a = {a}·[a], where {a} stands for the numerical value and [a] stands for the symbol for the unit. General principles concerning quantities, units and symbols are stated by ISO (1993b) and IUPAP (1987). The International System of Units (SI) should be used as the system of units for the evaluation of meteorological elements included in reports for international exchange. This system is published and updated by BIPM (1998). Guides for the use of SI are issued by NIST (1995) and ISO (1993b). Variables not defined as an international symbol by the International System of Quantities (ISQ), but commonly used in meteorology, can be found in the International Meteorological Tables (WMO, 1966) and relevant chapters in this Guide.

The following units should be used for meteorological observations:
(a) Atmospheric pressure, p, in hectopascals (hPa);4
(b) Temperature, t, in degrees Celsius (°C) or T in kelvin (K);
Note: The Celsius and kelvin temperature scales should conform to the actual definition of the International Temperature Scale (for 2004: ITS-90, see BIPM, 1990).

(c) Wind speed, in both surface and upper-air observations, in metres per second (m s–1);
(d) Wind direction in degrees clockwise from north or on the scale 0–36, where 36 is the wind from the north and 09 the wind from the east (°);
(e) Relative humidity, U, in per cent (%);
(f) Precipitation (total amount) in millimetres (mm) or kilograms per m2 (kg m–2);5
4 The unit "pascal" is the principal SI derived unit for the pressure quantity. The unit and symbol "bar" is a unit outside the SI system; in every document where it is used, this unit (bar) should be defined in relation to the SI. Its continued use is not encouraged. By definition, 1 mbar (millibar) ≡ 1 hPa (hectopascal).
5 Assuming that 1 mm equals 1 kg m–2 independent of temperature.


(g) Precipitation intensity, Ri, in millimetres per hour (mm h–1) or kilograms per m2 per second (kg m–2 s–1);6
(h) Snow water equivalent in kilograms per m2 (kg m–2);
(i) Evaporation in millimetres (mm);
(j) Visibility in metres (m);
(k) Irradiance in watts per m2 and radiant exposure in joules per m2 (W m–2, J m–2);
(l) Duration of sunshine in hours (h);
(m) Cloud height in metres (m);
(n) Cloud amount in oktas;
(o) Geopotential, used in upper-air observations, in standard geopotential metres (m').
Note: Height, level or altitude are presented with respect to a well-defined reference. Typical references are Mean Sea Level (MSL), station altitude or the 1013.2 hPa plane.
6 Recommendation 3 (CBS-XII), Annex 1, adopted through Resolution 4 (EC-LIII).

The standard geopotential metre is defined as 0.980 665 of the dynamic metre; for levels in the troposphere, the geopotential is close in numerical value to the height expressed in metres.

1.5.3.2 Constants

The following constants have been adopted for meteorological use:
(a) Absolute temperature of the normal ice point T0 = 273.15 K (t = 0.00°C);
(b) Absolute temperature of the triple point of water T = 273.16 K (t = 0.01°C), by definition of ITS-90;
(c) Standard normal gravity (gn) = 9.806 65 m s–2;
(d) Density of mercury at 0°C = 1.359 51 · 104 kg m–3.
The values of other constants are given in WMO (1973; 1988).

1.6 Uncertainty of measurements

1.6.1 Meteorological measurements

1.6.1.1 General

This section deals with definitions that are relevant to the assessment of accuracy and the measurement of uncertainties in physical measurements, and concludes with statements of required and achievable uncertainties in meteorology. First, it discusses some issues that arise particularly in meteorological measurements.

The term measurement is carefully defined in section 1.6.2, but in most of this Guide it is used less strictly to mean the process of measurement or its result, which may also be called an "observation". A sample is a single measurement, typically one of a series of spot or instantaneous readings of a sensor system, from which an average or smoothed value is derived to make an observation. For a more theoretical approach to this discussion, see Part III, Chapters 2 and 3.

The terms accuracy, error and uncertainty are carefully defined in section 1.6.2, which explains that accuracy is a qualitative term, the numerical expression of which is uncertainty. This is good practice and is the form followed in this Guide. Formerly, the common and less precise use of accuracy was as in "an accuracy of ±x", which should read "an uncertainty of x".

1.6.1.2 Sources and estimates of error

The sources of error in the various meteorological measurements are discussed in specific detail in the following chapters of this Guide, but in general they may be seen as accumulating through the chain of traceability and the measurement conditions.

It is convenient to take air temperature as an example to discuss how errors arise, but it is not difficult to adapt the following argument to pressure, wind and other meteorological quantities. For temperature, the sources of error in an individual measurement are as follows:
(a) Errors in the international, national and working standards, and in the comparisons made between them. These may be assumed to be negligible for meteorological applications;
(b) Errors in the comparisons made between the working, travelling and/or check standards and the field instruments in the laboratory or in liquid baths in the field (if that is how the traceability is established). These are small if the practice is good (say ±0.1 K uncertainty at the 95 per cent confidence level, including the errors in (a) above), but may quite easily be larger, depending on the skill of the operator and the quality of the equipment;
(c) Non-linearity, drift, repeatability and reproducibility in the field thermometer and its transducer (depending on the type of thermometer element);
(d) The effectiveness of the heat transfer between the thermometer element and the air in the thermometer shelter, which should ensure that the element is at thermal equilibrium with the air (related to system time-constant or lag coefficient). In a well-designed aspirated shelter this error will be very small, but it may be large otherwise;


(e) The effectiveness of the thermometer shelter, which should ensure that the air in the shelter is at the same temperature as the air immediately surrounding it. In a well-designed case this error is small, but the difference between an effective and an ineffective shelter may be 3°C or more in some circumstances;
(f) The exposure, which should ensure that the shelter is at a temperature which is representative of the region to be monitored. Nearby sources and heat sinks (buildings, other unrepresentative surfaces below and around the shelter) and topography (hills, land-water boundaries) may introduce large errors. The station metadata should contain a good and regularly updated description of exposure (see Annex 1.C) to inform data users about possible exposure errors.
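The Guide defers to ISO (1995) for combining such error components into an overall figure. As an illustration only, the short Python sketch below combines hypothetical standard uncertainties for the sources (a) to (f) above in quadrature (root-sum-of-squares), assuming the components are independent, and expands the result with a coverage factor k = 2; the numerical values are invented placeholders, not figures from this Guide.

```python
import math

# Hypothetical standard uncertainties (K) for the error sources (a)-(f)
# discussed above; the values are placeholders for illustration only.
components = {
    "laboratory_calibration": 0.05,      # sources (a) and (b)
    "sensor_drift_nonlinearity": 0.08,   # source (c)
    "heat_transfer_lag": 0.05,           # source (d)
    "screen_effect": 0.15,               # source (e)
    "exposure": 0.20,                    # source (f)
}

# Combined standard uncertainty, assuming independent components (ISO, 1995).
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (about 95% confidence).
U = 2.0 * u_c
print(f"combined standard uncertainty: {u_c:.2f} K, expanded (k=2): {U:.2f} K")
```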

Systematic and random errors both arise at all the above-mentioned stages. The effects of the error sources (d) to (f) can be kept small if operations are very careful and if convenient terrain for siting is available; otherwise these error sources may contribute to a very large overall error. However, they are sometimes overlooked in the discussion of errors, as though the laboratory calibration of the sensor could define the total error completely.

Establishing the true value is difficult in meteorology (Linacre, 1992). Well-designed instrument comparisons in the field may establish the characteristics of instruments to give a good estimate of uncertainty arising from stages (a) to (e) above. If station exposure has been documented adequately, the effects of imperfect exposure can be corrected systematically for some parameters (for example, wind; see WMO, 2002) and should be estimated for others.

Comparing station data against numerically analysed fields using neighbouring stations is an effective operational quality control procedure, if there are sufficient reliable stations in the region. Differences between the individual observations at the station and the values interpolated from the analysed field are due to errors in the field as well as to the performance of the station. However, over a period, the average error at each point in the analysed field may be assumed to be zero if the surrounding stations are adequate for a sound analysis. In that case, the mean and standard deviation of the differences between the station and the analysed field may be calculated, and these may be taken as the errors in the station measurement system (including effects of exposure). The uncertainty in the estimate of the mean value in the long term may, thus, be made quite small (if the circumstances at the station do not change), and this is the basis of climate change studies.
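As a minimal sketch of the comparison just described, and assuming simple paired series are available, the following Python fragment computes the mean and the standard deviation of the differences between station observations and values interpolated from an analysed field; both series are invented for illustration.

```python
import statistics

# Hypothetical paired values: station observations and the corresponding
# values interpolated from a numerically analysed field (same units).
station = [12.1, 11.8, 13.0, 12.6, 11.9, 12.4]
analysed = [11.9, 12.0, 12.7, 12.5, 12.2, 12.3]

differences = [s - a for s, a in zip(station, analysed)]

bias = statistics.mean(differences)      # systematic part (station minus field)
spread = statistics.stdev(differences)   # random part (sample standard deviation)
print(f"bias: {bias:+.2f}, standard deviation of differences: {spread:.2f}")
```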
1.6.2 Definitions of measurements and their errors

The following terminology relating to the accuracy of measurements is taken from ISO (1993a), which contains many definitions applicable to the practices of meteorological observations. ISO (1995) gives very useful and detailed practical guidance on the calculation and expression of uncertainty in measurements.

Measurement: A set of operations having the objective of determining the value of a quantity.
Note: The operations may be performed automatically.

Result of a measurement: Value attributed to a measurand (the physical quantity that is being measured), obtained by measurement.
Notes: 1. When a result is given, it should be made clear whether it refers to the indication, the uncorrected result or the corrected result, and whether several values are averaged. 2. A complete statement of the result of a measurement includes information about the uncertainty of the measurement.

Corrected result: The result of a measurement after correction for systematic error.

Value (of a quantity): The magnitude of a particular quantity generally expressed as a unit of measurement multiplied by a number.
Example: Length of a rod: 5.34 m.

True value (of a quantity): A value consistent with the definition of a given particular quantity.
Notes:
1. This is a value that would be obtained by a perfect measurement.
2. True values are by nature indeterminate.

Accuracy (of measurement): The closeness of the agreement between the result of a measurement and a true value of the measurand.
Notes:
1. "Accuracy" is a qualitative concept.
2. The term "precision" should not be used for "accuracy".

Repeatability (of results of measurements): The closeness of the agreement between the results of successive measurements of the same measurand carried out under the same measurement conditions.
Notes:
1. These conditions are called repeatability conditions.
2. Repeatability conditions include: (a) the same measurement procedure; (b) the same observer; (c) the same measuring instrument used under the same conditions (including weather); (d) the same location; (e) repetition over a short period of time.
3. Repeatability may be expressed quantitatively in terms of the dispersion characteristics of the results.

Reproducibility (of results of measurements): The closeness of the agreement between the results of measurements of the same measurand carried out under changed measurement conditions.
Notes:
1. A valid statement of reproducibility requires specification of the conditions changed.
2. The changed conditions may include: (a) the principle of measurement; (b) the method of measurement; (c) the observer; (d) the measuring instrument; (e) the reference standard; (f) the location; (g) the conditions of use (including weather); (h) the time.
3. Reproducibility may be expressed quantitatively in terms of the dispersion characteristics of the results.
4. Here, results are usually understood to be corrected results.

Error (of measurement): The result of a measurement minus a true value of the measurand.
Note: Since a true value cannot be determined, in practice a conventional true value is used.

Deviation: The value minus its conventional true value.

Random error: The result of a measurement minus the mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions.
Notes:
1. Random error is equal to error minus systematic error.
2. Because only a finite number of measurements can be taken, it is possible to determine only an estimate of random error.

Systematic error: A mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions minus a true value of the measurand.
Notes:
1. Systematic error is equal to error minus random error.
2. Like true value, systematic error and its causes cannot be completely known.

Correction: The value added algebraically to the uncorrected result of a measurement to compensate for a systematic error.

Uncertainty (of measurement): A variable associated with the result of a measurement that characterizes the dispersion of the values that could be reasonably attributed to the measurand.
Notes:
1. The variable may be, for example, a standard deviation (or a given multiple thereof), or the half-width of an interval having a stated level of confidence.
2. Uncertainty of measurement comprises, in general, many components. Some of these components may be evaluated from the statistical distribution of the results of a series of measurements and can be characterized by experimental standard deviations. The other components, which can also be characterized by standard deviations, are evaluated from assumed probability distributions based on experience or other information.
3. It is understood that the result of the measurement is the best estimate of the value of the measurand, and that all components of uncertainty, including those arising from systematic effects, such as components associated with corrections and reference standards, contribute to the dispersion.

1.6.3 Characteristics of instruments

Some other properties of instruments which must be understood when considering their uncertainty are taken from ISO (1993a).

Sensitivity: The change in the response of a measuring instrument divided by the corresponding change in the stimulus.
Note: Sensitivity may depend on the value of the stimulus.


Discrimination: The ability of a measuring instrument to respond to small changes in the value of the stimulus.

Resolution: A quantitative expression of the ability of an indicating device to distinguish meaningfully between closely adjacent values of the quantity indicated.

Hysteresis: The property of a measuring instrument whereby its response to a given stimulus depends on the sequence of preceding stimuli.

Stability (of an instrument): The ability of an instrument to maintain its metrological characteristics constant with time.

Drift: The slow variation with time of a metrological characteristic of a measuring instrument.

Response time: The time interval between the instant when a stimulus is subjected to a specified abrupt change and the instant when the response reaches and remains within specified limits around its final steady value.

The following other definitions are used frequently in meteorology:

Statements of response time: The time for 90 per cent of the step change is often given. The time for 50 per cent of the step change is sometimes referred to as the half-time.

Calculation of response time: In most simple systems, the response to a step change is:

Y = A(1 − e^(−t/τ))   (1.1)

where Y is the change after elapsed time t; A is the amplitude of the step change applied; t is the elapsed time from the step change; and τ is a characteristic variable of the system having the dimension of time. The variable τ is referred to as the time-constant or the lag coefficient. It is the time taken, after a step change, for the instrument to reach 1/e of the final steady reading. In other systems, the response is more complicated and will not be considered here (see also Part III, Chapter 2).

Lag error: The error that a set of measurements may possess due to the finite response time of the observing instrument.
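As an illustration of equation 1.1, the Python sketch below evaluates the fractional response of a simple first-order system after a step change; the time-constant value is an arbitrary example, and the sketch only restates the relation given above.

```python
import math

def step_response(t, amplitude=1.0, tau=20.0):
    """Response of a simple first-order system to a step change
    applied at t = 0 (equation 1.1): Y = A * (1 - exp(-t / tau))."""
    return amplitude * (1.0 - math.exp(-t / tau))

tau = 20.0  # example time-constant (s), e.g. a thermometer in a screen
for elapsed in (tau, 2 * tau, 3 * tau):
    frac = step_response(elapsed, amplitude=1.0, tau=tau)
    print(f"t = {elapsed:5.1f} s -> {100 * frac:5.1f}% of the step change")
# After t = tau the response is 1 - 1/e, about 63% of the final value.
```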



1.6.4 The measurement uncertainties of a single instrument

ISO (1995) should be used for the expression and calculation of uncertainties. It gives a detailed practical account of definitions and methods of reporting, and a comprehensive description of suitable statistical methods, with many illustrative examples.

1.6.4.1 The statistical distributions of observations

To determine the uncertainty of any individual measurement, a statistical approach is to be considered in the first place. For this purpose, the following definitions are stated (ISO, 1993; 1995):
(a) Standard uncertainty;
(b) Expanded uncertainty;
(c) Variance, standard deviation;
(d) Statistical coverage interval.

If n comparisons of an operational instrument are made with the measured variable and all other significant variables held constant, if the best estimate of the true value is established by use of a reference standard, and if the measured variable has a Gaussian distribution,7 the results may be displayed as in Figure 1.2.

7 However, note that several meteorological variables do not follow a Gaussian distribution. See section 1.6.4.2.3.

(Figure 1.2. The distribution of data in an instrument comparison)

In this figure, T is the true value, O is the mean of the n values O observed with one instrument, and σ is the standard deviation of the observed values with respect to their mean values. In this situation, the following characteristics can be identified:
(a) The systematic error, often termed bias, given by the algebraic difference O – T. Systematic errors cannot be eliminated but may often be reduced. A correction factor can be applied to compensate for the systematic effect. Typically, appropriate calibrations and adjustments should be performed to eliminate the systematic errors of sensors. Systematic errors due to environmental or siting effects can only be reduced;

(b) The random error, which arises from unpredictable or stochastic temporal and spatial variations. The measure of this random effect can be expressed by the standard deviation σ determined after n measurements, where n should be large enough. In principle, σ is a measure for the uncertainty of O;
(c) The accuracy of measurement, which is the closeness of the agreement between the result of a measurement and a true value of the measurand. The accuracy of a measuring instrument is the ability to give responses close to a true value. Note that "accuracy" is a qualitative concept;
(d) The uncertainty of measurement, which represents a parameter associated with the result of a measurement, that characterizes the dispersion of the values that could be reasonably attributed to the measurand. The uncertainties associated with the random and systematic effects that give rise to the error can be evaluated to express the uncertainty of measurement.

1.6.4.2 Estimating the true value

In normal practice, observations are used to make an estimate of the true value. If a systematic error does not exist or has been removed from the data, the true value can be approximated by taking the mean of a very large number of carefully executed independent measurements. When fewer measurements are available, their mean has a distribution of its own and only certain limits within which the true value can be expected to lie can be indicated. In order to do this, it is necessary to choose a statistical probability (level of confidence) for the limits, and the error distribution of the means must be known. A very useful and clear explanation of this notion and related subjects is given by Natrella (1966). Further discussion is given by Eisenhart (1963).

1.6.4.2.1 Estimating the true value – n large

When the number of n observations is large, the distribution of the means of samples is Gaussian, even when the observational errors themselves are not. In this situation, or when the distribution of the means of samples is known to be Gaussian for other reasons, the limits between which the true value of the mean can be expected to lie are obtained from:

Upper limit: LU = X + k·σ/√n   (1.2)
Lower limit: LL = X − k·σ/√n   (1.3)

where X is the average of the observations O corrected for systematic error; σ is the standard deviation of the whole population; and k is a factor, according to the chosen level of confidence, which can be calculated using the normal distribution function. Some values of k are as follows:

Level of confidence    90%      95%      99%
k                      1.645    1.960    2.575

The level of confidence used in the table above is for the condition that the true value will not be outside the one particular limit (upper or lower) to be computed. When stating the level of confidence that the true value will lie between both limits, both the upper and lower outside zones have to be considered. With this in mind, it can be seen that k takes the value 1.96 for a 95 per cent probability, and that the true value of the mean lies between the limits LU and LL.
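A minimal Python sketch of equations 1.2 and 1.3, assuming the systematic error has been removed and σ is known; the numerical values are illustrative only.

```python
import math

def confidence_limits(mean, sigma, n, k=1.96):
    """Upper and lower limits for the true value of the mean
    (equations 1.2 and 1.3); k = 1.96 gives a two-sided 95% level."""
    half_width = k * sigma / math.sqrt(n)
    return mean - half_width, mean + half_width

# Illustrative values: corrected mean of 100 observations with sigma = 0.5.
lower, upper = confidence_limits(mean=15.2, sigma=0.5, n=100, k=1.96)
print(f"true value of the mean expected between {lower:.2f} and {upper:.2f}")
```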

1.6.4.2.2 Estimating the true value – n small

When n is small, the means of samples conform to Student's t distribution provided that the observational errors have a Gaussian or near-Gaussian distribution. In this situation, and for a chosen level of confidence, the upper and lower limits can be obtained from:

Upper limit: LU ≈ X + t·σ̂/√n   (1.4)
Lower limit: LL ≈ X − t·σ̂/√n   (1.5)

where t is a factor (Student's t) which depends upon the chosen level of confidence and the number n of measurements; and σ̂ is the estimate of the standard deviation of the whole population, made from the measurements obtained, using:

σ̂² = Σ(i=1 to n) (Xi − X)²/(n − 1) = n·σ0²/(n − 1)   (1.6)

where Xi is an individual value Oi corrected for systematic error. Some values of t are as follows:

Level of confidence    90%      95%      99%
df = 1                 6.314    12.706   63.657
df = 4                 2.132    2.776    4.604
df = 8                 1.860    2.306    3.355
df = 60                1.671    2.000    2.660

where df is the degrees of freedom related to the number of measurements by df = n – 1. The level of confidence used in this table is for the condition that the true value will not be outside the one particular limit (upper or lower) to be computed. When stating the level of confidence that the true value will lie between the two limits, allowance has to be made as in the case in which n is large. With this in mind, it can be seen that t takes the value 2.306 for a 95 per cent probability that the true value lies between the limits LU and LL, when the estimate is made from nine measurements (df = 8). The values of t approach the values of k as n becomes large, and it can be seen that the values of k are very nearly equalled by the values of t when df equals 60. For this reason, tables of k (rather than tables of t) are quite often used when the number of measurements of a mean value is greater than 60 or so.
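For small n, the same limits follow from equations 1.4 to 1.6 using Student's t. The sketch below, which assumes scipy is available, obtains t for a two-sided 95 per cent level and applies it to nine invented sample values.

```python
import math
import statistics
from scipy.stats import t as student_t

# Nine invented measurements, already corrected for systematic error.
samples = [10.3, 10.1, 10.4, 10.2, 10.5, 10.1, 10.3, 10.2, 10.4]

n = len(samples)
mean = statistics.mean(samples)
sigma_hat = statistics.stdev(samples)   # estimate with n - 1 in the denominator (eq. 1.6)

df = n - 1                              # degrees of freedom
t_factor = student_t.ppf(0.975, df)     # two-sided 95% level; about 2.306 for df = 8

half_width = t_factor * sigma_hat / math.sqrt(n)   # equations 1.4 and 1.5
print(f"true value of the mean expected in "
      f"{mean - half_width:.3f} .. {mean + half_width:.3f}")
```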

1.6.4.2.3 Estimating the true value – additional remarks

Investigators should consider whether or not the distribution of errors is likely to be Gaussian. The distribution of some variables themselves, such as sunshine, visibility, humidity and ceiling, is not Gaussian and their mathematical treatment must, therefore, be made according to rules valid for each particular distribution (Brooks and Carruthers, 1953).

In practice, observations contain both random and systematic errors. In every case, the observed mean value has to be corrected for the systematic error insofar as it is known. When doing this, the estimate of the true value remains inaccurate because of the random errors as indicated by the expressions and because of any unknown component of the systematic error. Limits should be set to the uncertainty of the systematic error and should be added to those for random errors to obtain the overall uncertainty. However, unless the uncertainty of the systematic error can be expressed in probability terms and combined suitably with the random error, the level of confidence is not known. It is desirable, therefore, that the systematic error be fully determined.

1.6.4.3 Expressing the uncertainty

If random and systematic effects are recognized, but reduction or corrections are not possible or not applied, the resulting uncertainty of the measurement should be estimated. This uncertainty is determined after an estimation of the uncertainty arising from random effects and from imperfect correction of the result for systematic effects. It is common practice to express the uncertainty as "expanded uncertainty" in relation to the "statistical coverage interval". To be consistent with common practice in metrology, the 95 per cent confidence level, or k = 2, should be used for all types of measurements, namely:

U = k·σ = 2·σ   (1.7)

As a result, the true value, defined in section 1.6.2, will be expressed as T = X ± U = X ± 2·σ.

1.6.4.4 Measurements of discrete values

While the state of the atmosphere may be described well by physical variables or quantities, a number of meteorological phenomena are expressed in terms of discrete values. Typical examples of such values are the detection of sunshine, precipitation or lightning and freezing precipitation. All these parameters can only be expressed by "yes" or "no". For a number of parameters, all of which are members of the group of present weather phenomena, more than two possibilities exist. For instance, discrimination between drizzle, rain, snow, hail and their combinations is required when reporting present weather. For these practices, uncertainty calculations like those stated above are not applicable. Some of these parameters are related to a numerical threshold value (for example, sunshine detection using direct radiation intensity), and the determination of the uncertainty of any derived variable (for example, sunshine duration) can be calculated from the estimated uncertainty of the source variable (for example, direct radiation intensity).

However, this method is applicable only for derived parameters, and not for the typical present weather phenomena. Although a simple numerical approach cannot be presented, a number of statistical techniques are available to determine the quality of such observations. Such techniques are based on comparisons of two data sets, with one set defined as a reference. Such a comparison results in a contingency matrix, representing the cross-related frequencies of the mutual phenomena. In its most simple form, when a variable is Boolean ("yes" or "no"), such a matrix is a two by two matrix with the number of equal occurrences in the elements of the diagonal axis and the "missing hits" and "false alarms" in the other elements. Such a matrix makes it possible to derive verification scores or indices to be representative for the quality of the observation. This technique is described by Murphy and Katz (1985). An overview is given by Kok (2000).
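As a sketch of the contingency-matrix approach for Boolean ("yes"/"no") parameters, the following Python fragment builds the two-by-two matrix from a test series and a reference series and derives two commonly used verification indices, the probability of detection and the false-alarm ratio; the data and the particular choice of indices are illustrative and are not prescribed by this Guide.

```python
# Hypothetical Boolean series: observed (test) and reference detections
# of a phenomenon such as precipitation (True = "yes", False = "no").
test      = [True, False, True, True, False, False, True, False]
reference = [True, False, False, True, False, True, True, False]

hits = misses = false_alarms = correct_negatives = 0
for obs, ref in zip(test, reference):
    if obs and ref:
        hits += 1
    elif ref and not obs:
        misses += 1                      # "missing hits"
    elif obs and not ref:
        false_alarms += 1
    else:
        correct_negatives += 1

pod = hits / (hits + misses)                 # probability of detection
far = false_alarms / (hits + false_alarms)   # false-alarm ratio
print(f"contingency matrix: {hits=} {misses=} {false_alarms=} {correct_negatives=}")
print(f"POD = {pod:.2f}, FAR = {far:.2f}")
```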

1.6.5 Accuracy requirements

1.6.5.1 General

The uncertainty with which a meteorological variable should be measured varies with the specific purpose for which the measurement is required. In general, the limits of performance of a measuring device or system will be determined by the variability of the element to be measured on the spatial and temporal scales appropriate to the application.

Any measurement can be regarded as made up of two parts: the signal and the noise. The signal constitutes the quantity which is to be determined, and the noise is the part which is irrelevant. The noise may arise in several ways: from observational error, because the observation is not made at the right time and place, or because short-period or small-scale irregularities occur in the observed quantity which are irrelevant to the observations and need to be smoothed out. Assuming that the observational error could be reduced at will, the noise arising from other causes would set a limit to the accuracy. Further refinement in the observing technique would improve the measurement of the noise but would not give much better results for the signal. At the other extreme, an instrument – the error of which is greater than the amplitude of the signal itself – can give little or no information about the signal. Thus, for various purposes, the amplitudes of the noise and the signal serve, respectively, to determine:
(a) The limits of performance beyond which improvement is unnecessary;
(b) The limits of performance below which the data obtained would be of negligible value.
This argument, defining and determining limits (a) and (b) above, was developed extensively for upper-air data by WMO (1970). However, statements of requirements are usually derived not from such reasoning but from perceptions of practically attainable performance, on the one hand, and the needs of the data users, on the other.

1.6.5.2 Required and achievable performance

The performance of a measuring system includes its reliability, capital, recurrent and lifetime cost, and spatial resolution, but the performance under discussion here is confined to uncertainty (including scale resolution) and resolution in time. Various statements of requirements have been made, and both needs and capability change with time. The statements given in Annex 1.B are the most authoritative at the time of writing, and may be taken as useful guides for development, but they are not fully definitive.

The requirements for the variables most commonly used in synoptic, aviation and marine meteorology, and in climatology are summarized in Annex 1.B.8 It gives requirements only for surface measurements that are exchanged internationally. Details on the observational data requirements for Global Data-processing and Forecasting System Centres for global and regional exchange are given in WMO (1992b). The uncertainty requirement for wind measurements is given separately for speed and direction because that is how wind is reported.

The ability of individual sensors or observing systems to meet the stated requirements is changing constantly as instrumentation and observing technology advance. The characteristics of typical sensors or systems currently available are given in Annex 1.B.9 It should be noted that the achievable operational uncertainty in many cases does not meet the stated requirements. For some of the quantities, these uncertainties are achievable only with the highest quality equipment and procedures. Uncertainty requirements for upper-air measurements are dealt with in Part I, Chapter 12.

8 Established by the CBS Expert Team on Requirements for Data from Automatic Weather Stations (2004) and approved by the president of CIMO for inclusion in this edition of the Guide after consultation with the presidents of the other technical commissions.
9 Established by the CIMO Expert Team on Surface Technology and Measurement Techniques (2004) and confirmed for inclusion in this Guide by the president of CIMO.


ANNEX 1.A
REGIONAL CENTRES

1. Considering the need for the regular calibration and maintenance of meteorological instruments to meet the increasing needs for high-quality meteorological and hydrological data, the need for building the hierarchy of the traceability of measurements to the International System of Units (SI) standards, Members' requirements for the standardization of meteorological and related environmental instruments, the need for international instrument comparisons and evaluations in support of worldwide data compatibility and homogeneity, the need for training instrument experts and the role played by Regional Instrument Centres (RICs) in the Global Earth Observing System of Systems, the Natural Disaster Prevention and Mitigation Programme and other WMO cross-cutting programmes, it has been recommended that:10

A. Regional Instrument Centres with full capabilities and functions should have the following capabilities to carry out their corresponding functions:

Capabilities:
(a) A RIC must have, or have access to, the necessary facilities and laboratory equipment to perform the functions necessary for the calibration of meteorological and related environmental instruments;
(b) A RIC must maintain a set of meteorological standard instruments and establish the traceability of its own measurement standards and measuring instruments to the SI;
(c) A RIC must have qualified managerial and technical staff with the necessary experience to fulfil its functions;
(d) A RIC must develop its individual technical procedures for the calibration of meteorological and related environmental instruments using calibration equipment employed by the RIC;
(e) A RIC must develop its individual quality assurance procedures;
(f) A RIC must participate in, or organize, interlaboratory comparisons of standard calibration instruments and methods;
(g) A RIC must, when appropriate, utilize the resources and capabilities of the Region according to the Region's best interests;
(h) A RIC must, as far as possible, apply international standards applicable for calibration laboratories, such as ISO/IEC 17025;
(i) A recognized authority must assess a RIC, at least every five years, to verify its capabilities and performance;

Corresponding functions:
(j) A RIC must assist Members of the Region in calibrating their national meteorological standards and related environmental monitoring instruments;
(k) A RIC must participate in, or organize, WMO and/or regional instrument intercomparisons, following relevant CIMO recommendations;
(l) According to relevant recommendations on the WMO Quality Management Framework, a RIC must make a positive contribution to Members regarding the quality of measurements;
(m) A RIC must advise Members on enquiries regarding instrument performance, maintenance and the availability of relevant guidance materials;
(n) A RIC must actively participate, or assist, in the organization of regional workshops on meteorological and related environmental instruments;
(o) The RIC must cooperate with other RICs in the standardization of meteorological and related environmental measurements;
(p) A RIC must regularly inform Members and report,11 on an annual basis, to the president of the regional association and to the WMO Secretariat on the services offered to Members and activities carried out;

B. Regional Instrument Centres with basic capabilities and functions should have the following capabilities to carry out their corresponding functions:

Capabilities:
(a) A RIC must have the necessary facilities and laboratory equipment to perform the functions necessary for the calibration of meteorological and related environmental instruments;
(b) A RIC must maintain a set of meteorological standard instruments12 and establish the traceability of its own measurement standards and measuring instruments to the SI;
(c) A RIC must have qualified managerial and technical staff with the necessary experience to fulfil its functions;
(d) A RIC must develop its individual technical procedures for the calibration of meteorological and related environmental instruments using calibration equipment employed by the RIC;
(e) A RIC must develop its individual quality assurance procedures;
(f) A RIC must participate in, or organize, interlaboratory comparisons of standard calibration instruments and methods;
(g) A RIC must, when appropriate, utilize the resources and capabilities of the Region according to the Region's best interests;
(h) A RIC must, as far as possible, apply international standards applicable for calibration laboratories, such as ISO/IEC 17025;
(i) A recognized authority must assess a RIC, at least every five years, to verify its capabilities and performance;

Corresponding functions:
(j) A RIC must assist Members of the Region in calibrating their national standard meteorological and related environmental monitoring instruments according to capabilities (b);
(k) According to relevant recommendations on the WMO Quality Management Framework, a RIC must make a positive contribution to Members regarding the quality of measurements;
(l) A RIC must advise Members on enquiries regarding instrument performance, maintenance and the availability of relevant guidance materials;
(m) The RIC must cooperate with other RICs in the standardization of meteorological and related environmental instruments;
(n) A RIC must regularly inform Members and report,13 on an annual basis, to the president of the regional association and to the WMO Secretariat on the services offered to Members and activities carried out.

2. The following RICs have been designated by the regional associations concerned: Algiers (Algeria), Cairo (Egypt), Casablanca (Morocco), Nairobi (Kenya) and Gaborone (Botswana) for RA I; Beijing (China) and Tsukuba (Japan) for RA II; Buenos Aires (Argentina) for RA III; Bridgetown (Barbados), Mount Washington (United States) and San José (Costa Rica) for RA IV; Manila (Philippines) and Melbourne (Australia) for RA V; Bratislava (Slovakia), Ljubljana (Slovenia) and Trappes (France) for RA VI.

10 Recommended by the Commission for Instruments and Methods of Observation at its fourteenth session, held in 2006.
11 A Web-based approach is recommended.
12 For calibrating one or more of the following variables: temperature, humidity, pressure or others specified by the Region.
13 A Web-based approach is recommended.

ANNEX 1.B
OPERATIONAL MEASUREMENT UNCERTAINTY REQUIREMENTS AND INSTRUMENT PERFORMANCE
Columns: (1) Variable; (2) Range; (3) Reported resolution; (4) Mode of measurement/observation; (5) Required measurement uncertainty; (6) Sensor time constant; (7) Output averaging time; (8) Achievable measurement uncertainty; (9) Remarks.

1. Temperature

1.1 Air temperature
Range: –80 – +60°C. Reported resolution: 0.1 K. Mode: I.
Required measurement uncertainty: 0.3 K for ≤ –40°C; 0.1 K for > –40°C and ≤ +40°C; 0.3 K for > +40°C.
Sensor time constant: 20 s. Output averaging time: 1 min. Achievable measurement uncertainty: 0.2 K.
Remarks: Achievable measurement uncertainty and effective time-constant may be affected by the design of the thermometer solar radiation screen.

1.2 Extremes of air temperature
Range: –80 – +60°C. Reported resolution: 0.1 K. Mode: I.
Required measurement uncertainty: 0.5 K for ≤ –40°C; 0.3 K for > –40°C and ≤ +40°C; 0.5 K for > +40°C.
Sensor time constant: 20 s. Output averaging time: 1 min. Achievable measurement uncertainty: 0.2 K.
Remarks: Time-constant depends on the air-flow over the sensor.

1.3 Sea surface temperature
Range: –2 – +40°C. Reported resolution: 0.1 K. Mode: I.
Required measurement uncertainty: 0.1 K.
Sensor time constant: 20 s. Output averaging time: 1 min. Achievable measurement uncertainty: 0.2 K.

2. Humidity

2.1 Dewpoint temperature
Range: –80 – +35°C. Reported resolution: 0.1 K. Mode: I.
Required measurement uncertainty: 0.1 K.
Sensor time constant: 20 s. Output averaging time: 1 min. Achievable measurement uncertainty: 0.5 K.
Remarks: Wet-bulb temperature (psychrometer): 20 s time constant, 1 min averaging, achievable measurement uncertainty 0.2 K; if measured directly and in combination with air temperature (dry bulb), large errors are possible due to aspiration and cleanliness problems (see also note 11).

2.2 Relative humidity
Range: 0 – 100%. Reported resolution: 1%. Mode: I.
Required measurement uncertainty: 1%.
Sensor time constant: 40 s. Output averaging time: 1 min. Achievable measurement uncertainty: 3%.
Remarks: Solid state and others; solid state sensors may show significant temperature and humidity dependence.

3. Atmospheric pressure

3.1 Pressure
Range: 500 – 1 080 hPa. Reported resolution: 0.1 hPa. Mode: I.
Required measurement uncertainty: 0.1 hPa.
Sensor time constant: 20 s. Output averaging time: 1 min. Achievable measurement uncertainty: 0.3 hPa.
Remarks: Both station pressure and MSL pressure. Measurement uncertainty is seriously affected by dynamic pressure due to wind if no precautions are taken. Inadequate temperature compensation of the transducer may affect the measurement uncertainty significantly.

3.2 Tendency
Range: Not specified. Reported resolution: 0.1 hPa. Mode: I.
Required measurement uncertainty: 0.2 hPa. Achievable measurement uncertainty: 0.2 hPa.
Remarks: Difference between instantaneous values.

4. Clouds

4.1 Cloud amount
Range: 0/8 – 8/8. Reported resolution: 1/8. Mode: I.
Required measurement uncertainty: 1/8. Sensor time constant: n/a. Achievable measurement uncertainty: 2/8.
Remarks: Period (30 s) clustering algorithms may be used to estimate low cloud amount automatically.

4.2 Height of cloud base
Range: 0 m – 30 km. Reported resolution: 10 m. Mode: I.
Required measurement uncertainty: 10 m for ≤ 100 m; 10% for > 100 m.
Sensor time constant: n/a. Achievable measurement uncertainty: ~10 m.
Remarks: Achievable measurement uncertainty is undetermined because no clear definition exists for instrumentally measured cloud-base height (e.g. based on penetration depth or significant discontinuity in the extinction profile). Significant bias during precipitation.

4.3 Height of cloud top
Range: Not available.

5. Wind

5.1 Speed
Range: 0 – 75 m s–1. Reported resolution: 0.5 m s–1. Mode: A.
Required measurement uncertainty: 0.5 m s–1 for ≤ 5 m s–1; 10% for > 5 m s–1.
Sensor time constant: distance constant 2–5 m. Output averaging time: 2 and/or 10 min.
Achievable measurement uncertainty: 0.5 m s–1 for ≤ 5 m s–1; 10% for > 5 m s–1.
Remarks: Average over 2 and/or 10 min. Non-linear devices; care needed in design of averaging process. Distance constant is usually expressed as response length.

5.2 Direction
Range: 0 – 360°. Mode: A.
Required measurement uncertainty: 5°.
Sensor time constant: 1 s. Output averaging time: 2 and/or 10 min. Achievable measurement uncertainty: 5°.
Remarks: Averages computed over Cartesian components (see Part III, Chapter 3, section 3.6 of this Guide).

5.3 Gusts
Range: 0.1 – 150 m s–1. Reported resolution: 0.1 m s–1. Mode: A.
Required measurement uncertainty: 10%. Output averaging time: 3 s.
Achievable measurement uncertainty: 0.5 m s–1 for ≤ 5 m s–1; 10% for > 5 m s–1.
Remarks: Highest 3 s average should be recorded.

6. Precipitation

6.1 Amount (daily)
Range: 0 – 500 mm. Reported resolution: 0.1 mm. Mode: T.
Required measurement uncertainty: 0.1 mm for ≤ 5 mm; 2% for > 5 mm.
Sensor time constant: n/a. Output averaging time: n/a. Achievable measurement uncertainty: the larger of 5% or 0.1 mm.
Remarks: Quantity based on daily amounts. Measurement uncertainty depends on aerodynamic collection efficiency of gauges and evaporation losses in heated gauges.

6.2 Depth of snow
Range: 0 – 25 m. Reported resolution: 1 cm. Mode: A.
Required measurement uncertainty: 1 cm for ≤ 20 cm; 5% for > 20 cm.
Remarks: Average depth over an area representative of the observing site.

6.3 Thickness of ice accretion on ships
Range: Not specified. Reported resolution: 1 cm. Mode: I.
Required measurement uncertainty: 1 cm for ≤ 10 cm; 10% for > 10 cm.

6.4 Precipitation intensity
Range: 0.02 mm h–1 – 2 000 mm h–1. Reported resolution: 0.1 mm h–1. Mode: I.
Required measurement uncertainty: (trace): n/a for 0.02 – 0.2 mm h–1; 0.1 mm h–1 for 0.2 – 2 mm h–1; 5% for > 2 mm h–1.
Sensor time constant: < 30 s. Output averaging time: 1 min.
Remarks: Uncertainty values for liquid precipitation only. Uncertainty is seriously affected by wind. Sensors may show significant non-linear behaviour. For < 0.2 mm h–1: detection only (yes/no). Sensor time constant is significantly affected during solid precipitation using catchment type of gauges.

7. Radiation

7.1 Sunshine duration (daily)
Range: 0 – 24 h. Reported resolution: 60 s. Mode: T.
Required measurement uncertainty: 0.1 h. Sensor time constant: 20 s. Output averaging time: n/a.
Achievable measurement uncertainty: the larger of 0.1 h or 2%.

7.2 Net radiation, radiant exposure (daily)
Range: Not specified. Reported resolution: 1 J m–2. Mode: T.
Required measurement uncertainty: 0.4 MJ m–2 for ≤ 8 MJ m–2; 5% for > 8 MJ m–2.
Sensor time constant: 20 s. Output averaging time: n/a.
Achievable measurement uncertainty: 0.4 MJ m–2 for ≤ 8 MJ m–2; 5% for > 8 MJ m–2.
Remarks: Radiant exposure expressed as daily sums (amount) of (net) radiation.

8. Visibility

8.1 Meteorological optical range (MOR)
Range: 10 m – 100 km. Reported resolution: 1 m. Mode: I.
Required measurement uncertainty: 50 m for ≤ 600 m; 10% for > 600 m – ≤ 1 500 m; 20% for > 1 500 m.
Sensor time constant: < 30 s. Output averaging time: 1 and 10 min.
Achievable measurement uncertainty: the larger of 20 m or 20%.
Remarks: Achievable measurement uncertainty may depend on the cause of obscuration. Quantity to be averaged: extinction coefficient (see Part III, Chapter 3, section 3.6, of this Guide); preference for averaging logarithmic values.

8.2 Runway visual range (RVR)
Range: 10 m – 1 500 m. Reported resolution: 1 m. Mode: A.
Required measurement uncertainty: 10 m for ≤ 400 m; 25 m for > 400 m – ≤ 800 m; 10% for > 800 m.
Sensor time constant: < 30 s. Output averaging time: 1 and 10 min.
Achievable measurement uncertainty: the larger of 20 m or 20%.
Remarks: In accordance with WMO-No. 49, Volume II, Attachment A (2004 ed.) and ICAO Doc 9328-AN/908 (second ed., 2000).

9. Waves

9.1 Significant wave height
Range: 0 – 50 m. Reported resolution: 0.1 m. Mode: A.
Required measurement uncertainty: 0.5 m for ≤ 5 m; 10% for > 5 m.
Sensor time constant: 0.5 s. Output averaging time: 20 min.
Achievable measurement uncertainty: 0.5 m for ≤ 5 m; 10% for > 5 m.
Remarks: Average over 20 min for instrumental measurements.

9.2 Wave period
Range: 0 – 100 s. Reported resolution: 1 s. Mode: A.
Required measurement uncertainty: 0.5 s.
Sensor time constant: 0.5 s. Output averaging time: 20 min. Achievable measurement uncertainty: 0.5 s.
Remarks: Average over 20 min for instrumental measurements.

9.3 Wave direction
Range: 0 – 360°. Reported resolution: 1°. Mode: A.
Required measurement uncertainty: 10°.
Sensor time constant: 0.5 s. Output averaging time: 20 min. Achievable measurement uncertainty: 20°.
Remarks: Average over 20 min for instrumental measurements.

10. Evaporation

10.1 Amount of pan evaporation
Range: 0 – 100 mm. Reported resolution: 0.1 mm. Mode: T.
Required measurement uncertainty: 0.1 mm for ≤ 5 mm; 2% for > 5 mm.

Notes:
1. Column 1 gives the basic variable.
2. Column 2 gives the common range for most variables; limits depend on local climatological conditions.
3. Column 3 gives the most stringent resolution as determined by the Manual on Codes (WMO-No. 306).
4. In column 4: I = Instantaneous: in order to exclude the natural small-scale variability and the noise, an average value over a period of 1 min is considered as a minimum and most suitable; averages over periods of up to 10 min are acceptable. A = Averaging: average values over a fixed period, as specified by the coding requirements. T = Totals: totals over a fixed period, as specified by coding requirements.
5. Column 5 gives the recommended measurement uncertainty requirements for general operational use, i.e. of Level II data according to FM 12, 13, 14, 15 and their BUFR equivalents. They have been adopted by all eight technical commissions and are applicable for synoptic, aeronautical, agricultural and marine meteorology, hydrology, climatology, etc. These requirements are applicable for both manned and automatic weather stations as defined in the Manual on the Global Observing System (WMO-No. 544). Individual applications may have less stringent requirements. The stated value of required measurement uncertainty represents the uncertainty of the reported value with respect to the true value and indicates the interval in which the true value lies with a stated probability. The recommended probability level is 95 per cent (k = 2), which corresponds to the 2σ level for a normal (Gaussian) distribution of the variable. The assumption that all known corrections are taken into account implies that the errors in reported values will have a mean value (or bias) close to zero. Any residual bias should be small compared with the stated measurement uncertainty requirement. The true value is the value which, under operational conditions, perfectly characterizes the variable to be measured/observed over the representative time interval, area and/or volume required, taking into account siting and exposure.
6. Columns 2 to 5 refer to the requirements established by the CBS Expert Team on Requirements for Data from Automatic Weather Stations in 2004.
7. Columns 6 to 8 refer to the typical operational performance established by the CIMO Expert Team on Surface Technology and Measurement Techniques in 2004.
8. Achievable measurement uncertainty (column 8) is based on sensor performance under nominal and recommended exposure that can be achieved in operational practice. It should be regarded as a practical aid to users in defining achievable and affordable requirements.
9. n/a = not applicable.
10. The term uncertainty has preference over accuracy (i.e. uncertainty is in accordance with ISO standards on the uncertainty of measurements (ISO, 1995)).
11. Dewpoint temperature, relative humidity and air temperature are linked, and thus their uncertainties are linked. When averaging, preference is given to absolute humidity as the principal variable.


ANNEX 1.C
STATION EXPOSURE DESCRIPTION

The accuracy with which an observation describes the state of a selected part of the atmosphere is not the same as the uncertainty of the instrument, because the value of the observation also depends on the instrument's exposure to the atmosphere. This is not a technical matter, so its description is the responsibility of the station observer or attendant. In practice, an ideal site with perfect exposure is seldom available and, unless the actual exposure is adequately documented, the reliability of observations cannot be determined (WMO, 2002).

Station metadata should contain the following aspects of instrument exposure:
(a) Height of the instruments above the surface (or below it, for soil temperature);
(b) Type of sheltering and degree of ventilation for temperature and humidity;
(c) Degree of interference from other instruments or objects (masts, ventilators);
(d) Microscale and toposcale surroundings of the instrument, in particular:
(i) The state of the enclosure's surface, influencing temperature and humidity; nearby major obstacles (buildings, fences, trees) and their size;
(ii) The degree of horizon obstruction for sunshine and radiation observations;
(iii) Surrounding terrain roughness and major vegetation, influencing the wind;
(iv) All toposcale terrain features such as small slopes, pavements, water surfaces;
(v) Major mesoscale terrain features, such as coasts, mountains or urbanization.

Most of these matters will be semi-permanent, but any significant changes (growth of vegetation, new buildings) should be recorded in the station logbook, and dated. For documenting the toposcale exposure, a map with a scale not larger than 1:25 000 showing contours of ≈ 1 m elevation differences is desirable. On this map the locations of buildings and trees (with height), surface cover and installed instruments should be marked. At map edges, major distant terrain features (for example, built-up areas, woods, open water, hills) should be indicated. Photographs are useful if they are not merely close-ups of the instrument or shelter, but are taken at sufficient distance to show the instrument and its terrain background. Such photographs should be taken from all cardinal directions.

The necessary minimum metadata for instrument exposure can be provided by filling in the template given below for every station in a network (see Figure 1.3). An example of how to do this is shown in WMO (2003b). The classes used here for describing terrain roughness are given in Part I, Chapter 5, of the Guide. A more extensive description of metadata matters is given in WMO (2004).

Figure 1.3. General template for station exposure metadata (a one-page form with fields for latitude, longitude, station elevation and update date; a sketch map of the enclosure showing buildings, roads, trees and bushes with obstacle heights and elevation contours; a radiation-horizon diagram; and entries for the temperature and humidity screen, surface cover and soil under the screen, sensor height, artificial ventilation, precipitation gauge rim height, anemometer height and exposure, terrain roughness class and remarks)
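As a purely illustrative sketch (field names are assumptions, not prescribed by WMO), the exposure metadata collected with the template of Figure 1.3 could be held in a machine-readable record such as the following Python data class.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class StationExposureMetadata:
    # Illustrative record mirroring the items listed in Annex 1.C.
    station_id: str
    latitude: float                                 # degrees
    longitude: float                                # degrees
    elevation_m: float
    update: str                                     # date of last revision of this record
    sensor_heights_m: Dict[str, float]              # e.g. {"temperature": 1.5, "anemometer": 10.0}
    screen_type: str                                # sheltering for temperature and humidity
    artificial_ventilation: bool
    surface_cover_under_screen: str
    gauge_rim_height_m: Optional[float] = None
    terrain_roughness_class: Optional[int] = None   # classes of Part I, Chapter 5
    horizon_obstructions: List[str] = field(default_factory=list)
    nearby_obstacles: List[str] = field(default_factory=list)  # buildings, trees (with heights)
    remarks: str = ""

record = StationExposureMetadata(
    station_id="EXAMPLE-0001", latitude=52.10, longitude=5.18, elevation_m=2.0,
    update="2008-01-01",
    sensor_heights_m={"temperature": 1.5, "anemometer": 10.0},
    screen_type="Stevenson screen", artificial_ventilation=False,
    surface_cover_under_screen="short grass",
)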


REFERENCES AND FURTHER READING

Bureau International des Poids et Mesures/Comité Consultatif de Thermométrie, 1990: The International Temperature Scale of 1990 (ITS-90) (H. Preston-Thomas). Metrologia, 1990, 27, pp. 3–10.
Bureau International des Poids et Mesures, 1998: The International System of Units (SI). Seventh edition, BIPM, Sèvres/Paris.
Brooks, C.E.P. and N. Carruthers, 1953: Handbook of Statistical Methods in Meteorology. MO 538, Meteorological Office, London.
Eisenhart, C., 1963: Realistic evaluation of the precision and accuracy of instrument calibration systems. National Bureau of Standards–C, Engineering and Instrumentation, Journal of Research, Volume 67C, Number 2, April–June 1963.
International Civil Aviation Organization, 2002: World Geodetic System — 1984 (WGS-84) Manual. ICAO Doc 9674–AN/946. Second edition, Quebec.
International Organization for Standardization, 1993a: International Vocabulary of Basic and General Terms in Metrology. Prepared by BIPM/ISO/OIML/IEC/IFCC/IUPAC and IUPAP, second edition, Geneva.
International Organization for Standardization, 1993b: ISO Standards Handbook: Quantities and Units. ISO 31:1992, third edition, Geneva.
International Organization for Standardization, 1995: Guide to the Expression of Uncertainty of Measurement. Published in the name of BIPM/IEC/IFCC/ISO/IUPAC/IUPAP and OIML, first edition, Geneva.
International Union of Pure and Applied Physics, 1987: Symbols, Units, Nomenclature and Fundamental Constants in Physics. SUNAMCO Document IUPAP-25 (E.R. Cohen and P. Giacomo), reprinted from Physica 146A, pp. 1–68.
Kok, C.J., 2000: On the Behaviour of a Few Popular Verification Scores in Yes/No Forecasting. Scientific Report WR-2000-04, KNMI, De Bilt.
Linacre, E., 1992: Climate Data and Resources – A Reference and Guide. Routledge, London, 366 pp.
Murphy, A.H. and R.W. Katz (eds.), 1985: Probability, Statistics and Decision Making in the Atmospheric Sciences. Westview Press, Boulder.
National Institute of Standards and Technology, 1995: Guide for the Use of the International System of Units (SI) (B.N. Taylor). NIST Special Publication No. 811, Gaithersburg, United States.
Natrella, M.G., 1966: Experimental Statistics. National Bureau of Standards Handbook 91, Washington DC.
Orlanski, I., 1975: A rational subdivision of scales for atmospheric processes. Bulletin of the American Meteorological Society, 56, pp. 527–530.
World Meteorological Organization, 1966: International Meteorological Tables (S. Letestu, ed.) (1973 amendment), WMO-No. 188.TP.94, Geneva.
World Meteorological Organization, 1970: Performance Requirements of Aerological Instruments (C.L. Hawson). Technical Note No. 112, WMO-No. 267.TP.151, Geneva.
World Meteorological Organization, 1981: Guide to Agricultural Meteorological Practices. Second edition, WMO-No. 134, Geneva.
World Meteorological Organization, 1983: Guide to Climatological Practices. Second edition, WMO-No. 100, Geneva (updates available at http://www.wmo.int/web/wcp/ccl/).
World Meteorological Organization, 1988: Technical Regulations. Volume I, Appendix A, WMO-No. 49, Geneva.
World Meteorological Organization, 1989: Guide on the Global Observing System. WMO-No. 488, Geneva.
World Meteorological Organization, 1990: Guide on Meteorological Observation and Information Distribution Systems at Aerodromes. WMO-No. 731, Geneva.
World Meteorological Organization, 1992a: International Meteorological Vocabulary. Second edition, WMO-No. 182, Geneva.
World Meteorological Organization, 1992b: Manual on the Global Data-processing and Forecasting System. Volume I – Global Aspects, Appendix I-2, WMO-No. 485, Geneva.
World Meteorological Organization, 1993a: Siting and Exposure of Meteorological Instruments (J. Ehinger). Instruments and Observing Methods Report No. 55, WMO/TD-No. 589, Geneva.
World Meteorological Organization, 1993b: Weather Reporting. Volume A – Observing stations, WMO-No. 9, Geneva.
World Meteorological Organization, 1994: Guide to Hydrological Practices. Fifth edition, WMO-No. 168, Geneva.
World Meteorological Organization, 2001: Lecture Notes for Training Agricultural Meteorological Personnel. Second edition, WMO-No. 551, Geneva.
World Meteorological Organization, 2002: Station exposure metadata needed for judging and improving the quality of observations of wind, temperature and other parameters (J. Wieringa and E. Rudel). Papers Presented at the WMO Technical Conference on Meteorological and Environmental Instruments and Methods of Observation (TECO-2002), Instruments and Observing Methods Report No. 75, WMO/TD-No. 1123, Geneva.
World Meteorological Organization, 2003a: Manual on the Global Observing System. Volume I – Global Aspects, WMO-No. 544, Geneva.
World Meteorological Organization, 2003b: Guidelines on Climate Metadata and Homogenization (P. Llansó, ed.). World Climate Data and Monitoring Programme (WCDMP) Series Report No. 53, WMO/TD-No. 1186, Geneva.

CHAPTER 2

MEASUREMENT OF TEMPERATURE

2.1 GENERAL

2.1.1 Definition

WMO (1992) defines temperature as a physical quantity characterizing the mean random motion of molecules in a physical body. Temperature is characterized by the behaviour whereby two bodies in thermal contact tend to an equal temperature. Thus, temperature represents the thermodynamic state of a body, and its value is determined by the direction of the net flow of heat between two bodies. In such a system, the body which overall loses heat to the other is said to be at the higher temperature. Defining the physical quantity temperature in relation to the "state of a body", however, is difficult. A solution is found by defining an internationally approved temperature scale based on universal freezing and triple points.1 The current such scale is the International Temperature Scale of 1990 (ITS-90)2 and its temperature is indicated by T90. For the meteorological range (–80 to +60°C) this scale is based on a linear relationship with the electrical resistance of platinum and the triple point of water, defined as 273.16 kelvin (BIPM, 1990).

For meteorological purposes, temperatures are measured for a number of media. The most common variable measured is air temperature (at various heights). Other variables are ground, soil, grass minimum and seawater temperature. WMO (1992) defines air temperature as "the temperature indicated by a thermometer exposed to the air in a place sheltered from direct solar radiation". Although this definition cannot be used as the definition of the thermodynamic quantity itself, it is suitable for most applications.

2.1.2 Units and scales

The thermodynamic temperature (T), with units of kelvin (K) (also defined as "kelvin temperature"), is the basic temperature. The kelvin is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water. The temperature (t), in degrees Celsius (or "Celsius temperature"), defined by equation 2.1, is used for most meteorological purposes (from the ice-point secondary reference in Table 2 in the annex):

t/°C = T/K – 273.15    (2.1)
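Equation 2.1 amounts to a fixed offset between the two scales; a minimal Python sketch follows (the function names are illustrative, not part of this Guide).

def kelvin_to_celsius(T_kelvin):
    # t/degC = T/K - 273.15 (equation 2.1)
    return T_kelvin - 273.15

def celsius_to_kelvin(t_celsius):
    return t_celsius + 273.15

print(kelvin_to_celsius(273.16))  # triple point of water: 0.01 degC
print(celsius_to_kelvin(-40.0))   # 233.15 K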

A temperature difference of one degree Celsius (°C) unit is equal to one kelvin (K) unit. Note that the unit K is used without the degree symbol. In the thermodynamic scale of temperature, measurements are expressed as differences from absolute zero (0 K), the temperature at which the molecules of any substance possess no kinetic energy. The scale of temperature in general use since 1990 is the ITS-90 (see the annex), which is based on assigned values for the temperatures of a number of reproducible equilibrium states (see Table 1 in the annex) and on specified standard instruments calibrated at those temperatures. The ITS was chosen in such a way that the temperature measured against it is identical to the thermodynamic temperature, with any difference being within the present limits of measurement uncertainty. In addition to the defined fixed points of the ITS, other secondary reference points are available (see Table 2 in the annex). Temperatures of meteorological interest are obtained by interpolating between the fixed points by applying the standard formulae in the annex.

1 The authoritative body for this scale is the International Bureau of Weights and Measures/Bureau International des Poids et Mesures (BIPM), Sèvres (Paris); see http://www.bipm.org. BIPM's Consultative Committee for Thermometry (CCT) is the executive body responsible for establishing and realizing the ITS.
2 Practical information on ITS-90 can be found on the ITS-90 website: http://www.its-90.com.

2.1.3 Meteorological requirements

2.1.3.1 General

Meteorological requirements for temperature measurements primarily relate to the following:
(a) The air near the Earth's surface;
(b) The surface of the ground;
(c) The soil at various depths;
(d) The surface levels of the sea and lakes;
(e) The upper air.

These measurements are required, either jointly or independently and locally or globally, for input to numerical weather prediction models, for hydrological and agricultural purposes, and as indicators of climatic variability. Local temperature also has direct physiological significance for the day-to-day activities of the world's population. Measurements of temperature may be required as continuous records or may be sampled at different time intervals. This chapter deals with requirements relating to (a), (b) and (c).

2.1.3.2 Accuracy requirements

The range, reported resolution and required uncertainty for temperature measurements are detailed in Part I, Chapter 1, of this Guide. In practice, it may not be economical to provide thermometers that meet the required performance directly. Instead, cheaper thermometers, calibrated against a laboratory standard, are used with corrections being applied to their readings as necessary. It is necessary to limit the size of the corrections to keep residual errors within bounds. Also, the operational range of the thermometer will be chosen to reflect the local climatic range. As an example, the table below gives an acceptable range of calibration and errors for thermometers covering a typical measurement range.

Thermometer characteristic requirements

Thermometer type | Span of scale (°C) | Range of calibration (°C) | Maximum error | Maximum difference between maximum and minimum correction within the range | Maximum variation of correction within any interval of 10°C
Ordinary | –30 to 45 | –30 to 40 | | |

All temperature-measuring instruments should be issued with a certificate confirming compliance with the appropriate uncertainty or performance specification, or a calibration certificate that gives the corrections that must be applied to meet the required uncertainty. This initial testing and calibration should be performed by a national testing institution or an accredited calibration laboratory. Temperature-measuring instruments should also be checked subsequently at regular intervals, the exact apparatus used for this calibration being dependent on the instrument or sensor to be calibrated.

2.1.3.3 Response times of thermometers

For routine meteorological observations there is no advantage in using thermometers with a very small time-constant or lag coefficient, since the temperature of the air continually fluctuates up to one or two degrees within a few seconds. Thus, obtaining a representative reading with such a thermometer would require taking the mean of a number of readings, whereas a thermometer with a larger time-constant tends to smooth out the rapid fluctuations. Too long a time-constant, however, may result in errors when long-period changes of temperature occur. It is recommended that the time-constant, defined as the time required by the thermometer to register 63.2 per cent of a step change in air temperature, should be 20 s. The time-constant depends on the air-flow over the sensor.
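The 63.2 per cent figure follows from the first-order (exponential) response commonly assumed for a thermometer; the short Python sketch below illustrates this under that assumed model, with illustrative values.

import math

def thermometer_reading(t_seconds, initial_temp, air_temp, time_constant=20.0):
    # Assumed first-order response to a step change in air temperature:
    # T(t) = T_air + (T_0 - T_air) * exp(-t / tau)
    return air_temp + (initial_temp - air_temp) * math.exp(-t_seconds / time_constant)

# Step from 20 degC to 21 degC with the recommended 20 s time-constant:
reading = thermometer_reading(20.0, initial_temp=20.0, air_temp=21.0)
print(round(reading, 3))         # about 20.632 degC
print(round(reading - 20.0, 3))  # about 0.632 of the 1 K step, i.e. 63.2 per cent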

2.1.3.4 Recording the circumstances in which measurements are taken

Table 4.3 shows the error in derived relative humidity resulting from wet- and ice-bulb index errors ε(tx), where x refers to water for t > 0°C and ice for t < 0°C, respectively of 0.5 and 0.1 K, for a relative humidity U of 50 per cent and a range of true air temperatures (where the dry-bulb reading is assumed to give the true air temperature).

Table 4.3. Error in derived relative humidity resulting from wet- and ice-bulb index errors ε(tx) for U = 50 per cent

Air temperature (°C) | Error in relative humidity, ε(U) (per cent), for ε(tx) = 0.5 K | for ε(tx) = 0.1 K
–30 | 60 | 12
–20 | 27 | 5
–10 | 14 | 3
0 | 8 | 2
10 | 5 | 1
20 | 4 | 0.5
30 | 3 | 0.5
40 | 2 | 0.5
50 | 2 | 0

Precision platinum electrical resistance thermometers are widely used in place of mercury-in-glass thermometers, in particular where remote reading and continuous measurements are required. It is necessary to ensure that the devices, and the interfacing electrical circuits selected, meet the performance requirements. These are detailed in Part I, Chapter 2. Particular care should always be taken with regard to self-heating effects in electrical thermometers. The psychrometric formulae in Annex 4.B used for Assmann aspiration psychrometers are also valid if platinum resistance thermometers are used in place of the mercury-in-glass instruments, with different configurations of elements and thermometers. The formula for water on the wet bulb is also valid for some transversely ventilated psychrometers (WMO, 1989a).

(b) Thermometer lag coefficients: To obtain the highest accuracy with a psychrometer it is desirable to arrange for the wet and dry bulbs to have approximately the same lag coefficient; with thermometers having the same bulb size, the wet bulb has an appreciably smaller lag than the dry bulb.
(c) Errors relating to ventilation: Errors due to insufficient ventilation become much more serious through the use of inappropriate humidity tables (see sections covering individual psychrometer types).
(d) Errors due to excessive covering of ice on the wet bulb: Since a thick coating of ice will increase the lag of the thermometer, it should be removed immediately by dipping the bulb into distilled water.
(e) Errors due to contamination of the wet-bulb sleeve or to impure water: Large errors may be caused by the presence of substances that alter the vapour pressure of water. The wet bulb with its covering sleeve should be washed at regular intervals in distilled water to remove soluble impurities. This procedure is more frequently necessary in some regions than others, for example, at or near the sea or in areas subject to air pollution.
(f) Errors due to heat conduction from the thermometer stem to the wet-bulb system: The conduction of heat from the thermometer stem to the wet bulb will reduce the wet-bulb depression and lead to determinations of humidity that are too high. The effect is most pronounced at low relative humidity but can be effectively eliminated by extending the wet-bulb sleeve at least 2 cm beyond the bulb up the stem of the thermometer.

4.2.2 The Assmann aspirated psychrometer

4.2.2.1 Description

Two mercury-in-glass thermometers, mounted vertically side by side in a chromium- or nickel-plated polished metal frame, are connected by ducts to an aspirator. The aspirator may be driven by a spring or an electric motor. One thermometer bulb has a well-fitted muslin wick which, before use, is moistened with distilled water. Each thermometer is located inside a pair of coaxial metal tubes, highly polished inside and out, which screen the bulbs from external thermal radiation. The tubes are all thermally insulated from each other.

A WMO international intercomparison of Assmann-type psychrometers from 10 countries (WMO, 1989a) showed that there is good agreement between dry- and wet-bulb temperatures of psychrometers with the dimensional specifications close to the original specification, and with aspiration rates above 2.2 m s–1. Not all commercially available instruments fully comply. A more detailed discussion is found in WMO (1989a). The performance of the Assmann psychrometer in the field may be as good as the achievable accuracy stated in Table 4.1, and with great care it can be significantly improved.

Annex 4.B lists standard formulae for the computation of measures of humidity using an Assmann psychrometer,4 which are the bases of some of the other artificially ventilated psychrometers, in the absence of well-established alternatives.

4 Recommended by the Commission for Instruments and Methods of Observation at its tenth session (1989).

4.2.2.2 Observation procedure

The wick, which must be free of grease, is moistened with distilled water. Dirty or crusty wicks should be replaced. Care should be taken not to introduce a water bridge between the wick and the radiation shield. The mercury columns of the thermometers should be inspected for breaks, which should be closed up or the thermometer should be replaced. The instrument is normally operated with the thermometers held vertically. The thermometer stems should be protected from solar radiation by turning the instrument so that the lateral shields are in line with the sun. The instrument should be tilted so that the inlet ducts open into the wind, but care should be taken so that solar radiation does not fall on the thermometer bulbs. A wind screen is necessary in very windy conditions when the rotation of the aspirator is otherwise affected. The psychrometer should be in thermal equilibrium with the surrounding air. At air temperatures above 0°C, at least three measurements at 1 min intervals should be taken following an aspiration period. Below 0°C it is necessary to wait until the freezing process has finished, and to observe whether there is water or ice on the wick. During the freezing and thawing processes the wet-bulb temperature remains constant at 0°C. In the case of outdoor measurements, several measurements should be taken and the average taken. Thermometer readings should be made with a resolution of 0.1 K or better. A summary of the observation procedure is as follows:
(a) Moisten the wet bulb;
(b) Wind the clockwork motor (or start the electric motor);
(c) Wait 2 or 3 min or until the wet-bulb reading has become steady;
(d) Read the dry bulb;

(e) Read the wet bulb;
(f) Check the reading of the dry bulb.

4.2.2.3 Exposure and siting

Observations should be made in an open area with the instrument either suspended from a clamp or attached using a bracket to a thin post, or held with one hand at arm's length with the inlets slightly inclined into the wind. The inlets should be at a height of 1.2 to 2 m above ground for normal measurements of air temperature and humidity. Great care should be taken to prevent the presence of the observer or any other nearby sources of heat and water vapour, such as the exhaust pipe of a motor vehicle, from having an influence on the readings.

4.2.2.4 Calibration

The ventilation system should be checked regularly, at least once per month. The calibration of the thermometers should also be checked regularly. The two may be compared together, with both thermometers measuring the dry-bulb temperature. Comparison with a certified reference thermometer should be performed at least once a year.

4.2.2.5 Maintenance

Between readings, the instrument should be stored in an unheated room or be otherwise protected from precipitation and strong insolation. When not in use, the instrument should be stored indoors in a sturdy packing case such as that supplied by the manufacturer.

4.2.3 Screen psychrometer

4.2.3.1 Description

Two mercury-in-glass thermometers are mounted vertically in a thermometer screen. The diameter of the sensing bulbs should be about 10 mm. One of the bulbs is fitted with a wet-bulb sleeve, which should fit closely to the bulb and extend at least 20 mm up the stem beyond it. If a wick and water reservoir are used to keep the wet-bulb sleeve in a moist condition, the reservoir should preferably be placed to the side of the thermometer and with the mouth at the same level as, or slightly lower than, the top of the thermometer bulb. The wick should be kept as straight as possible and its length should be such that water reaches the bulb with sensibly the wet-bulb temperature and in sufficient (but not excessive) quantity. If no wick is used, the wet bulb should be protected from dirt by enclosing the bulb in a small glass tube between readings.

It is recommended that screen psychrometers be artificially aspirated. Both thermometers should be aspirated at an air speed of about 3 m s–1. Both spring-wound and electrically driven aspirators are in common use. The air should be drawn in horizontally across the bulbs, rather than vertically, and exhausted in such a way as to avoid recirculation.

The performance of the screen psychrometer may be much worse than that shown in Table 4.1, especially in light winds if the screen is not artificially ventilated. The psychrometric formulae given in section 4.2.1.1 apply to screen psychrometers, but the coefficients are quite uncertain. A summary of some of the formulae in use is given by Bindon (1965). If there is artificial ventilation at 3 m s–1 or more across the wet bulb, the values given in Annex 4.B may be applied, with a psychrometer coefficient of 6.53 · 10–4 K–1 for water. However, values from 6.50 to 6.78 · 10–4 are in use for wet bulbs above 0°C, and 5.70 to 6.53 · 10–4 for below 0°C. For a naturally ventilated screen psychrometer, coefficients in use range from 7.7 to 8.0 · 10–4 above freezing and 6.8 to 7.2 · 10–4 below freezing when there is some air movement in the screen, which is probably nearly always the case. However, coefficients up to 12 · 10–4 for water and 10.6 · 10–4 for ice have been advocated for when there is no air movement. The psychrometer coefficient appropriate for a particular configuration of screen, shape of wet bulb and degree of ventilation may be determined by comparison with a suitable working or reference standard, but there will be a wide scatter in the data, and a very large experiment would be necessary to obtain a stable result. Even when a coefficient has been obtained by such an experiment, the confidence limits for any single observation will be wide, and there would be little justification for departing from established national practices.
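As an illustration of how such a coefficient is applied, the Python sketch below evaluates the standard psychrometer formula e′ = e′w(tw) − A p (t − tw), using the aspirated-coefficient value quoted above; the Magnus-type saturation vapour pressure expression is an assumption introduced here for the example, since the Annex 4.B formulation is not reproduced in this section.

import math

def saturation_vapour_pressure_hpa(t_celsius):
    # Magnus-type approximation over water (an assumption for this sketch,
    # standing in for the Annex 4.B formulation).
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def vapour_pressure_hpa(t_dry, t_wet, pressure_hpa=1000.0, coeff=6.53e-4):
    # e' = e'_w(t_wet) - A * p * (t_dry - t_wet), with A in K^-1
    # (6.53e-4 K^-1 for an aspirated wet bulb above 0 degC, as quoted above).
    return saturation_vapour_pressure_hpa(t_wet) - coeff * pressure_hpa * (t_dry - t_wet)

def relative_humidity_percent(t_dry, t_wet, pressure_hpa=1000.0):
    return 100.0 * vapour_pressure_hpa(t_dry, t_wet, pressure_hpa) / saturation_vapour_pressure_hpa(t_dry)

# Dry bulb 20.0 degC, wet bulb 15.0 degC, station pressure 1000 hPa:
print(round(relative_humidity_percent(20.0, 15.0), 1))  # roughly 59 per cent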

4.2.3.2 Special observation procedures

The procedures described in section 4.2.1.5 apply to the screen psychrometer. In the case of a naturally aspirated wet bulb, provided that the


water reservoir has about the same temperature as the air, the correct wet-bulb temperature will be attained approximately 15 min after fitting a new sleeve; if the water temperature differs substantially from that of the air, it may be necessary to wait for 30 min.

4.2.3.3 Exposure and siting

The exposure and siting of the screen are described in Part I, Chapter 2.

4.2.4 Sling or whirling psychrometers

4.2.4.1 Description

A small portable type of whirling or sling psychrometer consists of two mercury-in-glass thermometers mounted on a sturdy frame, which is provided with a handle and spindle, located at the furthest end from the thermometer bulbs, by means of which the frame and thermometers may be rotated rapidly about a horizontal axis. The wet-bulb arrangement varies according to individual design. Some designs shield the thermometer bulbs from direct insolation, and these are to be preferred for meteorological measurements. The psychrometric formulae in Annex 4.B may be used.

4.2.4.2 Observation procedure

The following guidelines should be applied:
(a) All instructions with regard to the handling of Assmann aspirated psychrometers apply also to sling psychrometers;
(b) Sling psychrometers lacking radiation shields for the thermometer bulbs should be shielded from direct insolation in some other way;
(c) Thermometers should be read at once after aspiration ceases because the wet-bulb temperature will begin to rise immediately, and the thermometers are likely to be subject to insolation effects.

4.2.5 Heated psychrometer

The principle of the heated psychrometer is that the water-vapour content of an air mass does not change if it is heated. This property may be exploited to the advantage of the psychrometer by avoiding the need to maintain an ice bulb under freezing conditions.

4.2.5.1 Description

Air is drawn into a duct where it passes over an electrical heating element and then into a measuring chamber containing both dry- and wet-bulb thermometers and a water reservoir. The heating element control circuit ensures that the air temperature does not fall below a certain level, which might typically be 10°C. The temperature of the water reservoir is maintained in a similar way. Thus, neither the water in the reservoir nor the water at the wick should freeze, provided that the wet-bulb depression is less than 10 K, and the continuous operation of the psychrometer is secured even if the air temperature is below 0°C. At temperatures above 10°C the heater may be automatically switched off, when the instrument reverts to normal psychrometric operation. Electrical thermometers are used so that they may be entirely enclosed within the measuring chamber and without the need for visual readings. A second dry-bulb thermometer is located at the inlet of the duct to provide a measurement of the ambient air temperature. Thus, the ambient relative humidity may be determined. The psychrometric thermometer bulbs are axially aspirated at an air velocity in the region of 3 m s–1.

4.2.5.2 Observation procedure

A heated psychrometer would be suitable for automatic weather stations.

4.2.5.3 Exposure and siting

The instrument itself should be mounted outside a thermometer screen. The air inlet, where ambient air temperature is measured, should be inside the screen.

4.2.6 The WMO reference psychrometer

The reference psychrometer and procedures for its operation are described in WMO (1992). The wet- and dry-bulb elements are enclosed in an aspirated shield, for use as a free-standing instrument. Its significant characteristic is that the psychrometer coefficient is calculable from the theory of heat and mass exchanges at the wet bulb, and is different from the coefficient for other psychrometers, with a value of 6.53 · 10–4 K–1 at 50 per cent relative humidity, 20°C and 1 000 hPa. Its wet-bulb temperature is very close to the theoretical value (see Annex 4.A,


paragraphs 18 and 19). This is achieved by ensuring that the evaporation at the wet bulb is very efficient and that extraneous heating is minimized. The nature of the air-flow over the wet bulb is controlled by careful shaping of the duct and the bulb, and by controlling the ventilation rate. The double shield is highly reflective externally, and blackened on the inside, and the thermometer elements are insulated and separated by a shield. The shields and the wet-bulb element (which contains the thermometer) are made of stainless steel to minimize thermal conduction. The procedures for the use of the reference psychrometer ensure that the wet bulb is completely free of grease, even in the monomolecular layers that always arise from handling any part of the apparatus with the fingers. This is probably the main reason for the close relation of the coefficient to the theoretical value, and its difference from the psychrometer coefficients of other instruments. The reference psychrometer is capable of great accuracy: 0.38 per cent uncertainty in relative humidity at 50 per cent relative humidity and 20°C. It has also been adopted as the WMO reference psychrometer. It is designed for use in the field but is not suitable for routine use. It should be operated only by staff accustomed to very precise laboratory work. Its use as a reference instrument is discussed in section 4.9.7.

4.3 THE HAIR HYGROMETER

4.3.1 General considerations

Any absorbing material tends to equilibrium with its environment in terms of both temperature and humidity. The water-vapour pressure at the surface of the material is determined by the temperature and the amount of water bound by the material. Any difference between this pressure and the water-vapour pressure of the surrounding air will be equalized by the exchange of water molecules. The change in the length of hair has been found to be a function primarily of the change in relative humidity with respect to liquid water (both above and below an air temperature of 0°C), with an increase of about 2 to 2.5 per cent when the humidity changes from 0 to 100 per cent. By rolling the hairs to produce an elliptical cross-section and by dissolving out the fatty substances with alcohol, the ratio of the surface area to the enclosed volume increases and yields a decreased lag coefficient, which is particularly relevant for use at low air temperatures. This procedure also results in a more linear response function, although the tensile strength is reduced. For accurate measurements, a single hair element is to be preferred, but a bundle of hairs is commonly used to provide a degree of ruggedness. Chemical treatment with barium sulfide (BaS) or sodium sulfide (Na2S) yields further linearity of response.

The hair hygrograph or hygrometer is considered to be a satisfactory instrument for use in situations or during periods where extreme and very low humidities are seldom or never found. The mechanism of the instrument should be as simple as possible, even if this makes it necessary to have a non-linear scale. This is especially important in industrial regions, since air pollutants may act on the surface of the moving parts of the mechanism and increase friction between them.

The rate of response of the hair hygrometer is very dependent on air temperature. At –10°C the lag of the instrument is approximately three times greater than the lag at 10°C. For air temperatures between 0 and 30°C and relative humidities between 20 and 80 per cent a good hygrograph should indicate 90 per cent of a sudden change in humidity within about 3 min. A good hygrograph in perfect condition should be capable of recording relative humidity at moderate temperatures with an uncertainty of ±3 per cent. At low temperatures, the uncertainty will be greater. Using hair pre-treated by rolling (as described above) is a requirement if useful information is to be obtained at low temperatures.
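By way of illustration only, the Python sketch below converts a measured fractional extension of a hair element into a relative humidity reading by interpolating a calibration curve; the 2.5 per cent full-scale extension matches the figure quoted above, while the intermediate points are hypothetical and would in practice come from calibration of the individual instrument.

# Hypothetical calibration table: fractional extension of the hair
# (relative to its length at 0 per cent relative humidity) versus relative humidity.
CALIBRATION = [
    (0.000, 0.0), (0.008, 20.0), (0.014, 40.0),
    (0.019, 60.0), (0.023, 80.0), (0.025, 100.0),
]

def relative_humidity_from_extension(fractional_extension):
    # Piecewise-linear interpolation of the (non-linear) calibration curve.
    points = CALIBRATION
    if fractional_extension <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if fractional_extension <= x1:
            return y0 + (y1 - y0) * (fractional_extension - x0) / (x1 - x0)
    return points[-1][1]

print(round(relative_humidity_from_extension(0.016), 1))  # about 48 per cent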

4.3.2 Description

The detailed mechanism of hair hygrometers varies according to the manufacturer. Some instruments incorporate a transducer to provide an electrical signal, and these may also provide a linearizing function so that the overall response of the instrument is linear with respect to changes in relative humidity. The most commonly used hair hygrometer is the hygrograph. This employs a bundle of hairs held under slight tension by a small spring and connected to a pen arm in such a way as to magnify a change in the length of the bundle. A pen at the end of the pen arm is in contact with a paper chart fitted around a metal cylinder and registers the angular displacement of the arm.


The cylinder rotates about its axis at a constant rate determined by a mechanical clock movement. The rate of rotation is usually one revolution either per week or per day. The chart has a scaled time axis that extends round the circumference of the cylinder and a scaled humidity axis parallel to the axis of the cylinder. The cylinder normally stands vertically. The mechanism connecting the pen arm to the hair bundle may incorporate specially designed cams that translate the non-linear extension of the hair in response to humidity changes into a linear angular displacement of the arm. The hair used in hygrometers may be of synthetic fibre. Where human hair is used, it is normally first treated as described in section 4.3.1 to improve both the linearity of its response and the response lag, although this does result in a lower tensile strength. The pen arm and clock assembly are normally housed in a box with glass panels which allow the registered humidity to be observed without disturbing the instrument, and with one end open to allow the hair element to be exposed in free space outside the limits of the box. The sides of the box are separate from the solid base, but the end opposite the hair element is attached to it by a hinge. This arrangement allows free access to the clock cylinder and hair element. The element may be protected by an open mesh cage.

4.3.3 Observation procedure

The hair hygrometer should always be tapped lightly before being read in order to free any tension in the mechanical system. The hygrograph should, as far as possible, not be touched between changes of the charts except in order to make time marks. Both the hygrometer and the hygrograph can normally be read to the nearest 1 per cent of relative humidity. Attention is drawn to the fact that the hair hygrometer measures relative humidity with respect to saturation over liquid water even at air temperatures below 0°C. The humidity of the air may change very rapidly and, therefore, accurate setting of time marks on a hygrograph is very important. In making the marks, the pen arm should be moved only in the direction of decreasing humidity on the chart. This is done so that the hairs are slackened by the displacement and, to bring the pen back to its correct position, the restoring force is applied by the tensioning spring. However, the effect of hysteresis may be evidenced in the failure of the pen to return to its original position.

4.3.4 Exposure and siting

The hygrograph or hygrometer should be exposed in a thermometer screen. Ammonia is very destructive to natural hair. Exposure in the immediate vicinity of stables and industrial plants using ammonia should be avoided. When used in polar regions, the hygrograph should preferably be exposed in a special thermometer screen which provides the instrument with sufficient protection against precipitation and drifting snow. For example, a cover for the thermometer screen can be made of fine-meshed net (Mullergas) as a precautionary measure to prevent the accumulation of snow crystals on the hairs and the bearing surfaces of the mechanical linkage. This method can be used only if there is no risk of the net being wetted by melting snow crystals.

4.3.5 Sources of error

4.3.5.1 Changes in zero offset

For various reasons which are poorly understood, the hygrograph is liable to change its zero. The most likely cause is that excess tension has been induced in the hairs. For instance, the hairs may be stretched if time marks are made in the direction of increasing humidity on the chart or if the hygrograph mechanism sticks during decreasing humidity. The zero may also change if the hygrograph is kept in very dry air for a long time, but the change may be reversed by placing the instrument in a saturated atmosphere for a sufficient length of time.

4.3.5.2 Errors due to contamination of the hair

Most kinds of dust will cause appreciable errors in observations (perhaps as much as 15 per cent relative humidity). In most cases this may be eliminated, or at least reduced, by cleaning and washing the hairs. However, the harmful substances found in dust may also be destructive to hair (see section 4.3.4).

4.3.5.3 Hysteresis

Hysteresis is exhibited both in the response of the hair element and in the recording mechanism of the hair hygrometer. Hysteresis in the recording


mechanism is reduced through the use of a hair bundle, which allows a greater loading force to overcome friction. It should be remembered that the displacement magnification of the pen arm lever applies also to the frictional force between the pen and paper, and to overcome this force a proportionately higher tension in the hair is required. The correct setting of the tensioning spring is also required to minimize hysteresis, as is the correct operation of all parts of the transducing linkage. The main fulcrum and any linearizing mechanism in the linkage introduce much of the total friction. Hysteresis in the hair element is normally a short-term effect related to the absorption-desorption processes and is not a large source of error once vapour pressure equilibrium is established (see section 4.3.5.1 in respect of prolonged exposure at low humidity).

4.3.6 Calibration and comparisons

The readings of a hygrograph should be checked as frequently as is practical. In the case where wet- and dry-bulb thermometers are housed in the same thermometer screen, these may be used to provide a comparison whenever suitable steady conditions prevail, but otherwise field comparisons have limited value due to the difference in response rate of the instruments. Accurate calibration can only be obtained through the use of an environmental chamber and by comparison with reference instruments. The 100 per cent humidity point may be checked, preferably indoors with a steady air temperature, by surrounding the instrument with a saturated cloth (though the correct reading will not be obtained if a significant mass of liquid water droplets forms on the hairs). The ambient indoor humidity may provide a low relative humidity checkpoint for comparison against a reference aspirated psychrometer. A series of readings should be obtained. Long-term stability and bias may be appraised by presenting comparisons with a reference aspirated psychrometer in terms of a correlation function.

4.3.7 Maintenance

Observers should be encouraged to keep the hygrometer clean. The hair should be washed at frequent intervals using distilled water on a soft brush to remove accumulated dust or soluble contaminants. At no time should the hair be touched by fingers. The bearings of the mechanism should be kept clean and a small amount of clock oil should be applied occasionally. The bearing surfaces of any linearizing mechanism will contribute largely to the total friction in the linkage, which may be minimized by polishing the surfaces with graphite. This procedure may be carried out by using a piece of blotting paper rubbed with a lead pencil. With proper care, the hairs may last for several years in a temperate climate and when not subject to severe atmospheric pollution. Recalibration and adjustment will be required when hairs are replaced.

4.4 THE CHILLED-MIRROR DEWPOINT HYGROMETER

4.4.1 General considerations

4.4.1.1 Theory

The dewpoint (or frost-point) hygrometer is used to measure the temperature at which moist air, when cooled, reaches saturation and a deposit of dew (or ice) can be detected on a solid surface, which usually is a mirror. The deposit is normally detected optically. The principle of the measurement is described in section 4.1.4.5 and below. The thermodynamic dewpoint is defined for a plane surface of pure water. In practice, water droplets have curved surfaces, over which the saturation vapour pressure is higher than for the plane surface (known as the Kelvin effect). Hydrophobic contaminants will exaggerate the effect, while soluble ones will have the opposite effect and lower the saturation vapour pressure (the Raoult effect). The Kelvin and Raoult effects (which, respectively, raise and lower the apparent dewpoint) are minimized if the critical droplet size adopted is large rather than small; this reduces the curvature effect directly and reduces the Raoult effect by lowering the concentration of a soluble contaminant.

4.4.1.2 Principles

When moist air at temperature T, pressure p and mixing ratio rw (or ri) is cooled, it eventually reaches its saturation point with respect to a free water


surface (or to a free ice surface at lower temperatures) and a deposit of dew (or frost) can be detected on a solid non-hygroscopic surface. The temperature of this saturation point is called the thermodynamic dewpoint temperature Td (or the thermodynamic frost-point temperature Tf). The corresponding saturation vapour pressure with respect to water e′w (or ice e′i) is a function of Td (or Tf), as shown in the following equations:

e′w(p, Td) = f(p) · ew(Td) = r · p / (0.621 98 + r)    (4.3)

e′i(p, Tf) = f(p) · ei(Tf) = r · p / (0.621 98 + r)    (4.4)

The hygrometer measures Td or Tf. Despite the great dynamic range of moisture in the troposphere, this instrument is capable of detecting both very high and very low concentrations of water vapour by means of a thermal sensor alone. Cooling using a low-boiling-point liquid has been used but is now largely superseded except for very low water-vapour concentrations. It follows from the above that it must also be possible to determine whether the deposit is supercooled liquid or ice when the surface temperature is at or below freezing point. The chilled-mirror hygrometer is used for meteorological measurements and as a reference instrument both in the field and in the laboratory.
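As a worked illustration of equations 4.3 and 4.4, the Python sketch below obtains the water-vapour pressure from the mixing ratio and pressure, and then inverts a saturation vapour pressure expression to estimate the dewpoint; the Magnus-type expression and the approximation f(p) ≈ 1 are assumptions introduced for the example only.

import math

def vapour_pressure_hpa(mixing_ratio, pressure_hpa):
    # e' = r * p / (0.621 98 + r), as in equations 4.3 and 4.4.
    return mixing_ratio * pressure_hpa / (0.62198 + mixing_ratio)

def dewpoint_from_vapour_pressure(e_hpa):
    # Inversion of a Magnus-type formula over water (an assumption here;
    # the Guide's own formulation is given in the annexes); f(p) taken as ~1.
    ln_term = math.log(e_hpa / 6.112)
    return 243.12 * ln_term / (17.62 - ln_term)

# Moist air with a mixing ratio of 8 g/kg (0.008 kg/kg) at 1000 hPa:
e = vapour_pressure_hpa(0.008, 1000.0)
print(round(e, 1))                                 # about 12.7 hPa
print(round(dewpoint_from_vapour_pressure(e), 1))  # dewpoint of about 10.5 degC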

4.4.2 Description

4.4.2.1 Sensor assembly

The most widely used systems employ a small polished-metal reflecting surface, cooled electrically using a Peltier-effect device. The sensor consists of a thin metallic mirror of small (2 to 5 mm) diameter that is thermally regulated using a cooling assembly (and possibly a heater), with a temperature sensor (thermocouple or platinum resistance thermometer) embedded on the underside of the mirror. The mirror should have a high thermal conductance, optical reflectivity and corrosion resistance combined with a low permeability to water vapour. Suitable materials used include gold, rhodium-plated silver, chromium-plated copper and stainless steel.

The mirror should be equipped with a (preferably automatic) device for detecting contaminants that may increase or decrease the apparent dewpoint (see section 4.4.2.2), so that they may be removed.

4.4.2.2 Optical detection assembly

An electro-optical system is usually employed to detect the formation of condensate and to provide the input to the servo-control system to regulate the temperature of the mirror. A narrow beam of light is directed at the mirror at an angle of incidence of about 55°. The light source may be incandescent but is now commonly a light-emitting diode. In simple systems, the intensity of the directly reflected light is detected by a photodetector that regulates the cooling and heating assembly through a servo-control. The specular reflectivity of the surface decreases as the thickness of the deposit increases; cooling should cease while the deposit is thin, with a reduction in reflectance in the range of 5 to 40 per cent. More elaborate systems use an auxiliary photodetector which detects the light scattered by the deposit; the two detectors are capable of very precise control. A second, uncooled, mirror may be used to improve the control system. Greatest precision is obtained by controlling the mirror to a temperature at which condensate neither accumulates nor dissipates; however, in practice, the servo-system will oscillate around this temperature. The response time of the mirror to heating and cooling is critical in respect of the amplitude of the oscillation, and should be of the order of 1 to 2 s. The air-flow rate is also important for maintaining a stable deposit on the mirror. It is possible to determine the temperature at which condensation occurs with a precision of 0.05 K. It is feasible, but a time-consuming and skilled task, to observe the formation of droplets by using a microscope and to regulate the mirror temperature under manual control.

4.4.2.3 Thermal control assembly

A Peltier-effect thermo-junction device provides a simple reversible heat pump; the polarity of direct current energization determines whether heat is pumped to, or from, the mirror. The device is bonded to, and in good thermal contact with, the underside of the mirror. For very low dewpoints, a multistage Peltier device may be required.
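A highly simplified Python sketch of the servo principle described above follows; real instruments use more elaborate, usually microprocessor-based, control, and the set point and gain used here are purely illustrative.

def peltier_drive(reflectance, set_point=0.8, gain=5.0):
    # Very simplified proportional servo: when the measured specular reflectance
    # is above the set point (mirror too clear), drive the Peltier element to cool;
    # when it is below the set point (deposit too thick), drive it to heat.
    # Returns a normalized drive in [-1, 1] (+1 = full cooling, -1 = full heating).
    drive = gain * (reflectance - set_point)
    return max(-1.0, min(1.0, drive))

print(peltier_drive(0.95))  # clear mirror -> cooling demand (0.75)
print(peltier_drive(0.70))  # thick deposit -> heating demand (-0.5)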


Thermal control is achieved by using an electrical servo-system that takes as input the signal from the optical detector subsystem. Modern systems operate under microprocessor control. A low-boiling-point fluid, such as liquid nitrogen, may be used to provide cooling, but this technique is no longer widely used. Similarly, electrical resistance wire may be used for heating but has now been superseded with the advent of small Peltier devices.

4.4.2.4 Temperature display system

The mirror temperature, as measured by the electrical thermometer embedded beneath the mirror surface, is presented to the observer as the dewpoint of the air sample. Commercial instruments normally include an electrical interface for the mirror thermometer and a digital display, but may also provide digital and analogue electrical outputs for use with data-logging equipment. A chart recorder is particularly useful for monitoring the performance of the instrument in the case where the analogue output provides a continuous registration of the mirror thermometer signal but the digital display does not.

4.4.2.5 Auxiliary systems

A microscope may be incorporated to provide a visual method to discriminate between supercooled water droplets and ice crystals for mirror temperatures below 0°C. Some instruments have a detector mounted on the mirror surface to provide an automatic procedure for this purpose (for example, a capacitive sensor), while others employ a method based on reflectance. A microprocessor-based system may incorporate algorithms to calculate and display relative humidity. In this case, it is important that the instrument should discriminate correctly between a water and an ice deposit. Many instruments provide an automatic procedure for minimizing the effects of contamination. This may be a regular heating cycle in which volatile contaminants are evaporated and removed in the air stream. Systems that automatically clean the mirror by means of a wiper are also in use. For meteorological measurements, and in most laboratory applications, a small pump is required to draw the sampled air through the measuring chamber. A regulating device is also required to set the flow at a rate that is consistent with the stable operation of the mirror temperature servo-control system and at an acceptable rate of response to changes in humidity. The optimum flow rate is dependent upon the moisture content of the air sample and is normally within the range of 0.5 to 1 l min–1.

4.4.3 Observation procedure

The correct operation of a dewpoint hygrometer depends upon achieving an appropriate volume air-flow rate through the measuring chamber. The setting of a regulator for this purpose, usually a throttling device located downstream of the measuring chamber, is likely to require adjustment to accommodate diurnal variations in air temperature. Adjustment of the air-flow will disturb the operation of the hygrometer, and it may even be advisable to initiate a heating cycle. Both measures should be taken with sufficient time in order for a stable operation to be achieved before a reading is taken. The amount of time required will depend upon the control cycle of the individual instrument. The manufacturer's instructions should be consulted to provide appropriate guidance on the air-flow rate to be set and on details of the instrument's control cycle. The condition of the mirror should be checked frequently; the mirror should be cleaned as necessary. The stable operation of the instrument does not necessarily imply that the mirror is clean. It should be washed with distilled water and dried carefully by wiping it with a soft cloth or cotton dabstick to remove any soluble contaminant. Care must be taken not to scratch the surface of the mirror, most particularly where the surface has a thin plating to protect the substrate or where an ice/liquid detector is incorporated. If an air filter is not in use, cleaning should be performed at least daily. If an air filter is in use, its condition should be inspected at each observation. The observer should take care not to stand next to the air inlet or to allow the outlet to become blocked. For readings at, or below, 0°C the observer should determine whether the mirror condensate is supercooled water or ice. If no automatic indication is given, the mirror must be observed. From time to time the operation of any automatic system should be verified. An uncertainty of ±0.3 K over a wide dewpoint range (–60 to 50°C) is specified for the best instruments.


4.4.4 Exposure and siting

The criteria for the siting of the sensor unit are similar to those for any aspirated hygrometer, although less stringent than for either a psychrometer or a relative humidity sensor, considering the fact that the dew or frost point of an air sample is unaffected by changes to the ambient temperature provided that it remains above the dewpoint at all times. For this reason, a temperature screen is not required. The sensor should be exposed in an open space and may be mounted on a post, within a protective housing structure, with an air inlet at the required level. An air-sampling system is required. This is normally a small pump that must draw air from the outlet port of the measuring chamber and eject it away from the inlet duct. Recirculation of the air-flow should be avoided as this represents a poor sampling technique, although under stable operation the water-vapour content at the outlet should be effectively identical to that at the inlet. Recirculation may be avoided by fixing the outlet above the inlet, although this may not be effective under radiative atmospheric conditions when a negative air temperature lapse rate exists. An air filter should be provided for continuous outdoor operations. It must be capable of allowing an adequate throughflow of air without a large blocking factor, as this may result in a significant drop in air pressure and affect the condensation temperature in the measuring chamber. A sintered metal filter may be used in this application to capture all but the smallest aerosol particles. A metal filter has the advantage that it may be heated easily by an electrical element in order to keep it dry under all conditions. It is more robust than the membrane-type filter and more suited to passing the relatively high air-flow rates required by the chilled-mirror method as compared with the sorption method. On the other hand, a metallic filter may be more susceptible to corrosion by atmospheric pollutants than some membrane filters.

4.4.5 Calibration

Regular comparisons against a reference instrument, such as an Assmann psychrometer or another chilled-mirror hygrometer, should be made as the operation of a field chilled mirror is subject to a number of influences which may degrade its performance. An instrument continuously in the field should be the subject of weekly check measurements. As the opportunity arises, its operation at both dew and frost points should be verified. When the mirror temperature is below 0°C the deposit should be inspected visually, if this is possible, to determine whether it is of supercooled water or ice. A useful check is to compare the mirror temperature measurement with the air temperature while the thermal control system of the hygrometer is inactive. The instrument should be aspirated, and the air temperature measured at the mouth of the hygrometer air intake. This check is best performed under stable, non-condensing conditions. In bright sunshine, the sensor and duct should be shaded and allowed to come to equilibrium. The aspiration rate may be increased for this test. An independent field calibration of the mirror thermometer interface may be performed by simulating the thermometer signal. In the case of a platinum resistance thermometer, a standard platinum resistance simulation box, or a decade resistance box and a set of appropriate tables, may be used. A special simulator interface for the hygrometer control unit may also be required.

4.5 THE LITHIUM CHLORIDE HEATED CONDENSATION HYGROMETER (DEW CELL)

4.5.1 General considerations

4.5.1.1 Principles

The physical principles of the heated salt-solution method are discussed in section 4.1.4.5.2. The equilibrium vapour pressure at the surface of a saturated lithium chloride solution is exceptionally low. As a consequence, a solution of lithium chloride is extremely hygroscopic under typical conditions of surface atmospheric humidity; if the ambient vapour pressure exceeds the equilibrium vapour pressure of the solution, water vapour will condense over it (for example, at 0°C water vapour condenses over a plane surface of a saturated solution of lithium chloride at a relative humidity of only 15 per cent). A thermodynamically self-regulating device may be achieved if the solution is heated directly by passing an electrical current through it from a constant-voltage device. An alternating current should be used to prevent polarization of the solution. As the electrical conductivity decreases, so will the heating current, and an equilibrium point will be reached whereby a constant temperature is maintained; any


cooling of the solution will result in the condensation of water vapour, thus causing an increase in conductivity and an increase in heating current, which will reverse the cooling trend. Heating beyond the balance point will evaporate water vapour until the consequent fall in conductivity reduces the electrical heating to the point where it is exceeded by heat losses, and cooling ensues. It follows from the above that there is a lower limit to the ambient vapour pressure that may be measured in this way at any given temperature. Below this value, the salt solution would have to be cooled in order for water vapour to condense. This would be equivalent to the chilled-mirror method except that, in the latter case, condensation takes place at a lower temperature when saturation is achieved with respect to a pure water surface, namely, at the ambient dewpoint. A degree of uncertainty is inherent in the method due to the existence of four different hydrates of lithium chloride. At certain critical temperatures, two of the hydrates may be in equilibrium with the aqueous phase, and the equilibrium temperature achieved by heating is affected according to the hydrate transition that follows. The most serious ambiguity for meteorological purposes occurs for ambient dewpoint temperatures below –12°C. For an ambient dewpoint of –23°C, the potential difference in equilibrium temperature, according to which one of the two hydrate-solution transitions takes place, results in an uncertainty of ±3.5 K in the derived dewpoint value.

4.5.1.2 Description

The dew-cell hygrometer measures the temperature at which the equilibrium vapour pressure for a saturated solution of lithium chloride is equal to the ambient water-vapour pressure. Empirical transformation equations, based on saturation vapour pressure data for lithium chloride solution and for pure water, provide for the derivation of the ambient water vapour and dewpoint with respect to a plane surface of pure water. The dewpoint temperature range of –12 to 25°C results in dew-cell temperatures in the range of 17 to 71°C.

4.5.1.3 Sensors with direct heating

The sensor consists of a tube, or bobbin, with a resistance thermometer fitted axially within. The external surface of the tube is covered with a glass fibre material (usually tape wound around and along the tube) that is soaked with an aqueous solution of lithium chloride, sometimes combined with potassium chloride. Bifilar silver or gold wire is wound over the covering of the bobbin, with equal spacing between the turns. An alternating electrical current source is connected to the two ends of the bifilar winding; this is commonly derived from the normal electrical supply (50 or 60 Hz). The lithium chloride solution is electrically conductive to a degree determined by the concentration of solute. A current passes between adjacent bifilar windings, which act as electrodes, and through the solution. The current heats the solution, which increases in temperature. Except under conditions of extremely low humidity, the ambient vapour pressure will be higher than the equilibrium vapour pressure over the solution of lithium chloride at ambient air temperature, and water vapour will condense onto the solution. As the solution is heated by the electrical current, a temperature will eventually be reached above which the equilibrium vapour pressure exceeds the ambient vapour pressure, evaporation will begin, and the concentration of the solution will increase. An operational equilibrium temperature exists for the instrument, depending upon the ambient water-vapour pressure. Above the equilibrium temperature, evaporation will increase the concentration of the solution, and the electrical current and the heating will decrease and allow heat losses to cause the temperature of the solution to fall. Below the equilibrium temperature, condensation will decrease the concentration of the solution, and the electrical current and the heating will increase and cause the temperature of the solution to rise. At the equilibrium temperature, neither evaporation nor condensation occurs because the equilibrium vapour pressure and the ambient vapour pressure are equal. In practice, the equilibrium temperature measured is influenced by individual characteristics of sensor construction and has a tendency to be higher than that predicted from equilibrium vapour-pressure data for a saturated solution of lithium chloride. However, reproducibility is sufficiently good to allow the use of a standard transfer function for all sensors constructed to a given specification. Strong ventilation affects the heat transfer characteristics of the sensor, and fluctuations in ventilation lead to unstable operation.

CHAPTER 4. MEAsuREMEnT of HuMIDITY

I.4–19

In order to minimize the risk of excessive current when switching on the hygrometer (as the resistance of the solution at ambient temperature is rather low), a current-limiting device, in the form of a small lamp, is normally connected to the heater element. The lamp is chosen so that, at normal bobbin-operating currents, the filament resistance will be low enough for the hygrometer to function properly, while the operating current for the incandescent lamp (even allowing for a bobbin-offering no electrical resistance) is below a value that might damage the heating element. The equilibrium vapour pressure for saturated lithium chloride depends upon the hydrate being in equilibrium with the aqueous solution. In the range of solution temperatures corresponding to dewpoints of –1 to 41°C monohydrate normally occurs. Below –1°C, dihydrate forms, and above 41°C, anhydrous lithium chloride forms. Close to the transition points, the operation of the hygrometer is unstable and the readings ambiguous. However, the –1°C lower dewpoint limit may be extended to –0°C by the addition of a small amount of potassium chloride (KCl). 4.5.1.4 sensors with indirect heating

A current-limiting device should be installed if not provided by the manufacturer, otherwise the high current may damage the sensor when the instrument is powered-up. 4.5.3 exposure and siting

The hygrometer should be located in an open area in a housing structure which protects it from the effects of wind and rain. A system for providing a steady aspiration rate is required. The heat from the hygrometer may affect other instruments; this should be taken into account when choosing its location. The operation of the instrument will be affected by atmospheric pollutants, particularly substances which dissociate in solutions and produce a significant ion concentration. 4.5.4 sources of error

An electrical resistance thermometer is required for measuring the equilibrium temperature; the usual sources of error for thermometry are present. The equilibrium temperature achieved is determined by the properties of the solute, and significant amounts of contaminant will have an unpredictable effect. Variations in aspiration affect the heat exchange mechanisms and, thus, the stability of operation of the instrument. A steady aspiration rate is required for a stable operation. 4.5.5 calibration

Improved accuracy, compared with the arrangement described in section 4.5.1., may be obtained when a solution of lithium chloride is heated indirectly. The conductance of the solution is measured between two platinum electrodes and provides control of a heating coil. 4.5.2 operational procedure

Readings of the equilibrium temperature of the bobbin are taken and a transfer function applied to obtain the dewpoint temperature. Disturbing the sensor should be avoided as the equilibrium temperature is sensitive to changes in heat losses at the bobbin surface. The instrument should be energized continuously. If allowed to cool below the equilibrium temperature for any length of time, condensation will occur and the electrolyte will drip off. Check measurements with a working reference hygrometer must be taken at regular intervals and the instrument must be cleaned and retreated with a lithium chloride solution, as necessary.

A field calibration should be performed at least once a month, by means of comparison with a working standard instrument. Calibration of the bobbin thermometer and temperature display should be performed regularly, as for other operational thermometers and display systems. 4.5.6 Maintenance

The lithium chloride should be renewed regularly. This may be required once a month, but will depend upon the level of atmospheric pollution. When renewing the solution, the bobbin should be washed with distilled water and fresh solution subsequently applied. The housing structure should be cleaned at the same time.

I.4–20

PART I. MEAsuREMEnT of METEoRologICAl VARIABlEs

Fresh solution may be prepared by mixing five parts by weight of anhydrous lithium chloride with 100 parts by weight of distilled water. This is equivalent to 1 g of anhydrous lithium chloride to 0 ml of water. The temperature-sensing apparatus should be maintained in accordance with the recommendations for electrical instruments used for making air temperature measurements, but bearing in mind the difference in the range of temperatures measured.

4.6 ELECTRICAL RESISTIVE AND CAPACITIVE HYGROMETERS

4.6.1 General considerations

Certain hygroscopic materials exhibit changes in their electrical properties in response to a change in the ambient relative humidity, with only a small temperature dependence. Electrical relative humidity sensors are increasingly used for remote-reading applications, particularly where a direct display of relative humidity is required. Since many of them have very non-linear responses to changes in humidity, the manufacturers often supply them with special data-processing and display systems.

4.6.2 Electrical resistance

Sensors made from chemically treated plastic material having an electrically conductive surface layer on the non-conductive substrate may be used for meteorological purposes. The surface resistivity varies according to the ambient relative humidity. The process of adsorption, rather than absorption, is dominant because the humidity-sensitive part of such a sensor is restricted to the surface layer. As a result, this type of sensor is capable of responding rapidly to a change in ambient humidity. This class of sensor includes various electrolytic types in which the availability of conductive ions in a hygroscopic electrolyte is a function of the amount of adsorbed water vapour. The electrolyte may take various physical forms, such as liquid or gel solutions, or an ion-exchange resin. The change in impedance to an alternating current, rather than to a direct current, is measured in order to avoid polarization of the electrolyte. A low-frequency supply can be used, given that the DC resistance is to be measured, and it is therefore possible to employ quite long leads between the sensor and its electrical interface.

4.6.3 Electrical capacitance

The method is based upon the variation of the dielectric properties of a solid, hygroscopic material in relation to the ambient relative humidity. Polymeric materials are most widely used for this purpose. The water bound in the polymer alters its dielectric properties owing to the large dipole moment of the water molecule. The active part of the humidity sensor consists of a polymer foil sandwiched between two electrodes to form a capacitor. The electrical impedance of this capacitor provides a measure of relative humidity. The nominal value of capacitance may be only a few or several hundred picofarads, depending upon the size of the electrodes and the thickness of the dielectric. This will, in turn, influence the range of excitation frequency used to measure the impedance of the device, which is normally at least several kilohertz and, thus, requires that short connections be made between the sensor and the electrical interface to minimize the effect of stray capacitance. Therefore, capacitance sensors normally have the electrical interface built into the probe, and it is necessary to consider the effect of environmental temperature on the performance of the circuit components.

4.6.4 Observation procedure

Sensors based on changes in the electronic properties of hygroscopic materials are frequently used for the remote reading of relative humidity and also for automatic weather stations.

4.6.5 Exposure and siting

The sensors should be mounted inside a thermometer screen. The manufacturer's advice regarding the mounting of the actual sensor should be followed. The use of protective filters is mandatory. Direct contact with liquid water will seriously harm sensors using a hygroscopic electrolyte as the sensing element. Great care should be taken to prevent liquid water from reaching the sensitive element of such sensors.

4.6.6 Calibration

Field and laboratory calibrations should be carried out as for hair hygrometers. Suitable auxiliary equipment to enable checks by means of salt solutions is available for most sensors of this type.

4.6.7 Maintenance

Observers should be encouraged to maintain the hygrometer in clean conditions (see section 4.1.4.10).

4.7 HYGROMETERS USING ABSORPTION OF ELECTROMAGNETIC RADIATION

The water molecule absorbs electromagnetic radiation (EMR) in a range of wavebands and discrete wavelengths; this property can be exploited to obtain a measure of the molecular concentration of water vapour in a gas. The most useful regions of the electromagnetic spectrum, for this purpose, lie in the ultraviolet and infrared regions. Therefore, the techniques are often classified as optical hygrometry or, more correctly, EMR absorption hygrometry. The method makes use of measurements of the attenuation of radiation in a waveband specific to water-vapour absorption, along the path between a source of the radiation and a receiving device. There are two principal methods for determining the degree of attenuation of the radiation, as follows:
(a) Transmission of narrow-band radiation at a fixed intensity to a calibrated receiver: The most commonly used source of radiation is hydrogen gas; the emission spectrum of hydrogen includes the Lyman-alpha line at 121.6 nm, which coincides with a water-vapour absorption band in the ultraviolet region where there is little absorption by other common atmospheric gases. The measuring path is typically a few centimetres in length;
(b) Transmission of radiation at two wavelengths, one of which is strongly absorbed by water vapour and the other being either not absorbed or only very weakly absorbed: If a single source is used to generate the radiation at both wavelengths, the ratio of their emitted intensities may be accurately known, so that the attenuation at the absorbed wavelength can be determined by measuring the ratio of their intensities at the receiver. The most widely used source for this technique is a tungsten lamp, filtered to isolate a pair of wavelengths in the infrared region. The measuring path is normally greater than 1 m.

Both types of EMR absorption hygrometers require frequent calibration and are more suitable for measuring changes in vapour concentration than absolute levels. The most widespread application of the EMR absorption hygrometer is to monitor very high frequency variations in humidity, since the method does not require the detector to achieve vapour-pressure equilibrium with the sample. The time constant of an optical hygrometer is typically just a few milliseconds. The use of optical hygrometers remains restricted to research activities.
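The attenuation measurement described above can be illustrated with the Beer-Lambert law. The following minimal sketch (in Python) is not taken from this Guide: the function name, the mass absorption coefficient and the path length are placeholder assumptions used only to show how a vapour density could be retrieved from an intensity ratio.

```python
import math

def vapour_density_from_attenuation(i_received, i_emitted, k_abs, path_m):
    """Illustrative single-wavelength retrieval: if attenuation follows
    I = I0 * exp(-k * rho_v * L), then rho_v = -ln(I / I0) / (k * L)."""
    return -math.log(i_received / i_emitted) / (k_abs * path_m)

# Placeholder numbers (not from the Guide): 40% of the emitted intensity
# reaches the receiver over a 0.05 m path, with an assumed coefficient k.
k = 1000.0   # assumed mass absorption coefficient, m^2 kg^-1
L = 0.05     # path length, m
print(round(vapour_density_from_attenuation(0.4, 1.0, k, L), 3), "kg m-3")
```

In practice the coefficient would come from calibration of the particular instrument, which is why the text above stresses that such hygrometers require frequent calibration.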

4.8 SAFETY

Chemical agents are widely used in the measurement of humidity. The properties of such agents should always be made known to the personnel handling them. All chemicals should be kept in secure and clearly labelled containers and stored in an appropriate environment. Instructions concerning the use of toxic materials may be prescribed by local authorities. Saturated salt solutions are widely used in the measurement of humidity. The notes that follow give some guidance for the safe use of some commonly used salts:
(a) Barium chloride (BaCl2): Colourless crystals; very soluble in water; stable, but may emit toxic fumes in a fire; no hazardous reaction with water, acids, bases, oxidizers or with combustible materials; ingestion causes nausea, vomiting, stomach pains and diarrhoea; harmful if inhaled as dust and if it comes into contact with the skin; irritating to eyes; treat with copious amounts of water and obtain medical attention if ingested;
(b) Calcium chloride (CaCl2): Colourless crystals; deliquescent; very soluble in water, dissolves with increase in heat; will initiate exothermic polymerization of methyl vinyl ether; can react with zinc to liberate hydrogen; no hazardous reactions with acids, bases, oxidizers or combustibles; irritating to the skin, eyes and


respiratory system; ingestion causes gastric irritation; ingestion of large amounts can lead to hypercalcaemia, dehydration and renal damage; treat with copious amounts of water and obtain medical attention;
(c) Lithium chloride (LiCl): Colourless crystals; stable if kept dry; very soluble in water; may emit toxic fumes in a fire; ingestion may affect the ionic balance of the blood, leading to anorexia, diarrhoea, vomiting, dizziness and central nervous system disturbances; kidney damage may result if sodium intake is low (provide plenty of drinking water and obtain medical attention); no hazardous reactions with water, acids, bases, oxidizers or combustibles;
(d) Magnesium nitrate (Mg(NO3)2): Colourless crystals; deliquescent; very soluble in water; may ignite combustible material; can react vigorously with deoxidizers; can decompose spontaneously in dimethylformamide; may emit toxic fumes in a fire (fight the fire with a water spray); ingestion of large quantities can have fatal effects (provide plenty of drinking water and obtain medical attention); may irritate the skin and eyes (wash with water);
(e) Potassium nitrate (KNO3): White crystals or crystalline powder; very soluble in water; stable, but may emit toxic fumes in a fire (fight the fire with a water spray); ingestion of large quantities causes vomiting, but it is

Table 4.4. Standard instruments for the measurement of humidity

                                               Dewpoint temperature              Relative humidity (%)
Standard instrument                            Range (°C)     Uncertainty (K)    Range       Uncertainty
Primary standard
  Requirement                                  –60 to –15     0.3
                                               –15 to 40      0.1                5 to 100    0.2
  Gravimetric hygrometer                       –60 to –35     0.25
                                               –35 to 35      0.03
                                               35 to 60       0.25
  Standard two-temperature humidity generator  –75 to –15     0.25
                                               –15 to 30      0.1
                                               30 to 80       0.2
  Standard two-pressure humidity generator     –75 to 30      0.2                5 to 100    0.2
Secondary standard
  Requirement                                  –80 to –15     0.75
                                               –15 to 40      0.25               5 to 100    0.5
  Chilled-mirror hygrometer                    –60 to 40      0.15
  Reference psychrometer                                                         5 to 100    0.6
Reference standard
  Requirement                                  –80 to –15     1.0
                                               –15 to 40      0.3                5 to 100    1.5
  Reference psychrometer                                                         5 to 100    0.6
  Chilled-mirror hygrometer                    –60 to 40      0.3
Working standard
  Requirement                                  –15 to 40      0.5                5 to 100    2
  Assmann psychrometer                         –10 to 25                         40 to 90    1
  Chilled-mirror hygrometer                    –10 to 30      0.5


rapidly excreted in urine (provide plenty of drinking water); may irritate the eyes (wash with water); no hazardous reaction with water, acids, bases, oxidizers or combustibles;
(f) Sodium chloride (NaCl): Colourless crystals or white powder; very soluble in water; stable; no hazardous reaction with water, acids, bases, oxidizers or combustibles; ingestion of large amounts may cause diarrhoea, nausea, vomiting, deep and rapid breathing and convulsions (in severe cases obtain medical attention).

Advice concerning the safe use of mercury is given in Part I, Chapter 3.

4.9 STANDARD INSTRUMENTS AND CALIBRATION

4.9.1 Principles involved in the calibration of hygrometers

Precision in the calibration of humidity sensors entails special problems, to a great extent owing to the relatively small quantity of water vapour which can exist in an air sample at normal temperatures, but also due to the general difficulty of isolating and containing gases and, more particularly, vapour. An ordered hierarchy of international traceability in humidity standards is only now emerging. An absolute standard for humidity (namely, a realization of the physical definition for the quantity of humidity) can be achieved by gravimetric hygrometry. The reference psychrometer (within its limited range) is also a form of primary standard, in that its performance is calculable. The calibration of secondary, reference and working standards involves several steps. Table 4.4 shows a summary of humidity standard instruments and their performances.

A practical field calibration is most frequently done by means of well-designed aspirated psychrometers and dewpoint sensors as working standards. These specific types of standards must be traceable to the higher levels of standards by careful comparisons. Any instrument used as a standard must be individually calibrated for all variables involved in calculating humidity (air temperature, wet-bulb temperature, dewpoint temperature, and so forth). Other factors affecting performance, such as airflow, must also be checked.

4.9.2 Calibration intervals and methods

Regular calibration is required for all humidity sensors in the field. For chilled-mirror psychrometers and heated dewpoint hygrometers that use a temperature detector, calibration can be checked whenever a regular maintenance routine is performed. Comparison with a working standard, such as an Assmann psychrometer, should be performed at least once a month. The use of a standard type of aspirated psychrometer, such as the Assmann, as a working standard has the advantage that its integrity can be verified by comparing the dry- and wet-bulb thermometers, and that adequate aspiration may be expected from a healthy-sounding fan. The reference instrument should itself be calibrated at an interval appropriate to its type. Saturated salt solutions can be applied with sensors that require only a small-volume sample. A very stable ambient temperature is required, and it is difficult to be confident about their use in the field. When using salt solutions for control purposes, it should be borne in mind that the nominal humidity value given for the salt solution itself is not traceable to any primary standard.

4.9.3 Laboratory calibration

Laboratory calibration is essential for maintaining accuracy in the following ways:
(a) Field and working standard instruments: Laboratory calibration of field and working standard instruments should be carried out on the same regular basis as for other operational thermometers. For this purpose, the chilled-mirror sensor device may be considered separately from the control unit. The mirror thermometer should be calibrated independently, and the control unit should be calibrated on the same regular basis as other items of precision electronic equipment. The calibration of a field instrument in a humidity generator is not strictly necessary if the components have been calibrated separately, as described previously. The correct operation of an instrument may be verified under stable room conditions by comparison with a reference instrument, such as an Assmann psychrometer or a standard chilled-mirror hygrometer. If the field instrument incorporates an ice detector, the correct operation of this system should be verified;
(b) Reference and standard instruments: Laboratory calibration of reference and standard instruments requires a precision humidity generator and a suitable transfer standard hygrometer. Two-pressure and two-temperature humidity generators are able to deliver a suitable controlled flow of air at a predetermined temperature and dewpoint. The calibration should be performed at least every 12 months and over the full range of the reference application for the instrument. The calibration of the mirror thermometer and the temperature display system should be performed independently at least once every 12 months.

4.9.4 Primary standards

4.9.4.1 Gravimetric hygrometry

The gravimetric method yields an absolute measure of the water-vapour content of an air sample in terms of its humidity mixing ratio. This is obtained by first removing the water vapour from the sample using a known mass of a drying agent, such as anhydrous phosphorus pentoxide (P2O5) or magnesium perchlorate (Mg(ClO4)2). The mass of the water vapour is determined by weighing the drying agent before and after absorbing the vapour. The mass of the dry sample is determined either by weighing (after liquefaction to render the volume of the sample manageable) or by measuring its volume (and having knowledge of its density). The complexity of the apparatus required to carry out the described procedure accurately limits the application of this method to the laboratory environment. In addition, a substantial volume sample of air is required for accurate measurements to be taken, and a practical apparatus requires a steady flow of the humid gas for a number of hours, depending upon the humidity, in order to remove a sufficient mass of water vapour for an accurate weighing measurement. As a consequence, the method is restricted to providing an absolute calibration reference standard. Such an apparatus is found mostly in national calibration standards laboratories.

4.9.4.2 Dynamic two-pressure standard humidity generator

This laboratory apparatus serves to provide a source of humid gas whose relative humidity is determined on an absolute basis. A stream of the carrier gas is passed through a saturating chamber at pressure P1 and allowed to expand isothermally in a second chamber at a lower pressure P2. Both chambers are maintained at the same temperature in an oil bath. The relative humidity of the water vapour-gas mixture is straightforwardly related to the total pressures in each of the two chambers through Dalton's law of partial pressures; the partial pressure e′ of the vapour in the low-pressure chamber bears the same relation to the saturation vapour pressure e′w as the total pressure in the low-pressure chamber bears to the total pressure in the high-pressure saturator. Thus, the relative humidity Uw is given by:

Uw = 100 · e′/e′w = 100 · P2/P1    (4.5)

The relation also holds for the solid phase if the gas is saturated with respect to ice at pressure P1:

Ui = 100 · e′/e′i = 100 · P2/P1    (4.6)

4.9.4.3 Dynamic two-temperature standard humidity generator

This laboratory apparatus provides a stream of humid gas at temperature T1 having a dew- or frost-point temperature T2. Two temperature-controlled baths, each equipped with heat exchangers and one with a saturator containing either water or ice, are used first to saturate the air-stream at temperature T2 and then to heat it isobarically to temperature T1. In practical designs, the air-stream is continuously circulated to ensure saturation. Test instruments draw off air at temperature T1 and at a flow rate that is small in proportion to the main circulation.

4.9.5 Secondary standards

A secondary standard instrument should be carefully maintained and removed from the calibration laboratory only for the purpose of calibration with a primary standard or for intercomparison with other secondary standards. Secondary standards may be used as transfer standards from the primary standards. A chilled-mirror hygrometer may be used as a secondary standard instrument under controlled conditions of air temperature, humidity and pressure. For this purpose, it should be calibrated by a recognized accredited laboratory, giving uncertainty limits throughout the operational range of the instrument. This calibration must be directly traceable to a primary standard and should be renewed at an appropriate interval (usually once every 12 months).

General considerations for chilled-mirror hygrometers are discussed in section 4.4. This method presents a fundamental technique for determining atmospheric humidity. Provided that the instrument is maintained and operated correctly, following the manufacturer's instructions, it can provide a primary measurement of dew or frost point within limits of uncertainty determined by the correspondence between the mirror surface temperature at the appropriate point of the condensation/evaporation cycle and the temperature registered by the mirror thermometer at the observation time. The Kelvin and Raoult effects upon the condensation temperature must be taken into consideration, and any change of the air pressure resulting from the sampling technique must be taken into account by using the equations given in section 4.4.1.

4.9.6 Working standards (and field reference instruments)

A chilled-mirror hygrometer or an Assmann psychrometer may be used as a working standard for comparisons under ambient conditions in the field or the laboratory. For this purpose, it is necessary to have performed comparisons at least at the reference standard level. The comparisons should be performed at least once every 12 months under stable room conditions. The working standard will require a suitable aspiration device to sample the air.

4.9.7 The WMO reference psychrometer

This type of psychrometer is essentially a primary standard because its performance is calculable. However, its main use is as a highly accurate reference instrument, specifically for type-testing other instrument systems in the field. It is intended for use as a free-standing instrument, alongside the screen or other field instruments, and must be made precisely to its general specification and operated by skilled staff experienced in precise laboratory work; careful attention should be given to aspiration and to preventing the wet bulb from being contaminated by contact with fingers or other objects. There are, however, simple tests by which the readings may be validated at any time, and these should be used frequently during operation. The psychrometer's description and operating instructions are given in WMO (1992).

4.9.8 Saturated salt solutions

Vessels containing saturated solutions of appropriate salts may be used to calibrate relative humidity sensors. Commonly used salts and their saturation relative humidities at 25°C are as follows:
Barium chloride (BaCl2): 90.3 per cent
Sodium chloride (NaCl): 75.3 per cent
Magnesium nitrate (Mg(NO3)2): 52.9 per cent
Calcium chloride (CaCl2): 29.0 per cent
Lithium chloride (LiCl): 11.1 per cent

It is important that the surface area of the solution is large compared to that of the sensor element and the enclosed volume of air, so that equilibrium may be achieved quickly; an airtight access port is required for the test sensor. The temperature of the vessel should be measured and maintained at a constant level, as the saturation humidity for most salts has a significant temperature coefficient.

Care should be taken when using saturated salt solutions. The degree of toxicity and corrosivity of salt solutions should be known to the personnel dealing with them. The salts listed above may all be used quite safely, but it is nevertheless important to avoid contact with the skin, and to avoid ingestion and splashing into the eyes. The salts should always be kept in secure and clearly labelled containers which detail any hazards involved. Care should be taken when dissolving calcium chloride crystals in water, as much heat is evolved. Section 4.8 deals with chemical hazards in greater detail.

Although the use of saturated salt solutions provides a simple method to adjust some (relative) humidity sensors, such adjustment cannot be considered as a traceable calibration of the sensors. The (nominal) values of salt solutions have, at the moment, generally no traceability to reference standards. Measurements from sensors adjusted by means of the saturated salt solution method should always be checked by calibration standards after adjustment.
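As a simple illustration of using the reference points listed in section 4.9.8, the following sketch (in Python) compares sensor readings taken over saturated salt solutions with the nominal values given above. The function name, the example readings and the acceptance tolerance are assumptions made for this illustration only; they are not requirements of this Guide.

```python
# Nominal saturation relative humidities (%) over saturated salt solutions
# at 25°C, as listed above; these values are not traceable to a primary standard.
SALT_RH_25C = {
    "BaCl2": 90.3,
    "NaCl": 75.3,
    "Mg(NO3)2": 52.9,
    "CaCl2": 29.0,
    "LiCl": 11.1,
}

def check_sensor(readings, tolerance=3.0):
    """Compare sensor readings (salt name -> indicated RH in %) with the
    nominal values and flag deviations larger than an assumed tolerance."""
    report = {}
    for salt, indicated in readings.items():
        deviation = indicated - SALT_RH_25C[salt]
        report[salt] = (round(deviation, 1), abs(deviation) <= tolerance)
    return report

print(check_sensor({"LiCl": 13.0, "NaCl": 74.1, "BaCl2": 85.6}))
```

Any adjustment based on such a check should, as stated above, still be verified against traceable calibration standards.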


ANNEX 4.A

DEFINITIONS AND SPECIFICATIONS OF WATER VAPOUR IN THE ATMOSPHERE
(adapted from the Technical Regulations (WMO-No. 49), Volume I, Appendix B)

(1) The mixing ratio r of moist air is the ratio of the mass mv of water vapour to the mass ma of dry air with which the water vapour is associated:

r = mv/ma    (4.A.1)

(2) The specific humidity, mass concentration or moisture content q of moist air is the ratio of the mass mv of water vapour to the mass mv + ma of moist air in which the mass of water vapour mv is contained:

q = mv/(mv + ma)    (4.A.2)

(3) Vapour concentration (density of water vapour in a mixture) or absolute humidity: for a mixture of water vapour and dry air, the vapour concentration ρv is defined as the ratio of the mass of vapour mv to the volume V occupied by the mixture:

ρv = mv/V    (4.A.3)

(4) Mole fraction of the water vapour of a sample of moist air: the mole fraction xv of the water vapour of a sample of moist air, composed of a mass ma of dry air and a mass mv of water vapour, is defined by the ratio of the number of moles of water vapour (nv = mv/Mv) to the total number of moles of the sample nv + na, where na indicates the number of moles of dry air (na = ma/Ma) of the sample concerned. This gives:

xv = nv/(na + nv)    (4.A.4)

or:

xv = r/(0.621 98 + r)    (4.A.5)

where r is merely the mixing ratio (r = mv/ma) of the water vapour of the sample of moist air.

(5) The vapour pressure e′ of water vapour in moist air at total pressure p and with mixing ratio r is defined by:

e′ = [r/(0.621 98 + r)] · p = xv · p    (4.A.6)

(6) Saturation: moist air at a given temperature and pressure is said to be saturated if its mixing ratio is such that the moist air can coexist in neutral equilibrium with an associated condensed phase (liquid or solid) at the same temperature and pressure, the surface of separation being plane.

(7) Saturation mixing ratio: the symbol rw denotes the saturation mixing ratio of moist air with respect to a plane surface of the associated liquid phase. The symbol ri denotes the saturation mixing ratio of moist air with respect to a plane surface of the associated solid phase. The associated liquid and solid phases referred to consist of almost pure water and almost pure ice, respectively, there being some dissolved air in each.

(8) Saturation vapour pressure in the pure phase: the saturation vapour pressure ew of pure aqueous vapour with respect to water is the pressure of the vapour when in a state of neutral equilibrium with a plane surface of pure water at the same temperature and pressure; similarly for ei with respect to ice; ew and ei are temperature-dependent functions only, namely:

ew = ew(T)    (4.A.7)
ei = ei(T)    (4.A.8)
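A short numerical sketch of these definitions follows (in Python); the values of r and p are arbitrary examples chosen for illustration and are not data from this Guide.

```python
# Mixing ratio r = m_v / m_a (kg/kg) and total pressure p (hPa), example values
r = 0.008
p = 1000.0

q = r / (1 + r)             # specific humidity, eq. 4.A.2 rewritten as r/(1 + r)
x_v = r / (0.62198 + r)     # mole fraction of water vapour, eq. 4.A.5
e_prime = x_v * p           # vapour pressure of moist air, eq. 4.A.6 (hPa)

print(round(q, 5), round(x_v, 5), round(e_prime, 2))
```

The rewriting q = r/(1 + r) follows directly from equation 4.A.2 by dividing numerator and denominator by ma.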

(9) Mole fraction of water vapour in moist air saturated with respect to water: The mole fraction of water vapour in moist air saturated with respect to water, at pressure p and temperature T, is the mole fraction xvw of the water vapour of a sample of moist air, at the same pressure p and the same temperature T, that is in stable equilibrium in the presence of a plane surface of water

containing the amount of dissolved air corresponding to equilibrium. Similarly, xvi will be used to indicate the saturation mole fraction with respect to a plane surface of ice containing the amount of dissolved air corresponding to equilibrium.

(10) Saturation vapour pressure of moist air: the saturation vapour pressure with respect to water e′w of moist air at pressure p and temperature T is defined by:

e′w = [rw/(0.621 98 + rw)] · p = xvw · p    (4.A.9)

Similarly, the saturation vapour pressure with respect to ice e′i of moist air at pressure p and temperature T is defined by:

e′i = [ri/(0.621 98 + ri)] · p = xvi · p    (4.A.10)

(11) Relations between saturation vapour pressures of the pure phase and of moist air: in the meteorological range of pressure and temperature the following relations hold with an error of 0.5 per cent or less:

e′w = ew    (4.A.11)
e′i = ei    (4.A.12)

(12) The thermodynamic dewpoint temperature Td of moist air at pressure p and with mixing ratio r is the temperature at which moist air, saturated with respect to water at the given pressure, has a saturation mixing ratio rw equal to the given mixing ratio r.

(13) The thermodynamic frost-point temperature Tf of moist air at pressure p and mixing ratio r is the temperature at which moist air, saturated with respect to ice at the given pressure, has a saturation mixing ratio ri equal to the given ratio r.

(14) The dewpoint and frost-point temperatures so defined are related to the mixing ratio r and pressure p by the respective equations:

e′w(p, Td) = f(p) · ew(Td) = xv · p = [r/(0.621 98 + r)] · p    (4.A.13)

e′i(p, Tf) = f(p) · ei(Tf) = xv · p = [r/(0.621 98 + r)] · p    (4.A.14)

(15)¹ The relative humidity Uw with respect to water of moist air at pressure p and temperature T is the ratio in per cent of the vapour mole fraction xv to the vapour mole fraction xvw which the air would have if it were saturated with respect to water at the same pressure p and temperature T. Accordingly:

Uw = 100 (xv/xvw)p,T = 100 (pxv/pxvw)p,T = 100 (e′/e′w)p,T    (4.A.15)

where subscripts p,T indicate that each term is subject to identical conditions of pressure and temperature. The last expression is formally similar to the classical definition based on the assumption of Dalton's law of partial pressures. Uw is also related to the mixing ratio r by:

Uw = 100 · (r/rw) · [(0.621 98 + rw)/(0.621 98 + r)]    (4.A.16)

where rw is the saturation mixing ratio at the pressure and temperature of the moist air.

(16)¹ The relative humidity Ui with respect to ice of moist air at pressure p and temperature T is the ratio in per cent of the vapour mole fraction xv to the vapour mole fraction xvi which the air would have if it were saturated with respect to ice at the same pressure p and temperature T. Corresponding to the defining equation in paragraph 15:

Ui = 100 (xv/xvi)p,T = 100 (pxv/pxvi)p,T = 100 (e′/e′i)p,T    (4.A.17)

(17) Relative humidity at temperatures less than 0°C is to be evaluated with respect to water. The advantages of this procedure are as follows:
(a) Most hygrometers which are essentially responsive to the relative humidity indicate relative humidity with respect to water at all temperatures;
(b) The majority of clouds at temperatures below 0°C consist of water, or mainly of water;
(c) Relative humidities greater than 100 per cent would in general not be observed. This is of particular importance in synoptic weather messages, since the atmosphere is often supersaturated with respect to ice at temperatures below 0°C;
(d) The majority of existing records of relative humidity at temperatures below 0°C are expressed on a basis of saturation with respect to water.

¹ Equations 4.A.15 and 4.A.17 do not apply to moist air when pressure p is less than the saturation vapour pressure of pure water and ice, respectively, at temperature T.
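Referring back to equation 4.A.16, a minimal numerical sketch of its evaluation is given below (in Python); the mixing ratios used are arbitrary example values, not data from this Guide.

```python
def relative_humidity_from_mixing_ratio(r, r_w):
    """Relative humidity with respect to water, per equation 4.A.16:
    Uw = 100 * (r / r_w) * (0.62198 + r_w) / (0.62198 + r)."""
    return 100.0 * (r / r_w) * (0.62198 + r_w) / (0.62198 + r)

# Example: observed mixing ratio 0.008 kg/kg, saturation mixing ratio
# 0.012 kg/kg at the same pressure and temperature (assumed values).
print(round(relative_humidity_from_mixing_ratio(0.008, 0.012), 1))
```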

(18) The thermodynamic wet-bulb temperature of moist air at pressure p, temperature T and mixing ratio r is the temperature Tw attained by the moist air when brought adiabatically to saturation at pressure p by the evaporation into the moist air of liquid water at pressure p and temperature Tw and containing the amount of dissolved air corresponding to equilibrium with saturated air of the same pressure and temperature. Tw is defined by the equation:

h(p,T,r) + [rw(p,Tw) − r] · hw(p,Tw) = h(p,Tw, rw(p,Tw))    (4.A.18)

where rw(p,Tw) is the mixing ratio of saturated moist air at pressure p and temperature Tw; hw(p,Tw) is the enthalpy² of 1 gram of pure water at pressure p and temperature Tw; h(p,T,r) is the enthalpy of 1 + r grams of moist air, composed of 1 gram of dry air and r grams of water vapour, at pressure p and temperature T; and h(p,Tw, rw(p,Tw)) is the enthalpy of 1 + rw grams of saturated air, composed of 1 gram of dry air and rw grams of water vapour, at pressure p and temperature Tw. (This is a function of p and Tw only and may appropriately be denoted by hsw(p,Tw).) If air and water vapour are regarded as ideal gases with constant specific heats, the above equation becomes:

T − Tw = [rw(p,Tw) − r] · Lv(Tw) / (cpa + r · cpv)    (4.A.19)

where Lv(Tw) is the heat of vaporization of water at temperature Tw; cpa is the specific heat of dry air at constant pressure; and cpv is the specific heat of water vapour at constant pressure.

Note: Thermodynamic wet-bulb temperature as here defined has for some time been called "temperature of adiabatic saturation" by air-conditioning engineers.

(19) The thermodynamic ice-bulb temperature of moist air at pressure p, temperature T and mixing ratio r is the temperature Ti at which pure ice at pressure p must be evaporated into the moist air in order to saturate it adiabatically at pressure p and temperature Ti. The saturation is with respect to ice. Ti is defined by the equation:

h(p,T,r) + [ri(p,Ti) − r] · hi(p,Ti) = h(p,Ti, ri(p,Ti))    (4.A.20)

where ri(p,Ti) is the mixing ratio of saturated moist air at pressure p and temperature Ti; hi(p,Ti) is the enthalpy of 1 gram of pure ice at pressure p and temperature Ti; h(p,T,r) is the enthalpy of 1 + r grams of moist air, composed of 1 gram of dry air and r grams of water vapour, at pressure p and temperature T; and h(p,Ti, ri(p,Ti)) is the enthalpy of 1 + ri grams of saturated air, composed of 1 gram of dry air and ri grams of water vapour, at pressure p and temperature Ti. (This is a function of p and Ti only, and may appropriately be denoted by hsi(p,Ti).) If air and water vapour are regarded as ideal gases with constant specific heats, the above equation becomes:

T − Ti = [ri(p,Ti) − r] · Ls(Ti) / (cpa + r · cpv)    (4.A.21)

where Ls(Ti) is the heat of sublimation of ice at temperature Ti.

The relationship between Tw and Ti as defined and the wet-bulb or ice-bulb temperature as indicated by a particular psychrometer is a matter to be determined by carefully controlled experiment, taking into account the various variables concerned, for example, ventilation, size of thermometer bulb and radiation.

² The enthalpy of a system in equilibrium at pressure p and temperature T is defined as E + pV, where E is the internal energy of the system and V is its volume. The sum of the enthalpies of the phases of a closed system is conserved in adiabatic isobaric processes.
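The simplified form 4.A.19 can be evaluated directly, as in the following sketch (in Python). The latent heat and specific heats below are typical textbook values inserted for illustration only; they are not prescribed by this Guide, and the mixing ratios are arbitrary example values.

```python
# Illustrative evaluation of equation 4.A.19 (ideal gases, constant specific heats)
L_v = 2.45e6    # heat of vaporization of water near 20°C (J/kg), approximate value
c_pa = 1005.0   # specific heat of dry air at constant pressure (J/(kg K)), approximate
c_pv = 1850.0   # specific heat of water vapour at constant pressure (J/(kg K)), approximate

r = 0.008       # actual mixing ratio (kg/kg), example value
r_w = 0.012     # saturation mixing ratio at (p, T_w) (kg/kg), example value

wet_bulb_depression = (r_w - r) * L_v / (c_pa + r * c_pv)
print(round(wet_bulb_depression, 1))   # T - T_w in kelvin
```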


ANNEX 4.B

FORMULAE FOR THE COMPUTATION OF MEASURES OF HUMIDITY
(see also section 4.1.2)

Saturation vapour pressure:
ew(t) = 6.112 exp [17.62 t/(243.12 + t)]          Water (–45 to 60°C) (pure phase)
e′w(p,t) = f(p) · ew(t)                           Moist air
ei(t) = 6.112 exp [22.46 t/(272.62 + t)]          Ice (–65 to 0°C) (pure phase)
e′i(p,t) = f(p) · ei(t)                           Moist air
f(p) = 1.001 6 + 3.15 · 10–6 p – 0.074 p–1        [see note]

Dewpoint and frost point:
td = 243.12 · ln [e′/(6.112 f(p))] / {17.62 – ln [e′/(6.112 f(p))]}        Water (–45 to 60°C)
tf = 272.62 · ln [e′/(6.112 f(p))] / {22.46 – ln [e′/(6.112 f(p))]}        Ice (–65 to 0°C)

Psychrometric formulae for the Assmann psychrometer:
e′ = e′w(p,tw) – 6.53 · 10–4 · (1 + 0.000 944 tw) · p · (t – tw)           Water
e′ = e′i(p,ti) – 5.75 · 10–4 · p · (t – ti)                                Ice

Relative humidity:
U = 100 e′/e′w(p,t) %
U = 100 e′w(p,td)/e′w(p,t) %

Units applied:
t = air temperature (dry-bulb temperature);
tw = wet-bulb temperature;
ti = ice-bulb temperature;
td = dewpoint temperature;
tf = frost-point temperature;
p = pressure of moist air;
ew(t) = saturation vapour pressure in the pure phase with regard to water at the dry-bulb temperature;
ew(tw) = saturation vapour pressure in the pure phase with regard to water at the wet-bulb temperature;
ei(t) = saturation vapour pressure in the pure phase with regard to ice at the dry-bulb temperature;
ei(ti) = saturation vapour pressure in the pure phase with regard to ice at the ice-bulb temperature;
e′w(t) = saturation vapour pressure of moist air with regard to water at the dry-bulb temperature;
e′w(tw) = saturation vapour pressure of moist air with regard to water at the wet-bulb temperature;
e′i(t) = saturation vapour pressure of moist air with regard to ice at the dry-bulb temperature;
e′i(ti) = saturation vapour pressure of moist air with regard to ice at the ice-bulb temperature;
U = relative humidity.

Note: In fact, f is a function of both pressure and temperature, i.e. f = f(p, t), as explained in WMO (1966) in the introduction to Table 4.10. In practice, the temperature dependency (±0.1%) is much smaller than the pressure dependency (0 to +0.6%). Therefore, the temperature dependency may be omitted in the formula above (see also WMO (1989a), Chapter 10). This formula, however, should be used only for pressures around 1 000 hPa (i.e. surface measurements) and not for upper-air measurements, for which WMO (1966), Table 4.10 should be used.
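A compact sketch implementing these formulae is given below (in Python). The function names and the example readings are illustrative choices, not part of this Guide; pressures are in hPa and temperatures in °C, and the sketch applies only within the validity ranges stated above.

```python
import math

def f(p):
    """Enhancement factor for moist air (p in hPa), as in the formula above."""
    return 1.0016 + 3.15e-6 * p - 0.074 / p

def e_w(t, p):
    """Saturation vapour pressure of moist air over water (hPa), -45 to 60°C."""
    return f(p) * 6.112 * math.exp(17.62 * t / (243.12 + t))

def dewpoint(e_prime, p):
    """Dewpoint (°C) from the vapour pressure of moist air e_prime (hPa)."""
    y = math.log(e_prime / (6.112 * f(p)))
    return 243.12 * y / (17.62 - y)

def vapour_pressure_assmann(t, t_w, p):
    """Vapour pressure (hPa) from Assmann dry- and wet-bulb readings (water)."""
    return e_w(t_w, p) - 6.53e-4 * (1 + 0.000944 * t_w) * p * (t - t_w)

# Example readings (assumed): dry bulb 20°C, wet bulb 15°C, pressure 1000 hPa
t, t_w, p = 20.0, 15.0, 1000.0
e_prime = vapour_pressure_assmann(t, t_w, p)
U = 100.0 * e_prime / e_w(t, p)
print(round(e_prime, 1), round(dewpoint(e_prime, p), 1), round(U, 1))
# prints approximately 13.8 (hPa), 11.7 (°C) and 58.8 (%)
```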


REFERENCES AND FURTHER READING

Bindon, H.H., 1965: A critical review of tables and charts used in psychrometry. In: A. Wexler (ed.), Humidity and Moisture, Volume 1, Reinhold, New York, pp. –15. Sonntag, D., 1990: Important new values of the physical constants of 1986, vapour pressure formulations based on the ITS-90 and psychrometer formulae. Zeitschrift für Meteorologie, Volume 40, Number 5, pp. 40–44. Sonntag, D., 1994: Advancements in the field of hygrometry. Zeitschrift für Meteorologie, Volume , Number , pp. 51–66. Wexler, A. (ed.), 1965: Humidity and Moisture. Volumes 1 and , Reinhold, New York. World Meteorological Organization, 1966: International Meteorological Tables (S. Letestu, ed.). WMO-No. 188.TP.94, Geneva.

World Meteorological Organization, 1988: Technical Regulations. Volume I, WMO-No. 49, Geneva. World Meteorological Organization, 1989a: WMO Assmann Aspiration Psychrometer Intercomparison (D. Sonntag). Instruments and Observing Methods Report No. 34, WMO/TD-No. 289, Geneva. World Meteorological Organization, 1989b: WMO International Hygrometer Intercomparison (J. Skaar, K. Hegg, T. Moe and K. Smedstud). Instruments and Observing Methods Report No. 38, WMO/TD-No. 316, Geneva. World Meteorological Organization, 1992: Measurement of Temperature and Humidity (R.G. Wylie and T. Lalas). Technical Note No. 194, WMO-No. 759, Geneva.

CHAPTER 5

MEASUREMENT OF SURFACE WIND

5.1 GENERAL

5.1.1 Definitions

The following definitions are used in this chapter (see Mazzarella, 1972, for more details).

Wind velocity is a three-dimensional vector quantity with small-scale random fluctuations in space and time superimposed upon a larger-scale organized flow. It is considered in this form in relation to, for example, airborne pollution and the landing of aircraft. For the purpose of this Guide, however, surface wind will be considered mainly as a two-dimensional vector quantity specified by two numbers representing direction and speed. The extent to which wind is characterized by rapid fluctuations is referred to as gustiness, and single fluctuations are called gusts.

Most users of wind data require the averaged horizontal wind, usually expressed in polar coordinates as speed and direction. More and more applications also require information on the variability, or gustiness, of the wind. For this purpose, three quantities are used, namely the peak gust and the standard deviations of wind speed and direction.

Averaged quantities are quantities (for example, horizontal wind speed) that are averaged over a period of 10 to 60 min. This chapter deals mainly with averages over 10 min intervals, as used for forecasting purposes. Climatological statistics usually require averages over each entire hour, day and night. Aeronautical applications often use shorter averaging intervals (see Part II, Chapter 2). Averaging periods shorter than a few minutes do not sufficiently smooth the usually occurring natural turbulent fluctuations of wind; therefore, 1 min "averages" should be described as long gusts.

Peak gust is the maximum observed wind speed over a specified time interval. With hourly weather reports, the peak gust refers to the wind extreme in the last full hour.

Gust duration is a measure of the duration of the observed peak gust. The duration is determined by the response of the measuring system. Slowly responding systems smear out the extremes and measure long smooth gusts; fast-response systems may indicate sharp wave-front gusts with a short duration. For the definition of gust duration an ideal measuring chain is used, namely a single filter that takes a running average over t0 seconds of the incoming wind signal. Extremes detected behind such a filter are defined as peak gusts with duration t0. Other measuring systems with various filtering elements are said to measure gusts with duration t0 when a running average filter with integration time t0 would have produced an extreme with the same height (see Beljaars, 1987; WMO, 1987 for further discussion).

Standard deviation is:

su = √( mean of (ui – U)² ) = √( (Σui² – (Σui)²/n)/n )    (5.1)

where u is a time-dependent signal (for example, horizontal wind speed) with average U, the mean being taken over n samples ui. The standard deviation is used to characterize the magnitude of the fluctuations in a particular signal.

Time constant (of a first-order system) is the time required for a device to detect and indicate about 63 per cent of a step-function change.

Response length is approximately the passage of wind (in metres) required for the output of a wind-speed sensor to indicate about 63 per cent of a step-function change of the input speed.

Critical damping (of a sensor such as a wind vane, having a response best described by a second-order differential equation) is the value of damping which gives the most rapid transient response to a step change without overshoot.

Damping ratio is the ratio of the actual damping to the critical damping.

Undamped natural wavelength is the passage of wind that would be required by a vane to go through one period of an oscillation if there were no damping. It is less than the actual "damped" wavelength by a factor √(1 – D²) if D is the damping ratio.
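By way of illustration, the following minimal sketch (in Python) computes the mean speed, the standard deviation of equation 5.1 and a peak gust defined through a running average of duration t0 from a series of wind-speed samples. The function name, the 4 Hz sampling rate and the short artificial record are assumptions made for the example and are not specified by this Guide.

```python
def wind_statistics(samples, sample_rate_hz=4.0, gust_duration_s=3.0):
    """Compute the mean speed, the standard deviation of eq. 5.1 and the
    peak gust, defined as the maximum of a running average of length t0."""
    n = len(samples)
    mean_u = sum(samples) / n
    # Equation 5.1: s_u = sqrt((sum(u_i^2) - (sum(u_i))^2 / n) / n)
    s_u = ((sum(u * u for u in samples) - sum(samples) ** 2 / n) / n) ** 0.5
    # Peak gust: extreme of a running average over t0 seconds of the signal
    window = max(1, int(round(gust_duration_s * sample_rate_hz)))
    gusts = [sum(samples[i:i + window]) / window
             for i in range(n - window + 1)]
    return mean_u, s_u, max(gusts)

# Short artificial record of wind-speed samples (m/s), for illustration only
u = [4.0, 5.2, 6.1, 5.0, 7.8, 9.4, 8.1, 5.5, 4.9, 6.3, 7.0, 5.8]
print(wind_statistics(u))
```

In an operational system the same processing would be applied continuously to the 10 min (or 1 h) blocks of samples discussed above.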

5.1.2 Units and scales

Wind speed should be reported to a resolution of 0.5 m s–1 or in knots (0.515 m s–1) to the nearest unit, and should represent, for synoptic reports, an average over 10 min. Averages over a shorter period are necessary for certain aeronautical purposes (see Part II, Chapter 2).

Wind direction should be reported in degrees to the nearest 10°, using a 01 ... 36 code (for example, code 2 means that the wind direction is between 15 and 25°), and should represent an average over 10 min (see Part II, Chapter 2, for synoptical purposes). Wind direction is defined as the direction from which the wind blows, and is measured clockwise from geographical north, namely, true north.

"Calm" should be reported when the average wind speed is less than 1 kn. The direction in this case is coded as 00.

Wind direction at stations within 1° of the North Pole or 1° of the South Pole should be reported according to Code Table 0878 in WMO (1995). The azimuth ring should be aligned with its zero coinciding with the Greenwich 0° meridian.

There are important differences compared to the synoptic requirement for measuring and reporting wind speed and direction for aeronautical purposes at aerodromes for aircraft take-off and landing (see Part II, Chapter 2). Wind direction should be measured, namely, from the azimuth setting, with respect to true north at all meteorological observing stations. At aerodromes the wind direction must be indicated and reported with respect to magnetic north for aeronautical observations, and with an averaging time of 2 min. Where the wind measurements at aerodromes are disseminated beyond the aerodrome as synoptic reports, the direction must be referenced to true north and have an averaging time of 10 min.
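A minimal sketch of such a conversion to reporting form is given below (in Python). The function name and the handling of the calm and north cases are illustrative assumptions made for the example, not prescriptions of this Guide.

```python
def report_wind(speed_ms, direction_deg):
    """Illustrative conversion of a 10 min mean wind to reporting form:
    speed to the nearest 0.5 m/s and in whole knots, direction to the
    nearest 10 degrees as a 01...36 code (00 used here only for calm)."""
    KNOT = 0.515  # m/s per knot, as used in this chapter
    speed_kn = round(speed_ms / KNOT)
    speed_rounded = round(speed_ms * 2) / 2.0
    if speed_kn < 1:
        dd_code = 0  # calm
    else:
        dd_code = int(round(direction_deg / 10.0)) % 36
        if dd_code == 0:
            dd_code = 36  # a wind from true north is commonly coded 36, not 00
    return speed_rounded, speed_kn, f"{dd_code:02d}"

print(report_wind(7.3, 247))   # -> (7.5, 14, '25')
```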

5.1.3 Meteorological requirements

Wind observations or measurements are required for weather monitoring and forecasting, for wind-load climatology, for the estimation of the probability of wind damage and of wind energy, and as part of the estimation of surface fluxes, for example, evaporation, for air pollution dispersion and for agricultural applications. Performance requirements are given in Part I, Chapter 1, Annex 1.B. An accuracy for horizontal speed of 0.5 m s–1 below 5 m s–1 and better than 10 per cent above 5 m s–1 is usually sufficient. Wind direction should be measured with an accuracy of 5°.

Apart from mean wind speed and direction, many applications require standard deviations and extremes (see section 5.8.2). The required accuracy is easily obtained with modern instrumentation. The most difficult aspect of wind measurement is the exposure of the anemometer. Since it is nearly impossible to find a location where the wind speed is representative of a large area, it is recommended that estimates of exposure errors be made (see section 5.9).

Many applications require information about the gustiness of the wind. Such applications include "nowcasts" for aircraft take-off and landing, wind-load climatology, air pollution dispersion problems and exposure correction. Two variables are suitable for routine reading, namely the standard deviation of wind speed and direction and the 3 s peak gust (see Recommendations 3 and 4 (CIMO-X) (WMO, 1990)).

5.1.4 Methods of measurement and observation

Surface wind is usually measured by a wind vane and a cup or propeller anemometer. When the instrumentation is temporarily out of operation or when it is not provided, the direction and force of the wind may be estimated subjectively (the table below provides wind speed equivalents in common use for estimations).

The instruments and techniques specifically discussed here are only a few of the more convenient ones available and do not comprise a complete list. The references and further reading at the end of this chapter provide a good literature on this subject.

The sensors briefly described below are cup-rotor and propeller anemometers, and direction vanes. Cup and vane, propeller and vane, and propellers alone are common combinations. Other classic sensors, such as the pitot tube, are less used now for routine measurements but can perform satisfactorily, while new types being developed or currently in use as research tools may become practical for routine measurement with advanced technology.

For nearly all applications, it is necessary to measure the averages of wind speed and direction. Many applications also need gustiness data. A wind-measuring system, therefore, consists not only of a sensor, but also of a processing and recording system. The processing takes care of the averaging and the computation of the standard deviations and extremes. In its simplest form, the processing can be done by writing the wind signal with a pen recorder and estimating the mean and extreme by reading the record.


Wind speed equivalents

Beaufort scale number and description: 0 Calm; 1 Light air; 2 Light breeze; 3 Gentle breeze; 4 Moderate breeze; 5 Fresh breeze; 6 Strong breeze. For each Beaufort number the table gives the wind speed equivalent at a standard height of 10 m above open flat ground (in kn, m s–1, km h–1 and mi h–1) together with specifications for estimating the speed over land.
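Where the numeric columns of the table are not at hand, the commonly quoted empirical relation v = 0.836 B^1.5 (v in m/s at 10 m) can be inverted to estimate the Beaufort number. The following sketch (in Python) uses that relation; it is a widely used approximation and not a formula taken from this Guide.

```python
def beaufort_number(speed_ms):
    """Estimate the Beaufort number from a 10 m wind speed using the commonly
    quoted empirical relation v = 0.836 * B**1.5 (v in m/s), inverted and
    rounded to the nearest whole number; result clipped to the range 0-12."""
    b = round((speed_ms / 0.836) ** (2.0 / 3.0))
    return max(0, min(12, b))

descriptions = {0: "Calm", 1: "Light air", 2: "Light breeze", 3: "Gentle breeze",
                4: "Moderate breeze", 5: "Fresh breeze", 6: "Strong breeze"}

b = beaufort_number(9.0)
print(b, descriptions.get(b, ""))   # -> 5 Fresh breeze
```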

where cu = 2.2 and cv = 1.9 and κ = 0.4 for unfiltered measurements of su and sd. In order to apply these relations, it is necessary to select strong-wind cases (U > 4 m s–1) and to average σu/U and/or σθ over all available data per wind sector class (30° wide) and per season (surface roughness depends, for example, on tree foliage). The values of z0u can now be determined with the above equations, where comparison of the results from σu and σθ gives some idea of the accuracy obtained. In cases where no standard deviation information is available, but the maximum gust is determined per wind speed averaging period (either 10 min or 1 h), the ratios of these maximum gusts to the averages in the same period (gust factors) can also be used to determine z0u (Verkaik, 2000). Knowledge of system dynamics, namely, the response length of the sensor and the response time of the recording chain, is required for this approach.

Terrain classification from Davenport (1960) adapted by Wieringa (1980b) in terms of aerodynamic roughness length z0

Class   Short terrain description                              z0 (m)
1       Open sea, fetch at least 5 km                          0.000 2
2       Mud flats, snow; no vegetation, no obstacles           0.005
3       Open flat terrain; grass, few isolated obstacles       0.03
4       Low crops; occasional large obstacles, x/H > 20        0.10
5       High crops; scattered obstacles, 15 < x/H < 20         0.25
6       Parkland, bushes; numerous obstacles, x/H ≈ 10         0.5
7       Regular large obstacle coverage (suburb, forest)       1.0
8       City centre with high- and low-rise buildings          ≥ 2

Note: Here x is a typical upwind obstacle distance and H is the height of the corresponding major obstacles. For more detailed and updated terrain class descriptions see Davenport and others (2000) (see also Part II, Chapter 11, Table 11.2).
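The following deliberately simplified sketch (in Python) only illustrates how the roughness lengths of the table enter a neutral logarithmic wind profile; it assumes equal friction velocity over the local and the reference terrain, and it is not the exposure correction procedure of section 5.9. The function name, the reference values (10 m height, z0 = 0.03 m) and the example numbers are assumptions made for the illustration.

```python
import math

# Davenport-Wieringa roughness lengths (m) from the table above
Z0_BY_CLASS = {1: 0.0002, 2: 0.005, 3: 0.03, 4: 0.10,
               5: 0.25, 6: 0.5, 7: 1.0, 8: 2.0}

def log_profile_scaled_speed(u_meas, z_meas, z0_local, z_ref=10.0, z0_ref=0.03):
    """Scale a measured wind speed to the value a neutral logarithmic profile
    u(z) ~ ln(z/z0) would give at the reference height over reference terrain,
    under the crude assumption of equal friction velocity at both sites."""
    return u_meas * math.log(z_ref / z0_ref) / math.log(z_meas / z0_local)

# Example: 6 m/s measured at 10 m over low crops (terrain class 4)
print(round(log_profile_scaled_speed(6.0, 10.0, Z0_BY_CLASS[4]), 1))
```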


REFERENCES AND FURTHER READING


CHAPTER 6

MEASUREMENT OF PRECIPITATION

6.1 General

This chapter describes the well-known methods of precipitation measurement at ground stations. It does not discuss measurements which attempt to define the structure and character of precipitation, or which require specialized instrumentation, as these are not standard meteorological observations (such as drop size distribution). Radar and satellite measurements, and measurements at sea, are discussed in Part II. Information on precipitation measurements which includes, in particular, more detail on snow cover measurements can also be found in WMO (1992a; 1998).

The general problem of representativeness is particularly acute in the measurement of precipitation. Precipitation measurements are particularly sensitive to exposure, wind and topography, and metadata describing the circumstances of the measurements are particularly important for users of the data.

The analysis of precipitation data is much easier and more reliable if the same gauges and siting criteria are used throughout the networks. This should be a major consideration in designing networks.

6.1.1 Definitions

Precipitation is defined as the liquid or solid products of the condensation of water vapour falling from clouds or deposited from the air onto the ground. It includes rain, hail, snow, dew, rime, hoar frost and fog precipitation. The total amount of precipitation which reaches the ground in a stated period is expressed in terms of the vertical depth of water (or water equivalent in the case of solid forms) that would cover a horizontal projection of the Earth's surface. Snowfall is also expressed by the depth of fresh, newly fallen snow covering an even horizontal surface (see section 6.7).

6.1.2 Units and scales

The unit of precipitation is linear depth, usually in millimetres (volume/area), or kg m–2 (mass/area) for liquid precipitation. Daily amounts of precipitation should be read to the nearest 0.2 mm and, if feasible, to the nearest 0.1 mm; weekly or monthly amounts should be read to the nearest 1 mm (at least). Daily measurements of precipitation should be taken at fixed times common to the entire network or networks of interest. Less than 0.1 mm (0.2 mm in the United States) is generally referred to as a trace. The rate of rainfall (intensity) is similarly expressed in linear measures per unit time, usually millimetres per hour.

Snowfall measurements are taken in units of centimetres and tenths, to the nearest 0.2 cm. Less than 0.2 cm is generally called a trace. The depth of snow on the ground is usually measured daily in whole centimetres.

6.1.3 Meteorological and hydrological requirements

Part I, Chapter 1, Annex 1.B gives a broad statement of the requirements for accuracy, range and resolution for precipitation measurements, and gives 5 per cent as the achievable accuracy (at the 95 per cent confidence level).

The common observation times are hourly, three-hourly and daily, for synoptic, climatological and hydrological purposes. For some purposes, a much greater time resolution is required to measure very high rainfall rates over very short periods. For some applications, storage gauges are used with observation intervals of weeks or months or even a year in mountains and deserts.

6.1.4 Measurement methods

6.1.4.1 Instruments

Precipitation gauges (or raingauges if only liquid precipitation can be measured) are the most common instruments used to measure precipitation. Generally, an open receptacle with vertical sides is used, usually in the form of a right cylinder, with a funnel if its main purpose is to measure rain. Since various sizes and shapes of orifice and gauge heights are used in different countries, the measurements are not strictly comparable (WMO, 1989a). The volume or weight of the catch is measured, the latter in particular for solid precipitation. The gauge orifice may be at one of many specified heights above the ground or at the same level as the surrounding ground. The orifice must be placed above the maximum expected depth of snow cover, and above the height of significant potential in-splashing from the ground. For solid precipitation measurement, the orifice is above the ground and an artificial shield is placed around it. The most commonly used orifice height in more than 100 countries varies between 0.5 and 1.5 m (WMO, 1989a).

The measurement of precipitation is very sensitive to exposure, and in particular to wind. Section 6.2 discusses exposure, while section 6.4 discusses at some length the errors to which precipitation gauges are prone, and the corrections that may be applied.

This chapter also describes some other special techniques for measuring other types of precipitation (dew, ice, and the like) and snow cover. Some new techniques which are appearing in operational use are not described here, for example, the optical raingauge, which makes use of optical scattering. Useful sources of information on new methods under development are the reports of recurrent conferences, such as the international workshops on precipitation measurement (Slovak Hydrometeorological Institute and Swiss Federal Institute of Technology, 1993; WMO, 1989b) and those organized by the Commission for Instruments and Methods of Observation (WMO, 1998).

Point measurements of precipitation serve as the primary source of data for areal analysis. However, even the best measurement of precipitation at one point is only representative of a limited area, the size of which is a function of the length of the accumulation period, the physiographic homogeneity of the region, local topography and the precipitation-producing process. Radar and, more recently, satellites are used to define and quantify the spatial distribution of precipitation. The techniques are described in Part II of this Guide. In principle, a suitable integration of all three sources of areal precipitation data into national precipitation networks (automatic gauges, radar and satellite) can be expected to provide sufficiently accurate areal precipitation estimates on an operational basis for a wide range of precipitation data users.

Instruments that detect and identify precipitation, as distinct from measuring it, may be used as present weather detectors, and are referred to in Part I, Chapter 14.

6.1.4.2 Reference gauges and intercomparisons

Several types of gauges have been used as reference gauges. The main feature of their design is that of reducing or controlling the effect of wind on the catch, which is the main reason for the different behaviours of gauges. They are chosen also to reduce the other errors discussed in section 6.4.

Ground-level gauges are used as reference gauges for liquid precipitation measurement. Because of the absence of wind-induced error, they generally show more precipitation than any elevated gauge (WMO, 1984). The gauge is placed in a pit with the gauge rim at ground level, sufficiently distant from the nearest edge of the pit to avoid in-splashing. A strong plastic or metal anti-splash grid with a central opening for the gauge should span the pit. Provision should be made for draining the pit. Pit gauge drawings are given in WMO (1984).

The reference gauge for solid precipitation is the gauge known as the Double Fence Intercomparison Reference. It has octagonal vertical double fences surrounding a Tretyakov gauge, which itself has a particular form of wind-deflecting shield. Drawings and a description are given by Goodison, Sevruk and Klemm (1989), in WMO (1985), and in the final report of the WMO intercomparison of solid precipitation gauges (WMO, 1998).

Recommendations for comparisons of precipitation gauges against the reference gauges are given in Annex 6.A.1

6.1.4.3 Documentation

The measurement of precipitation is particularly sensitive to gauge exposure, so metadata about the measurements must be recorded meticulously to compile a comprehensive station history, in order to be available for climate and other studies and quality assurance. Section 6.2 discusses the site information that must be kept, namely detailed site descriptions, including vertical angles to significant obstacles around the gauge, gauge configuration, height of the gauge orifice above ground and height of the wind speed measuring instrument above ground.
1 Recommended by the Commission for Instruments and Methods of Observation at its eleventh session (1994).


Changes in observational techniques for precipitation, mainly the use of a different type of precipitation gauge and a change of gauge site or installation height, can cause temporal inhomogeneities in precipitation time series (see Part III, Chapter 2). The use of differing types of gauges and site exposures causes spatial inhomogeneities. This is due to the systematic errors of precipitation measurement, mainly the wind-induced error. Since adjustment techniques based on statistics can remove the inhomogeneities relative to the measurements of surrounding gauges, the correction of precipitation measurements for the wind-induced error can eliminate the bias of measured values of any type of gauge. The following sections (especially section 6.4) on the various instrument types discuss the corrections that may be applied to precipitation measurements. Such corrections have uncertainties, and the original records and the correction formulae should be kept. Any changes in the observation methods should also be documented.

6.2 Siting and exposure

All methods for measuring precipitation should aim to obtain a sample that is representative of the true amount falling over the area which the measurement is intended to represent, whether on the synoptic scale, mesoscale or microscale. The choice of site, as well as the systematic measurement error, is, therefore, important. For a discussion of the effects of the site, see Sevruk and Zahlavova (1994).

The location of precipitation stations within the area of interest is important, because the number and locations of the gauge sites determine how well the measurements represent the actual amount of precipitation falling in the area. Areal representativeness is discussed at length in WMO (1992a), for rain and snow. WMO (1994) gives an introduction to the literature on the calculation of areal precipitation and corrections for topography.

The effects on the wind field of the immediate surroundings of the site can give rise to local excesses and deficiencies in precipitation. In general, objects should not be closer to the gauge than a distance of twice their height above the gauge orifice. For each site, the average vertical angle of obstacles should be estimated, and a site plan should be made. Sites on a slope or the roof of a building should be avoided. Sites selected for measuring snowfall and/or snow cover should be in areas sheltered as much as possible from the wind. The best sites are often found in clearings within forests or orchards, among trees, in scrub or shrub forests, or where other objects act as an effective wind-break for winds from all directions.

Preferably, however, the effects of the wind, and of the site on the wind, can be reduced by using a ground-level gauge for liquid precipitation or by making the air-flow horizontal above the gauge orifice using the following techniques (listed in order of decreasing effectiveness):
(a) In areas with homogeneous dense vegetation; the height of such vegetation should be kept at the same level as the gauge orifice by regular clipping;
(b) In other areas, by simulating the effect in (a) through the use of appropriate fence structures;
(c) By using windshields around the gauge.

The surface surrounding the precipitation gauge can be covered with short grass, gravel or shingle, but hard, flat surfaces, such as concrete, should be avoided to prevent excessive in-splashing.

6.3 Non-recording precipitation gauges

6.3.1 Ordinary gauges

6.3.1.1 Instruments

The commonly used precipitation gauge consists of a collector placed above a funnel leading into a container where the accumulated water and melted snow are stored between observation times. Different gauge shapes are in use worldwide as shown in Figure 6.1. Where solid precipitation is common and substantial, a number of special modifications are used to improve the accuracy of measurements. Such modifications include the removal of the raingauge funnel at the beginning of the snow season or the provision of a special snow fence (see WMO, 1998) to protect the catch from blowing out. Windshields around the gauge reduce the error caused by deformation of the wind field above the gauge and by snow drifting into the gauge. They are advisable for rain and essential for snow. A wide variety of gauges are in use (see WMO, 1989a).

Figure 6.1. Different shapes of standard precipitation gauges. The solid lines show streamlines and the dashed lines show the trajectories of precipitation particles. The first gauge shows the largest wind field deformation above the gauge orifice, and the last gauge the smallest; consequently, the wind-induced error for the first gauge is larger than for the last gauge (Sevruk and Nespor, 1994).

The stored water is either collected in a measure or poured from the container into a measure, or its level in the container is measured directly with a graduated stick. The size of the collector orifice is not critical for liquid precipitation, but an area of at least 200 cm2 is required if solid forms of precipitation are expected in significant quantity. An area of 200 to 500 cm2 will probably be found most convenient.

The most important requirements of a gauge are as follows:
(a) The rim of the collector should have a sharp edge and should fall away vertically on the inside, and be steeply bevelled on the outside; the design of gauges used for measuring snow should be such that any narrowing of the orifice caused by accumulated wet snow about the rim is small;
(b) The area of the orifice should be known to the nearest 0.5 per cent, and the construction should be such that this area remains constant while the gauge is in normal use;
(c) The collector should be designed to prevent rain from splashing in and out. This can be achieved if the vertical wall is sufficiently deep and the slope of the funnel is sufficiently steep (at least 45 per cent). Suitable arrangements are shown in Figure 6.2;
(d) The construction should be such as to minimize wetting errors;
(e) The container should have a narrow entrance and be sufficiently protected from radiation to minimize the loss of water by evaporation.

Precipitation gauges used in locations where only weekly or monthly readings are practicable should be similar in design to the type used for daily measurements, but with a container of larger capacity and stronger construction.
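Whichever way the catch is measured, the reported depth follows directly from the catch and the orifice area. The short sketch below is purely illustrative and is not part of the Guide's specifications; the 200 cm2 orifice and the catch values are assumed for the example only.

```python
def depth_from_volume(volume_cm3, orifice_area_cm2):
    """Precipitation depth in mm from a measured catch volume:
    depth (cm) = volume / orifice area, converted to mm."""
    return 10.0 * volume_cm3 / orifice_area_cm2


def depth_from_mass(mass_kg, orifice_area_m2):
    """Precipitation depth in mm from a weighed catch: for water,
    1 kg m-2 corresponds to 1 mm of depth."""
    return mass_kg / orifice_area_m2


# Assumed example: a 40 cm3 catch in a gauge with a 200 cm2 orifice is 2.0 mm.
print(depth_from_volume(40.0, 200.0))
```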

Figure 6.2. Suitable collectors for raingauges (in-figure annotations: "≥90°"; "These lines must intersect the vertical wall below the rim of the gauge")

The measuring cylinder should be made of clear glass or plastic which has a suitable coefficient of thermal expansion and should be clearly marked to show the size or type of gauge with which it is to be used. Its diameter should be less than 33 per cent of that of the rim of the gauge; the smaller the relative diameter, the greater the precision of measurement. The graduations should be finely engraved; in general, there should be marks at 0.2 mm intervals and clearly figured lines at each whole millimetre. It is also desirable that the line corresponding to 0.1 mm be marked. The maximum error of the graduations should not exceed ±0.05 mm at or above the 2 mm graduation mark and ±0.02 mm below this mark. To measure small precipitation amounts with adequate precision, the inside diameter of the measuring cylinder should taper off at its base. In all measurements, the bottom of the water meniscus should define the water level, and the cylinder should be kept vertical when reading, to avoid parallax errors. Repetition of the main graduation lines on the back of the measure is also helpful for reducing such errors.

Dip-rods should be made of cedar wood, or another suitable material that does not absorb water appreciably and possesses only a small capillary effect. Wooden dip-rods are unsuitable if oil has been added to the collector to suppress evaporation. When this is the case, rods made of metal or other materials from which oil can be readily cleaned must be used. Non-metallic rods should be provided with a brass foot to avoid wear and be graduated according to the relative areas of cross-section of the gauge orifice and the collector; graduations should be marked at least every 10 mm and include an allowance for the displacement caused by the rod itself. The maximum error in the dip-rod graduation should not exceed ±0.5 mm at any point. A dip-rod measurement should be checked using a volumetric measure, wherever possible.

6.3.1.2 Operation

The measuring cylinder must be kept vertical when it is being read, and the observer must be aware of parallax errors. Snow collected in non-recording precipitation gauges should be either weighed or melted immediately after each observation and then measured using a standard graduated measuring cylinder. It is also possible to measure precipitation catch by accurate weighing, a procedure which has several advantages. The total weight of the can and contents is measured and the known weight of the can is subtracted. There is little likelihood of spilling the water and any water adhering to the can is included in the weight. The commonly used methods are, however, simpler and cheaper.

6.3.1.3 Calibration and maintenance

The graduation of the measuring cylinder or stick must, of course, be consistent with the chosen size of the collector. The calibration of the gauge, therefore, includes checking the diameter of the gauge orifice and ensuring that it is within allowable tolerances. It also includes volumetric checks of the measuring cylinder or stick.

Routine maintenance should include, at all times, keeping the gauge level in order to prevent an out-of-level gauge (see Rinehart, 1983; Sevruk, 1984). As required, the outer container of the gauge and the graduate should be kept clean at all times both inside and outside by using a long-handled brush, soapy water and a clean water rinse. Worn, damaged or broken parts should be replaced, as required. The vegetation around the gauge should be kept trimmed to 5 cm (where applicable). The exposure should be checked and recorded.

6.3.2 Storage gauges

Storage gauges are used to measure total seasonal precipitation in remote and sparsely inhabited areas. Such gauges consist of a collector above a funnel, leading into a container that is large enough to store the seasonal catch (or the monthly catch in wet areas). A layer of no less than 5 mm of a suitable oil or other evaporation suppressant should be placed in the container to reduce evaporation (WMO, 1972). This layer should allow the free passage of precipitation into the solution below it.

An antifreeze solution may be placed in the container to convert any snow which falls into the gauge into a liquid state. It is important that the antifreeze solution remain dispersed. A mixture of 37.5 per cent by weight of commercial calcium chloride (78 per cent purity) and 62.5 per cent water makes a satisfactory antifreeze solution. Alternatively, aqueous solutions of ethylene glycol or of an ethylene glycol and methanol mixture can be used. While more expensive, the latter solutions are less corrosive than calcium chloride and give antifreeze protection over a much wider range of dilution resulting from subsequent precipitation. The volume of the solution initially placed in the container should not exceed 33 per cent of the total volume of the gauge.

In some countries, this antifreeze and oil solution is considered toxic waste and, therefore, harmful to the environment. Guidelines for the disposal of toxic substances should be obtained from local environmental protection authorities.

The seasonal precipitation catch is determined by weighing or measuring the volume of the contents of the container (as with ordinary gauges; see section 6.3.1). The amount of oil and antifreeze solution placed in the container at the beginning of the season and any contraction in the case of volumetric measurements must be carefully taken into account. Corrections may be applied as with ordinary gauges.

The operation and maintenance of storage gauges in remote areas pose several problems, such as the capping of the gauge by snow or difficulty in locating the gauge for recording the measurement, and so on, which require specific monitoring. Particular attention should be paid to assessing the quality of data from such gauges.
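By way of illustration only (the volumes and the orifice area below are assumed, not prescribed), the reduction of a storage-gauge reading to a seasonal depth subtracts the initial charge of oil and antifreeze from the measured contents:

```python
def storage_gauge_depth_mm(contents_cm3, initial_charge_cm3, orifice_area_cm2):
    """Seasonal precipitation depth (mm) from a storage gauge: the oil and
    antifreeze charge placed in the container at the start of the season is
    subtracted from the measured contents (any contraction of the solution
    is ignored in this sketch)."""
    catch_cm3 = contents_cm3 - initial_charge_cm3
    return 10.0 * catch_cm3 / orifice_area_cm2


# Assumed example: 9 500 cm3 measured, 2 000 cm3 initial charge, 324 cm2 orifice.
print(round(storage_gauge_depth_mm(9500.0, 2000.0, 324.0), 1))  # about 231.5 mm
```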

6.4 Precipitation gauge errors and corrections

It is convenient to discuss at this point the errors and corrections that apply in some degree to most precipitation gauges, whether they are recording or non-recording gauges. The particular cases of recording gauges are discussed in section 6.5. Comprehensive accounts of errors and corrections can be found in WMO (1982; 1984; 1986; and, specifically for snow, 1998). Details of the models currently used for adjusting raw precipitation data in Canada, Denmark, Finland, the Russian Federation, Switzerland and the United States are given in WMO (1982). WMO (1989a) gives a description of how the errors occur. There are collected conference papers on the topic in WMO (1986; 1989b).

The amount of precipitation measured by commonly used gauges may be less than the actual precipitation reaching the ground by up to 30 per cent or more. Systematic losses will vary by type of precipitation (snow, mixed snow and rain, and rain). The systematic error of solid precipitation measurements is commonly large and may be of an order of magnitude greater than that normally associated with liquid precipitation measurements. For many hydrological purposes it is necessary first to make adjustments to the data in order to allow for the error before making the calculations. The adjustments cannot, of course, be exact (and may even increase the error). Thus, the original data should always be kept as the basic archives both to maintain continuity and to serve as the best base for future improved adjustments if, and when, they become possible.

The true amount of precipitation may be estimated by correcting for some or all of the various error terms listed below:
(a) Error due to systematic wind field deformation above the gauge orifice: typically 2 to 10 per cent for rain and 10 to 50 per cent for snow;
(b) Error due to the wetting loss on the internal walls of the collector;
(c) Error due to the wetting loss in the container when it is emptied: typically 2 to 15 per cent in summer and 1 to 8 per cent in winter, for (b) and (c) together;
(d) Error due to evaporation from the container (most important in hot climates): 0 to 4 per cent;
(e) Error due to blowing and drifting snow;
(f) Error due to the in- and out-splashing of water: 1 to 2 per cent;
(g) Random observational and instrumental errors, including incorrect gauge reading times.

The first six error components are systematic and are listed in order of general importance. The net error due to blowing and drifting snow and to in- and out-splashing of water can be either negative or positive, while net systematic errors due to the wind field and other factors are negative. Since the errors listed as (e) and (f) above are generally difficult to quantify, the general model for adjusting the data from most gauges takes the following form:

Pk = kPc = k (Pg + ΔP1 + ΔP2 + ΔP3)

where Pk is the adjusted precipitation amount; k (see Figure 6.3) is the adjustment factor for the effects of wind field deformation; Pc is the amount of precipitation caught by the gauge collector; Pg is the measured amount of precipitation in the gauge; ΔP1 is the adjustment for the wetting loss on the internal walls of the collector; ΔP2 is the adjustment for wetting loss in the container after emptying; and ΔP3 is the adjustment for evaporation from the container.

Figure 6.3. Conversion factor k, defined as the ratio of "correct" to measured precipitation, for rain (top) and snow (bottom) for two unshielded gauges, as a function of wind speed uhp, intensity i and type of weather situation, according to Nespor and Sevruk (1999). On the left is the German Hellmann manual standard gauge, and on the right the recording, tipping-bucket gauge by Lambrecht. Void symbols in the top diagrams refer to orographic rain, and black ones to showers. Note the different scales for rain and snow. For shielded gauges, k can be reduced to 50 and 70 per cent for snow and mixed precipitation, respectively (WMO, 1998). The heat losses are not considered in the diagrams (in Switzerland they vary with altitude between 10 and 50 per cent of the measured values of fresh snow).

The corrections are applied to daily or monthly totals or, in some practices, to individual precipitation events. In general, the supplementary data needed to make such adjustments include the wind speed at the gauge orifice during precipitation, drop size, precipitation intensity, air temperature and humidity, and the characteristics of the gauge site. Wind speed and precipitation type or intensity may be sufficient variables to determine the corrections. Wind speed alone is sometimes used. At sites where such observations are not made, interpolation between the observations made at adjacent sites may be used for making such adjustments, but with caution, and for monthly rainfall data only.

For most precipitation gauges, wind speed is the most important environmental factor contributing to the under-measurement of solid precipitation. These data must be derived from standard meteorological observations at the site in order to provide daily adjustments. In particular, if wind speed is not measured at gauge orifice height, it can be derived by using a mean wind speed reduction procedure after having knowledge of the roughness of the surrounding surface and the angular height of surrounding obstacles. A suggested scheme is shown in Annex 6.B.2 This scheme is very site-dependent, and estimation requires a good knowledge of the station and gauge location.

2 A wind reduction scheme recommended by the Commission for Instruments and Methods of Observation at its eleventh session (1994).

Shielded gauges catch more precipitation than their unshielded counterparts, especially for solid precipitation. Therefore, gauges should be shielded either naturally (for example, forest clearing) or artificially (for example, Alter, Canadian Nipher type, Tretyakov windshield) to minimize the adverse effect of wind speed on measurements of solid precipitation (refer to WMO, 1994 and 1998, for some information on shield design).

Wetting loss (Sevruk, 1974a) is another cumulative systematic loss from manual gauges which varies with precipitation and gauge type; its magnitude is also a function of the number of times the gauge is emptied. Average wetting loss can be up to 0.2 mm per observation. At synoptic stations where precipitation is measured every 6 h, this can become a very significant loss. In some countries, wetting loss has been calculated to be 15 to 20 per cent of the measured winter precipitation. Correction for wetting loss at the time of observation is a feasible alternative. Wetting loss can be kept low in a well-designed gauge. The internal surfaces should be of a material which can be kept smooth and clean; paint, for example, is unsuitable, but baked enamel is satisfactory. Seams in the construction should be kept to a minimum.

Evaporation losses (Sevruk, 1974b) vary by gauge type, climatic zone and time of year. Evaporation loss is a problem with gauges that do not have a funnel device in the bucket, especially in late spring at mid-latitudes. Losses of over 0.8 mm per day have been reported. Losses during winter are much less than during comparable summer months, ranging from 0.1 to 0.2 mm per day. These losses, however, are cumulative. In a well-designed gauge, only a small water surface is exposed, its ventilation is minimized, and the water temperature is kept low by a reflective outer surface.

It is clear that, in order to achieve data compatibility when using different gauge types and shielding during all weather conditions, corrections to the actual measurements are necessary. In all cases where precipitation measurements are adjusted in an attempt to reduce errors, it is strongly recommended that both the measured and adjusted values be published.
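A minimal sketch of the adjustment model defined above is given below. The value of k and the wetting and evaporation terms are illustrative placeholders only; in practice they would be derived from relationships such as those in Figure 6.3 and Annex 6.B, or from the national procedures described in WMO (1982).

```python
def adjusted_precipitation_mm(p_gauge_mm, k,
                              wetting_collector_mm=0.1,   # dP1 (assumed value)
                              wetting_container_mm=0.1,   # dP2 (assumed value)
                              evaporation_mm=0.0):        # dP3 (assumed value)
    """Pk = k * Pc = k * (Pg + dP1 + dP2 + dP3); all terms in mm for one
    measurement interval. The default loss terms are placeholders only."""
    p_caught = (p_gauge_mm + wetting_collector_mm +
                wetting_container_mm + evaporation_mm)
    return k * p_caught


# Assumed example: 4.6 mm measured in the gauge, k = 1.05 for rain in wind.
print(round(adjusted_precipitation_mm(4.6, k=1.05), 2))  # 5.04 mm
```

In line with the recommendation above, both the measured value Pg and the adjusted value Pk would be archived.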

6.5 Recording precipitation gauges

Recording precipitation automatically has the advantage that it can provide better time resolution than manual measurements, and it is possible to reduce the evaporation and wetting losses. These readings are of course subject to the wind effects discussed in section 6.4.

Three types of automatic precipitation recorders are in general use, namely the weighing-recording type, the tilting or tipping-bucket type, and the float type. Only the weighing type is satisfactory for measuring all kinds of precipitation, the use of the other two types being for the most part limited to the measurement of rainfall. Some new automatic gauges that measure precipitation without using moving parts are available. These gauges use devices such as capacitance probes, pressure transducers, and optical or small radar devices to provide an electronic signal that is proportional to the precipitation equivalent. The clock device that times intervals and dates the time record is a very important component of the recorder.

6.5.1 Weighing-recording gauge

6.5.1.1 Instruments

In these instruments, the weight of a container, together with the precipitation accumulated therein, is recorded continuously, either by means of a spring mechanism or with a system of balance weights. All precipitation, both liquid and solid, is recorded as it falls. This type of gauge normally has no provision for emptying itself; the capacity (namely, the maximum accumulation between recharges) ranges from 150 to 750 mm. The gauges must be maintained to minimize evaporation losses, which can be accomplished by adding sufficient oil or other evaporation suppressants inside the container to form a film over the water surface. Any difficulties arising from oscillation of the balance in strong winds can be reduced with an oil damping mechanism or, if recent work is substantiated, by suitably programming a microprocessor to eliminate this effect on the readings. Such weighing gauges are particularly useful for recording snow, hail, and mixtures of snow and rain, since the solid precipitation does not need to be melted before it can be recorded. For winter operation, the catchment container is charged with an antifreeze solution (see section 6.3.2) to dissolve the solid contents. The amount of antifreeze depends on the expected amount of precipitation and the minimum temperature expected at the time of minimum dilution.

The weight of the catchment container, measured by a calibrated spring, is translated from a vertical to an angular motion through a series of levers or pulleys. This angular motion is then communicated mechanically to a drum or strip chart or digitized through a transducer. The accuracy of these types of gauges is related directly to their measuring and/or recording characteristics, which can vary with manufacturer.

6.5.1.2 Errors and corrections

Except for error due to the wetting loss in the container when it is emptied, weighing-recording gauges are susceptible to all of the other sources of error discussed in section 6.4. It should also be noted that automatic recording gauges alone cannot identify the type of precipitation. A significant problem with this type of gauge is that precipitation, particularly freezing rain or wet snow, can stick to the inside of the gauge orifice and not fall into the bucket until some time later. This severely limits the ability of weighing-recording gauges to provide accurate timing of precipitation events. Another common fault with weighing-type gauges is wind pumping. This usually occurs during high winds when turbulent air currents passing over and around the catchment container cause oscillations in the weighing mechanism. By using programmable data-logging systems, errors associated with such anomalous recordings can be minimized by averaging readings over short time intervals, namely, 1 min. Timing errors in the instrument clock may assign the catch to the wrong period or date.

Some potential errors in manual methods of precipitation measurement can be eliminated or at least minimized by using weighing-recording gauges. Random measurement errors associated with human observer error and certain systematic errors, particularly evaporation and wetting loss, are minimized. In some countries, trace observations are officially given a value of zero, thus resulting in a biased underestimate of the seasonal precipitation total. This problem is minimized with weighing-type gauges, since even very small amounts of precipitation will accumulate over time.

The correction of weighing gauge data on an hourly or daily basis may be more difficult than on longer time periods, such as monthly climatological summaries. Ancillary data from automatic weather stations, such as wind at gauge height, air temperature, present weather or snow depth, will be useful in interpreting and correcting accurately the precipitation measurements from automatic gauges.
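The short-interval averaging mentioned above for suppressing wind pumping can be pictured with the following sketch. It is illustrative only; operational data-logging systems implement such filtering in their own firmware, and the sampling rate and values are assumed.

```python
def one_minute_means(samples_mm, samples_per_minute=10):
    """Average bucket-weight readings (expressed as mm of water equivalent)
    over 1 min blocks to suppress wind-pumping oscillations."""
    means = []
    for start in range(0, len(samples_mm), samples_per_minute):
        block = samples_mm[start:start + samples_per_minute]
        means.append(sum(block) / len(block))
    return means


# Assumed example: readings every 6 s, oscillating around a slowly rising weight.
raw = [10.0, 10.4, 9.7, 10.2, 9.9, 10.3, 9.8, 10.1, 10.0, 10.2,
       10.6, 10.9, 10.4, 10.8, 10.5, 10.7, 10.6, 10.9, 10.5, 10.7]
print(one_minute_means(raw))  # approximately [10.06, 10.66]
```

Differencing successive 1 min means then gives the precipitation accumulated in each minute with much of the oscillation removed.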

6.5.1.3 Calibration and maintenance

Weighing-recording gauges usually have few moving parts and, therefore, should seldom require calibration. Calibration commonly involves the use of a series of weights which, when placed in the bucket or catchment container, provide a predetermined value equivalent to an amount of precipitation. Calibrations should normally be done in a laboratory setting and should follow the manufacturer's instructions.

Routine maintenance should be conducted every three to four months, depending on precipitation conditions at the site. Both the exterior and interior of the gauge should be inspected for loose or broken parts and to ensure that the gauge is level. Any manual read-out should be checked against the removable data record to ensure consistency before removing and annotating the record. The bucket or catchment container should be emptied, inspected, cleaned, if required, and recharged with oil for rainfall-only operation or with antifreeze and oil if solid precipitation is expected (see section 6.3.2). The recording device should be set to zero in order to make maximum use of the gauge range. The tape, chart supply or digital memory as well as the power supply should be checked and replaced, if required. A voltohmmeter may be required to set the gauge output to zero when a data logger is used or to check the power supply of the gauge or recording system. Timing intervals and dates of record must be checked.

6.5.2 Tipping-bucket gauge

The tipping-bucket raingauge is used for measuring accumulated totals and the rate of rainfall, but does not meet the required accuracy because of the large non-linear errors, particularly at high precipitation rates.

6.5.2.1 Instruments

The principle behind the operation of this instrument is simple. A light metal container or bucket divided into two compartments is balanced in unstable equilibrium about a horizontal axis. In its normal position, the bucket rests against one of two stops, which prevents it from tipping over completely. Rain water is conducted from a collector into the uppermost compartment and, after a predetermined amount has entered the compartment, the bucket becomes unstable and tips over to its alternative rest position. The bucket compartments are shaped in such a way that the water is emptied from the lower one. Meanwhile, rain continues to fall into the newly positioned upper compartment. The movement of the bucket as it tips over can be used to operate a relay contact to produce a record consisting of discontinuous steps; the distance between each step on the record represents the time taken for a specified small amount of rain to fall. This amount of rain should not exceed 0.2 mm if detailed records are required.

The bucket takes a small but finite time to tip and, during the first half of its motion, additional rain may enter the compartment that already contains the calculated amount of rainfall. This error can be appreciable during heavy rainfall (250 mm h–1), but it can be controlled. The simplest method is to use a device like a siphon at the foot of the funnel to direct the water to the buckets at a controlled rate. This smoothes out the intensity peaks of very short-period rainfall. Alternatively, a device can be added to accelerate the tipping action; essentially, a small blade is impacted by the water falling from the collector and is used to apply an additional force to the bucket, varying with rainfall intensity.

The tipping-bucket gauge is particularly convenient for automatic weather stations because it lends itself to digital methods. The pulse generated by a contact closure can be monitored by a data logger and totalled over selected periods to provide precipitation amount. It may also be used with a chart recorder.
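As an illustration of such pulse totalling (a sketch only, assuming a 0.2 mm bucket and a 10 min reporting interval), the tip time stamps logged by a data logger can be converted into interval totals:

```python
def totals_per_interval(tip_times_s, interval_s=600, bucket_mm=0.2, duration_s=3600):
    """Convert tipping-bucket contact closures (tip time stamps in seconds from
    the start of the record) into precipitation totals (mm) per reporting interval."""
    n_intervals = duration_s // interval_s
    totals = [0.0] * n_intervals
    for t in tip_times_s:
        index = int(t // interval_s)
        if 0 <= index < n_intervals:
            totals[index] += bucket_mm
    return [round(total, 1) for total in totals]


# Assumed example: eight tips in one hour, reported as 10 min totals (mm).
tips = [35, 180, 410, 1700, 1750, 1790, 1815, 3500]
print(totals_per_interval(tips))  # [0.6, 0.0, 0.6, 0.2, 0.0, 0.2]
```

Dividing each interval total by the interval length gives the corresponding rainfall rate.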

6.5.2.2 Errors and corrections

Since the tipping-bucket raingauge has sources of error which differ somewhat from those of other gauges, special precautions and corrections are advisable. Some sources of error include the following:
(a) The loss of water during the tipping action in heavy rain can be minimized but not eliminated;
(b) With the usual bucket design, the exposed water surface is large in relation to its volume, meaning that appreciable evaporation losses can occur, especially in hot regions. This error may be significant in light rain;
(c) The discontinuous nature of the record may not provide satisfactory data during light drizzle or very light rain. In particular, the time of onset and cessation of precipitation cannot be accurately determined;
(d) Water may adhere to both the walls and the lip of the bucket, resulting in rain residue in the bucket and additional weight to be overcome by the tipping action. Tests on waxed buckets produced a 4 per cent reduction in the volume required to tip the balance compared with non-waxed buckets. Volumetric calibration can change, without adjustment of the calibration screws, by variation of bucket wettability through surface oxidation or contamination by impurities and variations in surface tension;
(e) The stream of water falling from the funnel onto the exposed bucket may cause over-reading, depending on the size, shape and position of the nozzle;
(f) The instrument is particularly prone to bearing friction and to having an improperly balanced bucket because the gauge is not level.

Careful calibration can provide corrections for the systematic parts of these errors. The measurements from tipping-bucket raingauges may be corrected for effects of exposure in the same way as other types of precipitation gauge.

Heating devices can be used to allow for measurements during the cold season, particularly of solid precipitation. However, the performance of heated tipping-bucket gauges has been found to be very poor as a result of large errors due to both wind and evaporation of melting snow. Therefore, these types of gauges are not recommended for use in winter precipitation measurement in regions where temperatures fall below 0°C for prolonged periods.

6.5.2.3 Calibration and maintenance

Calibration of the tipping bucket is usually accomplished by passing a known amount of water through the tipping mechanism at various rates and by adjusting the mechanism to the known volume. This procedure should be followed under laboratory conditions. Owing to the numerous error sources, the collection characteristics and calibration of tipping-bucket raingauges are a complex interaction of many variables. Daily comparisons with the standard raingauge can provide useful correction factors, and this is good practice. The correction factors may vary from station to station. Correction factors are generally greater than 1.0 (under-reading) for low-intensity rain, and less than 1.0 (over-reading) for high-intensity rain. The relationship between the correction factor and intensity is not linear but forms a curve.

Routine maintenance should include cleaning the accumulated dirt and debris from the funnel and buckets, as well as ensuring that the gauge is level. It is highly recommended that the tipping mechanism be replaced with a newly calibrated unit on an annual basis. Timing intervals and dates of records must be checked.
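Such an intensity-dependent correction can be represented, for example, by interpolating in a calibration table established from comparisons with the standard raingauge. The factors in the following sketch are invented placeholders, not recommended values.

```python
# Assumed calibration table: rainfall intensity (mm/h) against correction factor.
CAL_INTENSITY = [2.0, 10.0, 30.0, 60.0, 120.0]
CAL_FACTOR = [1.02, 1.00, 0.98, 0.95, 0.92]


def correction_factor(intensity_mm_h):
    """Piecewise-linear interpolation in the calibration curve; intensities
    outside the table are clamped to its end points."""
    if intensity_mm_h <= CAL_INTENSITY[0]:
        return CAL_FACTOR[0]
    if intensity_mm_h >= CAL_INTENSITY[-1]:
        return CAL_FACTOR[-1]
    for i in range(1, len(CAL_INTENSITY)):
        if intensity_mm_h <= CAL_INTENSITY[i]:
            x0, x1 = CAL_INTENSITY[i - 1], CAL_INTENSITY[i]
            y0, y1 = CAL_FACTOR[i - 1], CAL_FACTOR[i]
            return y0 + (y1 - y0) * (intensity_mm_h - x0) / (x1 - x0)


# Correct a 10 min total of 3.0 mm recorded at an intensity of 18 mm/h.
print(round(3.0 * correction_factor(18.0), 2))  # 2.98 mm
```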

6.5.3 Float gauge

In this type of instrument, the rain passes into a float chamber containing a light float. As the level of the water within the chamber rises, the vertical movement of the float is transmitted, by a suitable mechanism, to the movement of a pen on a chart or a digital transducer. By suitably adjusting the dimensions of the collector orifice, the float and the float chamber, any desired chart scale can be used.

In order to provide a record over a useful period (24 h are normally required) either the float chamber has to be very large (in which case a compressed scale on the chart or other recording medium is obtained), or a mechanism must be provided for emptying the float chamber automatically and quickly whenever it becomes full, so that the chart pen or other indicator returns to zero. Usually a siphoning arrangement is used. The actual siphoning process should begin precisely at the predetermined level with no tendency for the water to dribble over at either the beginning or the end of the siphoning period, which should not be longer than 15 s. In some instruments, the float chamber assembly is mounted on knife edges so that the full chamber overbalances; the surge of the water assists the siphoning process, and, when the chamber is empty, it returns to its original position. Other rain recorders have a forced siphon which operates in less than 5 s. One type of forced siphon has a small chamber that is separate from the main chamber and accommodates the rain that falls during siphoning. This chamber empties into the main chamber when siphoning ceases, thus ensuring a correct record of total rainfall.

A heating device (preferably controlled by a thermostat) should be installed inside the gauge if there is a possibility that water might freeze in the float chamber during the winter. This will prevent damage to the float and float chamber and will enable rain to be recorded during that period. A small heating element or electric lamp is suitable where a mains supply of electricity is available, otherwise other sources of power may be employed. One convenient method uses a short heating strip wound around the collecting chamber and connected to a large-capacity battery. The amount of heat supplied should be kept to the minimum necessary in order to prevent freezing, because the heat may reduce the accuracy of the observations by stimulating vertical air movements above the gauge and increasing evaporation losses. A large undercatch by unshielded heated gauges, caused by the wind and the evaporation of melting snow, has been reported in some countries, as is the case for weighing gauges (see section 6.5.1.2).

Apart from the fact that calibration is performed using a known volume of water, the maintenance procedures for this gauge are similar to those of the weighing-recording gauge (see section 6.5.1.3).

6.6 Measurement of dew, ice accumulation and fog precipitation

6.6.1 Measurement of dew and leaf wetness

The deposition of dew is essentially a nocturnal phenomenon and, although relatively small in amount and locally variable, is of much interest in arid zones; in very arid regions, it may be of the same order of magnitude as the rainfall. The exposure of plant leaves to liquid moisture from dew, fog and precipitation also plays an important role in plant disease, insect activity, and the harvesting and curing of crops.

In order to assess the hydrological contribution of dew, it is necessary to distinguish between dew formed:
(a) As a result of the downward transport of atmospheric moisture condensed on cooled surfaces, known as dew-fall;
(b) By water vapour evaporated from the soil and plants and condensed on cooled surfaces, known as distillation dew;
(c) As water exuded by leaves, known as guttation.

All three forms of dew may contribute simultaneously to the observed dew, although only the first provides additional water to the surface, and the latter usually results in a net loss. A further source of moisture results from fog or cloud droplets being collected by leaves and twigs and reaching the ground by dripping or by stem flow.

The amount of dew deposited on a given surface in a stated period is usually expressed in units of kg m–2 or in millimetres depth of dew. Whenever possible, the amount should be measured to the nearest tenth of a millimetre.

Leaf wetness may be described as light, moderate or heavy, but its most important measures are the time of onset or duration.

A review of the instruments designed for measuring dew and the duration of leaf wetness, as well as a bibliography, is given in WMO (1992b). The following methods for the measurement of leaf wetness are considered.

The amount of dew depends critically on the properties of the surface, such as its radiative properties, size and aspect (horizontal or vertical). It may be measured by exposing a plate or surface, which can be natural or artificial, with known or standardized properties, and assessing the amount of dew by weighing it, visually observing it, or making use of some other quantity such as electrical conductivity. The problem lies in the choice of the surface, because the results obtained instrumentally are not necessarily representative of the dew deposit on the surrounding objects. Empirical relationships between the instrumental measurements and the deposition of dew on a natural surface should, therefore, be established for each particular set of surface and exposure conditions; empirical relationships should also be established to distinguish between the processes of dew formation if that is important for the particular application.

A number of instruments are in use for the direct measurement of the occurrence, amount and duration of leaf wetness and dew. Dew-duration recorders use either elements which themselves change in such a manner as to indicate or record the wetness period, or electrical sensors in which the electrical conductivity of the surface of natural or artificial leaves changes in the presence of water resulting from rain, snow, wet fog or dew. In dew balances, the amount of moisture deposited in the form of precipitation or dew is weighed and recorded. In most instruments providing a continuous trace, it is possible to distinguish between moisture deposits caused by fog, dew or rain by considering the type of trace. The only certain method of measuring net dew-fall by itself is through the use of a very sensitive lysimeter (see Part I, Chapter 10).

In WMO (1992b) two particular electronic instruments for measuring leaf wetness are advocated for development as reference instruments, and various leaf-wetting simulation models are proposed. Some use an energy balance approach (the inverse of evaporation models), while others use correlations. Many of them require micrometeorological measurements. Unfortunately, there is no recognized standard method of measurement to verify them.

6.6.2 Measurement of ice accumulation

Ice can accumulate on surfaces as a result of several phenomena. Ice accumulation from freezing precipitation, often referred to as glaze, is the most dangerous type of icing condition. It may cause extensive damage to trees, shrubs and telephone and power lines, and create hazardous conditions on roads and runways. Hoar frost (commonly called frost) forms when air with a dew-point temperature below freezing is brought to saturation by cooling. Hoar frost is a deposit of interlocking ice crystals formed by direct sublimation on objects, usually of small diameter, such as tree branches, plant stems, leaf edges, wires, poles, and so forth. Rime is a white or milky and opaque granular deposit of ice formed by the rapid freezing of supercooled water drops as they come into contact with an exposed object.


6.6.2.1 Measurement methods

At meteorological stations, the observation of ice accumulation is generally more qualitative than quantitative, primarily due to the lack of a suitable sensor. Ice accretion indicators, usually made of anodized aluminium, are used to observe and report the occurrence of freezing precipitation, frost or rime icing. Observations of ice accumulation can include both the measurement of the dimensions and the weight of the ice deposit as well as a visual description of its appearance. These observations are particularly important in mountainous areas where such accumulation on the windward side of a mountain may exceed the normal precipitation.

A system consisting of rods and stakes with two pairs of parallel wires (one pair oriented north-south and the other east-west) can be used to accumulate ice. The wires may be suspended at any level, and the upper wire of each pair should be removable. At the time of observation, both upper wires are removed, placed in a special container, and taken indoors for melting and weighing of the deposit. The cross-section of the deposit is measured on the permanently fixed lower wires.

Recording instruments are used in some countries for continuous registration of rime. A vertical or horizontal rod, ring or plate is used as the sensor, and the increase in the amount of rime with time is recorded on a chart.

A simple device called an ice-scope is used to determine the appearance and presence of rime and hoar frost on a snow surface. The ice-scope consists of a round plywood disc, 30 cm in diameter, which can be moved up or down and set at any height on a vertical rod fixed in the ground. Normally, the disc is set flush with the snow surface to collect the rime and hoar frost. Rime is also collected on a 20 cm diameter ring fixed on the rod, 20 cm from its upper end. A wire or thread 0.2 to 0.3 mm in diameter, stretched between the ring and the top end of the rod, is used for the observation of rime deposits. If necessary, each sensor can be removed and weighed.
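A simple reduction of such a wire measurement is the mass of the melted deposit per unit length of exposed wire. The sketch below uses assumed figures, and the Guide does not prescribe this particular reporting unit.

```python
def ice_load_kg_per_m(deposit_mass_g, wire_length_cm):
    """Mass of the melted ice deposit per metre of exposed wire."""
    return (deposit_mass_g / 1000.0) / (wire_length_cm / 100.0)


# Assumed example: 180 g of melted deposit from a 50 cm removable wire.
print(ice_load_kg_per_m(180.0, 50.0))  # 0.36 kg per metre of wire
```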

6.6.2.2 Ice on pavements

Sensors have been developed and are in operation to detect and describe ice on roads and runways, and to support warning and maintenance programmes. With a combination of measurements, it is possible to detect dry and wet snow and various forms of ice. One sensor using two electrodes embedded in the road, flush with the surface, measures the electrical conductivity of the surface and readily distinguishes between dry and wet surfaces. A second measurement, of ionic polarizability, determines the ability of the surface to hold an electrical charge; a small charge is passed between a pair of electrodes for a short time, and the same electrodes measure the residual charge, which is higher when there is an electrolyte with free ions, such as salty water. The polarizability and conductivity measurements together can distinguish between dry, moist and wet surfaces, frost, snow, white ice and some de-icing chemicals. However, because the polarizability of the non-crystalline black ice is indistinguishable from water under some conditions, the dangerous black ice state can still not be detected with the two sensors. In at least one system, this problem has been solved by adding a third specialized capacitive measurement which detects the unique structure of black ice.

The above method is a passive technique. There is an active in situ technique that uses either a heating element, or both heating and cooling elements, to melt or freeze any ice or liquid present on the surface. Simultaneous measurements of temperature and of the heat energy involved in the thaw-freeze cycle are used to determine the presence of ice and to estimate the freezing point of the mixture on the surface.

Most in situ systems include a thermometer to measure the road surface temperature. The quality of the measurement depends critically on the mounting (especially the materials) and exposure, and care must be taken to avoid radiation errors.

There are two remote-sensing methods under development which lend themselves to car-mounted systems. The first method is based on the reflection of infrared and microwave radiation at several frequencies (about 3 000 nm and 3 GHz, respectively). The microwave reflections can determine the thickness of the water layer (and hence the risk of aquaplaning), but not the ice condition. Two infrared frequencies can be used to distinguish between dry, wet and icy conditions. It has also been demonstrated that the magnitude of reflected power at wavelengths around 2 000 nm depends on the thickness of the ice layer. The second method applies pattern recognition techniques to the reflection of laser light from the pavement, to distinguish between dry and wet surfaces, and black ice.

I.6–14 6.6.3

ParT I. MEaSurEMENT OF METEOrOlOGICal varIaBlES

Measurement of fog precipitation

Fog consists of minute water droplets suspended in the atmosphere to form a cloud at the Earth's surface. Fog droplets have diameters from about 1 to 40 μm and fall velocities from less than 1 to approximately 5 cm s–1. In fact, the fall speed of fog droplets is so low that, even in light winds, the drops will travel almost horizontally. When fog is present, horizontal visibility is usually less than 5 km; it is rarely observed when the temperature and dewpoint differ by more than 2°C.

Meteorologists are generally more concerned with fog as an obstruction to vision than as a form of precipitation. However, from a hydrological standpoint, some forested high-elevation areas experience frequent episodes of fog as a result of the advection of clouds over the surface of the mountain, where the consideration of precipitation alone may seriously underestimate the water input to the watershed (Stadtmuller and Agudelo, 1990). More recently, the recognition of fog as a water supply source in upland areas (Schemenauer and Cereceda, 1994a) and as a wet deposition pathway (Schemenauer and Cereceda, 1991; Vong, Sigmon and Mueller, 1991) has led to the requirement for standardizing methods and units of measurement. The following methods for the measurement of fog precipitation are considered.

Although there have been a great number of measurements for the collection of fog by trees and various types of collectors over the last century, it is difficult to compare the collection rates quantitatively. The most widely used fog-measuring instrument consists of a vertical wire mesh cylinder centrally fixed on the top of a raingauge in such a way that it is fully exposed to the free flow of the air. The cylinder is 10 cm in diameter and 22 cm in height, and the mesh is 0.2 cm by 0.2 cm (Grunow, 1960). The droplets from the moisture-laden air are deposited on the mesh and drop down into the gauge collector where they are measured or registered in the same way as rainfall. Some problems with this instrument are its small size, the lack of representativeness with respect to vegetation, the storage of water in the small openings in the mesh, and the ability of precipitation to enter directly into the raingauge portion, which confounds the measurement of fog deposition. In addition, the calculation of fog precipitation by simply subtracting the amount of rain in a standard raingauge (Grunow, 1963) from that in the fog collector leads to erroneous results whenever wind is present.

An inexpensive, 1 m² standard fog collector and a standard unit of measurement are proposed by Schemenauer and Cereceda (1994b) to quantify the importance of fog deposition to forested high-elevation areas and to measure the potential collection rates in denuded or desert mountain ranges. The collector consists of a flat panel made of a durable polypropylene mesh and mounted with its base 2 m above the ground. The collector is coupled to a tipping-bucket raingauge to determine deposition rates. When wind speed measurements are taken in conjunction with the fog collector, reasonable estimates of the proportions of fog and rain being deposited on the vertical mesh panel can be made. The output of this collector is litres of water; since the collecting surface area is 1 m², this gives the collection directly in l m–2.
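As a purely illustrative aid (not part of the Guide's specification), the sketch below converts tipping-bucket counts from such a 1 m² collector into a deposition rate in l m–2 h–1; the volume per tip and the observation interval are assumed example values that would in practice come from the gauge calibration.

```python
# Illustrative sketch only: converts tipping-bucket tips recorded from a 1 m^2
# standard fog collector into a deposition rate in litres per square metre per
# hour. The litres-per-tip value is an assumed calibration figure.

def fog_deposition_rate(tips: int, litres_per_tip: float = 0.2,
                        collector_area_m2: float = 1.0,
                        interval_hours: float = 1.0) -> float:
    """Return fog water deposition in l m-2 h-1 for one observation interval."""
    volume_litres = tips * litres_per_tip
    return volume_litres / (collector_area_m2 * interval_hours)

# Example: 37 tips in one hour on a 1 m^2 collector
print(fog_deposition_rate(37))  # 7.4 l m-2 h-1
```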

6.7 Measurement of snowfall and snow cover

The authoritative texts on this topic are WMO (1994) and WMO (1992a), which cover the hydrological aspects, including the procedures for snow surveying on snow courses. The following is a brief account of some simple and well-known methods, and a brief review of the instrumentation.

Snowfall is the depth of freshly fallen snow deposited over a specified period (generally 24 h). Thus, snowfall does not include the deposition of drifting or blowing snow. For the purposes of depth measurements, the term "snow" should also include ice pellets, glaze, hail, and sheet ice formed directly or indirectly from precipitation. Snow depth usually means the total depth of snow on the ground at the time of observation. The water equivalent of a snow cover is the vertical depth of the water that would be obtained by melting the snow cover.

6.7.1 Snowfall depth

Direct measurements of the depth of fresh snow on open ground are taken with a graduated ruler or scale. A sufficient number of vertical measurements should be made in places where drifting is considered absent in order to provide a representative average. Where the extensive drifting of snow has occurred, a greater number of measurements are needed to obtain a representative depth. Special precautions should be taken so as not to measure


any previously fallen snow. This can be done by sweeping a suitable patch clear beforehand or by covering the top of the old snow surface with a piece of suitable material (such as wood with a slightly rough surface, painted white) and measuring the depth accumulated on it. On a sloping surface (to be avoided, if possible) measurements should still be taken with the measuring rod vertical. If there is a layer of old snow, it would be incorrect to calculate the depth of the new snow from the difference between two consecutive measurements of the total depth of snow, since lying snow tends to become compressed and to suffer ablation.

6.7.2 Direct measurements of snow cover depth

Depth measurements of snow cover or snow accumulated on the ground are taken with a snow ruler or similar graduated rod which is pushed down through the snow to the ground surface. It may be difficult to obtain representative depth measurements using this method in open areas, since the snow cover drifts and is redistributed under the effects of the wind, and may have embedded ice layers that limit penetration with a ruler. Care should be taken to ensure that the total depth is measured, including the depth of any ice layers which may be present. A number of measurements are taken and averaged at each observing station.

A number of snow stakes, painted with rings of alternate colours or another suitable scale, provide a convenient means of measuring the total depth of snow on the ground, especially in remote regions. The depth of snow at the stake or marker may be observed from distant ground points or from aircraft by means of binoculars or telescopes. The stakes should be painted white to minimize the undue melting of the snow immediately surrounding them. Aerial snow depth markers are vertical poles (of variable length, depending on the maximum snow depth) with horizontal cross-arms mounted at fixed heights on the poles and oriented according to the point of observation.

The development of an inexpensive ultrasonic ranging device to provide reliable snow depth measurements at automatic stations has provided a feasible alternative to the standard observation, both for snow depth and fresh snowfall (Goodison and others, 1988). This sensor can be utilized to control the quality of automatic recording gauge measurements by providing additional details on the type, amount and timing of precipitation. It is capable of an uncertainty of ±2.5 cm.

6.7.3 Direct measurements of snow water equivalent

The standard method of measuring water equivalent is by gravimetric measurement using a snow tube to obtain a sample core. This method serves as the basis for snow surveys, a common procedure in many countries for obtaining a measure of water equivalent. The method consists of either melting each sample and measuring its liquid content or weighing the frozen sample. A measured quantity of warm water or a heat source can be used to melt the sample.

Cylindrical samples of fresh snow may be taken with a suitable snow sampler and either weighed or melted. Details of the available instruments and sampling techniques are described in WMO (1994). Often a standard raingauge overflow can be used for this method.

Snowgauges measure snowfall water equivalent directly. Essentially, any non-recording precipitation gauge can also be used to measure the water equivalent of solid precipitation. Snow collected in these types of gauges should be either weighed or melted immediately after each observation, as described in section 6.3.1.2. The recording-weighing gauge will catch solid forms of precipitation as well as liquid forms, and record the water equivalent in the same manner as liquid forms (see section 6.5.1).

The water equivalent of solid precipitation can also be estimated using the depth of fresh snowfall. This measurement is converted to water equivalent by using an appropriate specific density. Although the relationship stating that 1 cm of fresh snow equals the equivalent of 1 mm of water may be used with caution for long-term average values, it may be highly inaccurate for a single measurement, as the specific density of snow may vary between 0.03 and 0.4.
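A minimal sketch of the depth-to-water-equivalent conversion described above is given below; the density used in the example is an assumed illustrative value, since, as noted, the specific density of fresh snow can range from about 0.03 to 0.4.

```python
# Illustrative sketch: water equivalent of fresh snowfall from its depth and an
# assumed specific (relative) density. Density is dimensionless; depth in cm;
# result in mm of water equivalent.

def snow_water_equivalent_mm(depth_cm: float, specific_density: float) -> float:
    """Return the water equivalent (mm) of a fresh snowfall layer."""
    depth_mm = depth_cm * 10.0
    return depth_mm * specific_density

# Example: 12 cm of fresh snow with an assumed density of 0.25 gives 30 mm of
# water equivalent, versus 12 mm from the rough "1 cm of snow = 1 mm of water"
# rule of thumb (which corresponds to a density of 0.1).
print(snow_water_equivalent_mm(12.0, 0.25))  # 30.0
```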

6.7.4 Snow pillows

Snow pillows of various dimensions and materials are used to measure the weight of the snow that accumulates on the pillow. The most common pillows are flat circular containers (with a diameter of 3.7 m) made of rubberized material and filled with an antifreeze mixture of methyl alcohol and water or a methanol-glycol-water solution. The pillow is installed on the surface of the ground, flush with the ground, or buried under a thin layer of soil or sand. In order to prevent damage to the equipment and to preserve the snow cover in its natural condition, it is recommended that the site be fenced in. Under normal conditions, snow pillows can be used for 10 years or more.

Hydrostatic pressure inside the pillow is a measure of the weight of the snow on the pillow. Measuring the hydrostatic pressure by means of a float-operated liquid-level recorder or a pressure transducer provides a method of continuous measurement of the water equivalent of the snow cover.

Variations in the accuracy of the measurements may be induced by temperature changes. In shallow snow cover, diurnal temperature changes may cause expansion or contraction of the fluid in the pillow, thus giving spurious indications of snowfall or snow melt. In deep mountain areas, diurnal temperature fluctuations are unimportant, except at the beginning and end of the snow season. The access tube to the measurement unit should be installed in a temperature-controlled shelter or in the ground to reduce the temperature effects.

In situ and/or telemetry data-acquisition systems can be installed to provide continuous measurements of snow water equivalent through the use of charts or digital recorders. Snow pillow measurements differ from those taken with standard snow tubes, especially during the snow-melt period. They are most reliable when the snow cover does not contain ice layers, which can cause "bridging" above the pillows. A comparison of the water equivalent of snow determined by a snow pillow with measurements taken by the standard method of weighing shows that these may differ by 5 to 10 per cent.
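Because the pillow senses the hydrostatic pressure of the overlying snow, the water equivalent follows from the standard hydrostatic relation; the short sketch below illustrates this conversion and is not an operational algorithm from this Guide.

```python
# Illustrative sketch: snow water equivalent from the hydrostatic pressure
# sensed by a snow pillow. Pressure P = rho_w * g * h_w, so the equivalent
# water depth is h_w = P / (rho_w * g). Input in pascals, output in mm.

RHO_WATER = 1000.0   # kg m-3
G = 9.81             # m s-2

def pillow_swe_mm(pressure_pa: float) -> float:
    """Return snow water equivalent (mm) from pillow pressure (Pa)."""
    depth_m = pressure_pa / (RHO_WATER * G)
    return depth_m * 1000.0

# Example: a transducer reading of 2.45 kPa corresponds to about 250 mm w.e.
print(round(pillow_swe_mm(2450.0)))  # ~250
```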

6.7.5 Radioisotope snowgauges

Nuclear gauges measure the total water equivalent of the snow cover and/or provide a density profile. They are a non-destructive method of sampling and are adaptable to in situ recording and/or telemetry systems. Nearly all systems operate on the principle that water, snow or ice attenuates radiation. As with other methods of point measurement, siting in a representative location is critical for interpreting and applying point measurements as areal indices.

The gauges used to measure total water content consist of a radiation detector and a source, which is either natural or artificial. One part (for example, the detector/source) of the system is located at the base of the snowpack, and the other at a height greater than the maximum expected snow depth. As snow accumulates, the count rate decreases in proportion to the water equivalent of the snowpack. Systems using an artificial source of radiation are used at fixed locations to obtain measurements only for that site. A system using naturally occurring uranium as a ring source around a single pole detector has been successfully used to measure packs of up to 500 mm of water equivalent, or a depth of 150 cm.

A profiling radioactive snowgauge at a fixed location provides data on total snow water equivalent and density and permits an accurate study of the water movements and density changes that occur with time in a snowpack (Armstrong, 1976). A profiling gauge consists of two parallel vertical access tubes, spaced approximately 66 cm apart, which extend from a cement base in the ground to a height above the maximum expected depth of snow. A gamma-ray source is suspended in one tube, and a scintillation gamma-ray detector, attached to a photomultiplier tube, in the other. The source and detector are set at equal depths within the snow cover and a measurement is taken. Vertical density profiles of the snow cover are obtained by taking measurements at depth increments of about 2 cm. A portable gauge (Young, 1976) which measures the density of the snow cover by backscatter, rather than transmission, of the gamma rays offers a practical alternative to digging deep snow pits, while instrument portability makes it possible to assess areal variations of density and water equivalent.

6.7.6 Natural gamma radiation

The method of gamma radiation snow surveying is based on the attenuation by snow of gamma radiation emanating from natural radioactive elements in the top layer of the soil. The greater the water equivalent of the snow, the more the radiation is attenuated. Terrestrial gamma surveys can consist of a point measurement at a remote location, a series of point measurements, or a selected traverse over a region (Loijens, 1975). The method can also be used on aircraft. The equipment includes a portable gamma-ray spectrometer that utilizes a small scintillation crystal to measure the rays in a wide spectrum and in three spectral windows (namely, potassium, uranium and thorium emissions). With this method, measurements of


gamma levels are required at the point, or along the traverse, prior to snow cover. In order to obtain absolute estimates of the snow water equivalent, it is necessary to correct the readings for soil moisture changes in the upper 10 to 20 cm of soil, for variations in background radiation resulting from cosmic rays, for instrument drift, and for the washout of radon gas (which is a source of gamma radiation) in precipitation with subsequent build-up in the soil or snow. Also, in order to determine the relationship between spectrometer count rates and

water equivalent, supplementary snow water equivalent measurements are initially required. Snow tube measurements are the common reference standard. The natural gamma method can be used for snowpacks which have up to 300 mm water equivalent; with appropriate corrections, its precision is ±20 mm. The advantage of this method over the use of artificial radiation sources is the absence of a radiation risk.
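The attenuation principle described above is often modelled, for illustration, as an exponential decrease of the count rate with the water equivalent traversed. The sketch below assumes such an exponential form and an arbitrary calibration constant; in practice the relationship must be established with the supplementary snow tube measurements and the corrections described in this section.

```python
# Illustrative sketch (not a Guide procedure): retrieving snow water equivalent
# from the attenuation of gamma counts, assuming a simple exponential model
#   N = N0 * exp(-swe_mm / LAMBDA_MM)
# where N0 is the snow-free count rate measured before the snow season and
# LAMBDA_MM is an empirically calibrated attenuation length (assumed here).

import math

LAMBDA_MM = 120.0  # assumed calibration constant, in mm of water equivalent

def swe_from_counts(count_rate: float, snow_free_rate: float) -> float:
    """Return snow water equivalent (mm) implied by the attenuated count rate."""
    if count_rate <= 0 or count_rate > snow_free_rate:
        raise ValueError("count rate must be positive and not exceed the snow-free rate")
    return LAMBDA_MM * math.log(snow_free_rate / count_rate)

# Example: counts falling to 60 per cent of the autumn (snow-free) rate
print(round(swe_from_counts(600.0, 1000.0)))  # ~61 mm
```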


ANNEX 6.A PRECIPITATION INTERCOMPARISON SITES

The Commission for Instruments and Methods of Observation, at its eleventh session, held in 1994, made the following statement regarding precipitation intercomparison sites:

The Commission recognized the benefits of national precipitation sites or centres where past, current and future instruments and methods of observation for precipitation can be assessed on an ongoing basis at evaluation stations. These stations should:
(a) Operate the WMO recommended gauge configurations for rain (pit gauge) and snow (Double Fence Intercomparison Reference (DFIR)). Installation and operation will follow specifications of the WMO precipitation intercomparisons. A DFIR installation is not required when only rain is observed;
(b) Operate past, current and new types of operational precipitation gauges or other methods of observation according to standard operating procedures and evaluate the accuracy and performance against WMO recommended reference instruments;
(c) Take auxiliary meteorological measurements which will allow the development and tests for the application of precipitation correction procedures;
(d) Provide quality control of data and archive all precipitation intercomparison data, including the related meteorological observations and the metadata, in a readily acceptable format, preferably digital;
(e) Operate continuously for a minimum of 10 years;
(f) Test all precipitation correction procedures available (especially those outlined in the final reports of the WMO intercomparisons) on the measurement of rain and solid precipitation;
(g) Facilitate the conduct of research studies on precipitation measurements.
It is not expected that the centres provide calibration or verification of instruments. They should make recommendations on national observation standards and should assess the impact of changes in observational methods on the homogeneity of precipitation time series in the region. The site would provide a reference standard for calibrating and validating radar or remote-sensing observations of precipitation.


ANNEX 6.B SUGGESTED CORRECTION PROCEDURES FOR PRECIPITATION MEASUREMENTS
The Commission for Instruments and Methods of Observation, at its eleventh session, held in 1994, made the following statement regarding the correction procedures for precipitation measurements:

The correction methods are based on simplified physical concepts as presented in the Instruments Development Inquiry (Instruments and Observing Methods Report No. 24, WMO/TD-No. 231). They depend on the type of precipitation gauge applied. The effect of wind on a particular type of gauge has been assessed by using intercomparison measurements with the WMO reference gauges (the pit gauge for rain and the Double Fence Intercomparison Reference (DFIR) for snow), as is shown in the International Comparison of National Precipitation Gauges with a Reference Pit Gauge (Instruments and Observing Methods Report No. 17, WMO/TD-No. 38) and by the preliminary results of the WMO Solid Precipitation Measurement Intercomparison.

The reduction of wind speed to the level of the gauge orifice should be made according to the following formula (a simple worked sketch is given after the table below):

uhp = [log (h/z0) / log (H/z0)] · (1 – 0.024α) · uH

where uhp is the wind speed at the level of the gauge orifice; h is the height of the gauge orifice above ground; z0 is the roughness length (0.01 m for winter and 0.03 m for summer); H is the height of the wind speed measuring instrument above ground; uH is the wind speed measured at the height H above ground; and α is the average vertical angle of obstacles around the gauge.

The latter depends on the exposure of the gauge site and can be based either on the average value of direct measurements, on one of the eight main directions of the wind rose of the vertical angle of obstacles (in 360°) around the gauge, or on the classification of the exposure using metadata as stored in the archives of Meteorological Services. The classes are as follows:
Class | Angle | Description
Exposed site | 0–5 | Only a few small obstacles such as bushes, group of trees, a house
Mainly exposed site | 6–12 | Small groups of trees or bushes or one or two houses
Mainly protected site | 13–19 | Parks, forest edges, village centres, farms, group of houses, yards
Protected site | 20–26 | Young forest, small forest clearing, park with big trees, city centres, closed deep valleys, strongly rugged terrain, leeward of big hills
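As flagged above, the following is a minimal sketch of the wind speed reduction formula, using the roughness lengths quoted in this annex and an obstacle angle taken, for example, from the exposure classes in the table; since the formula involves a ratio of logarithms, the base of the logarithm cancels and natural logarithms are used here. It is an illustration, not an operational correction routine.

```python
# Minimal sketch of the reduction of wind speed to gauge-orifice level:
#   uhp = [log(h/z0) / log(H/z0)] * (1 - 0.024*alpha) * uH
# h and H in metres; alpha is the average vertical angle (degrees) of obstacles
# around the gauge, e.g. taken from the exposure classes tabulated above.

import math

def wind_at_gauge_orifice(u_H: float, h: float, H: float,
                          alpha_deg: float, z0: float = 0.03) -> float:
    """Wind speed at gauge-orifice height (same units as u_H).

    z0 defaults to the summer roughness length (0.03 m); use 0.01 m in winter.
    """
    return (math.log(h / z0) / math.log(H / z0)) * (1.0 - 0.024 * alpha_deg) * u_H

# Example: anemometer at 10 m reads 5 m/s, gauge orifice at 1 m, and a
# "mainly protected site" with an average obstacle angle of about 16 degrees.
print(round(wind_at_gauge_orifice(5.0, h=1.0, H=10.0, alpha_deg=16.0), 2))
```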

Wetting losses occur with the moistening of the inner walls of the precipitation gauge. They depend on the shape and the material of the gauge, as well as on the type and frequency of precipitation. For example, for the Hellmann gauge they amount to an average of 0.3 mm on a rainy and 0.15 mm on a snowy day; the respective values for the Tretyakov gauge are 0.2 mm and 0.1 mm. Information on wetting losses for other types of gauges can be found in Methods of Correction for Systematic Error in Point Precipitation Measurement for Operational Use (WMO-No. 589).
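For illustration only, the snippet below shows how the average wetting-loss values quoted above could be added to a daily gauge total; it assumes one significant wetting per precipitation day, which is a simplification.

```python
# Illustrative sketch: adding the average wetting loss to a daily gauge total.
# Values (mm per day with precipitation) are those quoted in this annex.

WETTING_LOSS_MM = {
    ("Hellmann", "rain"): 0.3,
    ("Hellmann", "snow"): 0.15,
    ("Tretyakov", "rain"): 0.2,
    ("Tretyakov", "snow"): 0.1,
}

def correct_for_wetting(measured_mm: float, gauge: str, phase: str) -> float:
    """Return the daily total increased by the average wetting loss."""
    return measured_mm + WETTING_LOSS_MM[(gauge, phase)]

print(correct_for_wetting(4.2, "Hellmann", "rain"))  # 4.5 mm
```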


REFERENCES AND FURTHER READING

Armstrong, R.L., 1976: The application of isotopic profiling snow-gauge data to avalanche research. Proceedings of the Forty-fourth Annual Western Snow Conference, Atmospheric Environment Service, Canada, pp. 12–19.
Goodison, B.E., J.R. Metcalfe, R.A. Wilson and K. Jones, 1988: The Canadian automatic snow depth sensor: A performance update. Proceedings of the Fifty-sixth Annual Western Snow Conference, Atmospheric Environment Service, Canada, pp. 178–181.
Goodison, B.E., B. Sevruk and S. Klemm, 1989: WMO solid precipitation measurement intercomparison: Objectives, methodology and analysis. In: International Association of Hydrological Sciences, 1989: Atmospheric deposition. Proceedings, Baltimore Symposium (May 1989), IAHS Publication No. 179, Wallingford.
Grunow, J., 1960: The productiveness of fog precipitation in relation to the cloud droplet spectrum. In: American Geophysical Union, 1960, Physics of precipitation. Geophysical Monograph No. 5, Proceedings of the Cloud Physics Conference (3–5 June 1959, Woods Hole, Massachusetts), Publication No. 746, pp. 110–117.
Grunow, J., 1963: Weltweite Messungen des Nebelniederschlags nach der Hohenpeissenberger Methode. In: International Union of Geodesy and Geophysics, General Assembly (Berkeley, California, 19–31 August 1963), International Association of Scientific Hydrology Publication No. 65, 1964, pp. 324–342.
Loijens, H.S., 1975: Measurements of snow water equivalent and soil moisture by natural gamma radiation. Proceedings of the Canadian Hydrological Symposium-75 (11–14 August 1975, Winnipeg), pp. 43–50.
Nespor, V. and B. Sevruk, 1999: Estimation of wind-induced error of rainfall gauge measurements using a numerical simulation. Journal of Atmospheric and Oceanic Technology, Volume 16, Number 4, pp. 450–464.
Rinehart, R.E., 1983: Out-of-level instruments: Errors in hydrometeor spectra and precipitation measurements. Journal of Climate and Applied Meteorology, 22, pp. 1404–1410.
Schemenauer, R.S. and P. Cereceda, 1991: Fog water collection in arid coastal locations. Ambio, Volume 20, Number 7, pp. 303–308.

Schemenauer, R.S. and P. Cereceda, 1994a: Fog collection's role in water planning for developing countries. Natural Resources Forum, Volume 18, Number 2, pp. 91–100.
Schemenauer, R.S. and P. Cereceda, 1994b: A proposed standard fog collector for use in high-elevation regions. Journal of Applied Meteorology, Volume 33, Number 11, pp. 1313–1322.
Sevruk, B., 1974a: Correction for the wetting loss of a Hellman precipitation gauge. Hydrological Sciences Bulletin, Volume 19, Number 4, pp. 549–559.
Sevruk, B., 1974b: Evaporation losses from containers of Hellman precipitation gauges. Hydrological Sciences Bulletin, Volume 19, Number 2, pp. 231–236.
Sevruk, B., 1984: Comments on "Out-of-level instruments: Errors in hydrometeor spectra and precipitation measurements". Journal of Climate and Applied Meteorology, 23, pp. 988–989.
Sevruk, B. and V. Nespor, 1994: The effect of dimensions and shape of precipitation gauges on the wind-induced error. In: M. Desbois and F. Desalmand (eds.): Global Precipitation and Climate Change, NATO ASI Series, I26, Springer Verlag, Berlin, pp. 231–246.
Sevruk, B. and L. Zahlavova, 1994: Classification system of precipitation gauge site exposure: Evaluation and application. International Journal of Climatology, 14(b), pp. 681–689.
Slovak Hydrometeorological Institute and Swiss Federal Institute of Technology, 1993: Precipitation measurement and quality control. Proceedings of the International Symposium on Precipitation and Evaporation (B. Sevruk and M. Lapin, eds) (Bratislava, 20–24 September 1993), Volume I, Bratislava and Zurich.
Smith, J.L., H.G. Halverson and R.A. Jones, 1972: Central Sierra Profiling Snowgauge: A Guide to Fabrication and Operation. USAEC Report TID25986, National Technical Information Service, U.S. Department of Commerce, Washington DC.
Stadtmuller, T. and N. Agudelo, 1990: Amount and variability of cloud moisture input in a tropical cloud forest. In: Proceedings of the Lausanne Symposia (August/November), IAHS Publication No. 193, Wallingford.


Vong, R.J., J.T. Sigmon and S.F. Mueller, 1991: Cloud water deposition to Appalachian forests. Environmental Science and Technology, 25(b), pp. 1014–1021.
World Meteorological Organization, 1972: Evaporation losses from storage gauges (B. Sevruk). Distribution of Precipitation in Mountainous Areas, Geilo Symposium (Norway, 31 July–5 August 1972), Volume II, technical papers, WMO-No. 326, Geneva, pp. 96–102.
World Meteorological Organization, 1982: Methods of Correction for Systematic Error in Point Precipitation Measurement for Operational Use (B. Sevruk). Operational Hydrology Report No. 21, WMO-No. 589, Geneva.
World Meteorological Organization, 1984: International Comparison of National Precipitation Gauges with a Reference Pit Gauge (B. Sevruk and W.R. Hamon). Instruments and Observing Methods Report No. 17, WMO/TD-No. 38, Geneva.
World Meteorological Organization, 1985: International Organizing Committee for the WMO Solid Precipitation Measurement Intercomparison. Final report of the first session (distributed to participants only), Geneva.
World Meteorological Organization, 1986: Papers Presented at the Workshop on the Correction of Precipitation Measurements (B. Sevruk, ed.) (Zurich, Switzerland, 1–3 April 1985). Instruments and Observing Methods Report No. 25, WMO/TD-No. 104, Geneva.

World Meteorological Organization, 1989a: Catalogue of National Standard Precipitation Gauges (B. Sevruk and S. Klemm). Instruments and Observing Methods Report No. 39, WMO/TD-No. 313, Geneva.
World Meteorological Organization, 1989b: International Workshop on Precipitation Measurements (B. Sevruk, ed.) (St Moritz, Switzerland, 3–7 December 1989). Instruments and Observing Methods Report No. 48, WMO/TD-No. 328, Geneva.
World Meteorological Organization, 1992a: Snow Cover Measurements and Areal Assessment of Precipitation and Soil Moisture (B. Sevruk, ed.). Operational Hydrology Report No. 35, WMO-No. 749, Geneva.
World Meteorological Organization, 1992b: Report on the Measurement of Leaf Wetness (R.R. Getz). Agricultural Meteorology Report No. 38, WMO/TD-No. 478, Geneva.
World Meteorological Organization, 1994: Guide to Hydrological Practices. Fifth edition, WMO-No. 168, Geneva.
World Meteorological Organization, 1998: WMO Solid Precipitation Measurement Intercomparison: Final Report (B.E. Goodison, P.Y.T. Louie and D. Yang). Instruments and Observing Methods Report No. 67, WMO/TD-No. 872, Geneva.
Young, G.J., 1976: A portable profiling snow-gauge: Results of field tests on glaciers. Proceedings of the Forty-fourth Annual Western Snow Conference, Atmospheric Environment Service, Canada, pp. 7–11.

CHAPTER 7. MEASUREMENT OF RADIATION

7.1 General

The various fluxes of radiation to and from the Earth's surface are among the most important variables in the heat economy of the Earth as a whole and at any individual place at the Earth's surface or in the atmosphere. Radiation measurements are used for the following purposes:
(a) To study the transformation of energy within the Earth-atmosphere system and its variation in time and space;
(b) To analyse the properties and distribution of the atmosphere with regard to its constituents, such as aerosols, water vapour, ozone, and so on;
(c) To study the distribution and variations of incoming, outgoing and net radiation;
(d) To satisfy the needs of biological, medical, agricultural, architectural and industrial activities with respect to radiation;
(e) To verify satellite radiation measurements and algorithms.
Such applications require a widely distributed regular series of records of solar and terrestrial surface radiation components and the derivation of representative measures of the net radiation. In addition to the publication of serial values for individual observing stations, an essential objective must be the production of comprehensive radiation climatologies, whereby the daily and seasonal variations of the various radiation constituents of the general thermal budget may be more precisely evaluated and their relationships with other meteorological elements better understood.

A very useful account of radiation measurements and the operation and design of networks of radiation stations is contained in WMO (1986a). This manual describes the scientific principles of the measurements and gives advice on quality assurance, which is most important for radiation measurements. The Baseline Surface Radiation Network (BSRN) Operations Manual (WMO, 1998) gives an overview of the latest state of radiation measurements.

Following normal practice in this field, errors and uncertainties are expressed in this chapter as a 66 per cent confidence interval of the difference from the true quantity, which is similar to a standard

deviation of the population of values. Where needed, specific uncertainty confidence intervals are indicated and uncertainties are estimated using the International Organization for Standardization method (ISO, 1995). For example, 95 per cent uncertainty implies that the stated uncertainty is for a confidence interval of 95 per cent.

7.1.1 Definitions

Annex 7.A contains the nomenclature of radiometric and photometric quantities. It is based on definitions recommended by the International Radiation Commission of the International Association of Meteorology and Atmospheric Sciences and by the International Commission on Illumination (ICI). Annex 7.B gives the meteorological radiation quantities, symbols and definitions.

Radiation quantities may be classified into two groups according to their origin, namely solar and terrestrial radiation. In the context of this chapter, "radiation" can imply a process or apply to multiple quantities. For example, "solar radiation" could mean solar energy, solar exposure or solar irradiance (see Annex 7.B).

Solar energy is the electromagnetic energy emitted by the sun. The solar radiation incident on the top of the terrestrial atmosphere is called extra-terrestrial solar radiation; the 97 per cent of it that is confined to the spectral range 290 to 3 000 nm is called solar (or sometimes shortwave) radiation. Part of the extra-terrestrial solar radiation penetrates through the atmosphere to the Earth's surface, while part of it is scattered and/or absorbed by the gas molecules, aerosol particles, cloud droplets and cloud crystals in the atmosphere.

Terrestrial radiation is the long-wave electromagnetic energy emitted by the Earth's surface and by the gases, aerosols and clouds of the atmosphere; it is also partly absorbed within the atmosphere. For a temperature of 300 K, 99.99 per cent of the power of the terrestrial radiation has a wavelength longer than 3 000 nm and about 99 per cent longer than 5 000 nm. For lower temperatures, the spectrum is shifted to longer wavelengths.


Since the spectral distributions of solar and terrestrial radiation overlap very little, they can very often be treated separately in measurements and computations. In meteorology, the sum of both types is called total radiation.

Light is the radiation visible to the human eye. The spectral range of visible radiation is defined by the spectral luminous efficiency for the standard observer. The lower limit is taken to be between 360 and 400 nm, and the upper limit between 760 and 830 nm (ICI, 1987). Thus, 99 per cent of the visible radiation lies between 400 and 730 nm. The radiation of wavelengths shorter than about 400 nm is called ultraviolet (UV), and longer than about 800 nm, infrared radiation. The UV range is sometimes divided into three sub-ranges (IEC, 1987):
UV-A: 315–400 nm
UV-B: 280–315 nm
UV-C: 100–280 nm

7.1.2 Units and scales

7.1.2.1 Units

The International System of Units (SI) is to be preferred for meteorological radiation variables. A general list of the units is given in Annexes 7.A and 7.B.

7.1.2.2 Standardization

The responsibility for the calibration of radiometric instruments rests with the World, Regional and National Radiation Centres, the specifications for which are given in Annex 7.C. Furthermore, the World Radiation Centre (WRC) at Davos is responsible for maintaining the basic reference, the World Standard Group (WSG) of instruments, which is used to establish the World Radiometric Reference (WRR). During international comparisons, organized every five years, the standards of the regional centres are compared with the WSG, and their calibration factors are adjusted to the WRR. They, in turn, are used to transmit the WRR periodically to the national centres, which calibrate their network instruments using their own standards.

Definition of the World Radiometric Reference

In the past, several radiation references or scales have been used in meteorology, namely the Ångström scale of 1905, the Smithsonian scale of 1913, and the international pyrheliometric scale of 1956 (IPS 1956). The developments in absolute radiometry in recent years have very much reduced the uncertainty of radiation measurements. With the results of many comparisons of 15 individual absolute pyrheliometers of 10 different types, a WRR has been defined. The old scales can be transferred into the WRR using the following factors:

WRR/Ångström scale 1905 = 1.026
WRR/Smithsonian scale 1913 = 0.977
WRR/IPS 1956 = 1.026

The WRR is accepted as representing the physical units of total irradiance within 0.3 per cent (99 per cent uncertainty of the measured value).

Realization of the World Radiometric Reference: World Standard Group

In order to guarantee the long-term stability of the new reference, a group of at least four absolute pyrheliometers of different design is used as the WSG. At the time of incorporation into this group, the instruments are given a reduction factor to correct their readings to the WRR. To qualify for membership of this group, a radiometer must fulfil the following specifications:
(a) Long-term stability must be better than 0.2 per cent of the measured value;
(b) The 95 per cent uncertainty of the series of measurements with the instrument must lie within the limits of the uncertainty of the WRR;
(c) The instrument has to have a different design from the other WSG instruments.
To meet the stability criteria, the instruments of the WSG are the subjects of an inter-comparison at least once a year, and, for this reason, the WSG is kept at the WRC Davos.

Computation of World Radiometric Reference values

In order to calibrate radiometric instruments, the reading of a WSG instrument, or one that is directly traceable to the WSG, should be used. During international pyrheliometer comparisons (IPCs), the WRR value is calculated from the mean of at least three participating instruments of the WSG. To yield WRR values, the readings of the WSG instruments are always corrected with the individual reduction factor, which is determined at the time of their


incorporation into the WSG. Since the calculation of the mean value of the WSG, serving as the reference, may be jeopardized by the failure of one or more radiometers belonging to the WSG, the Commission for Instruments and Methods of Observation resolved¹ that at each IPC an ad hoc group should be established comprising the Rapporteur on Meteorological Radiation Instruments (or designate) and at least five members, including the chairperson. The director of the comparison must participate in the group's meetings as an expert. The group should discuss the preliminary results of the comparison, based on criteria defined by the WRC, evaluate the reference and recommend the updating of the calibration factors.

¹ Recommended by the Commission for Instruments and Methods of Observation at its eleventh session (1994).

7.1.3 Meteorological requirements

7.1.3.1 Data to be recorded

Irradiance and radiant exposure are the quantities most commonly recorded and archived, with averages and totals of over 1 h. There are also many requirements for data over shorter periods, down to 1 min or even tens of seconds (for some energy applications). Daily totals of radiant exposure are frequently used, but these are expressed as a mean daily irradiance. For climatological purposes, measurements of direct solar radiation shorter than a day are needed at fixed true solar hours, or at fixed airmass values. Measurements of atmospheric extinction must be made with very short response times to reduce the uncertainties arising from variations in air mass.

For radiation measurements, it is particularly important to record and make available information about the circumstances of the observations. This includes the type and traceability of the instrument, its calibration history, and its location in space and time, spatial exposure and maintenance record.

7.1.3.2 Uncertainty

Statements of uncertainty for net radiation are given in Part I, Chapter 1. The required 66 per cent uncertainty for radiant exposure for a day, stated by WMO for international exchange, is 0.4 MJ m–2 for ≤ 8 MJ m–2 and 5 per cent for > 8 MJ m–2. There are no formally agreed statements of required uncertainty for other radiation quantities, but uncertainty is discussed in the sections of this chapter dealing with the various types of measurements, and best practice uncertainties are stated for the Global Climate Observing System's Baseline Surface Radiation Network (see WMO, 1998). It may be said generally that good quality measurements are difficult to achieve in practice, and for routine operations they can be achieved only with modern equipment and redundant measurements. Some systems still in use fall short of best practice, the lesser performance having been acceptable for many applications. However, data of the highest quality are increasingly in demand.

7.1.3.3 Sampling and recording

The uncertainty requirements can best be satisfied by making observations at a sampling period less than the 1/e time-constant of the instrument, even when the data to be finally recorded are integrated totals for periods of up to 1 h, or more. The data points may be integrated totals or an average flux calculated from individual samples. Digital data systems are greatly to be preferred. Chart recorders and other types of integrators are much less convenient, and the resultant quantities are difficult to maintain at adequate levels of uncertainty.

7.1.3.4 Times of observation

In a worldwide network of radiation measurements, it is important that the data be homogeneous not only for calibration, but also for the times of observation. Therefore, all radiation measurements should be referred to what is known in some countries as local apparent time, and in others as true solar time. However, standard or universal time is attractive for automatic systems because it is easier to use, but is acceptable only if a reduction of the data to true solar time does not introduce a significant loss of information (that is to say, if the sampling and storage rates are high enough, as indicated in section 7.1.3.3 above). See Annex 7.D for useful formulae for the conversion from standard to solar time.

7.1.4 Measurement methods

Meteorological radiation instruments are classified using various criteria, namely the type of variable to be measured, the field of view, the spectral response, the main use, and the like. The most important types of classifications are listed in Table 7.1. The quality of the instruments is characterized by items (a) to (h) below. The instruments and their operation are described in sections 7.2 to 7.4 below. WMO (1986a) provides a detailed account of instruments and the principles according to which they operate.


Absolute radiometers are self-calibrating, meaning that the irradiance falling on the sensor is replaced by electrical power, which can be accurately measured. The substitution, however, cannot be perfect; the deviation from the ideal case determines the uncertainty of the radiation measurement.

Most radiation sensors, however, are not absolute and must be calibrated against an absolute instrument. The uncertainty of the measured value, therefore, depends on the following factors, all of which should be known for a well-characterized instrument:
(a) Resolution, namely, the smallest change in the radiation quantity which can be detected by the instrument;
(b) Long-term drifts of sensitivity (the ratio of electrical output signal to the irradiance applied), namely, the maximum possible change over, for example, one year;
(c) Changes in sensitivity owing to changes of environmental variables, such as temperature, humidity, pressure and wind;
(d) Non-linearity of response, namely, changes in sensitivity associated with variations in irradiance;
(e) Deviation of the spectral response from that postulated, namely the blackness of the receiving surface, the effect of the aperture window, and so on;
(f) Deviation of the directional response from that postulated, namely cosine response and azimuth response;
(g) Time-constant of the instrument or the measuring system;
(h) Uncertainties in the auxiliary equipment.

Table 7.1. Meteorological radiation instruments

Instrument classification | Parameter to be measured | Main use | Viewing angle (sr) (see Figure 7.1)
Absolute pyrheliometer | Direct solar radiation | Primary standard | 5 x 10–3 (approx. 2.5° half angle)
Pyrheliometer | Direct solar radiation | (a) Secondary standard for calibrations (b) Network | 5 x 10–3 to 2.5 x 10–2
Spectral pyrheliometer | Direct solar radiation in broad spectral bands (e.g., with OG 530, RG 630, etc. filters) | Network | 5 x 10–3 to 2.5 x 10–2
Sunphotometer | Direct solar radiation in narrow spectral bands (e.g., at 500 ± 2.5 nm, 368 ± 2.5 nm) | (a) Standard (b) Network | 1 x 10–3 to 1 x 10–2 (approx. 2.3° full angle)
Pyranometer | (a) Global (solar) radiation (b) Diffuse sky (solar) radiation (c) Reflected solar radiation | (a) Working standard (b) Network | 2π
Spectral pyranometer | Global (solar) radiation in broadband spectral ranges (e.g., with OG 530, RG 630, etc. filters) | Network | 2π
Net pyranometer | Net global (solar) radiation | (a) Working standard (b) Network | 4π
Pyrgeometer | (a) Upward long-wave radiation (downward-looking) (b) Downward long-wave radiation (upward-looking) | Network | 2π
Pyrradiometer | Total radiation | Working standard | 2π
Net pyrradiometer | Net total radiation | Network | 4π

Figure 7.1. View-limiting geometry: the opening half-angle is arctan r/d; the slope angle is arctan (R – r)/d

Instruments should be selected according to their end-use and the required uncertainty of the derived quantity. Certain instruments perform better for particular climates, irradiances and solar positions.

7.2 Measurement of direct solar radiation

Direct solar radiation is measured by means of pyrheliometers, the receiving surfaces of which are arranged to be normal to the solar direction. By means of apertures, only the radiation from the sun and a narrow annulus of sky is measured; the latter radiation component is sometimes referred to as circumsolar radiation or aureole radiation. In modern instruments, this extends out to a half-angle of about 2.5° on some models, and to about 5° from the sun's centre (corresponding, respectively, to 5 · 10–3 and 5 · 10–2 sr). The construction of the pyrheliometer mounting must allow for the rapid and smooth adjustment of the azimuth and elevation angles. A sighting device is usually included in which a small spot of light or solar image falls upon a mark in the centre of the target when the receiving surface is exactly normal to the direct solar beam. For continuous recording, it is advisable to use automatic sun-following equipment (sun tracker).

As to the view-limiting geometry, it is recommended that the opening half-angle be 2.5° (5 · 10–3 sr) and the slope angle 1° for all new designs of direct solar radiation instruments. For the definition of these angles, refer to Figure 7.1. During the comparison of instruments with different view-limiting geometries, it should be kept in mind that the aureole radiation influences the readings more significantly for larger slope and aperture angles. The difference can be as great as 2 per cent between the two apertures mentioned above for an air mass of 1.0.

In order to enable climatological comparison of direct solar radiation data during different seasons, it may be necessary to reduce all data to a mean sun-Earth distance:

EN = E/R²  (7.1)

where EN is the solar radiation, normalized to the mean sun-Earth distance, which is defined to be one astronomical unit (AU) (see Annex 7.D); E is the measured direct solar radiation; and R is the sun-Earth distance in astronomical units.
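The short sketch below simply evaluates equation 7.1 as written, with the sun-Earth distance R supplied by the user (for operational work R should be obtained from the formulae in Annex 7.D or an ephemeris); the numerical values in the example are illustrative.

```python
# Illustrative sketch of equation 7.1: EN = E / R**2, with E the measured
# direct solar irradiance (W m-2) and R the sun-Earth distance in astronomical
# units. R must be supplied externally, e.g. from the formulae in Annex 7.D.

def normalize_to_mean_distance(e_measured: float, r_au: float) -> float:
    """Return EN, the direct solar irradiance reduced to the mean distance."""
    return e_measured / (r_au ** 2)

# Example (illustrative values only): 920 W m-2 measured when R = 0.9833 AU
print(round(normalize_to_mean_distance(920.0, 0.9833), 1))
```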

7.2.1 Direct solar radiation

Some of the characteristics of operational pyrheliometers (other than primary standards) are given in Table 7.2 (adapted from ISO, 1990a), with indicative estimates of the uncertainties of measurements made with them if they are used with appropriate expertise and quality control. Cheaper pyrheliometers are available (see ISO, 1990a), but without an effort to characterize their response the resulting uncertainties reduce the quality of the data and, given that a sun tracker is required in any case, the incremental cost of a good pyrheliometer is in most cases minor. The estimated uncertainties are based on the following assumptions:
(a) Instruments are well-maintained, correctly aligned and clean;
(b) 1 min and 1 h figures are for clear-sky irradiances at solar noon;
(c) Daily exposure values are for clear days at mid-latitudes.

7.2.1.1 Primary standard pyrheliometers

An absolute pyrheliometer can define the scale of total irradiance without resorting to reference sources or radiators. The limits of uncertainty of the definition must be known; the quality of this knowledge determines the reliability of an absolute pyrheliometer. Only specialized laboratories should operate and maintain primary standards. Details of their construction and operation are given in WMO (1986a). However, for the sake of completeness, a brief account is given here.


All absolute pyrheliometers of modern design use cavities as receivers and electrically calibrated, differential heat-flux meters as sensors. At present, this combination has proved to yield the lowest uncertainty possible for the radiation levels encountered in solar radiation measurements (namely, up to 1.5 kW m–2). Normally, the electrical calibration is performed by replacing the radiative power by electrical power, which is dissipated in a heater winding as close as possible to where the absorption of solar radiation takes place.

The uncertainties of such an instrument's measurements are determined by a close examination of the physical properties of the instrument and by performing laboratory measurements and/or model calculations to determine the deviations from ideal behaviour, that is, how perfectly the electrical substitution can be achieved. This procedure is called characterization of the instrument.

The following specification should be met by an absolute pyrheliometer (an individual instrument, not a type) to be designated and used as a primary standard:
(a) At least one instrument out of a series of manufactured radiometers has to be fully characterized. The 95 per cent uncertainty of this characterization should be less than 2 W m–2 under the clear-sky conditions suitable for calibration (see ISO, 1990a). The 95 per cent uncertainty (for all components of the uncertainty) for a series of measurements should not exceed 4 W m–2 for any measured value;
(b) Each individual instrument of the series must be compared with the one which has been characterized, and no individual instrument should deviate from this instrument by more than the characterization uncertainty as determined in (a) above;
(c) A detailed description of the results of such comparisons and of the characterization of the instrument should be made available upon request;
(d) Traceability to the WRR by comparison with the WSG or some carefully established reference with traceability to the WSG is needed in order to prove that the design is within the state of the art. The latter is fulfilled if the 95 per cent uncertainty for a series of measurements traceable to the WRR is less than 1 W m–2.

Table 7.2. Characteristics of operational pyrheliometers

Characteristic | High quality (a) | Good quality (b)
Response time (95 per cent response) | < 15 s | < 30 s
Zero offset (response to 5 K h–1 change in ambient temperature) | 2 W m–2 | 4 W m–2
Resolution (smallest detectable change in W m–2) | 0.5 | 1
Stability (percentage of full scale, change/year) | 0.1 | 0.5
Temperature response (percentage maximum error due to change of ambient temperature within an interval of 50 K) | 1 | 2
Non-linearity (percentage deviation from the responsivity at 500 W m–2 due to the change of irradiance within 100 W m–2 to 1 100 W m–2) | 0.2 | 0.5
Spectral sensitivity (percentage deviation of the product of spectral absorptance and spectral transmittance from the corresponding mean within the range 300 to 3 000 nm) | 0.5 | 1.0
Tilt response (percentage deviation from the responsivity at 0° tilt (horizontal) due to change in tilt from 0° to 90° at 1 000 W m–2) | 0.2 | 0.5
Achievable uncertainty, 95 per cent confidence level (see above):
  1 min totals, per cent | 0.9 | 1.8
  1 min totals, kJ m–2 | 0.56 | 1
  1 h totals, per cent | 0.7 | 1.5
  1 h totals, kJ m–2 | 21 | 54
  Daily totals, per cent | 0.5 | 1.0
  Daily totals, kJ m–2 | 200 | 400

Notes:
(a) Near state of the art; suitable for use as a working standard; maintainable only at stations with special facilities and staff.
(b) Acceptable for network operations.

7.2.1.2 Secondary standard pyrheliometers

An absolute pyrheliometer which does not meet the specification for a primary standard or which


is not fully characterized can be used as a secondary standard if it is calibrated by comparison with the WSG with a 95 per cent uncertainty for a series of measurements less than 1 W m–2. Other types of instruments with measurement uncertainties similar or approaching those for primary standards may be used as secondary standards.

The Ångström compensation pyrheliometer has been, and still is, used as a convenient secondary standard instrument for the calibration of pyranometers and other pyrheliometers. It was designed by K. Ångström as an absolute instrument, and the Ångström scale of 1905 was based on it; now it is used as a secondary standard and must be calibrated against a standard instrument.

The sensor consists of two platinized manganin strips, each of which is about 18 mm long, 2 mm wide and about 0.02 mm thick. They are blackened with a coating of candle soot or with an optical matt black paint. A thermo-junction of copper-constantan is attached to the back of each strip so that the temperature difference between the strips can be indicated by a sensitive galvanometer or an electrical micro-voltmeter. The dimensions of the strip and front diaphragm yield opening half-angles and slope angles as listed in Table 7.3.

Table 7.3. View-limiting geometry of Ångström pyrheliometers

Angle | Vertical | Horizontal
Opening half-angle | 5° – 8° | ~ 2°
Slope angle | 0.7° – 1.0° | 1.2° – 1.6°

The measurement set consists of three or more cycles, during which the left- or right-hand strip is alternately shaded from or exposed to the direct solar beam. The shaded strip is heated by an electric current, which is adjusted in such a way that the thermal electromotive force of the thermocouple and, hence, the temperature difference between the two strips approximate zero. Before and after a measuring sequence, the zero is checked either by shading or by exposing both strips simultaneously. Depending on which of these methods is used and on the operating instructions of the manufacturer, the irradiance calculation differs slightly. The method adopted for the IPCs uses the following formula:

E = K · iL · iR  (7.2)

where E is the irradiance in W m–2; K is the calibration constant determined by comparison with a primary standard (W m–2 A–2); and iL and iR are the currents in amperes measured with the left- or right-hand strip exposed to the direct solar beam, respectively.

Before and after each series of measurements, the zero of the system is adjusted electrically by using either of the foregoing methods, the zeros being called "cold" (shaded) or "hot" (exposed), as appropriate. Normally, the first reading, say iR, is excluded and only the following iL–iR pairs are used to calculate the irradiance. When comparing such a pyrheliometer with other instruments, the irradiance derived from the currents corresponds to the geometric mean of the solar irradiances at the times of the readings of iL and iR.

The auxiliary instrumentation consists of a power supply, a current-regulating device, a nullmeter and a current monitor. The sensitivity of the nullmeter should be about 0.05 · 10–6 A per scale division for a low-input impedance (< 10 Ω), or about 0.5 µV with a high-input impedance (> 10 KΩ). Under these conditions, a temperature difference of about 0.05 K between the junctions of the copper-constantan thermocouple causes a deflection of one scale division, which indicates that one of the strips is receiving an excess heat supply amounting to about 0.3 per cent.

The uncertainty of the derived direct solar irradiance is highly dependent on the qualities of the current-measuring device, whether a moving-coil milliammeter or a digital multimeter which measures the voltage across a standard resistor, and on the operator's skill. The fractional error in the output value of irradiance is twice as large as the fractional error in the reading of the electric current.

The heating current is directed to either strip by means of a switch and is normally controlled by separate rheostats in each circuit. The switch can also cut the current off so that the zero can be determined. The resolution of the rheostats should be sufficient to allow the nullmeter to be adjusted to within one half of a scale division.
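To illustrate how equation 7.2 and the pairing of readings described above fit together, here is a minimal sketch; the calibration constant and the current values are invented example figures, and the handling of the excluded first reading is simplified.

```python
# Illustrative sketch of equation 7.2, E = K * iL * iR, applied to a series of
# left/right compensation-current readings. The exclusion of the first reading
# described in the text is represented here, in simplified form, by dropping
# the first (iL, iR) pair. K and the currents are invented example values.

def angstrom_irradiances(k, readings):
    """Return one irradiance (W m-2) per retained (iL, iR) pair of currents (A)."""
    retained = readings[1:]  # simplified handling of the excluded first reading
    return [k * i_left * i_right for i_left, i_right in retained]

# Example: K = 3.5e4 W m-2 A-2 and three cycles of compensation currents
pairs = [(0.158, 0.159), (0.160, 0.159), (0.159, 0.160)]
print([round(e, 1) for e in angstrom_irradiances(3.5e4, pairs)])
```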

7.2.1.3 Field and network pyrheliometers

These pyrheliometers generally make use of a thermopile as the detector. They have a similar view-limiting geometry to standard pyrheliometers. Older models tend to have larger fields of view and slope angles. These design features were intended primarily to reduce the need for accurate sun


tracking. However, the larger the slope (and opening) angle, the larger the amount of aureole radiation sensed by the detector; this amount may reach several per cent for high optical depths and large limiting angles. With new designs of sun trackers, including computer-assisted trackers in both passive and active (sun-seeking) configurations, larger slope angles are no longer necessary. However, a slope angle of 1° is still required to ensure that the energy from the direct solar beam is distributed evenly on the detector; it also allows for minor sun tracker pointing errors of the order of 0.1°.

The intended use of the pyrheliometer may dictate the selection of a particular type of instrument. Some manually oriented models, such as the Linke Fuessner Actinometer, are used mainly for spot measurements, while others such as the EKO, Eppley, Kipp and Zonen, and Middleton types are designed specifically for the long-term monitoring of direct irradiance. Before deploying an instrument, the user must consider the significant differences found among operational pyrheliometers as follows:
(a) The field of view of the instrument;
(b) Whether the instrument measures both the long-wave and short-wave portion of the spectrum (namely, whether the aperture is open or covered with a glass or quartz window);
(c) The temperature compensation or correction methods;
(d) The magnitude and variation of the zero irradiance signal;
(e) If the instrument can be installed on an automated tracking system for long-term monitoring;
(f) If, for the calibration of other operational pyrheliometers, differences (a) to (c) above are the same, and if the pyrheliometer is of the quality required to calibrate other network instruments.

7.2.1.4 Calibration of pyrheliometers

All pyrheliometers, other than absolute pyrheliometers, must be calibrated by comparison using the sun as the source with a pyrheliometer that has traceability to the WSG and a likely uncertainty of calibration equal to or better than the pyrheliometer being calibrated. As all solar radiation data must be referred to the WRR, absolute pyrheliometers also use a factor determined by comparison with the WSG and not their individually determined one. After such a comparison (for example, during the periodically organized IPCs) such a pyrheliometer can be used as a standard to calibrate, again by comparison with the sun as a source, secondary standards and field pyrheliometers. Secondary standards can also be used to calibrate field instruments, but with increased uncertainty. The quality of sun-source calibrations may depend on the aureole influence if instruments with different view-limiting geometries are compared. Also, the quality of the results will depend on the variability of the solar irradiance, if the time-constants and zero irradiance signals of the pyrheliometers are significantly different. Lastly, environmental conditions, such as temperature, pressure and net long-wave irradiance, can influence the results. If a very high quality of calibration is required, only data taken during very clear and stable days should be used. The procedures for the calibration of field pyrheliometers are given in an ISO standard (ISO, 1990b). From recent experience at IPCs, a period of five years between traceable calibrations to the WSG should suffice for primary and secondary standards. Field pyrheliometers should be calibrated every one to two years; the more prolonged the use and the more rigorous the conditions, the more often they should be calibrated.

7.2.2 Spectral direct solar irradiance and measurement of optical depth

Spectral measurements of the direct solar irradiance are used in meteorology mainly to determine optical depth (see Annex 7.B) in the atmosphere. They are used also for medical, biological, agricultural and solar-energy applications. The aerosol optical depth represents the total extinction, namely, scattering and absorption by aerosols in the size range 100 to 10 000 nm radius, for the column of the atmosphere equivalent to unit optical air mass. Particulate matter, however, is not the only influencing factor for optical depth. Other atmospheric constituents such as air molecules (Rayleigh scatterers), ozone, water vapour, nitrogen dioxide and carbon dioxide also contribute to the total extinction of the beam. Most optical depth measurements are taken to understand better the loading of the atmosphere by aerosols. However, optical depth measurements of other constituents, such as water vapour, ozone and nitrogen dioxide, can be obtained if appropriate wavebands are selected.



Table 7.4. Specification of idealized Schott glass filters

Schott type   Typical 50% cut-off wavelength (nm)   Mean transmission     Approximate temperature coefficient
              Short           Long                  (3 mm thickness)      of short-wave cut-off (nm K–1)
OG 530        526 ± 2         2 900                 0.92                  0.12
RG 630        630 ± 2         2 900                 0.92                  0.17
RG 700        702 ± 2         2 900                 0.92                  0.18

The temperature coefficients for Schott filters are as given by the manufacturer. The short-wave cut-offs are adjusted to the standard filters used for calibration. Checks on the short and long wavelength cut-offs are required for reducing uncertainties in derived quantities.

The aerosol optical depth δa(λ) at a specific wavelength λ is based on the Bouguer-Lambert law (or Beer’s law for monochromatic radiation) and can be determined by:

δa(λ) = [ln(E0(λ)/E(λ)) – Σ(δi(λ) · mi)] / ma (7.3)

where δa(λ) is the aerosol optical depth at a waveband centred at wavelength λ; ma is the air mass for aerosols (unity for the vertical beam); δi(λ) is the optical depth for species i, other than aerosols, at a waveband centred at wavelength λ; mi is the air mass for extinction species i, other than aerosols; E0(λ) is the spectral solar irradiance outside the atmosphere at wavelength λ; and E(λ) is the spectral solar irradiance at the surface at wavelength λ. Optical thickness is the total extinction along the path through the atmosphere, that is, the air mass multiplied by the optical depth mδ. Turbidity τ is the same quantity as optical depth, but using base 10 rather than base e in Beer’s law, as follows:

τ(λ)·m = log(E0(λ)/E(λ)) (7.4)

accordingly:

τ(λ) = 2.301 δ(λ) (7.5)
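As a simple numerical illustration of equation 7.3, the following Python sketch (not from the Guide; all irradiances, optical depths and air masses are hypothetical values) removes the non-aerosol contributions and divides by the aerosol air mass:

import math

# Minimal sketch of equation 7.3 with hypothetical values; in practice E0, E
# and the air masses come from the instrument calibration and standard
# air-mass formulae.

E0 = 1.60      # extraterrestrial spectral irradiance at the chosen waveband
E = 1.05       # measured spectral irradiance at the surface (same units)
m_a = 1.5      # air mass for aerosols

# optical depths and air masses of the non-aerosol extinction species i
others = [
    ("Rayleigh", 0.144, 1.5),
    ("ozone",    0.015, 1.5),
]

total_other = sum(delta_i * m_i for _, delta_i, m_i in others)

# delta_a(lambda) = [ln(E0/E) - sum(delta_i * m_i)] / m_a
delta_a = (math.log(E0 / E) - total_other) / m_a
print(f"aerosol optical depth ~ {delta_a:.3f}")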

In meteorology, two types of measurements are performed, namely broadband pyrheliometry and narrowband sun radiometry (sometimes called sun photometry). Since the aerosol optical depth is defined only for monochromatic radiation or for a very narrow wavelength range, it can be applied directly to the evaluation of sun photometer data, but not to broadband pyrheliometer data. Aerosol optical depth observations should be made only when no visible clouds are within 10° of the sun. When sky conditions permit, as many observations as possible should be made in a day and a maximum range of air masses should be covered, preferably in intervals of Δm less than 0.2. Only instantaneous values can be used for the determination of aerosol optical depth; instantaneous means that the measurement process takes less than 1 s.

7.2.2.1 Broadband pyrheliometry

Broadband pyrheliometry makes use of a carefully calibrated pyrheliometer with broadband glass filters in front of it to select the spectral bands of interest. The specifications of the classical filters used are summarized in Table 7.4. The cut-off wavelengths depend on temperature, and some correction of the measured data may be needed. The filters must be properly cleaned before use. In operational applications, they should be checked daily and cleaned if necessary. The derivation of aerosol optical depth from broadband data is very complex, and there is no standard procedure. Use may be made both of tables which are calculated from typical filter data and of some assumptions on the state of the atmosphere. The reliability of the results depends on how well the filter used corresponds to the filter in the calculations and how good the atmospheric assumptions are. Details of the evaluation and the corresponding tables can be found in WMO (1978). A discussion of the techniques is given by Kuhn (1972) and Lal (1972).


7.2.2.2 Sun radiometry (photometry) and aerosol optical depth

A narrowband sun radiometer (or photometer) usually consists of a narrowband interference filter and a photovoltaic detector, usually a silicon photodiode. The full field of view of the instrument is 2.5° with a slope angle of 1° (see Figure 7.1). Although the derivation of optical depth using these devices is conceptually simple, many early observations from these devices have not produced useful results. The main problems have been the shifting of the instrument response because of changing filter transmissions and detector characteristics over short periods, and poor operator training for manually operated devices. Accurate results can be obtained with careful operating procedures and frequent checks of instrument stability. The instrument should be calibrated frequently, preferably using in situ methods or using reference devices maintained by a radiation centre with expertise in optical depth determination. Detailed advice on narrowband sun radiometers and network operations is given in WMO (1993a). To calculate aerosol optical depth from narrowband sun radiometer data with small uncertainty, the station location, pressure, temperature, column ozone amount, and an accurate time of measurement must be known (WMO, 2005). The most accurate calculation of the total and aerosol optical depth from spectral data at wavelength λ (the centre wavelength of its filter) makes use of the following:
δa(λ) = [ln(S0(λ)/(R² S(λ))) – (P/P0) δR(λ) mR – δO3(λ) mO3 – ...] / ma (7.6)

where S(λ) is the instrument reading (for example, in volts or counts), S0(λ) is the hypothetical reading corresponding to the top of the atmosphere spectral solar irradiance at 1 AU (this can be established by extrapolation to air-mass zero by various Langley methods, or from the radiation centre which calibrated the instrument); R is the sun-Earth distance (in astronomical units; see Annex 7.D); P is the atmospheric pressure; P0 is the standard atmospheric pressure, and the second, third and subsequent terms in the numerator are the contributions of Rayleigh, ozone and other extinctions. This can be simplified for less accurate work by assuming that the relative air masses for each of the components are equal.
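The following minimal Python sketch illustrates one common way of applying equation 7.6: a Langley extrapolation on a clear, stable morning to estimate ln S0(λ), followed by the reduction of a single observation. The readings, optical depths, pressures and air masses below are hypothetical, and R is taken as 1 AU for simplicity:

import math

# Minimal sketch (hypothetical values): Langley extrapolation for ln S0,
# then equation 7.6 for the aerosol optical depth of one observation.

# clear, stable morning: (air mass m, instrument reading S)
langley = [(2.0, 549.0), (3.0, 407.0), (4.0, 301.0), (5.0, 223.0)]

# least-squares fit of ln S = ln S0 - tau_total * m (Bouguer-Lambert law)
n = len(langley)
sx = sum(m for m, _ in langley)
sy = sum(math.log(s) for _, s in langley)
sxx = sum(m * m for m, _ in langley)
sxy = sum(m * math.log(s) for m, s in langley)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # = -tau_total
ln_S0 = (sy - slope * sx) / n                       # extrapolation to m = 0

# one observation to be reduced with equation 7.6
S = 432.0                          # instrument reading
m_a = m_R = m_O3 = 2.8             # simplification: equal relative air masses
R = 1.0                            # sun-Earth distance (AU), assumed here
P, P0 = 101.1e3, 101.325e3         # station and standard pressure (Pa)
delta_R, delta_O3 = 0.144, 0.015   # Rayleigh and ozone optical depths

delta_a = (ln_S0 - math.log(R ** 2 * S)
           - (P / P0) * delta_R * m_R
           - delta_O3 * m_O3) / m_a
print(f"aerosol optical depth ~ {delta_a:.3f}")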

For all wavelengths, Rayleigh extinction must be considered. Ozone optical depth must be considered at wavelengths of less than 340 nm and throughout the Chappuis band. Nitrogen dioxide optical depths should be considered for all wavelengths less than 650 nm, especially if measurements are taken in areas that have urban influences. Although there are weak water vapour absorption bands even within the 500 nm spectral region, water vapour absorption can be neglected for wavelengths less than 650 nm. Further references on wavelength selection can be found in WMO (1986b). A simple algorithm to calculate Rayleigh-scattering optical depths is a combination of the procedure outlined by Fröhlich and Shaw (1980) and the Young (1981) correction. For more precise calculations the algorithm by Bodhaine and others (1999) is also available. Both ozone and nitrogen dioxide follow Beer’s law of absorption. The WMO World Ozone Data Centre recommends the ozone absorption coefficients of Bass and Paur (1985) in the UV region and Vigroux (1953) in the visible region. Nitrogen dioxide absorption coefficients can be obtained from Schneider and others (1987). For the reduction of wavelengths influenced by water vapour, the work of Frouin, Deschamps and Lecomte (1990) may be considered. Because of the complexity of water vapour absorption, bands that are influenced significantly should be avoided unless deriving water vapour amount by spectral solar radiometry.

7.2.3 Exposure

For continuous recording and reduced uncertainties, an accurate sun tracker that is not influenced by environmental conditions is essential. Sun tracking to within 0.2° is required, and the instruments should be inspected at least once a day, and more frequently if weather conditions so demand (with protection against adverse conditions). The principal exposure requirement for a recording instrument is the same as that for a pyrheliometer, namely, freedom from obstructions to the solar beam at all times and seasons of the year. Furthermore, the site should be chosen so that the incidence of fog, smoke and airborne pollution is as typical as possible of the surrounding area. For continuous recording, protection is needed against rain, snow, and so forth. The optical window, for instance, must be protected as it is usually made of quartz and is located in front of the instrument. Care must be taken to ensure that such a window is kept clean and that


condensation does not appear on the inside. For successful derivation of aerosol optical depth such attention is required, as a 1 per cent change in transmission at unit air mass translates into a 0.010 change in optical depth. For example, for transmission measurements at 500 nm at clean sea-level sites, a 0.010 change represents between 20 and 50 per cent of the mean winter aerosol optical depth.
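A quick way to see the size of this sensitivity, using the Bouguer-Lambert relation given in section 7.2.2: since the transmission is T = E(λ)/E0(λ) = exp(–mδ), a small change in transmission at unit air mass (m = 1) gives Δδ ≈ –ΔT/T, so a 1 per cent change in T corresponds to a change of about 0.01 in total optical depth.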

7.3 Measurement of global and diffuse sky radiation

The solar radiation received from a solid angle of 2π sr on a horizontal surface is referred to as global radiation. This includes radiation received directly from the solid angle of the sun’s disc, as well as diffuse sky radiation that has been scattered in traversing the atmosphere. The instrument needed for measuring solar radiation from a solid angle of 2π sr into a plane surface and a spectral range from 300 to 3 000 nm is the pyranometer. The pyranometer is sometimes used to measure solar radiation on surfaces inclined to the horizontal and in the inverted position to measure reflected global radiation. When measuring the diffuse sky component of solar radiation, the direct solar component is screened from the pyranometer by a shading device (see section 7.3.3.3). Pyranometers normally use thermo-electric, photoelectric, pyro-electric or bimetallic elements as sensors. Since pyranometers are exposed continually in all weather conditions they must be robust in design and resist the corrosive effects of humid air (especially near the sea). The receiver should be hermetically sealed inside its casing, or the casing must be easy to take off so that any condensed moisture can be removed. Where the receiver is not permanently sealed, a desiccator is usually fitted in the base of the instrument. The properties of pyranometers which are of concern when evaluating the uncertainty and quality of radiation measurement are: sensitivity, stability, response time, cosine response, azimuth response, linearity, temperature response, thermal offset, zero irradiance signal and spectral response. Further advice on the use of pyranometers is given in ISO (1990c) and WMO (1998). Table 7.5 (adapted from ISO, 1990a) describes the characteristics of pyranometers of various levels of performance, with the uncertainties that may be achieved with appropriate facilities, well-trained staff and good quality control under the sky conditions outlined in 7.2.1.

7.3.1 Calibration of pyranometers

The calibration of a pyranometer consists of the determination of one or more calibration factors and the dependence of these on environmental conditions, such as:
(a) Temperature;
(b) Irradiance level;
(c) Spectral distribution of irradiance;
(d) Temporal variation;
(e) Angular distribution of irradiance;
(f) Inclination of instrument;
(g) The net long-wave irradiance for thermal offset correction;
(h) Calibration methods.
Normally, it is necessary to specify the test environmental conditions, which can be quite different for different applications. The method and conditions must also be given in some detail in the calibration certificate. There are a variety of methods for calibrating pyranometers using the sun or laboratory sources. These include the following:
(a) By comparison with a standard pyrheliometer for the direct solar irradiance and a calibrated shaded pyranometer for the diffuse sky irradiance;
(b) By comparison with a standard pyrheliometer using the sun as a source, with a removable shading disc for the pyranometer;
(c) With a standard pyrheliometer using the sun as a source and two pyranometers to be calibrated alternately measuring global and diffuse irradiance;
(d) By comparison with a standard pyranometer using the sun as a source, under other natural conditions of exposure (for example, a uniform cloudy sky and direct solar irradiance not statistically different from zero);
(e) In the laboratory, on an optical bench with an artificial source, either normal incidence or at some specified azimuth and elevation, by comparison with a similar pyranometer previously calibrated outdoors;
(f) In the laboratory, with the aid of an integrating chamber simulating diffuse sky radiation, by comparison with a similar type of pyranometer previously calibrated outdoors.
These are not the only methods; (a), (b), (c) and (d) are commonly used. However, it is essential that,


Table 7.5. Characteristics of operational pyranometers

Characteristic (High quality a / Good quality b / Moderate quality c)
Response time (95 per cent response): < 15 s / < 30 s / < 60 s
Zero offset:
 (a) response to 200 W m–2 net thermal radiation (ventilated): 7 W m–2 / 15 W m–2 / 30 W m–2
 (b) response to 5 K h–1 change in ambient temperature: 2 W m–2 / 4 W m–2 / 8 W m–2
Resolution (smallest detectable change): 1 W m–2 / 5 W m–2 / 10 W m–2
Stability (change per year, percentage of full scale): 0.8 / 1.5 / 3.0
Directional response for beam radiation (the range of errors caused by assuming that the normal incidence responsivity is valid for all directions when measuring, from any direction, a beam radiation whose normal incidence irradiance is 1 000 W m–2): 10 W m–2 / 20 W m–2 / 30 W m–2
Temperature response (percentage maximum error due to any change of ambient temperature within an interval of 50 K): 2 / 4 / 8
Non-linearity (percentage deviation from the responsivity at 500 W m–2 due to any change of irradiance within the range 100 to 1 000 W m–2): 0.5 / 1 / 3
Spectral sensitivity (percentage deviation of the product of spectral absorptance and spectral transmittance from the corresponding mean within the range 300 to 3 000 nm): 2 / 5 / 10
Tilt response (percentage deviation from the responsivity at 0˚ tilt (horizontal) due to change in tilt from 0˚ to 90˚ at 1 000 W m–2): 0.5 / 2 / 5
Achievable uncertainty (95 per cent confidence level):
 Hourly totals: 3% / 8% / 20%
 Daily totals: 2% / 5% / 10%

Notes:
a Near state of the art; suitable for use as a working standard; maintainable only at stations with special facilities and staff.
b Acceptable for network operations.
c Suitable for low-cost networks where moderate to low performance is acceptable.

except for (b), either the zero irradiance signals for all instruments are known or pairs of identical model pyranometers in identical configurations are used. Ignoring these offsets and differences can bias the results significantly. Method (c) is considered to give very good results without the need for a calibrated pyranometer. It is difficult to determine a specific number of measurements on which to base the calculation of the pyranometer calibration factor. However, the standard error of the mean can be calculated and should be less than the desired limit when sufficient readings have been taken under the desired conditions. The principal variations (apart from

fluctuations due to atmospheric conditions and observing limitations) in the derived calibration factor are due to the following: (a) Departures from the cosine law response, particularly at solar elevations of less than 10° (for this reason it is better to restrict calibration work to occasions when the solar elevation exceeds 30°); (b) The ambient temperature; (c) Imperfect levelling of the receiver surface; (d) Non-linearity of instrument response; (e) The net long-wave irradiance between the detector and the sky. The pyranometer should be calibrated only in the position of use.


When using the sun as the source, the apparent solar elevation should be measured or computed (to the nearest 0.01°) for this period from solar time (see Annex 7.D). The mean instrument or ambient temperature should also be noted. 7.3.1.1 By reference to a standard pyrheliometer and a shaded reference pyranometer

In this method, described in ISO (1993), the pyranometer’s response to global irradiance is calibrated against the sum of separate measurements of the direct and diffuse components. Periods with clear skies and steady radiation (as judged from the record) should be selected. The vertical component of the direct solar irradiance is determined from the pyrheliometer output, and the diffuse sky irradiance is measured with a second pyranometer that is continuously shaded from the sun. The direct component is eliminated from the diffuse sky pyranometer by shading the whole outer dome of the instrument with a disc of sufficient size mounted on a slender rod and held some distance away. The diameter of the disc and its distance from the receiver surface should be chosen in such a way that the screened angle approximately equals the aperture angles of the pyrheliometer. Rather than using the radius of the pyranometer sensor, the radius of the outer dome should be used to calculate the slope angle of the shading disc and pyranometer combination. This shading arrangement occludes a close approximation of both the direct solar beam and the circumsolar sky irradiance as sensed by the pyrheliometer. On a clear day, the diffuse sky irradiance is less than 15 per cent of the global irradiance; hence, the calibration factor of the reference pyranometer does not need to be known very accurately. However, care must be taken to ensure that the zero irradiance signals from both pyranometers are accounted for, given that for some pyranometers under clear sky conditions the zero irradiance signal can be as high as 15 per cent of the diffuse sky irradiance. The calibration factor is then calculated according to:

E · sin h + Vs·ks = V · k (7.7)

or:

k = (E · sin h + Vs·ks)/V (7.8)

where E is the direct solar irradiance measured with the pyrheliometer (W m–2), V is the global irradiance output of the pyranometer to be calibrated (µV); Vs is the diffuse sky irradiance output of the shaded reference pyranometer (µV), h is the apparent solar elevation at the time of reading; k is the calibration factor of the pyranometer to be calibrated (W m–2 µV–1); and ks is the calibration factor of the shaded reference pyranometer (W m–2 µV–1), and all the signal measurements are taken simultaneously. The direct, diffuse and global components will change during the comparison, and care must be taken with the appropriate sampling and averaging to ensure that representative values are used. 7.3.1.2 By reference to a standard pyrheliometer

This method, described in ISO (1993a), is similar to the method of the preceding paragraph, except that the diffuse sky irradiance signal is measured by the same pyranometer. The direct component is eliminated temporarily from the pyranometer by shading the whole outer dome of the instrument as described in section 7.3.1.1. The period required for occulting depends on the steadiness of the radiation flux and the response time of the pyranometer, including the time interval needed to bring the temperature and long-wave emission of the glass dome to equilibrium; 10 times the thermopile 1/e time-constant of the pyranometer should generally be sufficient. The difference between the representative shaded and unshaded outputs from the pyranometer is due to the vertical component of direct solar irradiance E measured by the pyrheliometer. Thus:

E · sin h = (Vun – Vs) · k (7.9)

or:

k = (E · sin h)/(Vun – Vs) (7.10)

where E is the representative direct solar irradiance at normal incidence measured by the pyrheliometer (W m–2); Vun is the representative output signal of the pyranometer (µV) when in unshaded (or global) irradiance mode; Vs is the representative output signal of the pyranometer (µV) when in shaded (or diffuse sky) irradiance mode; h is the apparent solar elevation, and k is the calibration factor (W m–2 µV–1), which is the inverse of the sensitivity (µV W–1 m2).
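A minimal Python sketch of this shade/unshade reduction (equation 7.10) follows; the pyrheliometer irradiance, pyranometer signals and solar elevation are hypothetical illustrative values:

import math

# Minimal sketch of the shade/unshade reduction in equation 7.10,
# k = (E sin h)/(Vun - Vs).  All values below are hypothetical.

E = 870.0                    # direct irradiance at normal incidence (W m-2)
h = math.radians(45.0)       # apparent solar elevation
V_unshaded = 9550.0          # representative output, global (unshaded) mode (uV)
V_shaded = 1100.0            # representative output, diffuse (shaded) mode (uV)

k = (E * math.sin(h)) / (V_unshaded - V_shaded)   # calibration factor (W m-2 uV-1)
print(f"k ~ {k:.4f} W m-2 uV-1 (sensitivity ~ {1.0 / k:.1f} uV per W m-2)")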


Both the direct and diffuse components will change during the comparison, and care must be taken with the appropriate sampling and averaging to ensure that representative values of the shaded and unshaded outputs are used for the calculation. To reduce uncertainties associated with representative signals, a continuous series of shade and un-shade cycles should be performed and time-interpolated values used to reduce temporal changes in global and diffuse sky irradiance. Since the same pyranometer is being used in differential mode, and the difference in zero irradiance signals for global and diffuse sky irradiance is negligible, there is no need to account for zero irradiances in equation 7.10. 7.3.1.3 alternate calibration using a pyrheliometer


This method uses the same instrumental set-up as the method described in section 7.3.1.1, but only requires the pyrheliometer to provide calibrated irradiance data (E), and the two pyranometers are assumed to be un-calibrated (Forgan, 1996). The method calibrates both pyranometers by solving a pair of simultaneous equations analogous to equation 7.7. Irradiance signal data are initially collected with the pyrheliometer and one pyranometer (pyranometer A) measures global irradiance signals (VgA) and the other pyranometer (pyranometer B) measures diffuse irradiance signals (VdB) over a range of solar zenith angles in clear sky conditions. After sufficient data have been collected in the initial configuration, the pyranometers are exchanged so that pyranometer A, which initially measured the global irradiance signal, now measures the diffuse irradiance signal (VdA), and vice versa with regard to pyranometer B. The assumption is made that for each pyranometer the diffuse (kd) and global (kg) calibration coefficients are equal, and the calibration coefficient for pyranometer A is given by:

kA = kgA = kdA (7.11)

with an identical assumption for pyranometer B coefficients. Then for a time t0 in the initial period a modified version of equation 7.7 is:

E(t0) · sin(h(t0)) = kA·VgA(t0) – kB·VdB(t0) (7.12)

For time t1 in the alternate period when the pyranometers are exchanged:

E(t1) · sin(h(t1)) = kB·VgB(t1) – kA·VdA(t1) (7.13)

As the only unknowns in equations 7.12 and 7.13 are kA and kB, these can be solved for any pair of times (t0, t1). Pairs covering a range of solar elevations provide an indication of the directional response. The resultant calibration information for both pyranometers is representative of the global calibration coefficients and produces almost identical information to method 7.3.1.1, but without the need for a calibrated pyranometer. As with method 7.3.1.1, to produce coefficients with minimum uncertainty this alternate method requires that the irradiance signals from the pyranometers be adjusted to remove any estimated zero irradiance offset. To reduce uncertainties due to changing directional response it is recommended to use a pair of pyranometers of the same model and observation pairs when sin h (t0) ~ sin h (t1). The method is ideally suited to automatic field monitoring situations where three solar irradiance components (direct, diffuse and global) are monitored continuously. Experience suggests that the data collection necessary for the application of this method may be conducted during as little as one day with the exchange of instruments taking place around solar noon. However, at a field site, the extended periods and days either side of the instrument change may be used for data selection, provided that the pyrheliometer has a valid calibration.
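For illustration, the following Python sketch (hypothetical readings) solves the pair of linear equations 7.12 and 7.13 for kA and kB for one (t0, t1) pair:

import math

# Minimal sketch: solving equations 7.12 and 7.13 for kA and kB.
# All readings are hypothetical; sin h(t0) and sin h(t1) should be close.

# initial period, time t0: pyranometer A global, pyranometer B diffuse
E0, h0 = 880.0, math.radians(50.0)     # pyrheliometer irradiance, solar elevation
VgA0, VdB0 = 10150.0, 1320.0           # signals (uV)

# alternate period, time t1: instruments exchanged
E1, h1 = 885.0, math.radians(49.0)
VgB1, VdA1 = 9980.0, 1305.0            # signals (uV)

# linear system (Cramer's rule):
#   E0*sin(h0) =  kA*VgA0 - kB*VdB0
#   E1*sin(h1) = -kA*VdA1 + kB*VgB1
b0 = E0 * math.sin(h0)
b1 = E1 * math.sin(h1)
det = VgA0 * VgB1 - VdB0 * VdA1
kA = (b0 * VgB1 + b1 * VdB0) / det
kB = (VgA0 * b1 + VdA1 * b0) / det
print(f"kA ~ {kA:.4f}  kB ~ {kB:.4f}  (W m-2 uV-1)")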

7.3.1.4 By comparison with a reference pyranometer

As described in ISO (1992b), this method entails the simultaneous operation of two pyranometers mounted horizontally, side by side, outdoors for a sufficiently long period to acquire representative results. If the instruments are of the same model and monitoring configuration, only one or two days should be sufficient. The more pronounced the difference between the types of pyranometer configurations, the longer the period of comparison required. A long period, however, could be replaced by several shorter periods covering typical conditions (clear, cloudy, overcast, rainfall, snowfall, and so on). The derivation of the instrument factor is straightforward, but, in the case of different pyranometer models, the resultant uncertainty is more likely to be a reflection of the difference in model, rather than the stability of the instrument being calibrated. Data selection should be carried out when irradiances are relatively high and varying slowly. Each mean value of the ratio R of the response of the test instrument to that of the reference instrument may be used to calculate k = R · kr, where kr is the calibration factor of the reference, and k is the calibration factor being derived. During a sampling period,


provided that the time between measurements is less than the 1/e time-constant of the pyranometers, data collection can occur during times of fluctuating irradiance. The mean temperature of the instruments or the ambient temperature should be recorded during all outdoor calibration work to allow for any temperature effects. 7.3.1.5 By comparison in the laboratory

There are two methods which involve laboratory-maintained artificial light sources providing either direct or diffuse irradiance. In both cases, the test pyranometer and a reference standard pyranometer are exposed under the same conditions. In one method, the pyranometers are exposed to a stabilized tungsten-filament lamp installed at the end of an optical bench. A practical source for this type of work is a 0.5 to 1.0 kW halogen lamp mounted in a water-cooled housing with forced ventilation and with its emission limited to the solar spectrum by a quartz window. This kind of lamp can be used if the standard and the instrument to be calibrated have the same spectral response. For general calibrations, a high-pressure xenon lamp with filters to give an approximate solar spectrum should be used. When calibrating pyranometers in this way, reflection effects should be excluded from the instruments by using black screens. The usual procedure is to install the reference instrument and measure the radiant flux. The reference is then removed and the measurement repeated using the test instrument. The reference is then replaced and another determination is made. Repeated alternation with the reference should produce a set of measurement data of good precision (about 0.5 per cent). In the other method, the calibration procedure uses an integrating light system, such as a sphere or hemisphere illuminated by tungsten lamps, with the inner surface coated with highly reflective diffuse-white paint. This offers the advantage of simultaneous exposure of the reference pyranometer and the instrument to be calibrated. Since the sphere or hemisphere simulates a sky with an approximately uniform radiance, the angle errors of the instrument at 45° dominate. As the cosine error at these angles is normally low, the repeatability of integrating-sphere measurements is generally within 0.5 per cent. As for the source used to illuminate the sphere, the same considerations apply as for the first method.

7.3.1.6 Routine checks on calibration factors


There are several methods for checking the constancy of pyranometer calibration, depending upon the equipment available at a particular station. Every opportunity to check the performance of pyranometers in the field must be seized. At field stations where carefully preserved standards (either pyrheliometers or pyranometers) are available, the basic calibration procedures described above may be employed. Where standards are not available, other techniques can be used. If there is a simultaneous record of direct solar radiation, the two records can be examined for consistency by the method used for direct standardization, as explained in section 7.3.1.2. This simple check should be applied frequently. If there are simultaneous records of global and diffuse sky radiation, the two records should be frequently examined for consistency. In periods of total cloud the global and diffuse sky radiation should be identical, and these periods can be used when a shading disc is used for monitoring diffuse sky radiation. When using shading bands it is recommended that the band be removed so that the diffuse sky pyranometer is measuring global radiation and its data can be compared to simultaneous data from the global pyranometer. The record may be verified with the aid of a travelling working standard sent from the central station of the network or from a nearby station. Lastly, if calibrations are not performed at the site, the pyranometer can be exchanged for a similar one sent from the calibration facility. Either of the last two methods should be used at least once a year. Pyranometers used for measuring reflected solar radiation should be moved into an upright position and checked using the methods described above. 7.3.2 Performance of pyranometers


Considerable care and attention to details are required to attain the desirable standard of uncertainty. A number of properties of pyranometers and measurement systems should be evaluated so that the uncertainty of the resultant data can be estimated. For example, it has been demonstrated that, for a continuous record of global radiation without ancillary measurements of diffuse sky and direct radiation, an uncertainty better than 5 per cent in daily totals represents the result of good and careful work. Similarly, when a protocol similar to that proposed by WMO (1998) is used, uncertainties for daily total can be of the order of 2 per cent.

7.3.2.1 Sensor levelling

For accurate global radiation measurements with a pyranometer it is essential that the spirit level indicate when the plane of the thermopile is horizontal. This can be tested in the laboratory on an optical levelling table using a collimated lamp beam at about a 20° elevation. The levelling screws of the instrument are adjusted until the response is as constant as possible during rotation of the sensor in the azimuth. The spirit-level is then readjusted, if necessary, to indicate the horizontal plane. This is called radiometric levelling and should be the same as physical levelling of the thermopile. However, this may not be true if the quality of the thermopile surface is not uniform. 7.3.2.2 change of sensitivity due to ambient temperature variation


Thermopile instruments exhibit changes in sensitivity with variations in instrument temperature. Some instruments are equipped with integrated temperature compensation circuits in an effort to maintain a constant response over a large range of temperatures. The temperature coefficient of sensitivity may be measured in a temperature-controlled chamber. The temperature in the chamber is varied over a suitable range in 10° steps and held steady at each step until the response of the pyranometers has stabilized. The data are then fitted with a smooth curve. If the maximum percentage difference due to temperature response over the operational ambient range is 2 per cent or more, a correction should be applied on the basis of the fit of the data. If no temperature chamber is available, the standardization method with pyrheliometers (see section 7.3.1.l, 7.3.1.2 or 7.3.1.3) can be used at different ambient temperatures. Attention should be paid to the fact that not only the temperature, but also, for example, the cosine response (namely, the effect of solar elevation) and non-linearity (namely, variations of solar irradiance) can change the sensitivity. 7.3.2.3 Variation of response with orientation

The calibration factor of a pyranometer may very well be different when the instrument is used in an orientation other than that in which it was calibrated. Inclination testing of pyranometers can be conducted in the laboratory or with the standardization method described in section 7.3.1.1 or 7.3.1.2. It is recommended that the pyranometer be calibrated in the orientation in which it will be used. A correction for tilting is not recommended unless the instrument’s response has been characterized for a variety of conditions.

7.3.2.4 Variation of response with angle of incidence

The dependence of the directional response of the sensor upon solar elevation and azimuth is usually known as the Lambert cosine response and the azimuth response, respectively. Ideally, the solar irradiance response of the receiver should be proportional to the cosine of the zenith angle of the solar beam, and constant for all azimuth angles. For pyranometers, it is recommended that the cosine error (or percentage difference from ideal cosine response) be specified for at least two solar elevation angles, preferably 30° and 10°. A better way of prescribing the directional response is given in Table 7.5, which specifies the permissible error for all angles. Only lamp sources should be used to determine the variation of response with the angle of incidence, because the spectral distribution of the sun changes with the angle of elevation. Using the sun as a source, an apparent variation of response with solar elevation angle could be observed which, in fact, is a variation due to non-homogeneous spectral response.

7.3.2.5 Uncertainties in hourly and daily totals

As most pyranometers in a network are used to determine hourly or daily exposures (or exposures expressed as mean irradiances), it is evident that the uncertainties in these values are important. Table 7.5 lists the expected maximum deviation from the true value, excluding calibration errors. The types of pyranometers in the third column of Table 7.5 (namely, those of moderate quality) are not suitable for hourly or daily totals, although they may be suitable for monthly and yearly totals. 7.3.3 installation and maintenance of pyranometers


The site selected to expose a pyranometer should be free from any obstruction above the plane of the sensing element and, at the same time, should be readily accessible. If it is impracticable to obtain such an exposure, the site must be as free as possible of obstructions that may shadow it at any time


in the year. The pyranometer should not be close to light-coloured walls or other objects likely to reflect solar energy onto it; nor should it be exposed to artificial radiation sources. In most places, a flat roof provides a good location for mounting the radiometer stand. If such a site cannot be obtained, a stand placed some distance from buildings or other obstructions should be used. If practicable, the site should be chosen so that no obstruction, in particular within the azimuth range of sunrise and sunset over the year, should have an elevation exceeding 5°. Other obstructions should not reduce the total solar angle by more than 0.5 sr. At stations where this is not possible, complete details of the horizon and the solid angle subtended should be included in the description of the station. A site survey should be carried out before the initial installation of a pyranometer whenever its location is changed or if a significant change occurs with regard to any surrounding obstructions. An excellent method of doing this is to use a survey camera that provides azimuthal and elevation grid lines on the negative. A series of exposures should be made to identify the angular elevation above the plane of the receiving surface of the pyranometer and the angular range in azimuth of all obstructions throughout the full 360° around the pyranometer. If a survey camera is not available, the angular outline of obscuring objects may be mapped out by means of a theodolite or a compass and clinometer combination. The description of the station should include the altitude of the pyranometer above sea level (that is, the altitude of the station plus the height of pyranometer above the ground), together with its geographical longitude and latitude. It is also most useful to have a site plan, drawn to scale, showing the position of the recorder, the pyranometer, and all connecting cables. The accessibility of instrumentation for frequent inspection is probably the most important single consideration when choosing a site. It is most desirable that pyranometers and recorders be inspected at least daily, and preferably more often. The foregoing remarks apply equally to the exposure of pyranometers on ships, towers and buoys. The exposure of pyranometers on these platforms is a very difficult and sometimes hazardous undertaking. Seldom can an instrument be mounted where it is not affected by at least one significant obstruction (for example, a tower). Because of platform

motion, pyranometers are subject to wave motion and vibration. Precautions should be taken, therefore, to ensure that the plane of the sensor is kept horizontal and that severe vibration is minimized. This usually requires the pyranometer to be mounted on suitably designed gimbals. 7.3.3.1 correction for obstructions to a free horizon

If the direct solar beam is obstructed (which is readily detected on cloudless days), the record should be corrected wherever possible to reduce uncertainty. Only when there are separate records of global and diffuse sky radiation can the diffuse sky component of the record be corrected for obstructions. The procedure requires first that the diffuse sky record be corrected, and the global record subsequently adjusted. The fraction of the sky itself which is obscured should not be computed, but rather the fraction of the irradiance coming from that part of the sky which is obscured. Radiation incident at angles of less than 5° makes only a very small contribution to the total. Since the diffuse sky radiation limited to an elevation of 5° contributes less than 1 per cent to the diffuse sky radiation, it can normally be neglected. Attention should be concentrated on objects subtending angles of 10° or more, as well as those which might intercept the solar beam at any time. In addition, it must be borne in mind that light-coloured objects can reflect solar radiation onto the receiver. Strictly speaking, when determining corrections for the loss of diffuse sky radiation due to obstacles, the variance in sky radiance over the hemisphere should be taken into account. However, the only practical procedure is to assume that the radiance is isotropic, that is, the same from all parts of the sky. In order to determine the relative reduction in diffuse sky irradiance for obscuring objects of finite size, the following expression may be used:

ΔEsky = π–1 ∫Φ ∫Θ sin θ cos θ dθ dφ (7.14)

where θ is the angle of elevation; φ is the azimuth angle, Θ is the extent in elevation of the object; and Φ is the extent in azimuth of the object. The expression is valid only for obstructions with a black surface facing the pyranometer. For other objects, the correction has to be multiplied by a reduction factor depending on the reflectivity of the object. Snow glare from a low sun may even lead to an opposite sign for the correction.
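As a numerical illustration of equation 7.14, the sketch below (hypothetical obstruction geometry, isotropic sky assumed) integrates over a single rectangular obstruction and compares the result with the closed-form value:

import math

# Minimal sketch: numerical evaluation of equation 7.14 for one rectangular
# obstruction under an isotropic sky.  The geometry below is hypothetical.

az_width = math.radians(30.0)                               # extent in azimuth, Phi
elev_lo, elev_hi = math.radians(5.0), math.radians(20.0)    # extent in elevation, Theta

def delta_e_sky(phi_width, th1, th2, n=1000):
    """Fraction of isotropic diffuse irradiance blocked by the obstruction."""
    dth = (th2 - th1) / n
    integral = sum(math.sin(th1 + (i + 0.5) * dth) *
                   math.cos(th1 + (i + 0.5) * dth) * dth for i in range(n))
    return (phi_width * integral) / math.pi

frac = delta_e_sky(az_width, elev_lo, elev_hi)
# analytic check: Phi * (sin^2(th2) - sin^2(th1)) / (2 * pi)
check = az_width * (math.sin(elev_hi) ** 2 - math.sin(elev_lo) ** 2) / (2 * math.pi)
print(f"blocked diffuse fraction ~ {frac:.4f} (analytic {check:.4f})")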

7.3.3.2 Installation of pyranometers for measuring global radiation

A pyranometer should be securely attached to whatever mounting stand is available, using the holes provided in the tripod legs or in the baseplate. Precautions should always be taken to avoid subjecting the instrument to mechanical shocks or vibration during installation. This operation is best effected as follows. First, the pyranometer should be oriented so that the emerging leads or the connector are located poleward of the receiving surface. This minimizes heating of the electrical connections by the sun. Instruments with Moll-Gorczynski thermopiles should be oriented so that the line of thermo-junctions (the long side of the rectangular thermopile) points east-west. This constraint sometimes conflicts with the first, depending on the type of instrument, and should have priority since the connector could be shaded, if necessary. When towers are nearby, the instrument should be situated on the side of the tower towards the Equator, and as far away from the tower as practical. Radiation reflected from the ground or the base should not be allowed to irradiate the instrument body from underneath. A cylindrical shading device can be used, but care should be taken to ensure that natural ventilation still occurs and is sufficient to maintain the instrument body at ambient temperature. The pyranometer should then be secured lightly with screws or bolts and levelled with the aid of the levelling screws and spirit-level provided. After this, the retaining screws should be tightened, taking care that the setting is not disturbed so that, when properly exposed, the receiving surface is horizontal, as indicated by the spirit-level. The stand or platform should be sufficiently rigid so that the instrument is protected from severe shocks and the horizontal position of the receiver surface is not changed, especially during periods of high winds and strong solar energy. The cable connecting the pyranometer to its recorder should have twin conductors and be waterproof. The cable should be firmly secured to the mounting stand to minimize rupture or intermittent disconnection in windy weather. Wherever possible, the cable should be properly buried and protected underground if the recorder is located at a distance. The use of shielded cable is recommended; the pyranometer, cable and recorder being connected by a very low resistance conductor to a

common ground. As with other types of thermoelectric devices, care must be exercised to obtain a permanent copper-to-copper junction between all connections prior to soldering. All exposed junctions must be weatherproof and protected from physical damage. After identification of the circuit polarity, the other extremity of the cable may be connected to the data-collection system in accordance with the relevant instructions. 7.3.3.3 Installation of pyranometers for measuring diffuse sky radiation

For measuring or recording separate diffuse sky radiation, the direct solar radiation must be screened from the sensor by a shading device. Where continuous records are required, the pyranometer is usually shaded either by a small metal disc held in the sun’s beam by a sun tracker, or by a shadow band mounted on a polar axis. The first method entails the rotation of a slender arm synchronized with the sun’s apparent motion. If tracking is based on sun synchronous motors or solar almanacs, frequent inspection is essential to ensure proper operation and adjustment, since spurious records are otherwise difficult to detect. Sun trackers with sun-seeking systems minimize the likelihood of such problems. The second method involves frequent personal attention at the site and significant corrections to the record on account of the appreciable screening of diffuse sky radiation by the shading arrangement. Assumptions about the sky radiance distribution and band dimensions are required to correct for the band and increase the uncertainty of the derived diffuse sky radiation compared to that using a sun-seeking disc system. Annex 7.E provides details on the construction of a shading ring and the necessary corrections to be applied. A significant error source for diffuse sky radiation data is the zero irradiance signal. In clear sky conditions the zero irradiance signal is the equivalent of 5 to 10 W m–2 depending on the pyranometer model, and could approach 15 per cent of the diffuse sky irradiance. The Baseline Surface Radiation Network (BSRN) Operations Manual (WMO, 1998) provides methods to minimize the influence of the zero irradiance signal. The installation of a diffuse sky pyranometer is similar to that of a pyranometer which measures global radiation. However, there is the complication of an equatorial mount or shadow-band stand. The distance to a neighbouring pyranometer should be sufficient to guarantee that the shading ring or disc


never shadows it. This may be more important at high latitudes where the sun angle can be very low. Since the diffuse sky radiation from a cloudless sky may be less than one tenth of the global radiation, careful attention should be given to the sensitivity of the recording system. 7.3.3.4 Installation of pyranometers for measuring reflected radiation

The height above the surface should be 1 to 2 m. In summer-time, the ground should be covered by grass that is kept short. For regions with snow in winter, a mechanism should be available to adjust the height of the pyranometer in order to maintain a constant separation between the snow and the instrument. Although the mounting device is within the field of view of the instrument, it should be designed to cause less than 2 per cent error in the measurement. Access to the pyranometer for levelling should be possible without disturbing the surface beneath, especially if it is snow.

7.3.3.5 Maintenance of pyranometers

Pyranometers in continuous operation should be inspected at least once a day and perhaps more frequently, for example when meteorological observations are being made. During these inspections, the glass dome of the instrument should be wiped clean and dry (care should be taken not to disturb routine measurements during the daytime). If frozen snow, glazed frost, hoar frost or rime is present, an attempt should be made to remove the deposit very gently (at least temporarily), with the sparing use of a de-icing fluid, before wiping the glass clean. A daily check should also ensure that the instrument is level, that there is no condensation inside the dome, and that the sensing surfaces are still black. In some networks, the exposed dome of the pyranometer is ventilated continuously by a blower to avoid or minimize deposits in cold weather, and to cool the dome in calm weather situations. The temperature difference between the ventilating air and the ambient air should not be more than about 1 K. If local pollution or sand forms a deposit on the dome, it should be wiped very gently, preferably after blowing off most of the loose material or after wetting it a little, in order to prevent the surface from being scratched. Such abrasive action can appreciably alter the original transmission properties of the material. Desiccators should be kept charged with active material (usually a colour-indicating silica gel).

7.3.3.6 Installation and maintenance of pyranometers on special platforms

Very special care should be taken when installing equipment on such diverse platforms as ships, buoys, towers and aircraft. Radiation sensors mounted on ships should be provided with gimbals because of the substantial motion of the platform. If a tower is employed exclusively for radiation equipment, it may be capped by a rigid platform on which the sensors can be mounted. Obstructions to the horizon should be kept to the side of the platform farthest from the Equator, and booms for holding albedometers should extend towards the Equator. Radiation sensors should be mounted as high as is practicable above the water surface on ships, buoys and towers, in order to keep the effects of water spray to a minimum. Radiation measurements have been taken successfully from aircraft for a number of years. Care must be exercised, however, in selecting the correct pyranometer and proper exposure. Particular attention must be paid during installation, especially for systems that are difficult to access, to ensure the reliability of the observations. It may be desirable, therefore, to provide a certain amount of redundancy by installing duplicate measuring systems at certain critical sites.

7.4 Measurement of total and long-wave radiation

The measurement of total radiation includes both short wavelengths of solar origin (300 to 3 000 nm) and longer wavelengths of terrestrial and atmospheric origin (3 000 to 100 000 nm). The instruments used for this purpose are pyrradiometers. They may be used for measuring either upward or downward radiation flux components, and a pair of them may be used to measure the differences between the two, which is the net radiation. Single-sensor pyrradiometers, with an active surface on both sides, are also used for measuring net radiation. Pyrradiometer sensors must have a constant sensitivity across the whole wavelength range from 300 to 100 000 nm. The measurement of long-wave radiation can be accomplished either indirectly, by subtracting the measured global radiation from the total radiation


measured, or directly, by using pyrgeometers. Most pyrgeometers eliminate the short wavelengths by means of filters which have a constant transparency to long wavelengths while being almost opaque to the shorter wavelengths (300 to 3 000 nm). Some pyrgeometers can be used only during the night as they have no means for eliminating solar short-wave radiation. 7.4.1 instruments for the measurement of total radiation

One problem with instruments for measuring total radiation is that there are no absorbers which have a completely constant sensitivity over the extended range of wavelengths concerned. The use of thermally sensitive sensors requires a good knowledge of the heat budget of the sensor. Otherwise, it is necessary to reduce sensor convective heat losses to near zero by protecting the sensor from the direct influence of the wind. The technical difficulties linked with such heat losses are largely responsible for the fact that net radiative fluxes are determined less precisely than global radiation fluxes. In fact, different laboratories have developed their own pyrradiometers on technical bases which they consider to be the most effective for reducing the convective heat transfer in the sensor. During the last few decades, pyrradiometers have been built which, although not perfect, embody good measurement principles. Thus, there is a great variety of pyrradiometers employing different methods for eliminating, or allowing for, wind effects, as follows: (a) No protection, in which case empirical formulae are used to correct for wind effects; (b) Determination of wind effects by the use of electrical heating; (c) Stabilization of wind effects through artificial ventilation; (d) Elimination of wind effects by protecting the sensor from the wind. Table 7.6 provides an analysis of the sources of error arising in pyrradiometric measurements and proposes methods for determining these errors. It is difficult to determine the precision likely to be obtained in practice. In situ comparisons at different sites between different designs of pyrradiometer yield results manifesting differences of up to 5 to 10 per cent under the best conditions. In order to improve such results, an exhaustive laboratory study should precede the in situ comparison in order to determine the different effects separately.

Table 7.7 lists the characteristics of pyrradiometers of various levels of performance, and the uncertainties to be expected in the measurements obtained from them.

7.4.2 Calibration of pyrradiometers and net pyrradiometers

Pyrradiometers and net pyrradiometers can be calibrated for short-wave radiation using the same methods as those used for pyranometers (see section 7.3.1) using the sun and sky as the source. In the case of one-sensor net pyrradiometers, the downward-looking side must be covered by a cavity of known and steady temperature. Long-wave radiation calibration is best done in the laboratory with black body cavities. However, it is possible to perform field calibrations. In the case of a net pyrradiometer, the downward flux L↓ is measured separately by using a pyrgeometer; or the upper receiver may be covered as above with a cavity, and the temperature of the snow or water surface Ts is measured directly, in which case the radiative flux received by the instrument amounts to:

L* = L↓ – εσTs⁴ (7.15)

and:

V = L* · K or K = V/L* (7.16)

where ε is the emittance of the water or snow surface (normally taken as 1); σ is the Stefan-Boltzmann constant (5.670 4 · 10–8 W m–2 K–4); Ts is the underlying surface temperature (K); L↓ is the irradiance measured by the pyrgeometer or calculated from the temperature of the cavity capping the upper receiver (W m–2); L* is the radiative flux at the receiver (W m–2); V is the output of the instrument (µV); and K is sensitivity (µV/(W m–2)). The instrument sensitivities should be checked periodically in situ by careful selection of well-described environmental conditions with slowly varying fluxes. The symmetry of net pyrradiometers requires regular checking. This is done by inverting the instrument, or the pair of instruments, in situ and noting any difference in output. Differences of greater than 2 per cent of the likely full scale between the two directions demand instrument recalibration because either the ventilation rates or absorption factors have become significantly different for the two sensors. Such tests should also be carried out during calibration or installation.
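The short Python sketch below illustrates this field calibration (equations 7.15 and 7.16) with hypothetical values for the pyrgeometer flux, surface temperature and instrument output:

# Minimal sketch of the field calibration in equations 7.15 and 7.16.
# The flux, temperature and output values below are hypothetical.

SIGMA = 5.6704e-8        # Stefan-Boltzmann constant (W m-2 K-4)

L_down = 340.0           # downward long-wave irradiance from the pyrgeometer (W m-2)
T_s = 270.0              # snow/water surface temperature (K)
emissivity = 1.0         # emittance of the underlying surface (normally taken as 1)
V = 390.0                # instrument output (uV)

L_star = L_down - emissivity * SIGMA * T_s ** 4   # equation 7.15 (W m-2)
K = V / L_star                                    # equation 7.16, sensitivity (uV per W m-2)
print(f"L* ~ {L_star:.1f} W m-2, K ~ {K:.2f} uV/(W m-2)")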


Table 7.6. Sources of error in pyrradiometric measurements

Element influencing the measurements: Screening properties
 Nature of influence on pyrradiometers with domes: Spectral characteristics of transmission
 Nature of influence on pyrradiometers without domes: None
 Effects on the precision of measurements: (a) Spectral variations in calibration coefficient; (b) The effect of reduced incident radiation on the detector due to short-wave diffusion in the domes (depends on thickness); (c) Ageing and other variations in the sensors
 Methods for determining these characteristics: (a) Determine spectrally the extinction in the screen; (b) Measure the effect of diffuse sky radiation or measure the effect with a varying angle of incidence; (c) Spectral analysis: compare with a new dome; determine the extinction of the dome

Element influencing the measurements: Convection effects
 Nature of influence with domes: Changes due to non-radiative energy exchanges: sensor–dome environment (thermal resistance)
 Nature of influence without domes: Changes due to non-radiative energy exchanges: sensor–air (variation in areal exchange coefficient)
 Effects on the precision of measurements: Uncontrolled changes due to wind gusts are critical in computing the radiative flux divergence in the lowest layer of the atmosphere
 Methods for determining these characteristics: Study the dynamic behaviour of the instrument as a function of temperature and speed in a wind tunnel

Element influencing the measurements: Effects of hydrometeors (rain, snow, fog, dew, frost) and dust
 Nature of influence with domes: Variation of the spectral transmission plus the non-radiative heat exchange by conduction and change
 Nature of influence without domes: Variation of the spectral character of the sensor and of the dissipation of heat by evaporation
 Effects on the precision of measurements: Changes due to variations in the spectral characteristics of the sensor and to non-radiative energy transfers
 Methods for determining these characteristics: Study the influence of forced ventilation on the effects

Element influencing the measurements: Properties of the sensor surface (emissivity)
 Nature of influence: Depends on the spectral absorption of the blackening substance on the sensor
 Effects on the precision of measurements: Changes in calibration coefficient (a) as a function of spectral response; (b) as a function of intensity and azimuth of incident radiation; (c) as a function of temperature
 Methods for determining these characteristics: (a) Spectrophotometric analysis of the calibration of the absorbing surfaces; (b) Measure the sensor’s sensitivity variability with the angle of incidence

Element influencing the measurements: Temperature effects
 Nature of influence: Non-linearity of the sensor as a function of temperature
 Effects on the precision of measurements: A temperature coefficient is required
 Methods for determining these characteristics: Study the influence of forced ventilation on these effects

Element influencing the measurements: Asymmetry effects
 Nature of influence: (a) Differences between the thermal capacities and resistance of the upward- and downward-facing sensors; (b) Differences in ventilation of the upward- and downward-facing sensors; (c) Control and regulation of sensor levelling
 Effects on the precision of measurements: (a) Influence on the time-constant of the instrument; (b) Error in the determination of the calibration factors for the two sensors
 Methods for determining these characteristics: (a) Control the thermal capacity of the two sensor surfaces; (b) Control the time-constant over a narrow temperature range


Table 7.7. Characteristics of operational pyrradiometers

Characteristic                                          High quality (a)   Good quality (b)   Moderate quality (c)
Resolution (W m–2)                                      1                  5                  10
Stability (annual change; per cent of full scale)       2%                 5%                 10%
Cosine response error at 10° elevation                  3%                 7%                 15%
Azimuth error at 10° elevation (additional to above)    3%                 5%                 10%
Temperature dependence (–20 to 40°C)                    1%                 2%                 5%
Non-linearity (deviation from mean)                     0.5%               2%                 5%
Variation in spectral sensitivity (integrated over the spectral range)   2%   5%   10%

Notes:
(a) Near state of the art; maintainable only at stations with special facilities and specialist staff.
(b) Acceptable for network operations.
(c) Suitable for low-cost networks where moderate to low performance is acceptable.

7.4.3 Instruments for the measurement of long-wave radiation

Over the last decade, significant advances have been made in the measurement of terrestrial radiation by pyrgeometers, which block out solar radiation. Early instruments of this type had significant problems with premature ageing of the materials used to block the short-wave portion of the spectrum, while being transparent to the long-wave portion. However, with the advent of the silicon domed pyrgeometer, this stability problem has been greatly reduced. Nevertheless, the measurement of terrestrial radiation is still more difficult and less understood than the measurement of solar irradiance. Pyrgeometers are subject to the same errors as pyrradiometers (see Table 7.6).

Pyrgeometers have developed in two forms. In the first form, the thermopile receiving surface is covered with a hemispheric dome inside which an interference filter is deposited. In the second form, the thermopile is covered with a flat plate on which the interference filter is deposited. In both cases, the surface on which the interference filter is deposited is made of silicon. The first style of instrument provides a full hemispheric field of view, while for the second a 150° field of view is typical and the hemispheric flux is modelled using the manufacturer's procedures. The argument used for the latter method is that the deposition of filters on the inside of a hemisphere has greater imprecisions than the modelling of the flux below 30° elevations. Both types of instruments are operated on the principle that the measured output signal is the difference between the irradiance emitted from the source and the black-body radiative temperature of the instrument. In general, this can be approximated by the following equation:

L↓i = V/K + 5.670 4 · 10–8 · Td⁴    (7.17)

where L↓i is the infrared terrestrial irradiance (W m–2); V is the voltage output from the sensing element (µV); K is the instrument sensitivity to infrared irradiance (µV/(W m–2)); and Td is the detector temperature (K). Several recent comparisons have been made using instruments of similar manufacture in a variety of measurement configurations. These studies have indicated that, following careful calibration, fluxes measured at night agree to within 2 per cent, but in periods of high solar energy the difference between instruments may reach 13 per cent. The reason for the differences is that the silicon dome and the associated interference filter do not have a sharp and reproducible cut-off between solar and terrestrial radiation, and it is not a perfect reflector of solar energy. Thus, solar heating occurs. By shading the instrument, ventilating it as recommended by ISO (1990a), and measuring the temperature of the dome and the instrument case, this discrepancy can be reduced to less than 5 per cent of the thermopile signal (approximately 15 W m–2). Based upon these and other comparisons, the following recommendations should be followed for the measurement of long-wave radiation: (a) When using pyrgeometers that have a builtin battery circuit to emulate the black-body condition of the instrument, extreme care must be taken to ensure that the battery is

well maintained. Even a small change in the battery voltage will significantly increase the measurement error. If at all possible, the battery should be removed from the instrument, and the case and dome temperatures of the instrument should be measured according to the manufacturer's instructions;
(b) Where possible, both the case and dome temperatures of the instrument should be measured and used in the determination of irradiance;
(c) The instrument should be ventilated;
(d) For best results, the instrument should be shaded from direct solar irradiance by a small sun-tracking disc as used for diffuse sky radiation measurement.
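Returning to equation 7.17, the conversion from thermopile output and detector temperature to infrared irradiance can be sketched as follows; the numbers in the example are hypothetical.

```python
SIGMA = 5.6704e-8  # Stefan-Boltzmann constant (W m-2 K-4)

def pyrgeometer_irradiance(v_uV, k_uV_per_Wm2, t_detector_K):
    """Infrared terrestrial irradiance (W m-2) from equation 7.17."""
    return v_uV / k_uV_per_Wm2 + SIGMA * t_detector_K ** 4

# Hypothetical example: output -350 uV, sensitivity 4 uV/(W m-2), detector at 288 K.
print(pyrgeometer_irradiance(-350.0, 4.0, 288.0))  # roughly 300 W m-2
```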

These instruments should be calibrated at national or regional calibration centres by using black-body calibration units. Experiments using near-black-body radiators fashioned from large hollowed blocks of ice have also met with good success. The calibration centre should provide information on the best method of determining the atmospheric irradiance from a pyrgeometer depending upon which of the above recommendations are being followed.

7.4.4 Installation of pyrradiometers and pyrgeometers

Pyrradiometers and pyrgeometers are generally installed at a site which is free from obstructions, or at least has no obstruction with an angular size greater than 5° in any direction, and which has a low sun angle at all times during the year. A daily check of the instruments should ensure that:
(a) The instrument is level;
(b) Each sensor and its protection devices are kept clean and free from dew, frost, snow and rain;
(c) The domes do not retain water (any internal condensation should be dried up);
(d) The black receiver surfaces have emissivities very close to 1.
Additionally, where polythene domes are used, it is necessary to check from time to time that UV effects have not changed the transmission characteristics. A half-yearly exchange of the upper dome is recommended.

Since it is not generally possible to directly measure the reflected solar radiation and the upward long-wave radiation exactly at the surface level, it is necessary to place the pyranometers and pyrradiometers at a suitable distance from the ground to measure these upward components. Such measurements integrate the radiation emitted by the surface beneath the sensor. For pyranometers and pyrradiometers which have an angle of view of 2π sr and are installed 2 m above the surface, 90 per cent of all the radiation measured is emitted by a circular surface underneath having a diameter of 12 m (this figure is 95 per cent for a diameter of 17.5 m and 99 per cent for one of 39.8 m), assuming that the sensor uses a cosine detector. This characteristic of integrating the input over a relatively large circular surface is advantageous when the terrain has large local variations in emittance, provided that the net pyrradiometer can be installed far enough from the surface to achieve a field of view which is representative of the local terrain. The output of a sensor located too close to the surface will show large effects caused by its own shadow, in addition to the observation of an unrepresentative portion of the terrain. On the other hand, the readings from a net pyrradiometer located too far from the surface can be rendered unrepresentative of the fluxes near that surface because of the existence of undetected radiative flux divergences. Usually a height of 2 m above short homogeneous vegetation is adopted, while in the case of tall vegetation, such as a forest, the height should be sufficient to eliminate local surface heterogeneities adequately.
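The 12 m, 17.5 m and 39.8 m figures quoted above follow from the view factor of a circular source directly beneath a horizontal, cosine-responding receiver at height h, F = r²/(r² + h²), so that d = 2h·√(F/(1 – F)). A minimal sketch, assuming this idealized geometry:

```python
import math

def source_diameter(height_m, fraction):
    """Diameter of the circle beneath a cosine-responding sensor that contributes
    the given fraction of the measured upward flux (idealized view-factor model)."""
    return 2.0 * height_m * math.sqrt(fraction / (1.0 - fraction))

for f in (0.90, 0.95, 0.99):
    print(f, round(source_diameter(2.0, f), 1))
# Prints 12.0, 17.4 and 39.8 m, matching the quoted figures to within rounding.
```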

7.4.5 Recording and data reduction

In general, the text in section 7.1.3 applies to pyrradiometers and pyrgeometers. Furthermore, the following effects can specifically influence the readings of these radiometers, and they should be recorded:
(a) The effect of hydrometeors on non-protected and non-ventilated instruments (rain, snow, dew, frost);
(b) The effect of wind and air temperature;
(c) The drift of zero of the data system. This is much more important for pyrradiometers, which can yield negative values, than for pyranometers, where the zero irradiance signal is itself a property of the net irradiance at the sensor surface.
Special attention should be paid to the position of instruments if the derived long-wave radiation requires subtraction of the solar irradiance component measured by a pyranometer; the


pyrradiometer and pyranometer should be positioned within 5 m of each other and in such a way that they are essentially influenced in the same way by their environment.

Table 7.8. Photopic spectral luminous efficiency values (unity at wavelength of maximum efficacy)

Wavelength (nm)   Photopic V(λ)      Wavelength (nm)   Photopic V(λ)
380               0.000 04           590               0.757
390               0.000 12           600               0.631
400               0.000 4            610               0.503
410               0.001 2            620               0.381
420               0.004 0            630               0.265
430               0.011 6            640               0.175
440               0.023              650               0.107
450               0.038              660               0.061
460               0.060              670               0.032
470               0.091              680               0.017
480               0.139              690               0.008 2
490               0.208              700               0.004 1
500               0.323              710               0.002 1
510               0.503              720               0.001 05
520               0.710              730               0.000 52
530               0.862              740               0.000 25
540               0.954              750               0.000 12
550               0.995              760               0.000 06
560               0.995              770               0.000 03
570               0.952              780               0.000 015
580               0.870

7.5 Measurement of special radiation quantities

7.5.1 Measurement of daylight

Illuminance is the incident flux of radiant energy that emanates from a source with wavelengths between 380 and 780 nm and is weighted by the response of the human eye to energy in this wavelength region. The ICI has defined the response of the human eye to photons with a peak responsivity at 555 nm. Figure 7.2 and Table 7.8 provide the relative response of the human eye normalized to this frequency. Luminous efficacy is defined as the relationship between radiant emittance (W m–2) and luminous emittance (lm). It is a function of the relative luminous sensitivity V(λ) of the human eye and a normalizing factor Km (683) describing the number of lumens emitted per watt of electromagnetic radiation from a monochromatic source of 555.19 nm (the freezing point of platinum), as follows:
Φv = Km ∫₃₈₀⁷⁸⁰ Φ(λ) V(λ) dλ    (7.18)

where Φv is the luminous flux (lm m–2 or lux); Φ(λ) is the spectral radiant flux (W m–2 nm–1); V(λ) is the sensitivity of the human eye; and Km is the normalizing constant relating luminous to radiation quantities. Quantities and units for luminous variables are given in Annex 7.A.
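A minimal numerical sketch of equation 7.18, using a coarse subset of the V(λ) values from Table 7.8; the flat test spectrum is purely illustrative.

```python
KM = 683.0  # lm W-1, maximum luminous efficacy

V_LAMBDA = {  # wavelength (nm): photopic V(lambda), coarse subset of Table 7.8
    400: 0.0004, 450: 0.038, 500: 0.323, 550: 0.995,
    600: 0.631, 650: 0.107, 700: 0.0041, 750: 0.00012,
}

def illuminance(spectral_irradiance, step_nm=50.0):
    """Approximate illuminance (lx) from {wavelength: W m-2 nm-1} sampled on the
    V(lambda) grid, using crude rectangular integration of equation 7.18."""
    return KM * sum(e * V_LAMBDA[wl] * step_nm
                    for wl, e in spectral_irradiance.items() if wl in V_LAMBDA)

flat_spectrum = {wl: 1.0 for wl in V_LAMBDA}  # illustrative 1 W m-2 nm-1 everywhere
print(illuminance(flat_spectrum))
```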

7.5.1.1 Instruments


Illuminance meters comprise a photovoltaic detector, one or more filters to yield sensitivity according to the V(λ) curve, and often a temperature control circuit to maintain signal stability. The ICI has developed a detailed guide to the measurement of daylight (ICI, 1994) which describes expected practices in the installation of equipment, instrument characterization, data-acquisition procedures and initial quality control. The measurement of global illuminance parallels the measurement of global irradiance. However, the standard illuminance meter must be temperature controlled or corrected from at least –10 to 40°C. Furthermore, it must be ventilated to prevent condensation and/or frost from coating the outer surface of the sensing element. Illuminance meters should normally be able to measure fluxes over the range 1 to 20 000 lx. Within this range, uncertainties should remain within the limits of Table 7.9. These values are


Figure 7.2. Relative luminous sensitivity V(λ) of the human eye for photopic vision


Table 7.9. Specification of illuminance meters

Specification               Uncertainty percentage
V(λ) match                  2.5
UV response                 0.2
IR response                 0.2
Cosine response             1.5
Fatigue at 10 klx           0.1
Temperature coefficient     0.1 K–1
Linearity                   0.2
Settling time               0.1 s

based upon ICI recommendations (ICI, 1987), but only for uncertainties associated with high-quality illuminance meters specifically intended for external daylight measurements. Diffuse sky illuminance can be measured following the same principles used for the measurement of diffuse sky irradiance. Direct illuminance measurements should be taken with instruments having a field of view whose open half-angle is no greater than 2.85° and whose slope angle is less than 1.76°.

7.5.1.2 Calibration

Calibrations should be traceable to a Standard Illuminant A following the procedures outlined in ICI (1987). Such equipment is normally available only at national standards laboratories. The calibration and tests of specification should be performed yearly. These should also include tests to determine ageing, zero setting drift, mechanical stability and climatic stability. It is also recommended that a field standard be used to check calibrations at each measurement site between laboratory calibrations.

7.5.1.3 Recording and data reduction

The ICI has recommended that the following climatological variables be recorded:
(a) Global and diffuse sky daylight illuminance on horizontal and vertical surfaces;
(b) Illuminance of the direct solar beam;
(c) Sky luminance for 0.08 sr intervals (about 10° · 10°) all over the hemisphere;
(d) Photopic albedo of characteristic surfaces such as grass, earth and snow.
Hourly or daily integrated values are usually needed. The hourly values should be referenced to true solar time. For the presentation of sky luminance data, stereographic maps depicting isolines of equal luminance are most useful.

7.6 Measurement of UV radiation

Measurements of solar UV radiation are in demand because of its effects on the environment and human health, and because of the enhancement of radiation at the Earth's surface as a result of ozone depletion (Kerr and McElroy, 1993). The UV spectrum is conventionally divided into three parts, as follows:
(a) UV-A is the band with wavelengths of 315 to 400 nm, namely, just outside the visible spectrum. It is less biologically active and its intensity at the Earth's surface does not vary with atmospheric ozone content;
(b) UV-B is defined as radiation in the 280 to 315 nm band. It is biologically active and its intensity at the Earth's surface depends on the atmospheric ozone column, to an extent depending on wavelength. A frequently used expression of its biological activity is its erythemal effect, which is the extent to which it causes the reddening of white human skin;
(c) UV-C, in wavelengths of 100 to 280 nm, is completely absorbed in the atmosphere and does not occur naturally at the Earth's surface.


Figure 7.3. Model results illustrating the effect of increasing ozone levels on the transmission of UV-B radiation through the atmosphere (extraterrestrial irradiance and surface irradiances for total ozone amounts of 250, 300 and 350 milliatmosphere centimetres)


Table 7.10. Requirements for UV-B global spectral irradiance measurements

UV-B:
1. Wavelength resolution: 1.0 nm or better
2. Temporal resolution: 10 min or better
3. Directional (angular): separation into direct and diffuse components or better; radiances
4. Meticulous calibration strategy

Ancillary data:
(a) Required:
  1. Total column ozone (within 100 km)
  2. Aerosol optical depth
  3. Ground albedo
  4. Cloud cover
(b) Highly recommended:
  1. Aerosol profile using lidar
  2. Vertical ozone distribution
  3. Sky brightness
  4. Global solar irradiance
  5. Polarization of zenith radiance
  6. Column water amount

UV-B is the band on which most interest is centred for measurements of UV radiation. An alternative, but now non-standard, definition of the boundary between UV-A and UV-B is 320 nm rather than 315 nm. Measuring UV radiation is difficult because of the small amount of energy reaching the Earth's surface, the variability due to changes in stratospheric ozone levels, and the rapid increase in the magnitude of the flux with increasing wavelength. Figure 7.3 illustrates changes in the spectral irradiance between 290 and 325 nm at the top of the atmosphere and at the surface in W m–2 nm–1. Global UV irradiance is strongly affected by atmospheric phenomena such as clouds, and to a lesser extent by atmospheric aerosols. The influence of surrounding surfaces is also significant because of multiple scattering. This is especially the case in snow-covered areas.

Difficulties in the standardization of UV radiation measurement stem from the variety of uses to which the measurements are put. Unlike most meteorological measurements, standards based upon global needs have not yet been reached. In many countries, measurements of UV radiation are not taken by Meteorological Services, but by health or environmental protection authorities. This leads to further difficulties in the standardization of instruments and methods of observation. Guidelines and standard procedures have been developed on how to characterize and calibrate UV spectroradiometers and UV filter radiometers used to measure solar UV irradiance (see WMO, 1996; 1999a; 1999b; 2001). Application of the recommended procedures for data quality assurance performed at sites operating instruments for solar UV radiation measurements will ensure a valuable UV radiation database. This is needed to derive a climatology of solar UV irradiance in space and time for studies of the Earth's climate. Requirements for measuring sites and instrument specifications are also provided in these documents. Requirements for UV-B measurements were put forward in the WMO Global Ozone Research and Monitoring Project (WMO, 1993b) and are reproduced in Table 7.10. The following instrument descriptions are provided for general information and for assistance in selecting appropriate instrumentation.

7.6.1 Instruments

Three general types of instruments are available commercially for the measurement of UV radiation. The first class of instruments use broadband filters. These instruments integrate over either the UV-B or UV-A spectrum or the entire broadband UV region responsible for affecting human health. The second class of instruments use one or more interference filters to integrate over discrete portions of the UV-A and/or UV-B spectrum. The third class of instruments are
Figure 7.4. Erythemal curves as presented by Parrish, Jaenicke and Anderson (1982) and McKinlay and Diffey (1987), normalized to 1 at 250 nm


spectroradiometers that measure across a predefined portion of the spectrum sequentially using a fixed passband.

7.6.1.1 Broadband sensors

Most, but not all, broadband sensors are designed to measure a UV spectrum that is weighted by the erythemal function proposed by McKinlay and Diffey (1987) and reproduced in Figure 7.4. Another action spectrum found in some instruments is that of Parrish, Jaenicke and Anderson (1982). Two methods (and their variations) are used to accomplish this hardware weighting.

One means of obtaining erythemal weighting is to first filter out nearly all visible wavelength light using UV-transmitting, black-glass blocking filters. The remaining radiation then strikes a UV-sensitive phosphor. In turn, the green light emitted by the phosphor is filtered again by using coloured glass to remove any non-green visible light before impinging on a gallium arsenide or a gallium arsenide phosphide photodiode. The quality of the instrument is dependent on such items as the quality of the outside protective quartz dome, the cosine response of the instrument, the temperature stability, and the ability of the manufacturer to match the erythemal curve with a combination of glass and diode characteristics. Instrument temperature stability is crucial, both with respect to the electronics and the response of the phosphor to incident UV radiation. Phosphor efficiency decreases by approximately 0.5 per cent K–1 and its wavelength response curve is shifted by approximately 1 nm longer every 10 K. This latter effect is particularly important because of the steepness of the radiation curve at these wavelengths.

More recently, instruments have been developed to measure erythemally weighted UV irradiance using thin-film metal interference filter technology and specially developed silicon photodiodes. These overcome many problems associated with phosphor technology, but must contend with very low photodiode signal levels and filter stability.

Other broadband instruments use one or the other measurement technology to measure the complete spectra by using either a combination of glass filters or interference filters. The bandpass is as narrow as 20 nm full-width half-maximum (FWHM) to as wide as 80 nm FWHM for instruments measuring a combination of UV-A and UV-B radiation. Some manufacturers of these instruments provide simple algorithms to approximate erythemal dosage from the unweighted measurements.

The maintenance of these instruments consists of ensuring that the domes are cleaned, the instrument is level, the desiccant (if provided) is active, and the heating/cooling system is working correctly, if so equipped. Otherwise, the care they require is similar to that of a pyranometer.
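The erythemal weighting that these instruments approximate in hardware can also be applied numerically to a measured spectrum. The sketch below uses a commonly quoted piecewise parameterization of the erythemal action spectrum derived from McKinlay and Diffey (1987); the breakpoints and coefficients should be checked against that reference before operational use.

```python
def erythemal_weight(wavelength_nm):
    """Commonly quoted piecewise fit to the erythemal action spectrum
    (after McKinlay and Diffey, 1987); verify against the reference before use."""
    if wavelength_nm <= 298.0:
        return 1.0
    if wavelength_nm <= 328.0:
        return 10.0 ** (0.094 * (298.0 - wavelength_nm))
    if wavelength_nm <= 400.0:
        return 10.0 ** (0.015 * (140.0 - wavelength_nm))
    return 0.0

def erythemal_irradiance(spectrum, step_nm=1.0):
    """Erythemally weighted irradiance (W m-2) from {wavelength: W m-2 nm-1}."""
    return sum(e * erythemal_weight(wl) * step_nm for wl, e in spectrum.items())
```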

7.6.1.2 Narrowband sensors

The definition of narrowband for this classification of instrument is vague. The widest bandwidth for instruments in this category is 10 nm FWHM. The narrowest bandwidth at present for commercial instruments is of the order of 2 nm FWHM. These sensors use one or more interference filters to obtain information about a portion of the UV spectra. The simplest instruments consist of a single filter, usually at a wavelength that can be measured by a good-quality, UV-enhanced photodiode. Wavelengths near 305 nm are typical for such instruments. The out-of-band rejection of such filters should be equal to, or greater than, 10⁻⁶ throughout the sensitive region of the detector. Higher quality instruments of this type either use Peltier cooling to maintain a constant temperature near 20°C or heaters to increase the instrument filter and diode temperatures to above normal ambient temperatures, usually 40°C. However, the latter alternative markedly reduces the life of interference filters. A modification of this type of instrument uses a photomultiplier tube instead of the photodiode. This allows the accurate measurement of energy from shorter wavelengths and lower intensities at all measured wavelengths.

Manufacturers of instruments that use more than a single filter often provide a means of reconstructing the complete UV spectrum through modelled relationships developed around the measured wavelengths. Single wavelength instruments are used similarly to supplement the temporal and spatial resolution of more sophisticated spectrometer networks or for long-term accurate monitoring of specific bands to detect trends in the radiation environment.

The construction of the instruments must be such that the radiation passes through the filter close to normal incidence so that wavelength shifting to shorter wavelengths is avoided. For


example, a 10° departure from normal incidence may cause a wavelength shift of 1.5 nm, depending on the refractive index of the filter. The effect of temperature can also be significant in altering the central wavelength by about 0.012 nm K–1 on very narrow filters (< 1 nm).

Maintenance for simple one-filter instruments is similar to that of the broadband instruments. For instruments that have multiple filters in a moving wheel assembly, maintenance will include determining whether or not the filter wheel is properly aligned. Regular testing of the high-voltage power supply for photomultiplier-equipped instruments and checking the quality of the filters are also recommended.
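The wavelength shift with angle of incidence noted above can be estimated from the usual first-order expression for a tilted interference filter, λ(θ) = λ0·√(1 – (sin θ/n*)²), where n* is the effective refractive index of the filter. The value of n* used below is an assumed illustration, not a figure quoted in this Guide.

```python
import math

def tilted_centre_wavelength(lambda0_nm, incidence_deg, n_eff):
    """First-order centre wavelength of an interference filter tilted away from
    normal incidence; n_eff is the filter's effective refractive index."""
    s = math.sin(math.radians(incidence_deg)) / n_eff
    return lambda0_nm * math.sqrt(1.0 - s * s)

# Assumed illustration: a 305 nm filter with n_eff = 1.7 tilted by 10 degrees.
print(305.0 - tilted_centre_wavelength(305.0, 10.0, 1.7))  # shift of about 1.6 nm
```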

7.6.1.3 Spectroradiometers

The most sophisticated commercial instruments are those that use either ruled or holographic gratings to disperse the incident energy into a spectrum. The low energy of the UV radiation compared with that in the visible spectrum necessitates a strong out-of-band rejection. This is achieved by using a double monochromator or by blocking filters, which transmit only UV radiation, in conjunction with a single monochromator. A photomultiplier tube is most commonly used to measure the output from the monochromator. Some less expensive instruments use photodiode or charge-coupled detector arrays. These instruments are unable to measure energy in the shortest wavelengths of the UV-B radiation and generally have more problems associated with stray light.

Monitoring instruments are now available with several self-checking features. Electronic tests include checking the operation of the photomultiplier and the analogue to digital conversion. Tests to determine whether the optics of the instrument are functioning properly include testing the instrument by using internal mercury lamps and standard quartz halogen lamps. While these do not give absolute calibration data, they provide the operator with information on the stability of the instrument both with respect to spectral alignment and intensity.

Commercially available instruments are constructed to provide measurement capabilities from approximately 290 nm to the mid-visible wavelengths, depending upon the type of construction and configuration. The bandwidth of the measurements is usually between 0.5 and 2.0 nm. The time required to complete a full scan across the grating depends upon both the wavelength resolution and the total spectrum to be measured. Scan times to perform a spectral scan across the UV region and part of the visible region (290 to 450 nm) with small wavelength steps range from less than 1 min per scan with modern fast scanning spectroradiometers to about 10 min for some types of conventional high-quality spectroradiometers.

For routine monitoring of UV radiation it is recommended that the instrument either be environmentally protected or developed in such a manner that the energy incident on a receiver is transmitted to a spectrometer housed in a controlled climate. In both cases, care must be taken in the development of optics so that uniform responsivity is maintained down to low solar elevations.

The maintenance of spectroradiometers designed for monitoring UV-B radiation requires well-trained on-site operators who will care for the instruments. It is crucial to follow the manufacturer's maintenance instructions because of the complexity of this instrument.

7.6.2 Calibration

The calibration of all sensors in the UV-B is both very important and difficult. Guidelines on the calibration of UV spectroradiometers and UV filter radiometers have been given in WMO (1996; 1999a; 1999b; 2001) and in the relevant scientific literature. Unlike pyranometers, which can be traced back to a standard set of instruments maintained at the WRR, these sensors must be either calibrated against light sources or against trap detectors. The latter, while promising in the long-term calibration of narrowband filter instruments, are still not readily available. Therefore, the use of standard lamps that are traceable to national standards laboratories remains the most common means of calibrating sensors measuring in the UV-B. Many countries do not have laboratories capable of characterizing lamps in the UV. In these countries, lamps are usually traceable to the National Institute of Standards and Technology in the United States or to the Physikalisch-Technische Bundesanstalt in Germany. It is estimated that a 5 per cent uncertainty in spot measurements at 300 nm can be achieved only under the most rigorous conditions at the present


time. The uncertainty of measurements of daily totals is about the same, using best practice. Fast changes in cloud cover and/or cloud optical depths at the measuring site require fast spectral scans and small sampling time steps between subsequent spectral scans, in order to obtain representative daily totals of spectral UV irradiance. Measurements of erythemal irradiance would have uncertainties typically in the range 5 to 20 per cent, depending on a number of factors, including the quality of the procedures and the equipment. The sources of error are discussed in the following paragraphs and include: (a) Uncertainties associated with standard lamps; (b) The stability of instruments, including the stability of the spectral filter and, in older instruments, temperature coefficients; (c) Cosine error effects; (d) The fact that the calibration of an instrument varies with wavelength, and that: (i) The spectrum of a standard lamp is not the same as the spectrum being measured; (ii) The spectrum of the UV-B irradiance being measured varies greatly with the solar zenith angle. The use of standard lamps as calibration sources leads to large uncertainties at the shortest wavelengths, even if the transfer of the calibration is perfect. For example, at 250 nm the uncertainty associated with the standard irradiance is of the order of 2.2 per cent. When transferred to a standard lamp, another 1 per cent uncertainty is added. At 350 nm, these uncertainties decrease to approximately 1.3 and 0.7 per cent, respectively. Consideration must also be given to the set-up and handling of standard lamps. Even variations as small as 1 per cent in the current, for example, can lead to errors in the UV flux of 10 per cent or more at the shortest wavelengths. Inaccurate distance measurements between the lamp and the instrument being calibrated can also lead to errors in the order of 1 per cent as the inverse square law applies to the calibration. Webb, and others (1994) discuss various aspects of uncertainty as related to the use of standard lamps in the calibration of UV or visible spectroradiometers. While broadband instruments are the least expensive to purchase, they are the most difficult to characterize. The problems associated with these instruments stem from: (a) the complex set of filters used to integrate the incoming radiation into the erythemal signal; and (b) the fact that the spectral nature of the atmosphere changes with air

mass, ozone amount and other atmospheric constituents that are probably unknown to the instrument user. Even if the characterization of the instrument by using calibrated lamp sources is perfect, the changing spectral properties between the atmosphere and the laboratory would affect the uncertainty of the final measurements. The use of high-output deuterium lamps, a double monochromator and careful filter selection will help in the characterization of these instruments, but the number of laboratories capable of calibrating these devices is extremely limited. Narrowband sensors are easier to characterize than broadband sensors because of the smaller variation in calibrating source intensities over the smaller wavelength pass-band. Trap detectors could potentially be used effectively for narrowband sensors, but have been used only in research projects to date. In recalibrating these instruments, whether they have a single filter or multiple filters, care must be taken to ensure that the spectral characteristics of the filters have not shifted over time. Spectrometer calibration is straightforward, assuming that the instrument has been maintained between calibrations. Once again, it must be emphasized that the transfer from the standard lamp is difficult because of the care that must be taken in setting up the calibration (see above). The instrument should be calibrated in the same position as that in which the measurements are to be taken, as many spectroradiometers are adversely affected by changes in orientation. The calibration of a spectrometer should also include testing the accuracy of the wavelength positioning of the monochromator, checking for any changes in internal optical alignment and cleanliness, and an overall test of the electronics. Periodic testing of the out-of-band rejection, possibly by scanning a helium cadmium laser (λ = 325 nm), is also advisable. Most filter instrument manufacturers indicate a calibration frequency of once a year. Spectroradiometers should be calibrated at least twice a year and more frequently if they do not have the ability to perform self-checks on the photomultiplier output or the wavelength selection. In all cases, absolute calibrations of the instruments should be performed by qualified technicians at the sites on a regular time schedule. The sources used for calibration must guarantee that the calibration can be traced back to absolute


radiation standards kept at certified national metrological institutes. If the results of quality assurance routines applied at the sites indicate a significant change in an instrument’s performance or changes of its calibration level over time, an additional calibration may be needed in between two regular calibrations. All calibrations should be based on expertise and documentation available at

the site and on the guidelines and procedures such as those published in WMO (1996; 1999a; 1999b; 2001). In addition to absolute calibrations of instruments, inter-comparisons between the sources used for calibration, for example, calibration lamps, and the measuring instruments are useful to detect and remove inconsistencies or systematic differences between station instruments at different sites.


ANNEX 7.A NOMENCLATURE OF RADIOMETRIC AND PHOTOMETRIC QUANTITIES

(1) Radiometric quantities

Radiant energy: symbol Q (W); unit J = W s.
Radiant flux: symbol Φ (P); unit W; relation Φ = dQ/dt; remarks: power.
Radiant flux density: symbol (M), (E); unit W m–2; relation dΦ/dA = d²Q/(dA · dt); remarks: radiant flux of any origin crossing an area element.
Radiant exitance: symbol M; unit W m–2; relation M = dΦ/dA; remarks: radiant flux of any origin emerging from an area element.
Irradiance: symbol E; unit W m–2; relation E = dΦ/dA; remarks: radiant flux of any origin incident onto an area element.
Radiance: symbol L; unit W m–2 sr–1; relation L = d²Φ/(dΩ · dA · cos θ); remarks: the radiance is a conservative quantity in an optical system.
Radiant exposure: symbol H; unit J m–2; relation H = dQ/dA = ∫ E dt (integrated from t1 to t2); remarks: may be used for daily sums of global radiation, etc.
Radiant intensity: symbol I; unit W sr–1; relation I = dΦ/dΩ; remarks: may be used only for radiation outgoing from “point sources”.

(2) Photometric quantities

Quantity of light: symbol Qv; unit lm s.
Luminous flux: symbol Φv; unit lm.
Luminous exitance: symbol Mv; unit lm m–2.
Illuminance: symbol Ev; unit lm m–2 = lx.
Light exposure: symbol Hv; unit lm m–2 s = lx s.
Luminous intensity: symbol Iv; unit lm sr–1 = cd.
Luminance: symbol Lv; unit lm m–2 sr–1 = cd m–2.
Luminous flux density: symbol (Mv; Ev); unit lm m–2.


(3) Optical characteristics

Emissivity (ε): ε = 1 for a black body.
Absorptance (α): α = Φa/Φi, where Φa and Φi are the absorbed and incident radiant flux, respectively.
Reflectance (ρ): ρ = Φr/Φi, where Φr is the reflected radiant flux.
Transmittance (τ): τ = Φt/Φi, where Φt is the radiant flux transmitted through a layer or a surface.
Optical depth (δ): τ = e–δ. In the atmosphere, δ is defined in the vertical. Optical thickness equals δ/cos θ, where θ is the apparent zenith angle.


ANNEX 7.B METEOROLOGICAL RADIATION QUANTITIES, SYMBOLS AND DEFINITIONS

Downward radiation
  Symbols: Φ↓ (a), Q↓, M↓, E↓, L↓, H↓
  Relations: Φ↓ = Φg↓ + Φl↓; Q↓ = Qg↓ + Ql↓; M↓ = Mg↓ + Ml↓; E↓ = Eg↓ + El↓; L↓ = Lg↓ + Ll↓; H↓ = Hg↓ + Hl↓ (g = global, l = long wave)
  Definitions and remarks: downward radiant flux, radiant energy, radiant exitance (b), irradiance, radiance, radiant exposure for a specified time interval
  Units: W; J (W s); W m–2; W m–2; W m–2 sr–1; J m–2 per time interval

Upward radiation
  Symbols: Φ↑ (a), Q↑, M↑, E↑, L↑, H↑
  Relations: Φ↑ = Φr↑ + Φl↑; Q↑ = Qr↑ + Ql↑; M↑ = Mr↑ + Ml↑; E↑ = Er↑ + El↑; L↑ = Lr↑ + Ll↑; H↑ = Hr↑ + Hl↑
  Definitions and remarks: upward radiant flux, radiant energy, radiant exitance, irradiance, radiance, radiant energy per unit area for a specified time interval
  Units: W; J (W s); W m–2; W m–2; W m–2 sr–1; J m–2 per time interval

Global radiation
  Symbol: Eg↓
  Relation: Eg↓ = E cos θ + Ed↓
  Definitions and remarks: hemispherical irradiance on a horizontal surface (θ = apparent solar zenith angle) (c); subscript d = diffuse
  Units: W m–2

Sky radiation: downward diffuse solar radiation
  Symbols: Φd↓, Qd↓, Md↓, Ed↓, Ld↓, Hd↓
  Relations and units: as for downward radiation
  Definitions and remarks: subscript d = diffuse

Upward/downward long-wave radiation
  Symbols: Φl↑, Φl↓; Ql↑, Ql↓; Ml↑, Ml↓; El↑, El↓; Hl↑, Hl↓
  Relations and units: as for downward radiation
  Definitions and remarks: subscript l = long wave. If only atmospheric radiation is considered, the subscript a may be added, e.g. Φl,a↑

Reflected solar radiation
  Symbols: Φr↑, Qr↑, Mr↑, Er↑, Lr↑, Hr↑
  Relations and units: as for downward radiation
  Definitions and remarks: subscript r = reflected (the subscripts s (specular) and d (diffuse) may be used if a distinction is to be made between these two components)

Net radiation
  Symbols: Φ*, Q*, M*, E*, L*, H*
  Relations: Φ* = Φ↓ – Φ↑; Q* = Q↓ – Q↑; M* = M↓ – M↑; E* = E↓ – E↑; L* = L↓ – L↑; H* = H↓ – H↑
  Definitions and remarks: the subscript g or l is to be added to each of the symbols if only short-wave or long-wave net radiation quantities are considered
  Units: as for downward radiation

Direct solar radiation
  Symbol: E
  Relation: E = E0 τ = E0 e^(–δ/cos θ)
  Definitions and remarks: τ = atmospheric transmittance; δ = optical depth (vertical)
  Units: W m–2

Solar constant
  Symbol: E0
  Definitions and remarks: solar irradiance, normalized to mean sun-Earth distance
  Units: W m–2

Notes:
(a) The symbols – or + could be used instead of ↓ or ↑ (e.g., Φ+ ≡ Φ↑).
(b) Exitance is radiant flux emerging from the unit area; irradiance is radiant flux received per unit area. For flux density in general, the symbol M or E can be used. Although not specifically recommended, the symbol F, defined as Φ/area, may also be introduced.
(c) In the case of inclined surfaces, θ is the angle between the normal to the surface and the direction to the sun.
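The relation for direct solar radiation in the table above, E = E0 τ = E0 e^(–δ/cos θ), can be evaluated directly; the optical depth used in the sketch below is an assumed illustrative value.

```python
import math

def direct_irradiance(e0, optical_depth, zenith_deg):
    """Direct solar irradiance at the surface, E = E0 * exp(-delta / cos(theta)),
    with delta the vertical (total) optical depth."""
    return e0 * math.exp(-optical_depth / math.cos(math.radians(zenith_deg)))

# Illustration: E0 = 1367 W m-2, assumed vertical optical depth 0.30,
# apparent solar zenith angle 60 degrees.
print(direct_irradiance(1367.0, 0.30, 60.0))  # about 750 W m-2
```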


ANNEX 7.C SPECIFICATIONS FOR WORLD, REGIONAL AND NATIONAL RADIATION CENTRES

World radiation centres The World Radiation Centres were designated by the Executive Committee at its thirtieth session in 1978 through Resolution 11 (EC-XXX) to serve as centres for the international calibration of meteorological radiation standards within the global network and to maintain the standard instruments for this purpose. A World Radiation Centre shall fulfil the following requirements. It should either: 1. (a) Possess and maintain a group of at least three stable absolute pyrheliometers, with a traceable 95 per cent uncertainty of less than 1 W m–2 to the World Radiometric Reference, and in stable, clear sun conditions with direct irradiances above 700 Wm–2, 95 per cent of any single measurements of direct solar irradiance will be expected to be within 4 W m–2 of the irradiance. The World Radiation Centre Davos is requested to maintain the World Standard Group for realization of the World Radiometric Reference; (b) It shall undertake to train specialists in radiation; (c) The staff of the centre should provide for continuity and include qualified scientists with wide experience in radiation; (d) It shall take all steps necessary to ensure, at all times, the highest possible quality of its standards and testing equipment; (e) It shall serve as a centre for the transfer of the World Radiometric Reference to the regional centres; (f) It shall have the necessary laboratory and outdoor facilities for the simultaneous comparison of large numbers of instruments and for data reduction; (g) It shall follow closely or initiate developments leading to improved standards and/or methods in meteorological radiometry; (h) It shall be assessed an international agency or by CIMO experts, at least every five years, to verify traceablility of the direct solar radiation measurements; or 2. (a) Provide and maintain an archive for solar radiation data from all the Member States of WMO;

(b) The staff of the centre should provide for continuity and include qualified scientists with wide experience in radiation;
(c) It shall take all steps necessary to ensure, at all times, the highest possible quality of, and access to, its database;
(d) It shall be assessed by an international agency or by CIMO experts, at least every five years.

Regional Radiation Centres

A Regional Radiation Centre is a centre designated by a regional association to serve as a centre for intraregional comparisons of radiation instruments within the Region and to maintain the standard instrument necessary for this purpose. A Regional Radiation Centre shall satisfy the following conditions before it is designated as such and shall continue to fulfil them after being designated:
(a) It shall possess and maintain a standard group of at least three stable pyrheliometers, with a traceable 95 per cent uncertainty of less than 1 W m–2 to the World Standard Group, and in stable, clear sun conditions with direct irradiances above 700 W m–2, 95 per cent of any single measurements of direct solar irradiance will be expected to be within 6 W m–2 of the irradiance;
(b) One of the radiometers shall be compared through a WMO/CIMO sanctioned comparison, or calibrated, at least once every five years against the World Standard Group;
(c) The standard radiometers shall be intercompared at least once a year to check the stability of the individual instruments. If the mean ratio, based on at least 100 measurements, and with a 95 per cent uncertainty less than 0.1 per cent, has changed by more than 0.2 per cent, and if the erroneous instrument cannot be identified, a recalibration at one of the World Radiation Centres must be performed prior to further use as a standard;
(d) It shall have, or have access to, the necessary facilities and laboratory equipment for checking and maintaining the accuracy of the auxiliary measuring equipment;

(e) It shall provide the necessary outdoor facilities for simultaneous comparison of national standard radiometers from the Region;
(f) The staff of the centre should provide for continuity and include a qualified scientist with wide experience in radiation;
(g) It shall be assessed by a national or international agency or by CIMO experts, at least every five years, to verify traceability of the direct solar radiation measurements.
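A minimal sketch of the annual stability check described in (c) above, with the thresholds applicable to a Regional Radiation Centre. The 95 per cent uncertainty of the mean ratio is estimated here as roughly twice the standard error of the mean, which is an assumption of this sketch rather than a prescription of the text.

```python
import statistics

def stability_check(ratios, u95_limit=0.001, drift_limit=0.002, reference_ratio=1.0):
    """Evaluate a series of instrument-to-standard irradiance ratios.

    ratios      : at least 100 individual ratio measurements
    u95_limit   : required 95% uncertainty of the mean ratio (0.1% for regional centres)
    drift_limit : allowed change of the mean ratio (0.2% for regional centres)
    Returns (mean_ratio, u95, drift_detected); a detected drift calls for
    recalibration if the erroneous instrument cannot be identified.
    """
    if len(ratios) < 100:
        raise ValueError("at least 100 measurements are required")
    mean_ratio = statistics.fmean(ratios)
    u95 = 2.0 * statistics.stdev(ratios) / len(ratios) ** 0.5  # ~95% uncertainty of the mean
    drifted = abs(mean_ratio - reference_ratio) > drift_limit * reference_ratio
    return mean_ratio, u95, (u95 <= u95_limit and drifted)
```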

National Radiation Centres

A National Radiation Centre is a centre designated at the national level to serve as a centre for the calibration, standardization and checking of the instruments used in the national network of radiation stations and for maintaining the national standard instrument necessary for this purpose. A National Radiation Centre shall satisfy the following requirements:
(a) It shall possess and maintain at least two pyrheliometers for use as a national reference for the calibration of radiation instruments in the national network of radiation stations with a traceable 95 per cent uncertainty of less than 4 W m–2 to the regional representation of the World Radiometric Reference, and in stable, clear sun conditions with direct irradiances above 700 W m–2, 95 per cent of any single measurements of direct solar irradiance will be expected to be within 20 W m–2 of the irradiance;
(b) One of the national standard radiometers shall be compared with a regional standard at least once every five years;
(c) The national standard radiometers shall be intercompared at least once a year to check the stability of the individual instruments. If the mean ratio, based on at least 100 measurements, and with a 95 per cent uncertainty less than 0.2 per cent, has changed by more than 0.6 per cent and if the erroneous instrument cannot be identified, a recalibration at one of the Regional Radiation Centres must be performed prior to further use as a standard;
(d) It shall have, or have access to, the necessary facilities and equipment for checking the performance of the instruments used in the national network;
(e) The staff of the centre should provide for continuity and include a qualified scientist with experience in radiation.
National Radiation Centres shall be responsible for preparing and keeping up to date all necessary technical information for the operation and maintenance of the national network of radiation stations. Arrangements should be made for the collection of the results of all radiation measurements taken in the national network of radiation stations, and for the regular scrutiny of these results with a view to ensuring their accuracy and reliability. If this work is done by some other body, the National Radiation Centre shall maintain close liaison with the body in question.

List of World and Regional Radiation Centres

World Radiation Centres
Davos (Switzerland)
St Petersburg² (Russian Federation)

Regional Radiation Centres
Region I (Africa):
  Cairo (Egypt)
  Khartoum (Sudan)
  Kinshasa (Democratic Republic of the Congo)
  Lagos (Nigeria)
  Tamanrasset (Algeria)
  Tunis (Tunisia)
Region II (Asia):
  Pune (India)
  Tokyo (Japan)
Region III (South America):
  Buenos Aires (Argentina)
  Santiago (Chile)
  Huayao (Peru)
Region IV (North America, Central America and the Caribbean):
  Toronto (Canada)
  Boulder (United States)
  Mexico City/Colima (Mexico)
Region V (South-West Pacific):
  Melbourne (Australia)
Region VI (Europe):
  Budapest (Hungary)
  Davos (Switzerland)
  St Petersburg (Russian Federation)
  Norrköping (Sweden)
  Trappes/Carpentras (France)
  Uccle (Belgium)
  Lindenberg (Germany)

² Mainly operated as a World Radiation Data Centre under the Global Atmosphere Watch Strategic Plan.


ANNEX 7.D USEFUL FORMULAE

General

All astronomical data can be derived from tables in the nautical almanacs or ephemeris tables. However, approximate formulae are presented for practical use. Michalsky (1988a, b) compared several sets of approximate formulae and found that the best are the equations presented as convenient approximations in the Astronomical Almanac (United States Naval Observatory, 1993). They are reproduced here for convenience.

The position of the sun

To determine the actual location of the sun, the following input values are required:
(a) Year;
(b) Day of year (for example, 1 February is day 32);
(c) Fractional hour in universal time (UT) (for example, hours + minute/60 + number of hours from Greenwich);
(d) Latitude in degrees (north positive);
(e) Longitude in degrees (east positive).
To determine the Julian date (JD), the Astronomical Almanac determines the present JD from a prime JD set at noon 1 January 2000 UT. This JD is 2 451 545.0. The JD to be determined can be found from:
JD = 2 432 916.5 + delta · 365 + leap + day + hour/24
where:
delta = year – 1949
leap = integer portion of (delta/4)
The constant 2 432 916.5 is the JD for 0000 1 January 1949 and is simply used for convenience. Using the above time, the ecliptic coordinates can be calculated according to the following steps (L, g and l are in degrees):
(a) n = JD – 2 451 545;
(b) L (mean longitude) = 280.460 + 0.985 647 4 · n (0 ≤ L < 360°);
(c) g (mean anomaly) = 357.528 + 0.985 600 3 · n (0 ≤ g < 360°);
(d) l (ecliptic longitude) = L + 1.915 · sin (g) + 0.020 · sin (2g) (0 ≤ l < 360°);

(e) ep (obliquity of the ecliptic) = 23.439 – 0.000 000 4 · n (degrees).

It should be noted that the specifications indicate that all multiples of 360° should be added or subtracted until the final value falls within the specified range. From the above equations, the celestial coordinates can be calculated – the right ascension (ra) and the declination (dec) – by:
tan (ra) = cos (ep) · sin (l)/cos (l)
sin (dec) = sin (ep) · sin (l)
To convert from celestial coordinates to local coordinates, that is, right ascension and declination to azimuth (A) and altitude (a), it is convenient to use the local hour angle (h). This is calculated by first determining the Greenwich mean sidereal time (GMST, in hours) and the local mean sidereal time (LMST, in hours):
GMST = 6.697 375 + 0.065 709 824 2 · n + hour (UT), where 0 ≤ GMST < 24 h
LMST = GMST + (east longitude)/(15° h–1)
From the LMST, the hour angle (ha) is calculated as (ha and ra are in degrees):
ha = 15 · LMST – ra (–12 ≤ ha < 12 h)

Before the sun reaches the meridian, the hour angle is negative. Caution should be observed when using this term, because it is opposite to what some solar researchers use. The calculations of the solar elevation (el) and the solar azimuth (az) follow (az and el are in degrees):
sin (el) = sin (dec) · sin (lat) + cos (dec) · cos (lat) · cos (ha)
and:
sin (az) = –cos (dec) · sin (ha)/cos (el)


cos (az) = (sin (dec) – sin (el) · sin (lat))/(cos (el) · cos (lat))
where the azimuth is from 0° north, positive through east. To take into account atmospheric refraction, and derive the apparent solar elevation (h) or the apparent solar zenith angle, the Astronomical Almanac proposes the following equations:
(a) A simple expression for refraction R for zenith angles less than 75°:
r = 0°.004 52 P tan z/(273 + T)
where z is the zenith distance in degrees; P is the pressure in hectopascals; and T is the temperature in °C.
(b) For zenith angles greater than 75° and altitudes below 15°, the following approximate formula is recommended:
r = P (0.159 4 + 0.019 6 a + 0.000 02 a²)/[(273 + T)(1 + 0.505 a + 0.084 5 a²)]
where a is the elevation (90° – z), with h = el + r and the apparent solar zenith angle z0 = z + r.

Sun-Earth distance

The present-day eccentricity of the orbit of the Earth around the sun is small but significant to the extent that the square of the sun-Earth distance R and, therefore, the solar irradiance at the Earth, varies by 3.3 per cent from the mean. In astronomical units (AU), to an uncertainty of 10–4:
R = 1.000 14 – 0.016 71 · cos (g) – 0.000 14 · cos (2g)
where g is the mean anomaly and is defined above. The solar eccentricity is defined as the mean sun-Earth distance (1 AU, R0) divided by the actual sun-Earth distance, squared:
E0 = (R0/R)²

Air mass

In calculations of extinction, the path length through the atmosphere, which is called the absolute optical air mass, must be known. The relative air mass for an arbitrary atmospheric constituent, m, is the ratio of the air mass along the slant path to the air mass in the vertical direction; hence, it is a normalizing factor. In a plane-parallel, non-refracting atmosphere m is equal to 1/sin h0 or 1/cos z0.

Local apparent time

The mean solar time, on which our civil time is based, is derived from the motion of an imaginary body called the mean sun, which is considered as moving at uniform speed in the celestial equator at a rate equal to the average rate of movement of the true sun. The difference between this fixed time reference and the variable local apparent time is called the equation of time, Eq, which may be positive or negative depending on the relative position of the true sun and the mean sun. Thus:
LAT = LMT + Eq = CT + LC + Eq
where LAT is the local apparent time (also known as TST, true solar time); LMT is the local mean time; CT is the civil time (referred to a standard meridian, thus also called standard time); and LC is the longitude correction (4 min for every degree). LC is positive if the local meridian is east of the standard and vice versa. For the computation of Eq, in minutes, the following approximation may be used:
Eq = 0.017 2 + 0.428 1 cos Θ0 – 7.351 5 sin Θ0 – 3.349 5 cos 2Θ0 – 9.361 9 sin 2Θ0
where Θ0 = 2πdn/365 in radians, or Θ0 = 360 dn/365 in degrees, and where dn is the day number, ranging from 0 on 1 January to 364 on 31 December for a normal year or to 365 for a leap year. The maximum error of this approximation is 35 s (which is excessive for some purposes, such as air-mass determination).
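The position algorithm and the equation-of-time approximation above translate directly into code. A minimal sketch follows; the date and coordinates in the example are arbitrary illustrations.

```python
import math

def solar_position(year, day_of_year, hours_ut, lat_deg, lon_deg):
    """Solar elevation and azimuth (degrees, azimuth from north through east)
    from the approximate almanac formulae reproduced in this annex."""
    delta = year - 1949
    leap = delta // 4                      # integer portion of delta/4
    jd = 2432916.5 + delta * 365 + leap + day_of_year + hours_ut / 24.0
    n = jd - 2451545.0

    mean_lon = (280.460 + 0.9856474 * n) % 360.0                      # L
    g = math.radians((357.528 + 0.9856003 * n) % 360.0)               # mean anomaly
    ecl = math.radians((mean_lon + 1.915 * math.sin(g)
                        + 0.020 * math.sin(2.0 * g)) % 360.0)         # ecliptic longitude
    ep = math.radians(23.439 - 0.0000004 * n)                         # obliquity

    ra = math.degrees(math.atan2(math.cos(ep) * math.sin(ecl), math.cos(ecl))) % 360.0
    dec = math.asin(math.sin(ep) * math.sin(ecl))

    gmst = (6.697375 + 0.0657098242 * n + hours_ut) % 24.0            # hours
    lmst = (gmst + lon_deg / 15.0) % 24.0                             # hours
    ha = math.radians(((15.0 * lmst - ra + 180.0) % 360.0) - 180.0)   # hour angle

    lat = math.radians(lat_deg)
    el = math.asin(math.sin(dec) * math.sin(lat)
                   + math.cos(dec) * math.cos(lat) * math.cos(ha))
    az = math.degrees(math.atan2(-math.cos(dec) * math.sin(ha),
                                 (math.sin(dec) - math.sin(el) * math.sin(lat))
                                 / math.cos(lat))) % 360.0
    return math.degrees(el), az

def equation_of_time(day_number):
    """Equation of time in minutes (approximation above; maximum error about 35 s)."""
    t = 2.0 * math.pi * day_number / 365.0
    return (0.0172 + 0.4281 * math.cos(t) - 7.3515 * math.sin(t)
            - 3.3495 * math.cos(2.0 * t) - 9.3619 * math.sin(2.0 * t))

# Arbitrary illustration: 21 June 2008 (day of year 173), 1200 UT, 46.8N, 9.8E.
print(solar_position(2008, 173, 12.0, 46.8, 9.8))
print(equation_of_time(172))
```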


ANNEX 7.E DIFFUSE SKY RADIATION – CORRECTION FOR A SHADING RING

The shading ring is mounted on two rails oriented parallel to the Earth’s axis, in such a way that the centre of the ring coincides with the pyranometer during the equinox. The diameter of the ring ranges from 0.5 to 1.5 m and the ratio of the width to the radius b/r ranges from 0.09 to 0.35. The adjustment of the ring to the solar declination is made by sliding the ring along the rails. The length of the shading band and the height of the mounting of the rails relative to the pyranometer are determined from the solar position during the summer solstice; the higher the latitude, the longer the shadow band and the lower the rails.

Several authors, for example, Drummond (1956), Dehne (1980) and Le Baron, Peterson and Dirmhirn (1980), have proposed formulae for operational corrections to the sky radiation accounting for the part not measured due to the shadow band. For a ring with b/r < 0.2, the radiation Dv lost during a day can be expressed as:

Dv ≈ (b/r) · cos³δ · ∫ L(t) · sin h(t) dt    (integrated from trise to tset)

where δ is the declination of the sun; t is the hour angle of the sun; trise and tset are the hour angle at sunrise and sunset, respectively, for a mathematical horizon (Φ being the geographic latitude, trise = –tset and cos trise = –tan Φ · tan δ); L(t) is the sky radiance during the day; and h is the solar elevation. With this expression and some assumptions on the sky radiance, a correction factor f can be determined:

f = 1/(1 – Dv/D)

D being the unobscured sky radiation. In the figure below, an example of this correction factor is given for both a clear and an overcast sky, compared with the corresponding empirical curves. It is evident that the deviations from the theoretical curves depend on climatological factors of the station and should be determined experimentally by comparing the instrument equipped with a shading ring with an instrument shaded by a continuously traced disc. If no experimental data are available for the station, data computed for the overcast case with the corresponding b/r should be used. Thus:

(Dv/D) overcast = (b/r) · cos³δ · [(tset – trise) · sin Φ · sin δ + cos Φ · cos δ · (sin tset – sin trise)]

where δ is the declination of the sun; Φ is the geographic latitude; and trise and tset are the solar hour angles for sunrise and sunset, respectively (for details, see above).

Figure. Comparison of calculated and empirically determined correction factors for a shading ring, with b/r = 0.169; f indicates calculated curves and F indicates empirical ones (after Dehne, 1980).


REFERENCES AND FURTHER READING

Bass, A.M. and R.J. Paur, 1985: The ultraviolet cross-sections of ozone: I. The Measurements. Atmospheric Ozone (C.S. Zerefos and A. Ghazi, eds.), Reidel, Dordrecht, pp. 606–610.
Bodhaine, B.A., N.B. Wood, E.G. Dutton and J.R. Slusser, 1999: On Rayleigh optical depth calculations. Journal of Atmospheric and Oceanic Technology, 16, pp. 1854–1861.
Dehne, K., 1980: Vorschlag zur standardisierten Reduktion der Daten verschiedener nationaler Himmelsstrahlungs-Messnetze. Annalen der Meteorologie (Neue Folge), 16, pp. 57–59.
Drummond, A.J., 1956: On the measurement of sky radiation. Archiv für Meteorologie, Geophysik und Bioklimatologie, Serie B, 7, pp. 413–436.
Forgan, B.W., 1996: A new method for calibrating reference and field pyranometers. Journal of Atmospheric and Oceanic Technology, 13, pp. 638–645.
Fröhlich, C. and G.E. Shaw, 1980: New determination of Rayleigh scattering in the terrestrial atmosphere. Applied Optics, Volume 19, Issue 11, pp. 1773–1775.
Frouin, R., P.-Y. Deschamps and P. Lecomte, 1990: Determination from space of atmospheric total water vapour amounts by differential absorption near 940 nm: Theory and airborne verification. Journal of Applied Meteorology, 29, pp. 448–460.
International Commission on Illumination, 1987: Methods of Characterizing Illuminance Meters and Luminance Meters. ICI-No. 69-1987.
International Commission on Illumination, 1994: Guide to Recommended Practice of Daylight Measurement. ICI No. 108-1994.
International Electrotechnical Commission, 1987: International Electrotechnical Vocabulary. Chapter 845: Lighting, IEC 60050-845.
International Organization for Standardization, 1990a: Solar Energy – Specification and Classification of Instruments for Measuring Hemispherical Solar and Direct Solar Radiation. ISO 9060.
International Organization for Standardization, 1990b: Solar Energy – Calibration of Field Pyrheliometers by Comparison to a Reference Pyrheliometer. ISO 9059.
International Organization for Standardization, 1990c: Solar Energy – Field Pyranometers – Recommended Practice for Use. ISO/TR 9901.

International Organization for Standardization, 1992: Solar Energy – Calibration of Field Pyranometers by Comparison to a Reference Pyranometer. ISO 9847.
International Organization for Standardization, 1993: Solar Energy – Calibration of a Pyranometer Using a Pyrheliometer. ISO 9846.
International Organization for Standardization, 1995: Guide to the Expression of Uncertainty in Measurement. Geneva.
Kerr, J.B. and T.C. McElroy, 1993: Evidence for large upward trends of ultraviolet-B radiation linked to ozone depletion. Science, 262, pp. 1032–1034.
Kuhn, M., 1972: Die spektrale Transparenz der antarktischen Atmosphäre. Teil I: Meßinstrumente und Rechenmethoden. Archiv für Meteorologie, Geophysik und Bioklimatologie, Serie B, 20, pp. 207–248.
Lal, M., 1972: On the evaluation of atmospheric turbidity parameters from actinometric data. Geofísica Internacional, Volume 12, Number 2, pp. 1–11.
Le Baron, B.A., W.A. Peterson and I. Dirmhirn, 1980: Corrections for diffuse irradiance measured with shadowbands. Solar Energy, 25, pp. 1–13.
Michalsky, J.J., 1988a: The astronomical almanac's algorithm for approximate solar position (1950–2050). Solar Energy, Volume 40, Number 3, pp. 227–235.
Michalsky, J.J., 1988b: Errata. The astronomical almanac's algorithm for approximate solar position (1950–2050). Solar Energy, Volume 41, Number 1.
McKinlay, A.F. and B.L. Diffey, 1987: A reference action spectrum for ultraviolet induced erythema in human skin. In: W.F. Passchier and B.F.M. Bosnjakovic (eds), Human Exposure to Ultraviolet Radiation: Risks and Regulations, Elsevier, Amsterdam, pp. 83–87.
Parrish, J.A., K.F. Jaenicke and R.R. Anderson, 1982: Erythema and melanogenesis action spectra of normal human skin. Photochemistry and Photobiology, 36, pp. 187–191.
Rüedi, I., 2001: International Pyrheliometer Comparison IPC-IX, Results and Symposium. MeteoSwiss Working Report No. 197, Davos and Zurich.
Schneider, W., G.K. Moortgat, G.S. Tyndall and J.P. Burrows, 1987: Absorption cross-sections of NO2 in the UV and visible region (200–700 nm) at 298 K. Journal of Photochemistry and Photobiology, A: Chemistry, 40, pp. 195–217.

United States Naval Observatory, 1993: The Astronomical Almanac. Nautical Almanac Office, Washington DC.
Vigroux, E., 1953: Contribution à l'étude expérimentale de l'absorption de l'ozone. Annales de Physique, 8, pp. 709–762.
Webb, A.R., B.G. Gardiner, M. Blumthaler and P. Foster, 1994: A laboratory investigation of two ultraviolet spectroradiometers. Photochemistry and Photobiology, Volume 60, No. 1, pp. 84–90.
World Meteorological Organization, 1978: International Operations Handbook for Measurement of Background Atmospheric Pollution. WMO-No. 491, Geneva.
World Meteorological Organization, 1986a: Revised Instruction Manual on Radiation Instruments and Measurements. World Climate Research Programme Publications Series No. 7, WMO/TD-No. 149, Geneva.
World Meteorological Organization, 1986b: Recent Progress in Sunphotometry: Determination of the Aerosol Optical Depth. Environmental Pollution Monitoring and Research Programme Report No. 43, WMO/TD-No. 143, Geneva.
World Meteorological Organization, 1993a: Report of the WMO Workshop on the Measurement of Atmospheric Optical Depth and Turbidity (Silver Spring, United States, 6–10 December 1993). Global Atmosphere Watch Report No. 101, WMO/TD-No. 659, Geneva.
World Meteorological Organization, 1993b: Report of the Second Meeting of the Ozone Research Managers of the Parties to the Vienna Convention for the Protection of the Ozone Layer (Geneva, 10–12 March 1993). WMO Global Ozone Research and Monitoring Project Report No. 32, Geneva.
World Meteorological Organization, 1996: WMO/UMAP Workshop on Broad-band UV Radiometers (Garmisch-Partenkirchen, Germany, 22–23 April 1996). Global Atmosphere Watch Report No. 120, WMO/TD-No. 894, Geneva.
World Meteorological Organization, 1998: Baseline Surface Radiation Network (BSRN): Operations Manual. WMO/TD-No. 879, Geneva.
World Meteorological Organization, 1999a: Guidelines for Site Quality Control of UV Monitoring. Global Atmosphere Watch Report No. 126, WMO/TD-No. 884, Geneva.
World Meteorological Organization, 1999b: Report of the LAP/COST/WMO Intercomparison of Erythemal Radiometers (Thessaloniki, Greece, 13–23 September 1999). WMO Global Atmosphere Watch Report No. 141, WMO/TD-No. 1051, Geneva.
World Meteorological Organization, 2001: Instruments to Measure Solar Ultraviolet Radiation. Part 1: Spectral Instruments. Global Atmosphere Watch Report No. 125, WMO/TD-No. 1066, Geneva.
World Meteorological Organization, 2005: WMO/GAW Experts Workshop on a Global Surface-Based Network for Long Term Observations of Column Aerosol Optical Properties (Davos, Switzerland, 8–10 March 2004). Global Atmosphere Watch Report No. 162, WMO/TD-No. 1287, Geneva.
Young, A.T., 1981: On the Rayleigh-scattering optical depth of the atmosphere. Journal of Applied Meteorology, 20, pp. 328–330.

CHAPTER 8

MEASUREMENT OF SUNSHINE DURATION

8.1 General

The term “sunshine” is associated with the brightness of the solar disc exceeding the background of diffuse sky light, or, as is better observed by the human eye, with the appearance of shadows behind illuminated objects. As such, the term is related more to visual radiation than to energy radiated at other wavelengths, although both aspects are inseparable. In practice, however, the first definition was established directly by the relatively simple Campbell-Stokes sunshine recorder (see section 8.2.3), which detects sunshine if the beam of solar energy concentrated by a special lens is able to burn a special dark paper card. This recorder was already introduced in meteorological stations in 1880 and is still used in many networks. Since no international regulations on the dimensions and quality of the special parts were established, different realizations of the principle gave different sunshine duration values. In order to homogenize the data of the worldwide network for sunshine duration, a special design of the Campbell-Stokes sunshine recorder, the so-called interim reference sunshine recorder (IRSR), was recommended as the reference (WMO, 1962). The improvement made by this “hardware definition” was effective only during the interim period needed for finding a precise physical definition allowing for both designing automatic sunshine recorders and approximating the “scale” represented by the IRSR as near as possible. With regard to the latter, the settlement of a direct solar threshold irradiance corresponding to the burning threshold of the Campbell-Stokes recorders was strongly advised. Investigations at different stations showed that the threshold irradiance for burning the card varied between 70 and 280 W m–2 (Bider, 1958; Baumgartner, 1979). However, further investigations, especially performed with the IRSR in France, resulted in a mean value of 120 W m–2, which was finally proposed as the threshold of direct solar irradiance to distinguish bright sunshine.1 With regard to the spread of test results, a threshold accuracy of 20 per cent in instrument specifications is accepted. A pyrheliometer was recommended as the reference sensor for the detection of the threshold irradiance. For future refinement of the reference, the settlement of the field-of-view angle of the pyrheliometer seems to be necessary (see Part I, Chapter 7, sections 7.2 and 7.2.1.3).

8.1.1 Definition

According to WMO (2003),2 sunshine duration during a given period is defined as the sum of that sub-period for which the direct solar irradiance exceeds 120 W m–2.
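A minimal sketch of this definition applied to a sampled pyrheliometer record follows; the one-minute sampling interval and the sample values are assumptions for illustration.

```python
WMO_THRESHOLD = 120.0  # W m-2, direct solar irradiance threshold for bright sunshine

def sunshine_duration_hours(direct_irradiance_samples, sample_interval_s=60):
    """Sum of the sub-periods for which the direct solar irradiance exceeds 120 W m-2."""
    sunny = sum(1 for s in direct_irradiance_samples if s > WMO_THRESHOLD)
    return sunny * sample_interval_s / 3600.0

# Example: one hour of 1 min samples, half of them above the threshold
print(sunshine_duration_hours([90, 135, 150, 80] * 15))  # 0.5
```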

8.1.2 Units and scales

The physical quantity of sunshine duration (SD) is, evidently, time. The units used are seconds or hours. For climatological purposes, derived terms such as “hours per day” or “daily sunshine hours” are used, as well as percentage quantities, such as “relative daily sunshine duration”, where SD may be related to the extra-terrestrial possible, or to the maximum possible, sunshine duration (SD0 and SDmax, respectively). The measurement period (day, decade, month, year, and so on) is an important addendum to the unit.

8.1.3 Meteorological requirements

Performance requirements are given in Part I, Chapter 1. Hours of sunshine should be measured with an uncertainty of ±0.1 h and a resolution of 0.1 h. Since the number and steepness of the threshold transitions of direct solar radiation determine the possible uncertainty of sunshine duration, the meteorological requirements on sunshine recorders are essentially correlated with the climatological cloudiness conditions (WMO, 1985). In the case of a cloudless sky, only the hourly values at sunrise or sunset constellations can (depending on the amount of dust) be erroneous because of an imperfectly adjusted threshold or spectral dependencies.
1 Recommended by the Commission for Instruments and Methods of Observation at its eighth session (1981) through Recommendation 10 (CIMO-VIII).

2 Recommended by the Commission for Instruments and Methods of Observation at its tenth session (1989) through Recommendation 16 (CIMO-X).


In the case of scattered clouds (cumulus, stratocumulus), the steepness of the transition is high and the irradiance measured from the cloudy sky with a pyrheliometer is generally lower than 80 W m–2; that means low requirements on the threshold adjustment. But the field-of-view angle of the recorder can influence the result if bright cloud clusters are near the sun. The highest precision is required if high cloud layers (cirrus, altostratus) with small variations of the optical thickness attenuate the direct solar irradiance around the level of about 120 W m–2. The field-of-view angle is effective as well as the precision of the threshold adjustment. The requirements on sunshine recorders vary, depending on site and season, according to the dominant cloud formation. The latter can be roughly described by three ranges of relative daily sunshine duration SD/SD0 (see section 8.1.2), namely “cloudy sky” by (0 ≤ SD/SD0 < 0.3), “scattered clouds” by (0.3 ≤ SD/SD0 < 0.7) and “fair weather” by (0.7 ≤ SD/SD0 ≤ 1.0). The results for dominant clouded sky generally show the highest percentage of deviations from the reference.

8.1.3.1 Application of sunshine duration data

One of the first applications of SD data was to characterize the climate of sites, especially of health resorts. This also takes into account the psychological effect of strong solar light on human well-being. It is still used by some local authorities to promote tourist destinations. The description of past weather conditions, for instance of a month, usually contains the course of daily SD data. For these fields of application, an uncertainty of about 10 per cent of mean SD values seemed to be acceptable over many decades.

8.1.3.2 Correlations to other meteorological variables

The most important correlation between sunshine duration and global solar radiation G is described by the so-called Ångström formula:

G/G0 = a + b · (SD/SD0) (8.1)

where G/G0 is the so-called clearness index (related to the extra-terrestrial global irradiation), SD/SD0 is the corresponding sunshine duration (related to the extra-terrestrial possible SD value), and a and b are constants which have to be determined monthly. The uncertainty of the monthly means of daily global irradiation derived in this way from Campbell-Stokes data was found to be lower than 10 per cent in summer, and rose up to 30 per cent in winter, as reported for German stations (Golchert, 1981). The Ångström formula implies the inverse correlation between cloud amount and sunshine duration. This relationship is not fulfilled for high and thin cloudiness and obviously not for cloud fields which do not cover the sun, so that the degree of inverse correlation depends first of all on the magnitude of the statistical data collected (Stanghellini, 1981; Angell, 1990). The improvement of the accuracy of SD data should reduce the scattering of the statistical results, but even perfect data can generate sufficient results only on a statistical basis.
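The following sketch shows how equation 8.1 is typically applied to estimate global irradiation from relative sunshine duration; the coefficients a and b used below are placeholders only since, as stated above, they have to be determined monthly for the station.

```python
# Illustrative use of the Angstrom formula G/G0 = a + b*(SD/SD0); a and b are placeholders.
def global_irradiation(g0, sd, sd0, a=0.25, b=0.50):
    """Estimate G from the extraterrestrial irradiation G0 and the relative sunshine duration SD/SD0."""
    return g0 * (a + b * sd / sd0)

# Example: 40 per cent relative sunshine duration with the placeholder coefficients
print(global_irradiation(g0=35.0, sd=4.0, sd0=10.0))  # result in the same unit as G0
```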
8.1.3.3 Requirement of automated records

Since electrical power is available in an increasing number of places, the advantage of the Campbell-Stokes recorder of being self-sufficient is of decreasing importance. Furthermore, the required daily maintenance of replacing the burn card makes the use of Campbell-Stokes recorders problematic at either automatic weather stations or stations with reduced numbers of personnel. Another essential reason to replace Campbell-Stokes recorders by new automated measurement procedures is to avoid the expense of visual evaluations and to obtain more precise results on data carriers permitting direct computerized data processing.

8.1.4 Measurement methods

The principles used for measuring sunshine duration and the pertinent types of instruments are briefly listed in the following methods:
(a) Pyrheliometric method: Pyrheliometric detection of the transition of direct solar irradiance through the 120 W m–2 threshold (according to Recommendation 10 (CIMO-VIII)). Duration values are readable from time counters triggered by the appropriate upward and downward transitions. Type of instrument: pyrheliometer combined with an electronic or computerized threshold discriminator and a time-counting device.


(b)

Pyranometric method: (i) Pyranometric measurement of global (G) and diffuse (D) solar irradiance to derive the direct solar irradiance as the WMO threshold discriminator value and further as in (a) above. Type of instrument: Radiometer systems of two fitted pyranometers and one sunshade device combined with an electronic or computerized threshold discriminator and a time-counting device. (ii) Pyranometric measurement of global (G) solar irradiance to roughly estimate sunshine duration. Type of instrument: a pyranometer combined with an electronic or computerized device which is able to deliver 10 min means as well as minimum and maximum global (G) solar irradiance within those 10 min.

(c) Burn method: Threshold effect of burning paper caused by focused direct solar radiation (heat effect of absorbed solar energy). The duration is read from the total burn length. Type of instrument: Campbell-Stokes sunshine recorders, especially the recommended version, namely the IRSR (see section 8.2).

(d) Contrast method: Discrimination of the insolation contrasts between some sensors in different positions to the sun with the aid of a specific difference of the sensor output signals which corresponds to an equivalent of the WMO recommended threshold (determined by comparisons with reference SD values) and further as in (b) above. Type of instrument: Specially designed multisensor detectors (mostly equipped with photovoltaic cells) combined with an electronic discriminator and a time counter.

(e) Scanning method: Discrimination of the irradiance received from continuously scanned, small sky sectors with regard to an equivalent of the WMO recommended irradiance threshold (determined by comparisons with reference SD values). Type of instrument: One-sensor receivers equipped with a special scanning device (rotating diaphragm or mirror, for instance) and combined with an electronic discriminator and a time-counting device.

The sunshine duration measurement methods described in the following paragraphs are examples of ways to achieve the above-mentioned principles. Instruments using these methods, with the exception of the Foster switch recorder, participated in the WMO Automatic Sunshine Duration Measurement Comparison in Hamburg from 1988 to 1989 and in the comparison of pyranometers and electronic sunshine duration recorders of Regional Association VI in Budapest in 1984 (WMO, 1986). The description of the Campbell-Stokes sunshine recorder in section 8.2.3 is relatively detailed since this instrument is still widely used in national networks, and the specifications and evaluation rules recommended by WMO should be considered (however, note that this method is no longer recommended,3 since the duration of bright sunshine is not recorded with sufficient consistency). A historical review of sunshine recorders is given in Coulson (1975), Hameed and Pittalwala (1989) and Sonntag and Behrens (1992).

8.2 Instruments and sensors

8.2.1 Pyrheliometric method

8.2.1.1 General

This method, which represents a direct consequence of the WMO definition of sunshine (see section 8.1.1) and is, therefore, recommended to obtain reference values of sunshine duration, requires a weatherproof pyrheliometer and a reliable solar tracker to point the radiometer automatically, or at least semi-automatically, to the position of the sun. The method can be modified by the choice of pyrheliometer, the field-of-view angle of which influences the irradiance measured when clouds surround the sun. The sunshine threshold can be monitored by the continuous comparison of the pyrheliometer output with the threshold equivalent voltage Vth = 120 W m–2 · R µV W–1 m2, which is calculable
3 See Recommendation 10 (CIMO-VIII).


from the responsivity R of the pyrheliometer. A threshold transition is detected if ΔV = V – Vth changes its sign. The connected time counter is running when ΔV > 0.

8.2.1.2 Sources of error

The field-of-view angle is not yet settled by agreed definitions (see Part I, Chapter 7, sections 7.2 and 7.2.1.3). Greater differences between the results of two pyrheliometers with different field-of-view angles are possible, especially if the sun is surrounded by clouds. Furthermore, typical errors of pyrheliometers, namely tilt effect, temperature dependence, non-linearity and zero-offset, depend on the class of the pyrheliometer. Larger errors appear if the alignment to the sun is not precise or if the entrance window is covered by rain or snow.

8.2.2 Pyranometric method

8.2.2.1 General

The pyranometric method to derive sunshine duration data is based on the fundamental relationship between the direct solar radiation (I) and the global (G) and diffuse (D) solar radiation:

I · cos ζ = G – D (8.2)

where ζ is the solar zenith angle and I · cos ζ is the horizontal component of I. To fulfil equation 8.2 exactly, the shaded field-of-view angle of the pyranometer for measuring D must be equal to the field-of-view angle of the pyrheliometer (see Part I, Chapter 7). Furthermore, the spectral ranges, as well as the time-constants of the pyrheliometers and pyranometers, should be as similar as possible. In the absence of a sun-tracking pyrheliometer, but where computer-assisted pyranometric measurements of G and D are available, the WMO sunshine criterion can be expressed according to equation 8.2 by:

(G – D)/cos ζ > 120 W m–2 (8.3)

which is applicable to instantaneous readings. The modifications of this method in different stations concern first of all:
(a) The choice of pyranometer;
(b) The shading device applied (shade ring or shade disc with solar tracker) and its shade geometry (shade angle);
(c) The correction of shade-ring losses.

As a special modification, the replacement of the criterion in equation 8.3 by a statistically derived parameterization formula (to avoid the determination of the solar zenith angle) for applications in more simple data-acquisition systems should be mentioned (Sonntag and Behrens, 1992). The pyranometric method using only one pyranometer to estimate sunshine duration is based on two assumptions on the relation between irradiance and cloudiness as follows:
(a) A rather accurate calculation of the potential global irradiance at the Earth's surface based on the calculated value of the extraterrestrial irradiation (G0) by taking into account diminishing due to scattering in the atmosphere. The diminishing factor depends on the solar elevation h and the turbidity T of the atmosphere. The ratio between the measured global irradiance and this calculated value of the clear sky global irradiance is a good measure for the presence of clouds;
(b) An evident difference between the minimum and maximum value of the global irradiance, measured during a 10 min interval, presumes a temporary eclipse of the sun by clouds. On the other hand, in the case of no such difference, there is no sunshine or sunshine only during the 10 min interval (namely, SD = 0 or SD = 10 min).
Based on these assumptions, an algorithm can be used (Slob and Monna, 1991) to calculate the daily SD from the sum of 10 min SD. Within this algorithm, SD is determined for succeeding 10 min intervals (namely, SD10' = ƒ · 10 min, where ƒ is the fraction of the interval with sunshine, 0 ≤ ƒ ≤ 1). The diminishing factor largely depends on the optical path of the sunlight travelling through the atmosphere. Because this path is related to the elevation of the sun, h = 90° – z, the algorithm discriminates between three time zones. Although usually ƒ = 0 or ƒ = 1, special attention is given to 0 < ƒ < 1. This algorithm is given in the annex. The uncertainty is about 0.6 h for daily sums.
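A sketch of the instantaneous criterion of equation 8.3 applied to sampled data follows; the sample values, argument layout and one-minute interval are assumptions for illustration.

```python
import math

# Illustrative application of equation 8.3: a sample counts as sunshine when (G - D)/cos(zeta) > 120 W m-2.
def sunshine_hours(global_irr, diffuse_irr, zenith_deg, interval_s=60):
    total_s = 0
    for g, d, z in zip(global_irr, diffuse_irr, zenith_deg):
        cos_z = math.cos(math.radians(z))
        if cos_z > 0 and (g - d) / cos_z > 120.0:
            total_s += interval_s
    return total_s / 3600.0

print(sunshine_hours([600, 300], [150, 250], [40.0, 45.0]))  # only the first sample qualifies
```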

8.2.2.2 Sources of error

According to equation 8.3, the measuring errors in global and diffuse solar irradiance are propagated by the calculation of direct solar irradiance and are strongly amplified with increasing solar zenith angles. Therefore, the accuracy of corrections for


losses of diffuse solar energy by the use of shade rings (WMO, 1984a) and the choice of pyranometer quality is of importance to reduce the uncertainty level of the results.

8.2.3 The Campbell-Stokes sunshine recorder (burn method)

The Campbell-Stokes sunshine recorder consists essentially of a glass sphere mounted concentrically in a section of a spherical bowl, the diameter of which is such that the sun's rays are focused sharply on a card held in grooves in the bowl. The method of supporting the sphere differs according to whether the instrument is operated in polar, temperate or tropical latitudes. To obtain useful results, both the spherical segment and the sphere should be made with great precision, the mounting being so designed that the sphere can be accurately centred therein. Three overlapping pairs of grooves are provided in the spherical segment so that the cards can be suitable for different seasons of the year (one pair for both equinoxes), their length and shape being selected to suit the geometrical optics of the system. It should be noted that the aforementioned problem of burns obtained under variable cloud conditions indicates that this instrument, and indeed any instrument using this method, does not provide accurate data of sunshine duration.

The table below summarizes the main specifications and requirements for a Campbell-Stokes sunshine recorder of the IRSR grade. A recorder to be used as an IRSR should comply with the detailed specifications issued by the UK Met Office, and IRSR record cards should comply with the detailed specifications issued by Météo-France.

8.2.3.1 Adjustments

In installing the recorder, the following adjustments are necessary:
(a) The base must be levelled;
(b) The spherical segment should be adjusted so that the centre line of the equinoctial card lies in the celestial Equator (the scale of latitude marked on the bowl support facilitates this task);
(c) The vertical plane through the centre of the sphere and the noon mark on the spherical segment must be in the plane of the geographic meridian (north-south adjustment).

Campbell-Stokes recorder (IRSR grade) specifications

Glass sphere:
 Shape: uniform
 Diameter: 10 cm
 Colour: very pale or colourless
 Refractive index: 1.52 ± 0.02
 Focal length: 75 mm for sodium “d” light

Spherical segment:
 Material: gunmetal or equivalent durability
 Radius: 73 mm
 Additional specifications:
 (a) Central noon line engraved transversely across inner surface
 (b) Adjustment for inclination of segment to horizontal according to latitude
 (c) Double base with provision for levelling and azimuth setting

Record cards:
 Material: good quality pasteboard not affected appreciably by moisture
 Width: accurate to within 0.3 mm
 Thickness: 0.4 ± 0.05 mm
 Moisture effect: within 2 per cent
 Colour: dark, homogeneous, no difference detected in diffuse daylight
 Graduations: hour-lines printed in black


A recorder is best tested for (c) above by observing the image of the sun at the local apparent noon; if the instrument is correctly adjusted, the image should fall on the noon mark of the spherical segment or card.

8.2.3.2 Evaluation

In order to obtain uniform results from Campbell-Stokes recorders, it is especially important to conform closely to the following directions for measuring the IRSR records. The daily total duration of bright sunshine should be determined by marking off on the edge of a card of the same curvature the lengths corresponding to each mark and by measuring the total length obtained along the card at the level of the recording to the nearest tenth of an hour. The evaluation of the record should be made as follows:
(a) In the case of a clear burn with round ends, the length should be reduced at each end by an amount equal to half the radius of curvature of the end of the burn; this will normally correspond to a reduction of the overall length of each burn by 0.1 h;
(b) In the case of circular burns, the length measured should be equal to half the diameter of the burn. If more than one circular burn occurs on the daily record, it is sufficient to consider two or three burns as equivalent to 0.1 h of sunshine; four, five, six burns as equivalent to 0.2 h of sunshine; and so on in steps of 0.1 h;
(c) Where the mark is only a narrow line, the whole length of this mark should be measured, even when the card is only slightly discoloured;
(d) Where a clear burn is temporarily reduced in width by at least a third, an amount of 0.1 h should be subtracted from the total length for each such reduction in width, but the maximum subtracted should not exceed one half of the total length of the burn.

In order to assess the random and systematic errors made while evaluating the records and to ensure the objectivity of the results of the comparison, it is recommended that the evaluations corresponding to each one of the instruments compared be made successively and independently by two or more persons trained in this type of work.

8.2.3.3 Special versions

Since the standard Campbell-Stokes sunshine recorder does not record all the sunshine received during the summer months at stations with latitudes higher than about 65°, some countries use modified versions. One possibility is to use two Campbell-Stokes recorders operated back to back, one of them being installed in the standard manner, while the other should be installed facing north. In many climates, it may be necessary to heat the device to prevent the deposition of frost and dew. Comparisons in climates like that of northern Europe between heated and normally operated instruments have shown that the amount of sunshine not measured by a normal version, but recorded by a heated device, is about 1 per cent of the monthly mean in summer and about 5 to 10 per cent of the monthly mean in winter.

8.2.3.4 Sources of error

The errors of this recorder are mainly generated by the dependence on the temperature and humidity of the burn card as well as by the overburning effect, especially in the case of scattered clouds (Ikeda, Aoshima and Miyake, 1986). The morning values are frequently disturbed by dew or frost at middle and high latitudes.

8.2.4 Contrast-evaluating devices

The Foster sunshine switch is an optical device that was introduced operationally in the network of the United States in 1953 (Foster and Foskett, 1953). It consists of a pair of selenium photocells, one of which is shielded from direct sunshine by a shade ring. The cells are corrected so that in the absence of the direct solar beam no signal is produced. The switch is activated when the direct solar irradiance exceeds about 85 W m–2 (Hameed and Pittalwala, 1989). The position of the shade ring requires adjustments only four times a year to allow for seasonal changes in the sun's apparent path across the sky.

8.2.5 Contrast-evaluating and scanning devices

8.2.5.1 General

A number of different opto-electronic sensors, namely contrast-evaluating and scanning devices (see, for example, WMO, 1984b), were compared during the WMO Automatic Sunshine Duration Measurement Comparison at the Regional Radiation Centre of Regional Association VI in Hamburg (Germany) from 1988 to 1989. The report of this


comparison contains detailed descriptions of all the instruments and sensors that participated in this event.

8.2.5.2 Sources of error


The distribution of cloudiness over the sky or solar radiation reflected by the surroundings can influence the results because of the different procedures to evaluate the contrast and the relatively large field-of-view angles of the cells in the arrays used. Silicon photovoltaic cells without filters typically have the maximum responsivity in the nearinfrared, and the results, therefore, depend on the spectrum of the direct solar radiation. Since the relatively small, slit-shaped, rectangular field-of-view angles of this device differ considerably from the circular-symmetrical one of the reference pyrheliometer, the cloud distribution around the sun can cause deviations from the reference values. Because of the small field of view, an imperfect glass dome may be a specific source of uncertainty. The spectral responsivity of the sensor should also be considered in addition to solar elevation error. At present, only one of the commercial recorders using a pyroelectric detector is thought to be free of spectral effects.

8.3 Exposure of sunshine detectors

The three essential aspects for the correct exposure of sunshine detectors are as follows:
(a) The detectors should be firmly fixed to a rigid support. This is not required for the SONI (WMO, 1984b) sensors that are designed also for use on buoys;
(b) The detector should provide an uninterrupted view of the sun at all times of the year throughout the whole period when the sun is more than 3° above the horizon. This recommendation can be modified in the following cases:
 (i) Small antennas or other obstructions of small angular width (≤2°) are acceptable if no alternative site is available. In this case, the position, elevation and angular width of obstructions should be well documented and the potential loss of sunshine hours during particular hours and days should be estimated by the astronomical calculation of the apparent solar path (a sketch of such a calculation is given after this list);
 (ii) In mountainous regions (valleys, for instance), natural obstructions are acceptable as a factor of the local climate and should be well documented, as mentioned above;
(c) The site should be free of surrounding surfaces that could reflect a significant amount of direct solar radiation to the detector. Reflected radiation can influence mainly the results of the contrast-measuring devices. To overcome this interference, white paint should be avoided and nearby surfaces should either be kept free of snow or screened.

The adjustment of the detector axis is mentioned above. For some detectors, the manufacturers recommend tilting the axis, depending on the season.
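The sketch below illustrates the kind of astronomical check suggested in 8.3 (b) (i): estimating the solar elevation for a given day, solar hour and latitude so that periods when the sun is above 3° but behind a documented obstruction can be flagged. The declination formula used is a common approximation and not the Guide's own solar-position algorithm.

```python
import math

# Approximate solar elevation (illustrative, not Guide code).
def solar_elevation_deg(day_of_year, solar_hour, latitude_deg):
    decl = math.radians(23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0)))
    lat = math.radians(latitude_deg)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_el))

# Example: is the sun above the 3 degree limit at 08:00 solar time in mid-winter at 50 N?
print(solar_elevation_deg(day_of_year=15, solar_hour=8.0, latitude_deg=50.0) > 3.0)
```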

8.4 General sources of error

The uncertainty of sunshine duration recorded using different types of instrument and methods was demonstrated as deviations from reference values in WMO for the weather conditions of Hamburg (Germany) in 1988–1989. The reference values are also somewhat uncertain because of the uncertainty of the calibration factor of the pyrheliometer used and the dimensions of its field-of-view angle (dependency on the aureole). For single values, the time constant should also be considered. General sources of uncertainty are as follows:
(a) The calibration of the recorder (adjustment of the irradiance threshold equivalent (see section 8.5));
(b) The typical variation of the recorder response due to meteorological conditions (for example, temperature, cloudiness, dust) and the position of the sun (for example, errors of direction, solar spectrum);
(c) The poor adjustment and instability of important parts of the instrument;
(d) The simplified or erroneous evaluation of the values measured;
(e) Erroneous time-counting procedures;
(f) Dirt and moisture on optical and sensing surfaces;
(g) Poor quality of maintenance.

8.5 Calibration

The following general remarks should be made before the various calibration methods are described:

(a) No standardized method to calibrate SD detectors is available;
(b) For outdoor calibrations, the pyrheliometric method has to be used to obtain reference data;
(c) Because of the differences between the design of the SD detectors and the reference instrument, as well as with regard to the natural variability of the measuring conditions, calibration results must be determined by long-term comparisons (some months);
(d) Generally, the calibration of SD detectors requires a specific procedure to adjust their threshold value (electronically for opto-electric devices, by software for pyranometric systems);
(e) For opto-electric devices with an analogue output, the duration of the calibration period should be relatively short;
(f) The indoor method (using a lamp) is recommended primarily for regular testing of the stability of field instruments.

8.5.1 Outdoor methods

8.5.1.1 Comparison of sunshine duration data

Reference values SDref have to be measured simultaneously with the sunshine duration values SDcal of the detector to be calibrated. The reference instrument used should be a pyrheliometer on a solar tracker combined with an irradiance threshold discriminator (see section 8.1.4). Alternatively, a regularly recalibrated sunshine recorder of selected precision may be used. Since the accuracy requirement of the sunshine threshold of a detector varies with the meteorological conditions (see section 8.1.3), the comparison results must be derived statistically from data sets covering long periods. If the method is applied to the total data set of a period (with typical cloudiness conditions), the first calibration result is the ratio qtot = Σtot SDref/Σtot SDcal. For qtot > 1 or qtot < 1, …

… six months at European mid-latitudes. Therefore, the facilities to calibrate network detectors should permit the calibration of several detectors simultaneously. (The use of qtot as a correction factor for the Σ SD values gives reliable results only if the periods to be evaluated have the same cloud formation as during the calibration period. Therefore, this method is not recommended.) If the method is applied to data sets which are selected according to specific measurement conditions (for example, cloudiness, solar elevation angle, relative sunshine duration, daytime), it may be possible, for instance, to find factors qsel = Σsel SDref/Σsel SDcal statistically for different types of cloudiness. The factors could also be used to correct data sets for which the cloudiness is clearly specified. On the other hand, an adjustment of the threshold equivalent voltage is recommended, especially if qsel values for worse cloudiness conditions (such as cirrus and altostratus) are considered. An iterative procedure to validate the adjustment is also necessary; depending on the weather, some weeks or months of comparison may be needed.
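The first calibration step described above can be summarized in a few lines of code; the record structure and cloudiness labels used are assumptions for illustration.

```python
# Illustrative computation of q_tot and of a cloudiness-selected q_sel (not Guide code).
def q_total(records):
    """q_tot = sum(SD_ref) / sum(SD_cal) over the whole comparison period."""
    return sum(r["sd_ref"] for r in records) / sum(r["sd_cal"] for r in records)

def q_selected(records, cloud_class):
    """q_sel for the subset of records with a given cloudiness class."""
    sel = [r for r in records if r["cloud_class"] == cloud_class]
    return sum(r["sd_ref"] for r in sel) / sum(r["sd_cal"] for r in sel)

daily = [
    {"sd_ref": 7.2, "sd_cal": 6.9, "cloud_class": "scattered"},
    {"sd_ref": 2.1, "sd_cal": 2.4, "cloud_class": "cloudy"},
]
print(round(q_total(daily), 3), round(q_selected(daily, "cloudy"), 3))
```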

8.5.1.2 Comparison of analogue signals

(Annex fragment: of the annexed Slob and Monna (1991) algorithm, only part of the 10 min decision criterion is preserved: … > 0.3 + exp(–TL/(0.9 + 9.4 sin h)), with TL = 10, together with Gmax – Gmin < 0.1 G0, each interval being assigned ƒ = 0 or ƒ = 1.)
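On the basis of the surviving fragment only, a heavily simplified sketch of the annexed 10 min decision follows; reading the truncated left-hand side of the criterion as the ratio G/G0 is an assumption, and the full decision table, with its further branches and the treatment of 0 < ƒ < 1, is not reproduced here.

```python
import math

# Simplified sketch based only on the surviving fragment of the annexed algorithm (assumptions noted above).
def sunshine_fraction_10min(g_mean, g_min, g_max, g0, sun_elevation_deg, tl=10.0):
    h = math.radians(sun_elevation_deg)
    # Assumed reading of the fragment: G/G0 > 0.3 + exp(-TL/(0.9 + 9.4*sin(h))), with TL = 10
    bright = g_mean / g0 > 0.3 + math.exp(-tl / (0.9 + 9.4 * math.sin(h)))
    steady = (g_max - g_min) < 0.1 * g0
    return 1.0 if (bright and steady) else 0.0
```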



REFERENCES AND FURTHER READING

Angell, J.K., 1990: Variation in United States cloudiness and sunshine duration between 1950 and the drought year of 1988. Journal of Climate, 3, pp. 296–308.
Baumgartner, T., 1979: Die Schwellenintensität des Sonnenscheinautographen Campbell-Stokes an wolkenlosen Tagen. Arbeitsberichte der Schweizerischen Meteorologischen Zentralanstalt, No. 84, Zürich.
Bider, M., 1958: Über die Genauigkeit der Registrierungen des Sonnenscheinautographen Campbell-Stokes. Archiv für Meteorologie, Geophysik und Bioklimatologie, Serie B, Volume 9, No. 2, pp. 199–230.
Coulson, K.L., 1975: Solar and Terrestrial Radiation. Methods and Measurements. Academic Press, New York, pp. 215–233.
Foster, N.B. and L.W. Foskett, 1953: A photoelectric sunshine recorder. Bulletin of the American Meteorological Society, 34, pp. 212–215.
Golchert, H.J., 1981: Mittlere Monatliche Globalstrahlungsverteilungen in der Bundesrepublik Deutschland. Meteorologische Rundschau, 34, pp. 143–151.
Hameed, S. and I. Pittalwala, 1989: An investigation of the instrumental effects on the historical sunshine record of the United States. Journal of Climate, 2, pp. 101–104.
Ikeda, K., T. Aoshima and Y. Miyake, 1986: Development of a new sunshine-duration meter. Journal of the Meteorological Society of Japan, Volume 64, Number 6, pp. 987–993.
Jaenicke, R. and F. Kasten, 1978: Estimation of atmospheric turbidity from the burned traces of the Campbell-Stokes sunshine recorder. Applied Optics, 17, pp. 2617–2621.
Painter, H.E., 1981: The performance of a Campbell-Stokes sunshine recorder compared with a simultaneous record of normal incidence irradiance. The Meteorological Magazine, 110, pp. 102–109.
Slob, W.H. and W.A.A. Monna, 1991: Bepaling van een directe en diffuse straling en van zonneschijnduur uit 10-minuutwaarden van de globale straling. KNMI TR136, De Bilt.
Sonntag, D. and K. Behrens, 1992: Ermittlung der Sonnenscheindauer aus pyranometrisch gemessenen Bestrahlungsstärken der Global- und Himmelsstrahlung. Berichte des Deutschen Wetterdienstes, No. 181.

Stanghellini, C., 1981: A simple method for evaluating sunshine duration by cloudiness observations. Journal of Applied Meteorology, 20, pp. 320–323. World Meteorological Organization, 1962: Abridged Final Report of the Third Session of the Commission for Instruments and Methods of Observation. WMO-No. 116 R.P. 48, Geneva. World Meteorological Organization, 1982: Abridged Final Report of the Eighth Session of the Commission for Instruments and Methods of Observation. WMO-No. 590, Geneva. World Meteorological Organization, 1984a: Diffuse solar radiation measured by the shade ring method improved by a new correction formula (K. Dehne). Papers Presented at the WMO Technical Conference on Instruments and Cost-effective Meteorological Observations (TECIMO). Instruments and Observing Methods Report No. 15, Geneva, pp. 263–267. World Meteorological Organization, 1984b: A new sunshine duration sensor (P. Lindner). Papers Presented at the WMO Technical Conference on Instruments and Cost-effective Meteorological Observations (TECIMO). Instruments and Observing Methods Report No. 15, Geneva, pp. 179–183. World Meteorological Organization, 1985: Dependence on threshold solar irradiance of measured sunshine duration (K. Dehne). Papers Presented at the Third WMO Technical Conference on Instruments and Methods of Observation (TECIMO III). Instruments and Observing Methods Report No. 22, WMO/TD-No. 50, Geneva, pp. 263–271. World Meteorological Organization, 1986: Radiation and Sunshine Duration Measurements: Comparison of Pyranometers and Electronic Sunshine Duration Recorders of RA VI (G. Major). WMO Instruments and Observing Methods Report No. 16, WMO/TD-No. 146, Geneva. World Meteorological Organization, 1990: Abridged Final Report of the Tenth Session of the Commission for Instruments and Methods of Observation. WMO-No. 727, Geneva. World Meteorological Organization, 2003: Manual on the Global Observing System. WMO-No. 544, Geneva.

CHAPTER 9

MEASUREMENT OF VISIBILITY

9.1 General

9.1.1 Definitions

Visibility was first defined for meteorological purposes as a quantity to be estimated by a human observer, and observations made in that way are widely used. However, the estimation of visibility is affected by many subjective and physical factors. The essential meteorological quantity, which is the transparency of the atmosphere, can be measured objectively and is represented by the meteorological optical range (MOR). The meteorological optical range is the length of path in the atmosphere required to reduce the luminous flux in a collimated beam from an incandescent lamp, at a colour temperature of 2 700 K, to 5 per cent of its original value, the luminous flux being evaluated by means of the photometric luminosity function of the International Commission on Illumination. Visibility, meteorological visibility (by day) and meteorological visibility at night1 are defined as the greatest distance at which a black object of suitable dimensions (located on the ground) can be seen and recognized when observed against the horizon sky during daylight or could be seen and recognized during the night if the general illumination were raised to the normal daylight level (WMO, 1992a; 2003). Visual range (meteorological): Distance at which the contrast of a given object with respect to its background is just equal to the contrast threshold of an observer (WMO, 1992a). Airlight is light from the sun and the sky which is scattered into the eyes of an observer by atmospheric suspensoids (and, to a slight extent, by

air molecules) lying in the observer’s cone of vision. That is, airlight reaches the eye in the same manner as diffuse sky radiation reaches the Earth’s surface. Airlight is the fundamental factor limiting the daytime horizontal visibility for black objects, because its contributions, integrated along the cone of vision from eye to object, raise the apparent luminance of a sufficiently remote black object to a level which is indistinguishable from that of the background sky. Contrary to subjective estimates, most of the airlight entering observers’ eyes originates in portions of their cone of vision lying rather close to them. The following four photometric qualities are defined in detail in various standards, such as by the International Electrotechnical Commission (IEC, 1987): (a) Luminous flux (symbol: F (or Φ); unit: lumen) is a quantity derived from radiant flux by evaluating the radiation according to its action upon the International Commission on Illumination standard photometric observer; (b) Luminous intensity (symbol: I; unit: candela or lm sr–1) is luminous flux per unit solid angle; (c) Luminance (symbol: L; unit: cd m–2) is luminous intensity per unit area; (d) Illuminance (symbol; E, unit; lux or lm m–2) is luminous flux per unit area. The extinction coefficient (symbol σ) is the proportion of luminous flux lost by a collimated beam, emitted by an incandescent source at a colour temperature of 2 700 K, while travelling the length of a unit distance in the atmosphere. The coefficient is a measure of the attenuation due to both absorption and scattering. The luminance contrast (symbol C) is the ratio of the difference between the luminance of an object and its background and the luminance of the background. The contrast threshold (symbol ε) is the minimum value of the luminance contrast that the human eye can detect, namely, the value which allows an object to be distinguished from its background. The contrast threshold varies with the individual.

1 To avoid confusion, visibility at night should not be defined in general as “the greatest distance at which lights of specified moderate intensity can be seen and identified” (see the Abridged Final Report of the Eleventh Session of the Commission for Instruments and Methods of Observation (WMO-No. 807)). If visibility should be reported based on the assessment of light sources, it is recommended that a visual range should be defined by specifying precisely the appropriate light intensity and its application, like runway visual range. Nevertheless, at its eleventh session CIMO agreed that further investigations were necessary in order to resolve the practical difficulties of the application of this definition.


The illuminance threshold (symbol Et) is the smallest illuminance, required by the eye, for the detection of point sources of light against a background of specified luminance. The value of Et, therefore, varies according to lighting conditions. The transmission factor (symbol T) is defined, for a collimated beam from an incandescent source at a colour temperature of 2 700 K, as the fraction of luminous flux which remains in the beam after traversing an optical path of a given length in the atmosphere. The transmission factor is also called the transmission coefficient. The terms transmittance or transmissive power of the atmosphere are also used when the path is defined, that is, of a specific length (for example, in the case of a transmissometer). In this case, T is often multiplied by 100 and expressed in per cent. 9.1.2 units and scales

The meteorological visibility or MOR is expressed in metres or kilometres. The measurement range varies according to the application. While for synoptic meteorological requirements, the scale of MOR readings extends from below 100 m to more than 70 km, the measurement range may be more restricted for other applications. This is the case for civil aviation, where the upper limit may be 10 km. This range may be further reduced when applied to the measurement of runway visual range representing landing and take-off conditions in reduced visibility. Runway visual range is required only between 50 and 1 500 m (see Part II, Chapter 2). For other applications, such as road or sea traffic, different limits may be applied according to both the requirements and the locations where the measurements are taken. The errors of visibility measurements increase in proportion to the visibility, and measurement scales take this into account. This fact is reflected in the code used for synoptic reports by the use of three linear segments with decreasing resolution, namely, 100 to 5 000 m in steps of 100 m, 6 to 30 km in steps of 1 km, and 35 to 70 km in steps of 5 km. This scale allows visibility to be reported with a better resolution than the accuracy of the measurement, except when visibility is less than about 1 000 m.
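The three reporting segments just mentioned can be illustrated by a short routine that rounds a measured MOR down to the reportable step; the rounding-down convention and the function name are assumptions for illustration, not the synoptic coding rules themselves.

```python
# Illustrative rounding to the three reporting resolutions described above (not the official code table).
def reportable_visibility_m(vis_m):
    if vis_m < 100:
        return 0                                  # below the first reportable step
    if vis_m <= 5000:
        return int(vis_m // 100) * 100            # 100 m resolution
    if vis_m <= 30000:
        return int(vis_m // 1000) * 1000          # 1 km resolution
    return min(int(vis_m // 5000) * 5000, 70000)  # 5 km resolution, capped at 70 km

for v in (850, 12400, 43000):
    print(v, "->", reportable_visibility_m(v))
```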

9.1.3 Meteorological requirements

The concept of visibility is used extensively in meteorology in two distinct ways. First, it is one of the elements identifying air-mass characteristics, especially for the needs of synoptic meteorology and climatology. Here, visibility must be representative of the optical state of the atmosphere. Secondly, it is an operational variable which corresponds to specific criteria or special applications. For this purpose, it is expressed directly in terms of the distance at which specific markers or lights can be seen. One of the most important special applications is found in meteorological services to aviation (see Part II, Chapter 2). The measure of visibility used in meteorology should be free from the influence of extra-meteorological conditions; it must be simply related to intuitive concepts of visibility and to the distance at which common objects can be seen under normal conditions. MOR has been defined to meet these requirements, as it is convenient for the use of instrumental methods by day and night, and as the relations between MOR and other measures of visibility are well understood. MOR has been formally adopted by WMO as the measure of visibility for both general and aeronautical uses (WMO, 1990a). It is also recognized by the International Electrotechnical Commission (IEC, 1987) for application in atmospheric optics and visual signalling. MOR is related to the intuitive concept of visibility through the contrast threshold. In 1924, Koschmieder, followed by Helmholtz, proposed a value of 0.02 for ε. Other values have been proposed by other authors. They vary from 0.007 7 to 0.06, or even 0.2. The smaller value yields a larger estimate of the visibility for given atmospheric conditions. For aeronautical requirements, it is accepted that ε is higher than 0.02, and it is taken as 0.05 since, for a pilot, the contrast of an object (runway markings) with respect to the surrounding terrain is much lower than that of an object against the horizon. It is assumed that, when an observer can just see and recognize a black object against the horizon, the apparent contrast of the object is 0.05, and, as explained below, this leads to the choice of 0.05 as the transmission factor adopted in the definition of MOR. Accuracy requirements are discussed in Part I, Chapter 1.

9.1.4 Measurement methods

Visibility is a complex psycho-physical phenomenon, governed mainly by the atmospheric extinction coefficient associated with solid and liquid particles held in suspension in the atmosphere; the extinction is caused primarily by


scattering rather than by absorption of the light. Its estimation is subject to variations in individual perception and interpretative ability, as well as the light source characteristics and the transmission factor. Thus, any visual estimate of visibility is subjective. When visibility is estimated by a human observer it depends not only on the photometric and dimensional characteristics of the object which is, or should be, perceived, but also on the observer’s contrast threshold. At night, it depends on the intensity of the light sources, the background illuminance and, if estimated by an observer, the adaptation of the observer’s eyes to darkness and the observer’s illuminance threshold. The estimation of visibility at night is particularly problematic. The first definition of visibility at night in section 9.1.1 is given in terms of equivalent daytime visibility in order to ensure that no artificial changes occur in estimating the visibility at dawn and twilight. The second definition has practical applications especially for aeronautical requirements, but it is not the same as the first and usually gives different results. Both are evidently imprecise. Instrumental methods measure the extinction coefficient from which the MOR may be calculated. The visibility may then be calculated from knowledge of the contrast and illuminance thresholds, or by assigning agreed values to them. It has been pointed out by Sheppard (1983) that:
“strict adherence to the definition (of MOR) would require mounting a transmitter and receiver of appropriate spectral characteristics on two platforms which could be separated, for example along a railroad, until the transmittance was 5 per cent. Any other approach gives only an estimate of MOR.”

However, fixed instruments are used on the assumption that the extinction coefficient is independent of distance. Some instruments measure attenuation directly and others measure the scattering of light to derive the extinction coefficient. These are described in section 9.3. The brief analysis of the physics of visibility in this chapter may be useful for understanding the relations between the various measures of the extinction coefficient, and for considering the instruments used to measure it.

Visual perception — photopic and scotopic vision

The conditions of visual perception are based on the measurement of the photopic efficiency of the human eye with respect to monochromatic radiation in the visible light spectrum. The terms photopic vision and scotopic vision refer to daytime and night-time conditions, respectively. The adjective photopic refers to the state of accommodation of the eye for daytime conditions of ambient luminance. More precisely, the photopic state is defined as the visual response of an observer with normal sight to the stimulus of light incident on the retinal fovea (the most sensitive central part of the retina). The fovea permits fine details and colours to be distinguished under such conditions of adaptation. In the case of photopic vision (vision by means of the fovea), the relative luminous efficiency of the eye varies with the wavelength of the incident light. The luminous efficiency of the eye in photopic vision is at a maximum for a wavelength of 555 nm. The response curve for the relative efficiency of the eye at the various wavelengths of the visible spectrum may be established by taking the efficiency at a wavelength of 555 nm as a reference value. The curve in Figure 9.1, adopted by the International Commission on Illumination for an average normal observer, is therefore obtained.

Figure 9.1. Relative luminous efficiency of the human eye for monochromatic radiation (wavelength, 400–750 nm). The continuous line indicates daytime vision, while the broken line indicates night-time vision.

Night-time vision is said to be scotopic (vision involving the rods of the retina instead of the fovea). The rods, the peripheral part of the retina, have no sensitivity to colour or fine details, but are particularly sensitive to low light intensities. In scotopic vision, maximum luminous efficiency corresponds to a wavelength of 507 nm. Scotopic vision requires a long period of accommodation, up to 30 min, whereas photopic vision requires only 2 min.

Basic equations

The basic equation for visibility measurements is the Bouguer-Lambert law:

F = F0 e–σx (9.1)

where F is the luminous flux received after a length of path x in the atmosphere and F0 is the flux for x = 0. Differentiating, we obtain:

σ = –(1/F) · (dF/dx) (9.2)

Note that this law is valid only for monochromatic light, but may be applied to a spectral flux to a good approximation. The transmission factor is:

T = F/F0 (9.3)

Mathematical relationships between MOR and the different variables representing the optical state of the atmosphere may be deduced from the Bouguer-Lambert law. From equations 9.1 and 9.3 we may write:

T = F/F0 = e–σx (9.4)

If this law is applied to the MOR definition T = 0.05, then x = P and the following may be written:

T = 0.05 = e–σP (9.5)

Hence, the mathematical relation of MOR to the extinction coefficient is:

P = (1/σ) · ln (1/0.05) ≈ 3/σ (9.6)

where ln is the log to base e or the natural logarithm. When combining equation 9.4, after being deduced from the Bouguer-Lambert law, and equation 9.6, the following equation is obtained:

P = x · ln (0.05)/ln (T) (9.7)

This equation is used as a basis for measuring MOR with transmissometers where x is, in this case, equal to the transmissometer baseline a in equation 9.14.

Meteorological visibility in daylight

The contrast of luminance is:

C = (Lb – Lh)/Lh (9.8)

where Lh is the luminance of the horizon, and Lb is the luminance of the object. The luminance of the horizon arises from the airlight scattered from the atmosphere along the observer's line of sight. It should be noted that, if the object is darker than the horizon, C is negative, and that, if the object is black (Lb = 0), C = –1. In 1924, Koschmieder established a relationship, which later became known as Koschmieder's law, between the apparent contrast (Cx) of an object, seen against the horizon sky by a distant observer, and its inherent contrast (C0), namely, the contrast that the object would have against the horizon when seen from very short range. Koschmieder's relationship can be written as:

Cx = C0 e–σx (9.9)

This relationship is valid provided that the scatter coefficient is independent of the azimuth angle and that there is uniform illumination along the whole path between the observer, the object and the horizon. If a black object is viewed against the horizon (C0 = –1) and the apparent contrast is –0.05, equation 9.9 reduces to:

0.05 = e–σx (9.10)

Comparing this result with equation 9.5 shows that when the magnitude of the apparent contrast of a black object, seen against the horizon, is 0.05, that object is at MOR (P).

Meteorological visibility at night

The distance at which a light (a night visibility marker) can be seen at night is not simply related to MOR. It depends not only on MOR and the intensity of the light, but also on the illuminance at the observer's eye from all other light sources. In 1876, Allard proposed the law of attenuation of light from a point source of known intensity (I) as a function of distance (x) and extinction coefficient (σ). The illuminance (E) of a point light source is given by:

E = I · x–2 · e–σx (9.11)

When the light is just visible, E = Et and the following may be written:

σ = (1/x) · ln {I/(Et · x²)} (9.12)

CHaPTEr 9. MEaSurEMENT OF vISIBIlITY

I.9–5

Noting that P = (1/σ) · ln (1/0.05) in equation 9.6, we may write: P = x · ln (1/0.05)/ln (I/(Et · x2) (9.13)

The relationship between MOR and the distance at which lights can be seen is described in section 9.2.3, while the application of this equation to visual observations is described in section 9.2.
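To make the chain from transmittance to extinction coefficient to MOR concrete, the following minimal Python sketch (not part of the Guide; the function names and numerical values are illustrative only) applies equations 9.4, 9.6 and 9.7, and checks that the apparent contrast of a black object (equation 9.9) has fallen to 0.05 at the distance P.

```python
# Illustrative application of equations 9.4, 9.6, 9.7 and 9.9 (hypothetical values).
import math

def extinction_coefficient(transmittance, path_m):
    """Equation 9.4 rearranged: sigma = -ln(T) / x."""
    return -math.log(transmittance) / path_m

def mor_from_extinction(sigma):
    """Equation 9.6: P = ln(1/0.05) / sigma (approximately 3 / sigma)."""
    return math.log(1 / 0.05) / sigma

# A transmissometer-like measurement: T = 0.65 over a 75 m path.
sigma = extinction_coefficient(0.65, 75.0)
P = mor_from_extinction(sigma)
print(round(P))                                    # ~522 m; equals 75 * ln(0.05)/ln(0.65), equation 9.7

# Koschmieder's law (equation 9.9) for a black object (C0 = -1) viewed at distance P:
print(round(abs(-1.0 * math.exp(-sigma * P)), 3))  # 0.05, the contrast threshold defining MOR
```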

9.2 Visual estimation of meteorological optical range

9.2.1 General

A meteorological observer can make a visual estimation of MOR using natural or man-made objects (groups of trees, rocks, towers, steeples, churches, lights, and so forth). Each station should prepare a plan of the objects used for observation, showing their distances and bearings from the observer. The plan should include objects suitable for daytime observations and objects suitable for night-time observations. The observer must also give special attention to significant directional variations of MOR. Observations should be made by observers who have “normal” vision and have received suitable training. The observations should normally be made without any additional optical devices (binoculars, telescope, theodolite, and the like) and, preferably, not through a window, especially when objects or lights are observed at night. The eye of the observer should be at a normal height above the ground (about 1.5 m); observations should, thus, not be made from the upper storeys of control towers or other high buildings. This is particularly important when visibility is poor. When visibility varies in different directions, the value recorded or reported may depend on the use to be made of the report. In synoptic messages, the lower value should be reported, but in reports for aviation the guidance in WMO (1990a) should be followed.

9.2.2 Estimation of meteorological optical range by day

For daytime observations, the visual estimation of visibility gives a good approximation of the true value of MOR.

Provided that they meet the following requirements, objects at as many different distances as possible should be selected for observation during the day. Only black, or nearly black, objects which stand out on the horizon against the sky should be chosen. Light-coloured objects or objects located close to a terrestrial background should be avoided as far as possible. This is particularly important when the sun is shining on the object. Provided that the albedo of the object does not exceed about 25 per cent, no error larger than 3 per cent will be caused if the sky is overcast, but it may be much larger if the sun is shining. Thus, a white house would be unsuitable, but a group of dark trees would be satisfactory, except when brightly illuminated by sunlight. If an object against a terrestrial background has to be used, it should stand well in front of the background, namely, at a distance at least half that of the object from the point of observation. A tree at the edge of a wood, for example, would not be suitable for visibility observations. For observations to be representative, they should be made using objects subtending an angle of no less than 0.5° at the observer’s eye. An object subtending an angle less than this becomes invisible at a shorter distance than would large objects in the same circumstances. It may be useful to note that a hole of 7.5 mm in diameter, punched in a card and held at arm’s length, subtends this angle approximately; a visibility object viewed through such an aperture should, therefore, completely fill it. At the same time, however, such an object should not subtend an angle of more than 5°. 9.2.3 estimation of meteorological optical range at night

Methods which may be used to estimate MOR at night from visual observations of the distance of perception of light sources are described below. Any source of light may be used as a visibility object, provided that the intensity in the direction of observation is well defined and known. However, it is generally desirable to use lights which can be regarded as point sources, and whose intensity is not greater in any one more favoured direction than in another and not confined to a solid angle which is too small. Care must be taken to ensure the mechanical and optical stability of the light source. A distinction should be made between sources known as point sources, in the vicinity of which there is no other source or area of light, and clusters

I.9–6

ParT I. MEaSurEMENT OF METEOrOlOGICal varIaBlES

of lights, even though separated from each other. In the latter case, such an arrangement may affect the visibility of each source considered separately. For measurements of visibility at night, only the use of suitably distributed point sources is recommended. It should be noted that observations at night, using illuminated objects, may be affected appreciably by the illumination of the surroundings, by the physiological effects of dazzling, and by other lights, even when these are outside the field of vision and, more especially, if the observation is made through a window. Thus, an accurate and reliable observation can be made only from a dark and suitably chosen location. Furthermore, the importance of physiological factors cannot be overlooked, since these are an important source of measurement dispersion. It is essential that only qualified observers with normal vision take such measurements. In addition, it is necessary to allow a period of adaptation (usually from 5 to 15 min) during which the eyes become accustomed to the darkness. For practical purposes, the relationship between the distance of perception of a light source at night and the value of MOR can be expressed in two different ways, as follows: (a) For each value of MOR, by giving the value of luminous intensity of the light, so that there is a direct correspondence between the distance where it is barely visible and the value of MOR; (b) For a light of a given luminous intensity, by giving the correspondence between the distance of perception of the light and the value of MOR. The second relationship is easier and also more practical to use since it would not be an easy matter to install light sources of differing intensities at different distances. The method involves using light sources which either exist or are installed around the station and replacing I, x and Et in equation 9.13 by the corresponding values of the available light sources. In this way, the Meteorological Services can draw up tables giving values of MOR as a function of background luminance and the light sources of known intensity. The values to be assigned to the illuminance threshold Et vary considerably in accordance with the ambient luminance. The following values, considered as average observer values, should be used: (a) 10–6.0 lux at twilight and at dawn, or when there is appreciable light from artificial

sources;
(b) 10–6.7 lux in moonlight, or when it is not yet quite dark;
(c) 10–7.5 lux in complete darkness, or with no light other than starlight.

Tables 9.1 and 9.2 give the relations between MOR and the distance of perception of light sources for each of the above methods for different observation conditions. They have been compiled to guide Meteorological Services in the selection or installation of lights for night visibility observations and in the preparation of instructions for their observers for the computation of MOR values.

Table 9.1. Relation between MOR and intensity of a just-visible point source for three values of Et

Luminous intensity (candela) of lamps only just visible at distances given in column P

MOR P (m)   Twilight (Et = 10–6.0)   Moonlight (Et = 10–6.7)   Complete darkness (Et = 10–7.5)
100         0.2                      0.04                      0.006
200         0.8                      0.16                      0.025
500         5                        1                         0.16
1 000       20                       4                         0.63
2 000       80                       16                        2.5
5 000       500                      100                       16
10 000      2 000                    400                       63
20 000      8 000                    1 600                     253
50 000      50 000                   10 000                    1 580

Table 9.2. Relation between MOR and the distance at which a 100 cd point source is just visible for three values of Et

Distance of perception (metres) of a lamp of 100 cd as a function of MOR value

MOR P (m)   Twilight (Et = 10–6.0)   Moonlight (Et = 10–6.7)   Complete darkness (Et = 10–7.5)
100         250                      290                       345
200         420                      500                       605
500         830                      1 030                     1 270
1 000       1 340                    1 720                     2 170
2 000       2 090                    2 780                     3 650
5 000       3 500                    5 000                     6 970
10 000      4 850                    7 400                     10 900
20 000      6 260                    10 300                    16 400
50 000      7 900                    14 500                    25 900
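As a concrete illustration of how equations 9.5, 9.6 and 9.11 to 9.13 lead to the values tabulated above, the following minimal Python sketch (not part of the Guide; the function names are illustrative) computes the luminous intensity required for a lamp to be only just visible at the MOR (the columns of Table 9.1) and, by simple bisection of Allard's law, the distance of perception of a 100 cd lamp (the columns of Table 9.2).

```python
# Sketch (not part of the Guide): reproducing Tables 9.1 and 9.2 from
# Allard's law (equation 9.11) and the MOR definition (equation 9.5).
import math

LN20 = math.log(1 / 0.05)          # ln(1/0.05), about 3.0

def required_intensity(P, Et):
    """Table 9.1: intensity (cd) of a lamp only just visible at distance x = P (m)."""
    sigma = LN20 / P               # extinction coefficient from equation 9.6
    return Et * P**2 * math.exp(sigma * P)   # equals 20 * Et * P**2

def perception_distance(I, Et, P, x_hi=1e6):
    """Table 9.2: distance (m) at which a lamp of intensity I (cd) is just visible for a given MOR P."""
    sigma = LN20 / P
    f = lambda x: I / x**2 * math.exp(-sigma * x) - Et   # zero where E = Et (equation 9.11)
    x_lo = 1.0
    for _ in range(200):           # simple bisection; f decreases monotonically with x
        x_mid = 0.5 * (x_lo + x_hi)
        if f(x_mid) > 0:
            x_lo = x_mid
        else:
            x_hi = x_mid
    return x_lo

print(round(required_intensity(1000, 10**-6.7), 1))     # ~4 cd (Table 9.1, moonlight)
print(round(perception_distance(100, 10**-6.0, 1000)))  # ~1 340 m (Table 9.2, twilight)
```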

An ordinary 100 W incandescent bulb provides a light source of approximately 100 cd. In view of the substantial differences caused by relatively small variations in the values of the visual illuminance threshold and by different conditions of general illumination, it is clear that Table 9.2 is not intended to provide an absolute criterion of visibility, but indicates the need for calibrating the lights used for night-time estimation of MOR so as to ensure as far as possible that night observations made in different locations and by different Services are comparable.

9.2.4 Estimation of meteorological optical range in the absence of distant objects

At certain locations (open plains, ships, and so forth), or when the horizon is restricted (valley or cirque), or in the absence of suitable visibility objects, it is impossible to make direct estimations, except for relatively low visibilities. In such cases, unless instrumental methods are available, values of MOR higher than those for which visibility points are available have to be estimated from the general transparency of the atmosphere. This can be done by noting the degree of clarity with which the most distant visibility objects stand out. Distinct outlines and features, with little or no fuzziness of colours, are an indication that MOR is greater than the distance between the visibility object and the observer. On the other hand, indistinct visibility objects are an indication of the presence of haze or of other phenomena reducing MOR.

9.2.5 Accuracy of visual observations

General

Observations of objects should be made by observers who have been suitably trained and have what is usually referred to as normal vision. This human factor has considerable significance in the estimation of visibility under given atmospheric conditions, since the perception and visual interpretation capacity vary from one individual to another.

Accuracy of daytime visual estimates of meteorological optical range

Observations show that estimates of MOR based on instrumental measurements are in reasonable agreement with daytime estimates of visibility. Visibility and MOR should be equal if the observer's contrast threshold is 0.05 (using the criterion of recognition) and the extinction coefficient is the same in the vicinity of both the instrument and the observer. Middleton (1952) found, from 1000 measurements, that the mean contrast ratio threshold for a group

of 10 young airmen trained as meteorological observers was 0.033 with a range, for individual observations, from less than 0.01 to more than 0.2. Sheppard (1983) has pointed out that when the Middleton data are plotted on a logarithmic scale they show good agreement with a Gaussian distribution. If the Middleton data represent normal observing conditions, we must expect daylight estimates of visibility to average about 14 per cent higher than MOR with a standard deviation of 20 per cent of MOR. These calculations are in excellent agreement with the results from the First WMO Intercomparison of Visibility Measurements (WMO, 1990b), where it was found that, during daylight, the observers’ estimates of visibility were about 15 per cent higher than instrumental measurements of MOR. The interquartile range of differences between the observer and the instruments was about 30 per cent of the measured MOR. This corresponds to a standard deviation of about 22 per cent, if the distribution is Gaussian. accuracy of night-time visual estimates of meteorological optical range From table 9.2 in section 9.2.3, it is easy to see how misleading the values of MOR can be if based simply on the distance at which an ordinary light is visible, without making due allowance for the intensity of the light and the viewing conditions. This emphasizes the importance of giving precise, explicit instructions to observers and of providing training for visibility observations. Note that, in practice, the use of the methods and tables described above for preparing plans of luminous objects is not always easy. The light sources used as objects are not necessarily well located or of stable, known intensity, and are not always point sources. With respect to this last point, the lights may be wide- or narrow-beam, grouped, or even of different colours to which the eye has different sensitivity. Great caution must be exercised in the use of such lights. The estimation of the visual range of lights can produce reliable estimates of visibility at night only when lights and their background are carefully chosen; when the viewing conditions of the observer are carefully controlled; and when considerable time can be devoted to the observation to ensure that the observer’s eyes are fully accommodated to the viewing conditions. Results from the First WMO Intercomparison of Visibility Measurements (WMO, 1990b) show that, during the hours of darkness, the observer’s estimates of visibility were about 30 per


cent higher than instrumental measurements of MOR. The interquartile range of differences between the observer and the instruments was only slightly greater than that found during daylight (about 35 to 40 per cent of the measured MOR).

9.3 Instrumental measurement of the meteorological optical range

9.3.1 General

The adoption of certain assumptions allows the conversion of instrumental measurements into MOR. It is not always advantageous to use an instrument for daytime measurements if a number of suitable visibility objects can be used for direct observations. However, a visibility-measuring instrument is often useful for night observations or when no visibility objects are available, or for automatic observing systems. Instruments for the measurement of MOR may be classified into one of the following two categories:
(a) Those measuring the extinction coefficient or transmission factor of a horizontal cylinder of air: Attenuation of the light is due to both scattering and absorption by particles in the air along the path of the light beam;
(b) Those measuring the scatter coefficient of light from a small volume of air: In natural fog, absorption is often negligible and the scatter coefficient may be considered as being the same as the extinction coefficient.
Both of the above categories include instruments used for visual measurements by an observer and instruments using a light source and an electronic device comprising a photoelectric cell or a photodiode to detect the emitted light beam. The main disadvantage of visual measurements is that substantial errors may occur if observers do not allow sufficient time for their eyes to become accustomed to the conditions (particularly at night). The main characteristics of these two categories of MOR-measuring instruments are described below.

9.3.2 Instruments measuring the extinction coefficient

Telephotometric instruments

A number of telephotometers have been designed for daytime measurement of the extinction coefficient by comparing the apparent luminance of a distant object with that of the sky background (for example, the Lohle telephotometer), but they are not normally used for routine measurements since, as stated above, it is preferable to use direct visual observations. These instruments may, however, be useful for extrapolating MOR beyond the most distant object.

Visual extinction meters

A very simple instrument for use with a distant light at night takes the form of a graduated neutral filter, which reduces the light in a known proportion and can be adjusted until the light is only just visible. The meter reading gives a measure of the transparency of the air between the light and the observer, and, from this, the extinction coefficient can be calculated. The overall accuracy depends mainly on variations in the sensitivity of the eye and on fluctuations in the radiant intensity of the light source. The error increases in proportion to MOR. The advantage of this instrument is that it enables MOR values over a range from 100 m to 5 km to be measured with reasonable accuracy, using only three well-spaced lights, whereas without it a more elaborate series of lights would be essential if the same degree of accuracy were to be achieved. However, the method of using such an instrument (determining the point at which a light appears or disappears) considerably affects the accuracy and homogeneity of the measurements.

Transmissometers

The use of a transmissometer is the method most commonly used for measuring the mean extinction coefficient in a horizontal cylinder of air between a transmitter, which provides a modulated flux light source of constant mean power, and a receiver incorporating a photodetector (generally a photodiode at the focal point of a parabolic mirror or a lens). The most frequently used light source is a halogen lamp or xenon pulse discharge tube. Modulation of the light source prevents disturbance from sunlight. The transmission factor is determined from the photodetector output and this allows the extinction coefficient and the MOR to be calculated. Since transmissometer estimates of MOR are based on the loss of light from a collimated beam, which depends on scatter and absorption, they are closely related to the definition of MOR. A good, well-maintained transmissometer

working within its range of highest accuracy provides a very good approximation to the true MOR. There are two types of transmissometer: (a) Those with a transmitter and a receiver in different units and at a known distance from each other, as illustrated in Figure 9.2;
Figure 9.2. Double-ended transmissometer (transmitter unit with light source and receiver unit with photodetector, separated by the baseline)

(b) Those with a transmitter and a receiver in the same unit, with the emitted light being reflected by a remote mirror or retroreflector (the light beam travelling to the reflector and back), as illustrated in Figure 9.3.

Figure 9.3. Single-ended transmissometer (transmitter-receiver unit with light source and photodetector; folded baseline to a retroreflector)

The distance covered by the light beam between the transmitter and the receiver is commonly referred to as the baseline and may range from a few metres to 150 m (or even 300 m) depending on the range of MOR values to be measured and the applications for which these measurements are to be used. As seen in the expression for MOR in equation 9.7, the relation:

P = a · ln (0.05)/ln (T) (9.14)

where a is the transmissometer baseline, is the basic formula for transmissometer measurements. Its validity depends on the assumptions that the application of the Koschmieder and Bouguer-Lambert laws is acceptable and that the extinction coefficient along the transmissometer baseline is the same as that in the path between an observer and an object at MOR. The relationship between the transmission factor and MOR is valid for fog droplets, but when visibility is reduced by other hydrometeors (such as rain, or snow) or lithometeors (such as blowing sand), MOR values must be treated with circumspection. If the measurements are to remain acceptable over a long period, the luminous flux must remain constant during this same period. When halogen light is used, the problem of lamp filament ageing is less critical and the flux remains more constant. However, some transmissometers use feedback systems (by sensing and measuring a small portion of the emitted flux) giving greater homogeneity of the luminous flux with time or compensation for any change. As will be seen in the section dealing with the accuracy of MOR measurements, the value adopted for the transmissometer baseline determines the MOR measurement range. It is generally accepted that this range is between about 1 and 25 times the baseline length. A further refinement of the transmissometer measurement principle is to use two receivers or retroreflectors at different distances to extend both the lower limit (short baseline) and the upper limit (long baseline) of the MOR measurement range. These instruments are referred to as “double baseline” instruments. In some cases of very short baselines (a few metres), a photodiode has been used as a light source, namely, a monochromatic light close to infrared. However, it is generally recommended that polychromatic light in the visible spectrum be used to obtain a representative extinction coefficient.

Visibility lidars

The lidar (light detection and ranging) technique, as described for the laser ceilometer in Part I, Chapter 15, may be used to measure visibility when the beam is directed horizontally. The range-resolved profile of the backscattered signal S depends on the output signal S0, the distance x, the back scatter coefficient β, and the transmission factor T, such that:

S(x) ~ S0 · (1/x^2) · β(x) · T^2, where T = exp (–∫ σ(x) dx) (9.15)

Under the condition of horizontal homogeneity of the atmosphere, β and σ are constant and the extinction coefficient σ is determined from only two points of the profile:

ln (S(x) · x^2/S0) ~ ln β – 2σx (9.16)
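To make the two-point slope method of equation 9.16 concrete, the following minimal sketch (not from the Guide; the range gates and signal values are invented for illustration) estimates σ, and hence MOR through equation 9.6, from two range-corrected lidar returns, assuming a horizontally homogeneous atmosphere.

```python
# Illustrative sketch of the slope method in equation 9.16 (hypothetical numbers).
import math

def extinction_from_two_gates(x1, s1, x2, s2):
    """Extinction coefficient from two range gates (x in m, s = S(x)/S0)."""
    y1 = math.log(s1 * x1**2)              # ln(S(x) * x^2 / S0), with S0 normalized to 1
    y2 = math.log(s2 * x2**2)
    return (y1 - y2) / (2.0 * (x2 - x1))   # sigma, per metre

# Hypothetical range-corrected returns at 300 m and 900 m:
sigma = extinction_from_two_gates(300.0, 2.0e-3, 900.0, 1.1e-5)
mor = math.log(1 / 0.05) / sigma           # equation 9.6
print(round(sigma, 5), round(mor))         # ~0.0025 m-1, MOR ~1 200 m
```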


In an inhomogeneous atmosphere the range-dependent quantities β(x) and σ(x) may be separated with the Klett algorithm (Klett, 1985). As MOR approaches 2 000 m, the accuracy of the lidar method becomes poor.

9.3.3 Instruments measuring the scatter coefficient

The attenuation of light in the atmosphere is due to both scattering and absorption. The presence of pollutants in the vicinity of industrial zones, ice crystals (freezing fog) or dust may make the absorption term significant. However, in general, the absorption factor is negligible and the scatter phenomena due to reflection, refraction, or diffraction on water droplets constitute the main factor reducing visibility. The extinction coefficient may then be considered as equal to the scatter coefficient, and an instrument for measuring the latter can, therefore, be used to estimate MOR.

Measurements are most conveniently taken by concentrating a beam of light on a small volume of air and by determining, through photometric means, the proportion of light scattered in a sufficiently large solid angle and in directions which are not critical. Provided that it is completely screened from interference from other sources of light, or that the light source is modulated, an instrument of this type can be used during both the day and night. The scatter coefficient b is a function that may be written in the following form:

b = (2π/Φv) ∫_0^π I(φ) sin(φ) dφ (9.17)

where Φv is the flux entering the volume of air V and I(φ) is the intensity of the light scattered in direction φ with respect to the incident beam. Note that the accurate determination of b requires the measurement and integration of light scattered out of the beam over all angles. Practical instruments measure the scattered light over a limited angle and rely on a high correlation between the limited integral and the full integral. Three measurement methods are used in these instruments: back scatter, forward scatter, and scatter integrated over a wide angle.

(a) Back scatter: In these instruments (Figure 9.4), a light beam is concentrated on a small volume of air in front of the transmitter, the receiver being located in the same housing and below the light source where it receives the light backscattered by the volume of air sampled. Several researchers have tried to find a relationship between visibility and the coefficient of back scatter, but it is generally accepted that that correlation is not satisfactory.


Figure 9.4. Visibility meter measuring back scatter

(b) Forward scatter: Several authors have shown that the best angle is between 20 and 50°. The instruments, therefore, comprise a transmitter and a receiver, the angle between the beams being 20 to 50°. Another arrangement involves placing either a single diaphragm half-way between a transmitter and a receiver or two diaphragms each a short distance from either a transmitter or a receiver. Figure 9.5 illustrates the two configurations that have been used.


Figure 9.5. Two configurations of visibility meters measuring forward scatter

(c) Scatter over a wide angle: Such an instrument, illustrated in Figure 9.6, which is usually known as an integrating nephelometer, is based on the principle of measuring scatter


over as wide an angle as possible, ideally 0 to 180°, but in practice about 0 to 120°. The receiver is positioned perpendicularly to the axis of the light source which provides light over a wide angle. Although, in theory, such an instrument should give a better estimate of the scatter coefficient than an instrument measuring over a small range of scattering angles, in practice it is more difficult to prevent the presence of the instrument from modifying the extinction coefficient in the air sampled. Integrating nephelometers are not widely used for measuring MOR, but this type of instrument is often used for measuring pollutants.
Figure 9.6. Visibility meter measuring scattered light over a wide angle (components: light source, receiver and black hole)

In all the above instruments, as for most transmissometers, the receivers comprise photodetector cells or photodiodes. The light used is pulsed (for example, high-intensity discharge into xenon). These types of instruments require only limited space (1 to 2 m in general). They are, therefore, useful when no visibility objects or light sources are available (onboard ships, by roadsides, and so forth). Since the measurement relates only to a very small volume of air, the representativeness of measurements for the general state of the atmosphere at the site may be open to question. However, this representativeness can be improved by averaging a number of samples or measurements. In addition, smoothing of the results is sometimes achieved by eliminating extreme values. The use of these types of instruments has often been limited to specific applications (for example, highway visibility measurements, or to determine whether fog is present) or when less precise MOR measurements are adequate. These instruments are now being used in increasing numbers in automatic meteorological observation systems because of their ability to measure MOR over a wide range and their relatively low susceptibility to pollution compared with transmissometers.

9.3.4 Instrument exposure and siting

Measuring instruments should be located in positions which ensure that the measurements are representative for the intended purpose. Thus, for general synoptic purposes, the instruments should be installed at locations free from local atmospheric pollution, for example, smoke, industrial pollution, dusty roads. The volume of air in which the extinction coefficient or scatter coefficient is measured should normally be at the eye level of an observer, about 1.5 m above the ground. It should be borne in mind that transmissometers and instruments measuring the scatter coefficient should be installed in such a way that the sun is not in the optical field at any time of the day, either by mounting with a north-south optical axis (to ±45°) horizontally, for latitudes up to 50°, or by using a system of screens or baffles. For aeronautical purposes, measurements are to be representative of conditions at the airport. These conditions, which relate more specifically to airport operations, are described in Part II, Chapter 2. The instruments should be installed in accordance with the directions given by the manufacturers. Particular attention should be paid to the correct alignment of transmissometer transmitters and receivers and to the correct adjustment of the light beam. The poles on which the transmitter/receivers are mounted should be mechanically firm (while remaining frangible when installed at airports) to avoid any misalignment due to ground movement during freezing and, particularly, during thawing. In addition, the mountings must not distort under the thermal stresses to which they are exposed.

9.3.5 Calibration and maintenance

In order to obtain satisfactory and reliable observations, instruments for the measurement of MOR should be operated and maintained under the conditions prescribed by the manufacturers, and should be kept continuously in good working order. Regular checks and calibration in accordance with the manufacturer’s recommendations should ensure optimum performance. Calibration in very good visibility (over 10 to 15 km) should be carried out regularly. Atmospheric conditions resulting in erroneous calibration


must be avoided. When, for example, there are strong updraughts, or after heavy rain, considerable variations in the extinction coefficient are encountered in the layer of air close to the ground; if several transmissometers are in use on the site (in the case of airports), dispersion is observed in their measurements. Calibration should not be attempted under such conditions. Note that in the case of most transmissometers, the optical surfaces must be cleaned regularly, and daily servicing must be planned for certain instruments, particularly at airports. The instruments should be cleaned during and/or after major atmospheric disturbances, since rain or violent showers together with strong wind may cover the optical systems with a large number of water droplets and solid particles resulting in major MOR measurement errors. The same is true for snowfall, which could block the optical systems. Heating systems are often placed at the front of the optical systems to improve instrument performance under such conditions. Air-blowing systems are sometimes used to reduce the above problems and the need for frequent cleaning. However, it must be pointed out that these blowing and heating systems may generate air currents warmer than the surrounding air and may adversely affect the measurement of the extinction coefficient of the air mass. In arid zones, sandstorms or blowing sand may block the optical system and even damage it. 9.3.6 sources of error in the measurement of meteorological optical range and estimates of accuracy

General

All practical operational instruments for the measurement of MOR sample a relatively small region of the atmosphere compared with that scanned by a human observer. Instruments can provide an accurate measurement of MOR only when the volume of air that they sample is representative of the atmosphere around the point of observation out to a radius equal to MOR. It is easy to imagine a situation, with patchy fog or a local rain or snow storm, in which the instrument reading is misleading. However, experience has shown that such situations are not frequent and that the continuous monitoring of MOR using an instrument will often lead to the detection of changes in MOR before they are recognized by an unaided observer. Nevertheless, instrumental measurements of MOR must be interpreted with caution.

Another factor that must be taken into account when discussing representativeness of measurements is the homogeneity of the atmosphere itself. At all MOR values, the extinction coefficient of a small volume of the atmosphere normally fluctuates rapidly and irregularly, and individual measurements of MOR from scatter meters and short baseline transmissometers, which have no in-built smoothing or averaging system, show considerable dispersion. It is, therefore, necessary to take many samples and to smooth or average them to obtain a representative value of MOR. The analysis of the results from the First WMO Intercomparison of Visibility Measurements (WMO, 1990b) indicates that, for most instruments, no benefit is gained by averaging over more than 1 min, but for the “noisiest” instruments an averaging time of 2 min is preferable.

Accuracy of telephotometers and visual extinction meters

Visual measurements based on the extinction coefficient are difficult to take. The main source of error is the variability and uncertainty of the performance of the human eye. These errors have been described in the sections dealing with the methods of visual estimation of MOR.

Accuracy of transmissometers

The sources of error in transmissometer measurements may be summarized as follows:
(a) Incorrect alignment of transmitters and receivers;
(b) Insufficient rigidity and stability of transmitter/receiver mountings (freezing and thawing of the ground, thermal stress);
(c) Ageing and incorrect centring of lamps;
(d) Calibrating error (visibility too low or calibration carried out in unstable conditions affecting the extinction coefficient);
(e) Instability of system electronics;
(f) Remote transmission of the extinction coefficient as a low-current signal subject to interference from electromagnetic fields (particularly at airports). It is preferable to digitize the signals;
(g) Disturbance due to rising or setting of the sun, and poor initial orientation of the transmissometers;
(h) Atmospheric pollution dirtying the optical systems;
(i) Local atmospheric conditions (for example,


rain showers and strong winds, snow) giving unrepresentative extinction coefficient readings or diverging from the Koschmieder law (snow, ice crystals, rain, and so forth). The use of a transmissometer that has been properly calibrated and well maintained should give good representative MOR measurements if the extinction coefficient in the optical path of the instrument is representative of the extinction coefficient everywhere within the MOR. However, a transmissometer has only a limited range over which it can provide accurate measurements of MOR. A relative error curve for MOR may be plotted by differentiating the basic transmissometer formula (see equation 9.7). Figure 9.7 shows how the relative error varies with transmission, assuming that the measurement accuracy of the transmission factor T is 1 per cent.
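The relation plotted in Figure 9.7 (below) follows from differentiating equation 9.14: to first order, an absolute uncertainty ΔT in the measured transmittance produces a relative MOR error of ΔT/(T · |ln T|). A minimal sketch of this calculation is given here (not part of the Guide; the 75 m baseline and 1 per cent error are taken from the example discussed in the text).

```python
# Sketch (illustrative, not from the Guide): relative error in MOR implied by
# equation 9.14 for an absolute transmittance error delta_t, |dP/P| = delta_t / (T |ln T|).
import math

LN20 = math.log(1 / 0.05)                      # ln(1/0.05)

def relative_mor_error(mor, baseline, delta_t=0.01):
    """Relative MOR error for a transmittance measured with absolute error delta_t."""
    t = math.exp(-LN20 * baseline / mor)       # transmittance over the baseline
    return delta_t / (t * abs(math.log(t)))

baseline = 75.0                                # m, the example used for Figure 9.7
for mor in (55, 95, 500, 800, 2000, 4000):     # m
    print(mor, round(100 * relative_mor_error(mor, baseline), 1), "%")
# Prints roughly 14.5, 4.5, 3.5, 4.7, 10.0 and 18.8 per cent: the error stays near
# or below 5 per cent between about 1.25 and 10.7 baseline lengths and grows
# rapidly towards both ends of the range, as in Figure 9.7.
```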
Figure 9.7. Error in measurements of meteorological optical range as a function of a 1 per cent error in transmittance (transmissometer baseline 75 m; the curve indicates measuring ranges of about 95 m to 800 m, 65 m to 2 000 m and 55 m to 4 000 m for accepted errors of 5, 10 and 20 per cent, respectively)

This 1 per cent value of transmission error, which may be considered as correct for many older instruments, does not include instrument drift, dirt on optical components, or the scatter of measurements due to the phenomenon itself. If the accuracy drops to around 2 to 3 per cent (taking the other factors into account), the relative error values given on the vertical axis of the graph must be multiplied by the same factor of 2 or 3. Note also that the relative MOR measurement error increases exponentially at each end of the curve, thereby setting both upper and lower limits to the MOR measurement range. The example shown by the curve indicates the limit of the measuring range if an error of 5, 10 or 20 per cent is accepted at each end of the range measured, with a baseline of 75 m. It may also be deduced that, for MOR measurements between the limits of 1.25 and 10.7 times the baseline length, the relative MOR error should be low and of the order of 5 per cent, assuming that the error of T is 1 per cent. The relative error of MOR exceeds 10 per cent when MOR is less than 0.87 times the baseline length or more than 27 times this length. When the measurement range is extended further, the error increases rapidly and becomes unacceptable. However, results from the First WMO Intercomparison of Visibility Measurements (WMO, 1990b) show that the best transmissometers, when properly calibrated and maintained, can provide measurements of MOR with a standard error of about 10 per cent when MOR is up to 60 times their baseline.

Accuracy of scatter meters

The principal sources of error in measurements of MOR taken with scatter meters are as follows:
(a) Calibration error (visibility too low or calibration carried out in unstable conditions affecting the extinction coefficient);
(b) Lack of repeatability in terms of procedure or materials when using opaque scatterers for calibration;
(c) Instability of system electronics;
(d) Remote transmission of the scatter coefficient as a low-current or voltage signal subject to interference from electromagnetic fields (particularly at airports). It is preferable to digitize the signals;
(e) Disturbance due to rising or setting of the sun, and poor initial orientation of the instrument;
(f) Atmospheric pollution dirtying the optical systems (these instruments are much less sensitive to dirt on their optics than transmissometers, but heavy soiling does have an effect);
(g) Atmospheric conditions (for example, rain, snow, ice crystals, sand, local pollution) giving a scatter coefficient that differs from the extinction coefficient.

Results from the First WMO Intercomparison of Visibility Measurements (WMO, 1990b) show that scatter meters are generally less accurate than transmissometers at low values of MOR and show greater variability in their readings. There was also evidence that scatter meters, as a class, were more affected by precipitation than transmissometers. However, the best scatter meters showed little or no susceptibility to precipitation and provided estimates of MOR with


standard deviation of about 10 per cent over a range of MOR from about 100 m to 50 km. Almost all the scatter meters in the intercomparison exhibited significant systematic error over part of their measurement range. Scatter meters showed very

low susceptibility to contamination of their optical systems. An overview of the differences between scatter meters and transmissometers is given by WMO (1992b).


REFERENCES AND FURTHER READING

International Electrotechnical Commission, 1987: International Electrotechnical Vocabulary. Chapter 845: Lighting, IEC 50.

Klett, J.D., 1985: Lidar inversion with variable backscatter/extinction ratios. Applied Optics, 24, pp. 1638–1643.

Middleton, W.E.K., 1952: Vision Through the Atmosphere. University of Toronto Press, Toronto.

Sheppard, B.E., 1983: Adaptation to MOR. Preprints of the Fifth Symposium on Meteorological Observations and Instrumentation (Toronto, 11–15 April 1983), pp. 226–269.

World Meteorological Organization, 1989: Guide on the Global Observing System. WMO-No. 488, Geneva.

World Meteorological Organization, 1990a: Guide on Meteorological Observation and Information Distribution Systems at Aerodromes. WMO-No. 731, Geneva.

World Meteorological Organization, 1990b: The First WMO Intercomparison of Visibility Measurements: Final Report (D.J. Griggs, D.W. Jones, M. Ouldridge and W.R. Sparks). Instruments and Observing Methods Report No. 41, WMO/TD-No. 401, Geneva.

World Meteorological Organization, 1992a: International Meteorological Vocabulary. WMO-No. 182, Geneva.

World Meteorological Organization, 1992b: Visibility measuring instruments: Differences between scatterometers and transmissometers (J.P. van der Meulen). Papers Presented at the WMO Technical Conference on Instruments and Methods of Observation (TECO-92) (Vienna, Austria, 11–15 May 1992), Instruments and Observing Methods Report No. 49, WMO/TD-No. 462, Geneva.

World Meteorological Organization, 2003: Manual on the Global Observing System. WMO-No. 544, Geneva.

CHAPTER 10

MEASUREMENT OF EVAPORATION

10.1 General

10.1.1 Definitions

The International Glossary of Hydrology (WMO/UNESCO, 1992) and the International Meteorological Vocabulary (WMO, 1992) present the following definitions (but note some differences):

(Actual) evaporation: Quantity of water evaporated from an open water surface or from the ground.

Transpiration: Process by which water from vegetation is transferred into the atmosphere in the form of vapour.

(Actual) evapotranspiration (or effective evapotranspiration): Quantity of water vapour evaporated from the soil and plants when the ground is at its natural moisture content.

Potential evaporation (or evaporativity): Quantity of water vapour which could be emitted by a surface of pure water, per unit surface area and unit time, under existing atmospheric conditions.

Potential evapotranspiration: Maximum quantity of water capable of being evaporated in a given climate from a continuous expanse of vegetation covering the whole ground and well supplied with water. It includes evaporation from the soil and transpiration from the vegetation from a specific region in a specific time interval, expressed as depth of water.

If the term potential evapotranspiration is used, the types of evaporation and transpiration occurring must be clearly indicated. For more details on these terms refer to WMO (1994).

10.1.2 Units and scales

The rate of evaporation is defined as the amount of water evaporated from a unit surface area per unit of time. It can be expressed as the mass or volume of liquid water evaporated per area in unit of time, usually as the equivalent depth of liquid water evaporated per unit of time from the whole area. The unit of time is normally a day. The amount of evaporation should be read in millimetres (WMO, 2003). Depending on the type of instrument, the usual measuring accuracy is 0.1 to 0.01 mm.

10.1.3 Meteorological requirements

Estimates both of evaporation from free water surfaces and from the ground and of evapotranspiration from vegetation-covered surfaces are of great importance to hydrological modelling and in hydrometeorological and agricultural studies, for example, for the design and operation of reservoirs and irrigation and drainage systems. Performance requirements are given in Part I, Chapter 1. For daily totals, an extreme outer range is 0 to 100 mm, with a resolution of 0.1 mm. The uncertainty, at the 95 per cent confidence level, should be ±0.1 mm for amounts of less than 5 mm, and ±2 per cent for larger amounts. A figure of 1 mm has been proposed as an achievable accuracy. In principle, the usual instruments could meet these accuracy requirements, but difficulties with exposure and practical operation cause much larger errors (WMO, 1976).

Factors affecting the rate of evaporation from any body or surface can be broadly divided into two groups, meteorological factors and surface factors, either of which may be rate-limiting. The meteorological factors may, in turn, be subdivided into energy and aerodynamic variables. Energy is needed to change water from the liquid to the vapour phase; in nature, this is largely supplied by solar and terrestrial radiation. Aerodynamic variables, such as wind speed at the surface and vapour pressure difference between the surface and the lower atmosphere, control the rate of transfer of the evaporated water vapour.

It is useful to distinguish between situations where free water is present on the surface and those where it is not. Factors of importance include the amount and state of the water and also those surface characteristics which affect the transfer process to the air or through the body surface. Resistance to moisture transfer to the atmosphere depends, for example, on surface roughness; in arid and semi-arid areas, the size and shape of the evaporating surface is also extremely important. Transpiration from vegetation, in addition to the meteorological and surface factors already noted, is largely determined by plant characteristics and responses. These include, for example, the number


and size of stomata (openings in the leaves), and whether these are open or closed. Stomatal resistance to moisture transfer shows a diurnal response but is also considerably dependent upon the availability of soil moisture to the rooting system. The availability of soil moisture for the roots and for the evaporation from bare soil depends on the capillary supply, namely, on the texture and composition of the soil. Evaporation from lakes and reservoirs is influenced by the heat storage of the water body. Methods for estimating evaporation and evapotranspiration are generally indirect; either by point measurements by an instrument or gauge, or by calculation using other measured meteorological variables (WMO, 1997).

10.1.4 Measurement methods

Direct measurements of evaporation or evapotranspiration from extended natural water or land surfaces are not practicable at present. However, several indirect methods derived from point measurements or other calculations have been developed which provide reasonable results.

The water loss from a standard saturated surface is measured with evaporimeters, which may be classified as atmometers and pan or tank evaporimeters. These instruments do not directly measure either evaporation from natural water surfaces, actual evapotranspiration or potential evapotranspiration. The values obtained cannot, therefore, be used without adjustment to arrive at reliable estimates of lake evaporation or of actual and potential evapotranspiration from natural surfaces.

An evapotranspirometer (lysimeter) is a vessel or container placed below the ground surface and filled with soil, on which vegetation can be cultivated. It is a multi-purpose instrument for the study of several phases of the hydrological cycle under natural conditions. Estimates of evapotranspiration (or evaporation in the case of bare soil) can be made by measuring and balancing all the other water budget components of the container, namely, precipitation, underground water drainage, and change in water storage of the block of soil. Usually, surface runoff is eliminated. Evapotranspirometers can also be used for the estimation of the potential evaporation of the soil or of the potential evapotranspiration of plant-covered soil, if the soil moisture is kept at field capacity.

For reservoirs or lakes, and for plots or small catchments, estimates may be made by water budget, energy budget, aerodynamic and complementarity approaches. The latter techniques are discussed in section 10.5. It should also be emphasized that different evaporimeters or lysimeters represent physically different measurements. The adjustment factors required for them to represent lake or actual or potential evaporation and evapotranspiration are necessarily different. Such instruments and their exposure should, therefore, always be described very carefully and precisely, in order to understand the measuring conditions as fully as possible. More details on all methods are found in WMO (1994).
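As a simple numerical illustration of the lysimeter water budget just described, the following sketch (not from the Guide; the variable names and values are invented) estimates evapotranspiration as the residual of precipitation, drainage and the change in stored water, with all terms expressed as depths of water over the container area.

```python
# Illustrative lysimeter water budget (hypothetical daily values, all in mm of water depth).
def evapotranspiration(precipitation_mm, drainage_mm, storage_change_mm):
    """ET as the residual of the container water budget (surface runoff assumed eliminated)."""
    return precipitation_mm - drainage_mm - storage_change_mm

# Example: 6.0 mm of rain, 1.2 mm drained from the base, soil water storage up by 1.8 mm.
print(evapotranspiration(6.0, 1.2, 1.8))   # 3.0 mm of evapotranspiration for the day
```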

10.2 Atmometers

10.2.1 Instrument types

An atmometer is an instrument that measures the loss of water from a wetted, porous surface. The wetted surfaces are either porous ceramic spheres, cylinders, plates, or exposed filter-paper discs saturated with water. The evaporating element of the Livingstone atmometer is a ceramic sphere of about 5 cm in diameter, connected to a water reservoir bottle by a glass or metal tube. The atmospheric pressure on the surface of the water in the reservoir keeps the sphere saturated with water. The Bellani atmometer consists of a ceramic disc fixed in the top of a glazed ceramic funnel, into which water is conducted from a burette that acts as a reservoir and measuring device. The evaporating element of the Piche evaporimeter is a disc of filter paper attached to the underside of an inverted graduated cylindrical tube, closed at one end, which supplies water to the disc. Successive measurements of the volume of water remaining in the graduated tube will give the amount lost by evaporation in any given time.

10.2.2 Measurement taken by atmometers

Although atmometers are frequently considered to give a relative measure of evaporation from plant surfaces, their measurements do not, in fact, bear any simple relation to evaporation from natural surfaces.


Readings from Piche evaporimeters with carefully standardized shaded exposures have been used with some success to derive the aerodynamic term, a multiplication of a wind function and the saturation vapour pressure deficit, required for evaporation estimation by, for example, Penman’s combination method after local correlations between them were obtained. While it may be possible to relate the loss from atmometers to that from a natural surface empirically, a different relation may be expected for each type of surface and for differing climates. Atmometers are likely to remain useful in small-scale surveys. Their great advantages are their small size, low cost and small water requirements. Dense networks of atmometers can be installed over a small area for micrometeorological studies. The use of atmometers is not recommended for water resource surveys if other data are available. 10.2.3 sources of error in atmometers

One of the major problems in the operation of atmometers is keeping the evaporating surfaces clean. Dirty surfaces will affect significantly the rate of evaporation, in a way comparable to the wet bulb in psychrometry. Furthermore, the effect of differences in their exposure on evaporation measurements is often remarkable. This applies particularly to the exposure to air movement around the evaporating surface when the instrument is shaded.

10.3 Evaporation pans and tanks

Evaporation pans or tanks have been made in a variety of shapes and sizes and there are different modes of exposing them. Among the various types of pans in use, the United States Class A pan, the Russian GGI-3000 pan and the Russian 20 m2 tank are described in the following subsections. These instruments are now widely used as standard network evaporimeters and their performance has been studied under different climatic conditions over fairly wide ranges of latitude and elevation. The pan data from these instruments possess stable, albeit complicated and climate-zone-dependent, relationships with the meteorological elements determining evaporation, when standard construction and exposure instructions have been carefully followed. The adoption of the Russian 20 m2 tank as the international reference evaporimeter has been recommended.

10.3.1 United States Class A pan

The United States Class A pan is of cylindrical design, 25.4 cm deep and 120.7 cm in diameter. The bottom of the pan is supported 3 to 5 cm above the ground level on an open-frame wooden platform, which enables air to circulate under the pan, keeps the bottom of the pan above the level of water on the ground during rainy weather, and enables the base of the pan to be inspected without difficulty. The pan itself is constructed of 0.8 mm thick galvanized iron, copper or monel metal, and is normally left unpainted. The pan is filled to 5 cm below the rim (which is known as the reference level). The water level is measured by means of either a hookgauge or a fixed-point gauge. The hookgauge consists of a movable scale and vernier fitted with a hook, the point of which touches the water surface when the gauge is correctly set. A stilling well, about 10 cm across and about 30 cm deep, with a small hole at the bottom, breaks any ripples that may be present in the tank, and serves as a support for the hookgauge during an observation. The pan is refilled whenever the water level, as indicated by the gauge, drops by more than 2.5 cm from the reference level.

10.3.2 Russian GGI-3000 pan

The Russian GGI-3000 pan is of cylindrical design, with a surface area of 3 000 cm2 and a depth of 60 cm. The bottom of the pan is cone-shaped. The pan is set in the soil with its rim 7.5 cm above the ground. In the centre of the tank is a metal index tube upon which a volumetric burette is set when evaporation observations are made. The burette has a valve, which is opened to allow its water level to equalize that in the pan. The valve is then closed and the volume of water in the burette is accurately measured. The height of the water level above the metal index tube is determined from the volume of water in, and the dimensions of, the burette. A needle attached to the metal index tube indicates the height to which the water level in the pan should be adjusted. The water level should be maintained so that it does not fall more than 5 mm or rise more than 10 mm above the needle point. A GGI-3000 raingauge with a collector that has an area of 3 000 cm2 is usually installed next to the GGI-3000 pan.
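A common way to reduce successive level readings from pans of this kind to an evaporation total is a simple water budget over the period, correcting the observed fall in level for any precipitation and for water added or removed. The following minimal sketch (not from the Guide; the function name and numbers are illustrative, and practical corrections such as splash-out are ignored) shows that bookkeeping with all terms expressed as depths of water.

```python
# Illustrative pan water budget (hypothetical readings, all converted to mm of depth).
def pan_evaporation(level_start_mm, level_end_mm, rainfall_mm, water_added_mm=0.0):
    """Evaporation over the period as the residual of the pan water budget."""
    return (level_start_mm - level_end_mm) + rainfall_mm + water_added_mm

# Example: the water level fell from 187.4 mm to 183.1 mm, 2.6 mm of rain fell, no refill.
print(round(pan_evaporation(187.4, 183.1, 2.6), 1))   # 6.9 mm of evaporation
```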

10.3.3 Russian 20 m2 tank

This tank has a surface of 20 m2 and a diameter of about 5 m; it is cylindrical with a flat bottom and is 2 m deep. It is made of 4 to 5 mm thick welded iron sheets and is installed in the soil with its rim 7.5 cm above the ground. The inner and exposed outer surfaces of the tank are painted white. The tank is provided with a replenishing vessel and a stilling well with an index pipe upon which the volumetric burette is set when the water level in the tank is measured. Inside the stilling well, near the index pipe, a small rod terminating in a needle point indicates the height to which the water level is to be adjusted. The water level should always be maintained so that it does not fall more than 5 mm below or rise more than 10 mm above the needle point. A graduated glass tube attached laterally to the replenishing tank indicates the amount of water added to the tank and provides a rough check of the burette measurement.

10.3.4 Measurements taken by evaporation pans and tanks

The rate of evaporation from a pan or tank evaporimeter is measured by the change in level of its free water surface. This may be done by such devices as described above for Class A pans and GGI-3000 pans. Several types of automatic evaporation pans are in use. The water level in such a pan is kept constant by releasing water into the pan from a storage tank or by removing water from the pan when precipitation occurs. The amount of water added to, or removed from, the pan is recorded. In some tanks or pans, the level of the water is also recorded continuously by means of a float in the stilling well. The float operates a recorder.

Measurements of pan evaporation are the basis of several techniques for estimating evaporation and evapotranspiration from natural surfaces whose water loss is of interest. Measurements taken by evaporation pans are advantageous because they are, in any case, the result of the impact of the total meteorological variables, and because pan data are available immediately and for any period required. Pans are, therefore, frequently used to obtain information about evaporation on a routine basis within a network.

10.3.5 Exposure of evaporation pans and tanks

Three types of exposures are mainly used for pans and tanks as follows:
(a) Sunken, where the main body of the tank is below ground level, the evaporating surface being at or near the level of the surrounding surface;
(b) Above ground, where the whole of the pan and the evaporation surface are at some small height above the ground;
(c) Mounted on moored floating platforms on lakes or other water bodies.

Evaporation stations should be located at sites that are fairly level and free from obstructions such as trees, buildings, shrubs or instrument shelters. Such single obstructions, when small, should not be closer than 5 times their height above the pan; for clustered obstructions, this becomes 10 times. Plots should be sufficiently large to ensure that readings are not influenced by spray drift or by upwind edge effects from a cropped or otherwise different area. Such effects may extend to more than 100 m. The plot should be fenced off to protect the instruments and to prevent animals from interfering with the water level; however, the fence should be constructed in such a way that it does not affect the wind structure over the pan. The ground cover at the evaporation station should be maintained as similar as possible to the natural cover common to the area. Grass, weeds, and the like should be cut frequently to keep them below the level of the pan rim with regard to sunken pans (7.5 cm). Preferably this same grass height of below 7.5 cm applies also to Class A pans. Under no circumstance should the instrument be placed on a concrete slab or asphalt, or on a layer of crushed rock. This type of evaporimeter should not be shaded from the sun.

10.3.6 Sources of error in evaporation pans and tanks

The mode of pan exposure leads both to various advantages and to sources of measurement errors. Pans installed above the ground are inexpensive and easy to install and maintain. They stay cleaner than sunken tanks as dirt does not, to any large extent, splash or blow into the water from the surroundings. Any leakage that develops after installation is relatively easy to detect and rectify. However, the amount of water evaporated is greater than that from sunken pans, mainly because of the additional radiant energy intercepted by the sides. Adverse side-wall effects can be largely eliminated by using an insulated pan, but this adds to the cost,

CHaPTEr 10. MEaSurEMENT OF EvaPOraTION

I.10–5

would violate standard construction instructions and would change the “stable” relations mentioned in section 10.3. Sinking the pan into the ground tends to reduce objectionable boundary effects, such as radiation on the side walls and heat exchange between the atmosphere and the pan itself. But the disadvantages are as follows: (a) More unwanted material collects in the pan, with the result that it is difficult to clean; (b) Leaks cannot easily be detected and rectified; (c) The height of the vegetation adjacent to the pan is somewhat more critical. Moreover, appreciable heat exchange takes place between the pan and the soil, and this depends on many factors, including soil type, water content and vegetation cover. A floating pan approximates more closely evaporation from the lake than from an onshore pan exposed either above or at ground level, even though the heat-storage properties of the floating pan are different from those of the lake. It is, however, influenced by the particular lake in which it floats and it is not necessarily a good indicator of evaporation from the lake. Observational difficulties are considerable and, in particular, splashing frequently renders the data unreliable. Such pans are also costly to install and operate. In all modes of exposure it is most important that the tank should be made of non-corrosive material and that all joints be made in such a way as to minimize the risk of the tank developing leaks. Heavy rain and very high winds are likely to cause splash-out from pans and may invalidate the measurements. The level of the water surface in the evaporimeter is important. If the evaporimeter is too full, as much as 10 per cent (or more) of any rain falling may splash out, leading to an overestimate of evaporation. Too low a water level will lead to a reduced evaporation rate (of about 2.5 per cent for each centimetre below the reference level of 5 cm, in temperate regions) due to excessive shading and sheltering by the rim. If the water depth is allowed to become very shallow, the rate of evaporation rises due to increased heating of the water surface. It is advisable to restrict the permitted water-level range either by automatic methods, by adjusting the level at each reading, or by taking action to

remove water when the level reaches an upper-limit mark, and to add water when it reaches a lowerlimit mark. 10.3.7 Maintenance of evaporation pans and tanks

An inspection should be carried out at least once a month, with particular attention being paid to the detection of leaks. The pan should be cleaned out as often as necessary to keep it free from litter, sediment, scum and oil films. It is recommended that a small amount of copper sulphate, or of some other suitable algicide, be added to the water to restrain the growth of algae.

If the water freezes, all the ice should be broken away from the sides of the tank and the measurement of the water level should be taken while the ice is floating. Provided that this is done, the fact that some of the water is frozen will not significantly affect the water level. If the ice is too thick to be broken, the measurement should be postponed until it can be broken; the evaporation should then be determined for the extended period.

It is often necessary to protect the pan from birds and other small animals, particularly in arid and tropical regions. This may be achieved by the use of the following:
(a) Chemical repellents: In all cases where such protection is used, care must be taken not to change significantly the physical characteristics of the water in the evaporimeter;
(b) A wire-mesh screen supported over the pan: Standard screens of this type are in routine use in a number of areas. They prevent water loss caused by birds and animals, but also reduce the evaporation loss by partly shielding the water from solar radiation and by reducing wind movement over the water surface. In order to obtain an estimate of the error introduced by the effect of the wire-mesh screen on the wind field and the thermal characteristics of the pan, it is advisable to compare readings from the protected pan with those of a standard pan at locations where interference does not occur. Tests with a protective cylinder made of 25 mm hexagonal-mesh steel wire netting supported by an 8 mm steel-bar framework showed a consistent reduction of 10 per cent in the evaporation rate at three different sites over a two-year period.

10.4 Evapotranspirometers (lysimeters)

Several types of lysimeters have been described in the technical literature. Details of the design of some instruments used in various countries are described in WMO (1966; 1994). In general, a lysimeter consists of the soil-filled inner container and retaining walls or an outer container, as well as special devices for measuring percolation and changes in the soil-moisture content. There is no universal international standard lysimeter for measuring evapotranspiration. The surface area of lysimeters in use varies from 0.05 to some 100 m2 and their depth varies from 0.1 to 5 m. According to their method of operation, lysimeters can be classified into non-weighable and weighable instruments. Each of these devices has its special merits and drawbacks, and the choice of any type of lysimeter depends on the problem to be studied.

Non-weighable (percolation-type) lysimeters can be used only for long-term measurements, unless the soil-moisture content can be measured by some independent and reliable technique. Large-area percolation-type lysimeters are used for water budget and evapotranspiration studies of tall, deep-rooting vegetation cover, such as mature trees. Small, simple types of lysimeters in areas with bare soil or grass and crop cover could provide useful results for practical purposes under humid conditions. This type of lysimeter can easily be installed and maintained at a low cost and is, therefore, suitable for network operations.

Weighable lysimeters, unless of a simple microlysimeter type for soil evaporation, are much more expensive, but their advantage is that they secure reliable and precise estimates of short-term values of evapotranspiration, provided that the necessary design, operation and siting precautions have been taken. Several weighing techniques using mechanical or hydraulic principles have been developed. The simpler, small lysimeters are usually lifted out of their sockets and transferred to mechanical scales by means of mobile cranes. The container of a lysimeter can be mounted on a permanently installed mechanical scale for continuous recording. The design of the weighing and recording system can be considerably simplified by using load cells with strain gauges of variable electrical resistance. The hydraulic weighing systems use the principle of fluid displacement resulting from the changing buoyancy of a floating container (the so-called floating lysimeter), or the principle of fluid pressure changes in hydraulic load cells.

The large weighable and recording lysimeters are recommended for precision measurements in research centres and for standardization and parameterization of other methods of evapotranspiration measurement and the modelling of evapotranspiration. Small weighable types of lysimeters are quite useful and suitable for network operation. Microlysimeters for soil evaporation are a relatively new development.

10.4.1 Measurements taken by lysimeters

The rate of evapotranspiration may be estimated from the general equation of the water budget for the lysimeter containers. Evapotranspiration equals precipitation/irrigation minus percolation minus change in water storage. Hence, the observational programme on lysimeter plots includes precipitation/irrigation, percolation and change in soil water storage. It is useful to complete this programme through observations of plant growth and development.

Precipitation – and irrigation, if any – is preferably measured at ground level by standard methods. Percolation is collected in a tank and its volume may be measured at regular intervals or recorded. For precision measurements of the change in water storage, the careful gravimetric techniques described above are used. When weighing, the lysimeter should be sheltered to avoid wind-loading effects.

The application of the volumetric method is quite satisfactory for estimating long-term values of evapotranspiration. With this method, measurements are taken of the amount of precipitation and percolation. It is assumed that a change in water storage tends to zero over the period of observation. Changes in the soil moisture content may be determined by bringing the moisture in the soil up to field capacity at the beginning and at the end of the period.
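A minimal sketch of this water-budget bookkeeping, for illustration only and not part of the Guide: the storage change is taken here from the weighed mass change of a weighable lysimeter (1 kg of water over 1 m2 corresponds to a 1 mm layer), and all names are hypothetical.

def lysimeter_evapotranspiration_mm(precipitation_mm, irrigation_mm,
                                    percolation_mm, delta_mass_kg, area_m2):
    """Evapotranspiration from the lysimeter water budget, in millimetres.

    ET = precipitation + irrigation - percolation - change in storage,
    where the storage change (mm) follows from the weighed mass change:
    1 kg of water over 1 m2 equals a 1 mm layer of water.
    """
    delta_storage_mm = delta_mass_kg / area_m2
    return precipitation_mm + irrigation_mm - percolation_mm - delta_storage_mm

# Example: 12.0 mm rain, no irrigation, 3.5 mm percolation and a weighed
# mass increase of 8 kg on a 2 m2 lysimeter (a storage gain of 4.0 mm)
print(lysimeter_evapotranspiration_mm(12.0, 0.0, 3.5, 8.0, 2.0))  # 4.5 mm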

10.4.2 Exposure of evapotranspirometers

Observations of evapotranspiration should be representative of the plant cover and moisture conditions of the general surroundings of the station (WMO, 2003). In order to simulate representative evapotranspiration rates, the soil and


plant cover of the lysimeter should correspond to the soil and vegetation of the surrounding area, and disturbances caused by the existence of the instrument should be minimized. The most important requirements for the exposure of lysimeters are given below.

In order to maintain the same hydromechanical properties of the soil, it is recommended that the lysimeter be placed into the container as an undisturbed block (monolith). In the case of light, rather homogeneous soils and a large container, it is sufficient to fill the container layer by layer in the same sequence and with the same density as in the natural profile.

In order to simulate the natural drainage process in the container, restricted drainage at the bottom must be prevented. Depending on the soil texture, it may be necessary to maintain the suction at the bottom artificially by means of a vacuum supply.

Apart from microlysimeters for soil evaporation, a lysimeter should be sufficiently large and deep, and its rim as low as practicable, to make it possible to have a representative, free-growing vegetation cover, without restriction to plant development.

In general, the siting of lysimeters is subject to fetch requirements similar to those for evaporation pans, namely, the plot should be located beyond the zone of influence of buildings, even single trees, meteorological instruments, and so on. In order to minimize the effects of advection, lysimeter plots should be located at a sufficient distance from the upwind edge of the surrounding area, that is, not less than 100 to 150 m. The prevention of advection effects is of special importance for measurements taken at irrigated land surfaces.

10.4.3 Sources of error in lysimeter measurements

Lysimeter measurements are subject to several sources of error caused by the disturbance of the natural conditions by the instrument itself. Some of the major effects are as follows:
(a) Restricted growth of the rooting system;
(b) Change of eddy diffusion by discontinuity between the canopy inside the lysimeter and in the surrounding area. Any discontinuity may be caused by the annulus formed by the containing and retaining walls and by discrepancies in the canopy itself;
(c) Insufficient thermal equivalence of the lysimeter to the surrounding area caused by:
    (i) Thermal isolation from the subsoil;
    (ii) Thermal effects of the air rising or descending between the container and the retaining walls;
    (iii) Alteration of the thermal properties of the soil through alteration of its texture and its moisture conditions;
(d) Insufficient equivalence of the water budget to that of the surrounding area caused by:
    (i) Disturbance of soil structure;
    (ii) Restricted drainage;
    (iii) Vertical seepage at walls;
    (iv) Prevention of surface runoff and lateral movement of soil water.

Some suitable arrangements exist to minimize lysimeter measurement errors, for example, regulation of the temperature below the container, reduction of vertical seepage at the walls by flange rings, and so forth. In addition to the careful design of the lysimeter equipment, sufficient representativeness of the plant community and the soil type of the area under study is of great importance. Moreover, the siting of the lysimeter plot must be fully representative of the natural field conditions.

10.4.4 Maintenance of lysimeters

Several arrangements are necessary to maintain the representativeness of the plant cover inside the lysimeter. All agricultural and other operations (sowing, fertilizing, mowing, and the like) in the container and surrounding area should be carried out in the same way and at the same time. In order to avoid errors due to rainfall catch, the plants near and inside the container should be kept vertical, and broken leaves and stems should not extend over the surface of the lysimeter. The maintenance of the technical devices is peculiar to each type of instrument and cannot be described here.

It is advisable to test the evapotranspirometer for leaks at least once a year by covering its surface to prevent evapotranspiration and by observing whether, over a period of days, the volume of drainage equals the amount of water added to its surface.

10.5 Estimation of evaporation from natural surfaces

Consideration of the factors which affect evaporation, as outlined in section 10.1.3, indicates that the rate of evaporation from a natural surface


will necessarily differ from that of an evaporimeter exposed to the same atmospheric conditions, because the physical characteristics of the two evaporating surfaces are not identical. In practice, evaporation or evapotranspiration rates from natural surfaces are of interest, for example, reservoir or lake evaporation, crop evaporation, as well as areal amounts from extended land surfaces such as catchment areas. In particular, accurate areal estimates of evapotranspiration from regions with varied surface characteristics and land-use patterns are very difficult to obtain (WMO, 1966; 1997). Suitable methods for the estimation of lake or reservoir evaporation are the water budget, energy budget and aerodynamic approaches, the combination method of aerodynamic and energy-balance equations, and the use of a complementarity relationship between actual and potential evaporation. Furthermore, pan evaporation techniques exist which use pan evaporation for the establishment of a lake-to-pan relation. Such relations are specific to each pan type and mode of exposure. They also depend on the climatic conditions (see WMO, 1985; 1994 (Chapter 37)). The water non-limiting point or areal values of evapotranspiration from vegetation-covered land surfaces may be obtained by determining such potential (or reference crop) evapotranspiration with the same methods as those indicated above for lake applications, but adapted to vegetative conditions. Some methods use additional growth stage-dependent coefficients for each type of vegetation, such as crops, and/or an integrated crop stomatal resistance value for the vegetation as a whole. The Royal Netherlands Meteorological Institute employs the following procedure established by G.F. Makkink (Hooghart, 1971) for calculating the daily (24 h) reference vegetation evaporation from the averaged daily air temperature and the daily amount of global radiation as follows: Saturation vapour pressure at air temperature T:

es(T) = 6.107 · 10^[7.5 T / (237.3 + T)]    (hPa)

Slope of the curve of saturation water vapour pressure versus temperature at T:

δ(T) = 7.5 · 237.3 · ln(10) · es(T) / (237.3 + T)^2    (hPa/°C)

Psychrometric constant:

γ(T) = 0.646 + 0.0006 T    (hPa/°C)

Specific heat of evaporation of water:

λ(T) = 1 000 · (2 501 – 2.38 T)    (J/kg)

Density of water: ρ = 1 000 kg/m3

Global radiation (24 h amount): Q    (J/m2)

Air temperature (24 h average): T    (°C)

Daily reference vegetation evaporation:

Er = 1 000 · 0.65 · δ(T) · Q / [{δ(T) + γ(T)} · ρ · λ(T)]    (mm)

Note: The constant 1 000 is for conversion from metres to millimetres; the constant 0.65 is a typical empirical constant.
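For illustration, the calculation above can be coded directly; this is an informal sketch (the names are not from the Guide) using the constants exactly as given.

import math

def makkink_reference_evaporation_mm(t_mean_c, global_radiation_j_m2):
    """Daily (24 h) reference vegetation evaporation Er, in mm, after Makkink.

    t_mean_c: 24 h average air temperature (°C)
    global_radiation_j_m2: 24 h amount of global radiation (J/m2)
    """
    # Saturation vapour pressure (hPa) and its slope with temperature (hPa/°C)
    e_s = 6.107 * 10 ** (7.5 * t_mean_c / (237.3 + t_mean_c))
    delta = 7.5 * 237.3 * math.log(10) * e_s / (237.3 + t_mean_c) ** 2
    gamma = 0.646 + 0.0006 * t_mean_c          # psychrometric constant (hPa/°C)
    lam = 1000.0 * (2501.0 - 2.38 * t_mean_c)  # heat of evaporation of water (J/kg)
    rho = 1000.0                               # density of water (kg/m3)
    return 1000.0 * 0.65 * delta * global_radiation_j_m2 / ((delta + gamma) * rho * lam)

# Example: T = 15 °C and 15 MJ/m2 of global radiation give roughly 2.5 mm
print(round(makkink_reference_evaporation_mm(15.0, 15.0e6), 2))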

By relating the measured rate of actual evapotranspiration to estimates of the water non-limiting potential evapotranspiration and subsequently relating this normalized value to the soil water content, soil water deficits, or the water potential in the root zone, it is possible to devise coefficients with which the actual evapotranspiration rate can be calculated for a given soil water status. Point values of actual evapotranspiration from land surfaces can be estimated more directly from observations of the changes in soil water content measured by sampling soil moisture on a regular basis. Evapotranspiration can be measured even more accurately using a weighing lysimeter. Further methods make use of turbulence measurements (for example, eddy-correlation method) and profile measurements (for example, in boundary-layer data methods and, at two heights, in the Bowen-ratio energy-balance method). They are much more expensive and require special instruments and sensors for humidity, wind speed and temperature. Such estimates, valid for the type of soil and canopy under study, may be used as reliable independent reference values in the development of empirical relations for evapotranspiration modelling.



The difficulty in determining basin evapotranspiration arises from the discontinuities in surface characteristics which cause variable evapotranspiration rates within the area under consideration. When considering short-term values, it is necessary to estimate evapotranspiration by using empirical relationships. Over a long period (in order to minimize storage effects) the water-budget approach can be used to estimate basin evapotranspiration (see WMO, 1971). One approach, suitable for estimates from extended areas, refers to the atmospheric water balance and derives areal evapotranspiration from radiosonde data. WMO (1994, Chapter 38) describes the above-mentioned methods, their advantages and their application limits.

The measurement of evaporation from a snow surface is difficult and probably no more accurate than the computation of evaporation from water. Evaporimeters made of polyethylene or colourless plastic are used in many countries for the measurement of evaporation from snow-pack surfaces; observations are made only when there is no snowfall. Estimates of evaporation from snow cover can be made from observations of air humidity and wind speed at one or two levels above the snow surface and at the snow-pack surface, using the turbulent diffusion equation. The estimates are most reliable when evaporation values are computed for periods of five days or more.
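Purely to illustrate the turbulent diffusion (bulk-transfer) approach mentioned above, and not as the formulation prescribed by this Guide, a one-level estimate could be sketched as follows; the transfer coefficient and the example values are assumptions.

def snow_evaporation_rate(air_density_kg_m3, transfer_coeff, wind_speed_m_s,
                          q_surface, q_air):
    """Bulk-transfer estimate of evaporation (sublimation) from a snow surface.

    E = rho_a * C_e * u * (q_s - q_a), returned in kg m-2 s-1 (i.e. mm s-1),
    where q_surface and q_air are specific humidities (kg/kg) at the
    snow-pack surface and at the observation level.
    """
    return air_density_kg_m3 * transfer_coeff * wind_speed_m_s * (q_surface - q_air)

# Example with assumed values: rho_a = 1.25 kg/m3, C_e = 1.5e-3, u = 3 m/s,
# q_s = 3.0 g/kg, q_a = 2.2 g/kg; converted to a daily total in mm
rate = snow_evaporation_rate(1.25, 1.5e-3, 3.0, 3.0e-3, 2.2e-3)
print(round(rate * 86400, 2))  # about 0.4 mm per day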

REFERENCES AND FURTHER READING

Hooghart, J.C. (ed.), 1971: Evaporation and Weather. TNO Committee of Hydrological Research, Technical Meeting 44, Proceedings and Information No. 39, TNO, The Hague.
World Meteorological Organization, 1966: Measurement and Estimation of Evaporation and Evapotranspiration. Technical Note No. 83, WMO-No. 201.TP.105, Geneva.
World Meteorological Organization, 1971: Problems of Evaporation Assessment in the Water Balance (C.E. Hounam). WMO/IHD Report No. 13, WMO-No. 285, Geneva.
World Meteorological Organization, 1973: Atmospheric Vapour Flux Computations for Hydrological Purposes (J.P. Peixoto). WMO/IHD Report No. 20, WMO-No. 357, Geneva.
World Meteorological Organization, 1976: The CIMO International Evaporimeter Comparisons. WMO-No. 449, Geneva.
World Meteorological Organization, 1977: Hydrological Application of Atmospheric Vapour-Flux Analyses (E.M. Rasmusson). Operational Hydrology Report No. 11, WMO-No. 476, Geneva.
World Meteorological Organization, 1985: Casebook on Operational Assessment of Areal Evaporation. Operational Hydrology Report No. 22, WMO-No. 635, Geneva.
World Meteorological Organization, 1992: International Meteorological Vocabulary. Second edition, WMO-No. 182, Geneva.
World Meteorological Organization/United Nations Educational, Scientific and Cultural Organization, 1992: International Glossary of Hydrology. WMO-No. 385, Geneva.
World Meteorological Organization, 1994: Guide to Hydrological Practices. Fifth edition, WMO-No. 168, Geneva.
World Meteorological Organization, 1997: Estimation of Areal Evapotranspiration. Technical Reports in Hydrology and Water Resources No. 56, WMO/TD-No. 785, Geneva.
World Meteorological Organization, 2003: Manual on the Global Observing System. Volume I, WMO-No. 544, Geneva.

CHAPTER 11

MEASUREMENT OF SOIL MOISTURE

11.1 General

Soil moisture is an important component in the atmospheric water cycle, both on a small agricultural scale and in large-scale modelling of land/atmosphere interaction. Vegetation and crops always depend more on the moisture available at root level than on precipitation occurrence. Water budgeting for irrigation planning, as well as the actual scheduling of irrigation action, requires local soil moisture information. Knowledge of the degree of soil wetness helps to forecast the risk of flash floods, or the occurrence of fog.

Nevertheless, soil moisture has seldom been observed routinely at meteorological stations. Documentation of soil wetness was usually restricted to the description of the “state of the ground” by means of WMO Code Tables 0901 and 0975, and its measurement was left to hydrologists, agriculturalists and other actively interested parties. Around 1990 the interest of meteorologists in soil moisture measurement increased. This was partly because, after the pioneering work by Deardorff (1978), numerical atmosphere models at various scales became more adept at handling fluxes of sensible and latent heat in soil surface layers. Moreover, newly developed soil moisture measurement techniques are more feasible for meteorological stations than most of the classic methods.

To satisfy the increasing need for determining soil moisture status, the most commonly used methods and instruments will be discussed, including their advantages and disadvantages. Some less common observation techniques are also mentioned.

11.1.1 Definitions

Soil moisture determinations measure either the soil water content or the soil water potential.

Soil water content

Soil water content is an expression of the mass or volume of water in the soil, while the soil water potential is an expression of the soil water energy status. The relation between content and potential is not universal and depends on the characteristics of the local soil, such as soil density and soil texture.

Soil water content on the basis of mass is expressed in the gravimetric soil moisture content, θg, defined by:

θg = Mwater/Msoil (11.1)

where Mwater is the mass of the water in the soil sample and Msoil is the mass of dry soil that is contained in the sample. Values of θg in meteorology are usually expressed in per cent.

Because precipitation, evapotranspiration and solute transport variables are commonly expressed in terms of flux, volumetric expressions for water content are often more useful. The volumetric soil moisture content of a soil sample, θv, is defined as:

θv = Vwater/Vsample (11.2)

where Vwater is the volume of water in the soil sample and Vsample is the total volume of dry soil + air + water in the sample. Again, the ratio is usually expressed in per cent. The relationship between gravimetric and volumetric moisture contents is:

θv = θg (ρb/ρw) (11.3)

where ρb is the dry soil bulk density and ρw is the soil water density.

The basic technique for measuring soil water content is the gravimetric method, described below in section 11.2. Because this method is based on direct measurements, it is the standard with which all other methods are compared. Unfortunately, gravimetric sampling is destructive, rendering repeat measurements on the same soil sample impossible. Because of the difficulties of accurately measuring dry soil and water volumes, volumetric water contents are not usually determined directly.
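A minimal sketch of equations 11.1 to 11.3, assuming laboratory masses in kilograms and a known dry bulk density; the function names are illustrative and not taken from this Guide.

RHO_WATER = 1000.0  # soil water density, kg/m3

def gravimetric_water_content(mass_wet_soil_kg, mass_dry_soil_kg):
    """theta_g = Mwater/Msoil (equation 11.1), returned as a fraction."""
    return (mass_wet_soil_kg - mass_dry_soil_kg) / mass_dry_soil_kg

def volumetric_from_gravimetric(theta_g, bulk_density_kg_m3):
    """theta_v = theta_g * (rho_b/rho_w) (equation 11.3), as a fraction."""
    return theta_g * bulk_density_kg_m3 / RHO_WATER

# Example: a 0.0612 kg sample dries to 0.0515 kg; dry bulk density 1 300 kg/m3
theta_g = gravimetric_water_content(0.0612, 0.0515)     # about 0.19 (19 per cent)
theta_v = volumetric_from_gravimetric(theta_g, 1300.0)  # about 0.24 (24 per cent)
print(round(theta_g, 3), round(theta_v, 3))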

Soil water potential

Soil water potential describes the energy status of the soil water and is an important parameter for water transport analysis, water storage estimates and soil-plant-water relationships. A difference in water potential between two soil locations indicates a tendency for water flow, from high to low potential. When the soil is drying, the water potential becomes more negative and the work that must be


done to extract water from the soil increases. This makes water uptake by plants more difficult, so the water potential in the plant drops, resulting in plant stress and, eventually, severe wilting. Formally, the water potential is a measure of the ability of soil water to perform work, or, in the case of negative potential, the work required to remove the water from the soil. The total water potential ψt, the combined effect of all force fields, is given by: ψt = ψz + ψm + ψo + ψp (11.4)

where ψz is the gravitational potential, based on elevation above the mean sea level; ψm is the matric potential, suction due to attraction of water by the soil matrix; ψo is the osmotic potential, due to energy effects of solutes in water; and ψp is the pressure potential, the hydrostatic pressure below a water surface.

The potentials which are not related to the composition of water or soil are together called hydraulic potential, ψh. In saturated soil, this is expressed as ψh = ψz + ψp, while in unsaturated soil, it is expressed as ψh = ψz + ψm. When the phrase “water potential” is used in studies, perhaps with the notation ψw, it is advisable to check the author’s definition because this term has been used for ψm + ψz as well as for ψm + ψo.

The gradients of the separate potentials will not always be significantly effective in inducing flow. For example, ψo requires a semi-permeable membrane to induce flow, and ψp will exist in saturated or ponded conditions, but most practical applications are in unsaturated soil.

11.1.2 Units

In solving the mass balance or continuity equations for water, it must be remembered that the components of water content parameters are not dimensionless. Gravimetric water content is the weight of soil water contained in a unit weight of soil (kg water/kg dry soil). Likewise, volumetric water content is a volume fraction (m3 water/m3 soil).

The basic unit for expressing water potential is energy (in joules, kg m2 s–2) per unit mass, J kg–1. Alternatively, energy per unit volume (J m–3) is equivalent to pressure, expressed in pascals (Pa = kg m–1 s–2). Units encountered in older literature are bar (= 100 kPa), atmosphere (= 101.32 kPa), or pounds per square inch (= 6.895 kPa). A third class of units are those of pressure head in (centi)metres of water or mercury, energy per unit weight. The relation of the three potential unit classes is:

ψ (J kg–1) = ψ (Pa)/γ = g · ψ (m) (11.5)

where γ = 1 000 kg m–3 (density of water) and g = 9.81 m s–2 (gravity acceleration). Because the soil water potential has a large range, it is often expressed logarithmically, usually in pressure head of water. A common unit for this is called pF, and is equal to the base-10 logarithm of the absolute value of the head of water expressed in centimetres.
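The unit relations above can be collected in a small conversion helper; this is an illustrative sketch only (the names are not from the Guide), using γ = 1 000 kg m–3 and g = 9.81 m s–2 as given.

import math

GAMMA = 1000.0  # density of water, kg/m3
G = 9.81        # gravity acceleration, m/s2

def potential_j_per_kg_from_pa(psi_pa):
    """psi (J/kg) = psi (Pa) / gamma (equation 11.5)."""
    return psi_pa / GAMMA

def head_m_from_pa(psi_pa):
    """psi (m) = psi (J/kg) / g = psi (Pa) / (gamma * g)."""
    return psi_pa / (GAMMA * G)

def pf_from_pa(psi_pa):
    """pF: base-10 logarithm of the absolute head of water in centimetres."""
    return math.log10(abs(head_m_from_pa(psi_pa)) * 100.0)

# Example: a matric potential of -10 kPa (roughly field capacity)
print(round(potential_j_per_kg_from_pa(-10e3), 1))  # -10.0 J/kg
print(round(pf_from_pa(-10e3), 2))                  # about pF 2.0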

11.1.3 Meteorological requirements

Soil consists of individual particles and aggregates of mineral and organic materials, separated by spaces or pores which are occupied by water and air. The relative amount of pore space decreases with increasing soil grain size (intuitively one would expect the opposite). The movement of liquid water through soil depends upon the size, shape and generally the geometry of the pore spaces.

If a large quantity of water is added to a block of otherwise “dry” soil, some of it will drain away rapidly by the effects of gravity through any relatively large cracks and channels. The remainder will tend to displace some of the air in the spaces between particles, the larger pore spaces first. Broadly speaking, a well-defined “wetting front” will move downwards into the soil, leaving an increasingly thick layer retaining all the moisture it can hold against gravity. That soil layer is then said to be at “field capacity”, a state that for most soils occurs at about ψ ≈ –10 kPa (pF ≈ 2). This state must not be confused with the undesirable situation of “saturated” soil, where all the pore spaces are occupied by water. After a saturation event, such as heavy rain, the soil usually needs at least 24 h to reach field capacity.

When moisture content falls below field capacity, the subsequent limited movement of water in the soil is partly liquid, partly in the vapour phase by distillation (related to temperature gradients in the soil), and sometimes by transport in plant roots. Plant roots within the block will extract liquid water from the water films around the soil particles with which they are in contact. The rate at which this extraction is possible depends on the soil moisture potential. A point is reached at which the forces holding moisture films to soil particles cannot be overcome by root suction; plants are starved of



water and lose turgidity: soil moisture has reached the “wilting point”, which in most cases occurs at a soil water potential of –1.5 MPa (pF = 4.2). In agriculture, the soil water available to plants is commonly taken to be the quantity between field capacity and the wilting point, and this varies highly between soils: in sandy soils it may be less than 10 volume per cent, while in soils with much organic matter it can be over 40 volume per cent.

Usually it is desirable to know the soil moisture content and potential as a function of depth. Evapotranspiration models concern mostly a shallow depth (tens of centimetres); agricultural applications need moisture information at root depth (order of a metre); and atmospheric general circulation models incorporate a number of layers down to a few metres. For hydrological and water-balance needs – such as catchment-scale runoff models, as well as for effects upon soil properties such as soil mechanical strength, thermal conductivity and diffusivity – information on deep soil water content is needed.

The accuracy needed in water content determinations and the spatial and temporal resolution required vary by application. An often-occurring problem is the inhomogeneity of many soils, meaning that a single observation location cannot provide absolute knowledge of the regional soil moisture, but only relative knowledge of its change.
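To illustrate the figures quoted above (this is not a formulation from the Guide), the water available to plants in a root zone can be estimated from the volumetric water contents at field capacity and at the wilting point; the names and example values are hypothetical.

def plant_available_water_mm(theta_fc, theta_wp, root_depth_m):
    """Plant-available water in a root zone, as a depth in millimetres.

    theta_fc and theta_wp are volumetric water contents (fractions) at field
    capacity and at the wilting point; root_depth_m is the rooting depth (m).
    """
    return (theta_fc - theta_wp) * root_depth_m * 1000.0

# Example: a sandy soil (0.12 vs 0.04) and a loam (0.30 vs 0.12), 0.5 m root zone
print(round(plant_available_water_mm(0.12, 0.04, 0.5)))  # 40 mm
print(round(plant_available_water_mm(0.30, 0.12, 0.5)))  # 90 mm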

11.1.4 Measurement methods

The methods and instruments available to evaluate soil water status may be classified in three ways. First, a distinction is made between the determination of water content and the determination of water potential. Second, a so-called direct method requires the availability of sizeable representative terrain from which large numbers of soil samples can be taken for destructive evaluation in the laboratory. Indirect methods use an instrument placed in the soil to measure some soil property related to soil moisture. Third, methods can be ranged according to operational applicability, taking into account the regular labour involved, the degree of dependence on laboratory availability, the complexity of the operation and the reliability of the result. Moreover, the preliminary costs of acquiring instrumentation must be compared with the subsequent costs of local routine observation and data processing.

Reviews such as WMO (1968; 1989; 2001) and Schmugge, Jackson and McKim (1980) are very useful for learning about practical problems, but dielectric measurement methods were only developed well after 1980, so older reviews should not be relied upon too heavily when choosing an operational method.

There are four operational alternatives for the determination of soil water content. First, there is classic gravimetric moisture determination, which is a simple direct method. Second, there is lysimetry, a non-destructive variant of gravimetric measurement. A container filled with soil is weighed either occasionally or continuously to indicate changes in total mass in the container, which may in part or totally be due to changes in soil moisture (lysimeters are discussed in more detail in Part I, Chapter 10). Third, water content may be determined indirectly by various radiological techniques, such as neutron scattering and gamma absorption. Fourth, water content can be derived from the dielectric properties of soil, for example, by using time-domain reflectometry.

Soil water potential measurement can be performed by several indirect methods, in particular using tensiometers, resistance blocks and soil psychrometers. None of these instruments is effective at this time over the full range of possible water potential values. For an extended study of all methods of soil moisture measurement, up-to-date handbooks are provided by Klute (1986), Dirksen (1999), and Smith and Mullins (referenced here as Gardner and others, 2001, and Mullins, 2001).

11.2 Gravimetric direct measurement of soil water content

The gravimetric soil moisture content θg is typically determined directly. Soil samples of about 50 g are removed from the field with the best available tools (shovels, spiral hand augers, bucket augers, perhaps power-driven coring tubes), disturbing the sample soil structure as little as possible (Dirksen, 1999). The soil sample should be placed immediately in a leak-proof, seamless, pre-weighed and identified container. As the samples will be placed in an oven, the container should be able to withstand high temperatures without melting or losing significant mass. The most common soil containers are aluminium cans, but non-metallic containers should be used if the samples are to be dried in microwave ovens in the laboratory. If soil samples are to be transported for a considerable distance, tape should be used to seal the container to avoid moisture loss by evaporation.


The samples and container are weighed in the laboratory both before and after drying, the difference being the mass of water originally in the sample. The drying procedure consists in placing the open container in an electrically heated oven at 105°C until the mass stabilizes at a constant value. The drying times required usually vary between 16 and 24 h. Note that drying at 105°±5°C is part of the usually accepted definition of “soil water content”, originating from the aim to measure only the content of “free” water which is not bound to the soil matrix (Gardner and others, 2001). If the soil samples contain considerable amounts of organic matter, excessive oxidation may occur at 105°C and some organic matter will be lost from the sample. Although the specific temperature at which excessive oxidation occurs is difficult to specify, lowering the oven temperature from 105 to 70°C seems to be sufficient to avoid significant loss of organic matter, but this can lead to water content values that are too low. Oven temperatures and drying times should be checked and reported. Microwave oven drying for the determination of gravimetric water contents may also be used effectively (Gee and Dodson, 1981). In this method, soil water temperature is quickly raised to boiling point, then remains constant for a period due to the consumption of heat in vaporizing water. However, the temperature rapidly rises as soon as the energy absorbed by the soil water exceeds the energy needed for vaporizing the water. Caution should be used with this method, as temperatures can become high enough to melt plastic containers if stones are present in the soil sample. Gravimetric soil water contents of air-dry (25°C) mineral soil are often less than 2 per cent, but, as the soil approaches saturation, the water content may increase to values between 25 and 60 per cent, depending on soil type. Volumetric soil water content, θv, may range from less than 10 per cent for air-dry soil to between 40 and 50 per cent for mineral soils approaching saturation. Soil θv determination requires measurement of soil density, for example, by coating a soil clod with paraffin and weighing it in air and water, or some other method (Campbell and Henshall, 2001). Water contents for stony or gravelly soils can be grossly misleading. When rocks occupy an appreciable volume of the soil, they modify direct measurement of soil mass, without making a similar contribution to the soil porosity. For example, gravimetric water content may be 10 per cent for a soil sample with a bulk density of 2 000 kg m–3;

however, the water content of the same sample based on finer soil material (stones and gravel excluded) would be 20 per cent, if the bulk density of fine soil material was 1 620 kg m–3. Although the gravimetric water content for the finer soil fraction, θg,fines, is the value usually used for spatial and temporal comparison, there may also be a need to determine the volumetric water content for a gravelly soil. The latter value may be important in calculating the volume of water in a root zone. The relationship between the gravimetric water content of the fine soil material and the bulk volumetric water content is given by:

θv,stony = θg,fines (ρb/ρw)(1 + Mstones/Mfines) (11.6)

where θv,stony is the bulk volumetric water content of soil containing stones or gravel and Mstones and Mfines are the masses of the stone and fine soil fractions (Klute, 1986).

11.3 Soil water content: indirect methods

The capacity of soil to retain water is a function of soil texture and structure. When removing a soil sample, the soil being evaluated is disturbed, so its water-holding capacity is altered. Indirect methods of measuring soil water are helpful as they allow information to be collected at the same location for many observations without disturbing the soil water system. Moreover, most indirect methods determine the volumetric soil water content without any need for soil density determination.

11.3.1 Radiological methods

Two different radiological methods are available for measuring soil water content. One is the widely used neutron scatter method, which is based on the interaction of high-energy (fast) neutrons and the nuclei of hydrogen atoms in the soil. The other method measures the attenuation of gamma rays as they pass through soil. Both methods use portable equipment for multiple measurements at permanent observation sites and require careful calibration, preferably with the soil in which the equipment is to be used. When using any radiation-emitting device, some precautions are necessary. The manufacturer will provide a shield that must be used at all times. The only time the probe leaves the shield is when it is lowered into the soil access tube. When the guidelines and regulations regarding radiation hazards stipulated by the manufacturers and health


authorities are followed, there is no need to fear exposure to excessive radiation levels, regardless of the frequency of use. Nevertheless, whatever the type of radiation-emitting device used, the operator should wear some type of film badge that will enable personal exposure levels to be evaluated and recorded on a monthly basis.

11.3.1.1 Neutron scattering method

In neutron soil moisture detection (Visvalingam and Tandy, 1972; Greacen, 1981), a probe containing a radioactive source emitting high-energy (fast) neutrons and a counter of slow neutrons is lowered into the ground. The hydrogen nuclei, having about the same mass as neutrons, are at least 10 times as effective for slowing down neutrons upon collision as most other nuclei in the soil. Because in any soil most hydrogen is in water molecules, the density of slow “thermalized” neutrons in the vicinity of the neutron probe is nearly proportional to the volumetric soil water content. Some fraction of the slowed neutrons, after a number of collisions, will again reach the probe and its counter.

When the soil water content is large, not many neutrons are able to travel far before being thermalized and ineffective, and then 95 per cent of the counted returning neutrons come from a relatively small soil volume. In wet soil, the “radius of influence” may be only 15 cm, while in dry soil that radius may increase to 50 cm. Therefore, the measured soil volume varies with water content, and thin layers cannot be resolved. This method is hence less suitable to localize water-content discontinuities, and it cannot be used effectively in the top 20 cm of soil on account of the soil-air discontinuity.

Several source and detector arrangements are possible in a neutron probe, but it is best to have a probe with a double detector and a central source, typically in a cylindrical container. Such an arrangement allows for a nearly spherical zone of influence and leads to a more linear relation of neutron count to soil water content.

A cable is used to attach a neutron probe to the main instrument electronics, so that the probe can be lowered into a previously installed access tube. The access tube should be seamless and thick enough (at least 1.25 mm) to be rigid, but not so thick that the access tube itself slows neutrons down significantly. The access tube must be made of non-corrosive material, such as stainless steel, aluminium or plastic, although polyvinylchloride should be avoided as it absorbs slow neutrons. Usually, a straight tube with a diameter of 5 cm is sufficient for the probe to be lowered into the tube without a risk of jamming. Care should be taken in installing the access tube to ensure that no air voids exist between the tube and the soil matrix. At least 10 cm of the tube should extend above the soil surface, in order to allow the box containing the electronics to be mounted on top of the access tube. All access tubes should be fitted with a removable cap to keep rainwater from entering the tubes.

In order to enhance experimental reproducibility, the soil water content is not derived directly from the number of slow neutrons detected, but rather from a count ratio (CR), given by:

CR = Csoil/Cbackground (11.7)

where Csoil is the count of thermalized neutrons detected in the soil and Cbackground is the count of thermalized neutrons in a reference medium. All neutron probe instruments now come with a reference standard for these background calibrations, usually against water. The standard in which the probe is placed should be at least 0.5 m in diameter so as to represent an “infinite” medium. Calibration to determine Cbackground can be done by a series of ten 1 min readings, to be averaged, or by a single 1 h reading. Csoil is determined from averaging several soil readings at a particular depth/location. For calibration purposes, it is best to take three samples around the access tube and to average the water contents corresponding to the average CR calculated for that depth. A minimum of five different water contents should be evaluated for each depth. Although some calibration curves may be similar, a separate calibration for each depth should be conducted. The lifetime of most probes is more than 10 years.
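A hedged sketch of the calibration step just described: fitting a straight line of θv against the count ratio CR from gravimetrically sampled calibration points. A linear form is consistent with the double-detector arrangement mentioned above, but the actual calibration function should follow the manufacturer's guidance; the names and numbers are illustrative only.

def fit_linear_calibration(count_ratios, theta_v_values):
    """Least-squares fit of theta_v = a + b * CR from paired calibration data."""
    n = len(count_ratios)
    mean_cr = sum(count_ratios) / n
    mean_tv = sum(theta_v_values) / n
    b = sum((cr - mean_cr) * (tv - mean_tv)
            for cr, tv in zip(count_ratios, theta_v_values))
    b /= sum((cr - mean_cr) ** 2 for cr in count_ratios)
    a = mean_tv - b * mean_cr
    return a, b

# Illustrative calibration pairs (CR, theta_v as a fraction) for one depth
cr_values = [0.45, 0.70, 0.95, 1.20, 1.50]
tv_values = [0.08, 0.14, 0.21, 0.27, 0.35]
a, b = fit_linear_calibration(cr_values, tv_values)
print(round(a + b * 1.10, 3))  # theta_v estimated from a field reading of CR = 1.10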

11.3.1.2 Gamma-ray attenuation

Whereas the neutron method measures the volumetric water content in a large sphere, gamma-ray absorption scans a thin layer. The dual-probe gamma device is nowadays mainly used in the laboratory, since dielectric methods became operational for field use. Another reason for this is that gamma rays are more dangerous to work with than neutron scattering devices, and the operational costs of gamma-ray measurements are relatively high.

Changes in gamma attenuation for a given mass absorption coefficient can be related to changes in total soil density. As the attenuation of gamma rays is due to mass, it is not possible to determine water content unless the attenuation of gamma rays due


to the local dry soil density is known and remains unchanged with changing water content. Determining accurately the soil water content from the difference between the total and dry density attenuation values is therefore not simple.

Compared to neutron scattering, gamma-ray attenuation has the advantage of allowing accurate measurements at a few centimetres below the air-surface interface. Although the method has a high degree of resolution, the small soil volume evaluated will exhibit more spatial variation due to soil heterogeneities (Gardner and Calissendorff, 1967).

11.3.2 Soil water dielectrics

When a medium is placed in the electric field of a capacitor or waveguide, its influence on the electric forces in that field is expressed as the ratio between the forces in the medium and the forces which would exist in a vacuum. This ratio, called permittivity or “dielectric constant”, is for liquid water about 20 times larger than that of average dry soil, because water molecules are permanent dipoles. The dielectric properties of ice, and of water bound to the soil matrix, are comparable to those of dry soil. Therefore, the volumetric content of free soil water can be determined from the dielectric characteristics of wet soil by reliable, fast, non-destructive measurement methods, without the potential hazards associated with radioactive devices. Moreover, such dielectric methods can be fully automated for data acquisition. At present, two methods which evaluate soil water dielectrics are commercially available and used extensively, namely time-domain reflectometry and frequency-domain measurement.

11.3.2.1 Time-domain reflectometry

Time-domain reflectometry is a method which determines the dielectric constant of the soil by monitoring the travel of an electromagnetic pulse, which is launched along a waveguide formed by a pair of parallel rods embedded in the soil. The pulse is reflected at the end of the waveguide and its propagation velocity, which is inversely proportional to the square root of the dielectric constant, can be measured well by modern electronics. The most widely used relation between soil dielectrics and soil water content was experimentally summarized by Topp, Davis and Annan (1980) as follows:

θv = –0.053 + 0.029 ε – 5.5 · 10⁻⁴ ε² + 4.3 · 10⁻⁶ ε³ (11.8)

where ε is the dielectric constant of the soil water system. This empirical relationship has proved to be applicable in many soils, roughly independent of texture and gravel content (Drungil, Abt and Gish, 1989). However, soil-specific calibration is desirable for soils with low density or with a high organic content. For complex soil mixtures, the De Loor equation has proved useful (Dirksen and Dasberg, 1993).

Generally, the parallel probes are separated by 5 cm and vary in length from 10 to 50 cm; the rods of the probe can be of any metallic substance. The sampling volume is essentially a cylinder of a few centimetres in radius around the parallel probes (Knight, 1992). The coaxial cable from the probe to the signal-processing unit should not be longer than about 30 m. Soil water profiles can be obtained from a buried set of probes, each placed horizontally at a different depth, linked to a field data logger by a multiplexer.
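A minimal sketch of equation 11.8; in practice the dielectric constant would be derived from the measured pulse travel time, and the function name here is illustrative only.

def theta_v_from_dielectric(epsilon):
    """Volumetric water content (fraction) from the Topp, Davis and Annan (1980)
    relation: theta_v = -0.053 + 0.029*eps - 5.5e-4*eps**2 + 4.3e-6*eps**3."""
    return -0.053 + 0.029 * epsilon - 5.5e-4 * epsilon ** 2 + 4.3e-6 * epsilon ** 3

# Example: an apparent dielectric constant of 20 gives roughly 0.34 (34 per cent)
print(round(theta_v_from_dielectric(20.0), 3))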

11.3.2.2 Frequency-domain measurement

While time-domain reflectometry uses microwave frequencies in the gigahertz range, frequency-domain sensors measure the dielectric constant at a single microwave frequency in the megahertz range. The microwave dielectric probe utilizes an open-ended coaxial cable and a single reflectometer at the probe tip to measure amplitude and phase at a particular frequency. Soil measurements are referenced to air, and are typically calibrated with dielectric blocks and/or liquids of known dielectric properties. One advantage of using liquids for calibration is that a perfect electrical contact between the probe tip and the material can be maintained (Jackson, 1990). As a single, small probe tip is used, only a small volume of soil is ever evaluated, and soil contact is therefore critical. As a result, this method is excellent for laboratory or point measurements, but is likely to be subject to spatial variability problems if used on a field scale (Dirksen, 1999).


11.4 Soil water potential instrumentation

The basic instruments capable of measuring matric potential are sufficiently inexpensive and reliable to be used in field-scale monitoring programmes. However, each instrument has a limited accessible water potential range. Tensiometers work well only in wet soil, while resistance blocks do better in moderately dry soil.


11.4.1 Tensiometers

The most widely used and least expensive water potential measuring device is the tensiometer. Tensiometers are simple instruments, usually consisting of a porous ceramic cup and a sealed plastic cylindrical tube connecting the porous cup to some pressure-recording device at the top of the cylinder. They measure the matric potential, because solutes can move freely through the porous cup. The tensiometer establishes a quasi-equilibrium condition with the soil water system. The porous ceramic cup acts as a membrane through which water flows, and therefore must remain saturated if it is to function properly. Consequently, all the pores in the ceramic cup and the cylindrical tube are initially filled with de-aerated water. Once in place, the tensiometer will be subject to negative soil water potentials, causing water to move from the tensiometer into the surrounding soil matrix. The water movement from the tensiometer will create a negative potential or suction in the tensiometer cylinder which will register on the recording device. For recording, a simple U-tube filled with water and/or mercury, a Bourdon-type vacuum gauge or a pressure transducer (Marthaler and others, 1983) is suitable. If the soil water potential increases, water moves from the soil back into the tensiometer, resulting in a less negative water potential reading. This exchange of water between the soil and the tensiometer, as well as the tensiometer’s exposure to negative potentials, will cause dissolved gases to be released by the solution, forming air bubbles. The formation of air bubbles will alter the pressure readings in the tensiometer cylinder and will result in faulty readings. Another limitation is that the tensiometer has a practical working limit of ψ ≈ –85 kPa. Beyond –100 kPa (≈ 1 atm), water will boil at ambient temperature, forming water vapour bubbles which destroy the vacuum inside the tensiometer cylinder. Consequently, the cylinders occasionally need to be de-aired with a hand-held vacuum pump and then refilled. Under drought conditions, appreciable amounts of water can move from the tensiometer to the soil. Thus, tensiometers can alter the very condition they were designed to measure. Additional proof of this process is that excavated tensiometers often have accumulated large numbers of roots in the proximity of the ceramic cups. Typically, when the tensiometer acts as an “irrigator”, so much water is lost through the ceramic cups that a vacuum in the cylinder cannot be maintained, and the tensiometer gauge will be inoperative.

Before installation, but after the tensiometer has been filled with water and degassed, the ceramic cup must remain wet. Wrapping the ceramic cup in wet rags or inserting it into a container of water will keep the cup wet during transport from the laboratory to the field. In the field, a hole of the appropriate size and depth is prepared. The hole should be large enough to create a snug fit on all sides, and long enough so that the tensiometer extends sufficiently above the soil surface for de-airing and refilling access. Since the ceramic cup must remain in contact with the soil, it may be beneficial in stony soil to prepare a thin slurry of mud from the excavated site and to pour it into the hole before inserting the tensiometer. Care should also be taken to ensure that the hole is backfilled properly, thus eliminating any depressions that may lead to ponded conditions adjacent to the tensiometer. The latter precaution will minimize any water movement down the cylinder walls, which would produce unrepresentative soil water conditions.

Only a small portion of the tensiometer is exposed to ambient conditions, but its interception of solar radiation may induce thermal expansion of the upper tensiometer cylinder. Similarly, temperature gradients from the soil surface to the ceramic cup may result in thermal expansion or contraction of the lower cylinder. To minimize the risk of temperature-induced false water potential readings, the tensiometer cylinder should be shaded and constructed of non-conducting materials, and readings should be taken at the same time every day, preferably in the early morning.

A new development is the osmotic tensiometer, where the tube of the meter is filled with a polymer solution in order to function better in dry soil. For more information on tensiometers, see Dirksen (1999) and Mullins (2001).

11.4.2 Resistance blocks

Electrical resistance blocks, although insensitive to water potentials in the wet range, are excellent companions to the tensiometer. They consist of electrodes encased in some type of porous material that within about two days will reach a quasi-equilibrium state with the soil. The most common block materials are nylon fabric, fibreglass and gypsum, with a working range of about –50 kPa (for nylon) or –100 kPa (for gypsum) up to –1 500 kPa. Typical block sizes are 4 cm × 4 cm × 1 cm. Gypsum blocks last a few years, but less in very wet or saline soil (Perrier and Marsh, 1958).


This method determines water potential as a function of electrical resistance, measured with an alternating current bridge (usually ≈ 1 000 Hz) because direct current gives polarization effects. However, resistance decreases if soil is saline, falsely indicating a wetter soil. Gypsum blocks are less sensitive to soil saltiness effects because the electrodes are consistently exposed to a saturated solution of calcium sulphate. The output of gypsum blocks must be corrected for temperature (Aggelides and Londra, 1998). Because resistance blocks do not protrude above the ground, they are excellent for semi-permanent agricultural networks of water potential profiles, if installation is careful and systematic (WMO, 2001). When installing the resistance blocks it is best to dig a small trench for the lead wires before preparing the hole for the blocks, in order to minimize water movement along the wires to the blocks. A possible field problem is that shrinking and swelling soil may break contact with the blocks. On the other hand, resistance blocks do not affect the distribution of plant roots. Resistance blocks are relatively inexpensive. However, they need to be calibrated individually. This is generally accomplished by saturating the blocks in distilled water and then subjecting them to a predetermined pressure in a pressure-plate apparatus (Wellings, Bell and Raynor, 1985), at least at five different pressures before field installation. Unfortunately, the resistance is less on a drying curve than on a wetting curve, thus generating hysteresis errors in the field because resistance blocks are slow to equilibrate with varying soil wetness (Tanner and Hanks, 1952). As resistance-block calibration curves change with time, they need to be calibrated before installation and to be checked regularly afterwards, either in the laboratory or in the field.

11.4.3 Psychrometers

Psychrometers are used in laboratory research on soil samples as a standard for other techniques (Mullins, 2001), but a field version is also available, called the Spanner psychrometer (Rawlins and Campbell, 1986). This consists of a miniature thermocouple placed within a small chamber with a porous wall. The thermocouple is cooled by the Peltier effect, condensing water on a wire junction. As water evaporates from the junction, its temperature decreases and a current is produced which is measured by a meter. Such measurements are quick to respond to changes in soil water potential, but are very sensitive to temperature and salinity (Merrill and Rawlins, 1972).

The lowest water potential typically associated with active plant water uptake corresponds to a relative humidity of between 98 and 100 per cent. This implies that, if the water potential in the soil is to be measured accurately to within 10 kPa, the temperature would have to be controlled to better than 0.001 K. This means that the use of field psychrometers is most appropriate for low matric potentials, of less than –300 kPa. In addition, the instrument components differ in heat capacities, so diurnal soil temperature fluctuations can induce temperature gradients in the psychrometer (Brunini and Thurtell, 1982). Therefore, Spanner psychrometers should not be used at depths of less than 0.3 m, and readings should be taken at the same time each day, preferably in the early morning. In summary, soil psychrometry is a difficult and demanding method, even for specialists.

11.5 REMOTE SENSING OF SOIL MOISTURE

Earlier in this chapter it was mentioned that a single observation location cannot provide absolute knowledge of regional soil moisture, but only relative knowledge of its change, because soils are often very inhomogeneous. However, nowadays measurements from space-borne instruments using remote-sensing techniques are available for determining soil moisture in the upper soil layer. This allows interpolation at the mesoscale for estimation of evapotranspiration rates, evaluation of plant stress and so on, and also facilitates moisture balance input in weather models (Jackson and Schmugge, 1989; Saha, 1995). The usefulness of soil moisture determination at meteorological stations has been increased greatly thereby, because satellite measurements need “ground truth” to provide accuracy in the absolute sense. Moreover, station measurements are necessary to provide information about moisture in deeper soil layers, which cannot be observed from satellites or aircraft. Some principles of the airborne measurement of soil moisture are briefly given here; for more details see Part II, Chapter 8. Two uncommon properties of the water in soil make it accessible to remote sensing. First, as already discussed above in the context of time-domain reflectometry, the dielectric constant of water is an order of magnitude larger than that of dry soils at microwave lengths. In remote sensing, this feature can be used either passively or actively (Schmugge, Jackson and McKim, 1980). Passive sensing analyses the natural microwave emissions from the Earth’s surface, while active sensing refers to evaluating the backscatter of a satellite-sent signal.


The microwave radiometer response will range from an emissivity of 0.95 to 0.6 or lower for passive microwave measurements. For the active satellite radar measurements, an increase of about 10 dB in return is observed as soil goes from dry to wet. The microwave emission is referred to as brightness temperature Tb and is proportional to the emissivity β and the temperature of the soil surface, Tsoil, or:

Tb = β·Tsoil    (11.9)

where Tsoil is in kelvin and β depends on soil texture, surface roughness and vegetation. Any vegetation canopy will influence the soil component. The volumetric water content is related to the total active backscatter St by:

θv = L·(St – Sv)·(RA)^–1    (11.10)

where L is a vegetation attenuation coefficient; Sv is the backscatter from vegetation; R is a soil surface roughness term; and A is a soil moisture sensitivity term. As a result, microwave response to soil water content can be expressed as an empirical relationship. The sampling depth in the soil is of the order of 5 to 10 cm. The passive technique is robust, but its pixel resolution is limited to not less than 10 km because satellite antennas have a limited size. The active satellite radar pixel resolution is more than a factor of 100 better, but active sensing is very sensitive to surface roughness and requires calibration against surface data. The second remote-sensing feature of soil water is its relatively large heat capacity and thermal conductivity. Therefore, moist soils have a large thermal inertia. Accordingly, if cloudiness does not interfere, remote sensing of the diurnal range of surface temperature can be used to estimate soil moisture (Idso and others, 1975; Van de Griend, Camillo and Gurney, 1985).
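Equations 11.9 and 11.10 can be applied directly once the terms are known. The following Python sketch is illustrative only: the coefficient values are invented and are not calibration constants from this Guide.

# Illustrative application of equations 11.9 and 11.10 (coefficients invented).

def emissivity(tb_kelvin, t_soil_kelvin):
    """Equation 11.9 rearranged: beta = Tb / Tsoil."""
    return tb_kelvin / t_soil_kelvin

def volumetric_water_content(s_total, s_veg, l_veg, roughness, sensitivity):
    """Equation 11.10: theta_v = L * (St - Sv) / (R * A)."""
    return l_veg * (s_total - s_veg) / (roughness * sensitivity)

# Passive example: Tb = 260 K over soil at 290 K gives an emissivity near 0.9.
print(round(emissivity(260.0, 290.0), 2))

# Active example with purely notional backscatter values and coefficients.
print(round(volumetric_water_content(s_total=-12.0, s_veg=-2.0,
                                     l_veg=0.8, roughness=2.0,
                                     sensitivity=-20.0), 3))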

11.6 SITE SELECTION AND SAMPLE SIZE

Standard soil moisture observations at principal stations should be made at several depths between 10 cm and 1 m, and also lower if there is much deep infiltration. Observation frequency should be approximately once every week. Indirect measurement should not necessarily be carried out in the meteorological enclosure, but rather near it, below a sufficiently horizontal natural surface which is typical of the uncultivated environment. The representativity of any soil moisture observation point is limited because of the high probability of significant variations, both horizontally and vertically, of soil structure (porosity, density, chemical composition). Horizontal variations of soil water potential tend to be relatively less than such variations of soil water content. Gravimetric water content determinations are only reliable at the point of measurement, making a large number of samples necessary to describe adequately the soil moisture status of the site. To estimate the number of samples n needed at a local site to estimate soil water content at an observed level of accuracy (L), the sample size can be estimated from:

n = 4·(σ²/L²)    (11.11)

where σ² is the sample variance generated from a preliminary sampling experiment. For example, if a preliminary sampling yielded a (typical) σ² of 25 per cent and the accuracy level needed to be within 3 per cent, 12 samples would be required from the site (if it can be assumed that water content is normally distributed across the site). A regional approach divides the area into strata based on the uniformity of relevant variables within the strata, for example, similarity of hydrological response, soil texture, soil type, vegetative cover, slope, and so on. Each stratum can be sampled independently and the data recombined by weighting the results for each stratum by its relative area. The most critical factor controlling the distribution of soil water in low-sloping watersheds is topography, which is often a sufficient criterion for subdivision into spatial units of homogeneous response. Similarly, sloping rangeland will need to be more intensely sampled than flat cropland. However, the presence of vegetation tends to diminish the soil moisture variations caused by topography.
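The sample-size estimate of equation 11.11 is easily computed. The short Python sketch below (the function name is ours) simply reproduces the worked example above, with σ² = 25 and L = 3 per cent.

import math

def samples_required(sample_variance, accuracy_level):
    """Equation 11.11: n = 4 * sigma^2 / L^2, rounded up to a whole sample."""
    return math.ceil(4.0 * sample_variance / accuracy_level ** 2)

# Preliminary sampling gave sigma^2 = 25 (per cent squared); target accuracy 3 per cent.
print(samples_required(25.0, 3.0))   # 12 samples, as in the example above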


REFERENCES AND FURTHER READING

Aggelides, S.M. and P.A. Londra, 1998: Comparison of empirical equations for temperature correction of gypsum sensors. Agronomy Journal, 90, pp. 441–443.
Brunini, O. and G.W. Thurtell, 1982: An improved thermocouple hygrometer for in situ measurements of soil water potential. Soil Science Society of America Journal, 46, pp. 900–904.
Campbell, D.J. and J.K. Henshall, 2001: Bulk density. In: K.A. Smith and C.E. Mullins, Soil and Environmental Analysis: Physical Methods, Marcel Dekker, New York, pp. 315–348.
Deardorff, J.W., 1978: Efficient prediction of ground surface temperature and moisture, with inclusion of a layer of vegetation. Journal of Geophysical Research, 83, pp. 1889–1904.
Dirksen, C., 1999: Soil Physics Measurements. Catena Verlag, Reiskirchen, Germany, 154 pp.
Dirksen, C. and S. Dasberg, 1993: Improved calibration of time domain reflectometry soil water content measurements. Soil Science Society of America Journal, 57, pp. 660–667.
Drungil, C.E.C., K. Abt and T.J. Gish, 1989: Soil moisture determination in gravelly soils with time domain reflectometry. Transactions of the American Society of Agricultural Engineering, 32, pp. 177–180.
Gardner, W.H. and C. Calissendorff, 1967: Gamma-ray and neutron attenuation measurement of soil bulk density and water content. Proceedings of the Symposium on the Use of Isotope and Radiation Techniques in Soil Physics and Irrigation Studies (Istanbul, 12–16 June 1967). International Atomic Energy Agency, Vienna, pp. 101–112.
Gardner, C.M.K., D.A. Robinson, K. Blyth and J.D. Cooper, 2001: Soil water content. In: K.A. Smith and C.E. Mullins, Soil and Environmental Analysis: Physical Methods, Marcel Dekker, New York, pp. 1–64.
Gee, G.W. and M.E. Dodson, 1981: Soil water content by microwave drying: A routine procedure. Soil Science Society of America Journal, 45, pp. 1234–1237.
Greacen, E.L., 1981: Soil Water Assessment by the Neutron Method. CSIRO, Australia, 140 pp.
Idso, S.B., R.D. Jackson, R.J. Reginato and T.J. Schmugge, 1975: The utility of surface temperature measurements for the remote sensing of surface soil water status. Journal of Geophysical Research, 80, pp. 3044–3049.
Jackson, T.J., 1990: Laboratory evaluation of a field-portable dielectric/soil moisture probe. IEEE Transactions on Geoscience and Remote Sensing, 28, pp. 241–245.
Jackson, T.J. and T.J. Schmugge, 1989: Passive microwave remote sensing system for soil moisture: Some supporting research. IEEE Transactions on Geoscience and Remote Sensing, 27, pp. 225–235.
Klute, A. (ed.), 1986: Methods of Soil Analysis, Part 1: Physical and Mineralogical Methods. American Society of Agronomy, Madison, Wisconsin, United States, 1188 pp.
Knight, J.H., 1992: Sensitivity of time domain reflectometry measurements to lateral variations in soil water content. Water Resources Research, 28, pp. 2345–2352.
Marthaler, H.P., W. Vogelsanger, F. Richard and J.P. Wierenga, 1983: A pressure transducer for field tensiometers. Soil Science Society of America Journal, 47, pp. 624–627.
Merrill, S.D. and S.L. Rawlins, 1972: Field measurement of soil water potential with thermocouple psychrometers. Soil Science, 113, pp. 102–109.
Mullins, C.E., 2001: Matric potential. In: K.A. Smith and C.E. Mullins, Soil and Environmental Analysis: Physical Methods, Marcel Dekker, New York, pp. 65–93.
Perrier, E.R. and A.W. Marsh, 1958: Performance characteristics of various electrical resistance units and gypsum materials. Soil Science, 86, pp. 140–147.
Rawlins, S.L. and G.S. Campbell, 1986: Water potential: Thermocouple psychrometry. In: A. Klute (ed.), Methods of Soil Analysis, Part 1: Physical and Mineralogical Methods, American Society of Agronomy, Madison, Wisconsin, United States, pp. 597–618.
Saha, S.K., 1995: Assessment of regional soil moisture conditions by coupling satellite sensor data with a soil-plant system heat and moisture balance model. International Journal of Remote Sensing, 16, pp. 973–980.
Schmugge, T.J., T.J. Jackson and H.L. McKim, 1980: Survey of methods for soil moisture determination. Water Resources Research, 16, pp. 961–979.
Tanner, C.B. and R.J. Hanks, 1952: Moisture hysteresis in gypsum moisture blocks. Soil Science Society of America Proceedings, 16, pp. 48–51.
Topp, G.C., J.L. Davis and A.P. Annan, 1980: Electromagnetic determination of soil water content: Measurement in coaxial transmission lines. Water Resources Research, 16, pp. 574–582.
Van de Griend, A.A., P.J. Camillo and R.J. Gurney, 1985: Discrimination of soil physical parameters, thermal inertia and soil moisture from diurnal surface temperature fluctuations. Water Resources Research, 21, pp. 997–1009.
Visvalingam, M. and J.D. Tandy, 1972: The neutron method for measuring soil moisture content: A review. European Journal of Soil Science, 23, pp. 499–511.
Wellings, S.R., J.P. Bell and R.J. Raynor, 1985: The Use of Gypsum Resistance Blocks for Measuring Soil Water Potential in the Field. Report No. 92, Institute of Hydrology, Wallingford, United Kingdom.
World Meteorological Organization, 1968: Practical Soil Moisture Problems in Agriculture. Technical Note No. 97, WMO-No. 235.TP.128, Geneva.
World Meteorological Organization, 1989: Land Management in Arid and Semi-arid Areas. Technical Note No. 186, WMO-No. 662, Geneva.
World Meteorological Organization, 2001: Lecture Notes for Training Agricultural Meteorological Personnel (J. Wieringa and J. Lomas). Second edition, WMO-No. 551, Geneva.

CHAPTER 12

MEASUREMENT OF UPPER-AIR PRESSURE, TEMPERATURE AND HUMIDITY

12.1 GENERAL

12.1.1 Definitions

The following definitions from WMO (1992; 2003a) are relevant to upper-air measurements using a radiosonde:
Radiosonde: Instrument intended to be carried by a balloon through the atmosphere, equipped with devices to measure one or several meteorological variables (pressure, temperature, humidity, etc.), and provided with a radio transmitter for sending this information to the observing station.
Radiosonde observation: An observation of meteorological variables in the upper air, usually atmospheric pressure, temperature and humidity, by means of a radiosonde.
Note: The radiosonde may be attached to a balloon, or it may be dropped (dropsonde) from an aircraft or rocket.
Radiosonde station: A station at which observations of atmospheric pressure, temperature and humidity in the upper air are made by electronic means.
Upper-air observation: A meteorological observation made in the free atmosphere, either directly or indirectly.
Upper-air station, upper-air synoptic station, aerological station: A surface location from which upper-air observations are made.
Sounding: Determination of one or several upper-air meteorological variables by means of instruments carried aloft by balloon, aircraft, kite, glider, rocket, and so on.
This chapter will primarily deal with radiosonde systems. Measurements using special platforms or specialized equipment, or made indirectly by remote-sensing methods, will be discussed in various chapters of Part II of this Guide. Radiosonde systems are normally used to measure pressure, temperature and relative humidity. At most operational sites, the radiosonde system is also used for upper-wind determination (see Part I, Chapter 13). In addition, some radiosondes are flown with sensing systems for atmospheric constituents, such as ozone concentration or radioactivity. These additional measurements are not discussed in any detail in this chapter.

12.1.2 Units used in upper-air measurements

The units of measurement for the meteorological variables of radiosonde observations are hectopascals for pressure, degrees Celsius for temperature, and per cent for relative humidity. Relative humidity is reported relative to saturated vapour pressure over a water surface, even at temperatures less than 0°C. The unit of geopotential height used in upper-air observations is the standard geopotential metre, defined as 0.980 665 dynamic metres. In the troposphere, the value of the geopotential height is approximately equal to the geometric height expressed in metres. The values of the physical functions and constants adopted by WMO (1988) should be used in radiosonde computations.

12.1.3 Meteorological requirements

12.1.3.1 Radiosonde data for meteorological operations

Upper-air measurements of temperature and relative humidity are two of the basic measurements used in the initialization of the analyses of numerical weather prediction models for operational weather forecasting. Radiosondes provide most of the in situ temperature and relative humidity measurements over land, while radiosondes launched from remote islands or ships provide a limited coverage over the oceans. Temperatures with resolution in the vertical similar to radiosondes can be observed by aircraft either during ascent, descent, or at cruise levels. The aircraft observations are used to supplement the radiosonde observations, particularly over the sea. Satellite observations of temperature and water vapour distribution have lower vertical resolution than radiosonde or aircraft measurements. Satellite observations have greatest impact on numerical weather prediction analyses over


the oceans and other areas of the globe where radiosonde and aircraft observations are sparse or unavailable. Accurate measurements of the vertical structure of temperature and water vapour fields in the troposphere are extremely important for all types of forecasting, especially regional and local forecasting. The measurements indicate the existing structure of cloud or fog layers in the vertical. Furthermore, the vertical structure of temperature and water vapour fields determines the stability of the atmosphere and, subsequently, the amount and type of cloud that will be forecast. Radiosonde measurements of the vertical structure can usually be provided with sufficient accuracy to meet most user requirements. However, negative systematic errors in radiosonde relative humidity measurements of high humidity in clouds cause problems in numerical weather prediction analyses, if the error is not compensated. High-resolution measurements of the vertical structure of temperature and relative humidity are important for environmental pollution studies (for instance, identifying the depth of the atmospheric boundary layer). High resolution in the vertical is also necessary for forecasting the effects of atmospheric refraction on the propagation of electromagnetic radiation or sound waves. Civil aviation, artillery and other ballistic applications, such as space vehicle launches, have operational requirements for measurements of the density of air at given pressures (derived from radiosonde temperature and relative humidity measurements). Radiosonde observations are vital for studies of upper-air climate change. Hence, it is important to keep adequate records of the systems used for measurements and also of any changes in the operating or correction procedures used with the equipment. In this context, it has proved necessary to establish the changes in radiosonde instruments and practices that have taken place since radiosondes were used on a regular basis (see for instance WMO, 1993a). Climate change studies based on radiosonde measurements require extremely high stability in the systematic errors of the radiosonde measurements. However, the errors in early radiosonde measurements of some meteorological variables, particularly relative humidity and pressure, were too high to provide acceptable longterm references at all heights reported by the radiosondes. Thus, improvements to and changes in radiosonde design were necessary. Furthermore,

expenditure limitations on meteorological operations require that radiosonde consumables remain cheap if widespread radiosonde use is to continue. Therefore, certain compromises in system measurement accuracy have to be accepted by users, taking into account that radiosonde manufacturers are producing systems that need to operate over an extremely wide range of meteorological conditions:
1 050 to 5 hPa for pressure
50 to –90°C for temperature
100 to 1 per cent for relative humidity
with the systems being able to sustain continuous reliable operation when operating in heavy rain, in the vicinity of thunderstorms, and in severe icing conditions.

12.1.3.2 Relationships between satellite and radiosonde upper-air measurements

Nadir-viewing satellite observing systems do not measure vertical structure with the same accuracy or degree of confidence as radiosonde or aircraft systems. The current satellite temperature and water vapour sounding systems either observe upwelling radiances from carbon dioxide or water vapour emissions in the infrared, or alternatively oxygen or water vapour emissions at microwave frequencies (see Part II, Chapter 8). The radiance observed by a satellite channel is composed of atmospheric emissions from a range of heights in the atmosphere. This range is determined by the distribution of emitting gases in the vertical and the atmospheric absorption at the channel frequencies. Most radiances from satellite temperature channels approximate mean layer temperatures for a layer at least 10 km thick. The height distribution (weighting function) of the observed temperature channel radiance will vary with geographical location to some extent. This is because the radiative transfer properties of the atmosphere have a small dependence on temperature. The concentrations of the emitting gas may vary to a small extent with location and cloud; aerosol and volcanic dust may also modify the radiative heat exchange. Hence, basic satellite temperature sounding observations provide good horizontal resolution and spatial coverage worldwide for relatively thick layers in the vertical, but the precise distribution in the vertical of the atmospheric emission observed may be difficult to specify at any given location. Most radiances observed by nadir-viewing satellite water vapour channels in the troposphere originate from layers of the atmosphere about 4 to 5 km


thick. The pressures of the atmospheric layers contributing to the radiances observed by a water vapour channel vary with location to a much larger extent than for the temperature channels. This is because the thickness and central pressure of the layer observed depend heavily on the distribution of water vapour in the vertical. For instance, the layers observed in a given water vapour channel will be lowest when the upper troposphere is very dry. The water vapour channel radiances observed depend on the temperature of the water vapour. Therefore, water vapour distribution in the vertical can be derived only once suitable measurements of vertical temperature structure are available. Limb-viewing satellite systems can provide measurements of atmospheric structure with higher vertical resolution than nadir-viewing systems; an example of this type of system is temperature and water vapour measurement derived from global positioning system (GPS) radio occultation. In this technique, vertical structure is measured along paths in the horizontal of at least 200 km (Kursinski and others, 1997). Thus, the techniques developed for using satellite sounding information in numerical weather prediction models incorporate information from other observing systems, mainly radiosondes and aircraft. This information may be contained in an initial estimate of vertical structure at a given location, which is derived from forecast model fields or is found in catalogues of possible vertical structure based on radiosonde measurements typical of the geographical location or air mass type. In addition, radiosonde measurements are used to cross-reference the observations from different satellites or the observations at different view angles from a given satellite channel. The comparisons may be made directly with radiosonde observations or indirectly through the influence from radiosonde measurements on the vertical structure of numerical forecast fields. Hence, radiosonde and satellite sounding systems are complementary observing systems and provide a more reliable global observation system when used together. 12.1.3.3 Maximum height of radiosonde observations

Radiosonde observations are used regularly for measurements up to heights of about 35 km. However, many observations worldwide will not be made to heights greater than about 25 km, because of the higher cost of the balloons and gas necessary to lift the equipment to the lowest pressures. Temperature errors in many radiosonde systems increase rapidly at low pressures. Therefore, some of the available radiosonde systems are unsuitable for observing at the lowest pressures. The problems associated with the contamination of sensors during flight and very long time-constants of sensor response at low temperatures and pressures limit the usefulness of radiosonde relative humidity measurements to the troposphere.

12.1.3.4 Accuracy requirements

This section and the next summarize the requirements for radiosonde accuracy and compare them with operational performance. A detailed discussion of performance and sources of errors is given in later sections. The practical accuracy requirements for radiosonde observations are included in Annex 12.A. WMO (1970) describes a very useful approach to the consideration of the performance of instrument systems, which bears on the system design. Performance is based on observed atmospheric variability. Two limits are defined as follows:
(a) The limit of performance beyond which improvement is unnecessary for various purposes;
(b) The limit of performance below which the data obtained would be of negligible value for various purposes.
The performance limits derived by WMO (1970) for upper-wind and for radiosonde temperature, relative humidity and geopotential height measurements are contained in Tables 1 to 4 of Annex 12.B.

12.1.3.5 Temperature: requirements and performance

Most modern radiosonde systems measure temperature in the troposphere with a standard error of between 0.1 and 0.5 K. This performance is usually within a factor of three of the optimum performance suggested in Table 2 of Annex 12.B. Unfortunately, standard errors larger than 1 K are still found in some radiosonde networks in tropical regions. The measurements at these stations fall outside the lower performance limit found in Table 2 of Annex 12.B, and are in the category where the measurements have negligible value for the stated purpose.


At pressures higher than about 30 hPa in the stratosphere, the measurement accuracy of most modern radiosondes is similar to the measurement accuracy in the troposphere. Thus, in this part of the stratosphere, radiosonde measurement errors are about twice the stated optimum performance limit. At pressures lower than 30 hPa, the errors in older radiosonde types increase rapidly with decreasing pressure and in some cases approach the limit where they cease to be useful for the stated purpose. The rapid escalation in radiosonde temperature measurement errors at very low pressure results from an increase in temperature errors associated with infrared and solar radiation coupled with a rapid increase in errors in the heights assigned to the temperatures. At very low pressures, even relatively small errors in the radiosonde pressure measurements will produce large errors in height and, hence, reported temperature (see section 12.1.3.7).

12.1.3.6 Relative humidity

Errors in modern radiosonde relative humidity measurements are at least a factor of two or three larger than the optimum performance limit for high relative humidity suggested in Table 3 of Annex 12.B, for the troposphere above the convective boundary layer. Furthermore, the errors in radiosonde relative humidity measurements increase as temperature decreases. For some sensor types, errors at temperatures lower than –40°C may exceed the limit where the measurements have no value for the stated purpose.

12.1.3.7 Geopotential heights

Errors in geopotential height determined from radiosonde observations differ according to whether the height is for a specified pressure level or for the height of a given turning point in the temperature or relative humidity structure, such as the tropopause. The error, εz(t1), in the geopotential height at a given time into flight is given by:

εz(t1) = (R/g)·∫[p0→p1] [εT(p) – (δT/δp)·εp(p)] dp/p + (R/g)·∫[p1→p1+εp(p1)] [Tv(p) + εT(p) – (δT/δp)·εp(p)] dp/p    (12.1)

where p0 is the surface pressure; p1 is the true pressure at time t1; p1 + εp(p1) is the actual pressure indicated by the radiosonde at time t1; εT(p) and εp(p) are the errors in the radiosonde temperature and pressure measurements, respectively, as a function of pressure; Tv(p) is the virtual temperature at pressure p; and R and g are the gas and gravitational constants as specified in WMO (1988).
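The structure of equation 12.1 can be illustrated numerically. The following Python sketch evaluates only the first term of the equation for an invented error profile on a coarse pressure grid; the sign convention is written with the integration taken from the level of interest down to the surface so that a warm bias yields a positive height error. The constants are standard assumed values, not quoted from WMO (1988).

# Illustrative evaluation of the first term of equation 12.1 (invented profiles).
import numpy as np

R_DRY = 287.05   # gas constant for dry air (J kg-1 K-1), assumed value
G0 = 9.80665     # acceleration due to gravity (m s-2), assumed constant

def height_error_at_top(pressure_hpa, temp_k, eps_t_k, eps_p_hpa):
    """Hypsometric estimate of the geopotential height error at the top level."""
    p = np.asarray(pressure_hpa, dtype=float)          # surface first, top last
    t = np.asarray(temp_k, dtype=float)
    dT_dp = np.gradient(t, p)
    integrand = np.asarray(eps_t_k) - dT_dp * np.asarray(eps_p_hpa)
    # Integrate against ln(p), with pressure increasing from the top to the surface.
    return (R_DRY / G0) * np.trapz(integrand[::-1], np.log(p[::-1]))

# Invented example: a uniform 0.25 K warm bias and no pressure error, up to 100 hPa.
p_levels = [1000.0, 700.0, 500.0, 300.0, 200.0, 100.0]
t_levels = [288.0, 270.0, 255.0, 230.0, 218.0, 210.0]
print(round(height_error_at_top(p_levels, t_levels,
                                eps_t_k=[0.25] * 6,
                                eps_p_hpa=[0.0] * 6), 1), "m")
# Roughly 17 m, comparable in magnitude to the 100 hPa entries in Table 12.1.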

Table 12.1. Errors in geopotential height (m)

(Typical errors in standard levels, εz(ps), and significant levels, εz(t1), for given temperature and pressure errors, at or near specified levels. Errors are similar in northern and southern latitudes.)

                                                   300 hPa   100 hPa   30 hPa   10 hPa

Temperature error εT = 0.25 K, pressure error εp = 0 hPa
  Standard and significant levels                      9        17        26       34

Temperature error εT = 0 K, pressure error εp = –1 hPa
  25°N         Standard level                          3         5         6       –4
               Significant level                      26        70       213      625
  50°N summer  Standard level                          3         5         1      –20
               Significant level                      26        72       223      680
  50°N winter  Standard level                          3        12        –2      –24
               Significant level                      27        72       211      650


For a specified standard pressure level, ps, the pressure of the upper integration limit in the height computation is specified and is not subject to the radiosonde pressure error. Hence, the error in the standard pressure level geopotential height reduces to:

εz(ps) = (R/g)·∫[p0→ps] [εT(p) – (δT/δp)·εp(p)] dp/p    (12.2)

Table 12.1 shows the errors in geopotential height that are caused by radiosonde sensor errors for typical atmospheres. It shows that the geopotentials of given pressure levels can be measured quite well, which is convenient for the synoptic and numerical analysis of constant pressure surfaces. However, large errors may occur in the heights of significant levels such as the tropopause and other turning points, and other levels may be calculated between the standard levels. Large height errors in the stratosphere resulting from pressure sensor errors of 2 or 3 hPa are likely to be of greatest significance in routine measurements in the tropics, where there are always significant temperature gradients in the vertical throughout the stratosphere. Ozone concentrations in the stratosphere also have pronounced gradients in the vertical, and height assignment errors will introduce significant errors into the ozonesonde reports at all latitudes. The optimum performance requirements for the heights of isobaric surfaces in a synoptic network, as stated in Table 4 of Annex 12.B, place extremely stringent requirements on radiosonde measurement accuracy. For instance, the best modern radiosondes would do well if height errors were only a factor of five higher than the optimum performance in the troposphere and an order of magnitude higher than the optimum performance in the stratosphere.

12.1.4 Measurement methods

This section discusses radiosonde methods in general terms. Details of instrumentation and procedures are given in other sections.

12.1.4.1 Constraints on radiosonde design

Certain compromises are necessary when designing a radiosonde. Temperature measurements are found to be most reliable when sensors are exposed unprotected above the top of the radiosonde, but this also leads to direct exposure to solar radiation. In most modern radiosondes, coatings are applied to the temperature sensor to minimize solar heating. Software corrections for the residual solar heating are then applied during data processing. Nearly all relative humidity sensors require some protection from rain. A protective cover or duct reduces the ventilation of the sensor and hence the speed of response of the sensing system as a whole. The cover or duct also provides a source of contamination after passing through cloud. However, in practice, the requirement for protection for relative humidity sensors from rain or ice is usually more important than perfect exposure to the ambient air. Thus, protective covers or ducts are usually used with a relative humidity sensor. Pressure sensors are usually mounted internally to minimize the temperature changes in the sensor during flight and to avoid conflicts with the exposure of the temperature and relative-humidity sensors. Other important features required in radiosonde design are reliability, robustness, light weight and small dimensions. With modern electronic multiplexing readily available, it is also important to sample the radiosonde sensors at a high rate. If possible, this rate should be about once per second, corresponding to a minimum sample separation of about 5 m in the vertical. Since radiosondes are generally used only once, or not more than a few times, they must be designed for mass production at low cost. Ease and stability of calibration is very important, since radiosondes must often be stored for long periods (more than a year) prior to use. (Many of the most important Global Climate Observing System stations, for example, in Antarctica, are on sites where radiosondes cannot be delivered more than once per year.) A radiosonde should be capable of transmitting an intelligible signal to the ground receiver over a slant range of at least 200 km. The voltage of the radiosonde battery varies with both time and temperature. Therefore, the radiosonde must be designed to accept battery variations without a loss of measurement accuracy or an unacceptable drift in the transmitted radio frequency.

12.1.4.2 Radio frequency used by radiosondes

The radio frequency spectrum bands currently used for most radiosonde transmissions are shown in Table 12.2. These correspond to the meteorological aids allocations specified by the International Telecommunication Union (ITU) Radiocommunication Sector radio regulations.


Table 12.2. Primary frequencies used by radiosondes in the meteorological aids bands

Radio frequency band (MHz)   Status    ITU regions
400.15 – 406                 Primary   All
1 668.4 – 1 700              Primary   All

Note: Most secondary radar systems manufactured and deployed in the Russian Federation operate in a radio frequency band centred at 1 780 MHz.

The radio frequency actually chosen for radiosonde operations in a given location will depend on various factors. At sites where strong upper winds are common, slant ranges to the radiosonde are usually large and balloon elevations are often very low. Under these circumstances, the 400-MHz band will normally be chosen for use since a good communication link from the radiosonde to the ground system is more readily achieved at 400 MHz than at 1 680 MHz. When upper winds are not so strong, the choice of frequency will, on average, be usually determined by the method of upper-wind measurement used (see Part I, Chapter 13). The frequency band of 400 MHz is usually used when navigational aid windfinding is chosen, and 1 680 MHz when radiotheodolites or a tracking antenna are to be used with the radiosonde system. The radio frequencies listed in Table 12.2 are allocated on a shared basis with other services. In some countries, the national radiocommunication authority has allocated part of the bands to other users, and the whole of the band is not available for radiosonde operations. In other countries, where large numbers of radiosonde systems are deployed in a dense network, there are stringent specifications on radio frequency drift and bandwidth occupied by an individual flight. Any organization proposing to fly radiosondes should check that suitable radio frequencies are available for their use and should also check that they will not interfere with the radiosonde operations of the National Meteorological Service. There is now strong pressure, supported by government radiocommunication agencies, to improve the efficiency of radio frequency use. Therefore, radiosonde operations will have to share with a greater range of users in the future. Wideband radiosonde systems occupying most of the available spectrum of the meteorological aids bands will become impracticable in many countries. Therefore, preparations for the future in most countries should be based on the principle that radiosonde transmitters and receivers will have to work with bandwidths of much less than 1 MHz in order to avoid interfering signals. Transmitter stability may have to be better than ±5 kHz in countries with dense radiosonde networks, and not worse than about ±200 kHz in most of the remaining countries. National Meteorological Services need to maintain contact with national radiocommunication authorities in order to keep adequate radio frequency allocations and to ensure that their operations are protected from interference. Radiosonde operations will also need to avoid interference with, or from, data collection platforms transmitting to meteorological satellites between 401 and 403 MHz, with the downlinks from meteorological satellites between 1 690 and 1 700 MHz and with the command and data acquisition operations for meteorological satellites at a limited number of sites between 1 670 and 1 690 MHz.

12.2 RADIOSONDE ELECTRONICS

12.2.1 General features

A basic radiosonde design usually comprises three main parts as follows: (a) The sensors plus references; (b) An electronic transducer, converting the output of the sensors and references into electrical signals; (c) The radio transmitter. In rawinsonde systems (see Part I, Chapter 13), there are also electronics associated with the reception and retransmission of radionavigation signals, or transponder system electronics for use with secondary radars. Radiosondes are usually required to measure more than one meteorological variable. Reference signals are used to compensate for instability in the conversion between sensor output and transmitted telemetry. Thus, a method of switching between various sensors and references in a predetermined cycle is required. Most modern radiosondes use electronic switches operating at high speed with one measurement cycle lasting typically between 1 and 2 s. This rate of sampling allows the meteorological variables to be sampled at height intervals of between 5 and 10 m at normal rates of ascent.
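As a simple illustration of the sampling figures quoted above, the short Python sketch below (the function name is ours) converts the length of the measurement cycle and the balloon ascent rate into the vertical spacing between samples.

def vertical_sample_spacing(ascent_rate_m_s, cycle_s):
    """Vertical distance travelled by the balloon during one measurement cycle."""
    return ascent_rate_m_s * cycle_s

# A typical 5 m/s ascent sampled every 1 to 2 s gives samples every 5 to 10 m.
print(vertical_sample_spacing(5.0, 1.0), vertical_sample_spacing(5.0, 2.0))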


12.2.2 Power supply for radiosondes

Radiosonde batteries should be of sufficient capacity to power the radiosonde for the required flight time in all atmospheric conditions. For radiosonde ascents to 5 hPa, radiosonde batteries should be of sufficient capacity to supply the required currents for up to three hours, given that ascents may often be delayed and that flight times may be as long as two hours. Three hours of operation would be required if descent data from the radiosonde were to be used. Batteries should be as light as practicable and should have a long storage life. They should also be environmentally safe following use. Many modern radiosondes can tolerate significant changes in output voltage during flight. Two types of batteries are in common use, the dry-cell type and water-activated batteries. Dry batteries have the advantage of being widely available at very low cost because of the high volume of production worldwide. However, they may have the disadvantage of having limited shelf life. Also, their output voltage may vary more during discharge than that of water-activated batteries. Water-activated batteries usually use a cuprous chloride and sulphur mixture. The batteries can be stored for long periods. The chemical reactions in water-activated batteries generate internal heat, reducing the need for thermal insulation and helping to stabilize the temperature of the radiosonde electronics during flight. These batteries are not manufactured on a large scale for other users. Therefore, they are generally manufactured directly by the radiosonde manufacturers. Care must be taken to ensure that batteries do not constitute an environmental hazard once the radiosonde falls to the ground after the balloon has burst. 12.2.3 12.2.3.1 Methods of data transmission radio transmitter

A wide variety of transmitter designs are in use. Solid-state circuitry is mainly used up to 400 MHz and valve (cavity) oscillators may be used at 1 680 MHz. Modern transmitter designs are usually crystal-controlled to ensure a good frequency stability during the sounding. Good frequency stability during handling on the ground prior to launch and during flight is important. At 400 MHz, widely used radiosonde types are expected to have a transmitter power output lower than 250 mW. At 1 680 MHz the most widely used radiosonde type has a power output of about 330 mW. The modulation of the transmitter varies with radiosonde type. It would be preferable in future that radiosonde manufacturers standardize the transmission of data from the radiosonde to the ground station. In any case, the radiocommunication authorities in many regions of the world will require that radiosonde transmitters meet certain specifications in future, so that the occupation of the radiofrequency spectrum is minimized and other users can share the nominated meteorological aids radiofrequency bands (see section 12.1.4.2).

12.3 TEMPERATURE SENSORS

12.3.1 General requirements

The best modern temperature sensors have a speed of response to changes of temperature which is fast enough to ensure that systematic bias from thermal lag during an ascent remains less than 0.1 K through any layer of depth of 1 km. At typical radiosonde rates of ascent, this is achieved in most locations with a sensor time-constant with a response faster than 1 s in the early part of the ascent. In addition, the temperature sensors should be designed to be as free as possible from radiation errors introduced by direct or backscattered solar radiation or heat exchange in the infrared. Infrared errors can be avoided by using sensor coatings that have low emissivity in the infrared. In the past, the most widely used white sensor coatings had high emissivity in the infrared. Measurements by these sensors were susceptible to significant errors from infrared heat exchange (see section 12.8.3.3). Temperature sensors also need to be sufficiently robust to withstand buffeting during launch and sufficiently stable to retain accurate calibration over several years. Ideally, the calibration of temperature sensors should be sufficiently reproducible to make individual sensor calibration unnecessary. The main types of temperature sensors in routine use are thermistors (ceramic resistive semiconductors), capacitive sensors, bimetallic sensors and thermocouples. The rate of response of the sensor is usually measured in terms of the time-constant of response, τ. This is defined (as in section 1.6.3 in Part I, Chapter 1) by:

dTe/dt = –1/τ · (Te – T)    (12.3)


where Te is the temperature of the sensor and T is the true air temperature. Thus, the time-constant is defined as the time required to respond by 63 per cent to a sudden change of temperature. The time-constant of the temperature sensor is proportional to thermal capacity and inversely proportional to the rate of heat transfer by convection from the sensor. Thermal capacity depends on the volume and composition of the sensor, whereas the heat transfer from the sensor depends on the sensor surface area, the heat transfer coefficient and the rate of the air mass flow over the sensor. The heat transfer coefficient has a weak dependence on the diameter of the sensor. Thus, the time-constants of response of temperature sensors made from a given material are approximately proportional to the ratio of the sensor volume to its surface area. Consequently, thin sensors of large surface area are the most effective for obtaining a fast response. The variation of the time-constant of response with the mass rate of air flow can be expressed as:

τ = τ0 · (ρ · v)^–n    (12.4)

where ρ is the air density, v the air speed over the sensor, and n a constant.

Note: For a sensor exposed above the radiosonde body on an outrigger, v would correspond to the rate of ascent, but the air speed over the sensor may be lower than the rate of ascent if the sensor were mounted in an internal duct.

The value of n varies between 0.4 and 0.8, depending on the shape of the sensor and on the nature of the air flow (laminar or turbulent). Representative values of the time-constant of response of the older types of temperature sensors are shown in Table 12.3 at pressures of 1 000, 100 and 10 hPa, for a rate of ascent of 5 m s–1. These values were derived from a combination of laboratory testing and comparisons with very-fast response sensors during ascent in radiosonde comparison tests. As noted above, modern capacitive sensors and bead thermistors have time-constants of response faster than 1 s at 1 000 hPa.

Table 12.3. Typical time-constants of response of radiosonde temperature sensors

12.3.2 Thermistors

Thermistors are usually made of a ceramic material whose resistance changes with temperature. The sensors have a high resistance that decreases with absolute temperature. The relationship between resistance, R, and temperature, T, can be expressed approximately as:

R = A · exp (B/T)    (12.5)

where A and B are constants. Sensitivity to temperature changes is very high, but the response to temperature changes is far from linear since the sensitivity decreases roughly with the square of the absolute temperature. As thermistor resistance is very high, typically tens of thousands of ohms, self-heating from the voltage applied to the sensor is negligible. It is possible to manufacture very small thermistors and, thus, fast rates of response can be obtained. Solar heating of a modern chip thermistor is around 1°C at 10 hPa.

12.3.3 Thermocapacitors
If V > √2·V0, the orbit becomes parabolic, the satellite has reached escape velocity, and it will not remain in orbit around the Earth. A geostationary orbit is achieved if the satellite orbits in the same direction as the Earth’s rotation, with a period of one day. If the orbit is circular above the Equator it becomes stationary relative to the Earth and, therefore, always views the same area

Figure 8.1. Geometry of satellite orbits: (a) elements of a satellite elliptical orbit; (b) circular satellite orbit; (c) satellite orbital elements on the celestial shell


the sensors in order to avoid heating of the sensors due to the Joule effect caused by the electromagnetic energy radiated from the transmitter; the power of the latter should, in any case, be limited to the minimum necessary (from 200 to 500 mW). With the use of such low transmission power, together with a distance between the transmitter and the receiving station which may be as much as 150 km, it is usually necessary to use high-gain directional receiving antennas.

On reception, and in order to be able to assign the data to appropriate heights, the signals obtained after demodulation or decoding are recorded on multichannel magnetic tape together with the time-based signals from the tracking radar. Time correlation between the telemetry signals and radar position data is very important.

6.4 TEMPERATURE MEASUREMENT BY INFLATABLE FALLING SPHERE

The inflatable falling sphere is a simple 1 m diameter mylar balloon containing an inflation mechanism and nominally weighs about 155 g. The sphere is deployed at an altitude of approximately 115 km where it begins its free fall under gravitational and wind forces. After being deployed the sphere is inflated to a super pressure of approximately 10 to 12 hPa by the vaporization of a liquid, such as isopentane. The surface of the sphere is metallized to enable radar tracking for position information as a function of time. To achieve the accuracy and precision required, the radar must be a high-precision tracking system, such as an FPS-16 C-band radar or better. The radar-measured position information and the coefficient of drag are then used in the equations of motion to calculate atmospheric density and winds. The calculation of density requires knowledge of the sphere’s coefficient of drag over a wide range of flow conditions (Luers, 1970; Engler and Luers, 1978). Pressure and temperature are also calculated for the same altitude increments as density. Sphere measurements are affected only by the external physical forces of gravity, drag acceleration and winds, which makes the sphere a potentially more accurate measurement than other in situ measurements (Schmidlin, Lee and Michel, 1991). The motion of the falling sphere is described by a simple equation of motion in a frame of reference having its origin at the centre of the Earth, as follows:

m·dV/dt = m·g – (ρ·Cd·As·Vr·|Vr|)/2 – ρ·Vb·g – 2m·(ω × V)    (6.5)

where As is the cross-sectional area of the sphere; Cd is the coefficient of drag; g is the acceleration due to gravity; m is the sphere mass; V is the sphere velocity; Vr is the motion of the sphere relative to the air; Vb is the volume of the sphere; ρ is the atmospheric density; and ω is the Earth’s angular velocity. The relative velocity of the sphere with respect to the air mass is defined as Vr = V – Va, where Va is the total wind velocity. Cd is calculated on the basis of the relative velocity of the sphere. The terms on the right-hand side of equation 6.5 represent the gravity, friction, buoyancy and Coriolis forces, respectively.

After simple mathematical manipulation, equation 6.5 is decomposed into three orthogonal components, including the vertical component of the equation of motion from which the density is calculated, thus obtaining:

ρ = 2m·(gz – z̈ – Cz) / [Cd·As·|Vr|·(ż – wz) + 2·Vb·gz]    (6.6)

where gz is the acceleration of gravity at level z; wz is the vertical wind component, usually assumed to be zero; ż is the vertical component of the sphere’s velocity; and z̈ is the vertical component of the sphere’s acceleration. The magnitudes of the buoyancy force (Vb·gz) and the Coriolis force (Cz) terms compared to the other terms of equation 6.6 are small and are either neglected or treated as perturbations. The temperature profile is extracted from the retrieved atmospheric density using the hydrostatic equation and the equation of state, as follows:

Tz = Ta·(ρa/ρz) + [M0/(R·ρz)]·∫[z→a] ρh·g dh    (6.7)

where h is the height, the variable of integration; M0 is the molecular weight of dry air; R is the universal gas constant; Ta is temperature in K at reference


altitude a; Tz is temperature in K at level z; ρa is the density at reference altitude a; ρh is the density to be integrated over the height interval h to a; and ρz is the density at altitude z. Note that the source of temperature error is the uncertainty associated with the retrieved density value. The error in the calculated density is comprised of high and low spatial frequency components. The high frequency component may arise from many sources, such as measurement error, computational error and/or atmospheric variability, and is somewhat random. Nonetheless, the error amplitude may be suppressed by statistical averaging. The low frequency component, however, including bias and linear variation, may be related to actual atmospheric features and is difficult to separate from the measurement error.
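A highly simplified Python sketch of this retrieval chain is given below. It follows the structure of equations 6.6 and 6.7, neglects the buoyancy and Coriolis terms as discussed above, uses the nominal sphere mass and diameter quoted earlier, and is exercised on a synthetic isothermal density profile rather than real radar data; it is illustrative only and is not the operational reduction software.

# Illustrative sketch of the falling-sphere reduction (equations 6.6 and 6.7).
import numpy as np

G0 = 9.80665            # m s-2; the height dependence of gz is ignored here
R_UNIVERSAL = 8.31446   # J mol-1 K-1, universal gas constant
M_DRY_AIR = 0.0289644   # kg mol-1, molecular weight of dry air (assumed value)

SPHERE_MASS = 0.155             # kg, nominal sphere mass quoted above
SPHERE_AREA = np.pi * 0.5 ** 2  # m2, cross-section of a 1 m diameter sphere

def density_from_motion(z_ddot, z_dot, v_rel, drag_coeff, g=G0):
    """Equation 6.6 with the buoyancy and Coriolis terms neglected (wz = 0).

    z_dot and z_ddot are the sphere's vertical velocity and acceleration in
    the sign convention of equation 6.6; v_rel is |Vr|.
    """
    return (2.0 * SPHERE_MASS * (g - z_ddot)
            / (drag_coeff * SPHERE_AREA * v_rel * z_dot))

def temperature_profile(heights_m, densities, t_ref_k):
    """Equation 6.7, with the reference level a as the first (lowest) element."""
    h = np.asarray(heights_m, dtype=float)
    rho = np.asarray(densities, dtype=float)
    temps = np.empty_like(rho)
    temps[0] = t_ref_k
    for i in range(1, len(h)):
        # Integral of rho*g dh taken from level z down to the reference altitude a.
        integral = -np.trapz(rho[: i + 1] * G0, h[: i + 1])
        temps[i] = (t_ref_k * rho[0] / rho[i]
                    + M_DRY_AIR * integral / (R_UNIVERSAL * rho[i]))
    return temps

# Consistency check on a synthetic isothermal (230 K) atmosphere between 20 and
# 40 km: the retrieved temperatures should stay within a few tenths of a kelvin
# of 230 K at every level.
h = np.linspace(20e3, 40e3, 101)
scale_height = R_UNIVERSAL * 230.0 / (M_DRY_AIR * G0)
rho = 0.09 * np.exp(-(h - h[0]) / scale_height)
print(np.round(temperature_profile(h, rho, 230.0)[::25], 1))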

6.5 CALCULATION OF OTHER AEROLOGICAL VARIABLES

6.5.1 Pressure and density

Knowledge of the air temperature, given by the sensor as a function of height, enables atmospheric pressure and density at various levels to be determined. In a dry atmosphere with constant molecular weight, and making use of the hydrostatic equation:

dp = –gρ dz    (6.8)

and the perfect gas law:

ρ = (M/R)·(p/T)    (6.9)

the relationship between pressures pi and pi–1 at the two levels zi and zi–1 between which the temperature gradient is approximately constant may be expressed as:

pi = ai · pi–1    (6.10)

where:

ai = exp{[–M·g0/(R·Ti–1)] · [rT/(rT + zi–1)]² · [1 – (Ti – Ti–1)/(2·Ti–1)] · (zi – zi–1)}    (6.11)

and g0 is the acceleration due to gravity at sea level; M is the molecular weight of the air; pi is the pressure at the upper level zi; pi–1 is the pressure at the lower level zi–1; rT is the radius of the Earth; R is the gas constant (for a perfect gas); Ti is the temperature at the upper level zi; Ti–1 is the temperature at the lower level zi–1; zi is the upper level; and zi–1 is the lower level. By comparison with a balloon-borne radiosonde from which a pressure value p is obtained, an initial pressure pi may be determined for the rocket sounding at the common level zi, which usually lies near 20 km, or approximately 50 hPa. Similarly, by using the perfect gas law (equation 6.9), the density profile ρ can be determined. This method is based on step-by-step integration from the lower to the upper levels. It is, therefore, necessary to have very accurate height and temperature data for the various levels.

6.5.2 Speed of sound, thermal conductivity and viscosity

Using the basic data for pressure and temperature, other parameters, which are essential for elaborating simulation models, are often computed, such as the following:
(a) The speed of sound Vs:
    Vs = (γ·R·T/M)^(1/2)    (6.12)
    where γ = Cp/Cv;
(b) The coefficient of thermal conductivity, κ, of the air, expressed in W m–1 K–1:
    κ = 2.650 2 × 10^–3 · T^(3/2) / (T + 245.4 × 10^(–12/T))    (6.13)
(c) The coefficient of viscosity of the air, μ, expressed in N s m–2:
    μ = 1.458 × 10^–6 · T^(3/2) / (T + 110.4)    (6.14)
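As an illustration of the step-by-step integration described in section 6.5.1, the following Python sketch builds pressures upward from the radiosonde comparison level using equations 6.10 and 6.11 and converts them to densities with equation 6.9. The temperature profile and tie-on pressure are invented, and the constants are standard assumed values.

# Illustrative step-by-step integration of equations 6.9 to 6.11 (invented profile).
import math

R_UNIVERSAL = 8.31446   # J mol-1 K-1
M_DRY_AIR = 0.0289644   # kg mol-1
G0 = 9.80665            # m s-2, sea-level value
R_EARTH = 6.371e6       # m, mean Earth radius used for rT (assumed value)

def pressure_ratio(t_lower, t_upper, z_lower, z_upper):
    """Equation 6.11: the factor a_i linking p_i to p_(i-1)."""
    geo_factor = (R_EARTH / (R_EARTH + z_lower)) ** 2
    lapse_factor = 1.0 - (t_upper - t_lower) / (2.0 * t_lower)
    exponent = (-M_DRY_AIR * G0 / (R_UNIVERSAL * t_lower)
                * geo_factor * lapse_factor * (z_upper - z_lower))
    return math.exp(exponent)

def integrate_upward(heights_m, temps_k, p_start_hpa):
    """Equation 6.10 applied level by level; densities from equation 6.9."""
    pressures = [p_start_hpa]
    for i in range(1, len(heights_m)):
        a_i = pressure_ratio(temps_k[i - 1], temps_k[i],
                             heights_m[i - 1], heights_m[i])
        pressures.append(pressures[-1] * a_i)
    densities = [M_DRY_AIR * p * 100.0 / (R_UNIVERSAL * t)   # hPa converted to Pa
                 for p, t in zip(pressures, temps_k)]
    return pressures, densities

# Tie-on near 20 km (about 50 hPa, as noted in section 6.5.1); invented temperatures.
z = [20e3, 25e3, 30e3, 35e3, 40e3]
t = [220.0, 222.0, 228.0, 237.0, 250.0]
p, rho = integrate_upward(z, t, 50.0)
print([round(x, 2) for x in p])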

6.6 NETWORKS AND COMPARISONS

At present, only one or two countries carry out regular soundings of the upper atmosphere. Reduction in operational requirements and the high costs associated with the launch operation tend to limit the number of stations and launching frequency.


These include a 20-channel high-resolution infrared radiation sounder (HIRS), a 4-channel microwave sounding unit (MSU), and a 3-channel infrared stratospheric sounding unit (SSU). The instrument characteristics of the TOVS are described in Table 8.3, which shows the number of channels; the nadir field of view; the aperture; viewing scan angle; swath width; the number of pixels viewed per swath (steps); and data digitization level, for four instruments carried on NOAA series polar-orbiting satellites. Comparable data for the AVHRR are also included for comparison. Annexes 8.A and 8.B contain details of the AVHRR and HIRS channels and their applications. There are other instruments on the NOAA polar orbiters, including the solar backscatter ultraviolet (SBUV) and the Earth radiation budget experiment (ERBE) radiometers. In mid-latitudes, a polar orbiter passes overhead twice daily. Selection of the time of day at which this occurs at each longitude involves optimizing the operation of instruments and reducing the times needed between observations and the delivery of data to forecast computer models. The addition of a 20-channel microwave sounder, the advanced microwave sounding unit, beginning on NOAA-K, will greatly increase the data flow from the spacecraft. This, in turn, will force changes in the direct-broadcast services. Two other sensors with a total of seven channels, the MSU and the SSU, are to be eliminated at the same time. 8.2.3.2 Imager The radiometer used on United States geostationary satellites up to GOES-7 (all of which were stabilized by spinning) has a name that reflects its lineage; visible infrared spin-scan radiometer (VISSR) refers to its imaging channels. As the VISSR atmospheric sounder (VAS), it now includes 12 infrared channels. Eight parallel visible fields of view (0.55–0.75 µm) view the sunlit Earth with 1 km resolution. Sounder Twelve infrared channels observe upwelling terrestrial radiation in bands from 3.945 to 14.74 µm. Of these, two are window channels and observe the surface, seven observe radiation in the atmospheric carbon dioxide absorption bands, while the remaining three observe radiation in the water vapour bands. The selection of channels has the effect of geostationary satellites

observing atmospheric radiation from varying heights within the atmosphere. Through a mathematical inversion process, an estimate of temperatures versus height in the lower atmosphere and stratosphere can be obtained. Another output is an estimate of atmospheric water vapour, in several deep layers. The characteristics of the VAS/VISSR instrument are shown in Table 8.4, which provides details of the scans by GOES satellites, including nadir fields of view for visible and infrared channels; scan angles (at the spacecraft); the resulting swath width on the Earth’s surface; the number of picture elements (pixels) per swath; and the digitization level for each pixel. Ancillary sensors Two additional systems for data collection are operational on the GOES satellites. Three sensors combine to form the space environment monitor. These report solar X-ray emission levels and monitor magnetic field strength and arrival rates for high-energy particles. A data-collection system receives radioed reports from Earth-located data-collection platforms and, via transponders, forwards these to a central processing facility. Platform operators may also receive their data by direct broadcast. New systems GOES-8, launched in 1994, has three-axis stabilization and no longer uses the VAS/VISSR system. It has an imager and a sounder similar in many respects to AVHRR and TOVS, respectively, but with higher horizontal resolution. 8.2.4 current operational meteorological and related satellite series

For details of operational and experimental satellites, see WMO (1994b). For convenience, a brief description is given here. The World Weather Watch global observation satellite system is summarized in Figure 8.4. There are many other satellites for communication, environmental and military purposes, some of which also have meteorological applications.



Table 8.3. Instrument systems on NOAA satellites

Instrument   Number of   Field of    Aperture   Scan        Swath         Steps   Data
             channels    view (km)   (cm)       angle (°)   width (km)            (bits)
SSU          3           147         8          ±40         ±736          8       12
MSU          4           105         —          ±47.4       ±1 174        11      12
HIRS         20          17          15         ±49.5       ±1 120        56      13

Table 8.4. Visible and infrared instrument systems on NOAA spin-scanning geostationary satellites

Channel    Field of     Scan        Swath        Pixels/swath   Digits
           view (km)    angle (°)   width (km)                  (bits)
Visible    1            ±8.70       ±9 050       8 × 15 228     6
Infrared   7–14         ±3.45       ±2 226       3 822          10

The following are low-orbiting satellites:
(a) TIROS-N/NOAA-A series: The United States civil satellites. The system comprises at least two satellites, the latest of which is NOAA-12, launched in 1991. They provide image services and carry instruments for temperature sounding as well as for data collection and data-platform location. Some of the products of the systems are provided on the Global Telecommunication System (GTS);
(b) DMSP series: The United States military satellites. These provide image and microwave sounding data, and the SSM/I instrument provides microwave imagery. Their real-time transmissions are encrypted, but can be made available for civil use;
(c) METEOR-2, the Russian series: These provide image and sounding services, but lower-quality infrared imagery. Limited data available on the GTS include cloud images at southern polar latitudes;
(d) FY-1 series: Launched by China, providing imaging services with visible and infrared channels;
(e) SPOT: A French satellite providing commercial high-resolution imaging services;
(f) ERS-1: An experimental European Space Agency satellite, launched in 1991, providing sea-surface temperatures, surface wind and wave information, and other oceanographic and environmental data.

The following are geostationary satellites:
(a) GOES: The United States satellites. At present the GOES series products include imagery, soundings and cloud motion data. When two satellites are available, they are usually located at 75°W and 135°W;
(b) GMS: The Japanese satellites, providing a range of services similar to GOES, but with no soundings, operating at 140°E;
(c) METEOSAT: The EUMETSAT satellites, built by the European Space Agency, providing a range of services similar to GOES, operating at 0° longitude;
(d) INSAT: The Indian satellite with three-axis stabilization, located at 74°E and initially launched in 1989, providing imagery; only cloud-drift winds are available on the GTS.

There are, therefore, effectively four geosynchronous satellites presently in operation.

8.3 Meteorological observations

8.3.1 Retrieval of geophysical quantities from radiance measurements
The quantity measured by the sensors on satellites is radiance in a number of defined spectral bands. The data are transmitted to ground stations and may be used to compile images, or quantitatively to calculate temperatures, concentrations of water vapour and other radiatively active gases, and other properties of the Earth’s surface and atmosphere. The measurements taken may be at many levels, and from them profiles through the atmosphere may be constructed.



Figure 8.4. The World Weather Watch global observation satellite system (geostationary orbit, about 35 800 km: GOMS (Russian Federation) at 76°E, GMS (Japan) at 140°E, METEOSAT (EUMETSAT) at 0° longitude, GOES-E (United States) at 75°W and GOES-W (United States) at 135°W; polar orbit, about 850 km: METEOR (Russian Federation) and TIROS (United States))

Conceptually, images are continuous two-dimensional distributions of brightness. It is this continuity that the brain seems so adept at handling. In practice, satellite images are arrangements of closely spaced picture elements (pixels), each with a particular brightness. When viewed at a suitable distance, they are indistinguishable from continuous functions. The eye and brain exploit the relative contrasts within scenes at various spatial frequencies to identify the positions and types of many weather phenomena. It is usual to use the sounding data in numerical models, and hence they, and most other quantitative data derived from the array of pixels, are often treated as point values. The radiance data from the visible channels may be converted to brightness, or to the reflectance of the surface being observed. Data from the infrared channels may be converted to temperature, using the concept of brightness temperature (see section 8.3.1.1).

There are limits to both the amount and the quality of information that can be extracted from a field of radiances measured from a satellite. It is useful to consider an archetypal passive remote-sensing system to see where these limits arise. It is assumed that the surface and atmosphere together reflect, or emit, or both, electromagnetic radiation towards the system. The physical processes may be summarized as follows. The variations in reflected radiation are caused by:
(a) Sun elevation;
(b) Satellite-sun azimuth angle;
(c) Satellite viewing angle;
(d) Transparency of the object;
(e) Reflectivity of the underlying surface;
(f) The extent to which the object is filling the field of view;
(g) Overlying thin layers (thin clouds or aerosols).

Many clouds are far from plane parallel and horizontally homogeneous. It is also known, from the interpretation of common satellite images, that other factors of importance are:
(a) Sun-shadowing by higher objects;
(b) The shape of the object (the cloud topography) giving shades and shadows in the reflected light.

Variations in emitted radiation are mainly caused by:
(a) The satellite viewing angle;
(b) Temperature variations of the cloud;
(c) Temperature variations of the surface (below the cloud);

(d) The temperature profile of the atmosphere;
(e) Emissivity variations of the cloud;
(f) Emissivity variations of the surface;
(g) Variations within the field of view of the satellite instrument;
(h) The composition of the atmosphere between the object and the satellite (water vapour, carbon dioxide, ozone, thin clouds, aerosols, etc.).

Essentially, the system consists of optics to collect the radiation, a detector to determine how much there is, some telecommunications equipment to digitize this quantity (conversion into counts) and transmit it to the ground, some more equipment to receive the information and decode it into something useful, and a device to display the information. At each stage, potentially useful information about a scene being viewed is lost. This arises as a consequence of a series of digitization processes that transform the continuous scene. These include resolutions in space, wavelength and radiometric product, as discussed in section 8.3.1.2.

8.3.1.1 Radiance and brightness temperature

Emission from a black body

A black body absorbs all radiation which falls upon it. In general, a body absorbs only a fraction of incident radiation; the fraction is known as the absorptivity, and it is wavelength dependent. Similarly, the efficiency for emission is known as the emissivity. At a given wavelength:

emissivity = absorptivity     (8.5)

This is Kirchhoff’s law. The radiance (power per unit area per steradian) per unit wavelength interval emitted by a black body at temperature T and at wavelength λ is given by:

Bλ(T) = 2πhc²λ⁻⁵ / [exp(hc/kλT) − 1]     (8.6)

where Bλ (W m–2 sr–1 cm–1) and its equivalent in wave number units, Bν (W m–2 sr–1 cm), are known as the Planck function; c, h and k are the speed of light, the Planck constant and the Boltzmann constant, respectively. The following laws can be derived from equation 8.6. Bλ peaks at wavelength λm given by:

λm T = 0.29 deg.cm     (8.7)

This is Wien’s law. For the sun, T is 6 000 K and λm is 0.48 μm. For the Earth, T is 290 K and λm is 10 μm. The total flux emitted by a black body is:

E = ∫ Bλ dλ = σT⁴     (8.8)

where σ is Stefan’s constant. B is proportional to T at microwave and far infrared wavelengths (the Rayleigh-Jeans part of the spectrum). The typical dependence of B on T for λ at or below λm is shown in Figure 8.5. If radiance in a narrow wavelength band is measured, the Planck function can be used to calculate the temperature of the black body that emitted it:

Tλ = c2 / {λ ln[c1/(λ⁵Bλ) + 1]}     (8.9)

where c1 and c2 are derived constants. This is known as the brightness temperature, and for most purposes the radiances transmitted from the satellite are converted to these quantities Tλ.

Figure 8.5. Temperature dependence of the Planck function, shown as the ratio B(λ,T)/B(λ,273) against temperature (K) for wavelengths of 4 μm (2 500 cm–1), 6.7 μm (1 500 cm–1), 10 μm (1 000 cm–1) and 15 μm, and for the microwave and far infrared (where B ∝ T)
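The conversion in equation 8.9 is easily sketched in code. The short Python example below uses the standard per-steradian Planck constants (rather than the tabulated c1 and c2 of any operational processing chain) and an arbitrary 11 μm radiance, purely to illustrate the round trip from temperature to radiance and back to brightness temperature.

```python
import math

# Physical constants (SI units)
H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m s-1)
K = 1.381e-23   # Boltzmann constant (J K-1)

def planck_radiance(wavelength_m, temperature_k):
    """Spectral radiance B_lambda(T) of a black body (per-steradian form)."""
    c1 = 2.0 * H * C**2
    c2 = H * C / K
    return c1 / (wavelength_m**5 * (math.exp(c2 / (wavelength_m * temperature_k)) - 1.0))

def brightness_temperature(wavelength_m, radiance):
    """Invert the Planck function (equation 8.9) to give the brightness temperature."""
    c1 = 2.0 * H * C**2
    c2 = H * C / K
    return c2 / (wavelength_m * math.log(c1 / (wavelength_m**5 * radiance) + 1.0))

# Example: an 11 micrometre window-channel radiance equivalent to a 280 K black body
wl = 11e-6
radiance = planck_radiance(wl, 280.0)
print(brightness_temperature(wl, radiance))   # ~280.0 K
```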

Atmospheric absorption

Atmospheric absorption in the infrared is dominated by absorption bands of water, carbon dioxide, ozone, and so on. Examination of radiation within these bands enables the characteristics of the atmosphere to be determined: its temperature and the concentration of the absorbers.



However, there are regions of the spectrum where absorption is low, providing the possibility for a satellite sensor to view the surface or cloud top and to determine its temperature or other characteristics. Such spectral regions are called “windows”. There is a particularly important window near the peak of the Earth/atmosphere emission curve, around 11 µm (see Figure 8.3).

8.3.1.2 Resolution

Spatial resolution

The continuous nature of the scene is divided into a number of discrete picture elements, or pixels, that are governed by the size of the optics, the integration time of the detectors and possibly by subsequent sampling. The size of the object that can be resolved in the displayed image depends upon the size of these pixels. Owing to the effects of diffraction by elements of the optical system, the focused image of a distant point object in the scene has a characteristic angular distribution known as a point spread function or Airy pattern (Figure 8.6(a)). Two distant point objects that are displaced within the field of view are considered separable (i.e. the Rayleigh criterion) if the angle between the maxima of their point spread functions is greater than λ/D, where λ is the wavelength of the radiation and D is the diameter of the beam (Figure 8.6(b)). However, if these two point spread functions are close enough to be focused on the same detector, they cannot be resolved. In many remote-sensing systems, it is the effective displacement of adjacent detectors that limits the spatial resolution. Only if they are close together, as in Figure 8.6(c), can the two objects be resolved.

A general method of determining the resolution of the optical system is by computing or measuring its modulation transfer function. The modulation of a sinusoidal function is the ratio of half its peak-to-peak amplitude to its mean value. The modulation transfer function is derived by evaluating the ratio of the output to input modulations as a function of the wavelength (or spatial frequency) of the sinusoid.

In practice, many space-borne systems use the motion of the satellite to extend the image along its track, and moving mirrors to build up the picture across the track. In such systems, the focused image of the viewed objects is scanned across a detector. The output from the detector is integrated over short periods to achieve the separation of objects. The value obtained for each integration is a complicated convolution of the point spread functions of every object within the scene with the spatial response of the detector and the time of each integration.

An alternative to scanning by moving mirrors is the use of linear arrays of detectors. With no moving parts, they are much more reliable than mirrors; however, they introduce problems in the intercalibration of the different detectors.

Radiometric resolution

The instantaneous scene is focused by the optics onto a detector which responds to the irradiance upon it. The response can either be through a direct effect on the electronic energy levels within the detector (quantum detection) or through the radiation being absorbed, warming the detector and changing some characteristic of it, such as resistance (thermal detection). Voltages caused by a number of extraneous sources are also detected, including those due to the following:
(a) The thermal motion of electrons within the detector (Johnson noise);
(b) Surface irregularities and electrical contacts;
(c) The quantum nature of electrical currents (shot noise).

Figure 8.6. Optical resolution: (a) the irradiance profile of the Airy diffraction pattern (the central region contains 85% of the total irradiance; the subsidiary maxima are 1.7% and 0.4%); (b) optical separation of adjacent points (unresolved, just resolved, well resolved); (c) the effect of detectors on resolution (points resolved but not detected, and resolved and detected)



To increase the signal-to-noise ratio, the system can be provided with large collecting optics, cooled detectors and long detector integration times. The combination of signal and noise voltages (an analogue signal) is integrated in time to produce a digital value. The sequence of integrated values corresponding to each line of the scene then has to be encoded and transmitted to the ground. Having received the data, decoded and processed them into useful products, the images can be displayed on a suitable device. Usually, this involves representing each pixel value as a suitable colour on a monitor or shade of grey on a facsimile recorder.

Display resolution

Thus, the continuous observed scene has been transformed into discrete pixels on a monitor. The discrete nature of the image is only noticeable when the resolutions of the image and the display device are grossly mismatched. The pixels on a typical monitor are separated by approximately 0.3 mm. Each pixel itself comprises three dots of different coloured phosphors. At a reasonable viewing distance of 75 cm, the eye can only resolve the pixels if they have high contrast. Note that the resolution of the eye, about 0.2 mrad, is limited by the separation of the photosensitive cells in the retina. The last part of the system involves the interpretive skills of the forecaster, who uses the images to obtain information about weather systems.

8.3.1.3 Calibration

Calibration of the visible channels

The two visible channels on the AVHRR instrument are calibrated before launch. Radiances measured by the two channels are calculated from:

Li = Ai Si     (8.10)

Ai = Gi Xi + Ii     (8.11)

where i is the channel number; L is radiance (W m–2 sr–1); X is the digital count (10 bits); G is the calibration gain (slope); I is the calibration intercept; A is equivalent albedo; and S is equivalent solar radiance, computed from the solar constant and the spectral response of each channel. G and I are measured before launch. Equivalent albedo, A, is the percentage of the incoming top-of-the-atmosphere solar radiance (with the sun in the zenith) that is reflected and measured by the satellite radiometer in the spectral interval valid for each channel. Atmospheric absorption and scattering effects are neglected. The term equivalent albedo is used here to indicate that it is not a strictly true albedo value, because the measurements are taken in a limited spectral interval and the values are not corrected for atmospheric effects.

To calculate the reflectance of each pixel (considering the dependence on varying solar zenith angle, varying satellite zenith angle and varying sun-satellite azimuth angle), the concept of bidirectional reflectance may be applied:

Ri(μ0, μ, φ) = Ai/μ0     (8.12)

where Ri is the bidirectional reflectance; μ0 is the cosine of the solar zenith angle; μ is the cosine of the satellite zenith angle; and φ is the sun-satellite azimuth angle.

One disadvantage of a fixed pre-launch calibration algorithm is that conditions in the satellite orbit could be considerably different from ground conditions, thus leading to incorrect albedo values. The effects of radiometer degradation with time can also seriously affect the calibration. Both effects have been observed for earlier satellites. Also, changes in calibration techniques and coefficients from one satellite to the next in the series need attention by the user. The conclusion is that, until an on-board calibration technique can be realized, radiometer data from the visible channels have to be examined carefully to discover discrepancies from the nominal calibration algorithms.
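As an illustration of equations 8.10 to 8.12, the following Python sketch converts a 10-bit visible-channel count to equivalent albedo and then to bidirectional reflectance. The gain, intercept and equivalent solar radiance values are placeholders, not real AVHRR coefficients, and the unit conventions are assumptions made only for the example.

```python
import math

def counts_to_albedo(count, gain, intercept):
    """Equation 8.11: equivalent albedo A_i (per cent) from the 10-bit count X_i."""
    return gain * count + intercept

def albedo_to_radiance(albedo, solar_radiance):
    """Equation 8.10: L_i = A_i S_i, with S_i the equivalent solar radiance."""
    return albedo * solar_radiance

def bidirectional_reflectance(albedo, sun_zenith_deg):
    """Equation 8.12: R_i = A_i / mu0, with mu0 the cosine of the solar zenith angle."""
    return albedo / math.cos(math.radians(sun_zenith_deg))

# Placeholder coefficients (not real instrument values): a count of 512 on a 10-bit scale
albedo = counts_to_albedo(512, gain=0.1, intercept=-4.0)      # per cent
radiance = albedo_to_radiance(albedo, solar_radiance=0.05)    # units follow S_i
print(albedo, radiance, bidirectional_reflectance(albedo, sun_zenith_deg=60.0))
```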

Calibration of infrared channels

Unlike the visible channels, the infrared channels are calibrated continuously on board the satellite. A linear relation is established between the radiometer digital counts and radiance. The calibration coefficients may be estimated for every scan line by using two reference measurements. A cold reference point is obtained by viewing space, which acts as a black body at about 3 K, essentially a zero-radiance source. The other reference point is obtained from an internal black body, the temperature of which is monitored. The Planck function (see section 8.3.2) then gives the radiance (W m–2 sr–1) at each wavelength. A linear relationship between radiance and digital counts derived from the fixed points is used. A small non-linear correction is also applied. Difficulties of various sorts may arise.



For example, during some autumn months, the calibration of NOAA-10 channel 3 data has suffered from serious errors (giving temperatures too high). Although the reason for this is not clear, it may be caused by conditions when the satellite in the ascending node turns from illuminated to dark conditions. Rapid changes of internal black-body temperatures could then occur, and the application of a constant calibration algorithm may be incorrect.

Calibration of HIRS and MSU

For HIRS (see Annex 8.B), calibration measurements are taken every 40 scan lines and occupy 3 scan lines (for which no Earth-view data are available). The procedure is essentially the same as for the AVHRR, using the two known temperatures. For MSU (see Annex 8.B), the calibration sequence takes place at the end of each scan line, so that no Earth-view data are lost. Again, a two-point calibration is provided from warm and cold reference sources. However, for MSU channel frequencies and typical Earth-view temperatures, the measured radiances are in the Rayleigh-Jeans tail of the Planck function, where radiance is proportional to brightness temperature. Therefore, the data may be calibrated into brightness temperature directly (see section 8.3.2).
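The two-point calibration described above amounts to fitting a straight line through the space view and the internal black-body view. The following sketch uses illustrative counts and radiances (they are not real instrument values) and omits the small non-linear correction mentioned in the text.

```python
def two_point_calibration(space_count, space_radiance, bb_count, bb_radiance):
    """Derive a linear counts-to-radiance relation from the two reference views:
    cold space (near-zero radiance) and the monitored internal black body."""
    slope = (bb_radiance - space_radiance) / (bb_count - space_count)
    intercept = space_radiance - slope * space_count
    return slope, intercept

# Illustrative reference measurements for one scan line (placeholder values)
slope, intercept = two_point_calibration(space_count=40, space_radiance=0.0,
                                          bb_count=800, bb_radiance=95.0)

earth_count = 520
earth_radiance = slope * earth_count + intercept   # radiance for an Earth-view pixel
print(earth_radiance)
```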

8.3.1.4 Digitization

The digitization of the radiance provides a number of discrete values separated by constant steps. The temperature differences corresponding to these steps in radiance define the quanta of temperature in the final image. Owing to the non-linearity of the black-body function with temperature, the size of these steps depends upon the temperature. AVHRR data are digitized using 10 bits, thereby providing 1 024 different values. For the thermal infrared channels, the temperature step at 300 K is about 0.1 K, but it is 0.3 K at 220 K. Other systems are digitized using different numbers of bits. The infrared images for METEOSAT use 8 bits, but the visible and water-vapour channels have only 6 significant bits. Interestingly, tests have demonstrated that a monochrome satellite image can be displayed without serious degradation using the equivalent of only 5 bits.

8.3.1.5 Remapping

The requirements for the rapid processing of large amounts of data are best met by using digital computers. In an operational system, the most intensive computational task is to change the projection in which the image is displayed. This is necessary partly because of the distortions arising from viewing the curved Earth using a scanning mirror, and partly because of the need to use images in conjunction with other meteorological data on standard chart backgrounds. A key element in the process of remapping the image as seen from space (“space-view”), to fit the required projection, is knowing the position on the Earth of each pixel (“navigation”). This is achieved by knowing the orbital characteristics of the satellite (supplied by the satellite operator), the precise time at which each line of the image was recorded, and the geometry of the scan.

In practice, the remapping is carried out as follows. The position within the space-view scene that corresponds to the centre of each pixel in the final reprojected image is located, using the orbital data and the geometry of the final projection. The values of the pixels at, and in the locality of, this point are used to compute a new value. Effectively, this is a weighted average of the nearby values and is assigned to the pixel in the final image. Many sophisticated methods have been studied to perform this weighted average. Most are not applicable to near-real-time applications due to the large amount of computing effort required. However, the increasing availability of parallel processing computing is expected to change this position.

8.3.2 Vertical profiles of temperature and humidity

8.3.2.1 The TIROS operational vertical sounder system

The TIROS-N/NOAA-A series of satellites carry the TOVS system, including the HIRS and MSU instruments. They observe radiation upwelling from the Earth and atmosphere, which is given by the radiative transfer equation (RTE):

Lλ = Bλ(T(ps)) τλ(ps) + ∫ (p = ps to 0) Bλ(T(p)) [dτλ(p)/dp] dp     (8.13)

where Bλ is the Planck function at wavelength λ; Lλ is the upwelling irradiance; T(p) is the temperature as a function of pressure p; ps is the surface pressure; and τλ is the transmittance.



The first term is the contribution from the Earth’s surface and the second the radiation from the atmosphere; dτλ/dp is called the weighting function. The solution of the RTE is the basis of atmospheric sounding. The upwelling irradiance at the top of the atmosphere arises from a combination of the Planck function and the spectral transmittance. The Planck function conveys temperature information; the transmittance is associated with the absorption and density profile of radiatively active gases; and the weighting function contains profile information. For different wavelengths, the weighting function will peak at different altitudes. Temperature soundings may be constructed if a set of wavelength intervals can be chosen such that the corresponding radiances originate to a significant extent from different layers in the atmosphere. Figure 8.7 shows typical weighting functions which have been used for processing data from HIRS.

The solution of the RTE is very complex, mainly because of the overlap in the weighting functions shown in Figure 8.7. A number of different methods have been developed to derive temperature and humidity profiles.
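The role of the weighting function in equation 8.13 can be illustrated numerically: given a temperature profile and a channel transmittance profile, the upwelling radiance is the surface term plus the Planck emission of each layer weighted by its increment of transmittance to space. The wavelength and the profiles in the Python sketch below are invented for illustration only.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann (SI)

def planck(wavelength_m, temp_k):
    """Black-body spectral radiance at a single wavelength (per-steradian form)."""
    return (2 * H * C**2 / wavelength_m**5 /
            (np.exp(H * C / (K * wavelength_m * temp_k)) - 1.0))

def upwelling_radiance(wavelength_m, temperature, transmittance, surface_temp):
    """Discretized form of equation 8.13: the surface term plus the sum of the
    Planck emission of each layer weighted by its increment of transmittance
    to space (the weighting function integrated over the layer)."""
    surface_term = planck(wavelength_m, surface_temp) * transmittance[0]
    layer_temps = 0.5 * (temperature[:-1] + temperature[1:])   # mid-layer temperatures
    dtau = np.diff(transmittance)                              # weighting of each layer
    return surface_term + np.sum(planck(wavelength_m, layer_temps) * dtau)

# Invented level values from the surface upwards (1000, 700, 500, 300, 100, 10 hPa)
temperature = np.array([288.0, 270.0, 255.0, 230.0, 210.0, 220.0])   # K
transmittance = np.array([0.05, 0.2, 0.4, 0.7, 0.9, 1.0])            # level to space

print(upwelling_radiance(15e-6, temperature, transmittance, surface_temp=290.0))
```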

A general account of several methods is given by Smith (1985), and developments are reported in the successive reports of the TOVS Study Conferences (CIMSS, 1991). Early methods which were widely used were based on regressions between radiances and ground truth (from radiosondes), under various atmospheric conditions. Better results are obtained from solutions of the RTE, described as physical retrievals.

The basic principle by which water vapour concentration is calculated is illustrated by a procedure used in some physical retrieval schemes. The temperature profile is calculated using wavelengths in which carbon dioxide emits, and it is also calculated using wavelengths in which water vapour emits, with an assumed vertical distribution of water vapour. The difference between the two temperature profiles is due to the difference between the assumed and the actual water vapour profiles, and the actual profile may therefore be deduced.

In most Meteorological Services, the retrieval of geophysical quantities for use in numerical weather prediction is carried out by using physical methods.

Figure 8.7. TOVS weighting functions (normalized), plotted against height (pressure) for the HIRS long-wave CO2 channels, the HIRS short-wave CO2/H2O channels, the HIRS water vapour and long-wave window channels, the SSU 15 μm CO2 channels and the MSU microwave O2 channels

Figure 8.8. Schematic illustration of a group of weighting functions for nadir viewing and the effect of scanning off nadir on one of these functions



At NOAA, data are retrieved by obtaining a first guess using a library search method, followed by a full physical retrieval based on a solution of the RTE. Other Services, such as the UK Met Office and the Australian Bureau of Meteorology, use a numerical model first guess followed by a full solution of the RTE. The latest development is a trend towards a variational solution of the RTE in the presence of all other data available at the time of analysis. This can be extended to four dimensions to allow asynoptic data to contribute over a suitable period. It is necessary for all methods to identify and use pixels with no cloud, or to allow for the effects of cloud. Procedures for this are described in section 8.3.3.

8.3.2.2 The limb effect

The limb effect is illustrated in Figure 8.8. As the angle of view moves away from the vertical, the path length of the radiation through the atmosphere increases. Therefore, the transmittances from all levels to space decrease and the peak of the weighting function rises. If the channel senses radiation from an atmospheric layer in which there is a temperature lapse rate, the measured radiance will change; for tropospheric channels it will tend to decrease. It is, therefore, necessary for some applications to convert the measured radiances to estimate the brightness temperature that would have been measured if the instrument had viewed the same volume vertically. The limb-correction method may be applied, or a physical retrieval method.

Limb corrections are applied to brightness temperatures measured at non-zero nadir angle. They are possible because the weighting function of the nadir view for one channel will, in general, peak at a level intermediate between the weighting function peaks of two channels at the angle of measurement. Thus, for a given angle, θ, the difference between the brightness temperature at nadir and at the angle of measurement may be expressed as a linear combination of the measured brightness temperatures in a number of channels:

(TB)i,θ=0 − (TB)i,θ = aθi0 + Σj aθij (TB)j,θ     (8.14)

The coefficients aθij are found by multiple linear regression on synthetic brightness temperatures computed for a representative set of profiles.

It is possible to remove the need for a limb correction. For example, a temperature retrieval algorithm may be used with a different set of regression coefficients for each scan angle. However, if a regression retrieval is performed in which one set of coefficients (appropriate to a zero scan angle) is used, all brightness temperatures must be converted to the same angle of view, usually the nadir.

The weakness of the regression approach to the limb effect is the difficulty of developing regressions for different cloud, temperature and moisture regimes. A better approach, which has now become operational in some centres, is to use the physical retrieval method in which the radiative transfer equation is solved for every scan angle at which measurements are required.
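Applying the limb correction of equation 8.14 is a simple linear operation once the regression coefficients are available. In the sketch below the coefficients are placeholders standing in for values derived from synthetic brightness temperatures; they are not taken from any operational scheme.

```python
import numpy as np

def limb_correct(tb_at_angle, channel, a0, a_coeffs):
    """Equation 8.14: estimate the nadir-equivalent brightness temperature for one
    channel from the set of brightness temperatures measured at scan angle theta.
    a0 and a_coeffs are the regression coefficients for that channel and angle."""
    correction = a0 + np.dot(a_coeffs, tb_at_angle)
    return tb_at_angle[channel] + correction

# Placeholder example: three channels, coefficients invented for illustration
tb_theta = np.array([245.0, 252.0, 260.0])          # measured off nadir (K)
a0 = -1.5
a_coeffs = np.array([0.004, 0.006, -0.002])
print(limb_correct(tb_theta, channel=1, a0=a0, a_coeffs=a_coeffs))
```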

Limb scanning for soundings

Operational meteorological sounders look straight down from the satellite to the Earth’s surface, but an alternative approach is to look at the Earth’s limb. The weighting functions are very sharp for limb-scanning sensors and always peak at the highest pressure in the field of view. Hence, good vertical resolution (1 km) is obtained with a horizontal resolution of around 10 km. Somewhat poorer resolutions are available with vertical sounding, although it is not possible to make measurements lower than about 15 km altitude with limb-sounding techniques, and therefore vertical sounding is necessary for tropospheric measurements.

8.3.2.3 Resolution and accuracy

The accuracy of satellite retrievals is difficult to assess. As with many other observing systems, there is the problem of determining “what is truth?” A widely used method of assessing accuracy is the study of statistics of differences between retrievals and collocated radiosonde profiles. Such statistics will include the retrieval errors and will also contain contributions from radiosonde errors (which include the effects of both discrepancies from the true profile along the radiosonde ascent path and the degree to which this profile is representative of the surrounding volume of atmosphere) and collocation errors caused by the separation in space and time between the satellite sounding and the radiosonde ascent. Although retrieval-radiosonde collocation statistics are very useful, they should not be treated simply as measurements of retrieval error.



Brightness temperatures

It is important to note the strong non-linearity in the equations converting radiances to brightness temperatures. This means that, when dealing with brightness temperatures, the true temperature measurement accuracy of the radiometer varies with the temperature. This is not the case when handling radiances, as these are linearly related to the radiometer counts. In the AVHRR, all three infrared channels have rapidly decreasing accuracy for lower temperatures. This can be seen in Figure 8.9 (which shows only two channels).

Figure 8.9. Typical calibration curves for AVHRR channels 3 and 4, digital counts to brightness temperatures (brightness temperature (K) against 10-bit count). The curve for AVHRR channel 5 is very similar to the curve for AVHRR channel 4.

Comparisons of measurement accuracies for channel 3 (Annex 8.A) and channel 4 show some differences. When treating 10-bit values, the uncertainties are as shown in Table 8.5. Channel 3 shows a stronger non-linearity than channel 4, leading to much lower accuracies for low temperatures than channel 4. Channel 5 is very similar to channel 4. Channel 3 is much less accurate at low temperatures, but better than channel 4 at temperatures higher than 290 K.

Table 8.5. Uncertainty (K) of AVHRR IR channels

Temperature (K)   Channel 3   Channel 4
200               ~10         ~0.3
220               2.5         0.22
270               0.18        0.10
320               0.03        0.06

Soundings

Figure 8.10 shows typical difference statistics from the UK Met Office retrieval system. The bias and standard deviation profiles for retrieval-radiosonde differences are shown. These are based on all collocations obtained from NOAA-11 retrievals during July 1991, with collocation criteria of 3 h time separation and 150 km horizontal separation. If the set of profiles in the collocations is large, and both are representative of the same population, the biases in these statistics should be very small. The biases found, about 1° at some pressure levels, are to be expected here, where collocations for a limited period and limited area may not be representative of a zonal set. The standard deviations, while they are larger than the equivalent values for retrieval errors alone, exhibit some of the expected characteristics of the retrieval error profile. They have a minimum in the mid-troposphere, with higher values near the surface and the tropopause. The lower tropospheric values reflect problems



associated with residual cloud contamination and various surface effects. Low-level inversions will also tend to cause retrieval problems. The tropopause values reflect both the lack of information in the radiances from this part of the profile, as well as the tendency of the retrieval method to smooth out features of this type.

Figure 8.10. Error statistics for vertical profiles (UK Met Office)

Resolution

The field of view of the HIRS radiometer (Table 8.3) is about 17 km at the subsatellite point, and profile calculations can be made out to the edge of the swath, where the field is elliptical with an axis of about 55 km. Profiles can be calculated at any horizontal grid size, but they are not independent if they are closer than the field of view. Temperature soundings are calculated down to the cloud top, or to the surface if the MSU instrument is used. Over land and close to the coast, the horizontal variability of temperature and emissivity causes uncertainties which limit their use in numerical models below about 500 hPa. The vertical resolution of the observations is related to the weighting functions, and is typically about 3 km. This poor vertical resolution is one of the main shortcomings of the present sounding system for numerical weather prediction, and it will be improved in the next generation of sounding instruments, such as the atmospheric infrared sounder (AIRS) and the high-resolution interferometer sounder (HIS).

8.3.3 Cloud and land surface characteristics and cloud clearing

8.3.3.1 Cloud and land surface observations

The scheme developed at the UK Met Office is typical of those that may be used to extract information about clouds and the surface. It applies a succession of tests to each pixel within a scene in attempts to identify cloud. The first is a threshold test in the infrared; essentially, any pixels colder than a specified temperature are deemed to contain cloud. The second test looks at the local variance of temperatures within an image. High values indicate either mixtures of clear and cloudy pixels or those containing clouds at different levels. Small values at low temperatures indicate fully cloudy pixels.

The brightness temperatures of an object in different channels depend upon the variations with wavelength, the emissivity of the object and the attenuation of radiation by the atmosphere. For thin clouds, temperatures in AVHRR channel 3 (3.7 µm) (Annex 8.A) are warmer than those in channel 4 (11 µm) (see Figure 8.11(a)). The converse is true for thick low cloud, this being the basis of the fog detection scheme described by Eyre, Brownscombe and Allam (1984) (see Figure 8.11(b)). The difference between AVHRR channels 4 and 5 (11 µm and 12 µm) is sensitive to the thickness of cloud and to the water vapour content of the atmosphere. A threshold applied to this difference facilitates the detection of thin cirrus. During the day, reflected solar radiation, adjusted to eliminate the effects of variations of solar elevation, can also be used. A threshold test separates bright cloud from dark surfaces. A fourth test uses the ratio of the radiance of the near-infrared channel 2 (0.9 µm) to that of the visible channel 1 (0.6 µm). This ratio has a value that is:
(a) Close to unity for clouds;
(b) About 0.5 for water, due to the enhanced backscattering by aerosols at short wavelengths;
(c) About 1.5 for land, and particularly growing vegetation, due to the high reflectance of leafy structures in the near infrared.
A sketch of such a sequence of threshold tests is given at the end of this subsection.

Having detected the location of the pixels uncontaminated by cloud using these methods, it is possible to determine some surface parameters. Of these, the most important is sea-surface temperature (section 8.3.6). Land surfaces have highly variable emissivities that make calculations very uncertain.

Cloud parameters can be extracted using extensions to the series of tests outlined previously. These include cloud-top temperatures, fractional cloud cover and optical thickness. The height of the cloud top may be calculated in several ways. The simplest method is to use brightness temperatures from one or more channels to calculate cloud-top temperature, and infer the height from a temperature profile, usually derived from a numerical model. This method works well for heavy stratiform and cumulus cloud fields, but not for semi-transparent clouds such as cirrus, or for fields of small cumulus clouds. Smith and Platt (1978) showed how to use the radiative transfer equation in close pairs of HIRS channels to calculate pressure and, hence, the height of the tops of scattered or thin cloud, with errors typically between half and a quarter of the cloud thickness of semi-transparent layers.
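A minimal sketch of such a succession of threshold tests, applied to co-registered channel arrays, is given below. All threshold values are invented for illustration and would in practice be tuned for region, season and time of day.

```python
import numpy as np

def cloud_mask(t11, t12, t37, refl_vis, refl_nir,
               t_cold=270.0, var_max=0.5, fog_diff=1.0, cirrus_diff=2.0,
               bright_max=0.3, ratio_lo=0.8, ratio_hi=1.2):
    """Flag cloudy pixels with a succession of simple threshold tests: infrared
    threshold, local variance, 3.7/11 um fog test, 11/12 um split window,
    visible brightness and near-infrared/visible ratio. Inputs are 2-D arrays."""
    cloudy = t11 < t_cold                               # cold pixels
    # Local variance over a 3 x 3 neighbourhood (edge pixels left untested)
    local_var = np.zeros_like(t11)
    local_var[1:-1, 1:-1] = np.var(
        np.stack([t11[i:i + t11.shape[0] - 2, j:j + t11.shape[1] - 2]
                  for i in range(3) for j in range(3)]), axis=0)
    cloudy |= local_var > var_max                       # mixed or multi-level cloud
    cloudy |= (t37 - t11) < -fog_diff                   # thick low cloud or fog
    cloudy |= (t11 - t12) > cirrus_diff                 # thin cirrus
    cloudy |= refl_vis > bright_max                     # bright cloud (daytime only)
    ratio = refl_nir / np.maximum(refl_vis, 1e-6)
    cloudy |= (ratio > ratio_lo) & (ratio < ratio_hi)   # ratio near unity: cloud
    return cloudy
```

In an operational scheme the reflectance tests would be applied only in daylight, the thresholds would differ over land and sea, and the result would feed the surface and cloud-parameter extraction described above.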



It should be stressed that such products can be derived only from data streams that contain precise calibration data. These data can only be considered as images when they are displayed on a suitable device. Although in some cases they are derived to be used as input variables for mesoscale numerical models, much useful information can be gained through viewing them. Various combinations of radiometer channels are used to define particular types of clouds, snow and vegetation, as shown for example in Figure 8.12.

8.3.3.2 Soundings of the TIROS operational vertical sounder in the presence of cloud

Cloud clearing

Infrared radiances are affected markedly by the presence of clouds, since most are almost opaque in this wavelength region. Consequently, the algorithms used in the retrieval of tropospheric temperature must be able to detect clouds which have a significant effect on the radiances and, if possible, make allowances for these effects.



This is usually done by correcting the measured radiances to obtain “clear-column” values, namely, the radiances which would be measured from the same temperature and humidity profiles in the absence of cloud. In many retrieval schemes, the inversion process converts clear-column radiances to atmospheric parameters, and so a preliminary cloud-clearing step is required.

Many of the algorithms developed are variants of the adjacent field of view, or N*, method (Smith, 1985). In this approach, the measured radiances, R1 and R2, in two adjacent fields of view (hereafter referred to as “spots”) of a radiometer channel can, under certain conditions, be expressed as follows:

R1 = N1 Rcloudy + (1 – N1) Rclear
R2 = N2 Rcloudy + (1 – N2) Rclear     (8.15)

Figure 8.11. Calculation of temperature in the presence of clouds: (a) the effect of semi-transparent cloud on radiances; surface radiance Bs is reduced by semi-transparent cloud to τBs, and the temperature corresponding to τBs is higher for 3.7 μm than for 11 μm; (b) the effect of different emissivity on radiances; the radiance received at the satellite is Bsat = E B(Ts), where E is emissivity, B is the black-body function and Ts is the surface temperature; for low cloud and fog, E(11 μm) ≈ 1.0 and E(3.7 μm) ≈ 0.85, so the temperature corresponding to E B(Ts) is higher for 11 μm than for 3.7 μm



where Rclear and Rcloudy are the radiances appropriate to clear and completely overcast conditions, respectively; and N1 and N2 are the effective fractional cloud coverages in spots 1 and 2. In deriving these equations, the following assumptions have been made:
(a) That the atmospheric profile and surface characteristics in the two spots are the same;
(b) That only one layer of cloud is present;
(c) That the cloud top has the same height (and temperature) in both spots.

If the fractional cloud coverages in the two spots are different (N1 ≠ N2), equation 8.15 may be solved simultaneously to give the clear radiance:

Rclear = (R1 – N* R2)/(1 – N*)     (8.16)

where N* = N1/N2.

This method has been considerably elaborated, using HIRS and MSU channels, the horizontal resolution of which is sufficient for the assumptions to hold true sufficiently often. In this method, regression between co-located measurements in MSU channel 2 and the HIRS channels is used, and the coefficients are updated regularly, usually on a weekly basis. Newer methods are now being applied, using AVHRR data to help clear the HIRS field of view. Furthermore, full physical retrieval methods are possible, using AVHRR and TOVS data, in which the fractional cloud cover and cloud height and amount can be explicitly computed from the observed radiances.
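A sketch of equation 8.16 applied to a single pair of adjacent spots is given below; the radiances and the value of N* are invented, and a real scheme would reject cases in which the two cloud fractions are too similar for the denominator to be well conditioned.

```python
def clear_column_radiance(r1, r2, n_star):
    """Equation 8.16: clear-column radiance from two adjacent spots with
    different effective cloud fractions, where n_star = N1 / N2."""
    if abs(1.0 - n_star) < 1e-3:
        raise ValueError("Cloud fractions too similar; N* method not applicable")
    return (r1 - n_star * r2) / (1.0 - n_star)

# Illustrative numbers: spot 1 is less cloudy than spot 2
r1, r2 = 95.0, 80.0        # measured radiances (arbitrary units)
n_star = 0.4               # N1 / N2, estimated from another channel
print(clear_column_radiance(r1, r2, n_star))   # 105.0
```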

8.3.4 Wind measurements

8.3.4.1 Cloud drift winds

Cloud drift winds are produced from geostationary satellite images by tracking cloud tops, usually for two half-hour periods between successive infrared images. The accuracy of the winds is limited to the extent that cloud motion represents the wind (for example, a convective cloud cluster may move with the speed of a mesoscale atmospheric disturbance, and not with the speed of an identifiable wind). It also depends on the extent to which a representative cloud height can be determined from the brightness temperature field. In addition, the accuracy of the winds is dependent on the time interval and, to a limited extent, on the correlations between the cloud images used in their calculation, the spatial resolution of these images, the error in the first-guess fields, the degree to which the first-guess field limits the search for correlated patterns in sequential images, and the amount of development taking place in the clouds.

Mean vector differences between cloud drift winds and winds measured by wind-finding radars within 100 nm were typically 3, 5 and 7 m s–1 for low, middle and high clouds, respectively, for one month. These indicate that the errors are comparable at low levels with those for conventional measurements.

The wind estimation process is typically fully automatic. Target cloud areas covering about 20 × 20 pixels are chosen from half-hourly images using criteria which include a suitable range of brightness temperatures and gradients within each trial area. Once the targets have been selected, auto-tracking is performed, using typically a 6 or 12 h numerical prognosis as a first-guess field to search for well-correlated target areas. Root-mean-square differences may be used to compare the arrays of brightness temperatures of the target and search areas in order to estimate motion. The first guess reduces the size of the search area that is necessary to obtain the wind vector, but it also constrains the results to lie within a certain range of the forecast wind field. Error flags are assigned to each measurement on the basis of several characteristics, including the differences between the successive half-hour vectors and the difference between the measurement and the first-guess field. These error flags can be used in numerical analysis to give appropriate weight to the data. The number of measurements for each synoptic hour is, of course, limited by the existence of suitable clouds and is typically of the order of 600 vectors per hemisphere.

At high latitudes, sequential images from polar-orbiting satellites can be used to produce cloud motion vectors in the latitudes not reached by the geostationary satellites. A further development of the same technique is to calculate water vapour winds, using satellite images of the water vapour distribution.
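The core of the auto-tracking step, namely finding the displacement that best matches a target array of brightness temperatures in a later image, can be sketched as a brute-force root-mean-square search around the first-guess displacement, as below. The search radius and array sizes are arbitrary choices for illustration.

```python
import numpy as np

def track_target(target, y0, x0, later_image, first_guess=(0, 0), search_radius=5):
    """Find the displacement (in pixels) of a target box, originally at (y0, x0),
    within a later image by minimizing the root-mean-square difference of
    brightness temperatures."""
    ny, nx = target.shape
    best = None
    for dy in range(first_guess[0] - search_radius, first_guess[0] + search_radius + 1):
        for dx in range(first_guess[1] - search_radius, first_guess[1] + search_radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + ny > later_image.shape[0] or x + nx > later_image.shape[1]:
                continue                      # candidate window falls outside the image
            window = later_image[y:y + ny, x:x + nx]
            rms = np.sqrt(np.mean((window - target) ** 2))
            if best is None or rms < best[0]:
                best = (rms, dy, dx)
    return best                               # (rms, dy, dx)
```

The displacement divided by the time between images, scaled by the pixel size at the cloud location, gives the motion vector; the cloud height assigned from the brightness temperatures determines the level to which the wind is attributed.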

8.3.4.2 Scatterometer surface winds

The scatterometer is an instrument on the experimental ERS-1 satellite which produces routine wind measurements over the sea surface. The technique will become operational on satellites now being prepared.



Figure 8.12. Identification of cloud and surface properties: (a) snow, cumulonimbus (Cb), nimbostratus (Ns), altocumulus (Ac), cumulus (Cu) over land, cirrus (Ci) over land, sunglint, land and sea in the A1 – (A1 – A2) feature space; the figure is extracted from the database for summer, NOAA-10 and a sun elevation around 40°; (b) object classes in the A1 – (T3 – T4) feature space, from the same database section as in (a); the separability of snow and clouds is apparent; a problem is the discrimination of stratus and sunglint (Sg) during summer; sunglint/spring is also included

As soon as microwave radar became widely used in the 1940s, it was found that at low elevation angles the surrounding terrain (or, at sea, waves) caused large, unwanted echoes. Ever since, designers and users of radar equipment have sought to reduce this noise. Researchers investigating the effect found that the backscattered echo from the sea became large with increasing wind speed, thus opening the possibility of remotely measuring the wind. Radars designed to measure this type of echo are known as scatterometers.

Backscattering is due principally to in-phase reflections from a rough surface; for incidence angles of more than about 20° from the vertical, this occurs when the Bragg condition is met:

Λ sin θi = nλ/2     (8.17)

where Λ is the surface roughness wavelength; λ is the radar wavelength; θi is the incidence angle; and n = 1, 2, 3, … First-order Bragg scattering (n = 1), at microwave frequencies, arises from the small ripples (cats’ paws) generated by the instantaneous surface wind stress. The level of backscatter from an extended target, such as the sea surface, is generally termed the normalized radar cross-section, or σ0. For a given geometry and transmitted power, σ0 is proportional to the power received back at the radar. In terms of other known or measurable radar parameters:

σ0 = (PR/PT) · 64π³R⁴ / [λ² LS G0² (G/G0)² A]     (8.18)

where PT is the transmitted power and PR is the power received back at the radar; R is the slant range to the target of area A; λ is the radar wavelength; LS includes atmospheric attenuation and other system losses; G0 is the peak antenna gain; and G/G0 is the relative antenna gain in the target direction. Equation 8.18 is often referred to as the radar equation. σ0 may be set in a linear form (as above) or in decibels (dB), i.e. σ0dB = 10 log10 σ0lin.

Experimental evidence from scatterometers operating over the ocean shows that σ0 increases with surface wind speed (as measured by ships or buoys), decreases with incidence angle, and is dependent on the radar beam angle relative to wind direction. Figure 8.13 is a plot of σ0 aircraft data against wind direction for various wind speeds. Direction 0° corresponds to looking upwind, 90° to crosswind and 180° to downwind. The European Space Agency has coordinated a number of experiments to confirm these types of curves at 5.3 GHz, which is the operating frequency for this instrument on the ERS-1 satellite. Several aircraft scatterometers have been flown close to instrumented ships and buoys in the North Sea, the Atlantic and the Mediterranean. The σ0 data are then correlated with the surface wind, which has been adjusted to a common anemometer height of 10 m (assuming neutral stability). An empirical model function has been fitted to these data, of the form:

σ0 = a0 U^γ (1 + a1 cos φ + a2 cos 2φ)     (8.19)

where the coefficients a0, a1, a2 and γ are dependent on the incidence angle. This model relates the neutral-stability wind speed at 10 m, U, and the wind direction relative to the radar, φ, to the normalized radar cross-section. It may also be the case that σ0 is a function of sea-surface temperature, sea state and surface slicks (natural or man-made). However, these parameters have yet to be demonstrated as having any



significant effect on the accuracy of wind vector retrieval. Since σ0 shows a clear relationship with wind speed and direction, in principle, measuring σ0 at two or more different azimuth angles allows both wind speed and direction to be retrieved. However, the direction retrieved may not be unique; there may be ambiguous directions. In 1978, a wind scatterometer was flown on a satellite – the SEASAT-A satellite scatterometer (SASS) – for the first time and ably demonstrated the accuracy of this new form of measurement. The specification was for root-mean-square accuracies of 2 m s–1 for wind speed and 20° for direction. Comparisons with conventional wind measurements showed that these figures were met if the rough wind direction was known, so as to select the best from the ambiguous set of SASS directions. The SASS instrument used two beams either side of the spacecraft, whereas the ERS-1 scatterometer uses a third, central beam to improve wind direction discrimination; however, since it is only a single-sided instrument, it provides less coverage. Each of the three antennas produces a narrow beam of radar energy in the horizontal, but wide beam in the vertical, resulting in a narrow band of illumination of the sea surface across the 500 km width of the swath. As the satellite travels

forward, the centre and then the rear beam measure from the same part of the ocean as the fore beam. Hence, each part of the swath, divided into 50 km squares, has three σ0 measurements taken at different relative directions to the local surface wind vector. Figure 8.14 shows the coverage of the scatterometer for the North Atlantic over 24 h. These swaths are not static and move westwards to fill in the large gaps on subsequent days. Even so, the coverage is not complete, owing to the relatively small swath width in relation to, for example, the AVHRR imager on the NOAA satellites. However, there is potentially a wind available every 50 km within the coverage area, globally, and the European Space Agency delivers this information to operational users within 3 h of the measurement time. The raw instrument data are recorded on board and replayed to European Space Agency ground stations each orbit, the principal station being at Kiruna in northern Sweden, where the wind vectors are derived.

As already mentioned, the scatterometer principally measures the power level of the backscatter at a given location at different azimuth angles. Since we know the geometry, such as range and incidence angles, equation 8.18 can be used to calculate a triplet of values of σ0 for each cell. In theory, it should be possible to use the model function (equation 8.19) to extract the two pieces of information required (wind speed and direction) using appropriate simultaneous equations. However, in practice, this is not feasible; the three σ0s will have a finite measurement error, and the function itself is highly non-linear. Indeed, the model, initially based on aircraft data, may not be applicable to all circumstances. Wind speed and direction must be extracted numerically, usually by minimizing a function of the form:
R = Σ (i = 1 to 3) {[σi0 − σ0(U, φi, θi)] / (σi0 Kpi)}²     (8.20)

where R is effectively the sum of squares of the residuals, comparing the measured values of σ0 with those from the model function (using an estimate of wind speed and direction), weighted by the noise in each beam, Kpi, which is related to the signal-to-noise ratio. The wind vector estimate is refined so as to minimize R.

Figure 8.13. Measured backscatter, σ0 (in decibels), against relative wind direction for different wind speeds. Data are for 13 GHz, vertical polarization.
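A brute-force sketch of this minimization is given below. The model-function coefficients in sigma0_model are placeholders (real coefficients depend on incidence angle and frequency), and an operational scheme would use a proper optimizer and ambiguity removal rather than a coarse grid search.

```python
import numpy as np

def sigma0_model(u, phi_deg, a0=0.001, a1=0.4, a2=0.6, gamma=1.6):
    """Empirical model function (equation 8.19) with placeholder coefficients."""
    phi = np.radians(phi_deg)
    return a0 * u**gamma * (1 + a1 * np.cos(phi) + a2 * np.cos(2 * phi))

def retrieve_wind(sigma0_obs, azimuths_deg, kp, speeds=np.arange(1, 31, 0.5),
                  directions=np.arange(0, 360, 5)):
    """Minimize the residual R of equation 8.20 over a grid of wind speed and
    direction; returns the best (speed, direction, R)."""
    best = None
    for u in speeds:
        for wind_dir in directions:
            rel = wind_dir - azimuths_deg            # direction relative to each beam
            model = sigma0_model(u, rel)
            r = np.sum(((sigma0_obs - model) / (sigma0_obs * kp)) ** 2)
            if best is None or r < best[2]:
                best = (u, wind_dir, r)
    return best

# Three beams viewing the same 50 km cell from different azimuths
azimuths = np.array([45.0, 90.0, 135.0])
truth = sigma0_model(12.0, 30.0 - azimuths)          # synthetic "observations"
print(retrieve_wind(truth, azimuths, kp=np.array([0.05, 0.05, 0.05])))
```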



Starting at different first-guess wind directions, the numerical solution can converge on up to four distinct, or ambiguous, wind vectors, although there are often only two obviously different ones, usually about 180° apart. One of these is the “correct” solution, in that it is the closest to the true wind direction and within the required root-mean-square accuracies of 2 m s–1 and 20°. Algorithms have been developed to select the correct set of solutions. Numerical model wind fields are also used as first-guess fields to aid such analyses. Work is currently under way with ERS-1 data to calibrate and validate satellite winds using surface and low-level airborne measurements.

8.3.4.3 Microwave radiometer surface wind speed

The special sensor microwave imagers (SSM/I) flying on the DMSP satellites provide microwave radiometric brightness temperatures at several frequencies (19, 22, 37 and 85.5 GHz) and at both vertical and horizontal polarization. Several algorithms have been developed to measure a variety of meteorological parameters. Surface wind speeds over sea (not over land) can be measured to an accuracy of a few metres per second using a regression equation on the brightness temperatures in several channels. Work continues to verify and develop these algorithms, which are not yet used operationally.

8.3.5 Precipitation

8.3.5.1 Visible/infrared techniques

Visible/infrared techniques derive qualitative or quantitative estimates of rainfall from satellite imagery through indirect relationships between solar radiance reflected by clouds (or cloud brightness temperatures) and precipitation. A number of methods have been developed and tested during the past 15 years with a measured degree of success. There are two basic approaches, namely the “life-history” and the “cloud-indexing” techniques.

Figure 8.14. ERS-1 subsatellite tracks and wind scatterometer coverage of the North Atlantic region over one day. The large gaps are partially filled on subsequent days; nominally this occurs in a three-day cycle. The dashed lines show the limits of reception for the Kiruna ground station in Sweden.



The first technique uses data from geostationary satellites, which produce images usually every half hour, and has been applied mostly to convective systems. The second technique, also based on cloud classification, does not require a series of consecutive observations of the same cloud system. It must be noted, however, that, up to now, none of these techniques has been shown to be “transportable”; in other words, relationships derived for a given region and a given period may not be valid for a different location and/or season. Other problems include difficulties in defining rain/no-rain boundaries and an inability to cope with rainfall patterns at the mesoscale or local scale. Scientists working in this field are aware of these problems; for this reason it is current practice to speak of the derivation of “precipitation indices” rather than rain rates.

8.3.5.2 Cloud-indexing methods

Cloud indexing was the first technique developed to estimate precipitation from space. It is based on the assumption that the probability of rainfall over a given area is related to the amount and type of cloudiness present over the area. Hence, it could be postulated that precipitation can be characterized by the structure of the upper surface of the associated cloudiness. In addition, in the case of convective precipitation, it could also be postulated that a relationship exists between the capacity of a cumuliform cloud to produce rain and its vertical as well as its horizontal dimensions. The vertical extent of a convective cloud is related to the cloud-top brightness temperature (higher cloud tops are associated with colder brightness temperatures). The approach is, therefore, to perform a cloud structure analysis (objective or subjective) based on the definition of a criterion relating cloudiness to a coefficient (or index) of precipitation. This characteristic may be, for instance, the number of image pixels above a given threshold level.

The general approach for cloud-indexing methods involving infrared observations is to derive a relationship between a precipitation index (PI) and a function of the cloud surface area, S(TBB), associated with the background brightness temperature (TBB) colder than a given threshold T0. This relationship can be generally expressed as follows:

PI = A0 + Σ (i = 1 to I) Ai S(TBBi),  for TBBi < T0     (8.21)

If desired, an additional term related to the visible image can be included on the right-hand side of equation 8.21. The next step is to relate PI to a physical quantity related in some way to rain. This is done by adjusting the coefficients A and the threshold level T0 by comparison with independent observations, such as raingauge or radar data. One of the problems inherent in this technique is the bias created by the potential presence of high-level non-precipitating clouds such as cirrus. Another limitation resides in the fact that the satellite measurement represents an instantaneous observation integrated over space, while raingauge observations are integrated over time at a given site.

8.3.5.3 Life-history methods

Life-history methods, as indicated by their name, are based on the observation of a series of consecutive images obtained from a geostationary satellite. It has been observed that the amount of precipitation associated with a given cloud is also related to its stage of development. Therefore, two clouds presenting the same aspect (from the visible and infrared images point of view) may produce different quantities of rain, depending on whether they are growing or decaying. As with the cloud-indexing technique, a relationship is derived between a PI and a function of the cloud surface area, S(TBB), associated with a given brightness temperature (TBB) lying above a given threshold level. In addition, cloud evolution is taken into account and expressed in terms of the rate of change of S(TBB) between two consecutive observations. An equation, as complex as desired, may be derived between PI and functions of S(TBB) and its derivative with respect to time:

PI = A0 + A1 S(TBB) + A2 dS(TBB)/dt,  for TBB < T0     (8.22)

Here, also, another step is necessary in order to relate the PI defined by the equation to a physical quantity related to rain. Many such relationships have already been published. These publications have been discussed extensively, and it has been demonstrated, at least

PI = Ao + for TBBi < T0.

Σ A , S (TBBi) i i i I

(8.21)

I

CHaPTEr 8. SaTEllITE OBSErvaTIONS

II.8–25

for one instance, that taking into account the cloud evolution with time added unnecessary complexity and that comparable success could be obtained with a simple cloud-indexing technique. Recently, more physics has been introduced to the various schemes. Improvements include the following: (a) The use of cloud models to take into account the stratiform precipitation often associated with convective rainfall and to help with cloud classification; (b) The use of cloud microphysics, such as dropsize/rain-rate relations; (c) The introduction of simultaneous upper tropospheric water vapour observations; (d) The introduction of a time lag between the satellite observations and the ground-based measurements. It has also become evident that satellite data could be used in conjunction with radar observations, not only to validate a method, but as a complementary tool. FRONTIERS (the forecasting rain optimized using new techniques of interactively enhanced radar and satellite), developed by the UK Met Office, provides an example of the combined use of satellite imagery and radar observations. Various comparisons between different methods over the same test cases have now been performed and published. However, any final statement concerning the success (or lack thereof) of visible infrared methods must be treated with extreme caution. The degree of success is very strongly related to the space-time scales considered, and it cannot be expected that a regression developed and tested for use in climate studies will also be valid for the estimation of mesoscale precipitation. It must also be kept in mind that it is always easy to adjust regression coefficients for a particular case and claim that the method has been validated. 8.3.5.4 Microwave techniques
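As an illustration of the form of equations 8.21 and 8.22, a brief sketch is given below. The threshold T0, the coefficients and the synthetic image fields are placeholders only; in practice, as noted above, they are tuned by comparison with raingauge or radar data.

```python
import numpy as np

# Sketch of a cloud-indexing / life-history precipitation index (equations 8.21
# and 8.22). The threshold T0 and coefficients A0, A1, A2 are placeholders; in
# practice they are adjusted by comparison with raingauge or radar observations.

def cold_cloud_area(tbb, t0, pixel_area_km2=25.0):
    """S(TBB): area (km2) of cloud colder than threshold t0 (K)."""
    return np.count_nonzero(tbb < t0) * pixel_area_km2

def precipitation_index(tbb_now, tbb_prev, dt_hours, t0=235.0,
                        a0=0.0, a1=0.02, a2=0.01):
    s_now = cold_cloud_area(tbb_now, t0)
    s_prev = cold_cloud_area(tbb_prev, t0)
    ds_dt = (s_now - s_prev) / dt_hours          # rate of change of S(TBB)
    return a0 + a1 * s_now + a2 * ds_dt          # equation 8.22 (8.21 if a2 = 0)

# Two synthetic 10 x 10 brightness-temperature fields (K), half an hour apart.
rng = np.random.default_rng(0)
tbb_prev = rng.uniform(210.0, 290.0, (10, 10))
tbb_now = tbb_prev - 5.0                          # cloud system growing/cooling
print("PI =", round(precipitation_index(tbb_now, tbb_prev, 0.5), 2))
```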

8.3.5.4 Microwave techniques

Visible/infrared measurements represent observations of the upper surfaces of clouds only. In contrast, it is often believed that microwave radiation is not affected by the presence of clouds. This statement is not generally true. Its degree of validity varies with the microwave frequency used as well as with the type of cloud being observed. One major difference between infrared and microwave radiation is the fact that, while the ocean surface emissivity is nearly equal to one in the infrared, its value (although variable) is much smaller in the microwave region (from 5 to 200 GHz in this case). Therefore, the background brightness temperature (TBB) of the ocean surface appears much colder in the microwave. Over land, the emissivity is close to one, but varies greatly depending on the soil moisture. As far as microwaves are concerned, several different effects are associated with the presence of clouds over the ocean. They are highly frequency dependent. Currently, active methods (space-borne radar) are being developed for experimental use.

8.3.6 Sea-surface temperatures

Satellite measurements of radiation emitted from the ocean surface may be used to derive estimates of sea-surface temperature, to complement in situ observation systems (for example, ships, drifting buoys), for use in real-time meteorological or oceanographic applications, and in climate studies. Although satellites measure the temperature from a layer of ocean less than about 1 mm thick, the satellite data compare very favourably with conventional data. The great advantage of satellite data is geographical coverage, which generally far surpasses that available by conventional means. Also, in many cases, the frequency of satellite observations is better than that obtained using drifting buoys, although this depends on the satellite and the latitude of observation, among other things.

Satellite sea-surface temperature measurements are most commonly made at infrared wavelengths and, to a lesser degree, at microwave wavelengths. Scanning radiometers are generally used. In the infrared, the essence of the derivation is to remove any pixels contaminated by cloud and to correct the measured infrared brightness temperatures for attenuation by water vapour. Cloud-free pixels must be identified extremely carefully so as to ensure that radiances for the ocean are not affected by clouds, which generally radiate at much colder temperatures than the ocean surface. Algorithms have been developed for the specific purpose of cloud clearing for infrared sea-surface temperature measurements (for example, Saunders and Kriebel, 1988). The satellite infrared sea-surface temperatures can be derived only in cloud-free areas, whereas at microwave wavelengths, cloud attenuation is far smaller so that in all but heavy convective situations the microwave measurements are available. The disadvantage with the microwave data is that the instrument spatial resolution is usually of the order of several tens of kilometres, whereas infrared resolution is generally around 1 to 5 km. Microwave sea-surface temperature measurements are discussed by Alishouse and McClain (1985).

8.3.6.1 Infrared techniques

Most satellite measurements are taken in the 10.5 to 12.5 µm atmospheric window, for which corrections to measured brightness temperatures due to water vapour attenuation may be as much as 10 K in warm, moist (tropical) atmospheres. Sea-surface temperature derivation techniques usually address this problem in one of two ways. In the differing path length (multi-look) method, observations are taken of the same sea location at differing look angles. Because atmospheric attenuation is proportional to atmospheric path length, measurements at two look angles can be used to correct for the attenuation. An example of an instrument that uses this technique is the along-track scanning radiometer (ATSR), a new generation infrared radiometer that has a dual angle view of the sea and is built specifically to provide accurate sea-surface temperature measurements (Prata and others, 1990). It is carried on board the European Space Agency remote-sensing satellite ERS-1, launched in July 1991.

In the split-window technique, atmospheric attenuation corrections can be made because of differential absorption in a given window region of the atmosphere (for example, 10.5 to 12.5 µm) and the highly wavelength-dependent nature of water vapour absorption. The differing infrared brightness temperatures measured for any two wavelengths within the infrared 10 to 12 µm window support theoretical studies which indicate a highly linear relation between any pair of infrared temperatures and the correction needed. Hence, the difference in atmospheric attenuation between a pair of wavelengths is proportional to the difference in attenuation between a second pair. One window is chosen as a perfect window (through which the satellite “sees” the ocean surface), and one wavelength is common to both pairs. A typical split-window algorithm is of the form:

TS = a0 + T11 + a1 (T11 – T12)   (8.23)

where TS is the sea-surface temperature; T values are brightness temperatures at 11 or 12 µm, as indicated; and a0 and a1 are constants. Algorithms of this general form have been derived for use with daytime or night-time measurements, and using several infrared channels (for example, McClain, Pichel and Walton, 1985).

Instruments

A number of satellite-borne instruments have been used for sea-surface temperature measurements (Rao and others, 1990) as follows:
(a) NOAA AVHRR;
(b) GOES VAS;
(c) NOAA HIRS/MSU;
(d) GMS VISSR;
(e) SEASAT and Nimbus-7 SMMR (scanning multichannel microwave radiometer);
(f) DMSP SSM/T (special sensor microwave temperature sounder).
By far the most widely used source of satellite sea-surface temperatures has been the AVHRR, using channels 3, 4 and 5 (Annex 8.A).

8.3.6.2 Comparison with ground-based observations

Before considering the comparison of satellite-derived sea-surface temperatures with in situ measurements, it is important to understand what satellite instruments actually measure. Between about 3 and 14 µm, satellite radiometers measure only emitted radiation from a “skin” layer about 1 mm thick. The true physical temperature of this skin layer can differ from the sea temperature below (say, at a depth from a few metres to several tens of metres) by up to several K, depending on the prevailing conditions and on a number of factors such as:
(a) The mixing of the upper layers of the ocean due to wind, or gravitational settling at night after the topmost layers radiatively cool;
(b) Heating of the ocean surface by sunlight;
(c) Evaporation;
(d) Rainfall;
(e) Currents;
(f) Upwelling and downwelling.
The most serious of these problems can be the heating of the top layer of the ocean on a calm sunny day. To some degree, the disparity between satellite and in situ sea-surface temperatures is circumvented by using daytime and night-time algorithms, which have been specially tuned to take into account diurnal oceanic effects. Alternatively, night-time satellite sea-surface temperatures are often preferred because the skin effect and the oceanic thermocline are at a minimum at night. It should also be remembered that ship measurements refer to a point value at a given depth (“intake temperature”) of 10 m or more, whereas the satellite is measuring radiance averaged over a large area (from 1 up to several tens or hundreds of square kilometres). Note that the ship data can often be highly variable in terms of quality.

Rao and others (1990) show a comparison of global multichannel satellite sea-surface temperatures with drifting buoys. The bias is very small and the root mean square deviation is about 0.5 K. Typically, comparisons of infrared satellite sea-surface temperatures with in situ data (for example, buoys) show biases within 0.1 K and errors in the range of 0.4 to 0.6 K. Rao and others (1990) also show a comparison of microwave satellite sea-surface temperatures (using the SMMR instrument) with ship observations. The bias is 0.22 K and the standard deviation is 0.75 K for the one-month comparison.

In summary, satellite-derived sea-surface temperatures provide a very important source of observations for use in meteorological and oceanographic applications. Because satellite instruments provide distinctly different measurements of sea temperature than do ships or buoys, care must be taken when merging the satellite data with conventional data. However, many of these possible problems of merging slightly disparate data sets have been overcome by careful tuning of satellite sea-surface temperature algorithms to ensure that the satellite data are consistent with a reference point defined by drifting buoy observations.
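The split-window form of equation 8.23 can be sketched as follows; the coefficients used are placeholders, since operational values are tuned against drifting-buoy matchups and differ between daytime and night-time algorithms.

```python
# Sketch of the split-window sea-surface temperature retrieval (equation 8.23):
#   TS = a0 + T11 + a1 * (T11 - T12)
# The coefficients below are placeholders; operational values are obtained by
# regression against drifting-buoy temperatures.

def split_window_sst(t11, t12, a0=1.0, a1=2.5):
    """Return an SST estimate (K) from 11 um and 12 um brightness temperatures (K),
    assuming the pixel has already passed cloud clearing."""
    return a0 + t11 + a1 * (t11 - t12)

# Example: a moist scene where water vapour depresses T11 and T12.
t11, t12 = 291.2, 289.7
print("SST estimate (K):", round(split_window_sst(t11, t12), 2))
```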

8.3.7 Upper tropospheric humidity

The method used to extract values of upper tropospheric humidity (from geostationary satellite data) is based on the interpretation of the 6.7 µm water-vapour channel radiances, and the results represent a mean value throughout a deep layer in the atmosphere between approximately 600 and 300 hPa. The limits of this atmospheric column cannot be precisely specified since the contribution function of the water-vapour channel varies in altitude in proportion to the water-vapour content of the atmosphere. The output of segment processing provides a description of all identified surfaces (cloud, land or sea), and the upper tropospheric humidity product is derived only for segments not containing medium and high cloud. The horizontal resolution is that of the nominal segment, and values are expressed as percentage relative humidity. The product is extracted twice daily from METEOSAT (based on image data for 1100 and 2300 UTC) and is distributed over the GTS in the WMO SATOB code.

8.3.8 Total ozone

Solar ultraviolet light striking the atmosphere is partly absorbed and partly backscattered to space. Since ozone is the principal backscatterer, the SBUV radiometer, which measures backscattered ultraviolet, allows calculations of the global distribution and time variation of atmospheric ozone. Measurements in the ultraviolet band, 160 to 400 nm, are now of great interest as being indicative of possible climate changes. In addition to the SBUV, the total ozone mapping spectrometer (TOMS) instrument carried on board Nimbus-7 is a monochromator measuring radiation in six bands from 0.28 to 0.3125 µm. It has provided total ozone estimates within about 2 per cent of ground-based data for over a decade and has been one of the prime sources of data in monitoring the “ozone hole”.

Rather than measure at ultraviolet or visible wavelengths, a 9.7 µm ozone absorption band in the thermal infrared has allowed measurement of total ozone column density by using satellite-borne radiometers which either limb scan or scan subsatellite (for example, the TOVS instrument package on NOAA satellites includes a 9.7 µm channel). The accuracy of this type of satellite measurement compared to ground-based (for example, Dobson spectrophotometer) data is around 10 per cent, primarily because of the reliance upon only one channel (Ma, Smith and Woolf, 1984). It should be noted that the great advantage of the satellite data over ground-based data (ozone sondes or Dobson measurements) is the temporal and wide spatial coverage, making such data extremely important in monitoring global ozone depletion, especially over the polar regions, where conventional observation networks are very sparse.

During the 1990s, further specialized satellite instruments which measure ozone levels or other related upper atmospheric constituents began to come into service. These included several instruments on the NASA upper atmosphere research satellite (UARS); the polar ozone and aerosol measurement instrument (POAM II) on Spot-3, a remote-sensing satellite launched in 1993; the stratospheric aerosol and gas experiment 3 (SAGE III); and a range of instruments which were scheduled for launch on the Earth Observation System (EOS) polar orbiters in the late 1990s.

8.3.9 Volcanic ash detection

Volcanic ash clouds present a severe hazard to aviation. Since 1970 alone, there have been a large number of dangerous and costly incidents involving jet aircraft inadvertently flying through ash clouds ejected from volcanoes, especially in the Asian-Pacific region and the Pacific rim, where there are large numbers of active volcanoes. As a result of this problem, WMO, the International Civil Aviation Organization and other organizations have been working actively toward the provision of improved detection and warning systems and procedures so that the risk to passengers and aircraft might be minimized. The discrimination of volcanic ash clouds from normal (water/ice) clouds using single channel infrared or visible satellite imagery is often extremely difficult, if not impossible, primarily because ash clouds often appear in regions where cloudiness and thunderstorm activity are common and the two types of clouds look similar. However, techniques have been developed for utilizing the split window channel on the NOAA AVHRR instrument to aid in distinguishing ash clouds from normal clouds, and to improve the delineation of ash clouds which may not be visible on single channel infrared images. The technique involving AVHRR relies on the fact that the microphysical properties of ash clouds are different from those of water/ice clouds in the thermal infrared, so that over ash cloud the brightness temperature difference between channels 4 and 5 of the AVHRR instrument, T4–T5, is usually negative and up to about –10 K, whereas for water/ice clouds T4–T5 is close to zero or small and positive (Prata, 1989 and Potts, 1993). This principle of detection of volcanic ash clouds is currently being used in the development of multichannel radiometers which are ground- or aircraft-based. Very few studies have taken place with in situ observations of volcanic ash clouds in order to ascertain the quality and accuracy of volcanic ash cloud discrimination using AVHRR. Ground-based reports of volcanic eruptions tend to be used operationally to alert meteorologists to the fact that satellite imagery can then be used to monitor the subsequent evolution and movement of ash

clouds. It should be noted that the technique has its limitations, for example, in cases where the ash cloud may be dispersed and underlying radiation from water/ice clouds or sea/land surfaces may result in T4–T5 values being close to zero or positive, rather than negative as expected over volcanic ash cloud.
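The brightness temperature difference test can be sketched as follows. The threshold of –0.5 K is an assumption for illustration only; as noted above, dispersed ash or an underlying warm surface can push T4–T5 towards zero or positive values, so operational discrimination normally needs additional checks.

```python
import numpy as np

# Sketch of the split-window test for volcanic ash: over ash cloud T4 - T5
# tends to be negative, whereas for water/ice cloud it is near zero or positive.
# The -0.5 K threshold is an illustrative assumption.

def ash_mask(t4, t5, threshold=-0.5):
    """Boolean mask that is True where the pixel is flagged as possible ash."""
    return (np.asarray(t4) - np.asarray(t5)) < threshold

t4 = np.array([265.0, 250.0, 248.0])   # AVHRR channel 4 brightness temperatures (K)
t5 = np.array([264.5, 253.0, 252.5])   # AVHRR channel 5 brightness temperatures (K)
print(ash_mask(t4, t5))                # -> [False  True  True]
```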

8.3.10 Normalized difference vegetation indices

Satellite observations may be used to identify and monitor vegetation (Rao and others, 1990). Applications include crop monitoring, deforestation monitoring, forest management, drought assessment and flood monitoring. The technique relies on the fact that the reflectance of green vegetation is low at visible wavelengths but very high in the region from about 0.7 to 1.3 µm (owing to the interaction of the incident radiation with chlorophyll). However, the reflectance over surfaces such as soil or water remains low in the near-infrared and visible regions. Hence, satellite techniques for the assessment of vegetation generally use the difference in reflectivity between a visible channel and a near-infrared channel around 1 µm. As an example, the normalized difference vegetation index (NDVI) using AVHRR data, which is very widely used, is defined as: NDVI = (Ch2 – Ch1)/(Ch2 + Ch1) (8.24)

Values for this index are generally in the range of 0.1 to 0.6 over vegetation, with the higher values being associated with greater greenness and/or density of the plant canopy. By contrast, over clouds, snow, water or rock, NDVI is either very close to zero or negative. Satellite monitoring of vegetation was first used extensively around the mid-1970s. It has since been refined principally as a result of a gradual improvement in the theoretical understanding of the complex interaction between vegetation and incident radiation, and better knowledge of satellite instrument characteristics and corrections required for the satellite measurements. As with sea-surface temperature satellite measurements, the processing of satellite data for NDVIs involves many corrections – for geometry of satellite view and solar illumination, atmospheric effects such as aerosols and water vapour, instrument calibration characteristics, and so on. Also, at the outset, cloud clearing is carried out to obtain cloud-free pixels.
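A minimal sketch of equation 8.24 is given below; it assumes channel 1 and channel 2 data already calibrated to reflectances and cloud-cleared, and it omits the atmospheric and viewing-geometry corrections described above.

```python
import numpy as np

# Sketch of the normalized difference vegetation index (equation 8.24) from
# AVHRR channel 1 (visible) and channel 2 (near-infrared) reflectances.
# Assumes calibrated, cloud-cleared reflectances; atmospheric and viewing
# geometry corrections are omitted for brevity.

def ndvi(ch1, ch2):
    ch1 = np.asarray(ch1, dtype=float)
    ch2 = np.asarray(ch2, dtype=float)
    return (ch2 - ch1) / (ch2 + ch1)

# Dense vegetation, bare soil and water, respectively:
print(np.round(ndvi([0.05, 0.20, 0.07], [0.40, 0.25, 0.05]), 2))   # [ 0.78  0.11 -0.17]
```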


The three main instruments used in vegetation monitoring by satellite are the NOAA AVHRR, and the Landsat multispectral scanner and thematic mapper. Interpretation of NDVIs and application to various areas of meteorology or to Earth system science rely on an understanding of exactly what the satellite instrument is measuring, which is a complex problem. This is because within the field of view green leaves may be oriented at different angles, there may be different types of vegetation and there may be vegetation-free parts of the field of view. Nevertheless, NDVI correlates with ground-measured parameters, as illustrated in Figure 8.15 (Paltridge and Barber, 1988), which shows NDVI

(called V0) plotted against fuel moisture content derived from ground sampling of vegetation at various locations viewed by the NOAA AVHRR instrument. The graph shows that NDVI is well correlated with fuel moisture content, except beyond a critical value of fuel moisture content for which the vegetation is very green, and for which the NDVI remains at a constant level. Hence, NDVIs may be very useful in fire weather forecasting. Figure 8.16 (Malingreau, 1986) shows NDVI development over a three-year period, in (a) a rice field area of Thailand and (b) a wheat-rice cropping system in China. Peaks in NDVI correspond to dry season and wet season rice crops in the (a) graph

Figure 8.15. Full-cover, satellite-observed vegetation index as a function of fuel moisture content. Each point is a particular location average at a particular sampling time (see text). (Axes: vegetation index V0 against fuel moisture content (%), with flammability levels from very high to low; locations shown are Ararat, Lilydale, Yallourn and Loy Yang.)


and to wheat and rice crops in the (b) graph, respectively.

8.3.11 Other parameters

A number of other parameters are now being estimated from satellites, including various atmospheric trace gases, soil moisture (from synthetic aperture radar data (ERS-1)), integrated water vapour (SSM/I), cloud liquid water (SSM/I), distribution of flood waters, and the Earth’s radiation budget (ERBE) (on the NOAA polar orbiters). Atmospheric pressure has not yet been reliably measured from space. Atmospheric instability can be measured from temperature and humidity profiles.

Bush-fires have been successfully monitored using satellite instruments, especially the NOAA AVHRR (Robinson, 1991). Channel 3 (at the 3.7 µm window) is extremely sensitive to the presence of “hot spots”, namely, regions in which the brightness temperature may range from 400 up to about 1 000 K. It is sensitive because of the strong temperature sensitivity of the Planck function and the peaking of black-body radiance from hot objects at around 4 µm. Hot spots show up on channel 3 images extremely prominently, thereby allowing fire fronts to be accurately detected. In combination with channel 1 and 4 images, which may be used for the identification of smoke and cloud, respectively, channel 3 images are very useful in fire detection.

Snow and ice can be detected using instruments such as AVHRR (visible and infrared) or the SMMR (microwave) on Nimbus-7 (Gesell, 1989). With AVHRR, the detection process involves the discrimination between snow/ice and various surfaces such as land, sea or cloud. The variation with wavelength of the spectral characteristics of these surfaces is exploited by using algorithms incorporating techniques such as thresholds; ratios of radiances or reflectivities at different wavelengths; differences between radiances or reflectivities; or spatial coherence. The disadvantage of using AVHRR is that detection is limited by the presence of cloud; this is important because cloudiness may be very high in the areas of interest. At microwave wavelengths, sea-ice detection relies on the strong contrast between sea and ice, due to the widely differing emissivities (and hence brightness temperatures) of these surfaces at microwave wavelengths. The main advantage of microwave detection is the all-weather capability, although the spatial resolution is generally tens of kilometres compared to 1 km for AVHRR.
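A simple channel 3 “hot spot” test of the kind described above can be sketched as follows; both thresholds are illustrative assumptions, and operational fire products apply further tests (for example, using channels 1 and 4 for smoke, cloud and sun glint).

```python
import numpy as np

# Sketch of a simple AVHRR hot-spot (fire) test: channel 3 (3.7 um) is very
# sensitive to sub-pixel fires, so a pixel is flagged when its channel 3
# brightness temperature is high and well above channel 4 (10.8 um).
# Both thresholds are illustrative assumptions.

def hot_spot_mask(t3, t4, t3_min=320.0, t3_t4_min=15.0):
    t3 = np.asarray(t3)
    t4 = np.asarray(t4)
    return (t3 > t3_min) & ((t3 - t4) > t3_t4_min)

t3 = np.array([300.0, 335.0, 325.0])   # channel 3 brightness temperatures (K)
t4 = np.array([295.0, 300.0, 322.0])   # channel 4 brightness temperatures (K)
print(hot_spot_mask(t3, t4))           # -> [False  True False]
```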
Figure 8.16. NDVI development curves for irrigated rice in Thailand and wheat-rice in China: (a) NDVI development curve for irrigated rice in the Bangkok Plain (Thailand); (b) NDVI development curve for the wheat-rice cropping system of the Jiangsu Province (China). (NDVI plotted monthly from 1982 to 1985.)

8.4 Related facilities

8.4.1 Satellite telemetry

All satellites receive instructions and transmit data using telemetry facilities. However, all weather satellites in geostationary orbit and some in polar orbits have on-board transponders which receive data telemetered to them from data collection platforms (DCPs) at outstations. This facility allows the satellites to act as telemetering relay stations. The advantages offered by satellite telemetry are the following:
(a) Repeater stations are not required;
(b) The installation of outstations and receivers is simple;
(c) Outstations can be moved from site to site with ease;
(d) Outstations are unobtrusive; their antennas are small and do not require high masts;
(e) There is little restriction through topography;
(f) One receiver can receive data from outstations covering over a quarter of the Earth’s surface;
(g) Because power requirements are minimal, solar power is adequate;

(h) Equipment reliability is high, both on board the spacecraft and in the field;
(i) A frequency licence is not required by the user, the satellite operator being licensed;
(j) As many receivers as required can be operated, without the need to increase power or facilities at the outstations.

8.4.2 The Meteosat data collection platform telemetry system

Figure 8.17 illustrates the METEOSAT DCP telemetry system. It should be noted that similar systems are implemented on the GOES, GMS and INSAT satellites and are outlined in WMO (1989). The systems for other geostationary satellites are similar. The outstation (A) transmits its measurements to METEOSAT (B) along path 1 at set time intervals (hourly, three-hourly, daily, etc.). It has a 1 min time slot in which to transmit its data, on a frequency of between 402.01 MHz and 402.20 MHz at a power of 5 W (25 to 40 W for mobile outstations, with omnidirectional antenna). The satellite immediately retransmits these data to the European Space Operations Centre (ESOC) ground station (C), sited in the Odenwald near Michelstadt, Germany, along path 2 at a frequency of around 1 675 MHz. From here, the data are sent by landline to ESOC, some 40 km north-west of Odenwald in Darmstadt (D). Here they are quality controlled, archived and, where appropriate, distributed on the Global Telecommunications Network. They are also retained at the ground station and returned to METEOSAT (multiplexed with imagery data) from a second dish antenna (E), along path 3, for retransmission to users via the satellite along path 4.

Figure 8.17. The Meteosat DCP telemetry system

The signal level is such that it can be received by a 2 m diameter dish antenna, although 1.5 m is often adequate. The dish houses a “down converter”, used to convert the incoming signal from 1 694.5 MHz to 137 MHz for input to a receiver, which decodes the transmissions, outputting the data in ASCII characters to a printer or personal computer.

The unit which forms the heart of an outstation is the DCP. This is an electronic unit, similar in many ways to a logger, which can accept either several analogue voltage inputs directly from sensors, or serial data (RS-232) from a processing unit between the sensors and the DCP. It also contains a small memory to store readings taken between transmissions, a processor section for overall management, a clock circuit, the radio transmitter, and either a directional or omnidirectional antenna. Up to 600 bytes can be stored in the memory for transmission at 100 bits per second. This capacity can be doubled, but this requires two 1 min time slots for transmission. The capacity is set by the amount of data that can be transmitted in a 1 min time slot. When manufactured, DCPs are programmed with their address (an 8-digit octal number) and with their time of transmission, both specified by EUMETSAT. In future designs, these are likely to be programmable by the user, to provide greater flexibility. In operation, the DCP’s internal clock is set to GMT by an operator, either with a “synchronizer unit” or with a portable personal computer. Up to a 15 s drift is permitted either way; thereafter it must be reset. At its appointed times, the DCP transmits the accumulated contents of its memory to METEOSAT, and thereafter clears it, ready to receive the next set of data for transmission at the next time slot. This operation is repeated indefinitely. The synchronizer (or personal computer) can also be used to give the station a name (e.g. its location) and to carry out a range of tests which include checking the clock setting, battery voltage, transmitter state, analogue inputs and the memory contents. It is also possible to speed up the clock to test overall performance, including the making of a test transmission (into a dummy load to prevent interference by transmitting outside the allocated time slot).


A DCP will fit into a small housing and can be powered by a solar-charged battery. The remainder of the outstation comprises the sensors, which are similar to those at a conventional logging station or at a ground-based radio telemetry installation.

8.4.3 Meteosat data handling

Images

The images are built up, line by line, by a multispectral radiometer (see previous sections). METEOSAT spins on its axis at 100 revolutions per minute, scanning the Earth in horizontal lines from east to west. A mirror makes a small step from south to north at each rotation, building up a complete scan of the Earth in 25 min (including 5 min for resetting the mirror for the next scan). The visible image is formed of 5 000 lines, each of 5 000 pixels, giving a resolution of 2.5 km immediately beneath the satellite (the resolution is less at higher latitudes). The two infrared images each comprise 2 500 lines of 2 500 picture elements, giving a subsatellite resolution of 5 km. The images are transmitted digitally, line by line, at 330 000 bits per second, while the scanner is looking at space. These transmissions are not meant for the end-user and go directly to the ground station, where they are processed by ESOC and subsequently disseminated to users, back via METEOSAT, on two separate channels. The first channel is for high-quality digital image data for reception by a primary data user station. The second channel transmits the images in the analogue form known as weather facsimile (WEFAX), a standard used by most meteorological satellites (including polar orbiters). These can be received by secondary data user stations. Secondary data user stations receive images covering different sections of the Earth’s surface in the METEOSAT field of view. Transmissions follow a daily schedule, one image being transmitted every 4 min. These stations also receive the DCP transmissions.

DCP data handling

In addition to acquiring and disseminating the images, METEOSAT also currently has 66 channels for relaying DCP data from outstations to the ground station. Of these, half are reserved for international use, that is for mobile DCPs passing from the field of view of one geostationary meteorological satellite into that of the next. The remainder are for fixed “regional” DCPs. Each channel can accommodate as many DCPs as its frequency of reporting and their report lengths permit. Thus, with three-hourly reporting times and 1 min messages from all DCPs, and with a 30 s buffer period between each (to allow for clock drift), each channel could accommodate 120 DCPs, making a total of 7 920.
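The channel-capacity figures quoted above follow directly from the time-slot arithmetic, as the short sketch below confirms.

```python
# Worked check of the DCP channel capacity quoted above: with three-hourly
# reporting, a 1 min transmission per DCP and a 30 s guard (buffer) period,
# each channel serves 180 min / 1.5 min = 120 DCPs, and 66 channels give
# 66 * 120 = 7 920 DCPs in total.
# (Similarly, 100 bit/s for a 60 s slot is 750 bytes, consistent with the
# roughly 600 byte message store plus framing overhead.)

reporting_interval_min = 3 * 60     # three-hourly reports
slot_min = 1.0                      # 1 min message
buffer_min = 0.5                    # 30 s guard for clock drift
channels = 66

dcps_per_channel = int(reporting_interval_min // (slot_min + buffer_min))
print(dcps_per_channel, channels * dcps_per_channel)   # -> 120 7920
```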

8.4.4 Polar-orbiting satellite telemetry systems

Polar satellites have low orbits in the north/south direction with a period of about 100 min. Consequently, they do not appear stationary at one point in the sky. Instead, they appear over the horizon, pass across the sky (not necessarily directly overhead) and set at the opposite horizon. They are visible for about 10 min at each pass, but this varies depending on the angle at which they are visible. Such orbits dictate that a different mode of operation is necessary for a telemetry system using them. Unlike geostationary systems, the DCPs used with polar-orbiting satellites (called data collection systems – DCSs) cannot transmit at set times, nor can their antennas be directed at one point in the sky. Instead, the DCSs are given set intervals at which to transmit, ranging from 100 to 200 s. They use a similar, but not identical, frequency to DCPs, and their antennas are, necessarily, omnidirectional. Each outstation is given a slightly different transmission interval so as to reduce the chances of coincidental transmissions from two stations. Further separation of outstations is achieved by the fact that, owing to the satellite’s motion, a Doppler shift in received frequency occurs. This is different for each DCS because it occupies a different location relative to the satellite. This last feature is also used to enable the position of moving outstations to be followed. This is one of the useful features of polar orbits, and can enable, for example, a drifting buoy to be both tracked and its data collected. Furthermore, the buoy can move completely around the world and still be followed by the same satellite. This is the basis of the Argos system which operates on the NOAA satellites and is managed by France. Even fixed DCSs can make use of the feature, in that it enables data to be


collected from any point on Earth via the one satellite. The transmissions from DCSs are received by the satellite at some point in its overpass. The means of transferring the received data to the user has to be different from that adopted for METEOSAT. They follow two routes. In the first route, the received data are immediately retransmitted, in real time, in the ultra high frequency range, and can be received by a user’s receiver on an omnidirectional antenna. To ensure communication, both receiver and outstation must be within a range of not more than about 2 000 km of each other, since both must be able to see the satellite at the same time. In the second route, the received data are recorded on a magnetic tape logger on board the spacecraft and retransmitted to ground stations as the satellite

passes over. These stations are located in the United States and France (Argos system). From here, the data are put onto the GTS or sent as a printout by post if there is less urgency. The cost of using the polar satellites is not small, and, while they have some unique advantages over geostationary systems, they are of less general purpose use as telemetry satellites. Their greatest value is that they can collect data from high latitudes, beyond the reach of geostationary satellites. They can also be of value in those areas of the world not currently covered by geostationary satellites. For example, the Japanese GMS satellite does not currently provide a retransmission facility, and users can receive data only via the GTS. Until such a time as all of the Earth’s surface is covered by geostationary satellites with retransmission facilities, polar orbiting satellites will usefully fill the gap.
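The Doppler shift that separates and locates DCS transmitters can be illustrated with a brief sketch; the 401.65 MHz carrier frequency and the roughly ±7 km s–1 radial velocity range are assumed values typical of an Argos-type system, not figures taken from this Guide.

```python
# Sketch of the Doppler shift seen from a polar orbiter for a DCS transmitter:
#   delta_f = f0 * v_radial / c
# The 401.65 MHz carrier is an assumed, illustrative value for an Argos-type
# uplink; the radial velocity swings through zero at closest approach.

C = 2.998e8          # speed of light (m/s)
F0 = 401.65e6        # assumed uplink carrier frequency (Hz)

def doppler_shift(radial_velocity_ms):
    """Received-frequency offset (Hz) for a given radial velocity (m/s)."""
    return F0 * radial_velocity_ms / C

for v in (7000.0, 0.0, -7000.0):
    print(f"{v:8.0f} m/s -> {doppler_shift(v) / 1e3:6.2f} kHz")
```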


ANNEX 8.A

ADVANCED VERY HIGH RESOLUTION RADIOMETER CHANNELS

Channel | Wavelength (µm) | Primary uses
1 | 0.58–0.68 | Daytime cloud surface mapping
2 | 0.725–1.10 | Surface water, ice, snowmelt
3 | 3.55–3.93 | Sea-surface temperature, night-time cloud mapping
4 | 10.30–11.30 | Sea-surface temperature, day and night cloud mapping
5 | 11.50–12.50 | Sea-surface temperature, day and night cloud mapping

Nadir resolution: 1.1 km; swath width: > 2 600 km.


ANNEX 8.B

HIRS CHANNELS AND THEIR APPLICATIONS

Television infrared observation satellite operational vertical sounder (TOVS): high resolution infrared sounder (HIRS) channels

Channel | Central wavelength (µm) | Primary uses
1 | 15.00 | Temperature sounding
2 | 14.70 | Temperature sounding
3 | 14.50 | Temperature sounding
4 | 14.20 | Temperature sounding
5 | 14.00 | Temperature sounding
6 | 13.70 | Temperature sounding
7 | 13.40 | Temperature sounding
8 | 11.10 | Surface temperature and cloud detection
9 | 9.70 | Total ozone
10 | 8.30 | Water vapour sounding
11 | 7.30 | Water vapour sounding
12 | 6.70 | Water vapour sounding
13 | 4.57 | Temperature sounding
14 | 4.52 | Temperature sounding
15 | 4.46 | Temperature sounding
16 | 4.40 | Temperature sounding
17 | 4.24 | Temperature sounding
18 | 4.00 | Surface temperature
19 | 3.70 | Surface temperature
20 | 0.70 | Cloud detection

Microwave sounding unit channels

Channel | Central frequency (GHz) | Primary uses
1 | 50.31 | Surface emissivity and cloud attenuation
2 | 53.73 | Temperature sounding
3 | 54.96 | Temperature sounding
4 | 57.95 | Temperature sounding

Stratospheric sounding unit channels

Three 15 µm channels for temperature sounding.


REFERENCES AND FURTHER READING

Alishouse, J.C. and E.P. McClain, 1985: Sea surface temperature determinations. Advances in Geophysics, Volume 27, pp. 279–296.
Cooperative Institute for Meteorological Satellite Studies, 1991: Technical Proceedings of the Sixth International TOVS Study Conference. University of Wisconsin (see also proceedings of previous conferences, 1989, 1988).
Eyre, J.R., J.L. Brownscombe and R.J. Allam, 1984: Detection of fog at night using advanced very high resolution radiometer (AVHRR) imagery. Meteorological Magazine, Volume 113, pp. 266–271.
Gesell, G., 1989: An algorithm for snow and ice detection using AVHRR data. International Journal of Remote Sensing, Volume 10, pp. 897–905.
King-Hele, D., 1964: Theory of Satellite Orbits in an Atmosphere. Butterworths, London.
Ma, X.L., W.L. Smith and H.M. Woolf, 1984: Total ozone from NOAA satellites: A physical model for obtaining measurements with high spatial resolution. Journal of Climate and Applied Meteorology, Volume 23, pp. 1309–1314.
Malingreau, J.P., 1986: Global vegetation dynamics: Satellite observations over Asia. International Journal of Remote Sensing, Volume 7, pp. 1121–1146.
Massey, H., 1964: Space Physics. Cambridge University Press, London.
McClain, E.P., W.G. Pichel and C.C. Walton, 1985: Comparative performance of AVHRR-based multichannel sea surface temperatures. Journal of Geophysical Research, Volume 90, pp. 11587–11601.
Paltridge, G.W. and J. Barber, 1988: Monitoring grassland dryness and fire potential in Australia with NOAA/AVHRR data. Remote Sensing of Environment, Volume 25, pp. 381–394.
Potts, R.J., 1993: Satellite observations of Mt Pinatubo ash clouds. Australian Meteorological Magazine, Volume 42, pp. 59–68.
Prata, A.J., 1989: Observations of volcanic ash clouds in the 10–12 micron window using AVHRR/2 data. International Journal of Remote Sensing, Volume 10, pp. 751–761.
Prata, A.J., R.P. Cechet, I.J. Barton and D.T. Llewellyn-Jones, 1990: The along-track scanning radiometer for ERS-1: Scan geometry and data simulation. IEEE Transactions on Geoscience and Remote Sensing, Volume 28, pp. 3–13.
Rao, P.K., S.J. Holmes, R.K. Anderson, J.S. Winston and P.E. Lehr, 1990: Weather Satellites: Systems, Data, and Environmental Applications. American Meteorological Society, Boston.
Robinson, J.M., 1991: Fire from space: Global fire evaluation using infrared remote-sensing. International Journal of Remote Sensing, Volume 12, pp. 3–24.
Saunders, R.W. and K.T. Kriebel, 1988: An improved method for detecting clear sky and cloudy radiances from AVHRR data. International Journal of Remote Sensing, Volume 9, pp. 123–150.
Smith, W.L., 1985: Satellites. In D.D. Houghton (ed.): Handbook of Applied Meteorology, Wiley, New York, pp. 380–472.
Smith, W.L. and C.M.R. Platt, 1978: Comparison of satellite-deduced cloud heights with indications from radiosonde and ground-based laser measurements. Journal of Applied Meteorology, Volume 17, pp. 1796–1802.
World Meteorological Organization, 1989: Guide on the Global Observing System. WMO-No. 488, Geneva.
World Meteorological Organization, 1994a: Information on Meteorological and Other Environmental Satellites. Third edition, WMO-No. 411, Geneva.
World Meteorological Organization, 1994b: Application of Satellite Technology: Annual Progress Report 1993. WMO Satellite Report No. SAT-12, WMO/TD-No. 628, Geneva.
World Meteorological Organization, 2003: Manual on the Global Observing System. WMO-No. 544, Geneva.

CHAPTER 9

RADAR MEASUREMENTS

9.1 General

This chapter is an elementary discussion of meteorological microwave radars – the weather radar – used mostly to observe hydrometeors in the atmosphere. It places particular emphasis on the technical and operational characteristics that must be considered when planning, developing and operating radars and radar networks in support of Meteorological and Hydrological Services. It is supported by a substantial list of references. It also briefly mentions the high frequency radar systems used for observation of the ocean surface. Radars used for vertical profiles are discussed in Part II, Chapter 5.

9.1.1 The weather radar

Meteorological radars are capable of detecting precipitation and variations in the refractive index in the atmosphere which may be generated by local variations in temperature or humidity. Radar echoes may also be produced from airplanes, dust, birds or insects. This chapter deals with radars in common operational usage around the world. The meteorological radars having characteristics best suited for atmospheric observation and investigation transmit electromagnetic pulses in the 3–10 GHz frequency range (wavelengths of 10 to 3 cm, respectively). They are designed for detecting and mapping areas of precipitation, measuring their intensity and motion, and perhaps their type. Higher frequencies are used to detect smaller hydrometeors, such as cloud or even fog droplets. Although this has valuable applications in cloud physics research, these frequencies are generally not used in operational forecasting because of excessive attenuation of the radar signal by the intervening medium. At lower frequencies, radars are capable of detecting variations in the refractive index of clear air, and they are used for wind profiling. Although they may detect precipitation, their scanning capabilities are limited by the size of the antenna required to achieve effective resolution.

The returned signal from the transmitted pulse encountering a weather target, called an echo, has an amplitude, a phase and a polarization. Most operational radars worldwide are still limited to analysis of the amplitude feature that is related to the size distribution and numbers of particles in the (pulse) volume illuminated by the radar beam. The amplitude is used to determine a parameter called the reflectivity factor (Z) to estimate the mass of precipitation per unit volume or the intensity of precipitation through the use of empirical relations. A primary application is thus to detect, map and estimate the precipitation at ground level instantaneously, nearly continuously and over large areas. Some research radars have used reflectivity factors measured at two polarizations of the transmitted and received waveform. Research continues to determine the value and potential of polarization systems for precipitation measurement and target state, but operational systems do not exist at present.

Doppler radars have the capability of determining the phase difference between the transmitted and received pulse. The difference is a measure of the mean Doppler velocity of the particles — the reflectivity weighted average of the radial components of the displacement velocities of the hydrometeors in the pulse volume. The Doppler spectrum width is a measurement of the spatial variability of the velocities and provides some indication of the wind shear and turbulence. Doppler radars offer a significant new dimension to weather radar observation and most new systems have this capability. Modern weather radars should have characteristics optimized to produce the best data for operational requirements, and should be adequately installed, operated and maintained to utilize the capability of the system to the meteorologists’ advantage.
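The empirical relations mentioned above are usually of the form Z = a R^b. The sketch below uses the widely quoted Marshall–Palmer coefficients (a = 200, b = 1.6) purely as an example; operational coefficients vary with climate and precipitation type and are not prescribed here.

```python
# Sketch of converting a reflectivity factor to a rain rate with an empirical
# Z-R relation, Z = a * R**b. The Marshall-Palmer coefficients a = 200, b = 1.6
# are used only as a common example; operational coefficients vary with climate
# and precipitation type.

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Return rain rate R (mm/h) from reflectivity in dBZ."""
    z = 10.0 ** (dbz / 10.0)        # dBZ -> Z in mm6 m-3
    return (z / a) ** (1.0 / b)

for dbz in (20, 30, 40, 50):
    print(dbz, "dBZ ->", round(rain_rate_from_dbz(dbz), 1), "mm/h")
```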

9.1.2 Radar characteristics, terms and units

The selection of the radar characteristics, and consideration of the climate and the application, are important for determining the acceptable accuracy of measurements for precipitation estimation (Tables 9.1, 9.2 and 9.3).

9.1.3 Meteorological applications

Radar observations have been found most useful for the following:
(a) Severe weather detection, tracking and warning;
(b) Surveillance of synoptic and mesoscale weather systems;
(c) Estimation of precipitation amounts.
The radar characteristics of any one radar will not be ideal for all applications. The selection criteria of a radar system are usually optimized to meet several applications, but they can also be specified to best meet a specific application of major importance. The choices of wavelength, beamwidth, pulse length, and pulse repetition frequencies (PRFs) have particular consequences. Users should therefore carefully consider the applications and climatology before determining the radar specifications.

Severe weather detection and warning

A radar is the only realistic surface-based means of monitoring severe weather over a wide area. Radar echo intensities, area and patterns can be used to identify areas of severe weather, including thunderstorms with probable hail and damaging winds. Doppler radars that can identify and provide a measurement of intense winds associated with gust fronts, downbursts and tornadoes add a new dimension. The nominal range of coverage is about 200 km, which is sufficient for local short-range forecasting and warning. Radar networks are used to extend the coverage (Browning and others, 1982). Effective warnings require effective interpretation performed by alert and well-trained personnel.

Surveillance of synoptic and mesoscale systems

Radars can provide a nearly continuous monitoring of weather related to synoptic and mesoscale storms over a large area (say a range of 220 km, area 125 000 km2) if unimpeded by hills. Owing to ground clutter at short ranges and the Earth’s curvature, the maximum practical range for weather observation is about 200 km. Over large water areas, other means of observation are often not available or possible. Networks can extend the coverage and may be cost effective. Radars provide a good description of precipitation. Narrower beamwidths provide better resolution of patterns and greater effectiveness at longer ranges. In regions where very heavy and extensive precipitation is common, a 10 cm wavelength is needed for good precipitation measurements. In other areas, such as mid-latitudes, 5 cm radars may be effective at much lower cost. The 3 cm wavelength suffers from too much attenuation in precipitation to be very effective, except for very light rain or snow conditions. Development work is beginning on the concept of dense networks of 3 cm radars with polarimetric capabilities that could overcome the attenuation problem of stand-alone 3 cm radars.

Precipitation estimation

Radars have a long history of use in estimating the intensity and thereby the amount and distribution of precipitation with a good resolution in time and space. Most studies have been associated with rainfall, but snow measurements can also be taken with appropriate allowances for target composition. Readers should consult reviews by Joss and Waldvogel (1990), and Smith (1990) for a comprehensive discussion of the state of the art, the techniques, the problems and pitfalls, and the effectiveness and accuracy. Ground-level precipitation estimates from typical radar systems are made for areas of typically 2 km2, successively for 5–10 minute periods using low elevation plan position indicator scans with beamwidths of 1°. The radar estimates have been found to compare with spot precipitation gauge measurements within a factor of two. Gauge and radar measurements are both estimates of a continually varying parameter. The gauge samples an extremely small area (100 cm2, 200 cm2), while the radar integrates over a volume, on a much larger scale. The comparability may be enhanced by adjusting the radar estimates with gauge measurements.

Table 9.1. Radar frequency bands

Radar band | Frequency | Wavelength | Nominal
UHF | 300–1 000 MHz | 1–0.3 m | 70 cm
L | 1 000–2 000 MHz | 0.3–0.15 m | 20 cm
S(a) | 2 000–4 000 MHz | 15–7.5 cm | 10 cm
C(a) | 4 000–8 000 MHz | 7.5–3.75 cm | 5 cm
X(a) | 8 000–12 500 MHz | 3.75–2.4 cm | 3 cm
Ku | 12.5–18 GHz | 2.4–1.66 cm | 1.50 cm
K | 18–26.5 GHz | 1.66–1.13 cm | 1.25 cm
Ka | 26.5–40 GHz | 1.13–0.75 cm | 0.86 cm
W | 94 GHz | 0.30 cm | 0.30 cm

(a) Most common weather radar bands.

Table 9.2. Some meteorological radar parameters and units

Symbol | Parameter | Units
Ze | Equivalent or effective radar reflectivity | mm6 m–3 or dBZ
Vr | Mean radial velocity | m s–1
σv | Spectrum width | m s–1
Zdr | Differential reflectivity | dB
CDR | Circular depolarization ratio | dB
LDR | Linear depolarization ratio | dB
kdp | Propagation phase | degree km–1
ρ | Correlation coefficient |

Table 9.3. Physical radar parameters and units

Symbol | Parameter | Units
c | Speed of light | m s–1
f | Transmitted frequency | Hz
fd | Doppler frequency shift | Hz
Pr | Received power | mW or dBm
Pt | Transmitted power | kW
PRF | Pulse repetition frequency | Hz
T | Pulse repetition time (=1/PRF) | ms
Ω | Antenna rotation rate | degree s–1 or rpm
λ | Transmitted wavelength | cm
 | Azimuth angle | degree
θ | Beamwidth between half power points | degree
τ | Pulse width | µs
γ | Elevation angle | degree
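The nominal wavelengths in Table 9.1 follow from λ = c/f, as the short sketch below illustrates; the mid-band frequencies used are representative values chosen only for the example.

```python
# Quick check of the nominal wavelengths in Table 9.1 using lambda = c / f.
# The band frequencies below are representative mid-band values, chosen for
# illustration only.

C = 2.998e8   # speed of light (m/s)

representative = {"S": 3.0e9, "C": 5.6e9, "X": 9.4e9}   # Hz
for band, f in representative.items():
    print(f"{band} band: {C / f * 100:.1f} cm")   # approx. 10, 5 and 3 cm
```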

9.1.4 Meteorological products

A radar can be made to provide a variety of meteorological products to support various applications. The products that can be generated by a weather radar depend on the type of radar, its signal processing characteristics, and the associated radar control and analysis system. Most modern radars automatically perform a volume scan consisting of a number of full azimuth rotations of the antenna at several elevation angles. All raw polar data are stored in a three-dimensional array, commonly called the volume database, which serves as the data source for further data processing and archiving. By means of application software, a wide variety of meteorological products is generated and displayed as images on a high-resolution colour display monitor. Grid or pixel values and conversion to x-y coordinates are computed using three-dimensional interpolation techniques. For a typical Doppler weather radar, the displayed variables are reflectivity, rainfall rate, radial velocity and spectrum width. Each image pixel represents the colour-coded value of a selected variable. The following is a list of the measurements and products generated, most of which are discussed in this chapter:
(a) The plan position indicator: A polar format display of a variable, obtained from a single full antenna rotation at one selected elevation. It is the classic radar display, used primarily for weather surveillance;
(b) The range height indicator: A display of a variable obtained from a single elevation sweep, typically from 0 to 90°, at one azimuth. It is also a classic radar display that shows detailed cross-section structures and it is used for identifying severe storms, hail and the bright band;
(c) The constant altitude plan position indicator (CAPPI): A horizontal cross-section display of a variable at a specified altitude, produced by interpolation from the volume data. It is used for surveillance and for identification of severe storms. It is also useful for monitoring the weather at specific flight levels for air traffic applications. The “no data” regions as seen in the CAPPI (close to and away from the radar with reference to the selected altitude) are filled with the data from the highest and lowest elevation, respectively, in another form of CAPPI, called “Pseudo CAPPI”;
(d) Vertical cross-section: A display of a variable above a user-defined surface vector (not necessarily through the radar). It is produced by interpolation from the volume data;
(e) The column maximum: A display, in plan, of the maximum value of a variable above each point of the area being observed;
(f) Echo tops: A display, in plan, of the height of the highest occurrence of a selectable reflectivity contour, obtained by searching in the volume data. It is an indicator of severe weather and hail;
(g) Vertically integrated liquid: An indicator of the intensity of severe storms. It can be displayed, in plan, for any specified layer of the atmosphere.
In addition to these standard or basic displays, other products can be generated to meet the particular requirements of users for purposes such as hydrology, nowcasting (see section 9.10) or aviation:
(a) Precipitation-accumulation: An estimate of the precipitation accumulated over time at each point in the area observed;
(b) Precipitation subcatchment totals: Area-integrated accumulated precipitation;
(c) Velocity azimuth display (VAD): An estimate of the vertical profile of wind above the radar. It is computed from a single antenna rotation at a fixed elevation angle;
(d) Velocity volume processing, which uses three-dimensional volume data;
(e) Storm tracking: A product from complex software to determine the tracks of storm cells and to predict future locations of storm centroids;
(f) Wind shear: An estimate of the radial and tangential wind shear at a height specified by the user;
(g) Divergence profile: An estimate of divergence computed from the radial velocity data, given some assumptions, from which the divergence profile is obtained;
(h) Mesocyclone: A product from sophisticated pattern recognition software that identifies rotation signatures within the three-dimensional base velocity data that are on the scale of the parent mesocyclonic circulation often associated with tornadoes;
(i) Tornadic vortex signature: A product from sophisticated pattern recognition software that identifies gate-to-gate shear signatures within the three-dimensional base velocity data that are on the scale of tornadic vortex circulations.

9.1.5 Radar accuracy requirements

The accuracy requirements depend on the most important applications of the radar observations. Appropriately installed, calibrated and maintained modern radars are relatively stable and do not produce significant measurement errors. External factors, such as ground clutter effects, anomalous propagation, attenuation and propagation effects, beam effects, target composition, particularly with variations and changes in the vertical, and rain rate-reflectivity relationship inadequacies, contribute most to the inaccuracy. By considering only errors attributable to the radar system, the measurable radar parameters can be determined with an acceptable accuracy (Table 9.4).

Table 9.4. Accuracy requirements

Parameter | Definition | Acceptable accuracy(a)
 | Azimuth angle | 0.1°
γ | Elevation angle | 0.1°
Vr | Mean Doppler velocity | 1.0 m s–1
Z | Reflectivity factor | 1 dBZ
σv | Doppler spectrum width | 1 m s–1

(a) These figures are relative to a normal Gaussian spectrum with a standard deviation smaller than 4 m s–1. Velocity accuracy deteriorates when the spectrum width grows, while reflectivity accuracy improves.

9.2 Radar technology

9.2.1 Principles of radar measurement

The principles of radar and the observation of weather phenomena were established in the 1940s. Since that time, great strides have been made in improving equipment, signal and data processing and its interpretation. The interested reader should consult some of the relevant texts for greater detail. Good references include Skolnik (1970) for engineering and equipment aspects; Battan (1981) for meteorological phenomena and applications; Atlas (1964; 1990), Sauvageot (1982) and WMO (1985) for a general review; Rinehart (1991) for modern techniques; and Doviak and zrnic (1993) for Doppler radar principles and applications. A brief summary of the principles follows. Most meteorological radars are pulsed radars. Electromagnetic waves at fixed preferred frequencies are transmitted from a directional antenna into the atmosphere in a rapid succession of short pulses. Figure 9.1 shows a directional radar antenna emitting a pulsed-shaped beam of electromagnetic
energy over the Earth’s curved surface and illuminating a portion of a meteorological target. Many of the physical limitations and constraints of the observation technique are immediately apparent from the figure. For example, there is a limit to the minimum altitude that can be observed at far ranges due to the curvature of the Earth. A parabolic reflector in the antenna system concentrates the electromagnetic energy in a conical-shaped beam that is highly directional. The width of the beam increases with range, for example, a nominal 1° beam spreads to 0.9, 1.7 and 3.5 km at ranges of 50, 100, and 200 km, respectively. The short bursts of electromagnetic energy are absorbed and scattered by any meteorological targets encountered. Some of the scattered energy is reflected back to the radar antenna and receiver. Since the electromagnetic wave travels with the speed of light (that is, 2.99 × 108 m s–1), by measuring the time between the transmission of the pulse and its return, the range of the target is determined. Between successive pulses, the receiver listens for any return of the wave. The return signal from the target is commonly referred to as the radar echo. The strength of the signal reflected back to the radar receiver is a function of the concentration, size and water phase of the precipitation particles that make up the target. The power return, Pr, therefore provides a measure of the characteristics of the meteorological target and is, but not uniquely, related to a precipitation rate depending on the form of precipitation. The “radar range equation”

relates the power return from the target to the radar characteristics and parameters of the target. The power measurements are determined by the total power backscattered by the target within a volume being sampled at any one instant — the pulse volume (i.e. sample volume). The pulse volume dimensions are dependent on the radar pulse length in space (h) and the antenna beam widths in the vertical (φb) and the horizontal (θb). The beam width, and therefore the pulse volume, increases with range. Since the power that arrives back at the radar is involved in a two-way path, the pulse-volume length is only one half pulse length in space (h/2) and is invariant with range. The location of the pulse volume in space is determined by the position of the antenna in azimuth and elevation and the range to the target. The range (r) is determined by the time required for the pulse to travel to the target and to be reflected back to the radar.

Particles within the pulse volume are continuously shuffling relative to one another. This results in phase effects in the scattered signal and in intensity fluctuations about the mean target intensity. Little significance can be attached to a single echo intensity measurement from a weather target. At least 25 to 30 pulses must be integrated to obtain a reasonable estimation of mean intensity (Smith, 1995). This is normally carried out electronically in an integrator circuit. Further averaging of pulses in range, azimuth and time is often conducted to increase the sampling size and accuracy of the estimate. It follows that the space resolution is coarser.

9.2.2 The radar equation for precipitation targets

Meteorological targets consist of a volume of more or less spherical particles composed entirely of ice and/or water and randomly distributed in space. The power backscattered from the target volume is dependent on the number, size, composition, relative position, shape and orientation of the scattering particles. The total power backscattered is the sum of the power backscattered by each of the scattering particles. Using this target model and electromagnetic theory, Probert-Jones (1962) developed an equation relating the echo power received by the radar to the parameters of the radar and the targets’ range and scattering characteristics. It is generally accepted as being a reliable relationship to provide quantitative reflectivity measurements with good accuracy, bearing in mind the generally realistic assumptions made in the derivation:

Figure 9.1. Propagation of electromagnetic waves through the atmosphere for a pulse weather radar; ha is the height of the antenna above the Earth's surface, R is the range, h is the length of the pulse, h/2 is the sample volume depth and H is the height of the pulse above the Earth's surface

P_r = \frac{\pi^3}{1024 \ln 2} \cdot \frac{P_t h G^2 \theta_b \phi_b}{\lambda^2} \cdot \frac{|K|^2 10^{-18} Z}{r^2}        (9.1)

where Pr is the power received back at the radar, averaged over several pulses, in watts; Pt is the peak power of the pulse transmitted by the radar in watts; h is the pulse length in space, in metres (h = cτ/2 where c is the speed of light and τ is the pulse duration); G is the gain of the antenna over an isotropic radiator; θb and φb are the horizontal and vertical beamwidths, respectively, of the antenna radiation pattern at the –3 dB level of one-way transmission, in radians; λ is the wavelength of the transmitted wave, in metres; |K|2 is the refractive index factor of the target; r is the slant range from the radar to the target, in metres; and Z is the radar reflectivity factor (usually taken as the equivalent reflectivity factor Ze when the target characteristics are not well known), in mm6 m–3.

The second term in the equation contains the radar parameters, and the third term the parameters depending on the range and characteristics of the target. The radar parameters, except for the transmitted power, are relatively fixed, and, if the transmitter is operated and maintained at a constant output (as it should be), the equation can be simplified to:

P_r = \frac{C |K|^2 Z}{r^2}        (9.2)

where C is the radar constant.

There are a number of basic assumptions inherent in the development of the equation which have varying importance in the application and interpretation of the results. Although they are reasonably realistic, the conditions are not always met exactly and, under particular conditions, will affect the measurements (Aoyagi and Kodaira, 1995). These assumptions are summarized as follows:
(a) The scattering precipitation particles in the target volume are homogeneous dielectric spheres whose diameters are small compared to the wavelength, that is D < 0.06 λ for strict application of Rayleigh scattering approximations;
(b) The pulse volume is completely filled with randomly scattered precipitation particles;
(c) The reflectivity factor Z is uniform throughout the sampled pulse volume and constant during the sampling interval;
(d) The particles are all water drops or all ice particles, that is, all particles have the same refractive index factor |K|2 and the power scattering by the particles is isotropic;
(e) Multiple scattering (among particles) is negligible;
(f) There is no attenuation in the intervening medium between the radar and the target volume;
(g) The incident and backscattered waves are linearly co-polarized;
(h) The main lobe of the antenna radiation pattern is Gaussian in shape;
(i) The antenna is a parabolic reflector type of circular cross-section;
(j) The gain of the antenna is known or can be calculated with sufficient accuracy;
(k) The contribution of the side lobes to the received power is negligible;
(l) Blockage of the transmitted signal by ground clutter in the beam is negligible;
(m) The peak power transmitted (Pt) is the actual power transmitted at the antenna, that is, all wave guide losses, and so on, and attenuation in the radar dome, are considered;
(n) The average power measured (Pr) is averaged over a sufficient number of pulses or independent samples to be representative of the average over the target pulse volume.
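Purely as an illustration of how equations 9.1 and 9.2 behave, the following minimal Python sketch evaluates the simplified form. The radar constant C and the reflectivity factor used below are arbitrary illustrative values, not values prescribed by this Guide.

```python
# Sketch of the simplified radar range equation (equation 9.2):
#   Pr = C * |K|^2 * Z / r^2
K2_WATER = 0.93        # refractive index factor |K|^2 for liquid water
C_RADAR = 1.0e-6       # illustrative radar constant (it bundles Pt, G, beamwidths, pulse length and wavelength)

def received_power(z_lin, range_m, k2=K2_WATER, c=C_RADAR):
    """Mean received power (arbitrary units) for a reflectivity factor z_lin (mm^6 m^-3)
    at slant range range_m (m)."""
    return c * k2 * z_lin / range_m ** 2

if __name__ == "__main__":
    z = 1.0e4                                   # a moderate reflectivity factor, mm^6 m^-3
    for r_km in (50, 100, 200):
        pr = received_power(z, r_km * 1000.0)
        # Doubling the range quarters the received power for the same target.
        print(f"r = {r_km:3d} km -> Pr = {pr:.3e}")
```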

This simplified expression relates the echo power measured by the radar to the radar reflectivity factor Z, which is in turn related to the rainfall rate. These factors and their relationship are crucial for interpreting the intensity of the target and estimating precipitation amounts from radar measurements. Despite the many assumptions, the expression provides a reasonable estimate of the target mass. This estimate can be improved by further consideration of factors in the assumptions. 9.2.3 basic weather radar

The basic weather radar consists of the following: (a) A transmitter to produce power at microwave frequency; (b) An antenna to focus the transmitted microwaves into a narrow beam and receive the returning power; (c) A receiver to detect, amplify and convert the microwave signal into a low frequency signal; (d) A processor to extract the desired information from the received signal; (e) A system to display the information in an intelligible form. Other components that maximize the radar capability are: (a) A processor to produce supplementary displays;
(b) A recording system to archive the data for training, study and records.

A basic weather radar may be non-coherent, that is, the phase of successive pulses is random and unknown. Almost exclusively current systems use computers for radar control, digital signal processing, recording, product displays and archiving. The power backscattered from a typical radar is of the order of 10–8 to 10–15 W, covering a range of about 70 dB from the strongest to weakest targets detectable. To adequately cover this range of signals, a logarithmic receiver was used in the past. However, modern operational and research radars with linear receivers with 90 dB dynamic range (and other sophisticated features) are just being introduced (Heiss, McGrew and Sirmans, 1990; Keeler, Hwang and Loew, 1995). Many pulses must be averaged in the processor to provide a significant measurement; they can be integrated in different ways, usually in a digital form, and must account for the receiver transfer function (namely, linear or logarithmic). In practice, for a typical system, the signal at the antenna is received, amplified, averaged over many pulses, corrected for receiver transfer, and converted to a reflectivity factor Z using the radar range equation. The reflectivity factor is the most important parameter for radar interpretation. The factor derives from the Rayleigh scattering model and is defined theoretically as the sum of particle (drops) diameters to the sixth power in the sample volume: Z = ∑ vol D6 (9.3)
where the unit of Z is mm6 m–3. In many cases, the numbers of particles, composition and shape are not known and an equivalent or effective reflectivity factor Ze is defined. Snow and ice particles must refer to an equivalent Ze which represents Z, assuming the backscattering particles were all spherical drops. A common practice is to work in a logarithmic scale or dBZ units, which are numerically defined as dBZ = 10 log10 Ze.

Volumetric observations of the atmosphere are normally made by scanning the antenna at a fixed elevation angle and subsequently incrementing the elevation angle in steps at each revolution. An important consideration is the resolution of the targets. Parabolic reflector antennas are used to
focus the waves into a pencil shaped beam. Larger reflectors create narrower beams, greater resolution and sensitivity at increasing costs. The beamwidth, the angle subtended by the line between the two points on the beam where the power is one half that at the axis, is dependent on the wavelength, and may be approximated by:

\theta_e = \frac{70 \lambda}{d}        (9.4)

where the units of θe are degrees; and d is the antenna diameter in the same units as λ. Good weather radars have beamwidths of 0.5 to 1°. The useful range of weather radars, except for long-range detection only of thunderstorms, is of the order of 200 km. The beam at an elevation of, for example, 0.5° is at a height of 4 km above the Earth’s surface. Also, the beamwidth is of the order of 1.5 km or greater. For good quantitative precipitation measurements, the range is less than 200 km. At long ranges, the beam is too high for ground estimates. Also, beam spreading reduces resolution and the measurement can be affected by underfilling with target. Technically, there is a maximum unambiguous range determined by the pulse repetition frequency (equation 9.6) since the range must be measured during the listening period between pulses. At usual PRFs this is not a problem. For example, with a PRF of 250 pulses per second, the maximum range is 600 km. At higher PRFs, typically 1 000 pulses per second, required for Doppler systems, the range will be greatly reduced to about 150 km. New developments may ameliorate this situation (Joe, Passarelli and Siggia, 1995). 9.2.4 Doppler radar


The development of Doppler weather radars and their introduction to weather surveillance provide a new dimension to the observations (Heiss, McGrew and Sirmans, 1990). Doppler radar provides a measure of the targets’ velocity along a radial from the radar in a direction either towards or away from the radar. A further advantage of the Doppler technique is the greater effective sensitivity to low reflectivity targets near the radar noise level when the velocity field can be distinguished in a noisy Z field. At the normal speeds of meteorological targets, the frequency shift is relatively small compared with the radar frequency and is very difficult to measure. An easier task is to retain the phase of the transmitted pulse, compare it with the phase of the received pulse and then determine the change in phase between successive pulses. The time rate
of change of the phase is then directly related to the frequency shift, which in turn is directly related to the target velocity – the Doppler effect. If the phase changes by more than ±180°, the velocity estimate is ambiguous. The highest unambiguous velocity that can be measured by a Doppler radar is the velocity at which the target moves, between successive pulses, more than a quarter of the wavelength. At higher speeds, an additional processing step is required to retrieve the correct velocity. The maximum unambiguous Doppler velocity depends on the radar wavelength (λ), and the PRF and can be expressed as:

V_{max} = \pm \frac{PRF \cdot \lambda}{4}        (9.5)

The maximum unambiguous range can be expressed as:

r_{max} = \frac{c}{2 \cdot PRF}        (9.6)

Thus, Vmax and rmax are related by the equation:

V_{max} \, r_{max} = \pm \frac{\lambda c}{8}        (9.7)
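A short numerical check of equations 9.5 to 9.7 is sketched below in Python; the PRF and wavelength pairings are chosen to match the examples quoted in the surrounding text and are otherwise illustrative.

```python
# Maximum unambiguous velocity and range (equations 9.5 and 9.6)
C = 2.998e8  # speed of light, m/s

def v_max(prf_hz, wavelength_m):
    return prf_hz * wavelength_m / 4.0       # equation 9.5

def r_max(prf_hz):
    return C / (2.0 * prf_hz)                # equation 9.6

for band, lam in (("S (10 cm)", 0.10), ("C (5.3 cm)", 0.053), ("X (3.2 cm)", 0.032)):
    prf = 1000.0
    print(f"{band}: PRF {prf:.0f} Hz -> Vmax = ±{v_max(prf, lam):.1f} m/s, rmax = {r_max(prf) / 1000:.0f} km")
    # The product Vmax * rmax = lambda * c / 8 (equation 9.7), independent of the PRF chosen:
    print(f"    Vmax * rmax = {v_max(prf, lam) * r_max(prf):.3e} = lambda*c/8 = {lam * C / 8:.3e}")
```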

These relationships show the limits imposed by the selection of the wavelength and PRF. A high PRF is desirable to increase the unambiguous velocity; a low PRF is desirable to increase the radar range. A compromise is required until better technology is available to retrieve the information unambiguously outside these limits (Doviak and zrnic, 1993; Joe, Passarelli and Siggia, 1995). The relationship also shows that the longer wavelengths have higher limits. In numerical terms, for a typical S-band radar with a PRF of 1 000 Hz, Vmax = ±25 m s–1, while for an X-band radar Vmax = ±8 m s–1. Because the frequency shift of the returned pulse is measured by comparing the phases of the transmitted and received pulses, the phase of the transmitted pulses must be known. In a non-coherent radar, the phase at the beginning of successive pulses is random and unknown, so such a system cannot be used for Doppler measurements; however, it can be used for the basic operations described in the previous section. Some Doppler radars are fully coherent; their transmitters employ very stable frequency sources, in which phase is determined and known from pulse to pulse. Semi-coherent radar systems, in which the
phase of successive pulses is random but known, are cheaper and more common. Fully coherent radars typically employ klystrons in their high-power output amplifiers and have their receiver frequencies derived from the same source as their transmitters. This approach greatly reduces the phase instabilities found in semi-coherent systems, leading to improved ground clutter rejection and better discrimination of weak clear-air phenomena which might otherwise be masked. The microwave transmitter for non-coherent and semi-coherent radars is usually a magnetron, given that it is relatively simple, cheaper and provides generally adequate performance for routine observations. A side benefit of the magnetron is the reduction of Doppler response to second or third trip echoes (echoes arriving from beyond the maximum unambiguous range) due to their random phase, although the same effect could be obtained in coherent systems by introducing known pseudo-random phase disturbances into the receiver and transmitter. Non-coherent radars can be converted relatively easily to a semi-coherent Doppler system. The conversion should also include the more stable coaxial-type magnetron.

Both reflectivity factor and velocity data are extracted from the Doppler radar system. The target is typically a large number of hydrometeors (rain drops, snow flakes, ice pellets, hail, etc.) of all shapes and sizes and moving at different speeds due to the turbulent motion within the volume and due to their fall speeds. The velocity field is therefore a spectrum of velocities — the Doppler spectrum (Figure 9.2).

Two systems of different complexity are used to process the Doppler parameters. The simpler pulse pair processing (PPP) system uses the comparison of successive pulses in the time domain to extract mean velocity and spectrum width. The second and more complex system uses a fast Fourier transform (FFT) processor to produce a full spectrum of velocities in each sample volume. The PPP system is faster, less computationally intensive and better at low signal-to-noise ratios, but has poorer clutter rejection characteristics than the FFT system. Modern systems try to use the best of both approaches by removing clutter using FFT techniques and subsequently use PPP to determine the radial velocity and spectral width.

9.2.5 Polarization diversity radars
Experiments with polarization diversity radars have been under way for many years to determine their potential for enhanced radar observations of
the weather (Bringi and Hendry, 1990). Promising studies point towards the possibility of differentiating between hydrometeor types, a step to discriminating between rain, snow and hail. There are practical technical difficulties, and the techniques and applications have not progressed beyond the research stage to operational usage. The potential value of polarization diversity measurements for precipitation measurement would seem to lie in the fact that better drop size distribution and knowledge of the precipitation types would improve the measurements. Recent work at the United States National Severe Storms Laboratory (Melnikov and others, 2002) on adding polarimetric capability to the NEXRAD radar has demonstrated a robust engineering design utilizing simultaneous transmission and reception of both horizontally and vertically polarized pulses. The evaluation of polarimetric moments, and derived products for rainfall accumulation and hydrometeor classification, has shown that this design holds great promise as a basis for adding polarization diversity to the entire NEXRAD network. There are two basic radar techniques in current usage. One system transmits a circularly polarized wave, and the copolar and orthogonal polarizations are measured. The other system alternately transmits pulses with horizontal then vertical polarization utilizing a high-power switch. The linear system is generally preferred since meteorological information retrieval is less calculation intensive. The latter technique is more common as conventional radars

are converted to have polarization capability. However, the former type of system has some distinct technological advantages. Various polarization bases (Holt, Chandra and Wood, 1995) and dual transmitter systems (Mueller and others, 1995) are in the experimental phase. The main differences in requirements from conventional radars relate to the quality of the antenna system, the accuracy of the electronic calibration and signal processing. Matching the beams, switching polarizations and the measurement of small differences in signals are formidable tasks requiring great care when applying the techniques. The technique is based on micro-differences in the scattering particles. Spherical raindrops become elliptically shaped with the major axis in the horizontal plane when falling freely in the atmosphere. The oblateness of the drop is related to drop size. The power backscattered from an oblate spheroid is larger for a horizontally polarized wave than for a vertically polarized wave assuming Rayleigh scattering. Using suitable assumptions, a drop size distribution can be inferred and thus a rainfall rate can be derived. The differential reflectivity, called ZDR, is defined as 10 times the logarithm of the ratio of the horizontally polarized reflectivity ZH and the vertically polarized reflectivity ZV. Comparisons of the equivalent reflectivity factor Ze and the differential reflectivity ZDR suggest that the target may be separated as being hail, rain, drizzle or snow (Seliga and Bringi, 1976).
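The differential reflectivity just defined can be written in a couple of lines; in the sketch below, ZH and ZV are linear reflectivities in mm6 m–3 and the sample values are only illustrative.

```python
import math

def zdr_db(z_h, z_v):
    """Differential reflectivity ZDR = 10 log10(ZH / ZV), with ZH and ZV in linear units (mm^6 m^-3)."""
    return 10.0 * math.log10(z_h / z_v)

# Oblate raindrops return more power at horizontal polarization, so ZDR tends to be
# positive in rain; tumbling hail tends to give ZDR near 0 dB (illustrative numbers).
print(zdr_db(3.0e3, 1.5e3))   # about 3 dB
print(zdr_db(1.0e4, 1.0e4))   # 0 dB
```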

Figure 9.2. The Doppler spectrum of a weather echo and a ground target. The ground target contribution is centred on zero and is much narrower than the weather echo.
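A synthetic version of the spectrum sketched in Figure 9.2 can be generated with a few lines of Python; the Gaussian shapes, widths and power levels below are assumptions chosen only to reproduce the qualitative picture.

```python
import numpy as np

v_nyq = 16.0                             # Nyquist velocity (m/s), illustrative
v = np.linspace(-v_nyq, v_nyq, 256)      # velocity axis

def gaussian(v, power_db, mean, width):
    return 10.0 ** (power_db / 10.0) * np.exp(-0.5 * ((v - mean) / width) ** 2)

weather = gaussian(v, power_db=-10.0, mean=6.0, width=3.0)   # broad, moving weather echo
clutter = gaussian(v, power_db=0.0, mean=0.0, width=0.3)     # narrow ground clutter spike at 0 m/s
noise = 1e-4                                                  # roughly a -40 dB noise floor

spectrum_db = 10.0 * np.log10(weather + clutter + noise)
print(f"power near 0 m/s (clutter): {spectrum_db[np.argmin(np.abs(v))]:.1f} dB")
print(f"power near +6 m/s (weather): {spectrum_db[np.argmin(np.abs(v - 6.0))]:.1f} dB")
```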


As an electromagnetic wave propagates through a medium with oblate particles, the phase of the incident beam is altered. The effect on the vertical and horizontal phase components depends on the oblateness and is embodied in a parameter termed the specific differential phase (KDP). For heavy rainfall measurements, KDP has certain advantages (zrnic and Ryzhkov, 1995). English and others (1991) demonstrated that the use of KDP for rainfall estimation is much better than Z for rainfall rates greater than about 20 mm hr–1 at the S-band. Propagation effects on the incident beam due to the intervening medium can dominate target backscatter effects and confound the interpretation of the resulting signal. Bebbington (1992) designed a parameter for a circularly polarized radar, termed the degree of polarization, which was insensitive to propagation effects. This parameter is similar to linear correlation for linearly polarized radars. It appears to have value in target discrimination. For example, extremely low values are indicative of scatterers that are randomly oriented such as those caused by airborne grass or ground clutter (Holt and others, 1993). 9.2.6 ground clutter rejection
Echoes due to non-precipitation targets are known as clutter, and should be eliminated. Echoes caused by clear air or insects, which can be used to map out wind fields, are an exception. Clutter can be the result of a variety of targets, including buildings, hills, mountains, aircraft and chaff, to name just a few. Good radar siting is the first line of defence against ground clutter effects. However, clutter is always present to some extent. The intensity of ground clutter is inversely proportional to wavelength (Skolnik, 1970), whereas backscatter from rain is inversely proportional to the fourth power of wavelength. Therefore, shorter wavelength radars are less affected by ground clutter.

Point targets, like aircraft, can be eliminated, if they are isolated, by removing echoes that occupy a single radar resolution volume. Weather targets are distributed over several radar resolution volumes. The point targets can be eliminated during the data-processing phase. Point targets, like aircraft echoes, embedded within precipitation echoes may not be eliminated with this technique, depending on relative strength.

Distributed targets require more sophisticated signal and data-processing techniques. A conceptually attractive idea is to use clutter maps. The patterns of radar echoes in non-precipitating conditions are
used to generate a clutter map that is subtracted from the radar pattern collected in precipitating conditions. The problem with this technique is that the pattern of ground clutter changes over time. These changes are primarily due to changes in meteorological conditions; a prime example is anomalous propagation echoes that last several hours and then disappear. Micro-changes to the environment cause small fluctuations in the pattern of ground echoes which confound the use of clutter maps. Adaptive techniques (Joss and Lee, 1993) attempt to determine dynamically the clutter pattern to account for the short-term fluctuations, but they are not good enough to be used exclusively, if at all. Doppler processing techniques attempt to remove the clutter from the weather echo from a signalprocessing perspective. The basic assumption is that the clutter echo is narrow in spectral width and that the clutter is stationary. However, to meet these first criteria, a sufficient number of pulses must be acquired and processed in order to have sufficient spectral resolution to resolve the weather from the clutter echo. A relatively large Nyquist interval is also needed so that the weather echo can be resolved. The spectral width of ground clutter and weather echo is generally much less than 1–2 m s–1 and greater than 1–2 m s–1, respectively. Therefore, Nyquist intervals of about 8 m s–1 are needed. Clutter is generally stationary and is identified as a narrow spike at zero velocity in the spectral representation (Figure 9.2). The spike has finite width because the ground echo targets, such as swaying trees, have some associated motions. Time domain processing to remove the zero velocity (or DC) component of a finite sequence is problematic since the filtering process will remove weather echo at zero velocity as well (zrnic and Hamidi, 1981). Adaptive spectral (Fourier transform) processing can remove the ground clutter from the weather echoes even if they are overlapped (Passarelli and others, 1981; Crozier and others, 1991). This is a major advantage of spectral processing. Stripped of clutter echo, the significant meteorological parameters can be computed. An alternative approach takes advantage of the observation that structures contributing to ground clutter are very small in scale (less than, for example, 100 m). Range sampling is carried out at a very fine resolution (less than 100 m) and clutter is identified using reflectivity and Doppler signal processing. Range averaging (to a final resolution of 1 km) is performed with clutter-free range bins. The philosophy is to detect and ignore range bins with clutter, rather than to correct for

the clutter (Joss and Lee, 1993; Lee, Della Bruna and Joss, 1995). This is radically different from the previously discussed techniques and it remains to be seen whether the technique will be effective in all situations, in particular in anomalous propagation situations where the clutter is widespread. Polarization radars can also identify clutter. However, more work is needed to determine their advantages and disadvantages. Clutter can be reduced by careful site selection (see section 9.7). Radars used for long-range surveillance, such as for tropical cyclones or in a widely scattered network, are usually placed on hilltops to extend the useful range, and are therefore likely to see many clutter echoes. A simple suppression technique is to scan automatically at several elevations, and to discard the data at the shorter ranges from the lower elevations, where most of the clutter exists. By processing the radar data into CAPPI products, low elevation data is rejected automatically at short ranges.
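A minimal sketch of the spectral clutter-notch idea described above is given below; it is a toy example, not the adaptive operational processing referred to in the text, and the sample counts, notch width and synthetic signal are all assumptions.

```python
import numpy as np

def notch_zero_velocity(iq, notch_bins=1):
    """Very simple spectral clutter filter: FFT the pulse series, zero the DC bin and
    its +/- notch_bins neighbours (the near-zero-velocity clutter), then inverse FFT."""
    spec = np.fft.fft(iq)
    spec[0] = 0.0
    for k in range(1, notch_bins + 1):
        spec[k] = 0.0
        spec[-k] = 0.0
    return np.fft.ifft(spec)

# Synthetic example: stationary clutter (a constant phasor) plus a weather echo
# with a steady Doppler phase progression, sampled over 64 pulses.
n = 64
t = np.arange(n)
clutter = 5.0 * np.ones(n, dtype=complex)
weather = np.exp(2j * np.pi * 8 * t / n)     # weather echo falling exactly in spectral bin 8
iq = clutter + weather
filtered = notch_zero_velocity(iq)
print(f"mean power before: {np.mean(np.abs(iq) ** 2):.1f}, after: {np.mean(np.abs(filtered) ** 2):.2f}")
```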

9.3 Propagation and scattering of radar signals

Electromagnetic waves propagate in straight lines, in a homogeneous medium, with the speed of light. The Earth’s atmosphere is not homogeneous and microwaves undergo refraction, absorption and scattering along their path. The atmosphere is usually vertically stratified and the rays change direction depending on the changes in height of the refractive index (or temperature and moisture). When the waves encounter precipitation and clouds, part of the energy is absorbed and a part is scattered in all directions or back to the radar site. 9.3.1 refraction in the atmosphere

The amount of bending of electromagnetic waves can be predicted by using the vertical profile of temperature and moisture (Bean and Dutton, 1966). Under normal atmospheric conditions, the waves travel in a curve bending slightly earthward. The ray path can bend either upwards (sub-refraction) or more earthward (super-refraction). In either case, the altitude of the beam will be in error using the standard atmosphere assumption. From a precipitation measurement standpoint, the greatest problem occurs under super-refractive or “ducting” conditions. The ray can bend sufficiently
to strike the Earth and cause ground echoes not normally encountered. The phenomenon occurs when the index of refraction decreases rapidly with height, for example, an increase in temperature and a decrease in moisture with height. These echoes must be dealt with in producing a precipitation map. This condition is referred to as anomalous propagation (AP or ANAPROP). Some "clear air" echoes are due to turbulent inhomogeneities in the refractive index found in areas of turbulence, layers of enhanced stability, wind shear cells, or strong inversions. These echoes usually occur in patterns, mostly recognizable, but must be eliminated as precipitation fields (Gossard and Strauch, 1983).

9.3.2 Attenuation in the atmosphere

Microwaves are subject to attenuation owing to atmospheric gases, clouds and precipitation by absorption and scattering.

Attenuation by gases

Gases attenuate microwaves in the 3–10 cm bands. Absorption by atmospheric gases is due mainly to water vapour and oxygen molecules. Attenuation by water vapour is directly proportional to the pressure and absolute humidity and increases almost linearly with decreasing temperature. The concentration of oxygen, to altitudes of 20 km, is relatively uniform. Attenuation is also proportional to the square of the pressure.

Attenuation by gases varies slightly with the climate and the season. It is significant at weather radar wavelengths over the longer ranges and can amount to 2 to 3 dB at the longer wavelengths and 3 to 4 dB at the shorter wavelengths, over a range of 200 km. Compensation seems worthwhile and can be quite easily accomplished automatically. Attenuation can be computed as a function of range on a seasonal basis for ray paths used in precipitation measurement and applied as a correction to the precipitation field.

Attenuation by hydrometeors

Attenuation by hydrometeors can result from both absorption and scattering. It is the most significant source of attenuation. It is dependent on the shape, size, number and composition of the particles. This dependence has made it very difficult to overcome in any quantitative way using radar observations alone. It has not been satisfactorily overcome for automated operational measurement systems yet. However, the phenomenon must be recognized and
the effects reduced by some subjective intervention using general knowledge. Attenuation is dependent on wavelength. At 10 cm wavelengths, the attenuation is rather small, while at 3 cm it is quite significant. At 5 cm, the attenuation may be acceptable for many climates, particularly in the high mid-latitudes. Wavelengths below 5 cm are not recommended for good precipitation measurement except for short-range applications (Table 9.5). table 9.5. one-way attenuation relationships
Wavelength (cm)    Relation (dB km–1)
10                 0.000 343 R^0.97
5                  0.001 8 R^1.05
3.2                0.01 R^1.21

After Burrows and Attwood (1949). One-way specific attenuations at 18°C. R is in units of mm hr–1.
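As an illustration of how the Table 9.5 relations are applied, the sketch below integrates the one-way specific attenuation through uniform rain and doubles it for the two-way radar path; the rain rate and path length are arbitrary examples, and gaseous attenuation is ignored.

```python
# One-way specific attenuation k (dB/km) = a * R**b, coefficients from Table 9.5
TABLE_9_5 = {10.0: (0.000343, 0.97), 5.0: (0.0018, 1.05), 3.2: (0.01, 1.21)}

def two_way_attenuation_db(wavelength_cm, rain_rate_mm_h, path_km):
    """Two-way attenuation (dB) through uniform rain of the given rate over path_km."""
    a, b = TABLE_9_5[wavelength_cm]
    one_way = a * rain_rate_mm_h ** b * path_km
    return 2.0 * one_way

for wl in (10.0, 5.0, 3.2):
    att = two_way_attenuation_db(wl, rain_rate_mm_h=25.0, path_km=20.0)
    print(f"{wl:4.1f} cm: {att:6.2f} dB two way through 25 mm/h rain over 20 km")
```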


For precipitation estimates by radar, some general statements can be made with regard to the magnitude of attenuation. Attenuation is dependent on the water mass of the target, thus heavier rains attenuate more; clouds with much smaller mass attenuate less. Ice particles attenuate much less than liquid particles. Clouds and ice clouds cause little attenuation and can usually be ignored. Snow or ice particles (or hailstones) can grow much larger than raindrops. They become wet as they begin to melt and result in a large increase in reflectivity and, therefore, in attenuation properties. This can distort precipitation estimates. 9.3.3 scattering by clouds and precipitation


The signal power detected and processed by the radar (namely, echo) is power backscattered by the target, or by hydrometeors. The backscattering cross-section (σb) is defined as the area of an isotropic scatterer that would return to the emitting source the same amount of power as the actual target. The backscattering cross-section of spherical particles was first determined by Mie (1908). Rayleigh found that, if the ratio of the particle diameter to the wavelength was equal to or less than 0.06, a simpler expression could be used to determine the backscatter cross-section:

\sigma_b = \frac{\pi^5 |K|^2 D^6}{\lambda^4}        (9.8)

which is the justification for equation 9.3. |K|2, the refractive index factor, is equal to 0.93 for liquid water and 0.197 for ice. The radar power measurements are used to derive the scattering intensity of the target by using equation 9.2 in the form:

Z = \frac{C P_r r^2}{|K|^2}        (9.9)

The method and problems of interpreting the reflectivity factor in terms of precipitation rate (R) are discussed in section 9.9.

9.3.4 Scattering in clear air

In regions without precipitating clouds, it has been found that echoes are mostly due to insects or to strong gradients of refractive index in the atmosphere. The echoes are of very low intensity and are detected only by very sensitive radars. Equivalent Ze values for clear air phenomena generally appear in the range of –5 to –55 dBZ, although these are not true Z parameters, with the physical process generating the echoes being entirely different. For precipitation measurement, these echoes are a minor "noise" in the signal. They can usually be associated with some meteorological phenomenon such as a sea breeze or thunderstorm outflows. Clear air echoes can also be associated with birds and insects in very low concentrations. Echo strengths of 5 to 35 dBZ are not unusual, especially during migrations (Table 9.6).

Table 9.6. Typical backscatter cross-sections for various targets

Object                      σb (m2)
Aircraft                    10 to 1 000
Human                       0.14 to 1.05
Weather balloon             0.01
Birds                       0.001 to 0.01
Bees, dragonflies, moths    3 × 10–6 to 10–5
2 mm water drop             1.8 × 10–10

Although normal radar processing would interpret the signal in terms of Z or R, the scattering properties of the clear atmosphere are quite different from that of hydrometeors. It is most often expressed in terms of the structure parameter of refractive index, Cn2. This is a measure of the mean-square fluctuations
of the refractive index as a function of distance (Gossard and Strauch, 1983).
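As a check on equation 9.8 against the last row of Table 9.6, the sketch below evaluates the Rayleigh backscattering cross-section of a 2 mm water drop; a 10 cm wavelength is assumed here, with |K|² = 0.93 as given above.

```python
import math

def rayleigh_backscatter_cross_section(diameter_m, wavelength_m, k2=0.93):
    """Backscattering cross-section (m^2) from equation 9.8: sigma_b = pi^5 |K|^2 D^6 / lambda^4.
    Only valid while D <= 0.06 * wavelength (the Rayleigh regime)."""
    if diameter_m > 0.06 * wavelength_m:
        raise ValueError("outside the Rayleigh approximation")
    return math.pi ** 5 * k2 * diameter_m ** 6 / wavelength_m ** 4

sigma = rayleigh_backscatter_cross_section(2e-3, 0.10)
print(f"2 mm drop at 10 cm wavelength: sigma_b = {sigma:.2e} m^2")   # about 1.8e-10 m^2, cf. Table 9.6
```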

9.4 Velocity measurements

9.4.1 The Doppler spectrum

Doppler radars measure velocity by estimating the frequency shift produced by an ensemble of moving targets. Doppler radars also provide information about the total power returned and about the spectrum width of the precipitation particles within the pulse volume. The mean Doppler velocity is equal to the mean motion of scatterers weighted by their cross-sections and, for near horizontal antenna scans, is essentially the air motion towards or away from the radar. Likewise, the spectrum width is a measure of the velocity dispersion, that is, the shear or turbulence within the resolution volume. A Doppler radar measures the phase of the returned signal by referencing the phase of the received signal to the transmitter. The phase is measured in rectangular form by producing the in-phase (I) and quadrature (Q) components of the signal. The I and Q are samples at a fixed range location. They are collected and processed to obtain the mean velocity and spectrum width. 9.4.2 Doppler ambiguities

To detect returns at various ranges from the radar, the returning signals are sampled periodically, usually about every microsecond, to obtain information about every 150 m in range. This sampling can continue until it is time to transmit the next pulse. A sample point in time (corresponding to a distance from the radar) is called a range gate. The radial wind component throughout a storm or precipitation area is mapped as the antenna scans.

A fundamental problem with the use of any pulse Doppler radar is the removal of ambiguity in Doppler mean velocity estimates, that is, velocity folding. Discrete equi-spaced samples of a time-varying function result in a maximum unambiguous frequency equal to one half of the sampling frequency (fs). Subsequently, frequencies greater than fs/2 are aliased ("folded") into the Nyquist co-interval (±fs/2) and are interpreted as velocities within ±λfs/4, where λ is the wavelength of transmitted energy. Techniques to dealias the velocities include dual PRF techniques (Crozier and others, 1991; Doviak and Zrnic, 1993) or continuity techniques (Eilts and Smith, 1990). In the former, radial velocity estimates are collected at two different PRFs with different maximum unambiguous velocities and are combined to yield a new estimate of the radial velocity with an extended unambiguous velocity. For example, a C band radar using PRFs of 1 200 and 900 Hz has nominal unambiguous velocities of 16 and 12 m s–1, respectively. The amount of aliasing can be deduced from the difference between the two velocity estimates to dealias the velocity to an extended velocity range of ±48 m s–1 (Figure 9.3). Continuity techniques rely on having sufficient echo to discern that there are aliased velocities and correcting them by assuming velocity continuity (no discontinuities of greater than 2Vmax).

There is a range limitation imposed by the use of high PRFs (greater than about 1 000 Hz) as described in section 9.2. Echoes beyond the maximum range will be aliased back into the primary range. For radars with coherent transmitters (e.g. klystron systems), the echoes will appear within the primary range. For coherent-on-receive systems, the second trip echoes will appear as noise (Joe, Passarelli and Siggia, 1995; Passarelli and others, 1981).

9.4.3 Vertically pointing measurements

In principle, a Doppler radar operating in the vertically pointing mode is an ideal tool for obtaining accurate cloud-scale measurements of vertical wind speeds and drop-size distributions (DSDs). However, the accuracy of vertical velocities and DSDs derived from the Doppler spectra has been limited by the strong mathematical interdependence of the two quantities. The real difficulty is that the Doppler spectrum is measured as a function of the scatterers' total vertical velocity – due to terminal hydrometeor fall speeds, plus updrafts or downdrafts. In order to compute the DSD from a Doppler spectrum taken at vertical incidence, the spectrum must be expressed as a function of terminal velocity alone. Errors of only ±0.25 m s–1 in vertical velocity can cause errors of 100 per cent in drop number concentrations (Atlas, Srivastava and Sekhon, 1973). A dual-wavelength technique has been developed (termed the Ratio method) by which vertical air velocity may be accurately determined independently of the DSD. In this approach, there is a trade-off between potential accuracy and potential for successful application.

9.4.4 Measurement of velocity fields

A great deal of information can be determined in real time from a single Doppler radar. It should be noted that the interpretation of radial velocity estimates from a single radar is not always unambiguous. Colour displays of single-Doppler radial velocity patterns aid in the real-time interpretation of the associated reflectivity fields and can reveal important features not evident in the reflectivity structures alone (Burgess and Lemon, 1990). Such a capability is of particular importance in the identification and tracking of severe storms. On typical colour displays, velocities between ± Vmax are assigned 1 of 8 to 15 colours or more. Velocities extending beyond the Nyquist interval enter the scale of colours at the opposite end. This process may be repeated if the velocities are aliased more than one Nyquist interval. Doppler radar can also be used to derive vertical profiles of horizontal winds. When the radar’s antenna is tilted above the horizontal, increasing range implies increasing height. A profile of wind with height can be obtained by sinusoidal curve-fitting to the observed data (termed velocity azimuth display (VAD) after Lhermitte and Atlas, 1961) if the wind is relatively uniform over the area of the scan. The winds along the zero radial velocity contour are perpendicular to the radar beam axis. The colour display may be used to easily interpret VAD data obtained from large-scale precipitation systems. Typical elevated conical scan patterns in widespread
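The VAD idea of sinusoidal curve-fitting of radial velocity against azimuth reduces to a small least-squares problem. The sketch below is a simplified illustration only: it assumes a uniform horizontal wind, negligible vertical motion and fall speed, and a single low-elevation antenna rotation.

```python
import numpy as np

def vad_wind(azimuth_deg, vr, elevation_deg):
    """Fit Vr = (u*sin(az) + v*cos(az)) * cos(elev) by least squares and return (u, v),
    the west-east and south-north wind components, under the assumptions stated above."""
    az = np.radians(azimuth_deg)
    cos_el = np.cos(np.radians(elevation_deg))
    design = np.column_stack((np.sin(az) * cos_el, np.cos(az) * cos_el))
    (u, v), *_ = np.linalg.lstsq(design, vr, rcond=None)
    return u, v

# Synthetic test: a 10 m/s westerly wind (u = 10, v = 0) observed at 3 deg elevation
az = np.arange(0.0, 360.0, 10.0)
vr = 10.0 * np.sin(np.radians(az)) * np.cos(np.radians(3.0))
u, v = vad_wind(az, vr, 3.0)
print(f"u = {u:.1f} m/s, v = {v:.1f} m/s, speed = {np.hypot(u, v):.1f} m/s")
```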
Figure 9.3. Solid and dashed lines show Doppler velocity measurements taken with two different pulse repetition frequencies (1 200 and 900 Hz for a C band radar). Speeds greater than the maximum unambiguous velocities are aliased. The differences (dotted line) between the Doppler velocity estimates are distinct and can be used to identify the degree of aliasing.
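The dual-PRF reasoning behind Figure 9.3 can be sketched as follows: the two aliased estimates are compared, their difference identifies how many Nyquist intervals to add back, and the velocity is recovered over the extended interval. The Nyquist velocities of 16 and 12 m s–1 are the C band example quoted in section 9.4.2; the code is a noise-free illustration, not an operational algorithm.

```python
def alias(v, v_nyq):
    """Fold a true velocity into the Nyquist co-interval [-v_nyq, +v_nyq)."""
    return (v + v_nyq) % (2.0 * v_nyq) - v_nyq

def dealias_dual_prf(v1, v2, nyq1=16.0, nyq2=12.0, tol=0.5):
    """Recover the true velocity from two aliased estimates v1, v2 made with Nyquist
    velocities nyq1, nyq2: try a few Nyquist-interval corrections of v1 and keep the
    one consistent with v2, within the extended Nyquist interval."""
    v_ext = nyq1 * nyq2 / (nyq1 - nyq2)      # extended Nyquist velocity, 48 m/s here
    best = None
    for n in range(-4, 5):
        candidate = v1 + 2.0 * n * nyq1
        if abs(candidate) <= v_ext and abs(alias(candidate, nyq2) - v2) < tol:
            best = candidate
    return best

true_v = 35.0                                 # m/s, beyond both Nyquist velocities
v1, v2 = alias(true_v, 16.0), alias(true_v, 12.0)
print(v1, v2, dealias_dual_prf(v1, v2))       # 3.0 11.0 35.0
```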


9.5 Sources of error

Radar beam filling

In many cases, and especially at large ranges from the radar, the pulse volume is not completely filled with homogeneous precipitation. Precipitation intensities often vary widely on small scales; at large distances from the radar, the pulse volume increases in size. At the same time, the effects of the Earth's curvature become significant. In general, measurements may be quantitatively useful for ranges of less than 100 km. This effect is important for cloud-top height measurements and the estimation of reflectivity.

Non-uniformity of the vertical distribution of precipitation

The first parameter of interest when taking radar measurements is usually precipitation at ground level. Because of the effects of beam width, beam tilting and the Earth's curvature, radar measurements of precipitation are higher than average over a considerable depth. These measurements are dependent on the details of the vertical distribution of precipitation and can contribute to large errors for estimates of precipitation on the ground.

Variations in the Z-R relationship

A variety of Z-R relationships have been found for different precipitation types. However, from the radar alone (except for dual polarized radars) these variations in the types and size distribution of hydrometeors cannot be estimated. In operational applications, this variation can be a significant source of error.

Attenuation by intervening precipitation

Attenuation by rain may be significant, especially at the shorter radar wavelengths (5 and 3 cm). Attenuation by snow, although less than for rain, may be significant over long path lengths.

Beam blocking

Depending on the radar installation, the radar beam may be partly or completely occulted by the topography or obstacles located between the radar and the target. This results in underestimations of reflectivity and, hence, of rainfall rate.

Attenuation due to a wet radome

Most radar antennas are protected from wind and rain by a radome, usually made of fibreglass. The

radome is engineered to cause little loss in the radiated energy. For instance, the two-way loss due to this device can be easily kept to less than 1 dB at the C band, under normal conditions. However, under intense rainfall, the surface of the radome can become coated with a thin film of water or ice, resulting in a strong azimuth-dependent attenuation. Experience with the NEXRAD WSR-88D radars shows that coating radomes with a special hydrophobic paint essentially eliminates this source of attenuation, at least at 10 cm wavelengths.

Electromagnetic interference

Electromagnetic interference from other radars or devices, such as microwave links, may be an important factor of error in some cases. This type of problem is easily recognized by observation. It may be solved by negotiation, by changing frequency, by using filters in the radar receiver, and sometimes by software.

Ground clutter

The contamination of rain echoes by ground clutter may cause very large errors in precipitation and wind estimation. The ground clutter should first be minimized by good antenna engineering and a good choice of radar location. This effect may be greatly reduced by a combination of hardware clutter suppression devices (Aoyagi, 1983) and through signal and data processing. Ground clutter is greatly increased in situations of anomalous propagation.

Anomalous propagation

Anomalous propagation distorts the radar beam path and has the effect of increasing ground clutter by refracting the beam towards the ground. It may also cause the radar to detect storms located far beyond the usual range, making errors in their range determination because of range aliasing. Anomalous propagation is frequent in some regions, when the atmosphere is subject to strong decreases in humidity and/or increases in temperature with height. Clutter returns owing to anomalous propagation may be very misleading to untrained human observers and are more difficult to eliminate fully by processing them as normal ground clutter.

Antenna accuracy

The antenna position may be known within 0.2° with a well-engineered system. Errors may also be produced by the excessive width of the radar beam
or by the presence of sidelobes, in the presence of clutter or of strong precipitation echoes.

Electronics stability

Modern electronic systems are subject to small variations with time. This may be controlled by using a well-engineered monitoring system, which will keep the variations of the electronics within less than 1 dB, or activate an alarm when a fault is detected.

Processing accuracy

The signal processing must be designed to optimize the sampling capacities of the system. The variances in the estimation of reflectivity, Doppler velocity and spectrum width must be kept to a minimum. Range and velocity aliasing may be significant sources of error.

Radar range equation

There are many assumptions in interpreting radar-received power measurements in terms of the meteorological parameter Z by the radar range equation. Non-conformity with the assumptions can lead to error.

9.6 Optimizing radar characteristics

9.6.1 Selecting a radar

A radar is a highly effective observation system. The characteristics of the radar and the climatology determine the effectiveness for any particular application. No single radar can be designed to be the most effective for all applications. Characteristics can be selected to maximize the proficiency to best suit one or more applications, such as tornado detection. Most often, for general applications, compromises are made to meet several user requirements. Many of the characteristics are interdependent with respect to performance and, hence, the need for optimization in reaching a suitable specification. Cost is a significant consideration. Much of the interdependence can be visualized by reference to the radar range equation. A brief note on some of the important factors follows. 9.6.2 Wavelength
The larger the wavelength, the greater the cost of the radar system, particularly antenna costs for comparable beamwidths (i.e. resolution). This is due both to an increase in the amount of material and to the difficulty in meeting tolerances over a greater size. Within the bands of weather radar interest (S, C, X and K), the sensitivity of the radar or its ability to detect a target is strongly dependent on the wavelength. It is also significantly related to antenna size, gain and beamwidth. For the same antenna, the target detectability increases with decreasing wavelength. There is an increase in sensitivity of 8.87 dB in theory and 8.6 dB in practice from 5 to 3 cm wavelengths. Thus, the shorter wavelengths provide better sensitivity. At the same time, the beamwidth is narrower for better resolution and gain. The great disadvantage is that smaller wavelengths have much larger attenuation.

9.6.3 Attenuation

Radar rays are attenuated most significantly in rain, less in snow and ice, and even less in clouds and atmospheric gases. In broad terms, attenuation at the S band is relatively small and generally not too significant. The S band radar, despite its cost, is essential for penetrating the very high reflectivities in mid-latitude and subtropical severe storms with wet hail. X-band radars can be subject to severe attenuation over short distances, and they are not suitable for precipitation rate estimates, or even for surveillance, except at very short range when shadowing or obliteration of more distant storms by nearer storms is not important. The attenuation in the C band lies between the two.

9.6.4 Transmitter power
Target detectability is directly related to the peak power output of the radar pulse. However, there are practical limits to the amount of power output that is dictated by power tube technology. Unlimited increases in power are not the most effective means of increasing the target detectability. For example, doubling the power only increases the system sensitivity by 3 dB. Technically, the maximum possible power output increases with wavelength. Improvements in receiver sensitivity, antenna gain, or choice of wavelength may be better means of increasing detection capability. Magnetrons and klystrons are common power tubes. Magnetrons cost less but are less frequency stable. For Doppler operation, the stability of klystrons was thought to be mandatory. An analysis by Strauch (1981) concluded that magnetrons could be quite effective for general meteorological applications; many Doppler radars today are based on magnetrons. Ground echo rejection techniques and clear air detection applications may favour
klystrons. On the other hand, magnetron systems simplify rejecting second trip echoes. At normal operating wavelengths, conventional radars should detect rainfall intensities of the order of 0.1 mm h–1 at 200 km and have peak power outputs of the order of 250 kW or greater in the C band. 9.6.5 Pulse length

The pulse length determines the target resolving power of the radar in range. The range resolution or the ability of the radar to distinguish between two discrete targets is proportional to the half pulse length in space. For most klystrons and magnetrons, the maximum ratio of pulse width to PRF is about 0.001. Common pulse lengths are in the range of 0.3 to 4 µs. A pulse length of 2 µs has a resolving power of 300 m, and a pulse of 0.5 µs can resolve 75 m.

Assuming that the pulse volume is filled with target, doubling the pulse length increases the radar sensitivity by 6 dB with receiver-matched filtering, while decreasing the resolution; decreasing the pulse length decreases the sensitivity while increasing the resolution. Shorter pulse lengths allow more independent samples of the target to be acquired in range and the potential for increased accuracy of estimate.

9.6.6 Pulse repetition frequency

The PRF should be as high as practical to obtain the maximum number of target measurements per unit time. A primary limitation of the PRF is the unwanted detection of second trip echoes. Most conventional radars have unambiguous ranges beyond the useful range of weather observation by the radar. An important limit on weather target useful range is the substantial height of the beam above the Earth even at ranges of 250 km.

For Doppler radar systems, high PRFs are used to increase the Doppler unambiguous velocity measurement limit. The disadvantages of higher PRFs are noted above. The PRF factor is not a significant cost consideration but has a strong bearing on system performance. Briefly, high PRFs are desirable to increase the number of samples measured, to increase the maximum unambiguous velocity that can be measured, and to allow higher permissible scan rates. Low PRFs are desirable to increase the maximum unambiguous range that can be measured, and to provide a lower duty cycle.

9.6.7 Antenna system, beamwidth, and speed and gain

Weather radars normally use a horn fed antenna with a parabolic reflector to produce a focused narrow conical beam. Two important considerations are the beamwidth (angular resolution) and the power gain. For common weather radars, the size of the antenna increases with wavelength and with the narrowness of the beam required. Weather radars normally have beamwidths in the range of 0.5 to 2.0°. For a 0.5 and 1.0° beam at a C band wavelength, the antenna reflector diameter is 7.1 and 3.6 m, respectively; at S band it is 14.3 and 7.2 m. The cost of the antenna system and pedestal increases much more than linearly with reflector size. There is also an engineering and cost limit. The tower must also be appropriately chosen to support the weight of the antenna. The desirability of having a narrow beam to maximize the resolution and enhance the possibility of having the beam filled with target is particularly critical for the longer ranges. For a 0.5° beam, the azimuthal (and vertical) cross-beam width at 50, 100 and 200 km range is 0.4, 0.9 and 1.7 km, respectively. For a 1.0° beam, the widths are 0.9, 1.7 and 3.5 km. Even with these relatively narrow beams, the beamwidth at the longer ranges is substantially large. The gain of the antenna is also inversely proportional to the beamwidth and thus, the narrower beams also enhance system sensitivity by a factor equal to differential gain. The estimates of reflectivity and precipitation require a nominal minimal number of target hits to provide an acceptable measurement accuracy. The beam must thus have a reasonable dwell time on the target in a rotating scanning mode of operation. Thus, there are limits to the antenna rotation speed. Scanning cycles cannot be decreased without consequences. For meaningful measurements of distributed targets, the particles must have sufficient time to change their position before an independent estimate can be made. Systems generally scan at the speed range of about 3 to 6 rpm. Most weather radars are linearly polarized with the direction of the electric field vector transmitted being either horizontal or vertical. The choice is not clear cut, but the most common polarization is horizontal. Reasons for favouring horizontal polarization include: (a) sea and ground echoes are generally less with horizontal; (b) lesser sidelobes

The pulse length determines the target resolving power of the radar in range. The range resolution or the ability of the radar to distinguish between two discrete targets is proportional to the half pulse length in space. For most klystrons and magnetrons, the maximum ratio of pulse width to PRF is about 0.001. Common pulse lengths are in the range of 0.3 to 4 µs. A pulse length of 2 µs has a resolving power of 300 m, and a pulse of 0.5 µs can resolve 75 m. Assuming that the pulse volume is filled with target, doubling the pulse length increases the radar sensitivity by 6 dB with receiver-matched filtering, while decreasing the resolution; decreasing the pulse length decreases the sensitivity while increasing the resolution. Shorter pulse lengths allow more independent samples of the target to be acquired in range and the potential for increased accuracy of estimate. 9.6.6 Pulse repetition frequency

The PRF should be as high as practical to obtain the maximum number of target measurements per unit time. A primary limitation of the PRF is the unwanted detection of second trip echoes. Most conventional radars have unambiguous ranges beyond the useful range of weather observation by the radar. An important limit on weather target useful range is the substantial height of the beam above the Earth even at ranges of 250 km. For Doppler radar systems, high PRFs are used to increase the Doppler unambiguous velocity measurement limit. The disadvantages of higher PRFs are noted above. The PRF factor is not a significant cost consideration but has a strong bearing on system performance. Briefly, high PRFs are desirable to increase the number of samples measured, to increase the maximum unambiguous velocity that can be measured, and to allow higher permissible scan rates. Low PRFs are desirable to increase the maximum unambiguous range that can be measured, and to provide a lower duty cycle.
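The trade-off can be quantified with the standard relations rmax = c/(2·PRF) for the maximum unambiguous range and vmax = λ·PRF/4 for the Nyquist velocity of a uniform PRF. The following minimal sketch, with illustrative PRF and wavelength values chosen here for the example, shows how quickly the two requirements conflict:

```python
# Minimal sketch of the PRF trade-off: maximum unambiguous range versus Nyquist velocity.
C = 3.0e8  # speed of light (m/s)

def max_unambiguous_range_km(prf_hz):
    """Echoes from beyond c/(2*PRF) arrive after the next pulse (second-trip echoes)."""
    return C / (2.0 * prf_hz) / 1000.0

def nyquist_velocity_ms(prf_hz, wavelength_m):
    """Largest radial velocity measurable without aliasing for a uniform PRF."""
    return wavelength_m * prf_hz / 4.0

for prf in (300, 1200):                      # low and high PRF examples
    for lam in (0.0533, 0.10):               # C band (~5.3 cm) and S band (~10 cm)
        print(f"PRF {prf} Hz, wavelength {lam * 100:.1f} cm: "
              f"r_max {max_unambiguous_range_km(prf):.0f} km, "
              f"v_max {nyquist_velocity_ms(prf, lam):.1f} m/s")
# For example, 300 Hz gives r_max = 500 km but a Nyquist velocity of only ~4 m/s at C band,
# while 1200 Hz gives ~16 m/s at the cost of r_max = 125 km.
```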

9.6.7 Antenna system, beamwidth, and speed and gain

Weather radars normally use a horn-fed antenna with a parabolic reflector to produce a focused narrow conical beam. Two important considerations are the beamwidth (angular resolution) and the power gain. For common weather radars, the size of the antenna increases with wavelength and with the narrowness of the beam required. Weather radars normally have beamwidths in the range of 0.5 to 2.0°. For a 0.5 and 1.0° beam at a C band wavelength, the antenna reflector diameter is 7.1 and 3.6 m, respectively; at S band it is 14.3 and 7.2 m. The cost of the antenna system and pedestal increases much more than linearly with reflector size, so there is an engineering and cost limit. The tower must also be appropriately chosen to support the weight of the antenna.

The desirability of having a narrow beam, to maximize the resolution and enhance the possibility of having the beam filled with target, is particularly critical for the longer ranges. For a 0.5° beam, the azimuthal (and vertical) cross-beam width at 50, 100 and 200 km range is 0.4, 0.9 and 1.7 km, respectively. For a 1.0° beam, the widths are 0.9, 1.7 and 3.5 km. Even with these relatively narrow beams, the beamwidth at the longer ranges is substantially large. The gain of the antenna is also inversely proportional to the beamwidth, and thus the narrower beams also enhance system sensitivity by a factor equal to the differential gain.

The estimates of reflectivity and precipitation require a nominal minimal number of target hits to provide an acceptable measurement accuracy. The beam must thus have a reasonable dwell time on the target in a rotating scanning mode of operation, so there are limits to the antenna rotation speed. Scanning cycles cannot be decreased without consequences: for meaningful measurements of distributed targets, the particles must have sufficient time to change their position before an independent estimate can be made. Systems generally scan at a speed in the range of about 3 to 6 rpm.

Most weather radars are linearly polarized, with the direction of the electric field vector transmitted being either horizontal or vertical. The choice is not clear cut, but the most common polarization is horizontal. Reasons for favouring horizontal polarization include: (a) sea and ground echoes are generally less with horizontal; (b) lesser sidelobes in the horizontal provide more accurate measurements in the vertical; and (c) greater backscatter from rain due to the falling drop ellipticity. However, at low elevation angles, better reflection of horizontally polarized waves from plane ground surfaces may produce an unwanted range-dependent effect.

In summary, a narrow beamwidth affects system sensitivity, detectability, horizontal and vertical resolution, effective range and measurement accuracy. The drawback of a small beamwidth is mainly cost. For these reasons, the smallest affordable beamwidth has proven to improve greatly the utility of the radar (Crozier and others, 1991).
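The cross-beam widths quoted in this subsection are simply range multiplied by the beamwidth in radians, and the gain of a pencil beam is roughly inversely proportional to the product of its two beamwidths. The sketch below is a rough illustration only; the gain formula is the ideal-aperture approximation and is not a value to be used in system design:

```python
# Illustrative sketch: cross-beam width at range, and an approximate pencil-beam gain.
import math

def cross_beam_width_km(range_km, beamwidth_deg):
    """Linear width of the beam at a given range: range * beamwidth (in radians)."""
    return range_km * math.radians(beamwidth_deg)

def approx_gain_db(beamwidth_deg):
    """Rough ideal gain of a symmetrical pencil beam, G ~ 4*pi/(theta*phi) with theta = phi;
    real antennas are typically a decibel or two lower."""
    theta = math.radians(beamwidth_deg)
    return 10.0 * math.log10(4.0 * math.pi / (theta * theta))

for bw in (0.5, 1.0):
    widths = [cross_beam_width_km(r, bw) for r in (50, 100, 200)]
    print(f"{bw} deg beam: widths at 50/100/200 km = "
          + "/".join(f"{w:.1f}" for w in widths)
          + f" km, approximate gain {approx_gain_db(bw):.0f} dB")
# 0.5 deg -> about 0.4, 0.9, 1.7 km; 1.0 deg -> about 0.9, 1.7, 3.5 km, as quoted in the text.
```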

9.6.8 Typical weather radar characteristics

As discussed, the radar characteristics and parameters are interdependent. The technical limits on the radar components and the availability of manufactured components are important considerations in the design of radar systems. The Z only radars are the conventional non-coherent pulsed radars that have been in use for decades and are still very useful. The Doppler radars are the new generation of radars that add a new dimension to the observations: they provide estimates of radial velocity. The micro-Doppler radars are radars developed for better detection of small-scale microbursts and tornadoes over very limited areas, such as for air-terminal protection.

The characteristics of typical radars used in general weather applications are given in Table 9.7.

Table 9.7. Specifications of typical meteorological radars

Type                    Z only     Doppler     Z only      Doppler      Micro-Doppler
Band                    C          C           S           S            C
Frequency (GHz)         5.6        5.6         3.0         2.8          5.6
Wavelength (cm)         5.33       5.33        10.0        10.7         5.4
Peak power (kW)         250        250         500         1 000        250
Pulse length (µs)       2.0        0.5, 2.0    0.25, 4.0   1.57, 4.5    1.1
PRF (Hz)                250–300    250–1 200   200–800     300–1 400    235–2 000
Receiver                log        log/lin     log         log/lin      log/lin
MDS (dBm)               –105       –105        –110        –113         –106
Antenna diameter (m)    3.7        6.2         3.7         8.6          7.6
Beamwidth (°)           1.1        0.6         1.8         1.0          0.5
Gain (dB)               44         48          38.5        45           51
Polarization            H          H           H           H            H
Rotation rate (rpm)     6          1–6         3           6            5

9.7 Radar installation

9.7.1 Optimum site selection

Optimum site selection for installing a weather radar is dependent on the intended use. When there is a definite zone that requires storm warnings, the best compromise is usually to locate the equipment at a distance of between 20 and 50 km from the area of interest, and generally upwind of it according to the main storm track. It is recommended that the radar be installed slightly away from the main storm track in order to avoid measurement problems when the storms pass over the radar. At the same time, this should lead to good resolution over the area of interest and permit better advance warning of the coming storms (Leone and others, 1989).

In the case of a radar network intended primarily for synoptic applications, radars at mid-latitudes should be located at a distance of approximately 150 to 200 km from one another. The distance may be increased at latitudes closer to the Equator, if the radar echoes of interest frequently reach high altitudes. In all cases, narrow-beam radars will yield the best accuracy for precipitation measurements.

The choice of radar site is influenced by many economic and technical factors, as follows:
(a) The existence of roads for reaching the radar;
(b) The availability of power and telecommunication links. It is frequently necessary to add commercially available lightning protection devices;
(c) The cost of land;
(d) The proximity to a monitoring and maintenance facility;
(e) Beam blockage obstacles must be avoided. No obstacle should be present at an angle greater than a half beamwidth above the horizon, or with a horizontal width greater than a half beamwidth;
(f) Ground clutter must be avoided as much as possible. For a radar to be used for applications at relatively short range, it is sometimes possible to find, after a careful site inspection and examination of detailed topographic maps, a relatively flat area in a shallow depression, the edges of which would serve as a natural clutter fence for the antenna pattern sidelobes with minimum blockage of the main beam. In all cases, the site survey should include a camera and optical theodolite check for potential obstacles. In certain cases, it is useful to employ a mobile radar system for confirming the suitability of the site. On some modern radars, software and hardware are available to greatly suppress ground clutter with minimum rejection of weather echoes (Heiss, McGrew and Sirmans, 1990);
(g) When the radar is required for long-range surveillance, as may be the case for tropical cyclones or other applications on the coast, it will usually be placed on a hill-top. It will see a great deal of clutter, which may not be so important at long ranges (see section 9.2.6 for clutter suppression);
(h) Every survey on potential sites should include a careful check for electromagnetic interference, in order to avoid as much as possible interference with other communication systems such as television, microwave links or other radars. There should also be confirmation that microwave radiation does not constitute a health hazard to populations living near the proposed radar site (Skolnik, 1970; Leone and others, 1989).
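Several of the siting considerations above, in particular network spacing and the useful range for quantitative precipitation work, reduce to the question of how high the beam sits above the ground at a given range. The following sketch uses the conventional 4/3 effective Earth radius model for standard refraction; the function and its default values are illustrative, not a prescribed siting tool:

```python
# Minimal sketch: height of the beam centre above the radar versus range,
# using the conventional 4/3 effective Earth radius model for standard refraction.
import math

EFFECTIVE_EARTH_RADIUS_M = 4.0 / 3.0 * 6371.0e3

def beam_height_km(range_km, elevation_deg, antenna_height_m=0.0):
    """Approximate height (km) of the beam axis above the radar site."""
    r = range_km * 1000.0
    el = math.radians(elevation_deg)
    h = math.sqrt(r * r + EFFECTIVE_EARTH_RADIUS_M ** 2
                  + 2.0 * r * EFFECTIVE_EARTH_RADIUS_M * math.sin(el))
    h = h - EFFECTIVE_EARTH_RADIUS_M + antenna_height_m
    return h / 1000.0

for rng in (50, 100, 200, 250):
    print(f"{rng} km at 0.5 deg elevation: beam centre "
          f"~{beam_height_km(rng, 0.5):.1f} km above the radar")
# Even a 0.5 deg beam is roughly 4 km above the radar at 200 km and almost 6 km at 250 km,
# which is why low-level precipitation is overshot at long range and why spacing matters.
```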

9.7.2 Telecommunications and remote displays

Recent developments in telecommunications and computer technology allow the transmission of radar data to a large number of remote displays. In particular, computer systems exist that are capable of assimilating data from many radars as well as from other data sources, such as satellites. It is also possible to monitor and to control remotely the operation of a radar, which allows unattended operation. Owing to these technical advances, in many countries, "nowcasting" is carried out at sites removed from the radar location. Pictures may be transmitted by almost any modern transmission means, such as telephone lines (dedicated or not), fibre optic links, radio or microwave links, and satellite communication channels. The most widely used transmission systems are dedicated telephone lines, because they are easily available and relatively low in cost in many countries. It should be kept in mind that radars are often located at remote sites where advanced telecommunication systems are not available. Radar pictures may now be transmitted in a few seconds due to rapid developments in communication technology. For example, a product covering a 100 km range at a resolution of 0.5 km may have a file size of 160 kBytes. Using a compression algorithm, the file size may be reduced to about 20 to 30 kBytes in GIF format. This product file can be transmitted on an analogue telephone line in less than 30 s, while using an ISDN 64 kbps circuit it may take no more than 4 s. However, the transmission of more reflectivity levels or of additional data, such as volume scans of reflectivity or Doppler data, will increase the transmission time.
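The transmission times quoted above are simple size/bandwidth arithmetic. The sketch below reproduces them; the analogue modem rate is an assumption made here for illustration:

```python
# Illustrative arithmetic: time to send a compressed radar product over a given link.
def transmission_time_s(file_size_kbytes, line_rate_kbps):
    """File size in kilobytes, line rate in kilobits per second."""
    return file_size_kbytes * 8.0 / line_rate_kbps

compressed_kb = 25  # roughly the 20-30 kB GIF product quoted in the text
for name, rate_kbps in (("analogue modem (assumed ~9.6 kbps)", 9.6),
                        ("ISDN circuit (64 kbps)", 64.0)):
    print(f"{name}: about {transmission_time_s(compressed_kb, rate_kbps):.0f} s")
# About 21 s on a 9.6 kbps line and about 3 s on ISDN, consistent with the
# "less than 30 s" and "no more than 4 s" figures in the text.
```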

9.8 Calibration and maintenance

The calibration and maintenance of any radar should follow the manufacturer's prescribed procedures. The following is an outline.

9.8.1 Calibration

Ideally, the complete calibration of reflectivity uses an external target of known radar reflectivity factor, such as a metal-coated sphere. The concept is to check if the antenna and wave guides have their nominal characteristics. However, this method is very rarely used because of the practical difficulties in flying a sphere and multiple ground reflections. Antenna parameters can also be verified by sun flux measurements. Routine calibration ignores the antenna but includes the wave guide and transmitter-receiver system. Typically, the following actions are prescribed:
(a) Measurement of emitted power and waveform in the proper frequency band;
(b) Verification of transmitted frequency and frequency spectrum;
(c) Injection of a known microwave signal before the receiver stage, in order to check if the levels of reflectivity indicated by the radar are correctly related to the power of the input;
(d) Measurement of the signal-to-noise ratio, which should be within the nominal range according to radar specifications.

If any of these calibration checks indicate any changes or biases, corrective adjustments need to be made. Doppler calibration includes: the verification and adjustment of phase stability using fixed targets or artificial signals; the scaling of the real and imaginary parts of the complex video; and the testing of the signal processor with known artificially generated signals. Levelling and elevation are best checked by tracking the position of the sun in receive-only mode and by using available sun location information; otherwise, mechanical levels on the antenna are needed. The presence or absence of echoes from fixed ground targets may also serve as a crude check of transmitter or receiver performance. Although modern radars are usually equipped with very stable electronic components, calibrations must be performed often enough to guarantee the reliability and accuracy of the data. Calibration must be carried out either by qualified personnel, or by automatic techniques such as online diagnostic and test equipment. In the first case, which requires manpower, calibration should optimally be conducted at least every week; in the second, it may be performed daily or even semi-continuously. Simple comparative checks on echo strength and location can be made frequently, using two or more overlapping radars viewing an appropriate target.

9.8.2 Maintenance

Modern radars, if properly installed and operated, should not be subject to frequent failures. Some manufacturers claim that their radars have a mean time between failures (MTBF) of the order of a year. However, these claims are often optimistic and the realization of the MTBF requires scheduled preventive maintenance. A routine maintenance plan and sufficient technical staff are necessary in order to minimize repair time. Preventive maintenance should include at least a monthly check of all radar parts subject to wear, such as gears, motors, fans and infrastructures. The results of the checks should be written in a radar logbook by local maintenance staff and, when appropriate, sent to the central maintenance facility. When there are many radars, there might be a centralized logistic supply and a repair workshop. The latter receives failed parts from the radars, repairs them and passes them on to logistics for storage as stock parts, to be used as needed in the field.

For corrective maintenance, the Service should be sufficiently equipped with the following:
(a) Spare parts for all of the most sensitive components, such as tubes, solid state components, boards, chassis, motors, gears, power supplies, and so forth. Experience shows that it is desirable to have 30 per cent of the initial radar investment in critical spare parts on the site. If there are many radars, this percentage may be lowered to about 20 per cent, with a suitable distribution between central and local maintenance;
(b) Test equipment, including the calibration equipment mentioned above. Typically, this would amount to approximately 15 per cent of the radar value;
(c) Well-trained personnel capable of identifying problems and making repairs rapidly and efficiently.

A competent maintenance organization should result in radar availability 96 per cent of the time on a yearly basis, with standard equipment. Better performances are possible at a higher cost. Recommended minimum equipment for calibration and maintenance includes the following:
(a) Microwave signal generator;
(b) Microwave power meter;
(c) MHz oscilloscope;
(d) Microwave frequency meter;
(e) Standard gain horns;
(f) Intermediate frequency signal generator;
(g) Microwave components, including loads, couplers, attenuators, connectors, cables, adapters, and so on;
(h) Versatile microwave spectrum analyser at the central facility;
(i) Standard electrical and mechanical tools and equipment.

9.9 Precipitation measurements

The measurement of precipitation by radars has been a subject of interest since the early days of radar meteorology. The most important advantage of using radars for precipitation measurements is the coverage of a large area with high spatial and temporal resolution from a single observing point and in real time.


Furthermore, the two-dimensional picture of the weather situation can be extended over a very large area by compositing data from several radars. However, only recently has it become possible to take measurements over a large area with an accuracy that is acceptable for hydrological applications. Unfortunately, a precise assessment of this accuracy is not possible – partly because no satisfactory basis of comparison is available. A common approach is to use a network of gauges as a reference against which to compare the radar estimates. This approach has an intuitive appeal, but suffers from a fundamental limitation: there is no reference standard against which to establish the accuracy of areal rainfall measured by the gauge network on the scale of the radar beam. Nature does not provide homogeneous, standard rainfall events for testing the network, and there is no higher standard against which to compare the network data. Therefore, the true rainfall for the area or the accuracy of the gauge network is not known. Indeed, there are indications that the gauge accuracy may, for some purposes, be far inferior to what is commonly assumed, especially if the estimates come from a relatively small number of raingauges (Neff, 1977). 9.9.1 Precipitation characteristics affecting radar measurements: the Z-R relation

Precipitation is usually measured by using the Z-R relation:

Z = A R^b (9.10)

where A and b are constants. The relationship is not unique and very many empirical relations have been developed for various climates or localities and storm types. Nominal and typical values for the index and exponent are A = 200, b = 1.60 (Marshall and Palmer, 1948; Marshall and Gunn, 1952). The equation is developed under a number of assumptions that may not always be completely valid. Nevertheless, history and experience have shown that the relationship in most instances provides a good estimate of precipitation at the ground unless there are obvious anomalies.
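In practice the relation is applied in the inverse direction, converting a measured reflectivity factor (commonly expressed in dBZ) into a rain rate, R = (Z/A)^(1/b). A minimal sketch using the Marshall-Palmer coefficients quoted above (illustrative only, not a recommended processing chain):

```python
# Minimal sketch: rain rate from reflectivity using Z = A * R**b (Marshall-Palmer A=200, b=1.6).
def rain_rate_mm_per_h(dbz, a=200.0, b=1.6):
    """Invert Z = A * R**b; dbz is the reflectivity factor in dBZ (Z in mm^6 m^-3)."""
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

for dbz in (20, 30, 40, 50):
    print(f"{dbz} dBZ -> about {rain_rate_mm_per_h(dbz):.1f} mm/h")
# Roughly 0.6, 2.7, 11.5 and 48.6 mm/h; the true value can easily differ by a factor of
# two because of drop-size distribution variability, as discussed in the text.
```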

There are some generalities that can be stated. At 5 and 10 cm wavelengths, the Rayleigh approximation is valid for most practical purposes unless hailstones are present. Large concentrations of ice mixed with liquid can cause anomalies, particularly near the melting level. By taking into account the refractive index factor for ice (i.e., |K|² = 0.208) and by choosing an appropriate relation between the reflectivity factor and precipitation rate (Ze against R), precipitation amounts can be estimated reasonably well in snow conditions (the value of 0.208, instead of 0.197 for ice, accounts for the change in particle diameter for water and ice particles of equal mass).

The rainfall rate (R) is a product of the mass content and the fall velocity in a radar volume. It is roughly proportional to the fourth power of the particle diameters. Therefore, there is no unique relationship between radar reflectivity and the precipitation rate, since the relationship depends on the particle size distribution. Thus, the natural variability in drop-size distributions is an important source of uncertainty in radar precipitation measurements.

Empirical Z-R relations and the variations from storm to storm and within individual storms have been the subject of many studies over the past forty years. A Z-R relation can be obtained by calculating values of Z and R from measured drop-size distributions. An alternative is to compare Z measured aloft by the radar (in which case it is called the "equivalent radar reflectivity factor" and labelled Ze) with R measured at the ground. The latter approach attempts to reflect any differences between the precipitation aloft and that which reaches the ground. It may also include errors in the radar calibration, so that the result is not strictly a Z-R relationship.

The possibility of accounting for part of the variability of the Z-R relation by stratifying storms according to rain type (such as convective, noncellular, orographic) has received a good deal of attention. No great improvements have been achieved and questions remain as to the practicality of applying this technique on an operational basis. Although variations in the drop-size distribution are certainly important, their relative importance is frequently overemphasized. After some averaging over time and/or space, the errors associated with these variations will rarely exceed a factor of two in rain rate. They are the main sources of the variations in well-defined experiments at near ranges. However, at longer ranges, errors caused by the inability to observe the precipitation close to the ground, and by incomplete beam-filling, are usually dominant. These errors, despite their importance, have been largely ignored.

Because of growth or evaporation of precipitation, air motion and change of phase (ice and water in the melting layer, or bright band), highly variable vertical reflectivity profiles are observed, both within a given storm and from storm to storm. Unless the beamwidth is quite narrow, this will lead to a non-uniform distribution of reflectivity within the radar sample volume. In convective rainfall, experience shows that there is less difficulty with the vertical profile problem. However, in stratiform rain or snow, the vertical profile becomes more important. With increasing range, the beam becomes wider and higher above the ground. Therefore, the differences between estimates of rainfall by radar and the rain measured at the ground also increase. Reflectivity usually decreases with height; therefore, rain is underestimated by radar for stratiform or snow conditions. At long ranges, for low-level storms, and especially when low antenna elevations are blocked by obstacles such as mountains, the underestimate may be severe. This type of error often tends to dominate all others. This is easily overlooked when observing storms at close ranges only, or when analysing storms that are all located at roughly the same range.

These and other questions, such as the choice of the wavelength, errors caused by attenuation, considerations when choosing a radar site for hydrological applications, hardware calibration of radar systems, sampling and averaging, and meteorological adjustment of radar data, are discussed in Joss and Waldvogel (1990), Smith (1990) and Sauvageot (1994). The following considers only rainfall measurements; little operational experience is available about radar measurements of snow, and even less about measurements of hail.

9.9.2 Measurement procedures

The basic procedure for deducing rainfall rates from measured radar reflectivities for hydrological applications requires the following steps:
(a) Making sure that the hardware is stable by calibration and maintenance;
(b) Correcting for errors using the vertical reflectivity profile;
(c) Taking into account all the information about the Ze-R relationship and deducing the rainfall;
(d) Adjustment with raingauges.

The first three parts are based on known physical factors, and the last one uses a statistical approach to compensate for residual errors. This allows the statistical methods to work most efficiently. In the past, a major limitation on carrying out these steps was caused by analogue circuitry and photographic techniques for data recording and analyses. It was, therefore, extremely difficult to determine and make the necessary adjustments, and certainly not in real time. Today, the data may be obtained in three dimensions in a manageable form, and the computing power is available for accomplishing these tasks. Much of the current research is directed towards developing techniques for doing so on an operational basis (Ahnert and others, 1983).

The methods of approach for (b) to (d) above and the adequacy of results obtained from radar precipitation measurement greatly depend on the situation. This can include the specific objective, the geographic region to be covered, the details of the application, and other factors. In certain situations, an interactive process is desirable, such as that developed for FRONTIERS and described in Appendix A of Joss and Waldvogel (1990). It makes use of all pertinent information available in modern weather data centres.

To date, no one method of compensating for the effects of the vertical reflectivity profile in real time is widely accepted ((b) above). However, three compensation methods can be identified (a simple sketch of the first is given after this list):
(a) Range-dependent correction: The effect of the vertical profile is associated with the combination of increasing height of the beam axis and spreading of the beam with range. Consequently, a climatological mean range-dependent factor can be applied to obtain a first-order correction. Different factors may be appropriate for different storm categories, for example, convective versus stratiform;
(b) Spatially-varying adjustment: In situations where the precipitation characteristics vary systematically over the surveillance area, or where the radar coverage is non-uniform because of topography or local obstructions, corrections varying with both azimuth and range may be useful. If sufficient background information is available, mean adjustment factors can be incorporated in suitable look-up tables. Otherwise, the corrections have to be deduced from the reflectivity data themselves or from comparisons with gauge data (a difficult proposition in either case);
(c) Full vertical profiles: The vertical profiles in storms vary with location and time, and the lowest level visible to the radar usually varies because of irregularities in the radar horizon. Consequently, a point-by-point correction process using a representative vertical profile for each zone of concern may be needed to obtain the best results. Representative profiles can be obtained from the radar volume scan data themselves, from climatological summaries, or from storm models. This is the most complex approach but can be implemented with modern data systems (Joss and Lee, 1993).
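As an illustration of option (a), the sketch below applies a climatological mean range-dependent factor to a radar rain-rate estimate. The bin edges and correction factors are invented placeholders; in an operational system they would be derived from long-term profile or gauge/radar statistics:

```python
# Illustrative sketch of option (a): a climatological range-dependent correction.
# The correction factors below are hypothetical placeholders for illustration only;
# in practice they would come from long-term vertical-profile or gauge/radar statistics.
import bisect

RANGE_BIN_EDGES_KM = [50, 100, 150, 200]          # upper edges of the range bins
CORRECTION_FACTORS = [1.0, 1.1, 1.4, 2.0, 3.0]    # multiplicative factor per bin

def corrected_rain_rate(rain_rate_mm_h, range_km):
    """Multiply the radar estimate by the climatological factor for its range bin."""
    i = bisect.bisect_right(RANGE_BIN_EDGES_KM, range_km)
    return rain_rate_mm_h * CORRECTION_FACTORS[i]

print(corrected_rain_rate(2.0, 30))    # near range: unchanged (2.0 mm/h)
print(corrected_rain_rate(2.0, 180))   # long range: boosted to compensate for overshooting
```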

After making the profile corrections, a reflectivity/rain-rate relationship should be used which is appropriate to the situation, geography and season, in order to deduce the value of R ((c) in the first paragraph of this section). There is general agreement that comparisons with gauges should be made routinely, as a check on radar performance, and that appropriate adjustments should be made if a radar bias is clearly indicated ((d) in the first paragraph of this section). In situations where radar estimates are far from the mark due to radar calibration or other problems, such adjustments can bring about significant improvements. However, the adjustments do not automatically ensure improvements in radar estimates, and sometimes the adjusted estimates are poorer than the original ones. This is especially true for convective rainfall, where the vertical extent of echo mitigates the difficulties associated with the vertical profile, and the gauge data are suspect because of unrepresentative sampling. Also, the spatial decorrelation distance may be small, and the gauge-radar comparison becomes increasingly inaccurate with distance from the gauge. A general guideline is that the adjustments will produce consistent improvements only when the systematic differences (that is, the bias) between the gauge and radar rainfall estimates are larger than the standard deviation of the random scatter of the gauge versus radar comparisons. This guideline makes it possible to judge whether gauge data should be used to make adjustments and leads to the idea that the available data should be tested before any adjustment is actually applied. Various methods for accomplishing this have been explored, but at this time there is no widely accepted approach.

Various techniques for using polarization diversity radar to improve rainfall measurements have been proposed. In particular, it has been suggested that the difference between reflectivities measured at horizontal and vertical polarization (ZDR) can provide useful information about drop-size distributions (Seliga and Bringi, 1976). An alternative method is to use KDP, which depends on large oblate spheroids distorting the shape of the transmitted wave. The method depends on the hydrodynamic distortions of the shapes of large raindrops, with more intense rainfalls with larger drops giving stronger polarization signatures. There is still considerable controversy, however, as to whether

this technique has promise for operational use for precipitation measurement (English and others, 1991). At close ranges (with high spatial resolution), polarization diversity radars may give valuable information about precipitation particle distributions and other parameters pertinent to cloud physics. At longer ranges, it is impossible to be sure that the radar beam is filled with a homogeneous distribution of hydrometeors. Consequently, the uncertainty in the empirical relationship between the polarimetric signature and the drop-size distribution increases. Of course, knowing more about Z-R will help, but, even if multiparameter techniques worked perfectly well, the error caused by Z-R could be reduced only from 33 to 17 per cent, as shown by Ulbrich and Atlas (1984). For short-range hydrological applications, the corrections for other biases (already discussed) are usually much greater, perhaps by an order of magnitude or more.

9.9.3 State of the art and summary

Over the years, much research has been directed towards exploring the potential of radars as an instrument for measuring rain. In general, radar measurements of rain, deduced from an empirical Z-R relation, agree well with gauge measurements for ranges close to the radar. Increased variability and underestimation by the radar occur at longer ranges. For example, the Swiss radar estimates, at a range of 100 km on average, only 25 per cent of the actual raingauge amount, despite the fact that it measures 100 per cent at close ranges. Similar, but not quite so dramatic, variations are found in flat country or in convective rain. The reasons are the Earth curvature, shielding by topography and the spread of the radar beam with range. Thus, the main shortcoming in using radars for precipitation measurements and for hydrology in operational applications comes from the inability to measure precipitation close enough to the ground over the desired range of coverage. Because this problem often does not arise in well-defined experiments, it has not received the attention that it deserves as a dominant problem in operational applications. Thanks to the availability of inexpensive, high-speed data-processing equipment, it is now possible to determine the echo distribution in the whole radar coverage area in three dimensions. This knowledge, together with knowledge about the position of the radar and the orography around it, makes it possible


to correct in real time for a large fraction of – or at least to estimate the magnitude of – the vertical profile problem. This correction allows extension of the region in which accuracy acceptable for many hydrological applications is obtained. To make the best possible use of radars, the following rules should be respected: (a) The radar site should be chosen such that precipitation is seen by the radar as close as possible to the ground. “Seen” means here that there is no shielding or clutter echoes, or that the influence of clutter can be eliminated, for instance by Doppler analysis. This condition may frequently restrict the useful radar range for quantitative work to the nearest 50 to 100 km; (b) Wavelength and antenna size should be chosen such that a suitable compromise between attenuation caused by precipitation and good spatial resolution is achieved. At longer ranges, this may require a shorter wavelength to achieve a sufficiently narrow beam, or a larger antenna if S band use is necessary, due to frequent attenuation by huge intense cells; (c) Systems should be rigorously maintained and quality controlled, including by ensuring the sufficient stability and calibration of equipment; (d) Unless measurements of reflectivity are taken immediately over the ground, they should be corrected for errors originating from the vertical profile of reflectivity. As these profiles change with time, reflectivity should be monitored continuously by the radar. The correction may need to be calculated for each pixel, as it depends on the height of the lowest visible volume above the ground. It is important that the correction for the vertical reflectivity profile, as it is the dominant one at longer ranges, should be carried out before any other adjustments; (e) The sample size must be adequate for the application. For hydrological applications, and especially when adjusting radar estimates with gauges, it is desirable to integrate the data over a number of hours and/or square kilometres. Integration has to be performed over the desired quantity (the linear rainfall rate R) to avoid any bias caused by this integration. Even a crude estimate of the actual vertical reflectivity profile can produce an important improvement. Polarimetric measurements may provide some further improvement, but it has yet to be demonstrated that the additional cost and

complexity and risk of misinterpreting polarization measurements can be justified for operational applications in hydrology.

The main advantages of radars are their high spatial and temporal resolution, wide area coverage and immediacy (real-time data). Radars also have the capability of measuring over inaccessible areas, such as lakes, and of following a "floating target" or a "convective complex" in a real-time sequence, for instance, to make a short-term forecast. Although radar is less well suited to giving absolute accuracy in measuring rain amounts, good quantitative information is already obtained from radar networks in many places. It is unlikely that radars will ever completely replace the raingauge, since gauges provide additional information and are essential for adjusting and/or checking radar indications. On the other hand, as many specialists have pointed out, an extremely dense and costly network of gauges would be needed to obtain a resolution that would be easily attainable with radars.

9.9.4 Area-time integral technique

Climatological applications not requiring real-time data can take advantage of the close relationship between the total amount of rainfall and the area and duration of a rain shower (Byers, 1948; Leber, Merrit and Robertson, 1961). Without using a Z-R relationship, Doneaud and others (1984; 1987) found a linear relationship between the rained-upon area and the total rainfall within that area with a very small dispersion. This relationship is dependent on the threshold selected to define the rain area. While this has limited use in real-time short-term forecasting applications, its real value should be in climatological studies and applications.
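A minimal sketch of the idea, assuming reflectivity maps are available as two-dimensional arrays; the echo threshold and the linear coefficient are illustrative assumptions, not values taken from the studies cited above:

```python
# Illustrative sketch of the area-time integral (ATI) idea: total rain volume is taken to be
# roughly proportional to the area-time integral of echo above a fixed threshold.
import numpy as np

def area_time_integral_km2_h(reflectivity_maps_dbz, pixel_area_km2, scan_interval_h,
                             threshold_dbz=25.0):
    """Sum, over all scans, of (echo area above threshold) * (time represented by the scan)."""
    ati = 0.0
    for scan in reflectivity_maps_dbz:               # one 2-D dBZ array per scan
        area = np.count_nonzero(scan >= threshold_dbz) * pixel_area_km2
        ati += area * scan_interval_h
    return ati

def estimated_rain_volume_mm_km2(ati_km2_h, coefficient_mm_per_h=3.7):
    """Linear ATI relation; the coefficient here is a placeholder, not a published value."""
    return coefficient_mm_per_h * ati_km2_h

scans = [np.random.uniform(0, 50, size=(100, 100)) for _ in range(6)]  # fake 10-min scans
ati = area_time_integral_km2_h(scans, pixel_area_km2=4.0, scan_interval_h=1.0 / 6.0)
print(f"ATI = {ati:.0f} km^2 h, estimated volume ~ {estimated_rain_volume_mm_km2(ati):.0f} mm km^2")
```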

9.10 Severe weather detection and nowcasting applications

9.10.1 Utilization of reflectivity information

The most commonly used criterion for radar detection of potentially severe thunderstorms today is reflectivity intensity. Operational forecasters are advised to look for regions of high reflectivities (50 dBZ or greater). These include the spiral-bands and eyewall structures that identify tropical cyclones. Hook or finger-like echoes, overhangs and other echo shapes obtained from radar volume scans are used to warn of tornadoes or severe thunderstorms


(Lemon, Burgess and Brown, 1978), but the false alarm rate is high. Improved severe thunderstorm detection has been obtained recently through the processing of digital reflectivity data obtained by automatic volume-scanning at 5 to 10 minute update rates. Reflectivity mass measurements such as vertically integrated liquid and severe weather probability have led to improved severe thunderstorm detection and warning, especially for hail. Many techniques have been proposed for identifying hail with 10 cm conventional radar, such as the presence of 50 dBz echo at 3 or 8 km heights (Dennis, Schock and Koscielski, 1970; Lemon, Burgess and Brown, 1978). However, verification studies have not yet been reported for other parts of the world. Federer and others (1978) found that the height of the 45 dBz contour must exceed the height of the zero degree level by more than 1.4 km for hail to be likely. An extension of this method has been verified at the Royal Netherlands Meteorological Institute and is being used operationally (Holleman, and others, 2000; Holleman, 2001). A different approach towards improved hail detection involves the application of dual-wavelength radars – usually X and S bands (Eccles and Atlas, 1973). The physics of what the radar sees at these various wavelengths is crucial for understanding the strengths and limitations of these techniques (hydrometeor cross-section changes or intensity distribution). Studies of polarization diversity show some promise of improved hail detection and heavy rainfall estimation based upon differential reflectivity (ZDR) as measured by a dual-polarization Doppler radar (Seliga and Bringi, 1976). Since the late 1970s, computer systems have been used to provide time lapse and zoom capabilities for radar data. The British FRONTIERS system (Browning and Collier, 1982; Collier, 1989), the Japanese AMeDAS system, the French ARAMIS system (Commission of the European Communities, 1989) and the United States PROFS system allow the user to interact and produce composite colour displays from several remote radars at once, as well as to blend the radar data with other types of information. The synthesis of radar data with raingauge data provides a powerful nowcasting product for monitoring rainfall. “Radar-AMeDAS Precipitation Analysis” is one of the products provided in Japan (Makihara, 2000). Echo intensity obtained from a radar network is converted into precipitation rate using a Ze-R relationship, and 1 h precipitation amount is estimated from the precipitation rate.

The estimated amounts are then calibrated using raingauge precipitation amounts to provide a map of 1 h precipitation amount with high accuracy.

9.10.2 Utilization of Doppler information

The best method for measuring winds inside precipitation is the multiple Doppler method, which has been deployed since the mid-1970s for scientific field programmes of limited duration. However, real-time operational use of dual- or triple-Doppler analyses is not anticipated at present because of spatial coverage requirements. An exception may be the limited area requirements of airports, where a bistatic system may be useful (Wurman, Randall and Burghart, 1995). The application of Doppler radar to real-time detection and tracking of severe thunderstorms began in the early 1970s. Donaldson (1970) was probably the first to identify a vortex flow feature in a severe thunderstorm. Quasi-operational experiments have demonstrated that a very high percentage of these single-Doppler vortex signatures are accompanied by damaging hail, strong straight wind or tornadoes (Ray and others, 1980; JDOP, 1979). Since then, the existence of two useful severe storm features with characteristic patterns or “signatures” has become apparent. The first was that of a mesocyclone, which is a vertical column of rising rotating air typically 2 to 10 km in diameter. The mesocyclone signature (or velocity couplet) is observed forming in the mid-levels of a storm and descending to cloud base, coincident with tornado development (Burgess, 1976; Burgess and Lemon, 1990). This behaviour has led to improved tornado warning lead times, of 20 min or longer, during quasi-operational experiments in Oklahoma (JDOP, 1979). Most of the Doppler observances have been made in the United States, and it is not known if this signature can be generalized yet. During experiments in Oklahoma, roughly 50 per cent of all mesocyclones produced verified tornadoes; also, all storms with violent tornadoes formed in environments with strong shear and possessed strong mesocyclones (Burgess and Lemon, 1990). The second signature – the tornado vortex signature (TVS) – is produced by the tornado itself. It is the location of a very small circulation embedded within the mesocyclone. In some cases, the TVS has been detected aloft nearly half an hour or more before a tornado touched the ground. Several years of experience with TVS have demonstrated its great utility for determining tornado location, usually


within ±1 km. It is estimated that 50 to 70 per cent of the tornadoes east of the Rocky Mountain high plains in the United States can be detected (Brown and Lemon, 1976). Large Doppler spectrum widths (second moment) have been identified with tornado location. However, large values of spectrum width have also been well correlated with large values during storm turbulence. Divergence calculated from the radial velocity data appears to be a good measure of the total divergence. Estimations of storm-summit radial divergence match those of the echo-top height, which is an updraft strength indicator. Quasi-operational Doppler experiments have shown that an increase in divergence magnitude is likely to be the earliest indicator that a storm is becoming severe. Moreover, large divergence values near the storm top were found to be a useful hail indicator. Low-level divergence signatures of downbursts have been routinely made with terminal Doppler weather radars for the protection of aircraft during take off and landing. These radars are specially built for limited area surveillance and repeated rapid scanning of the air space around the airport terminals. The microburst has a life cycle of between 10 to 20 min, which requires specialized radar systems for effective detection. In this application, the radar-computer system automatically provides warnings to the air-traffic control tower (Michelson, Schrader and Wieler, 1990). Doppler radar studies of the role of boundary layer convergence lines in new thunderstorm formations support earlier satellite cloud-arc studies. There are indications that mesoscale boundary-layer convergence lines (including intersecting gust fronts from prior convection) play a major role in determining where and when storms will form. Wilson and Schreiber (1986) have documented and explained several cases of tornado genesis by nonprecipitation induced wind shear lines, as observed by Doppler radar (Mueller and Carbone, 1987). Recent improvements in digital radar data-processing and display techniques have led to the development of new quantitative, radar-based products for hydrometeorological applications. A number of European countries and Japan are using such radar products with numerical models for operational flood forecasting and control (for example, see Cluckie and Owens, 1987). Thus, major advances now appear possible in the 0 to 2 h time-specific forecasts of thunderstorms. The development of this potential will require

the efficient integration of Doppler radar, highresolution satellite data, and surface and sounding data. Doppler radars are particularly useful for monitoring tropical cyclones and providing data on their eye, eyewall and spiral-band dynamic evolution, as well as the location and intensity of hurricane-force winds (Ruggiero and Donaldson, 1987; Baynton, 1979).

9.11 High frequency radars for ocean surface measurements

Radio signals in the high-frequency radio band (from 3 to 30 MHz) are backscattered from waves on the sea surface, and their frequency is Doppler shifted. They can be detected by a high-frequency radar set-up to observe them. The strength of the returned signal is due to constructive interference of the rays scattered from successive sea waves spaced so that the scattered rays are in resonance, as occurs in a diffraction grating. In the case of grazing incidence, the resonance occurs when the sea wavelength is half the radio wavelength. The returned signal is Doppler shifted because of the motion of the sea waves. From the Doppler spectrum it is possible to determine the direction of motion of the sea waves, with a left-right ambiguity across the direction of the beam that can be resolved by making use of other information, such as a first-guess field. If the sea waves are in equilibrium with the surface wind, this yields the wind direction; this is the basic sea measurement taken with high-frequency radar. Analysis of the returned spectrum can be developed further to yield the spectrum of sea waves and an indication of wind speed. Measurements can be obtained up to 200 km or more with ground-wave radars, and up to 3 000 km or more with sky-wave radars (using reflection from the ionosphere). The latter are known as over-the-horizon radars. Most operational high frequency radars are military, but some are used to provide routine wind direction data, over very wide areas, to Hydrometeorological Services. Accounts of high frequency radars with meteorological applications, with extensive further references, are given in Shearman (1983), Dexter, Heron and Ward (1982), Keenan and Anderson (1987), and Harlan and Georges (1994).
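The resonance condition and the corresponding Doppler shift of the sea echo follow from the relations stated above: the resonant ocean wavelength is half the radio wavelength, and for deep-water gravity waves the first-order Bragg lines appear at ±sqrt(g/(π·λradio)). A small sketch of these standard relations (the chosen operating frequencies are only examples):

```python
# Small sketch: Bragg-resonant sea wavelength and first-order Bragg Doppler shift
# for an HF radar at grazing incidence (deep-water gravity-wave dispersion assumed).
import math

G = 9.81   # gravitational acceleration (m/s^2)
C = 3.0e8  # speed of light (m/s)

def bragg_sea_wavelength_m(radar_freq_mhz):
    """Resonant ocean wavelength: half the radio wavelength."""
    return C / (radar_freq_mhz * 1.0e6) / 2.0

def bragg_doppler_shift_hz(radar_freq_mhz):
    """First-order Bragg line frequency, sqrt(g / (pi * radio wavelength))."""
    radio_wavelength = C / (radar_freq_mhz * 1.0e6)
    return math.sqrt(G / (math.pi * radio_wavelength))

for f in (5.0, 15.0, 30.0):   # typical HF operating frequencies (MHz)
    print(f"{f:4.1f} MHz: resonant sea wavelength {bragg_sea_wavelength_m(f):5.1f} m, "
          f"Bragg shift +/-{bragg_doppler_shift_hz(f):.2f} Hz")
# The relative strength of the approaching and receding Bragg lines indicates whether the
# resonant waves (and, in equilibrium, the wind) run towards or away from the radar.
```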


References and further reading
Ahnert, P.R., M. Hudlow, E. Johnson, D. Greene and M. Dias, 1983: Proposed on-site processing system for NEXRAD. Preprints of the Twenty-first Conference on Radar Meteorology (Edmonton, Canada), American Meteorological Society, Boston, pp. 378–385. Aoyagi, J., 1983: A study on the MTI weather radar system for rejecting ground clutter. Papers in Meteorology and Geophysics, Volume 33, Number 4, pp. 187–243. Aoyagi, J. and N. Kodaira, 1995: The reflection mechanism of radar rain echoes. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 246–248. Atlas, D., 1964: Advances in radar meteorology. Advances in Geophysics (H.E. Landsberg and J. Van Meighem, eds.), Volume 10, Academic Press, New york, pp. 317–479. Atlas, D. (ed.), 1990: Radar in Meteorology. American Meteorological Society, Boston. Atlas, D., R.C. Scrivastava and R.S. Sekhon, 1973: Doppler radar characteristics of precipitation at vertical incidence. Reviews of Geophysics and Space Physics, Volume 11, Number 1, pp. 1–35. Battan, L.J., 1981: Radar Observation of the Atmosphere. University of Chicago Press, Chicago. Baynton, H.W., 1979: The case for Doppler radars along our hurricane affected coasts. Bulletin of the American Meteorological Society, Volume 60, pp. 1014–1023. Bean, B.R. and E.J. Dutton, 1966: Radio Meteorology. Washington DC, U.S. Government Printing Office. Bebbington, D.H.O., 1992: Degree of Polarization as a Radar Parameter and its Susceptibility to Coherent Propagation Effects. Preprints from URSI Commission F Symposium on Wave Propagation and Remote Sensing (Ravenscar, United Kingdom) pp. 431–436. Bringi, V.N. and A. Hendry, 1990: Technology of polarization diversity radars for meteorology. In: Radar in Meteorology (D. Atlas, ed.) American Meteorological Society, Boston, pp. 153–190. Browning, K.A. and R. Wexler, 1968: The determination of kinetic properties of a wind field using Doppler radar. Journal of Applied Meteorology, Volume 7, pp. 105–113. Brown, R.A. and L.R. Lemon, 1976: Single Doppler radar vortex recognition: Part II – Tornadic vortex signatures. Preprints of the Seventeenth Conference on Radar Meteorology (Seattle, Washington), American Meteorological Society, Boston, pp. 104–109. Browning, K.A. and C.G. Collier, 1982: An integrated radar-satellite nowcasting system in the United Kingdom. In: Nowcasting (K.A. Browning, ed.). Academic Press, London, pp. 47–61. Browning, K.A., C.G. Collier, P.R. Larke, P. Menmuir, G.A. Monk and R.G. Owens, 1982: On the forecasting of frontal rain using a weather radar network. Monthly Weather Review, Volume 110, pp. 534–552. Burgess, D.W., 1976: Single Doppler radar vortex recognition: Part I – Mesocyclone signatures. Preprints of the Seventeenth Conference on Radar Meteorology, (Seattle, Washington), American Meteorological Society, Boston, pp. 97–103. Burgess, D.W. and L.R. Lemon, 1990: Severe thunderstorm detection by radar. In Radar in Meteorology (D. Atlas, ed.). American Meteorological Society, Boston, pp. 619–647. Burrows, C.R. and S.S. Attwood, 1949: Radio Wave Propagation. Academic Press, New york. Byers, H.R., 1948: The use of radar in determining the amount of rain falling over a small area. Transactions of the American Geophysical Union, pp. 187–196. Cluckie, I.D. and M.E. Owens, 1987: Real-time Rainfall Run-off Models and Use of Weather Radar Information. In: Weather Radar and Flood Forecasting (V.K. Collinge and C. Kirby, eds). John Wiley and Sons, New york. 
Collier, C.G., 1989: Applications of Weather Radar Systems: A Guide to Uses of Radar Data in Meteorology and Hydrology. John Wiley and Sons, Chichester, England. Commission of the European Communities, 1990: Une revue du programme ARAMIS (J.L. Cheze). Seminar on Cost Project 73: Weather Radar Networking (Brussels, 5–8 September 1989), pp. 80–85. Crozier, C.L., P. Joe, J. Scott, H. Herscovitch and T. Nichols, 1991: The King City operational Doppler radar: Development, all-season applications and forecasting. Atmosphere-Ocean, Volume, 29, pp. 479–516. Dennis, A.S., C.A. Schock, and A. Koscielski, 1970: Characteristics of hailstorms of western South Dakota. Journal of Applied Meteorology, Volume 9, pp. 127–135. Dexter, P.E., M.L. Heron and J.F. Ward, 1982: Remote sensing of the sea-air interface using HF radars. Australian Meteorological Magazine, Volume 30, pp. 31–41.


Donaldson, R.J., Jr., 1970: Vortex signature recognition by a Doppler radar. Journal of Applied Meteorology, Volume 9, pp. 661–670. Doneaud, A.A., S. Ionescu-Niscov, D.L. Priegnitz and P.L. Smith, 1984: The area-time integral as an indicator for convective rain volumes. Journal of Climate and Applied Meteorology, Volume 23, pp. 555–561. Doneaud, A.A., J.R. Miller Jr., L.R. Johnson, T.H. Vonder Haar and P. Laybe, 1987: The area-time integral technique to estimate convective rain volumes over areas applied to satellite data: A preliminary investigation. Journal of Climate and Applied Meteorology, Volume 26, pp. 156–169. Doviak, R.J. and D.S. zrnic, 1993: Doppler Radar and Weather Observations. Second edition, Academic Press, San Diego. Eccles, P.J. and D. Atlas, 1973: A dual-wavelength radar hail detector. Journal of Applied Meteorology, Volume 12, pp. 847–854. Eilts, M.D. and S.D. Smith, 1990: Efficient dealiasing of Doppler velocities using local environment constraints. Journal of Atmospheric and Oceanic Techonology, Volume 7, pp. 118–128. English, M.E., B. Kochtubajda, F.D. Barlow, A.R. Holt and R. McGuiness, 1991: Radar measurement of rainfall by differential propagation phase: A pilot experiment. Atmosphere-Ocean, Volume 29, pp. 357–380. Federer, B., A. Waldvogel, W. Schmid, F. Hampel, E. Rosini, D. Vento and P. Admirat, 1978: Grossversuch IV: Design of a randomized hail suppression experiment using the Soviet method. Pure and Applied Geophysics, 117, pp. 548–571. Gossard, E.E. and R.G. Strauch, 1983: Radar Observations of Clear Air and Clouds. Elsevier Scientific Publication, Amsterdam. Harlan, J.A. and T.M. Georges, 1994: An empirical relation between ocean-surface wind direction and the Bragg line ratio of HF radar sea echo spectra. Journal of Geophysical Research, Volume 99, C4, pp. 7971–7978. Heiss, W.H., D.L. McGrew, and D. Sirmans, 1990: NEXRAD: Next generation weather radar (WSR88D). Microwave Journal, Volume 33, Number 1, pp. 79–98. Holleman, I., 2001: Hail Detection Using Single-polarization Radar. Scientific Report, Royal Netherlands Meteorological Institute (KNMI) WR-2001-01, De Bilt. Holleman, I., H.R.A. Wessels, J.R.A. Onvlee and S.J.M. Barlag, 2000: Development of a hail-detection product. Physics and Chemistry of the Earth, Part B, 25, pp. 1293–1297.

Holt, A.R., M. Chandra, and S.J. Wood, 1995: Polarisation diversity radar observations of storms at C-Band. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 188–189. Holt, A.R., P.I. Joe, R. McGuinness and E. Torlaschi, 1993: Some examples of the use of degree of polarization in interpreting weather radar data. Proceedings of the Twenty-sixth International Conference on Radar Meteorology, American Meteorological Society, pp. 748–750. Joe, P., R.E. Passarelli and A.D. Siggia, 1995: Second trip unfolding by phase diversity processing. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 770–772. Joint Doppler Operational Project, 1979: Final Report on the Joint Doppler Operational Project. NOAA Technical Memorandum, ERL NSSL-86, Norman, Oklahoma, National Severe Storms Laboratory. Joss, J. and A. Waldvogel, 1990: Precipitation measurement and hydrology. In: Radar in Meteorology (D. Atlas, ed.), American Meteorological Society, Boston, pp. 577–606. Joss, J. and R.W. Lee, 1993: Scan strategy, clutter suppression calibration and vertical profile corrections. Preprints of the Twenty-sixth Conference on Radar Meteorology (Norman, Oklahoma), American Meteorological Society, Boston, pp. 390–392. Keeler, R.J., C.A. Hwang and E. Loew, 1995: Pulse compression weather radar waveforms. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 767–769. Keenan, T.D. and S.J. Anderson, 1987: Some examples of surface wind field analysis based on Jindalee skywave radar data. Australian Meteorological Magazine, 35, pp. 153–161. Leber, G.W., C.J. Merrit, and J.P. Robertson, 1961: WSR-57 Analysis of Heavy Rains. Preprints of the Ninth Weather Radar Conference, American Meteorological Society, Boston, pp. 102–105. Lee, R., G. Della Bruna and J. Joss, 1995: Intensity of ground clutter and of echoes of anomalous propagation and its elimination. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 651–652. Lemon, L.R., D.W. Burgess and R.A. Brown, 1978: Tornadic storm airflow and morphology derived from single-Doppler radar measurements. Monthly Weather Review, Volume 106, pp. 48–61.


Leone, D.A., R.M. Endlich, J. Petriceks, R.T.H. Collis and J.R. Porter, 1989: Meteorological considerations used in planning the NEXRAD network. Bulletin of the American Meteorological Society, Volume, 70, pp. 4–13. Lhermitte, R. and D. Atlas, 1961: Precipitation motion by pulse Doppler radar. Preprints of the Ninth Weather Radar Conference, American Meteorological Society, Boston, pp. 218–233. Makihara, y., 2000: Algorithms for precipitation nowcasting focused on detailed analysis using radar and raingauge data. Technical Report of the Meteorological Research Institute, JMA, 39, pp. 63–111. Marshall, J.S. and K.L.S. Gunn, 1952: Measurement of snow parameters by radar. Journal of Meteorology, Volume 9, pp. 322–327. Marshall, J.S. and W.M. Palmer, 1948: The distribution of raindrops with size. Journal of Meteorology, Volume 5, pp. 165–166. Melnikov, V., D.S. zrnic, R.J. Dovink and J.K. Carter, 2002: Status of the dual polarization upgrade on the NOAA’s research and development WSR-88D. Preprints of the Eighteenth International Conference on Interactive Information Processing Systems (Orlando, Florida), American Meteorological Society, Boston, pp. 124–126. Michelson, M., W.W. Schrader and J.G. Wilson, 1990: Terminal Doppler weather radar. Microwave Journal, Volume 33, Number 2, pp. 139–148. Mie, G., 1908: Beiträge zur Optik träber Medien, speziell kolloidaler Metalläsungen. Annalen der Physik, 25, pp. 377–445. Mueller, C.K. and R.E. Carbone, 1987: Dynamics of a thunderstorm outflow. Journal of the Atmospheric Sciences, Vo l u m e 44, pp. 1879–1898. Mueller, E.A., S.A. Rutledge, V.N. Bringi, D. Brunkow, P.C. Kennedy, K. Pattison, R. Bowie and V. Chandrasekar, 1995: CSU-CHILL radar upgrades. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 703–706. Neff, E.L., 1977: How much rain does a rain gage gage? Journal of Hydrology, Volume 35, pp. 213–220. Passarelli, R.E., Jr., P. Romanik, S.G. Geotis and A.D. Siggia, 1981: Ground clutter rejection in the frequency domain. Preprints of the Twentieth Conference on Radar Meteorology (Boston, Massachusetts), American Meteorological Society, Boston, pp. 295–300. Probert-Jones, J.R., 1962: The radar equation in meteorology. Quarterly Journal of the Royal Meteorological Society, Volume 88, pp. 485–495.

Ray, P.S., C.L. ziegler, W. Bumgarner and R.J. Serafin, 1980: Single- and multiple-Doppler radar observations of tornadic storms. Monthly Weather Review, Volume 108, pp. 1607–1625. Rinehart, R.E., 1991: Radar for Meteorologists. Grand Forks, North Dakota, University of North Dakota, Department of Atmopheric Sciences. Ruggiero, F.H. and R.J. Donaldson, Jr., 1987: Wind field derivatives: A new diagnostic tool for analysis of hurricanes by a single Doppler radar. Preprints of the Seventeenth Conference on Hurricanes and Tropical Meteorology (Miami, Florida), American Meteorological Society, Boston, pp. 178–181. Sauvageot, H., 1982: Radarmétéorologie. Eyrolles, Paris. Sauvageot, H., 1994: Rainfall measurement by radar: A review. Atmospheric Research, Volume 35, pp. 27–54. Seliga, T.A. and V.N. Bringi, 1976: Potential use of radar differential reflectivity measurements at orthogonal polarizations for measuring precipitation. Journal of Applied Meteorology, Volume 15, pp. 69–76. Shearman, E.D. R., 1983: Radio science and oceanography. Radio Science, Volume 18, Number 3, pp. 299–320. Skolnik, M.I. (ed.), 1970: Radar Handbook. McGrawHill, New york. Smith, P.L., 1990: Precipitation measurement and hydrology: Panel report. In: Radar in Meteorology. (D. Atlas, ed.), American Meteorological Society, Boston, pp. 607–618. Smith, P.L., 1995: Dwell time considerations for weather radars. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 760–762. Strauch, R.G., 1981: Comparison of meteorological Doppler radars with magnetron and klystron transmitters. Preprints of the Twentieth Conference on Radar Meteorology (Boston, Massachusetts), American Meteorological Society, Boston, pp. 211–214. Ulbrich, C.W. and D. Atlas, 1984: Assessment of the contribution of differential polarization to improve rainfall measurements. Radio Science, Volume 19, Number 1, pp. 49–57. Wilson, J.W. and W.E. Schreiber, 1986: Initiation of convective storms at radar-observed boundary-layer convergence lines. Monthly Weather Review, Volume 114, pp. 2516–2536. Wood, V.T. and R.A. Brown, 1986: Single Doppler velocity signature interpretation of nondivergent environmental winds. Journal of Atmospheric and Oceanic Technology, Volume 3, pp. 114–128.


World Meteorological Organization, 1985: Use of Radar in Meteorology (G.A. Clift). Technical Note No. 181, WMO-No. 625, Geneva. Wurman, J., M. Randall and C. Burghart, 1995: Real-time vector winds from a bistatic Doppler radar network. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, Boston, pp. 725–728.

zrnic, D.S. and S. Hamidi, 1981: Considerations for the Design of Ground Clutter Cancelers for Weather Radar. Report DOT/FAA/RD-81/72, NTIS, pp. 77. zrnic, D.S. and A.V. Ryzhkov, 1995: Advantages of rain measurements using specific differential phase. Preprints of the Twenty-seventh Conference on Radar Meteorology (Vail, Colorado), American Meteorological Society, pp. 35–37.

CHAPTER 10

BALLOON TECHNIQUES

10.1 Balloons

10.1.1 Main types of balloons

Two main categories of balloons are used in meteorology, as follows:
(a) Pilot balloons, which are used for the visual measurement of upper wind, and ceiling balloons for the measurement of cloud-base height. Usually they do not carry an appreciable load and are therefore considerably smaller than radiosonde balloons. They are almost invariably of the spherical extensible type and their chief requirement, apart from the ability to reach satisfactory heights, is that they should keep a good spherical shape while rising;
(b) Balloons which are used for carrying recording or transmitting instruments for routine upper-air observations are usually of the extensible type and spherical in shape. They are usually known as radiosonde or sounding balloons. They should be of sufficient size and quality to enable the required load (usually 200 g to 1 kg) to be carried up to heights as great as 35 km (WMO, 2002) at a rate of ascent sufficiently rapid to enable reasonable ventilation of the measuring elements. For the measurement of upper winds by radar methods, large pilot balloons (100 g) or radiosonde balloons are used, depending on the weight and drag of the airborne equipment.

Other types of balloons used for special purposes are not described in this chapter. Constant-level balloons that rise to, and float at, a pre-determined level are made of inextensible material. Large constant-level balloons are partly filled at release. Super-pressure constant-level balloons are filled to extend the balloon fully at release. Tetroons are small super-pressure constant-level balloons, tetrahedral in shape, used for trajectory studies. The use of tethered balloons for profiling is discussed in Part II, Chapter 5.

10.1.2 Balloon materials and properties

The best basic materials for extensible balloons are high-quality natural rubber latex and a synthetic latex based upon polychloroprene. Natural latex holds its shape better than polychloroprene, which is stronger and can be made with a thicker film for a given performance. It is less affected by temperature, but more affected by the ozone and ultraviolet radiation at high altitudes, and has a shorter storage life. Both materials may be compounded with various additives to improve their storage life, strength and performance at low temperatures both during storage and during flight, and to resist ozone and ultraviolet radiation. As one of the precautions against explosion, an antistatic agent may also be added during the manufacture of balloons intended to be filled with hydrogen.

There are two main processes for the production of extensible balloons. A balloon may be made by dipping a form into latex emulsion, or by forming it on the inner surface of a hollow mould. Moulded balloons can be made with more uniform thickness, which is desirable for achieving high altitudes as the balloon expands, and the neck can be made in one piece with the body, which avoids the formation of a possible weak point.

Polyethylene is the inextensible material used for constant-level balloons.

10.1.3 Balloon specifications

The finished balloons should be free from foreign matter, pinholes or other defects and must be homogeneous and of uniform thickness. They should be provided with necks of between 1 and 5 cm in diameter and 10 to 20 cm long, depending on the size of the balloon. In the case of sounding balloons, the necks should be capable of withstanding a force of 200 N without damage. In order to reduce the possibility of the neck being pulled off, it is important that the thickness of the envelope should increase gradually towards the neck; a sudden discontinuity of thickness forms a weak spot.

Balloons are distinguished in size by their nominal weights in grams. The actual weight of individual balloons should not differ from the specified nominal weight by more than 10 per cent, or preferably 5 per cent. They should be capable of expanding to at least four times, and preferably five or six times, their unstretched diameter and of maintaining this expansion for at least 1 h. When inflated, balloons should be spherical or pear-shaped.


The question of specified shelf life of balloons is important, especially in tropical conditions. Artificial ageing tests exist but they are not reliable guides. One such test is to keep sample balloons in an oven at a temperature of 80°C for four days, this being reckoned as roughly equivalent to four years in the tropics, after which the samples should still be capable of meeting the minimum expansion requirement. Careful packing of the balloons so that they are not exposed to light (especially sunlight), fresh air or extremes of temperature is essential if rapid deterioration is to be prevented. Balloons manufactured from synthetic latex incorporate a plasticizer to resist the stiffening or freezing of the film at the low temperatures encountered near and above the tropopause. Some manufacturers offer alternative balloons for daytime and night-time use, the amount of plasticizer being different.

10.2 Balloon behaviour

10.2.1 Rate of ascent

From the principle of buoyancy, the total lift of a balloon is given by the buoyancy of the volume of gas in it, as follows:

T = V (ρ – ρg) = 0.523 D^3 (ρ – ρg)    (10.1)

where T is the total lift; V is the volume of the balloon; ρ is the density of the air; ρg is the density of the gas; and D is the diameter of the balloon, which is assumed to be spherical. All units are in the International System of Units. For hydrogen at ground level, the buoyancy (ρ – ρg) is about 1.2 kg m–3. All the quantities in equation 10.1 change with height.

The free lift L of a balloon is the amount by which the total lift exceeds the combined weight W of the balloon and its load (if any):

L = T – W    (10.2)

namely, it is the net buoyancy or the additional weight which the balloon, with its attachments, will just support without rising or falling.

It can be shown by the principle of dynamic similarity that the rate of ascent V of a balloon in still air can be expressed by a general formula:

V = q L^n / (L + W)^(1/3)    (10.3)

in which q and n depend on the drag coefficient, and therefore on the Reynolds number, vρD/µ (µ being the viscosity of the air). Unfortunately, a large number of meteorological balloons, at some stages of flight, have Reynolds numbers within the critical region of 1 × 10⁵ to 3 × 10⁵, where a rapid change of drag coefficient occurs, and they may not be perfectly spherical. Therefore, it is impracticable to use a simple formula which is valid for balloons of different sizes and different free lifts. The values of q and n in the above equation must, therefore, be derived by experiment; they are typically, very approximately, about 150 and about 0.5, respectively, if the ascent rate is expressed in m min–1. Other factors, such as the change of air density and gas leakage, can also affect the rate of ascent and can cause appreciable variation with height.

In conducting soundings during precipitation or in icing conditions, a free lift increase of up to about 75 per cent, depending on the severity of the conditions, may be required. An assumed rate of ascent should not be used in any conditions other than light precipitation. A precise knowledge of the rate of ascent is not usually necessary except in the case of pilot- and ceiling-balloon observations, where there is no other means of determining the height. The rate of ascent depends largely on the free lift and the air resistance acting on the balloon and train. Drag can be more important, especially in the case of non-spherical balloons. Maximum height depends mainly on the total lift and on the size and quality of the balloon.

10.2.2 Balloon performance

The table below lists typical figures for the performance of various sizes of balloons. They are very approximate. If precise knowledge of the performance of a particular balloon and train is necessary, it must be obtained by analysing actual flights. Balloons can carry payloads greater than those listed in the table if the total lift is increased. This is achieved by using more gas and by increasing the volume of the balloon, which will affect the rate of ascent and the maximum height.

The choice of a balloon for meteorological purposes is dictated by the load, if any, to be carried, the rate of ascent, the altitude required, whether the balloon is to be used for visual tracking, and by the cloud cover with regard to its colour.

Typical balloon performance

Weight (g)                  10    30    100   200   350   600   1 000   1 500   3 000
Diameter at release (cm)    30    50    90    120   130   140   160     180     210
Payload (g)                 0     0     0     250   250   250   250     1 000   1 000
Free lift (g)               5     60    300   500   600   900   1 100   1 300   1 700
Rate of ascent (m min–1)    60    150   250   300   300   300   300     300     300
Maximum height (km)         12    13    20    21    26    31    34      34      38
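Where these figures are used in automated pre-flight checks, it can be convenient to hold them in a small lookup structure. The Python sketch below simply transcribes the table above; the dictionary and helper function names are illustrative and not part of any standard software.

```python
# Typical balloon performance, transcribed from the table above (indicative only).
# Keys are nominal balloon weights in grams; fields are diameter at release (cm),
# payload (g), free lift (g), rate of ascent (m/min) and maximum height (km).

TYPICAL_PERFORMANCE = {
    10:   (30, 0, 5, 60, 12),
    30:   (50, 0, 60, 150, 13),
    100:  (90, 0, 300, 250, 20),
    200:  (120, 250, 500, 300, 21),
    350:  (130, 250, 600, 300, 26),
    600:  (140, 250, 900, 300, 31),
    1000: (160, 250, 1100, 300, 34),
    1500: (180, 1000, 1300, 300, 34),
    3000: (210, 1000, 1700, 300, 38),
}

def typical_performance(nominal_weight_g):
    """Return the typical performance figures for a nominal balloon weight."""
    diameter, payload, free_lift, ascent, max_height = TYPICAL_PERFORMANCE[nominal_weight_g]
    return {"diameter_cm": diameter, "payload_g": payload, "free_lift_g": free_lift,
            "ascent_m_min": ascent, "max_height_km": max_height}

print(typical_performance(600))
```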

Usually, a rate of ascent between 300 and 400 m min–1 is desirable in order to minimize the time required for observation; it may also be necessary in order to provide sufficient ventilation for the radiosonde sensors. In choosing a balloon, it is also necessary to bear in mind that the altitude attained is usually less when the temperature at release is very low.

For balloons used in regular operations, it is beneficial to determine the free lift that produces optimum burst heights. For instance, it has been found that a reduction in the average rate of ascent from 390 to 310 m min–1 with some mid-size balloons, achieved by reducing the amount of gas for inflation, may give an increase of 2 km, on average, in the burst height. Burst height records should be kept and reviewed to ensure that optimum practice is sustained.

Daytime visual observations are facilitated by using uncoloured balloons on clear sunny days, and dark-coloured ones on cloudy days.

The performance of a balloon is best gauged by the maximum linear extension it will withstand before bursting and is conveniently expressed as the ratio of the diameter (or circumference) at burst to that of the unstretched balloon. The performance of a balloon in flight, however, is not necessarily the same as that indicated by a bursting test on the ground. Performance can be affected by rough handling when the balloon is filled and by stresses induced during launches in gale conditions. In flight, the extension of the balloon may be affected by the loss of elasticity at low temperatures, by the chemical action of oxygen, ozone and ultraviolet radiation, and by manufacturing faults such as pinholes or weak spots. A balloon of satisfactory quality should, however, give at least a fourfold extension in an actual sounding. The thickness of the film at release is usually in the range of 0.1 to 0.2 mm.

There is always a small excess of pressure p1 within the balloon during ascent, amounting to a few hPa, owing to the tension of the rubber. This sets a limit to the external pressure that can be reached. It can be shown that, if the temperature is the same inside and outside the balloon, this limiting pressure p is given by:

p = (1.07 W/L0) p1 + 0.075 p1 ≅ W p1/L0    (10.4)

where W is the weight of the balloon and apparatus; and L0 is the free lift at the ground, both expressed in grams. If the balloon is capable of reaching the height corresponding with p, it will float at this height.
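To show how equations 10.1 to 10.4 fit together in practice, the following Python sketch works through a hypothetical 350 g balloon carrying a 250 g load, using the ground-level hydrogen buoyancy of about 1.2 kg m–3 and the very approximate values q ≈ 150 and n ≈ 0.5 quoted above. The function names and the example numbers are illustrative only and are no substitute for analysing actual flights.

```python
# Illustrative sketch of equations 10.1 to 10.4; all values are indicative only.

def total_lift_g(diameter_m, buoyancy_kg_m3=1.2):
    """Total lift T = 0.523 D^3 (rho - rho_g), returned in grams (equation 10.1).
    For hydrogen at ground level the buoyancy (rho - rho_g) is about 1.2 kg m-3."""
    return 0.523 * diameter_m ** 3 * buoyancy_kg_m3 * 1000.0

def ascent_rate_m_min(free_lift_g, weight_g, q=150.0, n=0.5):
    """Rate of ascent V = q L^n / (L + W)^(1/3) in m min-1 (equation 10.3).
    q and n are the very approximate empirical values quoted in the text."""
    return q * free_lift_g ** n / (free_lift_g + weight_g) ** (1.0 / 3.0)

def floating_pressure_hPa(weight_g, free_lift_ground_g, p1_hPa=3.0):
    """Limiting (floating) pressure p from equation 10.4, in hPa, for an
    assumed excess internal pressure p1 of a few hPa."""
    return (1.07 * weight_g / free_lift_ground_g) * p1_hPa + 0.075 * p1_hPa

# Hypothetical example: a 350 g balloon carrying a 250 g radiosonde,
# inflated with hydrogen to a diameter of about 1.3 m at release.
W = 350.0 + 250.0                      # balloon plus load, in grams
T = total_lift_g(1.3)                  # total lift, in grams
L = T - W                              # free lift, equation 10.2
print(f"Total lift {T:.0f} g, free lift {L:.0f} g")
print(f"Approximate rate of ascent: {ascent_rate_m_min(L, W):.0f} m/min")
print(f"Floating pressure (if the balloon did not burst): "
      f"{floating_pressure_hPa(W, L):.1f} hPa")
```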

10.3 Handling balloons

10.3.1 Storage

It is very important that radiosonde balloons should be correctly stored if their best performance is still to be obtained after several months. It is advisable to restrict balloon stocks to the safe minimum allowed by operational needs. Frequent deliveries, wherever possible, are preferable to purchasing in large quantities with consequent long periods of storage. To avoid the possibility of using balloons that have been in storage for a long period, balloons should always be used in the order of their date of manufacture. It is generally possible to obtain the optimum performance up to about 18 months after manufacture, provided that the storage conditions are carefully chosen. Instructions are issued by many manufacturers for their own balloons and these should be observed meticulously. The following general instructions are applicable to most types of radiosonde balloons.

Balloons should be stored away from direct sunlight and, if possible, in the dark. At no time should they be stored adjacent to any source of heat or ozone. Balloons made of either polychloroprene or a mixture of polychloroprene and natural rubber may deteriorate if exposed to the ozone emitted by large electric generators or motors. All balloons should be kept in their original packing until required for preflight preparations. Care should be taken to see that they do not come into contact with oil or any other substance that may penetrate the wrapping and damage the balloons. Wherever possible, balloons should be stored in a room at temperatures of 15 to 25°C; some manufacturers give specific guidance on this point and such instructions should always be followed.

10.3.2 Conditioning

Balloons made from natural rubber do not require special heat treatment before use, as natural rubber does not freeze at the temperatures normally experienced in buildings used for human occupation. It is, however, preferable for balloons that have been stored for a long period at temperatures below 10°C to be brought to room temperature for some weeks before use.

Polychloroprene balloons suffer a partial loss of elasticity during prolonged storage at temperatures below 10°C. For the best results, this loss should be restored prior to inflation by conditioning the balloon. The manufacturer's recommendations should be followed. It is common practice to place the balloon in a thermally insulated chamber with forced air circulation, maintained at a suitable temperature and humidity, for some days before inflation, or alternatively to use a warm water bath.

At polar stations during periods of extremely low temperatures, the balloons to be used should have special characteristics that enable them to maintain strength and elasticity in such conditions.

10.3.3 Inflation

If a balloon launcher is not used, a special room, preferably isolated from other buildings, should be provided for filling balloons. It should be well ventilated (e.g. NFPA, 1999). If hydrogen gas is to be used, special safety precautions are essential (see section 10.6). The building should be free from any source of sparks, and all electric switches and fittings should be spark-proof; other necessary details are given in section 10.6.2. If helium gas is to be used, provision may be made for heating the building during cold weather. The walls, doors and floor should have a smooth finish and should be kept free from dust and grit. Heating of hydrogen-inflation areas can be accomplished by steam, hot water or any other indirect means; however, electric heating, if any, shall be in compliance with national electrical codes (e.g. NFPA 50A for Class I, Division 2, locations).

Protective clothing (see section 10.6.4) should be worn during inflation. The operator should not stay in a closed room with a balloon containing hydrogen. The hydrogen supply should be controlled, and the filling operation observed, from outside the filling room if the doors are shut, and the doors should be open when the operator is in the room with the balloon.

Balloons should be inflated slowly because sudden expansion may cause weak spots in the balloon film. It is desirable to provide a fine adjustment valve for regulating the gas flow. The desired amount of inflation (free lift) can be determined by using either a filling nozzle of the required weight or one which forms one arm of a balance on which the balloon lift can be weighed. The latter is less convenient, unless it is desirable to allow for variations in the weights of balloons, which is hardly necessary for routine work. It is useful to have a valve fitted to the weight type of filler, and a further refinement, used in some services, is a valve that can be adjusted to close automatically at the required lift.

10.3.4 Launching

The balloon should be kept under a shelter until everything is ready for its launch. Prolonged exposure to bright sunshine should be avoided, as this may cause rapid deterioration of the balloon fabric and may even result in its bursting before leaving the ground. Protective clothing should be worn during manual launches.

No special difficulties arise when launching radiosonde balloons in light winds. Care should always be taken to see that there is no risk of the balloon and instruments striking obstructions before they rise clear of trees and buildings in the vicinity of the station. Release problems can be avoided to a large extent by carefully planning the release area. It should be selected to have a minimum of obstructions that may interfere with launching; the station buildings should be designed and sited considering the prevailing wind, gust effects on the release area and, in cold climates, drifting snow.

It is also advisable in high winds to keep the suspension of the instrument below the balloon as short as possible during launching, by using some form of suspension release or unwinder. A convenient device consists of a reel on which the suspension cord is wound and a spindle to which is attached an air brake or escapement mechanism that allows the suspension cord to unwind slowly after the balloon is released.

Mechanical balloon launchers have the great advantage that they can be designed to offer almost fool-proof safety, by separating the operator from the balloon during filling and launching. They can be automated to various degrees, even to the point where the whole radiosonde operation requires no operator to be present. They might not be effective at wind speeds above 20 m s–1. Provision should be made for adequate ventilation of the radiosonde sensors before release, and the construction should desirably be such that the structure will not be damaged by fire or explosion.

10.4 Accessories for balloon ascents

10.4.1 Illumination for night ascents

The light source in general use for night-time pilot-balloon ascents is a small electric torch battery and lamp. A battery of two 1.5 V cells, or a water-activated type used with a 2.5 V 0.3 A bulb, is usually suitable. Alternatively, a device providing light by means of chemical fluorescence may be used. For high-altitude soundings, however, a more powerful system of 2 to 3 W, together with a simple reflector, is necessary.

If the rate of ascent is to remain unchanged when a lighting unit is used, a small increase in free lift is theoretically required; that is to say, the total lift must be increased by more than the extra weight carried (see equation 10.3). In practice, however, the increase required is probably less than that calculated, since the load improves the aerodynamic shape and the stability of the balloon.

At one time, night ascents were carried out with a small candle in a translucent paper lantern suspended some 2 m or so below the balloon. However, there is a risk of flash or explosion if the candle is brought near the balloon or the source of hydrogen, and there is a risk of starting a forest fire or other serious fires upon return to the Earth. Thus, the use of candles is strongly discouraged.

10.4.2 Parachutes

In order to reduce the risk of damage caused by a falling sounding instrument, it is usual practice to attach a simple type of parachute. The main requirements are that it should be reliable when opening and should reduce the speed of descent to a rate not exceeding about 5 m s–1 near the ground. It should also be water-resistant. For instruments weighing up to 2 kg, a parachute made from waterproof paper or plastic film of about 2 m diameter and with strings about 3 m long is satisfactory. In order to reduce the tendency for the strings to twist together in flight, it is advisable to attach them to a light hoop of wood, plastic or metal of about 40 cm in diameter just above the point where they are joined together.

When a radar reflector for wind-finding is part of the train, it can be incorporated into the parachute and can serve to keep the strings apart. The strings and attachments must be able to withstand the opening of the parachute. If light-weight radiosondes are used (less than about 250 g), the radar reflector alone may provide sufficient drag during descent.

10.5 Gases for inflation

10.5.1 General

The two gases most suitable for meteorological balloons are helium and hydrogen. The former is much to be preferred because it is free from the risk of explosion and fire. However, since the use of helium is limited mainly to the few countries which have an abundant natural supply, hydrogen is more generally used (see WMO, 1982). The buoyancy (total lift) of helium is 1.115 kg m–3, at a pressure of 1 013 hPa and a temperature of 15°C. The corresponding figure for pure hydrogen is 1.203 kg m–3, and for commercial hydrogen the figure is slightly lower than this.

It should be noted that the use of hydrogen aboard ships is no longer permitted under the general conditions imposed for marine insurance.


In these circumstances, the extra cost of using helium has to be reckoned against the life-threatening hazards involved and the extra cost of insurance, if such insurance can be arranged.

Apart from the cost and trouble of transportation, the supply of compressed gas in cylinders affords the most convenient way of providing gas at meteorological stations. However, at places where the cost or difficulty of supplying cylinders is prohibitive, the use of an on-station hydrogen generator (see section 10.5.3) should present no great difficulties.

10.5.2 Gas cylinders

For general use, steel gas cylinders, capable of holding 6 m³ of gas compressed to a pressure of 18 MPa (10 MPa in the tropics), are probably the most convenient size. However, where the consumption of gas is large, as at radiosonde stations, larger capacity cylinders or banks of standard cylinders all linked by a manifold to the same outlet valve can be useful. Such arrangements will minimize handling by staff. In order to avoid the risk of confusion with other gases, hydrogen cylinders should be painted a distinctive colour (red is used in many countries) and otherwise marked according to national regulations. Their outlet valves should have left-handed threads to distinguish them from cylinders of non-combustible gases. Cylinders should be provided with a cap to protect the valves in transit.

Gas cylinders should be tested at regular intervals ranging from two to five years, depending on the national regulations in force. This should be performed by subjecting them to an internal pressure at least 50 per cent greater than their normal working pressure. Hydrogen cylinders should not be exposed to heat and, in tropical climates, they should be protected from direct sunshine. Preferably, they should be stored in a well-ventilated shed which allows any hydrogen leaks to escape to the open air.

10.5.3 Hydrogen generators

Hydrogen can be produced on site in various kinds of hydrogen generators. All generator plants and hydrogen storage facilities shall be legibly marked and carry adequate warnings according to national regulations (e.g. "This unit contains hydrogen"; "Hydrogen – Flammable gas – No smoking – No open flames"). The following have proven to be the most suitable processes for generating hydrogen for meteorological purposes:
(a) Ferro-silicon and caustic soda with water;
(b) Aluminium and caustic soda with water;
(c) Calcium hydride and water;
(d) Magnesium-iron pellets and water;
(e) Liquid ammonia with hot platinum catalyst;
(f) Methanol and water with a hot catalyst;
(g) Electrolysis of water.

Most of the chemicals used in these methods are hazardous, and the relevant national standards and codes of practice should be scrupulously followed, including correct markings and warnings. They require special transportation, storage, handling and disposal. Many of them are corrosive, as is the residue after use. If the reactions are not carefully controlled, they may produce excess heat and pressure. Methanol, being a poisonous alcohol, can be deadly if ingested, as it may be by substance abusers.

In particular, caustic soda, which is widely used, requires considerable care on the part of the operator, who should have adequate protection, especially for the eyes, from contact not only with the solution, but also with the fine dust which is liable to arise when the solid material is being put into the generator. An eye-wash bottle and a neutralizing agent, such as vinegar, should be kept at hand in case of an accident.

Some of the chemical methods operate at high pressure, with a consequently greater risk of an accident. High-pressure generators should be tested every two years to a pressure at least twice that of the working pressure. They should be provided with a safety device to relieve excess pressure. This is usually a bursting disc, and it is very important that the operational instructions should be strictly followed with regard to the material, size and form of the discs, and the frequency of their replacement. Even if a safety device is efficient, its operation is very liable to be accompanied by the ejection of hot solution. High-pressure generators must be carefully cleaned out before recharging, since remains of the previous charge may considerably reduce the available volume of the generator and, thus, increase the working pressure beyond the design limit.

Unfortunately, calcium hydride and magnesium-iron, which have the advantage of avoiding the use of caustic soda, are expensive to produce and are, therefore, likely to be acceptable only for special purposes. Since these two materials produce hydrogen from water, it is essential that they be stored in containers which are completely damp-proof.


In the processes using catalysts, care must be taken to avoid catalyst contamination. All systems produce gas at sufficient pressure for filling balloons. However, the production rates of some systems (electrolysis in particular) are too low, and the gas must be produced and stored before it is needed, either in compressed form or in a gasholder.

The processes using the electrolysis of water or the catalytic cracking of methanol are attractive because of their relative safety and moderate recurrent cost, and because of the non-corrosive nature of the materials used. These two processes, as well as the liquid ammonia process, require electric power. The equipment is rather complex and must be carefully maintained and subjected to detailed daily check procedures to ensure that the safety control systems are effective. Water for electrolysis must have low mineral content.

10.6 Use of hydrogen and safety precautions

10.6.1 General

Hydrogen can easily be ignited by a small spark and burns with a nearly invisible flame. It can burn when mixed with air over a wide range of concentrations, from 4 to 74 per cent by volume (NFPA, 1999), and can explode in concentrations between 18 and 59 per cent. In either case, a nearby operator can receive severe burns over the entire surface of any exposed skin, and an explosion can throw the operator against a wall or the ground, causing serious injury.

It is possible to eliminate the risk of an accident by using very carefully designed procedures and equipment, provided that they are diligently observed and maintained (Gremia, 1977; Ludtke and Saraduke, 1992; NASA, 1968). The provision of adequate safety features for the buildings in which hydrogen is generated and stored, or for the areas in which balloons are filled or released, does not always receive adequate attention (see the following section). In particular, there must be comprehensive training and continual meticulous monitoring and inspection to ensure that operators follow the procedures.

The great advantage of automatic balloon launchers (see section 10.3.4) is that they can be made practically fool-proof and prevent operator injuries, by completely separating the operator from the hydrogen.

An essential starting point for the consideration of hydrogen precautions is to follow the various national standards and codes of practice concerned with the risks presented by explosive atmospheres in general. Additional information on the precautions that should be followed will be found in publications dealing with explosion hazards, such as in hospitals and other industrial situations where similar problems exist. The operator should never be in a closed room with an inflated balloon. Other advice on safety matters can be found throughout the chapter.

10.6.2 Building design

Provisions should be made to avoid the accumulation of free hydrogen and of static charges, as well as the occurrence of sparks, in any room where hydrogen is generated, stored or used. The accumulation of hydrogen must be avoided even when a balloon bursts within the shelter during the course of inflation (WMO, 1982).

Safety provisions must be part of the structural design of hydrogen buildings (NFPA, 1999; SAA, 1985). Climatic conditions and national standards and codes are constraints within which it is possible to adopt many designs and materials suitable for safe hydrogen buildings. Codes are advisory and are used as a basis of good practice. Standards are published in the form of specifications for materials, products and safe practices. They should deal with topics such as flame-proof electric-light fittings, electrical apparatus in explosive atmospheres, the ventilation of rooms with explosive atmospheres, and the use of plastic windows, bursting discs, and so on (WMO, 1982). Both codes and standards should contain information that is helpful and relevant to the design of hydrogen buildings. Furthermore, the design should be consistent with recommended national practice.

Guidance should be sought from national standards authorities when hydrogen buildings are designed or when the safety of existing buildings is reviewed, in particular for aspects such as the following:
(a) The preferred location for hydrogen systems;
(b) The fire resistance of proposed materials, as related to the fire-resistance ratings that must be respected;
(c) Ventilation requirements, including a roof of light construction to ensure that hydrogen and the products of an explosion are vented from the highest point of the building;
(d) Suitable electrical equipment and wiring;
(e) Fire protection (extinguishers and alarms);
(f) Provision for the operator to control the inflation of the balloon from outside the filling room.

Measures should be taken to minimize the possibility of sparks being produced in rooms where hydrogen is handled. Thus, any electrical system (switches, fittings, wiring) should be kept outside these rooms; otherwise, special spark-proof switches, pressurized to prevent the ingress of hydrogen, and similarly suitable wiring, should be provided. It is also advisable to illuminate the rooms using exterior lights which shine in through windows. For the same reasons, any tools used should not produce sparks. The observer's shoes should not be capable of emitting sparks, and adequate lightning protection should be provided.

If sprinkler systems are used in any part of the building, consideration should be given to the possible hazard of hydrogen escaping after the fire has been extinguished. Hydrogen detection systems exist and may be used, for instance, to switch off power to the hydrogen generator and activate an alarm at 20 per cent of the lower explosive limit, and then to activate a second alarm at 40 per cent of the lower explosive limit.

A hazard zone should be designated around the generator, storage and balloon area into which entry is permitted only when protective clothing is worn (see section 10.6.4). Balloon launchers (see section 10.3.4) typically avoid the need for a special balloon-filling room, and greatly simplify the design of hydrogen facilities.
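The two-level detection behaviour mentioned above (generator shut-off and a first alarm at 20 per cent of the lower explosive limit, a second alarm at 40 per cent) can be summarized in a few lines of logic. The Python sketch below is purely illustrative; the function and its action strings are not taken from any particular monitoring system, and the 4 per cent lower explosive limit is the figure quoted in section 10.6.1.

```python
# Minimal sketch of the two-level hydrogen detection logic described above.
# Thresholds (20 % and 40 % of the lower explosive limit, LEL) are from the text;
# the function and its return values are illustrative only.

LEL_VOLUME_FRACTION = 0.04  # hydrogen can burn from about 4 per cent by volume

def detector_actions(hydrogen_volume_fraction):
    """Return the list of actions for a measured hydrogen concentration."""
    fraction_of_lel = hydrogen_volume_fraction / LEL_VOLUME_FRACTION
    actions = []
    if fraction_of_lel >= 0.2:
        actions += ["switch off hydrogen generator", "raise first alarm"]
    if fraction_of_lel >= 0.4:
        actions.append("raise second alarm")
    return actions

print(detector_actions(0.01))   # 25 % of LEL -> generator off, first alarm
```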

Charges on balloons are more difficult to deal with. Balloon fabrics, especially pure latex, are very good insulators. Static charges are generated when two insulating materials in contact with each are separated. A single brief contact with the observer’s clothing or hair can generate a 20 kV charge, which is more than sufficient to ignite a mixture of air and hydrogen if it is discharged through an efficient spark. Charges on a balloon may take many hours to dissipate through the fabric to earth or naturally into the surrounding air. Also, it has been established that, when a balloon bursts, the separation of the film along a split in the fabric can generate sparks energetic enough to cause ignition. Electrostatic charges can be prevented or removed by spraying water onto the balloon during inflation, by dipping balloons into antistatic solution (with or without drying them off before use), by using balloons with an antistatic additive in the latex, or by blowing ionized air over the balloon. Merely earthing the neck of the balloon is not sufficient. The maximum electrostatic potential that can be generated or held on a balloon surface decreases with increasing humidity, but the magnitude of the effect is not well established. Some tests carried out on inflated 20 g balloons indicated that spark energies sufficient to ignite hydrogen-oxygen mixtures are unlikely to be reached when the relative humidity of the air is greater than 60 per cent. Other studies have suggested relative humidities from 50 to 76 per cent as safe limits, yet others indicate that energetic sparks may occur at even higher relative humidity. It may be said that static discharge is unlikely when the relative humidity exceeds 70 per cent, but this should not be relied upon (see Cleves, Sumner and Wyatt, 1971). It is strongly recommended that fine water sprays be used on the balloon because the wetting and earthing of the balloon will remove most of the static charges from the wetted portions. The sprays should be designed to wet as large an area of the balloon as possible and to cause continuous streams of water to run from the balloon to the floor. If the doors are kept shut, the relative humidity inside the filling room can rise to 75 per cent or higher, thus reducing the probability of sparks energetic enough to cause ignition. Balloon release should proceed promptly once the sprays are turned off and the filling-shed doors opened. Other measures for reducing the build-up of static charge include the following (WMO, 1982):

The hazards of balloon inflation and balloon release can be considerably reduced by preventing static charges in the balloon-filling room, on the observer’s clothing, and on the balloon itself. Loeb (1958) provides information on the static electrification process. Static charge control is effected by good earthing provisions for hydrogen equipment and filling-room fittings. Static discharge grips for observers can remove charges generated on clothing (WMO, 1982).

(a) The building should be provided with a complete earthing (grounding) system, with all fittings, hydrogen equipment and the lightning conductor separately connected to a single earth, which itself must comply with national specifications for earth electrodes. Provision should be made to drain electrical charges from the floor;
(b) Static discharge points should be provided for the observers;
(c) The windows should be regularly coated with an antistatic solution;
(d) Operators should be encouraged not to wear synthetic clothing or insulating shoes. It is good practice to provide operators with partially conducting footwear;
(e) Any contact between the observer and the balloon should be minimized; this can be facilitated by locating the balloon filler at a height of 1 m or more above the floor.

10.6.4 Protective clothing and first-aid facilities

Proper protective clothing should be worn whenever hydrogen is being used, during all parts of the operations, including generation procedures, when handling cylinders, and during balloon inflation and release. The clothing should include a light-weight flame-proof coat with a hood made of non-synthetic, antistatic material and a covering for the lower face, glasses or goggles, cotton gloves, and any locally recommended anti-flash clothing (see Hoschke and others, 1979). First-aid facilities appropriate to the installation should be provided. These should include initial remedies for flash burns and broken limbs. When chemicals are used, suitable neutralizing solutions should be on hand, for example, citric acid for caustic soda burns. An eye-wash apparatus ready for instant use should be available (WMO, 1982).

REFERENCES AND FURTHER READING

Atmospheric Environment Service (Canada), 1978: The Use of Hydrogen for Meteorological Purposes in the Canadian Atmospheric Environment Service. Toronto.
Cleves, A.C., J.F. Sumner and R.M.H. Wyatt, 1971: The Effect of Temperature and Relative Humidity on the Accumulation of Electrostatic Charges on Fabrics and Primary Explosives. Proceedings of the Third Conference on Static Electrification (London).
Gremia, J.O., 1977: A Safety Study of Hydrogen Balloon Inflation Operations and Facilities of the National Weather Service. Trident Engineering Associates, Annapolis, Maryland.
Hoschke, B.N. and others, 1979: Report to the Bureau of Meteorology on Protection Against the Burn Hazard from Exploding Hydrogen-filled Meteorological Balloons. CSIRO Division of Textile Physics and the Department of Housing and Construction, Australia.
Loeb, L.B., 1958: Static Electrification. Springer-Verlag, Berlin.
Ludtke, P. and G. Saraduke, 1992: Hydrogen Gas Safety Study Conducted at the National Weather Service Forecast Office. Norman, Oklahoma.
National Aeronautics and Space Administration, 1968: Hydrogen Safety Manual. NASA Technical Memorandum TM-X-52454, NASA Lewis Research Center, United States.
National Fire Protection Association, 1999: NFPA 50A: Standard for Gaseous Hydrogen Systems at Consumer Sites. 1999 edition, National Fire Protection Association, Quincy, Maryland.
National Fire Protection Association, 2002: NFPA 68: Guide for Venting of Deflagrations. 2002 edition, National Fire Protection Association, Batterymarch Park, Quincy, Maryland.
National Fire Protection Association, 2005: NFPA 70: National Electrical Code. 2005 edition, National Fire Protection Association, Quincy, Maryland.
National Fire Protection Association, 2006: NFPA 220: Standard on Types of Building Construction. 2006 edition, National Fire Protection Association, Quincy, Maryland.
Rosen, B., V.H. Dayan and R.L. Proffit, 1970: Hydrogen Leak and Fire Detection: A Survey. NASA SP-5092.
Standards Association of Australia, 1970: AS C99: Electrical equipment for explosive atmospheres – Flameproof electric lighting fittings.
Standards Association of Australia, 1980: AS 1829: Intrinsically safe electrical apparatus for explosive atmospheres.
Standards Association of Australia, 1985: AS 1482: Electrical equipment for explosive atmospheres – Protection by ventilation – Type of protection V.
Standards Association of Australia, 1995: AS/NZS 1020: The control of undesirable static electricity.
Standards Association of Australia, 2004: AS 1358: Bursting discs and bursting disc devices – Application, selection and installation.
World Meteorological Organization, 1982: Meteorological Balloons: The Use of Hydrogen for Inflation of Meteorological Balloons. Instruments and Observing Methods Report No. 13, Geneva.

CHAPTER 11

URBAN OBSERVATIONS

11.1 General

There is a growing need for meteorological observations conducted in urban areas. Urban populations continue to expand, and Meteorological Services are increasingly required to supply meteorological data in support of detailed forecasts for citizens, building and urban design, energy conservation, transportation and communications, air quality and health, storm water and wind engineering, and insurance and emergency measures. At the same time, Meteorological Services have difficulty in making urban observations that are not severely compromised. This is because most developed sites make it impossible to conform to the standard guidelines for site selection and instrument exposure given in Part I of this Guide owing to obstruction of air-flow and radiation exchange by buildings and trees, unnatural surface cover and waste heat and water vapour from human activities.

This chapter provides information to enable the selection of sites, the installation of a meteorological station and the interpretation of data from an urban area. In particular, it deals with the case of what is commonly called a "standard" climate station. Despite the complexity and inhomogeneity of urban environments, useful and repeatable observations can be obtained. Every site presents a unique challenge. To ensure that meaningful observations are obtained requires careful attention to certain principles and concepts that are virtually unique to urban areas. It also requires the person establishing and running the station to apply those principles and concepts in an intelligent and flexible way that is sensitive to the realities of the specific environment involved. Rigid "rules" have little utility. The need for flexibility runs slightly counter to the general notion of standardization that is promoted as WMO observing practice. In urban areas, it is sometimes necessary to accept exposure over non-standard surfaces at non-standard heights, to split observations between two or more locations, or to be closer than usual to buildings or waste heat exhausts.

Figure 11.1. Schematic of climatic scales and vertical layers found in urban areas: planetary boundary layer (PBL), urban boundary layer (UBL), urban canopy layer (UCL), rural boundary layer (RBL) (modified from Oke, 1997).

The units of measurement and the instruments used in urban areas are the same as those for other environments. Therefore, only those aspects that are unique to urban areas, or that are made difficult to handle because of the nature of cities, such as the choice of site, instrument exposure and the documentation of metadata, are covered in this chapter. The timing and frequency of observations and the coding of reports should follow appropriate standards (WMO, 1983; 1988; 1995; 2003b; 2006). With regard to automated stations and the requirements for message coding and transmission, quality control, maintenance (noting any special demands of the urban environment) and calibration, the recommendations of Part II, Chapter 1, should be followed.

11.1.1 Definitions and concepts

11.1.1.1 Station rationale

The clarity of the reason for establishing an urban station is essential to its success. Two of the most usual reasons are the wish to represent the meteorological environment at a place for general climatological purposes and the wish to provide data in support of the needs of a particular user. In both cases, the spatial and temporal scales of interest must be defined, and, as outlined below, the siting of the station and the exposure of the instruments in each case may have to be very different.

11.1.1.2 Horizontal scales

There is no more important an input to the success of an urban station than an appreciation of the concept of scale. There are three scales of interest (Oke, 1984; Figure 11.1):
(a) Microscale: Every surface and object has its own microclimate on it and in its immediate vicinity. Surface and air temperatures may vary by several degrees in very short distances, even millimetres, and air-flow can be greatly perturbed by even small objects. Typical scales of urban microclimates relate to the dimensions of individual buildings, trees, roads, streets, courtyards, gardens, and so forth. Typical scales extend from less than 1 m to hundreds of metres. The formulation of the guidelines in Part I of this Guide specifically aims to avoid microclimatic effects. The climate station recommendations are designed to standardize all sites, as far as practical. This explains the use of a standard height of measurement, a single surface cover, minimum distances to obstacles and little horizon obstruction. The aim is to achieve climate observations that are free of extraneous microclimate signals and hence characterize local climates. With even more stringent standards, first order stations may be able to represent conditions at synoptic space and time scales. The data may be used to assess climate trends at even larger scales. Unless the objectives are very specialized, urban stations should also avoid microclimate influences; however, this is hard to achieve;
(b) Local scale: This is the scale that standard climate stations are designed to monitor. It includes landscape features such as topography, but excludes microscale effects. In urban areas this translates to mean the climate of neighbourhoods with similar types of urban development (surface cover, size and spacing of buildings, activity). The signal is the integration of a characteristic mix of microclimatic effects arising from the source area in the vicinity of the site. The source area is the portion of the surface upstream that contributes the main properties of the flux or meteorological concentration being measured (Schmid, 2002). Typical scales are one to several kilometres;
(c) Mesoscale: A city influences weather and climate at the scale of the whole city, typically tens of kilometres in extent. A single station is not able to represent this scale.

11.1.1.3 Vertical scales

An essential difference between the climate of urban areas and that of rural or airport locations is that in cities the vertical exchanges of momentum, heat and moisture do not occur at a (nearly) plane surface, but in a layer of significant thickness, called the urban canopy layer (UCL) (Figure 11.1). The height of the UCL is approximately equivalent to that of the mean height of the main roughness elements (buildings and trees), zH (see Figure 11.4 for parameter definitions). The microclimatic effects of individual surfaces and obstacles persist for a short distance away from their source and are then mixed and muted by the action of turbulent eddies. The distance required before the effect is obliterated depends on the magnitude of the effect, wind speed and stability (namely, stable, neutral or unstable).


This blending occurs both in the horizontal and the vertical. As noted, horizontal effects may persist up to a few hundred metres. In the vertical, the effects of individual features are discernible in the roughness sublayer (RSL), which extends from ground level to the blending height zr, where the blending action is complete. Rule-of-thumb estimates and field measurements indicate that zr can be as low as 1.5 zH at densely built (closely spaced) and homogeneous sites, but greater than 4 zH in low density areas (Grimmond and Oke, 1999; Rotach, 1999; Christen, 2003). An instrument placed below zr may register microclimate anomalies, but, above that, it "sees" a blended, spatially averaged signal that is representative of the local scale.

There is another height restriction to consider. This arises because each local scale surface type generates an internal boundary layer, in which the flow structure and thermodynamic properties are adapted to that surface type. The height of the layer grows with increasing fetch (the distance upwind to the edge where the transition to a distinctly different surface type occurs). The rate at which the internal boundary layer grows with fetch distance depends on the roughness and stability. In rural conditions, the height to fetch ratios might vary from as small as 1:10 in unstable conditions to as large as 1:500 in stable cases, and the ratio decreases as the roughness increases (Garratt, 1992; Wieringa, 1993). Urban areas tend towards neutral stability owing to the enhanced thermal and mechanical turbulence associated with the heat island and their large roughness. Therefore, a height to fetch ratio of about 1:100 is considered typical. The internal boundary layer height is taken above the displacement height zd, which is the reference level for flow above the blending height. (For an explanation of zd, see Figure 11.4 and Note 2 in Table 11.2.)

For example, take a hypothetical densely built district with zH of 10 m. This means that zr is at least 15 m. If this height is chosen to be the measurement level, the fetch requirement over similar urban terrain is likely to be at least 0.8 km, since fetch = 100 (zr – zd), and zd will be about 7 m. This can be a significant site restriction because the implication is that, if the urban terrain is not similar out to at least this distance around the station site, observations will not be representative of the local surface type. At less densely developed sites, where heat island and roughness effects are less, the fetch requirements are likely to be greater.

At heights above the blending height, but within the local internal boundary layer, measurements are within an inertial sublayer (Figure 11.1), where standard boundary layer theory applies. Such theory governs the form of the mean vertical profiles of meteorological variables (including air temperature, humidity and wind speed) and the behaviour of turbulent fluxes, spectra and statistics. This provides a basis for:
(a) The calculation of the source area (or "footprint", see below) from which the turbulent flux or the concentration of a meteorological variable originates; hence, this defines the distance upstream for the minimum acceptable fetch;
(b) The extrapolation of a given flux or property through the inertial layer and also downwards into the RSL (and, although it is less reliable, into the UCL).
In the inertial layer, fluxes are constant with height and the mean values of meteorological properties are invariant horizontally. Hence, observations of fluxes and standard variables possess significant utility and are able to characterize the underlying local scale environment. Extrapolation into the RSL is less prescribed.
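The blending-height and fetch rules of thumb above can be turned into a quick screening estimate. The Python sketch below assumes zr ≈ 1.5 zH (densely built sites), zd ≈ 0.7 zH (as implied by the worked example, where zH = 10 m gives zd of about 7 m) and the typical 1:100 height-to-fetch ratio; the function is illustrative only and the factors should be adapted to the site.

```python
# Indicative sketch of the blending-height and fetch estimates discussed above.

def minimum_fetch_m(mean_building_height_m, zr_factor=1.5, zd_factor=0.7):
    """Return (zr, zd, minimum fetch), all in metres, for a densely built site."""
    zr = zr_factor * mean_building_height_m   # blending height
    zd = zd_factor * mean_building_height_m   # displacement height
    fetch = 100.0 * (zr - zd)                 # typical 1:100 height-to-fetch ratio
    return zr, zd, fetch

zr, zd, fetch = minimum_fetch_m(10.0)
print(f"zH = 10 m: zr = {zr:.0f} m, zd = {zd:.0f} m, "
      f"minimum fetch = {fetch / 1000.0:.1f} km")   # about 0.8 km, as in the text
```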

11.1.1.4 Source areas ("footprints")

A sensor placed above a surface "sees" only a portion of its surroundings. This is called the "source area" of the instrument which depends on its height and the characteristics of the process transporting the surface property to the sensor. For upwelling radiation signals (short- and long-wave radiation and surface temperature viewed by an infrared thermometer) the field of view of the instrument and the geometry of the underlying surface set what is seen. By analogy, sensors such as thermometers, hygrometers, gas analysers and anemometers "see" properties such as temperature, humidity, atmospheric gases and wind speed and direction which are carried from the surface to the sensor by turbulent transport. A conceptual illustration of these source areas is given in Figure 11.2. The source area of a downfacing radiometer with its sensing element parallel to the ground is a circular patch with the instrument at its centre (Figure 11.2). The radius (r) of the circular source area contributing to the radiometer signal at height (z1) is given in Schmid and others (1991):

r = z1 (1/F – 1)^(–0.5)    (11.1)

where F is the view factor, namely the proportion of the measured flux at the sensor for which that area is responsible. Depending on its field of view, a radiometer may see only a limited circle, or it may extend to the horizon. In the latter case, the instrument usually has a cosine response, so that towards the horizon it becomes increasingly difficult to define the actual source area seen. Hence, the use of the view factor which defines the area contributing a set proportion (often selected as 50, 90, 95, 99 or 99.5 per cent) of the instrument's signal.

The source area of a sensor that derives its signal via turbulent transport is not symmetrically distributed around the sensor location. It is elliptical in shape and is aligned in the upwind direction from the tower (Figure 11.2). If there is a wind, the effect of the surface area at the base of the mast is effectively zero, because turbulence cannot transport the influence up to the sensor level. At some distance in the upwind direction the source starts to affect the sensor; this effect rises to a peak, thereafter decaying at greater distances (for the shape in both the x and y directions see Kljun, Rotach and Schmid, 2002; Schmid, 2002). The distance upwind to the first surface area contributing to the signal, to the point of peak influence, to the furthest upwind surface influencing the measurement, and the area of the so-called "footprint" vary considerably over time. They depend on the height of measurement (larger at greater heights), surface roughness, atmospheric stability (increasing from unstable to stable) and whether a turbulent flux or a meteorological concentration is being measured (larger for the concentration) (Kljun, Rotach and Schmid, 2002).
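Equation 11.1 is straightforward to apply. The short Python sketch below (illustrative only) lists the source-area radius of a downfacing radiometer mounted at a hypothetical height of 2 m, for several commonly used view factors.

```python
# Radius of the radiometer source area from equation 11.1 (illustrative).

def source_area_radius(z1_m, view_factor):
    """r = z1 * (1/F - 1)**-0.5, where F is the fraction of the measured flux
    contributed by the circular area of radius r (Schmid and others, 1991)."""
    return z1_m * (1.0 / view_factor - 1.0) ** -0.5

for F in (0.50, 0.90, 0.95, 0.99):
    r = source_area_radius(2.0, F)   # sensor assumed to be 2 m above the surface
    print(f"F = {F:.2f}: r = {r:.1f} m")
```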

Methods to calculate the dimensions of flux and concentration “footprints” are available (Schmid, 2002; Kljun and others, 2004). Although the situation illustrated in Figure 11.2 is general, it applies best to instruments placed in the inertial sublayer, well above the complications of the RSL and the complex geometry of the three-dimensional urban surface. Within the UCL, the way in which the effects of radiation and turbulent source areas decay with distance has not yet been reliably evaluated. It can be surmised that they depend on the same properties and resemble the overall forms of those in Figure 11.2. However, obvious complications arise due to the complex radiation geometry, and the blockage and channelling of flow, which are characteristic of the UCL. Undoubtedly, the immediate environment of the station is by far the most critical and the extent of the source area of convective effects grows with stability and the height of the sensor. The distance influencing screen-level (~1.5 m) sensors may be a few tens of metres in neutral conditions, less when they are unstable and perhaps more than 100 m when they are stable. At a height of 3 m, the equivalent distances probably extend up to about 300 m in the stable case. The circle of influence on a screen-level temperature or humidity sensor is thought to have a radius of about 0.5 km typically, but this is likely to depend upon the building density.

Figure 11.2. Conceptual representation of source areas contributing to sensors for radiation and turbulent fluxes of concentrations. If the sensor is a radiometer, 50 or 90 per cent of the flux originates from the area inside the perspective circle. If the sensor is responding to a property of turbulent transport, 50 or 90 per cent of the signal comes from the area inside the respective ellipses. These are dynamic in the sense that they are oriented into the wind and hence move with wind direction and stability.


11.1.1.5 Measurement approaches

It follows from the preceding discussion that, if the objective of an instrumented urban site is to monitor the local-scale climate near the surface, there are two viable approaches, as follows:
(a) Locate the site in the UCL at a location surrounded by average or "typical" conditions for the urban terrain, and place the sensors at heights similar to those used at non-urban sites. This assumes that the mixing induced by flow around obstacles is sufficient to blend properties to form a UCL average at the local scale;
(b) Mount the sensors on a tall tower above the RSL and obtain blended values that can be extrapolated down into the UCL.
In general, approach (a) works best for air temperature and humidity, and approach (b) for wind speed and direction and precipitation. For radiation, the only significant requirement is for an unobstructed horizon. Urban stations, therefore, often consist of instruments deployed both below and above roof level; this requires that site assessment and description include the scales relevant to both contexts.

11.1.1.6 Urban site description

The magnitude of each urban scale does not agree precisely with those commonly given in textbooks. The scales are conferred by the dimensions of the morphometric features that make up an urban landscape. This places emphasis on the need to adequately describe properties of urban areas which affect the atmosphere. The most important basic features are the urban structure (dimensions of the buildings and the spaces between them, the street widths and street spacing), the urban cover (built-up, paved and vegetated areas, bare soil, water), the urban fabric (construction and natural materials) and the urban metabolism (heat, water and pollutants due to human activity). Hence, the characterization of the sites of urban climate stations must take account of these descriptors, use them in selecting potential sites, and incorporate them in metadata that accurately describe the setting of the station.

These four basic features of cities tend to cluster to form characteristic urban classes. For example, most central areas of cities have relatively tall buildings that are densely packed together, so the ground is largely covered with buildings or paved surfaces made of durable materials such as stone, concrete, brick and asphalt, and where there are large releases from furnaces, air conditioners, chimneys and vehicles. Near the other end of the spectrum there are districts with low density housing of one- or two-storey buildings of relatively light construction and considerable garden or vegetated areas, with low heat releases but perhaps large irrigation inputs.

No universally accepted scheme of urban classification for climatic purposes exists. A good approach to the built components is that of Ellefsen (1991), who developed a set of urban terrain zone (UTz) types. He initially differentiates according to three types of building contiguity (attached (row), detached but close-set, detached and open-set). These are further divided into a total of 17 sub-types by function, location in the city, and building height, construction and age. Application of the scheme requires only aerial photography, which is generally available, and the scheme has been applied in several cities around the world and seems to possess generality.

Ellefsen's scheme can be used to describe urban structure for roughness, airflow, radiation access and screening. It can be argued that the scheme indirectly includes aspects of urban cover, fabric and metabolism, because a given structure carries with it the type of cover, materials and degree of human activity. Ellefsen's scheme is less useful, however, when built features are scarce and there are large areas of vegetation (urban forest, low plant cover, grassland, scrub, crops), bare ground (soil or rock) and water (lakes, swamps, rivers). A simpler scheme of urban climate zones (UCzs) is illustrated in Table 11.1. It incorporates groups of Ellefsen's zones, plus a measure of the structure, zH/W (see Table 11.1, Note c), shown to be closely related to flow, solar shading and the heat island, and also a measure of the surface cover (% built) that is related to the degree of surface permeability.

The importance of UCzs is not their absolute accuracy in describing the site, but their ability to classify areas of a settlement into districts which are similar in their capacity to modify the local climate, and to identify potential transitions to different UCzs. Such a classification is crucial when beginning to set up an urban station, so that the spatial homogeneity criteria are met approximately for a station in the UCL or above the RSL. In what follows, it is assumed that the morphometry of the urban area, or a portion of it, has been assessed using detailed maps, and/or aerial photographs, satellite imagery (visible and/or thermal), planning documents or at least a visual survey conducted from a vehicle and/or on foot.


Table 11.1. Simplified classification of distinct urban forms, arranged in approximate decreasing order of their ability to have an impact on local climate (Oke, 2004, unpublished)

Each urban climate zone (UCz)(a) is listed with its effective roughness class(b), aspect ratio(c) and percentage of built (impermeable) cover(d). The schematic images accompanying each zone in the original table distinguish buildings, impervious ground, vegetation and pervious ground.

1. Intensely developed urban with detached close-set high-rise buildings with cladding, e.g. downtown towers: roughness class 8; aspect ratio > 2; % built > 90
2. Intensely high density urban with 2–5 storey, attached or very-close-set buildings, often of bricks or stone, e.g. old city core: roughness class 7; aspect ratio 1.0–2.5; % built > 85
3. Highly developed, medium density urban with row or detached but close-set houses, stores and apartments, e.g. urban housing: roughness class 7; aspect ratio 0.5–1.5; % built 70–85
4. Highly developed, low or medium density urban with large low buildings and paved parking, e.g. shopping malls, warehouses: roughness class 5; aspect ratio 0.05–0.2; % built 70–95
5. Medium development, low density suburban with 1 or 2 storey houses, e.g. suburban houses: roughness class 6; aspect ratio 0.2–0.6, up to > 1 with trees; % built 35–65
6. Mixed use with large buildings in open landscape, e.g. institutions such as hospitals, universities, airports: roughness class 5; aspect ratio 0.1–0.5, depends on trees; % built < 40
7. Semi-rural development, scattered houses in natural or agricultural areas, e.g. farms, estates: roughness class 4; aspect ratio > 0.05, depends on trees; % built < 10

Notes:
(a) A simplified set of classes that includes aspects of the schemes of Auer (1978) and Ellefsen (1990/91), plus physical measures relating to wind, and thermal and moisture control (the roughness, aspect ratio and % built values). Approximate correspondence between UCz and Ellefsen's urban terrain zones is: 1 (Dc1, Dc8), 2 (A1–A4, Dc2), 3 (A5, Dc3–5, Do2), 4 (Do1, Do4, Do5), 5 (Do3), 6 (Do6), 7 (none).
(b) Effective terrain roughness according to the Davenport classification (Davenport and others, 2000); see Table 11.2.
(c) Aspect ratio = zH/W is the average height of the main roughness elements (buildings, trees) divided by their average spacing; in the city centre this is the street canyon height/width. This measure is known to be related to flow regime types (Oke, 1987) and to thermal controls (solar shading and long-wave screening) (Oke, 1981). Tall trees increase this measure significantly.
(d) Average proportion of ground plan covered by built features (buildings, roads and other paved and impervious areas); the rest of the area is occupied by pervious cover (green space, water and other natural surfaces). Permeability affects the moisture status of the ground and hence humidification and evaporative cooling potential.
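Where station metadata are kept in digital form, the attributes of Table 11.1 can be carried alongside the other descriptors of a site. The following minimal Python sketch shows one possible encoding; the dictionary layout and the station identifier are purely illustrative assumptions, not a WMO format.

    # Urban climate zone attributes transcribed from Table 11.1:
    # Davenport roughness class, aspect ratio zH/W and approximate % built cover.
    UCZ_ATTRIBUTES = {
        1: {"roughness_class": 8, "aspect_ratio": "> 2", "percent_built": "> 90"},
        2: {"roughness_class": 7, "aspect_ratio": "1.0-2.5", "percent_built": "> 85"},
        3: {"roughness_class": 7, "aspect_ratio": "0.5-1.5", "percent_built": "70-85"},
        4: {"roughness_class": 5, "aspect_ratio": "0.05-0.2", "percent_built": "70-95"},
        5: {"roughness_class": 6, "aspect_ratio": "0.2-0.6 (up to > 1 with trees)", "percent_built": "35-65"},
        6: {"roughness_class": 5, "aspect_ratio": "0.1-0.5 (depends on trees)", "percent_built": "< 40"},
        7: {"roughness_class": 4, "aspect_ratio": "> 0.05 (depends on trees)", "percent_built": "< 10"},
    }

    # Example: attach the UCz attributes to a (hypothetical) station record.
    station_metadata = {"station_id": "URB001", "ucz": 3, **UCZ_ATTRIBUTES[3]}
    print(station_metadata)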


Although land-use maps can be helpful, it should be appreciated that they depict the function and not necessarily the physical form of the settlement. The task of urban description should result in a map with areas of UCzs delineated. Herein, the UCzs as illustrated in Table 11.1 are used. The categories may have to be adapted to accommodate special urban forms characteristic of some ancient cities or of unplanned urban development found in some less-developed countries. For example, many towns and cities in Africa and Asia do not have as large a fraction of the surface covered by impervious materials, and roads may not be paved.

11.2 Choosing a location and site for an urban station

11.2.1 Location

First, it is necessary to establish clearly the purpose of the station. If there is to be only one station inside the urban area it must be decided if the aim is to monitor the greatest impact of the city, or of a more representative or typical district, or if it is to characterize a particular site (where there may be perceived to be climate problems or where future development is planned). Areas where there is the highest probability of finding maximum effects can be judged initially by reference to the ranked list of UCz types in Table 11.1. Similarly, the likelihood that a station will be typical can be assessed using the ideas behind Table 11.1 and choosing extensive areas of similar urban development for closer investigation.

The search can be usefully refined in the case of air temperature and humidity by conducting spatial surveys, wherein the sensor is carried on foot, or mounted on a bicycle or a car, and taken through areas of interest. After several repetitions, cross-sections or isoline maps may be drawn (see Figure 11.3), revealing where areas of thermal or moisture anomaly or interest lie. Usually, the best time to do this is a few hours after sunset or before sunrise on nights with relatively calm air-flow and cloudless skies. This maximizes the potential for the differentiation of microclimate and local climate differences. It is not advisable to conduct such surveys close to sunrise or sunset because weather variables change so rapidly at these times that meaningful spatial comparisons are difficult.

If the station is to be part of a network to characterize spatial features of the urban climate, a broader view is needed. This consideration should be informed by thinking about the typical spatial form of urban climate distributions. For example, the isolines of urban heat and moisture "islands" indeed look like the contours of their topographic namesakes (Figure 11.3). They have relatively sharp "cliffs", often a "plateau" over much of the urban area interspersed with localized "mounds" and "basins" of warmth/coolness and moistness/dryness. These features are co-located with patches of greater or lesser development such as clusters of apartments, shops, factories or parks, open areas or water. Therefore, a decision must be made: is the aim to make a representative sample of the UCz diversity, or is it to faithfully reflect the spatial structure? In most cases the latter is too ambitious with a fixed-station network in the UCL. This is because it will require many stations to depict the gradients near the periphery, the plateau region, and the highs and lows of the nodes of weaker and stronger than average urban development.

If measurements are to be taken from a tower, with sensors above the RSL, the blending action produces more muted spatial patterns and the question of distance of fetch to the nearest border between UCzs, and the urban-rural fringe, becomes relevant. Whereas a distance to a change in UCz of 0.5 to 1 km may be acceptable inside the UCL, for a tower-mounted sensor the requirement is likely to be more like a few kilometres of fetch.

(Schematic: isotherms labelled +2 to +8 over the built-up area, warmest in the city core (B), with a cool anomaly over a park (A); an arrow indicates the wind direction.)

Figure 11.3. Typical spatial pattern of isotherms in a large city at night with calm, clear weather, illustrating the heat island effect (after Oke, 1982).


Since the aim is to monitor local climate attributable to an urban area, it is necessary to avoid extraneous microclimatic influences or other local or mesoscale climatic phenomena that will complicate the urban record. Therefore, unless there is specific interest in topographically generated climate patterns, such as the effects of cold air drainage down valleys and slopes into the urban area, or the speed-up or sheltering of winds by hills and escarpments, or fog in river valleys or adjacent to water bodies, or geographically locked cloud patterns, and so on, it is sensible to avoid locations subject to such local and mesoscale effects. On the other hand, if a benefit or hazard is derived from such events, it may be relevant to design the network specifically to sample its effects on the urban climate, such as the amelioration of an overly hot city by sea or lake breezes.

11.2.2 Siting

Once a choice of UCz type and its general location inside the urban area is made, the next step is to inspect the map, imagery and photographic evidence to narrow down candidate locations within a UCz. Areas of reasonably homogeneous urban development without large patches of anomalous structure, cover or material are sought. The precise definition of "reasonably" is, however, not possible; almost every real urban district has its own idiosyncrasies that reduce its homogeneity at some scale. Although a complete list is therefore not possible, the following are examples of what to avoid: unusually wet patches in an otherwise dry area, individual buildings that jut up by more than half the average building height, a large paved car park in an area of irrigated gardens, a large, concentrated heat source like a heating plant or a tunnel exhaust vent. Proximity to transition zones between different UCz types should be avoided, as should sites where there are plans for, or a likelihood of, major urban redevelopment. The level of concern about anomalous features decreases with distance away from the site itself, as discussed in relation to source areas.

In practice, for each candidate site a "footprint" should be estimated for radiation (for example, equation 11.1) and for turbulent properties. Then, key surface properties such as the mean height and density of the obstacles and the characteristics of the surface cover and materials should be documented within these footprints. Their homogeneity should then be judged, either visually or using statistical methods. Once target areas of acceptable homogeneity for a screen-level or high-level (above-RSL) station are selected, it is helpful to identify potential "friendly" site owners who could host it. If a government agency is seeking a site, it may already own land in the area which is used for other purposes, or have good relations with other agencies or businesses (offices, work yards, spare land, rights of way) including schools, universities, utility facilities (electricity, telephone, pipelines) and transport arteries (roads, railways). These are good sites, because access may be permitted and also because they often have security against vandalism and may have electrical power connections.

Building roofs have often been used as sites for meteorological observations. This may often have been based on the mistaken belief that at this elevation the instrument shelter is free from the complications of the UCL. In fact, roof-tops have their own very distinctly anomalous microclimates that lead to erroneous results. Air-flow over a building creates strong perturbations in speed, direction and gustiness which are quite unlike the flow at the same elevation away from the building or near the ground (Figure 11.5). Flat-topped buildings may actually create flows on their roofs that are counter to the main external flow, and speeds vary from extreme jetting to near calm. Roofs are also constructed of materials that are thermally rather extreme. In light winds and cloudless skies they can become very hot by day and cold by night. Hence, there is often a sharp gradient of air temperature near the roof. Furthermore, roofs are designed to be waterproof and to shed water rapidly. This, together with their openness to solar radiation and the wind, makes them anomalously dry. In general, therefore, roofs are very poor locations for air temperature, humidity, wind and precipitation observations, unless the instruments are placed on very tall masts. They can, however, be good for observing incoming radiation components.

Once the site has been chosen, it is essential that the details of the site characteristics (metadata) be fully documented (see section 11.4).

11.3 Instrument exposure

11.3.1 Modifications to standard practice

In many respects, the generally accepted standards for the exposure of meteorological instruments set out in Part I of this Guide apply to urban sites. However, there will be many occasions when it is


impossible or makes no sense to conform. This section recommends some principles that will assist in such circumstances; however, all eventualities cannot be anticipated. The recommendations here remain in agreement with general objectives set out in Part I, Chapter 1.

Many urban stations have been placed over short grass in open locations (parks, playing fields) and as a result they are actually monitoring modified rural-type conditions, not representative urban ones. This leads to the curious finding that some rural-urban pairs of stations show no urban effect on temperature (Peterson, 2003). The guiding principle for the exposure of sensors in the UCL should be to locate them in such a manner that they monitor conditions that are representative of the environment of the selected UCz. In cities and towns it is inappropriate to use sites similar to those which are standard in open rural areas. Instead, it is recommended that urban stations be sited over surfaces that, within a microscale radius, are representative of the local scale urban environment. The % built category (Table 11.1) is a crude guide to the recommended underlying surface.

The requirement that most obviously cannot be met at many urban sites is the distance from obstacles: the site should be located well away from trees, buildings, walls or other obstructions (Chapter 1, Part I). Rather, it is recommended that the urban station be centred in an open space where the surrounding aspect ratio (zH/W) is approximately representative of the locality.

When installing instruments at urban sites it is especially important to use shielded cables because of the ubiquity of power lines and other sources of electrical noise at such locations.

11.3.2 Temperature

11.3.2.1 Air temperature

The sensors in general use to measure air temperature (including their accuracy and response characteristics) are appropriate in urban areas. Careful attention to radiation shielding and ventilation is especially recommended. In the UCL, a sensor assembly might be relatively close to warm surfaces, such as a sunlit wall, a road or a vehicle with a hot engine, or it might receive reflected heat from glassed surfaces. Therefore, the shields used should block radiation effectively. Similarly, because an assembly placed in the lower UCL might be too well sheltered, forced ventilation of the sensor is recommended. If a network includes a mixture of sensor assemblies with and without shields and ventilation, this might contribute to inter-site differences. Practices should therefore be uniform.

The surface over which air temperature is measured and the exposure of the sensor assembly should follow the recommendations given in the previous section, namely, the surface should be typical of the UCz and the thermometer screen or shield should be centred in a space with approximately average zH/W. In very densely built-up UCz this might mean that it is located only 5 to 10 m from buildings that are 20 to 30 m high. If the site is a street canyon, zH/W only applies to the cross-section normal to the axis of the street. The orientation of the street axis may also be relevant because of systematic sun-shade patterns. If continuous monitoring is planned, north-south oriented streets are favoured over east-west ones because there is less phase distortion, although the daytime course of temperature may be rather peaked.

At non-urban stations the recommended screen height is between 1.25 and 2 m above ground level. While this is also acceptable for urban sites, it may be better to relax this requirement to allow greater heights. This should not lead to significant error in most cases, especially in densely built-up areas, because observations in canyons show very slight air temperature gradients through most of the UCL, provided that the location is more than 1 m from a surface (Nakamura and Oke, 1988). Measurements at heights of 3 or 5 m are not very different from those at the standard height, have slightly greater source areas, and place the sensor beyond easy reach, thus preventing damage and keeping it away from the path of vehicles. They also ensure greater dilution of vehicle exhaust heat and reduce contamination from dust.

Air temperatures measured above the UCL, using sensors mounted on a tower, are influenced by air exchanged with the UCL plus the effects of the roofs. Roofs have much more thermal variability than most surfaces within the UCL. Most roofs are designed to insulate and hence to minimize heat exchange with the interior of the building. As a result, roof-surface temperatures often become very hot by day, whereas the partially shaded and better conducting canyon walls and floor are cooler. At night circumstances are reversed, with the roofs being relatively cold and


Table 11.2. Davenport classification of effective terrain roughness(a)

Class 4, roughly open (z0 = 0.10 m): Moderately open country with occasional obstacles (e.g. isolated low buildings or trees) at relative horizontal separations of at least 20 obstacle heights
Class 5, rough (z0 = 0.25 m): Scattered obstacles (buildings) at relative distances of 8 to 12 obstacle heights for low solid objects (e.g. buildings) (analysis may need zd)(b)
Class 6, very rough (z0 = 0.5 m): Area moderately covered by low buildings at relative separations of 3 to 7 obstacle heights and no high trees (analysis requires zd)(b)
Class 7, skimming (z0 = 1.0 m): Densely built-up area without much building height variation (analysis requires zd)(b)
Class 8, chaotic (z0 = 2.0 m): City centres with a mix of low- and high-rise buildings (analysis by wind tunnel advised)

Notes:
(a) Abridged version (revised 2000, for urban roughness only) of Davenport and others (2000); for classes 1 to 3 and for rural classes 4 to 8, see Part I, Chapter 5, annex to this Guide and WMO (2003a).
(b) First-order values of zd are given as fractions of the average obstacle height, i.e. 0.5 zH, 0.6 zH and 0.7 zH for Davenport classes 5, 6 and 7, respectively.

canyon surfaces warmer as they release their daytime heat uptake. There may also be complications due to the release of heat from roof exhaust vents. Therefore, while there is little variation of temperature with height in the UCL, there is a discontinuity near roof level both horizontally and vertically. Hence, if a meaningful spatial average is sought, sensors should be well above mean roof level, > 1.5 zH if possible, so that the mixing of roof and canyon air is accomplished. In dealing with air temperature data from an elevated sensor, it is difficult to extrapolate these levels down towards screen level because currently no standard methods are available. Similarly, there is no simple, general scheme for extrapolating air temperatures horizontally inside the UCL. Statistical models work, but they require a large archive of existing observations over a dense network, which is not usually available.

11.3.2.2 Surface temperature

Surface temperature is not commonly measured at urban stations, but it can be a very useful variable to use as input in models to calculate fluxes. A representative surface temperature requires the averaging of an adequate sample of the many surfaces, vertical as well as horizontal, that make up an urban area. This is possible only using infrared remote sensing, either from a scanner mounted on an aircraft or satellite, or a downward-facing pyrgeometer, or one or more radiation thermometers whose combined field of view covers a representative sample of the urban district. Hence, to obtain accurate results, the target must be sampled appropriately and its average emissivity known.

11.3.2.3 Soil and road temperature

It is desirable to measure soil temperature in urban areas. The heat island effect extends down beneath a city, and this may be of significance to engineering design for water pipes or road construction. In practice, the measurement of this variable may be difficult at more heavily developed urban sites. Bare ground may not be available, the soil profile is often highly disturbed, and at depth there may be obstructions or anomalously warm or cool artefacts (for example, empty, full or leaky water pipes, sewers, heat conduits). In urban areas, the measurement of grass minimum temperature has almost no practical utility.

Temperature sensors are often embedded in road pavements, especially in areas subject to freezing. They are usually part of a monitoring station for highway weather. It is often helpful to have sensors beneath both the tire track area and the centre of the lane.

11.3.3 Atmospheric pressure

At the scale of urban areas it will probably not be necessary to monitor atmospheric pressure if there is already a synoptic station in the region. If pressure sensors are included, the recommendations of Part I, Chapter 3, apply. In rooms and elsewhere in the vicinity of buildings there is the probability of pressure “pumping” due to gusts. Also,


interior-exterior pressure differences may exist if the sensor is located in an air-conditioned room. Both difficulties can be alleviated if a static pressure head is installed (see Part I, Chapter 3, section 3.8).

Figure 11.4. Generalized mean (spatial and temporal) wind velocity (u) profile in a densely developed urban area, including the location of the sublayers of the surface layer. The measures on the height scale are the mean height of the roughness elements (zH), the roughness sublayer (zr, or the blending height), the roughness length (z0) and the zero-plane displacement length (zd). The dashed line represents the profile extrapolated from the inertial sublayer; the solid line represents the actual profile.

11.3.4 Humidity

The instruments normally used for humidity (Part I, Chapter 4) are applicable to urban areas. The guidelines given in section 11.3.2.1 for the siting and exposure of temperature sensors in the UCL, and above the RSL, apply equally to humidity sensors. Urban environments are notoriously dirty (dust, oils, pollutants), and several hygrometers are subject to degradation or require increased maintenance in such environments. Hence, if psychrometric methods are used, the wet-bulb sleeve must be replaced more frequently than normal and close attention should be given to ensuring that the distilled water remains uncontaminated. The hair strands of a hair hygrometer can be destroyed by polluted urban air; hence, their use is not recommended for extended periods. The mirror of dew-point hygrometers and the windows of ultraviolet and infrared absorption hygrometers need to be cleaned frequently. Some instruments degrade to such an extent that the sensors have to be completely replaced fairly regularly. Because of shelter from wind in the UCL, forced ventilation at the rate recommended in Part I, Chapter 4, section 4.2 is essential, as is the provision of shielding from extraneous sources of solar and long-wave radiation.

11.3.5 Wind speed and direction

The measurement of wind speed and direction is highly sensitive to flow distortion by obstacles. Obstacles create alterations in the average wind flow and turbulence. Such effects apply at all scales of concern, including the effects of local relief due to hills, valleys and cliffs, sharp changes in roughness or in the effective surface elevation (zd, see below), perturbation of flow around clumps of trees and buildings, individual trees and buildings, and even disturbance induced by the physical bulk of the tower or mounting arm to which the instruments are attached.

11.3.5.1 Mean wind profile

However, if a site is on reasonably level ground, has sufficient fetch downstream of major changes of roughness and is in a single UCz without anomalously tall buildings, a mean wind profile such as that in Figure 11.4 should exist. The mean is both spatial and temporal. Within the UCL no one site can be expected to possess such a profile. Individual locations experience highly variable speed and direction shifts as the air-stream interacts with individual building arrangements, streets, courtyards and trees. In street canyons, the shape of the profile


is different for along-canyon versus across-canyon flow (Christen and others, 2002) and depends on position across and along the street (DePaul and Shieh, 1986). Wind speed gradients in the UCL are small until quite close to the surface. As a first approximation the profile in the UCL can be described by an exponential form (Britter and Hanna, 2003) merging with the log profile near the roof level (Figure 11.4). In the inertial sublayer, the Monin-Obukhov similarity theory applies, including the logarithmic law:

uz = (u*/k){ln[(z − zd)/z0] + ΨM(z/L)}    (11.2)

where u* is the friction velocity; k is von Karman's constant (0.40); z0 is the surface roughness length; zd is the zero-plane displacement height (Figure 11.4); L is the Obukhov stability length (= –u*³/[k(g/θv)QH], where g is the gravitational acceleration, θv is the virtual potential temperature and QH is the turbulent sensible heat flux); and ΨM is a dimensionless function that accounts for the change in curvature of the wind profile away from the neutral profile with greater stability or instability¹. In the neutral case (typically with strong winds and cloud), when ΨM is zero, equation 11.2 reduces to:

uz = (u*/k) ln[(z − zd)/z0]    (11.3)

The wind profile parameters can be measured using a vertical array of anemometers, or measurements of momentum flux or gustiness from fast-response anemometry in the inertial layer, but estimates vary with wind direction and are sensitive to errors (Wieringa, 1996; Verkaik, 2000). Methods to parameterize the wind profile parameters z0 and zd for urban terrain are also available (for reviews, see Grimmond and Oke, 1999; Britter and Hanna, 2003). The simplest methods involve general descriptions of the land use and obstacles (see Tables 11.1 and 11.2 as well as Davenport and others, 2000; Grimmond and Oke, 1999), or a detailed description of the roughness element heights and their spacing from either a geographic information system of the building and street dimensions, or maps and aerial oblique photographs, or airborne/satellite imagery, and the application of one of several empirical formulae (for recommendations, see Grimmond and Oke, 1999).
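The profile relations above lend themselves to a short numerical illustration. The following is a minimal Python sketch (not part of the Guide) of equation 11.3 and of the Obukhov length L defined under equation 11.2; the values of u*, QH, z0, zd and zH are assumptions chosen only for illustration, with z0 and zd taken as first estimates from Table 11.2 and its Note b.

    import math

    K_VON_KARMAN = 0.40   # von Karman's constant, as given in the text
    G = 9.81              # gravitational acceleration (m s-2)

    def obukhov_length(u_star, q_h_kinematic, theta_v):
        """Obukhov stability length L = -u*^3 / [k (g / theta_v) QH], with the
        sensible heat flux QH expressed in kinematic units (K m s-1)."""
        return -u_star**3 / (K_VON_KARMAN * (G / theta_v) * q_h_kinematic)

    def neutral_wind_speed(z, u_star, z0, zd):
        """Neutral logarithmic profile (equation 11.3): uz = (u*/k) ln[(z - zd)/z0].
        Only meaningful in the inertial sublayer, well above the RSL."""
        return (u_star / K_VON_KARMAN) * math.log((z - zd) / z0)

    # Assumed example: a Davenport class 6 district (Table 11.2, z0 = 0.5 m) with
    # mean element height zH = 10 m, so zd is taken as 0.6 zH = 6 m (Note b).
    u_star = 0.4                                  # friction velocity (m s-1), assumed
    print(obukhov_length(u_star, 0.15, 300.0))    # daytime unstable case, L < 0
    for z in (15.0, 25.0, 50.0):                  # candidate anemometer heights (m)
        print(z, round(neutral_wind_speed(z, u_star, z0=0.5, zd=6.0), 2))

Applying the stability correction ΨM requires one of the standard similarity functions referred to in the footnote; the neutral form above is sufficient to show the strong sensitivity of computed speeds to the chosen z0 and zd.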

It is important to incorporate the displacement height zd into urban wind-profile assessments. Effectively, this is equivalent to setting a base for the logarithmic wind profile that recognizes the physical bulk of the urban canopy. It is like setting a new "ground surface" aloft, where the mean momentum sink for the flow is located (Figure 11.4). Depending on the building and tree density, this could set the base of the profile at a height of between 0.5 and 0.8 zH (Grimmond and Oke, 1999). Hence, failure to incorporate it in calculations causes large errors. First estimates can be made using the fractions of zH given in Table 11.2 (Note b).

11.3.5.2 Height of measurement and exposure

The choice of height at which wind measurements should be taken in urban areas is a challenge. However, if some basic principles are applied, meaningful results can be attained. The poor placement of wind sensors in cities is the source of considerable wasted resources and effort and leads to potentially erroneous calculations of pollutant dispersion. Of course, this is even a source of difficulty in open terrain due to obstacles and topographic effects. This is the reason why the standard height for rural wind observations is set at 10 m above ground, not at screen level, and why there the anemometer should not be at a horizontal distance from obstructions of less than 10 obstacle heights (Part I, Chapter 5, section 5.9.2). In typical urban districts it is not possible to find such locations; for example, in a UCz with 10 m high buildings and trees it would require a patch that is at least 100 m in radius. If such a site exists it is almost certainly not representative of the zone. It has already been noted that the roughness sublayer, in which the effects of individual roughness elements persist, extends to a height of about 1.5 zH in a densely built-up area and perhaps higher in less densely developed sites. Hence, in the example district the minimum acceptable anemometer height is at least 15 m, not the standard 10 m. When buildings are much taller, an anemometer at the standard 10 m height would be well down in the UCL, and, given the heterogeneity of urban form and therefore of wind structure, there is little merit in placing a wind sensor beneath, or even at about, roof level. It is well known from wind tunnel and field observations that flow over an isolated solid obstacle, like a tall building, is greatly perturbed both

¹ For more details on L and the form of the ΨM function, see a standard micrometeorology text, for example, Stull (1988), Garratt (1992) or Arya (2001). Note that u* and QH should be evaluated in the inertial layer above the RSL.


immediately over and around it. These perturbations include modifications to the streamlines, the presence of recirculation zones on the roof and in the so-called “bubble” or cavity behind it, and wake effects that persist in the downstream flow for tens of building height multiples that affect a large part of the neighbourhood (Figure 11.5). There are many examples of poorly exposed anemometer-vane systems in cities. The data registered by such instruments are erroneous, misleading, potentially harmful if used to obtain wind input for wind load or dispersion applications, and wasteful of resources. The inappropriateness of placing anemometers and vanes on short masts on the top of buildings cannot be over-emphasized. Speed and directions vary hugely in short distances, both horizontally and vertically. Results from instruments deployed in this manner bear little resemblance to the general flow and are entirely dependent on the specific features of the building itself, the mast location on the structure, and the angle-of-attack of the flow to the building. The circulating and vortex flows seen in Figure 11.5 mean that, if the mast is placed ahead of, on top of, or in the cavity zone behind a building, direction measurements could well be counter to those prevailing in the flow outside the influence of the building’s own wind climate (namely, in zone A of Figure 11.5a), and speeds are highly variable. To get outside the perturbed zone, wind instruments must be

mounted at a considerable height. For example, it has been proposed that such sensors should be at a height greater than the maximum horizontal dimension of the major roof (Wieringa, 1996). This implies an expensive mast system, perhaps with guys that subtend a large area, and perhaps difficulties in obtaining permission to install it. Nevertheless, this is the only acceptable approach if meaningful data are to be measured. Faced with such realities, sensors should be mounted so that their signal is not overly compromised by their support structure. The following recommendations are made:
(a) In urban districts with low element height and density (UCz 6 and 7), it may be possible to use a site where the "open country" standard exposure guidelines can be met. To use the 10 m height, the closest obstacles should be at least 10 times their height distant from the anemometer and not be more than about 6 m tall on average;
(b) In more densely built-up districts, with relatively uniform element height and density (buildings and trees), wind speed and direction measurements should be taken with the anemometer mounted on a mast of open construction at a minimum height of 1.5 times the mean height of the elements;
(c) In urban districts with scattered tall buildings the recommendations are as in (b), but with special attention to avoiding the wake zones of the tall structures;
(d) It is not recommended that measurements of wind speed or direction be taken in densely built areas with multiple high-rise structures unless a very tall tower is used.
Anemometers on towers of open construction should be mounted on booms (cross-arms) that are long enough to keep the sensors at least two (preferably three) tower diameters' distance from the side of the mast (Gill and others, 1967). Sensors should be mounted so that the least frequent flow direction passes through the tower. If this is not possible, or if the tower construction is not very open, two or three booms with duplicate sensors may have to be installed to avoid wake effects and upwind stagnation produced by the tower itself.

If anemometer masts are to be mounted on tall or isolated buildings, the effects of the dimensions of that structure on the flow must be considered (see Part II, Chapter 5, section 5.3.3). This is likely to require analysis using wind tunnel, water flume or computational fluid dynamics models specifically


Figure 11.5. Typical two-dimensional flow around a building with flow normal to the upwind face. (a) Streamlines and flow zones: A represents undisturbed flow, B displacement, C cavity and D wake (after Halitsky, 1963). (b) Flow and vortex structures (simplified after Hunt and others, 1978).


tailored to the building of interest, and including its surrounding terrain and structures. The objective is to ensure that all wind measurements are taken at heights where they are representative of the upstream surface roughness at the local scale and are as free as possible of confounding influences from microscale or local scale surface anomalies. Hence the emphasis on gaining accurate measurements at whatever height is necessary to reduce error rather than measuring at a standard height. This may require splitting the wind site from the location of the other measurement systems. It may also result in wind observations at several different heights in the same settlement. That will necessitate extrapolation of the measured values to a common height, if spatial differences are sought or if the data are to form input to a mesoscale model. Such extrapolation is easily achieved by applying the logarithmic profile (equation 11.2) to two heights:

u1/uref = ln(z1/z0) / ln(zref/z0)    (11.4)

where zref is the chosen reference height; z1 is the height of the site anemometer; and z0 is the roughness length of the UCz. In urban terrain it is correct to define the reference height to include the zero-plane displacement height, namely, both z1 and zref have the form (zx − zd), where the subscript x stands for "1" or "ref". A suitable reference height could be 50 m above displacement height. Other exposure corrections for flow distortion, topography and roughness effects can be made as recommended in Part I, Chapter 5 (see section 5.9.4: Exposure correction). It may well be that suitable wind observations cannot be arranged for a given urban site. In that case, it is still possible to calculate the wind at the reference height using wind observations at another urban station or the airport, using the "logarithmic transformation" model of Wieringa (1986):

uzA = uzB [ln(zr/z0B) · ln(zA/z0A)] / [ln(zB/z0B) · ln(zr/z0A)]    (11.5)

where the subscripts A and B refer to the site of interest where winds are wanted and the site where standard wind measurements are available, respectively. The blending height zr should either be taken as 4 zH (section 11.1.1.3) or be given a standard value of 60 m; the method is not very sensitive to this term. Again, if either site has dense, tall roughness elements, the corresponding height scale should incorporate zd.
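For such height and inter-site transformations, the following minimal Python sketch (not part of the Guide) applies equations 11.4 and 11.5 under neutral conditions. All numerical values are illustrative assumptions; where dense, tall roughness elements are present, the heights supplied should first have zd subtracted, as noted above.

    import math

    def extrapolate_to_reference(u1, z1, zref, z0, zd=0.0):
        """Equation 11.4: scale a speed u1 measured at height z1 to the reference
        height zref with the logarithmic profile; heights are taken as (z - zd)."""
        return u1 * math.log((zref - zd) / z0) / math.log((z1 - zd) / z0)

    def transform_between_sites(u_zB, zA, zB, z0A, z0B, zr=60.0):
        """Equation 11.5 (the "logarithmic transformation" of Wieringa, 1986):
        estimate the wind at height zA of site A from a measurement u_zB taken
        at height zB of site B, via the blending height zr (60 m, or 4 zH)."""
        return u_zB * (math.log(zr / z0B) * math.log(zA / z0A)) / (
            math.log(zB / z0B) * math.log(zr / z0A)
        )

    # Assumed example: a 5 m/s observation at 10 m over an airport (z0B = 0.03 m,
    # assumed) transformed to 15 m over a UCz 5 district (z0A = 0.5 m).
    print(round(transform_between_sites(5.0, zA=15.0, zB=10.0, z0A=0.5, z0B=0.03), 2))
    # Scale an urban observation of 4 m/s at 25 m to a 50 m reference height.
    print(round(extrapolate_to_reference(4.0, z1=25.0, zref=50.0, z0=0.5, zd=6.0), 2))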

11.3.5.3 Wind sensor considerations

Instruments used to measure wind speed and direction, gustiness and other characteristics of the flow in non-urban environments are applicable to urban areas. In cities, wind direction should always be measured, as well as speed, in order to allow azimuth-dependent corrections for tower influence to be made. If mechanical cup anemometers are used, maintenance should be more frequent because of the dirtiness of the atmosphere, and close attention should be given to bearings and corrosion. If measurements are taken in the UCL, gustiness may increase the problem of cup over-speeding, and too much shelter may cause anemometers to operate near or below their threshold minimum speed. This must be dealt with through heightened maintenance and perhaps by using fast-response anemometers, propeller-type anemometers or sonic anemometers. Propeller anemometers are less prone to over-speeding, and sonic anemometers, having no moving parts, are practically maintenance-free. However, they are expensive, require sophisticated electronic logging and processing, and not all models work when it is raining.

11.3.6 Precipitation

The instruments and methods for the measurement of precipitation given in Part I, Chapter 6 are also relevant to urban areas. The measurement of precipitation as rain or snow is always susceptible to errors associated with the exposure of the gauge, especially in relation to the wind field in its vicinity. Given the urban context and the highly variable wind field in the UCL and the RSL, concerns arise from four main sources, as follows:
(a) The interception of precipitation during its trajectory to the ground by nearby collecting surfaces such as trees and buildings;
(b) Hard surfaces near the gauge which may cause splash-in into the gauge, and overhanging objects which may drip precipitation into the gauge;
(c) The spatial complexity of the wind field around obstacles in the UCL causes very localized concentration or absence of rain- or snow-bearing airflow;
(d) The gustiness of the wind in combination with the physical presence of the gauge itself causes anomalous turbulence around it which leads to under- or over-catch.


In open country, standard exposure requires that obstacles should be no closer than two times their height. In some ways, this is less restrictive than for temperature, humidity or wind measurements. However, in the UCL the turbulent activity created by flow around sharp-edged buildings is more severe than that around natural obstacles and may last for greater distances in their wake. Again, the highly variable wind speeds and directions encountered on the roof of a building mean that such sites should be avoided.

On the other hand, unlike temperature, humidity and wind measurements, the object of precipitation measurements is often not the analysis of local effects, except perhaps in the case of rainfall rate. Some urban effects on precipitation may be initiated at the local scale (for example, by a major industrial facility), but may not show up until well downwind of the city. Distinct patterns within an urban area are more likely to be due to relief or coastal topographic effects. Selecting an extensive open site in the city, where normal exposure standards can be met, may be acceptable, but it almost certainly will mean that the gauge will not be co-located with the air temperature, humidity and wind sensors. While the latter sensors need to be representative of the local scale urban structure, cover, fabric and metabolism of a specific UCz, this is not the case for precipitation. However, the local environment of the gauge is important if the station is to be used to investigate intra-urban patterns of precipitation type. For example, the urban heat island has an influence on the survival of different forms of precipitation: snow or sleet at the cloud base may melt in the warmer urban atmosphere and fall as rain at the ground. This may result in snow at rural and suburban sites when the city centre registers rain.

With regard to precipitation gauges in urban areas, it is recommended that:
(a) Gauges should be located in open sites within the city where the standard exposure criteria can be met (for example, playing fields, open parkland with a low density of trees, urban airports);
(b) Gauges should be located in conjunction with wind instruments if a representative exposure for them is found. In other than low density built-up sites, this probably entails mounting the gauge above roof level on a mast. This means that the gauge will be subject to greater than normal wind speed and hence the error of estimation will be greater than near the

surface, and the gauge output will have to be corrected. Such correction is feasible if wind is measured on the same mast. It also means that automatic recording is favoured, and the gauge must be checked regularly to ensure that it is level and that the orifice is free of debris;
(c) Gauges should not be located on the roofs of buildings unless they are exposed at a sufficient height to avoid the wind envelope of the building;
(d) The measurement of snowfall depth should be taken at an open site or, if made at developed sites, a large spatial sample should be obtained to account for the inevitable drifting around obstacles. Such sampling should include streets oriented in different directions.

Urban hydrologists are interested in rainfall rates, especially during major storm events. Hence, tipping-bucket rain gauges or weighing gauges have utility. The measurement of rain and snowfall in urban areas stands to benefit from the development of techniques such as optical raingauges and radar. Dew, ice and fog precipitation also occurs in cities and can be of significance to the water budget, especially for certain surfaces, and may be relevant to applications such as plant diseases, insect activity, road safety and finding a supplementary source of water resources. The methods outlined in Part I, Chapter 6 are appropriate for urban sites.

11.3.7 Radiation

At present, there is a paucity of radiation flux measurements conducted in urban areas. For example, there are almost none in the Global Energy Balance Archive of the World Climate Programme and the Atmospheric Radiation Measurement Program of the US Department of Energy. Radiation measurement sites are often located in rural or remote locations specifically to avoid the aerosol and gaseous pollutants of cities which “contaminate” their records. Even when a station has the name of a city, the metadata usually reveal they are actually located well outside the urban limits. If stations are located in the built-up area, only incoming solar (global) radiation is likely to be measured; neither incoming long-wave radiation nor any fluxes with outgoing components are monitored. For the most part, short-term experimental projects focusing specifically on urban effects measure both the receipt and loss of radiation in cities. All short- and long-wave fluxes are affected by the special properties of the atmosphere and surface of cities, and the same is true for the net all-wave radiation balance that


effectively drives the urban energy balance (Oke, 1988a). All of the instruments, calibrations and corrections, and most of the field methods outlined in relation to the measurement of radiation at open country sites in Part I, Chapter 7, apply to urban areas. Only differences, or specific urban needs or difficulties, are mentioned here.

11.3.7.1 Incoming fluxes

Incoming solar radiation is such a fundamental forcing variable of urban climate that its measurement should be given a high priority when a station is established or upgraded. Knowledge of this term, together with standard observations of air temperature, humidity and wind speed, plus simple measures of the site structure and cover, allows a meteorological pre-processor scheme (namely, methods and algorithms used to convert standard observation fields into the variables required as input by models but not measured, for example, fluxes, stability, mixing height, dispersion coefficients, and so on), such as the Hybrid Plume Dispersion Model (Hanna and Chang, 1992) or the Local-scale Urban Meteorological Parameterization Scheme (Grimmond and Oke, 2002), to be used to calculate much more sophisticated quantities such as atmospheric stability, turbulent statistics, and the fluxes of momentum, heat and water vapour. These in turn make it possible to predict the mixing height and pollutant dispersion (COST 710, 1998; COST 715, 2001). Furthermore, solar radiation can be used as a surrogate for daytime cloud activity, and is the basis of applications in solar energy, daylight levels in buildings, pedestrian comfort, legislated rights to solar exposure and many other fields. At automatic stations, the addition of solar radiation measurement is simple and relatively inexpensive.

The exposure requirements for pyranometers and other incoming flux sensors are relatively easily met in cities. The fundamental needs are for the sensor to be level, free of vibration and free of any obstruction above the plane of the sensing element, including both fixed features (buildings, masts, trees and hills) and ephemeral ones (clouds generated from exhaust vents or pollutant plumes). Therefore, a high, stable and accessible platform like the roof of a tall building is often ideal. It may be impossible to avoid the short-term obstruction of direct-beam solar radiation impinging on an up-facing radiometer by masts, antennas, flag poles and similar structures. If this occurs, the location of the obstruction and the typical duration of its impact on the sensing element should be fully documented (see section 11.4). Methods to correct for such interference are mentioned in Part I, Chapter 7. It is also important to ensure that there is no excessive reflection from very light-coloured walls that may extend above the local horizon. It is essential to clean the upper domes at regular intervals; in heavily polluted environments this may mean on a daily basis.

Other incoming radiation fluxes are also desirable, but their inclusion depends on the nature of the city, the potential applications and the cost of the sensors. These radiation fluxes (and their instruments) are the following: incoming direct solar beam (pyrheliometer), diffuse sky solar (pyranometer fitted with a shade ring or a shade disc on an equatorial mount), solar ultraviolet (broadband and narrowband sensors, and spectrometers) and long-wave (pyrgeometer). All of these radiation fluxes have useful applications: beam (pollution extinction coefficients), diffuse (interior daylighting, solar panels), ultraviolet (depletion by ozone and damage to humans, plants and materials) and long-wave (monitoring nocturnal cloud and the enhancement of the flux by pollutants and the heat island effect).

11.3.7.2 Outgoing and net fluxes

The reflection of solar radiation and the emission and reflection of long-wave radiation from the underlying surface, and the net result of short-, long- and all-wave radiant fluxes, are currently seldom monitored at urban stations. This means that significant properties of the urban climate system remain unexplored. The albedo, which decides if solar radiation is absorbed by the fabric or is lost back to the atmosphere and space, will remain unknown. The opportunity to invert the Stefan-Boltzmann relation and solve for the surface radiant temperature is lost. The critical net radiation that supports the warming/cooling of the fabric, and the exchanges of water and heat between the surface and the urban boundary layer, is missing. Of these, net all-wave radiation data are lacking the most. Results from a well-maintained net radiometer are invaluable to drive a pre-processor scheme and as a surrogate measurement of cloud.

The main difficulty in measuring outgoing radiation terms accurately is the exposure of the down-facing radiometer so that it views a representative area of the underlying urban surface. The radiative source area (equation 11.1 and Figure 11.2) should ideally "see" a representative sample of the main surfaces contributing to the flux. In the standard exposure cases, defined in the relevant sections of Part I, Chapter 7, a sensor height of 2


m is deemed appropriate over a short grass surface. At that height, 90 per cent of the flux originates from a circle with a diameter of 12 m on the surface. Clearly, a much greater height is necessary over an urban area in order to sample an area that contains a sufficient population of surface facets to be representative of that UCz. Considering the case of a radiometer at a height of 20 m (at the top of a 10 m high mast mounted on a 10 m high building) in a densely developed district, the 90 per cent source area has a diameter of 120 m at ground level. This might seem sufficient to "see" several buildings and roads, but it must also be considered that the system is three-dimensional, not quasi-flat like grass. At the level of the roofs in the example, the source area is only 60 m in diameter, and relatively few buildings may be viewed. The question becomes whether the sensor can "see" an appropriate mix of climatically active surfaces. This means that the sensor must not only see an adequate set of plan-view surface types, but also sample appropriate fractions of roof, wall and ground surfaces, including the correct fractions of each that are in the sun or shade. This is a non-trivial task that depends on the surface structure and the positions of both the sensor and the sun in space above the array. Soux, Voogt and Oke (2004) developed a model to calculate these fractions for relatively simple urban-like geometric arrays. However, more work is needed before guidelines specific to individual UCz types are available. It seems likely that the sensor height has to be greater than that for turbulence measurements.

The non-linear nature of radiative source area effects is clear from equation 11.1 (see Figure 11.2). The greater weighting of surfaces closer to the mast location means that the immediate surroundings are the most significant. In the previous example of the radiometer at a height of 20 m on a 10 m high building, 50 per cent of the signal at roof level comes from a circle of only 20 m in diameter (perhaps only a single building). If the roof of that building, or any other surface on which the mast is mounted, has anomalous radiative properties (albedo, emissivity or temperature), it disproportionately affects the flux, which is supposed to be representative of a larger area. Hence, roofs with large areas of glass or metal, or with an unusually dark or light colour, or those designed to hold standing water, should be avoided.
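The scaling of the radiative source area with sensor height can be illustrated with a short sketch. It assumes that equation 11.1 takes the simple form F = r²/(r² + z²) for a down-facing, cosine-response radiometer over a level surface; this form is consistent with the worked figures quoted above, but it is indicative only and ignores the three-dimensional structure just discussed.

    import math

    def source_area_radius(sensor_height_m, fraction):
        """Radius of the circle contributing `fraction` of the signal of a
        down-facing cosine-response radiometer over a flat surface, assuming
        the view factor F = r^2 / (r^2 + z^2)."""
        return sensor_height_m * math.sqrt(fraction / (1.0 - fraction))

    # Worked figures from the text:
    print(2 * source_area_radius(2.0, 0.90))    # ~12 m diameter at 2 m height
    print(2 * source_area_radius(20.0, 0.90))   # ~120 m at 20 m above ground
    print(2 * source_area_radius(10.0, 0.90))   # ~60 m at 10 m above roof level
    print(2 * source_area_radius(10.0, 0.50))   # ~20 m for the 50 per cent area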

The problems associated with down-facing radiometers at large heights include: (a) the difficulty of ensuring that the plane of the sensing element is level; (b) ensuring that at large zenith angles the sensing element does not "see" direct-beam solar radiation or incoming long-wave radiation from the sky; and (c) considering whether there is a need to correct the results to account for radiative flux divergence in the air layer between the instrument height and the surface of interest. To eliminate extraneous solar or long-wave radiation near the horizon, it may be necessary to install a narrow collar that restricts the field of view to a few degrees less than 2π. This will necessitate a small correction applied to the readings to account for the missing diffuse solar input (see Part I, Chapter 7, Annex 7.E for the case of a shading ring) or the extra long-wave input from the collar.

Inverted sensors may be subject to error because their backs are exposed to solar heating. This should be avoided by using some form of shielding and insulation. Maintaining the cleanliness of the instrument domes and wiping away deposits of water or ice may also be more difficult. The inability to observe the rate and effectiveness of instrument ventilation at a certain height means that instruments that do not need aspiration should be preferred. The ability to lower the mast to attend to cleaning, the replacement of desiccant or polyethylene domes, and levelling is an advantage.

It is recommended that:
(a) Down-facing radiometers be placed at a height at least equal to that of a turbulence sensor (namely, a minimum of 2 zH is advisable) and preferably higher;
(b) The radiative properties of the immediate surroundings of the radiation mast should be representative of the urban district of interest.

11.3.8 Sunshine duration

The polluted atmospheres of urban areas cause a reduction in sunshine hours compared with their surroundings or pre-urban values (Landsberg, 1981). The instruments, methods and exposure recommendations given in Part I, Chapter 8 are applicable to an urban station.

11.3.9 Visibility and meteorological optical range

The effects of urban areas upon visibility and meteorological optical range (MOR) are complex because, while pollutants tend to reduce visibility and MOR through their impact on the attenuation of light and the enhancement of certain types of fog, urban heat and humidity island effects often act to diminish the frequency and severity of fog and low cloud. There is considerable practical value in having


urban visibility and MOR information for fields such as aviation, road and river transportation and optical communications, and thus to include these observations at urban stations. The visual perception of visibility is hampered in cities. While there are many objects and lights that can serve as range targets, it may be difficult to obtain a sufficiently uninterrupted line of sight at the recommended height of 1.5 m. The use of a raised platform or the upper level of buildings is considered non-standard and is not recommended. Observations near roof level may also be affected by scintillation from heated roofs, or the “steaming” of water from wet roofs during drying, or pollutants and water clouds released from chimneys and other vents. Instruments to measure MOR, such as transmissometers and scatter meters, generally work well in urban areas. They require relatively short paths and will give good results if the optics are maintained in a clean state. Naturally, the instrument must be exposed at a location that is representative of the atmosphere in the vicinity, but the requirements are no more stringent than for other instruments placed in the UCL. It may be that, for certain applications, knowledge of the height variation of MOR is valuable, for example, the position of the fog top or the cloud base. 11.3.10 evaporation and other fluxes

that is likely to force evaporation at unrealistically high rates. Consider the case of an evaporation pan installed over a long period which starts out at a semi-arid site that converts to irrigated agricultural uses and is then encroached upon by suburban development and is later in the core of a heavily developed urban area. Its record of evaporation starts out very high, because it is an open water surface in hot, dry surroundings. Therefore, although actual evaporation in the area is very low, advection forces the loss from the pan to be large. Because the introduction of irrigation makes conditions cooler and more humid, the pan readings drop, but actual evaporation is large. Inasmuch as urban development largely reverses the environmental changes and reduces the wind speed near the ground, pan losses increase but the actual evaporation probably drops. Hence, throughout this sequence pan evaporation and actual evaporation are probably in anti-phase. During the agricultural period a pan coefficient might have been applied to convert the pan readings to those typical of short grass or crops. No such coefficients are available to convert pan to urban evaporation, even if the readings are not corrupted by the complexity of the UCL environment. In summary, the use of standard evaporation instruments in the UCL is not recommended. The dimensions and heterogeneity of urban areas renders the use of full-scale lysimeters impractical (for example, the requirement to be not less than 100 to 150 m from a change in surroundings). Micro-lysimeters can give the evaporation from individual surfaces, but they are still specific to their surroundings. Such lysimeters need careful attention, including soil monolith renewal to prevent drying out, and are not suitable for routine long-term observations. Spatially averaged evaporation and other turbulent fluxes (momentum, sensible heat, carbon dioxide) information can be obtained from observations above the RSL. Several of these fluxes are of greater practical interest in urban areas than in many open country areas. For example, the vertical flux of horizontal momentum and the integral wind statistics and spectra are needed to determine wind loading on structures and the dispersion of air pollutants. The sensible heat flux is an essential input to calculate atmospheric stability (for example, the flux Richardson number and the Obukhov length) and the depth of the urban mixing layer. Fast response eddy covariance or standard deviation methods are recommended, rather than

Urban development usually leads to a reduction in evaporation primarily due to the fact that built features seal the surface and that vegetation has been removed. Nonetheless, in some naturally dry regions, an increase may occur if water is imported from elsewhere and used to irrigate urban vegetation. Very few evaporation measurement stations exist in urban areas. This is understandable because it is almost impossible to interpret evaporation measurements conducted in the UCL using atmometers, evaporation pans or lysimeters. As detailed in Part I, Chapter 10, such measurements must be at a site that is representative of the area; not closer to obstacles than 5 times their height, or 10 times if they are clustered; not placed on concrete or asphalt; not unduly shaded; and free of hard surfaces that may cause splash-in. In addition to these concerns, the surfaces of the instruments are assumed to act as surrogates for vegetation or open water systems. Such surfaces are probably not representative of the surroundings at an urban site. Hence, they are in receipt of micro-advection


profile gradient methods. Appropriate instruments include sonic anemometers, infrared hygrometers and gas analysers and scintillometers. The sensors should be exposed in the same manner as wind sensors: above the RSL but below the internal boundary layer of the UCz of interest. Again, such measurements rely on the flux “footprint” being large enough to be representative of the local area of interest. If such flux measurements are beyond the financial and technical resources available, a meteorological pre-processor scheme such as the Ozone Limiting Method, the Hybrid Plume Dispersion Method or the Local-scale Urban Meteorological Parameterization Scheme (see section 11.3.7) can be an acceptable method to obtain aerially representative values of urban evaporation and heat flux. Such schemes require only spatially representative observations of incoming solar radiation, air temperature, humidity and wind speed and general estimates of average surface properties such as albedo, emissivity, roughness length and the fractions of the urban district that are vegetated or built-up or irrigated. Clearly, the wind speed observations must conform to the recommendations in section 11.3.5. Ideally air temperature and humidity should also be observed above the RSL; however, if only UCL values are available, this is usually acceptable because such schemes are not very sensitive to these variables. 11.3.11 soil moisture

of irrigation. All of these elements lead to a very patchy urban soil moisture field that may have totally dry plots situated immediately adjacent to over-watered lawns. Hence, while some idea of local-scale soil moisture may be possible in areas with very low urban development, or where the semi-natural landscape has been preserved, it is almost impossible to characterize in most urban districts. Again, in this case it may be better to use rural values that give a regional background value rather than to have no estimate of soil moisture availability. 11.3.12 Present weather

If human observers or the usual instrumentation are available, the observation of present weather events and phenomena such as rime, surface ice, fog, dust and sand storms, funnel clouds and thunder and lightning can be valuable, especially those with practical implications for the efficiency or safety of urban activities, for example transportation. If archiving facilities are available, the images provided by webcams can provide very helpful evidence of clouds, short-term changes in cloud associated with fronts, fog banks that ebb and flow, low cloud that rises and falls, and the arrival of dust and sand storm fronts. 11.3.13 cloud

Knowledge of urban soil moisture can be useful, for example, to gardeners and in the calculation of evaporation. Its thermal significance in urban landscapes is demonstrated by the remarkably distinct patterns in remotely sensed thermal imagery. By day, any patch with active vegetation or irrigated land is noticeably cooler than land that is built on, paved or bare. However, the task of sampling to obtain representative values of soil moisture is daunting. Some of the challenges presented include the fact that large fractions of urban surfaces are completely sealed over by paved and built features; much of the exposed soil has been highly disturbed in the past during construction activity or following abandonment after urban use; the “soil” may actually be largely formed from the rubble of old buildings and paving materials or have been imported as soil or fill material from distant sites; or the soil moisture may be affected by seepage from localized sources such as broken water pipes or sewers or be the result

Although cloud cover observation is rare in large urban areas, this information is very useful. All of the methods and instruments outlined in Part I, Chapter 15 are applicable to urban areas. The large number and intensity of light sources in cities, combined with a hazy, sometimes polluted, atmosphere, make visual observation more difficult. Where possible, the observational site should avoid areas with particularly bright lighting. 11.3.14 atmospheric composition

The monitoring of atmospheric pollution in the urban environment is increasingly important. However, this is another specialist discipline and will not be dealt with in this chapter. Part I, Chapter 17 deals with the subject in the broader context of the Global Atmosphere Watch. 11.3.15 Profiling techniques for the urban boundary layer

Because urban influences extend throughout the planetary boundary layer (Figure 11.1), it is necessary to use towers and masts to obtain observations


above the RSL to probe higher. Of special interest are effects on the wind field and the vertical temperature structure including the depth of the mixing layer and their combined role in affecting pollutant dispersion. All of the special profiling techniques outlined in Part II, Chapter 5 are relevant to urban areas. Acoustic sounders (sodars) are potentially very useful; nonetheless, it must be recognized that they suffer from two disadvantages: first, their signals are often interfered with by various urban sources of noise (traffic, aircraft, construction activity, and even lawnmowers); and second, they may not be allowed to operate if they cause annoyance to residents. Wind profiler radars, radio-acoustic sounding systems, microwave radiometers, microwave temperature profilers, laser radars (lidars) and modified ceilometers are all suitable systems to monitor the urban atmosphere if interference from ground clutter can be avoided. Similarly, balloons for wind tracking, boundary layer radiosondes (minisondes) and instrumented tethered balloons can all be used with good success rates as long as air traffic authorities grant permission for their use. Instrumented towers and masts can provide an excellent means of placing sensors above roof level and into the inertial sublayer, and very tall structures may permit measurements into the mixing layer above. However, it is necessary to emphasize the precautions given in Part II, Chapter 5, section 5.3.3 regarding the potential interference with atmospheric properties by the support structure. Although tall buildings may appear to provide a way to reach higher into the urban boundary layer, unless obstacle interference effects are fully assessed and measures instituted to avoid them, the deployment of sensors can be unfruitful and probably misleading. 11.3.16 satellite observations

data have been recorded, gathered and transmitted, in order to extract accurate conclusions from their analysis” (WMO, 2003a). It can be argued that this is even more critical for an urban station, because urban sites possess both an unusually high degree of complexity and a greater propensity to change. The complexity makes every site truly unique, whereas good open country sites conform to a relatively standard template. Change means that site controls are dynamic, meaning that documentation must be updated frequently. In Figure 11.6 it is assumed that the minimum requirements for station metadata set by WMO (2003a) are all met and that hopefully some or all of the best practices they recommend are implemented. Here, emphasis is placed on special urban characteristics that need to be included in the metadata, in particular under the categories of “local environment” and “historical events”. 11.4.1 local environment

Remote sensing by satellite with adequate resolution in the infrared may be relevant to extended urban areas; however, a description of this is outside the scope of this chapter. Some information is available in Part II, Chapter 8, and a review is given in Voogt and Oke, 2003.

11.4

Metadata

The full and accurate documentation of station metadata (see Part I, Chapter 1) is absolutely essential for any station “to ensure that the final data user has no doubt about the conditions in which

As explained in section 11.1.1, urban stations involve the exposure of instruments both within and above the urban canopy. Hence, the description of the surroundings must include both the microscale and the local scale. Following WMO (2003a), with adaptations to characterize the urban environment, it is recommended that the following descriptive information be recorded for the station: (a) A map at the local scale to mesoscale (~1:50 000) as in Figure 11.6(a), updated as necessary to describe large-scale urban development changes (for example, conversion of open land to housing, construction of a shopping centre or airport, construction of new tall buildings, cutting of a forest patch, drainage of a lake, creation of a detention pond). Ideally, an aerial photograph of the area should also be provided and a simple sketch map (at 1:500 000 or 1:1 000 000) to indicate the location of the station relative to the rest of the urbanized region (Figures 11.6(b) and (c)) and any major geographic features such as large water bodies, mountains and valleys or change in ecosystem type (desert, swamp, forest). An aerial oblique photograph can be especially illuminating because the height of buildings and trees can also be appreciated. If available, aerial or satellite infrared imagery may be instructive regarding important controls on microclimate. For example, relatively cool surfaces by day usually indicate the availability of moisture or materials with anomalous surface emissivity. Hotter than normal


Figure 11.6. Minimum information necessary to describe the local-scale environment of an urban station, consisting of (a) a template to document the local setting; (b) a sketch map to situate the station in the larger urban region; and (c) an aerial photograph.

areas may be very dry, or have a low albedo or very good insulation. At night, relative coolness indicates good insulation, and relative warmth the opposite, or it could be a material with high thermal admittance that is releasing stored daytime heat or there could be an anomalous source of anthropogenic heat. UCz and Davenport roughness classes can be judged using Tables 11.1 or 11.2; (b) A microscale sketch map (~1:5 000), according to metadata guidelines, updated each year (Figure 11.7(a)); (c) Horizon mapping using a clinometer and compass survey in a circle around the screen (as shown in the diagram at the base of the template, Figure 11.7(a)), and a fisheye lens photograph taken looking vertically at the zenith with the camera’s back placed on the ground near the screen, but not such that any of the sky is blocked by it (Figure 11.7(b)). If a fisheye lens is not available, a simpler approach is to take a photograph of a hemispheric reflector (Figure 11.7(c)). This should be updated every year or more frequently if there are marked changes in horizon obstruction, such as the construction or demolition of a new building nearby, or the removal of trees;

(d) Photographs taken from cardinal directions of the instrument enclosure and of other instrument locations and towers; (e) A microscale sketch of the instrument enclosure, updated when instruments are relocated or other significant changes occur; (f) If some of the station’s measurements (wind, radiation) are made away from the enclosure (on masts, roof-tops, more open locations), repeat steps (b) to (d) above for each site.

11.4.2 Historical events

Urban districts are subject to many forces of change, including new municipal legislation that may change the types of land uses allowed in the area, or the height of buildings, or acceptable materials and construction techniques, or environmental, irrigation or traffic laws and regulations. Quite drastic alterations to an area may result from central planning initiatives for urban renewal. More organic alterations to the nature of a district also arise because of in- or out-migrations of groups of people, or when an area comes into, or goes out of, favour or style as a place to live or work. The urban area may be a centre of conflict and destruction. Such events should be documented so that later users of the data understand some of the



Figure 11.7. Information required to describe the microscale surroundings of an urban climate station; (a) a template for a metadata file; (b) an example of a fisheye lens photograph of a street canyon illustrating horizon obstruction; and (c) a UK Met Office hemispheric reflector placed on a raingauge. context of changes that might appear in the urban climate. 11.4.3 Observance of other WMO recommendations This question is essentially unanswerable when a settlement has been built, and, even if the settlement had not been built, the landscape may well have evolved into a different state compared with the pre-existing one (for example, owing to other human activity such as agriculture or forestry). The assessment of urban effects is therefore fraught with methodological difficulties and no “truth” is possible, only surrogate approximations. If an urban station is being established alone, or as part of a network, to assess urban effects on weather and climate, it is recommended that careful consideration be given to the analysis given in Lowry (1977), and Lowry and Lowry (2001).

All other WMO recommendations regarding the documentation of metadata, including station identifiers, geographical data, instrument exposure, types of instruments, instrument mounting and sheltering, data recording and transmission, observing practices, metadata storage and access and data processing should be observed at urban stations.
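By way of illustration only, the sketch below shows one possible way of holding such station metadata in machine-readable form. The classes, field names and values are invented for this example and do not represent a WMO-prescribed schema; only the categories follow the recommendations above.

# Illustrative sketch only: a hypothetical structure for urban-station metadata.
# Field names and example values are invented; they are not a WMO schema.
from dataclasses import dataclass, field
from typing import List, Dict


@dataclass
class SensorExposure:
    variable: str          # e.g. "air temperature", "wind"
    location: str          # e.g. "screen in enclosure", "mast on roof"
    height_m: float        # sensor height above ground (m)
    notes: str = ""


@dataclass
class UrbanStationMetadata:
    station_id: str
    latitude: float
    longitude: float
    elevation_m: float
    urban_climate_zone: str              # UCz class as judged from Table 11.1
    davenport_roughness_class: int       # from Table 11.2
    sensor_exposures: List[SensorExposure] = field(default_factory=list)
    documents: Dict[str, str] = field(default_factory=dict)  # maps, photos, horizon survey
    history: List[str] = field(default_factory=list)         # dated notes on site changes


# Example usage: record the essentials and append dated notes whenever the
# surroundings change (new buildings, tree removal, relocated instruments).
station = UrbanStationMetadata(
    station_id="URB-001", latitude=49.26, longitude=-123.25, elevation_m=87.0,
    urban_climate_zone="UCz 2", davenport_roughness_class=7,
    sensor_exposures=[SensorExposure("air temperature", "screen in enclosure", 1.5)],
    documents={"local_scale_map": "maps/ucz_map_1to50000.pdf",
               "fisheye_photo": "photos/horizon_2008.jpg"},
    history=["2008-06: fisheye horizon photo updated after tree removal to the west"],
)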

11.5

Assessment of urban effects

The study of urban weather and climate possesses a perspective that is almost unique. People are curious about the role of humans in modifying the urban atmosphere. Therefore, unlike other environments of interest, where it is sufficient to study the atmosphere for its own sake or value, in urban areas there is interest in knowing about urban effects. This means assessing possible changes to meteorological variables as an urban area develops over time, compared to what would have happened had the settlement not been built.

11.6 Summary of key points for urban stations

11.6.1

Working principles

When establishing an urban station, the rigid guidelines for climate stations are often inappropriate. It is necessary to apply guiding principles rather than rules, and to retain a flexible approach. This often means different solutions for individual atmospheric properties and may mean that not all observations at a “site” are made at the same place.


Because the environment of urban stations changes often as development proceeds, frequently updated metadata are as important as the meteorological data gathered. Without good station descriptions it is impossible to link measurements to the surrounding terrain. 11.6.2 site selection

(a)

(b)

An essential first step in selecting urban station sites is to evaluate the physical nature of the urban terrain, using a climate zone classification. This will reveal areas of homogeneity. Several urban terrain types make up an urban area. In order to build a picture of the climate of a settlement, multiple stations are required. Sites should be selected that are likely to sample air drawn across relatively homogenous urban terrain and are thus representative of a single climate zone. Care must be taken to ensure that microclimatic effects do not interfere with the objective of measuring the local-scale climate. 11.6.3 Measurements

(c)

(d)

(e)

With regard to measurements, the following key points should be taken into account:

Air temperature and humidity measurements taken within the UCL can be locally representative if the site is carefully selected. If these variables are observed above roof level, including above the RSL, there is no established link between them and those within the UCL; Wind and turbulent flux measurements should be taken above the RSL but within the internal boundary layer of the selected UCz. Such measurements must establish that the surface “footprint” contributing to the observations is representative of the climate zone. For wind, it is possible to link the flow at this level and that experienced within the canopy; Precipitation observations can be conducted either near the ground at an unobstructed site, or above the RSL, corrected according to parallel wind measurements; With the exception of incoming solar radiation measurements, roof-top sites are to be avoided, unless instruments are exposed on a tall mast; Measurements of the net and upwelling radiation fluxes must be taken at heights that make it possible to sample adequately the range of surface types and orientations typical of the terrain zone.


References and further reading

Arya, P.S., 2001: Introduction to Micrometeorology. Academic Press, New york. Auer, A.H. Jr., 1978: Correlation of land use and cover with meteorological anomalies. Journal of Applied Meteorology, Volume 17, Issue 5 pp. 636–643. Britter, R.E. and S.R. Hanna, 2003: Flow and dispersion in urban areas. Annual Review of Fluid Mechanics, Volume 35, pp. 469–496. Christen, A., 2003: (personal communication). Institute of Meteorology, Climatology and Remote Sensing, University of Basel. Christen, A., R. Vogt, M.W. Rotach and E. Parlow, 2002: First results from BUBBLE I: Profiles of fluxes in the urban roughness sublayer. Proceedings of the Fourth Symposium on Urban Environment, (Norfolk, Virginia), American Meteorological Society, Boston, pp. 105-106. COST-710, 1998: Final Report: Harmonisation of the Pre-processing of Meteorological Data for Atmospheric Dispersion Models. European Commission. EUR 18195 EN. COST-715, 2001: Preparation of Meteorological Input Data for Urban Site Studies. European Commission, EUR 19446 EN. Davenport, A.G., C.S.B. Grimmond, T.R. Oke and J. Wieringa, 2000: Estimating the roughness of cities and sheltered country. Proceedings of the Twelfth Conference on Applied Climatology (Asheville, North Carolina), American Meteorological Society, Boston, pp. 96–99. DePaul, F.T. and C.M. Shieh, 1986: Measurements of wind velocity in a street canyon. Atmospheric Environment, Volume 20, pp. 455–459. Ellefsen, R., 1991: Mapping and measuring buildings in the canopy boundary layer in ten US cities. Energy and Buildings, Volume 16, pp. 1025–1049. Garratt, J.R., 1992: The Atmospheric Boundary Layer. Cambridge University Press, Cambridge. Gill, G.C., L.E. Olsson, J. Sela and M. Suda, 1967: Accuracy of wind measurements on towers or stacks. Bulletin of the American Meteorological Society, Volume 48, pp. 665–674. Grimmond, C.S.B. and T.R. Oke, 1999: Aerodynamic properties of urban areas derived from analysis of surface form. Journal of Applied Meteorology, Volume 38, Issue 9, pp. 1262–1292. Grimmond, C.S.B. and T.R. Oke, 2002: Turbulent heat fluxes in urban areas: Observations and a local-scale urban meteorological parameterization scheme (LUMPS). Journal of Applied

Meteorology, Volume 41, Issue 7, pp. 792–810. Halitsky, J., 1963: Gas diffusion near buildings. Transactions of the American Society of Heating, Refrigerating and Air-conditioning Engineers, Volume 69, pp. 464–485. Hanna, S.R. and J.C. Chang, 1992: Boundary-layer parameterizations for applied dispersion modeling over urban areas. Boundary-Layer Meteorology, Volume 58, pp. 229–259. Hunt, J.C.R., C.J. Abell, J.A. Peterka and H. Woo, 1978: Kinematical studies of the flow around free or surface-mounted obstacles: Applying topology to flow visualization. Journal of Fluid Mechanics, Volume 86, pp. 179–200. Kljun, N., M. Rotach and H.P. Schmid, 2002: A three-dimensional backward Lagrangian footprint model for a wide range of boundary-layer stratifications. Boundary-Layer Meteorology, Volume 103, Issue 2, pp. 205–226. Kljun, N., P. Calanca, M.W. Rotach and H.P. Schmid, 2004: A simple parameterization for flux footprint predictions. Boundary-Layer Meteorology, Volume 112, pp. 503–523. Landsberg, H.E., 1981: The Urban Climate. Academic Press, New York. Lowry, W.P., 1977: Empirical estimation of urban effects on climate: A problem analysis. Journal of Applied Meteorology, Volume 16, Issue 2, pp. 129–135. Lowry, W.P. and P.P. Lowry, 2001: Fundamentals of Biometeorology: Volume 2 – The Biological Environment. Chapter 17, Peavine Publications, St Louis, Missouri, pp. 496–575. Nakamura, Y. and T.R. Oke, 1988: Wind, temperature and stability conditions in an east-west oriented urban canyon. Atmospheric Environment, Volume 22, pp. 2691–2700. Oke, T.R., 1981: Canyon geometry and the nocturnal heat island: Comparison of scale model and field observations. Journal of Climatology, Volume 1, Issue 3, pp. 237–254. Oke, T.R., 1982: The energetic basis of the urban heat island. Quarterly Journal of the Royal Meteorological Society, Volume 108, Issue 455, pp. 1–24. Oke, T.R., 1984: Methods in urban climatology. In: Applied Climatology (W. Kirchofer, A. Ohmura and W. Wanner, eds). Zürcher Geographische Schriften, Volume 14, pp. 19–29. Oke, T.R., 1988a: The urban energy balance. Progress in Physical Geography, Volume 12, pp. 471–508.


Oke, T.R., 1988b: Street design and urban canopy layer climate. Energy and Buildings, Volume 11, pp. 103–113. Oke, T.R., 1997: Urban environments. In: The Surface Climates of Canada (W.G. Bailey, T.R. Oke and W.R. Rouse, eds). McGill-Queen’s University Press, Montreal, pp. 303–327. Peterson, T.C., 2003: Assessment of urban versus rural in situ surface temperatures in the contiguous United States: No difference found. Journal of Climate, Volume 16, pp. 2941–2959. Rotach, M.W., 1999: On the influence of the urban roughness sublayer on turbulence and dispersion. Atmospheric Environment, Volume 33, pp. 4001–4008. Schmid, H.P., 2002: Footprint modeling for vegetation atmosphere exchange studies: A review and perspective. Agricultural and Forest Meteorology, Volume 113, Number 1, pp. 159–183. Schmid, H.P., H.A. Cleugh, C.S.B. Grimmond and T.R. Oke, 1991: Spatial variability of energy fluxes in suburban terrain. Boundary-Layer Meteorology, Volume 54, Issue 3, pp. 249–276. Soux A., J.A. Voogt and T.R. Oke, 2004: A model to calculate what a remote sensor ‘sees’ of an urban surface. Boundary-Layer Meteorology, Volume 111, pp. 109–132. Stull, R.B., 1988: An Introduction to Boundary Layer Meteorology. Kluwer Academic Publishers, Dordrecht. Verkaik, J.W., 2000: Evaluation of two gustiness models for exposure correction calculations. Journal of Applied Meteorology, Volume 39, Issue 9, pp. 1613–1626.

Voogt, J.A. and T.R. Oke, 2003: Thermal remote sensing of urban climates. Remote Sensing of Environment, Volume 86, Number 3, pp. 370–384. Wieringa, J., 1986: Roughness-dependent geographical interpolation of surface wind speed averages. Quarterly Journal of the Royal Meteorological Society, Volume 112, Issue 473, pp. 867–889. Wieringa, J., 1993: Representative roughness parameters for homogeneous terrain. Boundary-Layer Meteorology, Volume 63, Number 4, pp. 323–363. Wieringa, J., 1996: Does representative wind information exist? Journal of Wind Engineering and Industrial Aerodynamics, Volume 65, Number 1, pp. 1–12. World Meteorological Organization, 1983: Guide to Climatological Practices. Second edition, WMO-No. 100, Geneva. World Meteorological Organization, 1988: Technical Regulations. Volume I, WMO-No. 49, Geneva. World Meteorological Organization, 1995: Manual on Codes. WMO-No. 306, Geneva. World Meteorological Organization, 2003a: Guidelines on Climate Metadata and Homogenization (E. Aguilar, I. Auer, M. Brunet, T.C. Peterson and J. Wieringa). World Climate Data and Monitoring Programme No. 53, WMO/TD-No. 1186, Geneva. World Meteorological Organization, 2003b: Manual on the Global Observing System. WMO-No. 544, Geneva. World Meteorological Organization, 2006: Initial Guidance to Obtain Representative Meteorological Observations at Urban Sites (T.R. Oke). Instruments and Observing Methods Report No. 81, WMO/ TD-No. 1250, Geneva.

CHAPTER 12

ROAD METEOROLOGICAL MEASUREMENTS

12.1 General

12.1.1 Definition

Road meteorological measurements are of particular value in countries where the serviceability of the transport infrastructure in winter exerts a great influence on the national economy. In some countries there will be other road hazards such as dust storms or volcanic eruptions. Safe and efficient road transport is adversely affected by the following conditions, which affect speed, following distance, tyre adhesion and braking efficiency: poor visibility (heavy precipitation, fog, smoke, sand storm), high winds, surface flooding, land subsidence, snow, freezing precipitation and ice.

12.1.2 Purpose

The role of a road network manager is to ensure the optimal, safe, free flow of traffic on arterial routes. Operational decisions on the issuing of road weather information and on initiating de-icing and snow-clearing operations are dependent on road meteorological observations that are increasingly made by special-purpose automatic weather stations (AWSs). While these stations should conform as far as practicable to the sensor exposure and measurement standards of conventional AWSs (see Part II, Chapter 1), they will have characteristics specific to their function, location and measurement requirements. The reliability of road meteorological measurement stations which supply data to a transport decision support system is critical: each station will relate to the immediate environment of important high-density transport routes and may be responsible for feeding data to road meteorology forecast routines and for generating automatic alarms. Thus, equipment reliability and maintenance, power supply, communications continuity and data integrity are all important elements in the selection, implementation and management of the weather measurement network. These concerns point to the benefits of an effective collaboration between road management services and the National Meteorological and Hydrological Service (NMHS).

This chapter should assist in standardizing road meteorological measurements with a method that adheres to WMO common standards as closely as possible. However, those users who may wish to employ road measurements in other meteorological applications will be advised of important deviations, in sensor exposure, for example.

12.1.3 Road meteorological requirements

The needs of road network managers focus on four main areas (WMO, 1997; 2003a): (a) Real-time observation of road meteorology: The practical objective, on the one hand, is to inform road users of the risks (forecast or real-time) that they are likely to face on designated routes; and on the other hand, to launch a series of actions aimed at increasing road safety, such as scraping snow or spreading chemical melting agents; (b) Improvement of pavement surface temperature forecasting: The measurements of road AWSs are the important input data for the temperature and pavement condition forecasting programmes which may be run by the NMHS. This authority has the capability of ensuring continuity and timeliness in the observations and in the forecast service. In practice, two tools are available to forecasters. The first tool is a computer model for the transposition of a weather forecast of atmospheric conditions to a pavement surface temperature forecast, taking account of the physical characteristics of each station. The second tool is the application of an algorithm based on a specific climatological study of the pavement surface; (c) Road climate database: The establishment of a road climatological database is important because in many situations an assessment of current events at a well-instrumented location enables experienced road network managers to transpose the data using the climate model to other locations they know well. In some cases, thermal fingerprints can be taken in order to model this spatial relationship. The recording of road weather data will be useful for analysing previous winter disturbances and for carrying out specific road-dedicated climatology studies. National Meteorological


and Hydrological Services can fill the data gaps and compare and provide quality assurance for measurements coming from different sources; (d) Reliable data: Road managers do not need exceedingly accurate measurements (with the exception of road-surface temperature). Rather, they want the data to be as reliable as possible. That is to say, the data must be a consistent reflection of the real situation, and the measuring devices must be robust. Communication and power supply continuity are often of prime importance.

12.2 Establishment of a road meteorological station

12.2.1 Standardized representative measurements

The general requirements for meteorological stations, their siting and the type and frequency of measurements are defined in WMO (1988; 2003a). It is recommended that these standards and other relevant material in this Guide should be adhered to closely when establishing road meteorological stations in order to make standardized, representative measurements that can be related to those from other road stations in the network and also to data from standard synoptic or climatological stations, except where the unique measurements for road meteorology demand other standards, for example, for the exposure of sensors. Advice on the optimum placement and density of stations may be obtained from the local branch office of the NMHS, which will be able to access climatological data for the region. A meteorological station site is chosen so that it will properly represent a particular geographic region. A road meteorological station will be sited to best represent part of the road network or a particular stretch of important roadway that is known to suffer from weather-related or other hazards. The station must therefore be adjacent to the roadway so that road-surface sensors may be installed, and therefore some compromise on “ideal” meteorological siting and exposure may occur. The sensors are installed so that their exposure enables the best representation in space and time of the variable being measured, without undue interference from secondary influences. In general, the immediate site adjacent to the roadway should be level, with short grass, and not shaded by buildings or trees.

12.2.2 Station metadata

In every case it is important that the location and characteristics of the site and the specification of equipment and sensors are fully documented, including site plans and photographs. These metadata (Part I, Chapter 1 and Part III, Chapter 1) are invaluable for the management of the station and for comparing the quality of the measurements with those from other sites.

12.3 Observed variables

12.3.1 Road meteorological measurements

The important measurements at road weather stations for forecasting roadway conditions include air temperature and humidity, wind speed and direction, precipitation amount and type, visibility, global and long-wave radiation, road-surface temperature and road-surface condition. Some of the measurements, for example, temperature and humidity, will be used to forecast conditions of concern to road users, while others (wind and visibility) may indicate impending or real-time hazards; yet others (meteorological radiation, road-surface temperature and condition) are specific to predicting the performance of the road surface. The sensors will be selected for their accuracy, stability, ease of maintenance and calibration, and for having electrical outputs suitable for connecting with the automatic data-acquisition system. The choice of sensors and their exposure should conform to standard WMO practice and recommendations (see the relevant chapters in Part I of this Guide), except when these are incompatible with the specific requirements of road meteorology. Measurement accuracy should generally conform to the performances quoted in Part I, Chapter 1, Annex 1B. Note also the recommendations on the measurements at AWSs in Part II, Chapter 1. 12.3.1.1 air temperature

The sensor may be an electrical resistance thermometer (platinum or stable thermistor). The air-temperature sensor, its radiation shield or screen and exposure should conform to the guidelines of Part I, Chapter 2, with the shield mounted at a height of 1.25 to 2 m over short grass or natural soil.
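As an illustration only (the Guide does not prescribe a conversion routine), the following sketch shows how a data-acquisition unit might convert the resistance of a Pt100 platinum resistance thermometer to air temperature using the Callendar–Van Dusen relation with the standard IEC 60751 coefficients; over the road-weather temperature range the simple quadratic inversion shown here is adequate.

# Illustrative sketch (not from the Guide): converting Pt100 resistance to air
# temperature with the Callendar-Van Dusen relation and IEC 60751 coefficients.
# The cubic correction used below 0 degC changes the result by only a few
# millikelvin over the road-weather range, so the quadratic inversion suffices.
import math

R0 = 100.0          # ohm, Pt100 resistance at 0 degC
A = 3.9083e-3       # 1/degC
B = -5.775e-7       # 1/degC^2


def pt100_temperature(resistance_ohm: float) -> float:
    """Return temperature in degC for a measured Pt100 resistance."""
    # Invert R = R0 * (1 + A*t + B*t^2) for t.
    discriminant = A * A - 4.0 * B * (1.0 - resistance_ohm / R0)
    return (-A + math.sqrt(discriminant)) / (2.0 * B)


# Example: 96.09 ohm corresponds to about -10 degC.
print(round(pt100_temperature(96.09), 2))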


Measurement issues: The sensor and screen should not be mounted above concrete or asphalt surfaces that could inflate the measured temperature. The placement of the shield should ensure that it is not subject to water spray from the wheels of passing traffic, which might cause significant sensing errors. 12.3.1.2 relative humidity (b)

The hygrometric sensor may be one of the thin-film electrical conductive or capacitive types (Part I, Chapter 4). A wet-bulb psychrometer is not recommended on account of the continual contamination of the wick by hydrocarbons. The sensor may be combined with or co-located with the air-temperature sensor in its radiation shield as long as the sensor thermal output (self-heating) is very low, so as not to influence the temperature measurement. Measurement issues: Note the same water spray hazard as for the temperature sensor. Humiditysensor performance is subject to the effects of contamination by atmospheric and vehicle pollution. Functional checks should be made regularly as part of the data-acquisition quality control, and calibration should be checked at least every six months, particularly before the winter season. A sensor that is not responding correctly must be replaced immediately. 12.3.1.3 Wind speed and direction

These variables are usually measured by either a pair of cup and vane sensors or by a propeller anemometer (Part I, Chapter 5) with pulse or frequency output. The sensors must be mounted at the standard height of 10 m above the ground surface and in a representative open area in order to carry out measurements not influenced by air-mass flow disturbances due to traffic and local obstacles. Measurement issues: The freezing of moving parts, water ingress and corrosion and lightning strike are potential hazards. 12.3.1.4 (a) Precipitation

Measurement issues: The gauge must be kept level and the funnel and buckets clean and free from obstruction. The tipping-bucket gauge is not satisfactory for indicating the onset of very light rain, or in prolonged periods of freezing weather. Totals will be lower than the true values because of wind effects around the gauge orifice, evaporation from the buckets between showers, and loss between tips of the buckets in heavy rain; Precipitation occurrence and type: Sensors are available which use electronic means (including heated grids, conductance and capacitance measurement) to estimate the character of precipitation (drizzle, rain or snow) falling on them. Optical sensors that determine the precipitation characteristic (size, density and motion of particles) by the scattering of a semiconductor laser beam offer better discrimination at much greater expense. Measurement issues: These sensing functions are highly desirable at all stations, but existing types of sensors are lacking in discrimination and stable reproducibility. Provisions must be made (heating cycles) to remove accumulated snow from the surface. The regular cleaning of sensitive elements and optical surfaces is required. Only sensors that are well documented and that can be calibrated against an appropriate reference should be installed. If any system uses an algorithm to derive a variable indirectly, the algorithm should also be documented. Meteorological radiation

12.3.1.5 (a)

Accumulated precipitation: The tipping-bucket recording gauge (Part I, Chapter 6) where increments of usually 0.2 mm of precipitation are summed, is commonly used at automatic stations. Heated gauges may be employed to measure snow or other solid precipitation. A rate of precipitation may be estimated by registering the number of counts in a fixed time interval.
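By way of illustration (this is not a prescribed algorithm), the following minimal sketch derives a precipitation amount and an approximate rate from the number of bucket tips registered in a fixed interval, assuming the 0.2 mm-per-tip resolution mentioned above.

# Minimal sketch, illustrative only: precipitation amount and rate from
# tipping-bucket counts over a fixed interval, assuming 0.2 mm per tip.
MM_PER_TIP = 0.2


def precipitation_from_tips(tip_count: int, interval_minutes: float) -> tuple:
    """Return (amount_mm, rate_mm_per_h) for tips counted over the interval."""
    amount_mm = tip_count * MM_PER_TIP
    rate_mm_per_h = amount_mm * 60.0 / interval_minutes
    return round(amount_mm, 1), round(rate_mm_per_h, 1)


# Example: 9 tips in 10 minutes -> 1.8 mm, i.e. roughly 10.8 mm/h.
print(precipitation_from_tips(9, 10))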

Global radiation: The solar radiation (direct and diffuse) received from a solid angle of 2 π sr on a horizontal surface should be measured by a pyranometer using thermoelectric or photoelectric sensing elements (Part I, Chapter 7). The sensor should be located to have no significant nearby obstructions above the plane of the instrument and with no shadows or light reflections cast on the sensor. Although the location should be such as to avoid accidental damage to the sensor, it should be accessible for inspection and cleaning. Global radiation measured “on site” is particularly relevant to the road manager. It expresses the quantity of energy received by the road during the day. The relationship of incoming radiation to surface temperature and road inertia will depend on the constituent materials and dimensions of the pavement mass.


(b)

Measurement issues: Obstructed sensor horizon, sensor not level, surface dirt, snow or frost obscuring the glass dome or sensing surface, and water condensation inside the glass dome; Long-wave radiation: A pyrgeometer may be used which measures radiation in the infrared by means of a thermopile, filtering out the visible spectrum. Mounted with the sensor facing upwards and a sufficiently unobstructed horizon, it determines the long-wave radiation received from the atmosphere, in particular at night, and gives an indication of cloud cover and therefore of roadway radiative cooling. A sensor sensitive to a spectrum from 5 to 50 µm, with a maximum sensitivity of 15 µV/Wm–2 and a response time lower than 5 s is adequate for road weather forecasting purposes. Measurement issues: See those for global radiation. Visibility

minimize radiation error. For long connecting cable lengths (over 20 m), cable resistance compensation is recommended. 12.3.1.8 road-pavement temperature

Temperatures of the pavement at 5, 10 and 20 cm below the road surface may be determined by sinking appropriately sheathed electrical resistance sensors at corresponding depths and using suitable bonding material. Measurement issues: See those for road-surface temperature. 12.3.1.9 road-surface condition and freezing temperature

12.3.1.6

Transmissometers and forward scatter meters may be applicable (Part I, Chapter 9). Measurement issues: Road managers are interested in visibilities below 200 m (the danger threshold). Maintaining sensor windows and lenses clean is important. Some systems will compensate for a degree of window contamination. An appropriate calibration procedure should be carried out during routine maintenance. 12.3.1.7 road-surface temperature

This sensor estimates the road-surface condition (dry, wet, frost, ice) and the freezing temperature of residual surface water. The sensor control circuit heats the sensor before cooling it, using the Peltier effect. The rate of cooling is a function of the surface condition and freezing temperature. See also Part I, Chapter 6, regarding ice on pavements. The sensor output should give road managers an indication of the chemical de-icing agent’s persistence at the specific location and enable them to optimize chemical spreading operations. Measurement issues: The sensor must not be covered by foreign matter or when road re-surfacing. The sensor requires regular cleaning. It is difficult to ensure a sensor response that is representative of the true road-surface condition because of the small sample size, the location on road surface and variable imbedding practices. Measurement depends on traffic density and is otherwise not very stable with time. This sensor, of which there are few alternative makes, may be difficult to obtain. The remote sensing of road-surface temperature by thermal infrared sensors is generally not practical because of the interference caused by water spray from vehicle tyres. Road-surface frost risk estimation may be improved through better measurement of temperature, air humidity and temperature in and on the road surface, namely, improved sensor exposure and reduction of systematic and random errors. 12.3.1.10 Video surveillance

Active sensors based on a 100 ohm platinum resistance and providing serial digital transmission are available, and may be imbedded in the road surface. The manufacturer’s instructions for the installation of the sensor and cabling and bonding to the road surface should be followed. The sensor has to be positioned out of the line of tyre tracks, otherwise the sensor surface will be soiled and measurements affected by friction heating. The sensor must lie in the road surface plane with no depression where water could gather and affect the measurement. The sensor’s correct position must be checked on a regular basis. Measurement issues: The thermal lag (timeconstant) of the sensor and the imbedding material should match that of the road-surface composition. The sensor should have a surface finish with low absorptance in the infrared to

Video surveillance is a component of what have come to be called intelligent transport systems. They are principally used for road-incident detection, but also give a useful indication of present weather for transport management. Image


processing algorithms will aid the discrimination between different weather conditions.

implementation of a road meteorological measurement network are encouraged to consider flexible and extendable equipment solutions with powerful programming options for sensor data processing and system control. The station data processing may include: control of the measurement cycle (initiation, frequency, time and date); complex sensor management (power on/off, sampling regime); sensor signal processing (filtering, conversion to scientific units, algorithms); data quality checks; alarm generation (variables outside pre-set limits, partial system failure, station integrity breached); data storage (short-term storage and archiving); output message generation (code form, communications protocol); communications management; and station housekeeping (power supply, sensor checks, communications). 12.4.3 network configuration and equipment options

12.4

Choosing the road weather station equipment

Part II, Chapter 1, gives information that may be applied to road meteorological measurement applications. In what follows, attention is drawn to the particular issues and concerns from the experience of road network managers, in particular the need for high performance where public safety is a primary issue. 12.4.1 the road environment

A road weather station is subject to considerable stress due to the vicinity of the roadway: vibration, vehicle ignition interference, exhaust pollution, corrosion from salt spray, and unwelcome attention from members of the public. In some respects the station may be considered to operate in an industrial environment, with all that that implies for the robustness of the design and concern for data integrity. Frequently met problems are: lack of protection against over-voltage on sensor interface circuits; inadequate electrical isolation between sensors, sensor cables and the data-acquisition unit; variable connector contact resistance causing calibration drift; measurement failure; and extended maintenance attention. 12.4.2 remote-station processing capability

There is a move in AWS design to include increased data-processing capacity and storage at the remote data-acquisition unit in order to employ processing algorithms that act on several sensor signals to give complex outputs; to provide for some level of quality assurance on the data; to provide two-way communications between the control centre and remote units for diagnostics of both the sensor and unit performance; and to provide for downloading new algorithms and software updates to the remote units. On the other hand, a network of remote stations which are not more complex than necessary for reliable data acquisition, and a central control and data-acquisition computer where the more complex algorithms, quality assurance and code translation is carried out as well as the higher level processing for road management decisions, may provide a more reliable and less costly overall system. Those planning for the

The selection of station equipment, communications and network control (the network infrastructure) should reflect the particular demands of road meteorology and the road network management decision-making. These choices will be materially affected by the relationship between the road network authority and the local NMHS. For example, the road network authority might contract the NMHS to provide road meteorology forecasting services and specified road data, to which the road network managers apply their physical criteria to make operational decisions. In this case, it would be logical for the road network stations to be an extension of the NMHS AWS network employing common station hardware, communications and maintenance service, with particular attention to network reliability, and including the special sensors, algorithms and software for the road meteorological task. However, if such close integration is impractical, the road authority may still wish to adopt some commonality with NMHS systems to take advantage of operational experience and the supply of hardware and spare parts. If an entirely new or separate network is required, the following guidelines are recommended for the choice of data-acquisition equipment and communications. Rather than develop new hardware and software for road meteorological measurement, it is wise to employ existing proven systems from reputable manufacturers and sources, with only necessary adaptation to the road network


application, and taking advantage of the experience and advice of other road network administrations. The equipment and its software should be modular to allow for future added sensors and changes in sensor specifications. To facilitate the extension of the network after a few years it is most helpful if the hardware is sourced from standardized designs from a sound manufacturing base where later versions are likely to maintain technical compatibility with earlier generations. 12.4.4 Design for reliability

Good standardized installation and maintenance practices will contribute much to system reliability. System reliability is also related to the “mean time to repair”, which involves the call-out and travel time of maintenance staff to make equipment replacement from whole unit and module stock.
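As a purely illustrative aside, the steady-state availability estimate commonly used in reliability engineering, A = MTBF / (MTBF + MTTR), can help when weighing spare-module stock and call-out arrangements. The sketch below uses invented figures and is not taken from the Guide.

# Illustrative only: textbook steady-state availability estimate. The MTBF and
# MTTR values here are invented; the formula is not prescribed by the Guide.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the station is expected to be delivering data."""
    return mtbf_hours / (mtbf_hours + mttr_hours)


# Example: a station failing on average every 4 000 h and repaired within 48 h
# (including travel and module replacement) is available about 98.8% of the time.
print(round(availability(4000.0, 48.0), 4))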

12.5 Message coding

12.5.1 Coding functions

Data-processing modules should be of industrystandard architecture with robust standard operating systems with a well-managed upgrade process. Application software should be written in a standard language and well documented. To achieve the desired reliability, special industrialized components and modules may be selected. A cheaper alternative may be to use standard commercial designs with redundant parallel or back-up systems to ensure system reliability. The design of the remote-unit power supply needs particular attention. An uninterruptible power supply may be recommended, but it should be recognized that communications systems will also depend on a functioning local power supply. Whatever the system design, housing the electronics in a robust, corrosion-resistant, secure, even temperature, dust- and moisture-free enclosure will add much to its reliability. Connectors carrying the sensor signals should be of high-quality industrial or military grade and well protected against cable strain, water ingress and corrosion. Sensor cabling should have an earth shield and a robust, waterproof insulating sheath and be laid in conduit. Special attention should be given to obviating the effect of electrical noise or interference introduced into the data-acquisition system through sensor cables, the power supply or communications lines. These unwanted electrical signals may cause sensor signal errors and corrupt data, and cause electronic failure, particularly in sensitive interface circuits. Great care needs to be given to: the design of sensor and communication line isolation and over-voltage protection, including an appropriate level of protection from atmospheric electricity; the adequate earthing or grounding of sensors, power supplies, communications modems and equipment cabinets; and to earth shielding all parts of the measurement chain, avoiding earth current loops which will cause measurement errors.

The message transmitted from the remote road meteorological station will contain a station identifier, the message time and date, sensor channel data, including channel identification, and some “housekeeping” data which may include information on station security, power supply, calibration and other data quality checks. This message will be contained in the code envelope relating to the communications channel with an address header, control information and redundancy check characters to provide for error detection. The road meteorological data part of the message may be coded in any efficient, unambiguous way that enables the central control and data-acquisition computer to decode and process before delivering intelligible guidance information to the network managers for their decision-making. 12.5.2 WMo standard coding

Designers of road meteorology measurement networks should also consider the value of WMO standard message coding (see WMO, 1995) which enables other users like NMHSs to receive the data by some arrangement and employ it in other meteorological applications. This message coding may be carried out at the remote AWS, which places demands on station software and processing, or, as is more likely, in the central control and data-acquisition computer after the completion of any quality assurance operations on the data.
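For illustration, the sketch below assembles a station report of the kind described in section 12.5.1 (station identifier, time and date, channel data and housekeeping data, with a simple error-detection character). The format, field names and checksum are invented for this example; operational messages would use the communications envelope and, where required, the WMO code forms of the Manual on Codes.

# Hypothetical sketch of a simple station report, shown only to make the message
# contents listed in 12.5.1 concrete. It is NOT a WMO code form; the layout,
# channel identifiers and XOR checksum are invented for this example.
def encode_report(station_id: str, timestamp: str, channels: dict, housekeeping: dict) -> str:
    """Build 'ID;TIME;chan=value,...;hk=value,...*CHECKSUM' with an XOR checksum."""
    chan_part = ",".join(f"{k}={v}" for k, v in sorted(channels.items()))
    hk_part = ",".join(f"{k}={v}" for k, v in sorted(housekeeping.items()))
    body = ";".join([station_id, timestamp, chan_part, hk_part])
    checksum = 0
    for ch in body:                      # simple XOR over the payload for error detection
        checksum ^= ord(ch)
    return f"{body}*{checksum:02X}"


# Example usage with invented channel identifiers.
report = encode_report(
    "RWS042", "2008-01-15T06:20Z",
    channels={"t_air": -1.4, "rh": 93, "t_road": -0.6, "precip_tips": 0},
    housekeeping={"battery_v": 12.6, "door_open": 0},
)
print(report)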

12.6

Central control and data-acquisition computer

The functions of the central computer (or computers) have already been mentioned. The functions are to manage the network by controlling communications (see below), receive reports (road meteorological messages, AWS housekeeping messages and quality information), and process the road measurement data to give the road network managers the


operational information and decision-making tools that they require. The network architecture may be designed to enable the central computer to act as an Intranet or Web server to enable ready access to this information by regional managers and other users of the meteorological data. A separate computer will probably be allocated to manage the road network climate database and to produce and distribute analyses and statistical summaries. In a sophisticated network the central computer will manage certain maintenance and calibration operations, change AWS operating modes and update AWS software.

adopted by the road network management should be well defined and recorded in the station metadata or network manuals. 12.8.2 alarm generation

Alarm indications may be generated from sensor outputs when values exceed preset limits to initiate alarm messages from the AWS. The choice of alarms and limit tests will depend on national or regional practice. Some examples of alarms from road AWSs follow. Note the use of the logical “AND” and “OR” combinations in the algorithms. Examples of alarms include:
Alarm 1: T(air) OR T(road surface) ≤ 3°C, AND T(extrapolated road surface)* ≤ 1°C
Alarm 2: T(air) ≤ 0°C
Alarm 3: First condition: T(road surface) ≤ 1°C, OR T(extrapolated road surface) ≤ 0°C, OR T(pavement at –5 cm) ≤ 0°C, OR T(pavement at –10 cm) ≤ –1°C, OR T(pavement at –20 cm) ≤ –2°C; AND second condition: carriageway is not dry, OR at least one precipitation count in the past hour, OR relative humidity ≥ 95%, OR T(road surface) – T(dewpoint) ≤ 1°C
Alarm 4: T(road surface) ≤ 0°C AND detected state: frost or black ice
Alarm 5: First condition: detected precipitation = snow or hail; AND second condition: T(air) ≤ 2°C OR T(road surface) ≤ 0°C
Alarm 6: Wind average ≥ 11 m s–1 AND wind direction, referred to the road azimuth, between 45° and 135° OR between 225° and 315°
Alarm 7: Visibility ≤ 200 m
* The extrapolated road-surface temperature is calculated with an algorithm that takes account of the last measurements and fits a quadratic equation. This can be used to calculate estimates of temperatures over the next 3 h.
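The sketch below illustrates how some of these alarm criteria might be coded at the AWS or in the central computer. The thresholds follow the examples above; the variable names are invented, and the Magnus-type dewpoint approximation and the quadratic fit through the last three road-surface samples are common choices assumed here for illustration, not methods prescribed by the Guide.

# Illustrative sketch of a few of the alarm criteria listed above. Thresholds
# follow the text; the dewpoint formula and the three-point quadratic
# extrapolation are assumptions made for this example only.
import math
from typing import Sequence


def dewpoint_c(t_air_c: float, rh_percent: float) -> float:
    """Approximate dewpoint (degC) from air temperature and relative humidity
    using a Magnus-type formula (an assumption, not prescribed by the Guide)."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + a * t_air_c / (b + t_air_c)
    return b * gamma / (a - gamma)


def extrapolate_road_surface(times_h: Sequence[float], temps_c: Sequence[float],
                             lead_h: float = 3.0) -> float:
    """Quadratic through the last three road-surface samples, evaluated lead_h
    hours after the most recent one (one simple reading of footnote * above)."""
    (t0, y0), (t1, y1), (t2, y2) = list(zip(times_h, temps_c))[-3:]
    t = t2 + lead_h
    return (y0 * (t - t1) * (t - t2) / ((t0 - t1) * (t0 - t2))
            + y1 * (t - t0) * (t - t2) / ((t1 - t0) * (t1 - t2))
            + y2 * (t - t0) * (t - t1) / ((t2 - t0) * (t2 - t1)))


def alarm_1(t_air: float, t_road: float, t_road_extrap: float) -> bool:
    # Air or road-surface temperature at or below 3 degC AND extrapolated surface <= 1 degC.
    return (t_air <= 3.0 or t_road <= 3.0) and t_road_extrap <= 1.0


def alarm_3(t_road, t_road_extrap, t_pav5, t_pav10, t_pav20,
            carriageway_dry, tips_last_hour, rh, t_air) -> bool:
    first = (t_road <= 1.0 or t_road_extrap <= 0.0 or t_pav5 <= 0.0
             or t_pav10 <= -1.0 or t_pav20 <= -2.0)
    second = ((not carriageway_dry) or tips_last_hour >= 1 or rh >= 95.0
              or (t_road - dewpoint_c(t_air, rh)) <= 1.0)
    return first and second


def alarm_6(wind_avg_ms: float, wind_dir_deg: float, road_azimuth_deg: float) -> bool:
    # Strong wind blowing across the road (45-135 deg or 225-315 deg off the road axis).
    rel = (wind_dir_deg - road_azimuth_deg) % 360.0
    return wind_avg_ms >= 11.0 and (45.0 <= rel <= 135.0 or 225.0 <= rel <= 315.0)


# Example: surface at +0.4 degC and falling, wet carriageway, 96% relative humidity.
t_extrap = extrapolate_road_surface([3.0, 4.0, 5.0], [1.6, 1.0, 0.4])
print(alarm_3(0.4, t_extrap, 1.5, 2.0, 2.5,
              carriageway_dry=False, tips_last_hour=0, rh=96.0, t_air=0.8))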

12.7 Communications considerations

A reliable telecommunications service that enables the network of stations to be effectively managed while it delivers the requisite data on time is vital. Since communications charges will make up a large proportion of the annual operating cost, the analysis of communications options is important, so that the cost per message can be optimized with respect to the level of service required. A detailed review of telecommunications options for the data collection and management of the road AWS is beyond the scope of this chapter (see Part II, Chapter 1, for guidance on data transmission). The communications solution selected will depend on the management objectives of the road meteorological measurement network and the services offered by the telecommunications providers of the country, with their attendant tariffs.

12.8 Sensor signal processing and alarm generation

12.8.1 Signal processing algorithms

The raw signal data from sensors must be processed or filtered to produce representative average values. This is done either in some active sensors, in the sensor interface in the data-acquisition unit, or in the higher-level data processing of the station. The specifications for averaging the sensor outputs may be found in Part I, Chapter 1, Annex 1B. Algorithms which are applied to sensor outputs (or groups of outputs), either at the remote station or in the central computer, should be from authoritative sources, rigorously tested and preferably published in the open literature. Any in-house algorithms adopted by the road network management should be well defined and recorded in the station metadata or network manuals.

12.8.2 Alarm generation

Alarm indications may be generated from sensor outputs when values exceed preset limits to initiate alarm messages from the AWS. The choice of alarms and limit tests will depend on national or regional practice. Some examples of alarms from road AWSs follow. Note the use of the logical "AND" and "OR" combinations in the algorithms. Examples of alarms include:

Alarm 1: T(air) OR T(road surface) ≤ 3°C AND T(extrapolated road surface)a ≤ 1°C
Alarm 2: T(air) ≤ 0°C
Alarm 3: First condition: T(road surface) ≤ 1°C OR T(extrapolated road surface) ≤ 0°C OR T(pavement at –5 cm) ≤ 0°C OR T(pavement at –10 cm) ≤ –1°C OR T(pavement at –20 cm) ≤ –2°C; AND second condition: carriageway is not dry OR at least one precipitation count in the past hour OR relative humidity ≥ 95% OR T(road surface) – T(dewpoint) ≤ 1°C
Alarm 4: T(road surface) ≤ 0°C AND detected state: frost or black ice
Alarm 5: First condition: detected precipitation = snow or hail; AND second condition: T(air) ≤ 2°C OR T(road surface) ≤ 0°C
Alarm 6: Wind average ≥ 11 m s–1 AND wind direction, referred to the road azimuth, between 45° and 135° OR between 225° and 315°
Alarm 7: Visibility ≤ 200 m

a Extrapolated road-surface temperature is calculated with an algorithm that takes account of the last measurements and creates a quadratic equation. This can be used to calculate estimates of temperatures over the next 3 h.

Other alarms may be set if faults are detected in sensors, message formats, power supplies or communications.
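Because each alarm is a simple Boolean combination of measured and derived values, the tests are straightforward to implement in station or central-computer software. The following Python sketch is illustrative only; the field names and the selection of alarms are assumptions, not a prescribed implementation. It shows how Alarms 1, 2 and 4 above might be coded.

```python
def check_alarms(obs):
    """Evaluate a few of the example road-weather alarms.

    `obs` is a dict of observed/derived values: t_air, t_road and
    t_road_extrap in degrees Celsius, surface_state as a string.
    Field names are illustrative only.
    """
    alarms = []

    # Alarm 1: (air OR road-surface temperature) <= 3 degC AND
    # extrapolated road-surface temperature <= 1 degC
    if (obs["t_air"] <= 3.0 or obs["t_road"] <= 3.0) and obs["t_road_extrap"] <= 1.0:
        alarms.append(1)

    # Alarm 2: air temperature at or below freezing
    if obs["t_air"] <= 0.0:
        alarms.append(2)

    # Alarm 4: road surface at or below 0 degC AND frost or black ice detected
    if obs["t_road"] <= 0.0 and obs["surface_state"] in ("frost", "black ice"):
        alarms.append(4)

    return alarms


print(check_alarms({"t_air": 1.5, "t_road": 2.0, "t_road_extrap": 0.5,
                    "surface_state": "dry"}))   # -> [1]
```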

12.9 Measurement quality control

Good decision-making for road management is dependent on reliable measurements so that, when sensors, their cabling or their interfaces in the AWS


develop a fault, the defective unit is detected and repaired without undue delay. It is very difficult for a road manager to detect erroneous measurements. Reference should be made to the guidance on quality control provided in Part II, Chapter 1, and in Part III, Chapter 1. Gross sensor faults may be detected by the AWS system software, which should then generate an alarm condition.

12.9.1 Checking for spurious values


Measurements that fall outside the expected operating range for the sensor (for example, 0° to 359° for a wind vane, and dewpoint not greater than air temperature) may be rejected by setting limits for each variable. Where there has been a faulty zero output, a rapid drift or a step change in sensor response, invalid measurements may be rejected by software that performs statistical analysis of measurements over time, either in the AWS, if it has sufficient processing power, or in the central data-acquisition computer. In some of the examples that follow, the standard deviation of the last n values is compared with a parameterized threshold. Examples of check algorithms (only for road meteorological measurements) include the following:
(a) Test for all temperatures: accept data only if the standard deviation of the last 30 values is ≥ 0.2°C;
(b) Test for wind speed: accept data only if the standard deviation of the last 20 values is ≥ 1 km h–1;
(c) Test for wind direction: accept data only if the standard deviation of the last 30 values is ≥ 10°;
(d) Test for liquid precipitation: check for consistency of the amount with the previous day's count;
(e) Test for snow precipitation: check data if T(air) > 4°C;
(f) Test for atmospheric long-wave radiation (AR, related to cloud cover): refuse data if AR > 130 W m–2; if relative humidity > 97% and AR > 10 W m–2; and if relative humidity ≥ 90% and AR > 10 W m–2 for four successive hours.
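Each of the statistical screening tests above amounts to comparing the standard deviation of the last n values with a parameterized threshold. A minimal Python sketch of check (a), with the threshold and window taken from the example above and an illustrative function name, might look as follows.

```python
from statistics import pstdev


def accept_temperature(last_values, min_std=0.2, window=30):
    """Accept a temperature series only if the standard deviation of the
    last `window` values is at least `min_std` (degC), as in check (a);
    a near-zero spread suggests a blocked or frozen sensor output."""
    if len(last_values) < window:
        return True          # not enough data yet to apply the test
    return pstdev(last_values[-window:]) >= min_std


# A sensor stuck at a constant value fails the test:
print(accept_temperature([2.0] * 30))                                # False
# Normally varying readings pass:
print(accept_temperature([2.0 + 0.2 * (i % 5) for i in range(30)]))  # True
```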


12.10 Road weather station maintenance

12.10.1 The road environment

Reference should be made to Part I, Chapter 1, and Part II, Chapter 1, for the sections on inspection, maintenance and calibration. The chapters of Part I include advice on the maintenance and calibration of specific sensors. Note, however, that the road AWS exists in an environment with peculiar problems: vulnerability of the AWS and its sensors to accidental or intentional damage; exposure to severe vehicle exhaust pollution; electrical interference from vehicle ignition and nearby high-tension power lines; corrosion from salt spray; and vibration (affecting connections between sensors and cables).

12.10.2 Maintenance plans and documentation

Because operational decisions affecting road safety may critically depend on reliable AWS data, there are stringent requirements for the maintenance of specific stations at particular times of the year. These considerations are outlined in the maintenance management plan for the network, which should include scheduled routine preventive maintenance as well as effective response to known fault conditions. The road network administration should have its own maintenance manual for its road meteorological stations, based on the manufacturer's recommendations, information gleaned from this Guide and from its own experience. A good manual contains checklists to aid inspection and the performance of maintenance tasks. The administration may decide to contract out inspection and maintenance work to the local NMHS, which should have experience with this kind of instrumentation.

12.10.3 Inspections and work programmes

Each station should undergo a complete maintenance programme twice a year, consisting of site maintenance (cutting grass and vegetation which could affect sensor exposure); checking enclosures for water ingress and replacing desiccants; treating and painting weathered and corroded enclosures, screens and supports; checking cable and connector integrity; cleaning and levelling sensors (noting the measurement issues referred to previously); and calibrating or replacing sensors and the AWS measurement chain. Road managers should maintain a physical inspection programme to check for the integrity and proper operation of their stations once a month in winter and once every two months in the summer. When conducting any work on the road surface, the regulation warning signs must be set out and approved safety clothing must be worn.


12.11 Training

To manage, operate and maintain a network of road meteorological stations in order to obtain a continuous flow of reliable data and to interpret that data to give fully meaningful information requires personnel with specific training in the necessary disciplines. Some of these areas of expertise are: the roadway environment and operational decision-making for the safe and efficient movement of traffic; remote data

acquisition, telecommunications and computing; the selection, application and maintenance of meteorological sensors and their signal processing; and the interpretation of meteorological and other data for the operational context. The administration responsible for the road network should collaborate with other agencies as necessary in order to ensure that the optimum mix of knowledge and training is maintained to ensure the successful operation of the road meteorological measurement network.


References and further reading

World Road Association (PIARC), 2002: Proceedings of the Eleventh PIARC International Winter Road Congress (Sapporo, Japan).
World Meteorological Organization, 1988: Technical Regulations. WMO-No. 49, Geneva.
World Meteorological Organization, 1995: Manual on Codes. Volumes I.1 and I.2, WMO-No. 306, Geneva.
World Meteorological Organization, 1997: Road Meteorological Observations (R.E.W. Pettifer and J. Terpstra). Instruments and Observing Methods Report No. 61, WMO/TD-No. 842, Geneva.
World Meteorological Organization, 2003a: Road Managers and Meteorologists over Road Meteorological Observations: The Result of Questionnaires (J.M. Terpstra and T. Ledent). Instruments and Observing Methods Report No. 77, WMO/TD-No. 1159, Geneva.
World Meteorological Organization, 2003b: Manual on the Global Observing System. WMO-No. 544, Geneva.

PART III QUALITY ASSURANCE AND MANAGEMENT OF OBSERVING SYSTEMS

CONTENTS

CHAPTER 1. QUALITY MANAGEMENT
1.1 General
1.2 The ISO 9000 family, ISO/IEC 17025, ISO/IEC 20000 and the WMO Quality Management Framework
1.3 Introduction of quality management
1.4 Accreditation of laboratories
1.5 Quality management tools
1.6 Factors affecting data quality
1.7 Quality assurance (quality control)
1.8 Performance monitoring
1.9 Data homogeneity and metadata
1.10 Network management
References and further reading

CHAPTER 2. SAMPLING METEOROLOGICAL VARIABLES
2.1 General
2.2 Time series, power spectra and filters
2.3 Determination of system characteristics
2.4 Sampling
References and further reading

CHAPTER 3. DATA REDUCTION
3.1 General
3.2 Sampling
3.3 Application of calibration functions
3.4 Linearization
3.5 Averaging
3.6 Related variables and statistics
3.7 Corrections
3.8 Quality management
3.9 Compiling metadata
References and further reading

CHAPTER 4. TESTING, CALIBRATION AND INTERCOMPARISON
4.1 General
4.2 Testing
4.3 Calibration
4.4 Intercomparisons
Annex 4.A. Procedures of WMO global and regional intercomparisons of instruments
Annex 4.B. Guidelines for organizing WMO intercomparisons of instruments
Annex 4.C. Reports of international comparisons conducted under the auspices of the Commission for Instruments and Methods of Observation
References and further reading

CHAPTER 5. TRAINING OF INSTRUMENT SPECIALISTS
5.1 Introduction
5.2 Appropriate training for operational requirements
5.3 Some general principles for training
5.4 The training process
5.5 Resources for training
Annex. Regional training centres
References and further reading

CHAPTER 1. QUALITY MANAGEMENT

1.1 General

This chapter is general and covers operational meteorological observing systems of any size or nature. Although the guidance it gives on quality management is expressed in terms that apply to large networks of observing stations, it should be read to apply even to a single station.

Quality management

Quality management provides the principles and the methodological frame for operations, and coordinates activities to manage and control an organization with regard to quality. Quality assurance and quality control are the parts of any successful quality management system. Quality assurance focuses on providing confidence that quality requirements will be fulfilled and includes all the planned and systematic activities implemented in a quality system so that quality requirements for a product or service will be fulfilled. Quality control is associated with those components used to ensure that the quality requirements are fulfilled and includes all the operational techniques and activities used to fulfil quality requirements.

This chapter concerns quality management associated with quality control and quality assurance and the formal accreditation of laboratory activities, especially from the point of view of meteorological observations of weather and atmospheric variables. The ISO 9000 family of standards is discussed to assist understanding of the course of action during the introduction of a quality management system in a National Meteorological and Hydrological Service (NMHS); this set of standards contains the minimum processes that must be introduced in a quality management system for fulfilling the requirements of the ISO 9001 standard. The total quality management concept according to the ISO 9004 guidelines is then discussed, highlighting the views of users and interested parties. The ISO/IEC 17025 standard is introduced. The benefits to NMHSs and the Regional Instrument Centres (RICs) from accreditation through ISO/IEC 17025 are outlined, along with a requirement for an accreditation process.

The ISO/IEC 20000 standard for information technology (IT) service management is introduced into the discussion, given that every observing system incorporates IT components.

Quality assurance and quality control

Data are of good quality when they satisfy stated and implied needs. Elsewhere in this Guide explicit or implied statements are given of required accuracy, uncertainty, resolution and representativeness, mainly for the synoptic applications of meteorological data, but similar requirements can be stated for other applications. It must be supposed that minimum total cost is also an implied or explicit requirement for any application. The purpose of quality management is to ensure that data meet requirements (for uncertainty, resolution, continuity, homogeneity, representativeness, timeliness, format, and so on) for the intended application, at a minimum practicable cost. All measured data are imperfect, but, if their quality is known and demonstrable, they can be used appropriately.

The provision of good quality meteorological data is not a simple matter and is impossible without a quality management system. The best quality systems operate continuously at all points in the whole observing system, from network planning and training, through installation and station operations to data transmission and archiving, and they include feedback and follow-up provisions on timescales from near-real time to annual reviews and end-to-end process. The amount of resources required for an effective quality management system is a proportion of the cost of operating an observing system or network and is typically a few per cent of the overall cost. Without this expenditure, the data must be regarded as being of unknown quality, and their usefulness is diminished.

An effective quality system is one that manages the linkages between preparation for data collection, data collection, data assurance and distribution to users to ensure that the user receives the required quantity. For many meteorological quantities, there are a number of these preparation-collection-assurance cycles between the field and the ultimate distribution to the user. It is essential that all these cycles are identified and the potential for divergence from the required quantity minimized. Many


of these cycles will be so closely linked that they may be perceived as one cycle. Most problems occur when there are a number of cycles and they are treated as independent of one another. Once a datum from a measurement process is obtained, it remains the datum of the measurement process. Other subsequent processes may verify its worth as the quantity required, use the datum in an adjustment process to create the quality required, or reject the datum. However, none of these subsequent processes changes the datum from the measurement process. Quality control is the process by which an effort is made to ensure that the processes leading up to the datum being distributed are correct, and to minimize the potential for rejection or adjustment of the resultant datum. Quality control includes explicit control of the factors that directly affect the data collected and processed before distribution to users. For observations or measurements, this includes equipment, exposure, measurement procedures, maintenance, inspection, calibration, algorithm development, redundancy of measurements, applied research and training. In a data transmission sense, quality control is the process established to ensure that for data that is subsequently transmitted or forwarded to a user database, protocols are set up to ensure that only acceptable data are collected by the user. Quality assurance is the best-known component of quality management systems, and it is the irreducible minimum of any system. It consists of all the processes that are put in place to generate confidence and ensure that the data produced will have the required quality and also include the examination of data at stations and at data centres to verify that the data are consistent with the quality system goals, and to detect errors so that the data may be either flagged as unreliable, corrected or, in the case of gross errors, deleted. A quality system should include procedures for feeding back into the measurement and quality control process to prevent the errors from recurring. Quality assurance can be applied in real-time post measurement, and can feed into the quality control process for the next process of a quality system, but in general it tends to operate in non-real time. Real-time quality assurance is usually performed at the station and at meteorological analysis centres. Delayed quality assurance may be performed at analysis centres for the compilation of a refined database, and at climate centres or databanks for archiving. In all cases, the results should be

returned to the observation managers for follow-up. A common component of quality assurance is quality monitoring or performance monitoring, a non-real-time activity in which the performance of the network or observing system is examined for trends and systematic deficiencies. It is typically performed by the office that manages and takes responsibility for the network or system, and which can prescribe changes to equipment or procedures. These are usually the responsibility of the network manager, in collaboration with other specialists, where appropriate. Modern approaches to data quality emphasize the advantages of a comprehensive system for quality assurance, in which procedures are laid down for continuous interaction between all parties involved in the observing system, including top management and others such as designers and trainers who may otherwise have been regarded as peripheral to operational quality concerns after data collection. The formal procedures prescribed by the International Organization for Standardization (ISO) for quality management and quality assurance, and other detailed procedures used in manufacturing and commerce, are also appropriate for meteorological data.

1.2 The ISO 9000 family, ISO/IEC 17025, ISO/IEC 20000 and the WMO Quality Management Framework

The chapter gives an explanation of the related ISO standards and how they interconnect. Proficiency in ISO quality systems is available through certification or accreditation, and usually requires external auditing of the implemented quality system. Certification implies that the framework and procedures used in the organization are in place and used as stated. Accreditation implies that the framework and procedures used in the organization are in place, used as stated and technically able to achieve the required result. The assessment of technical competence is a mandatory requirement of accreditation, but not of certification. The ISO 9001 is a standard by which certification can be achieved by an organization, while accreditation against the ISO/IEC 17025 is commonly required for laboratories and routine observations.


The ISO 9000 standard has been developed to assist organizations of all types and sizes to implement and operate quality management systems. The ISO 9000 standard describes the fundamentals of quality management systems and gives definitions of the related terms (for example, requirement, customer satisfaction). The main concept is illustrated in Figure 1.1. The ISO 9001 standard specifies the requirements for a quality management system that can be certified in accordance with this standard. The ISO 9004 standard gives guidelines for continual improvement of the quality management system to achieve a total quality management system. The ISO 19011 standard provides guidance on auditing the quality management system. All these standards are described in more detail in the related documents of the WMO Quality Management Framework.

1.2.1 ISO 9000: Quality management systems – Fundamentals and vocabulary

The following eight quality management principles are the implicit basis for the successful leadership of NMHSs of all sizes and for continual performance improvement:
(a) Customer focus;
(b) Leadership;
(c) Involvement of people;
(d) Process approach;
(e) System approach to management;
(f) Continual improvement;
(g) Factual approach to decision-making;
(h) Mutually beneficial supplier relationships.

All these principles must be documented and put into practice to meet the requirements of the ISO 9000 and 9001 standards to achieve certification. The main topic of these standards is the process approach, which can simply be described as activities that use resources to transform inputs into outputs. The process-based quality management system is simply modelled in Figure 1.2. The basic idea is that of a mechanism for obtaining continual improvement of the system and customer satisfaction through measuring the process indices (for example, computing time of a GME model, customer satisfaction, reaction time, and so forth), assessing the results, making management decisions for better resource management and so obtaining better products.

1.2.2 ISO 9001: Quality management systems – Requirements

The basic requirements for a quality management system are given by this standard, including processes for improvement and complaint management and carrying out management reviews. These processes are normally incorporated in the quality manual. The ISO 9001 standard focuses on management responsibility rather than technical activities.

Figure 1.1. The main concept of the ISO 9000 standards and the dependencies (diagram showing ISO 9000: fundamentals and vocabulary; ISO 9001: requirements and the ability to fulfil customer requirements, leading to certification; ISO 9004: guidelines for performance improvements, leading towards excellence models such as EFQM and Malcolm Baldrige; and ISO 19011: guidelines for quality and/or environmental management systems auditing. EFQM: European Foundation for Quality Management)


Figure 1.2. The PDCA control circuit (also named the Deming circuit): customer (interested party) requirements feed a continual improvement loop of management responsibility, management of resources, product realization and measurement, analysis and improvement, whose indices lead to the product and to customer satisfaction (P = Plan, D = Do, C = Check, A = Act)

To achieve certification in ISO 9001, six processes must be defined and documented by the organization (NMHS), as follows:
(a) Control of documents;
(b) Control of records;
(c) Control of non-conforming products;
(d) Corrective action;
(e) Preventive action;
(f) Internal audit.

Furthermore, there must be a quality manual which states the policy (for example, the goal is to achieve regional leadership in weather forecasting) and the objectives of the organization (for example, improved weather forecasting: reduce false-warning probability) and describes the process frameworks and their interaction. There must be statements for the following:
(a) Management;
(b) Internal communication;
(c) Continual improvement;
(d) System control (for example, through management reviews).

Exclusions can be made, for example, for development (if there are no development activities in the organization). The documentation pyramid of the quality management system is shown in Figure 1.3. The process descriptions indicate the real activities in the organization, such as the data-acquisition process in the weather and climate observational networks. They provide information on the different process steps and the organizational units carrying out the steps, for cooperation and information-sharing purposes. The documentation must differentiate between periodic and non-periodic processes. Examples of periodic processes are data acquisition or forecast dissemination. Examples of non-periodic processes include the installation of measurement equipment, which starts with a user or component requirement (for example, the order to install a measurement network). Lastly, the instructions in ISO 9001 give detailed information on the process steps to be referenced in the process description (for example, the starting instruction of an AWS). Forms and checklists are helpful tools to reduce the possibility that required tasks will be forgotten.

1.2.3 ISO 9004: Quality management systems – Guidelines for performance improvements

The guidelines for developing the introduced quality management system to achieve business excellence are formulated in ISO 9004. The main aspect is the change from the customer position to the position of interested parties. Different excellence models can be developed by the ISO 9004 guidelines, for example, the Excellence Model of the European Foundation for Quality Management (EFQM)1 or the Malcolm Baldrige National Quality Award.2 Both excellence models are appropriately established and well respected in all countries of the world. The EFQM Excellence Model contains the following nine criteria which are assessed by an expert team of assessors:

1 See the EFQM website at http://www.efqm.org.
2 See the NIST website at http://www.quality.nist.gov.


(a) Leadership;
(b) People;
(c) Policy and strategy;
(d) Partnerships and resources;
(e) Processes;
(f) People results;
(g) Customer results;
(h) Society results;
(i) Key performance results.

Figure 1.3. The documentation pyramid of a quality management system (from top to bottom: quality manual; process descriptions; instructions, forms and checklists)

The Malcolm Baldrige model contains seven criteria similar to the EFQM Excellence Model, as follows:
(a) Leadership;
(b) Strategic planning;
(c) Customer and market focus;
(d) Measurement, analysis and knowledge management;
(e) Human resources focus;
(f) Process management;
(g) Results.

There is no certification process for this standard, but external assessment provides the opportunity to draw comparisons with other organizations according to the excellence model (see also Figure 1.1).

1.2.4 ISO 19011: Guidelines for quality and/or environmental management systems auditing

This standard is a guide for auditing quality or environmental management systems and does not have any regulatory character. The following detailed activities are described for auditing the organization:
(a) Principles of auditing (ethical conduct, fair presentation, due professional care, independence, evidence-based approach);
(b) Audit planning (establishing and implementing the audit programme);
(c) Audit activities (initiating the audit, preparing and conducting on-site audit activities, preparing the audit report);
(d) Training and education of the auditors (competence, knowledge, soft skills).

The manner in which audits are conducted depends on the objectives and scope of the audit, which are set by the management or the audit client. The primary task of the first audit is to check the conformity of the quality management system with the ISO 9001 requirements. Further audits give priority to the interaction and interfaces of the processes. The audit criteria are the documentation of the quality management system, the process descriptions, the quality manual and the unique individual regulations. The audit planning published by the organization should specify the relevant departments of the organization, the audit criteria and the audit objectives, place, date and time, to ensure a clear assignment of the audits.

1.2.5 ISO/IEC 17025: General requirements for the competence of testing and calibration laboratories

This set of requirements is applicable to facilities, including laboratories and testing sites, that wish to have external accreditation of their competence in terms of their measurement and testing processes. The ISO/IEC 17025 standard aligns its management requirements with those of ISO 9001. This standard


is divided into two main parts: management requirements and technical requirements. Hence, the quality management system must follow the requirements of the ISO 9001 standard, which include described processes, a management handbook that provides a connection between processes and goals and policy statements, and that these aspects be audited regularly. All laboratory processes must be approved, verified and validated in a suitable manner to meet the requirements. Furthermore, the roles of the quality management representative (quality manager) and the head of the laboratory must be determined. An essential component of the technical requirements is the development of uncertainty analyses for each of the measurement processes, including documented and verified traceability to international metrology standards.

1.2.6 ISO/IEC 20000: Information technology – Service management

NMHSs make use of IT equipment to obtain data from the measuring networks, to use in GME/LM models and to provide forecasters with the outputs of models. The recommendations of this standard are helpful for the implementation of reliable IT services. The new ISO/IEC 20000 standard summarizes the old British standard BS-15000 and the IT Infrastructure Library (ITIL) recommendations. The division of requirements follows the ITIL structure. The ITIL elements are divided into service delivery and service support, with the following processes:

Service delivery:
(a) Service-level management;
(b) Financial management;
(c) IT service continuity management;
(d) Availability management;
(e) Capacity management.

Service support:
(a) Change management;
(b) Incident management;
(c) Problem management;
(d) Release management;
(e) Configuration management.

Security management is common to both areas. All these require that:
(a) The processes be adapted to the NMHS's organization;
(b) Particular attention be paid to user support.

Special attention has been placed on the change-management process, which can contain release and configuration management. Incident and problem management is normally covered by the implementation of a user help desk.

1.2.7 WMO Quality Management Framework

The WMO Quality Management Framework gives basic recommendations based on the experiences of NMHSs. The necessary conditions for successful certification against ISO 9001 are explained in WMO (2005a; 2005b). The Quality Management Framework is the guide for NMHSs, especially for NMHSs with little experience in a formal quality management system. The introduction of a quality management system is described only briefly in the following section, noting that WMO cannot carry out any certification against ISO 9001.

1.3 Introduction of quality management

The introduction of successful quality management depends heavily on the cooperation of senior management. The senior management of the NMHS must be committed to the quality management system and support the project team. The necessary conditions for successful certification are summarized and the terms of the ISO 9001 standards are explained in ISO 20000. Senior-level management defines a quality policy and the quality objectives (including a quality management commitment), and staff have to be trained in sufficient quality management topics to understand the basis for the quality management process (see section 1.2.2). Most importantly, a project team should be established to manage the transition to a formal quality management system, including the definition and analysis of the processes used by the organization. To assist the project team, brief instructions can be given to the staff involved in the process definition, and these would normally include the following:
(a) To document (write down) what each group does;
(b) To indicate the existing documentation;


(c) To indicate the proof or indicators of what is done;
(d) To identify what can be done to continually improve the processes.

Given that the documentation specifies what the organization does, it is essential that the main processes reflect the functions of the organization of the NMHS. These can be a part of the named processes (see Figure 1.4), for example:
(a) Weather forecasting (including hydrometeorological, agrometeorological and human biometeorological aspects) and weather warnings;
(b) Consulting services (including climate and environment);
(c) Data generation (from measurement and observational networks);
(d) International affairs;
(e) Research and development (global modelling, limited-area models, instrumentation);
(f) Technical infrastructure (computing and communications, engineering support, data management and IT support);
(g) Administration processes (purchasing, financial and personnel management, organization, administration offices and immovables, knowledge management, central planning and control, and legal affairs).

Figure 1.4. Process landscape of an NMHS (example: DWD; WMO, 2005a). The diagram groups production processes (weather forecast and warning service, consulting services, atmospheric watch, and data generation and data management, delivering the catalogue of products and services), support processes (installation, operation and development of technical systems and applications support; development and implementation of procedures and methods; development of numerical weather prediction models and related methodology; organizational development and steering instruments; management of staff, finances and procurement; international activities) and management processes (management, internal communication and reporting, system control through audits and reviews, and improvement and complaint management). DWD = Deutscher Wetterdienst.

Even though these processes will meet the individual needs of NMHSs and provide them with subprocesses, normally there should be regulations for remedying incidents (for example, system failures, staff accidents). The processes must be introduced into the organization with clear quality objectives, and all staff must be trained in understanding the processes, including the use of procedures and checklists and the measurement of process indicators. Before applying for certification, the quality management system must be reviewed by carrying out internal audits in the departments and divisions of the organization, to check conformity of the quality management system as stated and as enacted. These documented reviews can be performed on products by specialized and trained auditors. The requirements and recommendations for these reviews are given in ISO 19011 (see section 1.2.4). The management review of the quality management system will include the following:
(a) Audit results;
(b) Customer feedback;
(c) Process performance based on performance indicators;
(d) Status of preventive and corrective actions;
(e) Follow-up actions from previous management reviews;
(f) Changes in the quality management system (policy of the organization);
(g) Recommendations for improvement.

1.4 Accreditation of laboratories

Accreditation requires additional processes and documentation and, most importantly, evidence that laboratory staff have been trained and have mastered the processes and methods to be accredited. The documentation must contain the following aspects: (a) A management manual for the laboratory; (b) The process descriptions mentioned in section 1.2; (c) The documentation of all processes and methods; (d) Work instructions for all partial steps in the processes and methods; (e) Equipment manuals (manual including calibrating certificates); (f) Maintenance manuals. Since procedures and methods are likely to change more frequently than the management aspects of the accreditation, the methods are usually not included in the management manual. However, there is specific reference to the procedures and methods used in the management manual. As it is unlikely that all aspects of the accreditation will be covered once the quality management system is introduced, it is recommended that a preaudit be conducted and coordinated with the certifying agency. In these pre-audits it would be normal for the certifying agency: (a) To assess staff and spatial prerequisites; (b) To assess the suitability of the management system; (c) To check the documentation; (d) To validate the scope of the accreditation. The accreditation procedure consists of assessments by an expert panel (external to the organization), which includes a representative from the certifying agency. The assessment panel will focus on two main areas as follows: (a) Documentation; (b) An examination of the facilities included in the scope of the accreditation (for example, laboratories, special field sites).

The assessment of documentation covers verification of the following documents: (a) A management manual (or laboratory guide); (b) Procedure instructions; (c) Work instructions; (d) Test instructions; (e) Equipment manuals; (f) Maintenance manuals; (g) Uncertainty analyses of specific quantities, test results and calibrations; (h) Proof documents (for example, that staff training has occurred and that quantities are traceable); (i) Records (for example, correspondence with the customer, generated calibration certificates). The external expert team could request additional documents, as all aspects of the ISO/IEC 17025 standard are checked, and in more detail than in a certification under ISO 9001. Besides the inspection of the measurement methods and associated equipment, the assessment of the facilities in the scope of the accreditation will include the following: (a) Assessment of the staff (including training and responsibility levels); (b) Assessment of the infrastructure that supports the methods (for example, buildings, access). The following are also checked during the assessment to ensure that they meet the objectives required by management for accreditation: (a) Organizational structure; (b) Staff qualifications; (c) Adequacy of the technological facilities; (d) Customer focus. In addition, the assessment should verify that the laboratory has established proof of the following: (a) Technical competence (choice and use of the measuring system); (b) Calibration of measurement equipment; (c) Maintenance of measurement equipment; (d) Verification and validation of methods.

Benefits and disadvantages of accreditation

Through initial accreditation by an independent certifying agency, NMHSs prove their competence in the area of meteorological measuring and testing methods according to a recognized standard. Once accreditation is established, there is an ongoing periodic external audit, which provides additional proof that standards have been maintained, but more importantly it helps


the organization to ensure that its own internal quality requirements are met. An accreditation with suitable scope also provides commercial opportunities for the calibration, verification and assessment of measurement devices. For organizations that do not have a quality management system in place, the benefits of accreditation are significant. First, it documents the organization's system, and, through that, a process of analysis can be used to make the organization more efficient and effective. For example, one component of accreditation under ISO/IEC 17025 requires uncertainty analyses for every calibration and verification test; such quantitative analyses provide information on where the most benefit can be achieved for the least resources.

Accreditation or certification under any recognized quality framework requires registration and periodic audits by external experts and the certifying agency. These represent additional costs for the organization and are dependent on the scope of the accreditation and certification. Seeking accreditation before an effective quality management system is in place will lead to an increased use of resources and result in existing resources being diverted to establish a quality management system; there will also be additional periodic audit costs.

1.5 Quality management tools

Several well-known tools exist to assist in the processes of a quality management system and its continuous improvement. Three examples of these tools are described below as an introduction: the Balanced Scorecard, Failure Mode and Effects Analysis, and Six Sigma.

The Balanced Scorecard (Kaplan and Norton, 1996) has at a minimum four points of focus: finances, the customer, processes and employees. Often the general public is added, given that public interests must always be taken into account. Each organization and organization element provides key performance indicators for each of the focus areas, which in turn link to the organization's mission (or purpose, vision or goals) and the strategy (or working mission and vision).

Failure Mode and Effects Analysis is a method for the examination of possible failure causes and faults and the probability of their appearance. The method can be used for analysing production processes and product specifications. The aim of the optimization process is to reduce the risk priority number.

The Six Sigma method was developed in the communications industry and uses statistical process controls to improve production. The objective of this method is to reduce process failure below a specific value.
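The risk priority number mentioned above is conventionally formed as the product of three ratings: severity, occurrence and detection. The short Python sketch below is an illustration only; the 1 to 10 scales and the example ratings are assumptions rather than values taken from this Guide.

```python
def risk_priority_number(severity, occurrence, detection):
    """RPN = severity x occurrence x detection, each rated 1 (best) to 10 (worst)."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must lie between 1 and 10")
    return severity * occurrence * detection


# Hypothetical failure mode: an ice-detection sensor gives no output in freezing rain.
print(risk_priority_number(severity=8, occurrence=3, detection=4))   # 96
```

Reducing any of the three ratings, for example by improving detection through an automated plausibility check, lowers the risk priority number and thus the priority of the failure mode.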

1.6 Factors affecting data quality

The life history of instruments in field service involves different phases, such as planning according to user requirements, selection and installation of equipment, operation, calibration, maintenance and training activities. To obtain data of adequate or prescribed quality, appropriate actions must be taken at each of these phases. Factors affecting data quality are summarized in this section, and reference is made to more comprehensive information available in other chapters of this Guide and in other WMO Manuals and Guides. User requirements: The quality of a measuring system can be assessed by comparing user requirements with the ability of the systems to fulfil them. The compatibility of user data-quality requirements with instrumental performance must be considered not only at the design and planning phase of a project, but also continually during operation, and implementation must be planned to optimize cost/ benefit and cost/performance ratios. This involves a shared responsibility between users, instrument experts and logistic experts to match technical and financial factors. In particular, instrument experts must study the data quality requirements of the users to be able to propose specifications within the technical state of the art. This important phase of design is called value analysis. If it is neglected, as is often the case, it is likely that the cost or quality requirements, or both, will not be satisfied, possibly to such an extent that the project will fail and efforts will have been wasted. Functional and technical specifications: The translation of expressed requirements into functional specifications and then into technical specifications is a very important and complex task, which requires a sound knowledge of user requirements, meteorological measuring technology, methods of



observation, WMO regulations, and relevant operational conditions and technical/administrative infrastructures. Because the specifications will determine the general functioning of a planned measuring system, their impact on data quality is considerable. Selection of instruments: Instruments should be carefully selected considering the required uncertainty, range and resolution (for definitions see Part I, Chapter 1), the climatological and environmental conditions implied by the users’ applications, the working conditions, and the available technical infrastructure for training, installation and maintenance. An inappropriate selection of instruments may yield poor quality data that may not be anticipated, causing many difficulties when they are subsequently discovered. An example of this is an underspecification resulting in excessive wear or drift. In general, only high quality instruments should be employed for meteorological purposes. Reference should be made to the relevant information given in the various chapters in this Guide. Further information on the performance of several instruments can be found in the reports of WMO international instrument intercomparisons and in the proceedings of WMO/CIMO and other international conferences on instruments and methods of observation. Acceptance tests: Before installation and acceptance, it is necessary to ensure that the instruments fulfil the original specifications. The performance of instruments, and their sensitivity to influence factors, should be published by manufacturers and are sometimes certified by calibration authorities. However, WMO instrument intercomparisons show that instruments may still be degraded by factors affecting their quality which may appear during the production and transportation phases. Calibration errors are difficult or impossible to detect when adequate standards and appropriate test and calibration facilities are not readily available. It is an essential component of good management to carry out appropriate tests under operational conditions before instruments are used for operational purposes. These tests can be applied both to determine the characteristics of a given model and to control the effective quality of each instrument. When purchasing equipment, consideration should be given to requiring the supplier to set up certified quality assurance procedures within its organization according to the requirements of the NMHS, thus reducing the need for acceptance testing by

the recipient. The extra cost when purchasing equipment may be justified by consequent lower costs for internal testing or operational maintenance, or by the assured quality of subsequent field operations. Compatibility: Data compatibility problems can arise when instruments with different technical characteristics are used for taking the same types of measurements. This can happen, for example, when changing from manual to automated measurements, when adding new instruments of different time-constants, when using different sensor shielding, when applying different data reduction algorithms, and so on. The effects on data compatibility and homogeneity should be carefully investigated by long-term intercomparisons. Reference should be made to the various WMO reports on international instrument intercomparisons. Siting and exposure: The density of meteorological stations depends on the timescale and space scale of the meteorological phenomena to be observed and is generally specified by the users, or set by WMO regulations. Experimental evidence exists showing that improper local siting and exposure can cause a serious deterioration in the accuracy and representativeness of measurements. General siting and exposure criteria are given in Part I, Chapter 1, and detailed information appropriate to specific instruments is given in the various chapters of Part I. Further reference should be made to the regulations in WMO (2003). Attention should also be paid to external factors that can introduce errors, such as dust, pollution, frost, salt, large ambient temperature extremes or vandalism. Instrumental errors: A proper selection of instruments is a necessary, but not sufficient, condition for obtaining good-quality data. No measuring technique is perfect, and all instruments produce various systematic and random errors. Their impact on data quality should be reduced to an acceptable level by appropriate preventive and corrective actions. These errors depend on the type of observation; they are discussed in the relevant chapters of this Guide (see Part I). Data acquisition: Data quality is not only a function of the quality of the instruments and their correct siting and exposure, but also depends on the techniques and methods used to obtain data and to convert them into representative data. A distinction should be made between automated measurements and human observations. Depending on the technical characteristics of a


sensor, in particular its time-constant, proper sampling and averaging procedures must be applied. Unwanted sources of external electrical interference and noise can degrade the quality of the sensor output and should be eliminated by proper sensor-signal conditioning before entering the data-acquisition system. Reference should be made to sampling and filtering in Part II, Chapter 1 and in Part II, Chapter 2. In the case of manual instrument readings, errors may arise from the design, settings or resolution of the instrument, or from the inadequate training of the observer. For visual or subjective observations, errors can occur through an inexperienced observer misinterpreting the meteorological phenomena. Data processing: Errors may also be introduced by the conversion techniques or computational procedures applied to convert the sensor data into Level II or Level III data. Examples of this are the calculation of humidity values from measured relative humidity or dewpoint and the reduction of pressure to mean sea level. Errors also occur during the coding or transcription of meteorological messages, in particular if performed by an observer. Real-time quality control: Data quality depends on the real-time quality-control procedures applied during data acquisition and processing and during the preparation of messages, in order to eliminate the main sources of errors. These procedures are specific to each type of measurement but generally include gross checks for plausible values, rates of change and comparisons with other measurements (for example, dewpoint cannot exceed temperature). Special checks concern manually entered observations and meteorological messages. In AWSs, special built-in test equipment and software can detect specific hardware errors. The application of these procedures is most important since some errors introduced during the measuring process cannot be eliminated later. For an overview of manual and automatic methods in use, refer to other paragraphs of this chapter as well as to Part II, Chapter 1 and WMO (1989; 1992; 1993a; 2003). Performance monitoring: As real-time quality-control procedures have their limitations and some errors can remain undetected, such as long-term drifts in sensors and errors in data transmission, performance monitoring at the network level is required at meteorological analysis centres and by network managers. This monitoring is described in section 1.8 of this chapter. Information can also be found in Part II, Chapter 1 and in WMO

(1989). It is important to establish effective liaison procedures between those responsible for monitoring and for maintenance and calibration, to facilitate rapid response to fault or failure reports from the monitoring system. Testing and calibration: During their operation, the performance and instrumental characteristics of meteorological instruments change for reasons such as the ageing of hardware components, degraded maintenance, exposure, and so forth. These may cause long-term drifts or sudden changes in calibration. Consequently, instruments need regular inspection and calibration to provide reliable data. This requires the availability of standards and of appropriate calibration and test facilities. It also requires an efficient calibration plan and calibration housekeeping. See Part III, Chapter 4 for general information about test and calibration aspects and the relevant chapters of Part I for individual instruments. Maintenance: Maintenance can be corrective (when parts fail), preventive (such as cleaning or lubrication) or adaptive (in response to changed requirements or obsolescence). The quality of the data provided by an instrument is considerably affected by the quality of its maintenance, which in turn depends mainly on the ability of maintenance personnel and the maintenance concept. The capabilities, personnel and equipment of the organization or unit responsible for maintenance must be adequate for the instruments and networks. Several factors have to be considered, such as a maintenance plan, which includes corrective, preventive and adaptive maintenance, logistic management, and the repair, test and support facilities. It must be noted that the maintenance costs of equipment can greatly exceed its purchase costs (see Part II, Chapter 1). Training and education: Data quality also depends on the skills of the technical staff in charge of testing, calibration and maintenance activities, and of the observers making the observations. Training and education programmes should be organized according to a rational plan geared towards meeting the needs of users, and especially the maintenance and calibration requirements outlined above, and should be adapted to the system; this is particularly important for AWSs. As part of the system procurement, the manufacturer should be obliged to provide very comprehensive operational and technical documentation and to organize operational and


technical training courses (see Part III, Chapter 5) in the NMHS.

Metadata: Sound quality assurance entails the availability of detailed information on the observing system itself, and in particular on all changes that occur during the time of its operation. Such information on data, known as metadata, enables the operator of an observing system to take the most appropriate preventive, corrective and adaptive actions to maintain or enhance data quality. Metadata requirements are further considered in section 1.9. For further information on metadata, see Part I, Chapter 1 (and Annex 1.C).
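The gross real-time checks described above (plausible-value limits, rates of change and cross-variable consistency such as the dewpoint not exceeding the air temperature) can be illustrated with a minimal sketch. The function name, the limits and the flag values below are illustrative assumptions, not part of this Guide or of any particular AWS software.

# Minimal sketch of gross real-time quality-control checks (illustrative only).
# Limits and flag conventions are assumptions; operational values must come
# from the station's own climatology and the relevant WMO guidance.

SUSPECT, OK = "suspect", "ok"

def gross_checks(obs, previous=None, max_step_c=5.0):
    """Apply simple plausibility, rate-of-change and consistency checks.

    obs, previous: dicts with 'temperature_c' and 'dewpoint_c' (deg C).
    Returns a dict of flags, one per test.
    """
    flags = {}

    # Plausible-value (climatological limit) check
    flags["temperature_range"] = (
        OK if -80.0 <= obs["temperature_c"] <= 60.0 else SUSPECT
    )

    # Internal consistency: dewpoint cannot exceed temperature
    flags["dewpoint_le_temperature"] = (
        OK if obs["dewpoint_c"] <= obs["temperature_c"] else SUSPECT
    )

    # Temporal (rate-of-change) check against the previous observation
    if previous is not None:
        step = abs(obs["temperature_c"] - previous["temperature_c"])
        flags["temperature_step"] = OK if step <= max_step_c else SUSPECT

    return flags

if __name__ == "__main__":
    now = {"temperature_c": 21.4, "dewpoint_c": 23.0}
    before = {"temperature_c": 20.9, "dewpoint_c": 18.0}
    print(gross_checks(now, before))  # the dewpoint check is flagged as suspect

Operational software applies many more tests than this, but the pattern of returning a flag per test, rather than silently rejecting data, is what allows suspect values to be traced and corrected later.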

1.7 Quality assurance (quality control)

WMO (2003) prescribes that certain quality-control procedures must be applied to all meteorological data to be exchanged internationally. Level I and Level II data, and the conversion from one to the other, must be subjected to quality control. WMO (1992) prescribes that quality-control procedures must be applied by meteorological data-processing centres to most kinds of weather reports exchanged internationally, to check for coding errors, internal consistency, time and space consistency, and physical and climatological limits, and it specifies the minimum frequency and times for quality control.

WMO (1989) gives general guidance on procedures. It emphasizes the importance of quality control at the station, because some errors occurring there cannot be subsequently corrected, and also points out the great advantages of automation. WMO (1993a) gives rather detailed descriptions of the procedures that may be used by numerical analysis centres, with advice on climatological limits, types of internal consistency checks, comparisons with neighbouring stations and with analyses and prognoses, and provides brief comments on the probabilities of rejecting good data and accepting false data with known statistical distributions of errors.

Quality control, as specifically defined in section 1.1, is applied in real time or near-real time to data acquisition and processing. In practice, responsibility for quality control is assigned to various points along the data chain. These may be at the station, if there is direct manual involvement in data acquisition, or at the various centres where the data are processed. Quality assurance procedures must be introduced and reassessed during the development phases of new sensors or observing systems (see Figure 1.5).

1.7.1 Surface data

1.7.1.1 Manual observations and staffed stations

The observer or the officer in charge at a station is expected to ensure that the data leaving the station have been quality controlled, and should be

Figure 1.5. Process for observation generation (NMS: National Meteorological or Hydrological Service; QA: quality assurance; QC: quality control)

provided with established procedures for attending to this responsibility. This is a specific function, in addition to other maintenance and record-keeping functions, and includes the following:
(a) Internal consistency checks of a complete synoptic or other compound observation: In practice, these are performed as a matter of course by an experienced observer, but they should nevertheless be an explicit requirement. Examples of this are the relations between the temperature, the dewpoint and the daily extremes, and between rain, cloud and weather;
(b) Climatological checks for consistency: The observer knows, or is provided with charts or tables of, the normal seasonal ranges of variables at the station, and should not allow unusual values to go unchecked;
(c) Temporal checks: These should be made to ensure that changes since the last observation are realistic, especially when the observations have been made by different observers;
(d) Checks of all arithmetical and table look-up operations;
(e) Checks of all messages and other records against the original data.

1.7.1.2 Automatic weather stations

At AWSs, some of the above checks should be performed by the software, as well as engineering checks on the performance of the system. These are discussed in Part II, Chapter 1.

1.7.2 Upper-air data

The procedures for controlling the quality of upper-air data are essentially the same as those for surface data. Checks should be made for internal consistency (such as lapse rates and shears), for climatological and temporal consistency, and for consistency with normal surface observations. For radiosonde operations, it is of the utmost importance that the baseline initial calibration be explicitly and deliberately checked. The message must also be checked against the observed data. The automation of on-station quality control is particularly useful for upper-air data.

1.7.3 Data centres

Data should be checked in real time, or as close to real time as possible, at the first and subsequent points where they are received or used. It is highly advisable to apply the same urgent checks to all data, even to those that are not used in real time, because later quality control tends to be less effective. If available, automation should of course be used, but certain quality-control procedures are possible without computers, or with only partial assistance by computing facilities.

The principle is that every message should be checked, preferably at each stage of the complete data chain. The checks that have already been performed at stations are usually repeated at data centres, perhaps in more elaborate form by making use of automation. Data centres, however, usually have access to other network data, thus making a spatial check possible against observations from surrounding stations or against analysed or predicted fields. This is a very powerful method and is the distinctive contribution of a data centre. If errors are found, the data should be either rejected or corrected by reference back to the source, or should be corrected at the data centre by inference. The last of these alternatives may evidently introduce further errors, but it is nevertheless valid in many circumstances; data so corrected should be flagged in the database and should be used only with care.

The quality-control process produces data of established quality, which may then be used for real-time operations and for a databank. However, a by-product of this process should be the compilation of information about the errors that were found. It is good practice to establish, at the first or a subsequent data-processing point, a system for immediate feedback to the origin of the data if errors are found, and to compile a record for use by the network manager in performance monitoring, as discussed below. This function is best performed at the regional level, where there is ready access to the field stations.

The detailed procedures described in WMO (1993a) are a guide to controlling the quality of data for international exchange, under the recommendations of WMO (1992).

1.7.4 Interaction with field stations

If quality is to be maintained, it is absolutely essential that errors be tracked back to their source, with some kind of corrective action. For data from staffed stations this is very effectively done in near-real time, not only because the data may be corrected, but also to identify the reason for the error and prevent it from recurring.


It is good practice to assign a person at a data centre or other operational centre with the responsibility for maintaining near-real-time communication and effective working relations with the field stations, to be used whenever errors in the data are identified.
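The spatial check made at data centres (section 1.7.3), and the statistics of the mean and scatter of differences from neighbouring stations used in performance monitoring (section 1.8), can be sketched as follows. The function name, the tolerance expressed as a multiple of the scatter, and the example values are illustrative assumptions only, not a prescribed procedure.

# Illustrative sketch: compare one station's value with its neighbours.
# A value is flagged when it departs from the neighbour mean by more than
# k standard deviations; the threshold k is an assumption for this example.
from statistics import mean, stdev

def spatial_check(value, neighbour_values, k=3.0):
    """Return (difference from neighbour mean, flag) for one observation."""
    m = mean(neighbour_values)
    s = stdev(neighbour_values)
    diff = value - m
    flag = "ok" if s == 0 or abs(diff) <= k * s else "suspect"
    return diff, flag

# Example: pressure reduced to mean sea level (hPa) at a station and neighbours
station_pressure = 1003.1
neighbours = [1008.2, 1007.9, 1008.6, 1007.5, 1008.1]
difference, flag = spatial_check(station_pressure, neighbours)
print(f"difference from neighbour mean: {difference:+.1f} hPa, flag: {flag}")

Compiling such differences over time, station by station, is exactly the kind of record that supports the performance monitoring described in the next section.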

1.8 Performance monitoring

The management of a network, or of a station, is greatly strengthened by keeping continuous records of performance, typically on a daily and monthly schedule. The objective of performance monitoring is to review continually the quality of field stations and of each observing system, such as for pressure measurement, or the radiosonde network. There are several aspects to performance monitoring, as follows:
(a) Advice from data centres should be used to record the numbers and types of errors detected by quality-control procedures;
(b) Data from each station should be compiled into synoptic and time-section sets. Such sets should be used to identify systematic differences from neighbouring stations, both in spatial fields and in comparative time series. It is useful to derive statistics of the mean and the scatter of the differences. Graphical methods are effective for these purposes;
(c) Reports should be obtained from field stations about equipment faults, or other aspects of performance.

These types of records are very effective in identifying systematic faults in performance and in indicating corrective action. They are powerful indicators of many factors that affect the data, such as exposure or calibration changes, deteriorating equipment, changes in the quality of consumables or the need for retraining. They are particularly important for maintaining confidence in automatic equipment.

The results of performance monitoring should be used for feedback to the field stations, which is important to maintain motivation. The results also indicate when action is necessary to repair or upgrade the field equipment.

Performance monitoring is a time-consuming task, to which the network manager must allocate adequate resources. WMO (1988) describes a system to monitor data from an AWS network, using a small, dedicated office with staff monitoring real-time output and advising the network managers and data users. Miller and Morone (1993) describe a system with similar functions, in near-real time, making use of a mesoscale numerical model for the spatial and temporal tests on the data.

1.9 Data homogeneity and metadata

In the past, observational networks were primarily built to support weather forecasting activities. Operational quality control was focused mainly on identifying outliers, but rarely incorporated checks for data homogeneity and continuity of time series. The surge of interest in climate change, primarily as a result of concerns over increases in greenhouse gases, changed this situation. Data homogeneity tests have revealed that many of the apparent climate changes can be attributed to inhomogeneities in time series caused only by operational changes in observing systems. This section attempts to summarize these causes and presents some guidelines concerning the necessary information on data, namely, metadata, which should be made available to support data homogeneity and climate change investigations.

1.9.1 Causes of data inhomogeneities

Inhomogeneities caused by changes in the observing system appear as abrupt discontinuities, gradual changes, or changes in variability. Abrupt discontinuities mostly occur due to changes in instrumentation, siting and exposure changes, station relocation, changes in the calculation of averages, data reduction procedures and the application of new calibration corrections. Inhomogeneities that occur as a gradually increasing effect may arise from a change in the surroundings of the station, urbanization and gradual changes in instrumental characteristics. Changes in variability are caused by instrument malfunctions. Inhomogeneities are further due to changes in the time of observations, insufficient routine inspection, maintenance and calibration, and unsatisfactory observing procedures. On a network level, inhomogeneities can be caused by data incompatibilities. It is obvious that all factors affecting data quality also cause data inhomogeneities. The historical survey of changes in radiosondes (WMO, 1993b) illustrates the seriousness of the problem and is a good example of the careful work that is necessary to eliminate it.
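An abrupt discontinuity of the kind listed above is often easiest to see in the difference series between a candidate station and a stable reference, for example a neighbouring station or a network mean. The sketch below simply compares the means before and after each possible break point; the function name, the minimum segment length and the synthetic data are illustrative assumptions, and operational homogeneity work uses more rigorous statistical tests than this.

# Illustrative sketch: locate the most likely step change in a
# candidate-minus-reference difference series by comparing the means of the
# two segments either side of each possible break point.
from statistics import mean

def largest_step(diff_series, min_segment=5):
    """Return (index, step size) of the largest mean shift in the series."""
    best_index, best_step = None, 0.0
    for i in range(min_segment, len(diff_series) - min_segment):
        step = mean(diff_series[i:]) - mean(diff_series[:i])
        if abs(step) > abs(best_step):
            best_index, best_step = i, step
    return best_index, best_step

# Synthetic annual temperature differences (deg C) with a jump introduced
# at year 12, for example after an undocumented sensor replacement.
diffs = [0.1, -0.1, 0.0, 0.2, -0.2, 0.1, 0.0, -0.1, 0.1, 0.0, -0.1, 0.1,
         0.7, 0.5, 0.6, 0.7, 0.5, 0.6, 0.8, 0.6]
index, step = largest_step(diffs)
print(f"most likely break after year {index}, apparent shift {step:+.2f} C")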


Changes in the surface-temperature record when manual stations are replaced by AWSs, and changes in the upper-air records when radiosondes are changed, are particularly significant cases of data inhomogeneities. These two cases are now well recognized and can, in principle, be anticipated and corrected, but performance monitoring can be used to confirm the effectiveness of corrections, or even to derive them.

1.9.2 Metadata

Data inhomogeneities should, as far as possible, be prevented by appropriate quality-assurance procedures with respect to quality control. However, this cannot always be accomplished, as some causes of inhomogeneities, such as the replacement of a sensor, can represent real improvements in measuring techniques. It is important to have information on the occurrence, type and, especially, the time of all inhomogeneities that occur. After obtaining such information, climatologists can run appropriate statistical programs to link the previous data with the new data in homogeneous databases with a high degree of confidence. Information of this kind is commonly available in what is known as metadata (information on data), also called station histories. Without such information, many of the above-mentioned inhomogeneities may not be identified or corrected. Metadata can be considered as an extended version of the station administrative record, containing all possible information on the initial set-up, and the type and times of changes that occurred during the life history of an observing system. As computer data management systems are an important aspect of quality data delivery, it is desirable that metadata should be available as a computer database enabling computerized composition, updating and use.

1.9.3 Elements of a metadata database

A metadata database contains initial set-up information together with updates whenever changes occur. Major elements include the following:
(a) Network information: The operating authority, and the type and purpose of the network;
(b) Station information:
 (i) Administrative information;
 (ii) Location: geographical coordinates, elevation(s);³
 (iii) Descriptions of remote and immediate surroundings and obstacles;³
 (iv) Instrument layout;³
 (v) Facilities: data transmission, power supply, cabling;
 (vi) Climatological description;
(c) Individual instrument information:
 (i) Type: manufacturer, model, serial number, operating principles;
 (ii) Performance characteristics;
 (iii) Calibration data and time;
 (iv) Siting and exposure: location, shielding, height above ground;³
 (v) Measuring or observing programme;
 (vi) Times of observations;
 (vii) Observer;
 (viii) Data acquisition: sampling, averaging;
 (ix) Data-processing methods and algorithms;
 (x) Preventive and corrective maintenance;
 (xi) Data quality (in the form of a flag or uncertainty).

1.9.4 Recommendations for a metadata system

The development of a metadata system requires considerable interdisciplinary organization, and its operation, particularly the scrupulous and accurately dated record of changes in the metadata base, requires constant attention. A useful survey of requirements is given in WMO (1994), with examples of the effects of changes in observing operations and an explanation of the advantages of good metadata for obtaining a reliable climate record from discontinuous data. The basic functional elements of a system for maintaining a metadatabase may be summarized as follows:
(a) Standard procedures must be established for collecting overlapping measurements for all significant changes made in instrumentation, observing practices and sensor siting;
(b) Routine assessments must be made of ongoing calibration, maintenance and homogeneity problems for the purpose of taking corrective action, when necessary;
(c) There must be open communication between the data collector and the researcher to provide feedback mechanisms for recognizing data problems, the correction or at least the flagging of potential problems, and the improvement of, or addition to, documentation to meet initially unforeseen user requirements (for example, work groups);
(d) There must be detailed and readily available documentation on the procedures, rationale, testing, assumptions and known problems involved in the construction of the data set from the measurements.
³ It is necessary to include maps and plans on appropriate scales.


These four recommendations would have the effect of providing a data user with enough metadata to enable manipulation, amalgamation and summarization of the data with minimal assumptions regarding data quality and homogeneity.
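As an illustration of how the elements listed in section 1.9.3 might be held in a computerized metadata database, the sketch below stores one dated station-history entry as a simple structured record. The field names and values are hypothetical examples chosen for this sketch; they follow the element list above but do not represent a standardized WMO metadata schema.

# Hypothetical example of a dated station-history (metadata) record.
# Field names are illustrative; they follow the element list in section 1.9.3
# but do not represent a standardized WMO metadata format.
import json
from datetime import date

station_metadata_entry = {
    "station": {
        "name": "Example AWS 123",          # hypothetical station
        "latitude_deg": 46.25,
        "longitude_deg": 6.13,
        "elevation_m": 420.0,
    },
    "instrument": {
        "variable": "air_temperature",
        "manufacturer": "ExampleCo",          # hypothetical
        "model": "T-100",                     # hypothetical
        "serial_number": "SN-0042",
        "height_above_ground_m": 2.0,
        "calibration_date": date(2008, 5, 14).isoformat(),
        "sampling": {"sample_interval_s": 10, "averaging_period_s": 60},
    },
    "change": {
        "valid_from": date(2008, 6, 1).isoformat(),
        "description": "Sensor replaced; overlapping comparison run for 30 days",
    },
}

print(json.dumps(station_metadata_entry, indent=2))

Keeping each change as a separate dated entry, rather than overwriting earlier values, is what allows a station history to be reconstructed later for homogeneity work.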

1.10 Network management

All the factors affecting data quality described in section 1.6 are the subject of network management. In particular, network management must include corrective action in response to the network performance revealed by quality-control procedures and performance monitoring. Networks are defined in WMO (2003), and guidance on network management in general terms is given in WMO (1989), including the structure and functions of a network management unit. Network management practices vary widely according to locally established administrative arrangements.

It is highly desirable to identify a particular person or office as the network manager to whom operational responsibility is assigned for the impact of the various factors on data quality. Other specialists who may be responsible for the management and implementation of some of these factors must collaborate with the network manager and accept responsibility for their effect on data quality.

The manager should keep under review the procedures and outcomes associated with all of the factors affecting quality, as discussed in section 1.6, including the following considerations:
(a) The quality-control systems described in section 1.1 are operationally essential in any meteorological network and should receive priority attention by the data users and by the network management;
(b) Performance monitoring is commonly accepted as a network management function. It may be expected to indicate the need for action on the effects of exposure, calibration and maintenance. It also provides information on the effects of some of the other factors;
(c) Field station inspection, described below, is a network management function;
(d) Equipment maintenance may be a direct function of the network management unit. If not, there should be particularly effective collaboration between the network manager and the office responsible for the equipment;
(e) The administrative arrangements should enable the network manager to take, or arrange for, corrective action arising from quality-control procedures, performance monitoring, the inspection programme, or any other factor affecting quality. One of the most important other factors is observer training, as described in Part III, Chapter 5, and the network manager should be able to influence the content and conduct of courses, or the prescribed training requirements.

1.10.1 Inspections

Field stations should be inspected regularly, preferably by specially appointed, experienced inspectors. The objectives are to examine and maintain the work of the observers, the equipment and instrument exposure, and also to enhance the value of the data by recording the station history. At the same time, various administrative functions, which are particularly important for staffed stations, can be performed. The same principles apply to staffed stations, stations operated by part-time, voluntary or contract observers and, to a certain degree, to AWSs. Requirements for inspections are laid down in WMO (2003), and advice is given in WMO (1989). Inspection reports are part of the performance monitoring record.

It is highly advisable to have a systematic and exhaustive procedure fully documented in the form of inspection and maintenance handbooks, to be used by the visiting inspectors. Procedures should include the details of subsequent reporting and follow-up.

The inspector should attend, in particular, to the following aspects of station operations:
(a) Instrument performance: Instruments requiring calibration must be checked against a suitable standard. Atmospheric pressure is the prime case, as all field barometers can drift to some degree. Mechanical and electrical recording systems must be checked according to established procedures. More complex equipment such as AWSs and radars needs various physical and electrical checks. Anemometers and thermometer shelters are particularly prone to deterioration of various kinds, which may vitiate the data. The physical condition of all equipment should be examined for dirt, corrosion and so on;


(b) Observing methods: Bad practice can easily occur in observing procedures, and the work of all observers should be continually reviewed. Uniformity in methods of recording and coding is essential for the synoptic and climatological use of the data;
(c) Exposure: Any changes in the surroundings of the station must be documented and corrected in due course, if practicable. Relocation may be necessary.

It is most important that all changes identified during the inspection should be permanently recorded and dated so that a station history can be compiled for subsequent use for climate studies and other purposes. An optimum frequency of inspection visits cannot be generally specified, even for one particular type of station. It depends on the quality of the observers and equipment, the rate at which the equipment and exposure deteriorates, and changes in the station staff and facilities. An inspection interval of two years may be acceptable for a well-established station, and six months may be appropriate for automatic stations. Some kinds of stations will have special inspection requirements. Some equipment maintenance may be performed by the inspector or by the inspection team, depending on the skills available. In general, there should be an equipment maintenance programme, as is the case for inspections. This is not discussed here because the requirements and possible organizations are very diverse.

Inspections of manual stations also serve the purpose of maintaining the interest and enthusiasm of the observers. The inspector must be tactful, informative, enthusiastic and able to obtain willing cooperation. A prepared form for recording the inspection should be completed for every inspection. It should include a checklist on the condition and installation of the equipment and on the ability and competence of the observers. The inspection form may also be used for other administrative purposes, such as an inventory.


REFERENCES AND FURTHER READING

Deming, W.E., 1986: Out of the Crisis: Quality, Productivity and Competitive Position. University of Cambridge Press, Cambridge.

International Organization for Standardization, 2005: Quality management systems – Fundamentals and vocabulary. ISO 9000:2005, Geneva.

International Organization for Standardization, 2000: Quality management systems – Requirements. ISO 9001:2000, Geneva.

International Organization for Standardization, 2000: Quality management systems – Guidelines for performance improvements. ISO 9004:2000, Geneva.

International Organization for Standardization, 2002: Guidelines for quality and/or environmental management systems auditing. ISO 19011:2002, Geneva.

International Organization for Standardization/International Electrotechnical Commission, 2005: General requirements for the competence of testing and calibration laboratories. ISO/IEC 17025:2005, Geneva.

International Organization for Standardization/International Electrotechnical Commission, 2005: Information technology – Service management – Part 1: Specification. ISO/IEC 20000-1:2005, Geneva.

International Organization for Standardization/International Electrotechnical Commission, 2005: Information technology – Service management – Part 2: Code of practice. ISO/IEC 20000-2:2005, Geneva.

Kaplan, R.S. and D.P. Norton, 1996: The Balanced Scorecard: Translating Strategy into Action. Harvard Business School Press, Boston.

Miller, P.A. and L.L. Morone, 1993: Real-time quality control of hourly reports from the automated surface observing system. Preprints of the Eighth Symposium on Meteorological Observations and Instrumentation. American Meteorological Society, Boston, pp. 373–38.

World Meteorological Organization, 1988: Practical experience of the operation of quality evaluation programmes for automated surface observations both on land and over the sea (M. Field and J. Nash). Papers Presented at the WMO Technical Conference on Instruments and Methods of Observation (TECO-1988). Instruments and Observing Methods Report No. 33, WMO/TD-No. 222, Geneva, pp. 335–340.

World Meteorological Organization, 1989: Guide on the Global Observing System. WMO-No. 488, Geneva.

World Meteorological Organization, 1992: Manual on the Global Data-processing System. WMO-No. 485, Geneva.

World Meteorological Organization, 1993a: Guide on the Global Data-processing System. WMO-No. 305, Geneva.

World Meteorological Organization, 1993b: Historical Changes in Radiosonde Instruments and Practices (D.J. Gaffen). Instruments and Observing Methods Report No. 50, WMO/TD-No. 541, Geneva.

World Meteorological Organization, 1994: Homogeneity of data and the climate record (K.D. Hadeen and N.B. Guttman). Papers Presented at the WMO Technical Conference on Instruments and Methods of Observation (TECO-94). Instruments and Observing Methods Report No. 57, WMO/TD-No. 588, Geneva, pp. 3–11.

World Meteorological Organization, 2003: Manual on the Global Observing System. Volume I, WMO-No. 544, Geneva.

World Meteorological Organization, 2005a: Quality Management Framework (QMF). First WMO Technical Report (revised edition), WMO/TD-No. 1268, Geneva.

World Meteorological Organization, 2005b: Guidelines on Quality Management Procedures and Practices for Public Weather Services. PWS-11, WMO/TD-No. 1256, Geneva.

CHAPTER 2

SAMPLING METEOROLOGICAL VARIABLES

2.1 General

of a rapidly varying quantity, wind being the prime example. It is therefore convenient to begin with a discussion of time series, spectra and filters in sections 2.2 and 2.3. Section 2.4 gives practical advice on sampling. The discussion here, for the most part, assumes digital techniques and automatic processing.

It is important to recognize that an atmospheric variable is actually never sampled. It is only possible to come as close as possible to sampling the output of a sensor of that variable. The distinction is important because sensors do not create an exact analogue of the sensed variable. In general, sensors respond more slowly than the atmosphere changes, and they add noise. Sensors also do other, usually undesirable, things such as drift in calibration, respond non-linearly, interfere with the quantity that they are measuring, fail more often than intended, and so on, but this discussion will only be concerned with response and the addition of noise.

There are many textbooks available to give the necessary background for the design of sampling systems or the study of sampled data. See, for example, Bendat and Piersol (1986) or Otnes and Enochson (1978). Other useful texts include Pasquill and Smith (1983), Stearns and Hush (1990), Kulhánek (1976), and Jenkins and Watts (1968).

2.1.1 Definitions

The purpose of this chapter is to give an introduction to this complex subject, for non-experts who need enough knowledge to develop a general understanding of the issues and to acquire a perspective of the importance of the techniques. Atmospheric variables such as wind speed, temperature, pressure and humidity are functions of four dimensions – two horizontal, one vertical, and one temporal. They vary irregularly in all four, and the purpose of the study of sampling is to define practical measurement procedures to obtain representative observations with acceptable uncertainties in the estimations of mean and variability. Discussion of sampling in the horizontal dimensions includes the topic of areal representativeness, which is discussed in Part I, Chapter 1, in other chapters on measurements of particular quantities, and briefly below. It also includes the topics of network design, which is a special study related to numerical analysis, and of measurements of area-integrated quantities using radar and satellites; neither of these is discussed here. Sampling in the vertical is briefly discussed in Part I, Chapters 12 and 13 and Part II, Chapter 5. This chapter is therefore concerned only with sampling in time, except for some general comments about representativeness. The topic can be addressed at two levels as follows: (a) At an elementary level, the basic meteorological problem of obtaining a mean value of a fluctuating quantity representative of a stated sampling interval at a given time, using instrument systems with long response times compared with the fluctuations, can be discussed. At the simplest level, this involves consideration of the statistics of a set of measurements, and of the response time of instruments and electronic circuits; (b) The problem can be considered more precisely by making use of the theory of time-series analysis, the concept of the spectrum of fluctuations, and the behaviour of filters. These topics are necessary for the more complex problem of using relatively fastresponse instruments to obtain satisfactory measurements of the mean or the spectrum

For the purposes of this chapter the following definitions are used: Sampling is the process of obtaining a discrete sequence of measurements of a quantity. A sample is a single measurement, typically one of a series of spot readings of a sensor system. Note that this differs from the usual meaning in statistics of a set of numbers or measurements which is part of a population. An observation is the result of the sampling process, being the quantity reported or recorded (often also called a measurement). In the context of time-series analysis, an observation is derived from a number of samples.


The ISO definition of a measurement is a "set of operations having the object of determining the value of a quantity". In common usage, the term may be used to mean the value of either a sample or an observation.

The sampling time or observation period is the length of the time over which one observation is made, during which a number of individual samples are taken.

The sampling interval is the time between successive observations.

The sampling function or weighting function is, in its simplest definition, an algorithm for averaging or filtering the individual samples.

The sampling frequency is the frequency at which samples are taken. The sample spacing is the time between samples.

Smoothing is the process of attenuating the high frequency components of the spectrum without significantly affecting the lower frequencies. This is usually done to remove noise (random errors and fluctuations not relevant for the application).

A filter is a device for attenuating or selecting any chosen frequencies. Smoothing is performed by a low-pass filter, and the terms smoothing and filtering are often used interchangeably in this sense. However, there are also high-pass and band-pass filters. Filtering may be a property of the instrument, such as inertia, or it may be performed electronically or numerically.

2.1.2 Representativeness in time and space

To make observations representative, sensors are exposed at standard heights and at unobstructed locations and samples are processed to obtain mean values. In a few cases, sensors, for example transmissometers, inherently average spatially, and this contributes to the representativeness of the observation. The human observation of visibility is another example of this. However, the remaining discussion in this chapter will ignore spatial sampling and concentrate upon time sampling of measurements taken at a point.

A typical example of sampling and time averaging is the measurement of temperature each minute (the samples), the computation of a 10 min average (the sampling interval and the sampling function), and the transmission of this average (the observation) in a synoptic report every 3 h. When these observations are collected over a period from the same site, they themselves become samples in a new time sequence with a 3 h spacing. When collected from a large number of sites, these observations also become samples in a spatial sequence. In this sense, representative observations are also representative samples. In this chapter we discuss the initial observation.

2.1.3 The spectra of atmospheric quantities

By applying the mathematical operation known as the Fourier transform, an irregular function of time (or distance) can be reduced to its spectrum, which is the sum of a large number of sinusoids, each with its own amplitude, wavelength (or period or frequency) and phase. In broad contexts, these wavelengths (or frequencies) define “scales” or “scales of motion” of the atmosphere. The range of these scales is limited in the atmosphere. At one end of the spectrum, horizontal scales cannot exceed the circumference of the Earth or about 40 000 km. For meteorological purposes, vertical scales do not exceed a few tens of kilometres. In the time dimension, however, the longest scales are climatological and, in principle, unbounded, but in practice the longest period does not exceed the length of records. At the short end, the viscous dissipation of turbulent energy into heat sets a lower bound. Close to the surface of the Earth, this bound is at a wavelength of a few centimetres and increases with height to a few metres in the stratosphere. In the time dimension, these wavelengths correspond to frequencies of tens of hertz. It is correct to say that atmospheric variables are bandwidth limited.

Sampled observations are made at a limited rate and for a limited time interval over a limited area. In practice, observations should be designed to be sufficiently frequent to be representative of the unsampled parts of the (continuous) variable, and are often taken as being representative of a longer time interval and larger area. The user of an observation expects it to be representative, or typical, of an area and time, and of an interval of time. This area, for example, may be “the airport” or that area within a radius of several kilometres and within easy view of a human observer. The time is the time at which the report was made or the message transmitted, and the interval is an agreed quantity, often 1, 2 or 10 min.
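The typical case discussed in this subsection, one-minute samples combined into a 10 min average that is then reported as the observation, can be sketched as follows. The function name and the handling of missing samples (a two-thirds completeness requirement) are illustrative assumptions for this example, not prescribed rules.

# Illustrative sketch: form a 10 min mean observation from 1 min samples.
# The completeness requirement (at least two thirds of the samples present)
# is an assumption for the example, not a prescribed rule.

def ten_minute_observation(samples, required_fraction=2 / 3):
    """Average a list of 1 min samples (None = missing) into one observation."""
    valid = [s for s in samples if s is not None]
    if len(valid) < required_fraction * len(samples):
        return None  # too many missing samples to report an observation
    return sum(valid) / len(valid)

# Ten 1 min temperature samples (deg C), one of them missing
one_minute_samples = [15.2, 15.3, 15.1, None, 15.4, 15.6, 15.5, 15.4, 15.3, 15.2]
observation = ten_minute_observation(one_minute_samples)
print(f"10 min mean temperature: {observation:.2f} C")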



Figure 2.1. A typical spectrum of a meteorological quantity

Figure 2.1 is a schematic representation of a spectrum of a meteorological quantity such as wind, notionally measured at a particular station and time. The ordinate, commonly called energy or spectral density, is related to the variance of the fluctuations of wind at each frequency n. The spectrum in Figure 2.1 has a minimum of energy at the mesoscale around one cycle per hour, between peaks in the synoptic scale around one cycle per four days, and in the microscale around one cycle per minute. The smallest wavelengths are a few centimetres and the largest frequencies are tens of hertz.

2.2 Time series, power spectra and filters

This section is a layperson's introduction to the concepts of time-series analysis which are the basis for good practice in sampling. In the context of this Guide, they are particularly important for the measurement of wind, but the same problems arise for temperature, pressure and other quantities. They became important for routine meteorological measurements when automatic measurements were introduced, because frequent fast sampling then became possible. Serious errors can occur in the estimates of the mean, the extremes and the spectrum if systems are not designed correctly.

Although measurements of spectra are non-routine, they have many applications. The spectrum of wind is important in engineering, atmospheric dispersion, diffusion and dynamics. The concepts discussed here are also used for quantitative analysis of satellite data (in the horizontal space dimension) and in climatology and micrometeorology.

In summary, the argument is as follows:
(a) An optimum sampling rate can be assessed from consideration of the variability of the quantity being measured. Estimates of the mean and other statistics of the observations will have smaller uncertainties with higher sampling frequencies, namely, larger samples;
(b) The Nyquist theorem states that a continuous fluctuating quantity can be precisely determined by a series of equispaced samples if they are sufficiently close together;
(c) If the sampling frequency is too low, fluctuations at the higher unsampled frequencies (above the Nyquist frequency, defined in section 2.2.1) will affect the estimate of the mean value. They will also affect the computation of the lower frequencies, and


the measured spectrum will be incorrect. This is known as aliasing. It can cause serious errors if it is not understood and allowed for in the system design;
(d) Aliasing may be avoided by using a high sampling frequency or by filtering so that a lower, more convenient sampling frequency can be used;
(e) Filters may be digital or analogue. A sensor with a suitably long response time acts as a filter.
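The aliasing described in item (c) can be demonstrated numerically: a sinusoid at a frequency above the Nyquist frequency, sampled too slowly, is indistinguishable from a lower-frequency one. The sketch below uses NumPy and is illustrative only; the chosen frequencies and sampling interval are arbitrary assumptions for the demonstration.

# Illustrative demonstration of aliasing: a 0.8 Hz signal sampled at 1 Hz
# (Nyquist frequency 0.5 Hz) appears near the alias frequency |0.8 - 1.0| = 0.2 Hz.
import numpy as np

sample_interval = 1.0                    # s, so the Nyquist frequency is 0.5 Hz
true_frequency = 0.8                     # Hz, above the Nyquist frequency
n_samples = 256

t = np.arange(n_samples) * sample_interval
samples = np.sin(2 * np.pi * true_frequency * t)

spectrum = np.abs(np.fft.rfft(samples)) ** 2
frequencies = np.fft.rfftfreq(n_samples, d=sample_interval)

apparent = frequencies[np.argmax(spectrum)]
print(f"energy appears near {apparent:.3f} Hz instead of {true_frequency} Hz")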

A full understanding of sampling involves knowledge of power spectra, the Nyquist theorem, filtering and instrument response. This is a highly specialized subject, requiring understanding of the characteristics of the sensors used, the way the output of the sensors is conditioned, processed and logged, the physical properties of the elements being measured, and the purpose to which the analysed data are to be put. This, in turn, may require expertise in the physics of the instruments, the theory of electronic or other systems used in conditioning and logging processes, mathematics, statistics and the meteorology of the phenomena, all of which are well beyond the scope of this chapter. However, it is possible for a non-expert to understand the principles of good practice in measuring means and extremes, and to appreciate the problems associated with measurements of spectra.

2.2.1 Time-series analysis

It is necessary to consider signals as being either in the time or the frequency domain. The fundamental idea behind spectral analysis is the concept of Fourier transforms. A function, f(t), defined between t = 0 and t = τ can be transformed into the sum of a set of sinusoidal functions:

f(t) = \sum_{j=0}^{\infty} [A_j \sin(j\omega t) + B_j \cos(j\omega t)]                (2.1)

where ω = 2π/τ. The right-hand side of the equation is a Fourier series. Aj and Bj are the amplitudes of the contributions of the components at frequencies nj = jω. This is the basic transformation between the time and frequency domains. The Fourier coefficients Aj and Bj relate directly to the frequency jω and can be associated with the spectral contributions to f(t) at these frequencies. If the frequency response of an instrument is known – that is, the way in which it amplifies or attenuates certain frequencies – and if it is also known how these frequencies contribute to the original signal, the effect of the frequency response on the output signal can be calculated. The contribution of each frequency is characterized by two parameters. These can be most conveniently taken as the amplitude and phase of the frequency component. Thus, if equation 2.1 is expressed in its alternative form:

f(t) = \sum_{j=0}^{\infty} \alpha_j \sin(j\omega t + \phi_j)                (2.2)

the amplitude and phase associated with each spectral contribution are αj and φj. Both can be affected in sampling and processing.

So far, it has been assumed that the function f(t) is known continuously throughout its range t = 0 to t = τ. In fact, in most examples this is not the case; the meteorological variable is measured at discrete points in a time series, which is a series of N samples equally spaced Δt apart during a specified period τ = (N–1)Δt. The samples are assumed to be taken instantaneously, an assumption which is strictly not true, as all measuring devices require some time to determine the value they are measuring. In most cases, this is short compared with the sample spacing Δt. Even if it is not, the response time of the measuring system can be accommodated in the analysis, although that will not be addressed here.

When considering the data that would be obtained by sampling a sinusoidal function at times Δt apart, it can be seen that the highest frequency that can be detected is 1/(2Δt), and that in fact any higher frequency sinusoid that may be present in the time series is represented in the data as having a lower frequency. The frequency 1/(2Δt) is called the Nyquist frequency, designated here as ny. The Nyquist frequency is sometimes called the folding frequency. This terminology comes from consideration of aliasing of the data. The concept is shown schematically in Figure 2.2. When a spectral analysis of a time series is made, because of the discrete nature of the data, the contribution to the estimate at frequency n also contains contributions from higher frequencies, namely from 2jny ± n (j = 1 to ∞). One way of visualizing this is to consider the frequency domain as if it were folded, in a concertina-like way, at n = 0 and n = ny and so on in steps of ny. The spectral estimate at each frequency in the range is the sum of all the contributions of those higher frequencies that overlie it.

The practical effects of aliasing are discussed in section 2.4.2. It is potentially a serious problem and should be considered when designing instrument systems. It can be avoided by minimizing, or reducing to zero, the strength of the signal at frequencies above ny. There are a couple of ways of achieving this. First, the system can contain a low-pass filter that



Figure 2.2. A schematic illustration of aliasing of a spectrum computed from a stationary time series. The spectrum can be calculated only over the frequency range zero to the Nyquist frequency ny. The true values of the energies at higher frequencies are shown by the sectors marked a, b and c. These are "folded" back to the n = 0 to ny sector, as shown by the broken lines (a), (b), (c). The computed spectrum, shown by the bold broken line (S), includes the sum of these.

attenuates contributions at frequencies higher than ny before the signal is digitized. The only disadvantage of this approach is that the timing and magnitude of rapid changes will not be recorded well, or even at all. The second approach is to have Δt small enough so that the contributions above the Nyquist frequency are insignificant. This is possible because the spectra of most meteorological variables fall off very rapidly at very high frequencies. This second approach will, however, not always be practicable, as in the example of three-hourly temperature measurements, where, if Δt is of the order of hours, small-scale fluctuations, of the order of minutes or seconds, may have relatively large spectral ordinates and alias strongly. In this case, the first method may be appropriate.

2.2.2 Measurement of spectra

The spectral density, at least as it is estimated from a time series, is defined as:

S(n_j) = (A_j^2 + B_j^2) / n_y = \alpha_j^2 / n_y                (2.3)

It will be noted that phase is not relevant in this case.

The spectrum of a fluctuating quantity can be measured in a number of ways. In electrical engineering it was often determined in the past by passing the signal through band-pass filters and by measuring the power output. This was then related to the power of the central frequency of the filter.

There are a number of ways of approaching the numerical spectral analysis of a time series. The most obvious is a direct Fourier transform of the time series. In this case, as the series is only of finite length, there will be only a finite number of frequency components in the transformation. If there are N terms in the time series, there will be N/2 frequencies resulting from this analysis. A direct calculation is very laborious, and other methods have been developed. The first development was by Blackman and Tukey (1958), who related the auto-correlation function to estimates of various


spectral functions. (The auto-correlation function r(t) is the correlation coefficient calculated between terms in the time series separated by a time interval t.) This was appropriate for the low-powered computing facilities of the 1950s and 1960s, but it has now been generally superseded by the so-called fast Fourier transform (FFT), which takes advantage of the general properties of a digital computer to greatly accelerate the calculations. The main limitation of the method is that the time series must contain 2^k terms, where k is an integer. In general, this is not a serious problem, as in most instances there are sufficient data to conveniently organize the series to such a length. Alternatively, some FFT computer programs can use an arbitrary number of terms and add synthetic data to make them up to 2^k. As the time series is of finite duration (N terms), it represents only a sample of the signal of interest. Thus, the Fourier coefficients are only an estimate of the true, or population, value. To improve reliability, it is common practice to average a number of terms each side of a particular frequency and to assign this average to the value of that frequency. The confidence interval of the estimate is thereby narrowed. As a rule of thumb, 30 degrees of freedom is suggested as a satisfactory number for practical purposes. Therefore, as each estimate made during the

Fourier transform has 2 degrees of freedom (associated with the coefficients of the sine and cosine terms), about 15 terms are usually averaged. Note that 16 is a better number if an FFT approach is used, as this is 2^4 and there are then exactly 2^(k–4) smoothed spectral estimates; for example, if there are 1 024 terms in the time series, there will be 512 estimates of the As and Bs, and 64 smoothed estimates.

Increasingly, the use of the above analyses is an integral part of meteorological systems and relevant not only to the analysis of data. The exact form of spectra encountered in meteorology can show a wide range of shapes. As can be imagined, the contributions can range from the lowest frequencies associated with climate change, through annual and seasonal contributions, through synoptic events with periods of days, to diurnal and semi-diurnal contributions, local mesoscale events, and down to turbulence and molecular variations. For most meteorological applications, including synoptic analysis, the interest is in the range from minutes to seconds. The spectrum at these frequencies will typically decrease very rapidly with frequency. For periods of less than 1 min, the spectrum often takes values proportional to n^(–5/3). Thus, there is often relatively little contribution from frequencies greater than 1 Hz.
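A minimal sketch of the FFT-based estimation described above is given below: the raw spectral estimates are computed with a 2^k-point FFT and then smoothed by averaging blocks of 16 adjacent estimates, as suggested in the text. NumPy is assumed to be available; the normalization, the synthetic input series and the function name are illustrative choices of this sketch, not a prescribed procedure.

# Illustrative FFT spectral estimation, smoothing blocks of 16 raw estimates
# into one smoothed estimate (see the rule of thumb in the text above).
import numpy as np

def smoothed_spectrum(series, sample_interval, block=16):
    """Return (frequencies, smoothed spectral density) for a 2**k-length series."""
    n = len(series)
    if n & (n - 1):
        raise ValueError("series length must be a power of two for this sketch")
    coeffs = np.fft.rfft(series - np.mean(series))
    raw = (np.abs(coeffs[1:]) ** 2) * 2.0 * sample_interval / n   # drop the mean term
    freqs = np.fft.rfftfreq(n, d=sample_interval)[1:]
    usable = (len(raw) // block) * block
    smoothed = raw[:usable].reshape(-1, block).mean(axis=1)
    centre_freqs = freqs[:usable].reshape(-1, block).mean(axis=1)
    return centre_freqs, smoothed

# Synthetic 1 024-term wind-speed-like series sampled every second
rng = np.random.default_rng(0)
t = np.arange(1024)
series = 5.0 + np.sin(2 * np.pi * t / 128.0) + 0.5 * rng.standard_normal(1024)
f, s = smoothed_spectrum(series, sample_interval=1.0)
print(f"peak of the smoothed spectrum is near {f[np.argmax(s)]:.4f} Hz")

The block averaging narrows the confidence interval of each estimate at the cost of frequency resolution, which is the trade-off described in the text.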


Figure 2.3. The response of a first order system to a step function. At time TI the system has reached 63 per cent of its final value.



Figure 2.4. The response of a second order system to a step function. pN is the natural period, related to k1 in equation 2.7, which, for a wind vane, depends on wind speed. The curves shown are for damping factors with values of 0.1 (very lightly damped), 0.7 (critically damped, optimum for most purposes) and 2.0 (heavily damped). The damping factor is related to k2 in equation 2.7.

One of the important properties of the spectrum is that:

\sum_{j=0}^{\infty} S(n_j) = \sigma^2                (2.4)

where σ² is the variance of the quantity being measured. It is often convenient, for analysis, to express the spectrum in continuous form, so that equation 2.4 becomes:

\int_0^{\infty} S(n)\,dn = \sigma^2                (2.5)

It can be seen from equations 2.4 and 2.5 that changes caused to the spectrum, say by the instrument system, will alter the value of σ² and hence the statistical properties of the output relative to the input. This can be an important consideration in instrument design and data analysis. Note also that the left-hand side of equation 2.5 is the area under the curve in Figure 2.2. That area, and therefore the variance, is not changed by aliasing if the time series is stationary, that is, if its spectrum does not change from time to time.

2.2.3 Instrument system response

Sensors, and the electronic circuits that may be used with them comprising an instrument system, have response times and filtering characteristics that affect the observations. No meteorological instrument system, or any instrumental system for that matter, precisely follows the quantity it is measuring. There is, in general, no simple way of describing the response of a system, although there are some reasonable approximations to them. The simplest can be classified as first and second order responses. This refers to the order of the differential equation that is used to approximate the way the system responds. For a detailed examination of the concepts that follow, there are many references in physics textbooks and the literature (see MacCready and Jex, 1964).

In the first order system, such as a simple sensor or the simplest low-pass filter circuit, the rate of change of the value recorded by the instrument is directly proportional to the difference between the value registered by the instrument and the true value of the variable. Thus, if the true value at time t is s(t) and the value measured by the sensor



is s0(t), the system is described by the first order differential equation:

\frac{ds_0(t)}{dt} = \frac{s(t) - s_0(t)}{T_I}                (2.6)

where TI is a constant with the dimension of time, characteristic of the system. A first order system's response to a step function is proportional to exp(–t/TI), and TI is observable as the time taken, after a step change, for the system to reach 63 per cent of the final steady reading. Equation 2.6 is valid for many sensors, such as thermometers.

A cup anemometer is a first order instrument, with the special property that TI is not constant. It varies with wind speed. In fact, the parameter s0TI is called the distance constant, because it is nearly constant. As can be seen, in this case equation 2.6 is no longer a simple first order equation, as it is now non-linear and consequently presents considerable problems in its solution. A further problem is that TI also depends on whether the cups are speeding up or slowing down; that is, whether the right-hand side is positive or negative. This arises because the drag coefficient of a cup is lower if the air-flow is towards the front rather than towards the back.

The wind vane approximates a second order system because the acceleration of the vane toward the true wind direction is proportional to the displacement of the vane from the true direction. This is, of course, the classical description of an oscillator (for example, a pendulum). Vanes, both naturally and by design, are damped. This occurs because of a resistive force proportional to, and opposed to, its rate of change. Thus, the differential equation describing the vane's action is:

\frac{d^2\phi_0(t)}{dt^2} = k_1[\phi(t) - \phi_0(t)] - k_2\,\frac{d\phi_0(t)}{dt}                (2.7)

where φ is the true wind direction; φ0 is the direction of the wind vane; and k1 and k2 are constants. The solution to this is a damped oscillation at the natural frequency of the vane (determined by the constant k1). The damping, of course, is very important; it is controlled by the constant k2. If it is too small, the vane will simply oscillate at the natural frequency; if too great, the vane will not respond to changes in wind direction.

Figure 2.5. The weighting factors for a first order (exponential) weighting function and a box car weighting function. For the box car, Ta is Ts, the sampling time, and w = 1/N. For the first order function, Ta is TI, the time constant of the filter, and w(t) = (1/TI) exp(–t/TI).
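The first order behaviour of equation 2.6 can be checked numerically: integrating the equation for a step change in the input reproduces the 63 per cent rise at t = TI shown in Figure 2.3. The simple forward Euler integration, the time step and the function name used below are illustrative assumptions of this sketch.

# Illustrative numerical integration of the first order response (equation 2.6)
# to a unit step in the true value s(t): s = 0 for t < 0 and s = 1 afterwards.

def first_order_step_response(time_constant, dt=0.001, t_end=5.0):
    """Return lists (t, s0) for ds0/dt = (s - s0)/TI with a unit step input."""
    t, s0 = 0.0, 0.0
    times, outputs = [t], [s0]
    while t < t_end:
        s0 += dt * (1.0 - s0) / time_constant   # forward Euler step, s(t) = 1
        t += dt
        times.append(t)
        outputs.append(s0)
    return times, outputs

TI = 1.0   # time constant, in the same units as t
times, outputs = first_order_step_response(TI)
# The output at t = TI should be close to 1 - exp(-1) = 0.632
value_at_TI = outputs[int(TI / 0.001)]
print(f"response at t = TI: {value_at_TI:.3f} (theory 0.632)")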



Figure 2.6. Frequency response functions for a first order (exponential) weighting function and a box car weighting function. The frequency is normalized for the first order filter by TI, the time constant, and for the box car filter by Ts, the sampling time.


It is instructive to consider how these two systems respond to a step change in their input, as this is an example of the way in which the instruments respond in the real world. Equations 2.6 and 2.7 can be solved analytically for this input. The responses are shown in Figures 2.3 and 2.4. Note that in neither case does the system measure the real value of the element. Also, the choice of the values of the constants k1 and k2 can have a great effect on the outputs.

An important property of an instrument system is its frequency response function or transfer function H(n). This function gives the amount of the spectrum that is transmitted by the system. It can be defined as:

S(n)_out = H(n) S(n)_in                (2.8)

where the subscripts refer to the input and output spectra. Note that, by virtue of the relationship in equation 2.5, the variance of the output depends on H(n). H(n) defines the effect of the sensor as a filter, as discussed in the next section. The ways in which it can be calculated or measured are discussed in section 2.3.

2.2.4 Filters

This section discusses the properties of filters, with examples of the ways in which they can affect the data. Filtering is the processing of a time series (either continuous or discrete, namely, sampled) in such a way that the value assigned at a given time is weighted by the values that occurred at other times. In most cases, these times will be adjacent to the given time. For example, in a discrete time series of N samples numbered 0 to N, with value yi, the value of the filtered observation ȳi might be defined as:

\bar{y}_i = \sum_{j=-m}^{m} w_j\, y_{i+j}                (2.9)

Here there are 2m + 1 terms in the filter, numbered by the dummy variable j from –m to +m, and yi is centred at j = 0. Some data are rejected at the



Figure 2.7. Frequency response functions for a second order system, such as a wind vane. The frequency is normalized by nN, the natural frequency, which depends on wind speed. The curves shown are for damping factors with values 0.1 (very lightly damped), 0.7 (critically damped, optimum for most purposes) and 2.0 (heavily damped).

beginning and end of the sampling time. wj is commonly referred to as a weighting function and typically:

Σ(j = –m, ..., +m) wj = 1    (2.10)

so that at least the average value of the filtered series will have the same value as the original one. The above example uses digital filtering. Similar effects can be obtained using electronics (for example, through a resistor and capacitor circuit) or through the characteristics of the sensor (for example, as in the case of the anemometer, discussed earlier). Whether digital or analogue, a filter is characterized by H(n). If digital, H(n) can be calculated; if analogue, it can be obtained by the methods described in section 2.3.

For example, compare a first order system with a response time of TI, and a "box car" filter of length Ts applied to a discrete time series taken from a sensor with much faster response. The forms of these two filters are shown in Figure 2.5. In the first, it is as though the instrument has a memory which is strongest at the present instant, but falls off exponentially the further in the past the data goes. The box car filter has all weights of equal magnitude for the period Ts, and zero beyond that.

The frequency response functions, H(n), for these two filters are shown in Figure 2.6. In the figure, the frequencies have been scaled to show the similarity of the two response functions. It shows that an instrument with a response time of, say, 1 s has approximately the same effect on an input as a box car filter applied over 4 s. However, it should be noted that a box car filter, which is computed numerically, does not behave simply. It does not remove all the higher frequencies beyond the Nyquist frequency, and can only be used validly if the spectrum falls off rapidly above ny. Note that the box car filter shown in Figure 2.6 is an analytical solution for w as a continuous function; if the number of samples in the filter is small, the cut-off is less sharp and the unwanted higher frequency peaks are larger. See Acheson (1968) for practical advice on box car and exponential filtering, and a comparison of their effects.

A response function of a second order system is given in Figure 2.7, for a wind vane in this case, showing how damping acts as a band-pass filter.
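The weighting-function formalism of equations 2.9 and 2.10 is straightforward to realize in software. The following Python sketch (illustrative only; the synthetic signal, filter length and time constant are assumptions, not values prescribed by this Guide) applies a box car filter and a first order (exponential) filter to a noisy series and compares the variances before and after filtering, in the spirit of Figure 2.6, where a time constant TI has roughly the effect of a box car of length 4TI.

# Sketch (not from the Guide): box car and exponential filters applied to a noisy
# sample series, to illustrate equations 2.9 and 2.10 and Figure 2.6.
import math, random

random.seed(1)
dt = 1.0                     # sampling interval (s), illustrative
x = [10.0 + 2.0 * math.sin(2 * math.pi * i * dt / 120.0) + random.gauss(0.0, 0.5)
     for i in range(600)]    # slow variation plus high-frequency noise

def box_car(series, m):
    # running mean over 2m + 1 samples with equal weights w_j = 1/(2m + 1)
    return [sum(series[i - m:i + m + 1]) / (2 * m + 1)
            for i in range(m, len(series) - m)]

def exponential(series, time_constant, dt):
    # first order (exponential) filter, the digital analogue of a sensor with time constant TI
    alpha = 1.0 - math.exp(-dt / time_constant)
    out = [series[0]]
    for value in series[1:]:
        out.append(out[-1] + alpha * (value - out[-1]))
    return out

smoothed_box = box_car(x, m=4)                              # Ts = 9 s
smoothed_exp = exponential(x, time_constant=2.25, dt=dt)    # TI ~ Ts/4

def variance(series):
    mean = sum(series) / len(series)
    return sum((v - mean) ** 2 for v in series) / len(series)

print("variance: raw", round(variance(x), 3),
      "box car", round(variance(smoothed_box), 3),
      "exponential", round(variance(smoothed_exp), 3))

Running the sketch shows both filters reducing the variance of the series, with broadly similar results when Ts is about four times TI, as suggested by Figure 2.6.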


It can be seen that the processing of signals by systems can have profound effects on the data output and must be expertly done. Among the effects of filters is the way in which they can change the statistical information of the data. One of these was touched on earlier and illustrated in equations 2.5 and 2.8. Equation 2.5 shows how the integral of the spectrum over all frequencies gives the variance of the time series, while equation 2.8 shows how filtering, by virtue of the effect of the transfer function, will change the measured spectrum. Note that the variance is not always decreased by filtering. For example, in certain cases, for a second order system the transfer function will amplify parts of the spectrum and possibly increase the variance, as shown in Figure 2.7. To give a further example, if the distribution is Gaussian, the variance is a useful parameter. If it were decreased by filtering, a user of the data would underestimate the departure from the mean of events occurring with given probabilities or return periods. Also, the design of the digital filter can have unwanted or unexpected effects. If Figure 2.6 is examined it can be seen that the response function for the box car filter has a series of maxima at frequencies above where it first becomes zero. This will give the filtered data a small periodicity at these frequencies. In this case, the effect will be minimal as the maxima are small. However, for some filter designs quite significant maxima can be introduced. As a rule of thumb, the smaller the number of weights, the greater the problem. In some instances, periodicities have been claimed in data that only existed because the data had been filtered. An issue related to the concept of filters is the length of the sample. This can be illustrated by noting that, if the length of record is of duration T, contributions to the variability of the data at frequencies below 1/T will not be possible. It can be shown that a finite record length has the effect of a high-pass filter. As for the low-pass filters discussed above, a high-pass filter will also have an impact on the statistics of the output data.

2.3 Determination of system characteristics

The filtering characteristics of a sensor or an electronic circuit, or of the system that they comprise, must be known in order to determine the appropriate sampling frequency for the time series that the system produces. The procedure is to measure the transfer or response function H(n) in equation 2.8. The transfer function can be obtained in at least three ways: by direct measurement, by calculation and by estimation.

2.3.1 Direct measurement of response

Response can be directly measured using at least two methods. In the first method, a known change, such as a step function, is applied to the sensor or filter and its response time is measured; H(n) can then be calculated. In the second method, the output of the sensor is compared with that of another, much faster sensor. The first method is more commonly used than the second.

A simple example of how to determine the response of a sensor to a known input is to measure the distance constant of a rotating-cup or propeller anemometer. In this example, the known input is a step function: the anemometer is placed in a constant-velocity airstream, prevented from rotating, then released, and its output recorded. The time taken by the output to increase from zero to 63 per cent of its final or equilibrium speed in the airstream is the time "constant" (see section 2.2.3).

If another sensor, which responds much more rapidly than the one whose response is to be determined, is available, then good approximations of both the input and the output can be measured and compared. The easiest device with which to perform the comparison is probably a modern two-channel digital spectrum analyser: the output of the fast-response sensor is input to one channel, the output of the sensor being tested to the other channel, and the transfer function is displayed automatically. The transfer function is a direct description of the sensor as a filter.

If the device whose response is to be determined is an electronic circuit, generating a known or even truly random input is much easier than finding a much faster sensor. Again, a modern two-channel digital spectrum analyser is probably most convenient, but other electronic test instruments can be used.
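The time-constant determination described above lends itself to a simple numerical treatment. The sketch below is not a prescribed procedure; the sampling interval, final value and simulated time constant are assumptions chosen for illustration. It simply locates the 63 per cent point in a recorded step response.

# Sketch: estimating the time "constant" of a first-order sensor from a recorded
# step response, as in the anemometer release test above. The data are simulated.
import math

dt = 0.1                      # sampling interval of the recorder (s), assumed
true_time_constant = 1.8      # unknown in practice; used here only to simulate data
final_value = 10.0            # equilibrium output (for example, airstream speed in m/s)

# simulated recorder output after the sensor is released at t = 0
response = [final_value * (1.0 - math.exp(-i * dt / true_time_constant))
            for i in range(200)]

# the time constant is the time taken to reach 63 per cent of the final value
threshold = 0.63 * final_value
tau = next(i * dt for i, value in enumerate(response) if value >= threshold)
print("estimated time constant: about", tau, "s")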

2.3.2 Calculation of response

This is the approach described in section 2.2.3. If enough is known about the physics of a sensor/filter, the response to a large variety of inputs may be determined by either analytic or numerical solution. Both the response to specific inputs, such as a step function, and the transfer function can be calculated. If the sensor or circuit is linear (described by a linear differential equation), the transfer function is a complete description, in that it describes the amplitude and phase responses as a function of frequency, in other words, as a filter. Considering response as a function of frequency is not always convenient, but the transfer function has a Fourier transform counterpart, the impulse response function, which makes interpretation of response as a function of time much easier. This is illustrated in Figures 2.3 and 2.4, which represent response as a function of time. If obtainable, analytic solutions are preferable because they clearly show the dependence upon the various parameters.


2.3.3 Estimation of response

If the transfer functions of a transducer and of each following circuit are known, their product is the transfer function of the entire system. If, as is usually the case, the transfer functions are low-pass filters, the aggregate transfer function is a low-pass filter whose cut-off frequency is less than that of any of the individual filters. If one of the individual cut-off frequencies is much less than any of the others, then the cut-off frequency of the aggregate is only slightly smaller. Since the cut-off frequency of a low-pass filter is approximately the inverse of its time-constant, it follows that, if one of the individual time-constants is much larger than any of the others, the time-constant of the aggregate is only slightly larger.
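A minimal numerical check of this estimation argument is sketched below, assuming three cascaded first-order low-pass stages with illustrative time-constants; the aggregate cut-off frequency is found to lie just below that of the slowest stage.

# Sketch (assumption: every stage is a simple first-order low-pass filter).
import math

time_constants = [10.0, 1.0, 0.2]   # seconds: one stage dominates

def gain(n, tau):
    # amplitude response of a first-order low-pass filter at frequency n (Hz)
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * n * tau) ** 2)

def aggregate_gain(n):
    # product of the individual transfer functions (amplitudes)
    product = 1.0
    for tau in time_constants:
        product *= gain(n, tau)
    return product

# search upwards for the frequency at which the aggregate gain falls to 1/sqrt(2)
n = 0.0001
while aggregate_gain(n) > 1.0 / math.sqrt(2.0):
    n *= 1.001
print("aggregate cut-off frequency: about", round(n, 5), "Hz")
print("cut-off of the slowest stage alone:", round(1.0 / (2.0 * math.pi * 10.0), 5), "Hz")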

2.4 Sampling

2.4.1 Sampling techniques

Figure 2.8. An instrument system (signal chain: atmosphere → sensor/transducer → signal conditioning circuits → low-pass filter → sample-and-hold, driven by a clock → analogue-to-digital converter → processor → observation)

Figure 2.8 schematically illustrates a typical sensor and sampling circuit. When exposed to the atmosphere, some property of the transducer changes with an atmospheric variable such as temperature, pressure, wind speed or direction, or humidity, and converts that variable into a useful signal, usually electrical. Signal conditioning circuits commonly perform functions such as converting the transducer output to a voltage, amplifying, linearizing, offsetting and smoothing. The low-pass filter finalizes the sensor output for the sample-and-hold input. The sample-and-hold and the analogue-to-digital converter produce the samples from which the observation is computed in the processor.

It should be noted that the smoothing performed at the signal conditioning stage for engineering reasons, to remove spikes and to stabilize the electronics, is performed by a low-pass filter; it lengthens the response time of the sensor and removes high frequencies which may be of interest. Its effect should be explicitly understood by the designer and user, and its cut-off frequency should be as high as practicable.

So-called "smart sensors", those with microprocessors, may incorporate all the functions shown. The signal conditioning circuitry may not be found in all sensors, or may be combined with other circuitry.


In other cases, such as with a rotating-cup or propeller anemometer, it may be easy to speak only of a sensor, because it is awkward to distinguish a transducer. In the few cases for which a transducer or sensor output is a signal whose frequency varies with the atmospheric variable being measured, the sample-and-hold and the analogue-to-digital converter may be replaced by a counter. But these are not important details; the important element in the design is to ensure that the sequence of samples adequately represents the significant changes in the atmospheric variable being measured.

The first condition imposed upon the devices shown in Figure 2.8 is that the sensor must respond quickly enough to follow the atmospheric fluctuations which are to be described in the observation. If the observation is to be a 1, 2 or 10 min average, this is not a very demanding requirement. On the other hand, if the observation is to be that of a feature of turbulence, such as peak wind gust, care must be taken when selecting a sensor.

The second condition imposed upon the devices shown in Figure 2.8 is that the sample-and-hold and the analogue-to-digital converter must provide enough samples to make a good observation. The accuracy demanded of meteorological observations usually challenges the sensor, not the electronic sampling technology. However, the sensor and the sampling must be matched to avoid aliasing. If the sampling rate is limited for technical reasons, the sensor/filter system must be designed to remove the frequencies that cannot be represented.

If the sensor has a suitable response function, the low-pass filter may be omitted, included only as insurance, or included because it improves the quality of the signal input to the sample-and-hold. As examples, such a filter may be included to eliminate noise pick-up at the end of a long cable or to further smooth the sensor output. Clearly, this circuit must also respond quickly enough to follow the atmospheric fluctuations of interest.

2.4.2 Sampling rates

A common practice for routine observations is to take one spot reading of the sensor (such as a thermometer) and rely on its time-constant to provide an approximately correct sampling time. This amounts to using an exponential filter (Figure 2.6). Automatic weather stations commonly use faster sensors, and several spot readings must be taken and processed to obtain an average (box car filter) or other appropriately weighted mean.

A practical recommended scheme for sampling rates is as follows:1
(a) Samples taken to compute averages should be obtained at equispaced time intervals which:
    (i) Do not exceed the time-constant of the sensor; or
    (ii) Do not exceed the time-constant of an analogue low-pass filter following the linearized output of a fast-response sensor; or
    (iii) Are sufficient in number to ensure that the uncertainty of the average of the samples is reduced to an acceptable level, for example, smaller than the required accuracy of the average;
(b) Samples to be used in estimating extremes of fluctuations, such as wind gusts, should be taken at rates at least four times as often as specified in (i) or (ii) above.

For obtaining averages, somewhat faster sampling rates than (i) and (ii), such as twice per time-constant, are often advocated and practised.

Criteria (i) and (ii) derive from consideration of the Nyquist frequency. If the sample spacing t ≤ TI, the sampling frequency n ≥ 1/TI and nTI ≥ 1. It can be seen from the exponential curve in Figure 2.6 that this removes the higher frequencies and prevents aliasing. If t = TI, ny = 1/(2TI) and the data will be aliased only by the spectral energy at frequencies of nTI = 2 and beyond, that is, where the fluctuations have periods of less than 0.5TI.

Criteria (i) and (ii) are used for automatic sampling. The statistical criterion in (iii) is more applicable to the much lower sampling rates in manual observations. The uncertainty of the mean is inversely proportional to the square root of the number of observations, and its value can be determined from the statistics of the quantity.
1 As adopted by the Commission for Instruments and Methods of Observation at its tenth session (1989) through Recommendation 3 (CIMO-X).

For most meteorological and climatological applications, observations are required at intervals of 30 min to 24 hours, and each observation is made with a sampling time of the order of 1 to 10 min. Part I, Chapter 1, Annex 1.B gives a recent statement of requirements for these purposes.


Criterion (b) emphasizes the need for high sampling frequencies, or more precisely, small time-constants, to measure gusts. Recorded gusts are smoothed by the instrument response, and the recorded maximum will be averaged over several times the time-constant. The effect of aliasing on estimates of the mean can be seen very simply by considering what happens when the frequency of the wave being measured is the same as the sampling frequency, or a multiple thereof. The derived mean will depend on the timing of the sampling. A sample obtained once per day at a fixed time will not provide a good estimate of mean monthly temperature. For a slightly more complex illustration of aliasing, consider a time series of three-hourly observations of temperature using an ordinary thermometer. If temperature changes smoothly with time, as it usually does, the daily average computed from eight samples is acceptably stable. However, if a mesoscale event (a thunderstorm) has occurred which reduced the temperature by many degrees for 30 min, the computed average is wrong. The reliability of daily averages depends on the usual weakness of the spectrum in the mesoscale and higher frequencies. However, the occurrence of a higher-frequency event (the thunderstorm) aliases the data, affecting the computation of the mean, the standard deviation and other measures of dispersion, and the spectrum. The matter of sampling rate may be discussed also in terms of Figure 2.8. The argument in section 2.2.1 was that, for the measurement of spectra, the sampling rate, which determines the Nyquist frequency, should be chosen so that the spectrum of fluctuations above the Nyquist frequency is too weak to affect the computed spectrum. This is achieved if the sampling rate set by the clock in Figure 2.8 is at least twice the highest frequency of significant amplitude in the input signal to the sample-and-hold. The wording “highest frequency of significant amplitude” used above is vague. It is difficult to find a rigorous definition because signals are never truly bandwidth limited. However, it is not difficult to ensure that the amplitude of signal fluctuations decreases rapidly with increasing frequency, and that the root-mean-square amplitude of fluctuations above a given frequency is either small in comparison with the quantization noise of the analogue-to-digital converter, small in comparison with an acceptable error or noise level in the

samples, or contributes negligibly to total error or noise in the observation. Section 2.3 discussed the characteristics of sensors and circuits which can be chosen or adjusted to ensure that the amplitude of signal fluctuations decreases rapidly with increasing frequency. Most transducers, by virtue of their inability to respond to rapid (high-frequency) atmospheric fluctuations and their ability to replicate faithfully slow (low-frequency) changes, are also low-pass filters. By definition, low-pass filters limit the bandwidth and, by Nyquist's theorem, also limit the sampling rate that is necessary to reproduce the filter output accurately. For example, if there are real variations in the atmosphere with periods down to 100 ms, the Nyquist sampling frequency would be 1 per 50 ms, which is technically demanding. However, if they are seen through a sensor and filter which respond much more slowly, for example with a 10 s time-constant, the Nyquist sampling rate would be 1 sample per 5 s, which is much easier and cheaper, and preferable if measurements of the high frequencies are not required.
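The dependence of a computed mean on the timing of the samples, discussed earlier in this section for once-per-day sampling, can be demonstrated with a few lines of code. In the sketch below (all values are assumed, purely for illustration) a diurnal temperature wave sampled once per day at a fixed hour yields a biased mean, whereas three-hourly sampling recovers the true mean.

# Sketch demonstrating aliasing of the mean: a fluctuation whose period equals the
# sampling interval contributes a spurious offset to the computed average.
import math

period = 24.0            # hours: a diurnal wave
amplitude = 5.0          # degrees Celsius
mean_true = 15.0

def temperature(t_hours):
    return mean_true + amplitude * math.sin(2.0 * math.pi * t_hours / period)

# sample once per day (sampling frequency equal to the wave frequency) at 14:00
samples_daily = [temperature(day * 24.0 + 14.0) for day in range(30)]
# sample every 3 hours, well above the Nyquist limit for the diurnal wave
samples_3h = [temperature(3.0 * k) for k in range(30 * 8)]

print("true mean:", mean_true)
print("mean of once-daily 14:00 samples:", round(sum(samples_daily) / len(samples_daily), 2))
print("mean of 3-hourly samples:", round(sum(samples_3h) / len(samples_3h), 2))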

2.4.3 Sampling rate and quality control

Many data quality control techniques of use in automatic weather stations depend upon the temporal consistency, or persistence, of the data for their effectiveness. As a very simple example, consider two hypothetical quality-control algorithms for pressure measurements at automatic weather stations. Samples are taken every 10 s, and 1 min averages are computed each minute. It is assumed that atmospheric pressure only rarely, if ever, changes at a rate exceeding 1 hPa per minute.

The first algorithm rejects the average if it differs from the previous one by more than 1 hPa. This would not make good use of the available data: it allows a single sample with as much as a 6 hPa error to pass undetected and to introduce a 1 hPa error in an observation.

The second algorithm rejects a sample if it differs from the previous one by more than 1 hPa. In this case, an average contains no error larger than about 0.16 (1/6) hPa. In fact, if the assumption is correct that atmospheric pressure only rarely changes at a rate exceeding 1 hPa per minute, the accept/reject criteria on adjacent samples could be tightened to 0.16 hPa and the error in the average could be reduced even more.

The point of the example is that data quality control procedures that depend upon temporal consistency (correlation) for their effectiveness are best applied


to data of high temporal resolution (sampling rate). At the high-frequency end of the spectrum in the sensor/filter output, correlation between adjacent samples increases with increasing sampling rate until the Nyquist frequency is reached, after which no further increase in correlation occurs.

Up to this point in the discussion, nothing has been said which would discourage using a sensor/filter with a time-constant as long as the averaging period required for the observation, with a single sample then taken as the observation. Although this would be minimal in its demands upon the digital subsystem, there is another consideration needed for effective data quality control. Observations can be grouped into three categories, as follows:

(a) Accurate (observations with errors less than or equal to a specified value);
(b) Inaccurate (observations with errors exceeding a specified value);
(c) Missing.

There are two reasons for data quality control, namely, to minimize the number of inaccurate observations and to minimize the number of missing observations. Both purposes are served by ensuring that each observation is computed from a reasonably large number of data quality-controlled samples. In this way, samples with large spurious errors can be isolated and excluded, and the computation can still proceed, uncontaminated by that sample.
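The two hypothetical pressure quality-control algorithms described above can be written down directly. The sketch below (the sample values and the 6 hPa spike are invented for illustration) shows that checking only the minute averages lets a large single-sample error slip through, whereas checking adjacent 10 s samples isolates and excludes it.

# Sketch of the two hypothetical quality-control algorithms of section 2.4.3.
samples = [1013.2, 1013.1, 1013.2, 1019.2, 1013.3, 1013.2]   # six 10 s samples, one 6 hPa spike
previous_average = 1013.2                                    # previous 1 min average, assumed

# Algorithm 1: accept all samples, compute the average, then check the average
# against the previous one-minute average.
average_all = sum(samples) / len(samples)
accepted = abs(average_all - previous_average) <= 1.0
print("algorithm 1 average:", round(average_all, 2), "accepted:", accepted)

# Algorithm 2: reject any sample differing from its predecessor by more than 1 hPa,
# then average the remaining samples.
kept = [samples[0]]
for current in samples[1:]:
    if abs(current - kept[-1]) <= 1.0:
        kept.append(current)
print("algorithm 2 average:", round(sum(kept) / len(kept), 2))

With these values the first algorithm accepts a 1 hPa error in the observation, while the second rejects the spurious sample and returns an essentially uncontaminated average.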


REFERENCES AND FURTHER READING

Acheson, D.T., 1968: An approximation to arithmetic averaging for meteorological variables. Journal of Applied Meteorology, Volume 7, pp. 548–553. Bendat, J.S. and A.G. Piersol, 1986: Random Data: Analysis and Measurement Procedures. Second edition, John Wiley and Sons, New York. Blackman, R.B. and J.W. Tukey, 1958: The Measurement of Power Spectra. Dover Publications, New York. Jenkins, G.M. and D.G. Watts, 1968: Spectral Analysis and its Applications. Holden-Day, San Francisco. Kulhánek, O., 1976: Introduction to Digital Filtering in Geophysics. Elsevier, Amsterdam.

MacCready, P.B. and H.R. Jex, 1964: Response characteristics and meteorological utilization of propeller and vane wind sensors. Journal of Applied Meteorology, Volume 3, Issue 2, pp. 182–193. Otnes, R.K. and L. Enochson, 1978: Applied Time Series Analysis. Volume 1: Basic techniques. John Wiley and Sons, New York. Pasquill, F. and F.B. Smith, 1983: Atmospheric Diffusion. Third edition. Ellis Horwood, Chichester. Stearns, S.D. and D.R. Hush, 1990: Digital Signal Analysis. Second edition. Prentice Hall, New Jersey.

CHAPTER 3

DATA REDUCTION

3.1

General

This chapter discusses in general terms the procedures for processing and/or converting data obtained directly from instruments into data suitable for meteorological users, in particular for exchange between countries. Formal regulations for the reduction of data to be exchanged internationally have been prescribed by WMO, and are laid down in WMO (2003). Part I, Chapter 1, contains some relevant advice and definitions.

3.1.1 Definitions

In the discussion of the instrumentation associated with the measurement of atmospheric variables, it has become useful to classify the observational data according to data levels. This scheme was introduced in connection with the data-processing system for the Global Atmospheric Research Programme, and is defined in WMO (1992; 2003).

Level I data, in general, are instrument readings expressed in appropriate physical units, and referred to with geographical coordinates. They require conversion to the normal meteorological variables (identified in Part I, Chapter 1). Level I data themselves are in many cases obtained from the processing of electrical signals such as voltages, referred to as raw data. Examples of these data are satellite radiances and water-vapour pressure.

The data recognized as meteorological variables are Level II data. They may be obtained directly from instruments (as is the case for many kinds of simple instruments) or derived from Level I data. For example, a sensor cannot measure visibility, which is a Level II quantity; instead, sensors measure the extinction coefficient, which is a Level I quantity.

Level III data are those contained in internally consistent data sets, generally in grid-point form. They are not within the scope of this Guide.

Data exchanged internationally are Level II or Level III data.

3.1.2 Meteorological requirements

Observing stations throughout the world routinely produce frequent observations in standard formats for exchanging high-quality information obtained by uniform observing techniques, despite the different types of sensors in use throughout the world, or even within nations. To accomplish this, very considerable resources have been devoted over very many years to standardize content, quality and format. As automated observation of the atmosphere becomes more prevalent, it becomes even more important to preserve this standardization and develop additional standards for the conversion of raw data into Level I data, and raw and Level I data into Level II data.

3.1.3 The data reduction process

The role of a transducer is to sense an atmospheric variable and convert it quantitatively into a useful signal. However, transducers may have secondary responses to the environment, such as temperature-dependent calibrations, and their outputs are subject to a variety of errors, such as drift and noise. After proper sampling by a data-acquisition system, the output signal must be scaled and linearized according to the total system calibration and then filtered or averaged. At this stage, or earlier, it becomes raw data. The data must then be converted to measurements of the physical quantities to which the sensor responds, which are Level I data or may be Level II data if no further conversion is necessary. For some applications, additional variables must be derived. At various stages in the process the data may be corrected for extraneous effects, such as exposure, and may be subjected to quality control.

Data from conventional and automatic weather stations (AWSs) must, therefore, be subjected to many operations before they can be used. The whole process is known as data reduction and consists of the execution of a number of functions, comprising some or all of the following:
(a) Transduction of atmospheric variables;
(b) Conditioning of transducer outputs;
(c) Data acquisition and sampling;
(d) Application of calibration information;
(e) Linearization of transducer outputs;

(f) Extraction of statistics, such as the average;
(g) Derivation of related variables;
(h) Application of corrections;
(i) Data quality control;
(j) Data recording and storage;
(k) Compilation of metadata;
(l) Formatting of messages;
(m) Checking message contents;
(n) Transmission of messages.

The order in which these functions are executed is only approximately sequential. Of course, the first and the last function listed above should always be performed first and last. Linearization may immediately follow or be inherent in the transducer, but it must precede the extraction of an average. Specific quality control and the application of corrections could take place at different levels of the data-reduction process. Depending on the application, stations can operate in a diminished capacity without incorporating all of these functions.

In the context of this Guide, the important functions in the data-reduction process are the selection of appropriate sampling procedures, the application of calibration information, linearization when required, filtering and/or averaging, the derivation of related variables, the application of corrections, quality control, and the compilation of metadata. These are the topics addressed in this chapter. More explicit information on quality management is given in Part III, Chapter 1, and on sampling, filtering and averaging in Part III, Chapter 2. Once reduced, the data must be made available through coding, transmission and receipt, display, and archiving, which are the topics of other WMO Manuals and Guides. An observing system is not complete unless it is connected to other systems that deliver the data to the users. The quality of the data is determined by the weakest link. At every stage, quality control must be applied.

Much of the existing technology and standardized manual techniques for data reduction can also be used by AWSs, which, however, make particular demands. AWSs include various sensors, standard computations for deriving elements of messages, and the message format itself. Not all sensors interface easily with automated equipment. Analytic expressions for computations embodied in tables must be recovered or discovered. The rules for encoding messages must be expressed in computer languages with degrees of precision, completeness and unambiguousness not demanded by natural language instructions prepared for human observers. Furthermore, some human functions, such as the identification of cloud types, cannot be automated using either current or foreseeable technologies.

Data acquisition and data-processing software for AWSs are discussed at some length in Part II, Chapter 1, to an extent which is sufficiently general for any application of electrical transducers in meteorology. Some general considerations and specific examples of the design of algorithms for synoptic AWSs are given in WMO (1987).

In processing meteorological data there is usually one correct procedure, algorithm or approach, and there may be many approximations ranging in validity from good to useless. Experience strongly suggests that the correct approach is usually the most efficient in the long term. It is direct, requires a minimum of qualifications, and, once implemented, needs no further attention. Accordingly, the subsequent paragraphs are largely limited to the single correct approach, as far as exact solutions exist, to the problem under consideration.
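As a purely illustrative sketch of part of this chain (functions (c), (d), (i) and (f) in the list above), the following Python fragment converts raw counts into a Level II value. The calibration coefficients, sample values and rejection threshold are invented and carry no WMO significance; a real AWS implementation is considerably richer.

# Sketch of a minimal data-reduction chain for a single variable.
raw_counts = [512, 515, 511, 514, 513, 600, 512]          # data acquisition and sampling

def apply_calibration(count):
    # linear calibration; offset and gain taken from a (hypothetical) calibration certificate
    return -40.0 + 0.125 * count                          # degrees Celsius

def quality_control(values, max_step=1.0):
    # reject samples that differ from their predecessor by more than max_step
    kept = [values[0]]
    for value in values[1:]:
        if abs(value - kept[-1]) <= max_step:
            kept.append(value)
    return kept

calibrated = [apply_calibration(c) for c in raw_counts]   # application of calibration information
controlled = quality_control(calibrated)                  # data quality control
average = sum(controlled) / len(controlled)               # extraction of statistics
print("Level II air temperature:", round(average, 2), "deg C")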

3.2

Sampling

See Part III, Chapter 2, for a full discussion of sampling. The following is a summary of the main outcomes.

It should be recognized that atmospheric variables fluctuate rapidly and randomly because of ever-present turbulence, and that transducer outputs are not faithful reproductions of atmospheric variables because of their imperfect dynamic characteristics, such as limited ability to respond to rapid changes. Transducers generally need equipment to amplify or protect their outputs and/or to convert one form of output to another, such as resistance to voltage. The circuitry used to accomplish this may also smooth or low-pass filter the signal. There is a cut-off frequency above which no significant fluctuations occur because none exist in the atmosphere and/or the transducer or signal conditioning circuitry has removed them.

An important design consideration is how often the transducer output should be sampled. The definitive answer is: at an equispaced rate at least twice the cut-off frequency of the transducer output signal. However, a simpler and equivalent rule usually suffices: the sampling interval should not exceed the largest of the time-constants of all the devices and


circuitry preceding the acquisition system. If the sampling rate is less than twice the cut-off frequency, unnecessary errors occur in the variance of the data and in all derived quantities and statistics. While these increases may be acceptable in particular cases, in others they are not. Proper sampling always ensures minimum variance.

Good design may call for incorporating a low-pass filter, with a time-constant about equal to the sampling interval of the data-acquisition system. It is also a precautionary measure to minimize the effects of noise, especially 50 or 60 Hz pick-up from power mains by cables connecting sensors to processors, and leakage through power supplies.

3.3 Application of calibration functions

The WMO regulations (WMO, 2003) prescribe that stations be equipped with properly calibrated instruments and that adequate observational and measuring techniques are followed to ensure that the measurements are accurate enough to meet the needs of the relevant meteorological disciplines. The conversion of raw data from instruments into the corresponding meteorological variables is achieved by means of calibration functions. The proper application of calibration functions and any other systematic corrections are most critical for obtaining data that meet expressed accuracy requirements.

The determination of calibration functions should be based on calibrations of all components of the measurement chain. In principle at least, and in practice for some meteorological quantities such as pressure, the calibration of field instruments should be traceable to an international standard instrument, through an unbroken chain of comparisons between the field instrument and some or all of a series of standard instruments, such as a travelling standard, a working standard, a reference standard and a national standard (see Part I, Chapter 1, for definitions).

A description of the calibration procedures and systematic corrections associated with each of the basic meteorological variables is contained in each of the respective chapters in Part I.

Field instruments must be calibrated regularly by an expert, with corresponding revisions to the calibration functions. It is not sufficient to rely on calibration data that is supplied along with the calibration equipment. The supplier's calibration equipment often bears an unknown relationship to the national standard, and, in any case, it must be expected that calibration will change during transport, storage and use. Calibration changes must be recorded in the station's metadata files.

3.4 Linearization

If the transducer output is not exactly proportional to the quantity being measured, the signal must be linearized, making use of the instrument's calibration. This must be carried out before the signal is filtered or averaged. The sequence of operations "average then linearize" produces different results from the sequence "linearize then average" when the signal is not constant throughout the averaging period.

Non-linearity may arise in the following three ways (WMO, 1987):
(a) Many transducers are inherently non-linear, namely, their output is not proportional to the measured atmospheric variable. A thermistor is a simple example;
(b) Although a sensor may incorporate linear transducers, the variables measured may not be linearly related to the atmospheric variable of interest. For example, the photodetector and shaft-angle transducer of a rotating beam ceilometer are linear devices, but the ceilometer output signal (backscattered light intensity as a function of angle) is non-linear in cloud height;
(c) The conversion from Level I to Level II may not be linear. For example, extinction coefficient, not visibility or transmittance, is the proper variable to average in order to produce estimates of average visibility.

In the first of these cases, a polynomial calibration function is often used. If so, it is highly desirable to have standardized sensors with uniform calibration coefficients to avoid the problems that arise when interchanging sensors in the field. In the other two cases, an analytic function which describes the behaviour of the transducer is usually appropriate.
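The point about case (c) can be illustrated numerically. The sketch below assumes the Koschmieder formula V ≈ 3.912/σ (a standard approximation for a 5 per cent contrast threshold, not stated in this chapter) for converting extinction coefficient σ into visibility V, and uses invented sample values. Averaging σ and converting once gives a markedly different result from averaging the individual visibilities.

# Sketch of why the choice of variable to average matters for non-linear conversions.
sigmas = [0.04, 0.04, 0.40, 0.04, 0.04]      # extinction coefficients (per km), one fog patch

def visibility(sigma):
    # assumed Koschmieder relation for a 5 per cent contrast threshold
    return 3.912 / sigma                      # km

mean_sigma = sum(sigmas) / len(sigmas)
visibility_from_mean_sigma = visibility(mean_sigma)                      # average sigma, then convert
mean_of_visibilities = sum(visibility(s) for s in sigmas) / len(sigmas)  # convert each, then average

print("visibility from mean extinction:", round(visibility_from_mean_sigma, 1), "km")
print("mean of individual visibilities:", round(mean_of_visibilities, 1), "km")

With these values the two procedures differ by tens of kilometres; the Guide's recommendation is to average the extinction coefficient, the linear (Level I) variable.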

3.5

Averaging

The natural small-scale variability of the atmosphere makes smoothing or averaging necessary for obtaining representative observations and


compatibility of data from different instruments. For international exchange and for many operational applications, the reported measurement must be representative of the previous 2 or 10 min for wind, and, by convention, of 1 to 10 min for other quantities. The 1 min practice arises in part from the fact that some conventional meteorological sensors have a response of the order of 1 min and a single reading is notionally a 1 min average or smoothed value. If the response time of the instrument is much faster, it is necessary to take samples and filter or average them. This is the topic of Part III, Chapter 2. See Part I, Chapter 1 (Annex 1.B), for the requirements of the averaging times typical of operational meteorological instrument systems.

Two types of averaging or smoothing are commonly used, namely, arithmetic and exponential. The arithmetic average conforms with the normal meaning of average and is readily implemented digitally; this is the box car filter described in Part III, Chapter 2. An exponential average is the output of the simplest low-pass filter, representing the simplest response of a sensor to atmospheric fluctuations, and it is more convenient to implement in analogue circuitry than the arithmetic average. When the time-constant of a simple filter is approximately half the sampling time over which an average is being calculated, the arithmetic and exponential smoothed values are practically indistinguishable (see Part III, Chapter 2, and also Acheson, 1968).

The outputs of fast-response sensors vary rapidly, thus necessitating high sampling rates for optimal (minimum uncertainty) averaging. To reduce the required sampling rate and still provide the optimal digital average, it could be possible to linearize the transducer output (where that is necessary), exponentially smooth it using analogue circuitry with time-constant tc, and then sample digitally at intervals tc. Many other types of elaborate filters, computed digitally, have been used for special applications.

Because averaging non-linear variables creates difficulties when the variables change during the averaging period, it is important to choose the appropriate linear variable to compute the average. The table below lists some specific examples of elements of a synoptic observation which are reported as averages, with the corresponding linear variable that should be used.

3.6

Related variables and statistics

Besides averaged data, extremes and other variables that are representative for specific periods must be determined, depending on the purpose of the observation. An example of this is wind gust measurements, for which higher sampling rates are necessary.

Also, other quantities have to be derived from the averaged data, such as mean sea-level pressure, visibility and dewpoint. At conventional manual stations, conversion tables are used. It is common practice to incorporate the tables into an AWS and to provide interpolation routines, or to incorporate the basic formulas or approximations of them. See the various chapters of Part I for the data conversion practices, and Part II, Chapter 1, for AWS practice.

Quantities for which data conversion is necessary when averages are being computed

Quantity to be reported          Quantity to be averaged
Wind speed and direction         Cartesian components
Dewpoint                         Absolute humidity
Visibility                       Extinction coefficient
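The first row of the table can be illustrated as follows; the four speed/direction pairs are invented. Averaging the Cartesian components and converting back gives a sensible mean direction near north, whereas a naive average of the reported directions does not.

# Sketch of vector (Cartesian-component) averaging of wind.
import math

records = [(5.0, 350.0), (4.0, 10.0), (6.0, 355.0), (5.0, 5.0)]   # (speed m/s, direction deg)

u_sum = v_sum = 0.0
for speed, direction in records:
    radians = math.radians(direction)
    u_sum += speed * math.sin(radians)     # first Cartesian component
    v_sum += speed * math.cos(radians)     # second Cartesian component

u_mean = u_sum / len(records)
v_mean = v_sum / len(records)
mean_speed = math.hypot(u_mean, v_mean)
mean_direction = math.degrees(math.atan2(u_mean, v_mean)) % 360.0

naive_direction = sum(d for _, d in records) / len(records)       # misleading near north
print("vector mean:", round(mean_speed, 2), "m/s,", round(mean_direction, 1), "deg")
print("naive mean of directions:", naive_direction, "deg")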

3.7

Corrections

The measurements of many meteorological quantities have corrections applied to them either as raw data or at the Level I or Level II stage to correct for various effects. These corrections are described in the chapters on the various meteorological variables in Part I. Corrections to raw data, for zero or index error, or for temperature, gravity and the like, are derived from the calibration and characterization of the instrument. Other types of corrections or adjustments to the raw or higher level data include smoothing, such as that applied to cloud height measurements and upper-air profiles, and corrections for exposure such as those sometimes applied to temperature, wind and precipitation observations.

The algorithms for these types of corrections may, in some cases, be based on studies that are not entirely definitive; therefore, while they no doubt improve the accuracy of the data, the possibility remains that different algorithms may be derived in the future. In such a case, it may become necessary to recover the original uncorrected data. It is, therefore, advisable for the algorithms to be well documented.


3.8

Quality management

Quality management is discussed in Part III, Chapter 1. Formal requirements are specified by WMO (2003) and general procedures are discussed in WMO (1989). Quality-control procedures should be performed at each stage of the conversion of raw sensor output into meteorological variables. This includes the processes involved in obtaining the data, as well as reducing them to Level II data.

During the process of obtaining data, the quality control should seek to eliminate both systematic and random measurement errors, errors due to departure from technical standards, errors due to unsatisfactory exposure of instruments, and subjective errors on the part of the observer.

Quality control during the reduction and conversion of data should seek to eliminate errors resulting from the conversion techniques used or the computational procedures involved. In order to improve the quality of data obtained at high sampling rates, which may generate increased

noise, filtering and smoothing techniques are employed. These are described earlier in this chapter, as well as in Part III, Chapter 2.

3.9

Compiling metadata

Metadata are discussed in Part I, Chapter 1, in Part III, Chapter 1, and in other chapters concerning the various meteorological quantities. Metadata must be kept so that:
(a) Original data can be recovered to be re-worked, if necessary (with different filtering or corrections, for instance);
(b) The user can readily discover the quality of the data and the circumstances under which it was obtained (such as exposure);
(c) Potential users can discover the existence of the data.

The procedures used in all the data-reduction functions described above must therefore be recorded, generically for each type of data, and individually for each station and observation type.


REFERENCES AND FURTHER READING

Acheson, D.T., 1968: An approximation to arithmetic averaging for meteorological variables. Journal of Applied Meteorology, Volume 7, Issue 4, pp. 548–553. World Meteorological Organization, 1987: Some General Considerations and Specific Examples in the Design of Algorithms for Synoptic Automatic Weather Stations (D.T. Acheson). Instruments and Observing Methods Report No. 19, WMO/TD-No. 230, Geneva.

World Meteorological Organization, 1992: Manual on the Global Data-Processing and Forecasting System. Volume I, WMO‑No. 485, Geneva. World Meteorological Organization, 1989: Guide on the Global Observing System. WMO‑No. 488, Geneva. World Meteorological Organization, 2003: Manual on the Global Observing System. Volume I: Global aspects. WMO‑No. 544, Geneva.

CHAPTER 4

TESTING, CALIBRATION AND INTERCOMPARISON

4.1

General

One of the purposes of WMO, set forth in Article 2 (c) of the WMO Convention, is "to promote standardization of meteorological and related observations and to ensure the uniform publication of observations and statistics". For this purpose, sets of standard procedures and recommended practices have been developed, and their essence is contained in this Guide.

Valid observational data can be obtained only when a comprehensive quality assurance programme is applied to the instruments and the network. Calibration and testing are inherent elements of a quality assurance programme. Other elements include clear definition of requirements, instrument selection deliberately based on the requirements, siting criteria, maintenance and logistics. These other elements must be considered when developing calibration and test plans.

On an international scale, the extension of quality assurance programmes to include intercomparisons is important for the establishment of compatible data sets. Because of the importance of standardization across national boundaries, several WMO regional associations have set up Regional Instrument Centres to organize and assist with standardization and calibration activities. Their terms of reference and locations are given in Part I, Chapter 1, Annex 1.A. National and international standards and guidelines exist for many aspects of testing and evaluation, and should be used where appropriate. Some of them are referred to in this chapter.

4.1.1 Definitions

Definitions of terms in metrology are given by the International Organization for Standardization (ISO, 1993). Many of them are reproduced in Part I, Chapter 1, and some are repeated here for convenience. They are not universally used and differ in some respects from terminology commonly used in meteorological practice. However, the ISO definitions are recommended for use in meteorology.

The ISO document is a joint production with the International Bureau of Weights and Measures, the International Organization of Legal Metrology, the International Electrotechnical Commission, and other similar international bodies. The ISO terminology differs from common usage in the following respects in particular:

Accuracy (of a measurement) is the closeness of the agreement between the result of a measurement and its true value, and it is a qualitative term. The accuracy of an instrument is the ability of the instrument to give responses close to the true value, and it also is a qualitative term. It is possible to refer to an instrument or a measurement as having a high accuracy, but the quantitative measure of the accuracy is the uncertainty.

Uncertainty is expressed as a measure of dispersion, such as a standard deviation or a confidence level.

The error of a measurement is the result minus the true value (the deviation has the other sign), and it is composed of the random and systematic errors (the term bias is commonly used for systematic error).

Repeatability is also expressed statistically and is the closeness of agreement of measurements taken under constant (defined) conditions. Reproducibility is the closeness of agreement under defined different conditions.

ISO does not define precision, and advises against the use of the term.
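The distinction between systematic and random error can be made concrete with a few lines of code. In the sketch below (the reference value and readings are invented), the mean of the deviations from the reference estimates the bias (systematic error), and their standard deviation characterizes the random error and provides one measure of uncertainty.

# Sketch: bias and random error from repeated comparisons against a reference.
import statistics

reference_value = 100.0      # value given by the standard, treated as the true value here
readings = [100.4, 100.1, 100.6, 100.3, 100.2, 100.5, 100.3, 100.4]

errors = [reading - reference_value for reading in readings]   # error = result minus true value
bias = statistics.mean(errors)                                 # systematic error
random_spread = statistics.stdev(errors)                       # dispersion of the random error
print("bias:", round(bias, 3))
print("standard deviation (one measure of uncertainty):", round(random_spread, 3))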
1 Recommended by the Commission for Instruments and Methods of Observation at its ninth session (1985) through Recommendation 9 (CIMO-IX).

4.1.2 Testing and calibration programmes

Before using atmospheric measurements taken with a particular sensor for meteorological purposes, the answers to a number of questions are needed, as follows:
(a) What is the sensor or system accuracy?
(b) What is the variability of measurements in a network containing such systems or sensors?
(c) What change, or bias, will there be in the data provided by the sensor or system if its siting location is changed?

(d) What change or bias will there be in the data if it replaces a different sensor or system measuring the same weather element(s)?

To answer these questions and to assure the validity and relevance of the measurements produced by a meteorological sensor or system, some combination of calibration, laboratory testing and functional testing is needed. Calibration and test programmes should be developed and standardized, based on the expected climatic variability, environmental and electromagnetic interference under which systems and sensors are expected to operate. For example, considered factors might include the expected range of temperature, humidity and wind speed; whether or not a sensor or system must operate in a marine environment, or in areas with blowing dust or sand; the expected variation in electrical voltage and phase, and signal and power line electrical transients; and the expected average and maximum electromagnetic interference. Meteorological Services may purchase calibration and test services from private laboratories and companies, or set up test organizations to provide those services. It is most important that at least two like sensors or systems be subjected to each test in any test programme. This allows for the determination of the expected variability in the sensor or system, and also facilitates detecting problems.

4.2 Testing

4.2.1 The purpose of testing

Sensors and systems are tested to develop information on their performance under specified conditions of use. Manufacturers typically test their sensors and systems and in some cases publish operational specifications based on their test results. However, it is extremely important for the user Meteorological Service to develop and carry out its own test programme, or to have access to an independent testing authority.

Testing can be broken down into environmental testing, electrical/electromagnetic interference testing and functional testing. A test programme may consist of one or more of these elements. In general, a test programme is designed to ensure that a sensor or system will meet its specified performance, maintenance and mean-time-between-failure requirements under all expected operating, storage and transportation conditions. Test programmes are also designed to develop information on the variability that can be expected in a network of like sensors, in functional reproducibility, and in the comparability of measurements between different sensors or systems.

Knowledge of both functional reproducibility and comparability is very important to climatology, where a single long-term database typically contains information from sensors and systems that through time use different sensors and technologies to measure the same meteorological variable. In fact, for practical applications, good operational comparability between instruments is a more valuable attribute than precise absolute calibration. This information is developed in functional testing.

Even when a sensor or system is delivered with a calibration report, environmental and possibly additional calibration testing should be performed. An example of this is a modern temperature measurement system, where at present the probe is likely to be a resistance temperature device. Typically, several resistance temperature devices are calibrated in a temperature bath by the manufacturer and a performance specification is provided based on the results of the calibration. However, the temperature system which produces the temperature value also includes power supplies and electronics, which can also be affected by temperature. Therefore, it is important to operate the electronics and the probe as a system through the temperature range during the calibration. It is good practice also to replace the probe with a resistor with a known temperature coefficient, which will produce a known temperature output, and to operate the electronics through the entire temperature range of interest to ensure proper temperature compensation of the system electronics.

Users should also have a programme for testing randomly selected production sensors and systems, even if pre-production units have been tested, because even seemingly minor changes in material, configurations or manufacturing processes may affect the operating characteristics of sensors and systems. The International Organization for Standardization has standards (ISO, 1989a, 1989b) which specify sampling plans and procedures for the inspection of lots of items.


4.2.2 Environmental testing

4.2.2.1 Definitions

The following definitions serve to introduce the qualities of an instrument system that should be the subject of operational testing:

Operational conditions: Those conditions or a set of conditions encountered or expected to be encountered during the time an item is performing its normal operational function in full compliance with its performance specification.

Withstanding conditions: Those conditions or a set of conditions outside the operational conditions which the instrument is expected to withstand. They may have only a small probability of occurrence during an item's lifetime. The item is not expected to perform its operational function when these withstanding conditions exist. The item is, however, expected to be able to survive these conditions and return to normal performance when the operational conditions return.

Outdoor environment: Those conditions or a set of conditions encountered or expected to be encountered during the time that an item is performing its normal operational function in an unsheltered, uncontrolled natural environment.

Indoor environment: Those conditions or a set of conditions encountered or expected to be encountered during the time that an item is energized and performing its normal operational function within an enclosed operational structure. Consideration is given to both the uncontrolled indoor environment and the artificially controlled indoor environment.

Transportation environment: Those conditions or a set of conditions encountered or expected to be encountered during the transportation portion of an item's life. Consideration is given to the major transportation modes – road, rail, ship and air transportation – and also to the complete range of environments encountered before and during transportation, and during the unloading phase. The item is normally housed in its packaging/shipping container during exposure to the transportation environment.

Storage environment: Those conditions or a set of conditions encountered or expected to be encountered during the time an item is in its non-operational storage mode. Consideration is given to all types of storage, from the open storage situation, in which an item is stored unprotected and outdoors, to the protected indoor storage situation. The item is normally housed in its packaging/shipping container during exposure to the storage environment.

The International Electrotechnical Commission also has standards (IEC, 1990) to classify environmental conditions which are more elaborate than the above. They define ranges of meteorological, physical and biological environments that may be encountered by products being transported, stored, installed and used, which are useful for equipment specification and for planning tests.

4.2.2.2 Environmental test programme

Environmental tests in the laboratory enable rapid testing over a wide range of conditions, and can accelerate certain effects such as those of a marine environment with high atmospheric salt loading. The advantage of environmental tests over field tests is that many tests can be accelerated in a well-equipped laboratory, and equipment may be tested over a wide range of climatic variability. Environmental testing is important; it can give insight into potential problems and generate confidence to go ahead with field tests, but it cannot replace field testing.

An environmental test programme is usually designed around a subset of the following conditions: high temperature, low temperature, temperature shock, temperature cycling, humidity, wind, rain, freezing rain, dust, sunshine (insolation), low pressure, transportation vibration and transportation shock. The ranges, or test limits, of each test are determined by the expected environments (operational, withstanding, outdoor, indoor, transportation, storage) that are expected to be encountered.

The purpose of an environmental test programme document is to establish standard environmental test criteria and corresponding test procedures for the specification, procurement, design and testing of equipment. This document should be based on the expected environmental operating conditions and extremes. For example, the United States prepared its National Weather Service standard environmental criteria and test procedures (NWS, 1984), based on a study which surveyed and reported the expected operational and extreme ranges of the various weather elements in the United States operational area, and presented proposed test criteria (NWS, 1980). These criteria and procedures consist of three parts:

(a) Environmental test criteria and test limits for outdoor, indoor, and transportation/storage environments;
(b) Test procedures for evaluating equipment against the environmental test criteria;
(c) Rationale providing background information on the various environmental conditions to which equipment may be exposed, their potential effect(s) on the equipment, and the corresponding rationale for the recommended test criteria.

4.2.3 Electrical and electromagnetic interference testing

The prevalence of sensors and automated data collection and processing systems that contain electronic components necessitates, in many cases, the inclusion in an overall test programme of testing of performance in operational electrical environments and under electromagnetic interference. An electrical/electromagnetic interference test programme document should be prepared. The purpose of the document is to establish standard electrical/electromagnetic interference test criteria and corresponding test procedures, and to serve as a uniform guide in the specification of electrical/electromagnetic interference susceptibility requirements for the procurement and design of equipment.

The document should be based on a study that quantifies the expected power line and signal line transient levels and rise times caused by natural phenomena, such as thunderstorms. It should also include testing for expected power variations, both in voltage and phase. If the equipment is expected to operate in an airport environment, or another environment with possible electromagnetic radiation interference, this should also be quantified and included in the standard. A purpose of the programme may also be to ensure that the equipment is not an electromagnetic radiation generator. Particular attention should be paid to equipment containing a microprocessor and, therefore, a crystal clock, which is critical for timing functions.

4.2.4 Functional testing

Calibration and environmental testing provide a necessary but not sufficient basis for defining the operational characteristics of a sensor or system, because calibration and laboratory testing cannot completely define how the sensor or system will operate in the field. It is impossible to simulate the synergistic effects of all the changing weather elements on an instrument in all of its required operating environments.

Functional testing is simply testing in the outdoor and natural environment where instruments are expected to operate over a wide variety of meteorological conditions and climatic regimes, and, in the case of surface instruments, over ground surfaces of widely varying albedo. Functional testing is required to determine the adequacy of a sensor or system while it is exposed to wide variations in wind, precipitation, temperature, humidity, and direct, diffuse and reflected solar radiation. Functional testing becomes more important as newer technology sensors, such as those using electro-optic, piezoelectric and capacitive elements, are placed into operational use. The readings from these sensors may be affected by adventitious conditions such as insects, spiders and their webs, and the size distribution of particles in the atmosphere, all of which must be determined by functional tests.

For many applications, comparability must be tested in the field. This is done with side-by-side testing of like and different sensors or systems against a field reference standard. These concepts are presented in Hoehne (1971; 1972; 1977).

Functional testing may be planned and carried out by private laboratories or by the test department of the Meteorological Service or other user organization. For both the procurement and operation of equipment, the educational and skill level of the observers and technicians who will use the system must be considered. Use of the equipment by these staff members should be part of the test programme. The personnel who will install, use, maintain and repair the equipment should evaluate those portions of the sensor or system, including the adequacy of the instructions and manuals, that they will use in their job. Their skill level should also be considered when preparing procurement specifications.

4.3 4.3.1

CalibraTion

The purpose of calibration

Calibration and environmental testing provide a necessary but not sufficient basis for defining the operational characteristics of a sensor or system, because calibration and laboratory testing cannot completely define how the sensor or system will operate in the field. It is impossible to simulate the

Sensor or system calibration is the first step in defining data validity. In general, it involves comparison against a known standard to determine how closely instrument output matches the standard over the expected range of operation. Performing laboratory calibration carries the implicit assumption that the instrument’s characteristics are stable enough to

retain the calibration in the field. A calibration history over successive calibrations should provide confidence in the instrument's stability. Specifically, calibration is the set of operations that establish, under specified conditions, the relationship between the values indicated by a measuring instrument or measuring system and the corresponding known values of a measurand, namely the quantity to be measured. It should define a sensor/system's bias or average deviation from the standard against which it is calibrated, its random errors, the range over which the calibration is valid, and the existence of any thresholds or non-linear response regions. It should also define resolution and hysteresis. Hysteresis should be identified by cycling the sensor over its operating range during calibration. The result of a calibration is often expressed as a calibration factor or as a series of calibration factors in the form of a calibration table or calibration curve. The results of a calibration must be recorded in a document called a calibration certificate or a calibration report. The calibration certificate or report should define any bias that can then be removed through mechanical, electrical or software adjustment. The remaining random error is not repeatable and cannot be removed, but can be statistically defined through a sufficient number of measurement repetitions during calibration.
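As an illustration of how the quantities named above might be derived from raw calibration data, the following minimal Python sketch (not part of this Guide's procedures; the variable names and sample values are hypothetical) computes the bias, the standard deviation of the random error and a simple hysteresis estimate from paired readings taken while cycling a sensor up and then down over its range against a reference standard:

import statistics

# Hypothetical paired readings (reference value, sensor reading) taken while
# cycling the sensor upward and then downward over its operating range.
upscale   = [(10.0, 10.12), (20.0, 20.08), (30.0, 30.15)]
downscale = [(30.0, 30.05), (20.0, 19.96), (10.0, 10.02)]

deviations = [reading - ref for ref, reading in upscale + downscale]
bias = statistics.mean(deviations)            # average deviation from the standard
random_error = statistics.stdev(deviations)   # spread remaining about that average

# Hysteresis: mean difference between upscale and downscale readings at the same reference points.
hysteresis = statistics.mean(u - d for (_, u), (_, d) in zip(upscale, reversed(downscale)))

print(f"bias = {bias:+.3f}, random error (1 s.d.) = {random_error:.3f}, hysteresis = {hysteresis:+.3f}")

In practice the number of repetitions, the points over the operating range and the acceptance limits would be set by the calibration procedure for the instrument concerned.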

4.3.2 Standards

The calibration of instruments or measurement systems is customarily carried out by comparing them against one or more measurement standards. These standards are classified according to their metrological quality. Their definitions (ISO, 1993) are given in Part I, Chapter 1, and may be summarized as follows:

Primary standard: A standard which has the highest metrological qualities and whose value is accepted without reference to other standards.

Secondary standard: A standard whose value is assigned by comparison with a primary standard.

International standard: A standard recognized by an international agreement to serve internationally as the basis for assigning values to other standards of the quantity concerned.

National standard: A standard recognized by a national decision to serve, in a country, as the basis for assigning values to other standards.

Reference standard: A standard, generally of the highest metrological quality available at a given location or in a given organization, from which the measurements taken there are derived.

Working standard: A standard that is used routinely to calibrate or check measuring instruments.

Transfer standard: A standard used as an intermediary to compare standards.

Travelling standard: A standard, sometimes of special construction, intended for transport between different locations.

Primary standards reside within major international or national institutions. Secondary standards often reside in major calibration laboratories and are usually not suitable for field use. Working standards are usually laboratory instruments that have been calibrated against a secondary standard. Working standards that may be used in the field are known as transfer standards. Transfer standard instruments may also be used to compare instruments in a laboratory or in the field.

4.3.3 Traceability

Traceability is defined by ISO (1993) as:

"The property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties."

In meteorology, it is common practice for pressure measurements to be traceable through travelling standards, working standards and secondary standards to national or primary standards, and the accumulated uncertainties are therefore known (except for those that arise in the field, which have to be determined by field testing). Temperature measurements lend themselves to the same practice. The same principle must be applied to the measurement of any quantity for which measurements of known uncertainty are required.
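To make the idea of accumulated uncertainty concrete, the following sketch is an illustrative calculation only (the uncertainty values are invented, not taken from this Guide); it combines the stated standard uncertainties of each link in a hypothetical pressure traceability chain in quadrature, the usual treatment when the contributions are independent:

import math

# Hypothetical standard uncertainties (hPa) for each comparison in the chain:
# national standard -> secondary standard -> working standard -> travelling standard -> field barometer.
chain = [0.03, 0.05, 0.08, 0.10]

combined = math.sqrt(sum(u ** 2 for u in chain))
print(f"combined standard uncertainty of the field barometer: {combined:.2f} hPa")
# Field-related errors (exposure, drift between inspections) are not included here and
# have to be determined separately by field testing, as noted above.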

4.3.4 Calibration practices

The calibration of meteorological instruments is normally carried out in a laboratory where appropriate measurement standards and calibration devices are located. They may be national laboratories, private laboratories, or laboratories

established within the Meteorological Service or other user organization. A calibration laboratory is responsible for maintaining the necessary qualities of its measurement standards and for keeping records of their traceability. Such laboratories can also issue calibration certificates that should also contain an estimate of the accuracy of calibration. In order to guarantee traceability, the calibration laboratory should be recognized and authorized by the appropriate national authorities. Manufacturers of meteorological instruments should deliver their quality products, for example, standard barometers or thermometers, with calibration certificates or calibration reports. These documents may or may not be included in the basic price of the instrument, but may be available as options. Calibration certificates given by authorized calibration laboratories may be more expensive than factory certificates. As discussed in the previous section, environmental, functional, and possibly additional calibration testing, should be performed. Users may also purchase calibration devices or measurement standards for their own laboratories. A good calibration device should always be combined with a proper measurement standard, for example, a liquid bath temperature calibration chamber with a set of certified liquid-in-glass thermometers, and/or certified resistance thermometers. For the example above, further considerations, such as the use of non-conductive silicone fluid, should be applied. Thus, if a temperature-measurement device is mounted on an electronic circuit board, the entire board may be immersed in the bath so that the device can be tested in its operating configuration. Not only must the calibration equipment and standards be of high quality, but the engineers and technicians of a calibration laboratory must be well trained in basic metrology and in the use of available calibration devices and measurement standards. Once instruments have passed initial calibration and testing and are accepted by the user, a programme of regular calibration checks and calibrations should be instituted. Instruments, such as mercury barometers, are easily subject to breakage when transported to field sites. At distant stations, these instruments should be kept stationary as far as possible, and should be calibrated against more robust travelling standards that can be moved from one station to another by inspectors. Travelling standards must be compared frequently against a working standard or reference standard in the

calibration laboratory, and before and after each inspection tour. Details of laboratory calibration procedures of, for example, barometers, thermometers, hygrometers, anemometers and radiation instruments are given in the relevant chapters of this Guide or in specialized handbooks. These publications also contain information concerning recognized international standard instruments and calibration devices. Calibration procedures for automatic weather stations require particular attention, as discussed in Part II, Chapter 1. WMO (1989) gives a detailed analysis of the calibration procedures used by several Meteorological Services for the calibration of instruments used to measure temperature, humidity, pressure and wind.
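As a simple illustration of the checks on travelling standards mentioned above, the sketch below (hypothetical values and tolerance; not a prescribed WMO procedure) compares a travelling barometer against the laboratory working standard before and after an inspection tour and flags the tour data if the travelling standard has drifted by more than an agreed tolerance:

# Hypothetical comparison of a travelling standard against the laboratory
# working standard before and after an inspection tour (values in hPa).
before_tour = 1000.25 - 1000.20   # travelling minus working standard before departure
after_tour  = 1000.41 - 1000.20   # the same comparison after return
tolerance   = 0.10                # agreed acceptance limit for drift during the tour

drift = after_tour - before_tour
if abs(drift) > tolerance:
    print(f"Travelling standard drifted by {drift:+.2f} hPa; station comparisons made during the tour should be reviewed.")
else:
    print(f"Drift of {drift:+.2f} hPa is within tolerance; tour results can be accepted.")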

4.4 INTERCOMPARISONS

Intercomparisons of instruments and observing systems, together with agreed quality-control procedures, are essential for the establishment of compatible data sets. All intercomparisons should be planned and carried out carefully in order to maintain an adequate and uniform quality level of measurements of each meteorological variable. Many meteorological quantities cannot be directly compared with metrological standards, and hence to absolute references, for example, visibility, cloud-base height and precipitation. For such quantities, intercomparisons are of primary value.

Comparisons or evaluations of instruments and observing systems may be organized and carried out at the following levels:
(a) International comparisons, in which participants from all interested countries may attend in response to a general invitation;
(b) Regional intercomparisons, in which participants from countries of a certain region (for example, WMO Regions) may attend in response to a general invitation;
(c) Multilateral and bilateral intercomparisons, in which participants from two or more countries may agree to attend without a general invitation;
(d) National intercomparisons, within a country.

Because of the importance of international comparability of measurements, WMO, through one of its constituent bodies, from time to time arranges for international and regional comparisons of instruments. Such intercomparisons or evaluations

of instruments and observing systems may be very lengthy and expensive. Rules have therefore been established so that coordination will be effective and assured. These rules are reproduced in Annexes 4.A and 4.B.2 They contain general guidelines and should, when necessary, be supplemented by specific working rules for each intercomparison (see the relevant chapters of this Guide). Reports of particular WMO international comparisons are referenced in other chapters in this

Guide (see, for instance, Part I, Chapters 3, 4, 9, 12, 14 and 15). Annex 4.C provides a list of the international comparisons which have been supported by the Commission for Instruments and Methods of Observation and which have been published in the WMO technical document series. Reports of comparisons at any level should be made known and available to the meteorological community at large.

2  Recommendations adopted by the Commission for Instruments and Methods of Observation at its eleventh session, through the annex to Recommendation 4 (CIMO-XI) and Annex IX (1994).


ANNEX 4.A

PROCEDURES OF WMO GLOBAL AND REGIONAL INTERCOMPARISONS OF INSTRUMENTS
1. A WMO intercomparison of instruments and methods of observation shall be agreed upon by the WMO constituent body concerned so that it is recognized as a WMO intercomparison.

2. The Executive Council will consider the approval of the intercomparison and its inclusion in the programme and budget of WMO.

3. When there is an urgent need to carry out a specific intercomparison that was not considered at the session of a constituent body, the president of the relevant body may submit a corresponding proposal to the President of WMO for approval.

4. In good time before each intercomparison, the Secretary-General, in cooperation with the president of CIMO and possibly with presidents of other technical commissions or regional associations, or heads of programmes concerned, should make inquiries as to the willingness of one or more Members to act as a host country and as to the interest of Members in participating in the intercomparison.

5. When at least one Member has agreed to act as host country and a reasonable number of Members have expressed their interest in participating, an international organizing committee should be established by the president of CIMO in consultation with the heads of the constituent bodies concerned, if appropriate.

6. Before the intercomparison begins, the organizing committee should agree on its organization, for example, at least on the main objectives, place, date and duration of the intercomparison, conditions for participation, data acquisition, processing and analysis methodology, plans for the publication of results, intercomparison rules, and the responsibilities of the host(s) and the participants.

7. The host should nominate a project leader who will be responsible for the proper conduct of the intercomparison, the data analysis, and the preparation of a final report of the intercomparison as agreed upon by the organizing committee. The project leader will be a member ex officio of the organizing committee.

8. When the organizing committee has decided to carry out the intercomparison at sites in different host countries, each of these countries should designate a site manager. The responsibilities of the site managers and the overall project management will be specified by the organizing committee.

9. The Secretary-General is invited to announce the planned intercomparison to Members as soon as possible after the establishment of the organizing committee. The invitation should include information on the organization and rules of the intercomparison as agreed upon by the organizing committee. Participating Members should observe these rules.

10. All further communication between the host(s) and the participants concerning organizational matters will be handled by the project leader and possibly by the site managers unless other arrangements are specified by the organizing committee.

11. Meetings of the organizing committee during the period of the intercomparison could be arranged, if necessary.

12. After completion of the intercomparison, the organizing committee shall discuss and approve the main results of the data analysis of the intercomparison and shall make proposals for the utilization of the results within the meteorological community.

13. The final report of the intercomparison, prepared by the project leader and approved by the organizing committee, should be published in the WMO Instruments and Observing Methods Report series.


ANNEX 4.B

GUIDELINES FOR ORGANIZING WMO INTERCOMPARISONS OF INSTRUMENTS
1. INTRODUCTION

1.1 These guidelines are complementary to the procedures of WMO global and regional intercomparisons of meteorological instruments. They assume that an international organizing committee has been set up for the intercomparison and provide guidance to the organizing committee for its conduct. In particular, see Part I, Chapter 2, Annex 2.C.

1.2 However, since all intercomparisons differ to some extent from each other, these guidelines should be considered as a generalized checklist of tasks. They should be modified as situations so warrant, keeping in mind the fact that fairness and scientific validity should be the criteria that govern the conduct of WMO intercomparisons and evaluations.

1.3 Final reports of other WMO intercomparisons and the reports of meetings of organizing committees may serve as examples of the conduct of intercomparisons. These are available from the World Weather Watch Department of the WMO Secretariat.

2. OBJECTIVES OF THE INTERCOMPARISON

The organizing committee should examine the achievements to be expected from the intercomparison and identify the particular problems that may be expected. It should prepare a clear and detailed statement of the main objectives of the intercomparison and agree on any criteria to be used in the evaluation of results. The organizing committee should also investigate how best to guarantee the success of the intercomparison, making use of the accumulated experience of former intercomparisons, as appropriate.

3. PLACE, DATE AND DURATION

3.1 The host country should be requested by the Secretariat to provide the organizing committee with a description of the proposed intercomparison site and facilities (location(s), environmental and climatological conditions, major topographic features, and so forth). It should also nominate a project leader.³

³ When more than one site is involved, site managers shall be appointed, as required. Some tasks of the project leader, as outlined in this annex, shall be delegated to the site managers.

3.2 The organizing committee should examine the suitability of the proposed site and facilities, propose any necessary changes, and agree on the site and facilities to be used. A full site and environmental description should then be prepared by the project leader. The organizing committee, in consultation with the project leader, should decide on the date for the start and the duration of the intercomparison.

3.3 The project leader should propose a date by which the site and its facilities will be available for the installation of equipment and its connection to the data-acquisition system. The schedule should include a period of time to check and test equipment and to familiarize operators with operational and routine procedures.

4. PARTICIPATION IN THE INTERCOMPARISON

4.1 The organizing committee should consider technical and operational aspects, desirable features and preferences, restrictions, priorities, and descriptions of different instrument types for the intercomparison.

4.2 Normally, only instruments in operational use or instruments that are considered for operational use in the near future by Members should be admitted. It is the responsibility of the participating Members to calibrate their instruments against recognized standards before shipment and to provide appropriate calibration certificates. Participants may be requested to provide two identical instruments of each type in order to achieve more confidence in the data. However, this should not be a condition for participation.

4.3 The organizing committee should draft a detailed questionnaire in order to obtain the required information on each instrument proposed for the intercomparison. The project leader shall provide further details and complete this questionnaire as soon as possible. Participants will be requested to specify very clearly the hardware connections and software characteristics in their reply and to supply adequate documentation (a questionnaire checklist is available from the WMO Secretariat).

4.4 The chairperson of the organizing committee should then request:
(a) The Secretary-General to invite officially Members (who have expressed an interest) to participate in the intercomparison. The invitation shall include all necessary information on the rules of the intercomparison as prepared by the organizing committee and the project leader;
(b) The project leader to handle all further contact with participants.

5. DATA ACQUISITION

5.1 Equipment set-up

5.1.1 The organizing committee should evaluate a proposed layout of the instrument installation prepared by the project leader and agree on a layout of instruments for the intercomparison. Special attention should be paid to fair and proper siting and exposure of instruments, taking into account criteria and standards of WMO and other international organizations. The adopted siting and exposure criteria shall be documented.

5.1.2 Specific requests made by participants for equipment installation should be considered and approved, if acceptable, by the project leader on behalf of the organizing committee.

5.2 Standards and references

The host country should make every effort to include at least one reference instrument in the intercomparison. The calibration of this instrument should be traceable to national or international standards. A description and specification of the standard should be provided to the organizing committee. If no recognized standard or reference exists for the variable(s) to be measured, the organizing committee should agree on a method to determine a reference for the intercomparison.

5.3 Related observations and measurements

The organizing committee should agree on a list of meteorological and environmental variables that should be measured or observed at the intercomparison site during the whole intercomparison period. It should prepare a measuring programme for these and request the host country to execute this programme. The results of this programme should be recorded in a format suitable for the intercomparison analysis.

5.4 Data-acquisition system

5.4.1 Normally the host country should provide the necessary data-acquisition system capable of recording the required analogue, pulse and digital (serial and parallel) signals from all participating instruments. A description and a block diagram of the full measuring chain should be provided by the host country to the organizing committee. The organizing committee, in consultation with the project leader, should decide whether analogue chart records and visual readings from displays will be accepted in the intercomparison for analysis purposes or only for checking the operation.

5.4.2 The data-acquisition system hardware and software should be well tested before the comparison is started and measures should be taken to prevent gaps in the data record during the intercomparison period.

5.5 Data-acquisition methodology

The organizing committee should agree on appropriate data-acquisition procedures, such as the frequency of measurement, data sampling, averaging, data reduction, data formats, real-time quality control, and so on. When data reports have to be made by participants during the time of the intercomparison, or when data are available as chart records or visual observations, the organizing committee should agree on the responsibility for checking these data, on the period within which the data should be submitted to the project leader, and on the formats and media that would allow storage of these data in the database of the host. When possible, direct comparisons should be made against the reference instrument.

5.6 Schedule of the intercomparison

The organizing committee should agree on an outline of a time schedule for the intercomparison, including normal and specific tasks, and prepare a

time chart. Details should be further worked out by the project leader and the project staff.
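The data-acquisition procedures listed in section 5.5 above (sampling frequency, averaging, data reduction and real-time quality control) can be illustrated with a minimal sketch; the sampling rate, averaging period and plausibility limits below are invented for the example and would in practice be fixed by the organizing committee:

# Reduce hypothetical 1 Hz temperature samples to a one-minute mean with a
# simple plausibility check, as one possible data-reduction step.
samples = [12.31, 12.30, 12.33, 999.9, 12.35, 12.32]   # 999.9 simulates an error value

PLAUSIBLE = (-80.0, 60.0)    # crude plausibility limits (deg C) for this example
accepted = [s for s in samples if PLAUSIBLE[0] <= s <= PLAUSIBLE[1]]

if len(accepted) >= 0.66 * len(samples):       # require two thirds of the samples to be valid
    minute_mean = sum(accepted) / len(accepted)
    print(f"one-minute mean: {minute_mean:.2f} C from {len(accepted)} accepted samples")
else:
    print("minute flagged as missing: too few valid samples")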

6. DATA PROCESSING AND ANALYSIS

6.1 Database and data availability

6.1.1 All essential data of the intercomparison, including related meteorological and environmental data, should be stored in a database for further analysis under the supervision of the project leader. The organizing committee, in collaboration with the project leader, should propose a common format for all data, including those reported by participants during the intercomparison. The organizing committee should agree on near-real-time monitoring and quality-control checks to ensure a valid database.

6.1.2 After completion of the intercomparison, the host country should, on request, provide each participating Member with a data set from its submitted instrument(s). This set should also contain related meteorological, environmental and reference data.

6.2 Data analysis

6.2.1 The organizing committee should propose a framework for data analysis and processing and for the presentation of results. It should agree on data conversion, calibration and correction algorithms, and prepare a list of terms, definitions, abbreviations and relationships (where these differ from commonly accepted and documented practice). It should elaborate and prepare a comprehensive description of statistical methods to be used that correspond to the intercomparison objectives.

6.2.2 Whenever a direct, time-synchronized, one-on-one comparison would be inappropriate (for example, in the case of spatial separation of the instruments under test), methods of analysis based on statistical distributions should be considered. Where no reference instrument exists (as for cloud base, meteorological optical range, and so on), instruments should be compared against a relative reference selected from the instruments under test, based on median or modal values, with care being taken to exclude unrepresentative values from the selected subset of data.

6.2.3 Whenever a second intercomparison is established some time after the first, or in a subsequent phase of an ongoing intercomparison, the methods of analysis and the presentation should include those used in the original study. This should not preclude the addition of new methods.

6.2.4 Normally the project leader should be responsible for the data-processing and analysis. The project leader should, as early as possible, verify the appropriateness of the selected analysis procedures and, as necessary, prepare interim reports for comment by the members of the organizing committee. Changes should be considered, as necessary, on the basis of these reviews.

6.2.5 After completion of the intercomparison, the organizing committee should review the results and analysis prepared by the project leader. It should pay special attention to recommendations for the utilization of the intercomparison results and to the content of the final report.
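As an illustration of the relative-reference approach described in paragraph 6.2.2 above, the following sketch (hypothetical readings; the screening threshold is an arbitrary choice for the example) takes the median of simultaneous readings from the instruments under test as the working reference and reports each instrument's deviation from it, after discarding clearly unrepresentative values:

import statistics

# Hypothetical simultaneous cloud-base readings (m) from four instruments under test;
# instrument "D" returns an unrepresentative value at the second time step.
readings = {
    "A": [620, 650, 700],
    "B": [610, 660, 690],
    "C": [640, 655, 710],
    "D": [630, 1900, 705],
}

n_times = len(next(iter(readings.values())))
deviations = {name: [] for name in readings}

for t in range(n_times):
    values = [series[t] for series in readings.values()]
    reference = statistics.median(values)          # relative reference for this time step
    for name, series in readings.items():
        if abs(series[t] - reference) <= 300:      # exclude unrepresentative values
            deviations[name].append(series[t] - reference)

for name, devs in deviations.items():
    print(name, f"mean deviation from reference: {statistics.mean(devs):+.1f} m ({len(devs)} accepted values)")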

7. FINAL REPORT OF THE INTERCOMPARISON

7.1 The organizing committee should draft an outline of the final report and request the project leader to prepare a provisional report based on it.

7.2 The final report of the intercomparison should contain, for each instrument, a summary of key performance characteristics and operational factors. Statistical analysis results should be presented in tables and graphs, as appropriate. Time-series plots should be considered for selected periods containing events of particular significance. The host country should be invited to prepare a chapter describing the database and facilities used for data-processing, analysis and storage.

7.3 The organizing committee should agree on the procedures to be followed for approval of the final report, such as:
(a) The draft final report will be prepared by the project leader and submitted to all organizing committee members and, if appropriate, also to participating Members;
(b) Comments and amendments should be sent back to the project leader within a specified time limit, with a copy to the chairperson of the organizing committee;
(c) When there are only minor amendments proposed, the report can be completed by the project leader and sent to the WMO Secretariat for publication;
(d) In the case of major amendments or if serious problems arise that cannot be resolved by

correspondence, an additional meeting of the organizing committee should be considered (the president of CIMO should be informed of this situation immediately). 7.4 The organizing committee may agree that intermediate and final results may be presented only by the project leader and the project staff at technical conferences.

8. RESPONSIBILITIES

8.1 Responsibilities of participants

8.1.1 Participants shall be fully responsible for the transportation of all submitted equipment, all import and export arrangements, and any costs arising from these. Correct import/export procedures shall be followed to ensure that no delays are attributable to this process.

8.1.2 Participants shall generally install and remove any equipment under the supervision of the project leader, unless the host country has agreed to do this.

8.1.3 Each participant shall provide all necessary accessories, mounting hardware, signal and power cables and connectors (compatible with the standards of the host country), spare parts and consumables for its equipment. Participants requiring a special or non-standard power supply shall provide their own converter or adapter. Participants shall provide all detailed instructions and manuals needed for installation, operation, calibration and routine maintenance.

8.2 Host country support

8.2.1 The host country should provide, if asked, the necessary information to participating Members on temporary and permanent (in the case of consumables) import and export procedures. It should assist with the unpacking and installation of the participants' equipment and provide rooms or cabinets to house equipment that requires protection from the weather and for the storage of spare parts, manuals, consumables, and so forth.

8.2.2 A reasonable amount of auxiliary equipment or structures, such as towers, shelters, bases or foundations, should be provided by the host country.

8.2.3 The necessary electrical power for all instruments shall be provided. Participants should be informed of the network voltage and frequency and their stability. The connection of instruments to the data-acquisition system and the power supply will be carried out in collaboration with the participants. The project leader should agree with each participant on the provision, by the participant or the host country, of power and signal cables of adequate length (and with appropriate connectors).

8.2.4 The host country should be responsible for obtaining legal authorization related to measurements in the atmosphere, such as the use of frequencies, the transmission of laser radiation, compliance with civil and aeronautical laws, and so forth. Each participant shall submit the necessary documents at the request of the project leader.

8.2.5 The host country may provide information on accommodation, travel, local transport, daily logistic support, and so forth.

8.3 Host country servicing

8.3.1 Routine operator servicing by the host country will be performed only for long-term intercomparisons for which the absence of participants or their representatives can be justified.

8.3.2 When responsible for operator servicing, the host country should:
(a) Provide normal operator servicing for each instrument, such as cleaning, chart changing, and routine adjustments as specified in the participant's operating instructions;
(b) Check each instrument every day of the intercomparison and inform the nominated contact person representing the participant immediately of any fault that cannot be corrected by routine maintenance;
(c) Do its utmost to carry out routine calibration checks according to the participant's specific instructions.

8.3.3 The project leader should maintain in a log regular records of the performance of all equipment participating in the intercomparison. This log should contain notes on everything at the site that may have an effect on the intercomparison, all events concerning participating equipment, and all events concerning equipment and facilities provided by the host country.


9. RULES DURING THE INTERCOMPARISON

9.1 The project leader shall exercise general control of the intercomparison on behalf of the organizing committee.

9.2 No changes to the equipment hardware or software shall be permitted without the concurrence of the project leader.

9.3 Minor repairs, such as the replacement of fuses, will be allowed with the concurrence of the project leader.

9.4 Calibration checks and equipment servicing by participants, which requires specialist knowledge or specific equipment, will be permitted according to predefined procedures.

9.5 Any problems that arise concerning the participants' equipment shall be addressed to the project leader.

9.6 The project leader may select a period during the intercomparison in which equipment will be operated with extended intervals between normal routine maintenance in order to assess its susceptibility to environmental conditions. The same extended intervals will be applied to all equipment.


ANNEX 4.C

REPORTS OF INTERNATIONAL COMPARISONS CONDUCTED UNDER THE AUSPICES OF THE COMMISSION FOR INSTRUMENTS AND METHODS OF OBSERVATION
Topic – Instruments and Observing Methods Report No. – Title of report

Pressure – 46 – The WMO Automatic Digital Barometer Intercomparison (de Bilt, Netherlands, 1989–1991), J.P. van der Meulen, WMO/TD-No. 474 (1992).
Humidity – 34 – WMO Assmann Aspiration Psychrometer Intercomparison (Potsdam, German Democratic Republic, 1987), D. Sonntag, WMO/TD-No. 289 (1989).
Humidity – 38 – WMO International Hygrometer Intercomparison (Oslo, Norway, 1989), J. Skaar, K. Hegg, Moe and K.S. Smedstud, WMO/TD-No. 316 (1989).
Wind – 62 – WMO Wind Instrument Intercomparison (Mont Aigoual, France, 1992–1993), P. Gregoire and G. Oualid, WMO/TD-No. 859 (1997).
Precipitation – 17 – International Comparison of National Precipitation Gauges with a Reference Pit Gauge (1984), B. Sevruk and W.R. Hamon, WMO/TD-No. 38 (1984).
Precipitation – 67 – WMO Solid Precipitation Measurement Intercomparison – Final Report (1998), B.E. Goodison, P.Y.T. Louie and D. Yang, WMO/TD-No. 872 (1998).
Radiation (a) – 16 – Radiation and Sunshine Duration Measurements: Comparison of Pyranometers and Electronic Sunshine Duration Recorders of RA VI (Budapest, Hungary, July–December 1984), G. Major, WMO/TD-No. 146 (1986).
Radiation (a) – 43 – First WMO Regional Pyrheliometer Comparison of RA II and RA V (Tokyo, Japan, 23 January–4 February 1989), Y. Sano, WMO/TD-No. 308 (1989).
Radiation (a) – 44 – First WMO Regional Pyrheliometer Comparison of RA IV (Ensenada, Mexico, 20–27 April 1989), I. Galindo, WMO/TD-No. 345 (1989).
Radiation (a) – 53 – Segunda Comparación de la OMM de Pirheliómetros Patrones Nacionales AR III (Buenos Aires, Argentina, 25 November–13 December 1991), M. Ginzburg, WMO/TD-No. 572 (1992).
Radiation (a) – 64 – Tercera Comparación Regional de la OMM de Pirheliómetros Patrones Nacionales AR III – Informe Final (Santiago, Chile, 24 February–7 March 1997), M.V. Muñoz, WMO/TD-No. 861 (1997).
Sunshine duration – 16 – Radiation and Sunshine Duration Measurements: Comparison of Pyranometers and Electronic Sunshine Duration Recorders of RA VI (Budapest, Hungary, July–December 1984), G. Major, WMO/TD-No. 146 (1986).
Visibility – 41 – The First WMO Intercomparison of Visibility Measurements (United Kingdom, 1988/1989), D.J. Griggs, D.W. Jones, M. Ouldridge and W.R. Sparks, WMO/TD-No. 401 (1990).
Radiosondes – 28 – WMO International Radiosonde Comparison, Phase I (Beaufort Park, United Kingdom, 1984), A.H. Hooper, WMO/TD-No. 174 (1986).

(a) The reports of the WMO International Pyrheliometer Intercomparisons, conducted by the World Radiation Centre at Davos (Switzerland) and carried out at five-yearly intervals, are also distributed by WMO.

Radiosondes – 29 – WMO International Radiosonde Intercomparison, Phase II (Wallops Island, United States, 4 February–15 March 1985), F.J. Schmidlin, WMO/TD-No. 312 (1988).
Radiosondes – 30 – WMO International Radiosonde Comparison (United Kingdom, 1984/United States, 1985), J. Nash and F.J. Schmidlin, WMO/TD-No. 195 (1987).
Radiosondes – 40 – WMO International Radiosonde Comparison, Phase III (Dzhambul, USSR, 1989), A. Ivanov, A. Kats, S. Kurnosenko, J. Nash and N. Zaitseva, WMO/TD-No. 451 (1991).
Radiosondes – 59 – WMO International Radiosonde Comparison, Phase IV (Tsukuba, Japan, 15 February–12 March 1993), S. Yagy, A. Mita and N. Inoue, WMO/TD-No. 742 (1996).
Radiosondes – 76 – WMO Intercomparison of GPS Radiosondes – Executive Summary (Alcantâra, Maranhão, Brazil, 20 May–10 June 2001), R.B. da Silveira, G. Fisch, L.A.T. Machado, A.M. Dall'Antonia Jr., L.F. Sapucci, D. Fernandes and J. Nash, WMO/TD-No. 1153 (2003).
Radiosondes – 83 – WMO Intercomparison of Radiosonde Systems (Vacoas, Mauritius, 2–25 February 2005), J. Nash, R. Smout, T. Oakley, B. Pathack and S. Kurnosenko, WMO/TD-No. 1303 (2006).
Cloud-base height – 32 – WMO International Ceilometer Intercomparison (United Kingdom, 1986), D.W. Jones, M. Ouldridge and D.J. Painting, WMO/TD-No. 217 (1988).
Present weather – 73 – WMO Intercomparison of Present Weather Sensors/Systems – Final Report (Canada and France, 1993–1995), M. Leroy, C. Bellevaux and J.P. Jacob, WMO/TD-No. 887 (1998).


REFERENCES AND FURTHER READING

Hoehne, W.E., 1971: Standardizing Functional Tests. NOAA Technical Memorandum NWS T&EL-12, United States Department of Commerce, Sterling, Virginia.
Hoehne, W.E., 1972: Standardizing functional tests. Preprints of the Second Symposium on Meteorological Observations and Instrumentation, American Meteorological Society, pp. 161–165.
Hoehne, W.E., 1977: Progress and Results of Functional Testing. NOAA Technical Memorandum NWS T&EL-15, United States Department of Commerce, Sterling, Virginia.
International Electrotechnical Commission, 1990: Classification of Environmental Conditions. IEC 721.
International Organization for Standardization, 1989a: Sampling Procedures for Inspection by Attributes – Part 1: Sampling plans indexed by acceptable quality level (AQL) for lot-by-lot inspection. ISO 2859-1:1989.
International Organization for Standardization, 1989b: Sampling Procedures and Charts for Inspection by Variables for Percent Nonconforming. ISO 3951:1989.
International Organization for Standardization, 1993: International Vocabulary of Basic and General Terms in Metrology. Second edition, ISO Guide 99:1993.
National Weather Service, 1980: Natural Environmental Testing Criteria and Recommended Test Methodologies for a Proposed Standard for National Weather Service Equipment. United States Department of Commerce, Sterling, Virginia.
National Weather Service, 1984: NWS Standard Environmental Criteria and Test Procedures. United States Department of Commerce, Sterling, Virginia.
World Meteorological Organization/International Council of Scientific Unions, 1986: Revised Instruction Manual on Radiation Instruments and Measurements (C. Fröhlich and J. London, eds). World Climate Research Programme Publications Series No. 7, WMO/TD-No. 149, Geneva.
World Meteorological Organization, 1989: Analysis of Instrument Calibration Methods Used by Members (H. Doering). Instruments and Observing Methods Report No. 45, WMO/TD-No. 310, Geneva.

CHAPTER 5

TRAINING OF INSTRUMENT SPECIALISTS

5.1 INTRODUCTION

5.1.1 General

Given that the science and application of meteorology are based on continuous series of measurements using instruments and systems of increasing sophistication, this chapter is concerned with the training of those specialists who deal with the planning, specification, design, installation, calibration, maintenance and application of meteorological measuring instruments and remote-sensing systems. It is aimed at technical managers and trainers and, not least, at the instrument specialists themselves who want to advance in their profession.

Training skilled personnel is critical to the availability of necessary and appropriate technologies in all countries so that the WMO Global Observing System can produce cost-effective data of uniform good quality and timeliness. However, more than just technical ability with instruments is required. Modern meteorology requires technologists who are also capable as planners and project managers, knowledgeable about telecommunications and data processing, good advocates for effective technical solutions, and skilled in the areas of financial budgets and people management. Thus, for the most able instrument specialists or meteorological instrument systems engineers, training programmes should be broad-based and include personal development and management skills as well as expertise in modern technology.

Regional Training Centres (RTCs) have been established in many countries under the auspices of WMO, and many of them offer training in various aspects of the operation and management of instruments and instrument systems. Regional Training Centres are listed in the annex. Similarly, Regional Instrument Centres (RICs) have been set up in many places, and some of them can provide training. Their locations and functions are listed in Part I, Chapter 1, Annex 1.A and are discussed briefly in section 5.5.1.2.

5.1.2 Technology transfer

Training is a vital part of the process of technology transfer, which is the developmental process of introducing new technical resources into service to improve quality and reduce operating costs. New resources demand new skills for the introductory process and for ongoing operation and maintenance. This human dimension is more important in capacity building than the technical material. As meteorology is a global discipline, the technology gap between developed and developing nations is a particular issue for technology transfer. Providing for effective training strategies, programmes and resources which foster self-sustaining technical infrastructures and build human capacity in developing countries is a goal that must be kept constantly in view.

5.1.3 Application to all users of meteorological instruments

This chapter deals with training mainly as an issue for National Hydrometeorological Services. However, the same principles apply to any organizations that take meteorological measurements, whether they train their own staff or expect to recruit suitably qualified personnel. In common with all the observational sciences, the benefits of training to ensure standardized measurement procedures and the most effective use and care of equipment are self-evident.

5.2 APPROPRIATE TRAINING FOR OPERATIONAL REQUIREMENTS

5.2.1 Theory and practice

Taking measurements using instrument systems depends on physical principles (for example, the thermal expansion of mercury) to sense the atmospheric variables and transduce them into a standardized form that is convenient for the user, for example, a recorded trace on a chart or an electrical signal to input into an automatic weather station. The theoretical basis for understanding the measurement process must also take into account the coupling of the instrument to the quantity being measured (the representation or "exposure") as well as the instrumental and observational errors with which every measurement is fraught. The basic measurement data is then often further processed and coded in more or less complex ways, thus requiring further theoretical understanding, for example, the reduction of atmospheric pressure to

mean sea level and upper-air messages derived from a radiosonde flight. Taking the measurement also depends on practical knowledge and skill in terms of how to install and set up the instrument to take a standardized measurement, how to operate it safely and accurately, and how to carry out any subsequent calculations or coding processes with minimal error. Thus, theoretical and practical matters are closely related in achieving measurement data of known quality, and the personnel concerned in the operation and management of the instrument systems need theoretical understanding and practical skills which are appropriate to the complexity and significance of their work. The engineers who design or maintain complex instrumentation systems require a particularly high order of theoretical and practical training.

5.2.2 Matching skills to the tasks

Organizations need to ensure that the qualifications, skills and numbers of their personnel or other contractors (and thus training) are well matched to the range of tasks to be performed. For example, the training needed to read air temperature in a Stevenson screen is at the lower end of the range of necessary skills, while theoretical and practical training at a much higher level is plainly necessary in order to specify, install, operate and maintain automatic weather stations, meteorological satellite receivers and radars. Therefore, it is useful to apply a classification scheme for the levels of qualification for operational requirements, employment, and training purposes. The national grades of qualification in technical education applicable in a particular country will be important benchmarks. To help the international community achieve uniform quality in their meteorological data acquisition and processing, WMO recommends the use of its own classification of personnel with the accompanying duties that they should be expected to carry out competently.

5.2.3 WMO classification of personnel

The WMO classification scheme¹ identifies two broad categories of personnel: graduate professionals and technicians (WMO, 2002a). For meteorological and hydrological personnel, these categories are designated as follows: meteorologist and meteorological technician, and hydrologist and hydrological technician, respectively. The recommended syllabus for each class includes a substantial component on instruments and methods of observation related to the education, training and duties expected at that level.

The WMO classification of personnel also sets guidelines for the work content, qualifications and skill levels required for instrument specialists. Section 7.3 of WMO (2002a) includes an example of competency requirements, while WMO (2002b) offers detailed syllabus examples for the initial training and specialization of meteorological personnel. These guidelines enable syllabi and training courses to be properly designed and interpreted; they also assist in the definition of skill deficits and aid the development of balanced national technical skill resources.

¹ Classification scheme approved by the WMO Executive Council at its fiftieth session (1998), and endorsed by the World Meteorological Congress at its thirteenth session (1999).

5.3 SOME GENERAL PRINCIPLES FOR TRAINING

5.3.1 Management policy issues

5.3.1.1 A personnel plan

It is important that National Meteorological Services have a personnel plan that includes instrument specialists, recognizing their value in the planning, development and maintenance of adequate and cost-effective weather observing programmes. The plan would show all specialist instrument personnel at graded levels (WMO, 2002a) of qualification. Skill deficits should be identified and provision made for recruitment and training.

5.3.1.2 Staff retention

Every effort should be made to retain scarce instrumentation technical skills by providing a work environment that is technically challenging, has opportunities for career advancement, and has salaries comparable with those of other technical skills, both within and outside the Meteorological Service.

5.3.1.3 Personnel development

Training should be an integral part of the personnel plan. The introduction of new technology and re-equipment imply new skill requirements. New recruits will need training appropriate to their previous experience, and skill deficits can also be made up by enhancing the skills of other staff. This training also provides the path for career progression. It is helpful if each staff member has a career profile showing training, qualifications and career

progression, maintained by the training department, in order to plan personnel development in an orderly manner.

5.3.1.4 Balanced training

National training programmes should aim at a balance of skills over all specialist classes, giving due attention to the training, supplementation and refresher phases of training, so as to result in a self-sustaining technical infrastructure.

5.3.2 Aims and objectives for training programmes

In order to achieve maximum benefits from training it is essential to have clear aims and specific objectives on which to base training plans, syllabi and expenditure. The following strategic aims and objectives for the training of instrument specialists may be considered.

5.3.2.1 For managers

Management aims in training instrument specialists should be, among others:
(a) To improve and maintain the quality of information in all meteorological observing programmes;
(b) To enable National Meteorological and Hydrological Services (NMHSs) to become self-reliant in the knowledge and skills required for the effective planning, implementation and operation of meteorological data-acquisition programmes, and to enable them to develop maintenance services ensuring maximum reliability, accuracy and economy from instrumentation systems;
(c) To realize fully the value of capital invested in instrumentation systems over their optimum economic life.

5.3.2.2 For trainers

The design of training courses should aim:
(a) To provide balanced programmes of training which meet the defined needs of the countries within each region for skills at graded levels;
(b) To provide effective knowledge transfer and skill enhancement in National Meteorological Services by using appropriately qualified tutors, good training aids and facilities, and effective learning methods;
(c) To provide for monitoring the effectiveness of training by appropriate assessment and reporting procedures;
(d) To provide training at a minimum necessary cost.

5.3.2.3 For trainers and instrument specialists

The general objective of training is to equip instrument specialists and engineers (at graded levels of training and experience):
(a) To appreciate the use, value and desirable accuracy of all instrumental measurements;
(b) To understand and apply the principles of siting instrument enclosures and instruments so that representative, homogeneous and compatible data sets are produced;
(c) To acquire the knowledge and skill to carry out installations, adjustments and repairs and to provide a maintenance service ensuring maximum reliability, accuracy and economy from meteorological instruments and systems;
(d) To be able to diagnose faults logically and quickly from observed symptoms and to trace and rectify their causes systematically;
(e) To understand the sources of error in measurements and be competent in the handling of instrument standards and calibration procedures in order to minimize systematic errors;
(f) To keep abreast of new technologies and their appropriate application and to acquire new knowledge and skills by means of special and refresher courses;
(g) To plan and design data-acquisition networks, and manage budgets and technical staff;
(h) To manage projects involving significant financial, equipment and staff resources and technical complexity;
(i) To modify, improve, design and make instruments for specific purposes;
(j) To design and apply computer and telecommunications systems and software, control measurements and process raw instrumental data into derived forms, and transmit coded messages.

5.3.3 Training for quality

Meteorological data acquisition is a complex and costly activity involving human and material resources, communication and computation. It is necessary to maximize the benefit of the information derived while minimizing the financial and human resources required in this endeavour. The aim of quality data acquisition is to maintain the flow of representative, accurate and timely instrumental data into the national meteorological processing centres at the least cost. Through every stage of technical training, a broad appreciation of

how all staff can affect the quality of the end product should be encouraged. The discipline of total quality management (Walton, 1986; Imai, 1986) considers the whole measurement environment (applications, procedures, instruments and personnel) in so far as each of its elements may affect quality. In total quality management, the data-acquisition activity is studied as a system or series of processes. Critical elements of each process, for example, time delay, are measured and the variation in the process is defined statistically. Problem-solving tools are used by a small team of people who understand the process, to reduce process variation and thereby improve quality. Processes are continuously refined by incremental improvement.

WMO (1990) provides a checklist of factors under the following headings:
(a) Personnel recruitment and training;
(b) Specification, design and development;
(c) Instrument installation;
(d) Equipment maintenance;
(e) Instrument calibration.

All of the above influence data quality from the instrument expert's point of view. The checklist can be used by managers to examine areas over which they have control to identify points of weakness, by training staff within courses on total quality management concepts, and by individuals to help them be aware of areas where their knowledge and skill should make a valuable contribution to overall data quality.

The International Organization for Standardization provides for formal quality systems, defined by the ISO 9000 group of specifications (ISO, 1994a; 1994b), under which organizations may be certified by external auditors for the quality of their production processes and services to clients. These quality systems depend heavily on training in quality management techniques.

5.3.4 How people learn

5.3.4.1 The learning environment

Learning is a process that is very personal to the individual, depending on a person's needs and interests. People are motivated to learn when there is the prospect of some reward, for example, a salary increase. Nonetheless, job satisfaction, involvement, personal fulfilment, having some sense of power or influence, and the affirmation of peers and superiors are also strong motivators. These rewards come through enhanced work performance and relationships with others on the job.

Learning is an active process in which the student reacts to the training environment and activity. A change of behaviour occurs as the student is involved mentally, physically and emotionally. Too much mental or emotional stress during learning time will be counterproductive. Trainers and managers should attempt to stimulate and encourage learning by creating a conducive physical and psychological climate and by providing appropriate experiences and methods that promote learning. Students should feel at ease and be comfortable in the learning environment, which should not provide distractions. The "psychological climate" can be affected by the student's motivation, the manner and vocabulary of the tutor, the affirmation of previously acquired knowledge, the avoidance of embarrassment and ridicule, the establishment of an atmosphere of trust, and the selection of teaching methods.

5.3.4.2 Important principles

Important principles for training include the following:
(a) Readiness: Learning will take place more quickly if the student is ready, interested and wants to learn;
(b) Objectives: The objectives of the training (including performance standards) should be clear to those responsible and those involved;
(c) Involvement: Learning is more effective if students actively work out solutions and do things for themselves, rather than being passively supplied with answers or merely shown a skill;
(d) Association: Learning should be related to past experiences, noting similarities and differences;
(e) Learning rate: The rate of training should equal the rate at which an individual can learn (confirmed by testing), with learning distributed over several short sessions rather than one long session being more likely to be retained;
(f) Reinforcement: Useful exercises and repetition will help instil new learning;
(g) Intensity: Intense, vivid or dramatic experiences capture the imagination and make more impact;
(h) Effectiveness: Experiences which are satisfying are better for learning than those which are embarrassing or annoying. Approval encourages learning;

(i) Support: The trainee's supervisor must be fully supportive of the training and must be able to maintain and reinforce it;
(j) Planning and evaluation: Training should be planned, carried out and evaluated systematically, in the context of organizational needs.

5.3.4.3 Varying the methods

People in a group will learn at different speeds. Some training methods (see section 5.4) will suit some individuals better than others and will be more effective under different circumstances. Using a variety of training methods and resources will help the group learn more rapidly. Research (Moss, 1987) shows that retention of learning through the senses is distributed as follows:
(a) Sight (83 per cent);
(b) Hearing (11 per cent);
(c) Other senses (6 per cent).
However, we learn best by actually performing the task. Methods or training media, in general order of decreasing effectiveness, are:
(a) Real experience;
(b) Simulated practical experience;
(c) Demonstrations and discussions;
(d) Physical models and text;
(e) Film, video and computer animation;
(f) Graphs, diagrams and photographs;
(g) Written text;
(h) Lectures.
These methods may, of course, be used in combination. A good lecture may include some of the other methods. Traditional educational methods rely heavily on the spoken and written word, whereas evidence shows that visual and hands-on experience are far more powerful. Training for instrument specialists can take advantage of the widest range of methods and media. The theoretical aspects of measurement and instrument design are taught by lectures based on text and formulas and supported by graphs and diagrams. A working knowledge of the instrument system for operation, maintenance and calibration can be gained by the use of photographs with text, films or videos showing manual adjustments, models which may be disassembled, demonstrations, and ultimately practical experience in operating systems. Unsafe practices or modes of use may be simulated.

5.3.5 Personal skills development

A meteorological instrument systems engineering group needs people who are not only technically capable, but who are broadly educated and able to speak and write well. Good personal communication skills are necessary to support and justify technical programmes, particularly in management positions. Skilled technologists should receive training so that they can play a wider role in the decisions that affect the development of their Meteorological Service. There is a tendency for staff who are numerate and have practical, manual ability to be less capable in verbal and written language skills. In the annual personal performance review of their staff, managers should identify any opportunities for staff to enhance their personal skills by taking special courses, for example, in public speaking, negotiation, letter and report writing, or assertiveness training. Some staff may need assistance in learning a second language in order to further their training.

5.3.6 Management training

Good management skills are an important component of engineering activity. These skills include time management; staff motivation, supervision and performance assessment (including a training dimension); project management (estimation of resources, budgets, time, staff and materials, and scheduling); problem solving; quality management; and good verbal and written communication. Instrument specialists with leadership aptitude should be identified for management training at an appropriate time in their careers. Today's manager may have access to a personal computer and be adept in the use of office and engineering software packages, for example, for word processing, spreadsheets, databases, statistical analysis with graphics, engineering drawing, flow charting, and project management. Training in the use of these tools can add greatly to personal productivity.

5.3.7 A lifelong occupation

5.3.7.1 Three training phases

Throughout their working lives, instrument specialists should expect to be engaged in repeated cycles of personal training, both through structured study and informal on-the-job training or self-study. Three phases of training can be recognized as follows:

(a) A developmental training phase, in which the trainee acquires general theory and practice at graded levels;
(b) A supplementation phase, in which the training is enhanced by learning about specific techniques and equipment;
(c) A refresher phase, in which, some years after formal training, the specialist needs refresher training and updates on current techniques and equipment.

5.3.7.2 Training

For instrument specialists, the training phase of technical education and training usually occurs partly in an external technical institute and partly in the training establishment of the NMHS, where a basic course in meteorological instruments is taken. Note that technical or engineering education may extend over both WMO class levels.

5.3.7.3 Specialist training

The supplementation phase will occur over a few years as the specialist takes courses on special systems, for example, automatic weather stations or radar, or on disciplines such as computer software or management skills. Increasing use will be made of external training resources, including WMO-sponsored training opportunities.

5.3.7.4 Refresher training

As the instrument specialist's career progresses, there will be a need for periodic refresher courses to cover advances in instrumentation and technology, as well as other supplementary courses. There is an implied progression in these phases: each training course will assume that students have some prerequisite training on which to build.

5.4 The training process

5.4.1 The role of the trainer

Most instrument specialists find themselves in the important and satisfying role of trainer from time to time, and for some it will become their full-time work, with its own field of expertise. All trainers need an appreciation of the attributes of a good trainer. A good trainer is concerned with quality results, is highly knowledgeable in specified fields, and has good communication skills. He or she will have empathy with students, and will be patient and tolerant, ready to give encouragement and praise, flexible and imaginative, and practised in a variety of training techniques. Good trainers will set clear objectives and plan and prepare training sessions well. They will maintain good records of training prescriptions, syllabi, course notes, courses held and their results, and of budgets and expenditures. They will seek honest feedback on their performance and be ready to modify their approach. They will also expect to be always learning.

5.4.2 Task analysis

The instrument specialist must be trained to carry out many repetitive or complex tasks for the installation, maintenance and calibration of instruments, and sometimes for their manufacture. A task analysis form may be used to define the way in which a job is to be done; it could be used by the tutor in training and then as a checklist by the trainee. First, the objective of the job and the required standard of performance are written down. The job is then broken down into logical steps or stages of a convenient size. The form might consist of a table whose columns are headed, for example, with steps, methods, measures and reasons:
(a) Steps (what must be done): These are numbered and consist of a brief description of each step of the task, beginning with an active verb;
(b) Methods (how it is to be done): An indication of the method and equipment to be used or the skill required;
(c) Measures (the standard required): A qualitative statement, a reference to a specification clause, a test, or an actual measure;
(d) Reasons (why it must be done): A brief explanation of the purpose of each step.
A flow chart is a good visual means of relating the steps to the whole task, particularly when the order of the steps is important or there are branches in the procedure.
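The following minimal sketch, offered purely as an illustration and not as part of WMO guidance, shows how such a task analysis form might be held as a simple data structure so that the steps can also be printed as a checklist for the trainee; the class names and the barometer-check example are hypothetical.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TaskStep:
        """One row of a task analysis form: what, how, to what standard, and why."""
        number: int
        step: str      # what must be done (begins with an active verb)
        method: str    # how it is to be done (method, equipment or skill required)
        measure: str   # the standard required (specification clause, test or measure)
        reason: str    # why it must be done

    @dataclass
    class TaskAnalysisForm:
        """A whole task, with its objective and required standard of performance."""
        task: str
        objective: str
        performance_standard: str
        steps: List[TaskStep] = field(default_factory=list)

        def checklist(self) -> List[str]:
            """Return the steps as a simple checklist for the trainee."""
            return [f"{s.number}. {s.step} [{s.measure}]" for s in self.steps]

    # Hypothetical example: part of a routine barometer comparison
    form = TaskAnalysisForm(
        task="Routine check of a station barometer",
        objective="Compare the station barometer with the travelling standard",
        performance_standard="Agreement within the tolerance stated in the maintenance manual",
        steps=[
            TaskStep(1, "Install the travelling standard beside the station barometer",
                     "Follow the transport and settling procedure", "Settling time respected",
                     "Both instruments must be at the same temperature"),
            TaskStep(2, "Read both instruments and record the difference",
                     "Standard reading procedure", "Difference recorded to the stated resolution",
                     "Provides the comparison on which any correction is based"),
        ],
    )
    for line in form.checklist():
        print(line)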

5.4.3 Planning the training session

The training process consists of four stages, as shown in Figure 5.2:
(a) Planning:
(i) Review the training objectives, established by the employing organization or a standards-setting body (for example, WMO);
(ii) Analyse the features of the body of knowledge, task or skill that is the subject of the session;


(iii) Review the characteristics of the students: qualifications, work experience, language ability, specific problems;
(iv) Assess the required level of training (which students may need special attention?);
(v) Determine the objectives for the session (what results are required, and how can they be measured?);
(b) Preparation:
(i) Select the course content: assemble information and organize it in a logical sequence;
(ii) Determine training methods and media appropriate to the topic, so as to create and maintain interest (see section 5.4.5);
(iii) Prepare a session plan: set out the detailed plan with the time of each activity;
(iv) Plan the evaluation: decide what information is required and how it is to be collected; select a method and prepare the questions or assignment;
(c) Presentation:
(i) Carry out the training, using the session plan;
(ii) Encourage active learning and participation;
(iii) Use a variety of methods;
(iv) Use demonstrations and visual aids;
(d) Evaluation:
(i) Carry out the planned evaluation with respect to the objectives;
(ii) Summarize the results;
(iii) Review the training session for effectiveness in the light of the evaluation;
(iv) Consider improvements in content and presentation;
(v) Write conclusions;
(vi) Apply the feedback to the next planning session.

All training will be more effective if these stages are worked through carefully and systematically.

5.4.4 Effectiveness of training

5.4.4.1 Targeted training

With the limited resources available for training, real effort should be devoted to maximizing its effectiveness. Training courses and resources should be dedicated to optimizing the benefits of training the right personnel at the most useful time. For example, too little training may be a waste of resources, sending management staff to a course for maintenance technicians would be inappropriate, and it is pointless to train people 12 months before they have access to new technology. Training opportunities and methods should be selected to best suit the knowledge and skill requirements and the trainees, bearing in mind their educational and national backgrounds. To ensure maximum effectiveness, training should be evaluated.

[Figure 5.2. Stages in the training process: a cycle of planning, preparation, presentation and evaluation, driven by the employer's objectives, with inputs from the employing organization and the students.]

5.4.4.2 Evaluating the training

Evaluation is a process of obtaining certain information and providing it to those who can influence future training performance. Several approaches to evaluating training may be applied, depending on who needs the information:
(a) WMO, which is concerned with improving the quality of data collected in the Global Observing System. It generates training programmes, establishes funds and uses the services of experts primarily to improve the skill base in developing countries;
(b) The National Meteorological Service, which needs quality weather data and is concerned with the overall capability of the division that performs data acquisition and particular instrumentation tasks within certain staff number constraints. It is interested in the budget and the cost-benefit of training programmes;
(c) The training department or Regional Training Centre, which is concerned with establishing training programmes to meet specified objectives within an agreed budget. Its trainers need to know how effective their methods are in meeting these objectives and how they can be improved;
(d) Engineering managers, who are concerned with having the work skills needed to accomplish their area of responsibility to the required standard and without wasting time or materials;
(e) Trainees, who are concerned with the rewards and job satisfaction that come with increased competence. They will want a training course to meet their needs and expectations.

Thus, the effectiveness of training should be evaluated at several levels. National and Regional Training Centres might evaluate their programmes annually and triennially, comparing the number of trainees in different courses and the pass levels against budgets and the objectives set at the start of each period. Trainers will need to evaluate the relevance and effectiveness of the content and presentation of their courses.

5.4.4.3 Types of evaluation

Types of evaluation include the following:
(a) A training report, which does not attempt to measure effectiveness. Instead, it is a factual statement of, for example, the type and number of courses offered, their dates and durations, the number of trainees trained and qualifying, and the total cost of training. In some situations, a report is required on the assessed capability of the student;
(b) Reaction evaluation, which measures the reaction of the trainees to the training programme. It may take the form of a written questionnaire through which trainees score, at the end of the course, their opinions about relevance, content, methods, training aids, presentation and administration. As such, this method cannot improve the training that those trainees receive; therefore, every training course should also have regular opportunities for review and student feedback through group discussion. This enables the trainer to detect any problems with the training or any individual's needs and to take appropriate action;
(c) Learning evaluation, which measures the trainee's new knowledge and skills, best compared against a pre-training test. Various forms of written test (essay, short-answer questions, true or false questions, multiple-choice questions, drawing a diagram or flow chart) can be devised to test a trainee's knowledge. Trainees may usefully test and score their own knowledge. Skills are best tested by a set practical assignment or by observation during on-the-job training (WMO, 1990). A checklist of the required actions and skills (an observation form) for the task may be used by the assessor;
(d) Performance evaluation, which measures how the trainee's performance on the job has changed some time after the training, best compared with a pre-training assessment. This evaluation may be carried out by the employer at least six weeks after training, using an observation form, for example. The training institution may also make an assessment by sending questionnaires to both the employer and the trainee;
(e) Impact evaluation, which measures the effectiveness of training by determining the change in an organization or work group. This evaluation may require planning and the collection of baseline data before and after the specific training. Some possible measures are the amount of bad data and the number of data elements missing in meteorological reports, the time taken to perform installations, and the cost of installations (see the illustrative sketch after this list).
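By way of illustration only, the short sketch below computes one such impact measure, the fraction of data elements missing from a set of reports before and after a training intervention; the report structure, field names and figures are hypothetical assumptions, not taken from this Guide.

    def missing_element_rate(reports):
        """Fraction of expected data elements that are missing across a set of reports.

        Each report is assumed to be a dict mapping element names to values,
        with None marking a missing element (a hypothetical structure).
        """
        expected = sum(len(r) for r in reports)
        missing = sum(1 for r in reports for v in r.values() if v is None)
        return missing / expected if expected else 0.0

    # Hypothetical baseline (before training) and follow-up (after training) samples
    before = [{"pressure": 1013.2, "temperature": None, "wind": 5.1},
              {"pressure": None, "temperature": 21.4, "wind": None}]
    after = [{"pressure": 1012.8, "temperature": 20.9, "wind": 4.7},
             {"pressure": 1011.5, "temperature": None, "wind": 3.9}]

    print(f"Missing elements before training: {missing_element_rate(before):.1%}")
    print(f"Missing elements after training:  {missing_element_rate(after):.1%}")

The same pattern could be applied to the other measures mentioned above, such as installation times or costs, provided that baseline data are collected before the training.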

5.4.4.4 Training for trainers

Trainers also require training, to keep abreast of technological advances, to learn about new teaching techniques and media, and to catch a fresh vision of their work. There should be provision in the NMHS's annual budget to allow its training staff to take up training opportunities, probably in rotation. Some options are: personal study; short courses (including courses on teaching skills) run by technical institutes; time out for study towards higher qualifications; visits to the factories of meteorological equipment manufacturers; visits and secondments to other NMHSs and RICs; and attendance at WMO and other training and technical conferences.

5.4.5 Training methods and media

The following list, arranged in alphabetical order, contains only brief notes to serve as a reminder or to suggest possibilities for training methods (more details may be found in many other sources, such as Moss (1987) and Craig (1987)):

(a) Case study:
(i) A particular real-life problem or development project is set up for study by individuals, or often a team;
(ii) The presentation of the results could involve formal documentation, as would be expected in a real situation;
(b) Classroom lecture:
(i) This is most suitable for developing an understanding of information which is best mediated in spoken and written form: basic knowledge, theoretical ideas, calculations, procedures;
(ii) Visual media and selected printed handout material are very useful additions;
(iii) There should be adequate time for questions and discussion;
(iv) Lectures tend to be excessively passive;
(c) Computer-assisted instruction:
(i) This uses the capability of the personal computer to store large amounts of text and images, organized by the computer program into learning sequences, often with some element of interactive choice by the student through menu lists and screen selection buttons;
(ii) The logical conditions and the branching and looping structures of the program simulate the learning process of selecting a topic for study based on the student's needs, presenting information, testing for understanding with optional answers and then directing revision until the correct answer is obtained (an illustrative sketch of such a test-and-revise loop is given after this list);
(iii) Some computer languages, for example, ToolBook for the IBM personal computer and HyperCard for the Macintosh, are designed specifically for authoring and presenting interactive training courses in what are known as “hypermedia”;
(iv) Modern systems use colour graphic screens and may include diagrams, still pictures and short moving sequences, while a graphical user interface is used to improve the interactive communication between the student and the program;
(v) Entire meteorological instrument systems, for example, for upper-air sounding, may be simulated on the computer;
(vi) Elaborate systems may include a laser video disc or DVD player or CD-ROM cartridge on which large amounts of text and moving image sequences are permanently stored;
(vii) The software development and capital costs of computer-assisted instruction systems range from modest to very great; such systems are beginning to replace multimedia and video tape training aids;
(d) Correspondence courses:
(i) The conventional course consists of lessons with exercises or assignments which are mailed to the student at intervals;
(ii) The tutor marks the assignments and returns them to the student with the next lesson;
(iii) Sometimes it is possible for students to discuss difficulties with their tutor by telephone;
(iv) Some courses may include audio or video tapes, or computer disks, provided that the student has access to the necessary equipment;
(v) At the end of the course an examination may be held at the training centre;
(e) Demonstrations:
(i) The tutor demonstrates techniques in a laboratory or working situation;
(ii) This is necessary for the initial teaching of manual maintenance and calibration procedures;
(iii) Students must have an opportunity to try the procedures themselves and ask questions;
(f) Distance learning:
(i) Students follow a training course, which is usually part-time, in their own locality and at times that suit their work commitments, remote from the training centre and their tutor;
(ii) Study may be on an individual or group basis;
(iii) Some institutions specialize in distance-learning capability;
(iv) Distance learning is represented in this section by correspondence courses, television lectures and distance learning with telecommunications;
(g) Distance learning with telecommunications:
(i) A class of students is linked by special telephone equipment to a remote tutor. They study from a printed text. Students each have a microphone which enables them to enter into discussions and engage in question and answer dialogue. Any reliable communications medium could be used, including satellite, but obviously communications costs will be an issue;
(ii) In more elaborate and costly systems, all students have computers that are linked to each other and to the remote tutor's computer via a network; or the tutor teaches from a special kind of television studio and appears on a television monitor in the remote classroom, which also has a camera and microphones so that the tutor can see and hear the students;
(h) Exercises and assignments:
(i) These often follow a lecture or demonstration;
(ii) They are necessary so that students actively assimilate and use their new knowledge;
(iii) An assignment may involve research or be a practical task;
(i) Exhibits:
(i) These are prepared display material and models which students can examine;
(ii) They provide a useful overview when the real situation is complex or remote;
(j) Field studies and visits:
(i) Trainees carry out observing practices and study instrument systems in the field environment, most usefully during installation, maintenance or calibration;
(ii) Visits to meteorological equipment manufacturers and other Meteorological Services will expand the technical awareness of specialists;
(k) Group discussion/problem solving:
(i) The class is divided into small groups of four to six persons;
(ii) The group leader should ensure that all students are encouraged to contribute;
(iii) A scribe or recorder notes ideas on a board in full view of the group;
(iv) In a brainstorming session, all ideas are accepted in the first round without criticism; the group then explores each idea in detail and ranks its usefulness;
(l) Job rotation/secondment:
(i) According to a timetable, the student is assigned to a variety of tasks with different responsibilities, often under different supervisors or trainers, in order to develop comprehensive work experience;
(ii) Students may be seconded for a fixed term to another department, manufacturing company or another Meteorological Service in order to gain work experience that cannot be obtained in their own department or Service;
(iii) Students seconded internationally should be very capable and are usually supported by bilateral agreements or scholarships;
(m) Multimedia programmes:
(i) These include projection transparencies, video tapes and computer DVDs and CD-ROMs;
(ii) They require access to costly equipment which must be compatible with the media;
(iii) They may be used for class or individual study;
(iv) The programmes should include exercises, questions and discussion topics;
(v) Limited material is available for meteorological instrumentation;
(n) One-to-one coaching:
(i) The tutor works alongside one student who needs training in a specific skill;
(ii) This method may be useful for both remedial and advanced training;
(o) On-the-job training:
(i) This is an essential component of the training process, in which the trainee learns to apply the formally acquired skills to the wide variety of tasks and problems which confront the specialist. All skills are learnt best by exercising them;
(ii) Certain training activities may be best conducted in the on-the-job mode, following the necessary explanations and cautions. These include all skills requiring a high level of manipulative ability and for which it is difficult or costly to reproduce the equipment or conditions in the laboratory or workshop. Examples are the installation of equipment, certain maintenance operations and complex calibrations;
(iii) This type of training uses available personnel and equipment resources, does not require travel, special training staff or accommodation, and is specific to local needs. It is particularly relevant where practical training far outweighs theoretical study, such as for training technicians;
(iv) The dangers are that on-the-job training may be used by default as the “natural” training method in cases where more structured training with a sound theoretical component is required to produce fully rounded specialists; that supervisors with indifferent abilities may be used; that training may be too narrow in scope and have significant gaps in skills or knowledge; and that the effectiveness of training may not be objectively measured;
(v) The elements necessary for successful on-the-job training are as follows:
a. A training plan that defines the skills to be acquired;
b. Work content covering the required field;
c. A work supervisor who is a good trainer skilled in the topic, has a good teaching style and is patient and encouraging;
d. Adequate theoretical understanding to support the practical training;
e. A work diary for the trainee to record the knowledge acquired and skills mastered;
f. Progress reviews conducted at intervals by the training supervisor;
g. An objective measure of successfully acquired skills (by observation or tests);
(p) Participative training:
(i) This gives students active ownership of the learning process and enables knowledge and experience to be shared;
(ii) Students are grouped in teams or syndicates and elect their own leaders;
(iii) This is used for generating ideas, solving problems, making plans, developing projects, and providing leadership training;
(q) Peer-assisted learning:
(i) This depends on prior common study and preparation;
(ii) In small groups, students take it in turns to be the teacher, while the other students learn and ask questions;
(r) Programmed learning:
(i) This is useful for students who are not close to tutors or training institutions;
(ii) Students work individually at their own pace using structured prepared text, multimedia or computer-based courses;
(iii) Each stage of the course provides self-testing and revision before moving on to the next topic;
(iv) Training materials are expensive to produce and course options may be limited.
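As a purely illustrative sketch, and not a method prescribed by this Guide, the fragment below shows the kind of test-and-revise loop described under computer-assisted instruction in item (c) above: a topic is presented, understanding is tested with optional answers, and revision is directed until the correct answer is given. The topic content and function names are hypothetical.

    # Hypothetical topic list: each entry has notes, a question, options and the correct option.
    TOPICS = [
        {
            "title": "Reading a station barometer",
            "notes": "Tap the instrument lightly before reading and apply the published correction.",
            "question": "What should be applied to a raw barometer reading?",
            "options": ["Nothing", "The published correction", "A random offset"],
            "answer": 1,
        },
    ]

    def present(topic):
        print(f"\n--- {topic['title']} ---")
        print(topic["notes"])

    def test_until_correct(topic):
        """Offer optional answers and direct revision until the correct answer is chosen."""
        while True:
            print(topic["question"])
            for i, option in enumerate(topic["options"]):
                print(f"  {i}. {option}")
            choice = int(input("Your answer: "))
            if choice == topic["answer"]:
                print("Correct - moving on.")
                return
            print("Not quite - please revise the notes and try again.")
            present(topic)  # directed revision before the question is asked again

    def run_course():
        for topic in TOPICS:
            present(topic)
            test_until_correct(topic)

    if __name__ == "__main__":
        run_course()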

Good teaching is of greater value than expensive training aids.

5.4.6 Television lectures

Some teaching institutions which provide predominantly extramural courses broadcast lectures to their correspondence students over a special television channel, or at certain times on a commercial channel.

5.4.7 Video programmes

Video programmes offer a good training tool because of the following:
(a) They provide a good medium for recording and repeatedly demonstrating procedures when access to the instrument system and a skilled tutor is limited;
(b) The programme may include pauses for questions to be discussed;
(c) A video programme can be optimized by combining it with supplementary written texts and group discussions;
(d) Although professionally made videos are expensive and there is limited material available on meteorological instruments, amateurs can make useful technical videos for local use with modest equipment costs, particularly with careful planning and if a sound track is added subsequently.

5.5 Resources for training

Other than the media resources suggested in the previous section, trainers and managers should be aware of the sources of information and guidance available to them; the external training opportunities which are available; the training institutions which can complement their own work; and, not least, the financial resources which support all training activities.

5.5.1 Training institutions

5.5.1.1 National education and training institutions

In general, NMHSs will be unable to provide the full range of technical education and training required by their instrument specialists, and so will have varying degrees of dependence on external educational institutions for training, supplementary and refresher training in advanced technology. Meteorological engineering managers will need to be conversant


with the curricula offered by their national institutions so that they can advise their staff on suitable education and training courses. WMO (2002a; 2002b) give guidance on the syllabi necessary for the different classes of instrument specialists. When instrument specialists are recruited from outside the NMHS to take advantage of well-developed engineering skills, it is desirable that they have qualifications from a recognized national institution. They will then require further training in meteorology and its specific measurement techniques and instrumentation.

5.5.1.2 The role of WMO Regional Instrument Centres in training

On the recommendation of CIMO,2 several WMO regional associations set up RICs to maintain standards and provide advice. Their terms of reference and locations are given in Part I, Chapter 1, Annex 1.A. RICs are intended to be centres of expertise on instrument types, characteristics, performance, application and calibration. They will have a technical library on instrument science and practice, laboratory space and demonstration equipment, and will maintain a set of standard instruments with calibrations traceable to international standards. They should be able to offer information, advice and assistance to Members in their Region. Where possible, these centres will combine with a Regional Radiation Centre and should be located within or near an RTC in order to share expertise and resources. A particular role of RICs is to assist in organizing regional training seminars or workshops on the maintenance, comparison and calibration of meteorological instruments and to provide facilities and expert advisors. RICs should aim to sponsor the best teaching methods and provide access to training resources and media which may be beyond the resources of NMHSs. The centres will need to provide refresher training for their own experts in the latest available technology and training methods in order to maintain their capability. Manufacturers of meteorological instrumentation systems could be encouraged to sponsor training sessions held at RICs.

5.5.2 WMO training resources

5.5.2.1 WMO education and training syllabi

WMO (2002a; 2002b) include syllabi for specialization in meteorological instruments and in meteorological telecommunications. The education and training syllabi are guidelines that need to be interpreted in the light of national needs and technical education standards.

5.5.2.2 WMO survey of training needs

WMO conducts a periodic survey of training needs by Regions, classes and meteorological specialization. This guides the distribution and kind of training events sponsored by WMO over a four-year period. It is important that Member countries include a comprehensive assessment of their need for instrument specialists in order that WMO training can reflect true needs.

5.5.2.3 WMO education and training publications

These publications include useful information for instrument specialists and their managers. WMO (1986) is a compendium in two volumes of lecture notes on training in meteorological instruments at technician level which may be used in the classroom or for individual study.

5.5.2.4 WMO training library

The library produces a catalogue (WMO, 1983) of training publications, audiovisual aids and computer diskettes, some of which may be borrowed, or otherwise purchased, through WMO.

5.5.2.5 WMO instruments and observing methods publications

These publications, including reports of CIMO working groups and rapporteurs, instrument intercomparisons and so forth, provide instrument specialists with a valuable technical resource for training and reference.

5.5.2.6 Special WMO-sponsored training opportunities

2 Recommended by the Commission for Instruments and Methods of Observation at its ninth session (1985) through Recommendation 19 (CIMO-IX).

Managers of engineering groups should ensure that they are aware of technical training opportunities announced by WMO by maintaining contact


with their training department and with the person in their organization who receives correspondence concerning the following:
(a) Travelling experts/roving seminars/workshops: From time to time, CIMO arranges for an expert to conduct a specified training course, seminar or workshop in several Member countries, usually in the same Region. Alternatively, the expert may conduct the training event at an RIC or RTC, and students in the Region travel to the centre. The objective is to make the best expertise available at the lowest overall cost, bearing in mind the local situation;
(b) Fellowships: WMO provides training fellowships under its Technical Cooperation Programme. Funding comes from several sources, including the United Nations Development Programme, the Voluntary Cooperation Programme, WMO trust funds, the regular budget of WMO and other bilateral assistance programmes. Short-term (less than 12 months) or long-term (several years) fellowships are for studies or training at universities, training institutes, or especially at WMO RTCs, and can come under the categories of university degree courses, postgraduate studies, non-degree tertiary studies, specialized training courses, on-the-job training, and technical training for the operation and maintenance of equipment. Applications cannot be accepted directly from individuals; instead, they must be endorsed by the Permanent Representative with WMO of the candidate's country. A clear definition must be given of the training required and the priorities. Given that it takes an average of eight months to organize a candidate's training programme because of the complex consultations between the Secretariat and the donor and recipient countries, applications are required well in advance of the proposed training period. This is only a summary of the conditions. Full information and nomination forms are available from the WMO Secretariat. Conditions are stringent and complete documentation of applications is required.

5.5.3 Other training opportunities

5.5.3.1 Technical training in other countries

Other than WMO fellowships, agencies in some countries offer excellent training programmes which may be tailored to the needs of the candidate. Instrument specialists should enquire about these opportunities with the country or agency representative in their own country.

5.5.3.2 Training by equipment manufacturers

This type of training includes the following:
(a) New data-acquisition system purchase: All contracts for the supply of major data-acquisition systems (including donor-funded programmes) should include an adequate allowance for the training of local personnel in system operation and maintenance. The recipient Meteorological Service representatives should have a good understanding of the training offered and should be able to negotiate in view of their requirements. While training for a new system is usually given at the commissioning stage, it is useful to allow for a further session after six months of operational experience or when a significant maintenance problem emerges;
(b) Factory acceptance/installation/commissioning: Work concerned with the introduction of a major data-acquisition facility, for example, a satellite receiver or radar, provides unique opportunities for trainees to provide assistance and learn the stringent technical requirements. Acceptance testing is the process of putting the system through agreed tests to ensure that the specifications are met before the system is accepted by the customer and despatched from the factory. During installation, the supplier's engineers and the customer's engineers often work together. Other components, such as a building, the power supply, telecommunications and data processing, may need to be integrated with the system installation. Commissioning is the process of carrying out agreed tests on the completed installation to ensure that it meets all the specified operational requirements. A bilateral training opportunity arises when a country installs and commissions a major instrumentation system and trainees can be invited from another country to observe and assist in the installation.

5.5.3.3 International scientific programmes

When international programmes, such as the World Climate Programme, the Atmospheric Research and Environment Programme, the Tropical Cyclone Programme or the Tropical Ocean and Global Atmosphere Programme, conduct large-scale experiments, there may be opportunities for local instrument specialists to be associated with senior colleagues in the measurement programme and thereby to gain valuable experience.

5.5.3.4 International instrument intercomparisons sponsored by the Commission for Instruments and Methods of Observation

From time to time, CIMO nominates particular meteorological measurements for investigation as a means of advancing the state of knowledge. Instruments of diverse manufacture, supplied by Members, are compared under standard conditions using the facilities of the host country. An organizing committee plans the intercomparison and, in its report, describes the characteristics and performance of the instruments. If they can be associated with these exercises, instrument specialists would benefit from involvement in some of the following activities: experimental design, instrument exposure, operational techniques, data sampling, data acquisition, data processing, and the analysis and interpretation of results. If such intercomparisons can be conducted at RICs, the possibility of running a parallel special training course might be explored.

5.5.4 Budgeting for training costs

The meteorological engineering or instrumentation department of every NMHS should provide an adequate and clearly identified amount for staff training in its annual budget, related to the Service's personnel plan. A lack of training also has a cost: mistakes, accidents, wastage of time and material, staff frustration, and a high staff turnover, resulting in poor-quality data and meteorological products.

5.5.4.1 Cost-effectiveness

Substantial costs are involved in training activities, and resources are always likely to be limited. It is therefore necessary that the costs of various training options be identified and compared, that the cost-effectiveness of all training activities be monitored, and that appropriate decisions be taken. Overall, the investment in training by the NMHS must be seen to be of value to the organization.

5.5.4.2 Direct and indirect costs

Costs may be divided into the direct costs of operating certain training courses and the indirect or overhead costs of providing the training facility. Each training activity could be assigned some proportion of the overhead costs as well as the direct operating costs (a simple illustrative sketch of such an apportionment is given after the list below). If the facilities are used by many activities throughout the year, the indirect cost apportioned to any one activity will be low and the facility is being used efficiently. Direct operating costs may include trainee and tutor travel, accommodation, meals and daily expenses, course and tutor fees, WMO staff costs, student notes and specific course consumables, and trainee time away from work. Indirect or overhead costs could include those relating to training centre buildings (classrooms, workshops and laboratories), equipment and running costs, teaching and administration staff salaries, WMO administration overheads, the cost of producing course materials (new course design, background notes, audiovisual materials), and general consumables used in training. In general, overall costs for the various modes of training may be roughly ranked from the lowest to the highest as follows (depending on the efficiency of resource use):
(a) On-the-job training;
(b) Correspondence courses;
(c) Audiovisual courses;
(d) Travelling expert/roving seminar, in situ course;
(e) National course with participants travelling to a centre;
(f) Computer-aided instruction (high initial production cost);
(g) Regional course with participants from other countries;
(h) Long-term fellowships;
(i) Regional course at a specially equipped training centre.
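As a purely illustrative sketch of the apportionment idea described above (the cost categories come from this section, but the apportionment rule, names and figures are hypothetical assumptions), overhead might be shared among training activities in proportion to their use of the facility, and a cost per trainee derived from the total:

    def activity_cost(direct_cost, facility_days_used, total_facility_days, annual_overhead):
        """Direct cost plus a share of overhead, apportioned by facility use (an assumed rule)."""
        overhead_share = annual_overhead * (facility_days_used / total_facility_days)
        return direct_cost + overhead_share

    # Hypothetical figures for one year of a training facility
    annual_overhead = 120000.0   # buildings, equipment, staff salaries, course production
    total_facility_days = 200    # days the facility is used by all activities in the year

    # Hypothetical five-day course for 10 trainees
    direct = 5 * (10 * 150.0) + 2000.0   # travel, accommodation and expenses plus tutor fees
    total = activity_cost(direct, facility_days_used=5,
                          total_facility_days=total_facility_days,
                          annual_overhead=annual_overhead)
    print(f"Total cost of the course: {total:.0f}")
    print(f"Cost per trainee: {total / 10:.0f}")

Under this assumed rule, the more the facility is used over the year, the smaller the overhead share borne by any one course, which reflects the efficiency point made above.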


ANNEX

REGIONAL TRAINING CENTRES

Country (WMO Region): Name of centre
Algeria (I): Hydrometeorological Institute for Training and Research (IHFR), Oran
Angola (I)(a): Regional Training Centre, Mulemba
Egypt (I): Regional Training Centre, Cairo
Kenya (I)(b): Institute for Meteorological Training and Research, Nairobi; and Department of Meteorology, University of Nairobi, Nairobi
Madagascar (I): École supérieure polytechnique d'Antananarivo, University of Antananarivo, Antananarivo
Niger (I): African School of Meteorology and Civil Aviation (EAMAC), Niamey; and Regional Training Centre for Agrometeorology and Operational Hydrology and their Applications (AGRHYMET), Niamey
Nigeria (I)(b): Meteorological Research and Training Institute, Lagos; and Department of Meteorology, Federal University of Technology, Akure
China (II)(b): Nanjing Institute of Meteorology, Nanjing; and China Meteorological Administration Training Centre, Beijing
India (II)(b): Telecommunication and Radiometeorological Training Centre, New Delhi; and Training Directorate, Pune
Iran (Islamic Republic of) (II): Advanced Meteorological Sciences Training Centre, Tehran
Iraq (II): Regional Training Centre, Baghdad
Uzbekistan (II): Hydrometeorological Technical School, Tashkent
Argentina (III)(b): Department of Atmospheric Science, University of Buenos Aires, Buenos Aires; and Department of Education and Training of the National Meteorological Service, Buenos Aires
Brazil (III): Department of Meteorology, Federal University of Pará, Belém
Venezuela (III): Department of Meteorology and Hydrology, Central University of Venezuela, Caracas
Barbados (IV)(b): Caribbean Institute for Meteorology and Hydrology, Bridgetown; and University of the West Indies, Bridgetown
Costa Rica (IV): Section of Atmospheric Physics, School of Physics, University of Costa Rica, San José
Philippines (V)(b): Department of Meteorology and Oceanography, University of the Philippines; and Training Centre of the Philippine Atmospheric, Geophysical and Astronomical Services Administration (PAGASA), Quezon City
Israel (VI): Postgraduate Training Centre for Applied Meteorology, Bet Dagan
Italy (VI)(b): International School of Meteorology of the Mediterranean, Erice, Sicily; and Institute of Agrometeorology and Environment Analysis for Agriculture, Florence
Russian Federation (VI)(b): Advanced Training Institute and Moscow Hydrometeorological College, Moscow; and Russian State Hydrometeorological Institute, St Petersburg
Turkey (VI): Anatolian Meteorological Technical High School, Ankara

(a) RTC Angola re-opening operations are under way.
(b) These centres have a university component.


REFERENCES AND FURTHER READING

Craig, R.L. (ed.), 1987: Training and Development Handbook: A Guide to Human Resource Development. McGraw-Hill, New York.
Imai, M., 1986: Kaizen: The Key to Japan's Competitive Success. Random House, New York.
International Organization for Standardization, 1994a: Quality Management and Quality Assurance Standards – Part 1: Guidelines for Selection and Use. ISO 9000-1:1994, Geneva.
International Organization for Standardization, 1994b: Quality Management and Quality System Elements – Part 1: Guidelines. ISO 9004-1:1994, Geneva.
Moss, G., 1987: The Trainer's Handbook. Ministry of Agriculture and Fisheries, New Zealand.
Walton, M., 1986: The Deming Management Method. Putnam Publishing, New York.
World Meteorological Organization, 1983: Catalogue of Meteorological Training Publications and Audiovisual Aids. Third edition, Education and Training Programme Report No. 4, WMO/TD-No. 124, Geneva.
World Meteorological Organization, 1986: Compendium of Lecture Notes on Meteorological Instruments for Training Class III and Class IV Meteorological Personnel (D.A. Simidchiev). WMO-No. 622, Geneva.
World Meteorological Organization, 1989: Guide on the Global Observing System. WMO-No. 488, Geneva.
World Meteorological Organization, 1990: Guidance for the Education and Training of Instrument Specialists (R.A. Pannett). Education and Training Programme Report No. 8, WMO/TD-No. 413, Geneva.
World Meteorological Organization, 2002a: Guidelines for the Education and Training of Personnel in Meteorology and Operational Hydrology. Volume I: Meteorology. Fourth edition, WMO-No. 258, Geneva.
World Meteorological Organization, 2002b: Initial Formation and Specialisation of Meteorological Personnel: Detailed Syllabus Examples. WMO/TD-No. 1101, Geneva.

APPENDIX

LIST OF CONTRIBUTORS TO THE GUIDE

Artz, R. (United States) Ball, G. (Australia) Behrens, K. (Germany) Bonnin, G.M. (United States) Bower, C.A. (United States) Canterford, R. (Australia) Childs, B. (United States) Claude, H. (Germany) Crum, T. (United States) Dombrowsky, R. (United States) Edwards, M. (South Africa) Evans, R.D. (United States) Feister, E. (Germany) Forgan, B.W. (Australia) Hilger, D. (United States) Holleman, I. (Netherlands) Hoogendijk, K. (Netherlands) Johnson, M. (United States) Klapheck, K.-H. (Germany) Klausen, J. (Switzerland) Koehler, U. (Germany) Ledent, T. (Belgium)

Luke, R. (United States) Nash, J. (United Kingdom) Oke, T. (Canada) Painting, D.J. (United Kingdom) Pannett, R.A. (New Zealand) Qiu Qixian (China) Rudel, E. (Austria) Saffle, R. (United States) Schmidlin, F.J. (United States) Sevruk, B. (Switzerland) Srivastava, S.K. (India) Steinbrecht, W. (Germany) Stickland, J. (Australia) Stringer, R. (Australia) Sturgeon, M.C. (United States) Thomas, R.D. (United States) Van der Meulen, J.P. (Netherlands) Vanicek, K. (Czech Republic) Wieringa, J. (Netherlands) Winkler, P. (Germany) Zahumensky, I. (Slovakia) Zhou Weixin (China)


Similar Documents

Premium Essay

Hbs Case

...HBS Case-SYSCO 1. Why does SYSCO need BI? SYSCO needs Business Intelligence to increase efficiency by organizing information generated by its operations. SYSCO has two different divisions, the broad-line companies and the specialty companies, which have separated profit and loss statements. With the use of BI, people can easily access the statements between these different divisions. There is a lack of consistency between part numbers, customer identifications, order statuses, and other important information which makes it difficult for managers to monitor and compare performance. 2. How can SYSCO take advantage of BI? Business Intelligence can be advantageous to SYSCO by providing statistical analysis, graphical representations and access to important data. With the use of dashboards, users at every level can easily access summaries of important information. BI software can combine data from separate warehouses and databases to help managers make better business decisions. With all the information centralized, managers save time when accessing other divisions’ databases. BI uses data mining to automatically sort through large pools of data to determine trends and patterns that could have otherwise been overlooked by managers. 3. What are the potential obstacles? The potential obstacle of Business Intelligence implementation is determining how much software to buy and when to buy it. SYSCO needs to determine the correct balance of software with the current needs...

Words: 349 - Pages: 2

Premium Essay

Instrument Flight

...is. Optical illusions like this could cause a lower than normal approach, due to the runway looking smaller or farther away. This illusion will cause a pilot to run the risk of a collision with an object or possibly even not make it to the runway due to the approach being too low. I want an instrument rating because I believe it makes a safer more confident pilot. I can mention several occasions of pilots getting into an IMC condition while flying VFR, JFK’s aircraft, most recent a Blackhawk helicopter locally that ran into fog and crashed, speculation is the pilot became disoriented. On the coast fog will roll in with zero visibility in 20 -30 minutes on occasion. “Many accidents are the result of pilots who lack the necessary skills or equipment to fly in marginal visual meteorological conditions (VMC) or IMC and attempt flight without outside references.”(FAA, 2012, p. vii) Key principles to instrument flight are trust your instruments not what your body is telling you. Successfully recognize errors in your instruments and what to do when these situations arise. During instrument flight when a pilot becomes disoriented he should try to obtain the horizon, trust his/her instruments, and ignore what your body is...

Words: 453 - Pages: 2

Premium Essay

Bizintel

...Evaluate a Business Intelligence initiative that has been undertaken within your organization or one potential application of Business Intelligence within your organization? I work in the cruise industry. I never really knew how much Business Intelligence was a science. Granted, I knew we have analysts and copious amounts of data, but I never truly gave it much thought. After studying BI, I have come to realize BI is an essential aspect of my organization’s success. From measuring productivity of different departments (such as revenue, event planning, individual & groups reservations, cruising power, our company website, shipboard management, etc…) to understanding travel partner and guests needs and staying ahead of the competition, we wouldn’t survive without BI metrics and applications. Because I work in the event planning division of my company, I am not familiar with the names or metrics used to evaluate important data, but I do know from experience and part of my job function, reports and data gathered are used to make judgments and decisions about new products and constant improvements for existing services we currently provide. Surveys are completed by our travel partners and guests, and even employees. We compile reports and present them to management electronically. Our research, experience, and use of different applications, along with our IT departments, helps management and executives determine which direction to move forward. Feedback from our travel...

Words: 526 - Pages: 3

Premium Essay

Popular Business Journals

...Business Intelligence is very important for a business and it can either help or hurt a business depending on how it is used. Now if it is used correctly this can create more much efficient ways of going about a companies process or getting information and making big decisions. The biggest thing with business intelligence is getting the information that you are looking for easily compared to what some other companies do, by using long formulas on a spreadsheet which could sometimes take hours to collect to get what could be accurate information. While looking through some businesses that have implemented BI (Business Intelligence) into their company, I was able to determine that they all became more efficient and in the end used less man hours getting information that they were looking for. Some businesses try collecting business information and it comes from a variety of different places and sources, to be put together in one area. This for one takes time to gather and two is much less efficient, requiring more people and more hours, causing a company to lose money. Sometimes the information may not be entirely accurate as well because of how long it takes to gather up and by the time they get the information they are looking for it is already out of date. In the end business intelligence eliminates "guess work" which a lot of higher-up in businesses have to do sometimes because of their business lacking the data structure to get the information needed. Another thing...

Words: 870 - Pages: 4

Premium Essay

Bi Bi

...Business Intelligence Redefining Management Processes * Project Category: Business Intelligence * Project Topic: Business Intelligence-Redefining Management Processes * Contributor  Details: * Name: Nishikant Harjai (11 Ex-037) * Synopsis of the Project Today, Indian firms have been exploring ways to improve their business practices and procedures to gain competitive advantage. One of the most talked about Information Technology (IT) enabled business innovation is the emergence of Business Intelligence (BI). A well implemented BI can bring manifold benefits to an organization. However, the implantation is resource intensive and complex, with success dependent upon various critical success factors. The successful implementation of BI depends upon several factors. Indian organizations hesitate to adopt BI as they feel higher risks are involved and the returns are not assured in India. The demands of the industry may vary in time in India. Indian firms are of opinion that the products have limited amount of customization paradigm, hence greater complexity involved to adopt BI. This paper focuses on two aspects of BI in India: prospects and challenges. It tries to identify the major issues and concerns in successfully implementing BI in India. A business is operated with the objective of making a profit from the sale of goods or services. Business Intelligence enables the comprehension, understanding and profit from experience. Business data and information...

Words: 581 - Pages: 3

Free Essay

Satellites

...of Space, Department of Telecommunications, India Meteorological Department, All India Radio and Doordarshan. The overall coordination and management of INSAT system rests with the Secretary-level INSAT Coordination Committee. INSAT satellites provide transponders in various bands (C, S, Extended C and Ku) to serve the television and communication needs INSAT 1B of India. Some of the satellites also have the Very High Resolution Radiometer (VHRR), CCD cameras for metrological imaging. The satellites also incorporate transponder(s) for receiving distress alert signals for search and rescue missions in the South Asian and Indian Ocean Region, as ISRO is a member of the Cospas-Sarsat programme. INSAT system The Indian National Satellite (INSAT) system was commissioned with the launch of INSAT-1B in August 1983 (INSAT-1A, the first satellite was launched in April 1982 but could not fulfill the mission). INSAT system ushered in a revolution in India’s television and radio broadcasting, telecommunications and meteorological sectors. It enabled the rapid expansion of TV and modern telecommunication facilities to even the remote areas and off-shore islands. Together, the system provides transponders in C, Extended C and Ku bands for a variety of communication services. Some of the INSATs also carry instruments for meteorological observation and data relay for providing meteorological services. KALPANA-1 is an exclusive meteorological satellite. The...

Words: 2200 - Pages: 9

Free Essay

Marketing

...have heard of the DiVito’s Bakery and how many have shopped there and if so how recent do they stop in this store. What do you already know about these changes? I already know that we have to make changes to perform survey’s to find out more information about the different cultures and what types of bakery foods are the types that they eat the most. It is important to know this so that we can change and supply these products so that they will come to this bakery and become regular customers it might be a good idea to have coffee and tea to allow people to stop in and taste the many products to make decisions on if it is somewhere that they would frequent to buy the products that they use the most. Develop effective research instruments- Instrument is the generic term that researchers use for a measurement device such as a...

Words: 1080 - Pages: 5

Free Essay

Air Pressure

...Science Project: Air Pressure (Egg in a Bottle) Introduction Atmospheric pressure is the force that acts on any object by the weight of tiny air molecules. Molecules in the air are invisible but they still have weight and they occupy space. Air Pressure changes with the change in the altitude. As the altitude increases the Air Pressure decreases and the altitude decreases Air Pressure increases. At an higher altitudes we have to breathe more often than at the sea level. If you are on Mt Evans then your ears pop to maintain the pressure. This happens because we have to maintain the outside and inside pressure. Our weather patterns change because of the air pressure. Figure – 1 – Shows the air pressure based on the altitude Figure – 2 – Graph shows Pressure vs Altitude Objective There are various experiments that can be conducted to show Air Pressue. Let us do an Egg in a bottle experiment to show Air Pressure. Doing this experiment we will develop the hypothesis on the effects of air pressure on the egg and how it squeezes it into the bottle. Experiment: Egg in a bottle Things or items required to perform egg in a bottle: 1. Eggs 2. Saucepan and stove to boil the eggs. 3. Wide-mouth bottle and bottle mouth should be smaller than the boiled egg. 4. Vegetable oil. 5. Matches 6. Paper strips to burn Figure – 3 - Items required to perform the Experiment What need to be done with the items listed above? 1. Take the eggs and add water into saucepan and boil...

Words: 816 - Pages: 4

Free Essay

Aviation Safety Program

...AVIATION SAFETY PROGRAM
EASTERN SKY AIRLINES
DIEGO LUIS PALACIN ENDERS

INDEX
1. SECTION ONE: SAFETY POLICY
2. SECTION TWO: SAFETY AND HEALTH RESPONSIBILITIES
3. SECTION THREE: EMPLOYEE PARTICIPATION
4. SECTION FOUR: SAFETY RULES AND REGULATIONS
5. SECTION FIVE: DISCIPLINARY POLICY
6. SECTION SIX: HAZARD RECOGNITION, PREVENTION AND CONTROL
7. SECTION SEVEN: ACCIDENT/INCIDENT REPORTING
8. SECTION EIGHT: EMERGENCY PLANNING AND RESPONSE
9. SECTION NINE: SAFETY AND HEALTH TRAINING AND EDUCATION
10. SECTION TEN: SAFETY AND HEALTH ASSISTANCE RESOURCES
11. SECTION ELEVEN: CONTACT INFORMATION

SECTION ONE: SAFETY POLICY
Safety is a team effort – let us all work together to keep this a safe and healthy workplace. Eastern Sky Airlines places high value on the safety of its employees and passengers. Eastern Sky Airlines is committed to providing a safe workplace for all employees and has developed this Aviation Safety Program for injury and accident prevention, involving management, supervisors and employees in identifying and eliminating or reducing hazards that may develop during ground or air operations. The objective of the Eastern Sky Airlines Safety Program is to create a safety culture in which we stress to all employees that safety is as important as any other business function. Only through the joint commitment of management and employees can workplace accidents and injuries be reduced or eliminated. Employees should be encouraged to not only work safely...

Words: 3713 - Pages: 15

Free Essay

Paper

...very catchy and repetitive, but still with a lot of energy and rhythm in this piece. October was more of a steady, laid-back piece, with many solo players, and I noticed it wasn't as loud as the previous two. Suite No. 2 in F major was a four-movement piece. The first movement was a march, which I found delightful. The second movement was a song without words called "I'll Love My Love"; this was a slower, calmer piece with higher-pitched instrument soloists. The third movement, titled "Song of the Blacksmith", sounded as if someone was working on a railroad, clanging on something. The last movement, titled "Fantasia on the 'Dargason'", was the best of the four movements to me; a lot of energy was put into this piece, and toward the end there was a duet between the highest- and lowest-range instruments. Pageant was very smooth and sweet, very peaceful, almost like a lullaby. Symphonic Dance No. 3, "Fiesta", was very strong and powerful; the sound was very full and makes you want to get up and dance. Things I noticed that really caught my eye were that the players were all very cordial and all instruments were lifted and set down on time. The man at the front of the band controlled the other players. When the conductor raises his baton, everyone...

Words: 476 - Pages: 2

Free Essay

Moved by Music

...lifestyles, and personal connections that they create with their fans, yet both are very distinct! Both bands, Motionless in White and Korn, belong to the metal genre. Motionless in White is classified as metalcore, also known as screamo; Korn is classified as nu-metal, in reference to the term new-age metal. The instruments involved in making this type of music include many of the basics: electric guitars, drums, etc. Though both bands use these instruments, Motionless in White's main instrument is the singer's voice. With the way they scream, most people would not understand them, but to their fans it is amazing that one person can sound as though a machine had distorted the voice, and making that presentable and understandable to the audience is an art of its own! Korn, on the other hand, has a very distinctive style in the way they set up their lyrics. With this band the music is important and has more of a rock feel to it than Motionless in White, but the lyrics seem to be the main focus of their music. They truly use the power of words as an instrument in its own right; they have a way of communicating their feelings. No matter which song you listen to by this band, you will be...

Words: 1106 - Pages: 5

Premium Essay

Act 1881

...Short essay on negotiable instruments in business law. The law relating to "negotiable instruments" is contained in the Negotiable Instruments Act, 1881. The Act extends to the whole of India. The Negotiable Instruments Act, 1881, has been amended more than a dozen times so far. The latest in the series are: (i) the Banking, Public Financial Institutions and Negotiable Instruments Laws (Amendment) Act, 1988 (effective from 1st April, 1989), and (ii) the Negotiable Instruments (Amendment and Miscellaneous Provisions) Act, 2002 (effective from 6th February, 2003). The provisions of all the Amendment Acts have been incorporated at the relevant places in Part IV of this book. The Negotiable Instruments Act, 1881, as amended up to date, deals with three kinds of negotiable instruments, i.e., Promissory Notes, Bills of Exchange and Cheques. Definition: The word negotiable means 'transferable by delivery', and the word instrument means 'a written document by which a right is created in favour of some person.' Thus, the term "negotiable instrument" literally means 'a written document transferable by delivery.' According to Section 13 of the Negotiable Instruments Act, "a negotiable instrument means a promissory note, bill of exchange or cheque payable either to order or to bearer." "A negotiable instrument may be made payable to two or more payees jointly, or it may be made payable in the alternative to one of two, or one or some of several payees"...

Words: 1111 - Pages: 5

Free Essay

Music

...What is music exactly? It is a way for sound to express feelings, emotions, ideas, thoughts and so on. There are different types of music, and among them we each come to like different ones, whether classical, pop, rock, blues, jazz, rap or something else. Some use music to soothe themselves, to relax, for fun, or to entertain. We sing, we dance, or we play an instrument along with it. As a musician I can say that it is my life; I let myself enjoy what I sing and, especially, what I play. Some instruments are difficult to play or learn; you have to devote a lot of time to master them. People try, but some give up because they don't want to sacrifice their time. But have you ever studied really hard and then got a perfect, or at least a very high, score? It is the same with practising an instrument or your own voice: the feeling that you have achieved a lot so far, the feeling that makes your heart flutter, especially if you once hated music and then started to like it, or if you were born with an interest in music from the start. When you see or hear someone at a higher level than you, you may envy him or her, but instead that person should become an inspiration, or a level to aim for. If you become a great musician you have to aim at another goal: leaving a legacy. Without one, all your efforts aren't worth it. A student, that is: someone to whom you pass on your knowledge and learning, your trusted disciple. Music is life. It shares the emotion of the listener...

Words: 295 - Pages: 2

Free Essay

Job Applicatin and Interview

...coffee, have a barbecue, or any other activity that you all enjoy. Or sometimes, when you don't do anything specific, you can say you are hanging out with friends. Surf the internet - On the internet, you can research a topic you are interested in using a search engine, visit your favourite websites, watch music videos, create your own video and upload it for other people to see, maintain contact with your friends using a social networking site, write your thoughts in a blog, learn what is happening in the world by reading news websites, etc. Play video games - You can play games on your computer or on a game console, like PlayStation, X-Box, Wii, PSP, Gameboy, etc. You can play on your own or with your friends or family. Play a musical instrument - Learn to play the piano, guitar, violin, cello, flute, piano accordion, mouth organ, panpipes, clarinet, saxophone, trumpet, etc. You can play on your own or with a group, such as a band or an orchestra. Listen to music - Turn up the...

Words: 553 - Pages: 3

Premium Essay

Holder and Holder in Due Course

...sue on a negotiable instrument unless he is named therein as the payee, or unless he becomes entitled to it as indorsee or becomes the bearer of an instrument payable to bearer. In the Full Bench case reported in Subba Narayana Vathiyar v. Ramaswami Aiyar,1 it has been held that in a suit on a negotiable instrument by the payee or indorsee, it is not open to the defendant to plead that the plaintiff is a mere benamidar not entitled to payment, with a view to showing that the note has been discharged by payment to the real owner. Again, in the Full Bench decision of the Patna High Court in Bacha Prasad v. Janaki,2 it has been held that a person who is not a holder of a negotiable instrument cannot maintain a suit for recovery of money due under it, even though the holder is admittedly the benamidar and is impleaded in the suit. In the said decision, it has also been held that "a beneficiary cannot be called a holder of the instrument and payment to him cannot discharge the maker thereof unless the case falls under section 82(c) of the Act". So also, it has been held in the decision reported in Subharaya v. Abiram,3 that a beneficiary does not become a holder of the instrument even upon getting a declaration that he is the beneficial owner and the payee is only a benamidar. In this connection it has to be noted that the Allahabad and Rajasthan High Courts have taken a slightly different view and held that in certain cases a beneficiary may maintain a suit on a negotiable instrument "if holder is also...

Words: 16119 - Pages: 65