COMPARISON OF ASOS AND OBSERVER

CEILING-HEIGHT AND VISIBILITY VALUES

Allan C. Ramsay *

Hughes STX Corporation, Sterling, Virginia

1. INTRODUCTION

The Automated Surface Observing System (ASOS) has been criticized by some members of the aviation community for failing to provide representative reports of the ceiling heights and visibilities that are important for flight operations. A joint National Weather Service (NWS) / Federal Aviation Administration (FAA) data-collection effort in 1995 provided a unique data set of over 10,000 hours of coincident ASOS and observer reports of ceiling height and visibility. This paper provides insight on the comparability of automated and manual observations, with special emphasis on differences in reports at the thresholds of 1000 feet for ceiling height and three miles for visibility, which define the breakpoint between Instrument Flight Rules (IFR) and Marginal Visual Flight Rules (MVFR) conditions.

2. THE 1995 AVIATION DEMONSTRATION PROJECT

In response to concerns from the aviation community, the NWS and the FAA conducted a study of twenty-five ASOS installations throughout the country from mid-February through mid-August, 1995. The study was based primarily on manual input from observers who were required to record instances when ASOS reports were considered to be "unrepresentative." The results of the study provided strong evidence that, in the subjective opinion of on-site observers, the ASOS provides representative observations of ceiling and visibility.

A small part of the 1995 study involved the collection of concurrent human observations and un-augmented ASOS observations from four locations: Allentown-Bethlehem, PA; Mobile, AL; and Salem and Portland, OR. (Salem and Portland observations did not overlap in time, and were effectively treated as a single data source covering the full time period of the evaluation.) ASOS installations were typically three to four thousand feet distant from, and 30 to 70 feet below, the observers or towers. The arrangement at each location was that observers were not permitted to modify ('augment') the ASOS observations, even if those observations were considered to be unrepresentative. Just over 10,000 hours of observations were acquired, with more than 600 hours in IFR conditions. Although observers were not "blind" to the automated observations, the analysis was completed on the assumption that the automated data did not influence the observers' reports.

Using the logic that an observation remains valid until it is replaced with a new observation, it was possible to compare manual and automated values of ceiling height and visibility for each minute of the day.
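This persistence logic can be sketched as follows. The function and variable names are illustrative assumptions, not the study's actual processing code: each report's value is carried forward, minute by minute, until a newer report replaces it, and the two resulting series are then compared element-wise.

```python
# Sketch (assumed logic): expand irregular reports to a minute-by-minute
# series by carrying each value forward until a newer report replaces it.
def to_minute_series(reports, n_minutes):
    """reports: list of (minute_issued, value) tuples, sorted by time."""
    series = [None] * n_minutes
    current = None
    idx = 0
    for minute in range(n_minutes):
        # Absorb every report issued at or before this minute.
        while idx < len(reports) and reports[idx][0] <= minute:
            current = reports[idx][1]
            idx += 1
        series[minute] = current
    return series

# Example with hypothetical ceiling reports (heights in feet):
obs = to_minute_series([(0, 1200), (90, 900)], 180)
asos = to_minute_series([(0, 1100), (60, 900)], 180)
minutes_disagreeing = sum(1 for a, b in zip(obs, asos) if a != b)  # 90 minutes
```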

3. "CLIMATOLOGY" OF THE DATA SET

The manual and automated observations were highly comparable in their over-all reporting of ceiling height and visibility. Tables 1 and 2 show the total number of hours with ceiling heights or visibilities below specific values, for both manual and automated observations. Table 3 shows the number of hours of specific flight conditions as reported by the observer or by the ASOS; Table 4 shows the fraction of IFR occurrences that were caused by either ceiling or visibility values.

Table 1 Ceiling-Height Climatology
Ceiling Height   Observer Hours   ASOS Hours   Joint Hours
No Ceiling       5006             6184         4769
>= 5000'         2569             1417         1080
< 5000'          2679             2653         2284
< 3000'          1502             1508         1271
< 1000'          448              463          358
< 500'           186              209          142
< 200'           45.7             85.7         35.6

Table 2 Visibility Climatology
Visibility (Miles)   Observer Hours   ASOS Hours   Joint Hours
>= 5                 9449             9463         9248
< 5                  805              791          590
< 4                  510              589          380
< 3                  360              358          239
< 1                  82.4             99           59.6
< 1/2                39.0             58.3         28.7
< 1/4                26.5             0.0          0.0

Table 3 IFR Climatology
Observer Hours ASOS Hours Joint Hours
641 658 501

Table 4 Causes of IFR Conditions
Cause Observer ASOS
Ceiling Only 0.43 0.45
Visibility Only 0.30 0.30
Ceiling and Visibility 0.27 0.27

4. DISTRIBUTION OF VALUES

Although the over-all climatology of the data set indicated high comparability between manual and automated observations, a closer look at the distribution of specific values of ceiling height and visibility revealed significant differences. Figures 1 and 2 illustrate the differences between observer and automated observations: manual observations have a tendency to concentrate on specific values, while automated observations are more evenly distributed over the range of values, and appear to more closely represent a "natural" distribution of values.

The preferential distribution of observer values has been documented before and is therefore not unexpected (Bradley and Lewis, 1998).

Preferential reporting of ceiling heights, in particular, raises questions because observers in this assessment had full access to the accurate (and commonly accepted) laser cloud-height measurements from the ASOS ceilometer. Of particular interest is the distribution of observer reports in the 900- to 1100-foot range of ceiling heights -- bracketing the ceiling-height threshold for Instrument Flight Rules.

When both the observer and the ASOS were reporting ceilings, why would an observer choose to differ from measured height values? Observers are conscientious professionals, but are also very conservative professionals, and members of an airport team whose primary mission is to ensure safe aircraft operations. An observer will not commonly "rush to judgment" to release a report of changing cloud heights until he or she is confident that the new cloud height represents a stable condition. This would be especially true when the cloud height is near an important threshold, such as 1000 feet. Air traffic controllers and pilots in the local area would be highly unappreciative of reports which change between IFR / MVFR / IFR / MVFR, etc., every few minutes. Unfortunately, this is precisely what the ASOS may report: the automated system doesn't know or care how many aircraft are on final approach or waiting to take off, and the ASOS doesn't hesitate to transmit a new "SPECI" whenever it senses a change in cloud height from 1100 to 900 feet, or vice versa. (During this 10,000-hour evaluation, observers issued 1187 off-hourly "special" reports, compared to 3378 issued by the ASOS.)

5. DISAGREEMENTS BETWEEN OBSERVER AND ASOS

While the comparability between observer and ASOS reports is good from a climatological perspective, flight operations may be impacted by minute-to-minute differences between manual and automated reports. This data set provided an opportunity to examine the minute-to-minute details of disagreements between observer and ASOS.

5.1 Over-all Ceiling and Visibility Disagreement

Table 1 and Figure 1, above, show significant disagreement concerning ceiling heights at or below 200 feet. This difference can be related directly to a characteristic of the ASOS laser ceilometer, which occasionally interprets partial ground-based obscurations as total obscurations (reported as Vertical Visibility, "VV") or as 100-foot cloud layers. The National Weather Service is aware of this ceilometer characteristic, and is evaluating technology which may provide more accurate reporting of vertical visibilities and cloud height in obscured or partially-obscured conditions.

Table 2 and Figure 2, above, show significant disagreements existed between observer and ASOS for values of visibility less than one mile. This category of disagreement is attributed to the different locations of the instruments (human eye vs ASOS visibility sensor) and also to the spatial variability of visibility values. It is probable that, in most cases, both the observer and the ASOS were reporting accurate values of visibility from their respective vantage points. As with real estate sales, the three most important issues regarding ASOS visibility reports are location, location, and location.


5.2 Duration of Observer / ASOS Disagreements

One characteristic of an observer/ASOS disagreement is the duration of the condition. Reports of changes in sky cover or visibility will be released as soon as they are identified by an ASOS, but may be delayed by an observer until the condition is determined to be stable. Delayed reporting may indicate a disagreement when all that is happening is a simple difference in responsiveness.

Table 5 illustrates ceiling-height and visibility threshold criteria that were used to define "significant" differences. For this 10,000-hour data set, ceiling heights were within the allowable differences 80% of the time, while visibilities were within the allowable differences 96% of the time.

Table 5 Allowable Observer/ASOS Differences
ELEMENT          CONDITION          ALLOWABLE DIFFERENCE
CEILING HEIGHT   <= 1000'           ± 200 feet
                 1100' - 5000'      ± 300 feet
                 5100' - 10000'     ± 500 feet
                 10100' - 12000'    ± 1000 feet
VISIBILITY       <= 2 miles         ± 1/2 mile
                 2 1/2 - 4 miles    ± 1 mile
                 > 4 miles          ± 2 reportable values
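The ceiling-height criteria in Table 5 can be expressed as a small helper. The function names are hypothetical, and the paper does not state which report anchors the "condition" column; the observer's value is assumed here:

```python
# Hypothetical helper applying the Table 5 ceiling-height criteria.
# Assumption: the observer's reported height selects the allowable band.
def ceiling_allowable_diff(obs_height_ft):
    """Return the allowable observer/ASOS difference, in feet."""
    if obs_height_ft <= 1000:
        return 200
    elif obs_height_ft <= 5000:
        return 300
    elif obs_height_ft <= 10000:
        return 500
    else:
        return 1000

def ceiling_significant(obs_ft, asos_ft):
    """True when the difference exceeds the Table 5 allowance."""
    return abs(obs_ft - asos_ft) > ceiling_allowable_diff(obs_ft)

# Example: a 900' observer ceiling vs a 1200' ASOS ceiling differs by
# 300 feet, exceeding the 200-foot allowance for ceilings at or below 1000'.
```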


The median duration of episodes of significantly different values of ceiling height was 25 minutes; the median duration of visibility disagreements was 20 minutes. Under the Basic Weather Watch criteria used for manual observations, it may not be possible for an observer to detect a change in ceiling or visibility of so short a duration, and even if detected, the observer would be likely to delay reporting to ensure that the conditions were stable.


5.3 IFR / MVFR Disagreements

Disagreements between observers and the ASOS are most important at thresholds which change flight rules; the IFR/MVFR thresholds of 1000 feet (ceiling height) and 3 miles (visibility) were closely examined to see if there were any possible explanations or patterns in the time periods when observer and ASOS disagreed.
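The breakpoint examined here can be sketched as a predicate; the assumed convention, consistent with the thresholds named above, is that IFR applies when the ceiling is below 1000 feet or the visibility is below 3 miles:

```python
# Sketch of the IFR/MVFR breakpoint used in this comparison (assumed
# convention: IFR when ceiling < 1000 ft or visibility < 3 statute miles).
def is_ifr(ceiling_ft, visibility_mi):
    # A missing ceiling (None, i.e. unlimited) never triggers IFR by itself.
    low_ceiling = ceiling_ft is not None and ceiling_ft < 1000
    low_visibility = visibility_mi < 3
    return low_ceiling or low_visibility

def category_disagreement(obs, asos):
    """obs / asos: (ceiling_ft, visibility_mi) tuples for the same minute."""
    return is_ifr(*obs) != is_ifr(*asos)
```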

While observers reported IFR conditions and the ASOS remained above IFR, the ASOS frequently reported conditions very close to the IFR thresholds: for the 140 hours of disagreement, the ASOS reported 113 hours (81%) during which ASOS ceilings were at 1000-1200 feet, or ASOS visibilities were either 3 or 4 miles, or an ASOS scattered cloud layer was within 200 feet of the observer's ceiling, or there were showers creating different conditions at the different observation points.

While the ASOS reported IFR conditions and the observer remained above IFR, the observer frequently reported conditions very close to the IFR thresholds: for the 157 hours of disagreement, the observer reported 108 hours (69%) during which observer ceilings were at 1000-1200 feet, or observer visibilities were either 3 or 4 miles, or an observer scattered cloud layer was within 200 feet of the ASOS ceiling, or there were showers creating different conditions at the different observation points.

Figure 3, below, illustrates a timeline of flight categories derived from observer and ASOS reports from Allentown-Bethlehem, PA, during the early stages of the 1995 evaluation. This timeline is typical of the differences in reports from qualified and conscientious observers and an ASOS. Changes of flight categories in this example were caused by both ceiling-height and visibility changes. Observers reported three periods of IFR conditions: 1617-1832, 2131-0729, and 1005-1249. The ASOS reported five periods of IFR conditions: 1622-1725, 1745-1813, 2119-0322, 0330-0542, and 1210-1338. Observer and ASOS disagreed in flight category some 15 times over this 24-hour period, with disagreement durations ranging from five to 125 minutes: 5, 5, 6, 7, 7, 8, 13, 17, 19, 24, 27, 38, 40, 49, and 125 minutes. The 125-minute disagreement occurred when the observer carried IFR conditions with 2 1/2-mile visibility while the ASOS reported MVFR with 3- and 3 1/2-mile visibilities.
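The episode statistics for this example can be reproduced directly from the fifteen disagreement lengths listed above:

```python
from statistics import median

# The 15 flight-category disagreement lengths (minutes) from the
# Allentown-Bethlehem timeline described in the text.
durations = [5, 5, 6, 7, 7, 8, 13, 17, 19, 24, 27, 38, 40, 49, 125]

episode_count = len(durations)       # 15 disagreements in 24 hours
median_duration = median(durations)  # 17 minutes
longest = max(durations)             # the 125-minute visibility episode
```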

6. CONCLUSIONS

Observer and ASOS reports of ceiling height and visibility occasionally differ by significant amounts. The many reasons for disagreements are discussed by Bradley and Lewis in the preceding paper.

This unique data set indicates that significant disagreements are typically of short duration, while long-term disagreements typically occur when the observer and ASOS are reporting values that are close, but may be on opposite sides of an important threshold (e.g., IFR / MVFR flight categories).

Users should remain aware of the potential for, and the character of, observer / ASOS differences. In particular, users should be aware of the ability of the ASOS to report changes in ceiling and visibility as often as changes are detected. The responsiveness of the ASOS to changing conditions is both a blessing and a curse: the ASOS can report changes (especially at night) before they can be detected by an observer, but its proclivity to report frequent changes can create challenges for users of automated reports. To get a representative picture of conditions at an ASOS-supported airfield, users should learn to consider a series of ASOS reports rather than assume that a single ASOS snapshot represents the "smoothed" conditions typically reported by the observer.

ACKNOWLEDGMENT

This work was sponsored by the National Weather Service ASOS Program Office under Contract Number 50-DGNW-6-90001. Opinions expressed in this paper are solely those of the author, and do not represent an official position or endorsement by the United States Government.


References

Bradley, J.B., and R. Lewis, 1998: Comparability of ASOS and Human Observations. Preprints, 14th International Conference on Interactive Information Processing Systems (IIPS), Phoenix, AZ, American Meteorological Society, Paper 10.1.


*Corresponding author address: Allan C. Ramsay,

Hughes STX Corporation, 43872 Weather Service Road, Sterling, VA 20166

e-mail <aramsay@bigfoot.com>