
CHAPTER 7

Thematic Accuracy Assessment of Regional Scale Land-Cover Data

Siamak Khorram, Joseph F. Knight, and Halil I. Cakir

CONTENTS

7.1 Introduction
7.2 Approach
    7.2.1 Sampling Design
    7.2.2 Training
    7.2.3 Photographic Interpretation
        7.2.3.1 Interpretation Protocol
        7.2.3.2 Interpretation Procedures
        7.2.3.3 Quality Assurance and Quality Control
7.3 Results
    7.3.1 Accuracy Estimates
    7.3.2 Issues and Problems
        7.3.2.1 Heterogeneity
        7.3.2.2 Acquisition Dates
        7.3.2.3 Location Errors
7.4 Further Research
Acknowledgments
References
Appendix A: MRLC Classification Scheme and Class Definitions



7.1 INTRODUCTION

The Multi-Resolution Land Characteristics (MRLC) consortium, a cooperative effort of several
U.S. federal agencies, including the U.S. Geological Survey (USGS) EROS Data Center (EDC)
and the U.S. Environmental Protection Agency (EPA), has conducted the National Land Cover
Data (NLCD) program. This program used Landsat Thematic Mapper (TM) 30-m resolution
imagery as baseline data and successfully produced a consistent land-cover (LC) map of the conterminous lower 48 states at approximately Anderson Level II thematic detail. The primary
goal of the program was to provide a generalized and regionally consistent LC product for use in
a broad range of applications (Lunetta et al., 1998). Each of the 10 U.S. federal geographic regions was mapped independently. EPA funded the Center for Earth Observation (CEO) at North Carolina
State University (NCSU) to assess the accuracy of the NLCD for federal geographic Region IV.
An accuracy assessment is an integral component of any remote sensing-based mapping project.
Thematic accuracy assessment consists of measuring the general and categorical qualities of the
data (Khorram et al., 1999). An independent accuracy assessment was implemented for each federal
geographic region after LC mapping was completed. The specific objective of this study was to estimate the overall and category-specific accuracies of the LC mapping effort. Federal
geographic Region IV included the states of Kentucky, Tennessee, Mississippi, Alabama, Georgia,
Florida, North Carolina, and South Carolina (Figure 7.1).

7.2 APPROACH
7.2.1 Sampling Design


Quantitative accuracy assessment of regional scale LC maps, produced from remotely sensed
data, involves comparing thematic maps with reference data (Congalton, 1991). Since there were
no suitable existing reference data that could be used for all federal regions, a practical and
statistically sound sampling plan was designed by Zhu et al. (2000) to characterize the accuracy
of common and rare classes for the map product using National Aerial Photography Program
(NAPP) photographs as the reference data.
The sampling design was developed based on the following criteria: (1) ensure the objectivity
of sample selection and validity of statistical inferences drawn from the sample data, (2) distribute
sample sites spatially across the region to ensure adequate coverage of the entire region, (3) reduce
the variance for estimated accuracy parameters, (4) provide a low-cost approach in terms of budget
and time, and (5) be easy to implement and analyze (Zhu et al., 2000).
The sampling was a two-stage design. The first stage, the primary sampling unit (PSU), was
the size of a NAPP aerial photograph. One PSU (photo) was randomly selected from a cluster of
128 photographs. These clusters were formed using a geographic frame of 30 × 30 m. Randomly selected PSU locations are shown in Figure 7.1. The second stage was a stratified random sample, within the extent of all of the PSUs only, of 100 sample sites per LC class. The selected sites were referred to as secondary sampling units (SSUs). The number of sites per photograph ranged from 1 to approximately 70 (Figure 7.2). The total number of sample sites in the study was 1500 (100 per cover class), although only 1473 sites were interpreted due to missing NAPP photos. This sampling approach was chosen by the EROS Data Center (EDC) over a standard random sample to reduce the cost of purchasing the NAPP photography (Zhu et al., 2000).

Figure 7.1 Randomly selected photograph center points.
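As an illustration of this two-stage design, the selection logic can be sketched in a few lines of Python; the cluster size of roughly 128 photos and the 100-sites-per-class target come from the description above, while the data structures (photo IDs, candidate site tuples) are hypothetical stand-ins for the actual NAPP frame and NLCD pixel data.

import random
from collections import defaultdict

def two_stage_sample(photo_clusters, candidate_sites, sites_per_class=100, seed=0):
    """Illustrative two-stage design: stage 1 picks one NAPP photo (PSU)
    per cluster; stage 2 draws a stratified random sample of sites (SSUs)
    per mapped LC class, restricted to the selected photos."""
    rng = random.Random(seed)

    # Stage 1: one photo ID drawn at random from each cluster of ~128 photos.
    selected_photos = {rng.choice(cluster) for cluster in photo_clusters}

    # Stage 2: group candidate sites (photo_id, lc_class, x, y) by class,
    # keep only those inside selected photos, and draw up to 100 per class.
    by_class = defaultdict(list)
    for photo_id, lc_class, x, y in candidate_sites:
        if photo_id in selected_photos:
            by_class[lc_class].append((photo_id, lc_class, x, y))

    sample = {}
    for lc_class, sites in by_class.items():
        sample[lc_class] = rng.sample(sites, min(sites_per_class, len(sites)))
    return selected_photos, sample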

7.2.2 Training

Before the NAPP photo interpretation for the sample sites could begin, photo interpreters were
trained to accomplish the goals of the study. To provide consistency among the interpreters, a
comprehensive training program was devised. The program consisted of a full-day training session
and subsequent on-the-job training. Two experienced aerial photo interpretation and photogrammetry instructors led the formal classroom training sessions. The training sessions included the
following topics: (1) discussion of color theory and photo interpretation techniques, (2) understanding of the class definitions, (3) interpretation of over 100 sample sites of different classes during
the training sessions followed by interactive discussions about potential discrepancies, (4) creation
of sample sites for later reference, and (5) repetition of interpretation practice after the sessions.
The focus was on real-world situations that the interpreters would encounter during the project.

Each participant was presented with over 100 preselected sites and was asked to provide his or her
interpretation of the land cover for these sites. Their interpretations were analyzed and subsequently
discussed to minimize any misconceptions. During the on-the-job portion of the training, each
interpreter was assigned approximately 500 sites to examine. Their progress was monitored daily
for accuracy and proper methodology. The interpreters kept logs of their decisions and the sites for
which they were uncertain about the LC classes. On a weekly basis, their questions were addressed
by the project photo interpretation supervisor. The problem sites (approximately 400) were discussed
until all team members felt comfortable with the class definitions and their consistency in interpretation. Agreement analysis among the three interpreters resulted in an average agreement of 84%.

7.2.3 Photographic Interpretation

7.2.3.1 Interpretation Protocol

Figure 7.2 Sample sites clustered around the photograph center.

The standard protocol used by the photo interpreters was as follows:


• Each interpreter was assigned 500 of the 1500 total sites.
• Interpretation was based on NAPP photographs.
• The sample site locations on the NAPP photos were found by first plotting the sites on TM false-color composite images and then finding the same area on the photo by context.
• During the interpretation process, cover type and other related information such as site homogeneity
were recorded for later analysis.
• When there was some doubt as to the correct class or there was the possibility that two classes could
be considered correct, the interpreters selected an alternate class in addition to the primary class.
• The interpretations were based on the majority of a 3 × 3 pixel window (Congalton and Green, 1999).

7.2.3.2 Interpretation Procedures

The Landsat TM images were displayed using ERDAS Imagine. By plotting the site locations
on the Landsat TM false-color composite images, the interpreters precisely located each site. Then,
based on the context from the image, the interpreters located the site on the photographs as best
they could. Clearly, some error was inherent in this location process; however, this was the simplest
and most cost-effective procedure available. The use of a 3 × 3 pixel window for interpretation was intended to reduce the effect of location errors.
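A minimal sketch of the 3 × 3 majority rule, assuming the classified map is held as a 2-D array of class codes (the array layout and the tie-breaking behavior are illustrative assumptions, not part of the published protocol):

from collections import Counter

def majority_class_3x3(classified, row, col):
    """Return the most common class code in the 3 x 3 window centered on
    (row, col), clipped at the image edges. Ties resolve to whichever class
    Counter returns first, an arbitrary illustrative choice."""
    rows, cols = len(classified), len(classified[0])
    window = [
        classified[r][c]
        for r in range(max(0, row - 1), min(rows, row + 2))
        for c in range(max(0, col - 1), min(cols, col + 2))
    ]
    return Counter(window).most_common(1)[0][0]

# Example: a heterogeneous neighborhood where the center pixel is an edge case.
grid = [
    ["4.1", "4.1", "4.3"],
    ["4.1", "4.3", "4.3"],
    ["4.1", "4.1", "2.1"],
]
print(majority_class_3x3(grid, 1, 1))  # "4.1" (5 of 9 pixels)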
The interpreters examined each site's characteristics using the aerial photograph and TM image and determined the appropriate LC label for the site according to the classification scheme, then they entered the information into the project database. The following data were entered into the database: site identification number (sample site), coordinates, photography acquisition date, photograph identification code, imagery identification number, primary or dominant LC class, alternate LC class (if any), general site description, unusual observations, general comments, and any temporal site changes between image and photo acquisition dates. The interpreters did not have prior access to the MRLC classification values during the interpretation process.
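The per-site record could be represented with a simple data structure such as the sketch below; the field names and example values are hypothetical, since the actual project database schema is not given in the chapter.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SiteRecord:
    """Illustrative record for one interpreted sample site."""
    site_id: int
    x: float                       # site coordinates
    y: float
    photo_date: str                # NAPP photograph acquisition date
    photo_id: str
    image_id: str                  # Landsat TM scene identifier
    primary_class: str             # dominant LC class called by the interpreter
    alternate_class: Optional[str] = None
    site_description: str = ""
    unusual_observations: str = ""
    comments: str = ""
    temporal_change: bool = False  # LC change between image and photo dates

record = SiteRecord(site_id=1, x=-81.2, y=35.6, photo_date="1993-03-14",
                    photo_id="NAPP-12345", image_id="TM-017-035",
                    primary_class="4.1", alternate_class="4.3")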
Individual interpreters analyzed 15% (n = 75) of each of the other interpreters' sample sites to create an overlap database for evaluating the performance of the interpreters and the agreement among them. The 75 sites were selected by random sampling. This scheme provided 225 sites that were interpreted by all three interpreters. Agreement analysis using these overlap sites indicated an average agreement of 84% among the three interpreters (Table 7.1).
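Computing this kind of agreement statistic amounts to comparing calls site by site over the overlap set; a minimal sketch, assuming each interpreter's calls are stored in a dictionary keyed by site ID, follows.

def agreement_rate(calls_a, calls_b):
    """Fraction of common sites for which two interpreters assigned the
    same primary LC class. calls_a and calls_b map site_id -> class code."""
    common = set(calls_a) & set(calls_b)
    if not common:
        return 0.0
    matches = sum(1 for site in common if calls_a[site] == calls_b[site])
    return matches / len(common)

# Illustrative use with two interpreters' calls on three overlap sites:
pi1 = {101: "2.1", 102: "4.3", 103: "8.1"}
pi2 = {101: "2.1", 102: "4.1", 103: "8.1"}
print(agreement_rate(pi1, pi2))  # 0.666..., i.e., 2 of 3 sites agree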

7.2.3.3 Quality Assurance and Quality Control

Quality assurance (QA) and quality control (QC) procedures were rigorously implemented in the study as designated in the interpretation organization chart (Table 7.2). Discussions among the interpreters and project supervisors during the interpretation process provided an opportunity to identify problems as they occurred and to resolve them on the spot.
The QA and QC plan is shown in Figure 7.3. Upon completion of training, a test was performed
to determine how similarly the interpreters would call the same sites. The initial results of the
analysis revealed that some misunderstandings about class definitions had remained after the training
process. As a result, the interpreters were retrained as a group to “calibrate” themselves. This helped
to ensure that calls were more consistent among interpreters. Upon satisfactory completion of the
retraining, the interpreters were assigned to complete interpretation of the 1500 sample sites.
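The retraining loop described above (and diagrammed in Figure 7.3) can be summarized as a simple control flow; the agreement threshold and round limit below are illustrative assumptions, not values reported by the study.

def qa_calibration_loop(interpret_overlap, retrain, threshold=0.80, max_rounds=5):
    """Illustrative QA/QC loop: interpreters call the 225 overlap sites and,
    if agreement is unsatisfactory, are retrained as a group and try again
    before full interpretation of the 1500 sample sites begins."""
    for _ in range(max_rounds):
        agreement = interpret_overlap()   # returns average pairwise agreement
        if agreement >= threshold:
            return True                   # proceed to the 1500 sample sites
        retrain()                         # group session to resolve differences
    return False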

7.3 RESULTS
7.3.1 Accuracy Estimates


Table 7.3 presents the error matrix for the MRLC Level II classes. The numbers across the top and sides of the matrices represent the 15 MRLC classes (Appendix A). Table 7.4 presents the error matrix for the MRLC Level I classes.

Table 7.1 Agreement Analysis Among PIs: Interpreter Call vs. Overlap Consensus for the 225 Overlap Sites. Per-class agreement between the interpreter calls and the overlap consensus (agreeing/total): 1.1, 18/20; 2.1, 21/22; 2.2, 3/4; 2.3, 9/9; 3.1, 4/6; 3.2, 6/7; 3.3, 16/18; 4.1, 14/19; 4.2, 7/9; 4.3, 26/36; 8.1, 10/11; 8.2, 10/14; 8.5, 16/19; 9.1, 13/16; 9.2, 15/15; overall, 188/225 (84%).

The Level II classes were grouped into the following Level I categories: (1) water, (2) urban or developed, (3) bare surface, (4) agriculture and other grasslands, (5) forest (upland), and (6) wetland (woody or nonwoody). The overall accuracies for the Level I and Level II classes were 66% and 44%, respectively.
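The reported accuracies follow directly from the error matrices: overall accuracy is the sum of the diagonal divided by the total number of sites, the row and column "% Corr" margins are the diagonal divided by the corresponding marginal totals, and the Level I matrix is obtained by summing grouped Level II rows and columns. The sketch below illustrates that arithmetic on a small made-up matrix; the values are not the NLCD results.

import numpy as np

def overall_accuracy(matrix):
    """Overall accuracy = sum of the diagonal / total number of sites."""
    m = np.asarray(matrix, dtype=float)
    return np.trace(m) / m.sum()

def percent_correct_margins(matrix):
    """Row-wise and column-wise percent correct (diagonal divided by the
    row and column totals), matching the '% Corr' margins of the matrices."""
    m = np.asarray(matrix, dtype=float)
    return np.diag(m) / m.sum(axis=1), np.diag(m) / m.sum(axis=0)

def aggregate(matrix, labels, group_of):
    """Collapse a Level II matrix into Level I by summing grouped rows/columns."""
    groups = sorted(set(group_of[l] for l in labels))
    idx = {g: i for i, g in enumerate(groups)}
    m = np.asarray(matrix, dtype=float)
    out = np.zeros((len(groups), len(groups)))
    for i, li in enumerate(labels):
        for j, lj in enumerate(labels):
            out[idx[group_of[li]], idx[group_of[lj]]] += m[i, j]
    return out, groups

# Illustrative 3-class example (values are not the NLCD data):
labels = ["4.1", "4.3", "9.1"]
group_of = {"4.1": "forest", "4.3": "forest", "9.1": "wetland"}
m = [[46, 30, 5], [40, 62, 10], [4, 8, 43]]
print(overall_accuracy(m))              # (46 + 62 + 43) / 248
print(percent_correct_margins(m))       # per-class row and column % correct
print(aggregate(m, labels, group_of))   # 2 x 2 Level I matrix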
Table 7.3 illustrates the confusion among the low-intensity residential, high-intensity residential, and commercial/transportation categories. Many factors may have contributed to this confusion; however, we believe the complexity of the classification scheme was a dominant factor. For example, the most ambiguous categories were the three urban classes, which were distinguished only by percentage of vegetation. Quantifying subpixel vegetation content was beyond the methods employed in this study. As a result, many high-intensity residential areas in the classified image were assigned to the low-intensity residential and commercial/transportation classes. This occurred because the high-intensity residential class, with an intermediate percentage of vegetation, was easily confused with both lower-intensity and higher-intensity urban development.
Also, many problems were encountered with the interpretation of cropland and pasture/hay, since these classes had very similar spectral and spatial patterns and occurred within the same agricultural areas. In addition, cropland was frequently converted to pasture/hay, or vice versa, between the two acquisition dates. Confusion also existed between evergreen forest and mixed forest, between deciduous forest and mixed forest, between barren ground and other grassland, between low-intensity residential and mixed forest, and between transitional and all other classes.

Table 7.2 Interpretation Team Organization

Photo interpreters: PI #1 (500 pts + 75 pts from PI #2 and 75 pts from PI #3); PI #2 (500 pts + 75 pts from PI #1 and 75 pts from PI #3); PI #3 (500 pts + 75 pts from PI #1 and 75 pts from PI #2)
PI supervisor: Random checking for consistency; checking the 225 overlap sites; sites with questions from the three PIs
Project supervisor: Checking sites with questions from the PI supervisor; random checking of overall sites; overall QA/QC
Project director: Procedure establishment; discussions on issues; random checking; overall QA/QC

Figure 7.3 Training, photo interpretation (PI), and quality assurance and quality control (QA/QC) procedures (classroom training; independent and supervised interpretation; interpretation of the 225 overlap points, with group resolution of differences if agreement is unsatisfactory; interpretation of the 1500 random sample points; accuracy analysis against the MRLC Region 4 classified data).

Table 7.3 Error Matrix for the Level II MRLC Data (15 Classes): PI (reference) results by row vs. classified MRLC data by column. Marginal totals, correct (diagonal) counts, and percent correct by class:

MRLC Class   Row (PI) Total   Column (Classified) Total   Correct   Row % Corr   Column % Corr
1.1          108              98                          87        0.8          0.9
2.1          155              100                         47        0.3          0.5
2.2          19               100                         10        0.5          0.1
2.3          69               98                          32        0.5          0.3
3.1          69               100                         33        0.5          0.3
3.2          38               100                         34        0.9          0.3
3.3          78               100                         33        0.4          0.3
4.1          99               94                          46        0.5          0.5
4.2          61               99                          34        0.6          0.3
4.3          228              98                          62        0.3          0.6
8.1          103              93                          28        0.3          0.3
8.2          128              99                          57        0.4          0.6
8.5          131              97                          41        0.3          0.4
9.1          96               99                          43        0.4          0.4
9.2          91               98                          60        0.7          0.6
Total        1473             1473                        647       0.44

The difference between image classification and photo interpretation is that image classification
is mostly based on the spectral values of the pixels, whereas photo interpretation incorporates color
(tones), pattern recognition, and background context in combination. These issues are inherent in
any accuracy assessment project using aerial photos as the reference data (Ramsey et al., 2001).
For this project, however, aerial photos were the only reasonable reference data source.
The interpretation process is not the only component of the accuracy assessment process (Congalton and Green, 1999). Additional factors that should be considered are positional and correspondence error. To account for these errors, the following additional criteria for correct classification were considered in this project: (1) the primary class matches the classified pixel, (2) the primary or alternate class matches the classified pixel, (3) the primary class is the most common class in the classified 3 × 3 pixel area, (4) the primary class matches any pixel in the classified 3 × 3 pixel area, (5) the primary or alternate class is the most common class in the classified 3 × 3 pixel area, and (6) the primary or alternate class matches any pixel in the classified 3 × 3 pixel area. "Interpreted" refers to the classes chosen during the aerial photo interpretation process, "primary" and "alternate" are the most probable LC classes for a particular site, and "classified" refers to the MRLC classification result for that site. The analysis results for each cover class under the six cases are presented in Table 7.5 and Table 7.6. The overall accuracies under the various scenarios ranged from 44% to 79.4% (n = 1473) for cases "a" and "f," respectively.
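These six decision rules can be encoded directly; the sketch below assumes the interpreter's primary and alternate calls and the 3 × 3 block of classified pixels around the site have already been extracted (the helper names and example values are illustrative).

from collections import Counter

def evaluate_cases(primary, alternate, window3x3):
    """Return the six agreement cases (a-f) described above. window3x3 is the
    list of classified class codes in the 3 x 3 pixel area around the site;
    its center element is taken as the classified pixel."""
    center = window3x3[len(window3x3) // 2]
    mode = Counter(window3x3).most_common(1)[0][0]
    prim_or_alt = {primary, alternate} - {None}
    return {
        "a_primary_matches_pixel": primary == center,
        "b_prim_or_alt_matches_pixel": center in prim_or_alt,
        "c_primary_is_mode_3x3": primary == mode,
        "d_primary_matches_any_3x3": primary in window3x3,
        "e_prim_or_alt_is_mode_3x3": mode in prim_or_alt,
        "f_prim_or_alt_matches_any_3x3": any(c in prim_or_alt for c in window3x3),
    }

# Illustrative site: the interpreter called mixed forest with a deciduous alternate.
window = ["4.1", "4.1", "4.3", "4.1", "4.3", "4.3", "4.1", "2.1", "4.1"]
print(evaluate_cases("4.3", "4.1", window))  # cases a, b, d, e, f hold; c does not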

Table 7.4 Error Matrix for the Level I MRLC Data: PI (reference) results by row vs. classified MRLC data by column. Marginal totals, correct (diagonal) counts, and percent correct by class:

Level I Class   Row (PI) Total   Column (Classified) Total   Correct   Row % Corr   Column % Corr
1               108              98                          87        0.81         0.89
2               243              298                         188       0.77         0.63
3               185              300                         134       0.72         0.45
4               388              291                         227       0.59         0.78
8               362              289                         207       0.57         0.72
9               187              197                         127       0.68         0.64
Total           1473             1473                        970       0.66

Table 7.5 Summary of Further Accuracy Analysis by Interpreted Cover Class: Number of Sites

Class    No.    Primary PI     Prim or Alt    Primary PI    Primary PI     Prim or Alt    Prim or Alt
                Matches MRLC   Matches MRLC   Is Mode of    Matches Any    PI Is Mode     PI Matches
                                              3 × 3         3 × 3          of 3 × 3       Any 3 × 3
1.1      108    87             95             84            92             94             100
2.1      155    47             69             60            81             124            135
2.2      19     10             11             8             11             15             16
2.3      69     32             39             35            41             44             49
3.1      69     33             35             27            30             34             42
3.2      38     34             36             34            36             35             37
3.3      78     33             44             33            42             40             52
4.1      99     46             55             60            68             79             83
4.2      61     34             39             44            48             52             54
4.3      228    62             98             68            110            148            187
8.1      103    28             39             27            38             46             64
8.2      128    57             82             56            83             83             102
8.5      131    41             61             33            53             56             91
9.1      96     43             53             47            59             68             84
9.2      91     60             68             58            67             67             74
Totals   1473   647            824            674           859            985            1170
7.3.2 Issues and Problems
7.3.2.1 Heterogeneity
The heterogeneity of many areas caused confusion in assigning an exact class label to the sites. Since the spatial resolution of the Landsat TM data was 30 × 30 m, pixel heterogeneity was a common problem (Plate 7.1a). For example, a site on the image frequently contained a mixture of trees, grassland, and several houses. Thus, the reflectance recorded for the pixel was actually a combination of the reflectances of the different cover types within that pixel. This factor contributed to confusion between evergreen forest and mixed forest, deciduous forest and mixed forest, low-intensity residential and other grassland, and transitional and several other classes.
7.3.2.2 Acquisition Dates
Temporal discrepancies between photograph and image acquisition dates, if not reconciled,
would negatively affect the classification accuracy (Plate 7.1b). For example, to interpret early
forest growth areas, the interpreter had to decide whether the site was a transitional or a forested
area. If the photograph was acquired before the image (e.g., as much as 6 years earlier), it was
clear that those early forest growth sites would show up as forest cover on the satellite image. In this case, the interpreters decided the appropriate cover class based on the satellite imagery.
7.3.2.3 Location Errors
Locating the reference site on the photo was sometimes problematic. This frequently occurred
when: (1) the LC had changed between the image and photo acquisition dates, (2) there were few
clearly identifiable features for positional reference, and (3) the reference site was on the border
of two or more classes (boundary pixel problem). When the LC had changed between acquisition
dates, locating reference sites was difficult because the features surrounding the reference site were
also changed. Similarly, when a reference site fell in an area with few identifiable features for
positional reference, the interpreter had to approximate the location of the reference site. For example, when the reference site was on the shadowy side of a mountain, it was impossible to see the reference features except the ridgeline of the mountain; thus, the interpreter had to locate the reference site based on the approximate distance to, and the direction of, the ridgeline. The third case was the most common source of confusion in the interpretation process. Reference sites were frequently on the border of two or more classes. In these situations, the interpreter decided between two or more classes by determining which class covered the majority of the 3 × 3 pixel window.

Table 7.6 Summary of Further Accuracy Analysis by Interpreted Cover Class: Percentage of Sites for Each Class

Class    Total   Primary PI     Prim or Alt    Primary PI    Primary PI     Prim or Alt    Prim or Alt
         %       Matches MRLC   Matches MRLC   Is Mode of    Matches Any    PI Is Mode     PI Matches
                                               3 × 3         3 × 3          of 3 × 3       Any 3 × 3
1.1      100.0   80.6           88.0           77.8          85.2           87.0           92.6
2.1      100.0   30.3           44.5           38.7          52.3           80.0           87.1
2.2      100.0   52.6           57.9           42.1          57.9           78.9           84.2
2.3      100.0   46.4           56.5           50.7          59.4           63.8           71.0
3.1      100.0   47.8           50.7           39.1          43.5           49.3           60.9
3.2      100.0   89.5           94.7           89.5          94.7           92.1           97.4
3.3      100.0   42.3           56.4           42.3          53.8           51.3           66.7
4.1      100.0   46.5           55.6           60.6          68.7           79.8           83.8
4.2      100.0   55.7           63.9           72.1          78.7           85.2           88.5
4.3      100.0   27.2           43.0           29.8          48.2           64.9           82.0
8.1      100.0   27.2           37.9           26.2          36.9           44.7           62.1
8.2      100.0   44.5           64.1           43.8          64.8           64.8           79.7
8.5      100.0   31.3           46.6           25.2          40.5           42.7           69.5
9.1      100.0   44.8           55.2           49.0          61.5           70.8           87.5
9.2      100.0   66.3           73.9           63.0          72.8           72.8           80.4
Total    100.0   44.0           55.9           45.7          58.3           66.8           79.4

Plate 7.1 (See color insert following page 114.) (a) Heterogeneity problem: the reference site consists of several classes. (b) LC class changed between acquisition dates at the reference site. (c) Ambiguity of class definitions: it is difficult to differentiate between the high-intensity residential and commercial classes according to their definitions.
7.4 FURTHER RESEARCH
The results of this study point to numerous opportunities for further research to improve regional scale accuracy assessment methods, including: (1) examining the impact
of alternate classes in the accuracy assessment, (2) evaluating and analyzing the effect of positional
errors on accuracy assessment, (3) collecting field data for the 225 overlapping sample sites to
validate the interpretation, and (4) analyzing satellite data with a higher temporal resolution to
better identify changes between the acquisition of TM data and NAPP photography (e.g., using
NOAA-AVHRR and MODIS data).
ACKNOWLEDGMENTS
The results reported here were generated through an agreement funded by the Environmental
Protection Agency (EPA). The views expressed in this report are those of the authors and do not
necessarily reflect the views of EPA or any of its subagencies. The authors would like to thank the EPA and the USGS EROS Data Center (EDC) for their support and assistance on this project.
The authors would also like to thank Heather Cheshire and Linda Babcock of CEO at NCSU for
contributing their expertise in photo interpretation and extended help throughout the duration of
this project.
REFERENCES
Congalton, R., A review of assessing the accuracy of classifications of remotely sensed data, Remote Sens. Environ., 37, 35–46, 1991.
Congalton, R. and K. Green, A practical look at the sources of confusion in error matrix generation, Photogram. Eng. Remote Sens., 59, 641–644, 1999.
Khorram, S., G.S. Biging, N.R. Chrisman, D.R. Colby, R.G. Congalton, J.E. Dobson, R.L. Ferguson, M.F. Goodchild, J.R. Jensen, and T.H. Mace, Accuracy Assessment of Remote Sensing-Derived Change Detection, monograph, American Society for Photogrammetry and Remote Sensing (ASPRS), Bethesda, MD, 1999.
Lunetta, R.S., J.G. Lyon, B. Guidon, and C.D. Elvidge, North American landscape characterization dataset development and data fusion issues, Photogram. Eng. Remote Sens., 64, 821–829, 1998.
Ramsey, E., G. Nelson, and K. Sapkota, Coastal change analysis program implemented in Louisiana, J. Coastal Res., 17, 53–71, 2001.
Zhu, Z., L. Yang, S.V. Stehman, and R.L. Czaplewski, Accuracy assessment for the U.S. Geological Survey regional land cover mapping program: New York and New Jersey region, Photogram. Eng. Remote Sens., 66, 1425–1438, 2000.
APPENDIX A
MRLC Classification Scheme and Class Definitions
The MRLC program utilizes a consistent classification scheme for all EPA regions at approximately an Anderson Level II thematic detail. While there are 21 classes in the MRLC system, only
15 were mapped in EPA Region IV. The following classification scheme was applied to the EPA
Region IV data set:
1.0 Water: All areas of open water or permanent ice/snow cover.
1.1 Water: All areas of open water, generally with less than 25% vegetation.
2.0 Developed: Areas characterized by a high percentage of construction materials (e.g., asphalt,
concrete, buildings, etc.).
2.1 Low-intensity residential: Includes areas with a mixture of constructed materials and vegetation or other cover. Constructed materials account for 30 to 80% of the total area. These areas most commonly include single-family housing areas, especially suburban neighborhoods. Generally, population density values in this class will be lower than in high-intensity residential areas.
2.2 High-intensity residential: Includes heavily built-up urban centers where people reside.
Examples include apartment complexes and row houses. Vegetation occupies less than 25%
of the landscape. Constructed materials account for 80 to 100% of the total area. Typically,
population densities will be quite high in these areas.
2.3 High-intensity commercial/industrial/transportation: Includes all highly developed lands not
classified as “high-intensity residential,” most of which is commercial, industrial, and
transportation.
3.0 Barren: Bare rock, sand, silt, gravel, or other earthen material with little or no vegetation regardless
of its inherent ability to support life. Vegetation, if present, is more widely spaced and scrubby
than that in the vegetated categories.
3.1 Bare Rock/Sand: Includes areas of bedrock, desert pavement, scarps, talus, slides, volcanic
material, glacial debris, beach, and other accumulations of rock and/or sand without vegetative cover.
3.2 Quarries/strip mines/gravel pits: Areas of extractive mining activities with significant surface
expression.
3.3 Transitional: Areas dynamically changing from one land cover to another, often because of
land use activities. Examples include forestlands cleared for timber; the class may include both freshly cleared areas and areas in the earliest stages of forest growth.
4.0 Natural forested upland (nonwet): A class of vegetation dominated by trees generally forming >
25% canopy cover.
4.1 Deciduous forest: Areas dominated by trees where 75% or more of the tree species shed
foliage simultaneously in response to an unfavorable season.
4.2 Evergreen forest: Areas dominated by trees where 75% or more of the tree species maintain
their leaves all year. Canopy is never without green foliage.
4.3 Mixed forest: Areas dominated by trees where neither deciduous nor evergreen species
represent more than 75% of the cover present.
5.0 Herbaceous planted/cultivated: Areas dominated by vegetation that has been planted in its current location by humans and/or is treated with annual tillage, modified conservation tillage, or other intensive management or manipulation. The majority of vegetation in these areas is planted and/or
maintained for the production of food, fiber, feed, or seed.
5.1 Pasture/hay: Grasses, legumes, or grass-legume mixtures planted for livestock grazing or
the production of seed or hay crops.
5.2 Row Crops: All areas used for the production of crops such as corn, soybeans, vegetables,
tobacco, and cotton.
5.3 Other grasses: Vegetation planted in developed settings for recreation, erosion control, or
aesthetic purposes. Examples include parks, lawns, and golf courses.
6.0 Wetlands: Nonwoody or woody vegetation where the substrate is periodically saturated with or
covered with water.
6.1 Woody wetlands: Areas of forested or shrubland vegetation where soil or substrate is
periodically saturated with or covered with water.
6.2 Emergent herbaceous wetlands: Nonwoody vascular perennial vegetation where the soil or
substrate is periodically saturated with or covered with water.
Note: Cover class types 5.0, 6.0, and 7.0 did not occur in federal geographic Region 4.