Lab 9

Image Classification

Geog 418/518, Fundamentals of Remote Sensing

Name 1 Name 2

Lab Overview

In this lab, you will create a classified image. To accomplish this, you will:

·  Define spectral signatures for key categories of features.

·  Evaluate the spectral signatures.

·  Use the signatures to create a maximum likelihood supervised classification.

·  Conduct a qualitative evaluation of the accuracy of your classification.

Background: The Supervised Classification Process

The classification techniques you will explore today are commonly used by remote sensing professionals. All the techniques you will apply are described in your textbook or in the online ERDAS Field Manual, 1999, as well as being covered in class.

In a typical supervised classification, the steps that are followed include:

·  Surveying a field map that provides the “ground truth” for the classification. This map is usually surveyed by a ground-based field team that visits a number of sites where the features of interest (e.g., certain tree types or land use classes) are located. The field team maps the location and extent of the features so that they can be precisely located using image coordinates, even when the features cannot be clearly seen on the image. For example, the field team might map the location and extent of wheat fields. Although you might not be able to visually differentiate wheat from other crops on a Landsat image, you could still map the wheat fields to the image using the coordinates of the ground truth data. In other words, you can overlay the ground truth map onto the image to show where the wheat fields are. (Note that this requires you to have accurately georeferenced images and maps).

·  Selection of “training sites” from known locations on the image. For example, you might select a certain number of pixels (at least 30 and preferably 50 or more for maximum likelihood classification) from the wheat field portions of the image. The spectral signature of these wheat field pixels will then be used to search for other similar pixels on the image.

·  Evaluation of the training pixels to determine if any appear to be non-representative (e.g., different from the other wheat field training pixels).

·  Classification of the image using the training pixels.

·  Evaluation of the classification.
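The classification step in the list above, maximum likelihood, can be sketched in a few lines. The Python/NumPy sketch below is a conceptual illustration, not part of the lab software: it builds a Gaussian spectral signature (mean vector and covariance matrix) for each class from its training pixels, then assigns a pixel to the class with the highest log-likelihood. The band values and class names are invented, and equal prior probabilities are assumed.

```python
import numpy as np

def train_signatures(samples):
    """Compute a Gaussian spectral signature (mean vector and covariance
    matrix) for each class. `samples` maps a class name to an
    (n_pixels, n_bands) array of training pixel values."""
    sigs = {}
    for name, pix in samples.items():
        pix = np.asarray(pix, dtype=float)
        sigs[name] = (pix.mean(axis=0), np.cov(pix, rowvar=False))
    return sigs

def classify_ml(pixel, sigs):
    """Assign `pixel` to the class with the highest Gaussian
    log-likelihood (equal priors assumed)."""
    best, best_ll = None, -np.inf
    for name, (mu, cov) in sigs.items():
        d = pixel - mu
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))
        if ll > best_ll:
            best, best_ll = name, ll
    return best

# Hypothetical 2-band training pixels (digital numbers are made up).
sigs = train_signatures({
    "water": [[20, 10], [22, 12], [18, 11], [21, 9]],
    "trees": [[60, 90], [62, 88], [58, 92], [61, 91]],
})
print(classify_ml(np.array([21.0, 11.0]), sigs))  # water
```

In a real classifier the same decision rule is applied to every pixel in the image, producing the classified map you will evaluate later.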

As a first approximation, the evaluation of the classification can be done using a visual, qualitative comparison of the imagery and the classification map. This is the approach you will apply today. Please be aware, however, that while qualitative assessment is useful for understanding the classification process and making broad generalized statements about the classification (e.g., this feature was confused with that feature; the classification is awful, mediocre, good, etc.), it is not sufficient for most management or research applications.

In today’s lab, part of the reason you can get away with using a qualitative evaluation is because you will have a high spatial resolution image with 1-m pixels on which you can clearly and unambiguously (most of the time) identify features. Because you have high resolution imagery on which features can be identified, you can bypass the field survey and the collection of a ground truth data set (at least for the purposes of this lab).

In most cases, however, you will not have 1-m imagery. When you use coarser spatial resolution imagery such as TM5, determining accuracy based solely on visual interpretation is usually not a viable solution. Furthermore, if you want quantitative measures of accuracy (e.g., percent correctly classified), you cannot use simple visual comparisons of classification maps and images. When using coarser pixel resolution, or when deriving quantitative measures of accuracy, you must have ground-based maps to train the image and test the classification results. These quantitative accuracy assessments are carried out by overlaying the ground truth data on the classification map and counting how often the classification is correct or incorrect for each category. In today’s lab you will conduct a visual qualitative estimate of accuracy, but will not do a quantitative evaluation with ground truth and classification maps.
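The quantitative assessment described above amounts to overlaying the two maps and counting agreements. The sketch below (Python/NumPy; the 4×4 maps and class codes are invented for illustration) computes the overall and per-category percent correctly classified:

```python
import numpy as np

# Hypothetical 4x4 ground truth and classification maps: 0 = water, 1 = trees.
truth      = np.array([[0, 0, 1, 1],
                       [0, 0, 1, 1],
                       [1, 1, 1, 1],
                       [0, 1, 1, 1]])
classified = np.array([[0, 0, 1, 1],
                       [0, 1, 1, 1],
                       [1, 1, 0, 1],
                       [0, 1, 1, 1]])

# Overall accuracy: fraction of pixels where the maps agree.
overall = (truth == classified).mean() * 100
print(f"Overall accuracy: {overall:.1f}%")  # 87.5%

# Per-category accuracy: of the pixels that are truly class c,
# what fraction did the classification label as c?
for c, name in [(0, "water"), (1, "trees")]:
    mask = truth == c
    pct = (classified[mask] == c).mean() * 100
    print(f"{name}: {pct:.1f}% correct")
```

Tabulating all the agreement/disagreement counts by category produces the error (confusion) matrix used in formal accuracy assessment.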

Part 1, Acquiring spectral signatures to “train” the classification

Ideally, pixels used as training sites should:

·  Be scattered around the image to capture the variability in reflectances due to illumination differences. One should select training site pixels from at least 5 to 10 locations around the image.

·  Have 10*n pixels per classification category, where n is the number of bands in the imagery.

·  Have similar spectral reflectances for each category. For example, in order to identify wheat fields on an image, you should only use training sites from a green wheat field, or a golden wheat field, or a mowed wheat field, but not all three at the same time. If you want to classify all three stages of wheat, you should compile separate spectral signatures for each category of wheat.

·  Be accessible for ground truthing and verification. Choosing pixel locations on a ridge top may be wonderful for a mountaineer, but does not make for easy logistics. You may often wish to visit the training sites in the field to understand the land cover at that site.

In reality, logistical and image limitations often force you to follow less than ideal practices. It is quite common to only use 30 to 50 training pixels per category, and sometimes all these pixels may come from one site. If you do not follow the ideal rules laid out above, it is especially important to be aware of how the training sites may lead to confusion within the classification. An excellent overview of all these issues can be found in Congalton and Green (1999).
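One simple way to screen training pixels for non-representative members (the evaluation step mentioned earlier) is to flag pixels whose band values deviate strongly from the class mean. The sketch below is a hypothetical Python/NumPy illustration, not an ERDAS procedure; the two-standard-deviation threshold and the pixel values are arbitrary choices for the example.

```python
import numpy as np

def flag_outliers(pixels, z=2.0):
    """Flag training pixels that look non-representative: any pixel whose
    value in some band lies more than `z` standard deviations from that
    band's mean. The threshold `z` is an arbitrary choice."""
    pix = np.asarray(pixels, dtype=float)
    mu, sd = pix.mean(axis=0), pix.std(axis=0)
    return np.any(np.abs(pix - mu) > z * sd, axis=1)

# Nine made-up "wheat" pixels (2 bands) plus one that clearly differs.
wheat = [[48, 80], [49, 81], [50, 79], [50, 80], [50, 80],
         [50, 82], [51, 78], [52, 80], [50, 80], [120, 30]]
flags = flag_outliers(wheat)
print(flags)  # only the last pixel is flagged
```

Flagged pixels are candidates for removal from the signature, or a hint that the class should be split into two spectral categories.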

Your first order of business is to collect some training sites. Open up the Lamar_TM5 image as a true color composite. We will attempt to classify the following features on this image: (1) water; (2) trees; (3) gravel bar; (4) woody debris; (5) sage; and (6) willow/alder/sedge.

Figure 1. True color image of the Lamar River, 08-03-1999, Yellowstone National Park. The accompanying file (Lab_classification_photos) has ground photos of the key features shown on the image.


Also open the file Lab_classification_photos and take a look at the ground images for the various categories. You will want to think carefully about these ground images and their spectral reflectances as you answer the questions in this lab and evaluate your spectral signature files and classification.

1. What is the minimum number of pixels you should choose to use as training sites for any one feature type (e.g., water)?

2. Based on a visual examination of the true color composite (Figure 1) and the ground photos, which features do you think might be difficult to separate based on their spectral reflectances?

To create spectral signature files, you first select areas of interest that capture the spectral signal of the feature of interest (e.g., trees). The following steps lead you through some different ways of collecting areas of interest for your spectral signature files. The use of different techniques for creating areas of interest is intended to show you all the options for defining spectral signatures – you should not interpret this to mean that any method can be used at any time. You should consider the goals of the project and the accuracy requirements before choosing a particular method. For example, in my research work, I always use ground-based maps to define where my training sites are (described in section 1.1 immediately below).

1.1 Using the polygon tool to collect AOIs

In the View window, open up AOI/Tools and select the Polygon icon, which looks like this: . Locate 5 areas scattered across the image that are water and draw an AOI polygon at each site. In total, the number of pixels in the 5 sites should equal or exceed your answer to question 1 above, which will give you a sense of how big the AOIs should be. Remember, it is okay to collect more training pixels than the minimum required, but it is not okay to select fewer. By the same token, if you highlight all or a large majority of the water, you will have nothing left to classify, and your training pixels will have picked up so much within-water spectral variation that they will not work as well as a smaller number of “pure” water pixels. As you move from one AOI water site to the next, you will have to re-select the Polygon tool for each new AOI. Only select areas where it is clear you are selecting water and nothing else; in other words, avoid edges and anything that might be a mixed pixel.

After drawing all five AOIs, select them all in the Viewer window by clicking on AOI/Group. Select or deselect subsets of the AOIs by holding down the shift key while left clicking on an AOI with your mouse. After grouping the AOIs, select File/Save/AOI Layer as… in the Viewer window and save the AOI as water_training.aoi. This way your work will not disappear if you have a computer hiccup.

You have now created an AOI. The next step is to add it to a signature file, which is done in the Signature Editor window. Open Classifier/Signature Editor. In the Signature Editor window select Edit/Add, or click on the Add icon. The pixels within the AOI polygon boundaries will be added as a row of data (note: the AOI must be highlighted in the Viewer window before it will be added to the Signature Editor). Click on the Signature Name cell for the row and rename it Water. Click on the color cell and change it to blue.

Check the Count column to determine how many pixels you selected. If you do not have enough pixels to meet the criteria you outlined in question 1, go back to the AOI tool and select some more pixels from different sites. Then add the pixels to the Signature Editor.

Remove the water AOI from the screen by opening the View/Arrange Layers window, right clicking on the AOI layer and deleting it.

Save your signature file (not the AOI file) as Lamar_training before going on to the next step.

1.2 Using the seed growing tool to define AOIs

In your AOI window, select the Seed Properties tool (or go to AOI/Seed Properties through the Viewer window menu). Set the Euclidean distance to 50, then select the Seed Tool icon and click on a tree top. Locating tree tops accurately can be more difficult than you might imagine at first glance. Use shadows to help distinguish trees from other green features. (Note: I selected the Euclidean distance of 50 after some trial and error. Try resetting the distance to higher or lower numbers to see how this affects your AOI definitions.)
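Conceptually, the seed tool grows an AOI outward from the pixel you click, adding neighboring pixels whose band values lie within the chosen Euclidean distance of the seed pixel’s spectrum. The following Python/NumPy sketch is a simplified illustration of that idea; ERDAS’s exact growing rules may differ, and the image values are invented.

```python
import numpy as np
from collections import deque

def grow_region(image, seed, max_dist, max_pixels=None):
    """Grow an AOI from `seed` (row, col) in an (rows, cols, bands) image.
    A 4-connected neighbor joins the region if the Euclidean distance
    between its band values and the seed pixel's band values is
    <= max_dist. `max_pixels` optionally caps the region size."""
    rows, cols, _ = image.shape
    seed_vec = image[seed].astype(float)
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                d = np.linalg.norm(image[nr, nc].astype(float) - seed_vec)
                if d <= max_dist:
                    region.add((nr, nc))
                    queue.append((nr, nc))
                    if max_pixels and len(region) >= max_pixels:
                        return region
    return region

# Tiny 3x3, 2-band image: "trees" in the left two columns, "water" on the right.
img = np.array([[[60, 90], [60, 90], [20, 10]]] * 3)
print(len(grow_region(img, (0, 0), max_dist=50)))  # 6
```

Raising `max_dist` lets the region spill into spectrally different neighbors, which is exactly the behavior you see when you experiment with the Euclidean distance setting.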

Repeat this process for at least 5 (and preferably more) tree tops scattered across the image. Group the tree top AOIs and save them as tree_training.aoi.

Add the AOI as a layer to your Signature Editor. Give this row a Signature Name of Trees and assign it a dark green color. Save the revised signature file.

1.3 Using the inquire cursor to define AOIs

Change the band combinations on your Lamar_TM5 image to assign band 5 to red, band 4 to green, and band 3 to blue. This will highlight the woody debris much better than the true color composite does.

Accumulations of wood can be relatively small, so you want to be precise in locating your AOIs. A way to be precise is to use the Utility/Inquire Cursor to “seed” the AOI. The Inquire Cursor can be located precisely to the nearest pixel.

If you have closed it, reopen the AOI/Seed Properties window. Set the Euclidean Distance to 25 and the number of pixels to 50. Select Utility/Inquire Cursor or select the Inquire Cursor icon (the cross hair on the Viewer window). Locate it on top of a piece of wood and click the Grow at Inquire button in the Region Growing Properties window. Repeat this process for about 10 locations around the image.

If your experience is like mine, you will occasionally grab more material than you want. When this happens, delete that AOI.

Add the wood AOI as a layer to your Signature Editor. Give this row a Signature Name of Wood and assign it a Sienna color. Remember to save the revised signature file.

Delete all the AOIs from the Viewer window before proceeding to the next step. Don’t worry – as long as you have saved the AOI and Spectral Signature files, the areas of interest can be brought back up. Removing them from the Viewer window simply serves to simplify the screen before the next step.

1.4 Using feature space to collect AOIs

The approaches above either used: (1) spatial features to define the training site, which is what you did when you drew the polygons around the water, or (2) the spectral characteristics (Euclidean distance) within a certain spatial distance of a seed pixel to define a training site. One can also use spectral criteria only to define training pixels by using a feature called Feature Space.

Feature Space creates a scatter plot of the spectral values for two bands for all the pixels on the image. Using the Inquire Cursor, you can then explore where individual pixels from the image are located on the scatter plot. In turn, this information can be used to create an AOI that highlights all pixels with those spectral values.

In the Signature Editor window, select Feature/Create/Feature Space Layers. Use Lamar_TM5 as the input layer and also use Lamar_TM5 as the output root name. In the rows that open up under the output root name, highlight Lamar_TM5_3-4.fsp. This will create an image that plots band 3 versus band 4. Click OK.

Open a second viewer and display Lamar_TM5_3-4.fsp.img. This is a scatter plot of band 3 versus band 4 for all the pixel values in the image. The different colors indicate the number of pixels that have certain values: cooler colors (purple, blue) represent fewer pixels with those values, while warmer colors (yellow, red) indicate that a larger number of pixels have those values.
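A feature space layer is essentially a two-dimensional histogram of the image: each cell counts how many pixels have a particular pair of band values, and those counts are what the cool-to-warm colors display. As a hypothetical illustration (random values standing in for the real Lamar_TM5 bands), it can be computed like this:

```python
import numpy as np

# Hypothetical 8-bit bands; a real script would read them from the image.
rng = np.random.default_rng(0)
band3 = rng.integers(0, 256, size=(100, 100))
band4 = rng.integers(0, 256, size=(100, 100))

# Feature-space image: counts of pixels at each (band 3, band 4) value pair.
fs, _, _ = np.histogram2d(band3.ravel(), band4.ravel(),
                          bins=256, range=[[0, 256], [0, 256]])

# Every image pixel lands in exactly one feature-space cell.
print(int(fs.sum()))  # 10000
```

Drawing an AOI on this histogram and selecting all image pixels that fall in the enclosed cells is how Feature Space turns a purely spectral criterion into a set of training pixels.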