Lab VII: 1998 Siberian Fires
(This document last modified 2/25/09)
Objective: We will be using an AVHRR LAC scene to map the extent of forest fires that burned throughout a large area of eastern Siberia in 1998.
Preliminaries: I received the following email message from the Environmental News Network during the fall of 1998:
>ENN Daily News
>Siberian fires declared global disaster
>Monday, October 26, 1998
"This emergency is internationally significant. There are at least
three implications: possible effects on global climate, potential transboundary air pollution and large scale destruction of
biodiversity," the United Nation's said in a statement. From the point of
view of
As well, more than 1 million people have had their health affected by long-term exposure to smoke, a United Nation's team that just returned from inspection of the area reported. The fires started because the region has endured an extensive period of drought, normal rains are late, and due to economic constraints, the country lacks adequate resources to combat the blazes.
For today's lab, we are going to use 1 km resolution AVHRR thermal data to map the extent of these fires on August 15, 1998. I was able to download a series of these images for free over the internet from the following site:
Satellite Active Archive (NOAA)
Technical information on the AVHRR sensor, calibration data and information on data formats is provided in a very detailed online manual (NOAA Polar Orbiter Users Guide). You should scan the information in Section 1 of this guide to familiarize yourself with the characteristics of this satellite system and the AVHRR sensor. Note that there is a whole series of satellites in this family, ranging from TIROS-N (launched in 1978) through NOAA-15 (also known as NOAA-K, launched just a few months ago). Take a look at the orbital information for these satellites (Section 1.2). Note that on the ascending node of the orbit, these satellites pass overhead at a Local Solar Time (LST) of either 14:30 or 19:30. On the descending node, they pass over at an LST of either 07:30 or about 02:00.
The first step in ordering imagery is to decide what LST is optimal for acquiring the data you need for any given application. We want to map fires, and fires tend to burn with the highest intensity during early to mid-afternoon when air temperatures are high and relative humidity is low, so the 14:30 pass is a good choice. I did a search for available imagery acquired by NOAA-14 between 8/15/98 and 10/31/98 and selected six images at roughly two-week intervals. To save disk space and make processing a bit easier, I took the most cloud-free image and selected a subset of it between roughly N47° to N54° and E130° to E145°. Take a look at the map I have in the lab to orient yourself to this region. Our scene covers an area of roughly 1500 km by 1000 km.
PROCEDURES:
Step 1: Getting and viewing the imagery. The image you will need for this lab is in J:\SALDATA\esci442\Russia\Russiasubset.img (and the .rrd file!). This image was acquired on August 15, 1998. Copy both files to your folder in the C:\temp directory and view the image.
Our image is 691 pixels x 352 lines. Remember that we are using AVHRR data. This sensor records data with a 10-bit radiometric resolution; however, ERDAS only gives you the option of storing data with 4-, 8-, 16- or 32-bit resolution. For this reason, our russiasubset.img file stores the data as unsigned 16-bit bands.
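If you ever want to double-check pixel values outside of ERDAS during this lab, here is a minimal optional sketch of how the bands could be read into Python. It assumes the GDAL Python bindings are installed (GDAL can read ERDAS .img files) and that you copied the image to C:\temp as described above.

from osgeo import gdal
import numpy as np

ds = gdal.Open(r"C:\temp\Russiasubset.img")
print(ds.RasterXSize, ds.RasterYSize, ds.RasterCount)   # expect 691 x 352 and the 5 AVHRR bands

ch3 = ds.GetRasterBand(3).ReadAsArray().astype(np.float64)   # Channel 3 counts
ch4 = ds.GetRasterBand(4).ReadAsArray().astype(np.float64)   # Channel 4 counts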
Step 2: Calculating Sample Brightness Temperatures in Excel. I will put two papers in the notebook in the Huxley library. You will need to take a look at these papers to understand the calculations outlined below. The papers are:
Flannigan, M.D. and T.H. Vonder Haar. 1986. Forest fire monitoring using NOAA satellite AVHRR. Canadian Journal of Forest Research 16:975-982.
Chuvieco, E. and M.P. Martin. 1994. Global fire mapping and fire danger estimation using AVHRR images. Photogrammetric Engineering and Remote Sensing 60(5):563-570.
We need to convert the Digital Numbers for our thermal channels into a brightness temperature. In principle, this is quite simple, but the calculations are rather tedious. We will begin by concentrating on the data from Channel 4. Eventually, we will do the same thing for the Channel 3 data. We will need to use a series of calibration values for our calculations. These are known as "channel slope and intercept coefficients for radiometric calibration." The calibration values are different for each image, and there is a different set of values for each of the five AVHRR bands. They are normally stored in the header information for each image. We did quite a bit of pre-processing of this image to prepare it for you, and somewhere in that process the header information was removed from the file. Just to save you another step, here are the calibration values for our image:
Channel slope and intercept coefficients for radiometric calibration:

              Band 1        Band 2        Band 3        Band 4         Band 5
Slopes        0.1396000     0.1782000    -0.0016512    -0.1645208     -0.1830410
Intercepts   -5.7245998    -7.3077002     1.6384751    159.2084838    178.8158133
Note that these values are only appropriate for this image! Different calibration values must be used with the other images! The first column of values is for channel 1, the second is for channel 2, and so on.
Now go to the NOAA POD Users guide Section 1.4.10. You might want to print this out. Towards the end of this document, it presents the equations you need to convert from DN (what they refer to as "counts of the Channel") to Radiance (RAD). You first need to use their equation (3) to calculate "Linear Radiance" (Rlin) using the "count" from a given pixel and the slope and intercept given above. You can then calculate the Radiance (RAD) for that pixel using equation (4) and calibration coefficients in Table 1.4.10-3. Go to the Viewer and use the Inquire Cursor tool to extract DNs for Channel 4 for several different locations. Pick a large cloud, a cloud-free section of ocean and a cloud-free section of land. Search around on the land to get a representative DN value (NOTE: record the coordinates of each of your sample locations as well as the DNs; you will want to come back to these same points later in the lab). Then go into Excel and punch in these equations and enter your DNs to see if you get some realistic values.
The equation for converting RAD to temperature is in Section 3.3.1 of the NOAA POD Users Guide. You will need to use their equation (3.3.1-2) to calculate temperature based on your RAD value. Appropriate wave numbers for use in this equation come from Table 1.4.10-1. Add this temperature equation to your Excel spreadsheet and see if you come up with reasonable temperatures.
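If it helps to see the whole chain in one place, here is a minimal sketch of the same Channel 3 calculation in Python. The slope and intercept come from the calibration table above, and the A, B, D coefficients, wave number and Planck constants are the Channel 3 values that appear in the ERDAS models later in this lab; the DN is just a made-up placeholder, so substitute the counts you read with the Inquire Cursor.

import math

SLOPE3, INTERCEPT3 = -0.0016512, 1.6384751     # Channel 3 calibration, this image only
A3, B3, D3 = 1.00359, 0.0, -0.0031             # nonlinear correction coefficients (Table 1.4.10-3)
C1, C2 = 1.1910659e-5, 1.438833                # constants from Section 3.3.1
NU3 = 2645.899                                 # Channel 3 central wave number (Table 1.4.10-1)

def brightness_temp_ch3(dn):
    rlin = SLOPE3 * dn + INTERCEPT3                           # equation (3): linear radiance
    rad = A3 * rlin + B3 * rlin**2 + D3                       # equation (4): corrected radiance
    temp_k = C2 * NU3 / math.log(1.0 + C1 * NU3**3 / rad)     # equation (3.3.1-2): brightness temperature
    return rlin, rad, temp_k

rlin, rad, temp_k = brightness_temp_ch3(580)                  # 580 is a placeholder DN
print(rlin, rad, temp_k, temp_k - 273.15)

A cloud-free land pixel on a summer afternoon should come out somewhere near typical surface temperatures; if you are getting wildly unreasonable numbers, recheck which slope, intercept and wave number you are using.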
Create an Excel spreadsheet that looks something like this:
        | DN4 | G | I | Rlin | A | B | D | RAD | C2 | C1 | Wave# | Temp (K) | Temp (C)
land    |     |   |   |      |   |   |   |     |    |    |       |          |
ocean   |     |   |   |      |   |   |   |     |    |    |       |          |
cloud   |     |   |   |      |   |   |   |     |    |    |       |          |
fire    |     |   |   |      |   |   |   |     |    |    |       |          |
You will need to create one of these to do the calculations for Band 4 and another to do the calculations for Band 3. The constants that you use will be somewhat different for each band. You should also record the coordinates for each of your targets (land, cloud, ocean and fire). We will be creating models (below) that will enable us to create a temperature image channel from band 3 and from band 4. As a check on your models, you can go back to your sample pixels and see if the temperatures calculated by your models in ERDAS are the same as those calculated by your Excel spreadsheet.
Step 3: Creating Models to Calculate Brightness Temperatures for Your Image. You will need to create models in ERDAS to calculate Rlin, RAD and Brightness Temperature. For Rlin for Channel 3 of our russiasubset.img file, your model might look something like this:
EITHER 0 IF ( $n1_russiasubset(3)==0 ) OR 10000 * (-0.0016512 * $n1_russiasubset(3) + 1.6384751) OTHERWISE
Note: you can use the copy/paste function (Ctrl-C and Ctrl-V) to copy this expression from the web page and paste it into the function in your modeling window. The first half of the model above ensures that the "background" (outside the bounds of our image) retains a value of 0. The multiplication by 10,000 is needed because the values we get here will all be less than 1, and so would be recorded in the image only as values of either 0 or 1. Take another look at the calculations you did in Excel to understand why this is so. If our image data were recorded as real numbers instead of integers, this would not be a problem. We want to retain the information here, so this trick is needed.
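If the reason for the multiplication is not obvious, here is a tiny Python illustration; the radiance value is just an example of the sub-1 numbers you saw in your spreadsheet.

rlin = 0.6808                       # a typical Channel 3 linear radiance
print(int(round(rlin)))             # prints 1: stored as a plain integer, the detail is lost
print(int(round(rlin * 10000)))     # prints 6808: the x10000 trick preserves four decimal places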
Now come up with a model to calculate Radiance for Channel 3. Your model for this task might look something like:
EITHER 0 IF ( $n1_rlin3==0 ) OR 10000 * (1.00359 * $n1_rlin3/10000 + 0 * $n1_rlin3 /10000 * $n1_rlin3 /10000 -0.0031) OTHERWISE
Note the division by 10000 and then the multiplication by 10000. Do you understand why this is needed again?
Note added 11/26/03: For reasons that I don’t completely understand, ERDAS seems to have difficulty generating the temperature image if you use the Radiance model that is presented above. My original approach should have worked. I have no idea why it didn’t. Try using this new radiance model instead:
EITHER 0 IF ( $n1_rlin3==0 ) OR (1.00359 * $n1_rlin3/10000 + 0 * $n1_rlin3 /10000 * $n1_rlin3 /10000 -0.0031) OTHERWISE
Note that this statement differs from the one above in that the radiance value IS NOT multiplied by 10000. This means that the result of this model will include values less than one. Therefore, it is CRITICAL that the resulting raster layer is stored as a SINGLE FLOAT rather than as an UNSIGNED 16-BIT image. The big disadvantage of this approach is that the SINGLE FLOAT image that is produced has a file size that is about twice that of an UNSIGNED 16-BIT image.
Now come up with a model for calculating brightness temperature and place the result in a new 16-bit image file. Your model for this task might look something like:
EITHER 0 IF ( $n1_rad3==0 ) OR 1.438833 * 2645.899 / LOG ( 1 + 1.1910659e-5 * 2645.899 * 2645.899 * 2645.899 /($n1_rad3/10000)) OTHERWISE
The new model for calculating temperature using the output from the new radiance model above:
EITHER 0 IF ( $n1_rad3==0 ) OR (1.438833 * 2645.899 / LOG ( 1 + 1.1910659e-5 * 2645.899 * 2645.899 * 2645.899 /$n1_rad3)) OTHERWISE
Because the output of the new radiance model is a SINGLE FLOAT rather than an UNSIGNED 16-bit image, the new temperature model differs from the original model in that the radiance value does not need to be divided by 10000. The output of this temperature model can be stored as an unsigned 16-bit image.
Use the cursor to move around the image and take a look at some of these temperatures. Are they reasonable? Keep in mind that these are in degrees Kelvin. Why do some of the cloud tops have relatively high apparent brightness temperatures (comparable to some areas on the ground)? Does Channel 3 exclusively measure emitted energy?
Now you will need to create new models to do these same calculations for Channel 4. All of the coefficients will need to be changed but the form of the equations remains the same. As you create these new models, you will also need to change the multiplier you use. Is a value of 10,000 still appropriate? In your Excel spreadsheet, take a look at the expected range of values for Rlin and RAD from Band 4 and compare them to the values you got for Rlin and RAD from Band 3.
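For comparison with your ERDAS output, here is a minimal Python sketch of the same chain written once for any channel and applied to a whole band with NumPy. The Channel 4 slope and intercept are from the calibration table above; the a, b, d and nu arguments are left for you to supply from POD Guide Tables 1.4.10-3 and 1.4.10-1, since they are not the Channel 3 values used earlier.

import numpy as np

SLOPE4, INTERCEPT4 = -0.1645208, 159.2084838   # Channel 4 calibration, this image only
C1, C2 = 1.1910659e-5, 1.438833                # constants from Section 3.3.1

def brightness_temp(dn, slope, intercept, a, b, d, nu):
    """DN array -> brightness temperature in K, keeping the background (DN == 0) at 0."""
    rlin = slope * dn + intercept
    rad = a * rlin + b * rlin**2 + d
    temp = C2 * nu / np.log(1.0 + C1 * nu**3 / rad)
    return np.where(dn == 0, 0.0, temp)

# Example for Channel 4 (fill in a4, b4, d4 and nu4 from the POD Guide tables):
# t4 = brightness_temp(ch4, SLOPE4, INTERCEPT4, a4, b4, d4, nu4)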
Step 4: Cloud Masking. Before we locate the fires, we first need to mask out the portions of our image that are obscured by clouds. We can do this using our Channel 4 temperature layer. For pixels with dense clouds, the radiometer is actually recording the temperature of the cloud top, not the earth's surface. All we need to do is to sample the Channel 4 temperature layer and decide on a threshold value that separates cloud tops from apparently cloud-free areas. This is a bit subjective. Use the same modeling approach you used in the previous steps to set the cloudy pixels to zero in both your Channel 3 and Channel 4 temperature layers.
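The logic of the mask, sketched in Python under the assumption that t3 and t4 are NumPy arrays of the Channel 3 and Channel 4 brightness temperatures (in K) and that cloud_threshold_k is whatever Channel 4 temperature you settle on after sampling cloud tops and cloud-free areas:

import numpy as np

def apply_cloud_mask(t3, t4, cloud_threshold_k):
    """Zero out pixels that the Channel 4 temperature identifies as cloud tops."""
    cloudy = (t4 > 0) & (t4 < cloud_threshold_k)   # cloud tops are colder than cloud-free ground
    return np.where(cloudy, 0.0, t3), np.where(cloudy, 0.0, t4)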
Step 5: Mapping Fires. From the two papers you read (especially Flannigan and Vonder Haar 1986), you know that fires can be identified based on the difference in temperature between Channels 3 and 4. Pixels that contain a very hot target (a fire) will have an apparent temperature in Channel 3 that is about 20 to 40 degrees hotter than the apparent temperature in Channel 4. All you need to do is create one final temperature layer by subtracting the (cloud-masked) Channel 4 temperature layer from the (cloud-masked) Channel 3 temperature layer. Now examine this differenced image and determine a threshold (subjectively) that seems to identify things that look like they could be fires. How many pixels contain a fire? How many square kilometers does this cover, compared to the total (cloud-free) area of your scene?
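A minimal sketch of that final step, assuming t3_masked and t4_masked are the cloud-masked temperature arrays from the previous step and fire_threshold_k is the Channel 3 minus Channel 4 difference you choose; pixel area is treated as roughly 1 square km here, although the true LAC footprint grows away from nadir:

import numpy as np

def map_fires(t3_masked, t4_masked, fire_threshold_k):
    valid = (t3_masked > 0) & (t4_masked > 0)             # cloud-free pixels inside the image
    diff = np.where(valid, t3_masked - t4_masked, 0.0)    # the differenced temperature layer
    fires = valid & (diff > fire_threshold_k)
    fire_area_km2 = int(fires.sum()) * 1.0                # roughly 1 square km per pixel
    cloud_free_area_km2 = int(valid.sum()) * 1.0
    return fires, fire_area_km2, cloud_free_area_km2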
Step 6: Prepare a Lab Report. You should have quite a bit to talk about. How extensive do you think the burning was? Do you have any estimates of the area that was burned? Clouds probably obscured your view of some of the study area. Nevertheless, you can probably make some informed guesses.
Created by David Wallin