Source: Jan Nelson and John Dwyer, USGS EROS

Yellowstone data showing burned area

An animation comparing satellite imagery captured over Yellowstone National Park on Oct. 10, 1988, in the aftermath of that year's fires. The first image is a natural color composite created from Landsat 5's Thematic Mapper (TM). The second image applies Landsat's Level-3 Burned Area product to the same scene. Image credit: USGS

Apr 10, 2019 • Landsat Level-3 burned area, dynamic surface water extent, and fractional snow and ice cover products became publicly available on EarthExplorer in February 2019. That means remote sensing scientists can now access a decades-long Landsat record of how those specific conditions have played out on every applicable 30-meter pixel of the American landscape.

That’s important when it comes to analyzing such things as trends through time or the impacts of climate variability. But Level-3 products are also valuable because they let scientists get straight to the science without having to work through all the data preparation first.

Researchers wanting to produce maps that show where climate change may have affected conditions on the landscape have long had to start by preprocessing the curvature of the Earth out of the data. They had to compensate for the effects of the atmosphere. They had to correct for the disparate look angles of the sensors, and for the distortions introduced by terrain.

To put it succinctly, accurately detecting genuine landscape change through remote sensing first requires the removal of ground distortions in the data. Do images align correctly across multiple dates? Are they being viewed at the same angle? Is the reflectance of the landscape consistent from image to image and sensor to sensor? Today, Landsat Level-1 products address and correct those radiometric and geometric issues, bringing consistency across the length of the Landsat archive and helping ensure that measured changes are due to Earth surface dynamics and not sensor differences.

After that, Landsat Level-2 algorithms account for the atmospheric effects—aerosols, water vapor, and other constituents between the spacecraft sensors and the Earth—providing a truer view of the landscape without the effects of the intervening atmosphere.
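In practice, Level-2 surface reflectance is delivered as scaled integers that users rescale before analysis. The sketch below shows that rescaling step; the scale factor of 0.0001 and fill value of -9999 are assumptions based on the Collection 1 surface reflectance format of that era, and the product guide for whichever collection is in hand should be checked before use.

```python
import numpy as np

# Assumed Collection 1 surface reflectance conventions (verify against
# the product guide for the collection you are actually using):
SCALE_FACTOR = 0.0001   # scaled int16 -> unitless reflectance
FILL_VALUE = -9999      # pixels with no valid measurement

def to_reflectance(band):
    """Convert a scaled-integer surface reflectance band to floats in [0, 1].

    Fill pixels are set to NaN so they drop out of downstream statistics.
    """
    band = np.asarray(band, dtype=np.float64)
    refl = band * SCALE_FACTOR
    refl[band == FILL_VALUE] = np.nan
    return refl

# Toy 2x2 band; real data would be read from the delivered GeoTIFF.
refl = to_reflectance([[1234, -9999], [5000, 10000]])
```

Here `to_reflectance` is a hypothetical helper name; the point is only that the atmospheric correction has already been done upstream, and the user's remaining job is bookkeeping, not physics.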

So, the Landsat Level-1 and Level-2 algorithms get the reflectance values as correct as possible, and provide masks for clouds, cloud shadows, adjacent clouds, land, and water. Then it’s the Landsat Level-3 algorithms that move users from reflectance to actual land cover—the gold that is the likely existence of burned areas, surface water extent, and snow and ice cover on every applicable pixel from Landsat 8 back through the Landsat 4-5 Thematic Mapper record.
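Consuming those Level-1/Level-2 masks typically means unpacking bit flags from a quality-assessment band before any thematic work. The sketch below shows the general pattern; the specific bit positions are assumptions modeled on the Collection 1 `pixel_qa` band layout and must be confirmed against the documentation for the collection in use.

```python
import numpy as np

# Assumed bit positions (Collection 1 pixel_qa-style layout; verify
# against the QA documentation for your collection):
BIT_WATER = 2
BIT_CLOUD_SHADOW = 3
BIT_CLOUD = 5

def bit_set(qa, bit):
    """Return a boolean array: True where the given QA bit is set."""
    return (np.asarray(qa) >> bit) & 1 == 1

def usable_land(qa):
    """True where a pixel is flagged as neither cloud, cloud shadow, nor water.

    A burned-area style algorithm would restrict itself to these pixels.
    """
    qa = np.asarray(qa)
    return ~(bit_set(qa, BIT_CLOUD)
             | bit_set(qa, BIT_CLOUD_SHADOW)
             | bit_set(qa, BIT_WATER))

# Toy QA values: clear, cloud, water, and cloud-shadow pixels.
mask = usable_land([0, 1 << 5, 1 << 2, 1 << 3])
```

The split of labor the article describes shows up directly here: the masks arrive ready-made with the Level-1/Level-2 products, and the Level-3 algorithms apply their thematic logic only on the pixels that survive them.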

The genesis of Level-3 products goes back a decade, to when the world was grappling with climate change issues and the USGS was deciding to make Landsat data free and open to the public, said John Dwyer, Science and Applications Branch Chief at EROS.

“There was a recognition that, ‘Look, if we’re going to deal with climate change … if we’re going to understand how the Earth’s system is changing … we have to have stable, consistent measurements over time,’ ” Dwyer said.

From those early conversations evolved what came to be known as Climate Data Records (CDR). Fundamental CDRs are the calibrated radiances that all space agencies keep in their data archives, Dwyer said. Then there are thematic CDRs, such as surface reflectance and surface temperature. Because of EROS’ work with calibrating the Landsat sensors, staff at the Center took the primary responsibility for developing those CDRs, Dwyer said.

Meanwhile, NASA was already developing the algorithms to generate Landsat surface reflectance. “So, when their funding was coming to an end, we just talked to them and said, ‘Hey look, obviously based on the work you guys did, the community likes (surface reflectance). How about if we transition it, take your codes and other information and implement it here at EROS toward generating a systematic product that we could make available to the community?’ ” Dwyer said.

Surface reflectance and surface temperature CDRs are used to generate what at first were called Essential Climate Variables, but now are known as the Landsat Level-3 science products. Initially, five potential Level-3 science products were identified: burned area, dynamic surface water extent, fractional snow and ice cover, global 30-meter land cover, and biomass. But funding and resource issues did in the global 30-meter land cover and biomass products, Dwyer said, leaving the three that are operational today.

As the project lead on those remaining Level-3 science products, Dwyer said he required the principal investigators (PIs) for each product to satisfy three criteria before their products could become operational:

  1. The PIs had to publish an algorithm description in peer reviewed literature;
  2. They were required to publish the methodology they used to characterize the uncertainties in their products once developed;
  3. They had to have stakeholders to whom they could provide initial provisional products and from whom they could get feedback on how the products fit their purposes.

“I prefer saying ‘characterizing uncertainties’ to the word ‘validation’ because validation to some people connotes it’s either right or wrong, whereas everything has an uncertainty,” Dwyer said. “And getting stakeholders engaged was important because we had to figure out, does it really satisfy the initial intention as well as the usability? Are the formats convenient? Is the metadata sufficient and the documentation sufficient?”

While generating these kinds of thematic products is not necessarily new, doing so for large geographic areas at each time step in the Landsat archive, in an automated way, has brought its own challenges, Dwyer said.

“Our going-in position was that we wanted to have an algorithm that is sufficiently robust to handle variability in geography but is tuned and trained sufficiently so that it performs consistently over large areas,” he said. “And if you can characterize the uncertainty, that’s important.

“I’ve always tried to make the argument that you work through trying to characterize the uncertainties because until you do that, you’re not really going to understand where the limitations are in your algorithm. And once you understand where the limitations are, you have a better chance of refining it and reprocessing the next collection and improving it.”

In developing higher-level products, EROS’ experience has generally been that it takes about six months before the user community accesses and examines the products in earnest, Dwyer said. Once they start doing that, he added, the question will become, “Does it pass the test?”

“Community feedback is really important to be sure that, A, people buy off on the fact that yeah, the information itself is usable,” Dwyer said. “And then question B involves how you formatted the data and metadata and documentation. Is that good enough, as well as how we make it discoverable and accessible? That’s the feedback we’ll be looking for.”