
Why don’t we use the highest resolution images available? Sometimes, less is more
One of the Sentinel-2 satellites prior to launch. The two Sentinel-2 satellites collect 10 and 20 m resolution data (depending on wavelength) across the whole world every 5 days. Photo Credit: IABG via ESA_events on Flickr
You can now buy commercial Earth Observation imagery at resolutions as fine as 31 x 31 cm per pixel – and yet, for most of what we do, you don’t need anything finer than about 20 m per pixel. Higher resolution than that often doesn’t make sense for things like landcover and deforestation mapping.
Take a savannah, for instance. To label a pixel as ‘savannah’, you would expect to find scattered trees and grass within it. At 20 m this is fine – your pixel will likely contain a combination of grass and trees. But at 1 m, almost every pixel will be either tree or grass – so you couldn’t really have a ‘savannah’ class at all. You’d just have ‘tree’ and ‘grass’ classes, which isn’t useful for a client. You would then likely resample back to 20 m with a rule that labels mixed tree-and-grass areas as ‘savannah’ – sketched below.
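As a rough illustration, here is a minimal Python sketch of that kind of aggregation rule. The class codes, the 20-pixel block size and the 20% mixing threshold are hypothetical choices for the example, not values from any real product or workflow.

```python
import numpy as np

# Hypothetical class codes for the 1 m map and the aggregated output.
GRASS, TREE, SAVANNAH = 0, 1, 2

def aggregate_to_savannah(classes_1m, block=20, mix_threshold=0.2):
    """Aggregate a fine tree/grass map to coarser pixels, labelling
    well-mixed blocks as 'savannah' and pure blocks by their majority class."""
    h, w = classes_1m.shape
    out = np.empty((h // block, w // block), dtype=np.uint8)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = classes_1m[i * block:(i + 1) * block,
                                j * block:(j + 1) * block]
            tree_frac = (window == TREE).mean()
            if mix_threshold < tree_frac < 1 - mix_threshold:
                out[i, j] = SAVANNAH          # a meaningful mix of tree and grass
            else:
                out[i, j] = TREE if tree_frac >= 0.5 else GRASS
    return out

# Example: a synthetic 100 x 100 map of 1 m pixels becomes a 5 x 5 map of 20 m pixels.
rng = np.random.default_rng(0)
fine = rng.integers(0, 2, size=(100, 100)).astype(np.uint8)
coarse = aggregate_to_savannah(fine)
```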
That might be okay if high-resolution data were free (though obviously computation costs rise markedly with resolution). But higher-resolution data costs a lot. Just because Google has bought and published expensive high-resolution imagery on Google Maps/Earth – often taken from planes rather than satellites – doesn’t mean we can download and use it. Putting usage restrictions to one side, the RGB images displayed on screen don’t capture the full dynamic range of the sensor and vary a lot in quality – they’re great for looking at by eye, but not good for scientific classification.
Additionally, higher-resolution satellites often carry no more than 3-4 wavelength bands. They typically lack the more interesting near-infrared bands that are useful for landcover classification and that are included on the lower-resolution Sentinel-2, which has 13 bands (at 10, 20 and 60 m resolution). You can therefore distinguish more classes with the coarser-resolution data.
Calibration of very high-resolution satellites is often terrible. Medium-to-high-resolution satellites working in the 10-30 m range (e.g. Sentinel-2 and Landsat) are built by space agencies that have spent hundreds of millions of dollars producing very high-quality, well-calibrated instruments. They deliver consistent, easy-to-use products, which makes it straightforward for us to combine lots of images into analysis-ready data. By contrast, very high-resolution data (<1 m pixels) is collected by lots of small, cheaper satellites that are not well calibrated against one another. They also orbit at different times of day, so sun angles differ between images. Think of it like a mosaic – a few large, matching sheets produce a much cleaner picture than a jumble of mismatched postage-stamp pieces. As a result, science-quality data from the Sentinels or Landsat produces more accurate maps with much less fiddling.
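To give a sense of what well-calibrated, analysis-ready inputs buy you: combining scenes can be as simple as a per-pixel median over a cloud-masked stack. This is only a toy sketch with random numbers and NaNs standing in for masked clouds, not our actual processing chain.

```python
import numpy as np

def median_composite(scenes):
    """scenes: array of shape (n_scenes, height, width, n_bands) holding
    reflectance values, with cloudy/invalid pixels set to NaN."""
    # With consistently calibrated inputs, a per-pixel median over time is
    # often enough to suppress clouds, shadows and outliers.
    return np.nanmedian(scenes, axis=0)

# Toy example: six hypothetical scenes, four bands, one masked-out "cloud".
stack = np.random.rand(6, 50, 50, 4)
stack[0, :10, :10, :] = np.nan
composite = median_composite(stack)   # shape (50, 50, 4)
```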
As touched on above, another factor is processing cost and time. Clients may initially think they need data at, say, 1 m resolution – when in reality around 1 ha (100 m) resolution is enough for most purposes. There are 10,000 1 m pixels in a single 100 m pixel (100 x 100), so 1 m data requires roughly 10,000 times the computing resources. We’d end up producing enormous files that the client probably wouldn’t even have the capacity to view in a useful way.
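The arithmetic is easy to check – pixel count (and hence storage and compute) scales with the square of the resolution ratio:

```python
# Pixels needed to cover one hectare (a 100 m x 100 m square) at each resolution.
for res_m in (100, 20, 10, 1):
    pixels_per_ha = (100 // res_m) ** 2
    print(f"{res_m:>3} m pixels: {pixels_per_ha:>6} per hectare")
# Prints 1, 25, 100 and 10000 pixels per hectare respectively.
```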
So we find that 20 m pixels – with ‘only’ 25 times more pixels than the 100 m (1 ha) data most clients need – are normally a good trade-off!