A very basic Bayer demosaicing algorithm (see here for a brief introduction) will effectively act as a low-pass (i.e., blurring) filter because it interpolates the missing colours from nearby neighbours. In smooth areas this is actually beneficial, since it helps to suppress image noise. On sharp edges (or detailed regions) it will obviously introduce unwanted blurring. Smarter demosaicing algorithms, such as Patterned Pixel Grouping (PPG) or Adaptive Homogeneity-Directed (AHD) interpolation, have crude mechanisms to detect edges. They can then avoid interpolating values across the edges, thus reducing blurring in areas with high detail or sharp edges. Unfortunately, because they use only local information (perhaps one or two pixels away), they cannot estimate edge orientation very accurately, and hence appear to introduce varying amounts of blurring, depending on the orientation of edges in the image.
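To make the low-pass effect concrete, here is a minimal one-dimensional sketch (not any particular demosaicer): at a site where red was not sampled, naive bilinear interpolation simply averages the nearest red neighbours, and across a sharp edge that average lands between the two sides, softening the transition.

```python
import numpy as np

# A sharp red edge, sampled at full resolution.
red_row = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# Pretend every second pixel is a non-red (green) site whose red value
# must be interpolated from its left and right red neighbours, as a
# naive bilinear demosaicer would do.
interpolated = red_row.copy()
for i in range(1, len(red_row) - 1, 2):
    interpolated[i] = 0.5 * (red_row[i - 1] + red_row[i + 1])

print(interpolated)  # the edge now contains an intermediate 0.5 value
```

The interpolated pixel at the edge takes the value 0.5, halfway between the two sides: exactly the behaviour of a low-pass filter applied to the edge profile.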
I set out to investigate this. The first step is to generate synthetic images for each channel (Red, Green, and Blue). These images are generated with a known edge orientation and a known edge sharpness (MTF50 value). The grayscale range of each image was manipulated so that the black level and the intensity range differ between channels, to match more closely how a real Bayer sensor behaves. A little bit of Gaussian noise was added to each channel. Next, the three channels are combined to form a Bayer image. Each pixel in the Bayer image comes from only one of the three channels, according to a regular pattern, e.g., RGRGRG on even image rows, and GBGBGB on odd rows.
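The mosaicing step can be sketched as follows; this is my own illustrative code (the function name `make_bayer` is not from the original tooling), assuming the RGGB layout described above: red/green on even rows, green/blue on odd rows.

```python
import numpy as np

def make_bayer(r, g, b):
    """Combine three full-resolution channels into a single RGGB Bayer
    mosaic: RGRG... on even rows, GBGB... on odd rows. Each output pixel
    keeps exactly one channel's value."""
    h, w = r.shape
    bayer = np.empty((h, w), dtype=r.dtype)
    bayer[0::2, 0::2] = r[0::2, 0::2]  # red sites (even row, even column)
    bayer[0::2, 1::2] = g[0::2, 1::2]  # green sites on even rows
    bayer[1::2, 0::2] = g[1::2, 0::2]  # green sites on odd rows
    bayer[1::2, 1::2] = b[1::2, 1::2]  # blue sites (odd row, odd column)
    return bayer

# Stand-in channels; the real experiment uses synthetic edge images
# with known MTF50, per-channel levels, and added Gaussian noise.
rng = np.random.default_rng(0)
r, g, b = (rng.random((4, 4)) for _ in range(3))
mosaic = make_bayer(r, g, b)
```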
Here is an example:
Using a set of these synthetic images with known MTF50 values, I could then measure the MTF50 values with and without Bayer demosaicing. The difference between the known MTF50 value for each edge and the measured MTF50 value is then expressed as a percentage of the known MTF50 value. This was computed over 15 repetitions at each edge angle from 3 degrees through 43 degrees (other edge orientations can be reduced to this range through symmetry). The result is a plot like this:
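For clarity, the error metric plotted is simply the relative deviation from the known value; a sketch (my own function name, not from the original tooling):

```python
def mtf50_error_percent(known_mtf50, measured_mtf50):
    """Relative measurement error, as a percentage of the known MTF50.
    Positive values mean the measured edge appears sharper than it
    actually is."""
    return 100.0 * (measured_mtf50 - known_mtf50) / known_mtf50

# e.g., a known MTF50 of 0.20 cycles/pixel measured as 0.21:
print(mtf50_error_percent(0.20, 0.21))  # ~5.0, i.e., a 5% overestimate
```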
This clearly shows a few things:
- Using the original synthetic full resolution red channel image (before any Bayer mosaicing) produces very accurate, unbiased results (blue boxes in the plot). The height of each box represents the width of the middle 50% of individual measurements (the interquartile range), i.e., the typical spread of values.
- Both PPG (black) and AHD (red) start off with a 5% error, which increases as the edge angle increases from 3 through 43 degrees. This upwards trend is the real problem -- a constant 5% error would have been fine. The values are positive, i.e., sharpness was higher than expected. This is because the green and blue channels both have higher MTF50 values, so some of that sharpness is bleeding through to the red channel.
- A variant of the MTF measurement algorithm, designed to run on the raw Bayer mosaiced image, was run on the red sites only -- see the green boxes. Since this means only 25% of the pixels could be used, the effect of noise is magnified (remember, the synthetic images had a little bit of Gaussian noise added). This increases the spread of the errors, but the algorithm remains unbiased, regardless of edge orientation.
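Extracting the red sites from the raw mosaic is straightforward under the RGGB layout assumed earlier; this sketch shows why only a quarter of the pixels contribute to the measurement:

```python
import numpy as np

def red_sites(bayer):
    """Return only the red samples of an RGGB mosaic. Red sits at even
    rows and even columns, i.e., every second pixel in each direction."""
    return bayer[0::2, 0::2]

mosaic = np.arange(36, dtype=float).reshape(6, 6)  # stand-in mosaic
reds = red_sites(mosaic)
print(reds.size / mosaic.size)  # 0.25 -- only 25% of pixels remain
```

With only a quarter of the samples, the same amount of per-pixel noise produces a noisier edge profile estimate, which is consistent with the wider green boxes in the plot.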
The data is quite clear: both PPG and AHD distort the MTF50 measurements in an orientation-dependent manner. These two algorithms are the most sophisticated ones included with dcraw. Other raw converters, such as ACR, may perform differently, and I intend to demonstrate their performance in future posts. I have not had time to investigate this in great detail, but I suspect that the AHD implementation in LibRaw does not produce the same result, based on a quick-and-dirty test.