False-colors removal on the YCrCb color space

Valeria Tomaselli, Mirko Guarnera, Giuseppe Messina

STMicroelectronics, AST Imaging Team, Bld. M5, Stradale Primosole 50, 95121 Catania, Italy

ABSTRACT

Postprocessing algorithms are usually placed in the pipeline of imaging devices to remove residual color artifacts introduced by the demosaicing step. Although demosaicing solutions aim to eliminate, limit or correct false colors and other impairments caused by non-ideal sampling, postprocessing techniques are usually more powerful in achieving this purpose. This is mainly because the input of postprocessing algorithms is a fully restored RGB color image. Moreover, postprocessing can be applied more than once, in order to meet some quality criteria. In this paper we propose an efficacious technique for reducing the color artifacts generated by conventional color interpolation algorithms, in the YCrCb color space. This solution effectively removes false colors and can be executed while the edge emphasis process is performed.

Keywords: False colors, YCrCb, median filter, dynamic range.

1. INTRODUCTION

A color image usually consists of three channels per pixel, each carrying the information of a specific wavelength sensitivity band (red, green, or blue) to allow color processing and display. Acquiring such an image would require cameras with three spatially aligned sensors, each preceded by a different color filter to capture the information for a given part of the visible spectrum. However, to reduce size, cost, and image registration errors, most digital cameras have only a single sensor with a color filter array (CFA) placed in front of it. Consequently, only one color is captured at each spatial location. Fig. 1 shows the most popular CFA pattern, which is known as the Bayer pattern [1]. In the Bayer arrangement, the green component is sampled at double the rate of the red and blue components, and it is arranged in a checkerboard pattern. The red and blue components are placed alternately in the remaining locations. This is because the green channel is a good approximation of the luminance, to which the human eye is more sensitive.

G R G R G
B G B G B
G R G R G
B G B G B
G R G R G

Fig. 1. Bayer pattern.
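To make the CFA sampling concrete, here is a minimal Python sketch (ours, not part of the paper) that mosaics a full RGB image with the GRBG layout of Fig. 1; the function name bayer_mosaic is hypothetical.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full RGB image (H x W x 3) with a GRBG-style Bayer pattern.

    Returns a single-channel CFA image where each pixel keeps only one
    of the three color values, as captured by a single-sensor camera.
    """
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w), dtype=rgb.dtype)
    # Green on a checkerboard: (even row, even col) and (odd row, odd col).
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 1]
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 1]
    # Red on even rows, odd columns; blue on odd rows, even columns.
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 0]
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 2]
    return cfa
```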

To restore a full color image from the CFA samples, the two missing color values at each pixel need to be estimated. This process is referred to as color interpolation or demosaicing. Hundreds of papers on demosaicing solutions have been published in recent years, exploiting many different approaches; an extensive survey of these approaches can be found in [2]. Traditional color interpolation methods usually produce color edge artifacts in the image. The two main types of demosaicing artifacts are usually named false colors and zipper effect. False colors are evident color errors which arise near object boundaries, whereas zipper effect artifacts manifest as “on-off” patterns and are caused by erroneous interpolation across edges. As discussed in [3], a well-performing demosaicing technique has to exploit two types of correlation: spatial correlation and spectral correlation. According to the first principle, within a homogeneous image region, neighboring pixels have similar color values, so a missing value can be retrieved by averaging the pixels which belong to the same object. Spatial correlation is well exploited by those techniques which interpolate missing information along edges, and not across them [4-8]. These algorithms are less affected by zipper effect, which usually appears when directional information is disregarded.

Spectral correlation states that there is a high correlation between the three color channels (R, G, B) in a local image neighborhood. For real world images the color difference planes (∆GR=G-R and ∆GB=G-B) are rather flat over small regions, and this property is widely exploited in demosaicing and antialiasing techniques. For example, some techniques median filter the color difference values in order to make pixels more similar to their neighbors, thus reducing false colors [9]. Furthermore, Gunturk et al. in [10] have demonstrated that the high frequency components of the three color planes are highly correlated, by calculating the correlation values between the detail wavelet coefficients. In addition, Lian et al. in [11] have reported that the detail wavelet coefficients are also very similar to each other. This suggests that if there is a strong edge in the R channel, there is usually a strong edge at the same location in the G and B channels. On the contrary, the YCrCb domain is less correlated, as demonstrated in [12]. Although edges still tend to be strong in the Y (luminance) plane, the chrominance planes (Cr and Cb) are smoother than the RGB planes, and hence they are more suitable for interpolation. For this reason, some recent papers propose to perform demosaicing in the YCrCb color space [13-15].
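As a rough illustration of the color-difference median filtering mentioned above (the approach cited as [9]), the following sketch is our own formulation and assumes SciPy's median filter; function names and the window size are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_color_difference(rgb, size=3):
    """Median-filter the color difference planes (G-R and G-B) to make
    pixels more similar to their neighbors, reducing false colors."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    dgr = median_filter(g - r, size=size)
    dgb = median_filter(g - b, size=size)
    # Re-derive R and B from the smoothed differences, keeping G untouched.
    out = rgb.astype(float)
    out[..., 0] = g - dgr
    out[..., 2] = g - dgb
    return np.clip(out, 0, 255).astype(rgb.dtype)
```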

Although demosaicing solutions aim to eliminate false colors, some artifacts still remain, especially in the chrominance planes. Thus imaging pipelines often include a post-processing module, with the aim of removing residual artifacts [16]. Post-processing techniques are usually more powerful in achieving false-colors removal and sharpness enhancement, because their inputs are fully restored color images. Moreover, they can be applied more than once, in order to meet some quality criteria. In this paper we propose an effective post-processing antialiasing algorithm which operates in the YCrCb color space. Since the luminance plane is usually faithfully reconstructed by state-of-the-art demosaicing techniques, residual artifacts reside in the chrominance planes. To validate this assertion we took the well-known “lighthouse” full color image from the Kodak database, generated its Bayer pattern and interpolated it with the directional filter color interpolation in [5]. Fig. 2 shows a comparison between the original full color image and the interpolated image.


Fig. 2. Comparison between (a) the original Kodak image and (b) the interpolated image.

From fig. 2 it is possible to notice that a lot of false colors are introduced in the color interpolated image with respect to the original one. If we look at the luminance and chrominances (fig. 3) of both the original Kodak image and the interpolated image, we can observe that false colors are due to a bad chrominance interpolation. In particular, the luminance of the interpolated image (fig. 3.b) does not differ much from the luminance of the original Kodak image (fig. 3.a). On the other hand, the chrominances of the original Kodak image (figs. 3.c and 3.e) are smoother than the chrominances of the interpolated image (figs. 3.d and 3.f). As a consequence, false colors can be removed by acting just on the chrominance planes, which gives the opportunity to simultaneously perform edge enhancement on the luminance plane.


Fig. 3. Comparison between the original Kodak image and the interpolated image: (a) luminance original, (b) luminance interpolated, (c) chrominance Cr original, (d) chrominance Cr interpolated, (e) chrominance Cb original, (f) chrominance Cb interpolated.

The simplest way to remove color artifacts consists in correcting both the chrominance planes by simply blurring them. One liability with this approach is that there is no discrimination between false colors and genuine chrominance details. Consequently, sharp colored edges in the image begin to bleed color as the blurring becomes more aggressive. Adams et al. in [17] address the problem of eliminating low-frequency colored patterns, such as color moiré, by filtering chrominances according to an activity value depending on the nearby luminance and chrominances.

2. PROPOSED FALSE-COLORS REMOVAL

As discussed in Section 1, false colors are due to errors in the chrominance interpolation, so they can be removed by filtering only the chrominance planes. Therefore, the false-colors removal step on the chrominances can be performed while the edge enhancement step is executed on the luminance plane, as shown in fig. 4. Since the color space conversion is in any case executed before the JPEG encoding, this step does not contribute to the complexity of the proposed algorithm. Although the luminance plane is not filtered by the proposed technique, fig. 4 shows that it is provided as an input in order to modulate the strength of the filter to be applied on the chrominances. In particular, the stronger a luminance edge is, the heavier the filtering action on the chrominances should be, and vice versa.

[Fig. 4 block diagram: Scalar Proc. → Noise Proc. → Color Interpolation → Color Matrix → Gamma Proc. → RGB2YCC; the resulting Y plane feeds the sharpening block, the Cr and Cb planes feed the false-colors removal block, and both outputs go to the encoder.]

Fig. 4. Block diagram showing a configuration of an image processing pipeline where the proposed YCC false-colors removal is applied while sharpening is performed.
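As a rough sketch of the configuration in fig. 4 (ours, assuming OpenCV for the RGB-to-YCrCb conversion; remove_false_colors and sharpen are placeholders for the corresponding blocks), the chrominance cleaning and the luminance sharpening can share the same YCrCb data:

```python
import cv2  # assumed available for the RGB <-> YCrCb conversion

def postprocess_ycc(rgb_interpolated, remove_false_colors, sharpen):
    """Pipeline-stage sketch: split the demosaiced image into Y, Cr, Cb,
    run false-colors removal on the chrominances while the luminance is
    sharpened, then merge the planes for the encoder."""
    ycc = cv2.cvtColor(rgb_interpolated, cv2.COLOR_RGB2YCrCb)
    y, cr, cb = cv2.split(ycc)
    cr_clean, cb_clean = remove_false_colors(y, cr, cb)  # Y modulates the filter strength
    y_sharp = sharpen(y)
    return cv2.merge([y_sharp, cr_clean, cb_clean])
```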

Instead of blurring the chrominance planes with an average filter, a median filter can be applied, in order to remove spikes and valleys from these signals, which usually change smoothly. A median filter can remove false colors pretty well from image edges, but it could introduce color bleeding artifacts at sharp colored edges. This implies that the chrominance values of a pixel should be modified with respect to the luminance and local chromatic dynamic ranges, in order not to reduce chromaticity too much in regions with uniform colors (cf. [18]). For this reason, the dynamic chromatic ranges (DCr and DCb) and the dynamic luminance range (DY) are evaluated in a 5x5 neighborhood of the pixel to be corrected. For each pixel of interest, the dynamic luminance and chrominance ranges are computed as the difference between the maximum and the minimum value in the local neighborhood, as expressed in (1).

DCr = \max_I(Cr) - \min_I(Cr)
DCb = \max_I(Cb) - \min_I(Cb)
DY = \max_I(Y) - \min_I(Y)        (1)

where I is the local neighborhood of the central pixel, which can be a 5x5 window.
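A minimal reading of equation (1) in code, assuming the Y, Cr and Cb values of the 5x5 window are available as NumPy arrays (the function name is ours):

```python
import numpy as np

def dynamic_ranges(y_win, cr_win, cb_win):
    """Dynamic luminance and chrominance ranges over a local window
    (e.g. 5x5), as in equation (1)."""
    dy = float(y_win.max() - y_win.min())
    dcr = float(cr_win.max() - cr_win.min())
    dcb = float(cb_win.max() - cb_win.min())
    return dy, dcr, dcb
```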

Both the dynamic chromatic and luminance ranges are used to calculate a parameter, named CorrectionFactor, which determines the strength of the filter on the chrominances. At a sharp edge, the filtering action should be strong, in order to remove possible false colors around it. On the contrary, if the luminance edge is weaker than both the chrominance edges, color bleeding has to be avoided by reducing the strength of the filter. Thus, the following equation is used to calculate this parameter:

CorrectionFactor = \begin{cases} DY & \text{if } DY = \min(DY, DCr, DCb) \\ \max(DY, DCr, DCb) & \text{otherwise} \end{cases}        (2)
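Equation (2) reduces to a simple selection rule; the sketch below is ours and takes the dynamic ranges of equation (1) as inputs:

```python
def correction_factor(dy, dcr, dcb):
    """Equation (2): keep the correction weak when the luminance edge is
    weaker than both chrominance edges, strong otherwise."""
    if dy == min(dy, dcr, dcb):
        return dy
    return max(dy, dcr, dcb)
```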

The CorrectionFactor, which is a value ranging from 0 to 255 (for 8-bit images), determines the strength of the false-colors correction, according to equation (3).

Cr = medianCr + f(CorrectionFactor) \cdot (originalCr - medianCr)
Cb = medianCb + f(CorrectionFactor) \cdot (originalCb - medianCb)        (3)

where f(x) is defined by (4).

f(x) = e^{-\frac{x^2}{2\sigma^2}}, \quad x \in [0, 255]        (4)

It is straightforward to note that equation (3) updates each chrominance value with a weighted average of the original chrominance (originalCr and originalCb) and the median value of the chrominance in the neighborhood (medianCr and medianCb). The weights depend on the CorrectionFactor parameter, through the function f(x).

The function f(x) is the right half of a Gaussian function with expected value equal to zero and standard deviation equal to sigma. Fig. 5 illustrates the trend of f(x), for a given sigma value (sigma = 10).


Fig. 5. Graph of f(x) vs. x, with sigma = 10.

The function f(x) rapidly decreases as x increases, according to equation (4). The value of the standard deviation sigma determines how fast f(x) approaches zero.

With reference to equation (3), low values of the CorrectionFactor imply a greater contribution of the original chrominance value; on the contrary, as the CorrectionFactor increases, a higher weight is assigned to the median value. The function f(x) avoids discontinuous corrections when the dynamic ranges change; in fact, the proportions of the original value and of the median filtered value are continuously varied to form the final value. This soft-threshold methodology avoids abrupt transitions between corrected and non-corrected pixels, thus producing high quality images.
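Putting equations (3) and (4) together, a possible per-pixel formulation (ours, assuming precomputed local chrominance medians) is:

```python
import numpy as np

def gaussian_weight(x, sigma=10.0):
    """Equation (4): right half of a zero-mean Gaussian, for x in [0, 255]."""
    return np.exp(-(x ** 2) / (2.0 * sigma ** 2))

def correct_chrominance(original, median, corr_factor, sigma=10.0):
    """Equation (3): blend the original chrominance with its local median.

    A large CorrectionFactor (strong luminance edge) pushes the result
    towards the median, removing false colors; a small one preserves the
    original value, avoiding color bleeding.
    """
    w = gaussian_weight(corr_factor, sigma)
    return median + w * (original - median)
```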

One critical situation for the approach described so far is the case of oppositely correlated chrominances at a strong edge, as can be seen in fig. 6.


Fig. 6. Comparison between: (a) original image, (b) image processed through equations (1)-(4) on a 5x5 window.

From fig. 6.b it is possible to notice a strong desaturation of genuine chrominance details, due to the application of equations (1)-(4) on a 5x5 window. Fig. 7 illustrates the luminance and chrominances of the crop shown in fig. 6.a. It is evident that strong edges appear in all three planes, and that the two chrominance planes are oppositely correlated. The median filtering approach, by eliminating the valleys from chrominance Cr and the spikes from chrominance Cb, strongly reduces the saturation of genuine chrominance details.


Fig. 7. (a) Luminance, (b) Chrominance Cr, (c) Chrominance Cb of the crop in fig. 6.a.

To solve this issue without negatively impacting the false colors reduction, the cross correlation between the luminance and chrominance planes is evaluated. More specifically, if the luminance plane is correlated with the chrominance plane having the greatest variation, while the two chrominance planes are oppositely correlated with each other, the chrominance correction is not performed using the formulas in (3).

In this case, let k be the index of the generic element within an N×N mask, with k ∈ [1, N×N], and let c be the index of the central pixel of the mask. The rules for updating the chrominance planes, in the case of a strong edge and oppositely correlated chrominances, are:

Cr[c] = \frac{1}{sumweight} \sum_{k=1}^{N \times N} W_k \cdot Cr[k]
Cb[c] = \frac{1}{sumweight} \sum_{k=1}^{N \times N} W_k \cdot Cb[k]        (5)

where:

W_k = \frac{1}{1 + \lvert Y[k] - Y[c] \rvert}, \qquad sumweight = \sum_{k=1}^{N \times N} W_k        (6)

This means that the correction process is carried out by a weighted average, in which the pixels having the smallest luminance differences from the central pixel are weighted most.

Moreover, the Cr and Cb chrominances of the central pixel (Cr[c] and Cb[c]) are updated in this way only if at least one W_k with k ≠ c exists such that W_k > 0.6. This condition ensures that, when the luminance of the central pixel differs greatly from all the other luminances in the neighborhood, the chrominances of the central pixel are left unfiltered.
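A sketch of the fallback correction of equations (5) and (6), including the W_k > 0.6 condition; the weight formula reflects our reconstruction of equation (6), and the function name is ours:

```python
import numpy as np

def luminance_weighted_chrominance(y_win, cr_win, cb_win, threshold=0.6):
    """Equations (5)-(6): luminance-driven weighted average of the
    chrominances in an N x N window, applied only if at least one
    neighbor has a weight above `threshold`."""
    y = y_win.astype(float).ravel()
    cr = cr_win.astype(float).ravel()
    cb = cb_win.astype(float).ravel()
    c = y.size // 2                        # index of the central pixel (odd N)
    w = 1.0 / (1.0 + np.abs(y - y[c]))     # equation (6), as reconstructed
    neighbors = np.delete(w, c)
    if not np.any(neighbors > threshold):
        return cr[c], cb[c]                # leave the central pixel untouched
    sumweight = w.sum()
    return (w * cr).sum() / sumweight, (w * cb).sum() / sumweight
```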

3. EXPERIMENTAL RESULTS

In this section, we will show the experimental results obtained by applying the proposed false-colors removal in the YCC color space to the well-known edge sensing algorithm [19] and to the directional interpolation proposed in [5]. We will demonstrate how the performance of a simple demosaicing algorithm, such as edge sensing, can be drastically improved by means of the proposed false-colors removal technique. Although this demosaicing algorithm exploits spatial correlation quite well, it does not take advantage of spectral correlation, thus outputting images affected by false colors. Comparisons will also be made with another state-of-the-art technique, namely the alternating projections method proposed by Gunturk et al. in [10]. The standard Kodak image test set has been used for this purpose.

Figs. 8 and 9 illustrate two ROIs (regions of interest) cropped from the original and interpolated images. In particular, fig. 8 is a crop of Kodim9 and fig. 9 is a crop of Kodim13. In each of these figures, (a) represents the ROI of the original image, (b) is the result of the interpolation by Gunturk, (c) is the result of the edge sensing algorithm, (d) is obtained by applying the proposed algorithm to the output of edge sensing, (e) is the result of the patent in [5] and, finally, (f) is obtained by applying the proposed technique to the output of the directional color interpolation in [5].

Fig. 8 shows how the proposed technique is able to drastically remove false colors from the output of both edge sensing and directional interpolation. Gunturk's approach, which usually produces high quality results, still retains some artifacts such as zipper effect, as shown in fig. 8.b. Edge sensing and directional interpolation followed by the YCC antialiasing produce better results. From fig. 9 it is also possible to note that Gunturk's demosaicing does not remove all the false colors, whereas the proposed YCC antialiasing effectively removes false colors from images interpolated by both edge sensing (fig. 9.d) and directional interpolation (fig. 9.f). In this case, edge sensing followed by the proposed antialiasing outperforms Gunturk's technique from the visual standpoint. This means that it is possible to reduce the overall complexity of the image generation pipeline by using edge sensing interpolation, which has a low complexity compared to Gunturk's algorithm, followed by the proposed YCC antialiasing, which can be executed while edge enhancement is performed, also sharing the same memory lines.

4. CONCLUSIONS

In this paper we presented a new postprocessing false-colors removal technique in the YCrCb color space, which effectively eliminates false colors from demosaiced images. Since false colors arise because of wrong chrominance interpolation, this technique filters only the chrominance planes, also taking advantage of luminance information, which contributes to modulating the strength of the correction. A configuration of an image processing pipeline was also proposed, where the YCC false-colors removal technique can be applied while sharpening is performed on the luminance plane, thus reducing the overall memory requirements.

This technique has been efficaciously applied to the well-known edge sensing algorithm, producing images with higher visual quality at a reduced complexity.


Fig. 8. Visual comparisons: (a) Original Kodak image, (b) Gunturk, (c) Edge sensing, (d) Edge sensing followed by the proposed YCC technique, (e) Directional color interpolation, (f) Directional color interpolation followed by the proposed YCC technique.


Fig. 9. Visual comparisons: (a) Original Kodak image, (b) Gunturk, (c) Edge sensing, (d) Edge sensing followed by the proposed YCC technique, (e) Directional color interpolation, (f) Directional color interpolation followed by the proposed YCC technique.

REFERENCES

[1] Bayer, B.E., “Color Imaging Array”, US3971065 (1976).

[2] Gunturk, B.K., Glotzbach, J., Altunbasak, Y., Schafer, R.W., Mersereau, R.M., “Demosaicking: Color Filter Array Interpolation”, IEEE Signal Processing Magazine, 22(1), 44-54 (2005).

[3] Chang, L., Tan, Y.-P., “Effective use of Spatial and Spectral Correlations for Color Filter Array Demosaicking”, IEEE Transactions on Consumer Electronics, 50(1), 355-365 (2004).

[4] Chang, E., Cheung, S., Pan, D.Y., “Color filter array recovery using a threshold-based variable number of gradients”, Proc. SPIE, 3650, 36-43 (1999).

[5] Messina, G., Guarnera, M., Tomaselli, V., Bruna, A., Spampinato, G., Castorina, A., “Color interpolation method of an image acquired by a digital sensor by directional filtering”, US2006072814 (2006).

[6] Adams, J., “Interactions between color plane interpolation and other image processing functions in electronic photography”, Proc. SPIE Cameras and Systems for Electronic Photography and Scientific Imaging, 2416, 144-151 (1995).

[7] Kuno, T., Sugiura, H., Matoba, N., “New interpolation method using discriminated color correlation for digital still camera”, IEEE Trans. Consumer Electronics, 45(1), 259-267 (1999).

[8] Menon, D., Andriani, S., “Demosaicing with directional filtering and a posteriori decision”, IEEE Trans. Image Processing, 16(1), 132-141 (2007).

[9] Freeman, T.W., “Median Filter for Reconstructing Missing Color Samples”, US4724395 (1988).

[10] Gunturk, B.K., Altunbasak, Y., Mersereau, R., “Color Plane Interpolation Using Alternating Projections”, IEEE Trans. Image Processing, 11(9), 997-1013 (2002).

[11] Lian, N.-X., Zagorodnov, V., Tan, Y.-P., “Edge-preserving image denoising via optimal color space projection”, IEEE Trans. Image Processing, 15(9), 2575-2587 (2006).

[12] Chan, W.C., Au, O.C., Fu, M.F., “A novel color interpolation framework in modified YCbCr domain for digital cameras”, Proceedings Int. Conf. Image Processing, 2, II-925-928 (2003).

[13] Alleysson, D., Susstrunk, S., “Linear Demosaicing inspired by the human visual system”, IEEE Trans. Image Processing, 14(4), 439-449 (2005).

[14] Lian, N.-X., Chang, L., Tan, Y.-P., “Improved color filter array demosaicking by accurate luminance estimation”, IEEE Int. Conf. Image Processing, (2005).

[15] Lian, N.-X., Chang, L., Tan, Y.-P., Zagorodnov, V., “Adaptive Filtering for Color Filter Array Demosaicking”, IEEE Trans. Image Processing, 16(10), 2515-2525 (2007).

[16] Lu, W., Tan, Y.-P., “Color Filter Array Demosaicking: New Method and Performance Measures”, IEEE Trans. Image Processing, 12(10), 1194-1210 (2003).

[17] Adams, J.E., Hamilton, J.F., Hamilton, J.A., “Removing color aliasing artifacts from color digital images”, US2004/0264915A1 (2004).

[18] Maurer, R.P., “Reduction of chromatic bleeding artifacts in image”, WO 03/051035 A1 (2003).

[19] http://scien.stanford.edu/class/psych221/projects/99/tingchen/main.htm