Bayesian denoising in the wavelet domain
In order to separate the noise and image components from a single observation of a degraded image, we need prior information about the typical behavior of images, together with assumptions or knowledge about the statistical properties of the noise. Here [Portilla 2003b] we assume the noise is stationary, independent, additive, and Gaussian with known autocovariance (see [Portilla 2004a, Portilla 2004b] for the blind denoising case). Our prior knowledge about image and noise, together with the observed image itself, provides the elements needed to estimate the original image within a Bayesian framework.
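As a toy illustration of this Bayesian setup (a deliberate simplification, not the GSM prior of [Portilla 2003b]), assume each coefficient has a zero-mean Gaussian prior of variance sigma_x^2 and is observed under additive white Gaussian noise of variance sigma_n^2. The posterior mean then reduces to a linear (Wiener) shrinkage of the observation; all names below are hypothetical:

```python
# Toy Bayesian (Wiener) shrinkage under a scalar Gaussian prior.
# This is a simplification for illustration, NOT the GSM model of [Portilla 2003b].
import numpy as np

rng = np.random.default_rng(1)
sigma_x, sigma_n = 2.0, 1.0               # prior and noise std (assumed known)
x = rng.normal(0.0, sigma_x, 200_000)     # "clean" coefficients drawn from the prior
y = x + rng.normal(0.0, sigma_n, x.size)  # additive, independent Gaussian noise

# Posterior mean for Gaussian prior + Gaussian noise: linear shrinkage toward 0
x_hat = (sigma_x**2 / (sigma_x**2 + sigma_n**2)) * y

mse_noisy = np.mean((y - x) ** 2)      # close to sigma_n^2 = 1.0
mse_bayes = np.mean((x_hat - x) ** 2)  # close to the theoretical MMSE, 0.8
```

The estimator attenuates the observation by sigma_x^2 / (sigma_x^2 + sigma_n^2), trading a small bias for a large variance reduction; the full GSM model replaces this single Gaussian with a mixture whose hidden multiplier adapts the shrinkage locally.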
To represent the image, we use a wavelet transform in the generic sense: a multiscale, multiorientation, invertible subband representation. Wavelets have proven to be a very powerful tool for the analysis, processing, and synthesis of relevant image features. Furthermore, these transformations make explicit some important higher-order statistical dependencies of typical images. It is important to use a redundant representation; otherwise, the lack of translation invariance produces severe artifacts in the reconstructed image. For (non-blind) BLS-GSM denoising, we have obtained our best results using a highly redundant version of the original steerable pyramid [Simoncelli 1995], with 8 orientations at each scale plus 8 oriented highpass subbands [Portilla 2003b]. For blind image denoising, and for some other test images, the best results are obtained with more localized kernels (for instance, an overcomplete pyramidal version of the Haar wavelet).
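To see what a redundant, translation-invariant representation looks like, here is a minimal sketch of a single-level undecimated Haar decomposition (the helper names are hypothetical, and this is not the steerable pyramid of [Simoncelli 1995]). Each of the four subbands keeps the full image size, giving 4x redundancy, the transform commutes with circular shifts of the input, and reconstruction is exact:

```python
# Sketch: single-level undecimated (shift-invariant) Haar decomposition.
# Hypothetical helper names; a minimal example of a redundant transform,
# not the highly redundant steerable pyramid used in [Portilla 2003b].
import numpy as np

def haar_undecimated(x):
    """Split an image into 4 full-size subbands (4x redundancy)."""
    def pair(a, axis):
        b = np.roll(a, -1, axis=axis)       # circular boundary handling
        return (a + b) / 2, (a - b) / 2     # lowpass, highpass
    lo, hi = pair(x, axis=0)                # filter along rows
    LL, LH = pair(lo, axis=1)               # then along columns
    HL, HH = pair(hi, axis=1)
    return LL, LH, HL, HH

def haar_reconstruct(LL, LH, HL, HH):
    """Exact inverse: with this normalization the subbands sum back to the image."""
    return LL + LH + HL + HH

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
LL, LH, HL, HH = haar_undecimated(img)
rec = haar_reconstruct(LL, LH, HL, HH)  # equals img up to floating-point error
```

Because no subsampling is performed, shifting the input simply shifts every subband by the same amount, which is exactly the property a critically sampled (decimated) wavelet transform lacks and the source of its denoising artifacts.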
