Vision & Mission

In the Computational Imaging & Visual Image Processing group we study and develop new computational techniques for improving the visual experience of human observers looking at digital images. In real observation processes, both the images and the human observers are affected by different kinds of non-ideal behavior, which we try to compensate for by digitally processing the observed images. We are also interested in synthesizing artificial digital images, or certain image features, to be combined with observations from the real world.

Our general approach is to develop multi-purpose tools (mainly, linear and non-linear image representations) inspired by what is known about image statistics and human vision, in a qualitative, almost schematic way, so that, by applying our own fast optimization techniques, we can efficiently carry out particular tasks (e.g., denoising, deblurring, inpainting, demosaicing, resolution enhancement, compensation of coding/quantization artifacts, and texture coding and synthesis).
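
To make the idea concrete, here is a minimal Python sketch (not one of our actual algorithms) of transform-domain restoration: Bayesian MAP denoising under an additive Gaussian observation model and a Laplacian (sparsity) prior on orthonormal 2-D DCT coefficients, which reduces to coefficient-wise soft-thresholding. The function name, the choice of the DCT as representation, and the prior scale are illustrative assumptions only.

import numpy as np
from scipy.fft import dctn, idctn

def denoise_map_dct(noisy, sigma, prior_scale=0.3):
    """Toy MAP denoising: Gaussian noise of std 'sigma' plus a Laplacian
    prior of scale 'prior_scale' on orthonormal 2-D DCT coefficients.
    With an orthonormal transform, the MAP solution is coefficient-wise
    soft-thresholding with threshold sigma**2 / prior_scale.
    (Illustrative sketch; parameter values are arbitrary.)"""
    coeffs = dctn(noisy, norm="ortho")          # analysis: image -> representation
    thr = sigma ** 2 / prior_scale              # MAP threshold for the Laplacian prior
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)  # soft-threshold
    return idctn(shrunk, norm="ortho")          # synthesis: representation -> image

if __name__ == "__main__":
    # Toy usage: a smooth synthetic image corrupted by Gaussian noise.
    rng = np.random.default_rng(0)
    u, v = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
    clean = np.sin(6 * np.pi * u) * np.cos(4 * np.pi * v)
    sigma = 0.3
    noisy = clean + sigma * rng.standard_normal(clean.shape)
    restored = denoise_map_dct(noisy, sigma)
    print("RMSE noisy:    %.3f" % np.sqrt(np.mean((noisy - clean) ** 2)))
    print("RMSE restored: %.3f" % np.sqrt(np.mean((restored - clean) ** 2)))

The DCT is used here only because an orthonormal transform makes the MAP step a closed-form shrinkage; the multi-purpose representations mentioned above are considerably richer.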

From a Bayesian perspective, one can pose the following correspondences:

(1) Statistics of ideal (i.e., artifact-free) images <-> image modelling <-> prior knowledge

(2) Physical acquisition (devices + human eye) <-> imaging <-> observation model

(3) Perceptual relevance <-> computational (neural) human vision <-> cost function

These three sources of information can be used by estimation/optimization processes to improve the human visual experience in computationally affordable ways. Given an observation and a human observer, we aim to process the observation so as to minimize the expected "visual discrepancy" between the visual perception of the processed image and the perception that an ideal observer would have of the ideal (artifact-free) image. In practice, though, we only have simplified and incomplete models of (1), (2) and (3). In addition, computational complexity is a stringent constraint if we want to apply these techniques in practical situations. Therefore, we search for combinations of models and methods that simultaneously achieve high standards of visual quality, conceptual simplicity, and computational efficiency.
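
Schematically, and under notation introduced here only for illustration (y the observation, x the ideal image, V(.) a simplified perceptual response model, D a visual discrepancy measure derived from (3)), this amounts to an estimator of the form

\hat{x} \;=\; \arg\min_{\tilde{x}} \; \mathbb{E}_{x \sim p(x \mid y)}\!\left[ D\!\left( V(\tilde{x}),\, V(x) \right) \right],
\qquad
p(x \mid y) \;\propto\; p(y \mid x)\, p(x),

where the prior p(x) comes from (1) and the observation model p(y | x) from (2); the simplifications and computational shortcuts mentioned above enter through the approximate forms of p(x), p(y | x), V and D.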

For the synthesis case, we aim to capture and reproduce a variety of image features (e.g., texture) using visual-statistical models. Our motivation is twofold: first, to understand typical image statistics and their relationship with simplified perceptual models; second, to be able to re-generate, in a visually plausible way, certain image components partially lost in the observation (not necessarily faithful, in a point-wise comparison, to the ideal image associated with the observation).
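
As a toy illustration of synthesis by statistical matching (a far simpler recipe than the visual-statistical models referred to above, with every name and parameter chosen here only for illustration), the following Python sketch generates a new texture sample by alternately imposing a target texture's Fourier magnitude (second-order statistics) and its pixel histogram (marginal statistics) on an initial noise image.

import numpy as np

def match_histogram(img, target):
    """Rank-order matching: give 'img' exactly the marginal (pixel)
    distribution of 'target' (both must have the same number of pixels)."""
    order = np.argsort(img, axis=None)
    flat = np.empty(img.size)
    flat[order] = np.sort(target, axis=None)
    return flat.reshape(img.shape)

def synthesize_texture(target, n_iters=20, seed=0):
    """Alternately impose the target's Fourier magnitude and pixel histogram on noise."""
    rng = np.random.default_rng(seed)
    target_mag = np.abs(np.fft.fft2(target))
    synth = rng.standard_normal(target.shape)
    for _ in range(n_iters):
        spec = np.fft.fft2(synth)
        phase = np.exp(1j * np.angle(spec))
        synth = np.real(np.fft.ifft2(target_mag * phase))  # impose second-order statistics
        synth = match_histogram(synth, target)              # impose marginal statistics
    return synth

if __name__ == "__main__":
    # Toy target "texture": a noisy oriented grating standing in for a real texture image.
    rng = np.random.default_rng(1)
    xx, yy = np.meshgrid(np.arange(128), np.arange(128))
    target = np.sin(2 * np.pi * (3 * xx + 7 * yy) / 128.0) + 0.5 * rng.standard_normal((128, 128))
    sample = synthesize_texture(target)
    print("target mean/std: %.3f / %.3f" % (target.mean(), target.std()))
    print("sample mean/std: %.3f / %.3f" % (sample.mean(), sample.std()))

Each iteration alternates two projections, onto the set of images with the target power spectrum and onto the set with the target histogram; since the histogram match comes last, the synthesized sample shares the target's marginal statistics exactly, while the spectral constraint is satisfied only approximately.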