Grants and Contributions:
Grant or award spanning more than one fiscal year. (2017-2018 to 2022-2023)
This work is aimed at physics-based concepts in imaging. The project
components described are specific, but form aspects of long-standing and
deep problems throughout computer vision. The context for this research
is the "fusion" of images. For example, suppose we need a greyscale
("black and white") xerox copy of a colour image: here we are aiming to
fuse the three RGB (red, green, blue) colour channels into a single grey output. This is a
deceptively difficult problem, and hundreds of academic papers have been
published on the subject. Many other problem domains are similar: e.g.,
fusing RGB plus Near-Infrared (NIR) into a new colour image with better night-vision
properties for driving in the dark. It is understood in the camera-research
community that a leading brand of smartphones is about to include a NIR sensor.
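For concreteness, the naive baseline for RGB-to-grey fusion can be sketched as follows. This is a minimal illustration using the standard Rec. 601 luminance weights, not the SpE method; it shows why the problem is deceptively difficult:

```python
import numpy as np

def fuse_rgb_to_grey(rgb):
    """Fuse an H x W x 3 RGB image into a single grey channel.

    Uses the standard Rec. 601 luminance weights -- the naive
    baseline that contrast-preserving fusion methods aim to beat.
    """
    weights = np.array([0.299, 0.587, 0.114])  # R, G, B
    return rgb @ weights

# A saturated red pixel and a half-intensity green pixel map to
# nearly the same grey value (0.299 vs. 0.2935), so the strong
# colour contrast between them is almost entirely lost.
pixels = np.array([[[1.0, 0.0, 0.0], [0.0, 0.5, 0.0]]])
grey = fuse_rgb_to_grey(pixels)
```

The example shows the core failure mode: distinct colours can collapse onto nearly identical grey levels, which is exactly the local contrast that a good fusion method must preserve.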
It turns out that the idea of "contrast" is critical for image fusion -- we need to capture
local changes across the set of images in order to combine them well. Just what contrast
consists of has been long debated. In my Spectral Edge (SpE) application (and
spin-off company), the definition of contrast is taken from solid mathematical
foundations. It turns out that the SpE algorithm is applicable in a variety of
situations: for example, in medical imaging it is common to acquire "image" data
with more than the 3 channels of RGB colour, and in satellite imaging it is
common to take "spectral" images with hundreds of values at each pixel location.
The SpE approach disentangles how colour, on the one hand, and change in spatial
position on the other, combine to form contrast.
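One well-known way of combining colour change and spatial change into a single notion of contrast is the Di Zenzo structure tensor, which sums per-channel gradient information into a 2x2 matrix at each pixel. The sketch below is an illustration in that spirit (an assumption for exposition, not the SpE formulation itself); note that it works unchanged for 3-channel RGB or hundred-channel spectral data:

```python
import numpy as np

def structure_tensor(img):
    """Per-pixel Di Zenzo structure tensor for an H x W x C image.

    Sums outer products of the per-channel gradients; the tensor's
    leading eigenvalue measures local contrast regardless of the
    number of channels C.
    """
    # Forward differences in y (rows) and x (columns), per channel.
    gy = np.diff(img, axis=0, append=img[-1:])
    gx = np.diff(img, axis=1, append=img[:, -1:])
    # The three distinct entries of the symmetric 2x2 tensor,
    # summed over channels.
    jxx = (gx * gx).sum(axis=2)
    jyy = (gy * gy).sum(axis=2)
    jxy = (gx * gy).sum(axis=2)
    return jxx, jyy, jxy

def contrast(img):
    """Largest eigenvalue of the structure tensor at each pixel."""
    jxx, jyy, jxy = structure_tensor(img)
    tr = jxx + jyy
    det = jxx * jyy - jxy ** 2
    return 0.5 * (tr + np.sqrt(np.maximum(tr ** 2 - 4.0 * det, 0.0)))

# A flat image has zero contrast everywhere; a step edge shared by
# all three channels produces strong contrast along the edge.
flat = np.zeros((4, 4, 3))
edge = np.zeros((4, 4, 3))
edge[:, 2:] = 1.0
```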
These calculations all take place in the "gradient" domain -- gradient is the
derivative of the image, in both x and y directions, where derivative in imaging
simply means the change from a pixel to its neighbouring pixel. Now, it is a
classical problem in applied mathematics to transform the gradient (in each of
R, G, B) that we arrived at by combining contrast back into a colour image.
The classic solution for "reintegrating" the gradient into an image dates to
the year 1800. However that method, even though elaborated since in a number
of mathematical approaches, produces output images that display unpleasant
artifacts. Instead, in this proposal I will look deeper into making use
of the SpE approach, along with a new, computer-science approach to
reintegration into an output image. To date, the SpE method (and patent)
uses mathematics that applies to a whole image: here I propose pixel-oriented
methods that should produce substantially more detail.
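As a minimal illustration of the classical reintegration route described above (a least-squares solve of the difference equations, equivalent to the discrete Poisson approach; this is the whole-image baseline, not the pixel-oriented method proposed here), the sketch below recovers an image from its gradient field:

```python
import numpy as np

def reintegrate(gx, gy):
    """Least-squares reintegration of a (possibly non-integrable)
    gradient field (gx, gy) back into an H x W image.

    Builds the forward-difference operator as a dense matrix and
    solves min ||D u - g||^2 -- the discrete form of the classical
    Poisson approach. A sketch for tiny images only; practical
    implementations use FFT/DCT or sparse solvers.
    """
    h, w = gx.shape
    n = h * w
    idx = lambda y, x: y * w + x
    rows, rhs = [], []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:  # horizontal difference equation
                r = np.zeros(n)
                r[idx(y, x + 1)], r[idx(y, x)] = 1.0, -1.0
                rows.append(r)
                rhs.append(gx[y, x])
            if y + 1 < h:  # vertical difference equation
                r = np.zeros(n)
                r[idx(y + 1, x)], r[idx(y, x)] = 1.0, -1.0
                rows.append(r)
                rhs.append(gy[y, x])
    # Gradients determine the image only up to a constant,
    # so pin the mean to zero.
    rows.append(np.ones(n) / n)
    rhs.append(0.0)
    u, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return u.reshape(h, w)

# Round trip: differentiate a known image, then reintegrate.
u0 = np.arange(9.0).reshape(3, 3)
gx = np.diff(u0, axis=1, append=u0[:, -1:])
gy = np.diff(u0, axis=0, append=u0[-1:])
rec = reintegrate(gx, gy)
```

When the fused gradient field is not integrable (as happens after combining contrast across channels), the least-squares solution spreads the inconsistency over the whole image, which is one source of the global artifacts the proposal aims to avoid.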
The methods developed will have far-reaching impact, in that gradient-based
approaches are now the standard tool for problems in computational
photography. As one example, this research will have a major impact on the
problem of bringing daytime image information into nighttime surveillance
images.