
Is every combination of pixel intensities an image of our everyday world? The answer is no:
there is a vast number of bizarre synthetic images that we would never expect to see and that,
if synthetically produced, would be practically impossible to describe in words, i.e., they
admit no higher-level representation in terms of previously seen images. In other
words, the images we see daily lie on a manifold embedded in the m×n-dimensional
input space (one dimension per pixel). This makes it possible to represent an image
by a reduced set of values that determine its position on the manifold, which, incidentally,
is the operation our brain performs and that we formulate in words.
Dimensionality reduction is a key issue in many scientific problems in which the data are
originally given as high-dimensional vectors that nevertheless lie on a lower-dimensional
manifold. They can therefore be represented by a reduced number of values that parametrize
their position on this nonlinear manifold.
Such dimensionality reduction is essential not only for representing and managing data,
but also for understanding them at a high level of interpretation, similar to the way the
mammalian cortex operates.
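The manifold idea can be made concrete with a small numerical sketch (illustrative only, not taken from the talk): points that nominally live in a 100-dimensional "pixel" space but are generated from a single intrinsic coordinate t, so a linear reduction such as PCA recovers that almost all the variance fits in very few components.

```python
import numpy as np

# Hypothetical example: samples in R^100 that actually lie on a
# 1-D manifold (a closed curve) parametrized by a single value t.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, size=500)            # intrinsic coordinate
basis = rng.normal(size=(2, 100))              # fixed embedding directions
X = np.outer(np.cos(2 * np.pi * t), basis[0]) + \
    np.outer(np.sin(2 * np.pi * t), basis[1])  # curve embedded in R^100

# Linear dimensionality reduction via PCA (SVD of the centered data).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The first two components capture essentially all the variance,
# even though each sample has 100 coordinates.
print(explained[:3])
```

A linear method like PCA only finds the flat subspace containing the curve; the nonlinear manifolds discussed in the talk are what motivate neural-network methods such as self-organizing maps.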
This talk presents a brief introduction to the problem of dimensionality reduction,
its state of the art, and its solution by techniques based on neural networks, both supervised
and unsupervised, with special emphasis on Self-Organizing Maps (SOMs). Finally, the
talk gives an overview of new learning algorithms for unsupervised self-organizing
maps developed at the Computer Vision Group of the Universidad Politécnica de Madrid.
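For reference, a minimal sketch of the classical Kohonen SOM update rule that such algorithms build on (this is the textbook baseline, not the group's new methods; all parameter values here are illustrative choices):

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, seed=0):
    """Classical Kohonen SOM: map high-dim data onto a 2-D grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    # Codebook: one weight vector per map unit, same dim as the data.
    W = rng.normal(size=(h, w, data.shape[1]))
    # Grid coordinates of each unit, used for the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the codebook vector closest to x.
        d = np.linalg.norm(W - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Learning rate and neighborhood radius decay over time.
        lr = 0.5 * (1.0 - t / iters)
        sigma = max(1.0, (max(h, w) / 2) * (1.0 - t / iters))
        # Gaussian neighborhood around the BMU on the map grid.
        dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        nb = np.exp(-dist2 / (2 * sigma ** 2))
        # Pull every unit toward x, weighted by its map neighborhood.
        W += lr * nb[..., None] * (x - W)
    return W

# Usage: reduce 3-D points to positions on a 10x10 map.
rng = np.random.default_rng(1)
data = rng.uniform(size=(500, 3))
W = train_som(data)
```

The neighborhood term is what makes the map self-organize: nearby units on the grid are pulled toward similar inputs, so the grid coordinates of a sample's best-matching unit act as its reduced representation.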
