Dimensionality reduction and Neural Nets
Written by Ricardo Sanz   
Tuesday, 26 February 2008

Pascual Campoy

Place: Automatica Seminar Room
Time: September 19, 2008 / 12:00-13:00

Is any combination of pixel intensities an image of our daily world? The answer is no: there is a huge number of bizarre synthetic images that we would never expect to see and that, if synthetically produced, are quite impossible to describe in words, i.e. they admit no higher-level representation in terms of previously seen images. In other words, it can be concluded that all the images we see daily lie on a manifold embedded in the m×n-dimensional input space (where m×n is the number of pixels). This fact opens the possibility of representing an image by a reduced number of values that determine its position on the manifold, which is, incidentally, the operation our brain performs and that we formulate in words.

Dimensionality reduction is a key issue in many scientific problems in which the data are originally given as high-dimensional vectors that nevertheless lie on a lower-dimensional manifold. They can therefore be represented by a reduced number of values that parametrize their position on that non-linear manifold. This dimensionality reduction is essential not only for representing and managing the data, but also for understanding them at a higher interpretation level, similar to the way this is done by the mammalian cortex.
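As a minimal illustration of this idea (not part of the talk itself), the following Python sketch builds synthetic points that lie, up to small noise, on a one-dimensional manifold (a line) embedded in three-dimensional space, and recovers a single coordinate for each point via PCA; the data, direction vector, and noise level are all made up for illustration:

```python
import numpy as np

# Hypothetical data: a latent 1-D coordinate t, embedded along a fixed
# direction in 3-D, plus small off-manifold noise.
rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, size=(200, 1))       # latent manifold coordinate
X = t @ np.array([[2.0, -1.0, 0.5]])            # embed in 3-D ambient space
X += rng.normal(scale=0.01, size=X.shape)       # small deviation from the manifold

Xc = X - X.mean(axis=0)                         # center the data
# Principal directions via SVD of the centered data matrix
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)                 # fraction of variance per component

# Nearly all variance sits in one component: the data is effectively 1-D,
# and the projection onto that component parametrizes position on the manifold.
coords = Xc @ Vt[0]
print(f"variance explained by first component: {explained[0]:.3f}")
```

Linear PCA only recovers flat (linear) manifolds; the curved manifolds discussed in the talk are what motivate non-linear techniques such as self-organizing maps.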

This talk presents a brief introduction to the problem of dimensionality reduction, its state of the art, and its solution by techniques based on neural networks, both supervised and unsupervised, with special emphasis on Self-Organizing Maps (SOM). Finally, it gives an overview of new learning algorithms for unsupervised self-organizing maps developed by the Computer Vision Group at the Universidad Politécnica de Madrid.
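For readers unfamiliar with SOMs, the following sketch implements a classical Kohonen self-organizing map (not the group's new algorithms, which are not described here): a one-dimensional chain of units learns to cover two-dimensional points lying on a half-circle, so that each unit's index along the chain becomes a one-dimensional coordinate on the manifold. All data and hyperparameters are invented for illustration:

```python
import numpy as np

# Hypothetical data: 2-D points on a unit half-circle (a 1-D manifold).
rng = np.random.default_rng(1)
theta = rng.uniform(0, np.pi, size=500)
data = np.column_stack([np.cos(theta), np.sin(theta)])

n_units = 20
weights = rng.normal(scale=0.1, size=(n_units, 2))  # random initial prototypes
idx = np.arange(n_units)                            # positions along the 1-D chain

n_iters = 2000
for it in range(n_iters):
    x = data[rng.integers(len(data))]
    # Best-matching unit: the prototype closest to the sample
    bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))
    # Learning rate and neighborhood radius decay over time
    lr = 0.5 * (1 - it / n_iters)
    sigma = max(n_units / 2 * (1 - it / n_iters), 0.5)
    # Gaussian neighborhood on the chain: units near the BMU also move
    h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)

# After training, each unit sits near the half-circle, so its chain index
# acts as a discrete 1-D coordinate on the manifold.
err = np.mean(np.abs(np.linalg.norm(weights, axis=1) - 1.0))
print(f"mean distance of units from the circle: {err:.3f}")
```

The decaying neighborhood is what makes the map self-organize: early wide neighborhoods order the chain globally, while late narrow ones fine-tune each unit locally.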

Last Updated (Wednesday, 10 June 2009)