Discriminative Dimensionality Reduction

Acronym: DIDI
Term: 06.2012 - 09.2015
Research Areas: D
Abstract: 

The amount of electronic data available today is growing rapidly, so that people rely on automated tools which allow them to intuitively scan large data volumes for valuable information. Dimensionality-reducing data visualization, which displays high-dimensional data in two or three dimensions, constitutes a popular tool to visualize data sets directly on the computer screen. Dimensionality reduction is an inherently ill-posed problem, and the result of a dimensionality reduction tool varies considerably depending on the chosen technique, its parameters, and, for non-deterministic algorithms, even random aspects. Often, the reliability and suitability of the obtained visualization for the task at hand is not clear at all, since a dimensionality reduction tool might focus on irrelevant aspects or noise in the data. The goal of this project is to enhance dimensionality-reducing data visualization techniques with auxiliary information in the form of class labels. This way, the visualization can concentrate on the aspects relevant for the given auxiliary information rather than on potential noise.

Methods and Research Questions: 

The focus of the project lies on:
1. The investigation of principled techniques to extend dimensionality reduction tools to class-discriminative visualization.
2. The experimental and theoretical evaluation and comparison of these approaches.
3. The extension and adaptation of discriminative dimensionality reduction to deal with large data sets.

Outcomes: 

In the first half of the project duration, we have addressed two of the three main targets of this project. The first is the processing of big data, here understood as data sets with very many instances. Dealing with such data comes within reach through the introduction of kernel t-SNE. This technique equips the non-linear dimensionality reduction approach t-SNE with a parametric mapping, allowing new points to be visualized in linear time and, hence, opening the way towards life-long learning and online visualization. A sketch of such an out-of-sample mapping, under simplifying assumptions, is given below.

Furthermore, the principled approach of including supervised information via the metric has been investigated for various approaches. The resulting Fisher information metric has been integrated into several methods such as Isomap, MVU, t-SNE, kernel t-SNE, SOM, and GTM. The obtained projections have been utilized for the special application of classifier visualization; in this scenario, several experiments have shown a clear superiority of the supervised techniques.

Moreover, recent work has addressed the important question of feature relevance for a given projection. Since the role of individual features in non-linear projections obtained by non-parametric methods is usually unknown, practitioners often prefer rather simple methods such as PCA over more complex and, hence, more powerful ones. Our recently proposed approach makes it possible to judge the importance of individual features and might thereby improve the usefulness of supervised as well as unsupervised non-parametric dimensionality reduction techniques.
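
Below is a minimal sketch of a kernel-t-SNE-style out-of-sample mapping, given under simplifying assumptions rather than as the project's reference implementation: a training subset is embedded with scikit-learn's t-SNE, and a normalized Gaussian kernel regression is then fitted from the input space to the embedding coordinates, so that arbitrary new points can be projected in linear time. The function names, the median-distance bandwidth heuristic, and the plain least-squares fit are illustrative choices.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import euclidean_distances

def fit_kernel_tsne(X_train, bandwidth=None, random_state=0):
    """Embed a training subset with t-SNE, then fit a normalized
    Gaussian kernel map from input space to the 2D coordinates."""
    Y_train = TSNE(n_components=2, random_state=random_state).fit_transform(X_train)
    D = euclidean_distances(X_train, X_train)
    if bandwidth is None:
        bandwidth = np.median(D[D > 0])              # simple bandwidth heuristic
    K = np.exp(-D**2 / (2 * bandwidth**2))
    K /= K.sum(axis=1, keepdims=True)                # normalized kernel weights
    A, *_ = np.linalg.lstsq(K, Y_train, rcond=None)  # coefficient matrix
    return A, bandwidth

def project_kernel_tsne(X_new, X_train, A, bandwidth):
    """Map new points in linear time via the learned kernel mapping."""
    K = np.exp(-euclidean_distances(X_new, X_train)**2 / (2 * bandwidth**2))
    K /= K.sum(axis=1, keepdims=True)
    return K @ A

# Hypothetical usage: fit on a subset, then project the full data set.
# A, h = fit_kernel_tsne(X[:1000])
# Y_all = project_kernel_tsne(X, X[:1000], A, h)
```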

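To illustrate how supervision can enter through the metric, the following sketch approximates pairwise Fisher-information distances from a Parzen-window estimate of p(c | x) and hands them to t-SNE as a precomputed metric. It relies on a simple one-point approximation of the Riemannian distance and an assumed Gaussian bandwidth sigma; it is meant as a conceptual illustration, not the exact construction used in the project.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import euclidean_distances

def fisher_distances(X, y, sigma=1.0):
    """Approximate pairwise discriminative distances
    d(x_i, x_j)^2 ~ (x_j - x_i)^T J(x_i) (x_j - x_i), where J(x) is the
    Fisher information of a Parzen-window estimate of p(c | x)."""
    n, d = X.shape
    K = np.exp(-euclidean_distances(X, X, squared=True) / (2 * sigma**2))
    dist2 = np.zeros((n, n))
    for i in range(n):
        diffs = X - X[i]                                   # x_j - x_i for all j
        b = K[i].sum()
        grad_b = -(K[i][:, None] * (X[i] - X)).sum(axis=0) / sigma**2
        J = np.zeros((d, d))
        for c in np.unique(y):
            m = (y == c)
            b_c = K[i, m].sum()
            if b_c <= 0.0:                                 # class not represented locally
                continue
            grad_bc = -(K[i, m][:, None] * (X[i] - X[m])).sum(axis=0) / sigma**2
            g = grad_bc / b_c - grad_b / b                 # gradient of log p(c | x_i)
            J += (b_c / b) * np.outer(g, g)                # weighted by p(c | x_i)
        dist2[i] = np.einsum('nd,de,ne->n', diffs, J, diffs)
    return np.sqrt(0.5 * (dist2 + dist2.T))                # symmetrize

# Hypothetical usage: supervised t-SNE on the Fisher distances
# (init='random' is required for a precomputed metric).
# Y = TSNE(metric='precomputed', init='random').fit_transform(fisher_distances(X, y))
```
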
Publications: