Applying Manifold Learning Technique to Design Recurrent Architecture for Low Dimension Classification

Image credit: scikit-learn

Deep Neural Networks (DNNs) can achieve very high performance on visual recognition tasks but are prone to noise and adversarial attacks. One major difficulty in training a DNN is that the inputs often lie in a very high-dimensional space, which leads to a large number of parameters to train. This raises the question of how to reduce the dimensionality of the dataset. Given a high-dimensional dataset such as a visual dataset, how can we find a lower-dimensional representation that keeps the essential information of the images? With a low-dimensional representation, we can hopefully use a shallower, simpler architecture that still classifies high-dimensional data decently.
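As a minimal sketch of this idea (assuming scikit-learn is available; Isomap is just one manifold learning method among several), we can embed the 64-dimensional digits images into 2 dimensions and then train a simple classifier on that low-dimensional representation:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# 1797 images of handwritten digits, 8x8 pixels each -> 64 features per image
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

# Learn a 2-D manifold embedding from the training images only
embed = Isomap(n_components=2).fit(X_train)
Z_train = embed.transform(X_train)
Z_test = embed.transform(X_test)

# A simple (shallow) classifier operating on the low-dimensional representation
clf = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
print(clf.score(Z_test, y_test))
```

The point is not the specific accuracy, but that once the embedding captures the essential structure of the images, even a very simple model can do reasonable classification.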