Geometric Methods for Machine Learning and Optimization
A key challenge in machine learning and optimization is the identification of geometric structure in high-dimensional data. Such structural understanding is of great value for designing efficient algorithms and for establishing fundamental guarantees on their performance. Motivated by the observation that many applications involve non-Euclidean data, such as graphs, strings, or matrices, we discuss how Riemannian geometry can be exploited in machine learning and optimization. First, we consider the task of learning a classifier in hyperbolic space. Such spaces have received a surge of interest for representing large-scale, hierarchical data, since they can achieve better representation accuracy than Euclidean embeddings of comparable dimension. Second, we consider the problem of optimizing a function on a Riemannian manifold. Specifically, we focus on classes of optimization problems where exploiting Riemannian geometry delivers algorithms that are computationally superior to standard (Euclidean) approaches.
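To make the second theme concrete, the following is a minimal illustrative sketch (not the specific method discussed above) of Riemannian gradient descent on the unit sphere, minimizing the Rayleigh quotient f(x) = x^T A x. The gradient step respects the manifold structure via tangent-space projection and a retraction, rather than treating the constraint with Euclidean projections or penalties; the step size, iteration count, and test matrix below are illustrative choices.

```python
# Hedged sketch: Riemannian gradient descent on the unit sphere S^{n-1},
# minimizing f(x) = x^T A x. The minimizer is an eigenvector of A's
# smallest eigenvalue.
import numpy as np

def riemannian_gradient_descent(A, x0, step=0.1, iters=500):
    x = x0 / np.linalg.norm(x0)                # start on the sphere
    for _ in range(iters):
        egrad = 2 * A @ x                      # Euclidean gradient of x^T A x
        rgrad = egrad - (x @ egrad) * x        # project onto tangent space at x
        x = x - step * rgrad                   # move along the tangent direction
        x = x / np.linalg.norm(x)              # retract back onto the sphere
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M @ M.T                                # symmetric test matrix
    x = riemannian_gradient_descent(A, rng.standard_normal(5))
    # Compare the attained value with the smallest eigenvalue of A.
    print(x @ A @ x, np.linalg.eigvalsh(A)[0])
```

The design choice this sketch highlights is that intrinsic (Riemannian) updates stay on the feasible set by construction, which is one source of the computational advantages over extrinsic Euclidean formulations alluded to above.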