Exploiting Geometric Structure in Machine Learning and Optimization
Motivated by the observation that many applications involve non-Euclidean data, such as graphs, strings, or matrices, we discuss how Riemannian geometry can be exploited in Machine Learning and Optimization. First, we discuss the problem of characterizing data geometry from the perspective of Discrete Geometry, focusing specifically on the analysis of relational data. Second, we present examples of exploiting geometry in downstream tasks. We begin with the task of learning a classifier in hyperbolic spaces. Such spaces have received a surge of interest for representing large-scale, hierarchical data, since they achieve better representation accuracy with fewer dimensions. Next, we consider the problem of optimizing a function on a Riemannian manifold. Specifically, we consider classes of optimization problems in which exploiting Riemannian geometry yields algorithms that are computationally superior to standard (Euclidean) approaches.