Learning a robust classifier in hyperbolic space @ MSR

Abstract

Recently, there has been a surge of interest in representing large-scale, hierarchical data in hyperbolic spaces to achieve better representation accuracy in fewer dimensions. However, beyond representation learning, there are few empirical or theoretical results that develop performance guarantees for downstream machine learning tasks in hyperbolic spaces. In this talk, we consider the task of learning a robust classifier in hyperbolic space. First, we introduce a hyperbolic perceptron algorithm, which provably converges to a separating hyperplane. We then provide an algorithm to efficiently learn a robust large-margin hyperplane, relying on the careful injection of adversarial examples. Finally, we prove that for hierarchical data that embeds well into hyperbolic space, the low embedding dimension ensures superior guarantees when learning the classifier directly in hyperbolic space.
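To make the setting concrete, the sketch below shows a perceptron-style update in the hyperboloid (Lorentz) model, where a linear classifier predicts the sign of the Minkowski inner product between a weight vector and a point on the hyperboloid. The function names, the plain additive update, and the stopping rule are illustrative assumptions for this page; this is not the convergence-guaranteed algorithm presented in the talk.

```python
import numpy as np

def minkowski_dot(u, v):
    """Minkowski (Lorentzian) inner product: -u_0*v_0 + sum_{i>0} u_i*v_i."""
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def hyperbolic_perceptron_sketch(X, y, max_epochs=100):
    """Illustrative perceptron-style loop on the hyperboloid (Lorentz) model.

    X: array of shape (n, d+1); each row is assumed to lie on the hyperboloid,
       i.e. minkowski_dot(x, x) == -1 with x[0] > 0.
    y: array of labels in {-1, +1}.
    Prediction is sign(<w, x>_L); on a mistake, w is nudged toward y * x.
    """
    n, dim = X.shape
    w = np.zeros(dim)
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * minkowski_dot(w, xi) <= 0:  # misclassified under sign(<w, x>_L)
                w = w + yi * xi                 # additive correction (assumed update rule)
                mistakes += 1
        if mistakes == 0:                       # every point classified with the correct sign
            break
    return w
```

The only hyperbolic-specific ingredient here is the Minkowski inner product; the update itself mirrors the Euclidean perceptron and is included purely to fix notation for the decision rule discussed in the abstract.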

Date
Oct 16, 2020
Location
Microsoft Research, Machine Learning Foundations Seminar
Melanie Weber
Assistant Professor