The manifold hypothesis posits that high-dimensional data lies on or near a low-dimensional manifold. While the utility of encoding such structure has been demonstrated empirically, rigorous analysis of its impact on the learnability of neural networks is largely missing. We ask which minimal assumptions on the curvature and regularity of the manifold, if any, render the learning problem efficiently solvable. We prove that learning remains hard under input manifolds of bounded curvature, but that additional assumptions on the volume of the data manifold alleviate these fundamental limitations and guarantee learnability. Notable instances of this regime are manifolds that can be reliably reconstructed via manifold learning. We comment on and empirically explore intermediate regimes of manifolds with heterogeneous features commonly found in real-world data.