Fine-Grained Distribution-Dependent Learning Curves

Olivier Bousquet, Steve Hanneke, Shay Moran, Jonathan Shafer, Ilya Tolstikhin

Research output: Contribution to journal › Conference article › peer-review

Abstract

Learning curves plot the expected error of a learning algorithm as a function of the number of labeled samples it receives from a target distribution. They are widely used as a measure of an algorithm’s performance, but classic PAC learning theory cannot explain their behavior. As observed by Antos and Lugosi (1996, 1998), the classic ‘No Free Lunch’ lower bounds only trace the upper envelope above all learning curves of specific target distributions. For a concept class with VC dimension d the classic bound decays like d/n, yet it is possible that the learning curve for every specific distribution decays exponentially. In this case, for each n there exists a different ‘hard’ distribution requiring d/n samples. Antos and Lugosi asked which concept classes admit a ‘strong minimax lower bound’ – a lower bound of d/n that holds for a fixed distribution for infinitely many n. We solve this problem in a principled manner, by introducing a combinatorial dimension called VCL that characterizes the best d for which d/n is a strong minimax lower bound. Conceptually, the VCL dimension determines the asymptotic rate of decay of the minimax learning curve, which we call the ‘distribution-free tail’ of the class. Our characterization strengthens the lower bounds of Bousquet, Hanneke, Moran, van Handel, and Yehudayoff (2021), and it refines their analysis of learning curves, by showing that for classes with finite VCL the learning rate can be decomposed into a linear component that depends only on the hypothesis class and a faster (e.g., exponential) component that depends also on the target distribution. As a corollary, we recover the lower bound of Antos and Lugosi (1996, 1998) for half-spaces in R^d. Finally, to provide another viewpoint on our work and how it compares to traditional PAC learning bounds, we also present an alternative formulation of our results in a language that is closer to the PAC setting.
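The envelope phenomenon the abstract describes can be illustrated with a minimal sketch (not taken from the paper): consider a toy two-point distribution that puts mass 1-p on a point whose label is known and mass p on a point whose label is a fair coin flip. For any fixed p the learner's expected error decays exponentially in n, yet maximizing over p (the maximizer is roughly p = 1/(n+1), a different 'hard' distribution for each n) yields an envelope decaying only like 1/n. The specific functions below are illustrative assumptions, not the paper's constructions.

```python
import math

def expected_error(p, n):
    # Expected error after n i.i.d. samples: the learner errs (with
    # error p) with probability 1/2 whenever the rare point is absent
    # from the sample, which happens with probability (1-p)**n.
    return 0.5 * p * (1 - p) ** n

def minimax_envelope(n, grid=10_000):
    # 'No Free Lunch'-style envelope: worst case over the rare-point
    # mass p, maximized numerically over a grid of p values.
    return max(expected_error(k / grid, n) for k in range(1, grid))

# For a FIXED distribution (p = 0.1) the learning curve decays
# exponentially in n:
fixed_curve = [expected_error(0.1, n) for n in (10, 50, 100)]

# The envelope over all such distributions decays only like 1/n;
# for large n it is close to 1/(2*e*n):
envelope = [minimax_envelope(n) for n in (10, 50, 100)]
```

Calculus confirms the maximizer: differentiating 0.5·p·(1-p)^n in p gives p = 1/(n+1), whose value is about 1/(2en) for large n, so the envelope matches the d/n-type minimax rate even though every fixed distribution has an exponentially decaying curve.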

Original language: English
Pages (from-to): 5890-5924
Number of pages: 35
Journal: Proceedings of Machine Learning Research
Volume: 195
State: Published - 2023
Event: 36th Annual Conference on Learning Theory, COLT 2023 - Bangalore, India
Duration: 12 Jul 2023 - 15 Jul 2023

Keywords

  • Online Learning
  • PAC Learning
  • Strong Minimax Lower Bounds
  • Universal Learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
