Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension

Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran

Research output: Contribution to journal › Conference article › peer-review

Abstract

A classical result in online learning characterizes the optimal mistake bound achievable by deterministic learners using the Littlestone dimension (Littlestone ’88). We prove an analogous result for randomized learners: we show that the optimal expected mistake bound in learning a class H equals its randomized Littlestone dimension, which we define as follows: it is the largest d for which there exists a tree shattered by H whose average depth is 2d. We further study optimal mistake bounds in the agnostic case, as a function of the number of mistakes made by the best function in H, denoted by k. Towards this end we introduce the k-Littlestone dimension and its randomized variant, and use them to characterize the optimal deterministic and randomized mistake bounds. Quantitatively, we show that the optimal randomized mistake bound for learning a class with Littlestone dimension d is k + Θ(√(kd) + d) (equivalently, the optimal regret is Θ(√(kd) + d)). This also implies an optimal deterministic mistake bound of 2k + O(√(kd) + d), thus resolving an open question which was studied by Auer and Long [’99]. As an application of our theory, we revisit the classical problem of prediction using expert advice: about 30 years ago Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire and Warmuth studied prediction using expert advice, provided that the best among the n experts makes at most k mistakes, and asked what are the optimal mistake bounds (as a function of n and k). Cesa-Bianchi, Freund, Helmbold, and Warmuth [’93,’96] provided a nearly optimal bound for deterministic learners, and left the randomized case as an open problem. We resolve this question by providing an optimal learning rule in the randomized case, and showing that its expected mistake bound equals half of the deterministic bound, up to negligible additive terms.
This improves upon previous works by Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire and Warmuth [’93,’97], by Abernethy, Langford, and Warmuth [’06], and by Brânzei and Peres [’19], which handled the regimes k ≪ log n or k ≫ log n. In contrast, our result applies to all pairs n, k, and does so via a unified analysis using the randomized Littlestone dimension. In our proofs we develop and use optimal learning rules, which can be seen as natural variants of the Standard Optimal Algorithm (SOA) of Littlestone: a weighted variant in the agnostic case, and a probabilistic variant in the randomized case. We conclude the paper with suggested directions for future research and open questions.
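For background on the expert-advice setting the abstract describes, the following is a minimal sketch of the classical randomized weighted majority (multiplicative-weights) rule, a standard baseline that the paper's optimal randomized learning rule improves upon; the function name and the learning-rate parameter `eta` are illustrative choices, not taken from the paper.

```python
import random

def randomized_weighted_majority(rounds, labels, eta=0.5):
    """Multiplicative-weights prediction with expert advice: each round,
    sample an expert's prediction with probability proportional to its
    weight, then shrink the weights of the experts that erred."""
    n = len(rounds[0])          # number of experts
    weights = [1.0] * n
    mistakes = 0
    for preds, y in zip(rounds, labels):
        # Sample one expert's prediction in proportion to current weights.
        guess = random.choices(preds, weights=weights)[0]
        mistakes += guess != y
        # Penalize every expert whose prediction was wrong this round.
        weights = [w * (1 - eta) if p != y else w
                   for w, p in zip(weights, preds)]
    return mistakes
```

With learning rate eta, this rule guarantees an expected mistake bound of roughly (1 + O(eta)) k + O(log n / eta), which matches the k + Θ(√(k log n) + log n) shape only after tuning eta; the paper's contribution is an exactly optimal rule for all pairs n, k.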

Original language: English
Pages (from-to): 773-836
Number of pages: 64
Journal: Proceedings of Machine Learning Research
Volume: 195
State: Published - 2023
Externally published: Yes
Event: 36th Annual Conference on Learning Theory, COLT 2023 - Bangalore, India
Duration: 12 Jul 2023 - 15 Jul 2023

Keywords

  • Online learning
  • Online prediction
  • Randomized algorithms

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
