A Trichotomy for Transductive Online Learning

Steve Hanneke, Shay Moran, Jonathan Shafer

Research output: Contribution to journal › Conference article › peer-review

Abstract

We present new upper and lower bounds on the number of learner mistakes in the 'transductive' online learning setting of Ben-David, Kushilevitz and Mansour (1997). This setting is similar to standard online learning, except that the adversary fixes a sequence of instances x_1, ..., x_n to be labeled at the start of the game, and this sequence is known to the learner. Qualitatively, we prove a trichotomy, stating that the minimal number of mistakes made by the learner as n grows can take only one of precisely three possible values: n, Θ(log(n)), or Θ(1). Furthermore, this behavior is determined by a combination of the VC dimension and the Littlestone dimension. Quantitatively, we show a variety of bounds relating the number of mistakes to well-known combinatorial dimensions. In particular, we improve the known lower bound on the constant in the Θ(1) case from Ω(√log(d)) to Ω(log(d)), where d is the Littlestone dimension. Finally, we extend our results to cover multiclass classification and the agnostic setting.
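To make the setting concrete, here is a minimal Python sketch of the transductive online protocol described in the abstract: the instance sequence is fixed and shown to the learner up front, predictions are made one at a time, and the true label is revealed after each prediction. The `transductive_game` function and the `make_prediction` interface are hypothetical names introduced here for illustration; they are not from the paper.

```python
from typing import Callable, Sequence

def transductive_game(
    instances: Sequence,                # x_1, ..., x_n, fixed and known to the learner in advance
    true_labels: Sequence[int],         # adversary's labels, revealed one at a time
    make_prediction: Callable[[Sequence, int, list], int],
    # make_prediction(instances, t, history) -> predicted label for x_t,
    # where history holds the true labels revealed so far (hypothetical interface)
) -> int:
    """Run the transductive online game and return the learner's mistake count."""
    mistakes = 0
    history: list = []
    for t, y_true in enumerate(true_labels):
        y_hat = make_prediction(instances, t, history)
        if y_hat != y_true:
            mistakes += 1
        history.append(y_true)          # true label revealed only after the prediction
    return mistakes

# Usage: a trivial learner that always predicts 0 against alternating labels.
n = 8
instances = list(range(n))
labels = [t % 2 for t in range(n)]
print(transductive_game(instances, labels, lambda xs, t, h: 0))  # prints 4
```

The trichotomy in the paper concerns how the optimal value of this mistake count scales with n over all learners and adversaries: it is always n, Θ(log(n)), or Θ(1), depending on the VC and Littlestone dimensions of the hypothesis class.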

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: 10 Dec 2023 – 16 Dec 2023

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
