Improved Regret Bounds for Projection-free Bandit Convex Optimization

Dan Garber, Ben Kretzu

Research output: Contribution to journal › Conference article › peer-review

Abstract

We revisit the challenge of designing online algorithms for the bandit convex optimization (BCO) problem that are also scalable to high-dimensional problems. Hence, we consider algorithms that are projection-free, i.e., based on the conditional gradient method, whose only access to the feasible decision set is through a linear optimization oracle (as opposed to other methods that require potentially much more computationally expensive subprocedures, such as computing Euclidean projections). We present the first such algorithm that attains O(T^{3/4}) expected regret using only O(T) overall calls to the linear optimization oracle, in expectation, where T is the number of prediction rounds. This improves over the O(T^{4/5}) expected regret bound recently obtained by Chen et al. (2019), and actually matches the current best regret bound for projection-free online learning in the full information setting.
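To illustrate the projection-free principle the abstract refers to, below is a minimal sketch of a conditional gradient (Frank-Wolfe) update whose only access to the feasible set is a single call to a linear optimization oracle. The l2-ball oracle, the step-size schedule, and the toy objective are illustrative assumptions, not the algorithm or analysis of the paper.

```python
import numpy as np

def linear_optimization_oracle(grad, radius=1.0):
    """Illustrative LOO for an l2 ball: argmin_{||v|| <= radius} <grad, v>.
    The feasible set is an assumption made for this sketch only."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return -radius * grad / norm

def conditional_gradient_step(x, grad, t):
    """One Frank-Wolfe update: the feasible set is accessed only through the
    linear optimization oracle, so no Euclidean projection is computed."""
    v = linear_optimization_oracle(grad)   # linear minimization over the set
    eta = 2.0 / (t + 2.0)                  # standard step-size schedule
    return x + eta * (v - x)               # convex combination stays feasible

# Toy usage: minimize f(x) = ||x - b||^2 over the unit l2 ball.
b = np.array([2.0, -1.0])
x = np.zeros(2)
for t in range(100):
    grad = 2.0 * (x - b)
    x = conditional_gradient_step(x, grad, t)
print(x)  # approaches b / ||b||, the closest feasible point to b
```

In the bandit setting studied in the paper, the exact gradient used above is not available and must be replaced by a randomized gradient estimate built from the observed function values; the sketch only shows how feasibility is maintained without projections.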

Original language: English
Pages (from-to): 2196-2206
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 108
State: Published - 2020
Event: 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020 - Virtual, Online
Duration: 26 Aug 2020 - 28 Aug 2020

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
