Best-case lower bounds in online learning

Cristóbal Guzmán, Nishant Mehta, Ali Mortazavi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review



Much of the work in online learning focuses on the study of sublinear upper bounds on the regret. In this work, we initiate the study of best-case lower bounds in online convex optimization, wherein we bound the largest improvement an algorithm can obtain relative to the single best action in hindsight. This problem is motivated by the goal of better understanding the adaptivity of a learning algorithm. Another motivation comes from fairness: it is known that best-case lower bounds are instrumental in obtaining algorithms for decision-theoretic online learning (DTOL) that satisfy a notion of group fairness. Our main contribution is a general method for providing best-case lower bounds for Follow the Regularized Leader (FTRL) algorithms with time-varying regularizers; we use it to show that best-case lower bounds are of the same order as existing upper regret bounds. This covers situations with a fixed learning rate, decreasing learning rates, timeless methods, and adaptive gradient methods. In stark contrast, we show that the linearized version of FTRL can attain negative linear regret. Finally, in DTOL with two experts and binary losses, we fully characterize the best-case sequences, which provides a finer understanding of the best-case lower bounds.
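As a concrete instance of the setting studied in the abstract, the sketch below (our own illustration, not code from the paper) runs Hedge, i.e. FTRL with the entropic regularizer and a fixed learning rate, in DTOL with two experts and binary losses, and computes the regret whose most negative (best-case) value the paper lower-bounds. The loss sequence and learning rate are arbitrary choices for demonstration.

```python
import math

def hedge(losses, eta):
    """Run Hedge (FTRL with entropic regularizer, fixed learning rate eta)
    on a T x K loss matrix; return the algorithm's cumulative expected loss
    and each expert's cumulative loss."""
    K = len(losses[0])
    cum = [0.0] * K          # cumulative loss of each expert
    alg_loss = 0.0
    for round_losses in losses:
        # FTRL/Hedge weights: proportional to exp(-eta * cumulative loss)
        w = [math.exp(-eta * c) for c in cum]
        z = sum(w)
        p = [wi / z for wi in w]
        alg_loss += sum(pi * li for pi, li in zip(p, round_losses))
        cum = [c + l for c, l in zip(cum, round_losses)]
    return alg_loss, cum

# A binary loss sequence for two experts in which the better expert alternates.
losses = [[0, 1], [1, 0]] * 10
alg, cum = hedge(losses, eta=0.5)
regret = alg - min(cum)   # regret against the single best expert in hindsight
```

A best-case lower bound states how negative `regret` can be over any loss sequence; the paper shows that for FTRL variants like this one it is of the same order as the known upper regret bounds.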
Original language: English
Title of host publication: 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Subtitle of host publication: NeurIPS
Publisher: Curran Associates Inc.
Number of pages: 11
Publication status: Published - 5 Dec 2021
Event: 35th Conference on Neural Information Processing Systems, NeurIPS 2021 - Virtual, Online
Duration: 6 Dec 2021 - 14 Dec 2021
Conference number: 35


Conference: 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Abbreviated title: NeurIPS 2021
City: Virtual, Online


