Abstract
Meta-learning aims to extract common knowledge from similar training tasks in order to facilitate efficient and effective learning on future tasks. Several recent works have extended PAC-Bayes generalization error bounds to the meta-learning setting, allowing prior knowledge to be incorporated as a distribution over hypotheses that is expected to yield low error on new tasks similar to those previously observed. In this work, we develop novel bounds on the generalization error of test tasks based on recent data-dependent bounds, and we provide an algorithm for adapting prior knowledge to downstream tasks in a potentially more effective manner. We demonstrate the effectiveness of our algorithm numerically on few-shot image classification tasks with deep neural networks and show a significant reduction in generalization error without any additional adaptation data.
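For context, the single-task PAC-Bayes bounds that such meta-learning extensions build on can be sketched in a classical (McAllester-style) form; the exact constants and the meta-level version used in the paper may differ:

```latex
% Classical single-task PAC-Bayes bound (McAllester-style sketch);
% constants and the log term vary across variants in the literature.
% With probability at least 1 - \delta over an i.i.d. sample S of size m,
% simultaneously for all posteriors Q over hypotheses:
\[
  L(Q) \;\le\; \widehat{L}_S(Q)
  \,+\, \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}},
\]
% where P is a data-independent prior, L(Q) is the expected risk of the
% Gibbs predictor drawn from Q, and \widehat{L}_S(Q) its empirical risk on S.
```

In the meta-learning setting, the prior P itself is learned from previously observed tasks, which is where a distribution over hypotheses encoding prior knowledge enters the bound.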
Original language | English |
---|---|
Pages (from-to) | 796-810 |
Number of pages | 15 |
Journal | Proceedings of Machine Learning Research |
Volume | 232 |
State | Published - 2023 |
Event | 2nd Conference on Lifelong Learning Agents (CoLLA 2023), Montreal, Canada, 22-25 Aug 2023 |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability