Meta-learning has recently re-emerged as an important direction for developing algorithms for multi-task learning, dynamic environments, and federated settings; however, meta-learning approaches that can scale to deep neural networks are largely heuristic and lack formal guarantees. We build a theoretical framework for designing and understanding practical meta-learning methods that integrates sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms. Our approach enables the task-similarity to be learned adaptively, provides sharper transfer-risk bounds in the setting of statistical learning-to-learn, and leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure. We use our theory to modify several popular meta-learning algorithms and improve performance on standard problems in few-shot and federated learning.
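The core idea described above can be illustrated with a minimal sketch (not the speakers' exact algorithm): a shared initialization is meta-learned across a sequence of tasks using an online update, so that within-task learning from that initialization is fast whenever tasks are similar. All names and the quadratic task losses here are illustrative assumptions chosen for simplicity.

```python
import numpy as np

# Illustrative sketch only: meta-learning a shared initialization phi across
# tasks. Each task t has a simple quadratic loss 0.5*||w - c_t||^2 with
# optimum c_t; within-task learning is gradient descent started from phi,
# and phi is updated online as a running average of per-task solutions
# (a follow-the-leader-style meta-update). When tasks are similar (the c_t
# cluster together), phi converges near the cluster center, so new tasks
# start close to their optima.

def within_task_gd(phi, c, lr=0.1, steps=20):
    """Gradient descent on f(w) = 0.5*||w - c||^2 starting from phi."""
    w = phi.copy()
    for _ in range(steps):
        w -= lr * (w - c)  # gradient of 0.5*||w - c||^2 is (w - c)
    return w

def meta_learn(task_optima, dim):
    """Online meta-update: running average of the per-task solutions."""
    phi = np.zeros(dim)
    for t, c in enumerate(task_optima, start=1):
        w_t = within_task_gd(phi, c)
        phi += (w_t - phi) / t  # incremental running average
    return phi

# A "task environment": optima scattered tightly around a common center.
rng = np.random.default_rng(0)
center = np.array([1.0, -2.0])
tasks = [center + 0.1 * rng.standard_normal(2) for _ in range(50)]
phi = meta_learn(tasks, dim=2)
```

After 50 tasks, `phi` lies close to the environment center, which is the qualitative behavior the regret bounds in the talk formalize: per-task regret shrinks as the learned initialization adapts to the task-similarity.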
Joint work with Nina Balcan, Ameet Talwalkar, Jeff Li, and Sebastian Caldas.
The AI Seminar is generously sponsored by Apple.