learn2learn is a software library for meta-learning research.
learn2learn builds on top of PyTorch to accelerate two aspects of the meta-learning research cycle:
- fast prototyping, essential in letting researchers quickly try new ideas, and
- correct reproducibility, ensuring that these ideas are evaluated fairly.
learn2learn provides low-level utilities and a unified interface to create new algorithms and domains, together with high-quality implementations of existing algorithms and standardized benchmarks. It retains compatibility with torchvision, torchaudio, torchtext, cherry, and any other PyTorch-based library you might be using.
To learn more, see our whitepaper: arXiv:2008.12284
Overview
- `learn2learn.data`: `TaskDataset` and transforms to create few-shot tasks from any PyTorch dataset.
- `learn2learn.vision`: Models, datasets, and benchmarks for computer vision and few-shot learning.
- `learn2learn.gym`: Environments and utilities for meta-reinforcement learning.
- `learn2learn.algorithms`: High-level wrappers for existing meta-learning algorithms.
- `learn2learn.optim`: Utilities and algorithms for differentiable optimization and meta-descent.
Resources
- Website: http://learn2learn.net/
- Documentation: http://learn2learn.net/docs/learn2learn
- Tutorials: http://learn2learn.net/tutorials/getting_started/
- Examples: https://github.com/learnables/learn2learn/tree/master/examples
- GitHub: https://github.com/learnables/learn2learn/
- Slack: http://slack.learn2learn.net/
Installation

```
pip install learn2learn
```
Snippets & Examples
The following snippets provide a sneak peek at the functionalities of learn2learn.
High-level Wrappers
Few-Shot Learning with MAML
For more algorithms (ProtoNets, ANIL, Meta-SGD, Reptile, Meta-Curvature, KFO) refer to the examples folder. Most of them can be implemented with the `GBML` wrapper (documentation).
Meta-Descent with Hypergradient
Learn any kind of optimization algorithm with the `LearnableOptimizer` (example and documentation).
Learning Domains
Custom Few-Shot Dataset
Many standardized datasets (Omniglot, mini-/tiered-ImageNet, FC100, CIFAR-FS) are readily available in `learn2learn.vision.datasets` (documentation).
Environments and Utilities for Meta-RL
Parallelize your own meta-environments with `AsyncVectorEnv`, or use the standardized ones (documentation).
Low-Level Utilities
Differentiable Optimization
Learn and differentiate through updates of PyTorch Modules (documentation).
Changelog
A human-readable changelog is available in the CHANGELOG.md file.
Citation
To cite the learn2learn repository in your academic publications, please use the following reference.
Arnold, Sebastien M. R., Praateek Mahajan, Debajyoti Datta, Ian Bunner, and Konstantinos Saitas Zarkias. 2020. “learn2learn: A Library for Meta-Learning Research.” arXiv [cs.LG]. http://arxiv.org/abs/2008.12284.
You can also use the following Bibtex entry.
```
@article{arnold2020learn2learn,
  title   = {learn2learn: A Library for Meta-Learning Research},
  author  = {Arnold, Sebastien M. R. and Mahajan, Praateek and Datta, Debajyoti and Bunner, Ian and Zarkias, Konstantinos Saitas},
  journal = {arXiv preprint arXiv:2008.12284},
  year    = {2020},
}
```
Acknowledgements & Friends
- TorchMeta is a similar library, with a focus on datasets for supervised meta-learning.
- higher is a PyTorch library that enables differentiating through optimization inner-loops. While they monkey-patch `nn.Module` to be stateless, learn2learn retains the stateful PyTorch look-and-feel. For more information, refer to their arXiv paper.
- We are thankful to the following open-source implementations, which helped guide the design of learn2learn:
- Tristan Deleu's pytorch-maml-rl
- Jonas Rothfuss' ProMP
- Kwonjoon Lee's MetaOptNet
- Han-Jia Ye's and Hexiang Hu's FEAT