This directory contains examples of using learn2learn for meta-optimization or meta-descent.
hypergrad_mnist.py demonstrates how to implement a slightly modified version of "Online Learning Rate Adaptation with Hypergradient Descent".
The implementation departs from the algorithm presented in the paper in two ways.
- We forgo the analytical formulation of the learning rate's gradient, computing it via automatic differentiation instead, to demonstrate the capability of learn2learn's meta-optimization utilities.
- We adapt per-parameter learning rates instead of updating a single learning rate shared by all parameters.
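The second departure can be sketched in a few lines. The following is a minimal toy illustration (not the script's actual code) of hypergradient descent in the spirit of Baydin et al., adapted elementwise so that each parameter carries its own learning rate; the quadratic objective and all hyper-parameter values are assumptions for the example.

```python
import torch

# Toy sketch of per-parameter hypergradient descent: each parameter has its
# own learning rate, updated with the hypergradient of the loss.
torch.manual_seed(0)
w = torch.randn(5, requires_grad=True)   # toy "model" parameters
lr = torch.full_like(w, 1e-2)            # per-parameter learning rates
hyper_lr = 1e-4                          # meta learning rate (beta)
prev_grad = torch.zeros_like(w)          # gradient from the previous step

def loss_fn(params):
    # Toy quadratic objective standing in for the training loss.
    return (params ** 2).sum()

initial_loss = loss_fn(w).item()
for _ in range(100):
    grad, = torch.autograd.grad(loss_fn(w), w)
    # For SGD, d(loss)/d(lr) = -grad_t * grad_{t-1} (elementwise here),
    # so descending on the learning rates adds +beta * grad_t * grad_{t-1}.
    lr = lr + hyper_lr * grad * prev_grad
    with torch.no_grad():
        w -= lr * grad                   # SGD step with adapted rates
    prev_grad = grad.detach()
final_loss = loss_fn(w).item()
```

On this convex toy problem, successive gradients stay aligned, so every per-parameter learning rate grows and the loss decreases; on real objectives the rates can shrink as well.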
The parameters for this script were not carefully tuned.
To experiment, manually edit the script's hyper-parameters and run it with `python hypergrad_mnist.py`.