learn2learn.vision.models

Description

A set of commonly used models for meta-learning vision tasks.

OmniglotFC

OmniglotFC(input_size, output_size, sizes=None)

[Source]

Description

The fully-connected network used for Omniglot experiments, as described in Santoro et al., 2016.

References

  1. Santoro et al. 2016. “Meta-Learning with Memory-Augmented Neural Networks.” ICML.

Arguments

Example

net = OmniglotFC(input_size=28**2,
                 output_size=10,
                 sizes=[64, 64, 64])
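The `sizes` argument plausibly lists the hidden-layer widths in order, with a final linear layer mapping the last hidden width to `output_size`. This is our reading of the signature, not a confirmed implementation detail; the helper below sketches how the layer dimensions would chain under that assumption:

```python
def fc_layer_dims(input_size, output_size, sizes):
    """Chain (in, out) dimension pairs for a fully-connected stack.

    Assumes `sizes` lists hidden-layer widths in order, and the final
    layer maps the last hidden width to `output_size`.
    """
    widths = [input_size] + list(sizes) + [output_size]
    return list(zip(widths[:-1], widths[1:]))

dims = fc_layer_dims(28 ** 2, 10, [64, 64, 64])
# dims == [(784, 64), (64, 64), (64, 64), (64, 10)]
```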

OmniglotCNN

OmniglotCNN(output_size=5, hidden_size=64, layers=4)

[Source]

Description

The convolutional network commonly used for Omniglot, as described by Finn et al., 2017.

This network assumes inputs of shape (1, 28, 28).

References

  1. Finn et al. 2017. “Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.” ICML.

Arguments

Example

model = OmniglotCNN(output_size=20, hidden_size=128, layers=3)
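The (1, 28, 28) input assumption interacts with the `layers` argument. Assuming each convolutional block halves the spatial resolution — a common 2×2 pooling or stride-2 convention, stated here as an assumption rather than a detail taken from the source — the feature map shrinks as sketched below:

```python
def spatial_size(size, layers):
    # Halve (with floor) once per conv block, assuming each block
    # uses 2x2 pooling or a stride-2 convolution.
    for _ in range(layers):
        size //= 2
    return size

print(spatial_size(28, 4))  # 28 -> 14 -> 7 -> 3 -> 1
```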

MiniImagenetCNN

MiniImagenetCNN(output_size, hidden_size=32, layers=4)

[Source]

Description

The convolutional network commonly used for MiniImagenet, as described by Ravi and Larochelle, 2017.

This network assumes inputs of shape (3, 84, 84).

References

  1. Ravi and Larochelle. 2017. “Optimization as a Model for Few-Shot Learning.” ICLR.

Arguments

Example

model = MiniImagenetCNN(output_size=20, hidden_size=128, layers=3)
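Under the same halving assumption as above (each of the `layers` conv blocks halves spatial resolution and outputs `hidden_size` channels — an assumption, not a confirmed detail), the flattened feature dimension fed to the classifier head can be computed as follows:

```python
def flat_features(side, layers, hidden_size):
    # Assumes each conv block halves spatial resolution (floor
    # division) and outputs `hidden_size` channels.
    for _ in range(layers):
        side //= 2
    return side * side * hidden_size

print(flat_features(84, 4, 32))  # 84 -> 42 -> 21 -> 10 -> 5; 5*5*32 = 800
```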

learn2learn.vision.datasets

Description

Some datasets commonly used in meta-learning vision tasks.

FullOmniglot

FullOmniglot(root, transform=None, target_transform=None, download=False)

[Source]

Description

This class provides an interface to the Omniglot dataset.

The Omniglot dataset was introduced by Lake et al., 2015. Omniglot consists of 1623 character classes from 50 different alphabets, each class containing 20 samples. While the original dataset is separated into background and evaluation sets, this class concatenates both and leaves the choice of class splits to the user, as was done in Ravi and Larochelle, 2017. The background and evaluation splits are available in the torchvision package.

References

  1. Lake et al. 2015. “Human-Level Concept Learning through Probabilistic Program Induction.” Science.
  2. Ravi and Larochelle. 2017. “Optimization as a Model for Few-Shot Learning.” ICLR.

Arguments

Example

omniglot = l2l.vision.datasets.FullOmniglot(root='./data',
                                            transform=transforms.Compose([
                                                l2l.vision.transforms.RandomDiscreteRotation(
                                                    [0.0, 90.0, 180.0, 270.0]),
                                                transforms.Resize(28, interpolation=LANCZOS),
                                                transforms.ToTensor(),
                                                lambda x: 1.0 - x,
                                            ]),
                                            download=True)
omniglot = l2l.data.MetaDataset(omniglot)
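Since FullOmniglot concatenates both sets, partitioning the 1623 classes into meta-train/validation/test is left to the user. Below is a minimal sketch of one way to do this; the 1100/100/423 proportions are illustrative defaults, not the exact splits from Ravi and Larochelle, 2017:

```python
import random

def split_classes(n_classes=1623, n_train=1100, n_valid=100, seed=42):
    """Shuffle class indices and cut them into three disjoint splits."""
    rng = random.Random(seed)
    classes = list(range(n_classes))
    rng.shuffle(classes)
    train = classes[:n_train]
    valid = classes[n_train:n_train + n_valid]
    test = classes[n_train + n_valid:]
    return train, valid, test

train_cls, valid_cls, test_cls = split_classes()
# 1100 + 100 + 423 == 1623 classes, all disjoint
```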

MiniImagenet

MiniImagenet(root, mode='train', transform=None, target_transform=None)

[Source]

Description

The mini-ImageNet dataset was originally introduced by Vinyals et al., 2016.

It consists of 60,000 colour images of size 84x84 pixels. The dataset is divided into 3 splits of 64 training, 16 validation, and 20 testing classes, each class containing 600 examples. The classes are sampled from the ImageNet dataset, and we use the splits from Ravi and Larochelle, 2017.

References

  1. Vinyals et al. 2016. “Matching Networks for One Shot Learning.” NeurIPS.
  2. Ravi and Larochelle. 2017. “Optimization as a Model for Few-Shot Learning.” ICLR.

Arguments

Example

train_dataset = l2l.vision.datasets.MiniImagenet(root='./data', mode='train')
train_dataset = l2l.data.MetaDataset(train_dataset)
train_generator = l2l.data.TaskGenerator(dataset=train_dataset, ways=ways)
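The split sizes in the description can be cross-checked with quick arithmetic: 64 + 16 + 20 classes, each with 600 examples, accounts for all 60,000 images.

```python
# Class counts per split, as given in the description.
splits = {'train': 64, 'validation': 16, 'test': 20}
examples_per_class = 600

images = {mode: n * examples_per_class for mode, n in splits.items()}
total = sum(images.values())

print(images)  # {'train': 38400, 'validation': 9600, 'test': 12000}
print(total)   # 60000
```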

learn2learn.vision.transforms

Description

A set of transformations commonly used in meta-learning vision tasks.

RandomDiscreteRotation

RandomDiscreteRotation(degrees, *args, **kwargs)

[Source]

Description

Samples rotations from a given list, uniformly at random.

Arguments

Example

transform = RandomDiscreteRotation([0, 90, 180, 270])
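The core behaviour — picking one angle uniformly at random from the given list — can be sketched without any imaging dependency. This is an illustrative reimplementation of the sampling step only, not the library's actual transform:

```python
import random

class DiscreteRotationSketch:
    """Picks an angle uniformly from `degrees` on each call."""

    def __init__(self, degrees):
        self.degrees = list(degrees)

    def sample_angle(self):
        # Uniform choice over the provided angles.
        return random.choice(self.degrees)

sketch = DiscreteRotationSketch([0, 90, 180, 270])
angle = sketch.sample_angle()
assert angle in (0, 90, 180, 270)
```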