MetaDataset

MetaDataset(dataset)

Description

Wraps a torch Dataset by building a map from each target label to the indices of its samples. This comes in handy when we want to randomly sample elements with a particular label.

Note: For l2l to work, it is important that the dataset returns a (data, target) tuple. If your dataset does not return that, it is trivial to wrap it with another class that does; a minimal sketch of such a wrapper is shown below.
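
The sketch below assumes a hypothetical dataset whose items are dicts with 'image' and 'label' keys; the class name TupleWrapper and the dict keys are illustrative only, so adapt them to your own dataset:

import learn2learn as l2l
from torch.utils.data import Dataset

class TupleWrapper(Dataset):
    # Adapts a dataset that returns dicts into one that returns (data, target) tuples.
    def __init__(self, dataset):
        self.dataset = dataset

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, index):
        item = self.dataset[index]          # assumed layout: {'image': ..., 'label': ...}
        return item['image'], item['label']

# meta_dataset = l2l.data.MetaDataset(TupleWrapper(my_dict_dataset))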

Arguments

dataset (Dataset) - A torch Dataset that returns (data, target) tuples.

Example

import torchvision
import learn2learn as l2l
mnist = torchvision.datasets.MNIST(root="/tmp/mnist", train=True, download=True)
mnist = l2l.data.MetaDataset(mnist)
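
The wrapped dataset exposes the label-to-indices map described above. A short follow-up sketch, assuming the attribute is named labels_to_indices (the attribute name is taken from the l2l implementation, but treat it as an assumption):

indices_of_sevens = mnist.labels_to_indices[7]   # all dataset indices whose target is 7
data, target = mnist[indices_of_sevens[0]]       # fetch one sample with label 7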

TaskDataset

TaskDataset(dataset, task_transforms=None, num_tasks=-1, task_collate=None)

[Source]

Description

Creates a set of tasks from a given Dataset.

In addition to the Dataset, TaskDataset accepts a list of task transformations (task_transforms) which define the kind of tasks sampled from the dataset.

The tasks are lazily sampled upon indexing (or upon calling the .sample() method), and their descriptions are cached for later use. If num_tasks is -1, the TaskDataset will not cache task descriptions and will instead continuously resample new ones. In this case, the length of the TaskDataset is set to 1.

For more information on tasks and task descriptions, please refer to the documentation of task transforms.

Arguments

dataset (Dataset) - The dataset from which to sample tasks.
task_transforms (list, optional, default=None) - List of task transformations that define the sampled tasks.
num_tasks (int, optional, default=-1) - Number of tasks to generate; -1 resamples a new task on every access.
task_collate (callable, optional, default=None) - Collate function used to assemble a task from its samples.

Example

import learn2learn as l2l

dataset = l2l.data.MetaDataset(MyDataset())  # MyDataset: your dataset returning (data, target)
transforms = [
    l2l.data.transforms.NWays(dataset, n=5),
    l2l.data.transforms.KShots(dataset, k=1),
    l2l.data.transforms.LoadData(dataset),
]
taskset = l2l.data.TaskDataset(dataset, transforms, num_tasks=20000)
for task in taskset:
    X, y = task
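
The num_tasks argument controls the caching behaviour described above; a brief sketch, where the indexing and len() semantics follow the description rather than a verified API surface:

len(taskset)       # 20000; task descriptions are cached as they are sampled
X, y = taskset[0]  # indexing lazily samples (and caches) the task description at index 0

infinite_taskset = l2l.data.TaskDataset(dataset, transforms, num_tasks=-1)
len(infinite_taskset)  # 1; descriptions are never cached, each access resamples a new task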

sample

TaskDataset.sample()

Description

Randomly samples a task from the TaskDataset.

Example

X, y = taskset.sample()

learn2learn.data.transforms

Description

Collection of general task transformations.

A task transformation is an object that implements the callable interface. (Either a function or an object that implements the __call__ special method.) Each transformation is called on a task description, which consists of a list of DataDescription objects with attributes index and transforms, where index corresponds to the index of a single data sample in the dataset, and transforms is a list of transformations that will be applied to that sample. Each task transformation must return a new task description.

At first, the task description contains all samples from the dataset. A task transform takes this task description list and modifies it such that a particular task is created. For example, the NWays task transform filters data samples from the task description such that the remaining ones belong to a random subset of all available classes. (The size of the subset is controlled via the class's n argument.) On the other hand, the LoadData task transform simply appends a call to load the actual data from the dataset to the list of transformations of each sample.

To create a task from a task description, the TaskDataset applies each sample's list of transforms in order. Then, all samples are collated via the TaskDataset's collate function.
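
Because a task transform is just a callable over the task description, custom transforms are easy to write. A minimal sketch, assuming only that the description is a list of DataDescription objects that can be filtered like any Python list (the class name KeepHalf is made up for illustration):

class KeepHalf(object):
    # Hypothetical task transform: keeps every second entry of the task description.
    def __init__(self, dataset):
        self.dataset = dataset  # kept for symmetry with the built-in transforms; unused here

    def __call__(self, task_description):
        # task_description: list of DataDescription(index, transforms)
        return task_description[::2]

It can then be placed in the task_transforms list alongside the built-in transforms documented below.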

LoadData

LoadData(dataset)

[Source]

Description

Loads a sample from the dataset given its index.

Arguments

NWays

NWays(dataset, n=2)

[Source]

Description

Keeps samples from N random labels present in the task description.

Arguments

KShots

KShots(dataset, k=1, replacement=False)

[Source]

Description

Keeps K samples for each label present in the task description.

Arguments

FilterLabels

FilterLabels(dataset, labels)

[Source]

Description

Removes samples that do not belong to the given set of labels.

Arguments

RemapLabels

RemapLabels(dataset, shuffle=True)

[Source]

Description

Given samples from K classes, maps their labels to 0, ..., K - 1.

Arguments

ConsecutiveLabels

ConsecutiveLabels(dataset)

[Source]

Description

Re-orders the samples in the task description so that samples sharing the same label appear consecutively.

Note: when used before RemapLabels, the labels will be homogeneously clustered, but in no specific order.

Arguments
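
Putting the transforms together, a typical few-shot classification pipeline might look like the sketch below; the FilterLabels restriction to labels 0-4 and the MyDataset placeholder are purely illustrative:

import learn2learn as l2l

dataset = l2l.data.MetaDataset(MyDataset())  # MyDataset: your dataset returning (data, target)
transforms = [
    l2l.data.transforms.FilterLabels(dataset, [0, 1, 2, 3, 4]),  # restrict the label pool
    l2l.data.transforms.NWays(dataset, n=3),                     # pick 3 of the remaining labels
    l2l.data.transforms.KShots(dataset, k=2),                    # keep 2 samples per label
    l2l.data.transforms.LoadData(dataset),                       # load the actual (data, target) pairs
    l2l.data.transforms.RemapLabels(dataset),                    # relabel the classes to 0, 1, 2
    l2l.data.transforms.ConsecutiveLabels(dataset),              # group same-label samples together
]
taskset = l2l.data.TaskDataset(dataset, transforms, num_tasks=1000)
X, y = taskset.sample()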