MetaDataset(dataset)


Description

Wraps a torch dataset and builds a map from targets to sample indices. This is useful when we want to sample elements randomly for a particular label.

Notes: For l2l to work, it is important that the dataset returns a (data, target) tuple. If your dataset does not, it should be trivial to wrap it with another class that does.
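As a sketch of the note above, a non-standard dataset (here one whose `__getitem__` returns a dict) can be wrapped so it yields the `(data, target)` tuples that MetaDataset expects. The class name and dict keys below are illustrative, not part of the library:

```python
# Illustrative wrapper: adapts a dataset that returns dicts into one that
# returns (data, target) tuples, as MetaDataset requires.
class TupleWrapper:
    def __init__(self, dataset):
        self.dataset = dataset

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, index):
        item = self.dataset[index]  # e.g. {"image": ..., "label": ...}
        return item["image"], item["label"]
```

The wrapped dataset can then be passed directly to `l2l.data.MetaDataset`.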

Arguments

• dataset (Dataset) - A torch dataset.
• labels_to_indices (Dict) - A dictionary mapping labels to their indices. If not provided, the mapping is computed by iterating over every datapoint in the dataset. (default: None)

Example

mnist = torchvision.datasets.MNIST(root="/tmp/mnist", train=True)
mnist = l2l.data.MetaDataset(mnist)
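To illustrate the label-to-indices map that the description mentions, here is a minimal pure-Python sketch (not the library's implementation) of how such a map can be built from a `(data, target)` dataset and then used to sample an index for a particular label:

```python
import random
from collections import defaultdict

# Illustrative sketch of a labels_to_indices map: for each label, record
# the indices of all samples carrying that label.
def build_labels_to_indices(dataset):
    labels_to_indices = defaultdict(list)
    for index in range(len(dataset)):
        _, target = dataset[index]
        labels_to_indices[target].append(index)
    return labels_to_indices

# Toy dataset of (data, target) tuples.
data = [("a", 0), ("b", 1), ("c", 0)]
mapping = build_labels_to_indices(data)
# Sample a random index of an element with label 0.
index_for_label_0 = random.choice(mapping[0])
```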


TaskDataset(dataset, task_transforms=None, num_tasks=-1, task_collate=None)


[Source]

Description

Creates a set of tasks from a given Dataset.

In addition to the Dataset, TaskDataset accepts a list of task transformations (task_transforms) which define the kind of tasks sampled from the dataset.

The tasks are lazily sampled upon indexing (or calling the .sample() method), and their descriptions cached for later use. If num_tasks is -1, the TaskDataset will not cache task descriptions and instead continuously resample new ones. In this case, the length of the TaskDataset is set to 1.

Arguments

• dataset (Dataset) - Dataset of data to compute tasks on.
• task_transforms (list, optional, default=None) - List of task transformations defining the kind of tasks sampled from the dataset.
• num_tasks (int, optional, default=-1) - Number of tasks to generate; if -1, new tasks are continuously resampled and not cached.
• task_collate (callable, optional, default=None) - Collate function applied to the samples of a task.

Example

dataset = l2l.data.MetaDataset(MyDataset())
transforms = [
    l2l.data.transforms.NWays(dataset, n=5),
    l2l.data.transforms.KShots(dataset, k=1),
    l2l.data.transforms.LoadData(dataset),
]
taskset = TaskDataset(dataset, transforms, num_tasks=20000)


## sample

TaskDataset.sample()


Description

Randomly samples a new task from the TaskDataset.

Example

X, y = taskset.sample()


# learn2learn.data.transforms

Description

A task transformation is an object that implements the callable interface, i.e. either a function or an object that implements the __call__ special method. Each transformation is called on a task description, which consists of a list of DataDescription objects with attributes index and transforms: index corresponds to the index of a single data sample in the dataset, and transforms is a list of transformations that will be applied to that sample. Each task transformation must return a new task description.

At first, the task description contains all samples from the dataset. A task transform takes this task description list and modifies it such that a particular task is created. For example, the NWays task transform filters data samples from the task description such that the remaining ones belong to a random subset of all available classes. (The size of the subset is controlled via the class's n argument.) The LoadData task transform, on the other hand, simply appends a call to load the actual data from the dataset to each sample's list of transformations.

To create a task from a task description, the TaskDataset applies each sample's list of transforms in order. Then, all samples are collated via the TaskDataset's collate function.
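The callable protocol described above can be sketched with a toy transform. The DataDescription class below is a stand-in with the attributes the text mentions (index and transforms); it is not the library's own class, and the transform itself is purely illustrative:

```python
# Stand-in for the DataDescription objects a task transform receives.
class DataDescription:
    def __init__(self, index):
        self.index = index
        self.transforms = []  # per-sample transforms, applied in order

# Toy task transform: keeps only samples with an even dataset index.
# Like NWays or KShots, it takes a task description and returns a new one.
class KeepEvenIndices:
    def __call__(self, task_description):
        return [d for d in task_description if d.index % 2 == 0]

# Initially, the task description lists every sample in the dataset.
task_description = [DataDescription(i) for i in range(5)]
task_description = KeepEvenIndices()(task_description)
```

A real transform would typically also append per-sample transforms (as LoadData does) or filter by label rather than by index.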

# LoadData

LoadData(dataset)


[Source]

Description

Loads a sample from the dataset given its index.

Arguments

• dataset (Dataset) - The dataset from which to load the sample.

# NWays

NWays(dataset, n=2)


[Source]

Description

Keeps samples from N random labels present in the task description.

Arguments

• dataset (Dataset) - The dataset from which to load the sample.
• n (int, optional, default=2) - Number of labels to sample from the task description's labels.

# KShots

KShots(dataset, k=1, replacement=False)


[Source]

Description

Keeps K samples for each label present in the task description.

Arguments

• dataset (Dataset) - The dataset from which to load the sample.
• k (int, optional, default=1) - The number of samples per label.
• replacement (bool, optional, default=False) - Whether to sample with replacement.

# FilterLabels

FilterLabels(dataset, labels)


[Source]

Description

Removes samples that do not belong to the given set of labels.

Arguments

• dataset (Dataset) - The dataset from which to load the sample.
• labels (list) - The list of labels to include.

# RemapLabels

RemapLabels(dataset, shuffle=True)


[Source]

Description

Given samples from K classes, maps the labels to 0, ..., K-1.

Arguments

• dataset (Dataset) - The dataset from which to load the sample.
• shuffle (bool, optional, default=True) - Whether to shuffle the mapping from original labels to new labels.

# ConsecutiveLabels

ConsecutiveLabels(dataset)


[Source]

Description

Re-orders the samples in the task description such that samples sharing the same label appear consecutively.

Note: when used before RemapLabels, the labels will be homogeneously clustered, but in no specific order.

Arguments

• dataset (Dataset) - The dataset from which to load the sample.