# clone_module

```
clone_module(module)
```

**Description**

Creates a copy of a module, whose parameters/buffers/submodules are created using PyTorch's torch.clone().

This means that the computational graph is kept: you can compute derivatives of the new module's parameters w.r.t. the original parameters.

**Arguments**

**module** (Module) - Module to be cloned.

**Return**

- (Module) - The cloned module.

**Example**

```
net = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 2))
clone = clone_module(net)
error = loss(clone(X), y)
error.backward()  # Gradients are back-propagated all the way to net.
```
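
Because the clone's parameters stay attached to the original graph, you can also take gradients of the clone's loss directly with respect to the original parameters, which is the building block of second-order meta-learning algorithms such as MAML. The following is a minimal runnable sketch; the data, the loss, and the top-level `l2l` import path are assumptions for illustration.

```
import torch
import torch.nn as nn
import learn2learn as l2l  # assumes clone_module is exported as l2l.clone_module

net = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 2))
clone = l2l.clone_module(net)

X, y = torch.randn(8, 20), torch.randn(8, 2)  # hypothetical data
error = nn.functional.mse_loss(clone(X), y)

# The graph links the clone's parameters to net's parameters, so gradients
# can be taken w.r.t. net directly; create_graph=True keeps the graph
# around for higher-order derivatives.
grads = torch.autograd.grad(error, net.parameters(), create_graph=True)
```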

# detach_module

```
detach_module(module)
```

**Description**

Detaches all parameters/buffers of a previously cloned module from its computational graph.

Note: detach_module works in-place, so it does not return a copy.

**Arguments**

**module** (Module) - Module to be detached.

**Example**

```
net = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 2))
clone = clone_module(net)
detach_module(clone)
error = loss(clone(X), y)
error.backward()  # Gradients are back-propagated on clone, not net.
```
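
To make the in-place detaching concrete, here is a small sketch that inspects the graph before and after the call; the `l2l` import path is an assumption for illustration.

```
import torch
import torch.nn as nn
import learn2learn as l2l  # assumes clone_module/detach_module are exported here

net = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 2))
clone = l2l.clone_module(net)

# Cloned parameters are non-leaf tensors still attached to net's graph.
assert next(clone.parameters()).grad_fn is not None

l2l.detach_module(clone)

# After detaching (in-place), the clone's parameters are leaves:
# backward passes through the clone no longer reach net.
assert next(clone.parameters()).grad_fn is None
```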

# magic_box

```
magic_box(x)
```

**Description**

The magic box operator, which evaluates to 1 but whose gradient is $\nabla x$:

$$\boxdot(x) = \exp\left(x - \bot(x)\right),$$

where $\bot$ is the stop-gradient (or detach) operator.

This operator is useful when computing higher-order derivatives of stochastic graphs. For more information, please refer to the DiCE paper (Foerster et al., 2018; Reference 1).
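
The formula above translates almost verbatim into PyTorch. The following is a minimal sketch of the idea, not necessarily the library's exact implementation:

```
import torch

def magic_box(x):
    # exp(x - detach(x)): the forward value is exp(0) = 1, but detach
    # blocks gradients only on the second term, so gradients still flow
    # through x as if the box were not there.
    return torch.exp(x - x.detach())
```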

**References**

- Foerster et al. 2018. “DiCE: The Infinitely Differentiable Monte-Carlo Estimator.” arXiv.

**Arguments**

**x** (Tensor) - Tensor to transform.

**Return**

- (Tensor) - A tensor that evaluates to 1, but whose gradient is the gradient of x.

**Example**

```
loss = (magic_box(cum_log_probs) * advantages).mean() # loss is the mean advantage
loss.backward()
```
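
To make the example self-contained, here is a runnable version; the log-probabilities and advantages are hypothetical stand-ins for quantities computed from a sampled trajectory.

```
import torch

log_probs = torch.randn(5, requires_grad=True)  # hypothetical per-step log-probs
advantages = torch.randn(5)                     # hypothetical per-step advantages
cum_log_probs = log_probs.cumsum(dim=0)

loss = (magic_box(cum_log_probs) * advantages).mean()
assert torch.allclose(loss, advantages.mean())  # forward value: the mean advantage

loss.backward()  # gradients flow to log_probs through the magic box
```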