torchdms.model.FullyConnected

class torchdms.model.FullyConnected(layer_sizes, activations, *args, beta_l1_coefficient=0.0, interaction_l1_coefficient=0.0, **kwargs)[source]

Bases: torchdms.model.TorchdmsModel

A flexible fully connected neural network model.

Parameters
  • layer_sizes (List[int]) – Sequence of widths for each layer between input and output.

  • activations (List[Callable]) – Corresponding activation functions for each layer. The first layer with a None or nn.Identity() activation marks the latent space.

  • args – base positional arguments, see torchdms.model.TorchdmsModel

  • beta_l1_coefficient (float) – lasso penalty on latent space \(\beta\) coefficients

  • interaction_l1_coefficient (float) – lasso penalty on parameters in the pre-latent interaction layer(s)

  • kwargs – base keyword arguments, see torchdms.model.TorchdmsModel

Example

With layer_sizes = [10, 2, 10, 10] and activations = ["relu", None, "relu", "relu"], we have a latent space of 2 nodes that feeds into two more dense layers of 10 nodes each before the output. The layers before the latent layer form a nonlinear module for site-wise interactions, in this case a single layer of 10 nodes. The latent layer also has skip connections directly from the input layer, so single-mutation effects are always modeled.
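
For orientation, here is a plain-PyTorch sketch of the wiring this example describes. It is illustrative only, not the torchdms implementation; the input width and the single output dimension are arbitrary assumptions.

import torch
import torch.nn as nn

# Illustrative sketch only, NOT the torchdms implementation.
# layer_sizes = [10, 2, 10, 10]; activations = ["relu", None, "relu", "relu"]
input_size = 100  # hypothetical encoded-sequence width

interaction = nn.Sequential(nn.Linear(input_size, 10), nn.ReLU())  # pre-latent site-wise interactions
latent = nn.Linear(10, 2)                           # 2-node latent layer (identity activation)
latent_skip = nn.Linear(input_size, 2, bias=False)  # skip connection from the input
post_latent = nn.Sequential(
    nn.Linear(2, 10), nn.ReLU(),
    nn.Linear(10, 10), nn.ReLU(),
    nn.Linear(10, 1),  # single output dimension, also an assumption
)

def sketch_forward(x):
    z = latent(interaction(x)) + latent_skip(x)  # the latent space sees the input directly
    return post_latent(z)

print(sketch_forward(torch.randn(8, input_size)).shape)  # torch.Size([8, 1])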

Methods

  • add_module – Adds a child module to the current module.
  • apply – Applies fn recursively to every submodule (as returned by .children()) as well as self.
  • beta_coefficients – Beta coefficients (single mutant effects only, no interaction terms).
  • bfloat16 – Casts all floating point parameters and buffers to bfloat16 datatype.
  • buffers – Returns an iterator over module buffers.
  • children – Returns an iterator over immediate children modules.
  • cpu – Moves all model parameters and buffers to the CPU.
  • cuda – Moves all model parameters and buffers to the GPU.
  • double – Casts all floating point parameters and buffers to double datatype.
  • eval – Sets the module in evaluation mode.
  • extra_repr – Set the extra representation of the module.
  • fix_gauge – Perform gauge-fixing procedure on latent space parameters.
  • float – Casts all floating point parameters and buffers to float datatype.
  • forward – Data \(X\) \(\rightarrow\) output \(Y\).
  • from_latent_to_output – Evaluate the mapping from the latent space to output.
  • get_buffer – Returns the buffer given by target if it exists, otherwise throws an error.
  • get_extra_state – Returns any extra state to include in the module's state_dict.
  • get_parameter – Returns the parameter given by target if it exists, otherwise throws an error.
  • get_submodule – Returns the submodule given by target if it exists, otherwise throws an error.
  • half – Casts all floating point parameters and buffers to half datatype.
  • load_state_dict – Copies parameters and buffers from state_dict into this module and its descendants.
  • modules – Returns an iterator over all modules in the network.
  • monotonic_params_from_latent_space – Yields parameters to be floored to zero in a monotonic model.
  • named_buffers – Returns an iterator over module buffers, yielding both the name of each buffer and the buffer itself.
  • named_children – Returns an iterator over immediate children modules, yielding both the name of each module and the module itself.
  • named_modules – Returns an iterator over all modules in the network, yielding both the name of each module and the module itself.
  • named_parameters – Returns an iterator over module parameters, yielding both the name of each parameter and the parameter itself.
  • numpy_single_mutant_predictions – Single mutant predictions as a numpy array of shape (AAs, sites, outputs).
  • parameters – Returns an iterator over module parameters.
  • randomize_parameters – Randomize model parameters.
  • register_backward_hook – Registers a backward hook on the module (deprecated).
  • register_buffer – Adds a buffer to the module.
  • register_forward_hook – Registers a forward hook on the module.
  • register_forward_pre_hook – Registers a forward pre-hook on the module.
  • register_full_backward_hook – Registers a full backward hook on the module.
  • register_parameter – Adds a parameter to the module.
  • regularization_loss – Lasso penalty of single mutant effects and pre-latent interaction weights, as a scalar-valued torch.Tensor.
  • requires_grad_ – Change if autograd should record operations on parameters in this module.
  • seq_to_binary – Takes a string of amino acids and creates an appropriate one-hot encoding.
  • set_extra_state – Called from load_state_dict() to handle any extra state found within the state_dict.
  • set_require_grad_for_all_parameters – Set require_grad for all parameters.
  • share_memory – See torch.Tensor.share_memory_().
  • single_mutant_predictions – Single mutant predictions as a list (across output dimensions) of Pandas dataframes.
  • state_dict – Returns a dictionary containing the whole state of the module.
  • str_summary – A one-line summary of the model.
  • to – Moves and/or casts the parameters and buffers.
  • to_empty – Moves the parameters and buffers to the specified device without copying storage.
  • to_latent – Latent space representation \(Z\).
  • train – Sets the module in training mode.
  • type – Casts all parameters and buffers to dst_type.
  • xpu – Moves all model parameters and buffers to the XPU.
  • zero_grad – Sets gradients of all model parameters to zero.

Attributes

  • T_destination – alias of TypeVar('T_destination', bound=Mapping[str, torch.Tensor]).
  • characteristics – Salient characteristics of the model that aren't represented in the PyTorch description.
  • dump_patches – This allows better BC support for load_state_dict().
  • internal_layer_dimensions – List of widths of internal layers.
  • latent_dim – Number of dimensions in latent space.
  • sequence_length – Input amino acid sequence length.

property characteristics: Dict

Salient characteristics of the model that aren’t represented in the PyTorch description.

Return type

Dict

property internal_layer_dimensions: List[int]

List of widths of internal layers.

Return type

List[int]

property latent_dim: int

Number of dimensions in latent space.

Return type

int

str_summary()[source]

A one-line summary of the model.

Return type

str

fix_gauge(gauge_mask)[source]

Perform gauge-fixing procedure on latent space parameters.

Parameters

gauge_mask – 0/1 mask array of the same shape as the latent-space input, with 1s marking parameters that should be zeroed
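
A usage sketch; the shapes below assume a hypothetical latent dimension of 2 over an encoded input of width 100, and model stands for a fitted FullyConnected instance.

import torch

# Hypothetical shapes: match these to your model's latent-space input.
gauge_mask = torch.zeros(2, 100)
gauge_mask[:, 0] = 1.0        # mark one column's parameters for zeroing
model.fix_gauge(gauge_mask)   # `model` is an assumed FullyConnected instance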

to_latent(x, **kwargs)[source]

Latent space representation \(Z\)

\[z \equiv \phi(x) \equiv \beta^\intercal x\]
Parameters

x (Tensor) – input data tensor \(X\)

Return type

Tensor

from_latent_to_output(z, **kwargs)[source]

Evaluate the mapping from the latent space to output.

\[y = g(z)\]
Parameters

z (Tensor) – latent space representation

Return type

Tensor
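
Together with to_latent(), this factors the forward pass as \(y = g(\phi(x))\). A quick sketch, where model is assumed to be a FullyConnected instance and input_width is a hypothetical stand-in for its encoded-input width:

import torch

x = torch.randn(8, input_width)      # `input_width` is a hypothetical stand-in
z = model.to_latent(x)               # z = phi(x) = beta^T x
y = model.from_latent_to_output(z)   # y = g(z)
assert torch.allclose(y, model(x))   # forward() composes the two maps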

beta_coefficients()[source]

Beta coefficients (single mutant effects only, no interaction terms)

Return type

Tensor

regularization_loss()[source]

Lasso penalty of single mutant effects and pre-latent interaction weights, as a scalar-valued torch.Tensor.

Return type

Tensor
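
In terms of the constructor parameters, the penalty plausibly has the form beta_l1_coefficient \(\cdot \lVert \beta \rVert_1\) plus interaction_l1_coefficient times the \(\ell_1\) norm of the pre-latent interaction weights. A hedged re-implementation sketch; the interaction-layer naming is an assumption, and the exact set of penalized tensors in torchdms may differ.

import torch

def lasso_penalty(model, beta_l1=0.0, interaction_l1=0.0):
    # L1 term on the latent-space betas (single mutant effects)
    penalty = beta_l1 * model.beta_coefficients().abs().sum()
    # L1 term on pre-latent interaction weights
    for name, param in model.named_parameters():
        if "interaction" in name and name.endswith("weight"):  # assumed naming
            penalty = penalty + interaction_l1 * param.abs().sum()
    return penalty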

__call__(*input, **kwargs)

Call self as a function.

add_module(name, module)

Adds a child module to the current module.

The module can be accessed as an attribute using the given name.

Parameters
  • name (string) – name of the child module. The child module can be accessed from this module using the given name

  • module (Module) – child module to be added to the module.

Return type

None

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init).

Parameters

fn (Module -> None) – function to be applied to each submodule

Returns

self

Return type

Module

Example:

>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

Note

This method modifies the module in-place.

Returns

self

Return type

Module

buffers(recurse=True)

Returns an iterator over module buffers.

Parameters

recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields

torch.Tensor – module buffer

Example:

>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> torch.Size([20])
<class 'torch.Tensor'> torch.Size([20, 1, 5, 5])

Return type

Iterator[Tensor]

children()

Returns an iterator over immediate children modules.

Yields

Module – a child module

Return type

Iterator[Module]

cpu()

Moves all model parameters and buffers to the CPU.

Note

This method modifies the module in-place.

Returns

self

Return type

Module

cuda(device=None)

Moves all model parameters and buffers to the GPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the GPU while being optimized.

Note

This method modifies the module in-place.

Parameters

device (int, optional) – if specified, all parameters will be copied to that device

Returns

self

Return type

Module

double()

Casts all floating point parameters and buffers to double datatype.

Note

This method modifies the module in-place.

Returns

self

Return type

Module

eval()

Sets the module in evaluation mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

This is equivalent to self.train(False).

See Locally disabling gradient computation for a comparison between .eval() and several similar mechanisms that may be confused with it.

Returns

self

Return type

Module

extra_repr()

Set the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

Return type

str

float()

Casts all floating point parameters and buffers to float datatype.

Note

This method modifies the module in-place.

Returns

self

Return type

Module

forward(x, **kwargs)

data \(X\) \(\rightarrow\) output \(Y\).

\[y = g(\phi(x))\]
Parameters

x (Tensor) – input data tensor \(X\)

Return type

Tensor

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.

Parameters

target (str) – The fully-qualified string name of the buffer to look for. (See get_submodule for how to specify a fully-qualified string.)

Returns

The buffer referenced by target

Return type

torch.Tensor

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not a buffer

get_extra_state()

Returns any extra state to include in the module’s state_dict. Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module’s state_dict().

Note that extra state should be pickleable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.

Returns

Any extra state to store in the module’s state_dict

Return type

object

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.

Parameters

target (str) – The fully-qualified string name of the Parameter to look for. (See get_submodule for how to specify a fully-qualified string.)

Returns

The Parameter referenced by target

Return type

torch.nn.Parameter

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Parameter

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

For example, let's say you have an nn.Module A with a nested submodule net_b, which itself has two submodules, net_c and linear; net_c in turn has a submodule conv.

To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").

The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.

Parameters

target (str) – The fully-qualified string name of the submodule to look for. (See above example for how to specify a fully-qualified string.)

Returns

The submodule referenced by target

Return type

torch.nn.Module

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Module
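
A runnable sketch of the lookup, reconstructing the hypothetical nesting described above:

import torch.nn as nn

# Rebuild the hypothetical A / net_b / net_c / conv structure.
net = nn.Module()
net.net_b = nn.Module()
net.net_b.net_c = nn.Module()
net.net_b.net_c.conv = nn.Conv2d(16, 33, kernel_size=3, stride=2)
net.net_b.linear = nn.Linear(100, 200)

conv = net.get_submodule("net_b.net_c.conv")  # cost bounded by nesting depth
# Equivalent but O(N) over all transitive modules:
assert conv is dict(net.named_modules())["net_b.net_c.conv"]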

half()

Casts all floating point parameters and buffers to half datatype.

Note

This method modifies the module in-place.

Returns

self

Return type

Module

load_state_dict(state_dict, strict=True)

Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function.

Parameters
  • state_dict (dict) – a dict containing parameters and persistent buffers.

  • strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True

Returns

  • missing_keys is a list of str containing the missing keys

  • unexpected_keys is a list of str containing the unexpected keys

Return type

NamedTuple with missing_keys and unexpected_keys fields

Note

If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
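
The usual round trip, sketched (the file name is arbitrary and model is an assumed instance):

import torch

torch.save(model.state_dict(), "model.pt")          # persist parameters and buffers

state = torch.load("model.pt")
missing, unexpected = model.load_state_dict(state)  # strict=True by default
assert not missing and not unexpected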

modules()

Returns an iterator over all modules in the network.

Yields

Module – a module in the network

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
>>>     print(idx, '->', m)

0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)

Return type

Iterator[Module]

monotonic_params_from_latent_space()

Yields parameters to be floored to zero in a monotonic model. This is every parameter after the latent space excluding bias parameters.

We follow the convention that the latent layers of a network are named like latent_layer* and the weights and bias are denoted layer_name*.weight and layer_name*.bias.

Layers from nested modules will be prefixed (e.g. with Independent).

Return type

Generator[Tensor, None, None]
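
A sketch of how this generator can be used to enforce monotonicity after each optimizer step; torchdms's own training loop may differ.

import torch

with torch.no_grad():
    for param in model.monotonic_params_from_latent_space():
        param.clamp_(min=0.0)  # floor post-latent weights at zero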

named_buffers(prefix='', recurse=True)

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

Parameters
  • prefix (str) – prefix to prepend to all buffer names.

  • recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields

(string, torch.Tensor) – Tuple containing the name and buffer

Example:

>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())

Return type

Iterator[Tuple[str, Tensor]]

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

Yields

(string, Module) – Tuple containing a name and child module

Example:

>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
Return type

Iterator[Tuple[str, Module]]

named_modules(memo=None, prefix='', remove_duplicate=True)

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

Parameters
  • memo (Optional[Set[Module]]) – a memo to store the set of modules already added to the result

  • prefix (str) – a prefix that will be added to the name of the module

  • remove_duplicate (bool) – whether or not to remove the duplicated module instances in the result

Yields

(string, Module) – Tuple of name and module

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
>>>     print(idx, '->', m)

0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))

named_parameters(prefix='', recurse=True)

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

Parameters
  • prefix (str) – prefix to prepend to all parameter names.

  • recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields

(string, Parameter) – Tuple containing the name and parameter

Example:

>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())

Return type

Iterator[Tuple[str, Parameter]]

numpy_single_mutant_predictions()

Single mutant predictions as a numpy array of shape (AAs, sites, outputs).

Return type

ndarray

parameters(recurse=True)

Returns an iterator over module parameters.

This is typically passed to an optimizer.

Parameters

recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields

Parameter – module parameter

Example:

>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> torch.Size([20])
<class 'torch.Tensor'> torch.Size([20, 1, 5, 5])

Return type

Iterator[Parameter]

randomize_parameters()

Randomize model parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions.

Returns

a handle that can be used to remove the added hook by calling handle.remove()

Return type

torch.utils.hooks.RemovableHandle

register_buffer(name, tensor, persistent=True)

Adds a buffer to the module.

This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's running_mean is not a parameter, but is part of the module's state. Buffers are persistent by default and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be part of this module's state_dict.

Buffers can be accessed as attributes using given names.

Parameters
  • name (string) – name of the buffer. The buffer can be accessed from this module using the given name

  • tensor (Tensor or None) – buffer to be registered. If None, operations that run on buffers, such as cuda, are ignored, and the buffer is not included in the module's state_dict.

  • persistent (bool) – whether the buffer is part of this module’s state_dict.

Example:

>>> self.register_buffer('running_mean', torch.zeros(num_features))

Return type

None

register_forward_hook(hook)

Registers a forward hook on the module.

The hook will be called every time after forward() has computed an output. It should have the following signature:

hook(module, input, output) -> None or modified output

The input contains only the positional arguments given to the module; keyword arguments are passed only to forward, not to the hooks. The hook can modify the output. It can also modify the input in-place, but this has no effect on the forward computation, since the hook is called after forward() has run.

Returns

a handle that can be used to remove the added hook by calling handle.remove()

Return type

torch.utils.hooks.RemovableHandle
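
A brief usage sketch: log the module's output shape on each forward pass, then remove the hook. Here model is an assumed instance and input_width a hypothetical stand-in for its encoded-input width.

import torch

def shape_logger(module, inputs, output):
    # called after forward() with the positional inputs and the output
    print(type(module).__name__, "->", tuple(output.shape))

handle = model.register_forward_hook(shape_logger)
_ = model(torch.randn(8, input_width))
handle.remove()  # detach the hook when it is no longer needed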

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

The hook will be called every time before forward() is invoked. It should have the following signature:

hook(module, input) -> None or modified input

The input contains only the positional arguments given to the module; keyword arguments are passed only to forward, not to the hooks. The hook can modify the input. It can return either a tuple or a single modified value; a single returned value will be wrapped into a tuple (unless it is already a tuple).

Returns

a handle that can be used to remove the added hook by calling handle.remove()

Return type

torch.utils.hooks.RemovableHandle

register_full_backward_hook(hook)

Registers a full backward hook on the module.

The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:

hook(module, grad_input, grad_output) -> tuple(Tensor) or None

The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.

For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module’s forward function.

Warning

Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.

Returns

a handle that can be used to remove the added hook by calling handle.remove()

Return type

torch.utils.hooks.RemovableHandle

register_parameter(name, param)

Adds a parameter to the module.

The parameter can be accessed as an attribute using the given name.

Parameters
  • name (string) – name of the parameter. The parameter can be accessed from this module using the given name

  • param (Parameter or None) – parameter to be added to the module. If None, operations that run on parameters, such as cuda, are ignored, and the parameter is not included in the module's state_dict.

Return type

None

requires_grad_(requires_grad=True)

Change if autograd should record operations on parameters in this module.

This method sets the parameters’ requires_grad attributes in-place.

This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).

See Locally disabling gradient computation for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it.

Parameters

requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True.

Returns

self

Return type

Module

seq_to_binary(seq)

Takes a string of amino acids and creates an appropriate one-hot encoding.
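
The encoding has one indicator per (site, amino acid) pair. An illustrative stand-alone version; the alphabet ordering and flattening that torchdms actually uses may differ.

import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # an assumed ordering of the 20 amino acids

def one_hot(seq):
    encoding = np.zeros((len(seq), len(ALPHABET)))
    for site, aa in enumerate(seq):
        encoding[site, ALPHABET.index(aa)] = 1.0
    return encoding.flatten()  # length: sites * 20

print(one_hot("MKV").shape)  # (60,)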

property sequence_length

Input amino acid sequence length.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.

Parameters

state (dict) – Extra state from the state_dict

set_require_grad_for_all_parameters(value)

Set require_grad for all parameters.

Parameters

value (bool) – require_grad=value for all parameters
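
For example, to freeze the whole model before fine-tuning parts of it:

model.set_require_grad_for_all_parameters(False)  # freeze everything
assert not any(p.requires_grad for p in model.parameters())
model.set_require_grad_for_all_parameters(True)   # unfreeze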

share_memory()

See torch.Tensor.share_memory_()

Return type

~T

single_mutant_predictions()

Single mutant predictions as a list (across output dimensions) of Pandas dataframes.

Return type

List[DataFrame]

state_dict(destination=None, prefix='', keep_vars=False)

Returns a dictionary containing a whole state of the module.

Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.

Returns

a dictionary containing a whole state of the module

Return type

dict

Example:

>>> module.state_dict().keys()
['bias', 'weight']

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

This can be called as

to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)

Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.

See below for examples.

Note

This method modifies the module in-place.

Parameters
  • device (torch.device) – the desired device of the parameters and buffers in this module

  • dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module

  • tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module

  • memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument)

Returns

self

Return type

Module

Examples:

>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)

>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

Parameters

device (torch.device) – The desired device of the parameters and buffers in this module.

Returns

self

Return type

Module

train(mode=True)

Sets the module in training mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters

mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

Returns

self

Return type

Module

type(dst_type)

Casts all parameters and buffers to dst_type.

Note

This method modifies the module in-place.

Parameters

dst_type (type or string) – the desired type

Returns

self

Return type

Module

xpu(device=None)

Moves all model parameters and buffers to the XPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the XPU while being optimized.

Note

This method modifies the module in-place.

Parameters

device (int, optional) – if specified, all parameters will be copied to that device

Returns

self

Return type

Module

zero_grad(set_to_none=False)

Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context.

Parameters

set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.

Return type

None