combustion.nn.functional¶
Extensions to torch.nn.functional.
Activation Functions¶
- combustion.nn.functional.swish(inputs, memory_efficient=True)[source]¶
The swish activation function, defined as
\[f(x) = x \cdot \text{sigmoid}(x)\]
- Parameters
inputs (Tensor) – The input tensor
memory_efficient (bool, optional) – Whether or not to use an implementation that is more memory efficient at training time. When memory_efficient=True, this method is incompatible with TorchScript.
- Return type
Tensor
Warning
This method is traceable with TorchScript when memory_efficient=False, but is un-scriptable due to the use of torch.autograd.Function for a memory-efficient backward pass. Please export using torch.jit.trace() with memory_efficient=False.
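A minimal doctest-style sketch; assuming the implementation follows the definition above, the check below holds:

>>> import torch
>>> from combustion.nn.functional import swish
>>> x = torch.randn(2, 3)
>>> y = swish(x, memory_efficient=False)
>>> torch.allclose(y, x * torch.sigmoid(x))
True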
- combustion.nn.functional.hard_swish(inputs, inplace=False)[source]¶
The hard swish activation function proposed in Searching For MobileNetV3, defined as
\[f(x) = x \cdot \frac{\text{ReLU6}(x + 3)}{6}\]
Hard swish approximates the swish activation, but is computationally cheaper due to the removal of \(\text{sigmoid}(x)\).
- Parameters
inputs (Tensor) – The input tensor
inplace (bool, optional) – Whether or not to perform the operation in place.
- Return type
Tensor
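A short sketch checking hard_swish against its definition (assuming the implementation matches the formula above):

>>> import torch
>>> import torch.nn.functional as F
>>> from combustion.nn.functional import hard_swish
>>> x = torch.randn(2, 3)
>>> torch.allclose(hard_swish(x), x * F.relu6(x + 3) / 6)
True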
- combustion.nn.functional.hard_sigmoid(inputs, inplace=True)[source]¶
The hard sigmoid activation function, defined as
\[f(x) = \frac{\text{ReLU6}(x + 3)}{6}\]
Hard sigmoid is a computationally efficient approximation to the sigmoid activation and is more suitable for quantization.
- Parameters
inputs (Tensor) – The input tensor
inplace (bool, optional) – Whether or not to perform the operation in place.
- Return type
Tensor
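Analogously, a sketch checking hard_sigmoid against its definition (inplace=False is passed so the input is not mutated before the comparison):

>>> import torch
>>> import torch.nn.functional as F
>>> from combustion.nn.functional import hard_sigmoid
>>> x = torch.randn(2, 3)
>>> torch.allclose(hard_sigmoid(x, inplace=False), F.relu6(x + 3) / 6)
True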
Utilities¶
- combustion.nn.functional.patch_dynamic_same_pad(module, padding_mode='constant', pad_value=0.0, include_classes=[], include_names=[], exclude_names=[])[source]¶
Patches spatial layers in a torch.nn.Module, wrapping each layer in a combustion.nn.DynamicSamePad module. This method allows dynamic same padding to be added to a module during or after instantiation.
Note
This method alone is not sufficient to ensure shape matching throughout a U-Net or similar architecture. Use this method in conjunction with combustion.nn.MatchShapes for correct end-to-end operation on any input.
Warning
This method is experimental.
- Parameters
module (torch.nn.Module) – The module to patch with dynamic same padding.
padding_mode (str) – Padding mode for combustion.nn.DynamicSamePad
pad_value (float) – Fill value for combustion.nn.DynamicSamePad
include_classes (iterable of types) – Types of modules to be patched. By default, PyTorch's convolutional and pooling layers are matched.
include_names (iterable of str) – Explicit names of children to be patched. If include_names is specified, only children whose names appear in include_names will be patched.
exclude_names (iterable of str) – Names of children to be excluded from patching.
- Returns
A mapping of child module names to their newly patched module instances.
- Return type
Dict[str, torch.nn.modules.module.Module]
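A brief usage sketch; the model and shapes are illustrative, and the output-size comment assumes DynamicSamePad yields "same"-style padding for a stride-1 convolution:

>>> import torch
>>> from combustion.nn.functional import patch_dynamic_same_pad
>>> model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, kernel_size=3))
>>> # convolutional/pooling children are wrapped in DynamicSamePad in place;
>>> # the returned dict maps child names to the patched instances
>>> patched = patch_dynamic_same_pad(model, padding_mode='constant', pad_value=0.0)
>>> out = model(torch.rand(1, 3, 11, 11))  # spatial size preserved at 11x11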
- combustion.nn.functional.fill_normal(inputs, fill_mask=None, sample_mask=None, preserve_var=True, unbiased=True)¶
Fills a tensor with samples from a normal distribution under optional masking constraints, preserving mean and optionally variance. This method is aimed at a use case where the parameters of the normal distribution are derived from the input tensor.
Note
This method may be significantly faster when sample_mask is None.
- Parameters
inputs (torch.Tensor) – Input tensor to fill
fill_mask (torch.Tensor) – Boolean mask of locations that should be filled with new values. By default, the entire tensor will be filled with new values.
sample_mask (torch.Tensor) – Boolean mask of locations to include when calculating per-channel mean/variance. By default, all locations are included in the mean/variance calculation.
preserve_var (bool) – If true, fill values for each channel will be generated by sampling from a normal distribution parameterized by the calculated mean/variance of the channel. Otherwise, sample from a normal distribution centered at the per-channel mean, but with zero variance.
unbiased (bool) – Whether to use an unbiased estimator for variance. See torch.var_mean().
Dropout Example:
>>> t1 = torch.rand(4, 3, 10, 10)
>>> # drop channels 0 and 1 while preserving channel mean/variance
>>> t1[:, 0:2, :, :] = fill_normal(t1[:, 0:2, :, :])
- Shape
inputs - \((N, C, *)\)
fill_mask - Same as inputs
sample_mask - Same as inputs
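A sketch of masked filling using the documented parameters (the mask construction here is illustrative):

>>> import torch
>>> from combustion.nn.functional import fill_normal
>>> t = torch.rand(4, 3, 10, 10)
>>> fill = t > 0.9   # replace only high-valued locations
>>> keep = ~fill     # estimate per-channel mean/variance from the rest
>>> t2 = fill_normal(t, fill_mask=fill, sample_mask=keep)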