combustion.nn.functional

Extensions to torch.nn.functional.

Activation Functions

combustion.nn.functional.swish(inputs, memory_efficient=True)[source]

The swish activation function, defined as

\[f(x) = x \cdot \text{sigmoid}(x) \]
Parameters
  • inputs (Tensor) – The input tensor

  • memory_efficient (bool, optional) – Whether or not to use an implementation that is more memory efficient at training time. When memory_efficient=True, this method is incompatible with TorchScript.

Return type

torch.Tensor

Warning

This method is traceable with TorchScript when memory_efficient=False, but cannot be scripted because the memory-efficient backward pass relies on torch.autograd.Function. To export, use torch.jit.trace() with memory_efficient=False.
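
Example (a minimal usage sketch; the reference computation and the traced export assume the definition and warning above):

>>> import torch
>>> from combustion.nn.functional import swish
>>> x = torch.randn(2, 3, 8, 8)
>>> y = swish(x)  # memory-efficient path (default)
>>> # reference computation from the definition above
>>> torch.allclose(y, x * torch.sigmoid(x), atol=1e-6)
True
>>> # exporting via tracing requires the non-memory-efficient path
>>> def export_fn(t):
...     return swish(t, memory_efficient=False)
>>> traced = torch.jit.trace(export_fn, x)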

combustion.nn.functional.hard_swish(inputs, inplace=False)[source]

The hard swish activation function proposed in Searching For MobileNetV3, defined as

\[f(x) = x \cdot \frac{\text{ReLU6}(x + 3)}{6} \]

Hard swish approximates the swish activation but is computationally cheaper because it avoids computing \(\text{sigmoid}(x)\).

Parameters
  • inputs (Tensor) – The input tensor

  • inplace (bool, optional) – Whether or not to perform the operation in place.

Return type

torch.Tensor
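
Example (a minimal sketch checking the definition above against torch.nn.functional.relu6; it assumes hard_swish implements that formula exactly):

>>> import torch
>>> import torch.nn.functional as F
>>> from combustion.nn.functional import hard_swish
>>> x = torch.randn(2, 3, 8, 8)
>>> # reference computation: x * ReLU6(x + 3) / 6
>>> torch.allclose(hard_swish(x), x * F.relu6(x + 3) / 6)
True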

combustion.nn.functional.hard_sigmoid(inputs, inplace=True)[source]

The hard sigmoid activation function, defined as

\[f(x) = \frac{\text{ReLU6}(x + 3)}{6} \]

Hard sigmoid is a computationally efficient approximation to the sigmoid activation and is more suitable for quantization.

Parameters
  • inputs (Tensor) – The input tensor

  • inplace (bool, optional) – Whether or not to perform the operation in place.

Return type

torch.Tensor
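
Example (a minimal sketch; it assumes hard_sigmoid implements the formula above exactly):

>>> import torch
>>> import torch.nn.functional as F
>>> from combustion.nn.functional import hard_sigmoid
>>> x = torch.randn(4, 10)
>>> # reference computation: ReLU6(x + 3) / 6
>>> torch.allclose(hard_sigmoid(x), F.relu6(x + 3) / 6)
True
>>> # the approximation saturates to 0 for x <= -3 and to 1 for x >= 3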

Utilities

combustion.nn.functional.patch_dynamic_same_pad(module, padding_mode='constant', pad_value=0.0, include_classes=[], include_names=[], exclude_names=[])[source]

Patches spatial layers in a torch.nn.Module, wrapping each layer in a combustion.nn.DynamicSamePad module. This method allows for dynamic same padding to be added to a module during or after instantiation.

Note

This method alone is not sufficient to ensure shape matching throughout a U-Net or similar architecture. Use this method in conjunction with combustion.nn.MatchShapes for correct end-to-end operation on arbitrary input shapes.

Warning

This method is experimental.

Parameters
  • module (torch.nn.Module) – The module to patch with dynamic same padding.

  • padding_mode (str) – Padding mode for combustion.nn.DynamicSamePad.

  • pad_value (float) – Fill value for combustion.nn.DynamicSamePad.

  • include_classes (iterable of types) – Types of modules to be patched. By default, PyTorch’s convolutional and pooling layers are matched.

  • include_names (iterable of str) – Explicit names of children to be patched. If include_names is specified, only children whose names appear in include_names will be patched.

  • exclude_names (iterable of str) – Names of children to be excluded from patching.

Returns

A mapping of child module names to their newly patched module instances.

Return type

Dict[str, torch.nn.modules.module.Module]
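
Example (a minimal sketch using the documented defaults; which children are wrapped depends on the module's structure and on include_classes):

>>> import torch
>>> import torch.nn as nn
>>> from combustion.nn.functional import patch_dynamic_same_pad
>>> model = nn.Sequential(
...     nn.Conv2d(3, 16, kernel_size=3, stride=2),
...     nn.ReLU(),
...     nn.Conv2d(16, 32, kernel_size=3, stride=2),
... )
>>> patched = patch_dynamic_same_pad(model)
>>> patched_names = list(patched.keys())  # names of the children that were wrapped
>>> out = model(torch.rand(1, 3, 33, 33))  # padding is now computed from the runtime input shape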

combustion.nn.functional.fill_normal()

Fills a tensor with samples from a normal distribution under optional masking constraints, preserving the mean and optionally the variance. This method is aimed at use cases where the parameters of the normal distribution are derived from the input tensor itself.

Note

This method may be significantly faster when sample_mask is None.

Parameters
  • inputs (torch.Tensor) – Input tensor to fill

  • fill_mask (torch.Tensor) – Boolean mask of locations that should be filled with new values. By default, the entire tensor will be filled with new values.

  • sample_mask (torch.Tensor) – Boolean mask of locations to include when calculating per channel mean/variance. By default, all locations are included in the mean/variance calculation.

  • preserve_var (bool) – If true, fill values for each channel are sampled from a normal distribution parameterized by the calculated per channel mean/variance. Otherwise, fill values are drawn from a zero-variance distribution centered at the per channel mean (i.e. the mean itself).

  • unbiased (bool) – Whether to use an unbiased estimator for variance. See torch.var_mean().

Dropout Example:

>>> import torch
>>> from combustion.nn.functional import fill_normal
>>> t1 = torch.rand(4, 3, 10, 10)
>>> # drop channels 0 and 1 while preserving channel mean/variance
>>> t1[:, 0:2, :, :] = fill_normal(t1[:, 0:2, :, :])
Shape
  • inputs - \((N, C, *)\)

  • fill_mask - Same as inputs

  • sample_mask - Same as inputs
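
Masked example (a sketch continuing the dropout example above; fill_mask and sample_mask are passed as keyword arguments matching the parameter list, and the fill values themselves are random):

>>> t2 = torch.rand(4, 3, 10, 10)
>>> # fill the left half of each channel, estimating per channel mean/variance from the right half only
>>> fill_mask = torch.zeros_like(t2, dtype=torch.bool)
>>> fill_mask[..., :5] = True
>>> t2 = fill_normal(t2, fill_mask=fill_mask, sample_mask=~fill_mask)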
