combustion.nn.functional¶
Extensions to torch.nn.functional.
Activation Functions¶
- combustion.nn.functional.swish(inputs, memory_efficient=True)[source]¶

  The swish activation function, defined as

  \[f(x) = x \cdot \text{sigmoid}(x)\]

  - Parameters
    - inputs (Tensor) – The input tensor
    - memory_efficient (bool, optional) – Whether or not to use an implementation that is more memory efficient at training time. When memory_efficient=True, this method is incompatible with TorchScript.
  - Return type
    - Tensor
  Warning

  This method is traceable with TorchScript when memory_efficient=False, but is un-scriptable due to the use of torch.autograd.Function for a memory-efficient backward pass. Please export using torch.jit.trace() with memory_efficient=False.
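For illustration only, the sketch below transcribes the formula above and follows the export advice in the warning; swish_reference is a hypothetical helper name used here, not part of the library.

import torch

from combustion.nn.functional import swish


def swish_reference(inputs: torch.Tensor) -> torch.Tensor:
    # Direct transcription of f(x) = x * sigmoid(x); no memory-efficient backward pass.
    return inputs * torch.sigmoid(inputs)


x = torch.randn(4, 8, requires_grad=True)
y = swish(x, memory_efficient=True)  # memory-efficient path; not usable with TorchScript
assert torch.allclose(y, swish_reference(x))

# For export, trace with memory_efficient=False as recommended in the warning above.
traced = torch.jit.trace(lambda t: swish(t, memory_efficient=False), x)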
- combustion.nn.functional.hard_swish(inputs, inplace=False)[source]¶

  The hard swish activation function proposed in Searching for MobileNetV3, defined as

  \[f(x) = x \cdot \frac{\text{ReLU6}(x + 3)}{6}\]

  Hard swish approximates the swish activation but is computationally cheaper due to the removal of \(\text{sigmoid}(x)\).

  - Parameters
    - inputs (Tensor) – The input tensor
    - inplace (bool, optional) – Whether or not to perform the operation in place.
  - Return type
    - Tensor
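As a minimal sketch of the formula above, hard swish can be written with torch.nn.functional.relu6 and compared against the library function; hard_swish_reference is an illustrative name, not part of the library.

import torch
import torch.nn.functional as F

from combustion.nn.functional import hard_swish


def hard_swish_reference(inputs: torch.Tensor) -> torch.Tensor:
    # Direct transcription of f(x) = x * ReLU6(x + 3) / 6.
    return inputs * F.relu6(inputs + 3) / 6


x = torch.linspace(-6, 6, steps=13)
assert torch.allclose(hard_swish(x), hard_swish_reference(x))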
- combustion.nn.functional.hard_sigmoid(inputs, inplace=True)[source]¶

  The hard sigmoid activation function, defined as

  \[f(x) = \frac{\text{ReLU6}(x + 3)}{6}\]

  Hard sigmoid is a computationally efficient approximation to the sigmoid activation and is more suitable for quantization.

  - Parameters
    - inputs (Tensor) – The input tensor
    - inplace (bool, optional) – Whether or not to perform the operation in place.
  - Return type
    - Tensor
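As with hard swish, a minimal sketch of the formula and of its relationship to the ordinary sigmoid is given below; hard_sigmoid_reference is an illustrative name, and the tolerance in the final check is an assumption chosen only to demonstrate the approximation.

import torch
import torch.nn.functional as F

from combustion.nn.functional import hard_sigmoid


def hard_sigmoid_reference(inputs: torch.Tensor) -> torch.Tensor:
    # Direct transcription of f(x) = ReLU6(x + 3) / 6.
    return F.relu6(inputs + 3) / 6


x = torch.linspace(-6, 6, steps=13)
out = hard_sigmoid(x, inplace=False)  # the documented default inplace=True would overwrite x
assert torch.allclose(out, hard_sigmoid_reference(x))

# Hard sigmoid is a coarse, piecewise-linear approximation of the true sigmoid.
assert torch.allclose(out, torch.sigmoid(x), atol=0.1)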