segformer

SegFormer implementation adapted from https://github.com/lucidrains/segformer-pytorch.

class mfai.pytorch.models.segformer.DsConv2d(nb_in_channels, nb_out_channels, kernel_size, padding, stride=1, bias=True)[source]

Bases: Module

Parameters:

  • nb_in_channels (int)

  • nb_out_channels (int)

  • kernel_size (int)

  • padding (int)

  • stride (int) – default: 1

  • bias (bool) – default: True
forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

x (Tensor)

Return type:

Tensor
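DsConv2d is a depthwise-separable convolution: a per-channel (grouped) spatial convolution followed by a 1x1 pointwise convolution that mixes channels. The parameter-count arithmetic below is an illustrative sketch of why this factorization is cheaper than a standard convolution; it assumes square kernels and ignores bias terms.

```python
# Parameter-count comparison: standard conv vs. depthwise-separable conv.
# Illustrative arithmetic only; assumes square k x k kernels, no bias.
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # Every output channel mixes all input channels over a k x k window.
    return c_in * c_out * k * k

def ds_conv_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise step: one k x k filter per input channel (groups=c_in),
    # then a 1x1 pointwise conv to mix channels.
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

print(standard_conv_params(32, 64, 3))  # 18432
print(ds_conv_params(32, 64, 3))        # 2336
```

For a 32-to-64 channel 3x3 layer, the separable form uses roughly 8x fewer weights.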

class mfai.pytorch.models.segformer.EfficientSelfAttention(*, dim, heads, kernel_and_stride)[source]

Bases: Module

Parameters:

  • dim (int)

  • heads (int)

  • kernel_and_stride (int)
forward(x)[source]


Parameters:

x (Tensor)

Return type:

Tensor
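EfficientSelfAttention follows SegFormer's sequence-reduction scheme: keys and values are spatially downsampled before attention, so the score matrix shrinks quadratically in the reduction ratio. The cost model below is a hedged sketch assuming kernel_and_stride acts as that reduction ratio on the key/value grid; exact costs depend on the implementation.

```python
# Rough cost model for sequence-reduction self-attention on an H x W grid.
# Assumes keys/values are reduced by a stride-r convolution; illustrative only.
def attention_cost(h: int, w: int, reduction: int = 1) -> int:
    n_queries = h * w
    n_keys = (h // reduction) * (w // reduction)
    return n_queries * n_keys  # score-matrix entries per head

full = attention_cost(64, 64)        # 16777216
reduced = attention_cost(64, 64, 8)  # 262144
print(full // reduced)               # 64, i.e. 8**2
```

With reduction 8 (the first stage's default here), the attention map is 64x smaller than full self-attention.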

class mfai.pytorch.models.segformer.LayerNorm(dim, eps=1e-05)[source]

Bases: Module

Parameters:

  • dim (int)

  • eps (float) – default: 1e-05
forward(x)[source]


Parameters:

x (Tensor)

Return type:

Tensor

class mfai.pytorch.models.segformer.MiT(*, channels, dims, heads, ff_expansions, kernel_and_strides, num_layers)[source]

Bases: Module

Parameters:

  • channels (int)

  • dims (tuple[int, ...])

  • heads (tuple[int, ...])

  • ff_expansions (tuple[int, ...])

  • kernel_and_strides (tuple[int, ...])

  • num_layers (int)
forward(x, return_layer_outputs=False)[source]


Parameters:

  • x (Tensor)

  • return_layer_outputs (bool) – default: False

Return type:

Tensor | list[Tensor]
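MiT (Mix Transformer) is the hierarchical encoder: each stage downsamples the feature map and widens the channel dimension, producing the multi-scale outputs the decoder fuses. The sketch below shows the stage shapes under the assumption of the usual SegFormer stage strides (4, 2, 2, 2) and the default dims=(32, 64, 160, 256); the actual values depend on the settings used.

```python
# Sketch of the multi-scale feature maps a MiT-style encoder emits,
# assuming stage strides (4, 2, 2, 2); values are illustrative.
def stage_shapes(h, w, dims=(32, 64, 160, 256), strides=(4, 2, 2, 2)):
    shapes = []
    for dim, stride in zip(dims, strides):
        h, w = h // stride, w // stride
        shapes.append((dim, h, w))  # (channels, height, width) per stage
    return shapes

print(stage_shapes(224, 224))
# [(32, 56, 56), (64, 28, 28), (160, 14, 14), (256, 7, 7)]
```

This is why return_layer_outputs=True yields a list of tensors: one feature map per stage, at 1/4, 1/8, 1/16 and 1/32 of the input resolution.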

class mfai.pytorch.models.segformer.MixFeedForward(*, dim, expansion_factor)[source]

Bases: Module

Parameters:
  • dim (int)

  • expansion_factor (int)

forward(x)[source]


Parameters:

x (Tensor)

Return type:

Tensor

class mfai.pytorch.models.segformer.PreNorm(dim, fn)[source]

Bases: Module

Parameters:

  • dim (int)

  • fn (Module)
forward(x)[source]


Parameters:

x (Tensor)

Return type:

Tensor

class mfai.pytorch.models.segformer.Segformer(in_channels, out_channels, input_shape, settings=SegformerSettings(dims=(32, 64, 160, 256), heads=(1, 2, 5, 8), ff_expansion=(8, 8, 4, 4), kernel_and_stride=(8, 4, 2, 1), num_layers=2, decoder_dim=256, autopad_enabled=False, num_downsampling_chans=32), *args, **kwargs)[source]

Bases: BaseModel, AutoPaddingModel

Segformer architecture with extra upsampling in the decoder to match the input image size.

Parameters:

  • in_channels (int)

  • out_channels (int)

  • input_shape (tuple[int, int])

  • settings (SegformerSettings) – default: SegformerSettings()
features_last = False
forward(x)[source]


Parameters:

x (Tensor)

Return type:

Tensor

model_type = 3
num_spatial_dims: int = 2
onnx_supported = True
register: bool = True
property settings: SegformerSettings

Returns the settings instance used to configure this model.

settings_kls

alias of SegformerSettings

supported_num_spatial_dims = (2,)
validate_input_shape(input_shape)[source]
Given an input shape, verifies whether the inputs fit the calling model's specifications.

Parameters:

input_shape (Size) – The shape of the input data, excluding batch and channel dimensions. For example, for a batch of 2D tensors of shape [B,C,W,H], pass [W,H]; for 3D data of shape [B,C,W,H,D], pass [W,H,D].

Returns:

A tuple whose first element is a boolean signaling whether the given input shape already fits the model's requirements. If that value is False, the second element contains the closest shape that does fit the model; otherwise it is None.

Return type:

tuple[bool, Size]
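The autopadding logic can be pictured as a divisibility check: with the default settings, the encoder downsamples by a factor of 32 overall, so each spatial dimension must be a multiple of that factor. The sketch below is a hedged stand-in for what validate_input_shape reports, under that divisibility assumption; the real method's criterion may differ.

```python
# Hypothetical shape check mirroring validate_input_shape's contract:
# (fits, closest_fitting_shape_or_None). Assumes each spatial dim must be
# a multiple of the model's total downsampling factor (32 by default).
import math

def check_input_shape(input_shape, factor=32):
    if all(d % factor == 0 for d in input_shape):
        return True, None
    closest = tuple(math.ceil(d / factor) * factor for d in input_shape)
    return False, closest

print(check_input_shape((224, 224)))  # (True, None)
print(check_input_shape((200, 300)))  # (False, (224, 320))
```

When autopad_enabled is set in the settings, a shape like (200, 300) would be padded up to the closest fitting shape rather than rejected.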

class mfai.pytorch.models.segformer.SegformerSettings(dims=(32, 64, 160, 256), heads=(1, 2, 5, 8), ff_expansion=(8, 8, 4, 4), kernel_and_stride=(8, 4, 2, 1), num_layers=2, decoder_dim=256, autopad_enabled=False, num_downsampling_chans=32)[source]

Bases: object

Parameters:

  • dims (tuple[int, ...]) – default: (32, 64, 160, 256)

  • heads (tuple[int, ...]) – default: (1, 2, 5, 8)

  • ff_expansion (tuple[int, ...]) – default: (8, 8, 4, 4)

  • kernel_and_stride (tuple[int, ...]) – default: (8, 4, 2, 1)

  • num_layers (int) – default: 2

  • decoder_dim (int) – default: 256

  • autopad_enabled (bool) – default: False

  • num_downsampling_chans (int) – default: 32
autopad_enabled: bool = False
decoder_dim: int = 256
dims: tuple[int, ...] = (32, 64, 160, 256)
ff_expansion: tuple[int, ...] = (8, 8, 4, 4)
classmethod from_dict(kvs, *, infer_missing=False)
Return type:

TypeVar(A, bound=DataClassJsonMixin)

Parameters:

kvs (dict | list | str | int | float | bool | None)

classmethod from_json(s, *, parse_float=None, parse_int=None, parse_constant=None, infer_missing=False, **kw)
Return type:

TypeVar(A, bound=DataClassJsonMixin)

Parameters:

s (str | bytes | bytearray)

heads: tuple[int, ...] = (1, 2, 5, 8)
kernel_and_stride: tuple[int, ...] = (8, 4, 2, 1)
num_downsampling_chans: int = 32
num_layers: int = 2
classmethod schema(*, infer_missing=False, only=None, exclude=(), many=False, context=None, load_only=(), dump_only=(), partial=False, unknown=None)
Return type:

SchemaF[TypeVar(A, bound=DataClassJsonMixin)]

to_dict(encode_json=False)
Return type:

Dict[str, Union[dict, list, str, int, float, bool, None]]

to_json(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, indent=None, separators=None, default=None, sort_keys=False, **kw)
Return type:

str

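The from_dict/from_json/to_dict/to_json methods come from the dataclasses-json mixin, giving SegformerSettings a serialization round-trip. The stand-in below mimics a subset of the settings fields with stdlib dataclasses to show the round-trip idea; names and defaults mirror the class above, but this is not the real implementation (note that plain JSON decodes tuples back as lists, which dataclasses-json normally handles for you).

```python
# Minimal stand-in for SegformerSettings' JSON round-trip, stdlib only.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class SettingsSketch:
    dims: tuple = (32, 64, 160, 256)
    heads: tuple = (1, 2, 5, 8)
    num_layers: int = 2
    decoder_dim: int = 256

s = SettingsSketch(decoder_dim=128)
payload = json.dumps(asdict(s))          # serialize to a JSON string
restored = SettingsSketch(**json.loads(payload))  # rebuild from the dict
print(restored.decoder_dim)  # 128
```

With the real class, the equivalent calls would be settings.to_json() and SegformerSettings.from_json(payload).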
mfai.pytorch.models.segformer.cast_tuple(val, depth)[source]
Return type:

tuple[Any, ...]

Parameters:

  • val (Any)

  • depth (int)
mfai.pytorch.models.segformer.exists(val)[source]
Return type:

bool

Parameters:

val (Any | None)
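These two module helpers are small utilities for normalizing constructor arguments: exists is a None check, and cast_tuple broadcasts a scalar setting to one value per encoder stage. The reimplementation below reflects their likely behavior given the signatures above (the real definitions live in mfai.pytorch.models.segformer and may differ in detail).

```python
# Probable behavior of the module helpers, reimplemented for illustration.
from typing import Any, Optional

def exists(val: Optional[Any]) -> bool:
    # True when a value was explicitly provided.
    return val is not None

def cast_tuple(val: Any, depth: int) -> tuple:
    # Pass tuples through unchanged; repeat scalars once per stage.
    return val if isinstance(val, tuple) else (val,) * depth

print(cast_tuple(2, 4))             # (2, 2, 2, 2)
print(cast_tuple((8, 4, 2, 1), 4))  # (8, 4, 2, 1)
print(exists(None))                 # False
```

This is why settings fields such as num_layers accept either a single int or a per-stage tuple.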