swinunetr

class mfai.pytorch.models.swinunetr.SwinUNetR(in_channels, out_channels, input_shape, settings=SwinUNetRSettings(depths=(2, 2, 2, 2), num_heads=(3, 6, 12, 24), feature_size=24, norm_name='instance', drop_rate=0.0, attn_drop_rate=0.0, dropout_path_rate=0.0, normalize=True, use_checkpoint=False, downsample='merging', use_v2=False, autopad_enabled=False), *args, **kwargs)[source]

Bases: ModelABC, SwinUNETR, AutoPaddingModel

Wrapper around the SwinUNETR from MONAI. Currently instantiated in 2D, with a custom decoder.

Parameters:
features_last: bool = False
forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

x (Tensor)

Return type:

Tensor

model_type: ModelType = 3
num_spatial_dims: int = 2
onnx_supported: bool = False
register: bool = True
property settings: SwinUNetRSettings

Returns the settings instance used to configure this model.

settings_kls

alias of SwinUNetRSettings

supported_num_spatial_dims: tuple[int, ...] = (2,)
validate_input_shape(input_shape)[source]
Given an input shape, verifies whether the inputs fit the calling model’s specifications.

Parameters:

input_shape (Size) – The shape of the input data, excluding any batch dimension and channel dimension. For example, for a batch of 2D tensors of shape [B,C,W,H], [W,H] should be passed. For 3D data of shape [B,C,W,H,D], [W,H,D] should be passed.

Returns:

Returns a tuple where the first element is a boolean signaling whether the given input shape already fits the model’s requirements. If that value is False, the second element contains the closest shape that fits the model; otherwise it is None.

Return type:

tuple[bool, Size]
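The check above can be sketched in plain Python. Swin-style encoders halve the spatial dimensions at each stage, so each dimension must be divisible by a model-dependent factor; 32 is assumed here for illustration (the real method derives the constraint from the model’s settings):

```python
import math

def closest_fitting_shape(input_shape, factor=32):
    """Sketch of validate_input_shape: round each spatial dim up to the
    nearest multiple of `factor` and report whether any padding is needed.
    Returns (fits, closest_shape_or_None), mirroring the documented API."""
    fitted = tuple(math.ceil(d / factor) * factor for d in input_shape)
    if fitted == tuple(input_shape):
        return True, None
    return False, fitted
```

With autopad_enabled, the wrapper can use such a fitted shape to pad inputs automatically before the forward pass.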

class mfai.pytorch.models.swinunetr.SwinUNetRSettings(depths=(2, 2, 2, 2), num_heads=(3, 6, 12, 24), feature_size=24, norm_name='instance', drop_rate=0.0, attn_drop_rate=0.0, dropout_path_rate=0.0, normalize=True, use_checkpoint=False, downsample='merging', use_v2=False, autopad_enabled=False)[source]

Bases: object

Parameters:
attn_drop_rate: float
autopad_enabled: bool
depths: tuple[int, ...]
downsample: Union[Literal['merging', 'mergingv2'], Module]
drop_rate: float
dropout_path_rate: float
feature_size: int
classmethod from_dict(kvs, *, infer_missing=False)
Return type:

TypeVar(A, bound= DataClassJsonMixin)

Parameters:

kvs (dict | list | str | int | float | bool | None)

classmethod from_json(s, *, parse_float=None, parse_int=None, parse_constant=None, infer_missing=False, **kw)
Return type:

TypeVar(A, bound= DataClassJsonMixin)

Parameters:

s (str | bytes | bytearray)

monai_kwargs()[source]
Return type:

dict

norm_name: tuple | str
normalize: bool
num_heads: tuple[int, ...]
classmethod schema(*, infer_missing=False, only=None, exclude=(), many=False, context=None, load_only=(), dump_only=(), partial=False, unknown=None)
Return type:

SchemaF[TypeVar(A, bound= DataClassJsonMixin)]

to_dict(encode_json=False)
Return type:

Dict[str, Union[dict, list, str, int, float, bool, None]]

to_json(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, indent=None, separators=None, default=None, sort_keys=False, **kw)
Return type:

str

use_checkpoint: bool
use_v2: bool
class mfai.pytorch.models.swinunetr.UpsampleBlock(in_channels, out_channels, kernel_size, norm_name)[source]

Bases: Module

forward(inp, skip)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Return type:

Tensor
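The forward(inp, skip) signature follows the usual U-Net decoder pattern: upsample the coarse feature map, concatenate the encoder skip connection, then fuse with a convolution. A hedged sketch of that pattern (the class and layer choices below are illustrative, not the actual UpsampleBlock implementation):

```python
import torch
from torch import nn

class UpsampleBlockSketch(nn.Module):
    """Illustrative decoder block: upsample, concatenate the encoder skip,
    then fuse with a normalized convolution."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # Transposed conv doubles the spatial resolution.
        self.up = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=2, stride=2)
        # After concatenation the channel count doubles.
        self.conv = nn.Conv2d(
            out_channels * 2, out_channels, kernel_size, padding=kernel_size // 2
        )
        self.norm = nn.InstanceNorm2d(out_channels)

    def forward(self, inp, skip):
        x = self.up(inp)                 # [B, out, 2W, 2H]
        x = torch.cat([x, skip], dim=1)  # merge encoder features
        return torch.relu(self.norm(self.conv(x)))
```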