vit

ViT adapted from lucidrains' repo https://github.com/lucidrains/vit-pytorch, with an added multi-token output for multimodal LLMs.

class mfai.pytorch.models.vit.Attention(dim, heads=8, dim_head=64, dropout=0.0)[source]

Bases: Module

Parameters:

dim (int)
heads (int)
dim_head (int)
dropout (float)

forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

x (Tensor)

Return type:

Tensor
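
A minimal usage sketch, assuming this Attention follows the lucidrains vit-pytorch convention of operating on a token sequence of shape (batch, tokens, dim):

import torch
from mfai.pytorch.models.vit import Attention

attn = Attention(dim=768, heads=8, dim_head=64, dropout=0.0)
x = torch.randn(2, 65, 768)  # (batch, tokens, dim)
y = attn(x)                  # same shape as the input: (2, 65, 768)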

class mfai.pytorch.models.vit.FeedForward(dim, hidden_dim, dropout=0.0)[source]

Bases: Module

Parameters:

dim (int)
hidden_dim (int)
dropout (float)

forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

x (Tensor)

Return type:

Tensor
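
A sketch along the same lines; hidden_dim is presumably the width of the inner MLP layer, while input and output keep the embedding dimension dim:

import torch
from mfai.pytorch.models.vit import FeedForward

ff = FeedForward(dim=768, hidden_dim=2048, dropout=0.0)
x = torch.randn(2, 65, 768)  # (batch, tokens, dim)
y = ff(x)                    # (2, 65, 768), dim is preserved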

class mfai.pytorch.models.vit.Transformer(dim, depth, heads, dim_head, mlp_dim, dropout=0.0)[source]

Bases: Module

Parameters:

dim (int)
depth (int)
heads (int)
dim_head (int)
mlp_dim (int)
dropout (float)

forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

x (Tensor)

Return type:

Tensor
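
A sketch stacking depth attention + feed-forward blocks over a token sequence (shape convention assumed as above):

import torch
from mfai.pytorch.models.vit import Transformer

encoder = Transformer(dim=768, depth=6, heads=16, dim_head=64, mlp_dim=2048, dropout=0.1)
tokens = torch.randn(2, 65, 768)
out = encoder(tokens)  # (2, 65, 768)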

class mfai.pytorch.models.vit.ViTClassifier(in_channels, out_channels, input_shape, settings=ViTClassifierSettings(patch_size=None, emb_dim=768, n_heads=16, n_layers=6, mlp_dim=2048, transformer_dropout=0.1, emb_dropout=0.1, autopad_enabled=False, pool='cls'))[source]

Bases: BaseModel, VitPaddingMixin

Vision Transformer (ViT) classifier model outputting class probabilities per input sample. This is a global image/sample classifier, NOT a per-pixel/grid classifier.

Parameters:

in_channels (int)
out_channels (int)
input_shape (tuple[int, int])
settings (ViTClassifierSettings)

features_last: bool = False
forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

x (Tensor)

Return type:

Tensor

model_type: ModelType = 3
num_spatial_dims: int = 2
onnx_supported: bool = False
register: bool = False
property settings: ViTClassifierSettings

Returns the settings instance used to configure this model.

settings_kls

alias of ViTClassifierSettings

supported_num_spatial_dims: tuple[int, ...] = (2,)
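
A minimal sketch of classifier usage. The patch_size value and the reading of out_channels as the number of classes are assumptions; patch_size must divide the input shape unless autopad_enabled is set:

import torch
from mfai.pytorch.models.vit import ViTClassifier, ViTClassifierSettings

settings = ViTClassifierSettings(patch_size=16, pool="cls")
model = ViTClassifier(in_channels=3, out_channels=10, input_shape=(64, 64), settings=settings)
x = torch.randn(4, 3, 64, 64)  # (B, C, H, W)
logits = model(x)              # one score vector per sample, assumed (4, 10)
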
class mfai.pytorch.models.vit.ViTClassifierSettings(patch_size=None, emb_dim=768, n_heads=16, n_layers=6, mlp_dim=2048, transformer_dropout=0.1, emb_dropout=0.1, autopad_enabled=False, pool='cls')[source]

Bases: ViTEncoderSettings

Parameters:

patch_size (None | tuple[int, int] | int)
emb_dim (int)
n_heads (int)
n_layers (int)
mlp_dim (int)
transformer_dropout (float)
emb_dropout (float)
autopad_enabled (bool)
pool (Literal['cls', 'mean'])

classmethod from_dict(kvs, *, infer_missing=False)
Return type:

TypeVar(A, bound=DataClassJsonMixin)

Parameters:

kvs (dict | list | str | int | float | bool | None)

classmethod from_json(s, *, parse_float=None, parse_int=None, parse_constant=None, infer_missing=False, **kw)
Return type:

TypeVar(A, bound=DataClassJsonMixin)

Parameters:

s (str | bytes | bytearray)

pool: Literal['cls', 'mean'] = 'cls'
classmethod schema(*, infer_missing=False, only=None, exclude=(), many=False, context=None, load_only=(), dump_only=(), partial=False, unknown=None)
Return type:

SchemaF[TypeVar(A, bound=DataClassJsonMixin)]

to_dict(encode_json=False)
Return type:

Dict[str, Union[dict, list, str, int, float, bool, None]]

to_json(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, indent=None, separators=None, default=None, sort_keys=False, **kw)
Return type:

str
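
Since the settings class carries dataclasses-json helpers, a plain dict (e.g. parsed from a config file) can be turned into a settings instance; a small sketch, where the dict content is hypothetical:

from mfai.pytorch.models.vit import ViTClassifierSettings

cfg = {"patch_size": 16, "pool": "mean"}  # hypothetical config dict
settings = ViTClassifierSettings.from_dict(cfg)  # remaining fields keep their defaults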

class mfai.pytorch.models.vit.ViTCore(*, image_size, patch_size, emb_dim, n_layers, n_heads, mlp_dim, n_input_channels=3, dim_head=64, transformer_dropout=0.0, emb_dropout=0.0, raise_on_size=False)[source]

Bases: Module

Core ViT implementation without any classification or specific head.

Parameters:

image_size (tuple[int, int] | int)
patch_size (tuple[int, int] | int)
emb_dim (int)
n_layers (int)
n_heads (int)
mlp_dim (int)
n_input_channels (int)
dim_head (int)
transformer_dropout (float)
emb_dropout (float)
raise_on_size (bool)

forward(img)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

img (Tensor)

Return type:

Tensor
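
A sketch of the head-less core; the kwargs mirror the signature above, and the output is assumed to be the full token sequence (patch tokens plus a cls token) at emb_dim width:

import torch
from mfai.pytorch.models.vit import ViTCore

core = ViTCore(
    image_size=64, patch_size=16, emb_dim=768,
    n_layers=6, n_heads=16, mlp_dim=2048, n_input_channels=3,
)
img = torch.randn(2, 3, 64, 64)
tokens = core(img)  # assumed (2, 17, 768): 4*4 patch tokens + 1 cls token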

class mfai.pytorch.models.vit.ViTEncoderSettings(patch_size=None, emb_dim=768, n_heads=16, n_layers=6, mlp_dim=2048, transformer_dropout=0.1, emb_dropout=0.1, autopad_enabled=False)[source]

Bases: object

Parameters:

patch_size (None | tuple[int, int] | int)
emb_dim (int)
n_heads (int)
n_layers (int)
mlp_dim (int)
transformer_dropout (float)
emb_dropout (float)
autopad_enabled (bool)

autopad_enabled: bool = False
emb_dim: int = 768
emb_dropout: float = 0.1
classmethod from_dict(kvs, *, infer_missing=False)
Return type:

TypeVar(A, bound=DataClassJsonMixin)

Parameters:

kvs (dict | list | str | int | float | bool | None)

classmethod from_json(s, *, parse_float=None, parse_int=None, parse_constant=None, infer_missing=False, **kw)
Return type:

TypeVar(A, bound=DataClassJsonMixin)

Parameters:

s (str | bytes | bytearray)

mlp_dim: int = 2048
n_heads: int = 16
n_layers: int = 6
patch_size: None | tuple[int, int] | int = None
classmethod schema(*, infer_missing=False, only=None, exclude=(), many=False, context=None, load_only=(), dump_only=(), partial=False, unknown=None)
Return type:

SchemaF[TypeVar(A, bound=DataClassJsonMixin)]

to_dict(encode_json=False)
Return type:

Dict[str, Union[dict, list, str, int, float, bool, None]]

to_json(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, indent=None, separators=None, default=None, sort_keys=False, **kw)
Return type:

str

transformer_dropout: float = 0.1
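
Since the settings mix in dataclasses-json helpers (from_dict, from_json, to_dict, to_json), configurations round-trip through JSON; a small sketch:

from mfai.pytorch.models.vit import ViTEncoderSettings

settings = ViTEncoderSettings(patch_size=16, emb_dim=512)
payload = settings.to_json()                       # JSON string
restored = ViTEncoderSettings.from_json(payload)   # back to a dataclass
assert restored == settings
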
class mfai.pytorch.models.vit.VitEncoder(in_channels, out_channels, input_shape=(64, 64), settings=ViTEncoderSettings(patch_size=None, emb_dim=768, n_heads=16, n_layers=6, mlp_dim=2048, transformer_dropout=0.1, emb_dropout=0.1, autopad_enabled=False))[source]

Bases: BaseModel, VitPaddingMixin

ViT vision encoder for multimodal LLMs. The number of output tokens is equal to the number of patches + 1.

Parameters:

in_channels (int)
out_channels (int)
input_shape (tuple[int, int])
settings (ViTEncoderSettings)

features_last: bool = False
forward(x)[source]

Forward function of the ViT vision encoder.

Parameters:

x (Tensor) – tensor of shape (B, features, height, width)

Returns:

tensor of shape (B, n_patches_h * n_patches_w + 1, embed_dim)

Return type:

Tensor

model_type: ModelType = 3
num_spatial_dims: int = 2
onnx_supported: bool = False
register: bool = False
property settings: ViTEncoderSettings

Returns the settings instance used to configure this model.

settings_kls

alias of ViTEncoderSettings

supported_num_spatial_dims: tuple[int, ...] = (2,)
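
A sketch of the multimodal-LLM use case from the module docstring: images go in as (B, features, height, width) and come out as a token sequence of length n_patches + 1. The out_channels value here is an assumption; the embedding width of the output comes from settings.emb_dim:

import torch
from mfai.pytorch.models.vit import VitEncoder, ViTEncoderSettings

settings = ViTEncoderSettings(patch_size=16)  # emb_dim defaults to 768
encoder = VitEncoder(in_channels=3, out_channels=768, input_shape=(64, 64), settings=settings)
x = torch.randn(2, 3, 64, 64)  # (B, features, height, width)
tokens = encoder(x)            # (2, 17, 768): (64/16)*(64/16) patches + 1
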
class mfai.pytorch.models.vit.VitPaddingMixin[source]

Bases: AutoPaddingModel

Mixin implementing the padding logic for ViT models.

validate_input_shape(input_shape)[source]

Checks whether the input shape is divisible by the patch size and returns the new shape if padding is required.

Return type:

tuple[bool, Size]

Parameters:

input_shape (Size)
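
A sketch of the check, called on any model that mixes this in; whether construction accepts a non-divisible input_shape when autopad_enabled is set is an assumption:

import torch
from mfai.pytorch.models.vit import VitEncoder, ViTEncoderSettings

settings = ViTEncoderSettings(patch_size=16, autopad_enabled=True)
encoder = VitEncoder(in_channels=3, out_channels=768, input_shape=(60, 60), settings=settings)
ok, new_shape = encoder.validate_input_shape(torch.Size([60, 60]))
# ok is False since 60 is not divisible by 16; new_shape is the
# padded shape the model would work with instead (presumably 64x64)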

mfai.pytorch.models.vit.pair(t)[source]
Return type:

Size | tuple[int, int]

Parameters:

t (Size | tuple[int, int] | int)
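
pair is presumably the usual lucidrains helper that broadcasts a scalar to a 2-tuple and passes tuples through unchanged:

from mfai.pytorch.models.vit import pair

pair(16)        # -> (16, 16)
pair((32, 64))  # -> (32, 64)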