vit
ViT adapted from lucidrains' repo https://github.com/lucidrains/vit-pytorch, with an added multi-token output for multimodal LLMs.
- class mfai.pytorch.models.vit.Attention(dim, heads=8, dim_head=64, dropout=0.0)[source]
  Bases: Module
  - forward(x)[source]
    Define the computation performed at every call.
    Should be overridden by all subclasses.
    Return type: Tensor
    Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class mfai.pytorch.models.vit.FeedForward(dim, hidden_dim, dropout=0.0)[source]
  Bases: Module
  - forward(x)[source]
    Define the computation performed at every call.
    Should be overridden by all subclasses.
    Return type: Tensor
    Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class mfai.pytorch.models.vit.Transformer(dim, depth, heads, dim_head, mlp_dim, dropout=0.0)[source]
  Bases: Module
  - forward(x)[source]
    Define the computation performed at every call.
    Should be overridden by all subclasses.
    Return type: Tensor
    Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
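Since the forward docstrings above are inherited defaults from torch.nn.Module, here is a short usage sketch of the Transformer block, which stacks the Attention and FeedForward modules. The (B, n_tokens, dim) shape convention is an assumption carried over from the lucidrains vit-pytorch lineage, and all values are illustrative:

    import torch
    from mfai.pytorch.models.vit import Transformer

    # 4-layer transformer over 256-dim tokens (illustrative values).
    block = Transformer(dim=256, depth=4, heads=8, dim_head=64, mlp_dim=512)

    tokens = torch.randn(2, 17, 256)  # (B, n_tokens, dim), e.g. 16 patches + 1 CLS token
    out = block(tokens)               # expected to preserve the shape: (2, 17, 256)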
- class mfai.pytorch.models.vit.ViTClassifier(in_channels, out_channels, input_shape, settings=ViTClassifierSettings(patch_size=None, emb_dim=768, n_heads=16, n_layers=6, mlp_dim=2048, transformer_dropout=0.1, emb_dropout=0.1, autopad_enabled=False, pool='cls'))[source]
  Bases: BaseModel, VitPaddingMixin
  Vision Transformer (ViT) classifier model outputting class probabilities per input sample. This is NOT a per-pixel/grid classifier, but a global image/sample classifier.
  - Parameters:
  - forward(x)[source]
    Define the computation performed at every call.
    Should be overridden by all subclasses.
    Return type: Tensor
    Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
  - property settings: ViTClassifierSettings
    Returns the settings instance used to configure this model.
  - settings_kls
    alias of ViTClassifierSettings
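A minimal usage sketch (not from the source docs): all values are illustrative, and patch_size=16 is an assumption that must tile the input unless autopad_enabled=True is set:

    import torch
    from mfai.pytorch.models.vit import ViTClassifier, ViTClassifierSettings

    # Global 10-way classifier over 3-channel 64x64 inputs (illustrative values).
    model = ViTClassifier(
        in_channels=3,
        out_channels=10,          # number of classes
        input_shape=(64, 64),
        settings=ViTClassifierSettings(patch_size=16),
    )

    x = torch.randn(4, 3, 64, 64)  # (B, C, H, W)
    probs = model(x)               # one prediction per sample, not per pixel
    print(probs.shape)             # expected: (4, 10)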
- class mfai.pytorch.models.vit.ViTClassifierSettings(patch_size=None, emb_dim=768, n_heads=16, n_layers=6, mlp_dim=2048, transformer_dropout=0.1, emb_dropout=0.1, autopad_enabled=False, pool='cls')[source]
  Bases: ViTEncoderSettings
  - Parameters:
  - classmethod from_dict(kvs, *, infer_missing=False)
  - classmethod from_json(s, *, parse_float=None, parse_int=None, parse_constant=None, infer_missing=False, **kw)
  - classmethod schema(*, infer_missing=False, only=None, exclude=(), many=False, context=None, load_only=(), dump_only=(), partial=False, unknown=None)
  - to_json(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, indent=None, separators=None, default=None, sort_keys=False, **kw)
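The from_dict/from_json/schema/to_json helpers match the dataclasses-json mixin API; assuming that lineage, a minimal serialization roundtrip looks like this (values illustrative):

    from mfai.pytorch.models.vit import ViTClassifierSettings

    settings = ViTClassifierSettings(patch_size=8, emb_dim=512)
    payload = settings.to_json()                         # JSON string
    restored = ViTClassifierSettings.from_json(payload)
    assert restored == settings                          # dataclass field-wise equality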
- class mfai.pytorch.models.vit.ViTCore(*, image_size, patch_size, emb_dim, n_layers, n_heads, mlp_dim, n_input_channels=3, dim_head=64, transformer_dropout=0.0, emb_dropout=0.0, raise_on_size=False)[source]
  Bases: Module
  Core ViT implementation without any classification or other task-specific head.
  - Parameters:
  - forward(img)[source]
    Define the computation performed at every call.
    Should be overridden by all subclasses.
    Return type: Tensor
    Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
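A minimal sketch of driving the headless core directly. Both the (height, width) form of image_size and the nature of the returned features are assumptions to verify against the source:

    import torch
    from mfai.pytorch.models.vit import ViTCore

    # Headless core: patch embedding + transformer, no classification head.
    core = ViTCore(
        image_size=(64, 64),   # assumption: a (H, W) pair
        patch_size=16,
        emb_dim=256,
        n_layers=4,
        n_heads=8,
        mlp_dim=512,
    )

    img = torch.randn(1, 3, 64, 64)  # n_input_channels defaults to 3
    feats = core(img)                # headless features; exact shape depends on the implementation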
- class mfai.pytorch.models.vit.ViTEncoderSettings(patch_size=None, emb_dim=768, n_heads=16, n_layers=6, mlp_dim=2048, transformer_dropout=0.1, emb_dropout=0.1, autopad_enabled=False)[source]
  Bases: object
  - Parameters:
  - classmethod from_dict(kvs, *, infer_missing=False)
  - classmethod from_json(s, *, parse_float=None, parse_int=None, parse_constant=None, infer_missing=False, **kw)
  - classmethod schema(*, infer_missing=False, only=None, exclude=(), many=False, context=None, load_only=(), dump_only=(), partial=False, unknown=None)
  - to_json(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, indent=None, separators=None, default=None, sort_keys=False, **kw)
- class mfai.pytorch.models.vit.VitEncoder(in_channels, out_channels, input_shape=(64, 64), settings=ViTEncoderSettings(patch_size=None, emb_dim=768, n_heads=16, n_layers=6, mlp_dim=2048, transformer_dropout=0.1, emb_dropout=0.1, autopad_enabled=False))[source]
  Bases: BaseModel, VitPaddingMixin
  ViT vision encoder for multimodal LLMs. The number of output tokens is equal to the number of patches + 1.
  - Parameters:
  - forward(x)[source]
    Forward function of the ViT vision encoder.
    - Parameters: x (Tensor) – tensor of shape (B, features, height, width)
    - Returns: tensor of shape (B, n_patches_h * n_patches_w + 1, embed_dim)
    - Return type: Tensor
  - property settings: ViTEncoderSettings
    Returns the settings instance used to configure this model.
  - settings_kls
    alias of ViTEncoderSettings
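A sketch of the documented forward contract: with 64x64 inputs and an assumed patch_size of 16, the encoder emits 4 * 4 patch tokens plus one extra token. The role of out_channels here is an assumption to check against the source:

    import torch
    from mfai.pytorch.models.vit import VitEncoder, ViTEncoderSettings

    encoder = VitEncoder(
        in_channels=3,
        out_channels=768,      # assumption: the embedding width handed to the LLM
        input_shape=(64, 64),
        settings=ViTEncoderSettings(patch_size=16),
    )

    x = torch.randn(2, 3, 64, 64)  # (B, features, height, width)
    tokens = encoder(x)
    # n_patches_h * n_patches_w + 1 = (64 // 16) * (64 // 16) + 1 = 17 tokens
    print(tokens.shape)            # expected per the forward() docs: (2, 17, 768)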
- class mfai.pytorch.models.vit.VitPaddingMixin[source]
  Bases: AutoPaddingModel
  Mixin implementing the padding logic for ViT models.