clip¶
Implementation of the CLIP (Contrastive Language-Image Pre-training) model, based on the original paper: https://arxiv.org/abs/2103.00020.
- class mfai.pytorch.models.clip.Clip(settings)[source]¶
Bases: Module
Implementation of the CLIP (Contrastive Language-Image Pre-training) model, based on the original article from OpenAI: https://arxiv.org/abs/2103.00020
- Parameters:
settings (ClipSettings)
- forward(text_tokens, image_input)[source]¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters:
text_tokens (Tensor)
image_input (NamedTensor)
- Return type:
Tuple[Tensor, Tensor]
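Putting the two entry points together, a minimal usage sketch follows. The concrete encoder classes that ClipSettings accepts, and the construction of the NamedTensor image batch, are defined elsewhere in mfai; the placeholders below are assumptions, not part of this reference.

```python
import torch

from mfai.pytorch.models.clip import Clip, ClipSettings

# Placeholder encoders: the concrete image/text encoder classes expected by
# ClipSettings are defined elsewhere in mfai and are assumptions here.
image_encoder = ...  # assumption: an mfai image encoder module
text_encoder = ...   # assumption: an mfai text encoder module

settings = ClipSettings(
    image_encoder=image_encoder,
    text_encoder=text_encoder,
    emb_dim=1024,               # default joint embedding dimension
    init_temperature=1 / 0.07,  # default (~14.2857), the value used in the CLIP paper
)
model = Clip(settings)

# Call the module itself (not .forward()) so registered hooks run.
# text_tokens is a torch.Tensor of token ids; image_input is an mfai
# NamedTensor holding the image batch. forward() returns a
# Tuple[Tensor, Tensor] of similarity logits between the two batches.
# logits_a, logits_b = model(text_tokens, image_input)
```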
- class mfai.pytorch.models.clip.ClipSettings(image_encoder, text_encoder, emb_dim=1024, init_temperature=14.285714285714285)[source]¶
Bases: object
- Parameters:
image_encoder
text_encoder
emb_dim (int)
init_temperature (float)
- classmethod from_dict(kvs, *, infer_missing=False)¶
- classmethod from_json(s, *, parse_float=None, parse_int=None, parse_constant=None, infer_missing=False, **kw)¶
- classmethod schema(*, infer_missing=False, only=None, exclude=(), many=False, context=None, load_only=(), dump_only=(), partial=False, unknown=None)¶
- to_json(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, indent=None, separators=None, default=None, sort_keys=False, **kw)¶
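ClipSettings mixes in the dataclasses-json helpers listed above, so a settings object can in principle round-trip through dicts or JSON strings. A minimal sketch, reusing the settings object from the example above; whether the encoder fields serialize cleanly depends on their concrete types, which this page does not specify.

```python
from mfai.pytorch.models.clip import ClipSettings

# Serialize settings to a JSON string, then rebuild an equivalent object.
# Fields holding encoder modules may not serialize cleanly; this sketches
# the mixin API surface rather than guaranteeing a lossless round-trip.
payload = settings.to_json()
restored = ClipSettings.from_json(payload)
```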