llama2

PyTorch implementation of Llama2, largely inspired by Sebastian Raschka’s book and work: https://github.com/rasbt/LLMs-from-scratch/.

class mfai.pytorch.models.llms.llama2.FeedForwardLlama2(emb_dim, hidden_dim, dtype=None)[source]

Bases: Module

Parameters:
  • emb_dim (int)

  • hidden_dim (int)

  • dtype (torch.dtype | None)
forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Parameters:

x (Tensor)

Return type:

Tensor
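Llama 2 uses a SiLU-gated (SwiGLU-style) feed-forward rather than a plain two-layer MLP. A minimal sketch of that pattern, assuming standard SwiGLU wiring (layer names are illustrative, not necessarily those used by mfai’s FeedForwardLlama2):

```python
import torch
from torch import nn
import torch.nn.functional as F

class FeedForwardSketch(nn.Module):
    """Sketch of a Llama-2-style gated feed-forward block (SwiGLU)."""
    def __init__(self, emb_dim: int, hidden_dim: int) -> None:
        super().__init__()
        self.fc_gate = nn.Linear(emb_dim, hidden_dim, bias=False)  # gate projection
        self.fc_up = nn.Linear(emb_dim, hidden_dim, bias=False)    # up projection
        self.fc_down = nn.Linear(hidden_dim, emb_dim, bias=False)  # down projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: silu(gate) * up, then project back down to emb_dim
        return self.fc_down(F.silu(self.fc_gate(x)) * self.fc_up(x))

x = torch.randn(2, 8, 256)             # (batch, seq, emb_dim)
y = FeedForwardSketch(256, 768)(x)
print(y.shape)                         # torch.Size([2, 8, 256])
```

The shape of the input is preserved; only the hidden expansion differs (here 256 → 768 → 256, matching the module’s default emb_dim and hidden_dim).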

class mfai.pytorch.models.llms.llama2.Llama2(settings, vocab_size=32000)[source]

Bases: Module

Llama2 implementation, based on Sebastian Raschka’s book and GitHub repo: https://github.com/rasbt/LLMs-from-scratch/.

Parameters:
  • settings (Llama2Settings)

  • vocab_size (int)
embed_tokens(tok_ids)[source]
Return type:

Tensor

Parameters:

tok_ids (Tensor)

forward(tok_ids)[source]


Parameters:

tok_ids (Tensor)

Return type:

Tensor

forward_vectors(embeddings, first_embedding=None)[source]

Process a batch of embeddings through the model. If first_embedding is supplied, the first token of each block is replaced by the corresponding embedding. Useful for multimodal models that inject vision data at each stage.

Return type:

Tensor

Parameters:
  • embeddings (Tensor)

  • first_embedding (Tensor | None)

model_type = 4
settings_kls

alias of Llama2Settings
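The overall forward pass of a Llama-2-style decoder (token embedding → stack of transformer blocks → final norm → output projection) can be sketched as follows. This is a simplified stand-in, not mfai’s implementation: the real blocks are TransformerBlockLlama2 instances and the final norm is RMSNorm.

```python
import torch
from torch import nn

class DecoderSketch(nn.Module):
    """Simplified sketch of a Llama2-style forward pass (names illustrative)."""
    def __init__(self, vocab_size: int = 1000, emb_dim: int = 64, n_layers: int = 2) -> None:
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb_dim)
        # stand-ins for the stack of TransformerBlockLlama2 layers
        self.blocks = nn.Sequential(*[nn.Linear(emb_dim, emb_dim) for _ in range(n_layers)])
        self.final_norm = nn.LayerNorm(emb_dim)  # stand-in for RMSNorm
        self.out_head = nn.Linear(emb_dim, vocab_size, bias=False)

    def forward(self, tok_ids: torch.Tensor) -> torch.Tensor:
        x = self.tok_emb(tok_ids)      # (batch, seq) -> (batch, seq, emb_dim)
        x = self.blocks(x)
        return self.out_head(self.final_norm(x))  # (batch, seq, vocab_size)

tok_ids = torch.randint(0, 1000, (2, 16))  # a batch of 2 sequences of 16 token ids
logits = DecoderSketch()(tok_ids)
print(logits.shape)                        # torch.Size([2, 16, 1000])
```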

class mfai.pytorch.models.llms.llama2.Llama2Settings(emb_dim=256, context_length=512, n_heads=4, n_layers=4, hidden_dim=768)[source]

Bases: object

Parameters:
  • emb_dim (int)

  • context_length (int)

  • n_heads (int)

  • n_layers (int)

  • hidden_dim (int)

context_length: int
emb_dim: int
classmethod from_dict(kvs, *, infer_missing=False)
Return type:

TypeVar(A, bound= DataClassJsonMixin)

Parameters:

kvs (dict | list | str | int | float | bool | None)

classmethod from_json(s, *, parse_float=None, parse_int=None, parse_constant=None, infer_missing=False, **kw)
Return type:

TypeVar(A, bound= DataClassJsonMixin)

Parameters:

s (str | bytes | bytearray)

hidden_dim: int
n_heads: int
n_layers: int
classmethod schema(*, infer_missing=False, only=None, exclude=(), many=False, context=None, load_only=(), dump_only=(), partial=False, unknown=None)
Return type:

SchemaF[TypeVar(A, bound= DataClassJsonMixin)]

to_dict(encode_json=False)
Return type:

Dict[str, Union[dict, list, str, int, float, bool, None]]

to_json(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, indent=None, separators=None, default=None, sort_keys=False, **kw)
Return type:

str
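The from_dict / from_json / to_dict / to_json helpers come from the dataclasses_json mixin; a plain-stdlib sketch of the equivalent round-trip behavior, using an illustrative stand-in dataclass with the same fields and defaults:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SettingsSketch:
    """Stand-in for Llama2Settings (same fields and defaults)."""
    emb_dim: int = 256
    context_length: int = 512
    n_heads: int = 4
    n_layers: int = 4
    hidden_dim: int = 768

s = SettingsSketch(emb_dim=64)
d = asdict(s)                           # ~ to_dict()
s2 = SettingsSketch(**d)                # ~ from_dict()
j = json.dumps(d)                       # ~ to_json()
s3 = SettingsSketch(**json.loads(j))    # ~ from_json()
assert s == s2 == s3
```

This makes the settings easy to persist alongside checkpoints and reload later.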

class mfai.pytorch.models.llms.llama2.MultiHeadAttentionPySDPALlama2(d_in, d_out, num_heads, context_length, dtype=None)[source]

Bases: Module

Multi-head attention using PyTorch’s scaled_dot_product_attention.

Parameters:
  • d_in (int)

  • d_out (int)

  • num_heads (int)

  • context_length (int)

  • dtype (torch.dtype | None)

cos: Tensor
forward(x)[source]


Parameters:

x (Tensor)

Return type:

Tensor

sin: Tensor
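The attention core delegates to PyTorch’s torch.nn.functional.scaled_dot_product_attention with a causal mask. A minimal sketch of that call (input/output projections and RoPE, which the module also performs, are omitted):

```python
import torch
import torch.nn.functional as F

# Per-head query/key/value tensors: (batch, num_heads, seq_len, head_dim)
batch, num_heads, seq_len, head_dim = 2, 4, 16, 32
q = torch.randn(batch, num_heads, seq_len, head_dim)
k = torch.randn(batch, num_heads, seq_len, head_dim)
v = torch.randn(batch, num_heads, seq_len, head_dim)

# is_causal=True applies the lower-triangular mask needed for autoregressive decoding
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 4, 16, 32])
```

scaled_dot_product_attention picks the most efficient backend available (e.g. FlashAttention) automatically, which is what makes this variant fast.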
class mfai.pytorch.models.llms.llama2.RMSNorm(emb_dim, eps=1e-05)[source]

Bases: Module

Parameters:
  • emb_dim (int)

  • eps (float)

forward(x)[source]


Parameters:

x (Tensor)

Return type:

Tensor
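RMSNorm rescales each vector by its root-mean-square over the embedding dimension and applies a learned per-feature scale; unlike LayerNorm, it does not subtract the mean. A functional sketch of the standard formulation:

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Root-mean-square over the last (embedding) dimension;
    # eps guards against division by zero.
    rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    return (x / rms) * weight  # learned per-feature scale, no mean-centering

x = torch.randn(2, 8, 256)
y = rms_norm(x, torch.ones(256))  # shape preserved: (2, 8, 256)
```

An all-ones input already has unit RMS, so it passes through (almost) unchanged, which is a handy sanity check.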

class mfai.pytorch.models.llms.llama2.SiLU[source]

Bases: Module

forward(x)[source]


Parameters:

x (Tensor)

Return type:

Tensor
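SiLU (also called swish) is simply x · sigmoid(x); a pure-Python sketch of the scalar function:

```python
import math

def silu(x: float) -> float:
    # SiLU / swish: x * sigmoid(x) = x / (1 + exp(-x))
    return x / (1.0 + math.exp(-x))

print(silu(0.0))  # 0.0
```

It behaves like a smooth ReLU: near-zero for large negative inputs, approximately identity for large positive inputs.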

class mfai.pytorch.models.llms.llama2.TransformerBlockLlama2(settings)[source]

Bases: Module

A transformer block, based on Sebastian Raschka’s book and GitHub repo: https://github.com/rasbt/LLMs-from-scratch/.

  • The attention used is based on PyTorch’s scaled_dot_product_attention (the most efficient MultiHeadAttention variant according to S. Raschka’s benchmark: https://github.com/rasbt/LLMs-from-scratch/tree/main/ch03/02_bonus_efficient-multihead-attention).

Parameters:

settings (Llama2Settings)

forward(x)[source]


Parameters:

x (Tensor)

Return type:

Tensor
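Llama-2-style blocks use pre-norm residual wiring: each sub-layer normalizes its input, transforms it, and adds the result back to the residual stream. A sketch of that wiring with stand-in sub-modules (the real block composes MultiHeadAttentionPySDPALlama2, FeedForwardLlama2 and RMSNorm):

```python
import torch
from torch import nn

class BlockSketch(nn.Module):
    """Pre-norm residual wiring typical of Llama-2 blocks (stand-in sub-modules)."""
    def __init__(self, emb_dim: int) -> None:
        super().__init__()
        self.norm1 = nn.LayerNorm(emb_dim)       # stand-in for RMSNorm
        self.attn = nn.Linear(emb_dim, emb_dim)  # stand-in for multi-head attention
        self.norm2 = nn.LayerNorm(emb_dim)       # stand-in for RMSNorm
        self.ff = nn.Linear(emb_dim, emb_dim)    # stand-in for the feed-forward block

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.attn(self.norm1(x))  # attention sub-layer + residual
        x = x + self.ff(self.norm2(x))    # feed-forward sub-layer + residual
        return x

x = torch.randn(2, 16, 64)
y = BlockSketch(64)(x)  # shape preserved: (2, 16, 64)
```

Because every sub-layer is additive, blocks can be stacked n_layers deep without changing tensor shapes.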

mfai.pytorch.models.llms.llama2.compute_rope(x, cos, sin)[source]
Return type:

Tensor

Parameters:
  • x (Tensor)

  • cos (Tensor)

  • sin (Tensor)

mfai.pytorch.models.llms.llama2.precompute_rope_params(head_dim, theta_base=10000, context_length=4096)[source]
Return type:

tuple[Tensor, Tensor]

Parameters:
  • head_dim (int)

  • theta_base (int)

  • context_length (int)
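Rotary position embeddings (RoPE) precompute per-position cosine and sine tables, then rotate query/key vectors pairwise. A sketch of the standard "rotate-half" formulation these two helpers typically implement (details may differ from mfai’s exact code):

```python
import torch

def precompute_rope(head_dim: int, theta_base: int = 10000,
                    context_length: int = 4096) -> tuple[torch.Tensor, torch.Tensor]:
    # One inverse frequency per pair of dimensions
    inv_freq = 1.0 / (theta_base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    pos = torch.arange(context_length).float()
    angles = torch.outer(pos, inv_freq)           # (context_length, head_dim / 2)
    angles = torch.cat([angles, angles], dim=-1)  # (context_length, head_dim)
    return torch.cos(angles), torch.sin(angles)

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor) -> torch.Tensor:
    # x: (..., seq_len, head_dim); rotate-half formulation
    seq_len, head_dim = x.shape[-2], x.shape[-1]
    x1, x2 = x[..., : head_dim // 2], x[..., head_dim // 2 :]
    rotated = torch.cat([-x2, x1], dim=-1)
    return x * cos[:seq_len] + rotated * sin[:seq_len]

cos, sin = precompute_rope(64, context_length=32)
x = torch.randn(2, 4, 16, 64)  # (batch, num_heads, seq_len, head_dim)
y = apply_rope(x, cos, sin)    # same shape as x
```

At position 0 all angles are zero (cos = 1, sin = 0), so the first position passes through unchanged; later positions are rotated by position-dependent angles, which is what encodes relative order.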