namedtensor¶
A class-based NamedTensor implementation for PyTorch, inspired by the experimental PyTorch named tensors.
- class mfai.pytorch.namedtensor.NamedTensor(tensor, names, feature_names, feature_dim_name='features')[source]¶
Bases: TensorWrapper
NamedTensor is a wrapper around a torch tensor, adding several attributes:
- a ‘names’ attribute with the names of the tensor’s dimensions (like https://pytorch.org/docs/stable/named_tensor.html). Torch’s named tensors are still experimental and subject to change.
- a ‘feature_names’ attribute containing the names of the features along the last dimension of the tensor.
NamedTensors can be concatenated along the last dimension using the | operator: nt3 = nt1 | nt2.
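What the | operator does can be sketched in plain torch: concatenate along the last (features) dimension and merge the feature name lists. This is an illustrative sketch, not the library's implementation; the dimension and feature names are assumptions.

```python
import torch

# Two "named" tensors sharing dims (batch, lat, lon) but different features.
# Names and feature lists below are hypothetical, for illustration only.
nt1_tensor = torch.zeros(2, 4, 4, 3)
nt1_features = ["u", "v", "t2m"]
nt2_tensor = torch.ones(2, 4, 4, 2)
nt2_features = ["r", "tp"]

# nt3 = nt1 | nt2 amounts to concatenating along the last dimension
# and joining the feature name lists.
merged = torch.cat([nt1_tensor, nt2_tensor], dim=-1)
merged_features = nt1_features + nt2_features

print(merged.shape)      # torch.Size([2, 4, 4, 5])
print(merged_features)   # ['u', 'v', 't2m', 'r', 'tp']
```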
- SPATIAL_DIM_NAMES = ('lat', 'lon', 'ngrid')¶
- static collate_fn(batch, pad_dims=(), pad_value=0)[source]¶
Collate a list of NamedTensors into a single batched NamedTensor. Optionally pads the dimensions specified in pad_dims with pad_value.
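A minimal sketch of what collating with padding might look like in plain torch (assumed semantics, not the exact mfai implementation): pad each sample up to the batch maximum along the padded dimension, then stack along a new leading batch dimension.

```python
import torch
import torch.nn.functional as F

# Hypothetical samples with shape (lat, features); lat varies across samples.
samples = [torch.ones(3, 5), torch.ones(4, 5)]
pad_value = 0.0
max_lat = max(s.shape[0] for s in samples)

# F.pad's tuple is (last-dim left, last-dim right, 2nd-to-last left, right):
# here we pad only the end of the lat dimension.
padded = [
    F.pad(s, (0, 0, 0, max_lat - s.shape[0]), value=pad_value)
    for s in samples
]
batch = torch.stack(padded, dim=0)
print(batch.shape)   # torch.Size([2, 4, 5])
```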
- static concat(nts)[source]¶
Safely concat a list of NamedTensors along the last dimension in one shot.
- Parameters:
nts (Sequence[NamedTensor])
- static expand_to_batch_like(tensor, other)[source]¶
Create a new NamedTensor from the supplied tensor, with the same names and feature names as another NamedTensor plus an extra first dimension called ‘batch’. The supplied ‘batched’ tensor must have exactly one more dimension than other.
- Parameters:
tensor (Tensor)
other (NamedTensor)
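The contract described above can be sketched as follows (assumed semantics; the names and shapes are hypothetical): the supplied tensor carries one extra leading dimension, and the new name list is ‘batch’ prepended to the other tensor's names.

```python
import torch

# Hypothetical dimension names and shape of the reference NamedTensor `other`.
other_names = ["lat", "lon", "features"]
other_shape = (4, 4, 3)

# The supplied tensor must have exactly one more (leading) dimension.
batched = torch.zeros(8, *other_shape)
assert batched.dim() == len(other_names) + 1   # contract check

new_names = ["batch"] + other_names
print(new_names)   # ['batch', 'lat', 'lon', 'features']
```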
- flatten_(flatten_dim_name, start_dim=0, end_dim=-1)[source]¶
Flatten the underlying tensor from start_dim to end_dim. Deletes the flattened dimension names and inserts the new one.
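The underlying tensor operation is torch's flatten over a dimension range; here is a short torch-only example (the dimension names are assumptions for illustration).

```python
import torch

# Hypothetical dims: (batch, lat, lon, features).
t = torch.zeros(2, 4, 5, 3)

# Merge "lat" and "lon" into one flattened dimension, e.g. named "ngrid".
flat = t.flatten(start_dim=1, end_dim=2)
print(flat.shape)   # torch.Size([2, 20, 3])
```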
- index_select_dim(dim_name, indices)[source]¶
Return the tensor indexed along the dimension dim_name with the indices tensor. The returned tensor has the same number of dimensions as the original tensor. The indexed dimension has the same size as the length of indices; the other dimensions have the same size as in the original tensor. See https://pytorch.org/docs/stable/generated/torch.index_select.html.
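In plain torch this corresponds to torch.index_select, with a name-to-axis lookup (here a simple dict, standing in for the NamedTensor name machinery; all names are assumptions):

```python
import torch

# Hypothetical mapping from dimension names to axis indices.
names = {"batch": 0, "lat": 1, "lon": 2, "features": 3}
t = torch.arange(2 * 4 * 4 * 3, dtype=torch.float32).reshape(2, 4, 4, 3)

# Select rows 0 and 2 along the "lat" dimension; the number of dimensions
# is preserved, only the indexed dimension's size changes.
indices = torch.tensor([0, 2])
out = torch.index_select(t, dim=names["lat"], index=indices)
print(out.shape)   # torch.Size([2, 2, 4, 3])
```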
- index_select_tensor_dim(dim_name, indices)[source]¶
Same as index_select_dim but returns a plain torch.Tensor.
- static new_like(tensor, other)[source]¶
Create a new NamedTensor with the same names and feature names as another NamedTensor.
- Parameters:
tensor (Tensor)
other (NamedTensor)
- rearrange_(einops_str)[source]¶
Rearrange in place the underlying tensor dimensions using einops syntax. For now, only re-ordering of dimensions is supported.
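Since only re-ordering is supported, an einops pattern such as "batch lat lon features -> batch features lat lon" is equivalent to a permute with the names tracked alongside. The sketch below uses plain torch (no einops dependency); the names are assumptions.

```python
import torch

# Hypothetical current names and tensor.
names = ["batch", "lat", "lon", "features"]
t = torch.zeros(2, 4, 5, 3)

# Equivalent of "batch lat lon features -> batch features lat lon".
order = [0, 3, 1, 2]
t2 = t.permute(*order)
names2 = [names[i] for i in order]
print(t2.shape, names2)
# torch.Size([2, 3, 4, 5]) ['batch', 'features', 'lat', 'lon']
```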
- select_dim(dim_name, index)[source]¶
Return the tensor indexed along the dimension dim_name with the index index. The given dimension is removed from the tensor. See https://pytorch.org/docs/stable/generated/torch.select.html.
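Unlike index_select, torch.select removes the indexed dimension entirely; a quick torch-only illustration (dimension names assumed):

```python
import torch

# Hypothetical dims: (batch, lat, lon, features).
t = torch.zeros(2, 4, 5, 3)

# Select index 0 along the "lat" axis (dim 1); that dimension disappears.
out = t.select(1, 0)
print(out.shape)   # torch.Size([2, 5, 3])
```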
- select_tensor_dim(dim_name, index)[source]¶
Same as select_dim but returns a Tensor. Allows the selection of the feature dimension.
- squeeze_(dim_name)[source]¶
Squeeze the underlying tensor along the dimension(s) given its/their name(s).
- static stack(nts, dim_name, dim=0)[source]¶
Stack a list of NamedTensors along a new dimension.
- Parameters:
nts (Sequence[NamedTensor])
dim_name (str)
dim (int)
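The underlying operation is torch.stack: n tensors of identical shape gain a new dimension of size n at position dim, where the new dim_name would be inserted. A short torch-only example (shapes assumed):

```python
import torch

# Five hypothetical samples of shape (lat, features).
nts = [torch.zeros(4, 3) for _ in range(5)]

# Stack along a new leading dimension (e.g. named "batch").
stacked = torch.stack(nts, dim=0)
print(stacked.shape)   # torch.Size([5, 4, 3])
```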
- to_(*args, **kwargs)[source]¶
‘In place’ operation to call torch’s ‘to’ method on the underlying tensor.
- type_(new_type)[source]¶
Modify the type of the underlying torch tensor by calling torch’s .type method.
in_place operation for this class, the internal tensor is replaced by the new one.
- unflatten_(dim, unflattened_size, unflatten_dim_name)[source]¶
Unflatten the dimension dim of the underlying tensor, inserting dimensions of sizes unflattened_size in its place.
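The tensor-level operation is torch's unflatten, the inverse of flatten over a dimension range (dimension names assumed for illustration):

```python
import torch

# Hypothetical dims: (batch, ngrid, features) with ngrid = 4 * 5.
t = torch.zeros(2, 20, 3)

# Replace dim 1 by two dims of sizes (4, 5), e.g. back to ("lat", "lon").
out = t.unflatten(1, (4, 5))
print(out.shape)   # torch.Size([2, 4, 5, 3])
```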
- unsqueeze_and_expand_from_(other)[source]¶
Unsqueeze and expand the tensor to have the same number of spatial dimensions as another NamedTensor. Injects new dimensions where the missing names are.
- Parameters:
other (NamedTensor)
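A sketch of the assumed semantics in plain torch: unsqueeze injects size-1 dimensions where the spatial names are missing, then expand broadcasts them to the other tensor's sizes. Names and shapes below are hypothetical.

```python
import torch

# A per-feature tensor with dims (batch, features), missing "lat"/"lon".
t = torch.ones(2, 3)

# Hypothetical shape of `other`: (batch, lat, lon, features).
other_shape = (2, 4, 5, 3)

# Inject the missing spatial dims, then expand to the target sizes
# (expand only broadcasts size-1 dims, so no data is copied).
expanded = t.unsqueeze(1).unsqueeze(2).expand(other_shape)
print(expanded.shape)   # torch.Size([2, 4, 5, 3])
```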
- class mfai.pytorch.namedtensor.TensorWrapper(tensor)[source]¶
Bases: object
Wrapper around a torch tensor. We use a separate dataclass to allow Lightning’s introspection to see our batch size and move our tensors to the right device; otherwise we get this error/warning: “Trying to infer the batch_size from an ambiguous collection …”.
- Parameters:
tensor (Tensor)