siamese
Module containing Siamese models.
- class MinervaSiamese(*args, **kwargs)
Abstract class for Siamese models.
- backbone
The backbone encoder for the Siamese model.
- Type:
- forward(x: Tensor) → tuple[Tensor, Tensor, Tensor, Tensor, Tensor]
Performs a forward pass of the network by using the forward methods of the backbone and feeding its output into the projection heads.
Can be called directly as a method (e.g. model.forward()) or when data is passed to the model (e.g. model()).
- Parameters:
x (Tensor) – Pair of batches of input data to the network.
- Returns:
- Return type:
- forward_pair(x: Tensor) → tuple[Tensor, Tensor, Tensor, Tensor, Tensor]
Performs a forward pass of the network by using the forward methods of the backbone and feeding its output into the projection heads.
- Parameters:
x (Tensor) – Pair of batches of input data to the network.
- Returns:
Tuple of:
- Output feature vectors concatenated together.
- Output feature vector A.
- Output feature vector B.
- Embedding, A, from the backbone.
- Embedding, B, from the backbone.
- Return type:
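The shape of that five-tuple can be sketched with NumPy. The `backbone` and `proj_head` below are illustrative stand-ins (a global-average-pool and a fixed linear map), not the library's actual modules; only the return structure mirrors the documented behaviour:

```python
import numpy as np

def backbone(x):
    # Stand-in encoder: global-average-pool each image to a feature vector.
    return x.mean(axis=(2, 3))  # (batch, channels)

def proj_head(f):
    # Stand-in projection head: a fixed random linear map.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((f.shape[1], 8))
    return f @ w  # (batch, 8)

def forward_pair(x_a, x_b):
    f_a = backbone(x_a)   # embedding A from the backbone
    f_b = backbone(x_b)   # embedding B from the backbone
    z_a = proj_head(f_a)  # output feature vector A
    z_b = proj_head(f_b)  # output feature vector B
    z = np.concatenate([z_a, z_b], axis=0)  # outputs concatenated together
    return z, z_a, z_b, f_a, f_b

x_a = np.ones((2, 4, 16, 16))   # batch of 2 images with 4 channels
x_b = np.zeros((2, 4, 16, 16))  # the paired batch
z, z_a, z_b, f_a, f_b = forward_pair(x_a, x_b)
print(z.shape)  # (4, 8): both projected batches stacked
```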
- class SimCLR(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})
Base SimCLR class to be subclassed by SimCLR variants.
Subclasses MinervaSiamese.
- backbone
Backbone of SimCLR that takes the imagery input and extracts learned representations.
- Type:
- proj_head
Projection head that takes the learned representations from the backbone encoder.
- Type:
- Parameters:
- forward_single(x: Tensor) → tuple[Tensor, Tensor]
Performs a forward pass of a single head of the network by using the forward methods of the backbone and feeding its output into the proj_head.
Overrides MinervaSiamese.forward_single()
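A minimal sketch of that single-head pass, assuming forward_single returns the projection together with the backbone embedding (the docs only state the tuple[Tensor, Tensor] signature, so the ordering is an assumption; `backbone` and `proj_head` here are illustrative stand-ins):

```python
import numpy as np

def backbone(x):
    # Stand-in encoder: collapse the spatial dims to a representation.
    return x.mean(axis=(2, 3))  # (batch, channels)

def proj_head(f):
    # Stand-in projection head: a simple non-linear squash.
    return np.tanh(f)

def forward_single(x):
    f = backbone(x)   # learned representation from the backbone
    z = proj_head(f)  # projection of that representation
    return z, f

z, f = forward_single(np.ones((2, 4, 16, 16)))
print(z.shape, f.shape)  # (2, 4) (2, 4)
```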
- step(x: Tensor, *args, train: bool = False) → tuple[Tensor, Tensor]
Overrides MinervaModel to account for paired logits.
- Raises:
NotImplementedError – If optimiser is None.
- Parameters:
x (Tensor) – Batch of input data to the network.
train (bool) – Sets whether this is a training step or not. True for a training step, which will clear the optimiser, perform a backward pass of the network and then update the optimiser. If False, for a validation or testing step, these actions are not taken.
- Returns:
Loss computed by the loss function and a Tensor with both projections' logits.
- Return type:
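The effect of the train flag can be sketched with a toy optimiser. ToyOptimiser and the free-standing step function below are illustrative stand-ins, not MinervaModel internals; only the documented control flow (raise on a missing optimiser, clear/backward/update only when train=True) is modelled:

```python
class ToyOptimiser:
    """Stand-in optimiser that records whether it was cleared and stepped."""
    def __init__(self):
        self.cleared = False
        self.stepped = False
    def zero_grad(self):
        self.cleared = True
    def step(self):
        self.stepped = True

def step(optimiser, loss_fn, x, train=False):
    if optimiser is None:
        # Mirrors the documented NotImplementedError when optimiser is None.
        raise NotImplementedError("optimiser is None")
    loss = loss_fn(x)
    if train:
        optimiser.zero_grad()  # clear the optimiser
        # loss.backward() would run here in the real model
        optimiser.step()       # update the optimiser
    return loss

opt = ToyOptimiser()
loss = step(opt, sum, [1.0, 2.0], train=True)
print(loss, opt.cleared, opt.stepped)  # 3.0 True True
```

With train=False, the loss is still computed but the optimiser is left untouched, matching the validation/testing behaviour described above.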
- class SimCLR18(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})
- class SimCLR34(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})
- class SimCLR50(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})
- class SimConv(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 2048, projection_dim: int = 512, scaler: GradScaler | None = None, encoder_weights: str | None = None, backbone_kwargs: dict[str, Any] = {})
Base SimConv class.
Subclasses MinervaSiamese.
- backbone
Backbone of SimConv that takes the imagery input and extracts learned representations.
- Type:
- proj_head
Projection head that takes the learned representations from the backbone encoder.
- Type:
- Parameters:
- forward_single(x: Tensor) → tuple[Tensor, Tensor]
Performs a forward pass of a single head of the network by using the forward methods of the backbone and feeding its output into the proj_head.
Overrides MinervaSiamese.forward_single()
- step(x: Tensor, *args, train: bool = False) → tuple[Tensor, Tensor]
Overrides MinervaModel to account for paired logits.
- Raises:
NotImplementedError – If optimiser is None.
- Parameters:
x (Tensor) – Batch of input data to the network.
train (bool) – Sets whether this is a training step or not. True for a training step, which will clear the optimiser, perform a backward pass of the network and then update the optimiser. If False, for a validation or testing step, these actions are not taken.
- Returns:
Loss computed by the loss function and a Tensor with both projections' logits.
- Return type:
- class SimConv101(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 2048, projection_dim: int = 512, scaler: GradScaler | None = None, encoder_weights: str | None = None, backbone_kwargs: dict[str, Any] = {})
- class SimConv18(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 2048, projection_dim: int = 512, scaler: GradScaler | None = None, encoder_weights: str | None = None, backbone_kwargs: dict[str, Any] = {})
- class SimConv34(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 2048, projection_dim: int = 512, scaler: GradScaler | None = None, encoder_weights: str | None = None, backbone_kwargs: dict[str, Any] = {})
- class SimConv50(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 2048, projection_dim: int = 512, scaler: GradScaler | None = None, encoder_weights: str | None = None, backbone_kwargs: dict[str, Any] = {})
- class SimSiam(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, pred_dim: int = 512, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})
Base SimSiam class to be subclassed by SimSiam variants.
Subclasses MinervaSiamese.
- backbone
Backbone of SimSiam that takes the imagery input and extracts learned representations.
- Type:
- proj_head
Projection head that takes the learned representations from the backbone encoder.
- Type:
- Parameters:
- forward_single(x: Tensor) → tuple[Tensor, Tensor]
Performs a forward pass of a single head of SimSiam by using the forward methods of the backbone and feeding its output into the proj_head.
- step(x: Tensor, *args, train: bool = False) → tuple[Tensor, Tensor]
Overrides MinervaModel to account for paired logits.
- Raises:
NotImplementedError – If optimiser is None.
- Parameters:
x (Tensor) – Batch of input data to the network.
train (bool) – Sets whether this is a training step or not. True for a training step, which will clear the optimiser, perform a backward pass of the network and then update the optimiser. If False, for a validation or testing step, these actions are not taken.
- Returns:
Loss computed by the loss function and a Tensor with both projections' logits.
- Return type:
- class SimSiam18(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, pred_dim: int = 512, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})