siamese

Module containing Siamese models.

class MinervaSiamese(*args, **kwargs)

Abstract class for Siamese models.

backbone

The backbone encoder for the Siamese model.

Type:

MinervaModel

proj_head

The projection head for re-projecting the outputs from the backbone.

Type:

Module

forward(x: Tensor) → tuple[Tensor, Tensor, Tensor, Tensor, Tensor]

Performs a forward pass of the network by using the forward methods of the backbone and feeding its output into the projection head.

Can be called directly as a method (e.g. model.forward()) or by calling the model on the data (e.g. model()).

Parameters:

x (Tensor) – Pair of batches of input data to the network.

Returns:

Tuple of:
  • Output feature vectors concatenated together.

  • Output feature vector A.

  • Output feature vector B.

  • Detached embedding, A, from the backbone.

  • Detached embedding, B, from the backbone.

Return type:

tuple[Tensor, Tensor, Tensor, Tensor, Tensor]

forward_pair(x: Tensor) → tuple[Tensor, Tensor, Tensor, Tensor, Tensor]

Performs a forward pass of the network by using the forward methods of the backbone and feeding its output into the projection head.

Parameters:

x (Tensor) – Pair of batches of input data to the network.

Returns:

Tuple of:
  • Output feature vectors concatenated together.

  • Output feature vector A.

  • Output feature vector B.

  • Embedding, A, from the backbone.

  • Embedding, B, from the backbone.

Return type:

tuple[Tensor, Tensor, Tensor, Tensor, Tensor]

abstract forward_single(x: Tensor) → tuple[Tensor, Tensor]

Performs a forward pass of a single head of the network by using the forward methods of the backbone and feeding its output into the projection head.

Parameters:

x (Tensor) – Batch of unpaired input data to the network.

Returns:

Tuple of the feature vector output by the projection head and the detached embedding vector from the backbone.

Return type:

tuple[Tensor, Tensor]
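The contract above (a projection from proj_head plus a detached backbone embedding per view, combined by forward() into a five-element tuple) can be sketched with stand-in modules. Everything here, including how the two views are packed into x, is a hypothetical illustration, not the real MinervaModel-based implementation:

```python
import torch
from torch import nn


class SiameseSketch(nn.Module):
    """Minimal sketch of the MinervaSiamese pattern with toy stand-in modules."""

    def __init__(self) -> None:
        super().__init__()
        # Stand-ins for the real backbone encoder and projection head.
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(4 * 8 * 8, 32))
        self.proj_head = nn.Linear(32, 16)

    def forward_single(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        g = self.backbone(x)      # embedding from the backbone
        z = self.proj_head(g)     # re-projected feature vector
        return z, g.detach()      # embedding is detached, as documented

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, ...]:
        # Assumption: the two augmented views are stacked on a leading pair axis.
        x_a, x_b = x[0], x[1]
        z_a, g_a = self.forward_single(x_a)
        z_b, g_b = self.forward_single(x_b)
        z = torch.cat([z_a, z_b], dim=0)  # concatenated feature vectors
        return z, z_a, z_b, g_a, g_b


model = SiameseSketch()
pair = torch.randn(2, 4, 4, 8, 8)  # (views, batch, channels, height, width)
z, z_a, z_b, g_a, g_b = model(pair)
```

The detached embeddings let callers log or evaluate backbone representations without routing gradients through them; only the projections carry gradient.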

class SimCLR(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})

Base SimCLR class to be subclassed by SimCLR variants.

Subclasses MinervaSiamese.

backbone_name

Name of the backbone within this module to use.

Type:

str

backbone

Backbone of SimCLR that takes the imagery input and extracts learned representations.

Type:

Module

proj_head

Projection head that takes the learned representations from the backbone encoder.

Type:

Module

Parameters:
  • criterion – torch loss function the model will use.

  • input_size (tuple[int, int, int]) – Optional; defines the shape of the input data as (number of channels, image width, image height).

  • backbone_kwargs (dict[str, Any]) – Optional; Keyword arguments for the backbone packed up into a dict.

forward_single(x: Tensor) → tuple[Tensor, Tensor]

Performs a forward pass of a single head of the network by using the forward methods of the backbone and feeding its output into the proj_head.

Overrides MinervaSiamese.forward_single()

Parameters:

x (Tensor) – Batch of unpaired input data to the network.

Returns:

Tuple of the feature vector output by the proj_head and the detached embedding vector from the backbone.

Return type:

tuple[Tensor, Tensor]
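A SimCLR-style forward_single can be sketched as: backbone features, an MLP projection head sized to feature_dim, and a detached embedding. The stand-in modules and the L2 normalisation of the projection are assumptions for illustration only; the real backbone and proj_head are built from MinervaModel classes and backbone_kwargs:

```python
import torch
from torch import nn
import torch.nn.functional as F

# Hypothetical stand-ins for the real backbone and proj_head.
backbone = nn.Sequential(
    nn.Conv2d(4, 8, kernel_size=3, padding=1),  # 4 input channels, echoing input_size
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)
proj_head = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 128))  # feature_dim = 128


def forward_single(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    g = backbone(x)                        # learned representation
    z = F.normalize(proj_head(g), dim=-1)  # unit-length projection (an assumption here)
    return z, g.detach()                   # embedding detached, as documented


z, g = forward_single(torch.randn(4, 4, 16, 16))
```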

step(x: Tensor, *args, train: bool = False) → tuple[Tensor, Tensor]

Overrides MinervaModel.step() to account for paired logits.

Raises:

NotImplementedError – If optimiser is None.

Parameters:
  • x (Tensor) – Batch of input data to network.

  • train (bool) – Sets whether this is a training step. If True, the optimiser's gradients are cleared, a backward pass of the network is performed, and the optimiser is stepped. If False (a validation or testing step), these actions are not taken.

Returns:

Loss computed by the loss function and a Tensor with both projections' logits.

Return type:

tuple[Tensor, Tensor]
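The behaviour of the train flag described above can be sketched with a toy model in place of the SimCLR forward pass; the linear model and zero-target MSE criterion here are purely illustrative stand-ins:

```python
import torch
from torch import nn

# Hypothetical minimal model, optimiser, and criterion.
model = nn.Linear(8, 8)
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()


def step(x: torch.Tensor, train: bool = False) -> tuple[torch.Tensor, torch.Tensor]:
    if optimiser is None:
        raise NotImplementedError("Optimiser has not been set")
    logits = model(x)  # stands in for the paired projections
    loss = criterion(logits, torch.zeros_like(logits))
    if train:
        optimiser.zero_grad()  # clear stored gradients
        loss.backward()        # backward pass of the network
        optimiser.step()       # update parameters
    return loss, logits


before = model.weight.clone()
loss, logits = step(torch.randn(4, 8), train=True)
```

A training step changes the weights; a validation/testing step (train=False) computes the loss without touching them.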

class SimCLR18(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})

SimCLR network using a ResNet18 backbone.

class SimCLR34(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})

SimCLR network using a ResNet34 backbone.

class SimCLR50(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})

SimCLR network using a ResNet50 backbone.

class SimConv(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 2048, projection_dim: int = 512, scaler: GradScaler | None = None, encoder_weights: str | None = None, backbone_kwargs: dict[str, Any] = {})

Base SimConv class.

Subclasses MinervaSiamese.

backbone_name

Name of the backbone within this module to use.

Type:

str

backbone

Backbone of SimConv that takes the imagery input and extracts learned representations.

Type:

Module

proj_head

Projection head that takes the learned representations from the backbone encoder.

Type:

Module

Parameters:
  • criterion – torch loss function the model will use.

  • input_size (tuple[int, int, int]) – Optional; defines the shape of the input data as (number of channels, image width, image height).

  • backbone_kwargs (dict[str, Any]) – Optional; Keyword arguments for the backbone packed up into a dict.

forward_single(x: Tensor) → tuple[Tensor, Tensor]

Performs a forward pass of a single head of the network by using the forward methods of the backbone and feeding its output into the proj_head.

Overrides MinervaSiamese.forward_single()

Parameters:

x (Tensor) – Batch of unpaired input data to the network.

Returns:

Tuple of the feature vector output by the proj_head and the detached embedding vector from the backbone.

Return type:

tuple[Tensor, Tensor]
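The documentation does not describe SimConv's projection head, but the class name and the separate feature_dim/projection_dim arguments suggest a convolutional head that can preserve spatial structure. The 1×1-convolution head below is a hypothetical sketch under that assumption, not the real implementation:

```python
import torch
from torch import nn

# Dimensions echoing the defaults in the SimConv signature.
feature_dim, projection_dim = 2048, 512

# Assumption: a conv projection head maps backbone feature maps channel-wise,
# keeping the spatial grid intact rather than flattening to a vector.
proj_head = nn.Sequential(
    nn.Conv2d(feature_dim, feature_dim, kernel_size=1),
    nn.ReLU(),
    nn.Conv2d(feature_dim, projection_dim, kernel_size=1),
)

fmap = torch.randn(2, feature_dim, 8, 8)  # backbone feature map (batch, C, H, W)
z = proj_head(fmap)
```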

step(x: Tensor, *args, train: bool = False) → tuple[Tensor, Tensor]

Overrides MinervaModel.step() to account for paired logits.

Raises:

NotImplementedError – If optimiser is None.

Parameters:
  • x (Tensor) – Batch of input data to network.

  • train (bool) – Sets whether this is a training step. If True, the optimiser's gradients are cleared, a backward pass of the network is performed, and the optimiser is stepped. If False (a validation or testing step), these actions are not taken.

Returns:

Loss computed by the loss function and a Tensor with both projections' logits.

Return type:

tuple[Tensor, Tensor]

class SimConv101(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 2048, projection_dim: int = 512, scaler: GradScaler | None = None, encoder_weights: str | None = None, backbone_kwargs: dict[str, Any] = {})

SimConv network using a ResNet101 backbone.

class SimConv18(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 2048, projection_dim: int = 512, scaler: GradScaler | None = None, encoder_weights: str | None = None, backbone_kwargs: dict[str, Any] = {})

SimConv network using a ResNet18 backbone.

class SimConv34(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 2048, projection_dim: int = 512, scaler: GradScaler | None = None, encoder_weights: str | None = None, backbone_kwargs: dict[str, Any] = {})

SimConv network using a ResNet34 backbone.

class SimConv50(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 2048, projection_dim: int = 512, scaler: GradScaler | None = None, encoder_weights: str | None = None, backbone_kwargs: dict[str, Any] = {})

SimConv network using a ResNet50 backbone.

class SimSiam(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, pred_dim: int = 512, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})

Base SimSiam class to be subclassed by SimSiam variants.

Subclasses MinervaSiamese.

backbone_name

Name of the backbone within this module to use.

Type:

str

backbone

Backbone of SimSiam that takes the imagery input and extracts learned representations.

Type:

Module

proj_head

Projection head that takes the learned representations from the backbone encoder.

Type:

Module

Parameters:
  • criterion – torch loss function the model will use.

  • input_size (tuple[int, int, int]) – Optional; defines the shape of the input data as (number of channels, image width, image height).

  • backbone_kwargs (dict[str, Any]) – Optional; Keyword arguments for the backbone packed up into a dict.

forward_single(x: Tensor) → tuple[Tensor, Tensor]

Performs a forward pass of a single head of SimSiam by using the forward methods of the backbone and feeding its output into the proj_head.

Parameters:

x (Tensor) – Batch of unpaired input data to the network.

Returns:

Tuple of the feature vector output by the proj_head and the detached embedding vector from the backbone.

Return type:

tuple[Tensor, Tensor]

step(x: Tensor, *args, train: bool = False) → tuple[Tensor, Tensor]

Overrides MinervaModel.step() to account for paired logits.

Raises:

NotImplementedError – If optimiser is None.

Parameters:
  • x (Tensor) – Batch of input data to network.

  • train (bool) – Sets whether this is a training step. If True, the optimiser's gradients are cleared, a backward pass of the network is performed, and the optimiser is stepped. If False (a validation or testing step), these actions are not taken.

Returns:

Loss computed by the loss function and a Tensor with both projections' logits.

Return type:

tuple[Tensor, Tensor]
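SimSiam's pred_dim argument points at its characteristic predictor head: the predictor output for one view is compared against the stop-gradient projection of the other. The modules and dimensions below are hypothetical stand-ins illustrating that symmetric negative-cosine objective, not the SimSiam class's actual internals:

```python
import torch
from torch import nn
import torch.nn.functional as F

# Toy analogues of feature_dim and pred_dim.
feature_dim, pred_dim = 16, 8
proj = nn.Linear(32, feature_dim)  # stands in for the projection head
pred = nn.Sequential(              # predictor bottleneck sized by pred_dim
    nn.Linear(feature_dim, pred_dim), nn.ReLU(), nn.Linear(pred_dim, feature_dim)
)


def simsiam_loss(g_a: torch.Tensor, g_b: torch.Tensor) -> torch.Tensor:
    z_a, z_b = proj(g_a), proj(g_b)
    p_a, p_b = pred(z_a), pred(z_b)
    # Stop-gradient on the target projections is the key SimSiam ingredient.
    return -(F.cosine_similarity(p_a, z_b.detach()).mean()
             + F.cosine_similarity(p_b, z_a.detach()).mean()) / 2


loss = simsiam_loss(torch.randn(4, 32), torch.randn(4, 32))
```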

class SimSiam18(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, pred_dim: int = 512, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})

SimSiam network using a ResNet18 backbone.

class SimSiam34(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, pred_dim: int = 512, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})

SimSiam network using a ResNet34 backbone.

class SimSiam50(criterion: Any, input_size: tuple[int, int, int] = (4, 256, 256), feature_dim: int = 128, pred_dim: int = 512, scaler: GradScaler | None = None, backbone_kwargs: dict[str, Any] = {})

SimSiam network using a ResNet50 backbone.