espnet2.spk package¶
espnet2.spk.espnet_model¶
class espnet2.spk.espnet_model.ESPnetSpeakerModel(frontend: Optional[espnet2.asr.frontend.abs_frontend.AbsFrontend], specaug: Optional[espnet2.asr.specaug.abs_specaug.AbsSpecAug], normalize: Optional[espnet2.layers.abs_normalize.AbsNormalize], encoder: Optional[espnet2.asr.encoder.abs_encoder.AbsEncoder], pooling: Optional[espnet2.spk.pooling.abs_pooling.AbsPooling], projector: Optional[espnet2.spk.projector.abs_projector.AbsProjector], loss: Optional[espnet2.spk.loss.abs_loss.AbsLoss])[source]¶
Bases: espnet2.train.abs_espnet_model.AbsESPnetModel
Speaker embedding extraction model. Core model for diverse speaker-related tasks (e.g., verification, open-set identification, diarization).
The model architecture mainly comprises an ‘encoder’, a ‘pooling’ module, and a ‘projector’. In the speaker recognition field, this combination is usually called a ‘speaker_encoder’ (or speaker embedding extractor). We split it into three components for flexibility in future extensions:
- ‘encoder’: extracts frame-level speaker embeddings.
- ‘pooling’: aggregates them into a single utterance-level embedding.
- ‘projector’ (optional): additional processing (e.g., one fully-connected layer) to derive the final speaker embedding.
In the future, ‘pooling’ and/or ‘projector’ may be integrated into a ‘decoder’, depending on extensions for joint use with other tasks (e.g., ASR, SE, target speaker extraction).
collect_feats(speech: torch.Tensor, speech_lengths: torch.Tensor, spk_labels: torch.Tensor = None, **kwargs) → Dict[str, torch.Tensor][source]¶
extract_feats(speech: torch.Tensor, speech_lengths: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor][source]¶
forward(speech: torch.Tensor, spk_labels: torch.Tensor, extract_embd: bool = False, **kwargs) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]¶
Feed-forward through encoder layers and aggregate into utterance-level feature.
Parameters:
speech – (Batch, samples)
speech_lengths – (Batch,)
extract_embd – if True, skip the classification head and return the speaker embedding directly
spk_labels – (Batch,)
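The data flow described above can be sketched as follows. This is an illustrative outline only, assuming each component is callable on the output of the previous one; it is not the exact implementation of forward().
import torch

def extract_speaker_embedding_sketch(model, speech: torch.Tensor, speech_lengths: torch.Tensor):
    # Frontend feature extraction (see extract_feats above).
    feats, feat_lengths = model.extract_feats(speech, speech_lengths)
    frame_level = model.encoder(feats)      # frame-level speaker embeddings
    utt_level = model.pooling(frame_level)  # aggregate into one utterance-level embedding
    spk_embd = model.projector(utt_level)   # optional extra processing (e.g., one fully-connected layer)
    return spk_embd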
espnet2.spk.__init__¶
espnet2.spk.layers.RawNetBasicBlock¶
class espnet2.spk.layers.RawNetBasicBlock.AFMS(nb_dim: int)[source]¶
Bases: torch.nn.modules.module.Module
Alpha-feature map scaling (AFMS), added to the output of each residual block [1, 2].
References:
[1] RawNet2: https://www.isca-speech.org/archive/Interspeech_2020/pdfs/1011.pdf
[2] AFMS: https://www.koreascience.or.kr/article/JAKO202029757857763.page
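As a rough illustration, alpha-FMS adds a learnable per-channel offset to the feature map and then rescales each channel with a sigmoid gate computed from its time-averaged descriptor. A minimal sketch of this idea (the class and attribute names are illustrative and may differ from the actual layer):
import torch
import torch.nn as nn
import torch.nn.functional as F

class AFMSSketch(nn.Module):
    def __init__(self, nb_dim: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(nb_dim, 1))  # learnable per-channel offset
        self.fc = nn.Linear(nb_dim, nb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, nb_dim, time)
        s = F.adaptive_avg_pool1d(x, 1).squeeze(-1)  # per-channel descriptor, (batch, nb_dim)
        s = torch.sigmoid(self.fc(s)).unsqueeze(-1)  # per-channel scale in (0, 1)
        return (x + self.alpha) * s                  # offset, then rescale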
forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
class espnet2.spk.layers.RawNetBasicBlock.Bottle2neck(inplanes, planes, kernel_size=None, dilation=None, scale=4, pool=False)[source]¶
Bases: torch.nn.modules.module.Module
forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
class espnet2.spk.layers.RawNetBasicBlock.PreEmphasis(coef: float = 0.97)[source]¶
Bases: torch.nn.modules.module.Module
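Pre-emphasis applies the first-order high-pass filter y[t] = x[t] - coef * x[t-1] to the raw waveform. A minimal sketch of how this can be realized with a fixed 1-D convolution (padding details may differ from the actual layer):
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreEmphasisSketch(nn.Module):
    def __init__(self, coef: float = 0.97):
        super().__init__()
        # Fixed FIR kernel [-coef, 1] implementing y[t] = x[t] - coef * x[t-1].
        self.register_buffer("kernel", torch.tensor([[[-coef, 1.0]]]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, samples)
        x = x.unsqueeze(1)                    # (batch, 1, samples)
        x = F.pad(x, (1, 0), mode="reflect")  # pad one sample so the output length is preserved
        return F.conv1d(x, self.kernel).squeeze(1)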
forward(input: torch.Tensor) → torch.Tensor[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
espnet2.spk.layers.__init__¶
espnet2.spk.encoder.rawnet3_encoder¶
RawNet3 Encoder
class espnet2.spk.encoder.rawnet3_encoder.RawNet3Encoder(block: str = 'Bottle2neck', model_scale: int = 8, ndim: int = 1024, sinc_stride: int = 16, **kwargs)[source]¶
Bases: espnet2.asr.encoder.abs_encoder.AbsEncoder
RawNet3 encoder. Extracts frame-level RawNet embeddings from raw waveforms.
Paper: J. Jung et al., “Pushing the limits of raw waveform speaker recognition”, in Proc. INTERSPEECH, 2022.
Note that the model’s output dimensionality self._output_size equals 1.5 * ndim.
Parameters:
block – type of encoder block class to use.
model_scale – scale value of the Res2Net architecture.
ndim – dimensionality of the hidden representation.
sinc_stride – stride of the first sinc-conv layer, which determines the compression rate (in Hz).
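A hedged usage sketch follows. The argument values mirror the defaults above and the shapes are illustrative; additional keyword arguments may be required in practice, and output_size() is assumed to be provided via AbsEncoder.
import torch
from espnet2.spk.encoder.rawnet3_encoder import RawNet3Encoder

encoder = RawNet3Encoder(model_scale=8, ndim=1024, sinc_stride=16)
wav = torch.randn(4, 32000)   # (Batch, samples), e.g. 2 seconds at 16 kHz
frame_embds = encoder(wav)    # frame-level RawNet embeddings
print(encoder.output_size())  # documented above to equal 1.5 * ndim = 1536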
forward(data: torch.Tensor)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
espnet2.spk.encoder.__init__¶
espnet2.spk.loss.aamsoftmax¶
class espnet2.spk.loss.aamsoftmax.AAMSoftmax(nout, nclasses, margin=0.3, scale=15, easy_margin=False, **kwargs)[source]¶
Bases: espnet2.spk.loss.abs_loss.AbsLoss
Additive angular margin softmax.
Paper: Deng, Jiankang, et al. “Arcface: Additive angular margin loss for deep face recognition.” Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.
Parameters:
nout – dimensionality of the speaker embedding
nclasses – number of speakers in the training set
margin – margin value of AAMSoftmax
scale – scale value of AAMSoftmax
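For intuition, the additive angular margin replaces the target-class logit cos(θ) with cos(θ + margin) before multiplying by the scale. A minimal, self-contained sketch of this idea (not the exact ESPnet implementation; the helper name and the explicit class-weight matrix are illustrative):
import torch
import torch.nn.functional as F

def aam_logits(embd: torch.Tensor, weight: torch.Tensor, label: torch.Tensor,
               margin: float = 0.3, scale: float = 15.0) -> torch.Tensor:
    # Cosine similarity between L2-normalized embeddings and class weights.
    cosine = F.linear(F.normalize(embd), F.normalize(weight))  # (batch, nclasses)
    theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
    # Add the angular margin only to each sample's target class.
    target = F.one_hot(label, num_classes=weight.size(0)).bool()
    logits = torch.where(target, torch.cos(theta + margin), cosine)
    return scale * logits  # feed into cross-entropy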
forward(x, label=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
espnet2.spk.loss.__init__¶
espnet2.spk.loss.abs_loss¶
class espnet2.spk.loss.abs_loss.AbsLoss(nout: int, **kwargs)[source]¶
Bases: torch.nn.modules.module.Module
abstract forward(x: torch.Tensor, label=None) → torch.Tensor[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
espnet2.spk.pooling.chn_attn_stat_pooling¶
class espnet2.spk.pooling.chn_attn_stat_pooling.ChnAttnStatPooling(input_size: int = 1536)[source]¶
Bases: espnet2.spk.pooling.abs_pooling.AbsPooling
Aggregates frame-level features into a single utterance-level feature. Proposed in B. Desplanques et al., “ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification”.
Parameters:
input_size – dimensionality of the input frame-level embeddings, determined by the encoder hyperparameter. The output dimensionality of this pooling layer is double the input_size.
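A hedged usage sketch, assuming channel-first frame-level input of shape (Batch, input_size, frames); the shapes are illustrative:
import torch
from espnet2.spk.pooling.chn_attn_stat_pooling import ChnAttnStatPooling

pooling = ChnAttnStatPooling(input_size=1536)
frame_embds = torch.randn(4, 1536, 200)  # (Batch, channels, frames)
utt_embds = pooling(frame_embds)         # expected shape (4, 3072): attentive mean and std concatenated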
forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
espnet2.spk.pooling.__init__¶
espnet2.spk.pooling.abs_pooling¶
class espnet2.spk.pooling.abs_pooling.AbsPooling[source]¶
Bases: torch.nn.modules.module.Module, abc.ABC
Initializes internal Module state, shared by both nn.Module and ScriptModule.
abstract forward(input: torch.Tensor) → torch.Tensor[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
espnet2.spk.projector.__init__¶
espnet2.spk.projector.rawnet3_projector¶
class espnet2.spk.projector.rawnet3_projector.RawNet3Projector(input_size, output_size)[source]¶
Bases: espnet2.spk.projector.abs_projector.AbsProjector
forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
espnet2.spk.projector.abs_projector¶