espnet2.uasr package¶
espnet2.uasr.espnet_model¶
class espnet2.uasr.espnet_model.ESPnetUASRModel(frontend: Optional[espnet2.asr.frontend.abs_frontend.AbsFrontend], segmenter: Optional[espnet2.uasr.segmenter.abs_segmenter.AbsSegmenter], generator: espnet2.uasr.generator.abs_generator.AbsGenerator, discriminator: espnet2.uasr.discriminator.abs_discriminator.AbsDiscriminator, losses: Dict[str, espnet2.uasr.loss.abs_loss.AbsUASRLoss], kenlm_path: Optional[str], token_list: Optional[list], max_epoch: Optional[int], vocab_size: int, cfg: Optional[Dict] = None, pad: int = 1, sil_token: str = '<SIL>', sos_token: str = '<s>', eos_token: str = '</s>', skip_softmax: espnet2.utils.types.str2bool = False, use_gumbel: espnet2.utils.types.str2bool = False, use_hard_gumbel: espnet2.utils.types.str2bool = True, min_temperature: float = 0.1, max_temperature: float = 2.0, decay_temperature: float = 0.99995, use_collected_training_feats: espnet2.utils.types.str2bool = False)[source]¶
Bases: espnet2.train.abs_espnet_model.AbsESPnetModel
Unsupervised ASR model.
This implementation is adapted from FAIRSEQ: https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec/unsupervised
collect_feats(speech: torch.Tensor, speech_lengths: torch.Tensor, text: Optional[torch.Tensor] = None, text_lengths: Optional[torch.Tensor] = None, **kwargs) → Dict[str, torch.Tensor][source]¶
encode(speech: torch.Tensor, speech_lengths: torch.Tensor) → Tuple[torch.Tensor, torch.Tensor][source]¶
forward(speech: torch.Tensor, speech_lengths: torch.Tensor, text: Optional[torch.Tensor] = None, text_lengths: Optional[torch.Tensor] = None, pseudo_labels: Optional[torch.Tensor] = None, pseudo_labels_lengths: Optional[torch.Tensor] = None, do_validation: Optional[espnet2.utils.types.str2bool] = False, print_hyp: Optional[espnet2.utils.types.str2bool] = False, **kwargs) → Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor][source]¶
Frontend + Segmenter + Generator + Discriminator + Calc Loss
property number_updates¶
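The forward pass chains the four components before computing the losses. Below is a minimal pure-Python sketch of that flow, with hypothetical stand-in callables (the real components are torch modules operating on padded tensors with masks):

```python
# Hypothetical stand-ins for the real torch modules; this only
# illustrates the order of operations in the UASR forward pass.
def uasr_forward(speech, frontend, segmenter, generator, discriminator, losses):
    feats = frontend(speech)        # raw speech -> features
    segments = segmenter(feats)     # merge similar consecutive frames
    logits = generator(segments)    # frame-level phoneme logits
    score = discriminator(logits)   # GAN-style real/fake score
    return sum(loss(logits, score) for loss in losses)

# Toy run with identity-like components:
out = uasr_forward(
    [0.1, 0.2, 0.3],
    frontend=lambda x: x,
    segmenter=lambda x: x,
    generator=lambda x: x,
    discriminator=lambda x: sum(x),
    losses=[lambda logits, score: score],
)
```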
espnet2.uasr.__init__¶
espnet2.uasr.discriminator.abs_discriminator¶
class espnet2.uasr.discriminator.abs_discriminator.AbsDiscriminator[source]¶
Bases: torch.nn.modules.module.Module, abc.ABC
Initializes internal Module state, shared by both nn.Module and ScriptModule.
abstract forward(xs_pad: torch.Tensor, padding_mask: torch.Tensor) → torch.Tensor[source]¶
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
espnet2.uasr.discriminator.conv_discriminator¶
class espnet2.uasr.discriminator.conv_discriminator.ConvDiscriminator(input_dim: int, cfg: Optional[Dict] = None, conv_channels: int = 384, conv_kernel: int = 8, conv_dilation: int = 1, conv_depth: int = 2, linear_emb: espnet2.utils.types.str2bool = False, causal: espnet2.utils.types.str2bool = True, max_pool: espnet2.utils.types.str2bool = False, act_after_linear: espnet2.utils.types.str2bool = False, dropout: float = 0.0, spectral_norm: espnet2.utils.types.str2bool = False, weight_norm: espnet2.utils.types.str2bool = False)[source]¶
Bases: espnet2.uasr.discriminator.abs_discriminator.AbsDiscriminator
Convolutional discriminator for UASR.
forward(x: torch.Tensor, padding_mask: Optional[torch.Tensor])[source]¶
class espnet2.uasr.discriminator.conv_discriminator.SamePad(kernel_size, causal=False)[source]¶
Bases: torch.nn.modules.module.Module
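SamePad appears to mirror the fairseq helper of the same name: after a convolution padded with kernel_size // 2, an even kernel yields one extra trailing output frame (or kernel_size - 1 extra frames in causal mode), which this module trims off. A pure-Python sketch of that trimming rule, as an assumption based on the fairseq implementation:

```python
# Sketch of SamePad's trimming rule (assumption based on the fairseq
# helper): trim 1 trailing frame for even kernels, kernel_size - 1
# in causal mode, nothing otherwise.
def same_pad_trim(seq, kernel_size, causal=False):
    remove = kernel_size - 1 if causal else (1 if kernel_size % 2 == 0 else 0)
    return seq[:-remove] if remove > 0 else seq

same_pad_trim([1, 2, 3, 4, 5], kernel_size=4)  # even kernel: trims 1 frame
same_pad_trim([1, 2, 3, 4, 5], kernel_size=3)  # odd kernel: unchanged
```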
forward(x)[source]¶
espnet2.uasr.discriminator.__init__¶
espnet2.uasr.loss.pseudo_label_loss¶
class espnet2.uasr.loss.pseudo_label_loss.UASRPseudoLabelLoss(weight: float = 1.0, input_dim: int = 128, output_dim: int = 64, downsample_rate: int = 2, ignore_index: int = -1, reduction: str = 'none')[source]¶
Bases: espnet2.uasr.loss.abs_loss.AbsUASRLoss
Auxiliary pseudo-label loss for UASR.
espnet2.uasr.loss.discriminator_loss¶
class espnet2.uasr.loss.discriminator_loss.UASRDiscriminatorLoss(weight: float = 1.0, smoothing: float = 0.0, smoothing_one_side: espnet2.utils.types.str2bool = False, reduction: str = 'sum')[source]¶
Bases: espnet2.uasr.loss.abs_loss.AbsUASRLoss
Discriminator loss for UASR.
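To illustrate the smoothing parameters, here is a sketch of a GAN discriminator loss with one-sided label smoothing. This is an illustrative assumption (binary cross-entropy on probabilities, with the "real" target lowered by the smoothing amount), not the exact espnet2 implementation:

```python
# Illustrative GAN discriminator loss with one-sided label smoothing
# (assumed formulation, not the exact espnet2 code).
import math

def bce(prob, target):
    # binary cross-entropy for a single probability/target pair
    return -(target * math.log(prob) + (1 - target) * math.log(1 - prob))

def discriminator_loss(d_real, d_fake, smoothing=0.0):
    real_target = 1.0 - smoothing  # one-sided smoothing of "real" labels
    real_loss = sum(bce(p, real_target) for p in d_real)
    fake_loss = sum(bce(p, 0.0) for p in d_fake)
    return real_loss + fake_loss

discriminator_loss([0.9], [0.1])  # confident discriminator -> small loss
```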
espnet2.uasr.loss.gradient_penalty¶
class espnet2.uasr.loss.gradient_penalty.UASRGradientPenalty(discriminator: espnet2.uasr.discriminator.abs_discriminator.AbsDiscriminator, weight: float = 1.0, probabilistic_grad_penalty_slicing: espnet2.utils.types.str2bool = False, reduction: str = 'sum')[source]¶
Bases: espnet2.uasr.loss.abs_loss.AbsUASRLoss
Gradient penalty for UASR.
forward(fake_sample: torch.Tensor, real_sample: torch.Tensor, is_training: espnet2.utils.types.str2bool, is_discriminative_step: espnet2.utils.types.str2bool)[source]¶
Forward.
Parameters:
fake_sample – generated sample from the generator
real_sample – real sample
is_training – whether this is a training step
is_discriminative_step – whether the discriminator is being trained
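The penalty follows the WGAN-GP idea: evaluate the discriminator's gradient at a random interpolation of a real and a fake sample and penalize its squared norm. A pure-Python sketch with an analytic gradient standing in for autograd (an illustrative assumption, not the exact espnet2 code):

```python
# WGAN-GP-style gradient penalty sketch (assumed formulation).
import random

def gradient_penalty(fake_sample, real_sample, disc_grad, seed=0):
    random.seed(seed)
    alpha = random.random()  # random interpolation weight
    mixed = [alpha * r + (1 - alpha) * f
             for r, f in zip(real_sample, fake_sample)]
    grad = disc_grad(mixed)          # dD/dx at the interpolated sample
    return sum(g * g for g in grad)  # squared gradient norm

# Toy discriminator D(x) = sum(3 * x_i): its gradient is 3.0 per dim.
penalty = gradient_penalty([0.0, 0.0], [1.0, 1.0], lambda x: [3.0] * len(x))
```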
espnet2.uasr.loss.phoneme_diversity_loss¶
class espnet2.uasr.loss.phoneme_diversity_loss.UASRPhonemeDiversityLoss(weight: float = 1.0)[source]¶
Bases: espnet2.uasr.loss.abs_loss.AbsUASRLoss
Phoneme diversity loss for UASR.
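In wav2vec-U, the diversity term pushes the generator to use the whole phoneme inventory by rewarding a high perplexity of the batch-averaged phoneme distribution. A pure-Python sketch of that formulation, as an assumption based on the fairseq code (the real loss works on the generator's softmax outputs):

```python
# Perplexity-based diversity loss sketch (assumed formulation):
# loss = (num_phones - perplexity) / num_phones, which is 0 when the
# averaged distribution is uniform and grows as usage collapses.
import math

def phoneme_diversity_loss(avg_probs):
    num_phones = len(avg_probs)
    entropy = -sum(p * math.log(p) for p in avg_probs if p > 0)
    return (num_phones - math.exp(entropy)) / num_phones

phoneme_diversity_loss([0.25, 0.25, 0.25, 0.25])  # uniform usage -> 0.0
```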
espnet2.uasr.loss.__init__¶
espnet2.uasr.loss.smoothness_penalty¶
class espnet2.uasr.loss.smoothness_penalty.UASRSmoothnessPenalty(weight: float = 1.0, reduction: str = 'none')[source]¶
Bases: espnet2.uasr.loss.abs_loss.AbsUASRLoss
Smoothness penalty for UASR.
forward(dense_logits: torch.Tensor, dense_padding_mask: torch.Tensor, sample_size: int, is_discriminative_step: bool)[source]¶
Forward.
Parameters:
dense_logits – output logits of the generator
dense_padding_mask – padding mask of the logits
sample_size – batch size
is_discriminative_step – whether the discriminator is being trained
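A plausible reading of this penalty, following wav2vec-U, is an L2 penalty between the logits of adjacent frames, discouraging abrupt frame-to-frame changes in the generator output. A pure-Python sketch (illustrative assumption, not the exact espnet2 code):

```python
# Smoothness penalty sketch (assumed formulation): squared distance
# between logits of each pair of adjacent frames.
def smoothness_penalty(logits):
    # logits: list of per-frame logit vectors
    total = 0.0
    for prev, cur in zip(logits, logits[1:]):
        total += sum((a - b) ** 2 for a, b in zip(prev, cur))
    return total

smoothness_penalty([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```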
espnet2.uasr.loss.abs_loss¶
class espnet2.uasr.loss.abs_loss.AbsUASRLoss[source]¶
Bases: torch.nn.modules.module.Module, abc.ABC
Base class for all UASR loss modules.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
abstract forward() → torch.Tensor[source]¶
property name¶
espnet2.uasr.segmenter.join_segmenter¶
espnet2.uasr.segmenter.__init__¶
espnet2.uasr.segmenter.abs_segmenter¶
Segmenter definition for the UASR task.
In practice, the frame-level output of the generator may predict the same phoneme for several consecutive frames, which makes discrimination too easy. The segmenter therefore merges consecutive frames whose generator predictions are similar.
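The merging idea can be sketched in a few lines (hypothetical helper; the real segmenters operate on tensors with padding masks):

```python
# Collapse runs of identical predictions into single segments
# (hypothetical helper illustrating the segmenter's purpose).
def merge_consecutive(predictions):
    merged = []
    for p in predictions:
        if not merged or merged[-1] != p:
            merged.append(p)
    return merged

merge_consecutive(["a", "a", "a", "b", "b", "a"])  # -> ["a", "b", "a"]
```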
espnet2.uasr.segmenter.random_segmenter¶
espnet2.uasr.generator.conv_generator¶
class espnet2.uasr.generator.conv_generator.ConvGenerator(input_dim: int, output_dim: int, cfg: Optional[Dict] = None, conv_kernel: int = 3, conv_dilation: int = 1, conv_stride: int = 9, pad: int = -1, bias: espnet2.utils.types.str2bool = False, dropout: float = 0.0, batch_norm: espnet2.utils.types.str2bool = True, batch_norm_weight: float = 30.0, residual: espnet2.utils.types.str2bool = True)[source]¶
Bases: espnet2.uasr.generator.abs_generator.AbsGenerator
Convolutional generator for UASR.
forward(feats: torch.Tensor, text: Optional[torch.Tensor], feats_padding_mask: torch.Tensor)[source]¶
class espnet2.uasr.generator.conv_generator.SamePad(kernel_size, causal=False)[source]¶
Bases: torch.nn.modules.module.Module
forward(x)[source]¶
class espnet2.uasr.generator.conv_generator.TransposeLast(deconstruct_idx=None)[source]¶
Bases: torch.nn.modules.module.Module
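TransposeLast appears to mirror the fairseq helper of the same name: it swaps the last two dimensions, e.g. to move between (batch, time, channels) and (batch, channels, time) layouts around a Conv1d. A pure-Python sketch for a 2-D input (assumption based on the fairseq implementation):

```python
# Pure-Python stand-in for x.transpose(-2, -1) on a 2-D input
# (assumed behavior of TransposeLast, per the fairseq helper).
def transpose_last(matrix):
    return [list(row) for row in zip(*matrix)]

transpose_last([[1, 2, 3], [4, 5, 6]])  # -> [[1, 4], [2, 5], [3, 6]]
```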
forward(x)[source]¶
espnet2.uasr.generator.__init__¶
espnet2.uasr.generator.abs_generator¶