espnet2.torch_utils package¶
espnet2.torch_utils.get_layer_from_string¶
espnet2.torch_utils.get_layer_from_string.get_layer(l_name, library=<module 'torch.nn'>)[source]¶
Return the layer object handler from a library, e.g. from torch.nn.
E.g. if l_name == "elu", this returns torch.nn.ELU.
- Parameters:
l_name (string) – Case-insensitive name of the layer in the library (e.g. 'elu').
library (module) – Library/module in which to search for the object handler with name l_name, e.g. torch.nn.
- Returns:
Handler for the requested layer, e.g. torch.nn.ELU.
- Return type:
layer_handler (object)
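For illustration, a minimal usage sketch; it assumes the returned handler is the layer class itself (as the ELU example above implies) and must be instantiated by the caller:
>>> from espnet2.torch_utils.get_layer_from_string import get_layer
>>> layer_class = get_layer("relu")  # case-insensitive lookup in torch.nn
>>> layer = layer_class()  # instantiate the returned handler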
espnet2.torch_utils.add_gradient_noise¶
espnet2.torch_utils.add_gradient_noise.add_gradient_noise(model: torch.nn.modules.module.Module, iteration: int, duration: float = 100, eta: float = 1.0, scale_factor: float = 0.55)[source]¶
Adds noise from a standard normal distribution to the gradients.
The standard deviation (sigma) is controlled by the three hyper-parameters below; sigma decays to zero (no noise) as the number of iterations grows.
- Parameters:
model – Model.
iteration – Current iteration count.
duration – {100, 1000}: Interval length (in iterations) controlling how often sigma changes.
eta – {0.01, 0.3, 1.0}: The magnitude of sigma.
scale_factor – {0.55}: The scale of sigma.
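For illustration, a minimal sketch of the technique, not the exact ESPnet implementation; the decay schedule sigma = eta / (1 + iteration / duration) ** scale_factor is an assumption consistent with the description above:

import torch

def add_gradient_noise_sketch(model, iteration, duration=100, eta=1.0, scale_factor=0.55):
    # Assumed schedule: sigma decays toward zero as the iteration count grows.
    sigma = eta / (1 + iteration / duration) ** scale_factor
    for param in model.parameters():
        if param.grad is not None:
            # Add zero-mean Gaussian noise with standard deviation sigma.
            param.grad += sigma * torch.randn_like(param.grad)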
espnet2.torch_utils.recursive_op¶
Torch utility module.
espnet2.torch_utils.pytorch_version¶
espnet2.torch_utils.load_pretrained_model¶
espnet2.torch_utils.load_pretrained_model.filter_state_dict(dst_state: Dict[str, Union[float, torch.Tensor]], src_state: Dict[str, Union[float, torch.Tensor]])[source]¶
Filter out entries whose name or size mismatches between the two dicts.
- Parameters:
dst_state – reference state dict used for filtering
src_state – target state dict to be filtered
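A hypothetical simplification of the rule described above, keeping only entries whose name exists in the reference dict with an identical tensor shape (the actual function may handle more cases):

import torch
from typing import Dict

def filter_state_dict_sketch(
    dst_state: Dict[str, torch.Tensor], src_state: Dict[str, torch.Tensor]
) -> Dict[str, torch.Tensor]:
    # Keep only src entries whose key also exists in dst with the same shape.
    return {
        key: value
        for key, value in src_state.items()
        if key in dst_state and dst_state[key].shape == value.shape
    }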
espnet2.torch_utils.load_pretrained_model.load_pretrained_model(init_param: str, model: torch.nn.modules.module.Module, ignore_init_mismatch: bool, map_location: str = 'cpu')[source]¶
Load a model state and set it to the model.
- Parameters:
init_param – <file_path>:<src_key>:<dst_key>:<exclude_keys>
Examples
>>> load_pretrained_model("somewhere/model.pth", model)
>>> load_pretrained_model("somewhere/model.pth:decoder:decoder", model)
>>> load_pretrained_model("somewhere/model.pth:decoder:decoder:", model)
>>> load_pretrained_model(
...     "somewhere/model.pth:decoder:decoder:decoder.embed", model
... )
>>> load_pretrained_model("somewhere/decoder.pth::decoder", model)
espnet2.torch_utils.device_funcs¶
espnet2.torch_utils.device_funcs.force_gatherable(data, device)[source]¶
Change an object into one gatherable by torch.nn.DataParallel, recursively.
Unlike to_device(), this function also converts float and int values into torch.Tensor.
- Restrictions on the returned value in DataParallel:
The object must be a torch.cuda.Tensor with one or more dimensions (a 0-dimensional tensor triggers a warning), or a list, tuple, or dict of such objects.
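A typical usage sketch inside a model's forward() under DataParallel; the names output, batch_size, and weight here are illustrative, not part of this API:
>>> loss = output.mean()  # 0-dim tensor: not gatherable as-is
>>> stats = {"loss": loss.item()}  # plain Python float
>>> loss, stats, weight = force_gatherable((loss, stats, batch_size), loss.device)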
espnet2.torch_utils.model_summary¶
espnet2.torch_utils.model_summary.get_human_readable_count(number: int) → str[source]¶
Return a human-readable count.
Originated from: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/core/memory.py
Abbreviates an integer number with K, M, B, T for thousands, millions, billions and trillions, respectively.
Examples
>>> get_human_readable_count(123)
'123 '
>>> get_human_readable_count(1234)  # (one thousand)
'1 K'
>>> get_human_readable_count(2e6)  # (two million)
'2 M'
>>> get_human_readable_count(3e9)  # (three billion)
'3 B'
>>> get_human_readable_count(4e12)  # (four trillion)
'4 T'
>>> get_human_readable_count(5e15)  # (more than a trillion)
'5,000 T'
- Parameters:
number – a positive integer
- Returns:
A string formatted according to the pattern described above.
espnet2.torch_utils.forward_adaptor¶
class espnet2.torch_utils.forward_adaptor.ForwardAdaptor(module: torch.nn.modules.module.Module, name: str)[source]¶
Bases: torch.nn.modules.module.Module
Wrapper module to parallelize a specified method.
torch.nn.DataParallel parallelizes only forward(); a method with any other name cannot be parallelized unless the module is wrapped, as this class does.
Examples
>>> class A(torch.nn.Module):
...     def foo(self, x):
...         ...
>>> model = A()
>>> model = ForwardAdaptor(model, "foo")
>>> model = torch.nn.DataParallel(model, device_ids=[0, 1])
>>> x = torch.randn(2, 10)
>>> model(x)
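Conceptually, the adaptor only needs to route forward() to the named method; a minimal sketch of that idea, not necessarily the exact implementation:

import torch

class ForwardAdaptorSketch(torch.nn.Module):
    def __init__(self, module: torch.nn.Module, name: str):
        super().__init__()
        assert hasattr(module, name), f"module has no method named {name}"
        self.module = module
        self.name = name

    def forward(self, *args, **kwargs):
        # Route forward() to the wrapped method so that DataParallel,
        # which parallelizes only forward(), can reach it.
        func = getattr(self.module, self.name)
        return func(*args, **kwargs)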
forward(*args, **kwargs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
espnet2.torch_utils.set_all_random_seed¶
espnet2.torch_utils.initialize¶
Initialize modules for espnet2 neural networks.
espnet2.torch_utils.initialize.initialize(model: torch.nn.modules.module.Module, init: str)[source]¶
Initialize weights of a neural network module.
Parameters are initialized using the given method or distribution.
Custom initialization routines can be implemented in submodules as a function espnet_initialization_fn within the custom module (see the sketch after the parameter list below).
- Parameters:
model – Target module.
init – Method of initialization.
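A minimal sketch of the custom hook mentioned above; the hook name espnet_initialization_fn comes from this docstring, while the specific init calls are illustrative:

import torch

class MyBlock(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def espnet_initialization_fn(self):
        # Hook that initialize() is expected to invoke on this submodule;
        # the init choices below are examples only.
        torch.nn.init.xavier_uniform_(self.linear.weight)
        torch.nn.init.zeros_(self.linear.bias)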
espnet2.torch_utils.__init__¶