Use transfer learning for ASR in ESPnet2¶
Author : Dan Berrebbi (dberrebb@andrew.cmu.edu)
Date : April 11th, 2022
Abstract¶
In this tutorial, we introduce several options for using pre-trained models/parameters for Automatic Speech Recognition (ASR) in ESPnet2. The available options are:
- use a local model that you (or a colleague) have already trained,
- use a trained model from the ESPnet repository on HuggingFace.
Note that this applies to ASR training, i.e., stage 11 of the ESPnet2 recipes.
Why use (pre-)trained models?¶
Several projects may involve making use of previously trained models; this is one reason why we developed the ESPnet repository on HuggingFace. Example use cases include (non-exhaustive):
- targeting a low-resource language: a model trained from scratch on only a few hours of data may perform poorly,
- studying a model's robustness to shifts (domain, language, ...),
- making use of massively trained multilingual models.
ESPnet installation (about 10 minutes in total)¶
Please use the GPU environment provided by Google Colab to run this notebook.
[ ]:
!git clone --depth 5 https://github.com/espnet/espnet
[ ]:
# It takes 30 seconds
%cd /content/espnet/tools
!./setup_anaconda.sh anaconda espnet 3.9
[ ]:
# It may take ~8 minutes
%cd /content/espnet/tools
!make CUDA_VERSION=10.2
mini_an4 recipe as a transfer learning example¶
In this example, we use the mini_an4 data, which has only 4 utterances for training. This is of course too small to train an ASR model, but it lets us run all of the described transfer learning methods in a Colab environment. Once you have run and understood these models/instructions, you can apply them to any other ESPnet2 recipe or to a new recipe that you build. First, move to the recipe directory.
[ ]:
%cd /content/espnet/egs2/mini_an4/asr1
/content/espnet/egs2/mini_an4/asr1
Add a configuration file
As the mini_an4 recipe does not contain a configuration file for the ASR model, we add one here.
[ ]:
config = {'accum_grad': 1,
'batch_size': 1,
'batch_type': 'folded',
'best_model_criterion': [['valid', 'acc', 'max']],
'decoder': 'transformer',
'decoder_conf': {'dropout_rate': 0.1,
'input_layer': 'embed',
'linear_units': 2048,
'num_blocks': 6},
'encoder': 'transformer',
'encoder_conf': {'attention_dropout_rate': 0.0,
'attention_heads': 4,
'dropout_rate': 0.1,
'input_layer': 'conv2d',
'linear_units': 2048,
'num_blocks': 12,
'output_size': 256},
'grad_clip': 5,
'init': 'xavier_uniform',
'keep_nbest_models': 1,
'max_epoch': 5,
'model_conf': {'ctc_weight': 0.3,
'length_normalized_loss': False,
'lsm_weight': 0.1},
'optim': 'adam',
'optim_conf': {'lr': 1.0},
'patience': 0,
'scheduler': 'noamlr',
'scheduler_conf': {'warmup_steps': 1000}}
[ ]:
import yaml
with open("conf/train_asr.yaml","w") as f:
yaml.dump(config, f)
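As a quick optional sanity check (not part of the recipe), we can read the configuration back and inspect a few of the fields we set above:
[ ]:
import yaml

# Re-load the config we just wrote and check a couple of values.
with open("conf/train_asr.yaml") as f:
    cfg = yaml.safe_load(f)
print(cfg["encoder"], cfg["encoder_conf"]["output_size"])  # transformer 256
print(cfg["max_epoch"])  # 5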
Data preparation (stage 1 - stage 5)
[ ]:
!./asr.sh --stage 1 --stop_stage 5 --train-set "train_nodev" --valid-set "train_dev" --test_sets "test"
Stage 10: ASR collect stats:
[ ]:
# takes about 10 seconds
!./asr.sh --stage 10 --stop_stage 10 --train-set "train_nodev" --valid-set "train_dev" --test_sets "test" --asr_config "conf/train_asr.yaml"
2022-04-04T22:16:43 (asr.sh:252:main) ./asr.sh --stage 10 --stop_stage 10 --train-set train_nodev --valid-set train_dev --test_sets test --asr_config conf/train_asr.yaml
2022-04-04T22:16:43 (asr.sh:911:main) Stage 10: ASR collect stats: train_set=dump/raw/train_nodev, valid_set=dump/raw/train_dev
2022-04-04T22:16:43 (asr.sh:961:main) Generate 'exp/asr_stats_raw_bpe30/run.sh'. You can resume the process from stage 10 using this script
2022-04-04T22:16:43 (asr.sh:965:main) ASR collect-stats started... log: 'exp/asr_stats_raw_bpe30/logdir/stats.*.log'
/content/espnet/tools/anaconda/envs/espnet/bin/python3 /content/espnet/espnet2/bin/aggregate_stats_dirs.py --input_dir exp/asr_stats_raw_bpe30/logdir/stats.1 --output_dir exp/asr_stats_raw_bpe30
2022-04-04T22:16:48 (asr.sh:1480:main) Skip the uploading stage
2022-04-04T22:16:48 (asr.sh:1532:main) Skip the uploading to HuggingFace stage
2022-04-04T22:16:48 (asr.sh:1535:main) Successfully finished. [elapsed=5s]
Stage 11: ASR training (from scratch)
We train our model for only 5 epochs, just to have a pre-trained model.
[ ]:
# takes about 1-2 minutes
!./asr.sh --stage 11 --stop_stage 11 --train-set "train_nodev" --valid-set "train_dev" --test_sets "test" --asr_config "conf/train_asr.yaml" --asr_tag "pre_trained_model"
Stage 11.2: ASR training over a pre-trained model
We train our new model over the previously trained model. (Here, since we use the same training data, this is not very useful in practice, but it keeps the toy example reproducible with any model.)
Step 1: make sure your ASR model file has the proper ESPnet format (this should be the case if it was trained with ESPnet). It just needs to be a PyTorch model file of the ".pth" (or ".pt", or another extension) type.

Step 2: add the parameter --pretrained_model path/to/your/pretrained/model/file.pth to run.sh.

Step 3: step 2 will initialize your new model with the parameters of the pre-trained model, so your new model is trained from a strong initialization. However, if your new model has different parameter sizes for some parts (e.g. the last projection layer may have been modified), this will lead to an error because of the size mismatches. To prevent this from happening, you can add the parameter --ignore_init_mismatch true in run.sh.

Step 4 (optional): if you only want to use some specific parts of the pre-trained model, or to exclude specific parts, you can specify this in the --pretrained_model argument by passing the component names with the following syntax: --pretrained_model <file_path>:<src_key>:<dst_key>:<exclude_keys>. src_key are the parameters you want to keep from the pre-trained model, dst_key are the parameters of the new model to be initialized with the src_key parameters, and exclude_keys are the parameters from the pre-trained model that you do not want to use. You can leave the src_key and dst_key fields empty and only fill exclude_keys with the parameters you want to drop. For instance, if you want to re-use the encoder parameters but not the decoder ones, the syntax is --pretrained_model <file_path>:::decoder. You can see the expected argument format in more detail here.
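To make the <src_key>:<dst_key>:<exclude_keys> semantics concrete, here is a minimal, purely illustrative Python sketch of this kind of selective initialization. It is not ESPnet's actual implementation (see the link above for that); the function name and the comma-separated exclude convention are assumptions for illustration.
[ ]:
import torch

def filter_state_dict(ckpt_path, src_key="", dst_key="", exclude_keys=""):
    """Illustrative analogue of <file_path>:<src_key>:<dst_key>:<exclude_keys>."""
    state = torch.load(ckpt_path, map_location="cpu")
    # Keep only parameters under src_key (empty string keeps everything).
    if src_key:
        state = {k[len(src_key) + 1:]: v for k, v in state.items()
                 if k.startswith(src_key + ".")}
    # Drop parameters whose names start with an excluded prefix.
    for ex in filter(None, exclude_keys.split(",")):
        state = {k: v for k, v in state.items() if not k.startswith(ex)}
    # Re-prefix the remaining parameters so they match the destination model.
    if dst_key:
        state = {dst_key + "." + k: v for k, v in state.items()}
    return state

# e.g. the analogue of --pretrained_model <file_path>:::decoder
# enc_only = filter_state_dict("path/to/model.pth", exclude_keys="decoder")
# model.load_state_dict(enc_only, strict=False)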
[ ]:
# takes about 1-2 minutes
!./asr.sh --stage 11 --stop_stage 11 --train-set "train_nodev" --valid-set "train_dev" \
--test_sets "test" --asr_config "conf/train_asr.yaml" --asr_tag "transfer_learning_with_pre_trained_model"\
--pretrained_model "/content/espnet/egs2/mini_an4/asr1/exp/asr_train_asr_raw_bpe30/valid.acc.ave.pth"
2022-04-04T22:23:12 (asr.sh:252:main) ./asr.sh --stage 11 --stop_stage 11 --train-set train_nodev --valid-set train_dev --test_sets test --asr_config conf/train_asr.yaml --asr_tag transfer_learning_with_pre_trained_model --pretrained_model /content/espnet/egs2/mini_an4/asr1/exp/asr_train_asr_raw_bpe30/valid.acc.ave.pth
2022-04-04T22:23:13 (asr.sh:1012:main) Stage 11: ASR Training: train_set=dump/raw/train_nodev, valid_set=dump/raw/train_dev
2022-04-04T22:23:13 (asr.sh:1079:main) Generate 'exp/asr_transfer_learning_with_pre_trained_model/run.sh'. You can resume the process from stage 11 using this script
2022-04-04T22:23:13 (asr.sh:1083:main) ASR training started... log: 'exp/asr_transfer_learning_with_pre_trained_model/train.log'
2022-04-04 22:23:13,470 (launch:95) INFO: /content/espnet/tools/anaconda/envs/espnet/bin/python3 /content/espnet/espnet2/bin/launch.py --cmd 'run.pl --name exp/asr_transfer_learning_with_pre_trained_model/train.log' --log exp/asr_transfer_learning_with_pre_trained_model/train.log --ngpu 1 --num_nodes 1 --init_file_prefix exp/asr_transfer_learning_with_pre_trained_model/.dist_init_ --multiprocessing_distributed true -- python3 -m espnet2.bin.asr_train --use_preprocessor true --bpemodel data/token_list/bpe_unigram30/bpe.model --token_type bpe --token_list data/token_list/bpe_unigram30/tokens.txt --non_linguistic_symbols none --cleaner none --g2p none --valid_data_path_and_name_and_type dump/raw/train_dev/wav.scp,speech,sound --valid_data_path_and_name_and_type dump/raw/train_dev/text,text,text --valid_shape_file exp/asr_stats_raw_bpe30/valid/speech_shape --valid_shape_file exp/asr_stats_raw_bpe30/valid/text_shape.bpe --resume true --init_param /content/espnet/egs2/mini_an4/asr1/exp/asr_train_asr_raw_bpe30/valid.acc.ave.pth --ignore_init_mismatch false --fold_length 80000 --fold_length 150 --output_dir exp/asr_transfer_learning_with_pre_trained_model --config conf/train_asr.yaml --frontend_conf fs=16k --normalize=global_mvn --normalize_conf stats_file=exp/asr_stats_raw_bpe30/train/feats_stats.npz --train_data_path_and_name_and_type dump/raw/train_nodev/wav.scp,speech,sound --train_data_path_and_name_and_type dump/raw/train_nodev/text,text,text --train_shape_file exp/asr_stats_raw_bpe30/train/speech_shape --train_shape_file exp/asr_stats_raw_bpe30/train/text_shape.bpe
2022-04-04 22:23:13,504 (launch:349) INFO: log file: exp/asr_transfer_learning_with_pre_trained_model/train.log
2022-04-04T22:24:24 (asr.sh:1480:main) Skip the uploading stage
2022-04-04T22:24:24 (asr.sh:1532:main) Skip the uploading to HuggingFace stage
2022-04-04T22:24:24 (asr.sh:1535:main) Successfully finished. [elapsed=72s]
Stage 11.3: ASR training over a HuggingFace pre-trained model
We train our new model over a pre-trained model from HuggingFace. Any model can be used; here we take a model trained on Bengali as an example. It can be found at https://huggingface.co/espnet/bn_openslr53.
Use a trained model from the ESPnet repository on HuggingFace¶
The ESPnet repository on HuggingFace contains more than 200 pre-trained models, for a wide variety of languages and datasets, and we are actively expanding this repository with new models every week! This enables any user to perform transfer learning with a wide variety of models without having to re-train them. In order to use our pre-trained models, the first step is to download the ".pth" model file from its HuggingFace page.
There are several easy ways to do this: manually downloading it (e.g. wget https://huggingface.co/espnet/bn_openslr53/resolve/main/exp/asr_train_asr_raw_bpe1000/41epoch.pth), cloning the repository (git clone https://huggingface.co/espnet/bn_openslr53), or downloading it through an ESPnet recipe (described on the models' pages on HuggingFace):
cd espnet
git checkout fa1b865352475b744c37f70440de1cc6b257ba70
pip install -e .
cd egs2/bn_openslr53/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/bn_openslr53
Then, once you have the ".pth" model file, you can follow steps 1 to 4 from the previous section to use this pre-trained model.
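If you prefer doing this from Python, a minimal sketch using the huggingface_hub package (an assumption: it is not installed by this recipe, so you would need pip install huggingface_hub first) downloads the same file as the wget command below:
[ ]:
from huggingface_hub import hf_hub_download

# Fetch the checkpoint file from the espnet/bn_openslr53 model repo.
local_path = hf_hub_download(
    repo_id="espnet/bn_openslr53",
    filename="exp/asr_train_asr_raw_bpe1000/41epoch.pth",
)
print(local_path)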
[ ]:
!wget https://huggingface.co/espnet/bn_openslr53/resolve/main/exp/asr_train_asr_raw_bpe1000/41epoch.pth
--2022-04-04 22:25:38-- https://huggingface.co/espnet/bn_openslr53/resolve/main/exp/asr_train_asr_raw_bpe1000/41epoch.pth
Resolving huggingface.co (huggingface.co)... 34.200.173.213, 34.197.58.156, 34.198.1.82, ...
Connecting to huggingface.co (huggingface.co)|34.200.173.213|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://cdn-lfs.huggingface.co/repos/93/20/93201c6e680320b347075e21105ff3d3fe5147b0fcab0126f2d3b56ed1eea0d1/6efee10b5e3904bb7a86f0bfa42761d015c2817695d78bc833c7a76c281433ac [following]
--2022-04-04 22:25:38-- https://cdn-lfs.huggingface.co/repos/93/20/93201c6e680320b347075e21105ff3d3fe5147b0fcab0126f2d3b56ed1eea0d1/6efee10b5e3904bb7a86f0bfa42761d015c2817695d78bc833c7a76c281433ac
Resolving cdn-lfs.huggingface.co (cdn-lfs.huggingface.co)... 108.159.227.69, 108.159.227.123, 108.159.227.71, ...
Connecting to cdn-lfs.huggingface.co (cdn-lfs.huggingface.co)|108.159.227.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 111680269 (107M) [application/zip]
Saving to: ‘41epoch.pth’
41epoch.pth 100%[===================>] 106.51M 68.0MB/s in 1.6s
2022-04-04 22:25:40 (68.0 MB/s) - ‘41epoch.pth’ saved [111680269/111680269]
The next command will raise an error because of the size mismatch of some parameters, as mentioned above (step 3).
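Before launching it, we can optionally inspect the checkpoint to see where the mismatch will come from. This assumes the ".pth" file is a plain PyTorch state dict, which is how ESPnet saves model parameters:
[ ]:
import torch

# Load the Bengali checkpoint on CPU and inspect the vocabulary-dependent tensors.
state = torch.load("41epoch.pth", map_location="cpu")
# The Bengali model uses 1000 BPE units, while our toy recipe uses 30,
# hence the size mismatches reported in the log below.
print(state["decoder.output_layer.weight"].shape)  # torch.Size([1000, 256])
print(state["ctc.ctc_lo.weight"].shape)            # torch.Size([1000, 256])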
[ ]:
# will fail in about 5 seconds
!./asr.sh --stage 11 --stop_stage 11 --train-set "train_nodev" --valid-set "train_dev" \
--test_sets "test" --asr_config "conf/train_asr.yaml" --asr_tag "transfer_learning_with_pre_trained_model"\
--pretrained_model "/content/espnet/egs2/mini_an4/asr1/41epoch.pth"
2022-04-04T22:26:29 (asr.sh:252:main) ./asr.sh --stage 11 --stop_stage 11 --train-set train_nodev --valid-set train_dev --test_sets test --asr_config conf/train_asr.yaml --asr_tag transfer_learning_with_pre_trained_model --pretrained_model /content/espnet/egs2/mini_an4/asr1/41epoch.pth
2022-04-04T22:26:29 (asr.sh:1012:main) Stage 11: ASR Training: train_set=dump/raw/train_nodev, valid_set=dump/raw/train_dev
2022-04-04T22:26:29 (asr.sh:1079:main) Generate 'exp/asr_transfer_learning_with_pre_trained_model/run.sh'. You can resume the process from stage 11 using this script
2022-04-04T22:26:29 (asr.sh:1083:main) ASR training started... log: 'exp/asr_transfer_learning_with_pre_trained_model/train.log'
2022-04-04 22:26:29,844 (launch:95) INFO: /content/espnet/tools/anaconda/envs/espnet/bin/python3 /content/espnet/espnet2/bin/launch.py --cmd 'run.pl --name exp/asr_transfer_learning_with_pre_trained_model/train.log' --log exp/asr_transfer_learning_with_pre_trained_model/train.log --ngpu 1 --num_nodes 1 --init_file_prefix exp/asr_transfer_learning_with_pre_trained_model/.dist_init_ --multiprocessing_distributed true -- python3 -m espnet2.bin.asr_train --use_preprocessor true --bpemodel data/token_list/bpe_unigram30/bpe.model --token_type bpe --token_list data/token_list/bpe_unigram30/tokens.txt --non_linguistic_symbols none --cleaner none --g2p none --valid_data_path_and_name_and_type dump/raw/train_dev/wav.scp,speech,sound --valid_data_path_and_name_and_type dump/raw/train_dev/text,text,text --valid_shape_file exp/asr_stats_raw_bpe30/valid/speech_shape --valid_shape_file exp/asr_stats_raw_bpe30/valid/text_shape.bpe --resume true --init_param /content/espnet/egs2/mini_an4/asr1/41epoch.pth --ignore_init_mismatch false --fold_length 80000 --fold_length 150 --output_dir exp/asr_transfer_learning_with_pre_trained_model --config conf/train_asr.yaml --frontend_conf fs=16k --normalize=global_mvn --normalize_conf stats_file=exp/asr_stats_raw_bpe30/train/feats_stats.npz --train_data_path_and_name_and_type dump/raw/train_nodev/wav.scp,speech,sound --train_data_path_and_name_and_type dump/raw/train_nodev/text,text,text --train_shape_file exp/asr_stats_raw_bpe30/train/speech_shape --train_shape_file exp/asr_stats_raw_bpe30/train/text_shape.bpe
2022-04-04 22:26:29,872 (launch:349) INFO: log file: exp/asr_transfer_learning_with_pre_trained_model/train.log
run.pl: job failed, log is in exp/asr_transfer_learning_with_pre_trained_model/train.log
Command '['run.pl', '--name', 'exp/asr_transfer_learning_with_pre_trained_model/train.log', '--gpu', '1', 'exp/asr_transfer_learning_with_pre_trained_model/train.log', 'python3', '-m', 'espnet2.bin.asr_train', '--use_preprocessor', 'true', '--bpemodel', 'data/token_list/bpe_unigram30/bpe.model', '--token_type', 'bpe', '--token_list', 'data/token_list/bpe_unigram30/tokens.txt', '--non_linguistic_symbols', 'none', '--cleaner', 'none', '--g2p', 'none', '--valid_data_path_and_name_and_type', 'dump/raw/train_dev/wav.scp,speech,sound', '--valid_data_path_and_name_and_type', 'dump/raw/train_dev/text,text,text', '--valid_shape_file', 'exp/asr_stats_raw_bpe30/valid/speech_shape', '--valid_shape_file', 'exp/asr_stats_raw_bpe30/valid/text_shape.bpe', '--resume', 'true', '--init_param', '/content/espnet/egs2/mini_an4/asr1/41epoch.pth', '--ignore_init_mismatch', 'false', '--fold_length', '80000', '--fold_length', '150', '--output_dir', 'exp/asr_transfer_learning_with_pre_trained_model', '--config', 'conf/train_asr.yaml', '--frontend_conf', 'fs=16k', '--normalize=global_mvn', '--normalize_conf', 'stats_file=exp/asr_stats_raw_bpe30/train/feats_stats.npz', '--train_data_path_and_name_and_type', 'dump/raw/train_nodev/wav.scp,speech,sound', '--train_data_path_and_name_and_type', 'dump/raw/train_nodev/text,text,text', '--train_shape_file', 'exp/asr_stats_raw_bpe30/train/speech_shape', '--train_shape_file', 'exp/asr_stats_raw_bpe30/train/text_shape.bpe', '--ngpu', '1', '--multiprocessing_distributed', 'True']' returned non-zero exit status 1.
Traceback (most recent call last):
File "/content/espnet/tools/anaconda/envs/espnet/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/content/espnet/tools/anaconda/envs/espnet/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/content/espnet/espnet2/bin/launch.py", line 385, in <module>
main()
File "/content/espnet/espnet2/bin/launch.py", line 376, in main
raise RuntimeError(
RuntimeError:
################### The last 1000 lines of exp/asr_transfer_learning_with_pre_trained_model/train.log ###################
# python3 -m espnet2.bin.asr_train --use_preprocessor true --bpemodel data/token_list/bpe_unigram30/bpe.model --token_type bpe --token_list data/token_list/bpe_unigram30/tokens.txt --non_linguistic_symbols none --cleaner none --g2p none --valid_data_path_and_name_and_type dump/raw/train_dev/wav.scp,speech,sound --valid_data_path_and_name_and_type dump/raw/train_dev/text,text,text --valid_shape_file exp/asr_stats_raw_bpe30/valid/speech_shape --valid_shape_file exp/asr_stats_raw_bpe30/valid/text_shape.bpe --resume true --init_param /content/espnet/egs2/mini_an4/asr1/41epoch.pth --ignore_init_mismatch false --fold_length 80000 --fold_length 150 --output_dir exp/asr_transfer_learning_with_pre_trained_model --config conf/train_asr.yaml --frontend_conf fs=16k --normalize=global_mvn --normalize_conf stats_file=exp/asr_stats_raw_bpe30/train/feats_stats.npz --train_data_path_and_name_and_type dump/raw/train_nodev/wav.scp,speech,sound --train_data_path_and_name_and_type dump/raw/train_nodev/text,text,text --train_shape_file exp/asr_stats_raw_bpe30/train/speech_shape --train_shape_file exp/asr_stats_raw_bpe30/train/text_shape.bpe --ngpu 1 --multiprocessing_distributed True
# Started at Mon Apr 4 22:26:29 UTC 2022
#
/content/espnet/tools/anaconda/envs/espnet/bin/python3 /content/espnet/espnet2/bin/asr_train.py --use_preprocessor true --bpemodel data/token_list/bpe_unigram30/bpe.model --token_type bpe --token_list data/token_list/bpe_unigram30/tokens.txt --non_linguistic_symbols none --cleaner none --g2p none --valid_data_path_and_name_and_type dump/raw/train_dev/wav.scp,speech,sound --valid_data_path_and_name_and_type dump/raw/train_dev/text,text,text --valid_shape_file exp/asr_stats_raw_bpe30/valid/speech_shape --valid_shape_file exp/asr_stats_raw_bpe30/valid/text_shape.bpe --resume true --init_param /content/espnet/egs2/mini_an4/asr1/41epoch.pth --ignore_init_mismatch false --fold_length 80000 --fold_length 150 --output_dir exp/asr_transfer_learning_with_pre_trained_model --config conf/train_asr.yaml --frontend_conf fs=16k --normalize=global_mvn --normalize_conf stats_file=exp/asr_stats_raw_bpe30/train/feats_stats.npz --train_data_path_and_name_and_type dump/raw/train_nodev/wav.scp,speech,sound --train_data_path_and_name_and_type dump/raw/train_nodev/text,text,text --train_shape_file exp/asr_stats_raw_bpe30/train/speech_shape --train_shape_file exp/asr_stats_raw_bpe30/train/text_shape.bpe --ngpu 1 --multiprocessing_distributed True
[a7588ebdfd24] 2022-04-04 22:26:32,466 (asr:411) INFO: Vocabulary size: 30
/content/espnet/espnet2/schedulers/noam_lr.py:40: UserWarning: NoamLR is deprecated. Use WarmupLR(warmup_steps=1000) with Optimizer(lr=0.0017677669529663688)
warnings.warn(
[a7588ebdfd24] 2022-04-04 22:26:34,960 (abs_task:1157) INFO: pytorch.version=1.10.1, cuda.available=True, cudnn.version=7605, cudnn.benchmark=False, cudnn.deterministic=True
[a7588ebdfd24] 2022-04-04 22:26:34,966 (abs_task:1158) INFO: Model structure:
ESPnetASRModel(
(frontend): DefaultFrontend(
(stft): Stft(n_fft=512, win_length=512, hop_length=128, center=True, normalized=False, onesided=True)
(frontend): Frontend()
(logmel): LogMel(sr=16000, n_fft=512, n_mels=80, fmin=0, fmax=8000.0, htk=False)
)
(normalize): GlobalMVN(stats_file=exp/asr_stats_raw_bpe30/train/feats_stats.npz, norm_means=True, norm_vars=True)
(encoder): TransformerEncoder(
(embed): Conv2dSubsampling(
(conv): Sequential(
(0): Conv2d(1, 256, kernel_size=(3, 3), stride=(2, 2))
(1): ReLU()
(2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2))
(3): ReLU()
)
(out): Sequential(
(0): Linear(in_features=4864, out_features=256, bias=True)
(1): PositionalEncoding(
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(encoders): MultiSequential(
(0): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(2): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(3): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(4): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(5): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(6): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(7): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(8): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(9): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(10): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(11): EncoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(after_norm): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
)
(decoder): TransformerDecoder(
(embed): Sequential(
(0): Embedding(30, 256)
(1): PositionalEncoding(
(dropout): Dropout(p=0.1, inplace=False)
)
)
(after_norm): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(output_layer): Linear(in_features=256, out_features=30, bias=True)
(decoders): MultiSequential(
(0): DecoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(src_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm3): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(1): DecoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(src_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm3): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(2): DecoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(src_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm3): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(3): DecoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(src_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm3): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(4): DecoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(src_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm3): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(5): DecoderLayer(
(self_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(src_attn): MultiHeadedAttention(
(linear_q): Linear(in_features=256, out_features=256, bias=True)
(linear_k): Linear(in_features=256, out_features=256, bias=True)
(linear_v): Linear(in_features=256, out_features=256, bias=True)
(linear_out): Linear(in_features=256, out_features=256, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): PositionwiseFeedForward(
(w_1): Linear(in_features=256, out_features=2048, bias=True)
(w_2): Linear(in_features=2048, out_features=256, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(activation): ReLU()
)
(norm1): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm2): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(norm3): LayerNorm((256,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(criterion_att): LabelSmoothingLoss(
(criterion): KLDivLoss()
)
(ctc): CTC(
(ctc_lo): Linear(in_features=256, out_features=30, bias=True)
(ctc_loss): CTCLoss()
)
)
Model summary:
Class Name: ESPnetASRModel
Total Number of model parameters: 27.12 M
Number of trainable parameters: 27.12 M (100.0%)
Size: 108.46 MB
Type: torch.float32
[a7588ebdfd24] 2022-04-04 22:26:34,966 (abs_task:1161) INFO: Optimizer:
Adam (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
eps: 1e-08
initial_lr: 1.0
lr: 1.7677669529663689e-06
weight_decay: 0
)
[a7588ebdfd24] 2022-04-04 22:26:34,966 (abs_task:1162) INFO: Scheduler: NoamLR(model_size=320, warmup_steps=1000)
[a7588ebdfd24] 2022-04-04 22:26:34,966 (abs_task:1171) INFO: Saving the configuration in exp/asr_transfer_learning_with_pre_trained_model/config.yaml
[a7588ebdfd24] 2022-04-04 22:26:34,977 (abs_task:1228) INFO: Loading pretrained params from /content/espnet/egs2/mini_an4/asr1/41epoch.pth
Traceback (most recent call last):
File "/content/espnet/tools/anaconda/envs/espnet/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/content/espnet/tools/anaconda/envs/espnet/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/content/espnet/espnet2/bin/asr_train.py", line 23, in <module>
main()
File "/content/espnet/espnet2/bin/asr_train.py", line 19, in main
ASRTask.main(cmd=cmd)
File "/content/espnet/espnet2/tasks/abs_task.py", line 1019, in main
cls.main_worker(args)
File "/content/espnet/espnet2/tasks/abs_task.py", line 1229, in main_worker
load_pretrained_model(
File "/content/espnet/espnet2/torch_utils/load_pretrained_model.py", line 117, in load_pretrained_model
obj.load_state_dict(dst_state)
File "/content/espnet/tools/anaconda/envs/espnet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1482, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ESPnetASRModel:
size mismatch for decoder.embed.0.weight: copying a param with shape torch.Size([1000, 256]) from checkpoint, the shape in current model is torch.Size([30, 256]).
size mismatch for decoder.output_layer.weight: copying a param with shape torch.Size([1000, 256]) from checkpoint, the shape in current model is torch.Size([30, 256]).
size mismatch for decoder.output_layer.bias: copying a param with shape torch.Size([1000]) from checkpoint, the shape in current model is torch.Size([30]).
size mismatch for ctc.ctc_lo.weight: copying a param with shape torch.Size([1000, 256]) from checkpoint, the shape in current model is torch.Size([30, 256]).
size mismatch for ctc.ctc_lo.bias: copying a param with shape torch.Size([1000]) from checkpoint, the shape in current model is torch.Size([30]).
# Accounting: time=6 threads=1
# Ended (code 1) at Mon Apr 4 22:26:35 UTC 2022, elapsed time 6 seconds
To solve this issue, as mentioned, we can use the --ignore_init_mismatch "true" parameter.
[ ]:
# takes about 1-2 minutes
!./asr.sh --stage 11 --stop_stage 11 --train-set "train_nodev" --valid-set "train_dev" \
--test_sets "test" --asr_config "conf/train_asr.yaml" --asr_tag "transfer_learning_with_pre_trained_model_from_HF"\
--pretrained_model "/content/espnet/egs2/mini_an4/asr1/41epoch.pth" --ignore_init_mismatch "true"
2022-04-04T22:35:41 (asr.sh:252:main) ./asr.sh --stage 11 --stop_stage 11 --train-set train_nodev --valid-set train_dev --test_sets test --asr_config conf/train_asr.yaml --asr_tag transfer_learning_with_pre_trained_model_from_HF --pretrained_model /content/espnet/egs2/mini_an4/asr1/41epoch.pth --ignore_init_mismatch true
2022-04-04T22:35:42 (asr.sh:1012:main) Stage 11: ASR Training: train_set=dump/raw/train_nodev, valid_set=dump/raw/train_dev
2022-04-04T22:35:42 (asr.sh:1079:main) Generate 'exp/asr_transfer_learning_with_pre_trained_model_from_HF/run.sh'. You can resume the process from stage 11 using this script
2022-04-04T22:35:42 (asr.sh:1083:main) ASR training started... log: 'exp/asr_transfer_learning_with_pre_trained_model_from_HF/train.log'
2022-04-04 22:35:42,611 (launch:95) INFO: /content/espnet/tools/anaconda/envs/espnet/bin/python3 /content/espnet/espnet2/bin/launch.py --cmd 'run.pl --name exp/asr_transfer_learning_with_pre_trained_model_from_HF/train.log' --log exp/asr_transfer_learning_with_pre_trained_model_from_HF/train.log --ngpu 1 --num_nodes 1 --init_file_prefix exp/asr_transfer_learning_with_pre_trained_model_from_HF/.dist_init_ --multiprocessing_distributed true -- python3 -m espnet2.bin.asr_train --use_preprocessor true --bpemodel data/token_list/bpe_unigram30/bpe.model --token_type bpe --token_list data/token_list/bpe_unigram30/tokens.txt --non_linguistic_symbols none --cleaner none --g2p none --valid_data_path_and_name_and_type dump/raw/train_dev/wav.scp,speech,sound --valid_data_path_and_name_and_type dump/raw/train_dev/text,text,text --valid_shape_file exp/asr_stats_raw_bpe30/valid/speech_shape --valid_shape_file exp/asr_stats_raw_bpe30/valid/text_shape.bpe --resume true --init_param /content/espnet/egs2/mini_an4/asr1/41epoch.pth --ignore_init_mismatch true --fold_length 80000 --fold_length 150 --output_dir exp/asr_transfer_learning_with_pre_trained_model_from_HF --config conf/train_asr.yaml --frontend_conf fs=16k --normalize=global_mvn --normalize_conf stats_file=exp/asr_stats_raw_bpe30/train/feats_stats.npz --train_data_path_and_name_and_type dump/raw/train_nodev/wav.scp,speech,sound --train_data_path_and_name_and_type dump/raw/train_nodev/text,text,text --train_shape_file exp/asr_stats_raw_bpe30/train/speech_shape --train_shape_file exp/asr_stats_raw_bpe30/train/text_shape.bpe
2022-04-04 22:35:42,653 (launch:349) INFO: log file: exp/asr_transfer_learning_with_pre_trained_model_from_HF/train.log
2022-04-04T22:37:09 (asr.sh:1480:main) Skip the uploading stage
2022-04-04T22:37:09 (asr.sh:1532:main) Skip the uploading to HuggingFace stage
2022-04-04T22:37:09 (asr.sh:1535:main) Successfully finished. [elapsed=88s]
Additional note about the --ignore_init_mismatch true option: this option is very convenient because in many transfer learning use cases, you will aim to use a model trained on a language X (e.g. X=English) for another language Y. Language Y may have a vocabulary (set of tokens) different from that of language X: for instance, if you target Y=Totonac, a low-resource Mexican language, your model may be stronger if you use a different set of BPE units/tokens than the one used to train the English model. In that situation, the last layer of your ASR model (the projection to the vocabulary space) needs to be initialized from scratch and may differ in shape from that of the English model. For that reason, you should use the --ignore_init_mismatch true option. It also handles the case where the scripts (writing systems) of languages X and Y differ.
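For intuition, here is a hedged sketch of what --ignore_init_mismatch does conceptually: tensors whose shapes differ between the checkpoint and the new model (here, the vocabulary-dependent layers) are skipped, and everything else is loaded. This is an illustration, not ESPnet's actual code:
[ ]:
import torch

def load_ignoring_mismatch(model, ckpt_path):
    """Load every checkpoint tensor whose name and shape match the model."""
    src = torch.load(ckpt_path, map_location="cpu")
    dst = model.state_dict()
    matched = {k: v for k, v in src.items()
               if k in dst and dst[k].shape == v.shape}
    model.load_state_dict(matched, strict=False)
    # Skipped tensors keep their fresh initialization, e.g. the decoder
    # embedding/output layer and the CTC projection in our example.
    return sorted(set(src) - set(matched))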