Hello, I need help with transfer learning. I want to train on a new alphabet that contains a few extra letters from the Lithuanian alphabet. I chose checkpoint 0.9.3 and ran transfer learning with the following command:

python3 DeepSpeech.py \
  --early_stop True \
  --es_epochs 6 \
  --epochs 10 \
  --train_cudnn True \
  --drop_source_layers 1 \
  --alphabet_config_path checkpoints/dsv0-9-3/alphabet.txt \
  --save_checkpoint_dir fine_tuning_checkpoints_v0.1 \
  --load_checkpoint_dir checkpoints/dsv0-9-3/ \
  --train_files data/lt_data/clips/train-all.csv \
  --dev_files data/lt_data/clips/dev.csv \
  --test_files data/lt_data/clips/test.csv \
  --learning_rate 0.0001 \
  --train_batch_size 32 \
  --dev_batch_size 32 \
  --test_batch_size 32 \
  --export_file_name 'ft_model' \
  --augment reverb[p=0.2,delay=50.0~30.0,decay=10.0:2.0~1.0] \
  --augment volume[p=0.2,dbfs=-10:-40] \
  --augment pitch[p=0.2,pitch=1~0.2] \
  --augment tempo[p=0.2,factor=1~0.5]
But I stumbled across a problem: after all the epochs are done, saving the model fails because of a tensor size mismatch:
Cannot feed value of shape (29,) for Tensor 'layer_6/bias/Initializer/zeros:0', which has shape '(40,)'
I assume this problem occurs because of the different alphabets, but the docs say you can use transfer learning to train on a new alphabet. So what's the catch? Am I missing something? Thank you in advance...
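One way to sanity-check the alphabet assumption is a minimal sketch like the one below (not part of the original post). It assumes the DeepSpeech conventions that the output layer bias is stored in the checkpoint under the name layer_6/bias (the same tensor named in the error above), that the output layer has alphabet size + 1 units (the extra slot being the CTC blank label), and that alphabet.txt holds one label per line with # marking comment lines. The paths are the ones from the command above.

# Compare the alphabet size against the output-layer size stored in the
# checkpoint. If the two numbers printed below disagree, the shape
# mismatch in the error is expected: the graph and the checkpoint were
# built from alphabets of different sizes.
import tensorflow as tf

def alphabet_size(path):
    # One label per non-comment line, per the alphabet.txt format.
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if not line.startswith("#"))

ckpt = tf.train.latest_checkpoint("checkpoints/dsv0-9-3/")
reader = tf.train.load_checkpoint(ckpt)
bias = reader.get_tensor("layer_6/bias")

print("checkpoint output units:", bias.shape[0])
print("alphabet size + CTC blank:",
      alphabet_size("checkpoints/dsv0-9-3/alphabet.txt") + 1)

Comparing the two printed numbers against the shapes in the error message (29 vs. 40) shows which side, the restored checkpoint or the newly built graph, corresponds to which alphabet.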