How to do isolated word recognition with Kaldi (version 1)

------------------------------------------------------------------------------------------------------------------------------------------------------

The reference example for isolated words is the yes/no recipe.

------------------------------------------------------------------------------------------------------------------------------------------------------

Here we run a 10-word recognition experiment to get familiar with the whole pipeline.

Next, we will try some new models to raise the recognition rate;

then test the model's robustness to speaking rate, intonation, and stationary noise, and try optimizing existing denoising algorithms for the front end;

then expand the number of isolated words, prune the model, optimize efficiency, get familiar with the FST decoder, and bring isolated word recognition on embedded hardware to a practical level.

Finally, move on to continuous speech recognition and get familiar with language models.

------------------------------------------------------------------------------------------------------------------------------------------------------

1. Description of the recorded corpus

Create our own project coffee under egs/, then create s5 inside it.

Copy in the coffee speech corpus, plus the following folders and files from yes/no (a setup sketch follows the listing):

.
├── conf
├── data
├── exp
├── input
├── local
├── mfcc
├── Nestle
├── path.sh
├── run.sh
├── steps -> ../../wsj/s5/steps
└── utils -> ../../wsj/s5/utils
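
For reference, a minimal sketch of this setup (assuming the standard Kaldi egs layout; the exact set of files copied from yes/no may vary):

cd kaldi/egs
mkdir -p coffee/s5 && cd coffee/s5
cp -r ../../yesno/s5/conf ../../yesno/s5/local ../../yesno/s5/input .   # starting points to adapt
cp ../../yesno/s5/path.sh ../../yesno/s5/run.sh .
ln -s ../../wsj/s5/steps steps    # the same symlinks as in the listing above
ln -s ../../wsj/s5/utils utils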

The directory structure of the Nestle corpus is as follows:

.
├── TestData
│   ├── 001_001.wav
│   ├── 001_002.wav
│   ├── 002_001.wav
│   ├── 002_002.wav
│   ├── 003_001.wav
│   ├── 003_002.wav
│   ├── 004_001.wav
│   ├── 004_002.wav
│   ├── 005_001.wav
│   ├── 005_002.wav
│   ├── 006_001.wav
│   ├── 006_002.wav
│   ├── 007_001.wav
│   ├── 007_02.wav
│   ├── 008_001.wav
│   ├── 008_002.wav
│   ├── 009_001.wav
│   ├── 009_002.wav
│   ├── 010_001.wav
│   └── 010_002.wav
└── TrainData
    ├── FastWord
    │   ├── Model2_1
    │   ├── Model2_10
    │   ├── Model2_2
    │   ├── Model2_3
    │   ├── Model2_4
    │   ├── Model2_5
    │   ├── Model2_6
    │   ├── Model2_7
    │   ├── Model2_8
    │   └── Model2_9
    ├── MiddWord
    │   ├── Model1_1
    │   ├── Model1_10
    │   ├── Model1_2
    │   ├── Model1_3
    │   ├── Model1_4
    │   ├── Model1_5
    │   ├── Model1_6
    │   ├── Model1_7
    │   ├── Model1_8
    │   └── Model1_9
    └── SlowWord
        ├── Model3_1
        ├── Model3_10
        ├── Model3_2
        ├── Model3_3
        ├── Model3_4
        ├── Model3_5
        ├── Model3_6
        ├── Model3_7
        ├── Model3_8
        └── Model3_9

First, the recorded corpus consists of coffee variety names, recorded at fast, medium, and slow speaking rates.

1) Folder naming convention: Modelx_x

Model1 means medium speed

Model2 means fast speed

Model3 means slow speed

There are 10 words; the suffix indicates the pronunciation id (1 to 10).

 

2) File naming convention

CF_speaker-id_speed_utter-id.wav

CF: abbreviation of the corpus name (coffee).
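
To make the scheme concrete, here is a small bash sketch (the file name below is made up; the grep -P pattern is the one prepare_data.sh uses to build its file lists):

name="CF_001_002_003.wav"                          # hypothetical: speaker 001, speed 002, utterance 003
echo "$name" | grep -Po 'CF(_[0-9]{1,3}){3}\.wav'  # the match prepare_data.sh keeps in its .list files
IFS=_ read prefix speaker speed utter <<< "${name%.wav}"
echo "speaker=$speaker speed=$speed utter=$utter"  # -> speaker=001 speed=002 utter=003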

 

2. Script modifications

Because the existing corpus differs in vocabulary, directory structure, audio file naming, lexicon, and language model, and because I am not very familiar with Perl, many of the scripts had to be rewritten; it was also a chance to brush up on Python regular expressions.

run.sh
The contents are as follows:
#!/bin/bash

train_cmd="utils/run.pl"
decode_cmd="utils/run.pl"

# data already recorded locally, no need to download
#if [ ! -d waves_yesno ]; then
#  wget http://www.openslr.org/resources/1/waves_yesno.tar.gz || exit 1;
  # was:
  # wget http://sourceforge.net/projects/kaldi/files/waves_yesno.tar.gz || exit 1;
#  tar -xvzf waves_yesno.tar.gz || exit 1;
#fi


train_base_name=waves_train
test_base_name=waves_test

# clean up the data, exp, and mfcc folders
rm -rf data exp mfcc

# Data preparation
# we need to rewrite the scripts below
local/prepare_data.sh Nestle    # directory structure and file naming differ from yes/no
local/prepare_dict.sh           # the dict contains 10 words, not 2
utils/prepare_lang.sh --position-dependent-phones false data/local/dict "<SIL>" data/local/lang data/lang
local/prepare_lm.sh
echo "Data preparation finished!"

# Feature extraction
for x in waves_test waves_train; do
  steps/make_mfcc.sh --nj 1 data/$x exp/make_mfcc/$x mfcc
  steps/compute_cmvn_stats.sh data/$x exp/make_mfcc/$x mfcc
  utils/fix_data_dir.sh data/$x
done
echo "Feature extraction finished!"

# Mono training
steps/train_mono.sh --nj 1 --cmd "$train_cmd" \
  --totgauss 400 \
  data/waves_train data/lang exp/mono0
echo "Mono training finished!"

  
# Graph compilation
utils/mkgraph.sh data/lang_test_tg exp/mono0 exp/mono0/graph_tgpr
echo "Graph compilation finished!"


# Decoding
steps/decode.sh --nj 1 --cmd "$decode_cmd" \
    exp/mono0/graph_tgpr data/waves_test exp/mono0/decode_waves_test

for x in exp/*/decode*; do [ -d $x ] && grep WER $x/wer_* | utils/best_wer.sh; done

 

prepare_data.sh
Modified as follows:

#!/bin/bash

mkdir -p data/local
local=`pwd`/local
scripts=`pwd`/scripts

export PATH=$PATH:`pwd`/../../../tools/irstlm/bin

echo "Preparing train and test data"

train_base_name=waves_train
test_base_name=waves_test
root_dir=$1
train_dir=$root_dir/TrainData/MiddWord
test_dir=$root_dir/TestData

echo "fix up incorrect file names"
# fix file names
for subdir in $(ls $train_dir)
do
  moddir=$train_dir/$subdir
  if [ -d $moddir ]; then
    for wav_file in $(ls $moddir)
    do
      #filename=$moddir/$wav_file
      python ./local/FixUpName.py $moddir $wav_file
    done
  fi
done
echo "fix up files finished!"

# iterate over each word's folder
# build the file list
# note: watch out for spaces
# format: utter_id  file path
for subdir in $(ls $train_dir)
do
  moddir=$train_dir/$subdir
  if [ -d $moddir ]; then
    echo "$moddir make file list and labels"
    #ls -l $moddir > ${moddir}.list
    #ls -l $moddir | grep -P 'CF.+\.wav' -o > ${moddir}.list
    ls -l $moddir | grep -P 'CF(_[0-9]{1,3}){3}\.wav' -o > ${moddir}.list
    if [ -f ${moddir}_waves_train.scp ]; then
      rm ${moddir}_waves_train.scp
    fi
    python ./local/create_coffee_labels.py ${moddir}.list ${moddir} ${moddir}_waves_train.scp
  fi
done

cat $train_dir/*.list > data/local/waves_train
#sort -u data/local/waves_train -o data/local/waves_train
cat $train_dir/*.scp > data/local/waves_train_wav.scp
#sort -u data/local/waves_train_wav.scp -o data/local/waves_train_wav.scp

ls -l $test_dir | grep -P '[0-9]{1,3}_[0-9]{1,3}\.wav' -o >data/local/waves_test
python ./local/create_coffee_test_labels.py data/local/waves_test $test_dir data/local/waves_test_wav.scp

#sort -u data/local/waves_test -o data/local/waves_test
#sort -u data/local/waves_test_wav.scp -o data/local/waves_test_wav.scp
echo "making file lists and labels finished!"

echo "making utterance transcripts"
cd data/local
# test data naming differs from training data naming
python ../../local/create_coffee_test_txt.py waves_test ../../input/trans.txt ${test_base_name}.txt
python ../../local/create_coffee_txt.py waves_train ../../input/trans.txt ${train_base_name}.txt
echo "making utterance transcripts finished"

#sort -u ${test_base_name}.txt -o ${test_base_name}.txt
#sort -u ${train_base_name}.txt -o ${train_base_name}.txt

cp ../../input/task.arpabo lm_tg.arpa

cd ../..

# do not distinguish speakers for now
# This stage was copied from the WSJ example
for x in waves_train waves_test; do
  mkdir -p data/$x
  cp data/local/${x}_wav.scp data/$x/wav.scp
  echo "data/$x/wav.scp sort!"
  sort -u data/$x/wav.scp -o data/$x/wav.scp
  cp data/local/$x.txt data/$x/text
  echo "data/$x/text sort!"
  sort -u data/$x/text -o data/$x/text
  cat data/$x/text | awk '{printf("%s global\n", $1);}' > data/$x/utt2spk
  echo "data/$x/utt2spk sort!"
  sort -u data/$x/utt2spk -o data/$x/utt2spk
  #create_coffee_speaker.py
  utils/utt2spk_to_spk2utt.pl < data/$x/utt2spk > data/$x/spk2utt
done

 
FixUpName.py

#coding=utf-8
# Pad each numeric field in a wav file name to three digits
# (e.g. 007_02.wav -> 007_002.wav) so plain-text sorting stays consistent.
import re
import sys
import os


def addzeros(matched):
    digits = matched.group('digits')
    n = len(digits)
    zeros = '0' * (3 - n)
    return (zeros + digits)

moddir = sys.argv[1]
wav_file = sys.argv[2]
filename = os.path.join(moddir, wav_file)
out_file = re.sub('(?P<digits>\d+)', addzeros, wav_file)
outname = os.path.join(moddir, out_file)
if os.path.exists(filename):
    os.rename(filename, outname)

create_coffee_labels.py

#coding=utf-8
# Build a Kaldi-style scp from a file list: <utter-id>\t<wav path>.
import re
import codecs
import sys
import os

list_file = sys.argv[1]
wav_dir = sys.argv[2]
label_file = sys.argv[3]
file_object = codecs.open(list_file, 'r', 'utf-8')
try:
    all_the_text = file_object.readlines()
finally:
    file_object.close()
pattern = r"CF_(.+)\.wav"

try:
    utf_file = codecs.open(label_file, 'w', 'utf-8')
    for each_text in all_the_text:
        print each_text
        ret = re.findall(pattern, each_text)
        if len(ret) > 0:
            # utter-id is the file name without the CF_ prefix and .wav suffix
            content = ret[0] + "\t" + wav_dir + "/" + each_text
            utf_file.write(content)
finally:
    utf_file.close()

create_coffee_test_labels.py

#coding=utf-8
# Same as create_coffee_labels.py, but test files have no CF_ prefix.
import re
import codecs
import sys
import os

list_file = sys.argv[1]
wav_dir = sys.argv[2]
label_file = sys.argv[3]
file_object = codecs.open(list_file, 'r', 'utf-8')
try:
    all_the_text = file_object.readlines()
finally:
    file_object.close()
pattern = r"(.+)\.wav"

try:
    utf_file = codecs.open(label_file, 'w', 'utf-8')
    for each_text in all_the_text:
        print each_text
        ret = re.findall(pattern, each_text)
        if len(ret) > 0:
            content = ret[0] + "\t" + wav_dir + "/" + each_text
            utf_file.write(content)
finally:
    utf_file.close()

 

create_coffee_txt.py

#coding=utf-8
# Write the transcript file: <utter-id>\t<word>, where the word is looked
# up in trans.txt by the numeric suffix of the wav file name.
import re
import codecs
import sys
import os

filename = sys.argv[1]
wordfile = sys.argv[2]
outname = sys.argv[3]

word_object = codecs.open(wordfile, 'r', 'utf-8')
try:
    all_the_words = word_object.readlines()
finally:
    word_object.close()

file_object = codecs.open(filename, 'r', 'utf-8')
try:
    all_the_text = file_object.readlines()
finally:
    file_object.close()

pattern = r"_(\d+)\.wav"

try:
    out_file = codecs.open(outname, 'w', 'utf-8')
    for each_text in all_the_text:
        ret = re.findall(pattern, each_text)
        if len(ret) > 0:
            index = int(ret[0])
            # strip the leading "CF_" and trailing ".wav\n" to get the utter-id
            content = each_text[3:-5] + "\t" + all_the_words[index - 1]
            out_file.write(content)
finally:
    out_file.close()

create_coffee_test_txt.py

#coding=utf-8
# Same as create_coffee_txt.py, but test files are named <word-id>_<utter-id>.wav.
import re
import codecs
import sys
import os

filename = sys.argv[1]
print filename
wordfile = sys.argv[2]
print wordfile
outname = sys.argv[3]

word_object = codecs.open(wordfile, 'r', 'utf-8')
try:
    all_the_words = word_object.readlines()
    print all_the_words[0]
finally:
    word_object.close()

file_object = codecs.open(filename, 'r', 'utf-8')
try:
    all_the_text = file_object.readlines()
finally:
    file_object.close()

pattern = r"(\d+)_.+\.wav"

try:
    out_file = codecs.open(outname, 'w', 'utf-8')
    for each_text in all_the_text:
        print each_text
        ret = re.findall(pattern, each_text)
        if len(ret) > 0:
            print ret[0]
            index = int(ret[0])
            print "test_data"
            print all_the_words[index - 1]
            # strip the trailing ".wav\n" to get the utter-id
            content = each_text[:-5] + "\t" + all_the_words[index - 1]
            out_file.write(content)
finally:
    out_file.close()

 

3. Experiment workflow

1. Prepare the speech corpus.

2. Prepare the script files.

All the script files under local:

├── create_coffee_labels.py
├── create_coffee_speaker.py
├── create_coffee_test_labels.py
├── create_coffee_test_txt.py
├── create_coffee_txt.py
├── FixUpName.py
├── prepare_data.sh
├── prepare_dict.sh
├── prepare_lm.sh
└── score.sh

All the files under input:

.
├── lexicon_nosil.txt
├── lexicon.txt
├── phones.txt
├── task.arpabo
└── trans.txt

conf: the configuration folder

conf
├── mfcc.conf
└── topo_orig.proto

---------------------------------------------------------------------------------------------------------------------------------------------------------------------

1. Data preparation stage

After prepare_data.sh runs, we get the text, wav.scp, utt2spk, and spk2utt files, and all of them must be guaranteed sorted.

  text : <uttid> <word>

  wav.scp : <uttid> <utter_file_path>

  utt2spk : <uttid> <speaker_id>

  spk2utt : <speaker_id> <uttid>

  word.txt : same format as text
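
A hypothetical example of matching lines across the four files (the utterance id follows the CF_* naming with the CF_ prefix stripped; "global" is the single speaker id that prepare_data.sh assigns; the word MOCHA is made up):

text:     001_001_001  MOCHA
wav.scp:  001_001_001  Nestle/TrainData/MiddWord/Model1_1/CF_001_001_001.wav
utt2spk:  001_001_001  global
spk2utt:  global  001_001_001 001_001_002 ...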

2. Prepare the lexicon and generate the language model file

Under the input folder:

├── lexicon_nosil.txt
├── lexicon.txt
├── phones.txt
└── trans.txt

lexicon.txt: the lexicon, containing the words that appear in the corpus and their pronunciations.

lexicon_nosil.txt: the lexicon without the silence entry.

phones.txt: all the phones.

trans.txt: the full training vocabulary.
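
The validation log in section 5 reports 10 nonsilence phones for the 10 words, which suggests each word is modeled as a single whole-word unit. A hypothetical lexicon.txt along those lines (the real words come from trans.txt; MOCHA and LATTE are made up):

<SIL>  SIL
MOCHA  MOCHA
LATTE  LATTE
...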

Generating the language model

task.arpabo is generated from trans.txt with ngram-count.
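
A minimal sketch of that step, assuming SRILM's ngram-count is on the PATH; for isolated words a unigram model is all the grammar we need:

ngram-count -text input/trans.txt -order 1 -lm input/task.arpabo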

4. Problems encountered

1. File names written out by the shell scripts must not contain spaces.

2. Generating the language model (the ARPA file): for isolated words the language model is not meaningful, but one must still be provided, probably because the overall Kaldi architecture requires a language model.

ngram-count

3. File naming must be regular so that the files can be sorted; otherwise you may get an error like:

 echo "$0: file $1 is not in sorted order or has duplicates" && exit 1;

Sometimes adding sort -u is enough to guarantee the order (see the sketch after this item), but if the naming is irregular the error persists, so I wrote a script that renames the corpus files:

FixUpName.py
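
Once the names are regular, a typical fix-up is to force C-locale sorting and re-validate the data directory (a sketch; utils/validate_data_dir.sh is Kaldi's standard checker):

export LC_ALL=C
for f in text wav.scp utt2spk; do
  sort -u data/waves_train/$f -o data/waves_train/$f
done
utils/validate_data_dir.sh --no-feats data/waves_train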

5. Log of a complete run

making utterance transcripts finished
data/waves_train/wav.scp sort!
data/waves_train/text sort!
data/waves_train/utt2spk sort!
data/waves_test/wav.scp sort!
data/waves_test/text sort!
data/waves_test/utt2spk sort!
Dictionary preparation succeeded
utils/prepare_lang.sh --position-dependent-phones false data/local/dict <SIL> data/local/lang data/lang
Checking data/local/dict/silence_phones.txt ...
--> reading data/local/dict/silence_phones.txt
--> data/local/dict/silence_phones.txt is OK

Checking data/local/dict/optional_silence.txt ...
--> reading data/local/dict/optional_silence.txt
--> data/local/dict/optional_silence.txt is OK

Checking data/local/dict/nonsilence_phones.txt ...
--> reading data/local/dict/nonsilence_phones.txt
--> data/local/dict/nonsilence_phones.txt is OK

Checking disjoint: silence_phones.txt, nonsilence_phones.txt
--> disjoint property is OK.

Checking data/local/dict/lexicon.txt
--> reading data/local/dict/lexicon.txt
--> data/local/dict/lexicon.txt is OK

Checking data/local/dict/extra_questions.txt ...
--> data/local/dict/extra_questions.txt is empty (this is OK)
--> SUCCESS [validating dictionary directory data/local/dict]

**Creating data/local/dict/lexiconp.txt from data/local/dict/lexicon.txt
fstaddselfloops data/lang/phones/wdisambig_phones.int data/lang/phones/wdisambig_words.int 
prepare_lang.sh: validating output directory
utils/validate_lang.pl data/lang
Checking data/lang/phones.txt ...
--> data/lang/phones.txt is OK

Checking words.txt: #0 ...
--> data/lang/words.txt is OK

Checking disjoint: silence.txt, nonsilence.txt, disambig.txt ...
--> silence.txt and nonsilence.txt are disjoint
--> silence.txt and disambig.txt are disjoint
--> disambig.txt and nonsilence.txt are disjoint
--> disjoint property is OK

Checking sumation: silence.txt, nonsilence.txt, disambig.txt ...
--> summation property is OK

Checking data/lang/phones/context_indep.{txt, int, csl} ...
--> 1 entry/entries in data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.int corresponds to data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.csl corresponds to data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.{txt, int, csl} are OK

Checking data/lang/phones/nonsilence.{txt, int, csl} ...
--> 10 entry/entries in data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.int corresponds to data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.csl corresponds to data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.{txt, int, csl} are OK

Checking data/lang/phones/silence.{txt, int, csl} ...
--> 1 entry/entries in data/lang/phones/silence.txt
--> data/lang/phones/silence.int corresponds to data/lang/phones/silence.txt
--> data/lang/phones/silence.csl corresponds to data/lang/phones/silence.txt
--> data/lang/phones/silence.{txt, int, csl} are OK

Checking data/lang/phones/optional_silence.{txt, int, csl} ...
--> 1 entry/entries in data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.int corresponds to data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.csl corresponds to data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.{txt, int, csl} are OK

Checking data/lang/phones/disambig.{txt, int, csl} ...
--> 2 entry/entries in data/lang/phones/disambig.txt
--> data/lang/phones/disambig.int corresponds to data/lang/phones/disambig.txt
--> data/lang/phones/disambig.csl corresponds to data/lang/phones/disambig.txt
--> data/lang/phones/disambig.{txt, int, csl} are OK

Checking data/lang/phones/roots.{txt, int} ...
--> 11 entry/entries in data/lang/phones/roots.txt
--> data/lang/phones/roots.int corresponds to data/lang/phones/roots.txt
--> data/lang/phones/roots.{txt, int} are OK

Checking data/lang/phones/sets.{txt, int} ...
--> 11 entry/entries in data/lang/phones/sets.txt
--> data/lang/phones/sets.int corresponds to data/lang/phones/sets.txt
--> data/lang/phones/sets.{txt, int} are OK

Checking data/lang/phones/extra_questions.{txt, int} ...
Checking optional_silence.txt ...
--> reading data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.txt is OK

Checking disambiguation symbols: #0 and #1
--> data/lang/phones/disambig.txt has "#0" and "#1"
--> data/lang/phones/disambig.txt is OK

Checking topo ...

Checking word-level disambiguation symbols...
--> data/lang/phones/wdisambig.txt exists (newer prepare_lang.sh)
Checking data/lang/oov.{txt, int} ...
--> 1 entry/entries in data/lang/oov.txt
--> data/lang/oov.int corresponds to data/lang/oov.txt
--> data/lang/oov.{txt, int} are OK

--> data/lang/L.fst is olabel sorted
--> data/lang/L_disambig.fst is olabel sorted
--> SUCCESS [validating lang directory data/lang]
Preparing language models for test
arpa2fst --disambig-symbol=#0 --read-symbol-table=data/lang_test_tg/words.txt input/task.arpabo data/lang_test_tg/G.fst 
LOG (arpa2fst[5.2.124~1396-70748]:Read():arpa-file-parser.cc:98) Reading \data\ section.
LOG (arpa2fst[5.2.124~1396-70748]:Read():arpa-file-parser.cc:153) Reading \1-grams: section.
LOG (arpa2fst[5.2.124~1396-70748]:RemoveRedundantStates():arpa-lm-compiler.cc:359) Reduced num-states from 1 to 1
fstisstochastic data/lang_test_tg/G.fst 
7.27531e-08 7.27531e-08
Succeeded in formatting data.
Data preparation finished!
steps/make_mfcc.sh --nj 1 data/waves_test exp/make_mfcc/waves_test mfcc
utils/validate_data_dir.sh: WARNING: you have only one speaker.  This probably a bad idea.
   Search for the word 'bold' in http://kaldi-asr.org/doc/data_prep.html
   for more information.
utils/validate_data_dir.sh: Successfully validated data-directory data/waves_test
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
Succeeded creating MFCC features for waves_test
steps/compute_cmvn_stats.sh data/waves_test exp/make_mfcc/waves_test mfcc
Succeeded creating CMVN stats for waves_test
fix_data_dir.sh: kept all 20 utterances.
fix_data_dir.sh: old files are kept in data/waves_test/.backup
steps/make_mfcc.sh --nj 1 data/waves_train exp/make_mfcc/waves_train mfcc
utils/validate_data_dir.sh: WARNING: you have only one speaker.  This probably a bad idea.
   Search for the word 'bold' in http://kaldi-asr.org/doc/data_prep.html
   for more information.
utils/validate_data_dir.sh: Successfully validated data-directory data/waves_train
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
Succeeded creating MFCC features for waves_train
steps/compute_cmvn_stats.sh data/waves_train exp/make_mfcc/waves_train mfcc
Succeeded creating CMVN stats for waves_train
fix_data_dir.sh: kept all 680 utterances.
fix_data_dir.sh: old files are kept in data/waves_train/.backup
Feature extraction finished!
steps/train_mono.sh --nj 1 --cmd utils/run.pl --totgauss 400 data/waves_train data/lang exp/mono0
steps/train_mono.sh: Initializing monophone system.
steps/train_mono.sh: Compiling training graphs
steps/train_mono.sh: Aligning data equally (pass 0)
steps/train_mono.sh: Pass 1
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 2
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 3
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 4
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 5
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 6
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 7
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 8
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 9
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 10
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 11
steps/train_mono.sh: Pass 12
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 13
steps/train_mono.sh: Pass 14
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 15
steps/train_mono.sh: Pass 16
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 17
steps/train_mono.sh: Pass 18
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 19
steps/train_mono.sh: Pass 20
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 21
steps/train_mono.sh: Pass 22
steps/train_mono.sh: Pass 23
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 24
steps/train_mono.sh: Pass 25
steps/train_mono.sh: Pass 26
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 27
steps/train_mono.sh: Pass 28
steps/train_mono.sh: Pass 29
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 30
steps/train_mono.sh: Pass 31
steps/train_mono.sh: Pass 32
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 33
steps/train_mono.sh: Pass 34
steps/train_mono.sh: Pass 35
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 36
steps/train_mono.sh: Pass 37
steps/train_mono.sh: Pass 38
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 39
steps/diagnostic/analyze_alignments.sh --cmd utils/run.pl data/lang exp/mono0
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 32.6470588235% of the time at utterance begin.  This may not be optimal.
steps/diagnostic/analyze_alignments.sh: see stats in exp/mono0/log/analyze_alignments.log
1 warnings in exp/mono0/log/analyze_alignments.log
exp/mono0: nj=1 align prob=-99.75 over 0.14h [retry=0.0%, fail=0.0%] states=35 gauss=396
steps/train_mono.sh: Done training monophone system in exp/mono0
Mono training finished!
tree-info exp/mono0/tree 
tree-info exp/mono0/tree 
fstpushspecial 
fsttablecompose data/lang_test_tg/L_disambig.fst data/lang_test_tg/G.fst 
fstminimizeencoded 
fstdeterminizestar --use-log=true 
fstisstochastic data/lang_test_tg/tmp/LG.fst 
0.000133727 0.000118745
fstcomposecontext --context-size=1 --central-position=0 --read-disambig-syms=data/lang_test_tg/phones/disambig.int --write-disambig-syms=data/lang_test_tg/tmp/disambig_ilabels_1_0.int data/lang_test_tg/tmp/ilabels_1_0.30709 
fstisstochastic data/lang_test_tg/tmp/CLG_1_0.fst 
0.000133727 0.000118745
make-h-transducer --disambig-syms-out=exp/mono0/graph_tgpr/disambig_tid.int --transition-scale=1.0 data/lang_test_tg/tmp/ilabels_1_0 exp/mono0/tree exp/mono0/final.mdl 
fsttablecompose exp/mono0/graph_tgpr/Ha.fst data/lang_test_tg/tmp/CLG_1_0.fst 
fstrmepslocal 
fstrmsymbols exp/mono0/graph_tgpr/disambig_tid.int 
fstminimizeencoded 
fstdeterminizestar --use-log=true 
fstisstochastic exp/mono0/graph_tgpr/HCLGa.fst 
0.000286827 -0.000292581
add-self-loops --self-loop-scale=0.1 --reorder=true exp/mono0/final.mdl 
Graph compilation finished!
steps/decode.sh --nj 1 --cmd utils/run.pl exp/mono0/graph_tgpr data/waves_test exp/mono0/decode_waves_test
decode.sh: feature type is delta
steps/diagnostic/analyze_lats.sh --cmd utils/run.pl exp/mono0/graph_tgpr exp/mono0/decode_waves_test
analyze_phone_length_stats.py: WARNING: optional-silence SIL is seen only 55.0% of the time at utterance begin.  This may not be optimal.
steps/diagnostic/analyze_lats.sh: see stats in exp/mono0/decode_waves_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(3,8,31) and mean=13.9
steps/diagnostic/analyze_lats.sh: see stats in exp/mono0/decode_waves_test/log/analyze_lattice_depth_stats.log
%WER 65.00 [ 13 / 20, 11 ins, 0 del, 2 sub ] exp/mono0/decode_waves_test/wer_10
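
For reference, the WER is (insertions + deletions + substitutions) / reference words = (11 + 0 + 2) / 20 = 65%, so the errors here are dominated by insertions.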