In speech analysis, synthesis, and voice conversion, the first step is usually to extract speech feature parameters.
Machine-learning approaches to these speech tasks frequently rely on the mel spectrogram.
This post shows how to extract a mel spectrogram from an audio file and how to turn a mel spectrogram back into an audio waveform.
Extracting the mel spectrogram from the audio waveform:
Pre-emphasize, frame, and window the audio signal
Apply the short-time Fourier transform (STFT) to each frame to obtain the short-time magnitude spectrogram
Pass the magnitude spectrogram through a mel filter bank to obtain the mel spectrogram
Reconstructing the audio waveform from the mel spectrogram:
Convert the mel spectrogram back to a magnitude spectrogram
Reconstruct the waveform with the Griffin-Lim vocoder algorithm
De-emphasize
There are many vocoders, such as WORLD and STRAIGHT, but Griffin-Lim is special: it can reconstruct a waveform from the spectrogram without phase information, estimating the phase from the relationship between adjacent frames instead. The synthesized audio quality is fairly high, and the code is relatively simple.
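For reference (not part of the original code), recent librosa releases (0.7 and later) also ship a built-in Griffin-Lim that can stand in for the hand-written griffin_lim function shown further below, assuming mag is a linear magnitude spectrogram of shape (1+n_fft//2, T):

# Optional alternative: librosa's built-in Griffin-Lim (assumes librosa >= 0.7)
wav = librosa.griffinlim(mag, n_iter=n_iter, hop_length=hop_length, win_length=win_length)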
Audio waveform to mel-spectrogram
import copy
import numpy as np
import librosa
from scipy import signal

sr = 24000  # Sample rate.
n_fft = 2048  # fft points (samples)
frame_shift = 0.0125  # seconds
frame_length = 0.05  # seconds
hop_length = int(sr * frame_shift)  # samples.
win_length = int(sr * frame_length)  # samples.
n_mels = 512  # Number of Mel banks to generate
power = 1.2  # Exponent for amplifying the predicted magnitude
n_iter = 100  # Number of inversion iterations
preemphasis = .97  # or None
max_db = 100
ref_db = 20
top_db = 15
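As a quick sanity check (an added note, not in the original post): with sr = 24000, the frame shift and frame length above correspond to 300 and 1200 samples respectively.

print(hop_length, win_length)  # 300 1200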
def get_spectrograms(fpath):
    '''Returns normalized log(melspectrogram) and log(magnitude) from the sound file at `fpath`.
    Args:
      fpath: A string. The full path of a sound file.
    Returns:
      mel: A 2d array of shape (T, n_mels) <- Transposed
      mag: A 2d array of shape (T, 1+n_fft/2) <- Transposed
    '''
    # Loading sound file
    y, _ = librosa.load(fpath, sr=sr)

    # Trimming
    y, _ = librosa.effects.trim(y, top_db=top_db)

    # Preemphasis
    y = np.append(y[0], y[1:] - preemphasis * y[:-1])

    # stft
    linear = librosa.stft(y=y,
                          n_fft=n_fft,
                          hop_length=hop_length,
                          win_length=win_length)

    # magnitude spectrogram
    mag = np.abs(linear)  # (1+n_fft//2, T)

    # mel spectrogram
    mel_basis = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)  # (n_mels, 1+n_fft//2)
    mel = np.dot(mel_basis, mag)  # (n_mels, T)

    # to decibel
    mel = 20 * np.log10(np.maximum(1e-5, mel))
    mag = 20 * np.log10(np.maximum(1e-5, mag))

    # normalize
    mel = np.clip((mel - ref_db + max_db) / max_db, 1e-8, 1)
    mag = np.clip((mag - ref_db + max_db) / max_db, 1e-8, 1)

    # Transpose
    mel = mel.T.astype(np.float32)  # (T, n_mels)
    mag = mag.T.astype(np.float32)  # (T, 1+n_fft//2)

    return mel, mag
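A minimal usage sketch (the file path sample.wav is a placeholder, not from the original post):

# Hypothetical example: extract features from a local file
mel, mag = get_spectrograms("sample.wav")
print(mel.shape)  # (T, n_mels) = (frames, 512) with the settings above
print(mag.shape)  # (T, 1 + n_fft // 2) = (frames, 1025)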
mel-spectrogram to audio waveform
def melspectrogram2wav(mel):
    '''Generates a waveform from a (normalized) mel spectrogram.'''
    # transpose
    mel = mel.T

    # de-normalize
    mel = (np.clip(mel, 0, 1) * max_db) - max_db + ref_db

    # to amplitude
    mel = np.power(10.0, mel * 0.05)
    m = _mel_to_linear_matrix(sr, n_fft, n_mels)
    mag = np.dot(m, mel)

    # wav reconstruction
    wav = griffin_lim(mag)

    # de-preemphasis
    wav = signal.lfilter([1], [1, -preemphasis], wav)

    # trim
    wav, _ = librosa.effects.trim(wav)

    return wav.astype(np.float32)

def spectrogram2wav(mag):
    '''Generates a waveform from a (normalized) magnitude spectrogram.'''
    # transpose
    mag = mag.T

    # de-normalize
    mag = (np.clip(mag, 0, 1) * max_db) - max_db + ref_db

    # to amplitude
    mag = np.power(10.0, mag * 0.05)

    # wav reconstruction
    wav = griffin_lim(mag)

    # de-preemphasis
    wav = signal.lfilter([1], [1, -preemphasis], wav)

    # trim
    wav, _ = librosa.effects.trim(wav)

    return wav.astype(np.float32)
A few helper functions:
def _mel_to_linear_matrix(sr, n_fft, n_mels):
    '''Builds an approximate inverse of the mel filter bank to map mel back to linear frequency.'''
    m = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
    m_t = np.transpose(m)
    p = np.matmul(m, m_t)
    d = [1.0 / x if np.abs(x) > 1.0e-8 else x for x in np.sum(p, axis=0)]
    return np.matmul(m_t, np.diag(d))

def griffin_lim(spectrogram):
    '''Applies the Griffin-Lim algorithm to a magnitude spectrogram.'''
    X_best = copy.deepcopy(spectrogram)
    for i in range(n_iter):
        X_t = invert_spectrogram(X_best)
        est = librosa.stft(X_t, n_fft=n_fft, hop_length=hop_length, win_length=win_length)
        phase = est / np.maximum(1e-8, np.abs(est))
        X_best = spectrogram * phase
    X_t = invert_spectrogram(X_best)
    y = np.real(X_t)
    return y

def invert_spectrogram(spectrogram):
    '''Inverse STFT.
    spectrogram: [f, t]
    '''
    return librosa.istft(spectrogram, hop_length=hop_length, win_length=win_length, window="hann")
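Putting it all together, a minimal round-trip sketch using the functions above (the file path and the soundfile dependency are assumptions, not part of the original code):

import soundfile as sf  # assumed dependency, only used to write the results

mel, mag = get_spectrograms("sample.wav")   # placeholder path
wav_from_mel = melspectrogram2wav(mel)      # reconstruct from the mel spectrogram
wav_from_mag = spectrogram2wav(mag)         # reconstruct from the magnitude spectrogram
sf.write("reconstructed_mel.wav", wav_from_mel, sr)
sf.write("reconstructed_mag.wav", wav_from_mag, sr)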
Pre-emphasis: the average power spectrum of a speech signal is shaped by the glottal excitation and by lip/nose radiation, so the high-frequency end (roughly above 800 Hz) rolls off at about 6 dB per octave. Pre-emphasis boosts the high-frequency components to flatten the signal spectrum, which makes spectral analysis and vocal-tract parameter analysis easier.
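The pre-emphasis in get_spectrograms is the first-order filter y[n] = x[n] - 0.97 * x[n-1], and the scipy.signal.lfilter call in the reconstruction functions is its exact inverse; a small sketch with a made-up test signal:

# Pre-emphasis as in get_spectrograms, and de-emphasis as in melspectrogram2wav / spectrogram2wav
x = np.random.randn(1000).astype(np.float32)               # made-up test signal
emphasized = np.append(x[0], x[1:] - preemphasis * x[:-1])
restored = signal.lfilter([1], [1, -preemphasis], emphasized)
print(np.max(np.abs(restored - x)))                        # ~0, up to floating-point error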