Multichannel frequency domain speech enhancement algorithm based on variable-span generalized subspace


1. A multi-channel frequency-domain speech enhancement algorithm based on a variable-span generalized subspace, comprising the steps of:

S1, collecting multi-point noisy speech signal data with a microphone array to obtain multi-channel observation data, and collecting multi-point reference noise signals with the arranged microphone array to obtain multi-channel noise reference data;

S2, framing the noisy speech signal and the reference noise signal, applying a window function to each frame, and performing a discrete fast Fourier transform on each windowed frame;

S3, constructing a covariance-matrix data update vector for the current frequency band from the data subjected to the discrete fast Fourier transform in S2 and the multi-channel data of the different frequency bands;

S4, updating the covariance estimation matrices for the different frequency bands using the update vector of S3;

S5, extracting the generalized eigenvectors of the covariance estimation matrices updated in S4 by using a subspace tracking algorithm;

S6, selecting a number of generalized eigenvectors to construct a variable-span filter and filtering the speech data in each sub-band;

and S7, performing discrete inverse Fourier transform on the frequency domain voice data filtered in the S6 to obtain time domain estimation of the pure voice signal subjected to noise reduction.

2. The multi-channel frequency-domain speech enhancement algorithm based on a variable-span generalized subspace as claimed in claim 1, wherein in S1 the number of microphone array elements is M and N-point noisy speech signal data are collected, yielding multi-channel observation data Y_{M×N} and multi-channel noise reference data V_{M×N}.

3. The multi-channel frequency-domain speech enhancement algorithm based on a variable-span generalized subspace, as set forth in claim 1, wherein the specific method of S2 is as follows:

dividing the noisy speech signal and the noise signal into frames of equal size, so that the windowing function has the same length for both;

performing discrete fast Fourier transform on each channel data after windowing to obtain time-frequency data:

y(k, n) = [Y_1(k, n), Y_2(k, n), …, Y_M(k, n)]^T = x(k, n) + v(k, n)

where k is the frequency-band index, n is the time-frame index, Y_1, Y_2, …, Y_M are the frequency-domain data obtained by Fourier-transforming the time-domain observations of microphones 1, …, M, x is the frequency-domain vector corresponding to the time-domain speech signal vector after the Fourier transform, and v is the frequency-domain vector corresponding to the reference noise vector after the Fourier transform.

4. The multi-channel frequency-domain speech enhancement algorithm based on a variable-span generalized subspace, as set forth in claim 3, wherein the type of windowing function is a Kaiser window or a Hamming window.

5. The multi-channel frequency-domain speech enhancement algorithm based on a variable-span generalized subspace, as set forth in claim 1, wherein the specific method of S4 is as follows:

iteratively updating the covariance matrix of the noisy speech signal using the update vector:

Φ_y(k, n) = γ_y Φ_y(k, n − 1) + (1 − γ_y) y(k, n) y^H(k, n)

where γ_y is a forgetting factor, k is the frequency-band index, n is the time-frame index, Φ_y is the frequency-domain covariance matrix of the observed signal, y is the innovation vector of the observed-signal frequency-domain data, and y^H is the complex conjugate transpose of the observed-signal frequency-domain innovation vector;

iteratively updating the covariance matrix of the reference noise signal using the update vector:

wherein

γ_v is a forgetting factor for updating the covariance matrix, whose magnitude lies in (0, 1) so as to track the time-varying covariance statistics; I is the M × M identity matrix; α is an intermediate variable for updating the covariance matrix; ṽ is the whitened noise frequency-domain data vector; ṽ^H is the complex conjugate transpose of the whitened noise frequency-domain data vector; Φ_v^{−1} is the inverse of the frequency-domain covariance matrix of the reference noise; and v is the frequency-domain data vector of the reference noise;

estimating the covariance matrix of the clean signal:

Φ_x(k, n) = Φ_y(k, n) − Φ_v(k, n)

where Φ_v(k, n) is the frequency-domain covariance matrix of the reference noise.

6. The multi-channel frequency-domain speech enhancement algorithm based on a variable-span generalized subspace, as claimed in claim 5, wherein the forgetting factor γ_y is used to track the time-varying covariance statistics and its magnitude lies in the range 0 to 1.

7. The multi-channel frequency-domain speech enhancement algorithm based on a variable-span generalized subspace, as set forth in claim 1, wherein the specific method of S5 is as follows:

independently updating the Q weight vectors and carrying out QR decomposition orthogonalization:

for q=1,…,Q

end

where ū_q is the q-th normalized weight vector; u_q is the q-th non-normalized weight vector; Φ_v^{−1} is the inverse of the frequency-domain covariance matrix of the reference noise; Φ_x is the frequency-domain covariance-matrix estimate of the clean speech signal; Φ_v^{−T} is the transpose of the inverse of the frequency-domain covariance matrix of the reference noise; u_q^H is the complex conjugate transpose of the q-th non-normalized weight vector; ū_1, …, ū_Q are the Q normalized weight vectors; and u_1, …, u_Q are the Q non-normalized weight vectors;

and performing an inverse whitening process on the weight vectors to obtain estimates of the generalized eigenvectors:

where w_1, …, w_Q are the Q generalized eigenvectors of the matrix pencil (Φ_x, Φ_v).

8. The multi-channel frequency-domain speech enhancement algorithm based on a variable-span generalized subspace, as set forth in claim 1, wherein the specific method of S6 is as follows:

selecting generalized eigenvectors to construct a variable-span filter:

where δ is a diagonal loading factor whose role is to make the covariance matrix of the clean signal a positive definite matrix; w_q^H is the complex conjugate transpose of the q-th generalized eigenvector; w_q is the q-th generalized eigenvector; and i is the first column vector of the M × M identity matrix.

Background

Speech is an important means of communication between human beings; owing to its rich information content, the speech signal has become the most effective medium for interpersonal communication. The development of speech signal technology began in the second half of the 19th century with Bell's invention of the telephone, which advanced human communication, greatly improved the operating efficiency of society, and set off a wave of development in information science and technology. Speech signals are typically acquired by acoustic transducers: acoustic energy drives the vibration of the transducer, which converts the kinetic energy into a quantized electrical signal that a computer can recognize. A computer program then processes the resulting electrical signal to decode the information carried by the acoustic signal or to obtain a specific acoustic effect. Speech signal processing is generally divided into several stages:

Acquisition of the speech signal: the sound of a speaker is collected by a microphone sensor array or a phone's built-in microphone, and the characteristics of the sound signal are converted into an electrical signal and stored. To avoid distortion of the originally recorded acoustic signal, the acquisition system must be designed with a high-resolution analog-to-digital converter, a storage system with sufficiently large memory capacity, and a non-blocking data transmission system.

Preprocessing of the speech signal: the converted acoustic energy is stored in the digital computer in the form of an electrical signal. Owing to the short-time stationarity of speech, the signal is typically divided into short frames that are truncated and smoothed by a windowing function for subsequent time-domain or frequency-domain processing.

Processing of the speech signal: the technical content of speech signal processing includes subtasks such as echo cancellation, dereverberation, denoising, and source separation, which are the core focus of research on speech enhancement algorithms. To ease porting to a digital signal processor, such algorithms generally must meet requirements on real-time operation, robustness, extensibility, and low computational complexity.

Classical speech enhancement algorithms include the Wiener filter, the maximum signal-to-noise-ratio (SNR) algorithm, and the minimum-variance distortionless filter. Because each of these filters pursues only a single objective, such as maximally preserving the clean component after filtering or maximizing the output SNR, their limited flexibility restricts their effectiveness in real scenarios. For example, at low input SNR the output SNR of the speech signal obtained with the minimum-variance distortionless filter may, with some probability, fail to meet the filtering requirement, while at high input SNR the maximum-SNR algorithm may distort the speech signal and degrade the listening quality. To adapt the speech enhancement effect to different scenarios, an algorithm with high extensibility that can trade off the output SNR against speech distortion is particularly important.

Disclosure of Invention

The object of the present invention is to overcome the above drawbacks by providing a multi-channel frequency-domain speech enhancement algorithm based on a variable-span generalized subspace, which resolves the trade-off between the output signal-to-noise ratio of the speech signal and speech distortion, and provides an efficient, fast, online speech enhancement algorithm.

In order to achieve the above object, the present invention comprises the steps of:

S1, collecting multi-point noisy speech signal data with a microphone array to obtain multi-channel observation data, and collecting multi-point reference noise signals with the arranged microphone array to obtain multi-channel noise reference data;

S2, framing the noisy speech signal and the reference noise signal, applying a window function to each frame, and performing a discrete fast Fourier transform on each windowed frame;

S3, constructing a covariance-matrix data update vector for the current frequency band from the data subjected to the discrete fast Fourier transform in S2 and the multi-channel data of the different frequency bands;

S4, updating the covariance estimation matrices for the different frequency bands using the update vector of S3;

S5, extracting the generalized eigenvectors of the covariance estimation matrices updated in S4 by using a subspace tracking algorithm;

S6, selecting a number of generalized eigenvectors to construct a variable-span filter and filtering the speech data in each sub-band;

and S7, performing discrete inverse Fourier transform on the frequency domain voice data filtered in the S6 to obtain time domain estimation of the pure voice signal subjected to noise reduction.

In S1, the number of array elements of the microphone array is M, and N-point noisy speech signal data are collected, yielding multi-channel observation data Y_{M×N} and multi-channel noise reference data V_{M×N}.

The specific method of S2 is as follows:

dividing the noisy speech signal and the noise signal into frames of equal size, so that the windowing function has the same length for both;

performing discrete fast Fourier transform on each channel data after windowing to obtain time-frequency data:

y(k, n) = [Y_1(k, n), Y_2(k, n), …, Y_M(k, n)]^T = x(k, n) + v(k, n)

where k is the frequency-band index, n is the time-frame index, Y_1, Y_2, …, Y_M are the frequency-domain data obtained by Fourier-transforming the time-domain observations of microphones 1, …, M, x is the frequency-domain vector corresponding to the time-domain speech signal vector after the Fourier transform, and v is the frequency-domain vector corresponding to the reference noise vector after the Fourier transform.

The type of windowing function is a Kaiser window or a Hamming window.

The specific method of S4 is as follows:

iteratively updating the covariance matrix of the noisy speech signal using the update vector:

Φ_y(k, n) = γ_y Φ_y(k, n − 1) + (1 − γ_y) y(k, n) y^H(k, n)

where γ_y is a forgetting factor, k is the frequency-band index, n is the time-frame index, Φ_y is the frequency-domain covariance matrix of the observed signal, y is the innovation vector of the observed-signal frequency-domain data, and y^H is the complex conjugate transpose of the observed-signal frequency-domain innovation vector;

iteratively updating the covariance matrix of the reference noise signal using the update vector:

wherein

γ_v is a forgetting factor for updating the covariance matrix, whose magnitude lies in (0, 1) so as to track the time-varying covariance statistics; I is the M × M identity matrix; α is an intermediate variable for updating the covariance matrix; ṽ is the whitened noise frequency-domain data vector; ṽ^H is the complex conjugate transpose of the whitened noise frequency-domain data vector; Φ_v^{−1} is the inverse of the frequency-domain covariance matrix of the reference noise; and v is the frequency-domain data vector of the reference noise;

estimating the covariance matrix of the clean signal:

Φ_x(k, n) = Φ_y(k, n) − Φ_v(k, n)

where Φ_v(k, n) is the frequency-domain covariance matrix of the reference noise.

The forgetting factor γ_y is used to track the time-varying covariance statistics; its magnitude lies in the range 0 to 1.

The specific method of S5 is as follows:

independently updating the Q weight vectors and carrying out QR decomposition orthogonalization:

for q=1,…,Q

end

where ū_q is the q-th normalized weight vector; u_q is the q-th non-normalized weight vector; Φ_v^{−1} is the inverse of the frequency-domain covariance matrix of the reference noise; Φ_x is the frequency-domain covariance-matrix estimate of the clean speech signal; Φ_v^{−T} is the transpose of the inverse of the frequency-domain covariance matrix of the reference noise; u_q^H is the complex conjugate transpose of the q-th non-normalized weight vector; ū_1, …, ū_Q are the Q normalized weight vectors; and u_1, …, u_Q are the Q non-normalized weight vectors;

and performing an inverse whitening process on the weight vectors to obtain estimates of the generalized eigenvectors:

where w_1, …, w_Q are the Q generalized eigenvectors of the matrix pencil (Φ_x, Φ_v).

The specific method of S6 is as follows:

selecting generalized eigenvectors to construct a variable-span filter:

where δ is a diagonal loading factor whose role is to make the covariance matrix of the clean signal a positive definite matrix; w_q^H is the complex conjugate transpose of the q-th generalized eigenvector; w_q is the q-th generalized eigenvector; and i is the first column vector of the M × M identity matrix.

Compared with the prior art, the present invention transforms the time-domain data into the frequency domain and extracts the generalized eigenvectors of the updated signal covariance matrices through a generalized subspace tracking algorithm to construct a variable-span filter; the filter then processes the different sub-bands of the frequency-domain data so that the filtered signal acquires statistics close to those of the clean speech signal, yielding a good filtering effect. The invention has a degree of extensibility, can balance the speech output signal-to-noise ratio against speech distortion, and can be applied to real-time speech noise reduction.

Drawings

FIG. 1 is a diagram of an application scenario in accordance with an embodiment of the present invention;

FIG. 2 is a flow chart of the present invention;

FIG. 3 is a diagram illustrating the simulation results for the signal-to-noise ratio of the filtering output under the reverberation condition, with Q = 2 generalized eigenvectors extracted, according to the present invention;

fig. 4 is a diagram illustrating the simulation results for the signal-to-noise ratio of the filtering output under the reverberation condition, with Q = 4 generalized eigenvectors extracted.

Detailed Description

The invention is further described below with reference to the accompanying drawings.

Referring to fig. 2, the present invention comprises the steps of:

step 1: microphone array with M array elements for collecting N-point noisy speech signal data and obtaining multi-channel observation data YM×N. The microphone array with the array element number of M is arranged to collect N point reference noise signals to obtain multi-channel noise reference data VM×N

Step 2.1: dividing the noisy speech signal and the noise signal into frames of equal size; the added window functions have the same length, and the window type may be a Kaiser window, a Hamming window, or the like;

step 2.2: performing discrete fast Fourier transform on each channel data after windowing to obtain time-frequency data:

y(k, n) = [Y_1(k, n), Y_2(k, n), …, Y_M(k, n)]^T = x(k, n) + v(k, n)

where k is the frequency-band index, n is the time-frame index, Y_1, Y_2, …, Y_M are the frequency-domain data obtained by Fourier-transforming the time-domain observations of microphones 1, …, M, x is the frequency-domain vector corresponding to the time-domain speech signal vector after the Fourier transform, and v is the frequency-domain vector corresponding to the reference noise vector after the Fourier transform.
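The framing, windowing, and FFT of steps 2.1 and 2.2 can be sketched as follows; the frame length of 128, frame shift of 32, and Kaiser parameter 1.9π follow the example section later in the text, and the function names and the random stand-in data are illustrative:

```python
import numpy as np

def stft_frames(x, frame_len=128, hop=32, beta=1.9 * np.pi):
    """Split a 1-D signal into overlapping frames, window each frame,
    and apply the FFT, giving one spectrum per (frame, band) pair."""
    win = np.kaiser(frame_len, beta)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])
    return np.fft.fft(frames * win, axis=1)          # shape (n_frames, frame_len)

# Stack per-channel spectra: Y[m, n, k] corresponds to Y_{m+1}(k, n)
rng = np.random.default_rng(0)                       # stand-in for recorded audio
channels = [rng.standard_normal(32000) for _ in range(8)]   # M = 8, N = 32000
Y = np.stack([stft_frames(c) for c in channels])     # shape (M, n_frames, K)
```

The multi-channel vector y(k, n) of the text is then the slice `Y[:, n, k]`.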

Step 3: constructing a covariance-matrix data update vector for the current frequency band from the data obtained in step 2 and the multi-channel data of the different frequency bands;

step 4.1: iteratively updating the covariance matrix of the noisy speech signal using the update vector:

Φ_y(k, n) = γ_y Φ_y(k, n − 1) + (1 − γ_y) y(k, n) y^H(k, n)

where γ_y is a forgetting factor with magnitude between 0 and 1, used to track the time-varying covariance statistics; k is the frequency-band index; n is the time-frame index; Φ_y is the frequency-domain covariance matrix of the observed signal; y is the innovation vector of the observed-signal frequency-domain data; and y^H is the complex conjugate transpose of the observed-signal frequency-domain innovation vector.
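Step 4.1 can be sketched per frequency band as a standard exponential-forgetting recursion consistent with the variables the text names (forgetting factor γ_y, innovation vector y, and its conjugate transpose y^H); the exact weighting used in the patent may differ:

```python
import numpy as np

def update_cov(phi_prev, y, gamma=0.6):
    """Exponential-forgetting update of one band's covariance matrix:
    Phi(n) = gamma * Phi(n-1) + (1 - gamma) * y y^H   (assumed form)."""
    return gamma * phi_prev + (1.0 - gamma) * np.outer(y, y.conj())

# Demo: accumulate 50 random complex innovation vectors for M = 8 channels
M = 8
rng = np.random.default_rng(1)
phi = np.eye(M, dtype=complex)                   # initial covariance estimate
for _ in range(50):
    y = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    phi = update_cov(phi, y)
```

Each update adds a Hermitian rank-one term, so the estimate stays Hermitian by construction.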

Step 4.2: iteratively updating the covariance matrix of the reference noise signal using the update vector:

wherein

γ_v is a forgetting factor for updating the covariance matrix, whose magnitude lies in (0, 1) so as to track the time-varying covariance statistics; I is the M × M identity matrix; α is an intermediate variable for updating the covariance matrix; ṽ is the whitened noise frequency-domain data vector; ṽ^H is the complex conjugate transpose of the whitened noise frequency-domain data vector; Φ_v^{−1} is the inverse of the frequency-domain covariance matrix of the reference noise; and v is the frequency-domain data vector of the reference noise;
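Step 4.2 tracks the inverse noise covariance directly. One standard recursion matching the quantities the text names (an intermediate scalar α, a whitened noise vector, and the previous inverse Φ_v^{−1}) is the Sherman–Morrison form below; it assumes the forward update Φ_v(n) = γ_v Φ_v(n − 1) + (1 − γ_v) v v^H, so it is a sketch rather than the patent's exact formula:

```python
import numpy as np

def update_inv_cov(p_prev, v, gamma=0.6):
    """Sherman-Morrison update of the inverse covariance, matching the
    assumed forward recursion Phi(n) = gamma*Phi(n-1) + (1-gamma)*v v^H."""
    v_t = p_prev @ v                                 # whitened-domain noise vector
    alpha = 1.0 / (gamma / (1.0 - gamma) + np.real(v.conj() @ v_t))
    return (p_prev - alpha * np.outer(v_t, v_t.conj())) / gamma

# Demo: the tracked inverse should match the directly inverted covariance
M, gamma = 4, 0.6
rng = np.random.default_rng(2)
phi = np.eye(M, dtype=complex)                       # forward covariance
p = np.eye(M, dtype=complex)                         # tracked inverse
for _ in range(30):
    v = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    phi = gamma * phi + (1.0 - gamma) * np.outer(v, v.conj())
    p = update_inv_cov(p, v, gamma)
```

Tracking the inverse avoids an O(M^3) matrix inversion in every frame, which is what makes the per-band update cheap enough for online use.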

step 4.3: estimating the covariance matrix of the clean signal:

Φ_x(k, n) = Φ_y(k, n) − Φ_v(k, n)

where Φ_v(k, n) is the frequency-domain covariance matrix of the reference noise.

Step 5.1: independently updating the Q weight vectors and carrying out QR decomposition orthogonalization:

for q=1,…,Q

end

where ū_q is the q-th normalized weight vector; u_q is the q-th non-normalized weight vector; Φ_v^{−1} is the inverse of the frequency-domain covariance matrix of the reference noise; Φ_x is the frequency-domain covariance-matrix estimate of the clean speech signal; Φ_v^{−T} is the transpose of the inverse of the frequency-domain covariance matrix of the reference noise; u_q^H is the complex conjugate transpose of the q-th non-normalized weight vector; ū_1, …, ū_Q are the Q normalized weight vectors; and u_1, …, u_Q are the Q non-normalized weight vectors;

step 5.2: and performing an inverse whitening process on the weight vector to obtain an estimator of the generalized characteristic vector:

where w_1, …, w_Q are the Q generalized eigenvectors of the matrix pencil (Φ_x, Φ_v).
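The subspace tracker of step 5 extracts the dominant generalized eigenvectors of the pencil (Φ_x, Φ_v). A batch sketch of the same computation, using Cholesky whitening and QR-orthogonalized power iteration (the online version would instead run one iteration per frame), is:

```python
import numpy as np

def top_gevecs(phi_x, phi_v, Q=2, iters=200, seed=0):
    """Estimate the Q principal generalized eigenvectors of the pencil
    (phi_x, phi_v) by QR-orthogonalized power iteration in the whitened
    domain, then de-whiten (a batch sketch of the per-frame tracker)."""
    L = np.linalg.cholesky(phi_v)                # whitening: phi_v = L L^H
    Linv = np.linalg.inv(L)
    A = Linv @ phi_x @ Linv.conj().T             # whitened clean covariance
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((phi_x.shape[0], Q))
    for _ in range(iters):
        U, _ = np.linalg.qr(A @ U)               # power step + orthogonalization
    return Linv.conj().T @ U                     # de-whitening -> w_1, ..., w_Q

# Demo with synthetic SPD covariances for M = 6 channels
M = 6
rng = np.random.default_rng(3)
B = rng.standard_normal((M, M))
phi_v = B @ B.T + M * np.eye(M)                  # reference-noise covariance
C = rng.standard_normal((M, M))
phi_x = C @ C.T                                  # clean-speech covariance
W = top_gevecs(phi_x, phi_v, Q=2)
```

The returned columns are Φ_v-orthonormal (W^H Φ_v W = I), which is exactly the normalization implied by keeping the whitened-domain iterates orthonormal through the QR step.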

Step 6: selecting generalized eigenvectors to construct a variable-span filter:

where δ is a diagonal loading factor whose role is to make the covariance matrix of the clean signal a positive definite matrix; w_q^H is the complex conjugate transpose of the q-th generalized eigenvector; w_q is the q-th generalized eigenvector; and i is the first column vector of the M × M identity matrix.
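A common variable-span construction consistent with the quantities named here (the generalized eigenvectors w_q, the loading factor δ, and the first identity column i) weights each rank-one term by its generalized eigenvalue; the exact weighting in the patent's formula may differ, so the following is an illustrative sketch:

```python
import numpy as np

def vs_filter(W, lams, phi_x, delta=1e-5):
    """Variable-span filter h = sum_q w_q w_q^H Phi_x i / (lambda_q + delta)
    built from Q generalized eigenpairs (assumed weighting)."""
    M = W.shape[0]
    i1 = np.zeros(M, dtype=complex)
    i1[0] = 1.0                                  # first column of the identity
    h = np.zeros(M, dtype=complex)
    for q in range(W.shape[1]):
        w = W[:, q]
        h += w * (w.conj() @ (phi_x @ i1)) / (lams[q] + delta)
    return h

# Demo with phi_v = I, so generalized eigenpairs reduce to ordinary ones
M = 4
rng = np.random.default_rng(4)
C = rng.standard_normal((M, M))
phi_x = C @ C.T + 0.1 * np.eye(M)
lams, W = np.linalg.eigh(phi_x)
lams, W = lams[::-1], W[:, ::-1]                 # descending order
h = vs_filter(W[:, :2], lams[:2], phi_x)         # span Q = 2, default delta
h_full = vs_filter(W, lams, phi_x, delta=0.0)    # full span reproduces i
```

With the full span and δ = 0 the filter reduces to the identity on channel 1 (h = i), a quick sanity check; the clean-speech estimate for one band is then X̂_1(k, n) = h^H y(k, n).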

Step 7: performing a discrete inverse Fourier transform on the frequency-domain speech data filtered in step 6 to obtain the time-domain estimate of the denoised clean speech signal.
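The synthesis of step 7 can be sketched as a per-frame inverse FFT followed by overlap-add; window compensation (dividing by the summed squared analysis window) is omitted for brevity, and the hop size assumes the framing used earlier in the text:

```python
import numpy as np

def istft_frames(S, hop=32):
    """Inverse-FFT each filtered frame spectrum and overlap-add the
    resulting time-domain frames (no window compensation)."""
    n_frames, frame_len = S.shape
    out = np.zeros(hop * (n_frames - 1) + frame_len)
    for i, frame in enumerate(np.fft.ifft(S, axis=1).real):
        out[i * hop:i * hop + frame_len] += frame
    return out

# Demo: flat all-ones frames survive the round trip through the FFT
S = np.fft.fft(np.ones((10, 128)), axis=1)
x_hat = istft_frames(S)
```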

Example:

Implementation scenario:

The application scenario is shown schematically in fig. 1; the environment is a room of size 5 m × 5 m × 3 m. The microphone array of the experiment is a uniform linear array with an inter-element spacing of 0.04 m and 8 array elements; there is one sound source, located 1.5 m in front of the microphone array. The experiment uses the image model to generate the room impulse responses, with a reverberation time of 150 ms. The speech data are drawn from the real speech database TIMIT, and the noise data from the real noise database Noisex92. In this example, we performed two experiments: extracting Q = 2 and Q = 4 generalized eigenvectors, respectively, performing speech enhancement at input signal-to-noise ratios of -10 dB, -5 dB, 0 dB, 5 dB, and 10 dB, plotting the output signal-to-noise ratio of the enhanced signal against the different input signal-to-noise ratios, and comparing the influence of extracting different numbers of generalized eigenvectors on the actual enhancement effect.

The implementation process comprises the following steps:

step 1: according to the experimental setup, the number of microphones M is taken to be 8. The sampling frequency is set to be Fs 8000 Hz. Selecting N-32000 point data from a real voice database TIMIT, and obtaining multi-channel observation data Y by convolution with a room pulse vector generated by an image modelM×N. Collecting N-32000 point reference noise signals by a microphone array with the array element number of M-8 to obtain multi-channel noise reference data VM×N

Step 2.1: the noisy speech signal and the noise signal are divided into frames in the same manner, with a frame shift of 32 points and a window length of 128 points; a Kaiser window with parameter 1.9π is selected;

step 2.2: performing discrete fast Fourier transform on each channel data after windowing to obtain time-frequency data:

y(k, n) = [Y_1(k, n), Y_2(k, n), …, Y_M(k, n)]^T = x(k, n) + v(k, n)

where k represents the index of the frequency band, which ranges from 1 to 128. n represents the index of the time frame, which ranges from 1 to 1000.

Step 3: constructing a covariance-matrix data update vector for the current frequency band from the data obtained in step 2 and the multi-channel data of the different frequency bands;

step 4.1: iteratively updating the covariance matrix of the noisy speech signal using the update vector:

where the forgetting factor γ_y, whose magnitude is set to 0.6, is used to track the time-varying covariance statistics.

Step 4.2: iteratively updating the covariance matrix of the reference noise signal using the update vector:

wherein

The forgetting factor γ_v is set to γ_v = 0.6;

Step 4.3: estimating the covariance matrix of the clean signal:

step 5.1: independently updating the Q weight vectors and carrying out QR decomposition orthogonalization:

for q=1,…,Q

end

step 5.2: and performing an inverse whitening process on the weight vector to obtain an estimator of the generalized characteristic vector:

step 6: selecting generalized eigenvectors to construct a variogram filter:

where δ is the diagonal loading factor, set to 10^{−5}.

Step 7: performing a discrete inverse Fourier transform on the frequency-domain speech data filtered in step 6 to obtain the time-domain estimate of the denoised clean speech signal.

Step 8: evaluating the enhancement effect of the denoised speech.

Experimental conclusions:

the results of the experiment are shown in fig. 3 and 4. The curve labeled Fixed-point in the graph is a performance graph of the algorithm of the invention. Fig. 3 depicts the average output snr after speech noise reduction by extracting Q-2 generalized eigenvectors. Fig. 4 depicts the average output snr after speech noise reduction by extracting Q-4 generalized eigenvectors. As can be seen from FIG. 3, the experimental performance of the algorithm of the present invention approaches that of the conventional high-complexity numerical algorithm with a smaller computational complexity, and the performance is better than that of other iterative algorithms. It can be seen from fig. 3 that, when a larger number of generalized feature vectors are extracted for speech enhancement, the experimental performance of the algorithm of the present invention is closer to that of the conventional high-complexity numerical algorithm and better than that of other iterative algorithms. Based on the simulation results, the practical effectiveness and the calculation superiority of the algorithm are verified.
